\section{Introduction} For many problems in science MCMC has become an indispensable tool due to its ability to sample from arbitrary probability distributions known only up to a normalising constant. Comprehensive introductions to MCMC methods can be found in \cite{newman1999monte,liu2008monte,robert2004monte,landau2014guide}. Estimators resulting from MCMC scale independently of dimensionality. However, they have the fairly slow universal convergence rate of $n^{-1}$, where $n$ denotes the number of samples generated, in the mean squared error (MSE), the same as classic Monte Carlo methods using pseudo-random numbers. For the latter, faster convergence rates of order close to $n^{-2}$ can be achieved when samples are generated by a suitable low-discrepancy sequence, i.e.\ points which are homogeneously distributed over space (\cite{dick2013high}). These so-called quasi-Monte Carlo (QMC) methods, despite their generally deteriorating performance with increasing (effective) dimension (\cite{wang2003effective,caflisch1997valuation}), can nonetheless lead to significant computational savings compared to standard Monte Carlo. However, they generally require the integral of interest to be expressible as an expectation with respect to the unit hypercube, which limits their general application. The first applications of QMC in the context of MCMC go back to \cite{chentsov1967pseudorandom} and \cite{sobol1974pseudo}, which assume a discrete state space. In \cite{chentsov1967pseudorandom}, the driving sequence of uniformly distributed independent and identically distributed (IID) random numbers is replaced by a completely uniformly distributed (CUD) sequence. The same approach is used in \cite{owen2005quasi} and \cite{chen2011consistency}. In \cite{liao1998variance}, a Gibbs sampler that runs on randomly shuffled QMC points is introduced. Later, \cite{chaudhary2004acceleration} uses a weighting of rejected samples to generate balanced proposals. 
Both successfully applied QMC in MCMC, albeit without providing any theoretical investigation. \cite{craiu2007acceleration} uses QMC in multiple-try Metropolis-Hastings, and \cite{lemieux2006exact} within an exact sampling method introduced by \cite{propp1996exact}. In \cite{l2008randomized} the so-called array-randomised QMC (RQMC) was introduced, which uses quasi-Monte Carlo to update multiple chains that run in parallel. Further, the rotor-router model, which is a deterministic analogue of a random walk on a graph, was applied to a number of problems in \cite{doerr2009deterministic}. We note that most of these approaches resulted in relatively modest performance improvements over non-QMC methods \cite{chen2011consistency}. Based on the coupling argument by Chentsov from \cite{chentsov1967pseudorandom}, it was proven in \cite{owen2005quasi} that an MCMC method defined on a finite state space still has the correct target as its stationary distribution when the driving sequence of IID numbers is replaced by weakly CUD (WCUD) numbers. Subsequently, \cite{tribble2008construction} provided proofs of some theoretical properties of WCUD sequences, along with numerical results using a Gibbs sampler driven by WCUD numbers, which achieves significant performance improvements compared to using IID inputs. More recently, the result from \cite{owen2005quasi} was generalised to WCUD numbers and continuous state spaces by Chen (\cite{chen2011consistency}). In this work, we consider the theoretical and numerical properties of the parallel MCMC method introduced in \cite{calderhead2014general}, which we here call multiple proposal MCMC (MP-MCMC) as it proposes and samples {\it multiple} points in each iteration. We extend this methodology to the use of non-reversible transition kernels and introduce an adaptive version, for which we show ergodicity. 
Further, we derive an importance sampling MP-MCMC approach, in which all proposed points from one iteration are accepted and then suitably weighted in order to consistently estimate integrals with respect to the posterior. We then combine these novel MP-MCMC algorithms with QMC by generalising them to use arbitrary CUD numbers as their driving sequence, and we establish conditions under which consistency holds. Since the state space is covered by multiple proposals in each iteration, one might expect that using QMC numbers as the seed in MP-MCMC should harvest the benefits of low-discrepancy sequences more effectively than in the single proposal case previously considered. Moreover, the importance sampling approach mentioned above enables MP-MCMC to remove the discontinuity introduced by the acceptance threshold when sampling from the multiple proposals, which improves the performance when using QMC numbers as the driving sequence. Indeed, when combining the multiple proposal QMC approach with the importance sampling method, we observe in numerical simulations a convergence rate of order close to $n^{-2}$ for this novel MCMC method, similar to traditional QMC methods. This work is, to the best of our knowledge, the first publication showing substantial benefits in the use of QMC in MCMC for arbitrary posteriors that are not known analytically and are not hierarchical, i.e.\ do not possess a lower-dimensional structure for their conditional probabilities. 
Hierarchical MCMC sampling problems using QMC for medium dimensions that have been considered in the literature include the $11$-dimensional hierarchical Poisson model for pump failures from \cite{gelfand1990sampling}, which was treated via QMC Gibbs sampling methods by \cite{liao1998variance} and \cite{owen2005quasi}, respectively, and a $42$-dimensional probit regression example from \cite{finney1947estimation}, treated in \cite{tribble2008construction} via the use of a QMC seed in a Gibbs sampling scheme introduced in \cite{albert1993bayesian}. In these problems, however, conditional distributions are available explicitly such that direct sampling can be applied. In this paper, we begin by re-defining the MP-MCMC algorithm previously introduced in \cite{calderhead2014general} and then consider a number of novel extensions, which finally result in a parallel CUD driven method that achieves a higher rate of convergence similar to QMC. The list of novel algorithms we consider is presented in Table \ref{table:results_bayesian_linear_regression}. Throughout the paper, we also prove some theoretical results for the proposed algorithms, as well as investigate their performance in practice. For the purpose of clarity and readability, we will often state a lemma and refer the reader to the appropriate section in the appendix for its full proof. 
\begin{table}[h] \ra{1.2} \centering \caption{ Summary of algorithms introduced in this work, and associated properties} \centering \resizebox{.75\textwidth}{!}{ \begin{tabular}{ @{} *7c @{}} \toprule {Algorithm} & {Section} & {Adaptive} & {IS} & {PSR} & {CUD} & {Num.\ conv.\ rate} \\ \midrule \ref{algorithm:multiproposal_MH} & \ref{subsec:derivation_mpmcmc} & \xmark & \xmark & \cmark & \xmark & $n^{-1}$ \\ \ref{algorithm:adaptive_mp_mcmc} & \ref{subsubsec:an_adaptive_mpmcmc_algorithm} & \cmark & \xmark & \cmark & \xmark & $n^{-1}$\\ \ref{algorithm:importance_sampling_mp_mcmc} & \ref{subsubsec:algorithm_description_is_mpmcmc} & \xmark & \cmark & \cmark & \xmark & $n^{-1}$\\ \ref{algorithm:adaptive_importance_sampling_mp_mcmc} & \ref{subsubsec:algorithm_description_adaptive_IS_mpmcmc} & \cmark & \cmark & \cmark & \xmark & $n^{-1}$\\ \ref{algorithm:multiproposal_quasi_MH} & \ref{subsubsec:algorithm_description_mpqmcmc} & \xmark & \xmark & \xmark & \cmark & $n^{-1}$\\ \ref{algorithm:importance_sampling_mp_qmcmc} & \ref{subsubsec:algorithm_description_IS_mpqmcmc} & \xmark & \cmark & \xmark & \cmark & $\approx n^{-2}$ \\ \ref{algorithm:adaptive_importance_sampling_mp_qmcmc} & \ref{subsubsec:algorithm_description_adaptive_IS_mpqmcmc} & \cmark & \cmark & \xmark & \cmark & $\approx n^{-2}$ \\ \bottomrule \end{tabular} \label{table:results_bayesian_linear_regression} } \end{table} In Section \ref{Section_Concepts_QMC} we introduce the basics of QMC, give a short review of the literature regarding CUD points, discuss some CUD constructions and present the construction used in this work. Next, in Section \ref{sec:multiple_proposal_mcmc}, we present the multiple proposal MCMC (MP-MCMC) framework from \cite{calderhead2014general}, first using pseudo-random numbers, and introduce two new formulations of MP-MCMC as a single state Markov chain over a product space, which we use for proving a number of theoretical properties. 
We also formally prove a law of large numbers and central limit theorem for MP-MCMC and carefully consider a variety of novel extensions. In particular, we consider the use of optimised and non-reversible transitions, as well as adaptivity of the proposal kernel, for which we prove ergodicity. We then compare their relative performance through a simulation study. In Section \ref{sec:importance_sampling} we consider the use of importance sampling within an MP-MCMC framework. We suggest an adaptive version of this algorithm and prove its ergodicity, and consider the importance sampling approach as the limiting case of sampling from the finite state Markov chain on the multiple proposals. We conclude by proving asymptotic unbiasedness of the proposed methods and empirically comparing their performance. In Section \ref{sec:multiproposal_quasi_MH} we generalise the previously introduced MP-MCMC algorithms to the case of using CUD numbers as the driving sequence, instead of pseudo-random numbers. We describe two regularity conditions that we then use to prove consistency of the proposed method, and we discuss how CUD numbers are best incorporated within MCMC algorithms generally. We prove asymptotic unbiasedness of the two proposed algorithms and demonstrate through a couple of numerical simulations an increased convergence rate of the empirical variance of our estimators, approaching $n^{-2}$ rather than the usual $n^{-1}$ for traditional MCMC methods. Finally, we present some conclusions and discuss the many avenues for future work. \section{Some Concepts from Quasi Monte Carlo}\label{Section_Concepts_QMC} Quasi-Monte Carlo (QMC) techniques approximate integrals by an equal-weight quadrature rule similar to standard Monte Carlo. However, instead of using IID random samples as evaluation points, one uses low-discrepancy sequences designed to cover the underlying domain more evenly. 
Common choices for such sequences in QMC include Sobol sequences and digital nets \cite{dick2013high}. Due to the increased spatial coverage of the domain, QMC generally yields better convergence rates than standard Monte Carlo. \subsection{QMC background} Standard QMC approaches generally require the domain of integration to be the unit hypercube. However, using standard techniques, e.g., inverse transformations as introduced in \cite{devroye1986non}, samples from arbitrary domains may be constructed, as long as the inverse CDF is available. When the inverse of the CDF is not available directly, one must resort to alternative sampling methods, which motivates the development of the MCMC methods later in this paper. \subsubsection{Discrepancy and Koksma-Hlawka inequality} Estimators based on QMC use a set of deterministic sample points $\vec{x}_i\in[0,1]^d$ for $i=1,...,n$ and $d \in \mathbb{N}$, that are members of a low-discrepancy sequence. Roughly speaking, these points are distributed inside $[0,1]^d$ such that the uncovered areas are minimised. Typically, the same holds true for projections of these points onto lower-dimensional faces of the underlying hypercube. Following \cite{niederreiter1992random}, for a set of QMC points $P=\{\vec{x}_1,...,\vec{x}_n\}$, the star discrepancy can be defined as \begin{align} D^{*d}_n(P) = \sup_{\vec{a} \in(0,1]^d} \left|\frac{1}{n}\sum_{i=1}^n \mathbf{I}_{(0,\vec{a}]}(\vec{x}_i) - \prod_{j=1}^d a_j \right|, \label{eq:star_discrepancy} \end{align} where $\vec{a}$ has coordinates $0< a_j \le 1$ for $j=1,...,d$. This gives us a measure of how well spaced out a set of points is on a given domain. 
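As a rough numerical illustration of the star discrepancy, the following sketch approximates the supremum by maximising the local discrepancy over a finite grid of anchor points $\vec{a}$; this yields a lower bound on the true supremum. The function names, grid resolution, and test point sets are our own choices, not part of the original development.

```python
import itertools

def local_discrepancy(points, a):
    """|fraction of points inside the box (0, a] minus the box volume|."""
    n, d = len(points), len(a)
    inside = sum(all(0.0 < x[j] <= a[j] for j in range(d)) for x in points)
    volume = 1.0
    for aj in a:
        volume *= aj
    return abs(inside / n - volume)

def star_discrepancy_lb(points, grid=20):
    """Lower bound on the star discrepancy via grid search over anchors a."""
    d = len(points[0])
    ticks = [(k + 1) / grid for k in range(grid)]
    return max(local_discrepancy(points, a)
               for a in itertools.product(ticks, repeat=d))
```

For the single point $\{0.5\}$ in $d=1$ the supremum equals $1/2$, which the grid search recovers since $a=0.5$ lies on the grid.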
One of the main results in QMC theory, the Koksma-Hlawka inequality, provides an upper bound for the error of a QMC estimate based on \eqref{eq:star_discrepancy} by \begin{align} \left|\frac{1}{n}\sum_{i=1}^n f(\vec{x}_i) - \int_{[0,1]^d}f(\vec{x})\mathrm{d}\vec{x} \right| \le V\left(f\right) \cdot D^{*d}_n(P), \label{eq:koksma_hlawka_inequality} \end{align} where $V(f)$ denotes the variation of $f$ in the sense of Hardy-Krause. For a sufficiently smooth $f$, $V(f)$ can be expressed as the sum of terms \begin{align} \int_{[0,1]^k} \left| \frac{\partial^k f}{\partial x_{i_1} ... \partial x_{i_k}} \right|_{x_j=1, j\neq i_1, ..., i_k} \mathrm{d}x_{i_1}...\mathrm{d}x_{i_k}, \end{align} where $i_1 < ... < i_k$ and $k \le d$. A more general definition for the case of non-smooth $f$ in a multi-dimensional setting is beyond the scope of this work, but can be found in \cite{owen2005multidimensional}. In \eqref{eq:koksma_hlawka_inequality}, $V(f)$ is assumed to be finite. Thus, the error of the approximation is deterministically bounded by a smoothness measure of the integrand and a quality measure for the point set. The Koksma-Hlawka inequality \eqref{eq:koksma_hlawka_inequality} was first proven for the case $d=1$ by Koksma \cite{koksma1942ageneral}, and the general case ($d\in \mathbb{N}$) was subsequently proven by Hlawka \cite{hlawka1961funktionen}. Note that for some functions arising in practice we have $V(f) = \infty$, e.g.\ the inverse Gaussian map from the hypercube to the hypersphere \cite{basu2016transformations}, so that \eqref{eq:koksma_hlawka_inequality} cannot be applied.\\ \subsubsection{Convergence rates} The use of low-discrepancy sequences instead of pseudo-random numbers may allow a faster convergence rate of the sampling error. 
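In $d=1$ the bound \eqref{eq:koksma_hlawka_inequality} is easy to verify numerically: for sorted points $x_{(1)}\le \ldots \le x_{(n)}$ the star discrepancy has the closed form $D^{*1}_n=\max_i \max\{i/n - x_{(i)},\, x_{(i)} - (i-1)/n\}$, and for the monotone function $f(x)=x^2$ the variation is $V(f)=\int_0^1 |f'(x)|\,\mathrm{d}x = 1$. The sketch below checks the bound for a van der Corput point set; the choice of $f$ and of $n$ is our own illustration.

```python
def star_discrepancy_1d(points):
    """Exact 1d star discrepancy for a finite point set in [0, 1]."""
    xs, n = sorted(points), len(points)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence (radical inverse)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

pts = [van_der_corput(i) for i in range(1, 65)]
estimate = sum(x * x for x in pts) / len(pts)
error = abs(estimate - 1.0 / 3.0)
# Koksma-Hlawka: |error| <= V(f) * D*_n, with V(f) = 1 here
assert error <= 1.0 * star_discrepancy_1d(pts)
```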
Given an integrand $f$ with $V(f)<\infty$, constructions for QMC points can achieve convergence rates close to $\mathcal{O}(n^{-2})$ in the MSE, compared to $\mathcal{O}(n^{-1})$ for standard Monte Carlo \cite{dick2013high}. For smooth functions, it is possible to achieve convergence rates of order $\mathcal{O}(n^{-2\alpha} \log(n)^{2d\alpha})$ when $f$ is $\alpha$-times differentiable (\cite{dick2009quasi}). However, if $f$ has only bounded variation but is not differentiable, in general only convergence rates of $\mathcal{O}(n^{-2}\log(n)^{2d})$ hold true \cite{sharygin1963lower}. For practical applications where the dimensionality $d$ is large and the number of samples $n$ is moderate, QMC therefore does not necessarily perform better than standard Monte Carlo. In some settings, using randomised QMC (RQMC) one can achieve convergence rates of $\mathcal{O}(n^{-3})$ in the MSE \cite{l2006randomized, l2008randomized}, and of $n^{-3}$ for the empirical variance in certain examples \cite{l2018sorting}. \subsubsection{The curse of dimensionality} The curse of dimensionality describes the phenomenon of an excessive increase in the complexity of a problem with the dimensionality it is set in \cite{richard1957dynamic}. Classical numerical integration methods such as quadrature rules quickly become computationally infeasible when the number of dimensions increases. This is because the number of evaluation points typically increases exponentially with the dimension, making such integration schemes impractical for dimensions higher than, say, $d=6$. However, in \cite{paskov1995faster} a high-dimensional ($d=360$) problem from mathematical finance was successfully solved using quasi-Monte Carlo (Halton and Sobol sequences). Since then, much research has been undertaken to lift the curse of dimensionality in QMC; we refer to \cite{kuo2005lifting}, \cite{dick2010digital} and \cite{dick2013high}. 
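The qualitative difference between QMC and plain Monte Carlo is easy to observe in low dimensions. The sketch below compares both estimators on the smooth integrand $f(x,y)=xy$ over $[0,1]^2$, with exact value $1/4$, using a two-dimensional Halton sequence (bases $2$ and $3$); the integrand, sample size, and seed are our own illustrative choices.

```python
import random

def radical_inverse(n, base):
    """Radical inverse of n in the given base (van der Corput digit reversal)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton(i):
    """i-th point of the 2d Halton sequence, bases 2 and 3."""
    return radical_inverse(i, 2), radical_inverse(i, 3)

f = lambda x, y: x * y          # smooth test integrand, exact integral 1/4
n = 1024

qmc_est = sum(f(*halton(i)) for i in range(1, n + 1)) / n

random.seed(0)
mc_est = sum(f(random.random(), random.random()) for _ in range(n)) / n
```

For this smooth integrand the deterministic Halton estimate is typically an order of magnitude closer to $1/4$ than the Monte Carlo estimate at the same $n$.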
In general, a well-performing integration rule in a high-dimensional setting will depend on the underlying integrand or class of integrands. \cite{caflisch1997valuation} introduced the notion of effective dimension, which identifies the number of coordinates of a function, or of a suitable decomposition thereof (e.g.\ ANOVA), that carries most of the information about the function. This concept accounts for the fact that not all variables in a function are necessarily informative about the variability of the function, and may therefore be neglected when integrating. In practical applications the effective dimension can be very low ($d=2,3$) compared to the actual number of variables in the integrand. To model such situations, weighted function spaces have been introduced in \cite{sloan1998quasi}. In principle, the idea is to assign a weight to every coordinate or to any subset of coordinates for a particular decomposition of the integrand, thereby prioritising variables with a high degree of information on the integrand. Weighted function spaces have a Hilbert space structure. For a particular class of such spaces, namely reproducing kernel Hilbert spaces (RKHS), the worst-case error of the integration, defined as the largest error for any function in the unit ball of the RKHS, can be expressed explicitly in terms of the reproducing kernel. Based on this, it is possible to prove the existence of low-discrepancy sets that provide an upper bound on the worst-case error proportional to $N^{-1+\delta}$ for any $\delta >0$, where the constant depends neither on $N$ nor on $d$. Furthermore, there exist explicit constructions for such amenable point sets, e.g.\ the greedy algorithm for shifted rank-1 lattice rules by \cite{sloan2002constructing} and the refined fast implementation based on Fast Fourier Transforms provided by \cite{nuyens2006fast}. 
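Rank-1 lattice rules of the kind mentioned above take the simple form $\vec{x}_i = \{i\,\vec{z}/N\}$ for a generating vector $\vec{z}$, where $\{\cdot\}$ denotes the fractional part. The sketch below uses the classical two-dimensional Fibonacci lattice, $N=F_k$ and $\vec{z}=(1, F_{k-1})$; the periodic test integrand is our own choice, selected so that the rule integrates its trigonometric terms exactly.

```python
import math

def rank1_lattice(N, z):
    """Rank-1 lattice rule: x_i = frac(i * z / N) for i = 0, ..., N-1."""
    d = len(z)
    return [tuple((i * z[j] / N) % 1.0 for j in range(d)) for i in range(N)]

# Fibonacci lattice: N = F_15 = 610, z = (1, F_14) = (1, 377)
pts = rank1_lattice(610, (1, 377))

# periodic integrand (1 + cos 2*pi*x)(1 + cos 2*pi*y), exact integral 1;
# its non-constant Fourier modes lie outside the dual lattice, so the
# equal-weight rule integrates them to zero exactly
f = lambda x, y: (1 + math.cos(2 * math.pi * x)) * (1 + math.cos(2 * math.pi * y))
est = sum(f(x, y) for x, y in pts) / len(pts)
```

Note that the rule contains the origin as its first point, a property referred to again in the CUD construction later in the text.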
Modern quasi-Monte Carlo implementations can thus be useful in applications with up to hundreds and even thousands of dimensions. The constructions of QMC point sets used for MCMC in this work are generic in the sense that their construction does not actually depend on the underlying integrand. Major performance gains compared to standard Monte Carlo can still be achieved for moderately large dimensions, as we will see in Section \ref{sec:multiproposal_quasi_MH}. However, the incorporation of QMC constructions tailored to an inference problem solved by MCMC could be a valuable future extension of this work. \subsubsection{Randomised QMC} Despite the potentially far better convergence rates of QMC methods compared to standard Monte Carlo, they produce estimators that are biased and lack practical error estimates. The latter is due to the fact that evaluating the Koksma-Hlawka inequality requires not only computing the star discrepancy, which is an NP-hard problem \cite{gnewuch2009finding}, but also computing the total variation $V(f)$, which is generally even more difficult than integrating $f$. However, both drawbacks can be overcome by introducing a randomisation into the QMC construction which preserves the underlying properties of the QMC point distribution. For this task, many approaches have been suggested in the literature, such as shifting using Cranley-Patterson rotations \cite{cranley1976randomization}, digital shifting \cite{dick2013high}, and scrambling \cite{owen1997monte, owen1997scrambled}. In some cases, randomisation can even improve the convergence rate of the unrandomised QMC method, e.g.\ scrambling applied to digital nets in \cite{owen1997scrambled, dick2011higher} under sufficient smoothness conditions. In these situations, the average of multiple QMC randomisations yields a lower error than the worst-case QMC error. 
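The simplest of these randomisations, the Cranley-Patterson rotation, shifts every point by a common uniform vector modulo one; averaging independent shifts gives an unbiased estimator whose empirical spread serves as a practical error estimate. A minimal sketch, with function names, the small lattice, and the test function being our own illustration:

```python
import random

def cranley_patterson(points, shift):
    """Shift every point by the same vector, coordinate-wise modulo 1."""
    return [tuple((x + s) % 1.0 for x, s in zip(p, shift)) for p in points]

def randomised_qmc(points, f, n_shifts=10, rng=None):
    """Average a QMC estimate over independent Cranley-Patterson shifts."""
    rng = rng or random.Random(0)
    d = len(points[0])
    estimates = []
    for _ in range(n_shifts):
        shift = [rng.random() for _ in range(d)]
        shifted = cranley_patterson(points, shift)
        estimates.append(sum(f(p) for p in shifted) / len(points))
    return sum(estimates) / n_shifts, estimates

# small 2d lattice as the underlying QMC point set
pts = [(i / 16, (i * 7 % 16) / 16) for i in range(16)]
mean, ests = randomised_qmc(pts, lambda p: p[0] * p[1])
```

Each shift preserves the lattice structure modulo one, so the randomised points inherit the low discrepancy of the original set.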
\subsubsection{Completely uniformly distributed points} \label{subsubsec:completely_uniformly_distributed_points} Conceptually, QMC is based on sets of points which fill an underlying hypercube homogeneously. Through suitable transformations applied to those points, samples are created which represent the underlying target. In contrast, MCMC relies on an iterative mechanism which makes use of ergodicity. More precisely, based on a current state a subsequent state is proposed and then accepted or rejected, in such a way that the resulting samples represent the underlying target. In that sense, QMC is about filling space, relying on equidistribution, while MCMC is about moving forward in time, relying on ergodicity. Averages of samples can therefore be considered as space averages in QMC and time averages in MCMC, respectively. Standard MCMC works in the following way: based on a given $d$-dimensional sample, a new sample is proposed using $d$ IID random numbers in $(0,1)$ and a suitable transformation. Then an accept/reject mechanism is employed, i.e.\ the proposed sample is accepted with a certain probability, for which another random point in $(0,1)$ is required. Thus, for $n$ steps we require $n(d+1)$ points, $u_1,...,u_{n(d+1)}\in (0,1)$. The idea in applying QMC to MCMC is to replace the IID points $u_i$, for $i=1,...,n(d+1)$, by more evenly distributed points. There are two sources of problems connected to this approach: first, the sequence of states in the resulting process will not be Markovian, and thus consistency is not straightforward, as the standard theory relies on the Markovian assumption. However, we know for instance from adaptive MCMC that even if the underlying method is non-Markovian, ergodicity can still be proven \cite{haario2001adaptive,haario2006dram, roberts2007coupling, latuszynski2013adaptive, andrieu2006ergodicity, roberts2009examples, andrieu2008tutorial}. 
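The driving-sequence view just described can be made explicit: a random-walk Metropolis-Hastings chain in $d$ dimensions consumes $d+1$ uniforms per step, a block of $d$ for the proposal (here mapped through the inverse normal CDF) and one for the accept/reject decision, so that swapping the IID stream for a more evenly distributed one leaves the sampler itself unchanged. A sketch for a standard normal target in $d=1$; all names and parameter values are our own illustration:

```python
import math, random
from statistics import NormalDist

def mh_from_uniforms(u, x0=0.0, step=1.0):
    """Random-walk Metropolis for pi ~ N(0, 1), driven by an explicit
    stream of uniforms u_1, ..., u_{n(d+1)} with d = 1 (two per step)."""
    inv = NormalDist().inv_cdf
    log_pi = lambda x: -0.5 * x * x
    x, out = x0, []
    for i in range(0, len(u) - 1, 2):
        prop = x + step * inv(u[i])            # uniform -> normal increment
        if math.log(u[i + 1] + 1e-300) < log_pi(prop) - log_pi(x):
            x = prop                            # accept the proposal
        out.append(x)
    return out

random.seed(0)
stream = [random.random() for _ in range(2 * 4000)]  # n(d+1) uniforms
samples = mh_from_uniforms(stream)
```

Replacing `stream` by a CUD sequence of the same length is exactly the substitution discussed in the text.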
Typically, computer simulations of MCMC are driven by a pseudo-random number generator (PRNG). A PRNG is an algorithm that generates a sequence of numbers which imitates the properties of random numbers. The generation procedure is however deterministic, as it is entirely determined by an initial value. We remark that, carefully considered, a sequence constructed by an MCMC method using a PRNG therefore does not actually fulfill the Markov property either, since the underlying seed is deterministic. However, it is generally argued that, given a good choice, a pseudo-random number sequence has properties that are sufficiently similar to actual IID numbers as to consider the resulting algorithm probabilistic. A first formal criterion for a good choice of pseudo-random numbers typically used in computer simulations was formulated in Yao's test \cite{yao1982theory}. Roughly speaking, a sequence of words passes the test if, given reasonable computational power, one is not able to distinguish it from a sequence generated at random. For modern versions of empirical tests for randomness properties in PRNGs we refer to the Dieharder test suite \cite{brown2017dieharder} and the TestU01 software library \cite{l2007testu01}. As an example, the spacings of points which are selected according to the underlying PRNG on a large interval are tested for being exponentially distributed. Asymptotically, this holds true for the spacings of truly randomly chosen points. A second source of problems in using QMC seeds in MCMC arises since MCMC is inherently sequential, a feature that QMC methods generally do not respect. For example, the van der Corput sequence (\cite{van1935b}), which will be introduced below, has been applied as a seed for MCMC in \cite{morokoff1993quasi}. In their example, where an even number of heat particles are supposed to move according to a symmetric random walk, the first particle always moves to the left when sampled using the van der Corput sequence. 
This peculiar behaviour occurs since, although the van der Corput sequence is equidistributed over $[0,1]$, non-overlapping tuples of size $d= 2 m$ for $m\in \mathbb{N}$ are not equidistributed over $[0,1]^d$, as is shown later. The convergence of a QMC method, i.e.\ the successful integration of a function on $\mathbb{R}^d$, relies on the equidistribution of tuples $(u_{(n-1)d+1}, ..., u_{nd})\in [0,1]^d$ for $n\rightarrow \infty$, where $d$ is fixed. In order to prevent failures when using QMC in MCMC such as in \cite{morokoff1993quasi}, tuples of the form $(u_{(n-1)d'+1}, ..., u_{nd'}) \in [0,1]^{d'}$ must satisfy equidistribution for $n\rightarrow \infty$ while $d'$ is variable. This naturally leads us to the definition of CUD numbers: a sequence $(u_i)_{i}\subset [0,1]$ is called completely uniformly distributed (CUD) if for any $d\ge 1$ the points $\vec{x}^{(d)}_i = (u_i, \ldots, u_{i+d-1})\in [0,1]^d$ fulfill \begin{align*} D_n^{*d}(\vec{x}_1^{(d)}, \ldots, \vec{x}_n^{(d)})\rightarrow 0, \quad \text{ as } \quad n\rightarrow \infty. \end{align*} In other words, any sequence of overlapping blocks of $u_i$ of size $d$ yields the desirable uniformity property $D_n^{*d}\rightarrow 0$ for a CUD sequence $(u_i)_{i\ge 1}$. It was shown in \cite{chentsov1967pseudorandom} that this is equivalent to any sequence of non-overlapping blocks of $u_i$ of size $d$ satisfying $D_n^{*d}\rightarrow 0$, i.e.\ \begin{align} D_n^{*d}(\vec{\tilde{x}}_1^{(d)}, \ldots, \vec{\tilde{x}}_n^{(d)})\rightarrow 0, \quad \text{ as } \quad n\rightarrow \infty, \label{eq:cud_convergence_non_overlapping} \end{align} where $\vec{\tilde{x}}_i^{(d)}:=(u_{d(i-1)+1},\ldots, u_{di})\in [0,1]^d$. In \cite{chen2011consistency}, Chen et al.\ prove that if in standard MCMC the underlying driving sequence of IID numbers is replaced by CUD numbers, the resulting algorithm consistently samples from the target distribution under certain regularity conditions. 
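The overlapping-blocks condition is easy to probe empirically. For the van der Corput sequence, whose failure is shown later in the text, every other element lies in $[1/2,1)$, so overlapping pairs $(u_i, u_{i+1})$ never visit $(0,1/2)^2$, whereas pairs from an IID stream do. A short sketch (function names and sample sizes are our own):

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence (radical inverse)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def overlapping_pairs(u):
    """Overlapping blocks of size d = 2: (u_i, u_{i+1})."""
    return list(zip(u, u[1:]))

vdc = [van_der_corput(i) for i in range(1, 501)]
# no overlapping pair ever falls in (0, 1/2)^2 -> D*_n >= 1/4 for all n
assert not any(x < 0.5 and y < 0.5 for x, y in overlapping_pairs(vdc))

random.seed(0)
iid = [random.random() for _ in range(500)]
# an IID stream visits that square with probability 1/4 per pair
assert any(x < 0.5 and y < 0.5 for x, y in overlapping_pairs(iid))
```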
One can easily show that every sequence of IID numbers is also CUD. \vspace{4mm} \noindent \textbf{Constructions in the literature} \vspace{1.5mm} \noindent There are a number of techniques in the literature to construct CUD sequences. In \cite{levin1999discrepancy}, several constructions of CUD sequences are introduced, but none of them is amenable to actual implementation \cite{chen2011consistencythesis}. In \cite{chen2011consistencythesis}, an equidistributed linear feedback shift register (LFSR) sequence implemented by Matsumoto and Nishimura is used, which is shown to have the CUD property. In \cite{owen2005quasi} the author uses a CUD sequence that is based on the linear congruential generator (LCG) developed in \cite{entacher1998quasi}. The lattice construction from \cite{niederreiter1977pseudo} and the shuffling strategy for QMC points from \cite{liao1998variance} are also both shown to produce CUD points in \cite{tribble2008construction}. Furthermore, \cite{chen2012new} presents constructions of CUD points based on fully equidistributed LFSR, and antithetic and round trip sampling, the former of which we will use for our simulations later on. The construction introduced in \cite{tribble2008construction} relies on an LCG with initial seed $1$ and increment equal to zero. For a given sequence length, a good multiplier is found from the primitive root values displayed in \cite{l1999tables}. \vspace{4mm} \noindent \textbf{Illustration of a CUD sequence} \vspace{1.5mm} \noindent As an illustration, we display in Figure \ref{fig:cuds_vs_psr} an implementation of the CUD construction introduced in \cite{chen2012new}, which relies on an LFSR with a transition mechanism based on primitive polynomials over the Galois field $GF(2)$. The resulting sequence is visually more homogeneously distributed than one generated using pseudo-random numbers. 
For a complete description of the construction, sufficient for a reader to implement the method themselves, we refer to Section 3 in \cite{chen2012new}. Additionally, we provide our own Python implementation of this CUD generator in \cite{tobias_schwedes_2018_1255042}, as well as of the one introduced by \cite{tribble2008construction}. \vspace{4mm} \noindent \textbf{Construction used in this work} \vspace{1.5mm} \noindent Given a target defined on a $d$-dimensional space, we employ a technique of running through a generated CUD sequence $d$ times, similar to \cite{owen2005quasi}, thereby creating tuples of size $d$. The resulting tuples are pairwise different from each other, and every tuple is used exactly once in the simulation. Similarly to \cite{owen2005quasi}, we prepend a tuple of values close to zero to the resulting tuple sequence, imitating the property of an integration lattice containing a point at the origin.\\ More precisely, we use the CUD construction based on Section 3 in \cite{chen2012new}, which creates sequences of length $L=2^m-1$ for integers $10 \le m \le 32$. Given a sequence $u_1,...,u_L \in (0,1)$ and dimensionality $d$, we cut off the sequence at $T:= \lfloor L/d \rfloor \cdot d \le L$, leading to the trimmed sequence $u_1,...,u_T$. Since $L-T \le d \ll T,L$, trimming has no relevant influence on the outcomes of simulations. In order to make efficient use of the generated sequence, we generate tuples of size $d$ of the form \begin{align*} &(u_1,...,u_d), (u_{d+1}, \ldots, u_{2d}), ..., (u_{T-d+1}, ..., u_T),\\ &(u_2,...,u_{d+1}), (u_{d+2}, \ldots, u_{2d+1}), ..., (u_{T-d+2}, ..., u_T, u_1),\\ &...\\ &(u_d,...,u_{2d-1}), (u_{2d}, \ldots, u_{3d-1}), ..., (u_{T}, u_1, ..., u_{d-1}). \end{align*} The sequence of points $v_n$, $n=1,...,dT$, given by $u_{1}, ..., u_T$, $u_2, ..., u_T, u_1$, $...$, $u_{d}, ..., u_T, u_1, ..., u_{d-1}$, still satisfies the CUD property. 
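The rotation-and-chop construction just described can be written compactly: trim the sequence to length $T$, form $d$ cyclically shifted copies, and cut each copy into non-overlapping $d$-tuples. A sketch (the function name and test sequence are our own):

```python
def cud_tuples(u, d):
    """All d rotated passes of non-overlapping d-tuples over u_1, ..., u_T."""
    T = (len(u) // d) * d              # trim: T = floor(L / d) * d
    u = u[:T]
    tuples = []
    for k in range(d):                 # pass k uses u_{k+1}, ..., u_T, u_1, ..., u_k
        rotated = u[k:] + u[:k]
        tuples.extend(tuple(rotated[i:i + d]) for i in range(0, T, d))
    return tuples
```

As stated in the text, the $d \cdot T/d = T$ resulting tuples are pairwise distinct (their starting positions in the cyclic sequence all differ), and together they use each of the $dT$ stream values exactly once.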
This is true since shifting the indices in $u_1,...,u_T$ to $u_{k+1},...,u_T, u_1, ..., u_k$ for any $k\in \mathbb{N}$ does not influence the CUD property. Further, appending a CUD sequence to another CUD sequence of the same length preserves the CUD property, too. Finally, prepending a single tuple of size $d$ does not affect the CUD property for overlapping tuples of size $d$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./plots_cuds/cud2d.eps} \caption{A completely uniformly distributed sequence.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./plots_cuds/randomd2.eps} \caption{A pseudo-randomly generated sequence.} \end{subfigure} \caption{\small{Segments of CUD and pseudo-random finite sequences in $(0,1)^2$.}} \label{fig:cuds_vs_psr} \end{figure} \vspace{4mm} \noindent \textbf{QMC sequences are generally not CUD} \vspace{1.5mm} \noindent Finally, we note that care must be taken in the choice of the QMC sequence applied to MCMC, since not every low-discrepancy sequence is a CUD sequence. For the \textit{van der Corput sequence} $(u_n)_{n}$ (\cite{van1935b}), we have $u_{2n}\in (0,1/2)$ and $u_{2n-1}\in [1/2,1)$ for all $n\ge 1$. Thus, \begin{align} \vec{x}^{(2)}_{2n} &\in (0,1/2)\times [1/2,1), \text{ and }\\ \vec{x}^{(2)}_{2n-1} &\in [1/2,1)\times (0,1/2). \end{align} Therefore, the sequence of overlapping tuples $\vec{x}^{(2)}_{n}$ never hits the square $(0,1/2)\times(0,1/2)$, which implies $D^{*2}_n \ge 1/4$ for any $n$. Note that the same holds true for non-overlapping tuples $\tilde{\vec{x}}^{(2)}_{n}$. Hence, the van der Corput sequence is not a CUD sequence. \section{Pseudo-random MP-MCMC} \label{sec:multiple_proposal_mcmc} In \cite{calderhead2014general}, a natural generalisation of the well-known Metropolis-Hastings algorithm (\cite{hastings1970monte}) that allows for parallelising a single chain is achieved by proposing multiple points in parallel. 
In every MCMC iteration, samples are drawn from a finite state Markov chain on the proposed points, which is constructed in such a way that the overall procedure has the correct target density $\pi$ on a state space $\Omega \subset\mathbb{R}^d$ for $d \in \mathbb{N}$, as its stationary distribution. In this section we introduce this algorithm, demonstrate that this approach mathematically corresponds to a Metropolis-Hastings algorithm over a product space and prove its consistency, as well as some asymptotic limit theorems, before considering how to extend this algorithm to improve its sampling performance. \subsection{Derivation} \label{subsec:derivation_mpmcmc} Before presenting the MP-MCMC algorithm we first note that any joint probability distribution $p(\vec{y}_{1:N+1})$, where $\vec{y}_{1:N+1}=\vec{y}_{[1:N+1]}=(\vec{y}_1,...,\vec{y}_{N+1})$ with $\vec{y}_i\in \Omega$ $\forall i=1,...,N+1$, can be factorised in $N+1$ different ways, using conditional probabilities of the form, $p(\vec{y}_{1:N+1}) = p(\vec{y}_i)p(\vec{y}_{\setminus i}|\vec{y}_i)$, where $\vec{y}_{\setminus i}:=\vec{y}_{[1:i-1,i+1:N+1]}$. If the target $\pi$ is the marginal distribution for $\vec{y}_i$ of $\vec{y}_{1:N+1}\sim p$ and any $i=1,...,N+1$, then \begin{align*} p(\vec{y}_{1:N+1}) = \pi(\vec{y}_i)\kappa (\vec{y}_i, \vec{y}_{\setminus i}), \end{align*} for a proposal distribution $\kappa$ satisfying $\kappa(\vec{y}_i, \vec{y}_{\setminus i})\equiv p(\vec{y}_{\setminus i}|\vec{y}_i)$. Thus, in the $i$th factorisation, $\vec{y}_i \sim \pi$, while the other $\vec{y}_{\setminus i}\sim \kappa(\vec{y}_i, \cdot)$. Referring to \cite{tjelmeland2004using, calderhead2014general}, a uniform auxiliary variable $I\in \{1,...,N+1 \}$ can be introduced that determines which factorisation is used, such that \begin{align} p(\vec{y}_{1:N+1}, I=i) = \frac{1}{N+1}\pi(\vec{y}_i)\kappa(\vec{y}_i, \vec{y}_{\setminus i}). 
\label{eq:factorisation_derivation_mpmcmc} \end{align} \begin{algorithm}[h] \SetAlgoLined \KwIn{ Initialise starting point $\vec{x}_0=\vec{y}_1\in \Omega\subset \mathbb{R}^d$, number of proposals $N$, number of accepted samples per iteration $M$, auxiliary variable $I=1$ and counter $n=1$\;} \For{\textnormal{each MCMC iteration $\ell=1,2,...$}}{ Sample $\vec{y}_{\setminus I}$ conditioned on $I$, i.e., draw $N$ new points from the proposal kernel $\kappa(\vec{y}_I, \cdot) = p(\vec{y}_{\setminus I}|\vec{y}_I)$ \; Calculate the stationary distribution of $I$ conditioned on $\vec{y}_{1:N+1}$, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|\vec{y}_{1:N+1}) = \pi(\vec{y}_i)\kappa(\vec{y}_{{i}}, \vec{y}_{\setminus{i}}) / \sum_j \pi(\vec{y}_j)\kappa(\vec{y}_{{j}}, \vec{y}_{\setminus{j}})$, which can be done in parallel\; \For{$m=1,...,M$}{ Sample new $I$ via the stationary distribution $p(\cdot|\vec{y}_{1:N+1})$\; Set new sample $\vec{x}_{n+m} = \vec{y}_I$\; } Update counter $n=n+M$ } \caption{Multiple-proposal Metropolis-Hastings} \label{algorithm:multiproposal_MH} \end{algorithm} \subsubsection{A Markov chain over a product space} \label{subsubsec:markov_chain_over_product_space} The MP-MCMC method generates $M\in \mathbb{N}$ new samples per iteration, and can be considered as a single Markov chain over the product space of proposal and auxiliary variables $(\vec{y}_{1:N+1}, I_{1:M})\in \Omega^{N+1}\times \{1,...,N+1\}^M$ by applying a combination of two transition kernels, each of which preserves the underlying joint stationary distribution.
First, the states of a finite state Markov chain are created by updating the proposals $\vec{y}_{\setminus{i}}$ conditioned on $\vec{y}_{i}$ and $I_M=i$, which clearly preserves the joint target distribution as we sample directly from $\kappa(\vec{y}_i, \cdot)$; this is equivalent to a Gibbs sampling step. The choice of the proposal kernel is up to the practitioner, and kernels based on Langevin diffusion and Hamiltonian dynamics have successfully been applied (\cite{calderhead2014general}). Secondly, $I_m$ conditioned on $\vec{y}_{1:N+1}$ and $I_{m-1}$ is sampled $M$ times, i.e.\ for $m=1,...,M$, using a transition matrix $A$, where $A(i,j)=A(i,j|\vec{y}_{1:N+1})$ denotes the probability of transitioning from $I_{m-1}=i$ to $I_m=j$. Here, $I_0$ denotes the final sample $I_M$ of $I$ from the previous iteration. An illustration of this procedure is given in Figure \ref{fig:illustration_mpmcmc_single_chain}. Using the factorisation from \eqref{eq:factorisation_derivation_mpmcmc}, the joint distribution of $(\vec{y}_{1:N+1}, i_m)$, for $I_m=i_m$ denoting the $m$th sample of $I$ and $m=1,...,M$, can be expressed as, \begin{align*} p(\vec{y}_{1:N+1}, I_m=i_m) = \frac{1}{N+1} \pi(\vec{y}_{i_m}) \kappa(\vec{y}_{i_m}, \vec{y}_{\setminus{i_m}}). \end{align*} Observe that $\vec{y}_{i_m}$ has the correct density $\pi(\vec{y}_{i_m})$ for any $m=1,...,M$. These are therefore the samples we collect in every iteration. For the particular case where $A(i,j)=p(I=j | \vec{y}_{1:N+1})$, independent of $i$, denotes the stationary transition matrix on the states $I_1,...,I_M$ given $\vec{y}_{1:N+1}$, the entire procedure described here is given in Algorithm \ref{algorithm:multiproposal_MH}. Note that, for the sake of clarity, we make a distinction between proposals $\vec{y}_{1:N+1}$ from one iteration, and the accepted samples $\vec{x}_{n}$ with $n\in \mathbb{N}$.
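For concreteness, Algorithm \ref{algorithm:multiproposal_MH} can be sketched in a few lines of Python. The sketch is illustrative only: the Gaussian random-walk kernel with i.i.d.\ proposals, and all function and variable names, are our own assumptions rather than part of the algorithm's specification.

```python
import numpy as np

def mp_mcmc(log_pi, x0, n_iters, N=4, M=4, sigma=1.0, rng=None):
    """Illustrative sketch of multiple-proposal Metropolis-Hastings.

    Proposals are drawn i.i.d. from a Gaussian random walk around the
    current point y_I, so the kernel factor kappa(y_j, y_minus_j) is a
    product of normal densities (common constants cancel in the weights).
    """
    rng = np.random.default_rng(rng)
    y_I = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_iters):
        # Propose N new points conditioned on the current point y_I.
        Y = np.vstack([y_I, y_I + sigma * rng.standard_normal((N, y_I.size))])
        # Stationary weights p(I = j | y_{1:N+1}), computed in log space.
        logw = np.empty(N + 1)
        for j in range(N + 1):
            diffs = np.delete(Y, j, axis=0) - Y[j]
            logw[j] = log_pi(Y[j]) - 0.5 * np.sum(diffs**2) / sigma**2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Draw M samples of I directly from the stationary distribution.
        idx = rng.choice(N + 1, size=M, p=w)
        samples.extend(Y[i] for i in idx)
        y_I = Y[idx[-1]]  # last accepted sample seeds the next iteration
    return np.array(samples)

# Example: sample a one-dimensional standard normal target.
chain = mp_mcmc(lambda x: -0.5 * float(x @ x), x0=0.0, n_iters=2000,
                N=4, M=4, sigma=1.5, rng=0)
```

Since the proposals are drawn i.i.d.\ around $\vec{y}_I$, the factor $\kappa(\vec{y}_j, \vec{y}_{\setminus j})$ reduces for each $j$ to a product of normal densities whose common normalising constant cancels in the weights.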
\begin{figure}[h] \centering \resizebox{\linewidth}{!}{ \begin{tikzpicture}[scale=.33, font=\sffamily, dot/.style = {state, fill=gray!20!white, line width=0.01mm, inner sep=1pt, minimum size=0.1pt, minimum width=0.02cm}, >=triangle 45] \node at (0, -2) (x1) {}; \node at (5, 10) (yiphantom) {}; \node at (7.5, 10) (yi) {\text{\small ${y}^{(i)}_{1:N+1}$}}; \node at (16, 7.5) (Ii1) {\text{\small $I_{1}^{(i)}|{y}^{(i)}_{1:N+1}$}}; \node[minimum size=20pt] at (16, 3) (Ii2) {$...$}; \node at (16, -1.5) (IiM) {\text{\small $I_{M}^{(i)}|{y}^{(i)}_{1:N+1}$}}; \draw[ black, line width=0.05mm] [->] (x1) -- (yiphantom) node[midway, color=black, below right= -0.2cm] {\text{\small $\kappa({y}^{(i-1)}_{I_M^{(i-1)}}, \cdot )$}}; \draw[ black, line width=0.05mm] [->] (yi) -- (Ii1) ; \draw[ black, line width=0.05mm] [->] (Ii1) -- (Ii2) node[midway, color=black, left] {\text{\scriptsize $A(I^{(i)}_1, I^{(i)}_2)$}}; \draw[ black, line width=0.05mm] [->] (Ii2) -- (IiM) node[midway, color=black, left] {\text{\scriptsize $A(I^{(i)}_{M-1},
I^{(i)}_M)$}}; \node at (20, -2) (x2) {}; \node at (25, 10) (yiplus1phantom) {}; \node at (27.5, 10) (yiplus1) {\text{\small ${y}^{(i+1)}_{1:N+1}$}}; \node at (36, 7.5) (Iiplus11) {\text{\small $I_{1}^{(i+1)}|{y}^{(i+1)}_{1:N+1}$}}; \node[minimum size=20pt] at (36, 3) (Iiplus12) {$...$}; \node at (36, -1.5) (Iiplus1M) {\text{\small $I_{M}^{(i+1)}|{y}^{(i+1)}_{1:N+1}$}}; \draw[ black, line width=0.05mm] [->] (x2) -- (yiplus1phantom) node[midway, color=black, below right= -0.2cm] {\text{\small $\kappa({y}^{(i)}_{I_M^{(i)}}, \cdot )$}}; \draw[ black, line width=0.05mm] [->] (x2) -- (yiplus1phantom) ; \draw[ black, line width=0.05mm] [->] (yiplus1) -- (Iiplus11) ; \draw[ black, line width=0.05mm] [->] (Iiplus11) -- (Iiplus12) node[midway, color=black, left] {\text{\scriptsize $A(I^{(i+1)}_1, I^{(i+1)}_2)$}}; \draw[ black, line width=0.05mm] [->] (Iiplus12) -- (Iiplus1M) node[midway, color=black, left] {\text{\scriptsize $A(I^{(i+1)}_{M-1}, I^{(i+1)}_M)$}}; \node at (40, -2) (x3) {}; \node at (45, 10) (yiplus2phantom) {}; \draw[ black, line width=0.05mm] [->] (x3) -- (yiplus2phantom) node[midway, color=black, below right= -0.2cm] {\text{\small $\kappa({y}^{(i+1)}_{I_M^{(i+1)}}, \cdot )$}}; \node at (13, -4.5) (iter_i) {$i$th iteration}; \node at (33, -4.5) (iter_i) {$(i+1)$th iteration}; \begin{scope} [on background layer]{\fill[rounded corners= 10pt, shading = axis, left color=white, right color=black!20,black!60,thick,dotted,fill=black!20] ($(-5, 11.5)$) rectangle ($(0, -3)$);} [on background layer]{\draw[rounded corners= 10pt, black!60,thick,dotted,fill=black!20] ($(5, 11.5)$) rectangle ($(20, -3)$);} [on background layer]{\draw[rounded corners= 10pt,black!60,thick,dotted,fill=black!20] ($(25, 11.5)$) rectangle ($(40., -3)$);} [on background layer]{\fill[rounded corners= 10pt,shading = axis, left color=black!20, right color=white,black!60,thick,dotted,fill=black!20] ($(45, 11.5)$) rectangle ($(50, -3)$);} \end{scope} \node at (40, -2) (x3) {}; \end{tikzpicture} } 
\caption{Visualisation of MP-MCMC as a single Markov chain on the product space of proposal samples and auxiliary variables} \label{fig:illustration_mpmcmc_single_chain} \end{figure} Considering MP-MCMC as a Markov chain over the product space of proposals and auxiliary variables has the advantage that a formula for the transition kernel of the resulting chain can be derived. This is useful for later statements concerning ergodicity of adaptive versions of MP-MCMC in Section \ref{subsec:adaptive_mpmcmc}, which require computations on the transition probabilities. \subsubsection{Transition probabilities on the product space} \label{subsubsec:transition_probabilities_product_space} An explicit form for the transition kernel density $\hat{P}(\tilde{z}, z)$, from state $\tilde{z}=(\tilde{\vec{y}}_{1:N+1}, \tilde{i}_{1:M})$ to state $z=(\vec{y}_{1:N+1}, i_{1:M})$, where $\tilde{z},z \in \Omega^{N+1}\times \{1,...,N+1\}^M$, is given by, \begin{align} \hat{P}(\tilde{z}, z) = \kappa(\vec{y}_{i_0}, \vec{y}_{\setminus{i_0}}) \prod_{m=1}^M A(i_{m-1}, {i_m}) \label{eq:last_eq_transition_kernel_mp_mcmc} \end{align} where we implicitly used that $i_0 = \tilde{i}_M$ and $\vec{y}_{i_0}=\tilde{\vec{y}}_{\tilde{i}_M}$. If the latter assumption is dropped, the factor $\delta_{\tilde{\vec{y}}_{\tilde{i}_M}}(\vec{y}_{i_0})$ must be added to the expression for the transition kernel. A more thorough derivation of equation \eqref{eq:last_eq_transition_kernel_mp_mcmc}, as well as the subsequent equation \eqref{eq:transition_kernel_set_mp_mcmc1}, is presented in Appendix \ref{appendix:transition_probabilities_product_space}. Let us introduce the notation $B_{1:{n}}=B_1 \times ... \times B_{n}$ for any sets $B_1, ..., B_n$ and $n \in \mathbb{N}$. Let $B\in \mathcal{B}(\Omega^{N+1}) \times \mathcal{P}(\{1,...,N+1\}^M)$ such that $B= C_{1:{N+1}} \times D_{1:M} \subset \Omega^{N+1}\times \{1,...,N+1\}^M$, where $\mathcal{P}(\{1,...,N+1\}^M)$ denotes the power set of $\{1,...,N+1\}^M$.
The probability $\operatorname{Pr}(z \in B |\tilde{z})=\hat{P}(\tilde{z}, B)$ of a new state $z=(\vec{y}_{1:N+1},i_{1:M})\in B$ given a current state $\tilde{z}=(\tilde{\vec{y}}_{1:N+1}, \tilde{i}_{1:M})$ with $\tilde{i}_M=i_0$ can be expressed as, \begin{align} \hat{P}(\tilde{z}, B) = \chi_{C_{i_0}}(\tilde{\vec{y}}_{\tilde{i}_M})\int_{C_{\setminus{i_0}}}\kappa(\vec{y}_{i_0}=\tilde{\vec{y}}_{\tilde{i}_M}, \vec{y}_{\setminus{i_0}}) \sum_{i_{1:M}\in D_{1:M}} \prod_{m=1}^M A(i_{m-1}, i_m| \vec{y}_{i_0}=\tilde{\vec{y}}_{\tilde{i}_M}) \mathrm{d}\vec{y}_{\setminus{i_0}}, \label{eq:transition_kernel_set_mp_mcmc1} \end{align} where we use the notation $B_{\setminus{i}} = B_1\times ...\times B_{i-1} \times B_{i+1} \times .. \times B_n$ for any sets $B_1, ..., B_n$ and $i=1,...,n\in \mathbb{N}$. Note that conditioning on $\tilde{z}$ in \eqref{eq:transition_kernel_set_mp_mcmc1} reduces to conditioning on the last accepted sample $\tilde{\vec{y}}_{\tilde{i}_M}$, which is the only sample of relevance for the subsequent iteration. Thus, the domain of the transition kernel $\hat{P}$ can be reduced to $\Omega$ by using the identification $\hat{P}(\tilde{\vec{y}}_{\tilde{i}_M}, B)\equiv \hat{P}(\tilde{z}, B)$. \subsubsection{Equivalence of MP-MCMC to a chain on the accepted samples} \label{subsubsec:equivalence_mpmcmc} An alternative to the representation of MP-MCMC as a Markov chain over the product space of proposals and auxiliary variables $(\vec{y}_{1:N+1}, I_{1:M})$ in one iteration is to understand it as a chain over the space of accepted samples $(\vec{x}_1, ..., \vec{x}_M)$ in one iteration. Indeed, given the current accepted set of samples, any set of samples generated in a future iteration is independent of the past iterations. This representation is useful since it allows us to view MP-MCMC as a Markov chain over a single real space, which will be used to prove limit theorems in Section \ref{subsection:limit_theorems}.
Further, explicit transition probabilities for this representation are derived in what follows, which are then used to prove ergodicity statements for adaptive versions of MP-MCMC in Section \ref{subsec:adaptive_mpmcmc}. Note that since $\vec{x}_i \in \Omega$ for any $i=1,...,M$, we have $(\vec{x}_1, ..., \vec{x}_M) \in \Omega^M$. \subsubsection{Transition probabilities on the accepted samples} \label{subsubsec:trans_probs_sample_state_space} Clearly, we would like to have an expression for the transitions between actual accepted states rather than proposal and auxiliary variables. It is possible to derive from the transition kernel $\hat{P}$, corresponding to the states of proposals and auxiliary variables $(\vec{y}_{1:N+1},I_{1:M})$, the transition kernel $P$, corresponding to only the actually accepted states $\vec{x}_{1:M} \in \Omega^M$, i.e.\ where $\vec{x}_m = \vec{y}_{I_m}$ for $m=1, ..., M$. The probability of accepted states $\vec{x}_{1:M}\in B=B_{1:M}\in \mathcal{B}({\Omega^M})$ given a previous state $\tilde{\vec{x}}_{1:M}$ can be expressed in terms of the transition kernel $\hat{P}$ on the model state space of proposals and auxiliary variables from \eqref{eq:transition_kernel_set_mp_mcmc1} as follows, \begin{align} {P}(\tilde{\vec{x}}_{1:M}, B) &= \hat{P}\left( \tilde{\vec{x}}_M, \bigcup_{i_1,...,i_M=1}^{N+1}\ \bigcap_{m=1}^M S_{i_m}(B) \right) \label{eq:mp_mcmc_transition_kernel_general_case1a} \end{align} where \begin{align} S_{i_m}(B)=\Omega^{i_m-1}\times B_m \times \Omega^{N+1-i_m} \times \{1:N+1\}^{m-1} \times \{i_m\}\times \{1:N+1 \}^{M-m} \label{eq:Sim_set} \end{align} for any $i_m=1,...,N+1$ and $m=1, ..., M$. The sets $S_{i_m}(B)$ are pairwise disjoint. A comprehensive derivation of equation \eqref{eq:mp_mcmc_transition_kernel_general_case1a} can be found in Appendix \ref{appendix:transition_probabilities_accepted_samples}.
\subsection{Consistency} \label{subsection:consistency} The question then arises how to choose the transition probabilities $A(i,j)$, for any $i,j=1,...,N+1$, such that the target distribution $\pi$ for accepted samples is preserved. Given the transition matrix $[A(i,j)]_{i,j}$ on the finite states $i,j$ determining the transitions of $I$, and using the factorisation in \eqref{eq:factorisation_derivation_mpmcmc}, the detailed balance condition for updating $I$ in $(\vec{y}_{1:N+1}, I)$ is given by \begin{align*} \frac{1}{N+1}\pi(\vec{y}_i)\kappa(\vec{y}_i, \vec{y}_{\setminus{i}}) A(i,j) = \frac{1}{N+1}\pi(\vec{y}_j)\kappa(\vec{y}_j, \vec{y}_{\setminus{j}}) A(j,i), \end{align*} for all $i,j=1,...,N+1$. Summing over $j$ shows that the detailed balance condition implies the balance condition, \begin{align} \frac{1}{N+1} \pi(\vec{y}_i)\kappa(\vec{y}_{i}, \vec{y}_{\setminus{i}}) = \frac{1}{N+1}\sum_{j=1}^{N+1} \pi(\vec{y}_j) \kappa(\vec{y}_{j}, \vec{y}_{\setminus{j}}) A(j,i), \label{eq:balance_condition_mpmcmc} \end{align} for any $i=1,...,N+1$.
If \eqref{eq:balance_condition_mpmcmc} holds true, the joint distribution $p(\vec{y}_{1:N+1}, I)$ is invariant when $I$ is sampled using the transition matrix $A$. We say that the sequence $(\vec{x}_i)_i$ of an MP-MCMC algorithm consistently samples $\pi$, if \begin{align} \lim_{n\rightarrow \infty} \frac{1}{n} \sum_{i=1}^n f(\vec{x}_i) = \int_\Omega f(\vec{x}) \pi(\vec{x}) \mathrm{d}\vec{x}, \label{eq:consistency} \end{align} for any continuous bounded $f$ on $\Omega \subset \mathbb{R}^d$. We have chosen this notion of consistency because the formulation of low-discrepancy point sets is implicitly closely related to the Riemann integral, which is well-defined for continuous and bounded functions. For further details we refer to the integrability condition introduced in Section \ref{subsection:consistency}. If the underlying Markov chain on the states $(\vec{y}_{1:N+1}, I)$ satisfies \eqref{eq:balance_condition_mpmcmc} and is positive Harris with invariant distribution $p(\vec{y}_{1:N+1}, I)$, then \eqref{eq:consistency} holds true, which is an immediate consequence of the ergodic theorem, Theorem 17.1.7, in \cite{meyn2012markov}. For a definition of positive Harris we refer to Section 10.1.1 in \cite{meyn2012markov}. \subsubsection{Sampling from the stationary distribution of $I$} \label{subsubsection:sampling_from_stationary_distribution} As stated in Algorithm \ref{algorithm:multiproposal_MH}, one can sample directly from the steady-state distribution of the Markov chain, conditioned on $\vec{y}_{1:N+1}$.
The stationary distribution of $I$, given $\vec{y}_{1:N+1}$, also used in \cite{calderhead2014general}, equals \begin{align} p(I=j | \vec{y}_{1:N+1}) = \frac{\pi(\vec{y}_j)\kappa(\vec{y}_j, \vec{y}_{\setminus j})}{ \sum_{k=1}^{N+1}\pi(\vec{y}_k) \kappa(\vec{y}_k, \vec{y}_{\setminus k})}, \label{eq:barker_acceptance} \end{align} for any $j=1,...,N+1$. One can easily see that detailed balance holds for the stationary transition matrix $A(i,j)= p(I=j|\vec{y}_{1:N+1})$. Note that \eqref{eq:barker_acceptance} is a generalisation of Barker's algorithm (\cite{barker1965monte}) to multiple proposals in one iteration. For $N=1$, the term on the right hand side reduces to Barker's acceptance probability. Since this probability is always smaller than or equal to Peskun's acceptance probability, which is used in the usual Metropolis-Hastings algorithm, samples generated by Barker's algorithm yield mean estimates with an asymptotic variance that is at least as large as when generated by Metropolis-Hastings \cite[Theorem 2.2.1]{peskun1973optimum}. A generalisation of Peskun's acceptance probability to the multiple proposal setting is introduced in \cite{tjelmeland2004using}, which aims to minimise the diagonal entries of the transition matrix iteratively, starting from the stationary transition matrix, while preserving its reversibility. The resulting MP-MCMC method is investigated numerically in Section \ref{subsec:extensions}.
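As a quick numerical sanity check, the following sketch (illustrative Python; random positive numbers stand in for the factors $\pi(\vec{y}_j)\kappa(\vec{y}_j,\vec{y}_{\setminus j})$) verifies that the stationary transition matrix satisfies detailed balance, and that for $N=1$ the weight reduces to Barker's acceptance probability:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
# Random positive numbers stand in for the factors pi(y_j) kappa(y_j, .).
w = rng.random(N + 1)
p = w / w.sum()             # stationary distribution of I given y_{1:N+1}
A = np.tile(p, (N + 1, 1))  # A(i, j) = p(I = j | y_{1:N+1}), independent of i

# Detailed balance: p_i A(i, j) = p_j A(j, i) for all i, j.
flux = p[:, None] * A
assert np.allclose(flux, flux.T)

# For N = 1, the weight of the second state is Barker's acceptance probability.
w2 = w[:2]
assert np.isclose(w2[1] / w2.sum(), w2[1] / (w2[0] + w2[1]))
```

Detailed balance is immediate here because $A(i,j)$ does not depend on $i$: the probability flux $p_i A(i,j) = p_i p_j$ is symmetric in $i$ and $j$.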
\subsubsection{Sampling from the transient distribution of $I$} Instead of sampling $I$ from the stationary finite state Markov chain conditioned on $\vec{y}_{1:N+1}$, \cite{calderhead2014general} proposes the choice $A(i,j)=A(i,j|\vec{y}_{1:N+1})$, defined by \begin{align} A(i,j) = \begin{cases} \frac{1}{N}\min(1, R(i,j)) &\mbox{if } j \neq i \\ 1-\sum_{j\neq i} A(i,j) &\mbox{otherwise,} \end{cases} \label{eq:transient_transition} \end{align} where $R(i,j) = {\pi(\vec{y}_j)\kappa(\vec{y}_j,\vec{y}_{\setminus j})} / {[\pi(\vec{y}_i)\kappa(\vec{y}_i,\vec{y}_{\setminus i})]}$. Referring to Proposition 1 in \cite{calderhead2014general}, detailed balance is fulfilled for $A$ as given in \eqref{eq:transient_transition}. Note that this choice is a generalisation of the original Metropolis-Hastings acceptance probability. For $N=1$, i.e.\ if only a single state is proposed and a single state is accepted in each iteration, and if we replace the choice of $A$ in Algorithm \ref{algorithm:multiproposal_MH} by \eqref{eq:transient_transition}, the resulting algorithm reduces to the usual Metropolis-Hastings algorithm. \subsection{Some practical aspects} In this section, we discuss some practical considerations regarding the use of MP-MCMC and some properties unique to this approach. \subsubsection{Parallelisation} In recent years, much effort has been focused on developing parallelisable MCMC strategies for performing inference on increasingly more computationally challenging problems. 
A number of specific approaches have been considered previously that incorporate different levels of parallelisation, for example subsampling data to scale MCMC algorithms to big data scenarios (\cite{neiswanger2013asymptotically}, \cite{wang2013parallelizing}), parallelising geometric calculations used in designing efficient proposal mechanisms (\cite{welling2011bayesian}, \cite{ahn2012bayesian}), and for certain cases, parallelising the likelihood computation (\cite{agarwal2011distributed}, \cite{smola2010architecture}). One major advantage of MP-MCMC compared to many MCMC methods, including standard Metropolis-Hastings and other single proposal algorithms, is that it is inherently parallelisable as a single chain. More precisely, the likelihoods associated with the multiple proposals in any iteration can be computed {\it in parallel} as these expressions are independent of each other. Evaluating the likelihood is typically far more expensive than evaluating prior or proposal densities, and once all proposal likelihoods are computed within one iteration of MP-MCMC, sampling from the finite state chain typically requires minimal computational effort. In standard single proposal Metropolis-Hastings the likelihood calculations are computed sequentially. In contrast, the computational speed-up of using MP-MCMC is close to a factor of $N$, if in every iteration $N$ samples are drawn and $N$ computing cores are available. We note that it is natural to match the number of proposals to the number of cores that are available for the simulation, or indeed to a multiple of that number. The latter does not yield further computational speed-up compared to using $N$ proposals, but other amenable features arise from an increased number of proposals, as we will see later in this paper. Indeed, it is for this reason that MP-MCMC outperforms the obvious approach of running multiple single MCMC algorithms in parallel and subsequently combining their samples.
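The parallel likelihood step can be sketched as follows (illustrative Python; the thread pool merely stands in for whatever parallel backend is available, and `log_likelihood` is a placeholder for an expensive model evaluation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def log_likelihood(theta):
    # Placeholder for an expensive model evaluation at one proposed point.
    return -0.5 * float(np.dot(theta, theta))

def evaluate_proposals(proposals, n_workers=4):
    # The N+1 likelihood evaluations of one MP-MCMC iteration are mutually
    # independent and can therefore be distributed over the workers; only
    # the cheap finite-state sampling step remains afterwards.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(log_likelihood, proposals))

proposals = [np.array([0.1 * j, -0.2 * j]) for j in range(8)]
parallel = evaluate_proposals(proposals)
sequential = [log_likelihood(t) for t in proposals]
assert np.allclose(parallel, sequential)
```

For a compute-bound likelihood one would use processes or separate cores rather than threads; the thread pool here is only meant to illustrate the independence of the evaluations.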
\subsubsection{Computation time} Compared to $N$ independent chains, MP-MCMC will generally be expected to perform slightly slower on an identical parallel machine with $N$ cores, due to the communication overhead following likelihood computations. More precisely, before sampling from the finite state chain in MP-MCMC, all likelihood evaluations must be completed and communicated. The overall time for a single iteration will therefore depend on the slowest computing time among all individual likelihood evaluations, although we note that measuring computation times is generally dependent on the underlying operating system architecture, hardware, and the quality or optimality of the implementation. Given the current experimental state of our code, we therefore do not report computing times in our simulation studies, but instead investigate platform-, language- and implementation-independent performance by comparing statistical efficiency for a fixed total number of samples. \subsubsection{Minimal number of iterations and information gain} In practice we need to make a couple of choices regarding the number of iterations to use, as well as the number of proposals to make within each iteration. When using MP-MCMC with proposals that depend on the previous iteration, employing too small a number of iterations together with a large number of proposals typically leads to a less useful estimate than single proposal MCMC (comparing Barker-type acceptance probabilities in both cases). That is, the MSE of global estimates obtained using MP-MCMC, e.g.\ the arithmetic mean, becomes large. This can be explained by a limited relative global information gain when increasing the proposal number: in a single MCMC iteration, proposals are typically determined using a single previously generated sample, for instance based on posterior information, such as the local geometry, e.g.\ MALA and its Riemannian versions.
Increasing the proposal number in a particular MCMC iteration increases the \textit{local} information gain around this point in the posterior. Visually, proposed samples in one iteration of MP-MCMC can be considered as a cloud of points, with some centre and covariance structure, which covers a certain region of the state space. This local coverage improves with increasing proposal numbers. Thus, increasing the number of proposals will in turn increase the \textit{global} information gain about the posterior by covering the state space more thoroughly, only if sufficiently many MCMC iterations are taken, each of which moves the centre point of our cloud of points. There is therefore clearly a trade-off to be made between local coverage, with the corresponding increased parallelisation of this approach, and more global sequential moves, such that the target is sufficiently well explored. Through a number of numerical experiments for increasing proposal numbers, we found that typically at least $250$ iterations are sufficient to achieve good results. \subsection{Empirical results} In the following, the performance of MP-MCMC is investigated in terms of its MSE convergence and in comparison with single proposal MCMC for a generic one-dimensional Gaussian posterior. We consider two types of acceptance probabilities used in the transition kernel: one is Barker's acceptance probability \eqref{eq:barker_acceptance} and the other is Peskun's acceptance probability \eqref{eq:transient_transition}. Note that these generalisations of single proposal acceptance probabilities to multiple proposals are not unique, and other ways of generalising exist, e.g.\ see Section \ref{subsubsec:optimised_transition_kernels}. To make a fair comparison between single and multiple proposal methods, we accept $N$ samples in every iteration, which is equal to the number of proposals.
Proposals are generated using a simplified manifold MALA (SmMALA) kernel \cite{girolami2011riemann}, \begin{align} \kappa(\vec{x}, \cdot) = N \left(\vec{x}+\varepsilon^2/2 G(\vec{x})^{-1} \nabla \log \pi(\vec{x}), \varepsilon^2 G(\vec{x})^{-1}\right), \label{eq:smMALA_kernel} \end{align} where $G(\vec{x})$ denotes the local covariance matrix given by the expected Fisher information \cite{girolami2011riemann} for any $\vec{x}\in \Omega$. Referring to Figure \ref{fig:mse_SingleVsMulti1}, a performance gain is achieved by switching from MCMC (Barker) to standard MP-MCMC (Barker), resulting in an average MSE reduction of ca.\ $30 \%$ overall. There is no significant difference between MP-MCMC (Barker) and MP-MCMC (Peskun). However, standard Metropolis-Hastings outperforms all other methods; in comparison with standard MP-MCMC, this corresponds to an average MSE reduction of ca.\ $30 \%$ overall. Thus, although average acceptance rates are significantly increased by using the multiple proposal approach, referring to \cite{calderhead2014general}, the resulting samples are not necessarily more informative about the underlying posterior. However, MP-MCMC still yields the advantage of enabling likelihood computations to be performed in parallel, and may be extended in many ways to further improve performance. \begin{figure}[h] \centering \begin{subfigure}[b]{0.60\textwidth} \includegraphics[width=\textwidth]{./plots_mpmcmc/mse_SingleVsMultiNew.eps} \end{subfigure} \caption{\small{MSE of arithmetic mean for MCMC using Barker's and Peskun's acceptance probabilities, and MP-MCMC, resp., sampling from a one-dimensional standard Normal posterior for increasing proposal numbers and sample sizes. The results are based on $25$ MCMC runs, and the error bars correspond to twice a standard deviation, respectively.
The difference between all convergence rates is only a constant}} \label{fig:mse_SingleVsMulti1} \end{figure} \subsection{Extensions of standard MP-MCMC} \label{subsec:extensions} We now consider the following extensions to MP-MCMC, which can be made to improve sampling performance and which we investigate empirically. \subsubsection{Introducing an auxiliary proposal state} \label{intro_aux_prop_state} When sampling from $A$ as in \eqref{eq:transient_transition}, the probability of transitioning from $I=i$ to $I=j$ may become small when the number of proposals is large. This is because the acceptance ratio $R(i,j)$ depends not only on the states $\vec{y}_i$ and $\vec{y}_j$, but on all proposed states. One may therefore introduce an auxiliary variable $\vec{z}$ as proposed in \cite{calderhead2014general, tjelmeland2004using} in order to make proposals of the form $\tilde{\kappa}(\vec{y}_i, \vec{y}_{\setminus i}) = \kappa(\vec{y}_i,\vec{z})\kappa(\vec{z},\vec{y}_{\setminus i})$. Throughout this work, we make use of this extension for numerical simulations. Assuming that $\kappa$ samples the proposals $\vec{y}_{\setminus i}$ independently of each other, the acceptance ratio simplifies to \begin{align*} R(i,j) = \frac{\pi(\vec{y}_j)\tilde{\kappa}(\vec{y}_j,\vec{y}_{\setminus j})}{\pi(\vec{y}_i)\tilde\kappa(\vec{y}_i,\vec{y}_{\setminus i})} = \frac{\pi(\vec{y}_j)\kappa(\vec{y}_j,\vec{z})\kappa(\vec{z},\vec{y}_i)}{\pi(\vec{y}_i)\kappa(\vec{y}_i,\vec{z})\kappa(\vec{z},\vec{y}_j)}. \end{align*} For a symmetric proposal distribution $\kappa$, the acceptance ratio further simplifies to $R(i,j)={\pi(\vec{y}_j)}/\pi(\vec{y}_i)$.
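For a symmetric kernel, one MP-MCMC iteration with the auxiliary state therefore only requires the target values at the proposed points, since the stationary weights of $I$ reduce to normalised target values. The following is a minimal sketch of such an iteration, using a plain Gaussian random-walk kernel in place of the SmMALA kernel of our experiments; the step size, target and proposal numbers are illustrative only:

```python
import numpy as np

def mp_mcmc_iteration(x, log_pi, n_props, n_accept, step, rng):
    """One MP-MCMC iteration with an auxiliary state z and a symmetric
    Gaussian kernel, for which p(I=i | y) is proportional to pi(y_i)."""
    # auxiliary state z ~ kappa(x, .)
    z = rng.normal(x, step)
    # N new proposals around z; the current state is kept as proposal N+1
    y = np.append(rng.normal(z, step, size=n_props), x)
    # stationary distribution of the auxiliary index I (Barker-type weights)
    logw = log_pi(y)
    w = np.exp(logw - logw.max())
    p = w / w.sum()
    # sample M accepted states from the finite state chain
    idx = rng.choice(n_props + 1, size=n_accept, p=p)
    return y[idx]

rng = np.random.default_rng(0)
log_pi = lambda y: -0.5 * y**2          # standard Normal target
x, samples = 0.0, []
for _ in range(500):
    xs = mp_mcmc_iteration(x, log_pi, n_props=10, n_accept=10, step=1.0, rng=rng)
    samples.extend(xs)
    x = xs[-1]                           # an accepted sample seeds the next iteration
samples = np.asarray(samples)
```

In this symmetric setting no proposal densities need to be evaluated at all, and the $N$ likelihood evaluations inside `log_pi` are the part that parallelises.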
\subsubsection{Non-reversible transition kernels} \label{subsubsec:non_reversible_transition_kernels} \noindent In the context of a finite state Markov chain, \cite{suwa2010markov} and \cite{todo2013geometric} introduce an MCMC method that redefines the transition matrix, allocating the probability of an individual state to the transition probabilities to other states, with the aim of minimising the average rejection rate. In practice, the resulting acceptance rate is close to $1$, or even equal to $1$ in many iterations. The resulting transition matrix is no longer reversible; thus, it does not fulfill the detailed balance condition. However, it still satisfies the balance condition, so that transitioning from one state to another preserves the underlying stationary distribution. The proposed algorithm is immediately applicable to the finite state sampling step in MP-MCMC. The resulting algorithm is a non-reversible MP-MCMC, which preserves the stationary distribution $\pi$ of individual samples. Numerical experiments with this method can be found in Section \ref{subsubsec:empirical_results_extensions}. \subsubsection{Optimised transition kernels} \label{subsubsec:optimised_transition_kernels} \noindent Given a Markov chain over finitely many proposed states, \cite[Section 4]{tjelmeland2004using} proposes an algorithm that iteratively updates the transition matrix of an MCMC algorithm, starting from Barker's acceptance probabilities defined on finitely many states. The matrix is updated until at most one diagonal element is non-zero. Since every update preserves the detailed balance condition, the resulting MCMC algorithm is reversible. Again, this method is straightforward to apply in the finite state sampling step of MP-MCMC, resulting in a reversible MP-MCMC, which clearly leaves the stationary distribution $\pi$ of individual samples unaltered. For a single proposal, this algorithm reduces to the usual Metropolis-Hastings.
We now consider the performance in numerical experiments, comparing this method to MP-MCMC with non-reversible transition kernels from Section \ref{subsubsec:non_reversible_transition_kernels} and to standard MP-MCMC. \subsubsection{Empirical results of MP-MCMC extensions} \label{subsubsec:empirical_results_extensions} In what follows, we compare the performance of standard MP-MCMC to the MP-MCMC algorithms with improved transition kernels introduced in Section \ref{subsubsec:non_reversible_transition_kernels} and Section \ref{subsubsec:optimised_transition_kernels}. As posterior distribution we consider a one-dimensional Gaussian, and for the underlying proposal kernel we use the SmMALA formalism from \eqref{eq:smMALA_kernel}. As measures of performance we consider MSE convergence, acceptance rate and mean squared jumping distance (MSJD) for increasing numbers of proposals. Here, the acceptance rate is defined as the probability of transitioning from one state to any of the $N$ different ones. Further, MSJD $= (1/n) \sum_{\ell =1}^n \| \vec{x}_{\ell+1} - \vec{x}_\ell \|^2$, which is related to the lag $1$ autocorrelation, and can be applied to find the optimal choice of parameters determining the proposal kernel (\cite{pasarica2010adaptively}). Referring to Figure \ref{fig:acpt_msjd_improved_transitions}, switching from MP-MCMC to either of the MP-MCMC algorithms introduced in the previous two sections increases the average acceptance rate and the MSJD for small proposal numbers significantly, and to a similar extent. As the number of proposals increases, the difference from standard MP-MCMC disappears, as all methods tend to the same maximal value; for the acceptance rate, this is $1$.
At the same time, a performance gain in terms of MSE, referring to Figure \ref{fig:mse_nonrev}, is only achieved by switching from the standard MP-MCMC transition kernel to the non-reversible kernel (Section \ref{subsubsec:non_reversible_transition_kernels}), resulting in an average MSE reduction of ca.\ $20 \%$ overall. Applying the optimised transition algorithm (Section \ref{subsubsec:optimised_transition_kernels}) does not significantly reduce the MSE compared to standard MP-MCMC, i.e.\ on average by less than $5 \%$ overall. This is interesting since it makes the point that, although the choice of proposals is exactly the same, an increased acceptance rate does not imply that the resulting samples are significantly more informative about the posterior. In contrast to this observation, we will see in Section \ref{sec:importance_sampling} that more informative estimates, i.e.\ exhibiting lower variance, can indeed be achieved by accepting all proposed samples (i.e.\ acceptance rate $=1$) and weighting them suitably. The number of iterations in the optimisation procedure used to generate the transition kernel from Section \ref{subsubsec:optimised_transition_kernels} increases significantly with the number of proposals, and with it the computational cost. To sample a fixed number of $250$ iterations in our toy example, we found the cost of performing the optimisation prohibitive for more than $\approx 125$ proposals. For reference and comparison, we display the results for proposal numbers only up to this value for standard MP-MCMC and non-reversible MP-MCMC, too. Summarising, we were able to improve the constant in front of the convergence rate by using non-reversible and optimised transition kernels, but not the rate itself.
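The MSJD used above can be computed directly from the stored chain; a small sketch (array shapes illustrative):

```python
import numpy as np

def msjd(chain):
    """Mean squared jumping distance of a chain of shape (n, d):
    the average of || x_{l+1} - x_l ||^2 over successive states."""
    diffs = np.diff(chain, axis=0)
    return np.mean(np.sum(diffs**2, axis=1))

# toy chain with jumps of squared length 1, 0 and 4, so MSJD = 5/3
chain = np.array([[0.0], [1.0], [1.0], [3.0]])
```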
\begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_mpmcmc/Acpt_imprTrans.eps} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_mpmcmc/MSJD_imprTrans.eps} \end{subfigure} \caption{\small{Acceptance rates and MSJD for standard MP-MCMC, MP-MCMC with non-reversible (Non-rev) and optimised transitions (Opt-trans), sampling from a one-dimensional standard Normal posterior for increasing proposal numbers and sample sizes. The results are based on $10$ MCMC runs, and the error bars correspond to twice a standard deviation}} \label{fig:acpt_msjd_improved_transitions} \end{figure} \begin{figure}[h] \centering \begin{subfigure}[b]{0.60\textwidth} \includegraphics[width=\textwidth]{./plots_mpmcmc/MSE_imprTrans.eps} \end{subfigure} \caption{\small{MSE of arithmetic mean for MP-MCMC, MP-MCMC with non-reversible transitions (Non-rev) and MP-MCMC with optimised transitions (Opt-trans), resp., sampling from a one-dimensional standard Normal posterior for increasing proposal numbers and sample sizes. The results are based on $25$ MCMC runs, and the error bars correspond to twice a standard deviation, resp.}} \label{fig:mse_nonrev} \end{figure} \subsection{Limit theorems} \label{subsection:limit_theorems} In this section, the law of large numbers (LLN), a central limit theorem (CLT), and an expression for the asymptotic variance are derived. Essential for the derivation is the observation that MP-MCMC can be considered as a single Markov chain on the product space of variables $(\vec{y}_{1:N+1},I_{1:M})$, i.e.\ of proposals and auxiliary variables, when $N$ proposals are generated and $M$ samples are accepted in every iteration, respectively. Here, $\vec{y}_i\in\Omega$ and $I_m \in\{1,...,N+1\}$ for $i=1,...,N+1$ and $m=1,...,M$.
The joint probability on the associated space is given by \begin{align*} p(\vec{y}_{1:N+1},I_{1:M}) &= p(\vec{y}_{i}) \kappa(\vec{y}_{i}, \vec{y}_{\setminus{i}}) p(I_{1:M} |\vec{y}_{1:N+1}) \end{align*} for any $i=1,...,N+1$, where $\kappa(\vec{y}_i,\vec{y}_{\setminus{i}})$ denotes the proposal distribution. If ergodicity holds, then $p(\vec{y}_i)=\pi(\vec{y}_i)$ asymptotically, i.e.\ the samples collected at each iteration will be distributed according to the target. In that case, updating $(\vec{y}_{1:N+1},I_{1:M})$ leaves the joint distribution invariant. Throughout this section, we state the main results and definitions while referring to the appendix for full proofs, to aid readability. \subsubsection{Law of large numbers} \label{subsubsec:lln} Given a scalar-valued function $f$ on $\Omega$, and samples $\vec{x}_1, \vec{x}_2, ...$ of an MP-MCMC simulation according to Algorithm \ref{algorithm:multiproposal_MH}, we wish to prove that $\hat\mu_{n}\rightarrow \mu$ a.s., where $\hat\mu_{n} = (1/n) \sum_{i} f(\vec{x}_i)$ and $\mu = \int f(\vec{x}) \pi(\vec{x}) \mathrm{d}\vec{x}$. This is equivalent to $\hat\mu_{n,M,N}\rightarrow\mu$ a.s.\ for $n\rightarrow \infty$, where \begin{align} \hat\mu_{n,M,N} = \frac{1}{nM} \sum_{i=1}^n\sum_{m=1}^{M} f(\vec{x}_m^{(i)}), \label{eq:lln_mp_mcmc_inner_and_outer_iterations} \end{align} and $\vec{x}_m^{(i)}$ denotes the $m$th sample in the $i$th MCMC iteration. \begin{Lemma}[Law of Large Numbers] \label{lemma:lln_mp_mcmc} Assuming that the single Markov chain on the product space of accepted samples per iteration, as defined by MP-MCMC, is positive Harris \cite[10.1]{meyn2012markov}, the law of large numbers holds true. \begin{proof} A proof is given in Appendix \ref{appendix:proof_lemma_lln_mp_mcmc}.
\end{proof} \end{Lemma} \subsubsection{Central limit theorem} \noindent The result we would like to have is the following: given a scalar-valued function $f$ on $\Omega$, and samples $\vec{x}_1, \vec{x}_2, ...$ of an MP-MCMC simulation, then \begin{align*} \sqrt{n}\left(\hat\mu_n - \mu \right) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\sigma^2), \end{align*} for some $\sigma^2>0$, which is equivalent to \begin{align*} \sqrt{nM}\left(\hat\mu_{n,M,N} - \mu \right) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\sigma^2) \quad \text{for} \quad n\rightarrow\infty, \end{align*} where we use the same notation as in Section \ref{subsubsec:lln}. In order to prove the CLT we assume that the chain is positive Harris, and that the asymptotic variance is well-defined and positive and can be expressed as the limit of the variances at iteration $n$ for $n \rightarrow \infty$. Since this seems natural to assume, we refer to this assumption by saying that the MP-MCMC Markov chain is \textit{well-behaved}. Since this assumption is not easily verifiable in practice, we also give a formal definition of a uniform ergodicity condition on the Markov chain from \cite{meyn2012markov} that ensures the above. For the sake of readability, we refer to Appendix \ref{appendix:proof_lemma_ctl_mp_mcmc} for this formal condition.
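In practice, the asymptotic variance in such a CLT is typically estimated empirically rather than via the covariance terms directly; a minimal sketch using batch means, which is an illustrative choice here and not part of our analysis:

```python
import numpy as np

def batch_means_var(samples, n_batches=50):
    """Batch-means estimate of the asymptotic variance sigma^2 in a CLT,
    i.e. of lim_n n * Var(mean of the first n samples)."""
    samples = np.asarray(samples)
    n = len(samples) // n_batches * n_batches   # drop the incomplete tail
    batches = samples[:n].reshape(n_batches, -1)
    b = batches.shape[1]                        # batch length
    return b * np.var(batches.mean(axis=1), ddof=1)

rng = np.random.default_rng(1)
iid = rng.normal(size=20000)   # for IID N(0,1) samples, sigma^2 = 1
```

For correlated MCMC output the batch length must be large relative to the autocorrelation time for the estimate to be reliable.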
\begin{Lemma}[Central Limit Theorem] \label{lemma:clt_mp_mcmc} Assuming that the single Markov chain on the product space of accepted samples per iteration, as defined by MP-MCMC, is positive Harris and \textit{well-behaved}, then the central limit theorem holds true, and the asymptotic variance of the sequence $(f(\vec{x}_i))_{i \ge 1}$ is given by \begin{align*} \sigma^2 = \zeta^{(0)} + 2\sum_{1\le \ell<m \le M} \zeta^{(0)}_{\ell,m} + \frac{2}{M}\sum_{k=1}^\infty \sum_{\ell,m=1}^{M} \zeta^{(k)}_{\ell,m} \end{align*} where $\zeta^{(0)}=\operatorname{Var}_{\pi}(f(\vec{x}))$, $\zeta_{\ell,m}^{(0)} = \operatorname{Cov}(f(\vec{x}_\ell^{(i)}),f(\vec{x}_{m}^{(i)}))$, and $\zeta^{(k)}_{\ell,m} = \operatorname{Cov}(f(\vec{x}_\ell^{(i)}), f(\vec{x}_m^{(i+k)}))$ for any $ i,k \in \mathbb{N}$ and $\ell, m=1,...,M$. \begin{proof} For a proof, we refer to Appendix \ref{appendix:proof_lemma_ctl_mp_mcmc}. \end{proof} \end{Lemma} \subsection{Adaptive MP-MCMC} \label{subsec:adaptive_mpmcmc} We now introduce adaptive versions of MP-MCMC within a general framework of adaptivity for Markov chains, and present theory based on \cite{roberts2007coupling} that allows us to prove ergodicity of adaptive MP-MCMC. Further, an explicit adaptive MP-MCMC method is introduced as Algorithm \ref{algorithm:adaptive_mp_mcmc} in Section \ref{subsubsec:an_adaptive_mpmcmc_algorithm}, for which we prove ergodicity based on the results mentioned above. The performance of adaptive MP-MCMC compared to standard MP-MCMC is then investigated in a simulation study. \subsubsection{Adaptive transition kernels in MP-MCMC} In the following we consider MP-MCMC as a single Markov chain over the accepted samples in one iteration. For any $n\in \mathbb{N}$, let $\vec{Z}_n = \vec{x}_{1:M}^{(n)} \in \mathbb{R}^{Md}$ denote the state of the MP-MCMC algorithm at time $n$, i.e.\ the vector of accepted samples in iteration $n$.
Further let $\Gamma_n$ denote a $\mathcal{Y}$-valued random variable which determines the kernel choice for updating $\vec{Z}_n$ to $\vec{Z}_{n+1}$. We have \begin{align*} p(\vec{Z}_{n+1} =\vec{z}_{n+1}| \vec{Z}_i=\vec{z}_i, \Gamma_i=\gamma_i, i=1,...,n ) &= p(\vec{Z}_{n+1} =\vec{z}_{n+1}| \vec{Z}_n=\vec{z}_{n}, \Gamma_n=\gamma_n) \\ &= P_{\gamma_n}(\vec{z}_n,\vec{z}_{n+1}). \end{align*} The dependency of $\Gamma_n$ on previous samples and kernel choices, i.e.\ $\Gamma_n|\vec{Z}_i, \Gamma_i, i=1,...,n$, is determined by the corresponding adaptive algorithm. For given starting values $\vec{Z}_0=\vec{z}, \Gamma_0=\gamma$, let \begin{align*} P^{(n)}_{\gamma} (\vec{z}, \vec{z}_n) = p(\vec{Z}_n=\vec{z}_n| \vec{Z}_0=\vec{z}, \Gamma_0=\gamma) \end{align*} denote the corresponding $n$-step conditional probability density of the associated adaptive algorithm. Finally, let us define the total variation distance between the joint distribution $p$ of the variables $\vec{z}=\vec{x}_{1:M}$ and $P^{(n)}_{\gamma}$ by \begin{align*} T(\vec{z},\gamma, n) = \left\| P^{(n)}_{\gamma}(\vec{z},\cdot) - p(\cdot) \right\| = \sup_{B} \left|\int_B P^{(n)}_{\gamma}(\vec{z},\vec{\tilde{z}}) - p(\vec{\tilde{z}})\mathrm{d}\vec{\tilde{z}} \right|. \end{align*} Note that under the joint distribution $p$, all individual samples from every iteration are distributed according to the target $\pi$. \subsubsection{Ergodicity for adaptive chains} Following \cite{roberts2007coupling}, we call the underlying adaptive algorithm ergodic if $T(\vec{z},\gamma, n)\rightarrow 0$ for $n\rightarrow \infty$. Referring to their Theorem 1 and Theorem 2, the following results give sufficient conditions under which ergodicity holds. A central requirement in both theorems is that the changes of the transition kernel due to adaptation tend to zero. To that end, we define the random variable \begin{align*} D_n = D_n(\Gamma_{n+1},\Gamma_n) = \sup_{\vec{z}}\| P_{\Gamma_{n+1}}(\vec{z},\cdot) - P_{\Gamma_n}(\vec{z}, \cdot)\|.
\end{align*} Note that $D_n \rightarrow 0$ for $n\rightarrow \infty$ does not mean that $\Gamma_n$ necessarily converges. Also, the total amount of adaptation is allowed to be infinite, i.e.\ $\sum_n D_n = \infty$. We note that there exist more general adaptive schemes in the literature (\cite{atchad2011adaptive}) which allow different transition kernels $P_\gamma$ to have different stationary distributions; however, we do not consider these here. \begin{theorem}[Theorem 1, \cite{roberts2007coupling}] \label{thm:theorem1roberts2007} Given an adaptive MP-MCMC algorithm with adaptation space $\mathcal{Y}$ such that $p$ is the stationary distribution for every transition kernel $P_\gamma$, $\gamma \in \mathcal{Y}$. If \begin{itemize} \item (Simultaneous Uniform Ergodicity) for any $\varepsilon>0$ there is $N=N(\varepsilon)\in \mathbb{N}$ such that $\| P^{(N)}_\gamma(\vec{z},\cdot) - p(\cdot) \|\le \varepsilon$ for any $\vec{z}$ and $\gamma\in \mathcal{Y}$; and \item (Diminishing Adaptation) $D_n\rightarrow 0$ for $n \rightarrow \infty$ in probability, \end{itemize} then the adaptive algorithm is ergodic. \end{theorem} The first condition in the previous result is relatively strong, and might not always be verifiable in practice. However, ergodicity still holds if the uniform convergence of $P_\gamma$ is relaxed to the following containment condition: roughly speaking, for given starting values of $\vec{Z}_0$ and $\Gamma_0$, and sufficiently large $n$, $P^n_\gamma$ is close to $p$ with high probability. To formalise the containment condition, let us define for any $\epsilon>0$, $\vec{z}\in \mathbb{R}^{Md}$ and $\gamma \in \mathcal{Y}$, \begin{align*} M_\epsilon (\vec{z},\gamma) = \inf \{ n\ge 1: \| P_\gamma^n(\vec{z},\cdot) - p(\cdot) \| \le \epsilon \}, \end{align*} and state the following ergodicity result.
\begin{theorem}[Theorem 2, \cite{roberts2007coupling}] \label{thm:theorem2roberts2007} Consider an adaptive MP-MCMC algorithm that has diminishing adaptation, and let $\vec{z}^* \in \Omega^M, \gamma^* \in \mathcal{Y}$. If \begin{itemize} \item (Containment) for any $\delta>0$ there exists an $N\in \mathbb{N}$ such that $P(M_\epsilon (\vec{Z}_n,\Gamma_n) \le N| \vec{Z}_0=\vec{z}^*, \Gamma_0=\gamma^*)\ge 1-\delta$ for any $n \in \mathbb{N}$, \end{itemize} then the adaptive algorithm is ergodic. \end{theorem} The proofs of both previous theorems are based on coupling the adaptive chain with another chain that is adaptive only up to a certain iteration. Referring to \cite{bai2011containment}, containment can be derived via the much easier to verify conditions of simultaneous polynomial ergodicity or simultaneous geometric ergodicity; the latter immediately implies containment. \begin{Lemma}[Asymptotic distribution of adaptive MP-MCMC] Suppose that the conditions of either Theorem \ref{thm:theorem1roberts2007} or Theorem \ref{thm:theorem2roberts2007} are satisfied. Then the accepted samples $\vec{x}^{(n)}_m\in \Omega$, i.e.\ the $m$th sample from the $n$th iteration for $n\in \mathbb{N}$ and $m=1,...,M$, given by $\vec{y}^{(n)}_{I_m^{(n)}}=\vec{x}^{(n)}_m$, are asymptotically distributed according to the target $\pi$. \begin{proof} As the asymptotic behaviour of the Markov chain, defined on the states $\vec{x}_{1:M}$, is not influenced by the initial distribution, we may assume the joint target $p$ as initial distribution. The statement then follows immediately. \end{proof} \end{Lemma} \subsubsection{An adaptive MP-MCMC algorithm} \label{subsubsec:an_adaptive_mpmcmc_algorithm} In this section, we consider an adaptive version of the MP-MCMC algorithm, which allows for iterative updates of the proposal covariance. The underlying proposal distribution is formulated in a general fashion; however, ergodicity will be proven for the special case of a Normal distribution. In that case, the resulting algorithm can be considered as a generalisation of the adaptive MCMC algorithm introduced by Haario et al.\ \cite{haario2001adaptive}, allowing for multiple proposals. However, a different covariance estimator than the standard empirical covariance used in \cite{haario2001adaptive} is applied here: the estimate for the proposal covariance in iteration $n+1$ incorporates information from all previous proposals of iterations $1,2,..., n$. The proposals are thereby weighted according to the stationary distribution of the auxiliary variable. Note that weighting proposed states does not necessarily decrease the asymptotic variance of the resulting mean estimates (\cite{delmas2009does}), although in many cases it does (\cite{frenkel2006waste,ceperley1977monte}). This holds in particular when the number of proposed states is large, which is why we find the weighting estimator preferable to the standard empirical covariance estimate. The resulting method is displayed as Algorithm \ref{algorithm:adaptive_mp_mcmc}, where we have highlighted the differences compared to Algorithm \ref{algorithm:multiproposal_MH}.
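The weighted recursive mean and covariance updates of Algorithm \ref{algorithm:adaptive_mp_mcmc} can be sketched in isolation as follows, with a weight vector $w$ standing in for the stationary distribution $p(I=i|\vec{y}_{1:N+1},\Sigma_\ell)$; all names are illustrative:

```python
import numpy as np

def adapt_mean_cov(mu, cov, y, w, ell):
    """One step of the weighted recursive adaptation:
    mu_{l+1} = mu_l + (mu_tilde - mu_l)/(l+1), and analogously for Sigma,
    where mu_tilde and Sigma_tilde are weighted over all N+1 proposals."""
    mu_t = w @ y                                   # weighted proposal mean
    mu_new = mu + (mu_t - mu) / (ell + 1)
    centred = y - mu_new                           # centre at the updated mean
    cov_t = (w[:, None] * centred).T @ centred     # weighted covariance
    cov_new = cov + (cov_t - cov) / (ell + 1)
    return mu_new, cov_new

# equal weights over two symmetric points leave the mean estimate at zero
y = np.array([[-1.0], [1.0]])
w = np.array([0.5, 0.5])
mu, cov = adapt_mean_cov(np.zeros(1), np.eye(1), y, w, ell=1)
```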
\begin{algorithm}[h] \SetAlgoLined \KwIn{Initialise starting point $\vec{x}_0=\vec{y}_1\in\Omega$, number of proposals $N$, number of accepted samples per iteration $M$, auxiliary variable $I=1$, counter $n=1$, initial \hl{mean estimate $\vec{\mu}_1$ and covariance estimate $\Sigma_1$}\;} \For{\textnormal{each MCMC iteration $\ell=1,2,...$}}{ Sample $\vec{y}_{\setminus I}$ conditioned on $I$ \hl{and $\Sigma_\ell$}, i.e., draw $N$ new points from the proposal kernel $\kappa_{\text{\hl{$\Sigma_\ell$}}}(\vec{y}_I, \cdot) = p(\vec{y}_{\setminus I}|\vec{y}_I, $\hl{$\Sigma_\ell$}$)$ \; Calculate the stationary distribution of $I$ conditioned on $\vec{y}_{1:N+1}$ \hl{and $\Sigma_\ell$}, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|\vec{y}_{1:N+1}, $\text{\hl{$\Sigma_\ell$}}$) = \pi(\vec{y}_i)\kappa_{\text{\hl{$\Sigma_\ell$}}}(\vec{y}_{{i}}, \vec{y}_{\setminus{i}}) / \sum_j \pi(\vec{y}_j)\kappa_{\text{\hl{$\Sigma_\ell$}}}(\vec{y}_{{j}}, \vec{y}_{\setminus{j}})$, which can be done in parallel\; \For{$m=1,...,M$}{ Sample new $I$ via the stationary distribution $p(\cdot|\vec{y}_{1:N+1},$\hl{$\Sigma_\ell$}$)$\; Set new sample $\vec{x}_{n+m} = \vec{y}_I$\; } Update counter $n=n+M$ \; \hl{Compute $\tilde{\vec{\mu}}_{\ell+1}=\sum_{i} p(I=i| {{\vec{y}}}_{1:N+1}, \Sigma_\ell)\vec{y}_i$}\; \hl{Set $\vec{\mu}_{\ell+1} = \vec{\mu}_{\ell} + \frac{1}{\ell+1}\left(\tilde{\vec{\mu}}_{\ell+1} - \vec{\mu}_{\ell}\right)$}\; \hl{Compute $\tilde{{\Sigma}}_{\ell+1} = \sum_{i}p(I=i| {\vec{y}}_{1:N+1}, {{\Sigma}}_\ell)[{\vec{y}}_i-\vec{\mu}_{\ell+1}][{\vec{y}}_i-\vec{\mu}_{\ell+1}]^T$}\; \hl{Set $\Sigma_{\ell+1} = \Sigma_{\ell} + \frac{1}{\ell+1}(\tilde{\Sigma}_{\ell+1} - \Sigma_{\ell})$}\; } \caption{Adaptive MP-MCMC \newline All code altered compared to original MP-MCMC, Algorithm \ref{algorithm:multiproposal_MH}, is highlighted} \label{algorithm:adaptive_mp_mcmc} \end{algorithm} \vspace{4mm} \noindent \textbf{Ergodicity} \vspace{1.5mm} \noindent In what follows we prove ergodicity of the underlying 
adaptive MP-MCMC method with $\kappa_\gamma = N_\gamma$, i.e.\ the proposal distribution being normally distributed, based on the sufficient conditions provided by Theorem \ref{thm:theorem2roberts2007}. We prove three different ergodicity results, each based on slightly different requirements. We begin with the case when proposals are sampled independently of previous samples. In all cases we assume the target $\pi$ to be absolutely continuous with respect to the Lebesgue measure. Further, we say that $\mathcal{Y}$ is bounded if there are $0<c_1<c_2<\infty$ such that $c_1 I \le \gamma \le c_2I$ for any $\gamma \in \mathcal{Y}$, where ``$\le$'' is understood in the matrix sense: for two matrices $A,B \in \mathbb{R}^{n\times n}$, $A\le B$ means that $B-A$ is positive semi-definite. \begin{theorem} \label{thm:ergodicity_adap_mpmcmc_independent} Let us assume that the proposal distribution $\kappa_\gamma=N_{\gamma}, \gamma \in \mathcal{Y}$, depends on previous samples only through the parameter $\gamma$ but is otherwise independent of them. If $\mathcal{Y}$ is bounded, then the adaptive MP-MCMC method described by Algorithm \ref{algorithm:adaptive_mp_mcmc} is ergodic. \begin{proof} The proof is based on Theorem \ref{thm:theorem2roberts2007}, and can be found in Appendix \ref{subsec:proof_ergodicity_adapt_mpmcmc_independent}. \end{proof} \end{theorem} The following result allows the mean of the Normal distribution to be adapted in addition to its covariance. The mean value is estimated via weighted proposals, as defined in equation \eqref{eq:def_importance_sampling_estimator}. \begin{corollary} \label{cor:ergodicity_adapt_mpmcmc_independent_mean} Let us assume that the proposal distribution $\kappa_\gamma= N_{\gamma}, \gamma=(\vec{\mu}, \Sigma) \in \mathcal{Y}$, depends on previous samples only through the parameter $\gamma$ but is otherwise independent of them.
If $\mathcal{Y}$ is bounded, i.e.\ both the mean and covariance estimates are bounded, then the adaptive MP-MCMC method described by Algorithm \ref{algorithm:adaptive_mp_mcmc} is ergodic. \begin{proof} The containment condition follows analogously to the proof of Theorem \ref{thm:ergodicity_adap_mpmcmc_independent}. The proof of diminishing adaptation requires integration of $N_{\gamma_n+s(\gamma_{n+1}-\gamma_n)}$, where $\gamma_n = (\vec{\mu}_n, \Sigma_n)$, similar to equation \eqref{eq:upper_bound_dim_adapt}. An estimate similar to \eqref{eq:exp_term_estimate_dim_adapt2} follows, where the bound is given by additional constant terms multiplied by either $\|\vec{\mu}_{n+1}- \vec{\mu}_n\|$ or $\|\Sigma_{n+1}- \Sigma_n\|$. Both terms can be bounded by a constant multiplied by $1/n\rightarrow 0$ for $n\rightarrow \infty$, which concludes the proof. \end{proof} \end{corollary} Now we consider the case where the target $\pi$ has bounded support, i.e.\ there is $S\subset \mathbb{R}^d$ with $\lambda(S)<\infty$ such that $\pi(\vec{z})=0$ for any $\vec{z} \not\in S$. The dependence of the proposal distribution on previous samples is no longer restricted to the adaptation parameters alone. \begin{theorem} \label{thm:ergodicity_adapt_mpmcmc_bounded} If $\kappa_\gamma=N_{\gamma}, \gamma \in \mathcal{Y}$, and the target distribution $\pi$ has bounded support in $\mathbb{R}^d$ and its density is continuous, then the adaptive MP-MCMC method described by Algorithm \ref{algorithm:adaptive_mp_mcmc} is ergodic. \begin{proof} The proof is again based on Theorem \ref{thm:theorem2roberts2007}, and can be found in Appendix \ref{subsec:ergodicity_adapt_mpmcmc_bounded}.
\end{proof} \end{theorem} \begin{corollary} Under the same conditions as in Theorem \ref{thm:ergodicity_adapt_mpmcmc_bounded}, except that we do not assume continuity, the statement still holds true if we instead assume that $\pi$ is bounded from above and below on its support, i.e.\ $\exists \ 0<\eta \le \rho < \infty$ such that \begin{align} \eta \le \pi(\vec{x}) \le \rho \label{eq:boundedness_target_support} \end{align} for any $\vec{x}$ in the support of $\pi$. \begin{proof} The statement follows similarly to the proof of Theorem \ref{thm:ergodicity_adapt_mpmcmc_bounded}, except that the boundedness of the transition probability on the finite state chain in \eqref{eq:boundness_finite_chain_transition_probability} is ensured by \eqref{eq:boundedness_target_support} instead of by continuity. \end{proof} \end{corollary} In the following case, we drop both the independence and the bounded support conditions; however, we require a few other assumptions to show ergodicity in Theorem \ref{thm:ergodicity_adaptive_mpmcmc_positive}, among which is the positivity of the target distribution. Therefore, Theorem \ref{thm:ergodicity_adapt_mpmcmc_bounded} and Theorem \ref{thm:ergodicity_adaptive_mpmcmc_positive} can be considered as complementing each other. The containment condition is generally hard to prove in practice without further assumptions, and for the case of unbounded support of a dependent proposal kernel we have not managed to achieve such a direct proof. Hence, we turn to \cite{craiu2015stability}, which states sufficient and practically verifiable assumptions under which the rather technical containment condition holds true. More precisely, it is assumed that the underlying Markov chain can only jump within a finite distance of any current sample. Furthermore, the transition kernel is only adapted within a compact region of the state space.
Outside of this region, a fixed proposal kernel is used, which defines a chain that converges to the correct stationary distribution. In what follows, $K_D$ is the set of all states within a Euclidean distance $D$ from $K$, for any bounded set $K \subset \Omega^M$. \begin{assumption}[Bounded jump condition] \label{assumption:bounded_jump_condition} Assume that there is a $D< \infty$ such that \begin{align} P_\gamma(\tilde{\vec{z}}, \{ \vec{z}\in \Omega^M: \|\vec{z} - \tilde{\vec{z}} \| \le D \}) = 1 \quad \forall \ \tilde{\vec{z}}\in \Omega^M, \gamma \in \mathcal{Y}. \label{eq:bounded_jump_condition} \end{align} \end{assumption} \begin{assumption}[Non-adaptive kernel condition] \label{assumption:adaptation_within_compact} Assume that there is a bounded $K\subset \Omega^M$ such that \begin{align} P_\gamma(\vec{z}, B) = P(\vec{z}, B) \quad \forall \ B\in \mathcal{B}(\Omega^M), \vec{z}\in \Omega^M\setminus{K}, \label{eq:no_adaptation_outside_compact} \end{align} for some fixed transition kernel $P$ defining a chain that converges to the correct stationary distribution $p$ on $\Omega^M$ in total variation for any initial point. Further, it is assumed that \begin{align} \exists \ C<\infty \quad \text{such that} \quad P(\tilde{\vec{z}}, \mathrm{d}\vec{z}) \le C \lambda(\mathrm{d}\vec{z}), \label{eq:boundedness_above_transition_kernel} \end{align} for any $\tilde{\vec{z}}\in K_D\setminus{K}$ and any $\vec{z} \in K_{2D} \setminus{K_D}$, where $D$ is as in Assumption \ref{assumption:bounded_jump_condition}. Moreover, there are $\varepsilon, \delta >0$ such that \begin{align} P(\tilde{\vec{z}}, \mathrm{d}\vec{z}) \ge \varepsilon \lambda(\mathrm{d}\vec{z}) \label{eq:boundedness_below_transition_kernel} \end{align} for any $\tilde{\vec{z}}, \vec{z}$ with $\| \tilde{\vec{z}} - \vec{z} \|< \delta$ in some bounded rectangle contained in $\Omega^M$ that contains $K_{2D}\setminus{K_D}$.
\end{assumption} We remark that the conditions from equations \eqref{eq:bounded_jump_condition} and \eqref{eq:no_adaptation_outside_compact} are easily enforced upon Algorithm \ref{algorithm:adaptive_mp_mcmc} by making the following changes to the algorithm: if a proposal is generated that does not satisfy the first equation, simply discard it and sample a new proposal, repeating until the condition is satisfied. In order to ensure the second assumption, set the proposal distribution to a fixed Gaussian distribution outside of the set $K$. One easily verifies that the remaining conditions \eqref{eq:boundedness_above_transition_kernel} and \eqref{eq:boundedness_below_transition_kernel} are then also satisfied. \begin{theorem}[Ergodicity of adaptive MP-MCMC] \label{thm:ergodicity_adaptive_mpmcmc_positive} Let the conditions from Assumption \ref{assumption:bounded_jump_condition} and Assumption \ref{assumption:adaptation_within_compact} be satisfied. Further, let $\pi$ be continuous and positive, $\Omega \subset \mathbb{R}^d$ open, and $\mathcal{Y}$ be bounded. Then, the adaptive MP-MCMC method described by Algorithm \ref{algorithm:adaptive_mp_mcmc} is ergodic. \begin{proof} For a proof, we refer to Appendix \ref{subsec:ergodicity_adapt_mpmcmc_positive}. \end{proof} \end{theorem} \subsubsection{Adaptive MP-MCMC with non-Gaussian proposals} The question arises as to what theoretical guarantees hold if the underlying proposal distribution is not Gaussian, in contrast to the case of Algorithm \ref{algorithm:adaptive_mp_mcmc}. As before, we may proceed by proving diminishing adaptation and containment. According to \cite{craiu2015stability}, the first condition, which essentially says that the changes in the process become smaller and smaller as time goes by, is typically simple to achieve by carefully choosing the proposal distribution and designing the adaptation. As pointed out above, the second condition is hard to prove directly without further assumptions. 
However, imposing the two further conditions of Assumptions \ref{assumption:bounded_jump_condition} and \ref{assumption:adaptation_within_compact} on the transition kernel ensures containment. Both conditions are closely related to the choice of the proposal distribution and the design of the adaptation, both of which are in the hands of the user. This suggests that for algorithms that apply a different adaptation than Algorithm \ref{algorithm:adaptive_mp_mcmc} and use non-Gaussian proposals, similar results to those in the last section can be achieved. \section{Pseudo-random MP-MCMC with Importance Sampling} \label{sec:importance_sampling} In MP-MCMC, samples are drawn from a Markov chain defined on all $N+1$ proposals in each iteration. These samples are in turn typically used to compute some quantity of interest, which can be expressed as an integral with respect to the target. The same can be achieved by weighting all proposed states from each iteration appropriately, without any sampling on the finite state chain. 
Before explaining the details of the resulting method, we state an intuitive motivation for why we should make use of weighting. \subsection{Introduction and motivation} We start by arguing that increasing the number of accepted samples per iteration while keeping the number of proposals and the number of outer iterations constant is typically beneficial in terms of reducing the empirical variance of estimates. In order to see this, note that for the variance $\sigma_{n,M,N}^2$ of the mean estimator $\hat{\mu}_{n,M,N}$ for $n,M,N\in\mathbb{N}$ it holds that \begin{align*} \sigma_{n,M,N}^2 &= \frac{1}{n}\sum_{i=1}^n \operatorname{Var} \left( F\left( \vec{x}_{1:M}^{(i)} \right) \right)\\ &\quad\quad+ \frac{2}{n} \sum_{1\le i<j\le n} \operatorname{Cov} \left( F \left( \vec{x}_{1:M}^{(i)} \right), F\left( \vec{x}_{1:M}^{(j)} \right) \right), \end{align*} where $\vec{x}_{1:M}^{(i)}$ denotes the set of $M$ accepted samples in the $i$th iteration, and $F$ is defined as \begin{align} F(\vec{x}_{1:M}) = \frac{1}{M} \sum_{m=1}^{M} f(\vec{x}_{m}). \label{eq:definition_capital_F} \end{align} Further, \begin{align*} \operatorname{Var} \left( F \left( \vec{x}_{1:M}^{(i)} \right) \right) = \frac{1}{M}\operatorname{Var}\left(f(\vec{x}^{(i)})\right) + \frac{1}{M^2}\sum_{\ell,m=1}^M \operatorname{Cov}\left(f(\vec{x}^{(i)}_\ell), f(\vec{x}^{(i)}_m) \right). \end{align*} Clearly, the first term decreases with increasing $M$. For the second term, note that for sufficiently large $M$, the relative frequency of accepting a proposal $\vec{y}_\ell^{(i)}$ as a sample among all $N+1$ proposals of the $i$th iteration is approximately equal to the stationary distribution $p(I=\ell|\vec{y}_{1:N+1}^{(i)})$. 
Therefore, \begin{align} \begin{split} \frac{1}{M^2}\sum_{\ell,m=1}^M \operatorname{Cov}\left(f(\vec{x}^{(i)}_\ell), f(\vec{x}^{(i)}_m) \right) &\approx \sum_{\ell,m=1}^{N+1} p(I=\ell|\vec{y}^{(i)}_{1:N+1}) p(I=m|\vec{y}^{(i)}_{1:N+1}) \\ &\quad\quad \cdot \operatorname{Cov}\left(f(\vec{y}_\ell^{(i)}), f(\vec{y}_m^{(i)}) \right), \end{split} \label{eq:motivation_is_cov1} \end{align} which does not depend on $M$. Similarly, \begin{align} \operatorname{Cov}\left( F\left( \vec{x}_{1:M}^{(i)} \right), F\left( \vec{x}_{1:M}^{(j)} \right) \right) &= \frac{1}{M^2} \sum_{\ell,m=1}^M \operatorname{Cov}\left(f(\vec{x}_\ell^{(i)}), f(\vec{x}_m^{(j)}) \right) \nonumber\\ \begin{split} &\approx \sum_{\ell, m =1}^{N+1}p(I=\ell|\vec{y}^{(i)}_{1:N+1}) p(I=m|\vec{y}^{(j)}_{1:N+1})\\ &\quad\quad \cdot \operatorname{Cov}\left(f(\vec{y}_\ell^{(i)}), f(\vec{y}_m^{(j)}) \right), \end{split} \label{eq:motivation_is_cov2} \end{align} for sufficiently large $M$, which again is independent of $M$. In summary, we have \begin{align*} \sigma_{n,M,N}^2 \gtrsim \sigma_{n,M',N}^2 \quad\quad \text{for} \quad M'>M. \end{align*} An increase of $M$ for a given iteration and increasing proposal numbers $N$ has been investigated numerically: for any $N$, we analysed the empirical variance of a mean estimate for $M\in\{2^\alpha N: \alpha=0,...,4\}$. The case of $\alpha = \infty$ corresponds to MP-MCMC with importance sampling (IS-MP-MCMC), described by Algorithm \ref{algorithm:importance_sampling_mp_mcmc}. The underlying target is a one-dimensional Gaussian distribution, and we set the proposal sampler to the SmMALA kernel defined in \eqref{eq:smMALA_kernel}. The corresponding results are displayed in Figure \ref{fig:mp-mcmc_varying_m}. Indeed, an increase in $M$ yields a decrease in variance as expected. At the same time, the magnitude of the reduction in variance also decreases with increasing $M$. 
The limiting case, which corresponds to accepting all proposals and then suitably weighting them, exhibits the lowest variance. In some sense, this contrasts with the general observation that an increased acceptance rate in MCMC does not necessarily produce more informative estimates. \begin{figure}[h] \centering \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc/acc_more_10runsruns_new_new_largerfont1.eps} \label{a} \end{subfigure} \caption{Variance convergence of the arithmetic mean from MP-MCMC with Riemannian SmMALA proposals on a one-dimensional standard Normal posterior for increasing proposal numbers $N$ and varying numbers of accepted proposals per iteration $M$. Results based on $10$ MCMC simulations. The error bars correspond to three times a standard deviation} \label{fig:mp-mcmc_varying_m} \end{figure} In the limiting case, $M\rightarrow \infty$, the two approximations in \eqref{eq:motivation_is_cov1}-\eqref{eq:motivation_is_cov2} become equalities, and sampling from the finite state chain in one iteration corresponds in principle to accepting all proposals $\vec{y}_{1:N+1}$ but weighting each $\vec{y}_i$ according to $p(I=i|\vec{y}_{1:N+1})$. This can be formalised as an importance sampling approach for MCMC with multiple proposals. A visualisation of this method is given in Figure \ref{fig:importance_sampling_visualisation}. Due to the considerations above, this approach typically produces a smaller variance than the standard MP-MCMC, where $p(I|\vec{y}_{1:N+1})$ is used to sample from the finite state chain in every iteration. 
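The variance reduction from increasing $M$ can be reproduced in a toy setting: for a fixed set of proposal values and stationary weights, the average of $M$ categorical draws from $p(I|\vec{y}_{1:N+1})$ converges to the exact weighted sum as $M$ grows, with variance shrinking like $1/M$. The values and weights below are purely illustrative, not taken from the experiment above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "proposal" values f(y_i) and stationary weights p(I = i | y_{1:N+1})
f_vals = np.array([-1.0, 0.5, 2.0, 3.5])
p = np.array([0.1, 0.4, 0.3, 0.2])

def estimate(M, reps=2000):
    """One-iteration estimates: averages of M categorical draws from p."""
    draws = rng.choice(f_vals, size=(reps, M), p=p)
    return draws.mean(axis=1)

exact = float(f_vals @ p)            # importance-sampling limit M -> infinity
var_small = estimate(M=4).var()      # roughly Var(f) / 4
var_large = estimate(M=64).var()     # roughly Var(f) / 64, hence smaller
```

Here `var_large < var_small`, while both sets of estimates centre on the exact weighted sum, mirroring the behaviour seen in Figure \ref{fig:mp-mcmc_varying_m}.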
\begin{figure}% \centering \begin{tikzpicture}[scale=.3, font=\sffamily, dot/.style = {state, fill=gray!20!white, line width=0.1mm, inner sep=1pt, minimum size=0.5pt, minimum width=0.02cm}, >=triangle 45] \draw[rotate=45, black!50, line width=0.1mm] \boundellipse{0,1}{10}{4}; \draw[rotate=45, black!50, line width=0.1mm] \boundellipse{0.3,1.3}{7.5}{3}; \draw[rotate=45, black!50, line width=0.1mm] \boundellipse{0.6,1.6}{5}{2}; \draw[rotate=45, black!50, line width=0.1mm] \boundellipse{0.8,1.8}{2.5}{1}; \node[dot, minimum size=5pt, rectangle, label={[label distance=-7 pt]-80:{\small ${y}^{(k)}_1$}}] at (4.5, -4) (x1l0) {}; \node[dot, minimum size=1.2pt, label={[label distance=-7pt]-60:{\small ${y}^{(k)}_2$}}] at (3.5, -8.0) (x2l0) {}; \node[dot, minimum size=1.2pt, label={[label distance=-7pt]-60:{\small ${y}^{(k)}_3$}}] at (8.5, -2) (x3l0) {}; \node[dot, minimum size=6pt, rectangle, label={[label distance=2pt]0:{\small ${y}^{(k)}_4=y^{(k+1)}_4$}}] at (2, 0) (x4l0) {}; \node[dot, minimum size=4pt, label={[label distance=-5pt]-75:{\small ${y}^{(k)}_5$}}] at (0,-5.5) (x5l0) {}; \node[dot, minimum size=3pt, label={[label distance=-0pt]0:{\small ${y}^{(k+1)}_1$}}] at (4, 4.5) (x1l1) {}; \node[dot, minimum size=10pt, label={[label distance=-5pt]30:{\small ${y}^{(k+1)}_2$}}] at (-1., 3) (x2l1) {}; \node[dot, minimum size=3.pt, label={[label distance=-5pt]-60:{\small ${y}^{(k+1)}_3$}}] at (-5, -3) (x3l1) {}; \node[dot, minimum size=1.2pt, label={[label distance=-5pt]90:{\small ${y}^{(k+1)}_5$}}] at (-7., -0) (x5l1) {}; \draw[dashed, red!60, line width=0.25mm] [->] (x1l0) -- (x2l0) ; \draw[dashed, red!60, line width=0.25mm] [->] (x1l0) -- (x3l0) ; \draw[ red!60, line width=0.25mm] [->] (x1l0) -- (x4l0) ; \draw[dashed, red!60, line width=0.25mm] [->] (x1l0) -- (x5l0) ; \draw[dashed, red!60, line width=0.25mm] [->] (x4l0) -- (x1l1) ; \draw[dashed, red!60, line width=0.25mm] [->] (x4l0) -- (x2l1) ; \draw[dashed, red!60, line width=0.25mm] [->] (x4l0) -- (x3l1) ; \draw[dashed, 
red!60, line width=0.25mm] [->] (x4l0) -- (x5l1) ; \end{tikzpicture} \caption{\small In IS-MP-MCMC, we associate to every proposal ${\vec{y}}^{(\ell)}_{i}$ its weight $w^{(\ell)}_i=p(I^{(\ell)}=i|\vec{y}_{1:N+1}^{(\ell)})$, thereby prioritising proposals that are most informative about the posterior} \label{fig:importance_sampling_visualisation} \end{figure} \subsubsection{Waste-Recycling} Using a different heuristic, \cite{tjelmeland2004using} introduced the importance sampling technique from above, as did \cite{ceperley1977monte,frenkel2004speed,frenkel2006waste,delmas2009does}. In some of the literature, e.g.\ \cite{frenkel2006waste}, this technique is referred to as Waste-Recycling due to the fact that every proposal is used, including the ones rejected by MCMC. Compared to standard MP-MCMC, i.e.\ using Barker's acceptance probabilities, and when $M=1$, IS-MP-MCMC has been shown to be superior in terms of asymptotic variance (\cite{delmas2009does}). However, \cite{delmas2009does} construct an example for the single proposal case where importance sampling (Waste-Recycling) can perform worse than the Metropolis-Hastings algorithm if Peskun's acceptance probability is employed. \subsection{Importance sampling MP-MCMC Algorithms} \label{subsubsec:algorithm_description_is_mpmcmc} We now present the importance sampling version of MP-MCMC to estimate the integral $\boldsymbol{\mu} = \boldsymbol{\mu}(f) = \int f(\vec{x})\pi(\vec{x})\mathrm{d}\vec{x}$ for a given function $f:\Omega\rightarrow \mathbb{R}^{d'}$ for $d'\in \mathbb{N}$. In every iteration, each evaluation $f(\vec{y}_i)$ is weighted according to the stationary distribution $p(I=i|\vec{y}_{1:N+1})$. The sum of weighted proposals yields an estimate for the mean $\boldsymbol{\mu}$. The resulting method is described by Algorithm \ref{algorithm:importance_sampling_mp_mcmc}. 
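To make the weighting concrete, the following is a minimal Python sketch of this scheme for a one-dimensional standard Normal target with an independence Gaussian proposal kernel; the target, proposal scale and iteration counts are illustrative assumptions, not the settings used in our experiments. For an independence kernel, $\kappa(\vec{y}_I,\cdot)=\kappa(\cdot)$, the stationary weights reduce to self-normalised importance weights proportional to $\pi(\vec{y}_i)/\kappa(\vec{y}_i)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    """Unnormalised log-density of the 1D standard Normal target."""
    return -0.5 * x**2

def is_mp_mcmc(n_iters=4000, N=8, s=2.0):
    """Sketch of IS-MP-MCMC with an independence N(0, s^2) proposal kernel:
    all N+1 proposals are weighted by p(I=i | y_{1:N+1}) in every iteration."""
    y_I = rng.normal(0.0, s)                 # retained state
    mu = 0.0                                 # running mean estimate
    for ell in range(1, n_iters + 1):
        # N fresh proposals plus the retained state as the (N+1)-st point
        y = np.append(rng.normal(0.0, s, size=N), y_I)
        # stationary weights; for an independence kernel these reduce to
        # self-normalised importance weights pi(y_i) / kappa(y_i)
        log_w = log_pi(y) + 0.5 * (y / s) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu_tilde = float(w @ y)              # weighted mean over all proposals
        mu += (mu_tilde - mu) / ell          # running-mean update
        y_I = rng.choice(y, p=w)             # resample I for the next iteration
    return mu
```

The returned value estimates the posterior mean (zero for this target); no repeated sampling from the finite state chain is needed, since every proposal enters the estimate through its weight.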
Note that this defines a Markov chain driven algorithm: in every iteration, one sample from the $N+1$ proposals is drawn according to $p(\cdot|{{\vec{y}}}_{1:N+1})$ (line $6$), conditioned on which $N$ new proposals are drawn in the subsequent iteration. This chain corresponds to the standard MP-MCMC with $M=1$. When we mention the underlying Markov chain corresponding to importance sampling MP-MCMC, we refer to this chain. The importance sampling estimator $\boldsymbol{\mu}_L=\boldsymbol{\mu}_L(f)$ for $L\in \mathbb{N}$ can also be written as \begin{align} \boldsymbol{\mu}_L = \frac{1}{L} \sum_{\ell=1}^L \sum_{i=1}^{N+1} w^{(\ell)}_i f({\vec{y}}_i^{(\ell)}), \label{eq:def_importance_sampling_estimator} \end{align} where $ w^{(\ell)}_i = p(I=i| {{\vec{y}}}_{1:N+1}^{(\ell)})$ for $i=1,...,N+1$ and $\ell=1,...,L$. \begin{algorithm}[h] \SetAlgoLined \KwIn{Initialise starting point (proposal) $\vec{y}_1\in \Omega$, number of proposals $N$, auxiliary variable $I=1$, \hl{integrand $f$ and initial mean estimate $\boldsymbol{\mu}_1 = \boldsymbol{\mu}_1(f)$}\; } \For{\textnormal{each MCMC iteration $\ell=1,2,...$}}{ Sample ${\vec{y}}_{\setminus I}$ conditioned on $I$, i.e., draw $N$ new points from the proposal kernel $\kappa({\vec{y}}_I, \cdot) = p({\vec{y}}_{\setminus I}|{\vec{y}}_I)$ \; Calculate the stationary distribution of $I$ conditioned on ${\vec{y}}_{1:N+1}$, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|{{\vec{y}}}_{1:N+1}) = $ $\pi({{\vec{y}}}_i)\kappa({{\vec{y}}}_{{i}}, {{\vec{y}}}_{\setminus{i}}) / \sum_j \pi({{\vec{y}}}_j)\kappa({{\vec{y}}}_{{j}}, {{\vec{y}}}_{\setminus{j}})$, which can be done in parallel\; \hl{ Compute $\tilde{\boldsymbol{\mu}}_{\ell+1}=\sum_{i} p(I=i| {{\vec{y}}}_{1:N+1})f({{\vec{y}}}_i)$}\; \hl{Set $\boldsymbol{\mu}_{\ell+1} = \boldsymbol{\mu}_{\ell} + \frac{1}{\ell+1}\left(\tilde{\boldsymbol{\mu}}_{\ell+1} - \boldsymbol{\mu}_{\ell}\right)$}\; {Sample new $I$ via the stationary distribution $p(\cdot|{{\vec{y}}}_{1:N+1})$}\; } 
\caption{Importance sampling MP-MCMC \newline All code altered compared to original MP-MCMC, Algorithm \ref{algorithm:multiproposal_MH}, is highlighted} \label{algorithm:importance_sampling_mp_mcmc} \end{algorithm} \subsubsection{Lack of samples representing $\pi$} Despite the amenable properties of the importance sampling approach for MP-MCMC, compared to the standard MP-MCMC, Algorithm \ref{algorithm:importance_sampling_mp_mcmc} has the disadvantage that it does not produce samples that are approximately distributed according to the target, but rather an approximation of an integral with respect to the target. \subsubsection{Adaptive importance sampling} \label{subsubsec:algorithm_description_adaptive_IS_mpmcmc} In many situations, it may make sense to adapt the proposal kernel to the target, based on the past history of the algorithm. We therefore extend Algorithm \ref{algorithm:importance_sampling_mp_mcmc} to make use of adaptation, which is described in Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc}. This can be achieved by making use of estimates for global parameters which are informative about the target, e.g.\ mean and covariance. Clearly, the Markov property of the stochastic process resulting from this approach will not hold. This is generally problematic for convergence, and thus for the consistency of the importance sampling estimate. However, given the usual diminishing adaptation condition, i.e.\ when the difference in subsequent updates converges to zero, and some further assumptions on the transition kernel, referring to Section \ref{subsec:adaptive_mpmcmc}, consistency can be shown. Since the importance sampling method does not generate actual samples following the target distribution asymptotically, but rather an estimate for an integral over the target, by consistency we mean asymptotic unbiasedness of the resulting estimate (\cite{delmas2009does}). 
With the same notation as in Section \ref{subsec:theoretical_results_adaptive_is_mpmcmcm}, we assume that the proposal kernel $\kappa=\kappa_\gamma$ depends on a parameter $\gamma$ belonging to some space $\mathcal{Y}$. Examples of $\gamma$ are mean and covariance estimates of the posterior distribution. A proof of asymptotic unbiasedness in the case where $\kappa$ is the Normal distribution is given in Corollary \ref{cor:asympt_unbias_adapt_ismpmcmc}. \noindent In the particular case where sampling proposals depends on previous samples only through the adaptation parameters but is otherwise independent, i.e.\ $\kappa_{\gamma}(\vec{x}, \cdot) = \kappa_\gamma(\cdot)$ $\forall \vec{x} \in \Omega$, we found the use of adaptivity most beneficial in the applications to the Bayesian logistic regression from Section \ref{adaptive_is_mp_qmcmc_empirical_results} and the Bayesian linear regression from Section \ref{adaptive_is_mp_qmcmc_empirical_results}, compared to dependent proposals. 
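The mean and covariance recursions used by the adaptive algorithm below (the highlighted lines of Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc}) can be condensed into a single update function. The sketch below assumes the $N+1$ proposals of one iteration are stored as rows of an array, with stationary weights summing to one; the step size $1/(\ell+1)$ matches the algorithm.

```python
import numpy as np

def adapt_step(mu, Sigma, y, w, ell):
    """One update of the running mean and covariance estimates from the N+1
    weighted proposals of iteration ell (step size 1/(ell+1)).

    y : array of shape (N+1, d), proposals as rows
    w : array of shape (N+1,),  stationary weights p(I=i | y_{1:N+1})
    """
    mu_tilde = w @ y                              # weighted mean of proposals
    mu_new = mu + (mu_tilde - mu) / (ell + 1)
    diff = y - mu_new                             # rows y_i - mu_{ell+1}
    Sigma_tilde = (w[:, None] * diff).T @ diff    # weighted scatter matrix
    Sigma_new = Sigma + (Sigma_tilde - Sigma) / (ell + 1)
    return mu_new, Sigma_new
```

Note that, as in the algorithm, the scatter matrix is centred at the freshly updated mean $\boldsymbol{\mu}_{\ell+1}$ rather than at the previous estimate.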
\begin{algorithm}[h] \SetAlgoLined \KwIn{Initialise starting point (proposal) $\vec{y}_1$, number of proposals $N$, auxiliary variable $I=1$, integrand $f$, initial mean estimate $\boldsymbol{\mu}_1 = \boldsymbol{\mu}_1(f)$ \hl{and covariance estimate $\Sigma_1$}\;} \For{\textnormal{each MCMC iteration $\ell=1,2,...$}}{ Sample ${\vec{y}}_{\setminus I}$ conditioned on $I$ \hl{and $\Sigma_\ell$}, i.e., draw $N$ new points from the proposal kernel $\kappa_{\text{\hl{$\Sigma_\ell$}}}({\vec{y}}_I,\cdot) = p({\vec{y}}_{\setminus I}|{\vec{y}}_I,$\hl{$\Sigma_\ell)$} \; Calculate the stationary distribution of $I$ conditioned on ${\vec{y}}_{1:N+1}$ and \hl{$\Sigma_\ell$}, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|{{\vec{y}}}_{1:N+1},$\hl{$\Sigma_\ell$})$ = $ $\pi({{\vec{y}}}_i)\kappa_{\text{\hl{$\Sigma_\ell$}}}( 
{\vec{y}}_i, {{\vec{y}}}_{\setminus{i}}) / \sum_j \pi({{\vec{y}}}_j)\kappa_{\text{\hl{$\Sigma_\ell$}}}( {\vec{y}}_j,{{\vec{y}}}_{\setminus{j}})$, which can be done in parallel\; Compute $\tilde{\boldsymbol{\mu}}_{\ell+1}=\sum_{i} p(I=i| {{\vec{y}}}_{1:N+1}$, \text{\hl{$\Sigma_\ell$}}$)f({{\vec{y}}}_i)$\; Set $\boldsymbol{\mu}_{\ell+1} = \boldsymbol{\mu}_{\ell} + \frac{1}{\ell+1}\left(\tilde{\boldsymbol{\mu}}_{\ell+1} - \boldsymbol{\mu}_{\ell}\right)$\; Sample new $I$ via the stationary distribution $p(\cdot|{{\vec{y}}}_{1:N+1},$ \hl{$\Sigma_\ell)$}\; \hl{Compute $\tilde{{\Sigma}}_{\ell+1} = \sum_{i}p(I=i| {\vec{y}}_{1:N+1}, {{\Sigma}}_\ell)[{\vec{y}}_i-\boldsymbol{\mu}_{\ell+1}][{\vec{y}}_i-\boldsymbol{\mu}_{\ell+1}]^T$}\; \hl{ Set ${\Sigma}_{\ell+1} = {\Sigma_\ell} + \frac{1}{\ell+1}(\tilde{{\Sigma}}_{\ell+1} - {\Sigma}_{\ell})$}\; } \caption{Adaptive importance sampling MP-MCMC \newline All code altered compared to IS-MP-MCMC, Algorithm \ref{algorithm:importance_sampling_mp_mcmc}, is highlighted } \label{algorithm:adaptive_importance_sampling_mp_mcmc} \end{algorithm} \subsection{Asymptotic unbiasedness of IS-MP-MCMC} \label{subsec:theoretical_results_adaptive_is_mpmcmcm} In this section, we prove the asymptotic unbiasedness of the mean and covariance estimates from Algorithm \ref{algorithm:importance_sampling_mp_mcmc} and Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc}, respectively. We further refer to an existing result in the literature which states the asymptotic normality of the IS-MP-MCMC mean estimator. \begin{Lemma}[Asymptotic unbiasedness of IS-MP-MCMC] \label{lemma:asymptotic_unbiasedness_is} Given that the underlying Markov chain is positive Harris, the IS-MP-MCMC sequence of estimators $(\boldsymbol{\mu}_L)_{L \ge 1}$ from Algorithm \ref{algorithm:importance_sampling_mp_mcmc} is asymptotically unbiased. \begin{proof} The statement is proven in Appendix \ref{proof:lemma:asymptotic_unbiasedness_is}. 
\end{proof} \end{Lemma} Lemma \ref{lemma:asymptotic_unbiasedness_is} states that after having discarded a sufficiently large burn-in period of (weighted) samples, the importance sampling estimator defined by the remaining samples is unbiased. \begin{corollary}[Asymptotic unbiasedness of adaptive IS-MP-MCMC] \label{cor:asympt_unbias_adapt_ismpmcmc} Under any of the conditions stated in Theorem \ref{thm:ergodicity_adap_mpmcmc_independent}, Theorem \ref{thm:ergodicity_adapt_mpmcmc_bounded} or Theorem \ref{thm:ergodicity_adaptive_mpmcmc_positive}, the sequence of estimators $(\boldsymbol{\mu}_L)_{L \ge 1}$ from Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc} is asymptotically unbiased. \begin{proof} Ergodicity of the adaptive MP-MCMC follows by the respective theorem used. Thus, we may argue analogously to the proof of Lemma \ref{lemma:asymptotic_unbiasedness_is}. \end{proof} \end{corollary} \begin{corollary} \label{corollary:asymptotic_unbiasedness_is_covariance} Under the same conditions as in Lemma \ref{lemma:asymptotic_unbiasedness_is}, the sequence of covariance estimates $({\Sigma}_{L})_{L\ge 1}$ from Algorithm \ref{algorithm:adaptive_mp_mcmc} and Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc} is asymptotically unbiased. \begin{proof} For a proof, we refer to Appendix \ref{proof:corollary:asymptotic_unbiasedness_is_covariance}. \end{proof} \end{corollary} The following result states that IS-MP-MCMC produces asymptotically normal estimates that outperform standard MP-MCMC (using Barker's acceptance probabilities \eqref{eq:barker_acceptance}) for $M=1$ in terms of their asymptotic variance. Thus, making use of all proposals in every iteration is better than accepting only a single one per iteration. \begin{Lemma}[Proposition 4.1, \cite{delmas2009does}] The IS-MP-MCMC sequence of estimators $(\boldsymbol{\mu}_L)_{L \ge 1}$ from Algorithm \ref{algorithm:importance_sampling_mp_mcmc} is asymptotically normal. 
Its asymptotic variance is smaller than or equal to the asymptotic variance of MP-MCMC with $M=1$. \end{Lemma} \subsection{Bayesian logistic regression} \label{subsubsec:emp_results_adaptive_IS_mp_mcmc} In what follows, we consider the Bayesian logistic regression model as formulated in \cite{girolami2011riemann}. The dependent variable $y$ is categorical with binary outcome. The probability of $y$ is based on predictor variables defined by the design matrix $X \in \mathbb{R}^{n\times d}$, and is given by $P(y=1|X,\vec{\theta})=\sigma(X \vec{\theta})$ and $P(y=0|X,\vec{\theta})=1-\sigma(X \vec{\theta})$, where $\sigma$ denotes the logistic function. Our goal is to perform inference over the regression parameter $\vec{\theta} \in \mathbb{R}^d$, which has the Gaussian prior $\pi(\vec{\theta}) = \mathcal{N}(\vec{0},\alpha \mathbf{I}_d)$, with $\alpha=100$. For further details, we refer to \cite{girolami2011riemann}. For the above mentioned logistic regression, there are overall $5$ different underlying data sets of varying dimensionality at our disposal, which we denote by \textit{Ripley} (d=3), \textit{Pima} (d=8), \textit{Heart} (d=14), \textit{Australian} (d=15) and \textit{German} (d=25). For brevity, we consider only the lowest-dimensional data set in the following experiments. In later experiments, where we use a QMC seed to run the above introduced MCMC algorithms, we investigate their performance on all data sets. \subsubsection{Empirical results} We now compare the performance of IS-MP-MCMC and adaptive IS-MP-MCMC in the context of the Bayesian logistic regression model introduced above. As a reference, we also consider the standard, i.e.\ single proposal, random-walk Metropolis-Hastings algorithm. To ensure fairness, the total number of samples $n$ produced by the single and multiple proposal algorithms is equal, i.e.\ $n=LN$, where $L$ denotes the number of iterations and $N$ the number of proposals in the multiple proposal case. 
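For concreteness, the unnormalised log-posterior of this model can be written down in a few lines. The sketch below is a generic implementation of the stated likelihood and prior (with a numerically stable `logaddexp` for $\log(1+e^{z})$), not the exact code used in our experiments.

```python
import numpy as np

def log_posterior(theta, X, y, alpha=100.0):
    """Unnormalised log-posterior of the Bayesian logistic regression model:
    Bernoulli likelihood with success probability sigma(X theta) and a
    N(0, alpha * I_d) prior on theta."""
    z = X @ theta
    # log-likelihood: sum_i [ y_i * z_i - log(1 + exp(z_i)) ], computed stably
    log_lik = np.sum(y * z - np.logaddexp(0.0, z))
    log_prior = -0.5 * theta @ theta / alpha
    return log_lik + log_prior
```

At $\vec{\theta}=\vec{0}$ every observation contributes $-\log 2$, which gives a quick sanity check of the implementation.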
In all algorithms, we choose a Gaussian proposal sampler. For the importance sampling methods, proposals are generated independently of previous samples. As initial proposal mean and covariance, rough estimates of the posterior mean and covariance are employed. The former is also used to initialise the Metropolis-Hastings algorithm. In the adaptive algorithm, the proposal mean and covariance estimates are updated after every iteration. The results for the empirical variance associated with the posterior mean estimates in the above mentioned algorithms are displayed in Figure \ref{fig:is_vs_adapt_mpmcmc}. The importance sampling algorithms outperform Metropolis-Hastings by over an order of magnitude. Further, the adaptive algorithm produces slightly better results than the non-adaptive importance sampler, with an average empirical variance reduction of over $20 \%$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=\textwidth]{plots_is_vs_adapt_mpmcmc/emprVar_25mcmcFinal.eps} \label{c} \end{subfigure} \caption{Empirical variance of the arithmetic mean for IS-MP-MCMC and adaptive IS-MP-MCMC in the Bayesian logistic regression model from \cite{girolami2011riemann} ($d=3$) for increasing proposal numbers $N$ and total number of samples $n$; also displayed is Metropolis-Hastings (M-H) for reference. Here, M-H was tuned to an approximately optimal acceptance rate of $20$-$25 \%$. Results are based on $25$ MCMC simulations. 
The error bars correspond to three times a standard deviation} \label{fig:is_vs_adapt_mpmcmc} \end{figure} \section{Combining QMC with MP-MCMC} \label{sec:multiproposal_quasi_MH} In this section, we introduce a general framework for using CUD numbers in the context of MP-MCMC, leading to a method we shall call MP-QMCMC. The motivation for this is that since in each iteration $N$ proposals are provided as alternatives to any current state, all of which contribute to the exploration of the underlying local region of state space, we might reasonably expect a higher gain from using QMC points here than in the single proposal case, where there is always only one alternative to the current state. We prove the consistency of our proposed method and illustrate its increased performance in numerical experiments. We also extend our methodology to importance sampling, for which we observe an improved convergence rate of close to $n^{-2}$ in MSE in simulations, instead of the standard MCMC rate of $n^{-1}$. Summarising, we generalise Algorithms \ref{algorithm:multiproposal_MH}, \ref{algorithm:importance_sampling_mp_mcmc} and \ref{algorithm:adaptive_importance_sampling_mp_mcmc} to using any choice of CUD numbers as the driving sequence. 
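As a brief practical aside, a classical way of obtaining an (approximately) CUD driving sequence, used in the CUD-based MCMC literature cited in the introduction, is to run a small linear congruential generator over its entire period. The toy sketch below uses illustrative parameters satisfying the Hull-Dobell full-period conditions; it is not a recommendation for production use.

```python
import numpy as np

def lcg_full_period(a=69069, c=1, m=2**16):
    """Run the LCG x_{k+1} = (a*x_k + c) mod m over its entire period and map
    the states into (0,1). For these parameters the Hull-Dobell conditions
    hold, so every residue 0,...,m-1 is visited exactly once."""
    x = 0
    out = np.empty(m)
    for k in range(m):
        x = (a * x + c) % m
        out[k] = (x + 0.5) / m   # centred in (0,1), never hitting 0 or 1
    return out

u = lcg_full_period()
```

Because the full period is used, the stream is exactly balanced (mean $1/2$) and contains each point of the lattice $\{(x+0.5)/m\}$ exactly once; consuming only part of the period would destroy this property.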
\subsection{MP-QMCMC} We now introduce the above-mentioned MP-QMCMC algorithm, and subsequently prove its consistency under regularity conditions. Moreover, we investigate the algorithm's performance in simulations based on the Bayesian logistic regression model introduced in Section \ref{subsubsec:emp_results_adaptive_IS_mp_mcmc} and five different underlying data sets. We conclude the section with some practical advice on effectively harvesting the benefits of QMC based MCMC algorithms. \subsubsection{Algorithm description} \label{subsubsec:algorithm_description_mpqmcmc} We exchange the usual driving sequence of pseudo-random numbers for CUD numbers. In every iteration, these enter in two places: first, to generate the proposals, and second, to sample the auxiliary variables given the transition matrix of the finite state chain on the proposed states. In order to create the $N$ new proposals in $\Omega \subset \mathbb{R}^d$ we utilise $Nd$ numbers from a CUD sequence in $(0,1)$. Further, to sample from the finite state chain $M$ times we utilise another $M$ numbers from the underlying CUD sequence. Our algorithm is designed such that the entire CUD sequence is used, thereby making full use of its spatial homogeneity. A pseudo-code description of the resulting MP-QMCMC is given in Algorithm \ref{algorithm:multiproposal_quasi_MH}, which extends Algorithm \ref{algorithm:multiproposal_MH} to using any choice of CUD numbers as the driving sequence. The function $\Psi_{\vec{y}_I}$ in line three of the algorithm denotes the inverse of the cumulative distribution function (CDF) of the proposal distribution. Thus, given $N$ vectors $\vec{u}_i \in (0,1)^d$, represented by the joint vector $\vec{u}=(\vec{u}_1, ..., \vec{u}_N)$, $\Psi_{\vec{y}_I}$ maps $\vec{u}$ to $N$ new proposals $\vec{y}_{\setminus{I}}\in \Omega^N$. Practically, each sub-vector $\vec{u}_i$ of $\vec{u}$ is assigned to one new proposal.
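For the common special case of a Gaussian proposal with independent coordinates, $\Psi_{\vec{y}_I}$ acts coordinate-wise through the normal inverse CDF. A minimal sketch, assuming this simple diagonal case; the step size $\sigma$ is an illustrative parameter:

```python
from statistics import NormalDist

def psi(y_I, u, sigma=1.0):
    """Map a CUD sub-vector u in (0,1)^{N*d} to N Gaussian proposals
    centred at the current state y_I, via coordinate-wise inversion of
    the normal CDF (independent coordinates)."""
    d = len(y_I)
    inv_cdf = NormalDist().inv_cdf
    return [[y_I[j] + sigma * inv_cdf(u[i * d + j]) for j in range(d)]
            for i in range(len(u) // d)]

props = psi([0.0, 0.0], [0.5, 0.5, 0.975, 0.5])  # N=2 proposals, d=2
```

Here the sub-vector $(0.5, 0.5)$ maps to the proposal mean itself, while the coordinate $0.975$ maps to roughly $1.96$ standard deviations above it.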
For $d>1$, sampling via inversion can be expressed as iteratively sampling from the one-dimensional conditional proposal distributions, i.e.\ sampling the first coordinate of the new proposal via inversion of the conditional CDF of the first coordinate, then, given that coordinate, sampling the second one accordingly, and so on. \begin{algorithm}[H] \SetAlgoLined \KwIn{Initialise starting point ${\vec{x}}_0=\vec{y}_1\in\Omega$, number of proposals $N$, number of accepted samples per iteration $M$, auxiliary variable $I=1$, counter $n=1$ \hl{and number of MCMC iterations $L$}\;} \hl{Generate a CUD sequence $u_1, ..., u_{L(Nd+M)}\in (0,1)$}\; \For{\textnormal{each MCMC iteration $\ell=1,...,$\hl{$L$} }}{ \hl{Set $\vec{u} = (u_{(\ell-1)(Nd+M)+1}, ..., u_{(\ell-1) (Nd+M)+ Nd}) \in (0,1)^{Nd}$}, and sample $\vec{y}_{\setminus I}$ conditioned on $I$, i.e., draw $N$ new points from $\kappa(\vec{y}_I, \cdot) = p(\vec{y}_{\setminus I}|\vec{y}_I)$ \hl{by the inverse $\Psi_{\vec{y}_I}(\vec{u})$} \; Calculate the stationary distribution of $I$ conditioned on $\vec{y}_{1:N+1}$, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|\vec{y}_{1:N+1}) = $ $\pi(\vec{y}_i)\kappa(\vec{y}_i,\vec{y}_{\setminus{i}}) / \sum_j \pi(\vec{y}_j)\kappa(\vec{y}_j,\vec{y}_{\setminus{j}})$, which can be done in parallel\; \hl{Set $\vec{v}' = (u_{(\ell-1) (Nd+M)+Nd+1}, ..., u_{\ell (Nd+M)}) \in (0,1]^M$}\; \For{$m=1,...,M$}{ \hl{If $v'_{m} \in (\gamma_{j -1}, \gamma_{j}]$, where $\gamma_{j} = \sum_{i=1}^j p(I=i| \vec{y}_{1:N+1})$ for $j= 1,...,N+1$ and $\gamma_0:=0$, set $\vec{x}_{n+m}=\vec{y}_j$ and $I=j$}\; } Update counter $n=n+M$ } \caption{MP-QMCMC \newline All code altered compared to original (pseudo-random) MP-MCMC, Algorithm \ref{algorithm:multiproposal_MH}, is highlighted} \label{algorithm:multiproposal_quasi_MH} \end{algorithm} \subsubsection{Consistency} In this section we prove that MP-MCMC driven by CUD numbers instead of pseudo-random numbers produces samples according to the correct stationary distribution under regularity conditions. In general, using CUD points is not expected to yield consistency for any MCMC algorithm that is not ergodic when sampling with IID numbers \cite{chen2011consistency}. Similar to Chen et al., our proof is based on the so-called Rosenblatt-Chentsov transformation.
\vspace{4mm} \noindent \textbf{Rosenblatt-Chentsov transformation} \vspace{1.5mm} \noindent Let us assume that there is a generator $\psi_\pi$ that produces samples according to the target distribution $\pi$, i.e.\ $\psi_\pi(\vec{u})=\vec{x} \sim \pi$ if $\vec{u}\sim \mathcal{U}[0,1]^d$. For example, $\psi_\pi$ could be based on the inversion method applied to the one-dimensional conditional distributions of the target, provided they are available. For $n=LM$, the $N$-proposal Rosenblatt-Chentsov transformation of $\vec{u}_0 \in(0,1)^d$ and a finite sequence of points $u_1, ..., u_{L(Nd+M)}\in (0,1)$ is defined as the finite sequence $\vec{x}_0,\vec{x}_1, ..., \vec{x}_{LM}\in \Omega$, where $\vec{x}_0 = \psi_\pi(\vec{u}_0)$ and $\vec{x}_1, ..., \vec{x}_{LM}$ are generated according to Algorithm \ref{algorithm:multiproposal_quasi_MH} using $u_1, ..., u_{L(Nd+M)}$ as the driving sequence and $\vec{x}_0$ as the initial point. Since the standard version of MP-MCMC fulfils the detailed balance condition, updating samples preserves the underlying stationary distribution $\pi$. Thus, whenever one sample follows $\pi$, all successive samples follow $\pi$; in particular, whenever $\vec{x}_0\sim\pi$, all points generated by MP-MCMC follow $\pi$. If the points $u_1, ..., u_{L(Nd+M)}$ in the $N$-proposal Rosenblatt-Chentsov transformation are uniformly distributed, then the samples generated by Algorithm \ref{algorithm:multiproposal_quasi_MH} also follow $\pi$. This observation will be used in the following to show the consistency of MP-QMCMC. Before that, we formulate some regularity conditions that will be used in the proof. \vspace{4mm} \noindent \textbf{Regularity conditions} \vspace{1.5mm} \noindent Similarly to \cite{chen2011consistency}, the consistency proof given below relies on two regularity conditions. The first defines coupling properties of the sampling method, and the second suitable integrability over the sample space.
\vspace{2mm} \noindent \textbf{1) Coupling}: Let $\phi(\tilde{\vec{x}}_M, (u_1, ..., u_{Nd+M})) = (\vec{x}_1,...,\vec{x}_M)$ denote the innovation operator of MP-MCMC, which maps the last sample $\tilde{\vec{x}}_M$ of the current iteration to the $M$ new samples of the subsequent iteration. Let $\mathcal{C}\subset (0,1)^{Nd+M}$ have positive Jordan measure. If for any $\vec{u}\in \mathcal{C}$ it holds that $\phi(\vec{x}, \vec{u}) = \phi(\vec{x}', \vec{u})$ $\forall$ $\vec{x},\vec{x}' \in \Omega$, then $\mathcal{C}$ is called a \textit{coupling region}. \\ Let $\vec{z}_i = \phi(\vec{x}, \vec{u}_i)$ and $\vec{z}_i' = \phi(\vec{x}', \vec{u}_i)$ be two iterations of Algorithm \ref{algorithm:multiproposal_quasi_MH} based on the same innovations $\vec{u}_i \in (0,1)^{Nd+M}$ but possibly different current states $\vec{x}$ and $\vec{x}'$, respectively. If $\vec{u}_i\in \mathcal{C}$, then $\vec{z}_j = \vec{z}_j'$ for any $j\ge i$. In other words, if $\vec{u}_i \in \mathcal{C}$, two chains with the same innovation operator but potentially different starting points coincide for all $j \ge i$. As a non-trivial example, standard MP-MCMC with an independent proposal sampler has a coupling region; see Lemma \ref{lemma:coupling_region_barker_independent}. \vspace{2mm} \noindent \textbf{2) Integrability}: For $k\ge 1$, let $\vec{x}_k=\vec{x}^i_j=\vec{x}^i_j(\vec{u}^1,...,\vec{u}^i)$ with $i\ge 1, 1\le j\le M$ and $k=(i-1)M+j$ denote the $k$th $N$-proposal MCMC update, i.e.\ the $j$th sample in the $i$th iteration, according to Algorithm \ref{algorithm:multiproposal_quasi_MH}. The method is called regular if the function $g:(0,1)^{i(Nd+M)}\rightarrow \mathbb{R}$, defined by $g(\vec{u}^1, \ldots, \vec{u}^i) = f(\vec{x}_k(\vec{u}^1, \ldots, \vec{u}^i))$, is Riemann integrable for any bounded and continuous scalar-valued $f$ defined on $\Omega$.
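The coupling condition can be illustrated on a toy single-proposal independence sampler: two chains driven by the same innovations but started at different points coalesce as soon as the shared acceptance uniform is small enough that the proposal is accepted regardless of the current state. A sketch, with a standard normal target and an $\mathcal{N}(0,2^2)$ proposal chosen purely for illustration:

```python
import math
import random
from statistics import NormalDist

def w(x):
    # Importance ratio pi/kappa, up to a constant, for target N(0,1)
    # and proposal N(0, 2^2); it is maximised at x = 0 with w(0) = 1.
    return math.exp(-3.0 * x * x / 8.0)

def step(x, u_prop, u_acc):
    # One independence-Metropolis step driven by two uniform innovations.
    y = 2.0 * NormalDist().inv_cdf(u_prop)  # proposal via inversion
    return y if u_acc <= min(1.0, w(y) / w(x)) else x

random.seed(1)
xa, xb = -5.0, 5.0  # two chains, same innovations, different starts
for _ in range(100):
    u_prop, u_acc = random.random(), random.random()
    xa, xb = step(xa, u_prop, u_acc), step(xb, u_prop, u_acc)
# whenever u_acc <= w(y) = w(y)/sup_x w(x), both chains accept y,
# after which they coincide forever
```

The set of innovations with $u_{\mathrm{acc}} \le w(y)$ plays exactly the role of the coupling region $\mathcal{C}$ above.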
\vspace{2mm} \noindent With reference to \cite{chen2011consistency}, it may seem odd at first to use the Riemann integral instead of the Lebesgue integral in the previous formulation. However, QMC numbers are typically designed to meet equi-distribution criteria over rectangular sets or are based upon a spectral condition, and both formulations are naturally closely related to the Riemann integral. The following theorem is the main result of this section; it states that under the above conditions MP-MCMC is consistent when driven by CUD numbers. \begin{theorem} \label{theorem:consistency_multi_prop_mcmc} Let $\vec{x}_0 \in \Omega$. For $k\ge 1$, let $\vec{x}_k=\vec{x}^i_j$ with $i\ge 1, 1\le j\le M$ be the $k$th $N$-proposal MCMC update according to Algorithm \ref{algorithm:multiproposal_quasi_MH}, which is assumed to be positive Harris with stationary distribution $\pi$ for an IID random seed. The method is also assumed to be regular and to have a coupling region $\mathcal{C}$. Further, let \begin{align*} \vec{u}^i = (v^i_1, ..., v^i_{Nd+M}), \end{align*} for a CUD sequence $(v_i)_{i\ge 0}$ with $v^i_\ell := v_{i(Nd+M)+\ell}$ for $i\ge 0$ and $\ell=1,...,Nd+M$. Then, the sequence $(\vec{x}_k)_{k \ge 1}$ consistently samples $\pi$. \end{theorem} \begin{proof} A proof can be found in Appendix \ref{app:sec:proof_consistency_mpqmcmc}. \end{proof} Instead of requiring a coupling region, consistency for a continuous but bounded support $\Omega$ of $\pi$ can be achieved in the classical single proposal case by using a contraction argument \cite{chen2011consistencythesis,chen2011consistency}. Given an update function that is continuous in both the last state and the innovations, one further requires continuity and integrability conditions. In the following lemma, we show that standard MP-MCMC, i.e.\ Algorithm \ref{algorithm:multiproposal_MH}, has a coupling region when proposals are sampled independently of previous samples. \begin{Lemma}[Coupling region for MP-MCMC with independent sampler] \label{lemma:coupling_region_barker_independent} Let $\Psi_{\vec{y}_I}$ denote the inverse CDF of the proposal distribution and $\vec{y}_I$ the last accepted sample from the previous MP-MCMC iteration, i.e.\ \begin{align} \Psi_{\vec{y}_I}(\vec{u}_1, \ldots, \vec{u}_{N}) = \vec{y}_{\setminus{I}}, \label{eq:proposal_sampler_coupling_region} \end{align} are the new proposals in one MCMC iteration, where $\vec{u}_i \in (0,1)^{d}$ for $i=1, ..., N$, and $I \neq 1$ without loss of generality. We assume that proposals are sampled independently of previous samples, i.e.\ $\Psi_{\vec{y}_I} = \Psi$ and $\kappa(\vec{y}_I, \vec{y}_{\setminus{I}}) = \kappa(\vec{y}_{\setminus{I}})$.
The proposal $\vec{y}_1$ is always accepted, i.e.\ $\phi(\vec{y}_I, (\vec{u}_1, \ldots, \vec{u}_{N}, \vec{\tilde{u}}))=(\vec{y}_1, ..., \vec{y}_1)$ with $\vec{\tilde{u}}\in (0,1)^M$, if \begin{align*} 0 \le \tilde{u}_{m} \le \frac{\pi(\vec{y}_1)\kappa(\vec{y}_{\setminus 1})}{\sum_i \pi(\vec{y}_i)\kappa( \vec{y}_{\setminus i})} \quad\quad \text{for all }m=1,...,M. \end{align*} Let us assume that \begin{align} \rho = \sup_{\vec{y}_1,\ldots, \vec{y}_{N+1}\in \Omega} \frac{\sum_{i=1}^{N+1} \pi(\vec{y}_i)}{\kappa(\vec{y}_{\setminus 1})} < \infty. \label{eq:kappa_sup_condition} \end{align} Moreover, let us assume that there is a rectangle $[\vec{a}, \vec{b}]\subset [0,1]^{Nd}$ of positive volume with \begin{align} \eta = \inf_{\substack{(\vec{u}_1, \ldots, \vec{u}_{N}) \in [\vec{a}, \vec{b}],\\ \vec{y}_I\in \Omega}} \frac{\pi(\Psi(\vec{u}_1))}{\prod_{j \neq I}\kappa(\Psi(\vec{u}_j)) + \sum_{i\neq I} \kappa(\vec{y}_I) \prod_{j \neq i,I}\kappa(\Psi(\vec{u}_j))} >0. \label{eq:eta_inf_condition} \end{align} Then, $\mathcal{C}:= [\vec{a}, \vec{b}] \times [0, \eta/\rho]^M$ is a coupling region. \begin{proof} Let us assume that $(\vec{u}_1, ..., \vec{u}_N, \vec{\tilde{u}})\in \mathcal{C}$, where $\vec{u}_i \in (0,1)^d$ for $i=1,...,N$ and $\vec{\tilde{u}} \in (0,1)^M$. The set $\mathcal{C}$ has by definition positive Jordan measure. Note that by the definition of $\rho$ and $\eta$, and equation \eqref{eq:proposal_sampler_coupling_region}, it holds that \begin{align} \kappa(\vec{y}_{\setminus 1}) \ge \frac{1}{\rho}\sum_{i=1}^{N+1} \pi(\vec{y}_i), \quad \text{and} \quad \pi(\vec{y}_1) \ge \eta \sum_{i=1}^{N+1} \kappa(\vec{y}_{\setminus i}). \end{align} Therefore, since $\tilde{u}_m \le \eta/\rho$ and all summands are non-negative, \begin{align*} \pi(\vec{y}_1)\kappa(\vec{y}_{\setminus 1}) &\ge \eta \sum_{i=1}^{N+1} \kappa(\vec{y}_{\setminus i})\cdot \frac{1}{\rho}\sum_{i=1}^{N+1} \pi(\vec{y}_i) \\ &\ge \tilde{u}_{m} \sum_{i=1}^{N+1} \pi(\vec{y}_i)\kappa(\vec{y}_{\setminus i}) \end{align*} for any $m=1,...,M$.
Hence, $\phi(\vec{y}_I, \vec{u}_1, \ldots, \vec{u}_{N}, \vec{\tilde{u}})=(\vec{y}_1, ..., \vec{y}_1)$ for any $\vec{y}_I \in \Omega$. \end{proof} \end{Lemma} Following arguments similar to those in Sections 5.3 and 5.4 of \cite{chen2011consistency}, one can prove that the Rosenblatt-Chentsov transformation is regular under continuity and boundedness assumptions. \begin{theorem} \label{theorem:regularity_rosenblatt_chentsiv} If the generator $\psi_\pi$, the inverse CDF $\Psi_{\cdot}: \Omega \times (0,1)^{Nd}\rightarrow \Omega^{N}$, the proposal density $\kappa$ and the target density $\pi$ are bounded and continuous, then the Rosenblatt-Chentsov transformation is regular. \begin{proof} Here, we refer to Theorem 6 in \cite{chen2011consistency}. The idea of the proof is that composing a continuous function defined on a bounded interval with Riemann integrable functions again yields a Riemann integrable function, and that the sampling step on the finite state chain does not break Riemann integrability. \end{proof} \end{theorem} \begin{example}[MP-MCMC with independent Gaussian proposals] Let the proposals be Gaussian, and sampled independently of previous samples. Further, let the target distribution be bounded and continuous. By Lemma \ref{lemma:coupling_region_barker_independent} and Theorem \ref{theorem:regularity_rosenblatt_chentsiv}, the resulting MP-MCMC satisfies the regularity conditions that ensure consistency when run with a CUD driving sequence. \end{example} \subsubsection{Empirical Results: Bayesian logistic regression} In Figure \ref{fig:variance_bias_mpmcmc_qmc_vs_psr} we compare the performance of SmMALA MP-MCMC with its CUD driven counterpart on the Bayesian logistic regression problem from Section \ref{subsubsec:emp_results_adaptive_IS_mp_mcmc} for increasing proposal numbers and sample sizes. We compare the empirical variance and the squared bias of estimates of the posterior mean.
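The two quantities reported here are obtained from repeated independent runs: the empirical variance of the resulting estimates, and the squared distance of their average from a reference value; their sum approximates the MSE. A minimal bookkeeping sketch (the numbers are illustrative, not taken from the experiments):

```python
def variance_and_bias_sq(estimates, mu_star):
    """Empirical variance and squared bias of repeated estimates of a
    scalar quantity with reference value mu_star; their sum is an
    estimate of the mean squared error."""
    R = len(estimates)
    mean = sum(estimates) / R
    var = sum((e - mean) ** 2 for e in estimates) / (R - 1)
    bias_sq = (mean - mu_star) ** 2
    return var, bias_sq

var, b2 = variance_and_bias_sq([1.1, 0.9, 1.0, 1.2, 0.8], 1.0)
# var ~ 0.025, b2 ~ 0.0
```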
Since the actual posterior mean is not analytically available, we computed a gold-standard mean based on $25$ simulations of approximately $8 \cdot 10^6$ weighted proposals according to Algorithm \ref{algorithm:importance_sampling_mp_qmcmc}. The bias is then calculated using the gold-standard mean estimate. Here, a proposal mechanism is applied that makes use of an auxiliary proposed state, as described in Section \ref{intro_aux_prop_state}. More precisely, in a first step of the proposal procedure, an auxiliary point is generated independently of previous samples. In a second step, based on geometric information at the auxiliary point, the $N$ proposals, from which the MP-MCMC samples are drawn, are generated using the SmMALA kernel \eqref{eq:smMALA_kernel}. Note that, formally, this algorithm has an independent proposal sampler; it therefore satisfies the coupling region condition by Lemma \ref{lemma:coupling_region_barker_independent} and consistently samples from the posterior. In the lower-dimensional models, we see a slight improvement in the rate of convergence for increasing proposal numbers and sample sizes when moving from pseudo-random to QMC driven SmMALA MP-MCMC. However, the improvement essentially disappears in higher dimensions. This may at first seem a setback in exploring the beneficial aspects of QMC driven MCMC, since we hope for significantly improved rates of convergence. However, some further thought, given in what follows, explains the observed behaviour and identifies its source. As a result, we are able to devise a methodology, presented later in this work, that circumvents the problem and thereby allows significantly improved convergence rates for certain QMC driven MCMC methods compared to pseudo-random methods. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc_qmc_vs_psr/BayLogReg_ripley.eps} \subcaption{Ripley, $d=3$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc_qmc_vs_psr/BayLogReg_pima.eps} \subcaption{Pima, $d=8$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc_qmc_vs_psr/BayLogReg_heart.eps} \subcaption{Heart, $d=14$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc_qmc_vs_psr/BayLogReg_australian.eps} \subcaption{Australian, $d=15$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{plots_mpmcmc_qmc_vs_psr/BayLogReg_german.eps} \subcaption{German, $d=25$} \end{subfigure} \caption{\small{Empirical variance and squared bias of posterior mean estimates for the Bayesian logistic regression problem from \cite{girolami2011riemann} using a pseudo-random (PSR) vs.\ CUD (QMC) seed, resp., for increasing proposal numbers and sample sizes. The results are based on $25$ MCMC simulations and the error bars correspond to three times a standard deviation}} \label{fig:variance_bias_mpmcmc_qmc_vs_psr} \end{figure} \subsubsection{The ``right'' way of using CUD numbers} \label{subsubsec:right_way} Due to the acceptance threshold in standard MP-MCMC, some proposals are accepted at least once while others are not accepted at all.
In the latter case, the underlying QMC points that generate such proposals, and that contribute to the homogeneous spatial coverage of the hypercube by the QMC points, are neglected. Since this homogeneity is the core of the performance gain due to QMC, we do not expect the best results when applying CUD points to standard MP-MCMC. More successful approaches do not exhibit any discontinuity due to an acceptance threshold, e.g.\ Gibbs sampling \cite{owen2005quasi,tribble2008construction}. Our approach to making use of all information carried by the underlying CUD sequence is to incorporate all proposals, which is achieved using the importance sampling approach for MP-MCMC introduced in Section \ref{sec:importance_sampling}. Indeed, as we will see in the following section, with standard MP-MCMC we achieve only a reduction in the constant associated with the rate $n^{-1}$, whereas using importance sampling there are situations where the convergence rate improves to close to $n^{-2}$. \subsection{IS-MP-QMCMC} \label{subsec:IS-MP-QMCMC} As previously discussed in Section \ref{subsubsec:right_way}, importance sampling is a natural approach in the context of MP-MCMC driven by CUD numbers, since it respects all proposals and therefore all numbers in the underlying sequence, thereby making full use of its spatial homogeneity. We now introduce IS-MP-QMCMC, prove its consistency and show in numerical experiments an improved convergence rate compared to standard MP-MCMC and other MCMC methods. The two algorithms introduced here extend Algorithms \ref{algorithm:importance_sampling_mp_mcmc} and \ref{algorithm:adaptive_importance_sampling_mp_mcmc} to using any CUD sequence as the driving sequence. \subsubsection{Algorithm description} \label{subsubsec:algorithm_description_IS_mpqmcmc} Analogously to pseudo-random IS-MP-MCMC, introduced in Section \ref{sec:importance_sampling}, all proposals from one iteration are \textit{accepted}.
Thus, only a single sample of $I$ from the finite state Markov chain is generated, which determines the distribution from which the subsequent $N$ proposals are generated. The resulting method is displayed as Algorithm \ref{algorithm:importance_sampling_mp_qmcmc}, which can be viewed as an extension of Algorithm \ref{algorithm:importance_sampling_mp_mcmc} to the CUD case. \begin{algorithm}[h] \SetAlgoLined \KwIn{Initialise starting point (proposal) $\vec{y}_1\in \Omega$, number of proposals $N$, auxiliary variable $I=1$, integrand $f$, initial mean estimate $\boldsymbol{\mu}_1 = \boldsymbol{\mu}_1(f)$ \hl{and number of MCMC iterations $L$} \;} \hl{Generate a CUD sequence $u_1, ..., u_{L(Nd+1)}\in (0,1)$}\; \For{\textnormal{each MCMC iteration $\ell=1,...,$\hl{$L$} }}{ \hl{Set $\vec{u} = (u_{(\ell-1)(Nd+1)+1}, ..., u_{(\ell-1) (Nd+1)+ Nd}) \in (0,1)^{Nd}$}, and sample ${\vec{y}}_{\setminus I}$ conditioned on $I$, i.e., draw $N$ new points from $\kappa({\vec{y}}_I, \cdot) = p({\vec{y}}_{\setminus I}|{\vec{y}}_I)$ \hl{by the inverse $\Psi_{\vec{y}_I}(\vec{u})$} \; Calculate the stationary distribution of $I$ conditioned on ${\vec{y}}_{1:N+1}$, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|{{\vec{y}}}_{1:N+1}) = $ $\pi({{\vec{y}}}_i)\kappa({{\vec{y}}}_{{i}}, {{\vec{y}}}_{\setminus{i}}) / \sum_j \pi({{\vec{y}}}_j)\kappa({{\vec{y}}}_{{j}}, {{\vec{y}}}_{\setminus{j}})$, which can be done in parallel\; { Compute $\tilde{\boldsymbol{\mu}}_{\ell+1}=\sum_{i} p(I=i| {{\vec{y}}}_{1:N+1})f({{\vec{y}}}_i)$\;} {Set $\boldsymbol{\mu}_{\ell+1} = \boldsymbol{\mu}_{\ell} + \frac{1}{\ell+1}\left(\tilde{\boldsymbol{\mu}}_{\ell+1} - \boldsymbol{\mu}_{\ell}\right)$}\; \hl{Set $v' = u_{\ell(Nd+1)} \in (0,1]$}\; \hl{If $v' \in (\gamma_{j -1}, \gamma_{j}]$, where $\gamma_{j} = \sum_{i=1}^j p(I=i| \vec{y}_{1:N+1})$ for $j= 1,...,N+1$ and $\gamma_0:=0$, set $I=j$}\; } \caption{Importance sampling MP-QMCMC \newline We highlight all altered code compared to (pseudo-random) IS-MP-MCMC, Algorithm
\ref{algorithm:importance_sampling_mp_mcmc}} \label{algorithm:importance_sampling_mp_qmcmc} \end{algorithm} \subsubsection{Asymptotic unbiasedness of IS-MP-QMCMC} \begin{corollary}[Asymptotic unbiasedness of IS-MP-QMCMC] \label{corollary:asymptptic_unbiasedness_is_mp_qmcmc} Under the conditions of
Theorem \ref{theorem:consistency_multi_prop_mcmc}, the IS-MP-QMCMC sequence of estimators $(\boldsymbol{\mu}_L)_{L \ge 1}$ from Algorithm \ref{algorithm:importance_sampling_mp_qmcmc} is asymptotically unbiased. \begin{proof} Due to the consistency of the underlying MP-QMCMC chain, we may argue analogously to Lemma \ref{lemma:asymptotic_unbiasedness_is}. \end{proof} \end{corollary} \subsubsection{Empirical results: Bayesian linear regression} \label{mp_qmcmc_empirical_results} Let us consider the standard linear regression problem\footnote{For the Python code of this simulation, we refer to \url{https://github.com/baba-mpe/MP-Quasi-MCMC}}, in which the conditional distribution of an observation $\vec{y}\in \mathbb{R}^n$, given a design matrix $X \in \mathbb{R}^{n\times d}$, is specified by \begin{align*} \vec{y} = X \vec{\beta} + \vec{\varepsilon}, \end{align*} where $\vec{\beta} \in \mathbb{R}^d$, and $\vec{\varepsilon} \sim \mathcal{N}(\vec{0},\sigma^2\mathbf{I}_n)$ denotes the $n$-dimensional noise term. Each row of the matrix $X$ is a predictor vector. The resulting likelihood function is given by \begin{align*} \pi(\vec{y}|\vec{\beta},X,\sigma^2) \propto (\sigma^2)^{-n/2} \exp\left( -\frac{1}{2\sigma^2}\left( \vec{y}-X\vec{\beta} \right)^T \left( \vec{y} - X \vec{\beta} \right) \right). \end{align*} In a Bayesian context, we are interested in the distribution of the weight vector $\vec{\beta}$ conditioned on the design matrix $X$, the observation $\vec{y}$, the noise variance $\sigma^2$ and some prior information on $\vec{\beta}$. More precisely, given the above, we would like to estimate the expectation of $\vec{\beta}$. Our a priori knowledge about $\vec{\beta}$ is expressed by the prior distribution \begin{align*} \pi(\vec{\beta}|\sigma^2) \propto |\det(\Sigma_0)|^{-1/2} \exp\left( -\frac{1}{2} \vec{\beta}^T \Sigma_0^{-1} \vec{\beta} \right), \end{align*} where $\Sigma_0 = \sigma^2/g \, (X^TX)^{-1}$ with $g=1/n$.
Thus, $\pi(\vec{\beta}|\sigma^2)$ is Zellner's g-prior according to \cite{zellner1986prior}. To estimate $\mathbb{E}_{\pi}[\vec{\beta}|\vec{y},X,\sigma^2]$ under the resulting posterior distribution, \begin{align*} \pi(\vec{\beta}|\vec{y},X,\sigma^2) \propto \pi(\vec{\beta}|\sigma^2)\pi(\vec{y}|\vec{\beta},X,\sigma^2), \end{align*} the importance sampling MP-QMCMC, described in Algorithm \ref{algorithm:importance_sampling_mp_qmcmc}, is applied. The underlying data consisting of $X$ and $\vec{y}$ is simulated according to $X \sim \mathcal{N}( \vec{0}, \Sigma_X)$ and $\vec{y} = X \vec{\beta}^* + \vec{\varepsilon}$, where $\vec{\beta}^* = (1,...,1)^T \in \mathbb{R}^d$, and $\Sigma_X \in \mathbb{R}^{n\times n}$ has non-negligible entries off its diagonal. Referring to Table \ref{table:results_bayesian_linear_regression1}, experiments are performed for dimensionalities between $d=1$ and $d=500$. The bias of the underlying MP-MCMC methods can hereby be computed exactly, as the posterior is available analytically and is given by the Gaussian $\pi(\vec{\beta}|\vec{y},X,\sigma^2)= \mathcal{N}(\vec{\mu},\Sigma)$, where \begin{align*} \vec{\mu} = \left( X^TX + \sigma^2\Sigma_0^{-1} \right)^{-1} \left( X^TX\vec{\hat{\beta}} + \sigma^2\Sigma_0^{-1}\vec{\mu}_0\right), \quad \text{and} \quad \Sigma = \sigma^2\left(X^TX + \sigma^2\Sigma_0^{-1}\right)^{-1}. \end{align*} Here, $\vec{\hat{\beta}} = \left( X^TX \right)^{-1}X^T\vec{y}$ denotes the ordinary least squares solution for $\vec{\beta}$, where we made use of the Moore-Penrose pseudo-inverse for $X^TX$, and $\vec{\mu}_0 = \vec{0}$ is the prior mean. To generate proposals we make use of the SmMALA kernel defined in \eqref{eq:smMALA_kernel}. Since samples from a current iteration are therefore dependent on samples from the previous iteration, the consistency of the resulting method using a CUD seed is not covered by Corollary \ref{corollary:asymptptic_unbiasedness_is_mp_qmcmc}.
However, as we can see in Figure \ref{fig:variance} and from the results for the MSE reductions given in Table \ref{table:results_bayesian_linear_regression1}, convergence of the MSE seems to hold true. Furthermore, using the importance sampling formulation of MP-QMCMC, together with the CUD driving sequence, results in an improved variance convergence rate of close to $n^{-2}$, compared to $n^{-1}$ when using an IID seed. \begin{figure}[h] \centering \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlinear/d100_emprVar_BiasSquare_25mcmc.eps} \end{subfigure} \caption{\small{Empirical variance and squared bias of the sample mean in $100$-dimensional Bayesian linear regression based on IS-MP-QMCMC (SmMALA) using a pseudo-random (PSR) vs.\ CUD (QMC) seed, resp., for increasing proposal numbers and sample sizes. The results are based on $25$ MCMC simulations, and the error bars correspond to twice a standard deviation}} \label{fig:variance} \end{figure} \begin{table}[h] \ra{1.1} \centering \caption{ Comparison of MSE convergence rates for IS-MP-MCMC (SmMALA) using a pseudo-random (PSR) vs.\ a CUD (QMC) seed, resp., in Bayesian linear regression for increasing dimensionality. 
Also displayed is the associated reduction factor in MSE obtained by using the deterministic (CUD) seed instead of the pseudo-random seed} \centering \resizebox{.55\textwidth}{!}{ \begin{tabular}{ @{} l @{\hspace{3mm}} c @{\hspace{3mm}} c @{} *4c @{}} \toprule \multirow{2}{*}{{Dimension}\hspace{1mm}} & \multicolumn{2}{c}{{MSE Rate}\hspace{-5mm}} & \multicolumn{4}{c}{{Reduction by using QMC}}\\ \cmidrule{2-3} \cmidrule{5-7} & {\hspace{3mm}{PSR}\hspace{3mm}} & {\hspace{3mm}{QMC}\hspace{3mm}} && {{$N=3$}} &{{$N=63$}} &{{$N=1023$}} \\ \midrule 1 & -1.06 & -1.90 && 2.5 & 24.0 & 508.0 \\ 2 & -1.09 & -1.97 && 1.6 & 45.7 & 319.8\\ 5 & -1.03 & -1.88 && 1.9 & 35.2 & 234.1\\ 10 & -1.03 & -1.89 && 2.1 & 26.4 & 375.4\\ 25 & -1.04 & -1.86 && 2.7 & 32.0 & 247.2\\ 50 & -1.04 & -1.89 && 2.2 & 26.2 & 271.3 \\ 100 & -1.04 & -1.88 && 2.5 & 27.2 & 269.2 \\ 250 & -1.03 & -1.90 && 1.8 & 31.5 & 263.7 \\ 500 & -1.04 & -1.79 && 2.7 & 15.7 & 173.3 \\ \bottomrule \end{tabular} \label{table:results_bayesian_linear_regression1} } \end{table} \subsection{Adaptive IS-MP-QMCMC} \label{subsec:adaptive_is_mp_qmcmc} Finally, we introduce an importance sampling method based on MP-QMCMC proposals with adaptation of the proposal distribution. The result is Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc}, a direct extension of Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc} to the general CUD case. \subsubsection{Algorithm description} \label{subsubsec:algorithm_description_adaptive_IS_mpqmcmc} In every MCMC iteration, the proposal distribution is updated. More precisely, the mean and covariance of the proposal distribution, which is assumed to be multivariate Gaussian, are determined in each iteration as running averages of the weighted mean and covariance estimates over all iterations up to and including the present one. The resulting mean estimate after the final iteration is the output of the algorithm.
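The weighted estimates and their running-average updates amount to simple bookkeeping; a minimal sketch of one mean/covariance update, assuming a toy configuration with $N+1=2$ states in $d=2$ dimensions (the numbers are illustrative; for simplicity the covariance here is taken around the weighted mean rather than the running mean used in Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc}):

```python
def weighted_moments(y, p):
    """Importance-weighted mean and covariance of the proposal set y
    (one row per state) under the stationary probabilities p of the
    auxiliary variable I."""
    d = len(y[0])
    mu = [sum(p[i] * y[i][j] for i in range(len(y))) for j in range(d)]
    cov = [[sum(p[i] * (y[i][a] - mu[a]) * (y[i][b] - mu[b])
                for i in range(len(y))) for b in range(d)] for a in range(d)]
    return mu, cov

def running_update(old, new, ell):
    # mu_{ell+1} = mu_ell + (tilde_mu_{ell+1} - mu_ell)/(ell+1), entrywise
    return [o + (n - o) / (ell + 1) for o, n in zip(old, new)]

tilde_mu, tilde_cov = weighted_moments([[1.0, 2.0], [3.0, 0.0]], [0.5, 0.5])
mu = running_update([0.0, 0.0], tilde_mu, 1)  # -> [1.0, 0.5]
```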
\begin{algorithm}[h] \SetAlgoLined \KwIn{Initialise starting point (proposal) $\vec{y}_1\in \Omega$, number of proposals $N$, auxiliary variable $I=1$, integrand $f$, initial mean estimate $\boldsymbol{\mu}_1 = \boldsymbol{\mu}_1(f)$, initial covariance estimate $\Sigma_1$ \hl{and number of MCMC iterations $L$}\;} \hl{Generate a CUD sequence $u_1, ..., u_{L(Nd+1)}\in (0,1)$}\; \For{\textnormal{each MCMC iteration $\ell=1,...,$\hl{$L$} }}{ \hl{Set $\vec{u} = (u_{(\ell-1)(Nd+1)+1}, ..., u_{(\ell-1)(Nd+1)+Nd}) \in (0,1)^{Nd}$}, and sample ${\vec{y}}_{\setminus I}$ conditioned on $I$ {and $\Sigma_\ell$}, i.e., draw $N$ new points from $\kappa_{\Sigma_\ell}({\vec{y}}_I, \cdot) = p({\vec{y}}_{\setminus I}|{\vec{y}}_I, \Sigma_\ell)$ \hl{by the inverse $\Psi_{\vec{y}_I}(\vec{u}|\Sigma_\ell)$} \; Calculate the stationary distribution of $I$ conditioned on ${\vec{y}}_{1:N+1}$ and $\Sigma_\ell$, i.e.\ $\forall$ $i=1,...,N+1$, $p(I=i|{{\vec{y}}}_{1:N+1}, \Sigma_\ell) = $ $\pi({{\vec{y}}}_i)\kappa_{\Sigma_\ell}({{\vec{y}}}_{{i}}, {{\vec{y}}}_{\setminus{i}}) / \sum_j \pi({{\vec{y}}}_j)\kappa_{\Sigma_\ell}({{\vec{y}}}_{{j}}, {{\vec{y}}}_{\setminus{j}})$, which can be done in parallel\; { Compute $\tilde{\boldsymbol{\mu}}_{\ell+1}=\sum_{i} p(I=i| {{\vec{y}}}_{1:N+1}, \Sigma_\ell)f({{\vec{y}}}_i)$}\; {Set $\boldsymbol{\mu}_{\ell+1} = \boldsymbol{\mu}_{\ell} + \frac{1}{\ell+1}\left(\tilde{\boldsymbol{\mu}}_{\ell+1} - \boldsymbol{\mu}_{\ell}\right)$}\; \hl{Set $v' = u_{\ell(Nd+1)} \in (0,1)$}\; \hl{If $v' \in (\gamma_{j -1}, \gamma_{j}]$, where $\gamma_{j} = \sum_{i=1}^j p(I=i| \vec{y}_{1:N+1}, \Sigma_\ell)$ for $j= 1,...,N+1$ and $\gamma_0:=0$, set $I=j$}\; {Compute $\tilde{{\Sigma}}_{\ell+1} = \sum_{i}p(I=i| {\vec{y}}_{1:N+1}, {{\Sigma}}_\ell)[{\vec{y}}_i-\vec{\mu}_{\ell+1}][{\vec{y}}_i-\vec{\mu}_{\ell+1}]^T$}\; { Set ${\Sigma}_{\ell+1} = {\Sigma_\ell} + \frac{1}{\ell+1}(\tilde{{\Sigma}}_{\ell+1} - {\Sigma}_{\ell})$}\; } \caption{Adaptive importance sampling MP-QMCMC \newline All code altered
compared to (pseudo-random) adaptive IS-MP-MCMC, Algorithm \ref{algorithm:adaptive_importance_sampling_mp_mcmc}, is highlighted} \label{algorithm:adaptive_importance_sampling_mp_qmcmc} \end{algorithm} \subsubsection{Asymptotic unbiasedness of adaptive IS-MP-QMCMC} \begin{corollary}[Asymptotic unbiasedness of adaptive IS-MP-QMCMC] Under the conditions of Theorem \ref{theorem:consistency_multi_prop_mcmc}, and under any of the conditions stated in Theorem \ref{thm:ergodicity_adap_mpmcmc_independent}, Theorem \ref{thm:ergodicity_adapt_mpmcmc_bounded} or Theorem \ref{thm:ergodicity_adaptive_mpmcmc_positive}, the sequence of estimators $(\boldsymbol{\mu}_L)_{L \ge 1}$ from Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc} is asymptotically unbiased. \begin{proof} Consistency of the adaptive MP-QMCMC follows from the respective theorem used together with Theorem \ref{theorem:consistency_multi_prop_mcmc}. Thus, we may argue analogously to the proof of Corollary \ref{cor:asympt_unbias_adapt_ismpmcmc}. \end{proof} \end{corollary} \subsection{Empirical Results: Simple Gaussian example} \label{adaptive_is_mp_qmcmc_simple_Gaussian_example} As a simple reference problem, we consider the estimation of the posterior mean in a generic $1$-dimensional numerical example, analogously to \cite{owen2005quasi}, in which the posterior is just a standard Gaussian. For this problem, \cite{owen2005quasi} compared the Metropolis-Hastings algorithm using a pseudo-random seed with the same algorithm using a QMC seed, in terms of the resulting MSEs of the estimated posterior mean. The independent sampler is given by $\mathcal{N}(0,2.4^2)$ and the random walk sampler by $\mathcal{N}(x, 2.4^2)$, where $x\in \mathbb{R}$ denotes the last accepted sample.
Here, we additionally compare those algorithms with IS-MP-QMCMC and its adaptive version, introduced in Section \ref{subsec:IS-MP-QMCMC} and Section \ref{subsec:adaptive_is_mp_qmcmc}, respectively. In the case of the independent sampler, we apply adaptivity within AIS-MP-QMCMC not only in the proposal covariance but also in the proposal mean, according to the estimate in line 6 of Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc}. For the random walk case, we do not make use of adaptivity in the mean, as the proposal sampler mean is simply a point sampled from the finite state chain in a given iteration. The low-discrepancy sequence used in \cite{owen2005quasi} is based on an LCG, while the QMC construction used here is based on the LFSR introduced in \cite{chen2012new}; see Section \ref{subsubsec:completely_uniformly_distributed_points} for details. Nevertheless, we obtain similar reductions in MSE between standard Metropolis-Hastings and its QMC-driven counterpart as \cite{owen2005quasi} did: for the independent sampler, the MSE is reduced by a factor $\approx 7.0$, and for the random walk sampler by a factor $\approx 2.3$. In the independent sampler case, the additional performance gain from the importance sampling approaches is significant: compared to standard Metropolis-Hastings, the maximum MSE reduction is $\approx 112.2$, i.e.\ more than an order of magnitude (factor $\approx 16.1$) beyond QMC-driven Metropolis-Hastings. Note that the performance of the multiple proposal methods increases with the number of proposals used, thereby requiring fewer iterations to reach the same total number of samples. In the case of random walk proposals, the additional gain is less substantial. Compared to standard Metropolis-Hastings we obtain a maximum MSE reduction of $\approx 6.3$, i.e.\ a decrease of $\approx 2.7$ compared to QMC-driven Metropolis-Hastings.
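To make the role of the driving sequence explicit, the following minimal sketch (an illustration, not the implementation used here) runs the independence-sampler Metropolis-Hastings chain of this example, targeting the standard Gaussian with proposal $\mathcal{N}(0, 2.4^2)$, from an explicit array of uniforms: passing IID uniforms recovers standard Metropolis-Hastings, while passing a CUD sequence of the same length would give its QMC-driven counterpart.

```python
import numpy as np
from statistics import NormalDist

def mh_independence(uniforms, scale=2.4):
    """Metropolis-Hastings targeting N(0,1) with the independence sampler
    N(0, scale^2), driven by an explicit sequence of uniforms: two per
    iteration, one inverted into a proposal, one for accept/reject."""
    nd = NormalDist()
    # log pi(x) - log q(x), up to constants that cancel in the ratio
    log_ratio = lambda x: -0.5 * x * x + 0.5 * (x / scale) ** 2
    x, chain = 0.0, []
    for u_prop, u_acc in uniforms.reshape(-1, 2):
        y = scale * nd.inv_cdf(u_prop)          # proposal by inversion
        if np.log(u_acc) < log_ratio(y) - log_ratio(x):
            x = y                               # accept
        chain.append(x)
    return np.array(chain)

# IID seed: standard Metropolis-Hastings.
rng = np.random.default_rng(1)
u = np.clip(rng.random(2 * 20000), 1e-12, 1 - 1e-12)  # keep inv_cdf finite
chain = mh_independence(u)
```

The sample mean of `chain` then estimates the posterior mean (here zero); swapping the seed array is the only change needed to move between the pseudo-random and QMC-driven variants.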
\begin{table}[h] \ra{1.3} \centering \caption{ Comparison of standard Metropolis-Hastings with different QMC-driven MCMC algorithms for independent and random walk proposals on a 1-dimensional Gaussian numerical example with $65535$ samples. The numbers in the brackets correspond to three times the standard deviation} \centering \resizebox{1.\textwidth}{!}{ \begin{tabular}{ @{} l @{\hspace{3mm}} c @{\hspace{3mm}} c @{} *3c @{}} \bottomrule \multirow{2}{*}{{Method}\hspace{1mm}} & \multicolumn{2}{c}{{Independence}\hspace{-5mm}} & \multicolumn{3}{c}{{Random Walk}}\\ \cmidrule{2-3} \cmidrule{5-6} & {\hspace{8mm}{Mean}\hspace{8mm}} & {\hspace{8mm}{MSE}\hspace{8mm}} && {{Mean}} &{{MSE}} \\ \midrule PSR Metropolis-Hastings & $-1.64\times 10^{-4}$ & $3.60 \times 10^{-5} (\pm 6.96 \times 10^{-6})$ && $-8.76\times 10^{-5}$ & $6.76 \times 10^{-5} (\pm 6.56 \times 10^{-6})$ \\ QMC Metropolis-Hastings & $1.96\times 10^{-4}$ & $5.17 \times 10^{-6} (\pm 1.61 \times 10^{-6})$ && $1.17 \times 10^{-4}$ & $2.88 \times 10^{-5} (\pm 6.68 \times 10^{-6})$ \\ IS-MP-QMCMC (4) & $-1.97\times 10^{-4}$ & $3.56 \times 10^{-6} (\pm 3.56 \times 10^{-7})$ && $-1.20 \times 10^{-3}$ & $2.74 \times 10^{-5} (\pm 7.70 \times 10^{-6})$ \\ Adapt.\ IS-MP-QMCMC (4) & $-1.64\times 10^{-4}$ & $4.31 \times 10^{-6} (\pm 2.94 \times 10^{-7})$ && $-1.44 \times 10^{-3}$ & $2.81\times 10^{-5} (\pm 8.11 \times 10^{-6})$ \\ IS-MP-QMCMC (32) & $-2.10\times 10^{-5}$ & $7.72 \times 10^{-7} (\pm 1.53 \times 10^{-7})$ && $-2.63 \times 10^{-4}$ & $1.07 \times 10^{-5} (\pm 1.52 \times 10^{-6})$ \\ Adapt.\ IS-MP-QMCMC (32) & $-1.15\times 10^{-4}$ & $7.83 \times 10^{-7} (\pm 1.60 \times 10^{-7})$ && $-5.62 \times 10^{-4}$ & $1.26 \times 10^{-5} (\pm 1.44 \times 10^{-6})$ \\ IS-MP-QMCMC (256) & $9.44\times 10^{-5}$ & $5.32 \times 10^{-7} (\pm 1.29 \times 10^{-7})$ && $-4.39 \times 10^{-4}$ & $1.21 \times 10^{-5} (\pm 2.78 \times 10^{-6})$ \\ Adapt.\ IS-MP-QMCMC (256) & $-1.29\times 10^{-5}$ & $3.21 \times 10^{-7} (\pm 6.17
\times 10^{-8})$ && $-3.29 \times 10^{-4}$ & $1.07 \times 10^{-5} (\pm 1.35 \times 10^{-6})$ \\ \bottomrule \end{tabular} \label{table:results_bayesian_linear_regression} } \end{table} \subsection{Empirical Results: Bayesian logistic regression} \label{adaptive_is_mp_qmcmc_empirical_results} We now apply the proposed adaptive importance sampling method, as described in Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc}, to the Bayesian logistic regression from Section \ref{subsubsec:emp_results_adaptive_IS_mp_mcmc}. As proposal sampler, we apply a Gaussian kernel that samples independently of previous samples and adaptively updates its mean and covariance according to the weighted estimates from Algorithm \ref{algorithm:adaptive_importance_sampling_mp_qmcmc}. Note that no gradient information about the posterior is needed in this case. We compare the performance of the QMC algorithm to its pseudo-random version in terms of the empirical variance of the posterior mean estimates. As a reference, we also perform the same experiments for standard Metropolis-Hastings and SmMALA Metropolis-Hastings. Thereby, we employ the same total number of samples $n$ as the respective multiple proposal algorithms produce, i.e.\ $n=LN$, where $L$ denotes the number of iterations and $N$ the number of proposals used. This guarantees a fair comparison between the single and multiple proposal algorithms. The results for the datasets of varying dimension and increasing proposal numbers (total numbers of samples) are displayed in Figure \ref{fig:emprVar_blinreg_plus_mh_smmala}. Again, we observe an improved convergence rate in the empirical variance of our estimator, which in many cases is close to $n^{-2}$. Further, Table \ref{table:results_bayesian_logistic_regression_emprVar} displays the associated reductions in empirical variance between adaptive IS-MP-QMCMC and the other algorithms.
Due to the improved convergence rate, the reductions increase with the number of proposals and thus with the total number of samples, leading to significant gains in all models and against all reference algorithms for large proposal numbers. In some situations, a reduction of more than $4$ orders of magnitude compared to standard Metropolis-Hastings can be observed, and of more than $2$ orders of magnitude compared to both SmMALA Metropolis-Hastings and pseudo-random IS-MP-MCMC. For every model and across all numbers of proposals employed, the reduction relative to the two reference algorithms is significant. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlogistic_added_MH_SmMALA/ripley_emprVar_25mcmc.eps} \subcaption{Ripley, $d=3$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlogistic_added_MH_SmMALA/pima_emprVar_25mcmc.eps} \subcaption{Pima, $d=8$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlogistic_added_MH_SmMALA/heart_emprVar_25mcmc.eps} \subcaption{Heart, $d=14$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlogistic_added_MH_SmMALA/australian_emprVar_25mcmc.eps} \subcaption{Australian, $d=15$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_bayesianlogistic_added_MH_SmMALA/german_emprVar_25mcmc.eps} \subcaption{German, $d=25$} \end{subfigure} \caption{\small{Empirical variance of adaptive IS-MP-QMCMC compared to adaptive IS-MP-MCMC, SmMALA Metropolis-Hastings (M-H SmMALA) and standard Metropolis-Hastings (M-H), resp., for the Bayesian logistic regression problem from Section \ref{subsubsec:emp_results_adaptive_IS_mp_mcmc}. Here, M-H SmMALA was tuned to an approximately optimal acceptance rate of $50$-$60 \%$, and M-H to $20$-$25 \%$. For the adaptive methods, a burn-in of between 0-8192 samples, increasing with dimensionality $d$, was discarded.
The results are based on $25$ MCMC simulations and the error bands correspond to three times the standard deviation}} \label{fig:emprVar_blinreg_plus_mh_smmala} \end{figure} \begin{table}[h] \ra{1.2} \centering \caption{ Ratio of empirical variances of adaptive IS-MP-QMCMC compared to adaptive IS-MP-MCMC, SmMALA Metropolis-Hastings (M-H SmMALA) and standard Metropolis-Hastings (M-H), resp., for the Bayesian logistic regression problem from Section \ref{subsubsec:emp_results_adaptive_IS_mp_mcmc}. Here, M-H SmMALA was tuned to an approximately optimal acceptance rate of $50$-$60 \%$, and M-H to $20$-$25 \%$.
The results are based on $25$ MCMC simulations} \centering \resizebox{1.\textwidth}{!}{ \begin{tabular}{ @{} l @{} c @{} c @{} *9c @{}} \bottomrule \multirow{2}{*}{} & \multicolumn{1}{c}{{Ad.\ IS-MP-QMCMC}} & \multicolumn{10}{c}{{Ratio in empirical variance for $N=$}} \\ \cmidrule{2-2} \cmidrule{4-12} & {{VS}} & & $4$& $8$& $16$ & $32$ & $64$ & $128$ & $256$ & $512$ & $1024$ \\ \midrule & {{Ad.\ IS-MP-MCMC}} & & $3.9$ & $8.1$ & $18.5$ & $15.8$ & $35.7$ & $97.7$ & $113.5$ & $274.7$ & $207.1$ \\ Ripley & {{M-H SmMALA}} & & $12.7$& $47.5$ & $84.0$ & $121.0$ & $179.9$ & $309.7$ & $479.4$ & $933.2$ & $880.7$ \\ & {{M-H}} & & $82.8$ & $142.8$ & $290.8$ & $356.8$ & $759.3$ & $1794.6$ &$2040.6$ & $4040.5$ & $3191.7$\\ \midrule & {{Ad.\ IS-MP-MCMC}} & & $6.2$& $7.6$ & $11.7$ & $27.5$ & $41.7$ & $67.6$ & $81.1$ & $120.2$ & $110.3$ \\ Pima & {{M-H SmMALA}}& \hspace{5mm} & $18.1$ & $32.0$ & $43.7$ & $110.8$ & $148.5$ & $244.7$ & $422.7$ & $430.8$ & $448.9$ \\ & {{M-H}} & & $211.3$ & $378.5$ & $445.3$ & $1038.2$ & $1837.7$ & $2739.3$ &$3561.0$ & $3568.3$ & $4808.5$\\ \midrule & {{Ad.\ IS-MP-MCMC}} & & $1.6$& $3.9$ & $1.2$ & $5.1$ & $10.1$ & $11.0$ & $19.8$ & $34.7 $ & $ 27.7$ \\ Heart & {{M-H SmMALA}}& \hspace{5mm} & $11.7$ & $25.6$ & $9.1$ & $54.2$ & $95.8$ & $93.2$ & $213.9$ & $342.0$ & $296.1$ \\ & {{M-H}} & & $55.2$ & $177.7$ & $54.8$ & $301.7$ & $629.2$ & $665.3$ &$1225.1$ & $1815.6$ & $1486.8$\\ \midrule & {{Ad.\ IS-MP-MCMC}} & & $1.4$& $4.1$ & $1.5$ & $7.5$ & $15.2$ & $13.9$ & $28.2$ & $42.1$ & $32.8$ \\ Australian & {{M-H SmMALA}}& \hspace{5mm} & $15.9$ & $46.7$ & $11.4$ & $46.6$ & $76.0$ & $125.1$ & $193.7$ & $367.5$ & $222.5$ \\ & {{M-H}} & & $359.6$ & $2664.2$ & $917.4$ & $7017.4$ & $9842.3$ & $10985.6$ &$21625.8$ & $34043.2$ & $16010.5$\\ \midrule & {{Ad.\ IS-MP-MCMC}} & & $1.0$& $2.2$ & $0.8$ & $2.6$ & $3.1$ & $6.1$ & $12.2$ & $9.4$ & $14.0$ \\ German & {{M-H SmMALA}}& \hspace{5mm} & $3.4$ & $8.1$ & $2.8$ & $10.0$ & $11.2$ & $26.5$ & $48.3$ & $46.5$ & $62.6$ \\ & 
{{M-H}} & & $92.1$ & $195.0$ & $64.8$ & $255.7$ & $301.0$ & $622.9$ &$1076.5$ & $1465.6$ & $1595.2$\\ \bottomrule \end{tabular} \label{table:results_bayesian_logistic_regression_emprVar} } \end{table} \subsection{Empirical Results: Bayesian inference in non-linear differential equations} \label{subsec:bayesian_inference_non_linear_odes} An important category of inverse problems concerns uncertainty quantification in dynamical systems given by a set of ordinary differential equations (ODEs). Typically, such a system can be formalised by $M$ coupled ODEs and a model parameter $\vec{\theta} \in \mathbb{R}^d$, which describes the dynamics of the system's state $\vec{x}\in \mathbb{R}^M$ in terms of its time derivative by $\mathrm{d}\vec{x}/ \mathrm{d}t = \vec{f}(\vec {x}, \vec{\theta}, t)$. Given state observations $\vec{y}(t)$ at $T$ distinct points in time, our aim is to infer the underlying parameter $\vec{\theta}$ and, more specifically, integral quantities $\int g(\vec{\theta}) \mathrm{d}\vec{\theta}$ for an integrable scalar-valued function $g$. An observation $\vec{y}(t)$ at time $t$ is usually subject to measurement error, which can be modelled as $\vec{y}(t) = \vec{x}(t) + \bm{\varepsilon}(t)$, where $\bm{\varepsilon}(t)\in \mathbb{R}^M$ denotes a suitable multivariate noise variable at time $t$. Often, $\bm{\varepsilon}(t)$ is Gaussian with zero mean and standard deviation $\sigma_m$ in the $m$th component, for $m=1,...,M$. Given $T$ distinct observations, the observed system can be summarised in matrix notation by $Y = X+E$, where $Y,X,E$ denote $T \times M$ matrices whose rows correspond to the observation process at the $T$ distinct points in time. To generate a sample $X$, one needs to solve the underlying set of ODEs given the model parameter $\vec{\theta}$ and an initial condition $\vec{x}_0$, i.e.\ $X = X(\vec{\theta}, \vec{x}_0)$.
If $\pi(\vec{\theta})$ denotes the prior for $\vec{\theta}$, the posterior density of $\vec{\theta}|Y$ can then be expressed as \begin{align} \pi(\vec{\theta}|Y) \propto \pi(\vec{\theta}) \prod_{m} \mathcal{M}\left(Y_{1:T,m} | X(\vec{\theta}, \vec{x}_0)_{1:T,m}, \Sigma_m \right). \end{align} In the following, experiments based on the adaptive importance sampling QMCMC scheme for multiple proposals introduced in Section \ref{subsec:adaptive_is_mp_qmcmc} are performed for two different ODE models, namely the Lotka-Volterra and the FitzHugh-Nagumo model. \subsubsection{Lotka-Volterra} \label{subsubsec:lotvol} The Lotka-Volterra equations are a set of two non-linear ODEs that describe the interaction between two species in a predator-prey relationship. Formally, this can be expressed as \begin{align} \frac{\mathrm{d}u}{\mathrm{d}t} &= \alpha u - \beta u v\\ \frac{\mathrm{d}v}{\mathrm{d}t} &= \gamma u v - \delta v, \end{align} where $u$ and $v$ represent the populations of prey and predators, respectively, and $\alpha, \beta, \gamma, \delta >0$ govern the interaction between the two species through $\mathrm{d}u/\mathrm{d}t$ and $\mathrm{d}v/\mathrm{d}t$. We used 400 data points generated by the model for $t\in [0,8]$. The model parameters were chosen as $\alpha = 1.8, \beta=0.5, \gamma=2.5$ and $\delta = 1$, and the initial conditions as $u(0)=10$ and $v(0)=5$. Gaussian noise with standard deviation $0.25$ was then added to the model state outcomes. In Figure \ref{fig:ode_data}, the underlying true state trajectories are displayed together with their noisy measurements.
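For concreteness, the synthetic dataset described above can be generated along the following lines; this is a sketch using a fixed-step fourth-order Runge-Kutta integrator (the choice of solver is ours and is not taken from the text).

```python
import numpy as np

def lotka_volterra_rhs(state, alpha=1.8, beta=0.5, gamma=2.5, delta=1.0):
    """Right-hand side of the Lotka-Volterra system stated above."""
    u, v = state
    return np.array([alpha * u - beta * u * v,
                     gamma * u * v - delta * v])

def rk4(rhs, y0, t):
    """Classical fourth-order Runge-Kutta integration on the time grid t."""
    y = np.empty((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = rhs(y[i])
        k2 = rhs(y[i] + 0.5 * h * k1)
        k3 = rhs(y[i] + 0.5 * h * k2)
        k4 = rhs(y[i] + h * k3)
        y[i + 1] = y[i] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 400)                 # 400 observation times on [0, 8]
X = rk4(lotka_volterra_rhs, np.array([10.0, 5.0]), t)  # true states (u, v)
Y = X + rng.normal(scale=0.25, size=X.shape)           # noisy data Y = X + E
```

Evaluating the posterior for a candidate $\vec{\theta}$ then amounts to re-solving the system with those parameters and scoring $Y$ under the Gaussian noise model.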
\subsubsection{FitzHugh–Nagumo} \label{subsubsec:fitznag} The FitzHugh-Nagumo model is a set of two non-linear ODEs that describe the dynamics of an excitable system in terms of two states, namely a membrane voltage $u$ and a recovery variable $v$, defined by \begin{align} \frac{\mathrm{d}u}{\mathrm{d}t} &= \gamma\left( u - \frac{u^3}{3} + v \right)\\ \frac{\mathrm{d}v}{\mathrm{d}t} &= -\left( \frac{u-\alpha + \beta v}{\gamma} \right). \end{align} Here, $\alpha, \beta$ and $\gamma$ act as scaling parameters and determine the value of the unstable equilibrium. The underlying data consist of 200 data points produced by the FitzHugh-Nagumo model for $t\in [0,2]$ with model parameters $\alpha = 0.5, \beta=0.5, \gamma=1.5$ and initial conditions $u(0)=-1$ and $v(0)=1$. Gaussian noise with standard deviation $1$ was then added to the model outcomes. Figure \ref{fig:ode_data} (b) shows the model outcomes and the associated noisy observations. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_lotkavolterra/DataBothLotVol_new.eps} \subcaption{Lotka-Volterra} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_fitznag/DataBothLotVol.eps} \subcaption{FitzHugh-Nagumo} \end{subfigure} \caption{\small{Output for both states $u$ and $v$ in (a) the Lotka-Volterra model with parameters $\alpha = 1.8, \beta=0.5, \gamma=2.5$ and $\delta = 1$, and in (b) the FitzHugh-Nagumo model with parameters $\alpha=0.5, \beta=0.5$ and $\gamma=1.5$, respectively; the dots correspond to the respective noisy data}} \label{fig:ode_data} \end{figure} \subsubsection{Numerical results} The performance of using QMC numbers as the driving sequence, compared to pseudo-random numbers, is investigated numerically for the adaptive IS-MP-MCMC algorithm.
As a reference, the respective simulations are also performed for a random-walk Metropolis-Hastings within Gibbs algorithm, which corresponds to performing a Metropolis-Hastings step within each directional component update of a Gibbs sampler. The standard Metropolis-Hastings algorithm suffers severely from poor mixing on the considered problems, and therefore did not qualify as a performance reference. To ensure a fair comparison, the Metropolis-Hastings within Gibbs algorithm produces the same number $n$ of total samples as the multiple proposal algorithms, i.e.\ $n=LN$, with $L$ denoting the number of iterations and $N$ the number of proposals. Non-linear ODEs generally produce corresponding non-linear features in the posterior distribution, which can result in the emergence of multiple local maxima. It is therefore important to ensure that the underlying MCMC method does not dwell in a wrong mode. However, to allow for a comparison of sampling efficiency measured by the empirical variance of estimates, we initialise the respective MCMC methods at the true mode. As initial proposal mean and covariance in the adaptive IS-MP-MCMC algorithms, rough posterior mean and covariance estimates were used. The former further served as the initial value for the Metropolis-Hastings within Gibbs algorithm, whose step sizes for individual components were chosen to meet an acceptance rate between $20$-$40 \%$. The outcomes of the numerical experiments associated with the inference problems for the ODE models introduced above are shown in Figure \ref{fig:emprVar_ode_plus_mig} and Table \ref{table:results_ode_inference_emprVar}. As a prior, a Gamma distribution with shape parameter $1$ and scale parameter $3$, truncated at zero, was employed in both model problems. A clear improvement in the rate of convergence, which is close to $n^{-2}$ in both the Lotka-Volterra and the FitzHugh-Nagumo case, can be observed.
In addition, we observe significant reductions in empirical variance for increasing numbers of proposals for adaptive IS-MP-QMCMC compared to its pseudo-random version and the reference Metropolis-Hastings within Gibbs algorithm. Compared to the latter, a maximal reduction of over $6$ orders of magnitude could be achieved. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_lotkavolterra/emprVar_10mcmc_new.eps} \subcaption{Lotka-Volterra} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{./plots_fitznag/emprVar_10mcmc_plus_mig_smFont.eps} \subcaption{FitzHugh-Nagumo} \end{subfigure} \caption{\small{Empirical variance of the posterior mean estimate associated with the (a) Lotka-Volterra and (b) FitzHugh-Nagumo model inference problem, using a pseudo-random (PSR) vs.\ CUD (QMC) seed, resp., for increasing proposal numbers and sample sizes; also displayed are the results for the random-walk Metropolis-Hastings within Gibbs as a reference, tuned to an acceptance rate between $20$-$40\%$. The results are based on $10$ MCMC simulations. The error bands correspond to twice the standard deviation}} \label{fig:emprVar_ode_plus_mig} \end{figure} \begin{table}[h] \ra{1.2} \centering \caption{ Ratio of empirical variances of adaptive IS-MP-QMCMC compared to adaptive IS-MP-MCMC and random-walk Metropolis-Hastings within Gibbs (M-H in Gibbs), resp., for the ODE model inference problems from Sections \ref{subsubsec:lotvol} and \ref{subsubsec:fitznag}, respectively. Here, Metropolis-Hastings within Gibbs was tuned to an acceptance rate between $20$-$40\%$.
The results are based on $10$ MCMC simulations} \centering \resizebox{1.\textwidth}{!}{ \begin{tabular}{ @{} l @{} c @{} c @{} *9c @{}} \toprule \multirow{2}{*}{} & \multicolumn{1}{c}{{Ad.\ IS-MP-QMCMC}} & \multicolumn{10}{c}{{Ratio in empirical variance for $N=$}} \\ \cmidrule{2-2} \cmidrule{4-12} & {{VS}} & & $4$& $8$& $16$ & $32$ & $64$ & $128$ & $256$ & $512$ & $1024$ \\ \midrule \multirow{ 2}{*}{Lotka-Volterra} & {{Ad.\ IS-MP-MCMC}} & \hspace{5mm} & $3.9$ & $1.9$ & $7.3$ & $16.5$ & $3.5$ & $31.6$ & $62.7$ & $91.5$ & $76.2$ \\ & {{M-H in Gibbs}} & & $389.3$& $5129.5$ & $32677.7$ & $61495.1$ & $127050.1$ & $381491.6$ & $358768.2$ & $1559126.1$ & $196450.0$ \\\midrule \multirow{ 2}{*}{FitzHugh-Nagumo} & {{Ad.\ IS-MP-MCMC}} & & $5.9$ & $2.7$ & $7.6$ & $24.0$ & $21.1$ & $70.5$ & $33.5$ & $70.9$ & $209.6$ \\ & {{M-H in Gibbs}} & & $540.3$ & $2455.1$ & $2160.8$ & $7005.2$ & $12887.9$ & $8128.5$ & $57942.5$ & $55702.0$ & $199902.3$ \\ \bottomrule \end{tabular} \label{table:results_ode_inference_emprVar} } \end{table} \subsection{Intuition for increased convergence using importance sampling and QMC combined} Vanilla MP-MCMC has, independently of the underlying driving sequence, an acceptance mechanism that works similarly to classical Metropolis-Hastings: some proposals are accepted while others are rejected, according to some acceptance ratio. This procedure can be viewed as introducing a discontinuity in the mapping between seed and actual samples. This can be explained more precisely as follows. Every proposal of dimensionality $d$ is generated based on a tuple of size $d$ from the underlying driving sequence. If the driving sequence is QMC, then it is homogeneously distributed on the unit interval, and consecutive tuples of size $d$ are homogeneously distributed on the unit hypercube. To reject a proposal means to discard the underlying tuple from the driving sequence that created it. However, discarding points means, figuratively speaking, creating holes in the otherwise equidistributed point set, which damages its homogeneity.
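This effect is easy to demonstrate outside of MCMC; in the following illustrative sketch, a van der Corput sequence stands in for the CUD driving sequence and random thinning stands in for rejections (both are simplifications of our actual setting):

```python
import numpy as np

def van_der_corput(n, base=2):
    # radical-inverse (van der Corput) low-discrepancy sequence in [0, 1)
    seq = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += (k % base) * f
            k //= base
            f /= base
        seq[i] = x
    return seq

n = 2**14
u = van_der_corput(n)
g = u**2                                  # integrand with exact integral 1/3

full_err = abs(g.mean() - 1 / 3)          # all points kept: QMC-level accuracy

rng = np.random.default_rng(1)
kept = rng.random(n) < 0.5                # mimic rejections: drop about half the points
thinned_err = abs(g[kept].mean() - 1 / 3) # the "holes" destroy the equidistribution,
                                          # typically costing orders of magnitude
```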
Since this homogeneity is the core of the performance gain brought by QMC, we cannot expect vanilla MP-QMCMC to converge at a faster rate. In contrast to this, in the extension to importance sampling, every proposal is accepted and weighted such that the entire underlying seed is respected. Therefore, the previously mentioned discontinuity is removed. Convergence rates close to $n^{-2}$, known from vanilla QMC, are then possible, as our numerical experiments illustrate. \subsection{Pros and Cons of using CUD numbers in MCMC} Whenever the regularity conditions formulated in Section \ref{subsection:consistency} are satisfied, consistency of the resulting MCMC algorithm is guaranteed. However, in applications where these do not hold true, e.g.\ in the algorithm used in Section \ref{mp_qmcmc_empirical_results}, numerical experiments suggest that the resulting samples asymptotically still follow the target distribution. In none of the experiments we performed did the CUD version of MP-MCMC, or any of its extensions (importance sampling, MALA, non-reversible kernels, etc.), perform significantly worse than standard MCMC using a pseudo-random seed. However, in many simulations the benefit of using CUD numbers as the driving sequence compared to IID numbers is substantial, see Sections \ref{mp_qmcmc_empirical_results}, \ref{adaptive_is_mp_qmcmc_simple_Gaussian_example}, \ref{adaptive_is_mp_qmcmc_empirical_results} and \ref{subsec:bayesian_inference_non_linear_odes}. The downside of using CUD numbers is that one relies on a construction that must first be developed, implemented and computed prior to running any MCMC experiments. The implementation might become expensive when the number of required samples is large. However, once a finite sequence is constructed, it can be reused for as many applications and experiments as one requires. Further, many CUD constructions do not allow a user-specified length of the sequence, but only certain fixed lengths.
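One standard family of CUD constructions is based on linear feedback shift registers (LFSRs); a maximal-length LFSR with $s$ bits cycles through exactly $2^s-1$ nonzero states, which makes the restriction to certain fixed lengths concrete. A minimal sketch (the tap choices here are for illustration only, not the construction used in our experiments):

```python
def lfsr_period(taps, nbits, seed=1):
    # Fibonacci LFSR over GF(2): shift left, feed back the XOR of the tapped bits.
    state, mask = seed, (1 << nbits) - 1
    for step in range(1, 2**nbits + 1):
        fb = 0
        for tp in taps:
            fb ^= (state >> tp) & 1
        state = ((state << 1) | fb) & mask
        if state == seed:
            return step
    return None

# A 4-bit register with primitive feedback (taps at bits 3 and 2, corresponding
# to x^4 + x^3 + 1) visits all 2^4 - 1 = 15 nonzero states before repeating;
# a non-primitive tap choice gives a much shorter cycle.
maximal = lfsr_period([3, 2], 4)   # 15
short = lfsr_period([3], 4)        # 4
```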
For example, some CUD constructions correspond to PRNGs with a short period $p$. In the case of this work, a linear feedback shift register (LFSR) construction was used, which has a period of $p=2^s-1$ for $s \in \mathbb{N}$. Thus, only sample sizes that differ by roughly a factor of two can be achieved, which limits the general applicability in the case of user-defined sample sizes. \section{Discussion and Conclusions} There is a rich history of research in QMC, importance sampling and MCMC methods; however, these have tended to take different paths and have developed into somewhat separate research communities. This paper adds to the growing literature that ties together these complementary directions, and provides a demonstration that combining these ideas can result in computationally efficient and practically useful methodology, which we hope will prompt much more research into the intersection of these Monte Carlo techniques. \subsection{Contributions} We have significantly built upon a recent generalisation of Metropolis-Hastings, which allows for straightforward parallelisation of MCMC by making multiple proposals at each iteration, and we have proposed numerous methodological extensions and proven some fundamental theoretical results. In particular, we investigated the use of non-reversible and optimised transition kernels within the proposal sub-sampling step of this method, and compared the relative performance of these approaches through a simulation study. We then extended this basic algorithm to make use of adaptivity of the proposal kernel and importance sampling, which can be considered as the limiting case of sampling from the finite state Markov chain on the multiple proposals. In addition, for the seven proposed algorithms, we have proven a variety of theoretical results, including limit theorems, asymptotic unbiasedness of the proposed importance sampling based estimators and ergodicity of the proposed adaptive samplers.
We then showed how this general framework offers a principled and effective way of incorporating CUD numbers from the quasi-Monte Carlo literature into Markov chain Monte Carlo. In the case of using a driving sequence of CUD numbers, we proved consistency and asymptotic unbiasedness of our method under suitable regularity conditions. Furthermore, we demonstrated that the use of importance sampling based estimators together with CUD numbers results in an MCMC method whose mean squared error empirically converges at a rate closer to $n^{-2}$ than to the standard $n^{-1}$ of pseudo-randomly driven MCMC algorithms. We argue that importance sampling removes the discontinuity induced by the acceptance threshold inherent in standard Metropolis-Hastings, thereby incorporating all points from the underlying homogeneously distributed CUD driving sequence. This leads to a smaller discrepancy, which is generally the basis of the increased performance when using QMC over pseudo-random numbers. \subsection{Further research directions} The work we have presented in this paper offers many interesting and potentially very useful theoretical, methodological and practical avenues for further research that tie together ideas from the MCMC and QMC literature. We have provided strong numerical evidence of the increased convergence rates that are possible by incorporating CUD numbers within a highly parallelisable multiple proposal MCMC algorithm, and so a natural question is whether theoretical results on the convergence rate for IS-MP-QMCMC are possible. While it appears to be a very challenging problem to tackle theoretically, some initial results on the convergence rate of CUD driven MCMC are given in \cite{chen2011consistencythesis}, which may potentially be extended. The proofs we derive in this paper depend on the existence of a coupling region, which exists for an independent sampler, as well as asymptotically when incorporating adaptivity.
Empirical results suggest that this should also hold for dependent proposals; however, a coupling region does not appear to be available for such an argument. Further research could therefore investigate whether there exists a consistency proof that does not rely on the coupling region condition, perhaps based on the contraction condition in \cite{chen2011consistency} instead. From a methodological perspective, there are many ways in which our work could be extended, for example investigating the use of variational approximations as proposal distributions, or indeed the use of optimal transport approximations for highly non-Gaussian target densities. With the advent of reproducing kernel Hilbert space (RKHS) methods and low-discrepancy sequences tailored to the underlying integrand, more challenging integration problems in very high-dimensional spaces can be more efficiently solved using QMC, circumventing the curse of dimensionality. In light of this development, the use of QMC in MCMC becomes significantly more relevant, and this link is worthy of further investigation. Furthermore, in this work we have used one particular construction of CUD numbers, although there is much research currently taking place in this area. Could other constructions perhaps offer even greater efficiency gains within the proposed framework? Finally, there are many practical avenues for the research directions suggested above, including investigation of this methodology across a wider range of statistical models, experimental comparisons of different CUD constructions, and eventually the development of robust implementations in software to allow a wider range of practitioners to benefit more easily from the increased convergence rates and parallelisation offered by these multiple proposal quasi-Markov chain Monte Carlo methods. \bibliographystyle{alphamod}
\section{HQET} \pagenumbering{arabic} The heavy quark effective theory (HQET) is a limit of the theory of the strong interactions appropriate for hadrons containing a single heavy quark $Q$. In such hadrons the light degrees of freedom typically have momentum of order $\Lambda_{QCD}$. Interactions of the heavy quark with the light degrees of freedom cause changes in its four-velocity $v$ of order $\Delta v \sim \Lambda_{QCD}/m_Q$. Consequently for these hadrons it is a reasonable approximation to take the limit of QCD where $m_Q \rightarrow \infty$ with the heavy quark's four-velocity fixed. The part of the QCD Lagrange density involving the heavy quark field is \begin{equation}\label{1} {\cal L} = \bar Q (i/\!\!\!\!D - m_Q) Q. \end{equation} The QCD heavy quark field is related to its HQET counterpart by \begin{equation}\label{2} Q = e^{-im_{Q} v \cdot x} \left[1 + {i/\!\!\!\!D\over 2m_Q} + \ldots\right] Q_v, \end{equation} where \begin{equation}\label{3} /\!\!\!v Q_v = Q_v. \end{equation} Putting Eq.~(\ref{2}) into the QCD Lagrange density and using eq.~(\ref{3}) yields \begin{equation}\label{4} {\cal L} = {\cal L}_{HQET} + \delta_1 {\cal L} + \ldots, \end{equation} where the HQET Lagrange density is~\cite{eichten1} \begin{equation}\label{5} {\cal L}_{HQET} = \bar Q_v i v \cdot D Q_v. \end{equation} If there are several heavy flavors a sum over different flavors of heavy quarks is understood. This Lagrange density is independent of the heavy quark mass and spin and has the spin-flavor symmetry~\cite{isgur1} of HQET. $\delta_1 {\cal L}$ contains corrections to the $m_Q \rightarrow \infty$ limit suppressed by a single power of the heavy quark mass. Explicitly~\cite{eichten2} \begin{equation}\label{6} \delta_1 {\cal L} = {1\over 2m_Q} [O_{kin,v}^{(Q)} + O_{mag,v}^{(Q)}], \end{equation} where the kinetic energy term is \begin{equation}\label{7} O_{kin,v}^{(Q)} = \bar Q_v (i D_\perp)^2 Q_v. 
\end{equation} Here, $D_\perp^\mu = D^\mu - v^\mu (v \cdot D)$ are the components of the covariant derivative perpendicular to the four-velocity. The chromomagnetic energy term is \begin{equation}\label{8} O_{mag,v}^{(Q)} = \bar Q_v {g\over 2} \sigma_{\alpha\beta} G^{\alpha\beta A} T^A Q_v. \end{equation} Note that the part of $\delta_1{\cal L}$ involving $O_{kin,v}^{(Q)}$ breaks the flavor symmetry but not the spin symmetry. $O_{mag,v}^{(Q)}$ breaks both symmetries. In the limit $m_Q \rightarrow \infty$ the angular momentum of the light degrees of freedom, \begin{equation}\label{9} \vec S_\ell = \vec J - \vec S_Q, \end{equation} is conserved~\cite{isgur2}. Therefore, in this limit, hadrons occur in doublets with total angular momentum \[ j_\pm = s_\ell \pm 1/2. \] Here $\vec J^2 = j (j + 1)$ and $\vec S_\ell^2 = s_\ell (s_\ell + 1)$. In the case of mesons with $Q\bar q$ flavor quantum numbers, the ground state doublet has spin-parity of the light degrees of freedom $s_\ell^{\pi_{\ell}} = {1\over 2}^-$. For $Q = c$ this doublet contains the $D$ and $D^*$ mesons with spin 0 and 1 respectively and for $Q = b$ they are the $B$ and $B^*$ mesons. An excited doublet of mesons with $s_\ell^{\pi_{\ell}} = {3\over 2}^+$ has also been observed. In the $Q = c$ case this doublet contains the $D_1 (2420)$ and $D_2^* (2460)$ with spin 1 and spin 2 respectively. The analogous $Q = b$ mesons are called $B_1$ and $B_2^*$. \section{NRQCD} For quarkonia (i.e., $Q\bar Q$ hadrons) physical properties are usually predicted using an expansion in $v/c$ where $v$ is the magnitude of the heavy quarks' relative velocity and $c$ is the speed of light~\cite{bodwin}. So the appropriate limit of QCD to take in this case is the $c \rightarrow \infty$ limit~\cite{grinstein}. In eq.~(\ref{1}) the speed of light was set to unity. 
Making the factors of $c$ explicit, it becomes \begin{equation}\label{10} {\cal L} = c \bar Q (i/\!\!\!\!D - m_Q c) Q, \end{equation} where \begin{equation}\label{11} \partial_0 = {1\over c} {\partial\over\partial t}, \end{equation} and the covariant derivative \begin{equation}\label{12} D_\mu = \partial_\mu + {ig\over c} A_\mu^A T^A. \end{equation} Note that the strong coupling $g$ has the same units as $\sqrt{c}$. The full QCD heavy quark field $Q$ is related to its NRQCD counterpart by \begin{equation}\label{13} Q = e^{-im_{Q} c^{2} t} \left[1 + {i/\!\!\!\!D_\perp\over 2m_Q c} + \ldots \right]\left( {\psi\atop 0} \right), \end{equation} where $\psi$ is a two component Pauli spinor and $D_\perp = (0, {\bf D}_\perp)$. Putting eq.~(\ref{13}) into eq.~(\ref{10}) gives \begin{equation}\label{14} {\cal L} = {\cal L}_{NRQCD} + \ldots, \end{equation} where \begin{equation}\label{15} {\cal L}_{NRQCD} = \psi^\dagger \left(i \left({\partial\over\partial t} + ig A_0^A T^A\right) + {\vec\nabla^2\over 2m_Q} \right) \psi. \end{equation} The $c \rightarrow \infty$ limit of QCD is called non-relativistic quantum chromodynamics (NRQCD). Since the kinetic energy appears as a leading term in NRQCD, this theory does not have a heavy quark flavor symmetry; however, it still has a heavy quark spin symmetry. The gluon field $A_0$ in eq.~(\ref{15}) is not a propagating field. It gives rise to a Coulomb potential between the heavy quarks. All the interactions of the propagating transverse gluons with the heavy quarks are suppressed by powers of $1/c$. The leading interaction of the propagating transverse gluons with the heavy quarks is also invariant under heavy quark spin symmetry. \section{Special Role of the Bottom Quark} The $c, b$ and $t$ quarks can be considered heavy. Unfortunately, the top quark is so heavy that it decays before forming a hadron. Heavy quark symmetry is not a useful concept for the $t$-quark.
The charm quark mass is not large enough for one to be confident that predictions based on heavy quark symmetry will work well. For charmonium $v^2/c^2 \sim 1/3$ and $\Lambda_{QCD}/m_c \sim 1/7$. However, for the b-quark, corrections to predictions based on heavy quark symmetry should be small. This ``special role'' of the b-quark is illustrated nicely by comparing with experiment the predictions of heavy quark symmetry for fragmentation. Heavy quark symmetry implies that the probability $P_{h_{Q} \rightarrow h_{s}}^{(H)}$ for heavy quark $Q$ with spin along the fragmentation axis (i.e., helicity) $h_Q$ to fragment to a hadron $H$ with spin of the light degrees $s_\ell$, total spin $s$ and helicity $h_s$ is~\cite{falk} \begin{equation}\label{16} P_{h_{Q} \rightarrow h_{s}}^{(H)} = P_{Q \rightarrow s_{\ell}} p_{h_{\ell}} |\langle s_Q, h_Q; s_\ell, h_\ell| s, h_s \rangle |^2. \end{equation} In eq.~(\ref{16}) $P_{Q \rightarrow s_{\ell}}$ is the probability for the heavy quark to fragment into the doublet with spin of the light degrees of freedom $s_\ell$. $p_{h_{\ell}}$ is the probability for the helicity of the light degrees of freedom to be $h_\ell = h_s - h_Q$, given that the heavy quark fragments to this doublet. Parity invariance of the strong interactions implies that \begin{equation}\label{17} p_{h_{\ell}} = p_{-h_{\ell}}, \end{equation} and the definition of a probability implies that \begin{equation}\label{18} \sum_{h_{\ell}} p_{h_{\ell}} = 1. \end{equation} The constraints in eqs.~(\ref{18}) and~(\ref{17}) imply that there are $s_\ell-1/2$ independent probabilities $p_{h_{\ell}}$. 
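The Clebsch--Gordan weights entering eq.~(\ref{16}) can be evaluated symbolically; the following sketch (assuming \texttt{sympy}) computes the relative fragmentation probabilities for the ground-state doublet with $p_{1/2}=p_{-1/2}=1/2$ and $P_{Q \rightarrow s_{\ell}}$ set to $1$:

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)
p_light = {half: half, -half: half}   # p_{1/2} = p_{-1/2} = 1/2, cf. eqs. (17), (18)

def frag_prob(s, h_s, h_Q=half):
    # relative probability from eq. (16) for the ground-state doublet (s_l = 1/2):
    # p_{h_l} |< s_Q h_Q ; s_l h_l | s h_s >|^2 with h_l = h_s - h_Q
    h_l = h_s - h_Q
    if h_l not in p_light:
        return Rational(0)
    amp = CG(half, h_Q, half, h_l, s, h_s).doit()
    return p_light[h_l] * amp**2

# ratios D : D*(+1) : D*(0) : D*(-1) for a heavy quark with helicity +1/2
probs = [frag_prob(0, 0), frag_prob(1, 1), frag_prob(1, 0), frag_prob(1, -1)]
```

The resulting ratios $1/4 : 1/2 : 1/4 : 0$ are those quoted below for the $c\bar q$ ground-state doublet.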
For the $c\bar q$ ground state meson doublet, $p_{1/2} = p_{-1/2} = 1/2$, and the relative fragmentation probabilities are \begin{equation}\label{19} \begin{array}{ccccccc} P_{1/2 \rightarrow 0}^{(D)} &:& P_{1/2 \rightarrow 1}^{(D^{*})} &:& P_{1/2 \rightarrow 0}^{(D^{*})} &:& P_{1/2 \rightarrow -1}^{(D^{*})} \\[10pt] {1\over 4} &:& {1 \over 2} &:& {1\over 4} &:& 0 \end{array} \end{equation} For the excited $s_\ell^{\pi_{\ell}} = {3 \over 2}^+$ doublet the relative fragmentation probabilities can be expressed using eq.~(\ref{16}) in terms of $w_{3/2}$. This parameter is defined by $p_{3/2} = p_{- 3/2} = (1/2)\, w_{3/2}$ and $p_{1/2} = p_{- 1/2} = (1/2) (1 - w_{3/2})$. In the charm system, only part of eq.~(\ref{19}) is in agreement with experiment. While the experimental value for the relative probability to fragment to longitudinal and transverse $D^*$ helicities agrees with eq.~(\ref{19}), the experimental values for the probabilities to fragment to $D$ and $D^*$ are approximately equal~\cite{falk} instead of in the ratio 1:3 that eq.~(\ref{19}) predicts. This discrepancy is probably due to the $D^*$-$D$ mass difference, which suppresses fragmentation to the $D^*$. Recent LEP data shows that predictions for fragmentation based on heavy quark symmetry work better in the b-quark case~\cite{eigen}. The experimental values for the probabilities to fragment to the $B$ and $B^*$ are in the ratio 1:3. Experimental information on $D^{**}$ production provides the bound $w_{3/2} < 0.24$~\cite{falk}. It would be very interesting to have an experimental determination of the Falk-Peskin fragmentation parameter $w_{3/2}$. Heavy quark spin symmetry also makes predictions for the alignment of quarkonia produced by gluon fragmentation. At leading order in $v/c$, the gluon fragments to $Q\bar Q$ in a color singlet configuration.
Two hard gluons occur in the final state to conserve color and charge conjugation, giving a fragmentation probability to $^3S_1$ quarkonia of order $(\alpha_s (m_Q)/\pi)^3 (v/c)^3$. However, a term higher order in $v/c$ is much more important because it is lower order in $\alpha_s (m_Q)/\pi$. The gluon can fragment to the $Q\bar Q$ pair in a color octet with two soft propagating NRQCD gluons in the final state (each with typical momentum of order $m_Q v (v/c)$ in the quarkonium rest frame). This color octet process~\cite{braaten} gives a contribution to the $^3S_1$ fragmentation probability of order $(\alpha_s (m_Q)/\pi) (v/c)^7$. The fragmenting gluon has large energy (compared with $m_Q$) and is almost real. Real gluons are transversely aligned. Because the leading interactions of the NRQCD propagating gluons preserve spin symmetry, the final state $^3 S_1$ quarkonium is also transversely aligned~\cite{cho}. (There are $\alpha_s (m_Q)$ and $v/c$ corrections~\cite{beneke1} that reduce this alignment.) It may be possible to test this prediction in the $Q = c$ case from large $p_\perp$ data on $J/\psi$ and $\psi'$ production at the Tevatron~\cite{beneke2}. \section{$B \rightarrow D_1 (2420) \lowercase{e}\bar\nu_{\lowercase{e}}$ and $B \rightarrow D_2^* (2460) \lowercase{e}\bar\nu_{\lowercase{e}}$ Decay} Semileptonic B decays have been extensively studied. The semileptonic decays $B \rightarrow D e\bar\nu_e$ and $B \rightarrow D^* e\bar\nu_e$ have branching ratios of $(1.8 \pm 0.4)\%$ and $(4.6 \pm 0.3)\%$, respectively~\cite{particle}. They amount to about 60\% of the semileptonic decays. The differential decay rates are determined by matrix elements of the $b \rightarrow c$ weak axial-vector and vector currents. These matrix elements are usually written in terms of Lorentz scalar form factors and the differential decay rates are expressed in terms of them.
For comparison with the predictions of HQET, it is convenient to write the form factors in terms of $w = v \cdot v'$. In the limit $m_Q \rightarrow \infty$, heavy quark spin symmetry implies that all six form factors can be written in terms of a single function of $w$~\cite{isgur1}. Furthermore, heavy quark flavor symmetry implies that this function is normalized to unity~\cite{isgur1,nussinov} at zero recoil, $w = 1$. The success of these predictions~\cite{cleo} indicates that in this case treating the charm quark mass as large is a reasonable approximation. At order $1/m_{c,b}$, several new functions occur, but the normalization of the zero recoil matrix elements is preserved. In the $m_Q \rightarrow \infty$ limit, zero recoil matrix elements of the weak axial vector and vector currents from the B-meson to any excited charmed meson vanish because of heavy quark spin symmetry. Since most of the phase space for such decays is near zero recoil (e.g., for B decay to the $s_\ell^{\pi_{\ell}} = {3\over 2}^+$ mesons $D_1 (2420)$ and $D_2^*(2460)$, $1 < w < 1.3$), the $\Lambda_{QCD}/m_{c,b}$ corrections are very important. The decay $B \rightarrow D_1 e\bar\nu_e$ has been observed. CLEO and ALEPH, respectively, find the branching ratios~\cite{aleph} $Br (B \rightarrow D_1 e\bar\nu_e) = (0.49 \pm 0.14)\%$ and $(0.74 \pm 0.16)\%$. For $Br (B\rightarrow D_2^* e\bar\nu_e)$ there are only upper limits.
The form factors that parametrize the $B \rightarrow D_1$ and $B \rightarrow D_2^*$ matrix elements of the weak currents $V^\mu = \bar c \gamma^\mu b$ and $A^\mu = \bar c \gamma^\mu \gamma_5 b$ are defined by \begin{eqnarray}\label{21} {\langle D_1 (v',\varepsilon)| V^\mu| B(v)\rangle\over \sqrt{m_{D_{1}} m_B}} &=& f_{V_{1}} \varepsilon^{*\mu} + (f_{V_{2}} v^\mu + f_{V_{3}} v^{\prime\mu}) (\varepsilon^* \cdot v),\nonumber \\ {\langle D_1 (v',\varepsilon)| A^\mu| B(v)\rangle\over \sqrt{m_{D_{1}} m_B}} &=& if_A \varepsilon^{\mu\alpha\beta\gamma} \varepsilon_\alpha^* v_\beta v'_\gamma,\nonumber \\ {\langle D_2^* (v',\varepsilon)| A^\mu| B(v)\rangle\over \sqrt{m_{D_{2}^{*}} m_B}} &=& k_{A_{1}} \varepsilon^{*\mu\alpha} v_\alpha + (k_{A_{2}} v^\mu + k_{A_{3}} v^{\prime\mu}) \varepsilon^*_{\alpha\beta} v^\alpha v^\beta,\nonumber \\ {\langle D_2^* (v',\varepsilon)| V^\mu| B(v)\rangle\over \sqrt{m_{D_{2}^{*}} m_B}} &=& ik_V \varepsilon^{\mu\alpha\beta\gamma} \varepsilon^*_{\alpha\sigma} v^\sigma v_\beta v'_\gamma. \end{eqnarray} The form factors $f_i$ and $k_i$ are functions of $w$. In the $m_{c,b} \rightarrow \infty$ limit they can be written in terms of a single function $\tau (w)$~\cite{isgur3}, \begin{eqnarray}\label{22} \sqrt{6} f_A &=& - (w + 1)\tau, \quad k_V = - \tau,\nonumber \\ \sqrt{6} f_{V_{1}} &=& (1 - w^2)\tau, \quad k_{A_{1}} = - (1 + w)\tau,\nonumber \\ \sqrt{6} f_{V_{2}} &=& - 3\tau, \quad k_{A_{2}} = 0,\nonumber \\ \sqrt{6} f_{V_{3}} &=& (w - 2)\tau, \quad k_{A_{3}} = \tau. \end{eqnarray} Only the form factor $f_{V_{1}}$ contributes at zero recoil.
Surprisingly, one can predict its value~\cite{leibovich} \begin{equation}\label{23} \sqrt{6} f_{V_{1}} (1) = - {4 (\bar\Lambda' - \bar\Lambda)\tau (1)\over m_c}, \end{equation} in terms of the $m_{c,b} \rightarrow \infty$ Isgur--Wise function $\tau$ and the difference between the mass of the light degrees of freedom in the excited $s_\ell^{\pi_{\ell}} = {3 \over 2}^+$ doublet $\bar\Lambda'$ and the mass of the light degrees of freedom in the ground state doublet $\bar\Lambda$. Experimentally, the difference is $\bar\Lambda' - \bar\Lambda \simeq 0.39$~GeV. (It can be expressed in terms of measured hadron masses.) A detailed discussion of the $1/m_{c,b}$ corrections to these decays can be found in Refs. [18]. They enhance the rate for $B \rightarrow D_1 e\bar\nu_e$ (compared with the $m_{c,b} \rightarrow \infty$ limit) and lead to the expectation that its branching ratio is greater than that for $B \rightarrow D_2^* e\bar\nu_e$. This may explain why semileptonic decays to the $D_2^*$ have not been observed.
\section{Introduction} \subsection{Statement of results} In this paper we study a function field version of a classical problem concerning square-free values of polynomials evaluated at primes. Given a polynomial $f\in \Z[x]$ with integer coefficients, it is conjectured that there are infinitely many primes $p$ for which $f(p)$ is square-free, provided that $f$ has no repeated factor and that it satisfies some obvious congruence condition. This conjecture is only known to be true for polynomials having all their irreducible factors of degree at most three. Moreover, it is believed that the set $\Set_{f,2}$ of primes $p$ for which $f(p)$ is square-free has positive density, namely if \begin{equation} \Set_{f,2}(x) = \{p\leq x \textnormal{ prime } : f(p) \textnormal{ is square-free}\} \end{equation} then \begin{conj}\label{conj square-free} Let $f\in\Z[x]$ be a polynomial with no repeated factor. Assume that for each prime $p$ there is at least one integer $n_p$ for which $f(n_p)$ is not divisible by $p^2$. Let $\rho_f(d)$ denote the number of solutions of $f(x)\equiv 0\pmod{d}$ in invertible residues modulo $d$. Then \begin{equation} |\Set_{f,2}(x)| \sim c_{f,2} \pi(x), \quad x\to \infty, \end{equation} where $\pi(x)$ denotes the number of prime integers not larger than $x$, and the positive density $c_{f,2}$ is given by \begin{equation} c_{f,2} = \prod_p \left(1-\frac{\rho_f\left(p^2\right)}{p^2-p}\right). \end{equation} \end{conj} More generally, one can ask for the density of the set $\Set_{f,k}$ of primes $p$ for which $f(p)$ is $k$-free (meaning $f(p)$ is not divisible by the $k$-th power of any prime). The conjectured density is \begin{equation} c_{f,k}=\prod_{p}\left(1-\frac{\rho_f(p^k)}{\phi(p^k)}\right). \end{equation} Uchiyama \cite{Uchiyama} proved the conjectured density for $k=\deg f$ by a method that also handles the case $k>\deg f$.
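The conjectured densities can be approximated by truncating the Euler product; a brute-force sketch (for the illustrative choice $f(x)=x^2+1$, which has no repeated factor):

```python
from sympy import isprime

def rho(f, p, k=2):
    # rho_f(p^k): number of invertible residues x mod p^k with f(x) = 0 (mod p^k)
    m = p ** k
    return sum(1 for x in range(m) if x % p != 0 and f(x) % m == 0)

def density_truncated(f, bound, k=2):
    # truncated Euler product for c_{f,k}, using phi(p^k) = p^k - p^{k-1}
    c = 1.0
    for p in range(2, bound + 1):
        if isprime(p):
            c *= 1.0 - rho(f, p, k) / (p ** k - p ** (k - 1))
    return c

f = lambda x: x ** 2 + 1          # illustrative polynomial with no repeated factor
c_approx = density_truncated(f, 100)
```

For this $f$, solutions of $f(x)\equiv 0 \pmod{p^2}$ exist only for primes $p\equiv 1 \pmod 4$, where Hensel lifting gives exactly two invertible roots.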
The case $k=\deg f-1$ was singled out by Erd\H{o}s, who conjectured that the set contains infinitely many primes, and following the works of Hooley \cite{Hooley}, Nair \cite{Nair1,Nair2}, Heath-Brown \cite{Heath Brown}, Helfgott \cite{Helfgott}, Browning \cite{Browning} and Reuss \cite{Reuss}, the quantitative conjecture for $k=\deg f-1$ is completely solved. The square-free case ($k=2$) is currently open for $\deg f>3$. Lee and Murty \cite{Lee and Murty} prove that the ABC conjecture implies the $k$-free conjecture on primes, for $k \geq 3$. Pasten \cite{Pasten} showed that Conjecture~\ref{conj square-free} follows from the ABC conjecture for number fields. We turn to the function field version. Let $\fq$ be a finite field of $q$ elements, where $q=p^m$ is a prime power, and $\fq[t]$ the polynomial ring. We denote by $M_n(q)$ the set of monic polynomials of degree $n$. We define the absolute value of $a\in\fq[t]$ to be $|a|=q^{\deg a}$. We denote by $\pi_q(n)$ the set of monic irreducible polynomials of degree $n$, so that $|\pi_q(n)| =\frac{q^n}{n} + O\left(\frac{q^{n/2}}{n}\right)$. Let $f(x)\in \fq[t][x]$. Monic irreducible polynomials will be called prime polynomials. A polynomial $a(t)\in\fq[t]$ is called square-free if there is no $P\in\fq[t]$ such that $\deg P>0$ and $P^2\mid a$. We denote by $\Set(n)=\Set_{f,2}(n)$ the set of prime polynomials $P(t)\in M_n(q)$ such that $f(P)$ is square-free. We prove an analogue of Conjecture \ref{conj square-free} for square-free values, also establishing asymptotic bounds on the error term. A polynomial $f\in\fq[t][x]$ is called square-free if there is no $P\in\fq[t][x]$ such that $P^2\mid f$ and the degree of $P$ as a polynomial in $t, x$ is positive. \begin{thm}\label{main thm} Assume $f\in \fq[t][x]$ is square-free.
For a polynomial $D\in \fq[t]$, define $$\rho_f(D)=|\{C\in\fq[t]: \deg C < \deg D, \gcd(D,C)=1, f(C)\equiv0\pmod{D}\}|.$$ Then \begin{equation}\label{Main Eq} \frac{|\Set_{f,2}(n)|}{|\pi_q(n)|} = c_{f,2} + O_{f,q}\left(\frac {1}{\log_q n}\right)\quad \mbox{as } n\to \infty, \end{equation} with \begin{equation} c_{f,2}=\prod_P \left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right), \end{equation} where the product runs over the prime polynomials $P$. The implied constant in the error term $O_{f,q}\left(\frac {1}{\log_q n}\right)$ depends only on $f$ and the finite field size $q$. \end{thm} Note that the constant $c_{f,2}$ is positive if and only if for all primes $P$, there is some $C\in\fq[t]$ with $\deg C<\deg P^2$, $C$ coprime to $P$, such that $f(C) \not\equiv 0 \pmod{P^2}$. See \S~\ref{degenerate case} for a discussion. \subsection{Plan of the proof} Take $M\in\N$, to be chosen later, and let \begin{equation} \Set'(n, M) =\{a\in \pi_q(n): P^2\nmid f(a), \forall P \mbox{ prime with } \deg P<M\} \end{equation} and \begin{equation} \Set''(n, M) =\{a\in \pi_q(n): \exists P, \deg P\geq M,\mbox{ s.t. } P^2\mid f(a)\}. \end{equation} Then clearly \begin{equation} \Set(n) \subset \Set'(n, M) \subset \Set(n) \cup \Set''(n, M), \end{equation} so that \begin{equation} |\Set'(n, M)|-|\Set''(n, M)| \leq |\Set(n)|\leq |\Set'(n, M)|. \end{equation} Thus it suffices to give an asymptotic estimate for $|\Set'(n, M)|$ (the ``main term''), which is easy if $M$ is small, and an upper bound for $|\Set''(n, M)|$ (the ``error term''). We will show in Proposition \ref{prop:1.5} that \begin{equation}\label{final for N'} |\Set'(n, M)| = c_{f,2} \frac{q^n}{n} + O\left(\frac{q^n}{nMq^M}\right)+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right), \end{equation} and in Proposition \ref{prop:2} that \begin{equation}\label{final estimate for N''} |\Set''(n, M)|= O\left(\frac{q^n}{Mq^M}+q^{n\frac{p-1}{p}}\right).
\end{equation} Choosing $M=\lfloor\log_q\frac{n}{9}\rfloor$ in \eqref{final for N'} and \eqref{final estimate for N''} gives $$|\Set'(n, M)| = c_{f,2} \frac{q^n}{n} + O\left(\frac{q^n}{n^2\log_q{n}}\right)$$ and $$|\Set''(n, M)|\ll\frac{q^n}{n\log_q{n}},$$ which together yield \begin{equation}\label{main result} |\Set_{f,2}(n)| = c_{f,2} |\pi_q(n)| + O\left(\frac{q^n}{n\log_q n}\right). \end{equation} This proves Theorem \ref{main thm}. The proof of \eqref{final for N'} is carried out using a sieve method and an estimate for the size of the set of primes in an arithmetic progression, defined as $$\pi_q(n;Q,A)=\{P\in\pi_q(n) : P\equiv A \pmod{Q}\}.$$ The crucial bound \eqref{final estimate for N''} for the contribution of large primes uses ideas of Ramsay \cite{Ramsay} and Poonen \cite{Poonen}, formulated in their work on the related question of square-free values taken at arbitrary (not necessarily prime) polynomials, which we will explain in the proof of \eqref{final estimate for N''}. As a final comment, we point out that we have dealt here with the limit of large degree $n$ and fixed finite field size $q$. One can also ask an analogous question for the limit of $q\to \infty$ and $n$ fixed. Define the content of $f(x)\in \fq[t][x]$ to be the monic greatest common divisor of its coefficients, which is an element of $\fq[t]$. In this case, it follows from the recent work of Rudnick \cite{Rudnick} that for any sequence of finite fields $\fq$ of cardinality $q\to \infty$, and any choice of separable $f_q\in \fq[t][x]$ with square-free content, $f_q(P)$ is square-free with probability 1, that is, \begin{equation} \lim_{q\to \infty} \frac{|\Set_{f_q,2}(n)|}{|\pi_q(n)|}=1. \end{equation} This is because it is shown in \cite{Rudnick} that, as $q\to \infty$, $f_q(a)$ is square-free for a proportion $1-o(1)$ of the polynomials $a\in M_n(q)$. Since the primes have positive density (namely $1/n$) in the set of all monic polynomials of degree $n$, the result follows.
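The counting statements above are easy to check numerically for small parameters. The following brute-force sketch (assuming Python with SymPy; the parameters $q=3$, $n=4$ and the sample square-free polynomial $f(x)=x^2+t$ are illustrative choices, not taken from the paper) enumerates monic polynomials over $\fq[t]$ with $q=3$ by their coefficient tuples, verifies the count $|\pi_3(4)|=(3^4-3^2)/4=18$ together with the identity $\sum_{d\mid 4}d\,|\pi_3(d)|=3^4$, and reports how many primes $P$ of degree $4$ have $f(P)=P^2+t$ square-free.

```python
# Brute-force sanity check over F_q[t] of the counting statements above.
# Illustrative parameters only (not from the paper): q = 3, n = 4, and the
# sample square-free polynomial f(x) = x^2 + t in F_3[t][x].
from itertools import product
from sympy import symbols, Poly, factor_list

t = symbols('t')
q, n = 3, 4

def monic_polys(deg):
    """All monic polynomials of degree `deg` in F_q[t], as SymPy Poly objects."""
    return [Poly([1, *c], t, modulus=q) for c in product(range(q), repeat=deg)]

def factors(poly):
    """Irreducible factorization over GF(q): list of (factor, exponent)."""
    return factor_list(poly)[1]

def is_irreducible(poly):
    fl = factors(poly)
    return len(fl) == 1 and fl[0][1] == 1 and fl[0][0].degree() == poly.degree()

def is_squarefree(poly):
    return all(e == 1 for _, e in factors(poly))

pi = {d: sum(1 for P in monic_polys(d) if is_irreducible(P)) for d in (1, 2, 4)}

# Prime Polynomial Theorem: |pi_3(4)| = (3^4 - 3^2)/4 = 18.
print(pi[4])  # 18

# The identity sum_{d | n} d*|pi_q(d)| = q^n for n = 4:
print(1 * pi[1] + 2 * pi[2] + 4 * pi[4])  # 81 = 3^4

# Count the primes P of degree 4 such that f(P) = P^2 + t is square-free.
f_t = Poly(t, t, modulus=q)
sf = sum(1 for P in monic_polys(4)
         if is_irreducible(P) and is_squarefree(P * P + f_t))
print(sf, 'of', pi[4], 'primes P of degree 4 have P^2 + t square-free')
```

The reported ratio is only a crude finite-$n$ approximation of the density $c_{f,2}$; the theorem concerns the limit $n\to\infty$.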
\section{Estimating the main term} We now return to the proof of Theorem \ref{main thm}. Recall that we fixed $M\in\N$ and defined \begin{equation} \begin{split} \nonumber \Set'(n, M) &=\{a\in \pi_q(n): P^2\nmid f(a), \forall P \mbox{ prime with }\deg P<M\}, \\ \Set''(n, M) &=\{a\in \pi_q(n): \exists P, \deg P\geq M,\mbox{ s.t. } P^2\mid f(a)\}. \end{split} \end{equation} We wish to prove that for $0\ll M\leq\frac{n}{2}$ it holds that \begin{equation} \nonumber |\Set'(n, M)| = c_{f,2} \frac{q^n}{n} + O\left(\frac{q^n}{nMq^M}\right)+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right) \end{equation} and \begin{equation} \nonumber |\Set''(n, M)|\leq\frac{2q^n\deg f}{Mq^M}+O\left(q^{n\frac{p-1}{p}}\right). \end{equation} The rest of the paper will use the following notation: \begin{itemize} \item $n,N$ are natural numbers \item For $a\in \fq[t]^N$, $a_i$ denotes the $i$-th coordinate of $a$ \item $P$ is a prime in $\fq[t]$ \item $f(x)\in\fq[t][x]$ is a square-free polynomial \end{itemize} For $Q\in\fq[t]$, define $\phi(Q)=|\{a\in\fq[t] : \deg a<\deg Q, \gcd(Q, a)=1\}|$. In the following proof, we need an estimate for the size of the set of primes of degree $n$ in an arithmetic progression, $\pi_q(n;Q,A)$. We get this estimate from the Prime Polynomial Theorem in arithmetic progressions, with a remainder term given by the Riemann Hypothesis for curves over a finite field (Weil's Theorem), first proved in \cite{Weil}: \begin{lem}[Weil's Theorem] For $Q, A\in\fq[t]$ with $\gcd(Q,A)=1$, \begin{equation}\label{eq:4.1} |\pi_q(n;Q,A)|=\frac{q^n}{n\phi(Q)}+O\left(\frac{q^{\frac{n}{2}}}{n}\deg Q\right). \end{equation} \end{lem} Lemma 1 is proved, similarly to the proof of Theorem 4.8 in \cite{Rosen}, from the Riemann Hypothesis in the same way that the corresponding statement over the integers is deduced from the Generalized Riemann Hypothesis; see e.g. \cite[Chapter 20]{Davenport}.
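Before turning to the deduction, we note that \eqref{eq:4.1} is easy to test numerically for small parameters. A minimal brute-force sketch (assuming Python with SymPy; the choices $q=3$, $n=4$ and $Q=t$ are illustrative only, not taken from the paper) sorts the monic primes of degree $4$ over $\fq[t]$, $q=3$, by their residue modulo $t$, i.e.\ by constant term, and compares with the main term $\frac{q^n}{n\phi(Q)}$:

```python
# Sort the monic primes of degree n over F_q by their residue mod Q = t
# (i.e. by constant term) and compare with the main term q^n/(n*phi(Q))
# of the arithmetic-progression estimate above.
# Illustrative parameters only: q = 3, n = 4, Q = t, phi(t) = q - 1 = 2.
from itertools import product
from sympy import symbols, Poly, factor_list

t = symbols('t')
q, n = 3, 4

def is_irreducible(poly):
    fl = factor_list(poly)[1]
    return len(fl) == 1 and fl[0][1] == 1 and fl[0][0].degree() == poly.degree()

# Keep the integer coefficient tuples, so the residue mod t is just the
# constant term c[-1] (a prime of degree 4 cannot have constant term 0).
prime_coeffs = [c for c in product(range(q), repeat=n)
                if is_irreducible(Poly([1, *c], t, modulus=q))]

counts = {}
for c in prime_coeffs:
    counts[c[-1]] = counts.get(c[-1], 0) + 1

main_term = q**n / (n * (q - 1))  # 81/8 = 10.125 per invertible class
print(counts, 'main term per class:', main_term)
```

Both invertible residue classes modulo $t$ are nonempty and together contain all $18$ primes of degree $4$, consistent with the main term up to the square-root error.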
The deduction proceeds by using the ``Explicit Formula'' to express the sum $\sum_{\substack{\deg(P^k)=n\\P^k\equiv A \pmod Q}} \deg P$ over prime powers in the arithmetic progression (the higher prime powers are easily shown to give a negligible contribution) in terms of a sum over the zeros of all L-functions associated to Dirichlet characters modulo $Q$; the trivial character gives the main term, and each of the remaining nontrivial characters, for which the associated L-function is a polynomial in $u=q^{-s}$ of degree at most $\deg Q-1$, all of whose inverse zeros lie in the disc $|u|\leq \sqrt{q}$, contributes at most $(\deg Q-1)q^{n/2}/\phi(Q)$. The division by $n$ arises from the extra factor of $\deg P$ in the Explicit Formula. \begin{remark}\label{Discriminant-remark} Suppose that $f\in\fq[t][x]$ is square-free and $P\in\fq[t]$ is prime. Denote the discriminant of $f$ over $\fq(t)$ by $\Delta(f)$ and denote by $\Delta_{\fq[t]/\langle P\rangle}(f)$ the discriminant of $f$ over $\fq[t]/\langle P\rangle$. It holds that $\Delta(f)\neq0$, and for $P\in\fq[t]$ such that $\deg P>\deg\Delta(f)$ we can conclude that $P\nmid\Delta(f)$. Now assume that $\Delta_{\fq[t]/\langle P\rangle}(f)=0$ and $P$ does not divide the leading coefficient of $f$. It holds that $$\Delta(f)\equiv\Delta_{\fq[t]/\langle P\rangle}(f)\equiv0\pmod{P},$$ which allows us to conclude that $P\mid\Delta(f)$. \end{remark} We will use the following version of Hensel's Lemma: \begin{lem}[Hensel's Lemma]\label{Hensel's Lemma} Suppose that $f\in\fq[t][x]$ is square-free and $P\in\fq[t]$ is prime such that $\deg P>\deg\Delta(f)$, where $\Delta(f)$ is the discriminant of $f$ over $\fq(t)$. Also assume that $P$ does not divide the leading coefficient of $f$. Then \begin{equation} \begin{split} &|\{b\in\fq[t] : \deg b<\deg P^2, f(b)\equiv0\pmod{P^2}\}| \\ &=|\{a\in\fq[t] : \deg a<\deg P, f(a)\equiv0\pmod{P}\}|. \end{split} \end{equation} \end{lem} \begin{proof} Suppose that $c\in\fq[t]$ and $f(c)\equiv0\pmod{P}$.
Take $d\in\fq[t]$ such that $d\equiv c\pmod P$, and write $d=c+uP$ with $u\in\fq[t]$ (we use the letter $u$, since $t$ already denotes the variable of $\fq[t]$). From the formal derivative formula for $f$ we have that \begin{equation}\label{Hens-lem-eq-1} f(d)=f(c+uP)=f(c)+uP\frac{\partial f}{\partial x}(c)+P^2(\cdots). \end{equation} Now, since $f(c)\equiv0\pmod{P}$, there is some $s\in\fq[t]$ such that $f(c)=sP$. Combining this with \eqref{Hens-lem-eq-1} gives $$f(d)\equiv \left(s+u\frac{\partial f}{\partial x}(c)\right)P\pmod{P^2},$$ thus \begin{equation}\label{Hens-lem-eq-1.25} f(d)\equiv0\pmod{P^2}\Longleftrightarrow s+u\frac{\partial f}{\partial x}(c)\equiv0\pmod{P}. \end{equation} By Remark \ref{Discriminant-remark} we get that \begin{equation}\label{Hens-lem-eq-1.5} P\nmid\Delta(f). \end{equation} So if $f(c)\equiv0\pmod{P}$ and $\frac{\partial f}{\partial x}(c)\equiv0\pmod{P}$, then from the basic property of the discriminant we get that the discriminant of $f$ over $\fq[t]/\langle P\rangle$, denoted $\Delta_{\fq[t]/\langle P\rangle}(f)$, is $0$. By Remark \ref{Discriminant-remark} this implies that $P\mid\Delta(f)$, contradicting \eqref{Hens-lem-eq-1.5}. Thus $\frac{\partial f}{\partial x}(c)\not\equiv0\pmod{P}$, so $\frac{\partial f}{\partial x}(c)\pmod{P}$ is invertible. Denote $h(c)=\frac{\partial f}{\partial x}(c)^{-1}\pmod{P}$. Combining this with \eqref{Hens-lem-eq-1.25} we get that $$f(d)\equiv0\pmod{P^2}$$ precisely for \begin{equation}\label{Hens-lem-eq-2} d\equiv c-f(c)h(c)\pmod{P^2}. \end{equation} This gives a solution $d\in\fq[t]$ to $f(d)\equiv0\pmod{P^2}$ such that $d\equiv c\pmod{P}$, and by \eqref{Hens-lem-eq-2} this solution is unique modulo $P^2$, which proves the lemma. \end{proof} We will also use the following lemma: \begin{lem}\label{bound-lem} Suppose that $f\in\fq[t][x]$ is square-free and $P\in\fq[t]$ is prime.
Denote the discriminant of $f$ over $\fq(t)$ by $\Delta(f)$, denote by $w_f(t)\in\fq[t]$ the leading coefficient of $f$ as a polynomial in $x$ over $\fq[t]$, and denote by $\deg f$ the degree of $f$ as a polynomial in $x$ over $\fq[t]$. Then \begin{equation} \begin{split} &|\{b\in\fq[t] : \deg b<\deg P^2, f(b)\equiv0\pmod{P^2}\}| \\ &\leq \max\{\deg f, q^{2\max\{\deg\Delta(f), \deg w_f\}}\}=O(1), \end{split} \end{equation} and the implied constant in the bound depends only on $f$ and the finite field size $q$. \end{lem} \begin{proof} If $\deg P>\max\{\deg\Delta(f), \deg w_f\}$, then using Hensel's Lemma (Lemma \ref{Hensel's Lemma}) we get \begin{equation}\label{bound-lem-eq-2} \begin{split} &|\{b\in\fq[t] : \deg b<\deg P^2, f(b)\equiv0\pmod{P^2}\}| \\ &=|\{a\in\fq[t] : \deg a<\deg P, f(a)\equiv0\pmod{P}\}|, \end{split} \end{equation} and since $\deg P>\deg w_f$ it follows that $f\not\equiv0\pmod{P}$. The number of roots of a polynomial in $x$ over the field $\fq[t]/\langle P\rangle$ is bounded by its degree in $x$, and thus $$|\{a\in\fq[t] : \deg a<\deg P, f(a)\equiv0\pmod{P}\}|\leq \deg f.$$ Combining this with \eqref{bound-lem-eq-2}, we get that if $\deg P>\max\{\deg\Delta(f), \deg w_f\}$ then \begin{equation}\label{bound-lem-eq-3} |\{b\in\fq[t] : \deg b<\deg P^2, f(b)\equiv0\pmod{P^2}\}|\leq\deg f. \end{equation} On the other hand, if $\deg P\leq\max\{\deg\Delta(f), \deg w_f\}$ then \begin{equation}\label{bound-lem-eq-4} \begin{split} |\{b\in\fq[t] : \deg b<\deg P^2, f(b)\equiv0\pmod{P^2}\}|\leq|P|^2 \\ =q^{2\deg P}\leq q^{2\max\{\deg\Delta(f), \deg w_f\}}. \end{split} \end{equation} The result is obtained by combining \eqref{bound-lem-eq-3} and \eqref{bound-lem-eq-4}. \end{proof} Finally, we will use the following lemma, called the ``Explicit Formula'', which is proved in Proposition 2.1 of \cite{Rosen}: \begin{lem}[The Explicit Formula]\label{Explicit Formula} For every positive integer $n$ it holds that $\sum_{d\mid n}d|\pi_q(d)|=q^n$.
In particular, $i|\pi_q(i)|\leq q^i$ for every positive integer $i$. \end{lem} We will now prove: \begin{prop}\label{prop:1.5} For $0\ll M\leq n$, $$|\Set'(n, M)| = c_{f,2} \frac{q^n}{n}+O\left(\frac{q^n}{nMq^M}\right)+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right).$$ \end{prop} The proof will use a standard sieve argument. \begin{proof}[Proof of Proposition \ref{prop:1.5}] Denote $w=|\{P:\deg P<M\}|$ and enumerate $\{P:\deg P<M\}=\{P_j:1\leq j\leq w\}$. Define: \begin{equation} \begin{split} \nonumber B&=\{(d_1,\dots,d_w) \in (\fq[t])^w:\forall j,1\leq j\leq w, \deg d_j < \deg P_j^2, \\ &f(d_j)\not\equiv 0 \pmod{P_j^2}\}, \\ C&=\{(d_1,\dots,d_w)\in B : \forall j,1\leq j\leq w, \gcd(d_j, P_j^2)=1\}. \end{split} \end{equation} Now, suppose that $a\in \pi_q(n)$ satisfies $a\equiv d_j \pmod{P_j^2}$ for all $1\leq j\leq w$. If $P_j\mid d_j$ for some $j$, then from $P_j\mid(a-d_j)$ it follows that $P_j\mid a$; but both are monic primes, thus $P_j=a$, which is impossible for $n\geq M$, since $\deg a=n$ and $\deg P_j<M$. Hence, for $n\geq M$, if $P_j\mid d_j$ for some $j$, then $\{a\in \pi_q(n):\forall 1\leq j\leq w, a\equiv d_j \pmod{P_j^2}\}=\emptyset$. In addition, according to the Chinese Remainder Theorem, for every tuple $(d_1,\dots,d_w)\in B$, there is a unique element $d_{d_1,\dots,d_w}\in\fq[t]$ with $\deg d_{d_1,\dots,d_w} < \deg \prod_{\deg P<M}P^2$ such that $d_{d_1,\dots,d_w}\equiv d_j \pmod{P_j^2}, \forall 1\leq j\leq w$.
Thus for $M\leq n$ it holds that \begin{equation}\label{eq:4.2} \begin{split} &|\Set'(n, M)|=|\{a\in \pi_q(n) : \forall \deg P< M, P^2\nmid f(a)\}| \\ &=\sum_{(d_1,\dots,d_w)\in B}\left|\left\{a\in \pi_q(n):\forall j,1\leq j\leq w, a\equiv d_j \pmod{P_j^2}\right\}\right| \\ &=\sum_{(d_1,\dots,d_w)\in C}\left|\left\{a\in \pi_q(n):\forall j,1\leq j\leq w, a\equiv d_j \pmod{P_j^2}\right\}\right| \\ &=\sum_{(d_1,\dots,d_w)\in C}\left|\left\{a\in \pi_q(n):a\equiv d_{d_1,\dots,d_w} \pmod{\prod_{\deg P<M}P^2}\right\}\right| \end{split} \end{equation} Now, using Lemma \ref{Explicit Formula} we get \begin{equation}\label{eq:4.3} \begin{split} \deg\prod_{\deg P<M}P^2&=\sum_{\deg P<M}2\deg P=2\sum_{i=1}^{M-1}i|\pi_q(i)| \\ \leq2\sum_{i=1}^{M}q^i&=2\left(q^M+\frac{q^M-1}{q-1}-1\right)\leq4q^M. \end{split} \end{equation} Since $\gcd(d_{d_1,\dots,d_w}, \prod_{\deg P<M}P^2)=1$, we can use \eqref{eq:4.1} with \eqref{eq:4.3} to get \begin{equation} \begin{split} \nonumber &|\{a\in \pi_q(n):a\equiv d_{d_1,\dots,d_w} \pmod{\prod_{\deg P<M}P^2}\}| \\ &=\frac{q^n}{n\phi(\prod_{\deg P<M}P^2)}+O\left(\frac{q^{\frac{n}{2}}}{n}\deg\prod_{\deg P<M}P^2\right) \\ &=\frac{q^n}{n\phi(\prod_{\deg P<M}P^2)}+O\left(\frac{q^{\frac{n}{2}}}{n}q^M\right) \\ &=\frac{q^n}{n\phi(\prod_{\deg P<M}P^2)}+O\left(\frac{q^{\frac{n}{2}+M}}{n}\right).
\end{split} \end{equation} Let us now insert this into equation \eqref{eq:4.2}: \begin{equation}\label{eq:4.4} \begin{split} &|\Set'(n, M)|=\sum_{(d_1,\dots,d_w)\in C}\left(\frac{q^n}{n\phi(\prod_{\deg P<M}P^2)}+O\left(\frac{q^{\frac{n}{2}+M}}{n}\right)\right) \\ &=\left(\prod_{\deg P<M}{\left(\phi\left(P^2\right)-\rho_f\left(P^2\right)\right)}\right)\left(\frac{q^n}{n\phi(\prod_{\deg P<M}P^2)}+O\left(\frac{q^{\frac{n}{2}+M}}{n}\right)\right) \\ &=\frac{q^n}{n}\prod_{\deg P<M}{\left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)}+O\left(\frac{q^{\frac{n}{2}+M}}{n}\prod_{\deg P<M}|P|^2\right) \\ &=\frac{q^n}{n}\prod_{\deg P<M}{\left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)}+O\left(\frac{q^{\frac{n}{2}+M}}{n}q^{4q^M}\right) \\ &=\frac{q^n}{n}\prod_{\deg P<M}{\left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)}+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right). \end{split} \end{equation} Now, using Lemma \ref{bound-lem} we get \begin{equation} \begin{split} \nonumber \rho_f\left(P^2\right)&=|\{c\in\fq[t] : \deg c<\deg P^2, f(c)\equiv0 \pmod{P^2}, \gcd(c,P^2)=1\}| \\ &\leq|\{c\in\fq[t] : \deg c<\deg P^2, f(c)\equiv0 \pmod{P^2}\}|=O(1), \end{split} \end{equation} which is a uniform bound over all $P$, depending only on $f$ and the finite field size $q$. It follows that the infinite product $c_{f,2}=\prod_{P}{\left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)}$ converges, because $\sum_{P}\frac{1}{|P|^2-|P|}$ converges. By \eqref{eq:4.4}, \begin{equation}\label{eq:4.5} \begin{split} |\Set'(n, M)|&=|\{a\in \pi_q(n) : \forall \deg P< M, P^2\nmid f(a)\}| \\ &=\frac{q^n}{n}\prod_{\deg P<M}{\left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)}+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right) \\ &=c_{f,2}\frac{q^n}{n}\prod_{\deg P\geq M}\frac{1}{1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}}+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right).
\end{split} \end{equation} Now, using the uniform bound $\rho_f\left(P^2\right)=O(1)$, the fact that $\log\frac{1}{1-x} \ll x$ for small enough $x>0$, and Lemma \ref{Explicit Formula}, we get \begin{align*} \log\left(\prod_{\deg P\geq M}{\frac{1}{1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}}}\right)&\ll\sum_{\deg P\geq M}{\frac{1}{|P|^2-|P|}} \ll \sum_{\deg P\geq M}{\frac{1}{|P|^2}} \\ &\ll\sum_{i\geq M}{\frac{|\pi_q(i)|}{q^{2i}}}\ll\sum_{i\geq M}{\frac{1}{iq^i}}\ll\frac{1}{Mq^M}. \end{align*} Now insert this back into \eqref{eq:4.5} and use the Taylor expansion of the exponential function to get \begin{equation} \begin{split} \nonumber |\Set'(n, M)|&=|\{a\in \pi_q(n) : \forall \deg P< M, P^2\nmid f(a)\}| \\ &=c_{f,2}\frac{q^n}{n}e^{O\left(\frac{1}{Mq^M}\right)}+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right) \\ &=c_{f,2}\frac{q^n}{n}\left(1+O\left(\frac{1}{Mq^M}\right)\right)+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right) \\ &=c_{f,2}\frac{q^n}{n}+O\left(\frac{q^n}{nMq^M}\right)+O\left(\frac{q^{\frac{n}{2}+4q^M+M}}{n}\right). \end{split} \end{equation} \end{proof} \section{Bounding the remainder} We will prove: \begin{prop}\label{prop:2} For $0\ll M\leq\frac{n}{2}$: \begin{equation} \begin{split} \nonumber |\Set''(n, M)|&=|\{a\in \pi_q(n) : \exists P, \deg P\geq M, P^2\mid f(a)\}| \\ &=O\left(\frac{q^n}{Mq^M}+q^{n\frac{p-1}{p}}\right) \end{split} \end{equation} \end{prop} In \cite{Ramsay}, Ramsay stated a bound for $|\Set''(n, M)|$. However, his argument only works in the case where $f\in \fq[x]$ has constant coefficients, and even in that case the argument is incomplete. A proof for the general case is given by Poonen \cite{Poonen}, who gave an alternative proof that a multi-variable version of this set has density $0$. In his proof he reduces the problem to the calculation of the density of $$\{a\in\fq[t] : \deg a=n, \exists P, \deg P\geq M, P\mid h(a),g(a)\}$$ for $h(x),g(x)\in\fq[t][x]$ which are coprime as elements of $\fq(t)[x]$.
He calculates this density in a more general setting (Lemma 5.1 in \cite{Poonen}). We will apply some of his ideas to the special case needed for the proof in the present paper. As is standard in the problem of counting square-free values of polynomials, we bound the size of $\Set''(n,M)$ by separating the cases $\deg P > n/2$ and $n/2 \geq \deg P \geq M$ (this was suggested to us by Rudnick). We use Poonen's proof method in \cite{Poonen}. All the results will have explicit error terms. Proposition \ref{prop:2} will be proved using the following two propositions: \begin{prop}\label{prop:3} For $0\ll M\leq\frac{n}{2}$, $$\left|\left\{a\in M_n(q) : \exists P, \frac{n}{2}\geq\deg P\geq M, P^2\mid f(a)\right\}\right|\ll\frac{q^n}{Mq^M}.$$ \end{prop} \begin{prop}\label{prop:4} $$\left|\left\{a\in M_n(q) : \exists P, \deg P>\frac{n}{2}, P^2\mid f(a)\right\}\right|\ll q^{n\frac{p-1}{p}}.$$ \end{prop} The bound in Proposition \ref{prop:2} is achieved by bounding the sets above, which range over all monic polynomials rather than only over primes. Those sets, as it turns out, are easier to estimate. Over $\Z$ this technique does not work for the analogue of Proposition \ref{prop:4}, because the larger set is also hard to bound. \section{Proof of Proposition \ref{prop:3}} \begin{proof} Notice that if $\deg P\leq\frac{n}{2}$, then for any $C\in\fq[t]$ we have $$|\{a\in M_n(q) : a\equiv C \pmod{P^2}\}|=\frac{q^n}{|P|^2}.$$ Denote $x(D)=|\{C\in\fq[t] : \deg C<\deg D, f(C)\equiv0 \pmod{D}\}|$.
Using Lemma \ref{bound-lem} and Lemma \ref{Explicit Formula} we get \begin{align*} &\left|\left\{a\in M_n(q) : \exists P, M\leq\deg P\leq\frac{n}{2}, P^2\mid f(a)\right\}\right| \\ &\leq\sum_{M\leq\deg P\leq\frac{n}{2}}|\{a\in M_n(q) : P^2\mid f(a)\}| \\ &=\sum_{M\leq\deg P\leq\frac{n}{2}}\sum_{ \tiny\begin{array}{c}C\in\fq[t], \deg C <\deg P^2\\f(C)\equiv0\!\!\!\!\pmod{P^2}\end{array}}|\{a\in M_n(q) : a\equiv C\!\!\!\!\pmod{P^2}\}| \\ &=q^n\sum_{M\leq\deg P\leq\frac{n}{2}}\frac{x\left(P^2\right)}{|P|^2} \ll q^n\sum_{M\leq\deg P\leq\frac{n}{2}}\frac{1}{|P|^2} \\ &=q^n\sum_{M\leq k\leq\frac{n}{2}}\frac{|\pi_q(k)|}{q^{2k}} \leq q^n\sum_{M\leq k\leq\frac{n}{2}}\frac{\frac{q^k}{k}}{q^{2k}} \\ &=q^n\sum_{M\leq k\leq\frac{n}{2}}\frac{1}{kq^k} \leq\frac{q^n}{Mq^M}\sum_{M\leq k\leq\frac{n}{2}}2^{M-k}\leq\frac{2q^n}{Mq^M}, \end{align*} where in the last line we used that $\frac{1}{kq^k}\leq\frac{2^{M-k}}{Mq^M}$ for $k\geq M$, since $\frac{M}{k}\leq1\leq\left(\frac{q}{2}\right)^{k-M}$. \end{proof} Note 1: Poonen in \cite{Poonen} estimated this set (in his paper it is $|\bigcup_{s=0}^{N-1}Q_s|$) using dimension considerations from algebraic geometry (see the proof of Lemma 5.1 in \cite{Poonen}). Note 2: A similar proof for the integer case can be found in \cite{Granville}. \section{Proof of Proposition \ref{prop:4}} Denote \begin{equation} \begin{split} \nonumber &F(y_0,\dots,y_{p-1})=f\left(\sum_{j=0}^{p-1}{t^jy_j^p}\right)\in\fq[t][y_0,\dots,y_{p-1}], \\ &Q=\left\{a\in \fq[t]^p : \forall 0\leq i<p, \deg a_i\leq\floor{\frac{n}{p}}\textnormal{ and } \exists P, \deg P>\frac{n}{2}, P\mid F(a),\frac{\partial F}{\partial t}(a)\right\}. \end{split} \end{equation} Proposition \ref{prop:4} follows from the following two lemmas: \begin{lem}\label{lem:8} $$\left|\left\{a\in M_n(q) : \exists P, \deg P>\frac{n}{2}, P^2\mid f(a)\right\}\right|\leq|Q|$$ \end{lem} \begin{lem}\label{lem:9} We have that at least one of the following holds: \begin{itemize} \item $c_{f,2} = 0$ and $|\Set_{f,2}(n)| = 0$ for all $n$ (in which case Theorem \ref{main thm} holds), or \item $|Q| \ll q^{n\frac{p-1}{p}}$.
\end{itemize} \end{lem} Lemma \ref{lem:8} will be proved using the following lemma: \begin{lem}\label{lem:7} Denote by $b$ the unique integer such that $b\equiv n \pmod{p}$ and $0\leq b<p$. For each $n$ there is a set $A_n\subset (\fq[t])^p$ such that $(1)$ $$M_n(q)=\left\{\sum_{j=0}^{p-1}{t^ja_j(t)^p} : (a_0(t),\dots,a_{p-1}(t))\in A_n\right\}$$ and $(2)$ $$\forall (a_0(t),\dots,a_{p-1}(t))\neq (b_0(t),\dots,b_{p-1}(t))\in A_n, \quad\sum_{j=0}^{p-1}{t^ja_j(t)^p}\neq \sum_{j=0}^{p-1}{t^jb_j(t)^p}$$ Moreover, $a_b(t)\in M_{\floor{\frac{n}{p}}}(q)$ and for all $a \in A_n$ we have that $\deg a_j(t) \leq \lfloor n/p \rfloor$ for each $j$. \end{lem} Lemma \ref{lem:9} will be proved using the following proposition: \begin{prop}\label{prop:6} Let $N\geq0$ be an integer. For irreducible $f,g\in \fq[t][x_1,\dots,x_N]$ that are coprime in $\fq(t)[x_1,\dots,x_N]$, it holds for $N>0$ that $$\left|\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a)\right\}\right|\ll_{f,g} q^{n\frac{N-1}{p}}.$$ For $N=0$ and $n\gg0$, $$\left\{P : \deg P>\frac{n}{2}, P\mid f,g\right\}=\emptyset.$$ \end{prop} Proposition \ref{prop:6} will be proved using the following two propositions: \begin{prop}\label{prop:7} Let $N\geq1$ be an integer and let $0\neq g\in \fq[t][x_1,\dots,x_{N-1}]$ be a polynomial. Define $$S_0=\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0\right\},$$ then it holds that $$|S_0|\ll q^{n\frac{N-1}{p}}.$$ \end{prop} \begin{prop}\label{prop:8} Let $N\geq1$ be an integer and let $f\in \fq[t][x_1,\dots,x_{N}]$, $g\in \fq[t][x_1,\dots,x_{N-1}]$ be polynomials. Denote by $f_1\in \fq[t][x_1,\dots,x_{N-1}]$ the coefficient of the highest power of $x_N$ in $f$ when looking at $f$ as a polynomial in $x_N$.
Define \begin{equation} \begin{split} \nonumber S&=\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a), \\ &\left.P\nmid f_1(a),g(a)\neq0\right\}, \end{split} \end{equation} then it holds that $$|S|\ll_{f,g} q^{n\frac{N-1}{p}}.$$ \end{prop} \begin{proof}[Proof of Lemma \ref{lem:8}] Denote the sets from Lemma \ref{lem:7} by $A_n$, for each $n$. By Lemma \ref{lem:7} we get \begin{equation}\label{eq:7.1} \begin{split} &\left|\left\{a\in M_n(q) : \exists P, \deg P>\frac{n}{2}, P^2\mid f(a)\right\}\right| \\ &=\left|\left\{\sum_{j=0}^{p-1}{t^ja_j(t)^p}: (a_0(t),\dots,a_{p-1}(t))\in A_n,\exists P, \deg P>\frac{n}{2},\right.\right. \\ &\left.\left.P^2\mid f(\sum_{j=0}^{p-1}{t^ja_j(t)^p})\right\}\right| \\ &=\left|\left\{(a_0(t),\dots,a_{p-1}(t))\in A_n : \exists P, \deg P>\frac{n}{2}, P^2\mid F(a_0(t),\dots,a_{p-1}(t))\right\}\right| \end{split} \end{equation} \begin{equation} \begin{split} \nonumber &=\left|\left\{(a_0(t),\dots,a_{p-1}(t))\in A_n : \exists P, \deg P>\frac{n}{2}, P\mid F(a_0(t),\dots,a_{p-1}(t)),\right.\right. \\ &\left.\left.P\mid \frac{\d F(a_0(t),\dots,a_{p-1}(t))}{\d t}\right\}\right|. 
\end{split} \end{equation} (In the last equality we used that for a prime $P$ and $h\in\fq[t]$, $P^2\mid h$ if and only if $P$ divides both $h$ and $\frac{\d h}{\d t}$: indeed, writing $h=PG$, we get $\frac{\d h}{\d t}=\frac{\d P}{\d t}G+P\frac{\d G}{\d t}$, and $P\nmid\frac{\d P}{\d t}$, since $\frac{\d P}{\d t}$ is nonzero of degree smaller than $\deg P$; a prime cannot have vanishing derivative, as it would then be a $p$-th power.) Notice that by the total derivative formula for $F$, and since $F(y_0,\dots,y_{p-1})\in\fq[t]\left[y_0^p,\dots,y_{p-1}^p\right]$, it holds that \begin{align*}\label{eq:7.2} &\frac{\d F(a_0(t),\dots,a_{p-1}(t))}{\d t} \\\stepcounter{equation}\tag{\theequation} &=\frac{\partial F(y_0,\dots,y_{p-1})}{\partial t}(a_0(t),\dots,a_{p-1}(t)) \\ &+\frac{\partial F(y_0,\dots,y_{p-1})}{\partial y_0}(a_0(t),\dots,a_{p-1}(t))\frac{\d a_0(t)}{\d t} \\ &+\frac{\partial F(y_0,\dots,y_{p-1})}{\partial y_1}(a_0(t),\dots,a_{p-1}(t))\frac{\d a_1(t)}{\d t}+\cdots \\ &+\frac{\partial F(y_0,\dots,y_{p-1})}{\partial y_{p-1}}(a_0(t),\dots,a_{p-1}(t))\frac{\d a_{p-1}(t)}{\d t} \\ &=\frac{\partial F(y_0,\dots,y_{p-1})}{\partial t}(a_0(t),\dots,a_{p-1}(t))+0+\cdots+0 \\ &=\frac{\partial F(y_0,\dots,y_{p-1})}{\partial t}(a_0(t),\dots,a_{p-1}(t)). \end{align*} Combining \eqref{eq:7.1} and \eqref{eq:7.2} we get: \begin{equation}\label{eq:7.3} \begin{split} &\left|\left\{a\in M_n(q) : \exists P, \deg P>\frac{n}{2}, P^2\mid f(a)\right\}\right|=\\ &\left|\left\{a=(a_0(t),\dots,a_{p-1}(t))\in A_n : \exists P, \deg P>\frac{n}{2}, P\mid F(a),P\mid \frac{\partial F}{\partial t}(a)\right\}\right|. \end{split} \end{equation} Now, by Lemma \ref{lem:7}, for all $a \in A_n$ we have that $\deg a_j(t) \leq \lfloor n/p \rfloor$ for each $j$. Therefore, $$\left\{a=(a_0(t),\dots,a_{p-1}(t))\in A_n : \exists P, \deg P>\frac{n}{2}, P\mid F(a),P\mid \frac{\partial F}{\partial t}(a)\right\}\subseteq Q,$$ which together with \eqref{eq:7.3} proves the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:7}] The existence of such $A_n$ for each $n$ follows from the fact that $\fq[t]$ is a free $\fq[t^p]$-module of rank $p$ with basis $1,t,\dots,t^{p-1}$, together with the observation that every element of $\fq[t^p]$ is of the form $a(t)^p$, since the Frobenius map is an automorphism of $\fq$. Denote by $b$ the unique integer such that $b\equiv n \pmod{p}$ and $0\leq b<p$. By the definition of $A_n$, for all $a \in A_n$ we have that $\deg\sum_{j=0}^{p-1}{t^ja_j(t)^p} = n$.
Thus, using the fact that degrees of polynomials are integers, we get \begin{equation} \begin{split} \nonumber &a_b(t)\in M_{\floor{\frac{n}{p}}}(q)\textnormal{ and } \forall j,0\leq j\leq {p-1}, \deg a_j(t)\leq\frac{n-j}{p}\leq\frac{n}{p}\Longrightarrow\\ &\forall j,0\leq j\leq {p-1}, \deg a_j(t)\leq\lfloor n/p \rfloor, \end{split} \end{equation} as desired. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:9}] By Lemma 7.2 in \cite{Poonen}, $F\in\fq[t][y_0,\dots,y_{p-1}]$ is square-free because $f$ is square-free. The next argument, showing that the case where $F, \frac{\partial F}{\partial t}\in\fq(t)[y_0,\dots,y_{p-1}]$ are not coprime is degenerate, replaces Lemma 7.3 in \cite{Poonen}, so as to allow the proof to be easily generalized to the $k$-free case. Set $G=\gcd\left(F, \frac{\partial F}{\partial t}\right)\in\fq(t)\left[y_0,\dots,y_{p-1}\right]$. By the total derivative formula one has, just as in the proof of Lemma \ref{lem:8}, that for all $a(t)\in\fq[t]^p$ it holds that $\frac{\d F(a)}{\d t}=\frac{\partial F}{\partial t}(a)$, because the rest of the partial derivatives vanish, since $F\in\fq(t)[y_0^p,\dots,y_{p-1}^p]$. If $\deg G>0$, then for $a(t)\in\fq[t]^p$ let $P(t)$ be a prime factor of $G(a)$ (such $P(t)$ exists because $\deg G>0$). Then $P^2\mid F(a)$ (because $P\mid G(a)$, $G(a)\mid F(a)$, and $G(a)\mid \frac{\partial F}{\partial t}(a)=\frac{\d F(a)}{\d t}$). Thus if $\deg G>0$, then for all $a(t)\in\fq[t]^p$, $F(a)$ is not square-free. Thus by Lemma \ref{lem:7}, for every $n$ and all $a\in M_n(q)$, $f(a)$ is not square-free, which gives $|\Set_{f,2}(n)|=0$.
Now by Theorem 3.4 in \cite{Poonen} we have $$\lim_{n\to\infty}\frac{|\{a\in M_n(q) : f(a) \textnormal{ is square-free}\}|}{|M_n(q)|}=$$ $$\prod_P{\left(1-\frac{|\{c\in\fq[t] : \deg c<\deg P^2, f(c)\equiv0 \!\!\!\!\pmod{P^2}\}|}{|P|^2}\right)}.$$ Thus, if $f(a)$ is not square-free for all monic $a(t)\in\fq[t]$, then $$\prod_P{\left(1-\frac{|\{c\in\fq[t] : \deg c<\deg P^2, f(c)\equiv0 \!\!\!\!\pmod{P^2}\}|}{|P|^2}\right)}=0.$$ By Lemma \ref{bound-lem} and the fact that the sum $\sum_P{\frac{1}{|P|^2}}$ converges, the infinite product converges. So the vanishing of the product happens only when there is some prime $P$ such that $P^2\mid f(c)$ for all $c\in\fq[t]$. For such a prime $P$ we get \begin{equation} \begin{split} \nonumber \rho_f\left(P^2\right)&=\left|\left\{c\in\fq[t] : \deg c<\deg P^2, \gcd(c,P)=1, f(c)\equiv0\pmod{P^2}\right\}\right| \\ &=|P|^2-|P|. \end{split} \end{equation} This implies that $$c_{f,2}=\prod_P \left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right)=0,$$ which proves the lemma in the case $\deg G>0$. Now let us assume that $\deg G=0$, which means that $F$ and $\frac{\partial F}{\partial t}$ are coprime in $\fq(t)[y_0,\dots,y_{p-1}]$. If the decompositions of $F$ and $\frac{\partial F}{\partial t}$ into irreducibles are $F=f_1\cdots f_{i_f}$ and $\frac{\partial F}{\partial t}=g_1\cdots g_{i_g}$ ($f_i\neq g_j$ because $F$ and $\frac{\partial F}{\partial t}$ are coprime; in particular $f_i, g_j$ are coprime in $\fq(t)[y_0,\dots,y_{p-1}]$, since they are both irreducible), then \begin{equation} \begin{split} \nonumber Q&=\left\{a\in \fq[t]^p : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid F(a),\frac{\partial F}{\partial t}(a)\right\} \\ &=\bigcup_{i,j}\left\{a\in \fq[t]^p : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f_i(a),g_j(a)\right\}.
\end{split} \end{equation} Recall that $$F(y_0,\dots,y_{p-1})=f\left(\sum_{j=0}^{p-1}{t^jy_j^p}\right)\in\fq[t][y_0,\dots,y_{p-1}],$$ thus the total degree of $F$ in $y_0,\dots,y_{p-1}$ is at most $p\cdot\deg f$; this bounds the number of irreducible factors of $F$, and hence also of $\frac{\partial F}{\partial t}$. Now, by Proposition \ref{prop:6} we conclude the proof: \begin{equation} \begin{split} \nonumber |Q|&=\left|\bigcup_{i,j}\left\{a\in \fq[t]^p : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f_i(a),g_j(a)\right\}\right| \\ &\ll q^{n\frac{p-1}{p}}\cdot(p\cdot\deg f)^2\ll q^{n\frac{p-1}{p}}. \end{split} \end{equation} Here the implied constant depends on the field size $q$ and the polynomial $f$; this is allowed, since the error term in Theorem \ref{main thm} is permitted to depend on $f$ and $q$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:6}] Note that $f, g\neq 0$ because $f$ and $g$ are coprime. Since we are interested in $P$ with a large value of $|P|$, we may divide $f,g$ by common factors in $\fq[t]$ and assume that $f,g$ are coprime as elements of $\fq[t][x_1,\dots,x_N]$. Denote: $$Q'=\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a)\right\}.$$ We proceed by induction on $N$. If $N=0$, then $f,g\in\fq[t]$. Thus for $\frac{n}{2}>\max\{\deg f,\deg g\}$ it holds that $Q'=\emptyset$. Now assume $N\geq1$. Denote by $f_1,g_1\in \fq[t][x_1,\dots,x_{N-1}]$ the coefficients of the highest power of $x_N$ in $f,g$, respectively, when looking at $f,g$ as polynomials in $x_N$. Case 1: Assume the $x_N$-degrees of both $f$ and $g$ are positive. Since $f, g$ are coprime in $\fq[t][x_1,\dots,x_N]$, they are also coprime if viewed as single-variable polynomials in $\fq(t,x_1,\dots,x_{N-1})[x_N]$. Thus, by B\'ezout's identity, there are $b,c \in \fq(t,x_1,\dots,x_{N-1})[x_N]$ such that $1=bf+cg$.
Multiplying by the common denominator, it follows that there are $B,C \in \fq[t][x_1,\dots,x_{N-1}][x_N]$ and $0\neq D \in \fq[t][x_1,\dots,x_{N-1}]$ such that $D=Bf+Cg$. Note: The polynomial $D$ here replaces the resultant used in Poonen's proof of Lemma 5.1 in \cite{Poonen}. The change is made so that it will be easier to generalize the proof to the $k$-free case. Since $D$ is nonzero and does not involve $x_N$, and since $f,g$ are irreducible, it follows that $D$ is coprime to each of $f,g$. If $P$ divides $f(a)$ and $g(a)$, then from $D=Bf+Cg$ it follows that $P\mid D(a)$. Thus, $$Q'\subseteq\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),D(a)\right\}.$$ Since $D$ only depends on $f,g$, its number of factors can be absorbed into the implied constant in $\ll_{f,g}$. Thus, by looking at the irreducible factors of $D$, just as in the second half of the proof of Lemma \ref{lem:9}, we can assume that $D$ is irreducible. This reduces the problem to dealing with $f,g$ such that one of them is in $\fq[t][x_1,\dots,x_{N-1}]$ and thus does not depend on $x_N$. Case 2: Suppose that one of $f,g$ is in $\fq[t][x_1,\dots,x_{N-1}]$. Without loss of generality, we can assume that it is $g$. In this case the proof will be by induction on $\delta$, where $\delta$ is the $x_N$-degree of $f$.
If $\delta=0$, then $f, g\in\fq[t][x_1,\dots,x_{N-1}]$ and according to the outer induction hypothesis, $$\left|\left\{a\in \fq[t]^{N-1} : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a)\right\}\right|\ll_{f,g} q^{n\frac{N-2}{p}},$$ whence \begin{equation} \begin{split} \nonumber &\left|\left\{a\in \fq[t]^{N} : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a)\right\}\right| \\ =&\left|\left\{a\in \fq[t]^{N-1} : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a)\right\}\right| \\ &\cdot\left|\left\{a\in \fq[t] : \deg a\leq\floor{\frac{n}{p}}\right\}\right| \\ &\ll_{f,g} q^{n\frac{N-2}{p}}\cdot q^{\floor{\frac{n}{p}}}\leq q^{n\frac{N-2}{p}+\frac{n}{p}}=q^{n\frac{N-1}{p}}. \end{split} \end{equation} So let us assume $\delta>0$ and define \begin{equation} \begin{split} \nonumber S'&=\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f_1(a),g(a)\right\}, \\ S_0&=\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0\right\}, \\ S&=\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}}, \exists P, \deg P>\frac{n}{2}, P\mid f(a),g(a), \\ &\left.P\nmid f_1(a),g(a)\neq0\right\}. \end{split} \end{equation} It holds that $Q'\subseteq S\cup S'\cup S_0$. If $g\mid f_1$, then by subtracting a suitable multiple of $g$ from $f$, the set $Q'$ is not changed and the new $f$ is still coprime to $g$. In this way the $x_N$-degree of $f$ can be lowered, which then allows us to use the inner induction hypothesis to get the desired result. So assume now that $g\nmid f_1$; since $g$ is irreducible, this means that $g,f_1$ are coprime in $\fq[t][x_1,\dots,x_{N-1}]$. Thus, by applying the hypothesis of the outer induction to $f_1, g\in\fq[t][x_1,\dots,x_{N-1}]$, just as in the case $\delta=0$, we conclude that \begin{equation}\label{S' estimate} |S'|\ll_{f,g} q^{n\frac{N-1}{p}}.
\end{equation} Using this together with Propositions \ref{prop:7} and \ref{prop:8} allows us to conclude that $$|Q'|=|S\cup S'\cup S_0|\ll_{f,g} q^{n\frac{N-1}{p}}$$ as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:7}] We will show that $|S_0|\ll q^{n\frac{N-1}{p}}$ by induction on $N$. For $N=1$, since $g\in\fq[t][x_1,\dots,x_{N-1}]$, it follows that $g\in\fq[t]$, and since $g\neq 0$, $$\left\{a\in \fq[t] : \deg a\leq\floor{\frac{n}{p}},g(a)=0\right\}=\emptyset.$$ Let us assume now that the assertion is true for $N-1$, where $N>1$. Denote by $g_2$ the coefficient of the highest power of $x_{N-1}$ in $g$. If $g_2=0$, then $g\in \fq[t][x_1,\dots,x_{N-2}]$. By the induction hypothesis, for all $0\neq h\in \fq[t][x_1,\dots,x_{N-2}]$, \begin{equation} \begin{split} \nonumber &\left|\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},h(a)=0\right\}\right|= \\ &\left|\left\{a\in \fq[t]^{N-1} : \deg a_i\leq\floor{\frac{n}{p}},h(a)=0\right\}\right| \\ &\cdot\left|\left\{a\in \fq[t] : \deg a\leq\floor{\frac{n}{p}}\right\}\right| \\ &\ll q^{n\frac{N-2}{p}}\cdot q^{\floor{\frac{n}{p}}}\leq q^{n\frac{N-2}{p}+\frac{n}{p}}=q^{n\frac{N-1}{p}}, \end{split} \end{equation} as desired. Assume now that $g_2\neq 0$. Since $g_2\in \fq[t][x_1,\dots,x_{N-2}]$, it follows, as we have just seen, that $$\left|\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g_2(a)=0\right\}\right|\ll q^{n\frac{N-1}{p}}.$$ Now \begin{equation} \begin{split} \nonumber &\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0\right\} \\ \subset &\left\{a\in \fq[t]^{N} : \deg a_i\leq\floor{\frac{n}{p}},g_2(a)=0\right\} \\ &\cup \left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0, g_2(a)\neq0\right\}. \end{split} \end{equation} Thus we only need to show that $$\left|\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0, g_2(a)\neq0\right\}\right|\ll q^{n\frac{N-1}{p}}.$$ Denote by $\deg_{x_{N-1}}g$ the degree of $g$ as a polynomial in the variable $x_{N-1}$. 
For each $(a_1,\dots,a_{N-2})\in \fq[t]^{N-2}$ there are at most $\deg_{x_{N-1}}g$ values of $a_{N-1}\in\fq[t]$ such that $g(a_1,\dots,a_{N-2}, a_{N-1})=0$; indeed, this holds because $g(a_1,\dots,a_{N-2}, x_{N-1})$ is a polynomial in $x_{N-1}$ of degree $\deg_{x_{N-1}}g$. Using this and the induction hypothesis gives \begin{equation} \begin{split} \nonumber &\left|\left\{a\in \fq[t]^N : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0, g_2(a)\neq0\right\}\right| \\ =&\left|\left\{a\in \fq[t]^{N-1} : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0, g_2(a)\neq0\right\}\right| \\ &\cdot\left|\left\{a\in \fq[t] : \deg a\leq\floor{\frac{n}{p}}\right\}\right| \end{split} \end{equation} \begin{equation} \begin{split} \nonumber =&\left|\left\{a\in \fq[t]^{N-2} : \deg a_i\leq\floor{\frac{n}{p}},g(a)=0, g_2(a)\neq0\right\}\right| \\ &\cdot\left|\left\{a\in \fq[t] : \deg a\leq\floor{\frac{n}{p}}\right\}\right|^2 \cdot \deg_{x_{N-1}}g \\ \ll &q^{n\frac{N-3}{p}}\cdot q^{2\floor{\frac{n}{p}}}\leq q^{n\frac{N-3}{p}+\frac{2n}{p}}=q^{n\frac{N-1}{p}}, \end{split} \end{equation} as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:8}] Take $a'=(a_1,\dots,a_{N-1})\in \fq[t]^{N-1}$ with $\deg a_i\leq\floor{\frac{n}{p}}$. We will count the number of $a_N\in\fq[t]$ such that $a=(a',a_N)\in S$. Let $\deg g$ be the total degree of $g$. Since $a\in S$, the definition of $S$ implies that $g(a)\neq0$. Since $g\in\fq[t][x_1,\dots,x_{N-1}]$ does not depend on $x_N$, we get $g(a)=g(a')$. Therefore, $$\deg g(a)=\deg g(a')\leq\deg g\cdot\max\{\deg a_1,\dots,\deg a_{N-1}\}\leq\deg g\cdot\floor{\frac{n}{p}}.$$ Thus, since $\floor{\frac{n}{p}}\leq\frac{n}{2}$ and $g(a)\neq0$, there can be at most $\deg g$ different primes $P$ such that $\deg P>\frac{n}{2}$ and $P\mid g(a)$. Now for $P$ such that $P\nmid f_1(a)$, it holds that $f(a',x_N) \pmod{P}\in(\fq[t]/\langle P \rangle)[x_N ]$ is a polynomial of degree $\delta>0$ over the field $\fq[t]/\langle P\rangle$ (it is a field since $P$ is prime). 
Thus in this case $f(a',x_N) \pmod{P}$ has at most $\delta$ roots over the field $\fq[t]/\langle P\rangle$ (this is the reason why the case $P\mid f_1(a)$ needs to be dealt with separately, because in that case it might happen that $f(a',x_N) \pmod{P}$ is 0, which would prevent us from bounding the number of its roots). Now for $P$ such that $\deg P>\frac{n}{2}$ it holds that each $c\in\fq[t]/\langle P\rangle$ has at most one $a_N\in\fq[t]$ such that $\deg a_N\leq\floor{\frac{n}{p}}\leq\frac{n}{2}$ and $a_N\equiv c \pmod{P}$. We see that there are at most $\deg g$ primes $P$ such that $P\mid g(a')$, and for every such $P$ there are at most $\delta\cdot O(1)$ values of $a_N\in\fq[t]$ with $\deg a_N\leq \frac{n}{2}$ such that $P\mid f(a',a_N)$. We conclude that for each $a'=(a_1,\dots,a_{N-1})\in \fq[t]^{N-1}$ with $\deg a_i\leq\floor{\frac{n}{p}}$ there are $O(1)$ values of $a=(a',a_N)\in \fq[t]^N$ with $\deg a_N\leq\floor{\frac{n}{p}}$ such that $a\in S$. Thus \begin{equation} \begin{split} \nonumber |S|&\ll_{f,g} \left|\left\{(a_1,\dots,a_{N-1})\in \fq[t]^{N-1}:\deg a_i\leq\floor{\frac{n}{p}}\right\}\right| \\ &\ll_{f,g} q^{\floor{\frac{n}{p}}(N-1)}\ll_{f,g} q^{n\frac{N-1}{p}}. \end{split} \end{equation} \end{proof} \section{Remarks} \subsection{Positivity of $c_{f,2}$}\label{degenerate case} In Theorem \ref{main thm} we proved that \begin{equation} \frac{|\Set_{f,2}(n)|}{|\pi_q(n)|} = c_{f,2} + O_{f,q}\left(\frac {1}{\log_q n}\right)\quad \mbox{as } n\to \infty, \end{equation} with \begin{equation} c_{f,2}=\prod_P \left(1-\frac{\rho_f\left(P^2\right)}{|P|^2-|P|}\right), \end{equation} where the product runs over the prime polynomials $P$, and for every polynomial $D\in \fq[t]$, $$\rho_f(D)=|\{C\in\fq[t]: \deg C < \deg D, \gcd(D,C)=1, f(C)\equiv0\pmod{D}\}|.$$ Denote the discriminant of $f$ over $\fq(t)$ by $\Delta(f)$ and denote by $w_f(t)\in\fq[t]$ the leading coefficient of $f$ as a polynomial in $x$ over $\fq[t]$. 
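The local density $\rho_f(P^2)$ just defined can be computed for small $q$ and low-degree $P$ by direct enumeration. The following Python sketch (helper names of our own; $q=p$ prime, with elements of $\fq[t]$ stored as coefficient lists) brute-forces the definition:

```python
from itertools import product

# Brute-force sketch (illustrative only) of the local density rho_f(P^2) for
# q = p prime; polynomials in t are coefficient lists [c_0, c_1, ...]
# (lowest degree first) with entries in {0, ..., p-1}.

def trim(a):
    """Drop trailing zero coefficients; [] represents the zero polynomial."""
    while a and a[-1] == 0:
        a = a[:-1]
    return a

def padd(a, b, p):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return trim([(x + y) % p for x, y in zip(a, b)])

def pmul(a, b, p):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return trim(out)

def pmod(a, m, p):
    """Remainder of a modulo m in F_p[t] (m nonzero and trimmed)."""
    a = trim(a[:])
    inv = pow(m[-1], p - 2, p)  # Fermat inverse of the leading coefficient
    while len(a) >= len(m):
        c = (a[-1] * inv) % p
        shift = len(a) - len(m)
        for j, y in enumerate(m):
            a[shift + j] = (a[shift + j] - c * y) % p
        a = trim(a)
    return a

def rho(f_coeffs, P, p):
    """rho_f(P^2) = #{C : deg C < 2 deg P, gcd(P^2, C) = 1, P^2 | f(C)};
    for prime P the gcd condition just says P does not divide C.
    f_coeffs lists the x-coefficients of f (highest power of x first),
    each itself an F_p[t] coefficient list."""
    P = trim(P[:])
    P2 = pmul(P, P, p)
    count = 0
    for coeffs in product(range(p), repeat=len(P2) - 1):
        C = trim(list(coeffs))
        if not pmod(C, P, p):      # P | C, so gcd(P^2, C) != 1
            continue
        fC = []                    # Horner evaluation of f at x = C
        for a in f_coeffs:
            fC = padd(pmul(fC, C, p), a, p)
        if not pmod(fC, P2, p):    # P^2 | f(C)
            count += 1
    return count
```

For instance, with $f(x)=x^2+1$ over $\mathbb{F}_2[t]$ and $P=t$ one finds $\rho_f(P^2)=2$, coming from the residues $C=1$ and $C=t+1$ (in characteristic $2$, $f(C)=(C+1)^2$, so $t^2\mid f(C)$ exactly when $t\mid C+1$).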
We now investigate when $c_{f,2}$ is nonzero: \begin{prop}\label{prop8} The following conditions are equivalent: \begin{enumerate} \item $c_{f,2}>0$. \item There are infinitely many primes $P\in\fq[t]$ such that $f(P)$ is square-free. \item There is a prime $P\in\fq[t]$ with $\deg P>\max\{\deg\Delta(f),\deg w_f\}$ such that $f(P)$ is square-free. \item For every prime $P\in\fq[t]$, there is a polynomial $C\in\fq[t]$ such that $P\nmid C$ and $P^2\nmid f(C)$. \item For each prime $P\in\fq[t]$ such that $\deg P\leq\max\{\deg\Delta(f),\deg w_f\}$, there is a polynomial $C\in\fq[t]$ such that $P\nmid C$ and $P^2\nmid f(C)$. Note that this condition can be checked by a finite computation. \end{enumerate} \end{prop} \begin{proof} By Lemma \ref{bound-lem}, the convergence of the sum $\sum_P{\frac{1}{|P|^2-|P|}}$ implies the convergence of the product in $c_{f,2}$. Consequently, $c_{f,2}=0$ if and only if some term in the product vanishes, which happens if and only if there is some $P$ such that $\rho_f\left(P^2\right)=|P|^2-|P|$. However, from Hensel's Lemma (Lemma \ref{Hensel's Lemma}) it follows that for $P$ with $\deg P>\max\{\deg\Delta(f),\deg w_f\}$ we have $\rho_f\left(P^2\right)=\rho_f(P)\leq |P|-1<|P|^2-|P|$. This proves that $(1)\Leftrightarrow(4)$ and $(5)\Rightarrow(1)$. Also, obviously $(2)\Rightarrow(3)$ and $(4)\Rightarrow(5)$. Now, if $c_{f,2}>0$, then by Theorem \ref{main thm} there is $N$ sufficiently large such that for every $n>N$ there exists a prime $P$ of degree $n$ such that $f(P)$ is square-free. This proves $(1)\Rightarrow(2)$. Finally, if there is some $P\in\fq[t]$ with $\deg P>\max\{\deg\Delta(f),\deg w_f\}$ such that $f(P)$ is square-free, then for every prime $p\in\fq[t]$ with $\deg p\leq\max\{\deg\Delta(f),\deg w_f\}$ we have $p\neq P$, and since both are prime it follows that $p\nmid P$. But $f(P)$ is square-free, so we also have $p^2\nmid f(P)$. 
This proves $(3)\Rightarrow(5)$, which completes the proof of the equivalence of the conditions in the proposition. \end{proof} \subsection{Final comments} We make a number of final remarks regarding the proofs in this paper. \begin{enumerate} \item Poonen used a technique that consists of looking only at some of the coordinates of $a\in \fq[t]^{N}$ such that $\exists P, \deg P\geq M, P^2\mid f(a)$, and then proving that fixing those coordinates leaves $O(1)$ options for the remaining coordinates, whereas the range of the remaining coordinates grows with $n$. This proves that the density of the desired set is $0$. This technique is not available when working with a single variable (see the last part of the proof of Lemma 5.1 in \cite{Poonen}). \item The reason we need to find $g(x)\in\fq[t][x]$ such that $P^2\mid f(a)\Longrightarrow P\mid f(a),g(a)$ is in order to reduce the problem to the case where $g$ depends on one variable less than $f$ does. This allows us, as explained in (1) above, to fix the variables appearing in $g$ and to bound the number of possible values of the variable that does not appear in $g$, but appears in $f$ (see the last part of the proof of Lemma 5.1 in \cite{Poonen}). \end{enumerate}
\section{Introduction} \label{intro} The thermal emission rate of photons from strongly interacting matter encodes several interesting properties of the radiating medium (see, {\it e.g.}, Refs.~\cite{Alam:1999sc,Peitzmann:2002,Arleo:2004gn,Rapp:2009yu,Gale:2009gc} for reviews). Its spectral slope reflects the temperature of the system while its magnitude is related to the interaction strength of the charge carriers. In ultrarelativistic heavy-ion collisions (URHICs), the size of the interacting fireball is much smaller than the mean-free path of photons. Thus, the latter can probe the hot and dense interior of the medium. However, the observed photon spectra receive contributions from all reaction stages, {\it i.e.}, primordial $NN$ collisions, pre-equilibrium, quark-gluon plasma (QGP) and hadronic phases, plus final-state decays of short-lived resonances (these so-called ``direct'' photons exclude decays of long-lived hadrons, {\it e.g.}, $\pi$ and $\eta$). Calculations of direct-photon spectra require good control over both the microscopic emission rates and the space-time evolution of the medium. The latter not only determines the local emission temperature, but also the collective-flow field which generally imparts a net blue-shift on the radiated photons. In addition, the azimuthal asymmetry of the thermal photon spectra, $v_2^\gamma$, is of interest~\cite{Chatterjee:2006,Liu:2009kta,Holopainen:2011pd,vanHees:2011vb,Dion:2011pp,Mohanty:2011fp,Shen:2013cca,Linnyk:2013wma}: since the bulk $v_2$ requires several $\mathrm{fm}/c$ to build up, the observed value for photons helps to further constrain their emission history. 
Direct-photon spectra in URHICs have been extracted in Pb-Pb($\sqrt{s}=0.017\, A \mathrm{TeV}$) collisions at the Super Proton Synchrotron (SPS)~\cite{Aggarwal:2000th}, in Au-Au($\sqrt{s} =0.2\, A \mathrm{TeV}$) at the Relativistic Heavy-Ion Collider (RHIC)~\cite{Adare:2008fq}, and in Pb-Pb($\sqrt{s}=2.76\,A \mathrm{TeV}$) at the Large Hadron Collider (LHC)~\cite{Wilde:2012wc}. At SPS, various theoretical models could approximately reproduce the measured spectra by adding thermal radiation from an equilibrated expanding fireball to a primordial component estimated from pp data~\cite{Srivastava:2000pv,Huovinen:2001wx,Turbide:2003si,Mohanty:2009cd,Bauchle:2010sr}. The thermal yield prevailed over the primordial one up to transverse momenta of $q_T \approx2$-$4\, \mathrm{GeV}$. However, a decomposition into contributions from QGP and hadronic radiation, which would allow for a better characterization of the origin of the signal, remains ambiguous. By subtracting the primordial component from their data, the PHENIX collaboration extracted the ``excess radiation'' and determined its inverse-slope parameter (``effective temperature'') in Au-Au collisions at RHIC as $T_\mathrm{eff}=221\pm19^\mathrm{stat}\pm19^\mathrm{syst} \, \mathrm{MeV}$. Accounting for the aforementioned blue-shift effect, this result indicates that most of the radiation emanates from matter temperatures $T<200 \, \mathrm{MeV}$, challenging the notion of early QGP radiation~\cite{vanHees:2011vb}. A subsequent first measurement of the direct-photon $v_2$ supports this finding~\cite{Adare:2011zr}: in the regime where thermal radiation is expected to be large, $q_T \lesssim 3 \,\mathrm{GeV}$, $v_2^\gamma(q_T)$ turns out to be comparable to that of pions, which are only emitted at the end of the fireball evolution, {\it i.e.}, at thermal freezeout, $T_\mathrm{fo} \simeq 100\, \mathrm{MeV}$. 
The large $v_2^\gamma$, also found at LHC~\cite{Lohner:2012ct}, thus puts rather stringent constraints on the origin of the excess photons. In previous work~\cite{vanHees:2011vb} we have calculated thermal photon spectra at RHIC, differing from existing calculations in mainly two aspects. First, a more extensive set of hadronic thermal photon rates has been employed~\cite{Turbide:2003si}, which, in particular, includes the contributions from baryons and antibaryons (known to be important in the dilepton context~\cite{Rapp:2000pe,Rapp:2013nxa}). These rates approximately match complete leading-order (LO) QGP rates around the pseudo-critical temperature, $T_\mathrm{pc}\simeq 170 \,\mathrm{MeV}$~\cite{Arnold:2001ms}, thus rendering a near continuous emissivity across the transition region. Second, a schematic medium evolution was constructed utilizing a blast-wave type elliptic-fireball model, quantitatively fit to spectra and $v_2$ of bulk hadrons ($\pi$, K, p) at $T_\mathrm{fo} \simeq 100\,\mathrm{MeV}$ and multistrange hadrons (e.g., $\phi$ and $\Omega^-$) at $T_\mathrm{ch}=170\,\mathrm{MeV}$. The implementation of this ``sequential freezeout'' is phenomenologically motivated~\cite{He:2010vw}, and, in particular, leads to a saturation of the bulk-medium $v_2$ close to the transition regime, after about 4-$6\,\mathrm{fm}/c$ for central and semi-central Au-Au collisions at RHIC. As a result, the direct-photon $v_2$ increased by a factor of $\sim 3$ over existing calculations, reaching into the error bars of the PHENIX data. In the present paper we expand on and scrutinize these findings by extending the calculations to LHC energy and then employing a previously constructed ideal hydrodynamic bulk-evolution~\cite{He:2011zx} to conduct a detailed comparison to the emission characteristics of the fireball. Much like the latter, this hydro evolution has been quantitatively constrained by bulk-hadron spectra and $v_2$, utilizing the concept of sequential freezeout. 
Both evolutions will be based on a lattice-QCD equation of state (EoS) for the QGP, matched to a hadron resonance gas (HRG) with chemical freezeout. The comparisons will encompass the time evolution of radial and elliptic flow, temperature, four-volume and photon emission profile. Motivated by these comparisons, which identify the transition region as a key contributor to thermal photon spectra, we conjecture an enhancement of the currently available photon emission rates around $T_{\rm pc}$ and explore to what extent this could help to resolve the discrepancies with the data. Our article is organized as follows. In Sec.~\ref{sec_bulk} we recall basic ingredients and features of the fireball (Sec.~\ref{ssec_fb}) and hydrodynamic (Sec.~\ref{ssec_hydro}) bulk evolutions, including analyses of their time and temperature profiles of collective flow and four-volume (Sec.~\ref{ssec_comp}). In Sec.~\ref{sec_gam} we investigate the spectra and elliptic flow of thermal photons emitted from the fireball (Sec.~\ref{ssec_gam-fb}) and hydro (Sec.~\ref{ssec_gam-hydro}), and (after adding primordial production) compare to recent RHIC and LHC data. In Sec.~\ref{sec_disc} we analyze the differences in the fireball and hydro photon results in view of the insights from Sec.~\ref{sec_bulk} and discuss possible origins of current discrepancies with data. We conclude and outline future investigations in Sec.~\ref{sec_sum}. \section{Bulk Evolution Models} \label{sec_bulk} The calculation of thermal-photon spectra in URHICs is based on the differential emission rate per unit phase space from a strongly interacting medium of temperature $T$ and baryon-chemical potential $\mu_B$, \begin{equation} \begin{split} \label{rate} q_0\frac{dN_{\gamma}}{d^4xd^3q} = &-\frac{\alpha_\mathrm{EM}}{\pi^2} \ f^B(q_0;T) \\ &\times \im \Pi_\mathrm{EM}^T(q_0=q;\mu_B,T) \ . 
\end{split} \end{equation} Here, the rate is written in terms of the 3-D transverse part of the electromagnetic current correlator, $\Pi_\mathrm{EM}^T$, and the thermal Bose distribution function, $f^\mathrm{B}$, where $q_0$ and $q$ denote the energy and three-momentum of the photon in the local rest frame of the medium (with $q_0=q$ for real photons). This expression is leading order in the electromagnetic (EM) coupling, $\alpha_\mathrm{EM}$, which suffices since the photon, once produced, traverses the fireball without further EM interaction. Alternatively, one may express the rate in terms of photon-production scattering matrix elements, appropriately convoluted over the thermal distribution functions of the incoming particles. This approach is usually more convenient when evaluating tree-level diagrams ({\it e.g.}, $t$-channel meson exchanges) which are expected to prevail at high photon energies~\cite{Turbide:2003si}. The calculation of a thermal photon spectrum in URHICs requires integrating the above rate over the entire four-volume of the reaction, $V_4=\int \text{d}^4x$, accounting for the local temperature and collective expansion velocity of the emission point. In the following, we will briefly recall how this four-volume integration is done in two different models, {\it i.e.}, a schematic blast-wave type fireball (Sec.~\ref{ssec_fb}) and ideal hydrodynamics (Sec.~\ref{ssec_hydro}); both are based on the same equation of state and fits to the same set of bulk-hadron observables. This will be followed by a detailed comparison of the flow, temperature and four-volume profiles (Sec.~\ref{ssec_comp}). \subsection{Thermal Fireball} \label{ssec_fb} The thermal fireball model is based on an isotropically expanding cylinder with volume \begin{equation} V_\mathrm{FB}(t) = \pi a(t) b(t) (z_0 +ct), \label{V_fb} \end{equation} where the elliptic transverse area is characterized by semi-major and -minor axes, $a(t)$ and $b(t)$, respectively. 
Their initial values are estimated from the nuclear overlap at given impact parameter, while the initial longitudinal size, $z_0$, controls the formation time of the thermal medium. Assuming a constant total entropy, $S_\mathrm{tot}$, of the fireball at given collision centrality (fixed by the observed number of charged particles), the time evolution of the entropy density follows as $s(t)=S_\mathrm{tot}/V_\mathrm{FB}(t)$. Once the EoS is specified, i.e., the temperature dependence of $s$, one can convert $s(t)$ into $T(t)$. In our previous calculations of thermal-photon spectra~\cite{vanHees:2011vb} we used a quasi-particle QGP EoS with a first-order transition into a HRG and chemical freezeout at $T_\mathrm{c}=T_\mathrm{ch}=180\, \mathrm{MeV}$~\cite{Rapp:2000pe}. Here, we update the EoS with a fit to lattice-QCD data for the QGP part~\cite{He:2011zx}, smoothly matched to a HRG at $T_\mathrm{pc}=170\,\mathrm{MeV}$ and chemical freezeout at $T_\mathrm{ch}=160\,\mathrm{MeV}$, at both RHIC and LHC energies~\cite{Andronic:2005yp,Stachel:2013zma}. For $T<T_\mathrm{ch}$, effective chemical potentials for pions, kaons, antibaryons etc., are introduced~\cite{Rapp:2002fc} to preserve the finally observed hadron ratios extracted from the chemical-freezeout fits, while strong processes ({\it e.g.}, $\pi\pi\leftrightarrow\rho$) are assumed to maintain chemical equilibrium (so-called \emph{partial} chemical equilibrium). Following Ref.~\cite{He:2011zx} we refer to this EoS as ``\textit{latPHG}''. 
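The entropy-based cooling described above can be illustrated with a minimal sketch, replacing the latPHG EoS by a simple $s\propto T^3$ parametrization; all parameter values below (radii, total entropy, EoS constant) are schematic placeholders, not the fitted numbers used in this work.

```python
import math

# Schematic sketch of the fireball cooling law: isentropic expansion with
# s(t) = S_tot / V_FB(t) and an idealized massless-gas EoS s(T) = c_eos * T^3
# in place of the latPHG fit. All parameter values are illustrative
# placeholders; T comes out in fm^-1 (multiply by hbar*c ~ 0.197 GeV fm
# to quote it in GeV).
def fireball_temperature(t, S_tot=5.0e3, a0=5.0, b0=4.0,
                         acc_a=0.05, acc_b=0.08, z0=0.6, c_eos=15.0):
    a = a0 + 0.5 * acc_a * t**2        # semi-major axis a(t), fm (schematic)
    b = b0 + 0.5 * acc_b * t**2        # semi-minor axis b(t), fm (schematic)
    V = math.pi * a * b * (z0 + t)     # V_FB = pi a(t) b(t) (z0 + c t), c = 1
    s = S_tot / V                      # entropy conservation, s = S_tot / V
    return (s / c_eos) ** (1.0 / 3.0)  # invert s(T) = c_eos * T^3
```

With these placeholder numbers the temperature falls from about $1.5\,\mathrm{fm}^{-1}$ ($\approx290\,\mathrm{MeV}$) at $t=1\,\mathrm{fm}/c$ to about $0.55\,\mathrm{fm}^{-1}$ ($\approx110\,\mathrm{MeV}$) at $t=10\,\mathrm{fm}/c$, qualitatively mimicking the fireball evolution; the actual calculation inverts the tabulated latPHG $s(T)$ instead.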
\begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{pi-pT-spec-0-20-RHIC-lEoS} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{pi-v2-0-20-RHIC-lEoS} \end{minipage} \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{phi-pT-spec-0-20-RHIC-lEoS} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{phi-v2-0-20-RHIC-lEoS} \end{minipage} \caption{(Color online) Fits to the spectra and elliptic flow of light hadrons ($T_\mathrm{fo}\simeq110\,\mathrm{MeV}$, upper two panels) and $\phi$ mesons ($T_\mathrm{fo}=160\,\mathrm{MeV}$, lower two panels) in Au-Au($\sqrt{s}=200\,A \mathrm{GeV}$) collisions, using EoS \textit{latPHG} within either the fireball (dashed lines) or ideal hydrodynamic model (solid lines). The data are taken from Refs.~\cite{Abelev:2007rw,Adler:2003kt,Adler:2003cb,Adams:2003xp}.} \label{fig_hadrons} \end{figure} With this set-up, the time dependence of the elliptic radii, $a(t)$ and $b(t)$, can be constructed with guidance from hydrodynamic models~\cite{Kolb:2003dz} to approximately reproduce their time evolution of the radial and elliptic flow, as well as momentum-space anisotropy~\cite{vanHees:2011vb}. In addition, the idea of sequential freeze-out has been implemented, {\it i.e.}, a kinetic decoupling of multistrange hadrons ({\it e.g.}, $\phi$ and $\Omega^-$) at chemical freezeout. This requires a somewhat faster transverse expansion than in the original hydro models~\cite{Kolb:2003dz}, but is consistent with the observed phenomenon of constituent-quark number scaling of elliptic flow. Importantly, it implies that the bulk-$v_2$ essentially saturates close to $T_\mathrm{c}\simeq T_\mathrm{ch}$. With a suitable choice in initial conditions this can be recovered in hydrodynamic simulations~\cite{He:2011zx}, as we will see below. 
For a more accurate reproduction of the final-state hadron multiplicities compared to our previous work~\cite{vanHees:2011vb} (accounting for the modified EoS, feeddown and a narrowing of the fireball rapidity distributions due to the large transverse flow), we have reduced the total entropy in our fireball by ca.~20\%. With a freeze-out temperature of $T_\mathrm{fo} \simeq 100(160) \,\mathrm{MeV}$, the measured $p_T$ spectra and elliptic flow of light (multistrange) hadrons can be reasonably well described, cf.~dashed lines in Fig.~\ref{fig_hadrons}\footnote{We have not removed the entropy lost to multistrange hadrons (which amounts to $\sim$2\%) from the fireball; we neglect this correction in both fireball and hydro evolution.}. One might be concerned that the calculated $v_2(p_T)$ for thermal pions and protons exceeds the data toward higher $p_T$, in a regime which is still relevant for thermal photon production. However, the thermal $p_T$ spectra start to underpredict the experimental yields in this regime. For example, for pions with $p_T\simeq3\,\mathrm{GeV}$, the thermal spectrum accounts for ca.~65\% of the experimental yield; weighting the thermal pion $v_2$ of $\sim 22\%$ with this fraction gives $\sim 14\%$, which is not far from the data, $v_2(p_T=3\,\mathrm{GeV})\simeq11\%$. While these estimates pertain to kinetic freezeout, the calculations for the $\phi$ meson represent a snapshot at $T_{\mathrm{ch}}=160\,\mathrm{MeV}$; they approximately follow the measured spectra and $v_2$ out to higher $p_T$. \subsection{Ideal Hydrodynamics} \label{ssec_hydro} The hydrodynamic model used in the present study has been described in detail in Ref.~\cite{He:2011zx}. 
It is based on the 2+1-dimensional ideal hydro code of Ref.~\cite{Kolb:2003dz} (AZHYDRO), augmented with the updated EoS described above (\textit{latPHG}) and initial conditions tuned to reproduce bulk and multistrange hadron yields, spectra and $v_2$ in central and semicentral Au-Au collisions at full RHIC energy. Specifically, a rather compact initial-entropy density profile was adopted, proportional to the binary-collision density in the optical Glauber model (this is not unlike what has been obtained in gluon saturation models); with a thermalization time of $\tau_0=0.6 \; \mathrm{fm}/c$, the central initial temperature amounts to $T_0=398 \;\mathrm{MeV}$ in 0-20\% Au-Au($\sqrt{s_{NN}}=200\;\mathrm{GeV}$). Furthermore, a sizable initial radial flow with a surface value of around $0.13 c$ has been introduced (and a very small positive $v_2$ to optimize the hadron fits). All of these features (lattice EoS, compact initial profile and initial radial flow) lead to a more violent radial expansion, which enables an improved description of the bulk-hadron $v_2$ at kinetic freezeout even within ideal hydrodynamics. At the same time, it generates an earlier saturation of the bulk-medium $v_2$, which, in particular, requires multistrange hadrons to freeze out at the chemical freezeout temperature $T_{\rm ch}$ to reproduce their $p_T$ spectra and $v_2$ (this is not unwelcome as their hadronic cross sections might be too small to maintain kinetic equilibrium at lower temperatures). A more violent expansion has also been identified as a key to the solution of the ``HBT puzzle''~\cite{Pratt:2008qv}. As emphasized in Ref.~\cite{He:2011zx}, the ideal-hydro tunes are not meant to supplant principally more realistic viscous evolutions, but rather to explore limitations and flexibilities in the (ideal-) hydro description, within ``reasonable'' variations of the input. 
In Fig.~\ref{fig_hadrons}, the solid lines show some of the hydro results for bulk spectra and elliptic flow in comparison to RHIC data, which turn out to be very similar to the schematic fireball. \subsection{Comparison of Space-Time Properties} \label{ssec_comp} We are now in position to systematically compare the space-time evolutions of the schematic fireball and the ideal hydro solution. We focus on 0-20\% central Au-Au collisions at RHIC energy ($\sqrt{s}=200\,A\mathrm{GeV}$), where both models describe the bulk spectra and $v_2$ fairly well, based on the same EoS (\textit{latPHG}). Since the isotropic nature of the fireball is rather schematic compared to the more elaborate profiles in the hydro evolution, we investigate in this Section how this difference manifests itself in suitably averaged bulk quantities, which are expected to play an important role in the photon emission observables discussed in the next Section. \begin{figure}[!t] \begin{center} \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{temp-vs-time-comparison-0-20-RHIC-lEoS} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{RHIC_020p_average_vT} \end{minipage} \vspace*{5mm} \includegraphics[width=0.48\linewidth]{av-int-v2-comparison-0-20-RHIC-lEoS} \end{center} \caption{(Color online) Time evolution of average temperature (upper left panel), average transverse flow velocity (upper right panel) and elliptic flow (lower panel, using a particle mass of $m=300\,\mathrm{MeV}$) in the expanding fireball (dashed lines) and hydrodynamic evolution (solid lines). In addition, the upper left panel contains the temperature evolution of the central hydro cell (short-dashed line).} \label{fig_T-eps} \end{figure} Let us first investigate the time evolution of (average) temperature, radial and elliptic flow, see Fig.~\ref{fig_T-eps}. 
The temperature of the fireball is generally rather close to the average one from the hydro model, but is systematically slightly higher in the late stages of the evolution (see upper left panel). This goes along with a 10-15\% longer lifetime of the fireball evolution, indicating a somewhat slower cooling in the later stages. The radial flow (upper right panel of Fig.~\ref{fig_T-eps}) starts out higher in the hydro evolution (due to finite initial flow), but then levels off more quickly than in the fireball, eventually dropping below the fireball value. For the elliptic flow comparison, we evaluate the $v_2$ coefficient of the momentum spectra of particles with an average mass of $300\; \mathrm{MeV}$ at fixed proper time (lower panel of Fig.~\ref{fig_T-eps}). For the hydrodynamic evolution this involves a varying temperature while the fireball is spatially homogeneous at each time. \begin{figure}[!t] \begin{center} \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{V4-temp-comparison-0-20-RHIC-lEoS} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \vspace*{5mm} \includegraphics[width=\textwidth]{phot-rate-temp-histo-0-20-RHIC-lEoS-pT05} \end{minipage} \includegraphics[width=0.48 \linewidth]{phot-rate-temp-histo-0-20-RHIC-lEoS-pT20} \end{center} \caption{(Color online) Temperature evolution of the differential emission four-volume (upper left panel) and the double-differential photon emission rate (QGP for $T>T_\mathrm{pc}$ and hadronic for $T<T_\mathrm{pc}$) for two transverse momenta ($q_T=0.5 \, \mathrm{GeV}$ and $q_T=2 \, \mathrm{GeV}$ in the upper right and lower panel, respectively), in the expanding fireball (dashed lines) and hydrodynamic evolution (solid lines).} \label{fig_V4-spec} \end{figure} To properly interpret the observed photon spectra it is important to understand how the different emission stages contribute to the total. 
From the rate expression, Eq.~(\ref{rate}), one sees that the weighting is governed by three ingredients: the (differential) four-volume, the thermal weight (Bose factor) and the EM spectral function (at the photon point). The former two are governed by the bulk-medium evolution. To make a closer connection to the underlying matter properties, we now plot pertinent quantities as a function of temperature, rather than time. The upper left panel in Fig.~\ref{fig_V4-spec} shows the $T$ dependence of the differential four-volume, $\Delta V_4/\Delta T$, over temperature intervals of $\Delta T=10\,\mathrm{MeV}$ (and per unit rapidity). For both fireball and hydro evolution this quantity shows a distinct maximum structure around the pseudocritical temperature of $T_\mathrm{pc}\simeq170 \,\mathrm{MeV}$, as a consequence of the rapid change in the entropy density in the EoS (a remnant of the latent heat). One also finds a pronounced increase in the differential four-volume in the late(r) hadronic stages of the collision (underscoring the importance of a realistic hadronic photon emission rate). Again we find that the ideal hydro evolution seems to cool somewhat faster than the fireball (this might slightly change in a viscous evolution where some of the expansion work is dissipated into heat). Whereas hadron emission at kinetic freezeout in the hydro evolution is a continuous process, the fireball freezeout of the entire three-volume occurs at the end of the evolution. This difference is illustrated in Fig.~\ref{fig_entro}, where we plot the {\em time} dependence of the fraction of the total entropy that is above the kinetic-freezeout temperature in the hydro evolution. This fraction departs markedly from one well before the end of the evolution; in contrast, it is equal to one throughout the fireball evolution. 
Recall, however, that the three-volume at small temperatures must be very similar in both evolutions, since the total three-volume at freeze-out figures into the calculation of hadron multiplicities (and spectra), which agree rather well. Our hydro evolution, on the other hand, does not allow for the possibility that the freeze-out front ``re-swallows'' previously frozen-out matter cells. This effect has been studied, {\it e.g.}, in Ref.~\cite{Grassi:2004dz}, where it was found to be significant even in the context of hadron observables. It would be interesting to investigate its impact on thermal-photon spectra. \begin{figure}[!t] \centering{\includegraphics[width=0.48\linewidth]{Sfrac}} \caption{(Color online) Time evolution of the entropy fraction (relative to the total) which is in fluid cells at temperatures above the kinetic freezeout temperature in the hydrodynamic model.} \label{fig_entro} \end{figure} The increase in emission four-volume is counteracted by the drop in temperature, suppressing the thermal distribution function in the rate. This renders the energy argument in the Bose function as an important scale. As is well known (see, {\it e.g.}, Ref.~\cite{Rapp:2011is} for an analogous study in the dilepton context), larger energies increase the sensitivity of the exponential to temperature variations and thus will lead to a stronger weighting of earlier phases. 
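This energy dependence of the thermal weighting can be made explicit with a toy calculation; the cooling law and four-volume growth below are schematic placeholders of our own, not the fireball or hydro evolutions used in this work.

```python
import numpy as np

# Toy weighting illustrating the interplay of four-volume growth and the
# thermal factor: harder photons draw their yield from earlier, hotter stages.
# The power-law cooling and linear dV4/dt growth are schematic placeholders.
def mean_emission_temperature(q0):
    """Emission-weighted mean temperature (GeV) for photon energy q0 (GeV),
    using the Boltzmann limit exp(-q0/T) of the Bose factor."""
    t = np.linspace(0.6, 15.0, 500)        # time in fm/c
    T = 0.40 * (0.6 / t) ** 0.3            # schematic power-law cooling, GeV
    dV4 = 50.0 * t                         # schematic growth of dV4/dt
    w = dV4 * np.exp(-q0 / T)              # emission weight per time step
    return float(np.sum(T * w) / np.sum(w))
```

Evaluating this at $q_0=0.5$ and $2\,\mathrm{GeV}$ shows the mean emission temperature shifting upward with photon energy, mirroring the trend between the upper right and lower panels of Fig.~\ref{fig_V4-spec}.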
To exhibit this interplay in a more realistic way, we include the weights from the QGP and hadronic spectral functions, $\im \Pi_\mathrm{EM}$, in plotting the temperature-differential photon spectra for two representative transverse energies (see upper right and lower panel of Fig.~\ref{fig_V4-spec}); in other words, we use the full rate expression - the same for both evolution models - figuring in our comparisons to data in the following sections (recall that the AMY QGP rates (full LO result)~\cite{Arnold:2001ms} and the TRG hadronic rates~\cite{Turbide:2003si} are nearly degenerate around $T_\mathrm{pc}$, thus avoiding a ``bias'' for either phase). For low photon momenta (energies), $q_T=0.5\,\mathrm{GeV}$, the ``phase transition'' peak observed in the four-volume is markedly enhanced, but the high-temperature part now also exhibits significant emission strength. As expected, the high-temperature component increases further at larger momentum ($q_T =2\, \mathrm{GeV}$), albeit not dramatically. A pronounced peak around $T_\mathrm{pc}$ persists also for these kinematics. This analysis clearly identifies the importance of the ``pseudo-critical'' regime, around $T_\mathrm{pc}$, for thermal photon radiation, as found in Ref.~\cite{vanHees:2011vb}. The macrophysics encoded in the underlying EoS plays an important role through a rapid change in entropy density over a rather small temperature window, possibly augmented by a reduction in the velocity of sound, $c_{\mathrm{s}}^2$, figuring into the hydro evolution (a pertinent slowing down has not been implemented into the fireball evolution). An equally important role is played by the microphysics, {\it i.e.}, relatively large hadronic emission rates, comparable to the QGP ones, in the transition region. Together with substantial flow-induced blue shifts, this generated a sizable photon $v_2$ in our previous fireball calculations~\cite{vanHees:2011vb}.
These features can be recovered within hydrodynamic evolutions, \emph{if} the collective flow is built up sufficiently fast, which can be realized via a modest initial radial-flow field and a compact initial-density profile. However, quantitative differences remain, which we further analyze in the following two sections by comparing the photon spectra from both approaches with each other and to direct photon data at RHIC and LHC. \section{Direct Photon Spectra at RHIC and LHC} \label{sec_gam} All thermal photon spectra presented in this Section are based on the same emission rates, from a complete LO calculation in a perturbative QGP~\cite{Arnold:2001ms} and hadronic many-body calculations supplemented with $t$-channel meson exchange reactions~\cite{Turbide:2003si} as well as $\pi\pi$ and $\pi K$ Bremsstrahlung~\cite{Liu:2007zzw}. We also note that short-lived (strong) resonance decays are usually not subtracted from the experimental spectra. To account for these ``strong feeddown'' photons ({\it e.g.}, $\Delta\to N\gamma$, etc.), we follow Refs.~\cite{Rapp:1999us,vanHees:2011vb} by running our evolution for an extra $1\,\mathrm{fm}/c$ after kinetic freezeout. Strictly speaking, as elaborated in Ref.~\cite{vanHees:2007th}, these final-state decays would have to be calculated with slightly modified kinematics compared to thermal processes (with an extra Lorentz-$\gamma$ factor), but we neglect this difference in the present study. Ultimately, their contribution to the total direct-$\gamma$ $v_2$ turns out to be rather modest (e.g., increasing it by typically 5-10\% around $q_T=2$\,GeV), and even less for the spectra. 
\subsection{Thermal Fireball} \label{ssec_gam-fb} \begin{figure}[!t] \centering{\includegraphics[width=0.48 \linewidth]{dnphdqtAu020-eos}} \caption{(Color online) Comparison of thermal photon spectra in Au+Au($\sqrt{s}=0.2A \, \mathrm{TeV}$) collisions from an expanding fireball using either a first-order quasiparticle-QGP + HRG EoS with $T_\mathrm{c}=T_\mathrm{cm}=180\,\mathrm{MeV}$~\cite{vanHees:2011vb} (dashed lines) or a cross-over lQCD + HRG EoS with $T_\mathrm{pc}=170\, \mathrm{MeV}$ and $T_\mathrm{ch}=160\, \mathrm{MeV}$; red, blue and purple/pink lines represent QGP, hadronic and total contributions, respectively.} \label{fig_fb-rhic-comp} \end{figure} We start by updating the fireball calculations at RHIC, which in Ref.~\cite{vanHees:2011vb} were conducted with a first-order EoS (and a by-now outdated chemical-freezeout temperature of $T_\mathrm{ch}=180\, \mathrm{MeV}$), by implementing the \textit{latPHG} EoS of Ref.~\cite{He:2011zx} with chemical freezeout at $T_\mathrm{ch}=160\, \mathrm{MeV}$. The resulting thermal spectra are compared to the results of Ref.~\cite{vanHees:2011vb} in Fig.~\ref{fig_fb-rhic-comp} using the same total entropy. Since the partitioning of hadronic and QGP emission in the mixed phase of the first-order transition is now entirely assigned to the nonperturbative QGP phase, and due to a lower critical temperature in the \textit{latPHG}, the QGP contribution significantly increases while the hadronic part decreases compared to the first-order scenario. This ``reshuffling'' by itself corroborates that the major portion of the thermal emission originates from around the phase transition region, independent of the details of the EoS.
While the total (integrated) photon yield is not changed much (analogous to what has been found for low-mass dileptons~\cite{Rapp:2013nxa}), the high-$q_T$ part of the spectrum benefits from the increased temperature in the QGP close to $T_\mathrm{pc}$ and from the later chemical freezeout in the hadronic phase, which slows down the drop in temperature that arises in the presence of pion (and other effective) chemical potentials (for the inclusive yields, and at low $q_T$, the faster temperature drop is (over-) compensated by the fugacity factors). \begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{photon-rate-0-20-ellfb-RHIC-lEoS} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{photon-v2-0-20-ellfb-RHIC-lEoS} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and elliptic flow (right panel) from the expanding fireball in 0-20\% Au+Au($\sqrt{s}=0.2\,A \mathrm{TeV}$) collisions with updated total entropy using the \textit{latPHG} EoS, compared to PHENIX data~\cite{Adare:2008fq,Adare:2011zr}. In the left panel, blue dashed, red dashed-dotted, green short-dashed-dotted and purple solid lines correspond to hadronic, QGP and primordial contributions, and their sum (``total''), respectively. The primordial contribution is based on the PHENIX pp parameterization. In the right panel, the red short-dashed line is the combined thermal $v_2$; the purple solid and long-dashed lines are the total direct-photon $v_2$ using two different primordial contributions, either the PHENIX pp parameterization (as in the left panel) or an $x_t$-scaling ansatz (labeled ``total-2'').
Both primordial contributions are assumed to carry vanishing $v_2$.} \label{fig_fb-rhic} \end{figure} \begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{photon-rate-lhc-cent0-40} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{photon-v2-0-40central-ellfb-lhc} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and elliptic flow (right panel) from the expanding fireball in 0-40\% Pb+Pb($\sqrt{s}=2.76\,A\mathrm{TeV}$) collisions using the \textit{latPHG} EoS, compared to preliminary ALICE data~\cite{Wilde:2012wc,Lohner:2012ct}. In the left panel, red, blue, green and purple lines represent QGP, hadronic and primordial contributions, as well as the total, respectively. In the right panel we show the combined thermal $v_2$ (red dashed line) and the total $v_2$ (purple solid line).} \label{fig_fb-lhc} \end{figure} As mentioned above, we have further updated our fireball calculations by a careful readjustment of the entropy when using the \textit{latPHG}, leading to a 20\% decrease compared to Ref.~\cite{vanHees:2011vb}. With our nonlinear dependence of thermal photon production on the charged-particle multiplicity, $\propto N_\mathrm{ch}^x$ with $x \simeq 1.5$~\cite{Turbide:2003si,Rapp:2013nxa} (larger at higher $q_T$), the thermal yields are somewhat reduced compared to our earlier results. The comparison to the PHENIX data for direct photons in Fig.~\ref{fig_fb-rhic} shows discrepancies at low $q_T$ for both spectral yields and $v_2$, while for $q_T \ge 2 \, \mathrm{GeV}$ the calculations are within the experimental errors.
We illustrate uncertainties in the determination of the primordial photon component due to initial hard $NN$ scatterings: on the one hand, we used a phenomenological parameterization by the PHENIX collaboration of their pp data~\cite{Adare:2008fq}; on the other hand, we used the $x_t$-scaling ansatz of Ref.~\cite{Srivastava:2001bw}, fitted to the high-$q_T$ part of the PHENIX pp data with a $K$-factor of $2.5$. The latter spectrum turns out to be somewhat smaller than the PHENIX parameterization at small $q_T$. This has rather little impact on the total $q_T$ spectrum, but it affects the $v_2$ more significantly, inducing an increase of the total direct-photon $v_2$ of up to $\sim 25\%$ around $q_T\simeq2.5 \, \mathrm{GeV}$. Further theoretical studies of the hard component are needed to better quantify this effect, {\it e.g.}, via a suppression of fragmentation photons or nuclear effects on the initial parton distribution functions. Finally, we turn to LHC energies, where preliminary direct-photon data are available from ALICE for 0-40\% central Pb+Pb($\sqrt{s}=2.76\,A \mathrm{TeV}$) collisions~\cite{Wilde:2012wc,Lohner:2012ct}. We model these reactions with an average charged-particle multiplicity of $\text{d} N_\mathrm{ch}/\text{d} y=1040$ over a rapidity interval of $|y|<0.75$. For primordial photon production from binary $NN$ collisions, we employ the $x_t$-scaling ansatz~\cite{Srivastava:2001bw}, fitted to the high-$q_T$ ALICE photon spectra with a $K$-factor of 2. The description of the spectra and $v_2$ is at a comparable level as at RHIC, with indications for an underestimate in both observables in the regime where thermal radiation is most significant, cf.~Fig.~\ref{fig_fb-lhc}. \subsection{Ideal Hydrodynamics} \label{ssec_gam-hydro} The direct-photon results from the ideal-hydro tune with \textit{latPHG} EoS and default emission rates at RHIC are displayed in Fig.~\ref{fig_hy-rhic}. 
The QGP contribution to the $q_T$ spectra agrees within ca.~30\% with the fireball results, but the hadronic portion falls short by a larger margin, especially toward higher $q_T$. This is not unexpected as the lifetime of the hydro evolution in the hadronic phase is noticeably smaller, due to a faster cooling in the local rest frame of the ideal hydro cells (leading to smaller four-volumes, recall Fig.~\ref{fig_V4-spec}). In addition, the average temperature in the late stages is smaller in hydrodynamics than in the fireball (cf.~upper left panel of Fig.~\ref{fig_T-eps}), which leads to a reduction especially in the high-$q_T$ region of the hadronic emission. Consequently, the hydro spectra and $v_2$ come out lower than for the fireball evolution; this increases the discrepancy with the PHENIX data for both observables, although not by much. Both evolution models result in an underestimate of the first two data points of the PHENIX spectra, and barely reach into the lower portions of the error bars of the $v_2$ data: the maximal $v_2$ reaches $\sim 4.4\%$ for the hydrodynamic evolution, compared to $\sim 5.7\%$ for the fireball, both when using the PHENIX pp baseline spectra (larger for the $x_t$ scaling ansatz). However, our hydro results are well above other hydrodynamic calculations reported in the literature~\cite{Holopainen:2011pd,Dion:2011pp,Shen:2013vja}. One difference lies in a faster build-up of the $v_2$, which essentially saturates when the system reaches the pseudo-critical region in the cooling process, for both fireball and hydro (cf.~lower panel of Fig.~\ref{fig_T-eps}). As mentioned above, this feature is essential in describing the spectra and $v_2$ of multistrange hadrons with an early kinetic freezeout close to the chemical freezeout temperature, and thus rather well motivated by hadron phenomenology (including the constituent-quark number scaling of $v_2$). 
In the hydrodynamic modeling this can be realized by initial conditions including a finite transverse flow at thermalization, together with a compact energy-density profile. Both features increase the transverse flow early on, which ultimately leads to an earlier saturation of $v_2$ (since the initial spatial anisotropy is converted faster into momentum space); it also increases the blue shift of the photons emitted from around $T_\mathrm{pc}$, which helps to build up the photon yield with large $v_2$ in the $q_T=2$-$3\, \mathrm{GeV}$ region. Another difference to existing hydro calculations is the larger rate in the hadronic phase, in particular the contributions associated with the photon point of the in-medium $\rho$ spectral function, which includes sources from interactions involving baryons and antibaryons~\cite{Rapp:1999us} and higher excited meson resonances~\cite{Rapp:1999qu}. \begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{dN2piqTdqTdy_original} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{v2-rhic020-hy} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and $v_2$ (right panel) in Au+Au($\sqrt{s}=0.2\, A\mathrm{TeV}$) using ideal hydrodynamics, compared to PHENIX data~\cite{Adare:2008fq,Adare:2011zr}; line identifications as Fig.~\ref{fig_fb-rhic}.} \label{fig_hy-rhic} \end{figure} Next, we turn to our hydro results at LHC, adopting a similar ansatz for the initial conditions as at RHIC, {\it i.e.}, with a finite transverse flow and compact entropy density profile. A factor of $\sim 2$ underestimate of the preliminary photon spectra measured by ALICE in 0-40\% Pb-Pb($\sqrt{s} =2.76\, A\mathrm{TeV}$) is found for momenta below $q_T \simeq 2\, \mathrm{GeV}$, see left panel of Fig.~\ref{fig_hy-lhc}. The calculated $v_2$ is not inconsistent with these data within the current experimental uncertainties, see right panel of Fig.~\ref{fig_hy-lhc}. 
Although the hadronic component in the thermal spectra is again considerably smaller than in the fireball spectra, the relatively stronger QGP component in both calculations compared to RHIC renders the overall impact of the hadronic emission less relevant. The QGP component from the hydro is now somewhat stronger than from the fireball (especially at high $q_T$), leading to closer agreement in the total photon spectra and $v_2$, and also with the data. Note that the $v_2$ of the thermal component at $q_T\gsim3\,\mathrm{GeV}$ is significantly larger for the fireball than for the hydro (cf.~dashed lines in the right panels of Figs.~\ref{fig_fb-lhc} and \ref{fig_hy-lhc}, respectively), due to the larger hadronic component in the former. \begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{dndpt-lhc040-hy} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{v2-lhc040-hy} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and $v_2$ (right panel) in 0-40\% Pb+Pb($\sqrt{s}=2.76\,A \mathrm{TeV}$) using ideal hydrodynamics, compared to preliminary ALICE data~\cite{Wilde:2012wc,Lohner:2012ct}; line identifications as in Fig.~\ref{fig_hy-rhic}.} \label{fig_hy-lhc} \end{figure} \section{Discussion} \label{sec_disc} The general trend of our results reported above, for both fireball and hydrodynamic evolutions, is an underestimate of both spectra and $v_2$ relative to the PHENIX and preliminary ALICE data for photon momenta $q_T \le 3\, \mathrm{GeV}$, which is the region where the thermal radiation is expected to be most relevant. In the following, we will investigate how these deficits may be overcome. For simplicity, we concentrate on the hydrodynamic space-time evolution for these studies.
One option to increase thermal radiation in URHICs is a decrease of the thermalization time, $\tau_0$, of the medium, as investigated, {\it e.g.}, in Refs.~\cite{Chatterjee:2008tp,vanHees:2011vb,Chatterjee:2012dn}. While the total yield generally increases for $q_T>1\, \mathrm{GeV}$, its slope becomes harder and the total $v_2$ becomes smaller, neither of which is favored by the data. This reiterates the need for a softer radiation source with larger $v_2$. In the following, we will stick to our default value of $\tau_0=0.6\,\mathrm{fm}/c$. \begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{dN2piqTdqTdy_comparison} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{v2-rhic020-hy-en} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and $v_2$ (right panel) from hydrodynamics at RHIC when introducing a ``pseudo-critical'' enhancement of QGP and hadronic rates around $T_{\rm pc}$, compared to PHENIX data~\cite{Adare:2008fq,Adare:2011zr}; line identifications as Fig.~\ref{fig_fb-rhic}.} \label{fig_rhic-ampl} \end{figure} In Refs.~\cite{vanHees:2011vb,Rapp:2013ema,Shen:2013vja} an enhancement of the photon emission rates in the pseudo-critical region, beyond the default rates used above, has been conjectured. This may not be unexpected from a theoretical point of view. On the QGP side, the AMY rates are based on perturbative parton rescattering, which in other contexts tends to fall short in producing sufficient interaction strength, {\it e.g.}, in both phenomenology and lattice calculations of $\eta/s$ or the heavy-quark diffusion coefficient~\cite{Schafer:2009dj,Rapp:2009my}. Especially close to the hadronization transition, confining interactions are expected to play an important role (as, {\it e.g.}, borne out by lattice calculations of the heavy-quark free energies~\cite{Kaczmarek:2005ui}).
An increase in partonic scattering rates is a natural mechanism to also increase photon radiation (see, {\it e.g.}, Ref.~\cite{Goloviznin:2012dy}), which is quite different from a perturbative scenario with weakly interacting quasi-particles. On the hadronic side, an enhancement of the current rates is conceivable as well, since the TRG rates (including contributions from the in-medium $\rho$ spectral function) may not exhaust all relevant reaction channels in hadronic resonance matter; investigations to identify and calculate possibly important channels not considered thus far are in progress~\cite{Holt:2014} (we note in passing that hadronic Bremsstrahlung is an unlikely candidate since its spectrum tends to be too soft~\cite{Liu:2007zzw}). To mimic a ``pseudo-critical'' enhancement of our default rates, we increase the latter by a baseline factor of 2, further amplified up to a maximum factor of 3 at $T_{\rm pc}=170\, \mathrm{MeV}$; the amplification is ramped up linearly from the baseline at $T=140\,\mathrm{MeV}$ and back down to it at $T=200\, \mathrm{MeV}$. The results are encouraging (cf.~Figs.~\ref{fig_rhic-ampl} and \ref{fig_lhc-ampl}): the description of both PHENIX and preliminary ALICE spectra and $v_2$ improves significantly. The calculated $v_2$ at RHIC still tends to only reach into the lower portions of the experimental errors, but we recall that larger hadronic contributions, as suggested by the fireball calculations, would help to increase it further.
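For definiteness, the pseudo-critical amplification of the default rates described above corresponds to the piecewise-linear factor (our parameterization of the ramp; the precise functional form is not unique)
\begin{equation}
A(T) = \begin{cases}
2 \,, & T < 140\,\mathrm{MeV} \ \ \text{or} \ \ T > 200\,\mathrm{MeV} \,, \\
2 + (T-140\,\mathrm{MeV})/(30\,\mathrm{MeV}) \,, & 140\,\mathrm{MeV} \le T \le 170\,\mathrm{MeV} \,, \\
2 + (200\,\mathrm{MeV}-T)/(30\,\mathrm{MeV}) \,, & 170\,\mathrm{MeV} < T \le 200\,\mathrm{MeV} \,,
\end{cases}
\end{equation}
which multiplies both the QGP and hadronic rates and reaches its maximum of 3 at $T_{\rm pc}$.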
\begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{dndpt-lhc040-hy-en} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{v2-lhc040-hy-en} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and $v_2$ (right panel) at LHC with enhanced photon rates around $T_\mathrm{pc}$, compared to preliminary ALICE data~\cite{Wilde:2012wc,Lohner:2012ct}.} \label{fig_lhc-ampl} \end{figure} Let us briefly expand on a speculation raised in Ref.~\cite{vanHees:2011vb}, that there might be a hitherto undetermined uncertainty in the subtraction of the radiative $\omega \rightarrow \pi^0\gamma$ decays, since the latter have not been explicitly measured in the low-$q_T$ region in the Au-Au environment. For this purpose, and to obtain an absolute upper estimate, we simply add to our thermal spectra (calculated with the amplified rates) the photon contribution from final-state $\omega$ decays based on our hydro $\omega$ spectra at thermal freezeout (as a three-pion or $\rho\pi$ resonance, the $\omega$ receives a pion fugacity factor to the third power). The result of this exercise is shown in Fig.~\ref{fig_rhic-omg}, illustrating an appreciable effect on both spectra and $v_2$, which would still be significant if reduced by a factor of 2.
\begin{figure}[!t] \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{dN2piqTdqTdy_omegadecay} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=\textwidth]{v2_omegadecay} \end{minipage} \caption{(Color online) Direct photon spectra (left panel) and $v_2$ (right panel) from hydrodynamics at RHIC when adding $\omega \rightarrow\pi^0 + \gamma$ decays at thermal freezeout to the scenario with enhanced rates (dash-dotted line), compared to the enhanced-rate (dashed line) and default-rate (solid line) scenarios, as well as the PHENIX data~\cite{Adare:2008fq,Adare:2011zr}; all calculations use the PHENIX pp baseline for the primordial component.} \label{fig_rhic-omg} \end{figure} Finally, we conduct a schematic study of the effect of quark undersaturation in the early (Q)GP phases, as expected from gluon saturation models~\cite{McLerran:2014hza}. Similar to earlier calculations for thermal EM emission~\cite{Strickland:1994rf,Srivastava:1996qd,Rapp:2000pe}, we find the gluon Compton process, $gq\to q\gamma$, to still contribute appreciably (and with a harder slope than in a chemically equilibrated QGP at the same total entropy), unless the $q\bar q$ undersaturation is strong enough to largely suppress the early thermal yield altogether. This suppression would have to be made up by an even larger enhancement in the later phases compared to what we assumed above. \section{Summary and Conclusions} \label{sec_sum} In this paper we have studied the properties of thermal photon radiation at collider energies, in an attempt to better understand recent measurements of direct photon spectra and their elliptic flow. Using QGP and hadronic thermal emission rates as available from the literature, we first focused on a detailed comparison of the space-time evolution as given by a blast-wave type fireball and ideal hydrodynamics. 
Both were based on the same equation of state and fits to the same set of hadron data using the concept of sequential freeze-out for multistrange and light hadrons. The relevance of this concept for photon radiation lies in a rather early saturation of $v_2$ and larger blue shifts by the time the expanding system reaches the phase transition region (in the hydro model, this can be realized by compact initial conditions with non-zero radial flow). We have found that the emission characteristics of the QGP part agree rather well between hydro and fireball, while the latter leads to significantly larger photon radiation in the hadronic phase, especially toward higher $q_T$. We traced this back to a slower cooling with larger average temperatures and radial flow in the fireball, which, at least in part, is due to a continuous freezeout in hydrodynamics leading to an appreciable reduction of the ``active'' matter cells in the later stages of the evolution. Both evolution models clearly identify the transition region around $T_{\rm pc}\simeq170\,\mathrm{MeV}$ as a key source of thermal photon emission. After the addition of primordial photons extrapolated from pp collisions, both hydro and fireball results tend to be somewhat (although not dramatically) below the measured spectra and $v_2$ in Au-Au and Pb-Pb collisions at RHIC and LHC, with a preference for the fireball due to its larger hadronic contribution. We then shifted our focus to the microscopic emission rates. We argued that an enhancement of the currently employed emission rates is plausible, especially in the pseudo-critical region where the medium is expected to be most strongly coupled. Upon amplifying our default rates by a baseline factor of 2, reaching up to 3 around $T_{\rm pc}\pm 30\, \mathrm{MeV}$, we found that the photon results from the hydro model come rather close to the experimental spectra and $v_2$ within current uncertainties. 
The additional hadronic contributions suggested by the fireball model would further improve the situation. Microscopic calculations of photon rates to search for additional sources not considered thus far are underway. \textit{ Acknowledgments.--} We thank C.~Gale and R.J.~Fries for discussions, and gratefully acknowledge fruitful exchanges at the EMMI RRTF workshop on ``Direct-photon flow puzzle'' organized by K.~Reygers and J.~Stachel. This work has been supported by the U.S. National Science Foundation through grants PHY-0969394 and PHY-1306359, by the A.-v.-Humboldt Foundation, by NSFC grant 11305089, by the German Federal Ministry of Education and Research (BMBF F{\"o}rderkennzeichen 05P12RFFTS), and by the Hessian initiative for excellence (LOEWE) through the Helmholtz International Center for FAIR (HIC for FAIR).
\section{Introduction} Video understanding is a computer vision problem that has attracted the attention of the deep-learning community, notably via the usage of the two-stream convolutional network \cite{Simonyan2014}. Such a framework uses a deep convolutional neural network (dCNN) to extract static RGB (Red-Green-Blue) features, as well as motion cues from another network that deconstructs the optic flow of a given video clip. Notably, there has been plenty of work in utilising different types of network architectures for factorising the RGB and optical-flow based features. For example, an inception network \cite{Szegedy2016} uses $1 \times 1$ convolutions in its inception block to estimate cross-channel correlations, which is then followed by the estimation of cross-spatial and cross-channel correlations. A residual network (ResNet), on the other hand, learns residuals on the inputs \cite{He2016}. There are obvious problems that have impeded the accuracy of deep neural networks for video classification. Videos, unlike still images, have short and long temporal correlations, attributes that single-frame convolutional neural networks fail to discover. Therefore, the first hurdle is designing recurrent and feedforward networks that can learn this latent temporal structure. Nonetheless, there has been much progress in devising novel neural network architectures since the work of \cite{Karpathy2014}. Another problem is the large storage and memory requirement for analysing moderately sized video snippets. One requires relatively large computing resources to train ultra-deep neural networks that can learn the subtleties in temporal correlations, given varying lighting, camera angles, pose, etc. It is also difficult to utilise classical image augmentation techniques on a video stream. Additionally, video-based features (unlike those in static images) evolve with dynamics spanning several orders of magnitude in time-scale.
To add to this long list of technical difficulties is the problem of the semantic gap, i.e., whether classification/labelling/captioning can lead to ``understanding'' the video snippet. We improve upon existing technology by combining Inception networks and ResNets using a Support-Vector-Machine (SVM) classifier that is further combined in a multi-kernel setting to yield, to the best of our knowledge, an increased performance on the HMDB51 data-set \cite{Kuehne2013}. Notably, our work makes the following contributions: \begin{itemize} \item We introduce pillar networks that are deep as well as wide (depending on use-case), enabling horizontal scalability. This is important for a production-quality video analytics framework that has to operate under the constraints of computational time. \item HMDB51 is a dataset with a wide variety of heterogeneity -- camera angle, video quality, pose, etc. Given that our framework achieves higher accuracy on this dataset, our methodology will be applicable to datasets with similar statistical heterogeneity embedded in them. \end{itemize} \section{Methods} In this section, we describe the dataset, the network architectures and the multi-kernel learning based support-vector-machine (SVM) setup that we utilise in our four-stream dCNN pillar network for activity recognition. We refer the readers to the original network architectures in \cite{Wang2016} and \cite{Ma2017} for further technical details. While we do not report the results here, classification methodologies like AdaBoost, gradient boosting, random forests, etc. have classification accuracy in the range of 5-55\% for this dataset, for either the RGB or the optic-flow based features. \subsection{Dataset} The HMDB51 dataset \cite{Kuehne2013} is an action classification dataset that comprises 6,766 video clips divided into 51 action classes.
Although the much larger UCF101 dataset exists with 101 action classes \cite{Soomro2012}, HMDB51 has proven to be more challenging. This is because each video has been filmed using a variety of viewpoints, occlusions, camera motions, video quality, etc., highlighting the challenges of video-based prediction problems. The second motivation behind using such a dataset lies in the fact that HMDB51 has storage and compute requirements that are fulfilled by a modern workstation with GPUs -- obviating the need for expensive cloud-based compute resources. All experiments were done on an Intel Xeon E5-2687W 3 GHz 128 GB workstation with two 12GB nVIDIA TITAN Xp GPUs. As in the original evaluation scheme, we report accuracy as an average over the three training/testing splits. \begin{figure*}[h] \centering \includegraphics[scale=0.15]{pillar_net} \caption{\textbf{The Pillar Network framework: }In this specific instantiation, there are two types of networks, namely ResNets and Inception networks, that factorise static (RGB) and dynamic (optic flow) inputs obtained from a video. For the case-specific deep tensors we use iDT and MIFS, under a multi-kernel learning framework. Additional feature tensors (hand-crafted or otherwise) can be learnt, according to the specific need of the problem, and incorporated as a new pillar.} \label{fig:framework} \end{figure*} \subsection{Inception layers for RGB and flow extraction} We use the inception layer architecture described in \cite{Wang2016}. Each video is divided into $N$ segments, and a short sub-segment is randomly selected from each segment so that a preliminary prediction can be produced from each snippet. These are later combined to form a video-level prediction. An Inception with Batch Normalisation network \cite{Ioffe2015} is utilised for both the spatial and the optic-flow stream. The feature size of each inception network is fixed at 1024. For further details on network pre-training, construction, etc.
please refer to \cite{Wang2016}. \subsection{Residual layers for RGB and flow extraction} We utilise the network architecture proposed in \cite{Ma2017}, where the authors leverage recurrent networks and convolutions over temporally constructed feature matrices, as shown in Fig. \ref{fig:framework}. In our instantiation, we truncate the network to yield 2048 features, which is different from \cite{Ma2017}, where these features feed into an LSTM (Long Short-Term Memory) network. The spatial stream network takes in RGB images as input with a ResNet-101 \cite{He2016} as a feature extractor; this ResNet-101 spatial-stream ConvNet has been pre-trained on the ImageNet dataset. The temporal stream stacks ten optical-flow images using the pre-training protocol suggested in \cite{Wang2016}. The feature size of each ResNet network is fixed at 2048. For further details on network pre-training, construction, etc. please refer to \cite{Ma2017}. \subsection{Hand-crafted features: iDT and MIFS} We follow \cite{Wang2013a} for generating the Fisher-encoded Improved Dense Trajectory (iDT) features\footnote{http://lear.inrialpes.fr/~wang/improved\_trajectories}. First, tracking points are created by median filtering in a dense optical-flow field of 15 frames. We can then compute descriptors such as Trajectory, histograms of oriented gradients (HOG), histograms of optical flow (HOF) and motion boundary histograms (MBH) \cite{Wang2013a}. Descriptors within a space-time volume are aligned with a trajectory to encode the motion information; Fisher encoding \cite{Perronnin2007} is then applied on the local descriptors to generate the video representation for classification. Similar to the Fisher-encoded iDT, instead of using feature points extracted from a single time scale, Multi-skIp Feature Stacking (MIFS) \cite{Lan2015} extracts and stacks all of the raw feature points from multiple time skips (scales) before encoding them into a Fisher vector.
MIFS achieves shift-invariance in the frequency domain and captures a longer range of action indicators by recapturing information at coarse scales. \subsection{Support Vector Machine (SVM) with multi-kernel learning (MKL)} The basis of the second stage of our classification methodology rests on a maximum-margin classifier -- a support vector machine (SVM). Given training tuples $(x_{i},y_{i})$ and weights $w$, under a hinge loss, an SVM solves the primal problem \cite{Scholkopf2001}, \begin{eqnarray} \mathop {\min }\limits_{w,b,\zeta } \frac{1}{2}{w^T}w & + & C\sum\limits_{i = 1}^n {{\zeta _i}} \nonumber \\ & \text{s.t.} & {y_i}\left( {{w^T}\phi \left( {{x_i}} \right) + b} \right) \geqslant 1 - {\zeta _i} \nonumber \\ & & {\zeta _i} \geqslant 0,\; i = 1, \ldots ,n \nonumber \end{eqnarray} As is customary in kernel methods, computations involving $\phi$ are handled using kernel functions $k\left( {{x_i},{x_j}} \right) = \phi \left( {{x_i}} \right) \cdot \phi \left( {{x_j}} \right)$. In all of our experiments, a Radial Basis Function (RBF) kernel has been used. $C$ (fixed at 100) is the penalty parameter and $\zeta$ is the slack variable. For multiple kernel learning (MKL), we follow the recipe of \cite{Sonnenburg2006} (\textit{cf.} \cite{Xu2009}) and formulate a convex combination of sub-kernels as \begin{eqnarray} {\mathbf{\kappa }}\left( {{x_i},{x_j}} \right) = \sum\limits_{k = 1}^K {{\beta _k}} {k_k}\left( {{x_i},{x_j}} \right) \label{eqn:MKL} \end{eqnarray} In contrast to \cite{Sonnenburg2006}, we use L2-regularised ${\beta _k} \geqslant 0$ with $ \sum\limits_{k = 1}^K {{\beta _k}^{2} \leq 1} $. The L2-regularised multiple kernel is learnt by formulating Eqn.~\ref{eqn:MKL} as a semi-infinite linear programming (SILP) problem.
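For concreteness, the convex combination of sub-kernels in Eqn.~\ref{eqn:MKL} with L2-constrained weights can be sketched in a few lines of NumPy. This is an illustrative sketch of the combination step only: the bandwidths, weights and feature dimensions below are placeholder values, and in our pipeline the $\beta_k$ are learnt by the SILP procedure rather than fixed by hand.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # RBF sub-kernel k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combine_kernels(kernels, betas):
    # convex combination sum_k beta_k K_k with beta_k >= 0 and ||beta||_2 <= 1
    betas = np.clip(np.asarray(betas, dtype=float), 0.0, None)
    norm = np.linalg.norm(betas)
    if norm > 1.0:  # project the weights onto the unit L2 ball
        betas = betas / norm
    return sum(b * K for b, K in zip(betas, kernels)), betas

# one sub-kernel per feature tensor, e.g. an RGB and an optic-flow stream
rng = np.random.default_rng(0)
X_rgb = rng.normal(size=(8, 1024))
X_flow = rng.normal(size=(8, 1024))
K, betas = combine_kernels(
    [rbf_kernel(X_rgb, 1e-3), rbf_kernel(X_flow, 1e-3)], [0.9, 0.9])
```

Since each RBF sub-kernel has unit diagonal, the diagonal of the combined Gram matrix equals $\sum_k \beta_k$, which is a quick sanity check on the combination.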
During each iteration, an SVM solver\footnote{http://www.shogun-toolbox.org/} is first instantiated to obtain the weighted support vectors; subsequently, a linear programming (LP) problem is solved using Mosek\footnote{http://docs.mosek.com/6.0/toolbox/}. \section{Results} \begin{table}[] \centering \caption{SVM accuracy results for the Inception Network} \label{table:inception} \begin{tabular}{@{}llll@{}} \toprule & optical flow & RGB & MKL \\ \midrule split-1 & 61\% & 54\% & 68.1\% \\ split-2 & 62.4\% & 50.8\% & 69.9\% \\ split-3 & 64\% & 49.2\% & 70.0\% \\ Average & 62.5\% & 51.3\% & 69.3\% \\ \bottomrule \end{tabular} \end{table} \begin{table}[] \centering \caption{SVM accuracy results for the ResNet Network} \label{table:resnet} \begin{tabular}{@{}llll@{}} \toprule & optical flow & RGB & MKL \\ \midrule split-1 & 58.5\% & 53.1\% & 64.1\% \\ split-2 & 57.5\% & 48.6\% & 62.9\% \\ split-3 & 57.2\% & 48\% & 62.5\% \\ Average & 57.7\% & 49.9\% & 63.2\% \\ \bottomrule \end{tabular} \end{table} \begin{table}[] \centering \caption{Fusing all kernels} \label{table:average} \begin{tabular}{@{}ll@{}} \toprule & Accuracy \\ \midrule split-1 & 73.1\% \\ split-2 & 72.3\% \\ split-3 & 72.9\% \\ Average & 72.8\% \\ \bottomrule \end{tabular} \end{table} We use 3570 videos from the HMDB51 dataset for training the SVMs under a multiple kernel learning (MKL) framework. Utilising four networks yields four feature tensors that are fused in steps to form a single prediction (Figure \ref{fig:framework}). The feature tensors for both RGB and Flow are extracted from the output of the last fully-connected layer, with 1024 dimensions for the Inception network and 2048 for the ResNet network. Four separate SVMs are trained on these feature tensors. Results are shown for the two networks used -- Inception (Table \ref{table:inception}) and ResNet (Table \ref{table:resnet}).
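A minimal sketch of this per-stream training step, using scikit-learn's \texttt{SVC} with precomputed Gram matrices, is shown below. The feature tensors here are random stand-ins with the stated dimensionalities (not the actual network outputs), the stream names are ours, and the labels are a toy two-class problem; the sketch only illustrates the mechanics of fitting one SVM per feature tensor.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_gram(A, B, gamma):
    # Gram matrix of the RBF kernel between rows of A and rows of B
    d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(1)
# random stand-ins for the four feature tensors (Inception: 1024-d, ResNet: 2048-d)
streams = {"inception_rgb": 1024, "inception_flow": 1024,
           "resnet_rgb": 2048, "resnet_flow": 2048}
y = np.array([0, 0, 1, 1] * 5)  # toy two-class labels
models, grams = {}, {}
for name, dim in streams.items():
    X = rng.normal(size=(len(y), dim)) + y[:, None]  # toy separable features
    K = rbf_gram(X, X, 1.0 / dim)
    models[name] = SVC(kernel="precomputed", C=100).fit(K, y)  # C = 100 as above
    grams[name] = K
train_acc = np.mean(models["inception_rgb"].predict(grams["inception_rgb"]) == y)
```

The per-stream Gram matrices produced this way are exactly the sub-kernels that enter the MKL fusion stage.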
Subsequently, we fuse the multiple kernels learnt from the individual classifiers using a semi-infinite linear optimisation problem. The average result from the three splits is displayed in Table \ref{table:average}. By fusing hand-crafted features, such as iDT and MIFS, with the features generated from a dCNN, the performance of pillar networks is further boosted. Such additional features take the place of `case-specific tensors' in Figure \ref{fig:framework}. It is apparent that combining kernels from various stages of the prediction process yields better accuracy. In particular, the confusion matrix suggests that the two worst-performing classes are: (a) waving one's hands, which is confused with walking, and (b) throwing objects, which is confused with swinging a baseball bat. Unsurprisingly, actions such as chewing are confused with the eating action class. Table \ref{table:stateofart} compares our method to other methods in the literature. Of notable mention are the TS-LSTM and the Temporal-Inception methods that form part of the framework that we use here. In short, utilising multiple kernels synergistically boosts the performance of our classification framework and enables state-of-the-art performance on this dataset. The improvement from hand-crafted features, i.e., iDT and MIFS, is marginal. This suggests that the accuracy boost commonly offered by iDT is now being implicitly learnt by the combination of features learnt by the Inception and the ResNet pillars.
\begin{table*}[] \centering \caption{Accuracy scores for the HMDB51 dataset.} \label{table:stateofart} \begin{tabular}{@{}lll@{}} \toprule \textbf{Methods} & \textbf{Accuracy {[}\%{]}} & \textbf{Reference} \\ \midrule Two-stream & 59.4 & \cite{Simonyan2014} \\ Rank Pooling (ALL)+ HRP (CNN) & 65 & \cite{Fernando2017} \\ Convolutional Two-stream & 65.4 & \cite{Feichtenhofer2016} \\ Temporal-Inception & 67.5 & \cite{Ma2017} \\ TS-LSTM & 69 & \cite{Ma2017} \\ Temporal Segment Network (2/3/7 modalities) & 68.5/69.4/71 & \cite{Wang2016} \\ ST-ResNet + iDT & 70.3 & \cite{Ma2017} \\ ST-multiplier network & 68.9 & \cite{Feichtenhofer2017} \\ ST-multiplier network + iDT & 72.2 & \cite{Feichtenhofer2017} \\ Pillar Networks + SVM-MKL & 72.8 & this paper \\ Pillar Networks + iDT + SVM-MKL & 73.0 & this paper \\ Pillar Networks + MIFS + SVM-MKL & 73.3 & this paper \\ \bottomrule \end{tabular} \end{table*} \section{Discussion} Our main contribution in this paper is to introduce \textbf{pillar networks} that are deep as well as wide (by plugging in other feature tensors, horizontally), enabling horizontal scalability. Combining different methodologies allows us to reach state-of-the-art performance for video classification, especially action recognition. We utilised the HMDB-51 dataset instead of UCF101 as the former has proven to be difficult for deep networks due to the heterogeneity of image quality, camera angles, etc. As is well known, videos contain extensive long-range temporal structure; using different networks (2 ResNets and 2 Inception networks) to capture the subtleties of this temporal structure is an absolute requirement. Since each network implements a different non-linear transformation, one can utilise them to learn very deep features. Utilising the distributed architecture then enables us to parcellate the feature tensors into distributed, computable chunks of input for an SVM-MKL classifier.
Such an architectural choice, therefore, enables us to scale horizontally by plugging in a variety of networks \textit{as per requirement}. While we have used this architecture for video-based classification, there is a wide variety of problems to which we can apply this methodology -- from speech processing (with different pillars/networks) to natural-language processing (NLP). In supplementary studies, stacking features from the four network outputs with a softmax and a cross-entropy loss has an accuracy of approximately 67\%, highlighting the necessity for a multi-kernel approach. Thus, our framework rests on two stages of training -- one for training the neural networks and the other for training the multiple kernels of the support vector machine (SVM). Since both of the training stages are decoupled, it allows for scalability wherein different networks can operate on a plug-and-play basis. Indeed, there has been some work in combining deep neural networks with (linear) SVMs to facilitate end-to-end training \cite{Tang2013}. It would be useful to see how pillar networks perform on immensely large datasets such as the Youtube-8m dataset \cite{Abu-El-Haija2016}. Additionally, the recently published Kinetics human action video dataset from DeepMind \cite{Kay2017} is equally attractive, as pre-training the pillar networks on this dataset before fine-grained training on HMDB-51 will invariably increase the accuracy of the current network architecture. {\small \bibliographystyle{ieee}
\section{Introduction} With the Large Hadron Collider (LHC) having started to operate, the high energy community is expanding its focus to the study of, and search for, new physics beyond the standard model (SM). Grand unification theories (GUTs) are among the most promising models for this new physics \cite{langacker}. However, supersymmetry (SUSY) is necessary here to make the huge hierarchy between the GUT scale and the electroweak scale stable under radiative corrections \cite{martin-primer}. In this regard, SUSY SO(10) is an appealing candidate for realistic GUTs \cite{mohapatra-book}. Universal boundary conditions for gaugino masses, as well as other soft terms, at the high scale (the unification scale or Planck scale) are adopted in the setting of minimal supergravity (mSUGRA), or the constrained minimal supersymmetric standard model (CMSSM) \cite{cmssm}. If the discrepancy between the SM theoretical predictions and the experimental determinations of $(g-2)$ is confirmed at the $3$-sigma level, this could be interpreted as strong evidence against the CMSSM \cite{g-2}. Non-universal gaugino masses may arise in supergravity models in which a non-minimal gauge field kinetic term is induced by the SUSY-breaking vacuum expectation value (vev) of a chiral superfield that is charged under the GUT group G \cite{anderson}. The non-universal gaugino masses resulting from SUSY-breaking vevs of non-singlet chiral superfields, for $G=SU(5)$, $SO(10)$ and $E_6$, and their phenomenological implications have been investigated in~\cite{chamoun,chakra1,martin,chakra2}. If the grand unification group $G$ is large enough, like $SO(10)$ or $E_6$, then there is more than one breaking chain from $G$ down to the SM. It is natural here to assume that there exist multiple intermediate mass scales in the breaking chain.
It has been found that when extrapolating the coupling strengths to very high energies, they tend to converge in the non-SUSY $SO(10)$ provided one introduces two new intermediate energy scales, whereas they do not meet at one point in the absence of intermediate energy scales~\cite{alp}. A systematic study of the constraints of gauge unification on intermediate mass scales in non-SUSY $SO(10)$ scenarios was recently discussed in~\cite{bertolini}. The possibility of the existence of intermediate scales is an important issue for supersymmetric unification. The success of minimal supersymmetric standard model (MSSM) coupling unification~\cite{giunti} favors a single GUT scale, and the intermediate scales cannot be too far from the GUT scale. However, recent studies show that in GUTs with a large number of fields, renormalization effects significantly modify the scale at which quantum gravity becomes strong, and this in turn can modify the boundary conditions for coupling unification if higher dimensional operators induced by gravity are taken into consideration~\cite{calmet}. In GUT model building, the so-called magic fields can be used to fix the gauge coupling unification in certain two-step breakings of the unified group~\cite{cfrz}. It has been pointed out that any of three options -- threshold corrections due to the mass spectrum near the unification scale, gravity-induced non-renormalizable operators near the Planck scale, or the presence of additional light Higgs multiplets -- can permit unification with a lower intermediate scale~\cite{majee}. This unification with distinct energy scales yields right-handed neutrino masses in the range ($10^8-10^{13}$ GeV) relevant for leptogenesis~\cite{luty}, perhaps even reaching the TeV region~\cite{majee}.
In the previous studies~\cite{chamoun,chakra1,martin,chakra2} on non-universal gaugino masses in SUSY-$SO(10)$, one assumed for simplicity that there were no intermediate scales between $M_{GUT}$ and $M_S$ (the SUSY scale $\sim 1$ TeV) or the electro-weak scale $M_{EW}$. In this paper, we study in detail the intermediate scale dependence of the non-universal gaugino masses. The starting point is to consider a chiral superfield (`Higgs' field) $\Phi$ transforming under the gauge group $G=SO(10)$ in an irrep $R$ lying in the symmetric product of two adjoints \footnote{All group theory considerations can be found in the review article \cite{slansky81}.}: \begin{eqnarray} \label{45-sym-prod} ({\bf 45}\times{\bf 45})_{symmetric}&=&{\bf 1}+{\bf 54}+{\bf 210}+{\bf 770} \end{eqnarray} If $R$ is a $G$ non-singlet and $\Phi$ takes a vev (vacuum expectation value) spontaneously breaking $G$ into a subgroup $H$ containing the SM, then it can produce a gauge non-singlet contribution to the $H$-gaugino mass matrix \cite{ellis85} \begin{eqnarray} M_{\alpha,\beta}&=& \eta_\alpha \delta_{\alpha\beta} \langle \Phi \rangle \end{eqnarray} where the discrete $\eta_\alpha$'s are determined by $R$ and $H$. Here, we make two basic assumptions. The first one is to omit the `possible' situation of a linear combination of the above irreps and to consider the dominant contribution to the gaugino masses coming from one of the non-singlet $F$-components. The second assumption is that the $SO(10)$ gauge symmetry group is broken down at the GUT scale $M_{GUT}$ into an intermediate group $H$ which, in turn, breaks down to the SM at some intermediate scale $M_{HB}$. In the case of several intermediate symmetry breakings one can assume various intermediate scales, for which case it is straightforward to generalize our method. We insist on $H$ being the gauge symmetry group in the range from $M_{HB}$ to $M_{GUT}$.
Thus, only the $F$-component of the field $\Phi$ which is neutral with respect to $H$ can acquire a vev yielding gaugino masses. Depending on the breaking chain one follows down to the SM, ratios of the gaugino masses $M_a$ depend on $M_{HB}$ and are determined purely by group-theoretical factors only if $M_{HB}=M_{GUT}$. In fact, the functional dependence on $M_{HB}$ of the gaugino mass ratios cannot be deduced from their values obtained in the case of $M_{HB}=M_{GUT}$ by mere renormalization group (RG) running, and one has to consider carefully the normalization of the group generators and the mixing of the abelian $U(1)$'s necessary to get the dependence of the $U(1)_Y$ gaugino mass on the intermediate scale. Whereas in ref.\cite{chamoun} we considered only the low-dimensional irreps ${\bf 54}$ and ${\bf 210}$, we extend here our analysis to include all three non-singlet irreps. Moreover, there were some errors in the results of ref.\cite{chamoun}, which upon being corrected agree now with the conclusions of \cite{chakra1,martin,chakra2} when $M_{HB}=M_{GUT}$. The plan of this paper is as follows. In section 2, we consider the first step of breaking, from $GUT=SO(10)$ to the intermediate group $H$, and calculate the $H$-gaugino mass ratios at the GUT scale $M_{GUT}$, for the three cases $H=G_{422}\equiv SU(4)_C \times SU(2)_L \times SU(2)_R$, $H=SO(3)\times SO(7)$ and $H=H_{51}\equiv SU(5) \times U(1)_X$, depending on the specific irreps in Eq.(\ref{45-sym-prod}). We investigate the second step of the breaking, from the intermediate group $H$ to the SM group, in section 3, and compute the MSSM gaugino masses in terms of the $H$-gaugino masses at the intermediate breaking scale $M_{HB}$. Taking the RG running from $M_{GUT}$ to $M_{HB}$ into consideration, we compute in section 4 the MSSM gaugino mass ratios at $M_{HB}$.
We also state in this section the particle content of the model in each case, and calculate the beta function coefficients necessary for the RG running. In section 5, we summarize the results in form of a table, where we compare numerically the case of two breaking scales with the case of one breaking scale, and present our conclusions. \section{From $GUT=SO(10)$ to the intermediate group $H$} Here we discuss the different ways in which one can break the $GUT$-group $SO(10)$ depending on the Higgs irrep one uses. As noted earlier, three irreps can be used (see Eq.\ref{45-sym-prod}): $\bf {54,210}$ and $\bf 770$. \subsection{The irrep $\bf 54$} If an irrep ${\bf 54}$ is used then the branching rules for $SO(10)$ tell us it can be broken into several subgroups (e.g. $H=G_{422}, H=SU(2)\times SO(7), H=SO(9)$). The choice $H=SO(9)$ leads to universal gaugino masses whereas the other two possible chains are more interesting. \subsubsection{$H=G_{422}$} The ${\bf 54}$ irrep can be represented as a traceless and symmetric $10 \times 10$ matrix which takes the vev: \begin{eqnarray} <{\bf 54}>&=& v~ Diag(\underbrace{2,\dots,2}_{6},\underbrace{-3,\ldots,-3}_{4}) \end{eqnarray} with the indices $1,\ldots 6$ corresponding to $SO(6)\simeq SU(4)_C$ while those of $7,\ldots 0$ (henceforth $0$ means $10$) correspond to $SO(4)\simeq SU(2)_L\times SU(2)_R$. This implies that at $M_{GUT}$-scale we have: \begin{eqnarray} \label{54G422} \left.\frac{M_L}{M_4}\right|_{M_{GUT}}=\left.\frac{M_R}{M_4}\right|_{M_{GUT}}&=& -\frac{3}{2}\end{eqnarray} \subsubsection{$H=SU(2)_L\times SO(7)$} The first breaking is achieved by giving a vev to the irrep ${{\bf 54}}$ \begin{eqnarray} <{\bf 54}>&=& v Diag(\underbrace{7/3,\dots,7/3}_{3},\underbrace{-1,\ldots,-1}_{7}) \end{eqnarray} where the indices $1$,$2$,$3$ correspond to $SO(3)\simeq SU(2)_L$ and $4,\ldots 0$ correspond to SO(7). 
This gives at $M_{GUT}$-scale \begin{eqnarray} \label{54s03so7} \left.\frac{M_L}{M_7}\right|_{M_{GUT}}&=& -\frac{7}{3}\end{eqnarray} \subsection{The irrep $\bf 210$} This irrep can be represented by a $4^{th}$-rank totally antisymmetric tensor $\Delta_{abcd}$. It can break $SO(10)$ in different ways, of which we consider two. \subsubsection{$H=G_{422}$} The first breaking from $SO(10)$ to $H$ is achieved when the only non-zero vev is $<\Delta_{abcd}>=v\epsilon_{7890}=v$ \cite{aulakh} where ($a,b,c,d\in\{1,\ldots,0\}$). This leads to the mass term: \begin{eqnarray} {\cal L}_{\text{mass}} \propto <\Delta_{abcd}> \lambda^a_b \lambda^c_d &=& \frac{v}{4}[(\lambda^7_8+\lambda^9_0)^2-(\lambda^7_8-\lambda^9_0)^2] \label{210mass422}\end{eqnarray} Since the indices ($1,\ldots,6$), which correspond to $SO(6)$, do not appear in the mass term, we have \begin{eqnarray} \label{0} \left.M_4\right|_{M_{GUT}} &=& 0\end{eqnarray} We can take the gauginos $\lambda_{2L},\lambda_{2R}$ corresponding to $SU(2)_L,SU(2)_R$ as being proportional to the `bracketed' combinations of $\lambda^7_8$ and $\lambda^9_0$ in Eq.(\ref{210mass422}), and thus we get: \begin{eqnarray} \label{-1} \left.\frac{M_{R}}{M_{L}}\right|_{M_{GUT}} &=& -1.\end{eqnarray} \subsubsection{$H=H_{51}$} This breaking from $SO(10)$ occurs when \cite{he90}: \begin{eqnarray} \Delta_{1234}&=&\Delta_{1256}=\Delta_{1278}=\Delta_{1290}= \Delta_{3456} =\Delta_{3478}\nonumber\\ &=&\Delta_{3490}=\Delta_{5678}=\Delta_{5690}= \Delta_{7890}=v. \end{eqnarray} For the $H=H_{51}$-case, we adopt the convention of restricting the use of indices to the $SU(5)$-indices in order to express only the $SU(5)\times U(1)_X$ gauginos amongst the $SO(10)$-ones.
In fact, the branching rule \begin{eqnarray} \label{10branching rule} {\bf 10} &\overset{SO(10)\supset SU(5)\times U(1)_X}{=}& ({\bf 5})_{2} + ({\bf \overline{5}})_{-2} \end{eqnarray} allows us to use the indices: \begin{eqnarray} \label{indices} i=\tilde{a}+\tilde{b}\equiv a+\bar{b} &\text{with}& i\in\{1,\ldots,0\};\tilde{a}\equiv 2a-1\in\{1,3,5,7,9\};\tilde{b}\equiv 2\bar{b}\in\{2,4,6,8,0\}\end{eqnarray} and so we have the $SU(5)$-indices ($a=1,\ldots,5;\bar{b}=\bar{1},\ldots,\bar{5}$) written usually as an upper index for `$a$' and a lower index for `${b}$' (omitting the `bar' of $\bar{b}$). With this, we write the $SO(10)$ adjoint irrep $\lambda^{(2a-1)(2b)}$ as $\lambda^a_b$ using the $H_{51}$ indices ($a,b=1,\ldots,5$). We know that the only way to get a $4^{\text{th}}$-rank totally antisymmetric tensor invariant under $SU(5)$ is by considering: \begin{eqnarray} \epsilon^{abefg}\epsilon_{cdefg} &=& \delta^a_c\delta^b_d - \delta^a_d \delta^b_c\end{eqnarray} ($a,b,c,d,e,f,g=1,\ldots,5$) and thus the $H_{51}$-singlet takes on the invariant form \begin{eqnarray} <\Delta^{ab}_{cd}> &=& v \epsilon^{abefg}\epsilon_{cdefg} \end{eqnarray} The gaugino mass term becomes \begin{eqnarray} <\Delta^{ab}_{cd}> \lambda^c_a \lambda^d_b &\propto& - \widehat{\lambda^c_a} \widehat{\lambda^a_c} + \frac{4}{5} (\lambda^b_b)^2 = - (\widehat{\lambda^c_a})^2 + 4 (\lambda)^2\end{eqnarray} where the `traceless' $SU(5)$-gaugino $\widehat{\lambda^c_a}$ and the $U(1)_X$-gaugino $\lambda$ are defined as usual by: \begin{eqnarray} \label{labH51}\widehat{\lambda^a_b} &=& \lambda^a_b - \frac{1}{5} \delta^a_b \lambda^c_c \\ \label{lH51} \lambda &=& \frac{1}{\sqrt{5}} \lambda^c_c\end{eqnarray} We get at $M_{GUT}$ the ratio: \begin{eqnarray} \label{-4} \left.\frac{M_X}{M_5}\right|_{M_{GUT}}&=& -4\end{eqnarray} \subsection{The irrep $\bf 770$} This irrep can be represented by a traceless $4^{\text{th}}$-rank tensor $\phi^{ij,kl}$ with symmetrized and anti-symmetrized indices in the combinations
corresponding to the Young diagram with two rows and two columns. It can break $SO(10)$ in three ways. \subsubsection{$H=G_{422}$} Here, since we have the branching rule: \begin{eqnarray} {\bf 10} &\overset{SO(10)\supset SO(6)\times SO(4)}{=}& ({\bf 1, 4})+({\bf 6, 1})\overset{SO(10)\supset SU(4)\times SU(2) \times SU(2)}{\equiv}({\bf 1, 2, 2})+({\bf 6},{\bf 1},{\bf 1}), \end{eqnarray} we can set $\phi^a=\phi^\alpha +\phi^i$ with $a=1,2,...,0;\;\alpha=1,...,6;\;i=7,...,0$. When the scalar components of $\phi^{ab,cd}$, corresponding to the singlet ($\bf1,1$) of ${\bf 770}$ under $SO(10) \supset SO(6)\times SO(4)$, acquire a non-zero vev, then the tensor structure imposes the form: \begin{eqnarray} <\phi^{\alpha\beta,\gamma\delta}> &=& v(\delta^{\alpha\beta}\delta^{\gamma\delta}-\delta^{\alpha\delta}\delta^{\beta\gamma}) \nonumber\\ <\phi^{ij,kl}> &=& sv(\delta^{ij}\delta^{kl}-\delta^{il}\delta^{jk}) \nonumber\\ <\phi^{\alpha\beta,ij}> &=& s^\prime v\delta^{\alpha\beta}\delta^{ij} \label{tensor structure} \end{eqnarray} ($\alpha,\beta,\gamma,\delta=1,\ldots,6;i,j,k,l=7,\ldots,0$). Forcing the tensors $\phi^{aa\gamma\delta}$ and $\phi^{aaij}$ to be traceless would imply $s^\prime = -\frac{5}{4}$ and $s= \frac{5}{2}$, and so one gets a mass term: \begin{eqnarray} {\cal L}_{\text{mass}} &=& \phi^{\alpha\beta\gamma\delta} \lambda_{\alpha\beta}\lambda_{\gamma\delta}+ \phi^{ijkl} \lambda_{ij}\lambda_{kl}\\ &=& -v (\lambda_{\alpha\beta})^2-sv(\lambda_{ij})^2\end{eqnarray} The $\lambda_{\alpha\beta}$'s correspond to $SO(6)$-gauginos whereas $\lambda_{ij}$'s correspond to $SO(4)$-gauginos, whence we get at $M_{GUT}$-scale the ratios: \begin{eqnarray} \label{5/2} \left.
\frac{M_{L}}{M_{R}} \right |_{M_{GUT}} = 1 &,& \left.\frac{M_{R}}{M_4}\right |_{M_{GUT}}= \frac{5}{2}\end{eqnarray} \subsubsection{$H=SO(3) \times SO(7) \simeq SU(2)_L\times SO(7)$} Again, the branching rule: \begin{eqnarray} {\bf 10} &\overset{SO(10) \supset SO(3)\times SO(7)}{=}& ({\bf 3, 1})+({\bf 1, 7}) \end{eqnarray} enables us to set $\phi^a=\phi^\alpha +\phi^i$ with $a=1,\ldots,0;\;\alpha=1,\ldots,7;\;i=8,9,0$. In the same way as in the case of $H=G_{422}$, when the scalar components of $\phi^{ab,cd}$, corresponding to the singlet ($\bf1,1$) of ${\bf 770}$ under $SO(10) \supset SO(3)\times SO(7)$, acquire a non-zero vev then we have the same tensor structures as in Eqs.(\ref{tensor structure}). Forcing the traces $\phi^{aa\gamma\delta}$ and $\phi^{aaij}$ to vanish would imply $s^\prime = -2$ and $s=7$. Substituting in the Lagrangian gaugino mass term gives now at $M_{GUT}$ the ratio: \begin{eqnarray} \label{7} \left.\frac{M_{L}}{M_{7}}\right |_{M_{GUT}} &=& 7 \end{eqnarray} \subsubsection{$H=H_{51}$} Again, using the branching rule in Eq.(\ref{10branching rule}), we can take $\phi^a=\phi^i +\phi^{\bar k}\equiv \phi^j+\phi_l$ with $a=1,\ldots,0; i=1,3,5,7,9\equiv 2j-1;\bar k=2,4,6,8,0\equiv 2l$ ($j,l=1,\ldots,5$ are the $\bf 5$ and $\bf \bar 5$ indices respectively).
When the traceless $4^{\text{th}}$-rank tensor $\phi^{ab,cd}$ scalar fields, corresponding to the singlet ($\bf1,1$) of ${\bf 770}$ under $SO(10) \supset H_{51}$, have a non-zero vev, then we have the following tensor structures: \begin{eqnarray} \phi^{ab,cd}&=&\phi^{ij,kl}+\phi^{ij,k}_{l}+\phi^{ij}_{kl}+ \phi^{kl}_{ij} + \phi^{i}_{j,kl} + \phi_{ij,kl} \label{tensor structure first equation}\\ <\phi^{ij,kl}>&=&v_1(\delta^{ij}\delta^{kl}-\delta^{kj}\delta^{il}) \nonumber\\ <\phi^{ij,k}_{l}>&=&v_2\delta^{ij}\delta^k_l \nonumber \\ <\phi^{ij}_{kl}>&=&v_3(\delta^{ij}\delta_{kl} +\delta^i_k\delta^j_l+\delta^i_l\delta^j_k) \label{tensor structure H51-770 } \end{eqnarray} ($a,b,c,d=1,\ldots,0;i,j,k,l=1,\ldots,5$). Note that since $SU(5)$ is the only maximal non-abelian subgroup in $H_{51}$ then all the vevs above are equal $v_1=v_2=v_3=v$. We note also that the contribution to the gaugino mass from the last three terms in Eq.(\ref{tensor structure first equation}) is equal to that coming from the first three terms, and thus we can limit the computation to these latter terms to get the mass term: \begin{eqnarray} \langle \phi^{ab,cd}\rangle \lambda_{ab}\lambda_{cd} &=& v [\widehat{\lambda^j_l} \widehat{\lambda^l_j} + 16 \lambda^2]\end{eqnarray} where the expressions of the `traceless' $SU(5)$-gaugino $\widehat{\lambda^j_l}$ and the $U(1)_X$-gaugino $\lambda$ are taken from Eqs.(\ref{labH51}) and (\ref{lH51}). We get at $M_{GUT}$ the ratio: \begin{eqnarray} \label{ratio16} \left. \frac{M_X}{M_5} \right|_{M_{GUT}}&=& 16\end{eqnarray} \section{From the intermediate group to the SM} We discuss here the second stage of the breaking from $H$ into the $SM$. We note that in some cases there are more than one $U(1)$-group, and we need to consider the mixing of these $U(1)$'s in order to get the $U(1)_Y$ of the SM. The method is standard and we work it out case by case. 
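Before turning to the individual cases, the GUT-scale ratios of the previous section can be cross-checked numerically. The following sketch (ours, purely illustrative and not part of the original derivation) verifies the two ${\bf 54}$ vev patterns directly from the tracelessness condition and from the fact that each factor's gaugino mass is proportional to its block entry:

```python
def block_ratio(block_a, block_b):
    # block_* = (dimension, vev entry); the gaugino mass of each factor is
    # proportional to its block entry, so entry_b / entry_a gives M_b / M_a
    (na, ea), (nb, eb) = block_a, block_b
    assert abs(na * ea + nb * eb) < 1e-12  # tracelessness of the 54 vev
    return eb / ea

# <54> = v Diag(2,...,2 | -3,...,-3): SO(6) and SO(4) blocks, M_L/M_4 = -3/2
ml_over_m4 = block_ratio((6, 2.0), (4, -3.0))
# <54> = v Diag(7/3,...,7/3 | -1,...,-1): SO(3) and SO(7) blocks
m7_over_ml = block_ratio((3, 7.0 / 3.0), (7, -1.0))
ml_over_m7 = 1.0 / m7_over_ml  # gives -7/3
```

The same bookkeeping extends to the other vev patterns, block by block.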
\subsection{$H=G_{422}\equiv SU(4)_C \times SU(2)_L \times SU(2)_R \rightarrow SM\equiv SU(3)_C \times SU(2)_L \times U(1)_Y $} The Higgs field responsible for the breaking $SU(4)_C \times SU(2)_R \rightarrow SU(3)_C \times U(1)_Y$ can be taken to include the irrep $({\bf 4},{\bf 2})$ of the group $SU(4)_C \times SU(2)_R$: \begin{eqnarray} \Phi &=& \varphi^a \bigotimes \varphi^r : a\in\{1,2,3,4\},r\in\{1,2\}\end{eqnarray} We can choose $\Phi$ to be in the spinor irrep of $SO(10)$ since we have the branching rule: \begin{eqnarray} {\bf 16} &\overset{SO(10)\supset G_{422}}{=}& (\bf{4,2,1})+(\bf{\overline{4},1,2})\end{eqnarray} and we can write the covariant derivative terms related to the $SU(4)_C \times SU(2)_R$ group in the form: \begin{eqnarray} {\cal{D}}_\mu \Phi &=& \partial_\mu \Phi - ig_4 \frac{T^b}{2}A^b \varphi^a - ig_R \frac{\tau^s}{2}B^s \varphi^r \end{eqnarray} where $T^b$($b\in\{1,\ldots,15\}$) are the $4\times4$ generalized Gell-Mann matrices for $SU(4)$ with the standard normalization $Tr(\frac{T^a}{2}\frac{T^b}{2})=\frac{1}{2}\delta^{ab}$, $\tau^r$($r\in\{1,2,3\}$) are the $2\times2$ Pauli matrices satisfying $Tr(\frac{\tau^r}{2}\frac{\tau^s}{2})=\frac{1}{2}\delta^{rs}$. In order to break $SU(4)_C$ to $SU(3)_C \times U(1)$, and $SU(2)_R$ to $U(1)^\prime$, the Higgs fields take the vevs: \begin{eqnarray} \langle \varphi^a \rangle = v_1 \delta^{a4} &,& \langle \varphi^r \rangle = v_2 \delta^{r1} \end{eqnarray} Since both $\varphi^a$ and $\varphi^r$ originate from the same $\Phi$, the spinor irrep in $SO(10)$, which under $SO(10)\supset SM$ has the component $({\bf{1,1}})_0$, the two vevs are equal: $v_1=v_2=v$.
Concentrating on the mixing of the $U(1)$ from $SU(4)_C$ and the other $U(1)^\prime$ from $SU(2)_R$, we note that the corresponding $A^{15}$ and $B^3$ components will mix together, and thus we obtain the neutral gauge boson mass terms in the form : \begin{eqnarray} \langle D_\mu\Phi \rangle \langle D_\mu\Phi \rangle ^+ &=& \frac{v^2}{4} \left( \sqrt{\frac {3}{2}} g_4 A^{15}-g_R B^3\right)^2 \end{eqnarray} This quadratic form in the fields $B^3$ and $A^{15}$ has a zero eigenvalue whose corresponding eigenstate can be identified as the massless $U(1)_Y$ gauge boson $E$. By diagonalizing the corresponding mass matrix we obtain the two physical vector bosons: the massless gauge boson $E$ , and the orthogonal combination $F$ corresponding to a massive vector boson: \begin{eqnarray} F&=& \cos\theta A^{15}-\sin\theta B^3 \nonumber \\E&=&\sin\theta A^{15}+\cos\theta B^3\label{mix1} \end{eqnarray} where \begin{eqnarray} \label{theta1} \cos\theta=\frac{\sqrt{\frac{3}{2}}g_4}{c},~~~~~~~\sin\theta=\frac{g_R}{c} &:& c^2=g_R^2+\frac{3}{2}g_4^2 \end{eqnarray} It is convenient \cite{langacker} to define the $4\times4$ ($2\times2$) matrix $\bf{A}$ ($\bf{B}$) as follows \begin{eqnarray} {\bf{A}}=\frac{T^b A^b}{\sqrt{2}}\text{ with }A^a_b\equiv(\bf{A})_{ab}&,& {\bf{B}}=\frac{\tau^r B^r}{\sqrt{2}}\text{ with }B^r_s\equiv(\bf{B})_{rs}\label{nota}\end{eqnarray} which leads to \begin{eqnarray} \label{a44anda15} A^4_4=-\frac{\sqrt{3}}{2}A^{15} &,& B^1_1=\frac{B^3}{\sqrt{2}}\label{rel}\end{eqnarray} In the notation of Eq. (\ref{nota}), the gaugino fields which lie in the same supermultiplet as the gauge fields $A^a_b$ of the $SU(4)_C$ group are denoted by $\lambda^a_b$ ($a,b=1,\ldots,4$ with $\lambda^a_a=0$), whereas we denote the gaugino fields of the $SU(2)_{L,R}$ group by ${\lambda^r_{s}}_{L,R}$ ($r,s=1,2$ with $\lambda^r_r=0$). 
Then the gaugino mass term in the $G_{422}$ group is: \begin{eqnarray} \label{g422massterm} \text{mass term} &=& M_4 \lambda^a_b \lambda^b_a + M_L {\lambda^r_s}_L {\lambda^s_r}_L + M_R {\lambda^r_s}_R {\lambda^s_r}_R \nonumber\\ &=& M_4 \widehat{\lambda^\alpha_\beta} \widehat{\lambda^\beta_\alpha} + \frac{4}{3}M_4 (\lambda^4_4)^2 + M_L {\lambda^r_s}_L {\lambda^s_r}_L + 2 M_R ({\lambda^1_1}_R)^2 + \ldots\end{eqnarray} where $\widehat{\lambda^\alpha_\beta} = \lambda^\alpha_\beta - \frac{1}{3} \delta^\alpha_\beta \lambda^\gamma_\gamma$ ($\alpha,\beta=1,2,3$) are the $SU(3)_C$ gaugino fields and `$\ldots$' denote the terms which do not contribute to the MSSM gaugino masses. Since the gaugino mixing should proceed in the same way as that for the gauge fields lying in the same supermultiplet, then Eqs. (\ref{mix1} and \ref{rel}) lead `by supersymmetry' to: \begin{eqnarray} \label{l44}\lambda^4_4 &=& -\frac{\sqrt{3}}{2}(\sin\theta \lambda+\cos\theta \widetilde{\lambda}) \\ \label{l11} {\lambda^1_1}_R &=&\frac{1}{\sqrt{2}}(\cos\theta \lambda-\sin\theta \widetilde{\lambda}) \end{eqnarray} where $\lambda$ is the gaugino field lying in the same supermultiplet as the $U(1)_Y$ gauge field $E$, whereas $\widetilde{\lambda}$ is the superpartner of the massive vector boson $F$. It follows from Eq.(\ref{g422massterm}) that at the intermediate scale $M_{HB}$ we have: \begin{eqnarray} \label{mass_rel_g422_first} \left. M_3\right|_{M_{HB}}=\left. M_4\right|_{M_{HB}} &,& \left.M_2\right|_{M_{HB}}=\left. M_L\right|_{M_{HB}}\end{eqnarray} As to the mass term corresponding to $U(1)_Y$, then substituting Eqs.(\ref{l44} and \ref{l11}) into Eq.(\ref{g422massterm}) leads to: \begin{eqnarray} \label{mass_rel_g422_second} \left. M_1\right|_{M_{HB}} = \sin^2\theta M_4+\cos^2\theta M_R &=& \left. 
\frac{2g_R^2M_4+3g_4^2M_R}{3g_4^2+2g_R^2}\right|_{M_{HB}} \end{eqnarray} To summarize, we have used an $SO(10)$--${\bf 16}$ irrep Higgs field to break $G_{422}$ into the SM when its neutral component $({\bf 1, 1})_0$ under the SM develops a vev. The gauge supermultiplet ${\bf 45}$ of $SO(10)$ is also decomposed, having under $G_{422}$ the components $({\bf 15,1,1})$ and $({\bf 1,1,3})$ representing respectively the generators of $SU(4)$ and $SU(2)_R$. In the breaking from $G_{422}$ to the SM, each of the latter generators has a singlet $({\bf 1, 1})_0$ part, and one needs to identify the weak hypercharge $Y$ generator as a linear combination of these $({\bf 1, 1})_0$ parts. With this, we could determine the $U(1)_Y$ gaugino in terms of the gauginos and coupling constants $g_4$, $g_R$ corresponding to $SU(4)_C$ and $SU(2)_R$. \subsection{$H\equiv SO(3)\times SO(7) \rightarrow H^\prime \equiv SU(2)_L \times SO(6) = SU(2)_L \times SU(4) \rightarrow SM\equiv SU(3)_C \times SU(2)_L \times U(1)_Y $} As we have discussed, one can use the irreps $\bf 54$ or $\bf 770$ to carry out the breaking $SO(10)\rightarrow SO(3)\times SO(7) \equiv SU(2)_L \times SO(7)$. As pointed out in \cite{martin}, the group $SU(2)\times SO(7)$ cannot be reconciled with the chiral fermion content of the SM. However, as was noticed in \cite{ross}, this case produces non-trivial mass ratios with interesting phenomenology, and we may still consider it since we are not concerned here with model building. Thus, pending the identification of a feasible model with masses in this region, we include this case in our study. Now, the $SO(7)$ is broken at $M_{GUT}$ to $SO(6)\simeq SU(4)$, which in turn is broken to $SU(3)_C\times U(1)_Y$ at $M_{HB}$. 
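The closed form for $M_1$ above can be checked against the angular form $\sin^2\theta\, M_4+\cos^2\theta\, M_R$ directly; a small numerical sketch (all inputs are arbitrary illustrative values):

```python
def m1_g422(M4, MR, g4, gR):
    # Eq. (mass_rel_g422_second): M1 at M_HB in closed form
    return (2 * gR**2 * M4 + 3 * g4**2 * MR) / (3 * g4**2 + 2 * gR**2)

def m1_from_angles(M4, MR, g4, gR):
    # sin^2(theta) M4 + cos^2(theta) MR with the angles of Eq. (theta1)
    c2 = gR**2 + 1.5 * g4**2
    sin2, cos2 = gR**2 / c2, 1.5 * g4**2 / c2
    return sin2 * M4 + cos2 * MR
```

For $M_4=M_R$ the mixing drops out and $M_1=M_4$, as it must.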
One cannot use the $SU(4)$-$\bf 4$ irrep to achieve this breaking since its branching rule is: \begin{eqnarray} {\bf 4} &\overset{SU(4)\supset SU(3)\times U(1)}{=}& {\bf 1}_3+{\bf 3}_{-1}\end{eqnarray} whereas the next-simplest $SU(4)$-$\bf 15$ irrep can carry out this breaking, having the branching rule: \begin{eqnarray}{\bf 15} &\overset{SU(4)\supset SU(3)\times U(1)}{=}& {\bf 1}_0+{\bf 3}_{-4} + {\bf \overline{3}}_{4}+ {\bf 8}_{0}\end{eqnarray} Thus, the Higgs field $\Phi$ responsible for the breaking $SU(4)\rightarrow SU(3)_C\times U(1)_Y$ should include the $SU(4)$-$\bf 15$ irrep, and the simplest choice is the $\bf 45$ irrep of $SO(10)$ having the branching rules: \begin{eqnarray} {\bf 45}&\overset{SO(10)\supset SO(3)\times SO(7)}{=}& ({\bf 3,1}) + ({\bf 1,21}) + ({\bf 3,7}) \\ {\bf 21}&\overset{SO(7)\supset SO(6)}{=}& {\bf 15} + {\bf 6} \end{eqnarray} The ($SO(7)$) gaugino mass term in the Lagrangian is \begin{eqnarray} {\cal L}^{SO(7)}_{\text{mass}} &=& M_7 \lambda^{[a,b]}\lambda_{[a,b]}= M_7 \lambda^{[\alpha,\beta]}\lambda_{[\alpha,\beta]}+M_7 \lambda^{[7,\alpha]}\lambda_{[7,\alpha]}\end{eqnarray} where $a,b=1,\ldots,7$ and $\alpha,\beta=1,\ldots,6$. Note that $\lambda^{[7,\alpha]}$ does not represent the superpartner of a gauge field in $SO(6) = SU(4)$, and thus, using the $SU(4)$ indices, the mass term of the $SU(4) \times SU(2)_L$ is \begin{eqnarray} {\cal L}_{\text{mass}} &=& M^{\prime}_4 \lambda^i_j \lambda^j_i + M_L \lambda^r_s \lambda^s_r \nonumber \\ &=& M^{\prime}_4 \lambda^\alpha_\beta \lambda^\beta_\alpha + M^{\prime}_4 \lambda^4_4 \lambda^4_4 + M_L \lambda^r_s \lambda^s_r + \ldots \label{7mass} \end{eqnarray} where $i,j = 1,\ldots,4$ (with $\lambda^i_i = 0$); $r,s=1,2$ (with $\lambda^r_r=0$); $\alpha,\beta = 1,2,3$ and the `$\ldots$' represent the terms which do not contribute to the gaugino masses: $M_L$ for $SU(2)$ and $M^{\prime}_4$ for $SU(4)$ satisfying \begin{eqnarray} \label{m4m7} \left. 
\frac{M^{\prime}_4}{M_7}\right|_{M_{GUT}} = 1\end{eqnarray} We introduce, in the same way as we did before, the `traceless' $SU(3)$-gauginos $\widehat{\lambda^\alpha_\beta} = \lambda^\alpha_\beta - \frac{1}{3} \delta^\alpha_\beta \lambda^\gamma_\gamma$, and the `squared' $U(1)_Y$ gaugino field $\lambda^2=\frac{1}{3} (\lambda^\gamma_\gamma)^2 + (\lambda^4_4)^2$. Eq. (\ref{7mass}) then reduces to \begin{eqnarray} \label{massso7}{\cal L}_{\text{mass}} &=& M^{\prime}_4 \widehat{\lambda^\alpha_\beta} \widehat{\lambda^\beta_\alpha} + M^{\prime}_4 \lambda^2 + M_L \lambda^r_s \lambda^s_r \end{eqnarray} Therefore, we have at $M_{HB}$, the scale where the breaking of the intermediate group $H^\prime$ takes place, the relations: \begin{eqnarray} \label{mass_rel_so7} \left.M_1\right|_{M_{HB}}=\left.M_3\right|_{M_{HB}}=\left.M^{\prime}_4\right|_{M_{HB}} &,& \left.M_2\right|_{M_{HB}}=\left.M_L\right|_{M_{HB}} \end{eqnarray} \subsection{$H=H_{51}\equiv SU(5)\times U(1)_X\rightarrow SM\equiv SU(3)_C \times SU(2)_L \times U(1)_Y $} In order to break $SU(5)$ to $SU(3)_C \times SU(2)_L \times U(1)_Z$, one can use the $SU(5)$-${\bf 10}$ irrep with the branching rule: \begin{eqnarray} \label{10}{\bf 10} &\overset{SU(5)\supset SU(3)_C\times SU(2)_L\times U(1)_Z}{=}& ({\bf 3^*,1})_{-\frac{2}{3}} + ({\bf 3,2})_{\frac{1}{6}} + ({\bf 1,1})_1\end{eqnarray} Thus, the Higgs field $\Phi$ responsible for the breaking $H_{51}\rightarrow SM$ can be taken in the $SO(10)$-${\bf 16}$ irrep having the branching rule: \begin{eqnarray} \label{16} {\bf 16} &\overset{SO(10)\supset SU(5)\times U(1)_X}{=}& {\bf 10}_1 + {\bf \bar{5}}_{-3}+{\bf 1}_5 \end{eqnarray} The conventions in the above two branching rules are consistent with the $U(1)_Z$-generator in $SU(5)$ given by: \begin{eqnarray} \label{Z} Z &=& diag(-1/3,-1/3,-1/3,1/2,1/2)\end{eqnarray} and we have an unbroken hypercharge~\cite{barr}: \begin{eqnarray} \frac{Y}{2} = \frac{1}{5} (X-Z)\end{eqnarray} As is well known, one needs to define the `properly 
normalized' $U(1)_Z$-generator to be: \begin{eqnarray} \label{L_Z} L_Z &=& \sqrt{\frac{3}{5}} Z\end{eqnarray} so that $Tr(L_Z)^2 = \frac{1}{2}$. Similarly, we define the `properly normalized' $U(1)_X$-generator to be: \begin{eqnarray} \label{L_X} L_X &=& \sqrt{\frac{1}{40}} X \end{eqnarray} such that $Tr_{{\bf 10}}(L_X)^2 = 1$, since we should have $Tr_{{\bf 10}}(M_{ij}M_{i^\prime j^\prime})=\delta_{ii^\prime}\delta_{jj^\prime}$ where $M_{ij}$ is the $SO(10)$ generator and ${\bf 10}$ is the defining (vector) irrep of $SO(10)$, and the branching rule \begin{eqnarray} \label{10branching rule1} {\bf 10} &\overset{SO(10)\supset SU(5)\times U(1)_X}{=}& ({\bf 5})_{2} + ({\bf \overline{5}})_{-2} \end{eqnarray} implies $ Tr_{{\bf 10}}(X^2)=40$. We now come to the mixing of the two $U(1)$'s, that is, we study how $U(1)_Z\times U(1)_X$ breaks into $U(1)_Y$. When the Higgs field corresponding to the $({\bf 1,1})$ component of Eq. (\ref{10}), with $Z$- and $X$-charges equal to one and represented by a $5\times 5$ antisymmetric tensor $\phi^{ab}$, takes a vev such that the only non-zero elements are: \begin{eqnarray} \langle\phi^{45}\rangle = - \langle\phi^{54}\rangle&=& v \end{eqnarray} we get a mass term \begin{eqnarray} \label{mass H51} {\cal L}_{\text{mass}} &=& v^2 \left( g_5 \sqrt{\frac{3}{5}}A^Z_\mu + \frac{g_X}{\sqrt{40}} B^X_\mu\right)^2\end{eqnarray} where $A^Z$ and $B^X$ are the $U(1)_Z$ and $U(1)_X$ gauge fields, respectively. 
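The two normalization statements quoted above, $Tr(L_Z)^2=\frac{1}{2}$ and $Tr_{{\bf 10}}(X^2)=40$, are elementary traces and can be confirmed directly:

```python
# Tr(L_Z^2) = (3/5) * Tr(Z^2) with Z = diag(-1/3,-1/3,-1/3,1/2,1/2)
Z = [-1/3, -1/3, -1/3, 1/2, 1/2]
tr_LZ2 = (3/5) * sum(z * z for z in Z)   # -> 1/2

# 10 -> 5_{+2} + 5bar_{-2} under SU(5) x U(1)_X, so Tr_10(X^2) = 40
X_charges = [2] * 5 + [-2] * 5
tr_X2 = sum(x * x for x in X_charges)    # -> 40
tr_LX2 = tr_X2 / 40                      # L_X = X / sqrt(40) gives Tr_10(L_X^2) = 1
```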
By diagonalizing the mass matrix corresponding to the above quadratic form, we get a massive $U(1)_Y$-neutral vector boson field $B_\mu$ and a massless $U(1)_Y$-gauge field $A_\mu$ given by: \begin{eqnarray} B&=& \cos\psi A^Z-\sin\psi B^X \nonumber \\ A&=&\sin\psi A^Z+\cos\psi B^X \label{mix2}\end{eqnarray} where \begin{eqnarray} \cos\psi=\frac{\sqrt{3}g_5}{c},~~~~~~~\sin\psi=-\frac{g_X}{\sqrt{8}c} &:& c^2=3 g_5^2+\frac{g_X^2}{8}\end{eqnarray} Let $\lambda,\widetilde{Z}$ be the superpartners of $B^X$, $A^Z$ respectively, and call $\widetilde{X}$ the superpartner of the massive $B$, whereas we denote the superpartner of the massless $A$, that is the $U(1)_Y$ gaugino, by $\widetilde{Y}$. Then from Eq. (\ref{mix2}) we have \begin{eqnarray} \label{clZ}\widetilde{X}&=&\cos\psi \widetilde{Z}- \sin\psi \lambda \nonumber \\ \widetilde{Y}&=& \sin\psi \widetilde{Z} + \cos\psi \lambda \end{eqnarray} The gaugino mass term of the $H_{51}\equiv SU(5)\times U(1)_X$ can be written as: \begin{eqnarray} \label{calL} {\cal L} &\supset& M_5 \lambda^a_b \lambda^b_a + M_X \lambda^2 \end{eqnarray} where the $\lambda^a_b$'s are the gauginos of $SU(5)$ ($a,b=1,\ldots,5$ and $\lambda^a_a=0$)\footnote{denoted by $\widehat{\lambda^a_b}$ in Eq. (\ref{labH51}).}. After $H_{51}$ is broken to the SM, with the indices ($\alpha,\beta=1,2,3;r,s=4,5$), we have: \begin{eqnarray} \label{massLH51} {\cal L}_{\text{mass}} &=& M_5 [\widehat{\lambda^\alpha_\beta} \widehat{\lambda^\beta_\alpha} + \widehat{\lambda^r_s} \widehat{\lambda^s_r} + \widetilde{Z}^2] + M_X \lambda^2\end{eqnarray} where $\widehat{\lambda^\alpha_\beta} =\lambda^\alpha_\beta - \frac{1}{3} \delta^\alpha_\beta \lambda^\gamma_\gamma$ are the gaugino fields of $SU(3)_C$ (similarly, $\widehat{\lambda^r_s}$ are the $SU(2)_L$ gauginos) and $\widetilde{Z}^2 = \frac{1}{3} (\lambda^\alpha_\alpha)^2 + \frac{1}{2} (\lambda^r_r)^2 $ is the squared $U(1)_Z$ gaugino field. 
From Eq. (\ref{massLH51}) and using Eq. (\ref{clZ}) we get: \begin{eqnarray} \label{mass_rel_H51} \left. M_2\right|_{M_{HB}}=\left. M_3\right|_{M_{HB}}=\left. M_5\right|_{M_{HB}} &,& \left. M_1\right|_{M_{HB}}=M_5\sin^2\psi+M_X\cos^2\psi= \left.\frac{g_X^2 M_5 + 24 g_5^2 M_X}{g_X^2 + 24 g_5^2} \right|_{M_{HB}}\end{eqnarray} To summarize, by calculating the mixing of the two $U(1)$'s we obtained the formulae relating the MSSM gaugino masses ($M_1,M_2,M_3$) to the gaugino masses of the intermediate group $H_{51}$ ($M_5,M_X$) and the coupling constants; these formulae are valid at the scale where the breaking of the intermediate group to the SM occurs. \section{The RG running and the MSSM gaugino mass ratios} In section 2, we computed the $H$-gaugino mass ratios at the GUT scale $M_{GUT}$, whereas in section 3 we expressed, at the intermediate breaking scale $M_{HB}$, the MSSM gaugino masses in terms of the $H$-gaugino masses and the coupling constants. Thus, it is necessary to introduce the running factors for the gauge couplings of the intermediate group ($\alpha_i \equiv \frac{g_i^2}{4 \pi}$) from $M_{GUT}$ to $M_{HB}$: \begin{eqnarray} r_i=\frac{\alpha_i(t)}{\alpha_i(t_0)} &,& t=\log\frac{M_{GUT}^2}{Q^2} \end{eqnarray} with $Q^2=M_{HB}^2$ and $t_0=0$ corresponding to $Q^2=M_{GUT}^2$, and we assume unification at $M_{GUT}$ ($\alpha_i(t_0)=\alpha$). 
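Before turning to the running factors, the $U(1)$ mixing result of Eq. (\ref{mass_rel_H51}) can be cross-checked numerically against $M_5\sin^2\psi+M_X\cos^2\psi$ (the inputs are arbitrary illustrative values):

```python
def m1_h51(M5, MX, g5, gX):
    # Eq. (mass_rel_H51): M1 at M_HB in closed form
    return (gX**2 * M5 + 24 * g5**2 * MX) / (gX**2 + 24 * g5**2)

def m1_from_psi(M5, MX, g5, gX):
    # M5 sin^2(psi) + MX cos^2(psi), with c^2 = 3 g5^2 + gX^2 / 8
    c2 = 3 * g5**2 + gX**2 / 8
    return (gX**2 / 8 / c2) * M5 + (3 * g5**2 / c2) * MX
```

Again, for degenerate $H_{51}$-gaugino masses ($M_5=M_X$) the mixing drops out and $M_1=M_5$.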
We define the ratio \begin{eqnarray} \label{running-ratio} R(i,j) \equiv \frac{r_i}{r_j}&=& \frac{1+\frac{\alpha}{2\pi} b_j t}{1+\frac{\alpha}{2\pi} b_i t}\end{eqnarray} with $b_i$ the beta function coefficients, and use the one-loop renormalization group equations for the evolution of the gaugino masses and the coupling constants: \begin{eqnarray} \label{1loop} \frac{M_i(t)}{g_i^2(t)} &=& \frac{M_i(t_0)}{g_i^2(t_0)} \end{eqnarray} With this we can obtain our final results for the MSSM gaugino mass ratios at the intermediate scale $M_{HB}$ as follows: \begin{itemize} \item {\underline{$SO(10)\to G_{422}$ by ${\bf 54}$}} Eqs. (\ref{mass_rel_g422_first},\ref{mass_rel_g422_second}) and (\ref{54G422}) lead to \begin{eqnarray} \frac{M_2(t)}{M_3(t)}=-\frac{3}{2}R(2_L,4) &,& \frac{M_1(t)}{M_3(t)}=\frac{-5 R(2_R,4)}{4 R(2_R,4)+6} \label{m13-54-422} \end{eqnarray} We note that we get the gaugino masses $M_a$ ($a=1,2,3$) in the ratio $-\frac{1}{2}:-\frac{3}{2}:1$ when the two scales are equal ($M_{HB}=M_{GUT}$), in accordance with the results of \cite{martin} obtained via a different approach. However, it is instructive to notice here that the functional form of the ratio $M_1/M_3$, in terms of the `RG'-factor $R(2_R,4)$, in equation (\ref{m13-54-422}) cannot be deduced directly, by simple RG running, from its value $(-\frac{1}{2})$ when $R(2_R,4)=1$ corresponding to two equal scales. This is because the mixing of two $U(1)$'s, one from $SU(4)_C$ and the other from $SU(2)_R$, to give $U(1)_Y$ happens at the intermediate scale $M_{HB}$, and use of Eq. (\ref{mass_rel_g422_second}) is essential in order to take account of this mixing. \item{\underline{$SO(10)\to G_{422}$ by ${\bf 210}$}} Eqs. (\ref{mass_rel_g422_first},\ref{mass_rel_g422_second}) and (\ref{0},\ref{-1}) lead to \begin{eqnarray} M_3(t)=0 &,& \frac{M_1(t)}{M_2(t)}=\frac{-3}{3+2 R(2_R,4)} \label{m12-210-422} \end{eqnarray} where the symmetric evolution of $\alpha_{2R}$ and $\alpha_{2L}$ puts $R(2_R,2_L)=1$. 
This reduces to the `known' value $\frac{M_1}{M_2}= -\frac{3}{5}$ when $M_{HB}=M_{GUT}$ \cite{martin}. We note that the possibility of gluinos being massless is not phenomenologically excluded. \item {\underline{$SO(10)\to G_{422}$ by ${\bf 770}$}} Eqs. (\ref{mass_rel_g422_first},\ref{mass_rel_g422_second}) and (\ref{5/2}) lead to \begin{eqnarray} \label{m13-770-422} \frac{M_1(t)}{M_3(t)}=\frac{19 R(2_R,4)}{6+4R(2_R,4)} &,& \frac{M_2(t)}{M_3(t)}=\frac{5}{2}R(2_L,4) \end{eqnarray} We see that when $M_{HB}=M_{GUT}$ the gaugino masses $M_a$ ($a=3,2,1$) reduce, as expected, to the ratio $1:\frac{5}{2}:\frac{19}{10}$ \cite{martin}. \item \underline{$SO(10)\to SU(2)\times SO(7)$ by ${\bf 54}$} Eqs. (\ref{m4m7},\ref{mass_rel_so7}) and (\ref{54s03so7}) lead to gaugino masses, at the intermediate scale $M_{HB}$, in the ratio: \begin{eqnarray} M_3: M_2: M_1 &=& 1:-\frac{7}{3} R(2_L,4) :1\end{eqnarray} which reduces to $1:-\frac{7}{3}:1$ when $M_{HB}=M_{GUT}$ \cite{chamoun}. \item \underline{$SO(10)\to SU(2)\times SO(7)$ by ${\bf 770}$} Eqs. (\ref{m4m7},\ref{mass_rel_so7}) and (\ref{7}) lead to \begin{eqnarray} \frac{M_1(t)}{M_3(t)} = 1 &,& \frac{M_2(t)}{M_3(t)}=7 R(2_L,4) \end{eqnarray} which reduce respectively to $1$ and $7$ when $M_{HB}=M_{GUT}$. \item \underline{$SO(10)\to H_{51}$ by ${\bf 210}$} Eqs. (\ref{mass_rel_H51}) and (\ref{-4}) lead to \begin{eqnarray} \frac{M_2(t)}{M_3(t)} = 1 &,& \frac{M_1(t)}{M_3(t)}=\frac{-95 R(1_X,5)}{R(1_X,5)+24} \label{m-210-H51}\end{eqnarray} Again, these functional forms are consistent with the `known' values of the gaugino mass ratios $M_a$ ($a=3,2,1$), $1:1:-\frac{19}{5}$, obtained in \cite{martin} using a different method when $M_{HB}=M_{GUT}$. 
However, their values at $M_{GUT}$ and RG running alone are not enough to deduce the `functional' forms, and one needs to carefully consider the normalization and mixing of $U(1)_X$ and $U(1)_Z$, which was done in Eq. (\ref{mass_rel_H51}). \item \underline{$SO(10)\to H_{51}$ by ${\bf 770}$} Eqs. (\ref{mass_rel_H51}) and (\ref{ratio16}) lead to \begin{eqnarray} \frac{M_2(t)}{M_3(t)}=1 &,& \frac{M_1(t)}{M_3(t)}= \frac{385 R(1_X,5)}{24+R(1_X,5)} \end{eqnarray} which reduce respectively to $1$ and $\frac{77}{5}$ if $M_{HB}=M_{GUT}$, in accordance with \cite{martin}. \end{itemize} We now compute the beta coefficients for the RG running. We shall consider that the scale $M_{HB}$ is above the threshold of creating the superpartners of the known particles, so we use the RG equations of the SUSY-GUT \cite{vaughin}: \begin{eqnarray} b_i &=& S_i(R) - 3 C_i(G)\end{eqnarray} where $S_i(R)$ is the Dynkin index of the irrep $R$ summed over all chiral superfields, normalized to $1/2$ for each fundamental irrep of $SU(N)$, and $C_i(G)$ is the Casimir invariant (equal to the Dynkin index of the adjoint representation), which satisfies $C(SU(N)) = N$ and $C(U(1)) = 0$. In order to single out the Higgs contribution, we write: \begin{eqnarray} S_i(R) &=& F_i + H_i \end{eqnarray} and we shall assume $N_g=3$ families of fermions, each spanning an $SO(10)$-$\bf 16$ spinor irrep. As to the Higgs fields, we only consider those responsible for the breaking of the intermediate group $H$. These would include the MSSM Higgses, but the way in which this is carried out is model-dependent. As to the Higgs fields responsible for the breaking of $SO(10)$, we do not consider them since they get masses of order $M_{GUT}$, and some will be `eaten' by the gauge bosons. 
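The mass-ratio formulae collected in the itemized list above can all be checked in the single-scale limit: at $t=0$ every $R(i,j)=1$, and the quoted GUT-scale ratios are recovered. A short sketch:

```python
import math

def R(b_i, b_j, t, alpha=0.1):
    # Running-factor ratio R(i,j) of Eq. (running-ratio)
    k = alpha / (2 * math.pi)
    return (1 + k * b_j * t) / (1 + k * b_i * t)

r = R(2, -4, 0.0)                       # at t = 0 (M_HB = M_GUT) every R(i,j) = 1

m1_m3_54_422  = -5 * r / (4 * r + 6)    # -> -1/2   (Eq. m13-54-422)
m2_m3_54_422  = -1.5 * r                # -> -3/2
m1_m2_210_422 = -3 / (3 + 2 * r)        # -> -3/5   (Eq. m12-210-422)
m1_m3_770_422 = 19 * r / (6 + 4 * r)    # -> 19/10  (Eq. m13-770-422)
m1_m3_210_H51 = -95 * r / (r + 24)      # -> -19/5  (Eq. m-210-H51)
m1_m3_770_H51 = 385 * r / (24 + r)      # -> 77/5
```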
As explained in section 3, we need a Higgs field $\Phi$ in a $\bf 16$ irrep of $SO(10)$ in both cases corresponding to $H= G_{422}$ and $H=H_{51}$, whereas we need a Higgs field $\Phi$ in a $\bf 45$ irrep of $SO(10)$ in the case $H=SU(2)\times SO(7)$, whence we have the table: \begin{table}[htbp] \begin{center} \begin{tabular}{||c||c||c|c|c|c||c|c|c|c||c||} \hline $H$ & Higgs & $F_i$ & $H_i$ & $C_i$ & $b_i$ & $F_j$ & $H_j$ & $C_j$ & $b_j$ & MSSM\\ \hline $G_{422}$& {\bf{16}} & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $4$ & $-4$ & $b_1^{MSSM}=\frac{33}{5}$\\ \hline $SU2 \times SO7$& {\bf{45}} & $4$ & $16$ & $2$ & $22$ & $2$ & $8$ & $4$ & $2$ & $b_2^{MSSM}=1$\\ \hline $H_{51}$& {\bf{16}} & $2$ & $2$ & $0$ & $8$ & $2$ & $2$ & $5$ & $-7$ & $b_3^{MSSM}=-3$\\ \hline \hline \end{tabular} \end{center} \caption{$(i, j)=(2_R, 4)$ or $(2_L, 4)$ for $H= G_{422}$ or $H = SU2 \times SO7$ (broken at $M_{GUT}$ to $H^{\prime}=SU2 \times SU4$), whereas $(i,j)=(1_X, 5)$ for $H= H_{51}$. We also list the MSSM beta function coefficients.} \label{beta coefficients} \end{table} \begin{landscape} \begin{table*}[htbp] \begin{center} {\small \begin{tabular}{|c||c||c|c|c|c||c|c|c|c|} \hline Irrep & $H$ & \multicolumn{4}{|c||}{$M_1/M_3 $} & \multicolumn{4}{|c|}{$M_2/M_3$}\\ \hline $M_{HB}=$& & & $M_{GUT}$ & $10^8$ & $10^{3}$ & & $M_{GUT}$ & $10^8$ & $10^{3}$ \\ \hline \hline {\bf 54} & $G_{422}$ & $\frac{-5 R(2_R,4)}{6+4 R(2_R,4)}$ & $-1/2$ & $\begin{matrix} 0.88 \cr (3.21) \end{matrix}$ & $\begin{matrix} 2.27 \cr (1.96) \end{matrix}$ & $-\frac{3}{2} R(2_L,4)$ & $-3/2$ & $\begin{matrix} 0.93 \cr (3.13) \end{matrix}$ & $\begin{matrix} 1.45 \cr (1.58) \end{matrix}$ \\ \hline & $SU2 \times SO7$ & $1$ & $1$ & $\begin{matrix} 1 \cr (-6.42) \end{matrix}$ & $\begin{matrix} 1 \cr (-3.92) \end{matrix}$ & $- \frac{7}{3} R(2_L,4)$ & $-7/3$ & $\begin{matrix} -0.36 \cr (4.88) \end{matrix}$ & $\begin{matrix} -0.31 \cr (2.45) \end{matrix}$ \\ \hline \hline {\bf 210} & $G_{422}$ & $m=\frac{-3}{3 + 2 R(2_R,4)}$ & 
$m=-\frac{3}{5}$ & $\begin{matrix} m=-1.70 \cr (=-1.84) \end{matrix}$ & $ \begin{matrix} m=-2.82 \cr (=-2.24) \end{matrix}$ & $\infty$ & $\infty$ & $\infty$ & $\infty$ \\ \hline & $H_{51}$ & $\frac{-95 R(1_X,5)}{24 + R(1_X,5)}$ & $-19/5$ & $\begin{matrix} 2.21 \cr (24.38) \end{matrix}$ & $\begin{matrix} 2.67 \cr (14.90) \end{matrix}$ & $1$ & $1$ & $\begin{matrix} 1 \cr (-2.09) \end{matrix}$ & $\begin{matrix} 1 \cr (-1.05) \end{matrix}$ \\ \hline \hline {\bf 770} & $G_{422}$ & $\frac{19 R(2_R,4)}{6+4 R(2_R,4)}$ & $19/10$ & $\begin{matrix} -3.34 \cr (-12.19) \end{matrix}$ & $\begin{matrix} -8.63 \cr (-7.45) \end{matrix}$ & $\frac{5}{2} R(2_L,4)$ & $5/2$ & $\begin{matrix} -1.55 \cr (-5.22) \end{matrix}$ & $\begin{matrix} -2.42 \cr (-2.63) \end{matrix}$ \\ \hline & $SU2 \times SO7$ & $1$ & $1$ & $\begin{matrix} 1 \cr (-6.42) \end{matrix}$ & $\begin{matrix} 1 \cr (-3.92) \end{matrix}$ & $7R(2_L,4)$ & $7$ & $\begin{matrix} 1.09 \cr (-14.63) \end{matrix}$ & $\begin{matrix} 0.93 \cr (-7.36) \end{matrix}$ \\ \hline & $H_{51}$ & $\frac{385 R(1_X,5)}{24 + R(1_X,5)}$ & $77/5$ & $\begin{matrix} -8.95 \cr (-98.80) \end{matrix}$ & $\begin{matrix} -10.85 \cr (-60.40) \end{matrix}$ & $1$ & $1$ & $\begin{matrix} 1 \cr (-2.09) \end{matrix}$ & $\begin{matrix} 1 \cr (-1.05) \end{matrix}$ \\ \hline \end{tabular} } \end{center} \vspace{-.5cm}\caption{Gaugino mass ratios at the intermediate scale $M_{HB}$ in the different cases. To each ratio correspond four columns, the first of which gives the general formula whereas the other three give the result when $M_{HB}$ takes a specific value. Bracketed values denote the gaugino mass ratios when $M_{HB}=M_{GUT}$ evaluated at the same specific energy scale ($10^3$ or $10^8$ $GeV$) as in the case $M_{HB}\neq M_{GUT}$. The following numerical values are taken: $M_{GUT}=10^{16}$, $\alpha = 0.1$. Mass scales are evaluated in $GeV$. 
The parameter $m$ is equal to $\frac{M_1}{M_2}$.} \label{ratios} \end{table*} \end{landscape} \section{Summary and Discussion} We summarize our results in Table \ref{ratios}, where we compute the gaugino mass ratios in the different cases, using equation (\ref{running-ratio}), with $\alpha \sim 0.1$, $M_{GUT}=10^{16}$ GeV, and we take two values for the intermediate breaking scale, $M_{HB}=10^3, 10^8$ GeV. In order to illustrate in the table the effect of `successive' breakings, we have enclosed in brackets the values of the gaugino mass ratios at the specific values $10^3, 10^8$ GeV, had the two breakings occurred at one stage ($M_{HB}=M_{GUT}$), using the MSSM running from $E=M_{GUT}$ to $E=10^3\text{ or }10^8$ GeV: \begin{eqnarray} \frac{M_i}{M_j}(E) &=& \frac{M_i}{M_j}(M_{GUT}) \frac{1+\frac{\alpha}{2\pi}tb^{MSSM}_i}{1+\frac{\alpha}{2\pi}tb^{MSSM}_j} \end{eqnarray} where $t=\log(\frac{M_{GUT}}{E})^2$. We see that the gaugino mass ratios, evaluated at the same energy scale, change significantly when the intermediate scale is low (say, $10^8$ GeV or TeV) compared to when the two breaking scales are approximately equal. We note here that we did not consider the impact of the intermediate scale on gauge coupling unification for the values of the parameters used in the table. To check that this unification requirement can be achieved in a way consistent with the low scale experimental measurements would involve model building details, where one constructs a complete SUSY GUT model with a full superpotential explicitly written, and in which gauge coupling unification is realized in two steps of breaking: a task beyond the scope of this paper, which does not enter into model-building particularities. Having said this, though, one should notice that from a phenomenological point of view there is a more reasonable way to obtain the gaugino mass ratios at the intermediate scale $M_{HB}$. 
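Before turning to that phenomenological determination, the entries of Table \ref{ratios} can be reproduced numerically from the beta coefficients of Table \ref{beta coefficients} and the conventions above ($\alpha=0.1$, $M_{GUT}=10^{16}$ GeV, $t=\log(M_{GUT}^2/Q^2)$); a sketch for the ${\bf 54}$--$G_{422}$ column at $M_{HB}=10^3$ GeV:

```python
import math

alpha, MGUT = 0.1, 1e16
k = alpha / (2 * math.pi)

def run(b, t):
    # one-loop factor 1 + (alpha / 2 pi) b t appearing in Eq. (running-ratio)
    return 1 + k * b * t

t = math.log(MGUT**2 / (1e3) ** 2)

# Intermediate-group running, G_422 row of Table 1: b_{2L} = b_{2R} = 2, b_4 = -4
Rg = run(-4, t) / run(2, t)                 # R(2_R,4) = R(2_L,4)
m1_m3 = -5 * Rg / (4 * Rg + 6)              # Eq. (m13-54-422)  -> 2.27
m2_m3 = -1.5 * Rg                           #                   -> 1.45

# Bracketed entries: one-stage breaking, MSSM betas b1 = 33/5, b2 = 1, b3 = -3
m1_m3_mssm = -0.5 * run(33 / 5, t) / run(-3, t)   # -> 1.96
m2_m3_mssm = -1.5 * run(1, t) / run(-3, t)        # -> 1.58
```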
In fact, once we fix the partially unified intermediate gauge group $H$ and the intermediate mass scale $M_{HB}$, the values of the gauge couplings at $M_{HB}$ can be calculated from the weak scale data by using RG equations, and then one can use the formulae of the past section to compute the corresponding gaugino mass ratios assuming gauge coupling unification at $M_{GUT}$. However, whether or not the numerical values of the running gauge couplings at a `low' intermediate scale $M_{HB}$\footnote{By `low' we mean a scale smaller than $\sim 10^{12}$ GeV, so as to be able to explain the smallness of neutrino masses.}, which are necessary to evaluate the gaugino mass ratios at this scale, can match the SM gauge couplings measured at the electroweak scale $M_Z$, provided we insist on having just the MSSM between $M_{HB}$ and $M_Z$\footnote{More precisely, one has the MSSM between $M_{HB}$ and $M_S$, the SUSY scale, and the SM between $M_S$ and $M_Z$.}, would depend heavily on the nature of $H$. For instance, if $H=SU(5)\times U(1)$, it is difficult to get a low intermediate mass scale and unify both coupling constants into one corresponding to $SO(10)$~\cite{hll}. Nonetheless, if $H=G_{3221}\equiv SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)_{B-L}$, a low intermediate mass scale can be obtained~\cite{majee}. 
As an illustrative example, let us take the case of $H=G_{422}$ and calculate the gaugino mass ratios by computing the values of the gauge couplings at $M_{HB}$ from the weak scale data, assuming gauge coupling unification at $M_{GUT}$ (which can be realized by, say, adding some particle content near $M_{HB}$ similar to that in \cite{majee}\footnote{In \cite{majee}, with the intermediate group $G_{3221}$ and additional light supermultiplets with masses around the intermediate mass scale $M_R$ (corresponding to $M_{HB}$ in the present paper), one could, within SUSY $SO(10)$ GUT, achieve low values for $M_R$ ($10^4$--$10^{10}$ GeV) with $M_{GUT}\sim 10^{16}$ GeV.}). With the numerical values \cite{pdg} ($M_Z = 91.18$ GeV, $\alpha_S(M_Z) \sim 0.1187$, $\sin^2\theta_W \sim 0.2312$, $\alpha_{em}^{-1}(M_Z)=127.9 \Rightarrow \alpha_{2L}^{-1}(M_Z) = 29.57$ and $\alpha^{-1}_Y(M_Z) = 58.99$) and the MSSM beta coefficients from Table \ref{beta coefficients}, we get, for $M_{HB}=10^4$ GeV, the values $\alpha_S^{-1}= 12.91$, $\alpha_{2L}^{-1}= 28.07$, $\alpha_Y^{-1} = 49.12$ at $M_{HB}$. Because $H$ breaks into the SM at $M_{HB}$, we have $g_4(M_{HB}) = g_S(M_{HB})$ and $g_{2R}(M_{HB})= g_{2L}(M_{HB})$. Applying Eqs. (\ref{m13-54-422}), (\ref{m12-210-422}) and (\ref{m13-770-422}), we get the numerical results for the gaugino mass ratios, shown in Table \ref{specific model}. 
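These entries follow directly from the quoted couplings; a minimal sketch (the results agree with Table \ref{specific model} up to the rounding of the quoted $\alpha_i^{-1}$ values):

```python
# Inverse couplings quoted in the text at M_HB = 10^4 GeV
inv_aS, inv_a2L = 12.91, 28.07
# Matching at M_HB: g4 = gS and g_{2R} = g_{2L}, so R(2_R,4) = alpha_{2L}/alpha_S
Rg = inv_aS / inv_a2L

m1_m3_54 = -5 * Rg / (4 * Rg + 6)      # -> -0.29
m2_m3_54 = -1.5 * Rg                   # -> -0.68
m1_m2_210 = -3 / (3 + 2 * Rg)          # -> -0.76  (M3 = 0 in this case)
m1_m3_770 = 19 * Rg / (6 + 4 * Rg)     # ->  1.11
m2_m3_770 = 2.5 * Rg                   # ->  1.15
```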
\begin{table}[htbp] \begin{center} \begin{tabular}{||c||c|c||} \hline Irrep & $M_1/M_3$ & $M_2/M_3$ \\ \hline {\bf{54}} & $-0.29$ & $-0.68$ \\ \hline {\bf{210}} & $\infty$ & $M_1/M_2 = -0.76$ \\ \hline {\bf{770}} & $1.11$ & $1.15$ \\ \hline \hline \end{tabular} \end{center} \caption{Gaugino mass ratios at $M_{HB} \sim 10$ TeV for an intermediate $G_{422}$ group, obtained by computing the values of the gauge couplings at $M_{HB}$ starting from the weak scale data.} \label{specific model} \end{table} In general, considering other models and other intermediate groups, one can say that although some model complexifications might affect the evolution of the coupling constants, and consequently the values of the derived gaugino mass ratios, the conclusion concerning the significant influence of multiple stages in the breaking chain would remain unchanged. The derived mass ratios would be reflected in measurements at the electroweak energy scale due to take place in near-future experiments, such as the LHC, with interesting phenomenological consequences. \vspace{0.1cm} {\it{Acknowledgements:}} This work was supported in part by the National Natural Science Foundation of China under nos. 90503002, 10821504, 10805018 and 10975171. N.C. thanks CBPF-Brazil, where part of the work has been done, for its hospitality and acknowledges support from TWAS.
\section{Introduction} \label{sec:Introduction} Model comparison, selection and averaging lie at the core of Bayesian decision theory \citep{Robert:2007tc} and have attracted considerable attention in the past few decades. In most cases, approaches to the calculation of the required posterior model probabilities have revolved around simple asymptotic arguments or the post-processing of outputs from Markov chain Monte Carlo (MCMC\xspace) algorithms operating on the space of a single model, or using specially designed MCMC\xspace techniques that provide direct estimates of these quantities (for example Reversible Jump MCMC\xspace, RJMCMC\xspace; \citet{Green:1995dg}). Within-model simulations are typically somewhat simpler, but generalisations of the harmonic mean estimator \citep{Gelfand:1994ux} which are often used in this setting require careful design to ensure finite variances, and convergence assessment can be rather difficult. Simulations on the whole model space are often difficult to implement efficiently even though they can be conceptually appealing. More robust and efficient Monte Carlo algorithms have been established in recent years. Many of them are population based, in that they deal explicitly with a collection of samples at each iteration, including sequential importance sampling and resampling (AIS\xspace, \citet{Neal:2001we}; SMC\xspace, \citet{DelMoral:2006hc}) and population MCMC\xspace (PMCMC\xspace; \citet{Liang:2001dc,Jasra:2007in}). However, most studies have focused on their abilities to explore high dimensional and multimodal spaces. The effectiveness of these algorithms when applied to Bayesian model comparison problems is less well studied. In the present work, we motivate and present a number of approaches based around the sequential Monte Carlo family of algorithms, and demonstrate the effectiveness of the proposed strategy empirically. 
Sequential Monte Carlo (SMC\xspace) methods are a class of sampling algorithms which combine importance sampling and resampling. They have been primarily used as ``particle filters'' to solve optimal filtering problems; see, for example, \citet{Cappe:2007hz,Doucet:2011us} for recent reviews. They are used here in a different manner, that proposed by \citet{DelMoral:2006hc} and developed by \citet{DelMoral:2006wv,Peters:2005wh}. This framework involves the construction of a sequence of artificial distributions on spaces of increasing dimensions which admit the distributions of interest as particular marginals. Although it is well known that SMC\xspace is well suited for the computation of normalising constants and that it is possible to develop relatively automatic SMC\xspace algorithms by employing a variety of ``adaptive'' strategies, their use for Bayesian model comparison has not yet received a great deal of attention. We highlight three strategies for computing posterior model probabilities using SMC\xspace, focusing on strategies which require minimal tuning and can be readily implemented requiring only the availability of \emph{locally-mixing} MCMC\xspace proposals. These methods admit natural and scalable parallelisation and we demonstrate the potential of these algorithms with real implementations suitable for use on consumer-grade parallel computing hardware including GPU\xspace{}s, reinforcing the message of \citet{Lee:2010fm}. We also present a new approach to adaptation and guidelines on the near-automatic implementation of the proposed algorithms. These techniques are applicable to SMC\xspace algorithms in much greater generality. The proposed approach is compared with state of the art alternatives in extensive simulation studies which demonstrate its performance and robustness. In the next section we provide a brief survey of the considerable literature on Monte Carlo methods for Bayesian model comparison. 
Section~\ref{sec:Methodology} presents three algorithms for performing model comparison using SMC\xspace techniques and Section~\ref{sec:Illustrative Applications} provides several illustrative applications, together with comparisons with other techniques. Section~\ref{sec:Theoretical Considerations} extends the standard SMC\xspace central limit theorem to include the path sampling estimator. The paper concludes with some discussions in Section~\ref{sec:Conclusion}. \section{Background} \label{sec:Background} Bayesian model comparison must be based upon the posterior distribution over models. It is only possible to obtain closed-form expressions for posterior model probabilities in very limited situations. Over the past five decades, this problem has attracted considerable attention. It is not feasible to exhaustively summarise this literature here. We aim only to describe the major contributions to the area and recent developments which are particularly relevant to the present paper. \subsection{Analytic Methods and MCMC} \label{sub:Conventional Methods} The Bayesian Information Criterion (BIC\xspace), developed by \citet{Schwarz:1978uv}, is based upon a large sample approximation of the Bayes factor. It is defined as $\text{BIC\xspace} = -2\widehat\ell + k\log(n)$, where $\widehat\ell$ denotes the maximum of the log-likelihood for the observed data, $k$ the number of model parameters and $n$ the (effective) number of observations. An asymptotic argument concerning Bayes factors under appropriate regularity conditions justifies the choice of the model with the smallest value of BIC\xspace. Although appealing in its simplicity, such an approach can only be formally justified when a very large number of observations (compared to the number of parameters) is available. In this era of fast computing, the difficulty of evaluating the integrals that must be computed in order to adopt a fully Bayesian approach is much smaller than it once was. 
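The criterion itself is trivial to compute; a minimal sketch with purely hypothetical log-likelihood values, illustrating how the $k\log(n)$ penalty can favour a smaller model despite a slightly worse fit:

```python
import math

def bic(max_loglik, k, n):
    # BIC = -2 * maximised log-likelihood + k * log(n); smaller is preferred
    return -2.0 * max_loglik + k * math.log(n)

# Hypothetical nested models fitted to n = 50 observations: the richer model
# improves the fit only slightly, so the penalty favours the smaller one.
bic_small = bic(-100.0, 2, 50)
bic_large = bic(-99.0, 5, 50)
```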
The Bayesian approach to model comparison is, of course, to consider the posterior probabilities of the possible models \citep[Chapter 6]{Bernardo:1994vd}. Within this Bayesian framework the decision making process, which might include model comparison, model selection or the choice of an action, depends upon the relative probabilities of several models \citep[Chapter 7]{Robert:2007tc}. Given an (at most countable) collection of models $\{\Mk\}_{k\in\ensuremath{\mathcal{K}}\xspace}$, with model $\Mk$ having parameter space $\Paramk$, Bayesian inference proceeds from a prior distribution over the collection of models, $\pi(\Mk)$, a prior distribution for the parameters of each model, $\pi(\paramk|\Mk)$, and the likelihood under each model, $p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)$. In order to perform model comparison, one requires the posterior model probability, \begin{equation} \pi(\Mk|\ensuremath{\bm{y}}\xspace) = \frac{p(\ensuremath{\bm{y}}\xspace|\Mk)\pi(\Mk)}{p(\ensuremath{\bm{y}}\xspace)} \end{equation} where $p(\ensuremath{\bm{y}}\xspace|\Mk) = \int_{\paramk} p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk) \pi(\paramk|\Mk) \intd\paramk$ is termed the \emph{evidence} for model $\Mk$ and the normalising constant $p(\ensuremath{\bm{y}}\xspace) = \sum_{k\in\ensuremath{\mathcal{K}}\xspace}p(\ensuremath{\bm{y}}\xspace|\Mk) \pi(\Mk)$ can be easily calculated if $|\ensuremath{\mathcal{K}}\xspace|$ is finite and the evidence for each model is available. The case where $|\ensuremath{\mathcal{K}}\xspace|$ is countably infinite is discussed later. We first review some techniques for calculating the evidence for each model individually. Several techniques have been proposed to approximate the evidence for a model using simulation techniques which approximate the posterior distribution of that model, including the harmonic mean estimator of \citet{Newton:1994wm,Raftery:2007ud} and the generalisations due to \citet{Gelfand:1994ux}. 
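Given the evidence of each model, the posterior model probabilities above follow by direct normalisation over a finite $\ensuremath{\mathcal{K}}\xspace$; a minimal sketch (the evidence values below are purely hypothetical):

```python
def posterior_model_probs(evidences, priors):
    # pi(M_k|y) = p(y|M_k) pi(M_k) / p(y), with p(y) the finite sum over models
    joint = [e * p for e, p in zip(evidences, priors)]
    p_y = sum(joint)
    return [j / p_y for j in joint]

# Hypothetical evidences for three models under a uniform model prior
probs = posterior_model_probs([2.0e-3, 1.0e-3, 1.0e-3], [1 / 3, 1 / 3, 1 / 3])
```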
These pseudo-harmonic mean methods are based around the insight that, for any density $g$ such that $g$ and the posterior density are mutually absolutely continuous, the following identity holds: \begin{equation} \int \frac{g(\paramk)}{p(\ensuremath{\bm{y}}\xspace,\paramk|\Mk)} \pi(\paramk|\ensuremath{\bm{y}}\xspace,\Mk) \intd\paramk = \int \frac{g(\paramk)}{p(\ensuremath{\bm{y}}\xspace,\paramk|\Mk)} \frac{p(\ensuremath{\bm{y}}\xspace,\paramk|\Mk)}{p(\ensuremath{\bm{y}}\xspace|\Mk)} \intd\paramk = \frac{1}{p(\ensuremath{\bm{y}}\xspace|\Mk)} \end{equation} and by approximating the leftmost integral using any numerical integration technique one can in principle obtain an estimate of the evidence. Unfortunately, considerable care is required in the implementation of such schemes in order to control the variance of the resulting estimator (and indeed, to ensure that this variance is finite; see for example \citet{Neal:1994}). In the particular case of the Gibbs sampler, \citet{Chib:1995em} provides an alternative approach to the approximation of the evidence from within-model simulations based on the fact that the identity, \begin{equation} p(\ensuremath{\bm{y}}\xspace|\Mk) = \frac{p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)\pi(\paramk|\Mk)}{\pi(\paramk|\ensuremath{\bm{y}}\xspace,\Mk)}, \end{equation} holds for any value of $\paramk$. Therefore, an estimator of the marginal likelihood can be obtained by replacing $\paramk$ with a particular value, say $\paramk^\star$, which is usually chosen from the high probability region of the posterior distribution and approximating the denominator $\pi(\paramk^\star|\ensuremath{\bm{y}}\xspace,\Mk)$ using the output from a Gibbs sampler. Though this method does not suffer from the instability associated with generalised harmonic mean estimators, it requires that all full conditional densities are known, including their normalising constants.
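The generalised harmonic mean identity can be checked numerically on a toy conjugate model where the evidence is known in closed form. The sketch below makes simplifying assumptions: a single observation with unit variances, exact posterior draws rather than MCMC output, and $g$ taken to be the exact posterior (available here by conjugacy), a light-tailed choice which makes the estimator zero-variance in this idealised case:

```python
import math
import random

random.seed(1)
y = 1.0  # single observation: y | theta ~ N(theta, 1), prior theta ~ N(0, 1)

def norm_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Conjugacy gives the posterior N(y/2, 1/2) and the evidence p(y) = N(y; 0, 2).
post_mean, post_var = y / 2.0, 0.5
draws = [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(10_000)]

# E_posterior[ g(theta) / (p(y|theta) pi(theta)) ] = 1 / p(y) for suitable g;
# a light-tailed g keeps the estimator's variance finite (here it is zero).
inv_ev = sum(norm_pdf(t, post_mean, post_var)
             / (norm_pdf(y, t, 1.0) * norm_pdf(t, 0.0, 1.0))
             for t in draws) / len(draws)
evidence_hat = 1.0 / inv_ev
exact_evidence = norm_pdf(y, 0.0, 2.0)
```

In realistic problems $g$ must be chosen without knowledge of the posterior, and poor choices are exactly what produce the instability discussed above.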
This approach was generalised to other Metropolis-Hastings algorithms by \citet{Chib:2001gq}, where only the proposal distributions are required to be known, including their normalising constants. The first MCMC\xspace method which operated simultaneously over the full collection of models of interest providing direct estimates of posterior model probabilities was probably the approach of \citet{Grenander:1994vy}. However, the general Reversible Jump MCMC\xspace (RJMCMC\xspace) strategy first proposed by \citet{Green:1995dg} is undoubtedly the most widespread of these techniques. RJMCMC\xspace adapts the Metropolis-Hastings algorithm to construct a Markov chain on an extended state-space which admits the posterior distribution over both model and parameters as its invariant distribution. The algorithm operates on the space $\bigcup_{k\in\ensuremath{\mathcal{K}}\xspace}(\{\Mk\}\times\Paramk)$. A countable set of move types, say $m\in\ensuremath{\mathcal{M}}\xspace$, is considered, and each move type $m$ is capable of moving between two models, say $\Mk$ and $\Mk[k^\prime]$ (where $k=k^\prime$ in the case of within-model moves). At state $\paramk$, a move type $m$ together with a new state $\paramk[k^\prime]$ are proposed according to $q_m(\paramk,\paramk[k^\prime])r_m(\paramk)$, where $r_m(\paramk)$ is the probability of choosing a type~$m$ move when at state $\paramk$ and $q_m(\paramk,\paramk[k^\prime])$ is the proposal kernel for the new state when move type $m$ is made. The move is accepted with probability \begin{equation} \alpha_m(\paramk,\paramk[k^\prime]) = \min \biggl\{1, \frac{\pi(M_{k^\prime}) \pi(\paramk[k^\prime]|M_{k^\prime}) p(\ensuremath{\bm{y}}\xspace|\paramk[k^\prime],\Mk[k^\prime])} {\pi(\Mk)\pi(\paramk|\Mk)p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)} \frac{q_m(\paramk[k^\prime],\paramk)r_m(\paramk[k^\prime])} {q_m(\paramk,\paramk[k^\prime])r_m(\paramk)} \biggr\}.
\end{equation} In practice, sampling of the proposed new state $\paramk[k^\prime]$ is often achieved by drawing a vector of continuous random variables, say $u$, which are independent of $\paramk$ and applying a deterministic mapping of vector $(\paramk,u)$ to $\paramk[k^\prime]$. The inverse of the move, from $\paramk[k^\prime]$ back to $\paramk$, then uses the inverse of this transformation. Through a simple change of variable, the conditional density $q_m(\paramk,\paramk[k^\prime])$ can be expressed in terms of the density of vector $u$, $q(u)$, and the acceptance probability becomes \begin{equation} \alpha_m(\paramk,\paramk[k^\prime]) = \min \biggl\{1, \frac {\pi(M_{k^\prime}) \pi(\paramk[k^\prime]|M_{k^\prime}) p(\ensuremath{\bm{y}}\xspace|\paramk[k^\prime],M_{k^\prime})} {\pi(\Mk)\pi(\paramk|\Mk)p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)} \frac{r_m(\paramk[k^\prime])}{r_m(\paramk)} \frac{1}{q(u)} \biggl\lvert \frac{\partial\paramk[k^\prime]}{\partial(\paramk,u)} \biggr\rvert\biggr\}, \end{equation} where the last term is the determinant of the Jacobian of the transformation. The design of efficient between-model moves is often difficult, and the mixing of these moves largely determines the performance of the algorithm. For example, in multimodal models, where RJMCMC\xspace has attracted substantial attention, information available in the posterior distribution of a model of any given dimension does not characterise modes that exist only in models of higher dimension, and thus successful moves between those models become unlikely and difficult to construct \citep{Jasra:2007id}. In addition, RJMCMC\xspace will not characterise models of low posterior probability well, as those models will be visited by the chain only rarely. In some cases it will be difficult to determine whether the low acceptance rates of between-model moves result from actual characteristics of the posterior or from a poorly-adapted proposal kernel. 
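On the log scale, the acceptance probability above is a sum of a few readily-evaluated terms; a small helper illustrating the structure (all numerical inputs in the example, including the move probabilities and the value of $u$, are illustrative):

```python
import math

def rj_accept_prob(log_post_new, log_post_old,
                   log_r_new, log_r_old,
                   log_u_density, log_abs_det_jacobian):
    """min{1, [posterior ratio] x [r_m ratio] x [1/q(u)] x |Jacobian|},
    assembled on the log scale for numerical stability."""
    log_ratio = (log_post_new - log_post_old
                 + log_r_new - log_r_old
                 - log_u_density + log_abs_det_jacobian)
    return min(1.0, math.exp(log_ratio))

# A dimension-increasing move: equal move-choice probabilities, auxiliary
# variable u ~ N(0, 1) drawn at u = 0.3, and a mapping with unit Jacobian.
log_q_u = -0.5 * 0.3 ** 2 - 0.5 * math.log(2.0 * math.pi)
alpha = rj_accept_prob(log_post_new=-12.0, log_post_old=-10.0,
                       log_r_new=math.log(0.5), log_r_old=math.log(0.5),
                       log_u_density=log_q_u, log_abs_det_jacobian=0.0)
```

The difficult part in practice is not this arithmetic but designing the mapping and $q(u)$ so that proposals land in regions of appreciable posterior mass in the other model.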
The related continuous-time birth-and-death algorithm of \citet{Stephens:2000wq} was shown by \citet{Cappe:2003ek} to have no qualitative advantage over the simpler discrete-time formulation. A post-processing approach to improve the computation of normalising constants from RJMCMC\xspace output using a bridge-sampling approach was advocated by \citet{Bartolucci:2006cb}. Sophisticated variants of these algorithms, such as those developed in \citet{Peters:2010vk}, have been considered in recent years but depend upon essentially the same construction and ultimately require adequate mixing of the underlying Markov process. \citet{Carlin:1995uy} presented an alternative method for simulating the model probability directly through a Gibbs sampler on the space $\{\Mk\}_{k\in\ensuremath{\mathcal{K}}\xspace} \times \prod_{k\in\ensuremath{\mathcal{K}}\xspace}\Paramk$. The joint parameter is thus $(M,\theta)$ where $\theta$ is the vector $(\paramk)_{k\in\ensuremath{\mathcal{K}}\xspace}$ and conditional on model $\Mk$ the data $\ensuremath{\bm{y}}\xspace$ only depends on a subset, $\paramk$, of the parameters. To form the Gibbs sampler, a so-called pseudoprior, $\pi(\paramk|M\ne\Mk)$, is selected in addition to the usual prior $\pi(\paramk|\Mk)$, such that given the model indicator $M$, the parameters associated with different models are conditionally mutually independent. In this way, a Gibbs sampler can be constructed provided that all the full conditional distributions $\pi(\paramk|\ensuremath{\bm{y}}\xspace,\paramk[k^\prime\ne k],M)$ and $\pi(M = \Mk|\ensuremath{\bm{y}}\xspace,\theta)$ for $k\in\ensuremath{\mathcal{K}}\xspace$ are available. The major drawback of this approach is that the performance and validity of the sampler are very sensitive to the selected pseudopriors, and as usual for all Gibbs samplers, the full conditional distributions need to be readily sampled from.
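The model-indicator full conditional in this construction has a simple closed form once the pseudopriors are fixed; a sketch (function and argument names are illustrative, and the inputs are log terms already evaluated at the current $\theta$), using a log-sum-exp normalisation:

```python
import math

def model_indicator_conditional(log_model_prior, log_lik, log_prior, log_pseudo):
    """pi(M = M_k | y, theta) proportional to
    pi(M_k) p(y|theta_k, M_k) pi(theta_k|M_k) * prod_{j != k} pseudo_j(theta_j);
    inputs are per-model lists of log terms at the current theta."""
    K = len(log_model_prior)
    total_pseudo = sum(log_pseudo)
    logs = [log_model_prior[k] + log_lik[k] + log_prior[k]
            + total_pseudo - log_pseudo[k]   # product over j != k
            for k in range(K)]
    m = max(logs)                            # log-sum-exp for stability
    unnorm = [math.exp(l - m) for l in logs]
    s = sum(unnorm)
    return [u / s for u in unnorm]

# Two models with equal model priors; model 1's likelihood is twice model 2's.
p = model_indicator_conditional(log_model_prior=[math.log(0.5)] * 2,
                                log_lik=[math.log(2.0), 0.0],
                                log_prior=[0.0, 0.0],
                                log_pseudo=[0.0, 0.0])
```

Sampling $M$ from this discrete conditional is trivial; the sensitivity noted above enters through the pseudoprior terms, which appear directly in these probabilities.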
This approach was later generalised by \citet{Godsill:2001cv} who also explored the connection with RJMCMC\xspace. Overall, the methods reviewed above either demand some knowledge of the target distributions that is often missing in reality, or require substantial tuning in order for the algorithms to perform well. \subsection{Recent Developments on Population-Based Methods} \label{sub:Recent Developments on Population-Based Methods} In the recent computational statistics literature, there has been a tendency to consider the use of population-based sampling methods. Two approaches, among many others, have proved particularly popular. One is based on sequential importance sampling and resampling, such as the SMC\xspace sampler \citep{DelMoral:2006hc} and the earlier development of annealed importance sampling (AIS\xspace; \citet{Neal:2001we}), which can be viewed as a special case of SMC\xspace. Another approach is population MCMC\xspace (PMCMC\xspace; \citet{Marinari:1992,Geyer:1991,Liang:2001dc}), also known as parallel tempering, which uses a collection of parallel MCMC\xspace chains to approximate a target distribution. PMCMC\xspace operates by constructing a sequence of distributions $\{\pi_t\}_{t=0}^T$ with $\pi_0$ corresponding to the target distribution and successive elements of this sequence consisting of distributions from which it is increasingly easy to sample. A population of samples is maintained, with the $i^{\text{th}}$ element of the population being approximately distributed according to $\pi_i$; the algorithm proceeds by simulating an ensemble of parallel MCMC\xspace chains each targeting one of these distributions. The chains interact with one another via exchange moves, in which the state of two adjacent chains is swapped, and this mechanism allows for information to be propagated between the chains and hopefully for the fast mixing of $\pi_T$ to be partially transferred to the chain associated with $\pi_0$.
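The exchange move just described accepts a swap of the states of two chains with a simple Metropolis ratio; a minimal sketch with tempered Gaussian targets $\pi_\beta(x) \propto \exp(-\beta x^2/2)$ (the temperatures and states are illustrative):

```python
import math

def exchange_accept_prob(log_pi_i, log_pi_j, x_i, x_j):
    """Probability of accepting a swap of states between chains i and j:
    min{1, pi_i(x_j) pi_j(x_i) / (pi_i(x_i) pi_j(x_j))}."""
    log_ratio = (log_pi_i(x_j) + log_pi_j(x_i)
                 - log_pi_i(x_i) - log_pi_j(x_j))
    return min(1.0, math.exp(log_ratio))

# Tempered Gaussians pi_beta(x) proportional to exp(-beta * x^2 / 2).
cold = lambda x: -1.0 * x * x / 2.0   # beta = 1.0 (the target)
hot = lambda x: -0.5 * x * x / 2.0    # beta = 0.5 (flattened)

# Swapping a far-flung cold-chain state for a central hot-chain state is
# accepted with certainty here...
a1 = exchange_accept_prob(cold, hot, x_i=2.0, x_j=0.5)
# ...while the reverse configuration is accepted only part of the time.
a2 = exchange_accept_prob(cold, hot, x_i=0.5, x_j=2.0)
```

Because the swap leaves both marginal targets invariant, these moves can be interleaved freely with within-chain updates.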
The outputs are samples that approximate the product $\prod_{t=0}^T\pi_t$ which admits the target distribution as its first coordinate marginal. There is evidence in the literature of substantial interest in the potential of using population-based methods to explore high-dimensional and multimodal parameter spaces which were previously difficult for conventional MCMC\xspace algorithms. \citet{Jasra:2007in} compared the performance of the two approaches in this context. There is also increasing interest in using these methods for Bayesian model comparison. The PMCMC\xspace outputs can be post-processed in the same way as conventional MCMC\xspace to obtain estimates of evidence for each model (for example, using a generalised harmonic mean estimator). However, this approach inherits many of the disadvantages of this estimator. \citet{Jasra:2007id} combined PMCMC\xspace with RJMCMC\xspace, thus providing a direct estimate of the posterior model probability. Another approach is to use the outputs from all the chains to approximate the path sampling estimator \citep{Gelman:1998ei}; see \citet{Calderhead:2009bd}. However, the mixing speed of PMCMC\xspace is sensitive to the number and placement of the distributions $\{\pi_t\}_{t=0}^T$ (see \citet{Atchade:2010ha} for the optimal placement of distributions in terms of a particular mixing criterion for a restricted class of models). As seen in \citet{Calderhead:2009bd}, the placement of distributions can play a crucial role in the performance of the estimator, a topic we will revisit in Section~\ref{sec:Illustrative Applications}. The use of AIS\xspace for computing normalising constants directly and via path sampling dates back at least to \citet{Neal:2001we}; see \citet{Vyshemirsky:2008ch} for a recent example of its use in the computation of model evidences.
In the literature it has generally been suggested that more general SMC\xspace strategies provide no advantage over AIS\xspace when the normalizing constant is the object of inference. Later we will demonstrate that this is not generally true, adding improved robustness of normalizing constant estimates to the advantages afforded by resampling within SMC\xspace. We will also discuss the use of SMC\xspace and path sampling for Bayesian model selection in more detail in the next section. The use of PMCMC\xspace coupled with path sampling was discussed in \citet{Vyshemirsky:2008ch}. \citet{Jasra:2008bb} developed a method using a system of interacting SMC\xspace samplers for trans-dimensional simulation. The target distribution $\pi$ and its space $S$ are the same as in RJMCMC\xspace. As usual in SMC\xspace, a sequence of distributions $\{\widetilde\pi_t\}_{t=0}^T$ with increasing dimensions is constructed such that $\widetilde\pi_T$ admits $\pi$ as a marginal. The algorithm starts with a set of SMC\xspace samplers with equal numbers of particles; each of them targets $\widetilde\pi_{i,t}(x) \propto \widetilde\pi_t(x)\bbI(x\in S_{i,t})$ up to a predefined time index $t^\star$, such that $\{S_{i,0}\}$ is a partition of $S$ and $S_{i,t^\star} = S$. At time $t^\star$ particles from all samplers are allowed to coalesce, and from this time on, all of them are iterated with the same Markov kernel until the single sampler reaches the target $\pi$. Each individual sampler explores only a portion of the parameter space; by exploiting the information which each sampler gains about its region, and with a properly chosen $t^\star$, the resulting sampler is able to explore the whole space efficiently. One of the three algorithms detailed in the next section coincides, essentially, with the final stage of the approach of \citet{Jasra:2008bb}; the other algorithms which are developed rely on a quite different strategy.
A proof-of-concept study in which several SMC\xspace approaches to the problem were outlined was provided by \citet{Zhou:2012uz} and these approaches are developed below. These strategies are based around various combinations of path sampling \citep{Gelman:1998ei} and SMC\xspace (as used by \citet{Johansen:2006wm} in a rare events context and by \citet{Rousset:2006kq} in the context of the estimation of free energy differences) or the unbiased estimation of the normalizing constant via standard SMC\xspace techniques \citep{DelMoral:1996,DelMoral:2006hc}. A number of other recent developments should be mentioned for completeness, but are not directly relevant to the problems considered here. A strategy for SMC\xspace-based variable selection was developed by \citet{Schafer:2011bx}; this approach depends upon the precise structure of this particular problem and does not involve the explicit computation of normalizing constants. In recent years, several approaches to the problem of Bayesian model comparison in settings in which the likelihood cannot be evaluated have also been proposed \citep{Grelaud:2009gc,Didelot:2011wo,Robert:2011vx}. This class of problems falls outside the scope of the current paper. We will assume throughout that the likelihood of all models can be evaluated pointwise. \subsection{Challenges for Model Comparison Techniques} \label{sub:Difficulties of existing methods} We conclude this background section by noting that there are a number of desirable features in algorithms which seek to address any model comparison problem and that these desiderata can find themselves in competition with one another. One always requires accurate evaluation of Bayes factors or model proportions and to obtain these one requires estimates of either normalizing constants or posterior model probabilities with small error, making the efficiency of any Monte Carlo algorithm employed in their estimation critical.
If one is interested in characterising behaviour conditional upon a given model or even calculating posterior-predictive quantities, it is likely to be necessary to explore the full parameter space of each model; this can be difficult if one employs between-model strategies which spend little time in models of low probability. In many settings end-users seek to interpret the findings of model selection experiments and in such cases, accurate characterisation of all models including those of relatively small probability may be important. \section{Methodology} \label{sec:Methodology} SMC\xspace samplers allow us to obtain, iteratively, collections of weighted samples from a sequence of distributions $\{\pi_t\}_{t=0}^T$ over essentially any random variables on some measurable spaces $(E_t,\ensuremath{\mathcal{E}}\xspace_t)$, by constructing a sequence of auxiliary distributions $\{\widetilde\pi_t\}_{t=0}^T$ on spaces of increasing dimensions, \begin{equation}\label{eq:pitilde} \widetilde\pi_t(x_{0:t})=\pi_t(x_t)\prod_{s=0}^{t-1}L_s(x_{s+1},x_s), \end{equation} where the sequence of Markov kernels $\{L_s\}_{s=0}^{t-1}$, termed backward kernels, is formally arbitrary but critically influences the estimator variance. See \citet{DelMoral:2006hc} for further details and guidance on the selection of these kernels. Standard sequential importance resampling algorithms can then be applied to the sequence of synthetic distributions, $\{\widetilde\pi_t\}_{t=0}^T$. At time $t = n - 1$, assume that a set of weighted particles $\{W_{n-1}^{(i)},X_{0:n-1}^{(i)}\}_{i=1}^N$ approximating $\widetilde\pi_{n-1}$ is available; then, at time $t = n$, the path of each particle is extended with a Markov kernel, say $K_n(x_{n-1}, x_n)$, and the set of particles $\{X_{0:n}^{(i)}\}_{i=1}^N$ reaches the distribution $\eta_n(x_{0:n}) = \eta_0(x_0)\prod_{t=1}^nK_t(x_{t-1}, x_t)$, where $\eta_0$ is the initial distribution of the particles.
To correct the discrepancy between $\eta_n$ and $\widetilde\pi_n$, importance sampling is then applied; that is, the weights are corrected by \begin{equation} W_n(x_{0:n}) \propto \frac{\widetilde\pi_n(x_{0:n})}{\eta_n(x_{0:n})} = \frac{\pi_n(x_n)\prod_{s=0}^{n-1}L_s(x_{s+1}, x_s)} {\eta_0(x_0)\prod_{t=1}^nK_t(x_{t-1},x_t)} \propto W_{n-1}(x_{0:n-1})\widetilde{w}_n(x_{n-1}, x_n) \end{equation} where $\widetilde{w}_n$, termed the \emph{incremental weights}, are calculated as \begin{equation} \widetilde{w}_n(x_{n-1},x_n) = \frac{\pi_n(x_n)L_{n-1}(x_n, x_{n-1})}{\pi_{n-1}(x_{n-1})K_n(x_{n-1}, x_n)}. \end{equation} If $\pi_n$ is only known up to a normalizing constant, say $\pi_n(x_n) = \gamma_n(x_n)/Z_n$, then we can use the \emph{unnormalised} incremental weights \begin{equation} w_n(x_{n-1},x_n) = \frac{\gamma_n(x_n)L_{n-1}(x_n, x_{n-1})} {\gamma_{n-1}(x_{n-1})K_n(x_{n-1}, x_n)} \end{equation} for importance sampling. Further, with the previously \emph{normalised} weights $\{W_{n-1}^{(i)}\}_{i=1}^N$, we can estimate the ratio of normalizing constants $Z_n/Z_{n-1}$ by \begin{equation} \frac{\widehat{Z}_n}{Z_{n-1}} = \sum_{i=1}^N W_{n-1}^{(i)}w_n(X_{n-1:n}^{(i)}), \end{equation} and \begin{equation} \frac{\widehat{Z}_n}{Z_{1}} = \prod\limits_{p=2}^n \frac{\widehat{Z}_p}{Z_{p-1}} = \prod\limits_{p=2}^n \sum_{i=1}^N W_{p-1}^{(i)}w_p(X_{p-1:p}^{(i)}), \end{equation} provides an unbiased \citep[Proposition 7.4.1]{DelMoral:2004ux} estimate of $Z_n / Z_1$. In this way, the ratio of normalizing constants between the initial distribution $\pi_0$ and the target $\pi_T$ can be estimated sequentially.
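The telescoping product estimator above amounts to a few lines of code; the sketch below (the weight data are illustrative) also checks a useful sanity property: when the incremental weights at a step are constant, that step's factor collapses to the constant, because the previous weights are normalised.

```python
def normalising_constant_ratio(history):
    """Estimate Z_n / Z_1 as prod_p sum_i W_{p-1}^{(i)} w_p^{(i)}, where
    `history` is a list of (normalised previous weights, unnormalised
    incremental weights) pairs, one per step p = 2, ..., n."""
    estimate = 1.0
    for prev_weights, incremental in history:
        estimate *= sum(W * w for W, w in zip(prev_weights, incremental))
    return estimate

# Constant incremental weights (2 at the first step, 3 at the second):
# each factor collapses to that constant, so the product is exactly 6.
uniform = [1.0 / 3.0] * 3
ratio = normalising_constant_ratio([(uniform, [2.0, 2.0, 2.0]),
                                    (uniform, [3.0, 3.0, 3.0])])
```

In a full sampler the weights would of course come from the mutation and reweighting steps, with resampling resetting the normalised weights between factors.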
See \citet{DelMoral:2006hc} for details on calculating the incremental weights in general; in practice, when $K_n$ is $\pi_n$-invariant, $\pi_n \ll \pi_{n-1}$, and \begin{equation} L_{n-1}(x_n, x_{n-1}) = \frac{\pi_n(x_{n-1})K_n(x_{n-1}, x_n)}{\pi_n(x_n)} \end{equation} is used as the backward kernel, the unnormalised incremental weights become \begin{equation} w_n(x_{n-1},x_n) = \frac{\gamma_n(x_{n-1})}{\gamma_{n-1}(x_{n-1})}. \label{eq:inc_weight_mcmc} \end{equation} This will be the situation throughout the remainder of this paper. \subsection{Sequential Monte Carlo for Model Comparison} The problem of interest is characterising the posterior distribution over $\{\Mk\}_{k\in\ensuremath{\mathcal{K}}\xspace}$, a set of possible models, with model $\Mk$ having parameter vector $\paramk\in\Paramk$ which must also usually be inferred. Given prior distributions $\pi(\Mk)$ and $\pi(\paramk|\Mk)$ and likelihood $p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)$ we seek the posterior model probabilities $\pi(\Mk|\ensuremath{\bm{y}}\xspace) \propto p(\ensuremath{\bm{y}}\xspace|\Mk)\pi(\Mk)$. There are three fundamentally different approaches to the computations: \begin{enumerate}[\hspace*{1cm} \bf 1.] \item Calculate posterior model probabilities directly. \item Calculate the evidence, $p(\ensuremath{\bm{y}}\xspace|\Mk)$, of each model. \item Calculate pairwise evidence ratios. \end{enumerate} Each approach admits a natural SMC\xspace strategy. The relative strengths of these approaches, which are introduced in the following sections, and alternative methods are identified in Table~\ref{tab:algadv}.
\begin{table} \centering \begin{tabularx}{0.8\linewidth}{Xcccccc} \toprule & \begin{sideways} PHM\xspace \end{sideways} & \begin{sideways} RJMCMC\xspace \end{sideways} & \begin{sideways} PMCMC\xspace \end{sideways} & \begin{sideways} \textbf{SMC1\xspace} \end{sideways} & \begin{sideways} \textbf{SMC2\xspace} \end{sideways} & \begin{sideways} \textbf{SMC3\xspace} \end{sideways} \\ \midrule \small Can deal with a countable set of models & & \checkmark & & \checkmark & & \\ \small Can exploit inter-model relationships & & \checkmark & & \checkmark & & \checkmark \\ \small Characterises improbable models & \checkmark & & \checkmark & & \checkmark & \checkmark \\ \small Doesn't require reversible-pairs of moves & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\ \small Doesn't require inter-model mixing & \checkmark & & \checkmark & & \checkmark & \\ \small Admits straightforward parallelisation & & & \checkmark/$\times$ & \checkmark & \checkmark & \checkmark\\ \small Doesn't rely upon ergodicity arguments & & & & \checkmark & \checkmark & \checkmark \\ \bottomrule \end{tabularx} \caption{Strengths of computational strategies for model choice. PMCMC\xspace admits parallelisation up to the number of chains used, but is not a natural candidate for implementation on massively-parallel architectures.} \label{tab:algadv} \end{table} \subsubsection{SMC1\xspace: An All-in-One Approach} One could consider obtaining samples from the same distribution employed in the RJMCMC\xspace approach to model comparison, namely: \begin{equation} \pi^{(1)}(\Mk,\paramk) \propto \pi(\Mk)\pi(\paramk|\Mk)p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk) \end{equation} which is defined on the disjoint union space $\bigcup_{k\in\ensuremath{\mathcal{K}}\xspace}(\{\Mk\}\times\Paramk)$.
One obvious SMC\xspace approach is to define a sequence of distributions $\{\pi_t^{(1)}\}_{t=0}^T$ such that $\pi^{(1)}_0$ is easy to sample from, $\pi_{T}^{(1)} = \pi^{(1)}$ and the intermediate distributions move smoothly between them. In the remainder of this section, we use the notation $(\Mk[t],\paramk[t])$ to denote a random sample on the space $\bigcup_{k\in\ensuremath{\mathcal{K}}\xspace}(\{\Mk\}\times\Paramk)$ at time $t$. One simple approach, which might be expected to work well, is the use of an annealing scheme such that: \begin{equation} \pi^{(1)}_t(\Mk[t],\paramk[t]) \propto \pi(\Mk[t])\pi(\paramk[t]|\Mk[t]) p(\ensuremath{\bm{y}}\xspace|\paramk[t],\Mk[t])^{\alpha(t/T)}, \label{eq:geometry_1} \end{equation} for some monotonically increasing $\alpha:[0,1]\to[0,1]$ such that $\alpha(0) = 0$ and $\alpha(1) = 1$. Other approaches are possible and might prove more efficient for some problems (such as the ``data tempering'' approach which \citet{Chopin:2002hg} proposed for parameter estimation which could easily be incorporated in our framework), but this strategy provides a convenient generic approach. These choices lead to Algorithm~\ref{alg:smc1}. This approach might outperform RJMCMC\xspace when it is difficult to design fast-mixing Markov kernels. There are many examples of such an annealed SMC\xspace strategy outperforming MCMC\xspace at a given computational cost --- see, for example, \citet{Fan:2008tf,Johansen:2008kp,Fearnhead:2010ua}. Such trans-dimensional SMC\xspace has been proposed in several contexts \citep{Peters:2005wh} and an extension proposed and analysed by \citet{Jasra:2008bb}. \begin{algorithm} \begin{algorithmic} \STATE \emph{Initialisation:} Set $t\leftarrow0$. \STATE\hskip.68cm Sample $X_0^{(i)} = (M_0^{(i)},\theta_0^{(i)})\sim\nu$ for some proposal distribution $\nu$ (usually the joint prior). 
\STATE\hskip.68cm Weight $W_0^{(i)} \propto w_0(X_0^{(i)}) = {\pi(M_0^{(i)}) \pi(\theta^{(i)}_0|M_0^{(i)})}/ {\nu(M_0^{(i)},\theta_0^{(i)})}$. \STATE\hskip.68cm Apply resampling if necessary (e.g., if \ess \citep{Kong:1994ul} less than some threshold). \STATE \emph{Iteration:} Set $t\leftarrow t + 1$. \STATE\hskip.68cm Weight $W_t^{(i)} \propto W_{t-1}^{(i)} p(\ensuremath{\bm{y}}\xspace|\theta_{t-1}^{(i)},M_{t-1}^{(i)})^{\alpha(t/T) - \alpha([t-1]/T)}$. \STATE\hskip.68cm Apply resampling if necessary. \STATE\hskip.68cm Sample $X_t^{(i)} \sim K_t(\cdot|X_{t-1}^{(i)})$, a $\pi_t^{(1)}$-invariant kernel. \STATE \emph{Repeat} the \emph{Iteration} step \emph{until $t = T$}. \end{algorithmic} \caption{SMC1\xspace: An All-in-One Approach to Model Comparison.}\label{alg:smc1} \end{algorithm} We include this approach for completeness and study it empirically later. However, the more direct approaches described in the following sections lead more naturally to easy-to-implement strategies with good performance. \subsubsection{SMC2\xspace: A Direct-Evidence-Calculation Approach} An alternative approach would be to estimate explicitly the evidence associated with each model. We propose to do this by sampling from a sequence of distributions for each model: starting from the parameter prior and sweeping through a sequence of distributions to the posterior. Numerous strategies are possible to construct such a sequence of distributions, but one option is to use for each model $\Mk$, $k\in\ensuremath{\mathcal{K}}\xspace$, the sequence $\{\pi_t^{(2,k)}\}_{t=0}^{T_k}$, defined by \begin{equation} \pi_t^{(2,k)}(\paramk[t]) \propto \pi(\paramk[t]|\Mk)p(\ensuremath{\bm{y}}\xspace|\paramk[t],\Mk)^{\alpha_k(t/T_k)}, \label{eq:geometry_2} \end{equation} where the number of distributions, $T_k$, and the annealing schedule, $\alpha_k:[0,1]\to[0,1]$, may differ from model to model. This leads to Algorithm~\ref{alg:smc2}.
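The schedules $\alpha$ and $\alpha_k$ used above need only be monotone with $\alpha(0)=0$ and $\alpha(1)=1$; a sketch of two common choices (the quintic exponent is illustrative, not prescribed by the algorithms):

```python
def linear_schedule(t, T):
    return t / T

def polynomial_schedule(t, T, power=5):
    # Spends more of the schedule at small alpha, where the tempered
    # posterior typically changes fastest; any monotone map with
    # alpha(0) = 0 and alpha(1) = 1 is admissible.
    return (t / T) ** power

T = 20
# The tempering increments alpha(t/T) - alpha((t-1)/T) drive the weight
# updates; they are positive and necessarily sum to one.
increments = [polynomial_schedule(t, T) - polynomial_schedule(t - 1, T)
              for t in range(1, T + 1)]
```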
The estimator of the posterior model probabilities depends upon the approach taken to estimate the normalizing constant. Direct estimation of the evidence can be performed using the output of this SMC\xspace algorithm and the standard unbiased estimator, termed SMC2\xspace-DS\xspace below: \begin{equation}\label{eq:smc2-ds} \frac{1}{N}\sum_{i=1}^N \frac{\pi(\theta_0^{(k,i)}|\Mk)}{\nu(\theta_0^{(k,i)})} \times \prod_{t=2}^{T_k} \sum_{i=1}^N W_{t-1}^{(k,i)} p(\ensuremath{\bm{y}}\xspace|\theta_{t-1}^{(k,i)},\Mk)^{\alpha_k(t/T_k) - \alpha_k([t-1]/T_k)} \end{equation} where $W_{t-1}^{(k,i)}$ is the importance weight of sample $i$, $\theta_{t-1}^{(k,i)}$, during iteration $t-1$ for model $\Mk$. An alternative approach to computing the evidence is also worthy of consideration. As has been suggested, and shown to perform well empirically previously \citep[see, for example,][]{Johansen:2006wm}, it is possible to use all of the samples from every generation of an SMC\xspace sampler to approximate the path sampling estimator and hence to obtain an estimate of the ratio of normalizing constants. Section~\ref{sub:Path Sampling via smctwo/smcthree} provides details. The posterior distribution of the parameters conditional upon a particular model can also be approximated with: \begin{equation*} \widehat{\pi}_{T_k}^{(2,k)}(\diff\theta) = \sum\limits_{i=1}^{N} W_{T_k}^{(k,i)} \delta_{\theta^{(k,i)}_{T_k}}(\diff\theta), \end{equation*} where $\delta_{\theta^{(k,i)}_{T_k}}$ is the Dirac measure. This approach is appealing for several reasons. One is that it is designed to estimate directly the quantity of interest, the evidence, while producing a sample from the corresponding posterior at the same time. Another advantage of this approach over SMC1\xspace and the RJMCMC\xspace approach is that it provides as good a characterisation of each model as is required: it is possible to obtain a good estimate of the parameters of every model, even those for which the posterior probability is small.
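The direct estimator can be exercised end-to-end on a toy conjugate model where the evidence is known exactly. The sketch below omits the resampling and MCMC move steps (so the product telescopes to importance sampling from the prior) but follows the same $\prod_t \sum_i W_{t-1}^{(i)} w_t^{(i)}$ structure; all numerical settings are illustrative:

```python
import math
import random

random.seed(7)
y, N, T = 1.0, 20_000, 10
# Model: y | theta ~ N(theta, 1), prior theta ~ N(0, 1).
# Exact evidence: p(y) = N(y; 0, 2).

def log_lik(theta):
    return -0.5 * (y - theta) ** 2 - 0.5 * math.log(2.0 * math.pi)

theta = [random.gauss(0.0, 1.0) for _ in range(N)]   # draws from the prior
W = [1.0 / N] * N                                    # normalised weights
log_evidence = 0.0
for t in range(1, T + 1):
    d_alpha = t / T - (t - 1) / T                    # linear tempering schedule
    incr = [math.exp(d_alpha * log_lik(th)) for th in theta]
    step = sum(Wi * wi for Wi, wi in zip(W, incr))   # sum_i W_{t-1} w_t
    log_evidence += math.log(step)
    W = [Wi * wi / step for Wi, wi in zip(W, incr)]  # reweight (no moves here)

exact = math.exp(-y * y / 4.0) / math.sqrt(4.0 * math.pi)  # N(1; 0, 2)
```

In a full SMC2\xspace implementation the move and resampling steps rejuvenate the particle set between reweighting steps, which is what keeps the variance of each factor under control for less forgiving targets.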
Perhaps most significant is the fact that this approach does not require the design of proposal distributions or Markov kernels which move from one model to another: each model is dealt with in isolation. Whilst this may not be desirable in every situation, there are circumstances in which efficient moves between models are almost impossible to devise. This approach also has some disadvantages. In particular, it is necessary to run a separate simulation for each model --- rendering it impossible to deal with countably infinite collections of models (although this is not such a substantial problem in many interesting cases). The ease of implementation may often offset this limitation. \begin{algorithm} \begin{algorithmic} \STATE For each model $k \in \ensuremath{\mathcal{K}}\xspace$ perform the following algorithm. \STATE \emph{Initialisation:} Set $t\leftarrow0$. \STATE\hskip.68cm Sample $\theta_0^{(k,i)}\sim\nu_k$ for some proposal distribution $\nu_k$ (usually the parameter prior). \STATE\hskip.68cm Weight $W_0^{(k,i)} \propto w_0(\theta_0^{(k,i)}) = {\pi(\theta_0^{(k,i)}|\Mk)}/{\nu_k(\theta_0^{(k,i)})}$. \STATE\hskip.68cm Apply resampling if necessary. \STATE \emph{Iteration:} Set $t\leftarrow t + 1$. \STATE\hskip.68cm Weight $W_t^{(k,i)} \propto W_{t-1}^{(k,i)} p(\ensuremath{\bm{y}}\xspace|\theta_{t-1}^{(k,i)},\Mk)^{\alpha_k(t/T_k) - \alpha_k([t-1]/T_k)}$. \STATE\hskip.68cm Apply resampling if necessary. \STATE\hskip.68cm Sample $\theta_t^{(k,i)} \sim K_t(\cdot|\theta_{t-1}^{(k,i)})$, a $\pi_t^{(2,k)}$-invariant kernel. \STATE \emph{Repeat} the \emph{Iteration} step \emph{until $t = T_k$}. \end{algorithmic} \caption{SMC2\xspace: A Direct-Evidence-Calculation Approach.}\label{alg:smc2} \end{algorithm} \subsubsection{SMC3\xspace: A Relative-Evidence-Calculation Approach} A final approach can be thought of as \emph{sequential model comparison}. Rather than estimating the evidence associated with any particular model, we could estimate pairwise evidence ratios directly.
The SMC\xspace sampler starts with an initial distribution which is the posterior of one model (which could come from a separate SMC\xspace sampler starting from its prior) and moves towards the posterior of another, related model; the sampler can then continue towards further related models in turn. Given a finite collection of models $\{\Mk\}$, $k\in\ensuremath{\mathcal{K}}\xspace$, suppose the models are ordered in a sensible way (e.g., $\Mk[k-1]$ is nested within $\Mk$ or $\paramk$ is of higher dimension than $\paramk[k-1]$). For each $k\in\ensuremath{\mathcal{K}}\xspace$, we consider a sequence of distributions $\{\pi_t^{(3,k)}\}_{t=0}^{T_k}$, such that $\pi_0^{(3,k)}(\Mk[],\paramk[]) = \pi(\paramk[]|\ensuremath{\bm{y}}\xspace,\Mk) \mathbb{I}_{\{\Mk\}}(\Mk[])$ and $\pi_{T_k}^{(3,k)}(\Mk[],\paramk[]) = \pi(\paramk[]|\ensuremath{\bm{y}}\xspace,\Mk[k+1]) \mathbb{I}_{\{\Mk[k+1]\}}(\Mk[]) = \pi_{0}^{(3,k+1)}(\Mk[],\paramk[])$. When it is possible to construct an SMC\xspace sampler that iterates over this sequence of distributions, the resulting estimate of the ratio of normalizing constants is an estimate of the Bayes factor of model $\Mk[k+1]$ relative to model $\Mk$. This approach is conceptually appealing, but requires the construction of a smooth path between the posterior distributions of interest. The geometric annealing strategy which has been advocated as a good generic strategy in the previous sections is only appropriate when the support of successive distributions is non-increasing. This is unlikely to be the case in interesting model comparison problems.
In this paper we consider a sequence of distributions on the disjoint union $(\{\Mk\}\times\Paramk)\cup(\{\Mk[k+1]\}\times\Paramk[k+1])$, with the sequence of distributions $\{\pi_t^{(3,k)}\}_{t=0}^{T_k}$ defined as the full posterior, \begin{equation} \pi_t^{(3,k)}(\Mk[t],\paramk[t]) \propto \pi_t(\Mk[t]) \pi(\paramk[t]|\Mk[t]) p(\ensuremath{\bm{y}}\xspace|\paramk[t],\Mk[t]) \end{equation} where $\Mk[t]\in\{\Mk,\Mk[k+1]\}$ and the model prior at time $t$, $\pi_t(\Mk[t])$, is defined by \begin{equation} \pi_t(\Mk[k+1]) = \alpha(t/T_k) \label{eq:smc3_prior} \end{equation} for some monotonically increasing $\alpha:[0,1]\to[0,1]$ such that $\alpha(0) = 0$ and $\alpha(1) = 1$. It is clear that the MCMC\xspace moves between iterations need to be similar to those in the RJMCMC\xspace or SMC1\xspace algorithms. The difference is that instead of efficient exploration of the whole model space, only moves between two models are required and the sequence of distributions employed helps to ensure exploration of both model spaces. The algorithm for this particular sequence of distributions is outlined in Algorithm~\ref{alg:smc3}. It can be extended to other possible sequences of distributions between models. An advantage of this approach is that it provides a direct estimate of the Bayes factor, which is of direct interest for model comparison purposes, while not requiring exploration of as complicated a space as that employed within RJMCMC\xspace or SMC1\xspace. The estimation of normalizing constants in SMC3\xspace follows in exactly the same manner as in the SMC2\xspace case. In SMC3\xspace, the same estimator provides a direct estimate of the Bayes factor. \begin{algorithm} \begin{algorithmic} \STATE \emph{Initialisation:} Set $k\leftarrow1$. \STATE\hskip.68cm Use Algorithm~\ref{alg:smc2} to obtain weighted samples for $\pi_{T_1}^{(3,1)}$, the parameter posterior for model $\Mk[1]$. \STATE \emph{Relative Evidence Calculation} \STATE\hskip.68cm Set $k\leftarrow k + 1$, $t\leftarrow0$.
\STATE\hskip.68cm Denote current weighted samples as $\{W_0^{(k,i)},X_0^{(k,i)}\}_{i=1}^N$ where $X_0^{(k,i)} = (M_0^{(k,i)},\theta_0^{(k,i)})$ \STATE\hskip.68cm Apply resampling if necessary. \STATE\hskip.68cm \emph{Iteration:} Set $t\leftarrow t + 1$. \STATE\hskip.68cm\STATESKIP Weight $W_t^{(k,i)} \propto W_{t-1}^{(k,i)} {\pi_t(M_{t-1}^{(k,i)})}/{\pi_{t-1}(M_{t-1}^{(k,i)})}$. \STATE\hskip.68cm\STATESKIP Apply resampling if necessary. \STATE\hskip.68cm\STATESKIP Sample $(M_t^{(k,i)},\theta_t^{(k,i)}) \sim K_t(\cdot|M_{t-1}^{(k,i)},\theta_{t-1}^{(k,i)})$, a $\pi_t^{(3,k)}$-invariant kernel. \STATE\hskip.68cm \emph{Repeat the \emph{Iteration} step up to $t = T_k$}. \STATE \emph{Repeat} the \emph{Relative Evidence Calculation} step \emph{until sequentially all relative evidences are calculated}. \end{algorithmic} \caption{SMC3\xspace: A Relative-Evidence-Calculation Approach to Model Comparison.} \label{alg:smc3} \end{algorithm} \subsection{Path Sampling via SMC2\xspace/SMC3\xspace} \label{sub:Path Sampling via smctwo/smcthree} The estimation of the normalizing constant associated with our sequences of distributions can be achieved by a Monte Carlo approximation to the \emph{path sampling} formulation given by \citet{Gelman:1998ei} (sometimes known as thermodynamic integration or Ogata's method). This approach is very closely related to the use of AIS\xspace for the same purpose \citep{Neal:2001we} but, as will be demonstrated below, the incorporation of some other elements of the more general SMC\xspace algorithm family can improve performance at negligible cost.
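The reweighting and resampling steps of Algorithm~\ref{alg:smc3} are simple to implement; the following Python sketch is illustrative only (the particle representation, helper names and the use of systematic resampling are our assumptions, not part of the algorithm specification above):

```python
import numpy as np

def model_prior(m, k, alpha):
    """pi_t(M): mass alpha on model k+1 and 1 - alpha on model k."""
    return alpha if m == k + 1 else 1.0 - alpha

def reweight_step(W, M, k, alpha_prev, alpha_curr):
    """One SMC3 reweighting: W_t propto W_{t-1} pi_t(M_{t-1}) / pi_{t-1}(M_{t-1}).

    Assumes no particle sits in model k+1 while pi_{t-1}(k+1) = 0, which holds
    for the initialization used above (all particles start in model k).
    """
    ratio = np.array([model_prior(m, k, alpha_curr) /
                      model_prior(m, k, alpha_prev) for m in M])
    W = W * ratio
    return W / W.sum()

def systematic_resample(W, rng):
    """'Apply resampling if necessary': systematic resampling ancestor indices."""
    N = len(W)
    u = (rng.random() + np.arange(N)) / N   # one uniform, stratified offsets
    return np.searchsorted(np.cumsum(W), u)
```

In practice the resampling step would only be triggered adaptively, e.g., when the effective sample size falls below a preset threshold.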
Given a parameter $\alpha$ which defines a family of distributions, $\{p_{\alpha} = q_{\alpha} / Z_\alpha\}_{\alpha \in [0,1]}$, which moves smoothly from $p_0 = q_0 / Z_0$ to $p_1 = q_1 / Z_1$ as $\alpha$ increases from zero to one, one can estimate the logarithm of the ratio of their normalizing constants via a simple integral relationship which holds under very mild regularity conditions: \begin{equation} \log\biggl( \frac{Z_1}{Z_0} \biggr) = \int_{0}^{1} \Exp_\alpha \biggl[ \rnd{\log q_{\alpha}(\cdot)}{\alpha} \biggr] \intd\alpha, \label{eq:path_identity} \end{equation} where $\Exp_\alpha$ denotes expectation under $p_\alpha$; see \citet{Gelman:1998ei}. Note that the sequences of distributions in the SMC2\xspace and SMC3\xspace algorithms above can both be interpreted as belonging to such a family of distributions, with $\alpha_t = \alpha(t/T_k)$, where the mapping $\alpha:[0,1]\to[0,1]$ is again monotonic with $\alpha(0) = 0$ and $\alpha(1) = 1$. The SMC\xspace sampler provides us with a set of weighted samples obtained from a sequence of distributions suitable for approximating this integral. At each $t$ we can obtain an estimate of the expectation within the integral at $\alpha(t/T)$ via the usual importance sampling estimator, and the integral can then be approximated via numerical integration. Whenever the sequence of distributions employed by SMC3\xspace has appropriate differentiability properties, it is also possible to estimate the evidence ratio directly by applying path sampling to the samples generated by that algorithm.
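As a concrete example, if, as is standard for prior-to-posterior annealing (and as we assume Equation~\eqref{eq:geometry_2} to be), the geometric path for model $\Mk$ is taken to be the prior multiplied by the likelihood raised to the power $\alpha$, the integrand simplifies considerably:

```latex
% Geometric prior-to-posterior path (standard form, assumed here to
% coincide with Equation (eq:geometry_2)):
\begin{equation*}
q_{\alpha}(\paramk) = \pi(\paramk|\Mk)\,p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk)^{\alpha},
\qquad
\rnd{\log q_{\alpha}(\paramk)}{\alpha} = \log p(\ensuremath{\bm{y}}\xspace|\paramk,\Mk),
\end{equation*}
so that Equation~\eqref{eq:path_identity} reduces to
\begin{equation*}
\log\biggl(\frac{Z_1}{Z_0}\biggr)
  = \int_{0}^{1} \Exp_\alpha\bigl[\log p(\ensuremath{\bm{y}}\xspace|\cdot,\Mk)\bigr] \intd\alpha;
\end{equation*}
since $Z_0 = 1$ for a normalized prior, this is the log evidence itself.
```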
In general, given an increasing sequence $\{\alpha_t\}_{t=0}^T$ with $\alpha_0 = 0$ and $\alpha_T = 1$, a family of distributions $\{p_{\alpha}\}_{\alpha\in[0,1]}$ as before, and a SMC\xspace sampler that iterates over the sequence of distributions $\{\pi_t = p_{\alpha_t} = q_{\alpha_t}/Z_{\alpha_t}\}_{t=0}^T$, then, with the weighted samples $\{W_t^{(j)},X_t^{(j)}\}_{j=1}^N$ for $t = 0,\dots,T$, the log ratio of normalizing constants $\Xi_T = \log(Z_1/Z_0)$ can be approximated (using an elementary trapezoidal scheme) by the path sampling estimator \begin{equation} \widehat\Xi_{T}^{N} = \sum_{t=1}^T \frac{1}{2}(\alpha_t - \alpha_{t - 1})(U_t^N + U_{t-1}^N) \label{eq:path_est} \end{equation} where \begin{equation} U_t^N = \sum_{j=1}^N W_t^{(j)} \rnd{\log q_{\alpha}(X_t^{(j)})}{\alpha}\Bigm|_{\alpha = \alpha_t}. \end{equation} We term these estimators SMC2\xspace-PS\xspace and SMC3\xspace-PS\xspace in what follows. The combination of SMC\xspace and path sampling is somewhat natural and has been proposed before, e.g., by \citet{Johansen:2006wm}, although not in a Bayesian context. Despite the good performance observed in the setting of rare event simulation, the estimation of normalizing constants by this approach seems to have received little attention in the literature. We suspect that this is because of widespread acceptance of the suggestion of \citet{DelMoral:2006hc} that SMC\xspace does not outperform AIS\xspace when normalizing constants are the object of inference, or that of \citet{Calderhead:2009bd} that all simulation-based estimators based around path sampling can be expected to behave similarly. We will demonstrate below that these observations, whilst true in certain contexts, do not hold in full generality.
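The estimator of Equation~\eqref{eq:path_est} amounts to a weighted trapezoidal rule over the stored particle system; a minimal Python sketch (the array layout and the derivative function \texttt{dlogq} are illustrative assumptions):

```python
import numpy as np

def path_sampling_estimate(alphas, weights, samples, dlogq):
    """Trapezoidal path sampling estimator of log(Z_1/Z_0).

    alphas  : (T+1,) increasing schedule with alphas[0]=0, alphas[-1]=1
    weights : (T+1, N) normalized weights W_t^(j)
    samples : (T+1, N, d) particles X_t^(j)
    dlogq   : dlogq(x, alpha) -> d/dalpha log q_alpha(x), vectorized over particles
    """
    # U_t^N = sum_j W_t^(j) * d/dalpha log q_alpha(X_t^(j)) evaluated at alpha_t
    U = np.array([np.sum(w * dlogq(x, a))
                  for a, w, x in zip(alphas, weights, samples)])
    # trapezoidal rule over the (possibly irregular) alpha grid
    return np.sum(0.5 * np.diff(alphas) * (U[1:] + U[:-1]))
```

All quantities involved are already available at the end of an SMC\xspace run, so the estimator adds essentially no computational cost.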
\subsection{Extensions and Refinements} \label{sub:Extensions and Refinements} \subsubsection{Improved Univariate Numerical Integration} \label{ssub:Improved Univariate Numerical Integration} As seen in the last section, the path sampling estimator requires evaluation of the expectation, \begin{equation*} \Exp_\alpha \biggl[ \rnd{\log q_{\alpha}(\cdot)}{\alpha} \biggr] \end{equation*} for $\alpha\in[0,1]$, which can be approximated directly, for $\alpha\in\{\alpha_t\}_{t=0}^T$, by importance sampling using samples generated by a SMC\xspace sampler operating on the sequence of distributions $\{\pi_t = p_{\alpha_t} = q_{\alpha_t}/Z_{\alpha_t}\}_{t=0}^T$. For arbitrary $\alpha\in[0,1]$, one can find $t$ such that $\alpha\in(\alpha_{t-1},\alpha_t)$ and approximate the expectation using the existing SMC\xspace samples --- the quantities required in the importance weights for such an estimate have already been calculated during the running of the SMC\xspace algorithm, so these computations incur little additional cost. As noted by \cite{Friel:2012}, we can use more sophisticated numerical integration strategies to reduce the bias of the path sampling estimator. For example, higher order Newton-Cotes rules rather than the trapezoidal rule can be implemented straightforwardly. In the case of SMC\xspace it is especially straightforward to estimate the required expectations at arbitrary $\alpha$, so we can cheaply use higher order integration schemes, and we can also use numerical integration on a finer mesh $\{\alpha_t'\}_{t=0}^{T'}$ than $\{\alpha_t\}_{t=0}^T$. Since higher order rules based on Monte Carlo approximations of derivatives may be unstable in some situations, the second approach can be more appealing in some applications. A demonstration of the bias reduction effect is provided in Section~\ref{sub:Positron Emission Tomography Compartmental Model}.
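To illustrate the fine-mesh approach, the expectation at an off-grid $\alpha$ can be obtained by reweighting the particles of a nearby coarse-grid distribution before a higher order rule is applied on the fine mesh. The sketch below assumes the geometric path, so that the incremental weight from $\alpha_t$ to $\alpha$ is the likelihood raised to the power $\alpha-\alpha_t$ and the integrand is the log-likelihood; all function names are illustrative:

```python
import numpy as np

def U_at(alpha, alpha_t, W_t, loglike_t):
    """E_alpha[d log q_alpha / d alpha] estimated from particles targeting
    pi_{alpha_t}: reweight by L^(alpha - alpha_t); integrand is log L."""
    logw = np.log(W_t) + (alpha - alpha_t) * loglike_t
    w = np.exp(logw - logw.max())   # stabilized in log space
    w /= w.sum()
    return np.sum(w * loglike_t)

def simpson_uniform(y, h):
    """Composite Simpson's rule on a uniform grid with an odd number of points."""
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

def fine_mesh_estimate(alphas, Ws, loglikes, n_fine=201):
    """Estimate log(Z_1/Z_0) on a mesh finer than the SMC schedule alphas."""
    grid = np.linspace(0.0, 1.0, n_fine)
    # index of the coarse distribution used to estimate each fine-grid point
    idx = np.searchsorted(alphas, grid).clip(1, len(alphas) - 1)
    U = np.array([U_at(a, alphas[i], Ws[i], loglikes[i])
                  for a, i in zip(grid, idx)])
    return simpson_uniform(U, grid[1] - grid[0])
```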
\subsubsection{Adaptive Specification of Distributions} \label{ssub:Adaptive Specification of Distributions} In settings in which the importance weights at time $t$ depend only upon the sample at time $t-1$, such as that considered here, it is relatively straightforward to consider sample-dependent, adaptive specification of the sequence of distributions (typically by choosing the value of a tempering parameter, such as $\alpha_t$, based upon the current sample). \citet{Jasra:2010eh} proposed such a method of adaptively placing the distributions in SMC\xspace algorithms, based on controlling the rate at which the effective sample size (\ess; \citet{Kong:1994ul}) falls. At very little computational cost, this provides an automatic method of specifying a tempering schedule in such a way that the \ess decays in a regular fashion. \citet[Algorithm 2]{Schafer:2011bx} used a similar technique but, by moving the particle system only when it resamples, they are in a setting which would be equivalent, in our formulation, to resampling at every time step (with longer time steps, followed by multiple applications of the MCMC\xspace kernel). We advocate resampling adaptively, only when the \ess falls below a certain preset threshold, and here we propose a more general adaptive scheme for the selection of the sequence of distributions which has significantly better properties when adaptive resampling is employed. The \ess was designed to assess the loss of efficiency arising from the use of a weighted sample (rather than a simple random sample from the distribution of interest) in the computation of expectations. It is obtained by considering a sample approximation of a low order Taylor expansion of the ratio of the variance of the importance sampling estimator of an arbitrary test function to that of the simple Monte Carlo estimator; the test function itself vanishes from the expression as a consequence of this low order expansion.
In our context, allowing $W_{t-1}^{(i)}$ to denote the \emph{normalized} weight of particle $i$ at the end of iteration $t - 1$, and $w_t^{(i)}$ to denote the \emph{unnormalized} incremental weight of particle $i$ during iteration $t$, the \ess calculated using the current weight of each particle is simply: \begin{equation} \ess_t = \left[ {\sum_{j=1}^N\left( \frac{W_{t-1}^{(j)} w_t^{(j)}}{\sum_{k=1}^NW_{t-1}^{(k)}w_t^{(k)}}\right)^2} \right]^{-1} = \frac{\bigl(\sum_{j=1}^NW_{t-1}^{(j)}w_t^{(j)}\bigr)^2} {\sum_{k=1}^N\bigl(W_{t-1}^{(k)}\bigr)^2\bigl(w_t^{(k)}\bigr)^2} \end{equation} It is clearly appropriate to use this quantity (which corresponds to the coefficient of variation of the current normalized importance weights) to assess weight degeneracy and to make decisions about appropriate resampling times (cf. \cite{DelMoral:2012jq}), but it is rather less apparent that it is the correct quantity to consider when adaptively specifying a sequence of distributions in an SMC\xspace sampler. The \ess of the current sample weights tells us about the accumulated mismatch between proposal and target distributions (on an extended space including the full trajectory of the sample paths) since the last resampling time. Fixing either the relative or absolute reduction in \ess between successive distributions does \emph{not} lead to a common discrepancy between successive distributions unless resampling is conducted after every iteration, as will be demonstrated below. When specifying a sequence of distributions it is natural to aim for a similar discrepancy between each pair of successive distributions. In the context of effective sample size, the natural question to ask is consequently: how large can we make $\alpha_t - \alpha_{t-1}$ whilst ensuring that $\pi_{t}$ remains sufficiently similar to $\pi_{t-1}$?
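Both the \ess above and the conditional variant introduced in the next paragraph are a few lines of code; a sketch (the array conventions are assumptions):

```python
import numpy as np

def ess(W_prev, w_inc):
    """ESS of the current weights: W_prev are the normalized weights at the end
    of iteration t-1, w_inc the unnormalized incremental weights at iteration t."""
    Ww = W_prev * w_inc
    return Ww.sum() ** 2 / np.sum(Ww ** 2)

def cess(W_prev, w_inc):
    """Conditional ESS: measures the discrepancy between pi_{t-1} and pi_t
    alone, independently of how long ago the system was last resampled."""
    N = len(W_prev)
    num = (W_prev * w_inc).sum() ** 2
    return N * num / np.sum(W_prev * w_inc ** 2)
```

When $W_{t-1}^{(j)} = 1/N$ for all $j$ (i.e., immediately after resampling) the two quantities coincide; with constant incremental weights the conditional quantity equals $N$ whatever the accumulated weights are, whereas the plain \ess does not.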
One way to measure the discrepancy is to consider how good an importance sampling proposal $\pi_{t-1}$ would be for the estimation of expectations under $\pi_t$, and a natural way to measure this is via the sample approximation of a Taylor expansion of the relative variance of such an estimator, exactly as in the \ess. Such a procedure leads us to a quantity which we have termed the \emph{conditional} \ess (\ifmmode\text{CESS}\else CESS\xspace\fi): \begin{equation} \ifmmode\text{CESS}\else CESS\xspace\fi_t = \left[ {\sum_{j=1}^N N W_{t-1}^{(j)} \left( \frac{w_t^{(j)}}{\sum_{k=1}^N N W_{t-1}^{(k)}w_t^{(k)}}\right)^2} \right]^{-1} = \frac{\bigl(\sum_{j=1}^NW_{t-1}^{(j)}w_t^{(j)}\bigr)^2} {\sum_{k=1}^N \frac{1}{N} W_{t-1}^{(k)} \bigl(w_t^{(k)}\bigr)^2} \end{equation} which is equal to the \ess only when resampling is conducted during every iteration. The factor of $1/N$ in the denominator arises from the fact that $\{W_{t-1}^{(i)}\}$ is normalized to sum to unity rather than to have expectation unity: the bracketed term coincides with a sample approximation (using the actual sample, which is properly weighted to target $\pi_{t-1}$) of the expected sum of the squared unnormalized weights, divided by the square of a sample approximation of the expected sum of the unnormalized weights, when sampling from $\pi_{t-1}$ and targeting $\pi_t$ by simple importance sampling. \begin{figure} \includegraphics[width=\linewidth]{adaptive} \caption{A typical plot of $\alpha_t - \alpha_{t-1}$ against $\alpha_t$ (for the Gaussian mixture model example of Section~\ref{sub:Gaussian Mixture Model} using the SMC2\xspace algorithm).
The specifications of the adaptive parameter (\ess or \ifmmode\text{CESS}\else CESS\xspace\fi) are adjusted such that all four samplers use roughly the same number of distributions.} \label{fig:adaptive_alpha} \end{figure} Figure~\ref{fig:adaptive_alpha} shows the variation of $\alpha_t - \alpha_{t-1}$ with $\alpha_t$ when fixed reductions in \ess and \ifmmode\text{CESS}\else CESS\xspace\fi are used to specify the sequence of distributions, both when resampling is conducted during every iteration (or equivalently, when the \ess falls below a threshold of 1.0) and when resampling is conducted only when the \ess falls below a threshold of 0.5. As is demonstrated in Section~\ref{sec:Illustrative Applications}, the \ifmmode\text{CESS}\else CESS\xspace\fi-based scheme leads to a reduction in estimator variance of around 20\% relative to a manually tuned (quadratic; see Section \ref{sec:gmm_res}) schedule, while the \ess-based strategy provides little improvement over the linear case unless resampling is conducted during every iteration. In addition to providing significantly better performance at essentially no cost, the use of the \ifmmode\text{CESS}\else CESS\xspace\fi emphasizes the purpose of the adaptive specification of the sequence of distributions: to produce a sequence in which the difference between each successive pair is the same (when using the \ifmmode\text{CESS}\else CESS\xspace\fi one seeks to ensure that the variance of the importance weights that would arise if using $\pi_{t-1}$ as a proposal for $\pi_t$ is constant). \subsubsection{Adaptive Specification of Proposals} \label{ssub:Adaptive Specification of Proposals} The SMC\xspace sampler is remarkably robust to the mixing speed of the MCMC\xspace kernels employed, as can be seen in the empirical study below. However, as with any sampling algorithm, faster mixing does not harm performance and in some cases will considerably improve it.
In the particular case of Metropolis-Hastings kernels, the mixing speed relies on adequate proposal scales. We adopt a simple approach based on \citet{Jasra:2010eh}, who applied an idea used within adaptive MCMC\xspace methods \citep{Andrieu:2006tw} to SMC\xspace samplers: the parameter variances estimated from the particle system approximation are used as the proposal scales for the next iteration, suitably scaled with reference to the dimension of the parameters to be proposed. Although in practice we found that such an automatic approach does not always lead to optimal acceptance rates, it generally produces satisfactory results and is simple to implement. In difficult problems alternative approaches to adaptation could be employed; one approach demonstrated in \citet{Jasra:2010eh} is to simply employ a pair of acceptance rate thresholds and to alter the proposal scale from the simply estimated value whenever the acceptance rate falls outside those threshold values. More sophisticated proposal strategies could undoubtedly improve performance further and warrant further investigation. One possible approach is to use the Metropolis adjusted Langevin algorithm (MALA\xspace; see \citet{Roberts:1996vd}). In summary, MALA\xspace derives a Metropolis-Hastings proposal kernel for a target $\pi$, which satisfies suitable differentiability and positivity conditions, from the Langevin diffusion, \begin{equation*} \diff L_t = \frac{1}{2}\nabla\log\pi(L_t)\diff t + \diff B_t \end{equation*} where $B_t$ is standard Brownian motion. Given a state $X_{n-1}$, a new state is proposed by a discrete-time approximation of the above diffusion. That is, for a fixed $h>0$, \begin{equation} X_n\sim\rnorm\Bigl(X_{n-1}+\frac{h}{2}\nabla\log\pi(X_{n-1}), hI_d\Bigr) \end{equation} where $I_d$ is the identity matrix and $d$ is the dimension of the state space. The proposed state is accepted or rejected through the usual Metropolis-Hastings mechanism.
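A single MALA\xspace transition with the standard discretization above (drift $\tfrac{h}{2}\nabla\log\pi$, variance $hI_d$) can be sketched as follows; the function names and interface are illustrative:

```python
import numpy as np

def mala_step(x, logpi, grad_logpi, h, rng):
    """One MALA transition leaving pi (known up to a constant) invariant."""
    d = x.shape[0]
    mean_x = x + 0.5 * h * grad_logpi(x)
    y = mean_x + np.sqrt(h) * rng.standard_normal(d)   # proposal draw
    mean_y = y + 0.5 * h * grad_logpi(y)
    # log densities of the (non-symmetric) Gaussian proposal, forward and reverse
    log_q_fwd = -np.sum((y - mean_x) ** 2) / (2 * h)
    log_q_rev = -np.sum((x - mean_y) ** 2) / (2 * h)
    # Metropolis-Hastings log acceptance ratio
    log_accept = logpi(y) - logpi(x) + log_q_rev - log_q_fwd
    if np.log(rng.random()) < log_accept:
        return y, True
    return x, False
```

Note that, unlike a symmetric random walk, the proposal densities do not cancel in the acceptance ratio, which is why both the forward and reverse log densities are retained.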
Compared to a ``vanilla'' random walk, which often has very robust theoretical properties, MALA\xspace is attractive when it is applicable and its convergence conditions \citep{Roberts:1996vd} can be met, because only the single discretization parameter $h$ needs to be tuned for optimal performance. In addition, results from \citet{Roberts:2001ta} suggest that MALA\xspace can be more efficient than a random walk when optimal scalings are used. We could also use the particle approximation at iteration $t - 1$ to estimate the covariance matrix of $\pi_{t}$ and thus tune the scale $h$ on-line. As these algorithms are known to be somewhat sensitive to scaling, and we seek approaches robust enough to be employed with little user intervention, we have not investigated this strategy here. \subsection{An Automatic, Generic Algorithm} \label{sub:An Automatic, Generic Algorithm} With the above refinements, we are ready to implement the SMC2\xspace algorithm with minimal tuning and application-specific effort while providing robust and accurate estimates of the model evidence $p(\ensuremath{\bm{y}}\xspace|\Mk)$. First, the geometric annealing scheme connecting the prior $\pi(\paramk|\Mk)$ and the posterior $\pi(\paramk|\ensuremath{\bm{y}}\xspace,\Mk)$ provides a smooth path for a wide range of problems. Second, the actual annealing schedule under this scheme can be determined adaptively as described above; the advantage of the adaptive schedule will be shown empirically later. Third, we can adaptively specify the Metropolis random walk (or MALA\xspace) scales by estimating their scaling parameters as the sampler iterates. In contrast to the MCMC\xspace setting, where such adaptive algorithms usually require a burn-in period which is not used for further estimation, in SMC\xspace the variance and covariance estimates come at almost no cost, as all of the samples will later be used for marginal likelihood estimation.
Additionally, adaptation within SMC\xspace does not require separate theoretical justification -- something which can significantly complicate the development of effective, theoretically justified schemes in the MCMC\xspace setting. Alternatively, we can specify the proposal scales in a deterministic, but sensible, way. Since SMC\xspace algorithms are relatively robust to the choice of scales, such deterministic scales will not require the same degree of tuning as is required to obtain good performance in MCMC\xspace algorithms. The adaptive strategy can be applied directly to both the SMC1\xspace and SMC2\xspace algorithms; its applicability to the SMC3\xspace algorithm depends on the nature of the sequence of distributions. We outline the strategy in Algorithm~\ref{alg:adaptive}. \begin{algorithm} \begin{algorithmic} \STATE \emph{Accuracy control} \STATE\hskip.68cm Set the constant $\ifmmode\text{CESS}\else CESS\xspace\fi^\star\in(0,1)$, using a small pilot simulation if necessary. \STATE \emph{Initialization:} Set $t\leftarrow0$. \STATE\hskip.68cm Perform the \emph{Initialization} step as in Algorithm~\ref{alg:smc1} or~\ref{alg:smc2} \STATE \emph{Iteration:} Set $t\leftarrow t + 1$ \STATE\hskip.68cm \emph{Step size selection} \STATE\hskip.68cm\STATESKIP Use a binary search to find $\alpha^\star$ such that $\ifmmode\text{CESS}\else CESS\xspace\fi_{\alpha^\star} = \ifmmode\text{CESS}\else CESS\xspace\fi^\star$ \STATE\hskip.68cm\STATESKIP Set $\alpha_t \leftarrow\alpha^\star$ if $\alpha^\star \le 1$, otherwise set $\alpha_t\leftarrow1$ \STATE\hskip.68cm \emph{Proposal scale calibration} \STATE\hskip.68cm\STATESKIP Compute the importance sampling estimates of the first two moments of the parameters. \STATE\hskip.68cm\STATESKIP Set the proposal scales of the Markov proposal $K_t$ using the estimated parameter variances. \STATE\hskip.68cm Perform the \emph{Iteration} step as in Algorithm~\ref{alg:smc1} or~\ref{alg:smc2} with the selected $\alpha_t$ and proposal scales.
\STATE \emph{Repeat} the \emph{Iteration} step \emph{until $\alpha_t = 1$}, then set $T=t$. \end{algorithmic} \caption{An Automatic, Generic Algorithm for Bayesian Model Comparison} \label{alg:adaptive} \end{algorithm} As laid out above, the algorithm requires minimal tuning. Its robustness, accuracy and efficiency will be shown empirically in Section~\ref{sec:Illustrative Applications}. We note that SMC1\xspace is less straightforward, as the between-model moves still require effort to design and implement. In SMC3\xspace, the specification of the sequence between posterior distributions is less generic than the geometric annealing scheme in SMC2\xspace. However, the adaptive schedule and the automatic tuning of MCMC\xspace proposal scales can both, in principle, be applied to these two algorithms. Although further enhancements and refinements are clearly possible, we focus in the remainder of this article on this simple, generic algorithm, which can be easily implemented in any application and has proved sufficiently powerful to provide good estimation in the examples we have encountered thus far. \section{Illustrative Applications} \label{sec:Illustrative Applications} In this section, we use three examples to illustrate the algorithms. The Gaussian mixture model is discussed first, with implementations of all three SMC\xspace algorithms and comparisons to RJMCMC\xspace and PMCMC\xspace. It will be shown that all five algorithms agree on the results, while their performance in terms of Monte Carlo variance varies considerably. It will also be demonstrated how the adaptive refinements of the algorithms behave in practice. We will reach the conclusion that, considering ease of implementation, performance and generality, the SMC2\xspace algorithm is the most promising of the three strategies.
Two more realistic examples, a nonlinear ODE\xspace model and a Positron Emission Tomography compartmental model, are then used to study the performance and robustness of the SMC2\xspace algorithm compared to AIS\xspace and PMCMC\xspace. Various configurations of the algorithms are considered, including both sequential and parallelized implementations. The \texttt{C++} implementations of all examples can be found at \url{https://github.com/zhouyan/vSMC}. \subsection{Gaussian Mixture Model} \label{sub:Gaussian Mixture Model} Since \citet{Richardson:1997ea}, the Gaussian mixture model (GMM\xspace) has provided a canonical example of a model-order-determination problem. We use the model formulation of \citet{DelMoral:2006hc} to illustrate the efficiency and robustness of the methods proposed in this paper compared to other approaches. The model is as follows: data $\ensuremath{\bm{y}}\xspace = (y_1,\dots,y_n)$ are independently and identically distributed as \begin{equation*} y_i|\theta_r \sim \sum_{j=1}^r \omega_j\rnorm(\mu_j,\lambda_j^{-1}) \end{equation*} where $\rnorm(\mu_j,\lambda_j^{-1})$ denotes the Normal distribution with mean $\mu_j$ and precision $\lambda_j$; $\theta_r = (\mu_{1:r},\lambda_{1:r},\omega_{1:r})$ and $r$ is the number of components in the model. The parameter space is thus $\Real^r \times (\Real^{+})^r\times \Delta_r$ where $\Delta_r = \{\omega_{1:r}:0\le\omega_j\le1; \sum_{j=1}^r\omega_j=1\}$ is the standard $r$-simplex. The priors, which are the same for each component, are taken to be $\mu_j\sim\rnorm(\xi,\kappa^{-1})$, $\lambda_j\sim\rgamma(\nu,\chi)$ and $\omega_{1:r}\sim\rdir(\rho)$, where $\rdir(\rho)$ is the symmetric Dirichlet distribution with parameter $\rho$ and $\rgamma(\nu,\chi)$ is the Gamma distribution with shape $\nu$ and scale $\chi$. The prior parameters are set in the same manner as in \citet{Richardson:1997ea}.
Specifically, letting $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$ be the minimum and maximum of the data $\ensuremath{\bm{y}}\xspace$, the prior parameters are set such that \begin{equation*} \xi = (y_{\mathrm{max}} + y_{\mathrm{min}}) / 2, \quad \kappa = (y_{\mathrm{max}} - y_{\mathrm{min}})^{-2}, \quad \nu = 2, \quad \chi = 50\kappa, \quad \rho = 1. \end{equation*} The data are simulated from a four-component model with $\mu_{1:4} = (-3, 0, 3, 6)$, and $\lambda_j = 2$, $\omega_j = 0.25$, $j = 1,\dots,4$. We consider several algorithms: first, the RJMCMC\xspace algorithm as in \citet{Richardson:1997ea}; second, an implementation of the SMC1\xspace algorithm. Next, AIS\xspace, PMCMC\xspace and SMC2\xspace are used for within-model simulations. The last is an implementation of the SMC3\xspace algorithm. In all the algorithms, the local move, which does not change the dimension of the model, is constructed as a composition of Metropolis-Hastings random walk kernels: \begin{compactitem} \item Update $\mu_{1:r}$ using a multivariate Normal random walk proposal. \item Update $\lambda_{1:r}$ using a multivariate Normal random walk on the logarithmic scale, i.e., on $\log\lambda_{j}$, $j = 1, \dots, r$. \item Update $\omega_{1:r}$ using a multivariate Normal random walk on the logit scale, i.e., on $\log(\omega_{j}/\omega_r)$, $j = 1,\dots,r-1$. \end{compactitem} The RJMCMC\xspace, SMC1\xspace and SMC3\xspace algorithms use two additional reversible jump moves. The first is a combine and split move; the second is a birth and death move. Both are constructed in the same manner as in \citet{Richardson:1997ea}. In these implementations an adjacency condition was also imposed on the means $\mu_{1:r}$, such that $\mu_1 < \mu_2 < \dots < \mu_r$; no such restriction was used for the other algorithms.
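The mixture likelihood and the data-driven hyperparameters above are straightforward to implement; a Python sketch (the interface is an illustrative assumption; a log-sum-exp stabilization is used for the mixture density):

```python
import numpy as np

def gmm_loglike(y, mu, lam, omega):
    """log p(y|theta_r) for an r-component Gaussian mixture; lam are the
    component precisions and omega the mixture weights (summing to one)."""
    # (n, r) matrix of per-component log densities plus log weights
    comp = (np.log(omega) + 0.5 * np.log(lam / (2 * np.pi))
            - 0.5 * lam * (y[:, None] - mu) ** 2)
    m = comp.max(axis=1, keepdims=True)   # stabilize the log-sum-exp
    return np.sum(m.squeeze(1) + np.log(np.exp(comp - m).sum(axis=1)))

def prior_hyperparameters(y):
    """Data-driven hyperparameters in the manner of Richardson and Green."""
    lo, hi = y.min(), y.max()
    xi = (hi + lo) / 2           # prior mean of mu_j
    kappa = (hi - lo) ** -2      # prior precision of mu_j
    nu, chi, rho = 2.0, 50 * kappa, 1.0
    return xi, kappa, nu, chi, rho
```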
In the SMC1\xspace, SMC2\xspace and PMCMC\xspace implementations, the distributions are chosen with a geometric schedule, i.e., as in Equation~\eqref{eq:geometry_1} for SMC1\xspace and Equation~\eqref{eq:geometry_2} for the other two. This annealing scheme has been used by \citet{DelMoral:2006hc,Jasra:2007in} and in many other works; the geometric scheme is also used by \citet{Calderhead:2009bd} for PMCMC\xspace tempering. A schedule $\alpha(t/T) = (t/T)^p$, with $p = 2$, was used. The rationale behind this particular schedule can be found in \citet{Calderhead:2009bd}; other values of $p$ were also tried, and $p\approx2$ performed best in this particular example. The adaptive schedule was also implemented for the SMC2\xspace and AIS\xspace algorithms. The proposal scales for each block of the random walks are specified dynamically according to the values of $\alpha(t/T)$ for the SMC2\xspace and AIS\xspace algorithms, and manually tuned for the other algorithms such that the acceptance rates fall in $[0.2, 0.5]$. Later, for the SMC2\xspace and AIS\xspace algorithms, we also consider adaptive specification of the distribution parameter $\alpha(t/T)$ and of the random walk proposal scales. For SMC2\xspace, SMC3\xspace and AIS\xspace we consider both the direct estimator and the path sampling estimator; for PMCMC\xspace we consider the path sampling estimator. \subsubsection{Results}\label{sec:gmm_res} The SMC1\xspace implementation uses $10^4$ particles and $500$ distributions. The RJMCMC\xspace implementation uses $5\times10^6$ iterations in addition to a burn-in period of $10^6$ iterations. The resulting estimates of model probabilities are shown in Table~\ref{tab:gmm-rj}.
\begin{table} \begingroup\small\begin{tabularx}{\linewidth}{XXccccccc} \toprule & & \multicolumn{7}{c}{Number of components} \\ \cmidrule(lr){3-9} Algorithm & Quantity & $\le2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $\ge8$ \\ \midrule SMC1\xspace & $\Prob(M = \Mk)$ & $0$ & $0.00257$ & $0.886$ & $0.103$ & $0.00715$ & $0.00128$ & $0$ \\ & $\log B_{4,k}$ & $\infty$ & $5.84$ & $0$ & $2.15$ & $4.82$ & $6.54$ & $\infty$ \\ RJMCMC\xspace & $\Prob(M = \Mk)$ & $0$ & $0.000526$ & $0.887$ & $0.103$ & $0.00623$ & $0.00324$ & $0$ \\ & $\log B_{4,k}$ & $\infty$ & $6.56$ & $0$ & $2.15$ & $4.96$ & $5.61$ & $\infty$ \\ \bottomrule \end{tabularx}\endgroup \caption{Gaussian mixture model estimates obtained via SMC1\xspace and RJMCMC\xspace} \label{tab:gmm-rj} \end{table} The SMC2\xspace, SMC3\xspace and AIS\xspace implementations use $1,000$ particles and $500$ iterations. The PMCMC\xspace implementation uses $50$ chains and $10^4$ iterations in addition to a burn-in period of $10^4$ iterations --- these implementations have approximately equal computational costs. From the results obtained under the SMC1\xspace and RJMCMC\xspace algorithms it is clear that, in this particular example, simulations for models with fewer than ten components are adequate to characterize the model space. Under this configuration, the cost is therefore roughly the same, in terms of computational resources, as that of the SMC1\xspace and RJMCMC\xspace algorithms. Based on the results of RJMCMC\xspace and SMC1\xspace, we consider the four- and five-component models (i.e., the true model and the most competitive of the others). The estimates are shown in Table~\ref{tab:gmm-pair} which, like all of the other tables in this section, summarises the Monte Carlo variability of 100 replicate runs of each algorithm.
\begin{table} \begingroup\small\begin{tabularx}{\linewidth}{lXXXXXXX} \toprule & \multicolumn{7}{c}{Algorithms} \\ \cmidrule(lr){2-8} Quantity & SMC2\xspace-DS\xspace & SMC2\xspace-PS\xspace & SMC3\xspace-DS\xspace & SMC3\xspace-PS\xspace & AIS\xspace-DS\xspace & AIS\xspace-PS\xspace & PMCMC\xspace \\ \midrule $\log B_{4,5}$ & $2.15$ & $2.15$ & $2.16$ & $2.21$ & $2.16$ & $2.17$ & $2.63$ \\ SD\xspace & $0.25$&$\color{red}{\bf 0.22}$ & $0.61$&$0.62$ & $1.12$&$1.10$ & $0.41$ \\ \bottomrule \end{tabularx}\endgroup \caption{Gaussian mixture model estimates obtained via SMC2\xspace, SMC3\xspace, AIS\xspace and PMCMC\xspace} \label{tab:gmm-pair} \end{table} From Tables~\ref{tab:gmm-rj} and~\ref{tab:gmm-pair}, it can be seen that the unbiased estimators (RJMCMC\xspace, SMC1\xspace, SMC2\xspace-DS\xspace, SMC3\xspace-DS\xspace and AIS\xspace-DS\xspace) agree with each other. Among the path sampling estimators, SMC2\xspace-PS\xspace and AIS\xspace-PS\xspace have little bias; SMC3\xspace-PS\xspace shows a little more. The PMCMC\xspace algorithm has a considerably larger bias, as the number of distributions is relatively small (as noted previously, a larger number would negatively affect the mixing speed). In terms of Monte Carlo variance, in Table~\ref{tab:gmm-pair}, SMC2\xspace clearly has an advantage over its no-resampling variant, AIS\xspace. The differences in Monte Carlo SD\xspace between SMC2\xspace, SMC3\xspace and PMCMC\xspace, although they do not affect model selection in this particular example, are considerable. \paragraph{Effects of resampling} It is clear from these results that resampling (when required) can substantially improve the estimation of normalising constants within an SMC\xspace framework.
This does not contradict the statement in \cite{DelMoral:2006hc} that resampling may not help much when the normalising constant is the object of interest: the theoretical argument which supports that statement relies upon the assumption that the Markov kernel used to mutate the particles mixes extremely rapidly, and the result is obtained under the assumption that resampling is performed after every iteration. When the Markov kernel is not so rapidly mixing, the additional stability provided by the resampling operation can outweigh the attendant increase in Monte Carlo variance, and that is what we observed here (and in the case of the other examples considered below; results not shown). \paragraph{Effects of adaptive schedules} To assess the evolution of the distributions with an adaptive schedule, we consider the relation between $\alpha_t - \alpha_{t-1}$ and $\alpha_t$. As stated before, one of the motivations for using \ifmmode\text{CESS}\else CESS\xspace\fi for the adaptive placement of distributions is to ensure that $\alpha_t$ follows the same path regardless of the resampling strategy. Figure~\ref{fig:adaptive_alpha} shows the evolution of $\alpha_t$ when fixing \ess or \ifmmode\text{CESS}\else CESS\xspace\fi and resampling every iteration or only when $\ess < N/2$. As shown in the plot, when fixing \ifmmode\text{CESS}\else CESS\xspace\fi, the evolution of the distributions is not affected by the resampling strategy. In contrast, fixing \ess yields a sequence of distributions which depends strongly upon the resampling strategy. In terms of the actual performance when using the \ifmmode\text{CESS}\else CESS\xspace\fi adaptive strategy in the SMC2\xspace and AIS\xspace algorithms, a 20\% reduction in standard deviation was observed compared to the schedule $\alpha(t/T) = (t/T)^2$ used for Table~\ref{tab:gmm-pair}. When applied to the SMC3\xspace algorithm, a 50\% reduction was observed.
If the \ess adaptive strategy is used instead, a similar standard deviation reduction is observed when resampling is performed every iteration, but no significant effect was observed when resampling was only performed when $\ess < N/2$ (i.e., using \ess rather than \ifmmode\text{CESS}\else CESS\xspace\fi entirely eliminated the benefit). \paragraph{Effects of adaptive proposal scales} When using the SMC2\xspace algorithm, if the adaptive strategy of \citet{Andrieu:2006tw} is applied, where the importance sampling estimates of the variance of the parameters are included in the adaptation, the acceptance rates fall within $[0.2, 0.5]$ dynamically, without the manual tuning used for the results in Table~\ref{tab:gmm-pair}. It should be noted that in this particular example it is the variance of $\log\lambda_i$ that is estimated, as the corresponding random walk block operates on the log scale. The same principle applies to the weight parameters, whose random walks operate on the logit scale. An approximately 20\% reduction in standard deviation was observed. \subsection{Nonlinear Ordinary Differential Equations} The example from the previous section suggests that SMC2\xspace performs well relative to the other SMC\xspace possibilities. Given the wide range of settings in which it can be easily deployed, we will now concentrate further on this method. It also suggests that in the simple case of Gaussian mixtures, a complete adaptive strategy for both distribution specification and proposal scales works well. In this section, this will be further explored in a more complex model, a nonlinear ordinary differential equation system. This model, which was studied in \cite{Calderhead:2009bd}, is known as the Goodwin model.
The ODE\xspace system, for an $m$-component model, is: \begin{align*} \frac{\diff X_1(t)}{\diff t} &= \frac{a_1}{1 + a_2 X_m(t)^{\rho}} - \alpha X_1(t) & \\ \frac{\diff X_i(t)}{\diff t} &= k_{i-1}X_{i-1}(t) - \alpha X_i(t) & i = 2,\dots,m \\ X_i(0) &= 0 & i = 1,\dots,m \end{align*} The parameters $\{\alpha,a_1,a_2,k_{1:m-1}\}$ have common prior distribution $\rgamma(0.1, 0.1)$. Under this setting, $X_{1:m}(t)$ can exhibit either unstable oscillation or a constant steady state. The data are simulated for $m=\{3,5\}$ at equally spaced time points from $0$ to $60$, with time step $0.5$. The last $80$ data points of $(X_1(t), X_2(t))$ are used for inference. Normally-distributed noise with standard deviation $\sigma=0.2$ is added to the simulated data. Following \cite{Calderhead:2009bd}, the variance of the additive measurement error is assumed to be known. Therefore, the posterior distribution has $m+2$ parameters for an $m$-component model. As shown in \cite{Calderhead:2009bd}, when $\rho > 8$, due to the possible instability of the ODE\xspace system, the posterior can have a considerable number of local modes. In this example, we set $\rho = 10$. Also, as the solution to the ODE\xspace system is somewhat unstable, slightly different data can result in very different posterior distributions. \subsubsection{Results} We compare results from the SMC2\xspace and PMCMC\xspace algorithms. For the SMC\xspace implementation, $1,000$ particles and $500$ iterations were used, with the distributions specified by Equation~\eqref{eq:geometry_2}, with $\alpha(t/T) = (t/T)^5$, or via the completely adaptive specification. For the PMCMC\xspace algorithm, $50,000$ iterations are performed for burn-in and another $10,000$ iterations are used for inference. The same tempering as was used for SMC\xspace is used here.
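For concreteness, the Goodwin system above can be simulated with a standard fixed-step integrator; the following sketch (the parameter values are hypothetical, chosen only for illustration, and are not the values used to generate the experimental data) integrates the $m=3$ system from $X(0)=0$ with classical fourth-order Runge--Kutta and records the state every $0.5$ time units, matching the observation grid described above:

```python
def goodwin_rhs(x, a1, a2, k, alpha, rho):
    """Right-hand side of the m-component Goodwin system."""
    m = len(x)
    dx = [a1 / (1.0 + a2 * x[m - 1] ** rho) - alpha * x[0]]
    for i in range(1, m):
        dx.append(k[i - 1] * x[i - 1] - alpha * x[i])
    return dx

def simulate(m, a1, a2, k, alpha, rho, t_end=60.0, dt=0.5, substeps=20):
    """Classical RK4 from X(0) = 0, recording the state every dt time units."""
    x = [0.0] * m
    h = dt / substeps
    path = [list(x)]
    for _ in range(int(round(t_end / dt))):
        for _ in range(substeps):
            k1 = goodwin_rhs(x, a1, a2, k, alpha, rho)
            k2 = goodwin_rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k1)],
                             a1, a2, k, alpha, rho)
            k3 = goodwin_rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k2)],
                             a1, a2, k, alpha, rho)
            k4 = goodwin_rhs([xi + h * ki for xi, ki in zip(x, k3)],
                             a1, a2, k, alpha, rho)
            x = [xi + h * (p + 2.0 * q + 2.0 * r + s) / 6.0
                 for xi, p, q, r, s in zip(x, k1, k2, k3, k4)]
        path.append(list(x))
    return path

# Hypothetical parameter values; rho = 10 as in the experiments.
path = simulate(m=3, a1=1.0, a2=3.0, k=[1.0, 1.0], alpha=0.5, rho=10.0)
```

Adding Normal noise with $\sigma = 0.2$ to the first two components of the last $80$ recorded states then yields a synthetic data set of the kind used for inference here.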
Note that, in a sequential implementation of PMCMC\xspace, with each iteration updating one local chain and attempting a global exchange, the computational cost of the post-burn-in iterations is roughly the same as that of the entire SMC\xspace algorithm. In addition, changing $T$ within the range of the number of cores available does not substantially change the computational cost of a generic parallel implementation of the PMCMC\xspace algorithm. We compare results from $T = 10,30,100$. The results for data generated from the simple model ($m = 3$) and complex model ($m = 5$), again summarising variability amongst 100 runs of each algorithm, are shown in Tables~\ref{tab:node-s-all} and~\ref{tab:node-c-all}, respectively. \begin{table} \def\B{\color{blue}\it} \def\R{\color{red}\bf} \begingroup\small \begin{tabularx}{\linewidth}{lCClCCC} \toprule &&&& \multicolumn{2}{c}{Marginal likelihood} & \\ &&&& \multicolumn{2}{c}{($\log p(\paramk|\ensuremath{\bm{y}}\xspace)\pm\text{SD\xspace}$)} & \\ \cmidrule(lr){5-6} $T$ & Proposal Scales & Annealing Scheme & Algorithm & $m = 3$ & $m = 5$ & Bayes factor $\log B_{3,5}$ \\ \midrule $10 $ & Manual & Prior (5) & PMCMC\xspace & $-109.7\pm3.2$ & $-120.3\pm2.5$ & $10.6\pm3.8$ \\ $30 $ & & & & $\color{blue}\it-105.0\pm1.2$ & $\color{blue}\it-116.1\pm2.2$ & $\B11.2\pm2.5$ \\ $100$ & & & & $-134.7\pm7.9$ & $-144.1\pm6.2$ & $9.4\pm11.2$ \\ \midrule $500$ & Manual & Prior (5) & SMC2\xspace-DS\xspace & $-104.6\pm2.0$ & $-112.7\pm1.8$ & $8.1\pm2.8$ \\ & & & SMC2\xspace-PS\xspace & $-104.5\pm1.8$ & $-112.7\pm1.5$ & $8.2\pm2.5$ \\ $500$ & Manual & Adaptive & SMC2\xspace-DS\xspace & $-104.5\pm1.1$ & $-112.7\pm1.1$ & $8.1\pm1.6$ \\ & & & SMC2\xspace-PS\xspace & $-104.6\pm1.0$ & $-112.8\pm1.0$ & $8.2\pm1.5$ \\ $500$ & Adaptive & Adaptive & SMC2\xspace-DS\xspace & $-104.5\pm0.5$ & $-112.7\pm0.4$ & $8.1\pm0.8$ \\ & & & SMC2\xspace-PS\xspace & $\color{red}\bf-104.6\pm0.4$ & $\color{red}\bf-112.8\pm0.3$ & $\R8.1\pm0.6$ \\ \bottomrule
\end{tabularx} \endgroup \caption{Results for non-linear ODE\xspace models with data generated from the simple model. {\color{blue}\it Italic}: Minimum variance for the same algorithm. {\color{red}\bf Bold}: Minimum variance for all samplers.} \label{tab:node-s-all} \end{table} \begin{table} \def\B{\color{blue}\it} \def\R{\color{red}\bf} \begingroup\small \begin{tabularx}{\linewidth}{lCClCCC} \toprule &&&& \multicolumn{2}{c}{Marginal likelihood} & \\ &&&& \multicolumn{2}{c}{($\log p(\paramk|\ensuremath{\bm{y}}\xspace)\pm\text{SD\xspace}$)} & \\ \cmidrule(lr){5-6} $T$ & Proposal Scales & Annealing Scheme & Algorithm & $m = 3$ & $m = 5$ & Bayes factor $\log B_{5,3}$ \\ \midrule $10 $ & Manual & Prior (5) & PMCMC\xspace & $-1651.0\pm27.9$ & $-85.1\pm36.6$ & $1565.9\pm42.1$ \\ $30 $ & & & & $\color{blue}\it-1639.7\pm7.4$ & $\color{blue}\it-78.9\pm11.2$ & $\B1560.8\pm12.8$ \\ $100$ & & & & $-1624.6\pm15.7$ & $-75.7\pm24.8$ & $1548.9\pm25.6$ \\ \midrule $500$ & Manual & Prior (5) & SMC2\xspace-DS\xspace & $-1640.7\pm10.8$ & $-78.5\pm9.8$ & $1562.2\pm10.1$ \\ & & & SMC2\xspace-PS\xspace & $-1640.8\pm 8.4$ & $-79.2\pm7.9$ & $1561.6\pm 8.5$ \\ $500$ & Manual & Adaptive & SMC2\xspace-DS\xspace & $-1639.7\pm 6.9$ & $-78.6\pm4.8$ & $1561.1\pm7.1$ \\ & & & SMC2\xspace-PS\xspace & $-1640.1\pm 5.4$ & $-78.8\pm3.7$ & $1561.3\pm6.8$ \\ $500$ & Adaptive & Adaptive & SMC2\xspace-DS\xspace & $-1639.8\pm 2.2$ & $-79.4\pm1.7$ & $1560.4\pm3.1$ \\ & & & SMC2\xspace-PS\xspace & $\color{red}\bf-1640.2\pm 1.9$ & $\color{red}\bf-78.5\pm1.5$ & $\R1561.7\pm2.3$ \\ \bottomrule \end{tabularx} \endgroup \caption{Results for non-linear ODE\xspace models with data generated from the complex model. {\color{blue}\it Italic}: Minimum variance for the same algorithm. {\color{red}\bf Bold}: Minimum variance for all samplers.} \label{tab:node-c-all} \end{table} As shown in both cases, the number of distributions can affect the performance of PMCMC\xspace algorithms considerably.
When using $10$ distributions, a large bias from the numerical integration for the path sampling estimator was observed, as expected. With $30$ distributions, the performance is comparable to the SMC2\xspace sampler, though some bias is still observable. With $100$ distributions, there is a much larger variance because, with more chains, information travels more slowly from rapidly mixing chains to slowly mixing ones and consequently the mixing of the overall system is inhibited. The SMC\xspace algorithm provides results comparable to the best of the three PMCMC\xspace implementations in essentially all settings, including one in which both the annealing schedule and proposal scaling were fully automatic. In fact, the completely adaptive strategy was the most successful. It can be seen that increasing the number of distributions not only reduces the bias of the numerical integration for the path sampling estimator, but also reduces the variance considerably. On the other hand, increasing the number of particles can only reduce the variance of the estimates, in accordance with the central limit theorem (see \citet{DelMoral:2006hc} for the standard estimator, and Proposition~\ref{prop:path_clt} for its extension to the path sampling estimator), as the bias arises from the numerical integration scheme. \subsection{Positron Emission Tomography Compartmental Model} \label{sub:Positron Emission Tomography Compartmental Model} It is now interesting to compare the proposed algorithm with other state-of-the-art algorithms using a more realistic example. Positron Emission Tomography (PET\xspace) is a technique used for studying the brain \emph{in vivo}, most typically when investigating metabolism or neuro-chemical concentrations in either normal or patient groups. Given the nature and number of observations typically recorded in time, PET\xspace data are usually modelled with linear differential equation systems. For an overview of PET\xspace compartmental models see \citet{Gunn:2002tf}.
Given data $(y_1,\dots,y_n)^{\textrm{T}}$, an $m$-compartmental model has the generative form: \begin{gather} y_j = C_T(t_j;\phi_{1:m},\theta_{1:m}) + \sqrt{ \frac{C_T(t_j;\phi_{1:m},\theta_{1:m})}{t_j-t_{j-1}}} \varepsilon_j \\ C_T(t_j;\phi_{1:m},\theta_{1:m}) = \sum_{i=1}^m \phi_i\int_0^{t_j}C_P(s)e^{-\theta_i(t_j-s)}\intd s \end{gather} where $t_j$ is the measurement time of $y_j$, $\varepsilon_j$ is additive measurement error and the input function $C_P$ is (treated as) known. The parameters $\phi_1,\theta_1,\dots,\phi_m,\theta_m$ characterize the model dynamics. See \citet{Zhou2013} for applications of Bayesian model comparison for this class of models and details of the specification of the measurement error. In the simulation results below, the $\varepsilon_j$ are independently and identically distributed according to a zero-mean Normal distribution of unknown variance, $\sigma^2$, which was included in the vector of model parameters. \begin{figure} \includegraphics[width=\linewidth]{PETPlot-smc2-ps-bw} \caption{Estimates of $V_D$ from a single PET\xspace scan as found using SMC2\xspace. The data show that the volume of distribution exhibits substantial spatial variation. Note that each pixel in the image represents an estimate from an individual time series data set. There are approximately a quarter of a million of them and each requires a Monte Carlo simulation to select a model. } \label{fig:petplot} \end{figure} Real neuroscience data sets involve a very large number ($\sim200,000$ per brain) of time series, which are typically somewhat heterogeneous. Figure~\ref{fig:petplot} shows estimates of $V_D = \sum_{j=1}^m\phi_j/\theta_j$ from a typical PET\xspace scan (generated using SMC2\xspace as will be discussed later). Robustness is therefore especially important. An application-specific MCMC\xspace algorithm was developed for this problem in \citet{Zhou2013}.
A significant amount of tuning of the algorithms was required to obtain good results. The results shown in Figure~\ref{fig:petplot} are very close to those of \citet{Zhou2013} but, as is shown later, they were obtained with almost no manual tuning effort and at similar computational cost. For the SMC\xspace and PMCMC\xspace algorithms, the requirement of robustness means that the algorithm must be able to calibrate itself automatically to different data (and thus different posterior surfaces). A sequence of distributions which performs well for one time series may not perform even adequately for another series. A specification of proposal scales that produces fast-mixing kernels for one data series may lead to slow mixing for another. In the following experiment, we use a single simulated time series, and choose schedules that perform both well and poorly for this particular time series. The objective is to see whether the algorithm can recover from a relatively poorly specified schedule and obtain reasonably accurate results. \subsubsection{Results} In this example we focus on the comparison between SMC2\xspace and PMCMC\xspace. We also consider parallelized implementations of the algorithms. In this case, due to its relatively small number of chains, PMCMC\xspace can be parallelized completely (and often cannot fully utilize the hardware capability if a na\"\i ve approach to parallelization is taken; while we appreciate that more sophisticated parallelization strategies are possible, these depend intrinsically upon the model under investigation and the hardware employed and, given our focus on automatic and general algorithms, we do not consider such strategies here). The PMCMC\xspace algorithm under this setting is implemented such that each chain is updated at each iteration. Further, for the SMC\xspace algorithms, we consider two cases. In the first we can parallelize the algorithm completely (in the sense that each core has a single particle associated with it).
In this setting we use a relatively small number of particles and a larger number of time steps. In the second, we need a few passes to process a large number of particles at each time step, and accordingly we use fewer time steps to maintain the same total computation time. These two settings allow us to investigate the trade-off between the number of particles and the number of time steps. In both implementations, we consider three schedules, $\alpha(t/T) = t/T$ (linear), $\alpha(t/T) = (t/T)^5$ (prior), and $\alpha(t/T) = 1 - (1 - t/T)^5$ (posterior). In addition, the adaptive schedule based upon \ifmmode\text{CESS}\else CESS\xspace\fi is also implemented for the SMC2\xspace algorithm. Results from 100 replicate runs of the two algorithms under various regimes can be found in Tables~\ref{tab:pet-py-par-sel} and~\ref{tab:pet-bf-par-sel} for the marginal likelihood and Bayes factor estimates, respectively. The SMC\xspace algorithms consistently outperform the PMCMC\xspace algorithms in the parallel settings. The Monte Carlo SD\xspace of the SMC\xspace algorithms is typically of the order of one fifth of that of the corresponding estimates from PMCMC\xspace in most scenarios. In some settings with smaller numbers of samples, the two algorithms can be comparable. Also, at the lowest computational costs, the samplers with more time steps and fewer particles outperform those with the converse configuration by a fairly large margin in terms of estimator variance. This shows that with limited resources, ensuring the similarity of consecutive distributions, and thus good mixing, can be more beneficial than a larger number of particles. However, when the computational budget is increased, the difference becomes negligible. The robustness of SMC\xspace to the change of schedules is again apparent.
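As a point of reference for the compartmental model used in this example, the time--activity curve $C_T$ can be evaluated by simple quadrature once the input function is available. A minimal sketch follows; the mono-exponential input function and all parameter values are hypothetical, chosen only so that the computation can be checked against a closed form:

```python
import math

def c_t(t, phis, thetas, c_p, n_grid=400):
    """C_T(t) = sum_i phi_i * int_0^t C_P(s) exp(-theta_i (t - s)) ds,
    approximated by the composite trapezoidal rule on an s-grid."""
    if t <= 0.0:
        return 0.0
    h = t / n_grid
    s = [j * h for j in range(n_grid + 1)]
    total = 0.0
    for phi, theta in zip(phis, thetas):
        vals = [c_p(sj) * math.exp(-theta * (t - sj)) for sj in s]
        total += phi * h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return total

# Hypothetical mono-exponential input function and two-compartment parameters.
c_p = lambda s: math.exp(-0.1 * s)
curve = [c_t(t, phis=[0.5, 0.2], thetas=[0.3, 0.05], c_p=c_p)
         for t in range(0, 11)]
```

With this input function each compartment has the closed form $\phi_i(e^{-0.1 t} - e^{-\theta_i t})/(\theta_i - 0.1)$, which the quadrature reproduces to high accuracy; in practice $C_P$ is tabulated from measurements rather than available analytically.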
\begin{table} \def\B{\color{blue}\it} \def\R{\color{red}\bf} \begingroup\small \begin{tabularx}{\linewidth}{lllCCCC} \toprule \multicolumn{3}{l}{Proposal scales} & \multicolumn{3}{c}{Manual} & Adaptive \\ \cmidrule(lr){1-3}\cmidrule(lr){4-6}\cmidrule(lr){7-7} \multicolumn{3}{l}{Annealing scheme} & Prior (5) & Posterior (5) & \multicolumn{2}{c}{Adaptive} \\ \cmidrule(lr){1-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-7} $T$ & $N$ & Algorithm & \multicolumn{4}{c}{Marginal likelihood estimates ($\log p(\paramk|\ensuremath{\bm{y}}\xspace)\pm\text{SD\xspace}$)} \\ \midrule $500$ & $30 $ & PMCMC\xspace & $ -39.1\pm 0.56$ & $-926.8\pm376.99$ && \\ $500$ & $192$ & SMC2\xspace-DS\xspace & $\color{blue}\it-39.2\pm0.25$ & $\color{blue}\it-39.7\pm1.06$ & $\color{blue}\it-39.2\pm0.18$ & $\color{red}\bf-39.1\pm0.12$ \\ & & SMC2\xspace-PS\xspace & $\color{blue}\it-39.2\pm0.25$ & $-91.3\pm21.69$ & $\color{blue}\it-39.2\pm0.18$ & $-39.1\pm0.13$ \\ $100 $ & $960$ & SMC2\xspace-DS\xspace & $-39.3\pm0.36$ & $-40.6\pm1.41$ & $-39.2\pm0.31$ & $-39.2\pm0.19$ \\ & & SMC2\xspace-PS\xspace & $-39.3\pm0.35$ & $302.1\pm46.29$ & $-39.3\pm0.31$ & $-39.2\pm0.18$ \\ \midrule $1000$ & $30 $ & PMCMC\xspace & $ -39.3\pm 0.46$ & $-884.1\pm307.88$ && \\ $1000$ & $192$ & SMC2\xspace-DS\xspace & $\color{blue}\it-39.2\pm0.19$ & $\color{blue}\it-39.4\pm0.68$ & $\color{blue}\it-39.2\pm0.17$ & $\color{red}\bf-39.1\pm0.10$ \\ & & SMC2\xspace-PS\xspace & $\color{blue}\it-39.2\pm0.19$ & $-66.0\pm13.26$ & $\color{blue}\it-39.2\pm0.17$ & $\color{red}\bf-39.1\pm0.10$ \\ $200 $ & $960$ & SMC2\xspace-DS\xspace & $-39.2\pm0.22$ & $-39.8\pm1.21$ & $-39.2\pm0.18$ & $-39.1\pm0.11$ \\ & & SMC2\xspace-PS\xspace & $-39.2\pm0.22$ & $175.5\pm26.84$ & $-39.2\pm0.18$ & $-39.2\pm0.11$ \\ \midrule $2000$ & $30 $ & PMCMC\xspace & $ -39.3\pm 0.28$ & $-928.7\pm204.93$ && \\ $2000$ & $192$ & SMC2\xspace-DS\xspace & $-39.2\pm0.14$ & $\color{blue}\it-39.3\pm0.41$ & $-39.1\pm0.12$ & $-39.1\pm0.07$
\\ & & SMC2\xspace-PS\xspace & $-39.2\pm0.14$ & $-51.2\pm4.30$ & $-39.2\pm0.12$ & $-39.1\pm0.07$ \\ $400 $ & $960$ & SMC2\xspace-DS\xspace & $\color{blue}\it-39.2\pm0.13$ & $-39.4\pm0.73$ & $\color{blue}\it-39.2\pm0.11$ & $-39.2\pm0.07$ \\ & & SMC2\xspace-PS\xspace & $\color{blue}\it-39.2\pm0.13$ & $106.0\pm14.36$ & $\color{blue}\it-39.2\pm0.11$ & $\color{red}\bf-39.2\pm0.06$ \\ \midrule $5000$ & $30$ & PMCMC\xspace & $ -39.3\pm 0.21$ & $-917.6\pm129.54$ && \\ $5000$ & $192$ & SMC2\xspace-DS\xspace & $-39.2\pm0.09$ & $\color{blue}\it-39.2\pm0.20$ & $-39.2\pm0.08$ & $-39.1\pm0.04$ \\ & & SMC2\xspace-PS\xspace & $-39.2\pm0.09$ & $-43.8\pm2.13$ & $-39.2\pm0.08$ & $-39.1\pm0.04$ \\ $1000$ & $960$ & SMC2\xspace-DS\xspace & $\color{blue}\it-39.2\pm0.08$ & $-39.2\pm0.31$ & $\color{blue}\it-39.2\pm0.07$ & $\color{red}\bf-39.2\pm0.03$ \\ & & SMC2\xspace-PS\xspace & $\color{blue}\it-39.2\pm0.08$ & $-65.7\pm5.54$ & $\color{blue}\it-39.2\pm0.07$ & $\color{red}\bf-39.2\pm0.03$ \\ \bottomrule \end{tabularx} \endgroup \caption{Marginal likelihood estimates for the two-component PET\xspace model. $T$: Number of distributions in SMC\xspace and number of iterations used for inference in PMCMC\xspace. $N$: Number of particles in SMC\xspace and number of chains in PMCMC\xspace. The PMCMC\xspace and SMC\xspace with $N = 192$ are completely $N$-way parallelized. SMC\xspace with $N = 960$ are $N/5$-way parallelized. {\color{blue}\it Italic}: Minimum variance for the same computational cost and the same proposal scales and annealing schemes.
{\color{red}\bf Bold}: Minimum variance for the same computational cost and all proposal scales and annealing schemes.} \label{tab:pet-py-par-sel} \end{table} \begin{table} \def\B{\color{blue}\it} \def\R{\color{red}\bf} \begingroup\small \begin{tabularx}{\linewidth}{lllCCCC} \toprule \multicolumn{3}{l}{Proposal scales} & \multicolumn{3}{c}{Manual} & Adaptive \\ \cmidrule(lr){1-3}\cmidrule(lr){4-6}\cmidrule(lr){7-7} \multicolumn{3}{l}{Annealing scheme} & Prior (5) & Posterior (5) & \multicolumn{2}{c}{Adaptive} \\ \cmidrule(lr){1-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-7} $T$ & $N$ & Algorithm & \multicolumn{4}{c}{Bayes factor estimates ($\log B_{2,1}\pm\text{SD\xspace}$)} \\ \midrule $500$ & $30 $ & PMCMC\xspace & $1.7\pm0.62$ & $-70.9\pm525.79$ && \\ $500$ & $192$ & SMC2\xspace-DS\xspace & $\B1.6\pm0.27$ & $\B1.3\pm1.13$ & $\B1.6\pm0.20$ & $\R1.6\pm0.15$ \\ & & SMC2\xspace-PS\xspace & $\B1.6\pm0.27$ & $-3.9\pm30.02$ & $\B1.6\pm0.20$ & $\R1.6\pm0.15$ \\ $100 $ & $960$ & SMC2\xspace-DS\xspace & $1.6\pm0.37$ & $0.5\pm1.55$ & $1.6\pm0.34$ & $1.6\pm0.21$ \\ & & SMC2\xspace-PS\xspace & $1.6\pm0.37$ & $-13.1\pm66.30$ & $1.6\pm0.33$ & $1.6\pm0.21$ \\ \midrule $1000$ & $30 $ & PMCMC\xspace & $1.6\pm0.49$ & $-67.3\pm400.21$ && \\ $1000$ & $192$ & SMC2\xspace-DS\xspace & $\B1.6\pm0.21$ & $\B1.5\pm0.79$ & $1.6\pm0.20$ & $1.6\pm0.13$ \\ & & SMC2\xspace-PS\xspace & $\B1.6\pm0.21$ & $-0.6\pm15.47$ & $1.6\pm0.20$ & $1.6\pm0.13$ \\ $200 $ & $960$ & SMC2\xspace-DS\xspace & $1.6\pm0.25$ & $1.1\pm1.25$ & $1.6\pm0.19$ & $1.6\pm0.12$ \\ & & SMC2\xspace-PS\xspace & $1.6\pm0.24$ & $-11.7\pm34.68$ & $\B1.6\pm0.18$ & $\R1.6\pm0.11$ \\ \midrule $2000$ & $30 $ & PMCMC\xspace & $1.6\pm0.31$ & $-95.5\pm264.74$ && \\ $2000$ & $192$ & SMC2\xspace-DS\xspace & $\B1.6\pm0.14$ & $\B1.6\pm0.44$ & $1.6\pm0.13$ & $1.6\pm0.09$ \\ & & SMC2\xspace-PS\xspace & $\B1.6\pm0.14$ & $1.6\pm6.06$ & $1.6\pm0.13$ & $1.7\pm0.09$ \\ $400 $ & $960$ & SMC2\xspace-DS\xspace &
$1.6\pm0.16$ & $1.5\pm0.74$ & $\B1.6\pm0.12$ & $\R1.6\pm0.08$ \\ & & SMC2\xspace-PS\xspace & $1.6\pm0.16$ & $-4.2\pm17.15$ & $\B1.6\pm0.12$ & $\R1.6\pm0.08$ \\ \midrule $5000$ & $30$ & PMCMC\xspace & $1.6\pm0.24$ & $-60.3\pm198.10$ && \\ $5000$ & $192$ & SMC2\xspace-DS\xspace & $1.6\pm0.10$ & $\B1.6\pm0.23$ & $1.6\pm0.09$ & $1.6\pm0.05$ \\ & & SMC2\xspace-PS\xspace & $1.6\pm0.10$ & $1.3\pm2.98$ & $1.6\pm0.09$ & $1.6\pm0.05$ \\ $1000$ & $960$ & SMC2\xspace-DS\xspace & $\B1.6\pm0.09$ & $1.6\pm0.33$ & $\B1.6\pm0.08$ & $\R1.6\pm0.04$ \\ & & SMC2\xspace-PS\xspace & $\B1.6\pm0.09$ & $-0.2\pm6.63$ & $\B1.6\pm0.08$ & $\R1.6\pm0.04$ \\ \bottomrule \end{tabularx} \endgroup \caption{Bayes factor $B_{2,1}$ estimates for the two-component PET\xspace model. $T$: Number of distributions in SMC\xspace and number of iterations used for inference in PMCMC\xspace. $N$: Number of particles in SMC\xspace and number of chains in PMCMC\xspace. The PMCMC\xspace and SMC\xspace with $N = 192$ are completely $N$-way parallelized. SMC\xspace with $N = 960$ are $N/5$-way parallelized. {\color{blue}\it Italic}: Minimum variance for the same computational cost and the same schedule. {\color{red}\bf Bold}: Minimum variance for the same computational cost and all schedules.} \label{tab:pet-bf-par-sel} \end{table} \paragraph{Effects of adaptive schedule} A set of samplers with adaptive schedules was also used. Due to the nature of the schedule, it cannot be controlled to have exactly the same number of time steps as the non-adaptive procedures. However, the \ifmmode\text{CESS}\else CESS\xspace\fi was controlled such that the average number of time steps is comparable with that of the fixed schedules, and in most cases slightly smaller. It is found that, with little computational overhead, adaptive schedules do provide the best results (or very nearly so) and do so without user intervention. The reduction of Monte Carlo SD\xspace varies among different configurations.
For moderate or larger numbers of distributions, a reduction of about 50\% was observed. In addition, it should be noted that, in this example, the bias of the path sampling estimates is much more sensitive to the schedule than in the previous Gaussian mixture model example. A vanilla linear schedule does not provide a low-bias estimator at all, even when the number of distributions is increased considerably. Although the prior schedule provides a nearly unbiased estimator, there is no clear theoretical evidence that this will work in other situations. The adaptive schedule, without any manual calibration, can provide a nearly unbiased estimator, even when path sampling is employed, in addition to potential variance reduction. \paragraph{Bias reduction for path sampling estimator} As seen in Tables~\ref{tab:pet-py-par-sel} and~\ref{tab:pet-bf-par-sel}, a bad choice of schedule $\alpha(t/T)$ can result in considerable bias for the basic path sampling estimator, here for SMC2\xspace-PS\xspace, but the problem is independent of the mechanism by which the samples are obtained. Increasing the number of iterations can reduce this bias, but at the cost of additional computation time. As outlined in Section~\ref{ssub:Improved Univariate Numerical Integration}, in the case of the SMC\xspace algorithms discussed here, it is possible to reduce the bias without increasing the computational cost significantly. To demonstrate the bias reduction effect, we constructed an SMC\xspace sampler for the above PET\xspace example with only $1,000$ particles and about $20$ iterations specified using the \ifmmode\text{CESS}\else CESS\xspace\fi based adaptive strategy. The path sampling estimator was approximated using Equation~\eqref{eq:path_est}, as well as by higher-order numerical integration rules, integrating over a grid that contains the $\{\alpha_t\}$ at which the samples were generated.
The results are shown in Table~\ref{tab:pet-py-bias-reduction}. \begin{table} \begin{tabularx}{\linewidth}{lXXXX} \toprule & \multicolumn{4}{c}{Number of grid points (compared to sampled iterations)} \\ \cmidrule(lr){2-5} Integration rule & $\times1$ & $\times2$ & $\times4$ & $\times8$ \\ \midrule Trapezoid & $-52.2\pm5.01$ & $-45.5\pm1.93$ & $-42.1\pm1.21$ & $-40.5\pm1.06$ \\ Simpson & $-43.2\pm1.39$ & $-41.0\pm1.10$ & $-40.0\pm1.04$ & $-39.4\pm1.04$ \\ Simpson $3/8$ & $-42.1\pm1.21$ & $-40.5\pm1.06$ & $-39.7\pm1.04$ & $-39.3\pm1.04$ \\ Boole & $-40.9\pm1.09$ & $-39.9\pm1.04$ & $-39.4\pm1.04$ & $-39.2\pm1.05$ \\ \bottomrule \end{tabularx} \caption{Path sampling estimates of the marginal likelihood of the two-component PET\xspace model. The estimator was approximated using samples from the SMC2\xspace algorithm with $1,000$ particles and $20$ iterations, with different numerical integration strategies. The large-sample results (see Table~\ref{tab:pet-py-par-sel}) show that an unbiased estimate is $-39.2$.} \label{tab:pet-py-bias-reduction} \end{table} \paragraph{Real data results} Finally, the SMC2\xspace-PS\xspace methodology was applied to measured positron emission tomography data using the same compartmental setup as in the simulations. The data shown in Figure~\ref{fig:petplot} come from a study into opioid receptor density in epilepsy, with the data being described in detail in \cite{Jiang:2009kf}. It is expected that there will be considerable spatial smoothness to the estimates of the volume of distribution, as this is in line with the biology of the system being somewhat regional. Some regions will have much higher receptor density while others will be much lower, yielding higher and lower values of the volume of distribution, respectively.
Although we did not impose any spatial smoothness, but rather estimated the parameters independently for each time series at each spatial location, smooth spatial estimates of the volume of distribution consistent with neurological understanding were nonetheless obtained using the approach. This method is computationally feasible for the entire brain on a voxel-by-voxel basis, due to the ease of parallelization of the SMC\xspace algorithm. In the analysis performed here, $1,000$ particles were used, along with an adaptive schedule using a constant $\ifmmode\text{CESS}\else CESS\xspace\fi^\star = 0.999$, resulting in about 180 to 200 intermediate distributions. The model selection results are very close to those obtained by a previous study of the same data \citep{Zhou2013}, although the present approach requires much less implementation effort and has roughly the same computational cost. \subsection{Summary} These three illustrative applications have demonstrated three aspects of using SMC\xspace as a generic tool for Bayesian model selection. Firstly, as seen in the Gaussian mixture model example, all the different variants of SMC\xspace proposed, including both direct and path sampling versions, produce results which are competitive with other model selection methods such as RJMCMC\xspace and PMCMC\xspace. In addition, in this somewhat simple example, SMC2\xspace performs well, and leads to low variance estimates with no appreciable bias. The effect of adaptation was studied more carefully in the nonlinear ODE\xspace example, and it was shown that using both adaptive selection of distributions and adaptive proposal variances leads to very competitive algorithms, even against those with significant manual tuning. This suggests that an automatic process of model selection using SMC2\xspace is possible.
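The CESS-based placement of intermediate distributions used throughout admits a compact description: for a likelihood-tempered sequence, given the current normalised weights and per-particle log-likelihoods, the next $\alpha_t$ is chosen by bisection so that the (normalised) CESS equals a target such as $0.999$. The following sketch illustrates the idea; it is a simplified illustration under these assumptions, not the exact implementation used for the experiments:

```python
import math

def cess(W, log_l, delta):
    """Conditional ESS, normalised to (0, 1], for tempering increment delta,
    given normalised weights W and per-particle log-likelihoods log_l."""
    m = max(log_l) * delta              # stabilise the exponentials
    v = [math.exp(delta * l - m) for l in log_l]
    num = sum(w * vi for w, vi in zip(W, v)) ** 2
    den = sum(w * vi * vi for w, vi in zip(W, v))
    return num / den

def next_alpha(alpha_prev, W, log_l, cess_star=0.999, tol=1e-8):
    """Choose alpha_t in (alpha_prev, 1] so that CESS ~= cess_star, by bisection."""
    if cess(W, log_l, 1.0 - alpha_prev) >= cess_star:
        return 1.0                      # can jump straight to the posterior
    lo, hi = alpha_prev, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cess(W, log_l, mid - alpha_prev) >= cess_star:
            lo = mid
        else:
            hi = mid
    return lo
```

Because the criterion depends on the incremental weights only, and not on whether resampling occurred at the previous step, the resulting sequence of $\alpha_t$ is unaffected by the resampling strategy, which is the property exploited in the experiments above.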
In the final example, the easy parallelization of algorithms such as SMC2\xspace suggests that great reductions in estimator variance can be achieved using settings such as GPU\xspace computing for applications where computational resources are of particular importance (such as in image analysis, as in the PET\xspace example). It is also clear that the negligible cost of the bias reduction techniques described means that one should always consider using them to reduce the bias inherent in path sampling estimation. \section{Theoretical Considerations} \label{sec:Theoretical Considerations} Convergence results for the standard estimator can be found in \citet{DelMoral:2006hc} and references therein. In this paper, given our advocacy of SMC2\xspace-PS\xspace, we extend these results to the path sampling estimator obtained from SMC\xspace samplers. Here we present Proposition~\ref{prop:path_clt}, which is specific to the path sampling estimator using the simplest, trapezoidal, approach to numerical integration. It follows as a simple corollary to a more general result given in Appendix~\ref{sec:Proof of Proposition 1} which could be used to characterize more general numerical integration schemes.
\begin{proposition}\label{prop:path_clt} Under the same regularity conditions as are required for the central limit theorem given in \cite{DelMoral:2006hc} to hold, given an SMC\xspace sampler that iterates over a sequence of distributions $\{\pi_t = q_{\alpha_t}/Z_{\alpha_t}\}_{t=0}^T$ and applies multinomial resampling at each iteration, the path sampling estimator, $\widehat\Xi_{T}^{N}$, as defined in Equation~\eqref{eq:path_est} obeys a central limit theorem in the following sense: Let $\xi_t(\cdot) = \rnd{\log q_{\alpha}(\cdot)}{\alpha}\Bigm|_{\alpha = \alpha_t}$, $\beta_{0} = (\alpha_1 - \alpha_0) / 2$, $\beta_{T} = (\alpha_T - \alpha_{T-1}) / 2$ and, for $t \in \{1,\ldots,T-1\}$, $\beta_t = (\alpha_{t + 1} - \alpha_{t-1})/2$; then, provided $\xi_t$ is bounded: \begin{equation} \sqrt{N}(\widehat\Xi_{T}^{N} - \Xi_T) \xrightarrow{D}\rnorm(0, V_T(\xi_{0:T})) \end{equation} as $N\to\infty$, where $V_t$, $0\le t \le T$, is defined by the following recursion: \begin{align} V_0(\xi_0) =& \beta_0^2 \int \pi_0(x_0) (\xi_0(x_0) - \pi_0(\xi_0))^2 \intd x_0 \\ V_t(\xi_{0:t}) =& V_{t-1}\biggl(\xi_{0:t-2}, \xi_{t-1} + \frac{\beta_t}{\beta_{t-1}} \frac{\pi_{t}(\cdot)}{\pi_{t-1}(\cdot)} \int K_t(\cdot,x_t) (\xi_t(x_t)-\pi_t(\xi_t)) \intd x_t \biggr) \\\notag &+ \beta_t^2 \int\frac{\pi_t(x_{t-1})^2}{\pi_{t-1}(x_{t-1})} K_t(x_{t-1},x_t)(\xi_t(x_t) - \pi_t(\xi_t))^2 \intd x_{t-1} \intd x_t. \end{align} \end{proposition} We note that much recent analysis of SMC\xspace algorithms has focussed on relaxing the relatively strong assumptions used in the results upon which this result is based --- looking at more general resampling schemes \citep{DelMoral:2012jq} and relaxing compactness assumptions \citep{Whiteley:2013}, for example. However, we feel that this simple result is sufficient to show the relationship between the path sampling and simple estimators, and that in this instance the relative simplicity of the resulting expression justifies these stronger assumptions.
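The quadrature character of the path sampling estimator can also be checked in isolation: for a fixed grid $\{\alpha_t\}$ it is simply a numerical integration rule applied to the estimated expectations, so its bias is the quadrature error. The toy illustration below (the smooth integrand is hypothetical, standing in for the expectation of $\xi$ as a function of $\alpha$; exact values replace the Monte Carlo estimates so that only the integration bias remains) reproduces the pattern seen in the bias-reduction experiments, where Simpson-type rules were markedly less biased than the trapezoidal rule on the same grid:

```python
def trapezoid(f_vals, alphas):
    """Composite trapezoidal rule on a (possibly non-uniform) grid."""
    return sum(0.5 * (alphas[t + 1] - alphas[t]) * (f_vals[t] + f_vals[t + 1])
               for t in range(len(alphas) - 1))

def simpson(f_vals, alphas):
    """Composite Simpson's rule; assumes a uniform grid with an even number
    of subintervals."""
    h = alphas[1] - alphas[0]
    return (h / 3.0) * (f_vals[0] + f_vals[-1]
                        + 4.0 * sum(f_vals[1:-1:2]) + 2.0 * sum(f_vals[2:-2:2]))

# Hypothetical smooth integrand f(a) = 6 a^5 with exact integral 1 over [0, 1];
# on a coarse 21-point grid the Simpson bias is far smaller than the trapezoid bias.
alphas = [t / 20.0 for t in range(21)]
f_vals = [6.0 * a ** 5 for a in alphas]
err_trap = abs(trapezoid(f_vals, alphas) - 1.0)
err_simp = abs(simpson(f_vals, alphas) - 1.0)
```

The errors scale as $O(h^2)$ and $O(h^4)$ respectively, which is why, for a fixed sampled grid, swapping the integration rule reduces bias at negligible computational cost.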
\section{Discussion} \label{sec:Conclusion} It has been shown that SMC\xspace is an effective Monte Carlo method for Bayesian model comparison. Three approaches have been outlined and investigated in several illustrative applications, including the challenging scenarios of nonlinear ODE\xspace models and PET\xspace compartmental systems. The proposed strategy is always competitive and often substantially outperforms the state of the art in this area. It has been demonstrated that it is possible to use SMC\xspace algorithms to estimate the model probabilities directly (SMC1\xspace), through the individual model evidences (SMC2\xspace), or through the pairwise relative evidence (SMC3\xspace). In addition, both the SMC2\xspace and SMC3\xspace algorithms can be coupled with the path sampling estimator. Among the three approaches, SMC1\xspace is applicable in very general settings. It can provide a robust alternative to RJMCMC\xspace when inference on a countable collection of models is required (and could be readily combined with the approach of \cite{Jasra:2008bb} at the expense of a little additional implementation effort). However, as for all Monte Carlo methods involving between-model moves, it can be difficult to design efficient algorithms in practice. The SMC3\xspace algorithm is conceptually appealing. However, the existence of a suitable sequence of distributions between two posterior distributions may not be obvious. The SMC2\xspace algorithm, which only involves within-model simulation, is the most straightforward to implement in many interesting problems. It has been shown to be exceedingly robust in many settings. As it depends largely upon a collection of within-model MCMC\xspace moves, any existing MCMC\xspace algorithm can be reused within the SMC2\xspace framework.
However, much less tuning is required because the algorithm is fundamentally less sensitive to the mixing of the Markov kernel and it is possible to implement effective adaptive strategies at little computational cost. With adaptive placement of the intermediate distributions and specification of the MCMC\xspace kernel proposals, it provides a robust and essentially automatic model comparison method. Compared to the PMCMC\xspace algorithm, SMC2\xspace has greater flexibility in the specification of distributions. Unlike PMCMC\xspace, where the number and placement of distributions can affect the mixing speed and hence performance considerably, increasing the number of distributions will always benefit an SMC\xspace sampler given the same number of particles. When coupled with a path sampling estimator, this leads to lower bias and variance. It has also been shown that SMC\xspace samplers with resampling can reduce the variance of normalizing constant estimates considerably compared to their no-resampling variants. Even after three decades of intensive development, no Monte Carlo method can solve the Bayesian model comparison problem completely automatically without any manual tuning. However, the SMC\xspace algorithms and adaptive strategies demonstrated in this paper show that even for realistic, interesting problems, these samplers can provide good results with very minimal tuning and few design difficulties. For many applications, they could already be used as near-automatic, robust solutions. For more challenging problems, the robustness of the algorithms can serve as a solid foundation for specific algorithm designs.
\section{Introduction} \label{sec:intro} Active Galactic Nuclei (AGN) are believed to be triggered when gas and matter are deposited onto a super-massive black hole in the centre of the host galaxy. For this to happen, the gas needs to lose sufficient angular momentum to be transported deep into the potential well of the galaxy, until it eventually fuels the AGN. Many different mechanisms have been proposed for transporting the gas down to the nuclear region, from galaxy mergers and interactions \citep[e.g.][]{hec86,lin88,wu98,col95,can01,kuo08} to bars and central spiral structures \citep*[e.g.][]{sch89,pri05} to cooling flows and accretion of circum-galactic hot gas \citep[e.g.][]{fab95,bes06,all06}. Undoubtedly, many of these fuelling mechanisms do occur; since various classes of AGN (e.g.\ quasars, Seyferts, radio galaxies, etc.) are found in different environments and are known to have intrinsic differences \citep[other than simply orientation-dependent properties, see e.g.][]{urr95}, it is likely that certain mechanisms are associated with specific types of AGN \citep[see e.g.][for a review]{mar04}. Nearby radio galaxies form a particularly interesting group of active galaxies for investigating possible AGN fuelling mechanisms. Their radio continuum sources evolve over time, and both their fuelling characteristics and their dynamical interaction with the surrounding host galaxies are reflected in easily observable properties of these sources. This allows us to estimate the time-scale since the onset of the current episode of AGN activity, as well as to match radio source characteristics with host galaxy properties and possible fuelling mechanisms. For example, compact radio sources (in particular the Giga-hertz Peaked Spectrum [GPS] and Compact Steep Spectrum [CSS] sources) are often believed to be young radio sources.
Interactions between their radio jets (which can be imaged at high resolution with VLBI observations) and the surrounding medium can give an insight into the physical properties of the Inter-Stellar Medium (ISM) in the central region of the galaxy, where the potential AGN fuel reservoir is stored. On larger scales, there is a striking dichotomy in radio source morphology between high- and low-power radio sources \citep{fan74}. While powerful Fanaroff $\&$ Riley type-II (FR{-\small II}) radio sources contain relativistic jets that end in bright hot-spots, low-power FR{-\small I}\ sources are sub-relativistic and have an edge-darkened morphology. Various studies indicate that this striking difference in radio-source properties may be linked to a difference in host galaxy properties and, relatedly, a difference in the feeding mechanism of the AGN. It has been argued from optical studies that a significant fraction of powerful radio galaxies with strong emission-lines show peculiar optical morphologies and emission-line features reminiscent of a gas-rich galaxy merger, but that low-power radio sources with weak emission-lines do not generally share the same optical properties \citep{hec86,bau92}. \citet*{chi99} show from {\sl HST} observations that low-power radio sources lack evidence for an obscuring torus and substantial emission from a classical accretion disc. This suggests that accretion may take place in a low-efficiency regime, which can be explained by accretion of gas from the galaxy's hot gaseous halo \citep{fab95}. From X-ray studies, \citet*{har07} suggest that high-excitation AGN in general (comprising a large fraction of powerful radio sources) may form a classical accretion disc from cold gas deposited by a gas-rich galaxy merger, while low-excitation AGN (comprising most low-power radio galaxies) may be fed through quasi-spherical Bondi accretion of circum-galactic hot gas that condenses directly onto the central black hole.
A similar conclusion was reached by \citet{bal08} from the fact that high-excitation radio galaxies almost always show evidence for recent star formation, while this is generally not the case in their low-excitation counterparts. Interestingly, all the above-mentioned studies place an important emphasis on the crucial role of the {\sl cold gas}, since this gas -- when deposited after a gas-rich galaxy merger/collision -- is thought to be the potential fuel reservoir for AGN/starburst activity and is also believed to be an important ingredient in the formation of a classical accretion disc and surrounding torus. Unfortunately, a systematic inventory of the cold gas in radio galaxies has so far been lacking. Studying the neutral hydrogen (H{\,\small I}) gas in radio galaxies provides a powerful tool to investigate the occurrence of cold gas among various types of radio-loud AGN. {\sl H{\,\small I}\ observations are particularly suited to reveal the occurrence of gas-rich galaxy mergers and interactions in these systems and hence to study their importance in triggering/fuelling the radio source.} This issue was first addressed by \citet{hec83}, who used single-dish H{\,\small I}\ observations for tracing the cold gas in a pre-selected sample of radio-loud interacting galaxies. Current-day interferometers allow us to map the H{\,\small I}\ gas in unbiased samples of nearby radio galaxies. Mapping the H{\,\small I}\ in emission on host galaxy scales can reveal ongoing galaxy interactions that are easily missed by optical imaging of the starlight; see for example the case of the M81 group \citep*[][]{Yun94}. A good example of this is also given by \citet{kuo08} and \citet{tan08}, who show that H{\,\small I}\ observations of nearby Seyfert galaxies clearly reveal that Seyfert systems are much more strongly associated with ongoing interactions than their non-active counterparts -- a trend that is not seen from optical analysis of their samples.
In the case of a more violent galaxy merger or collision, the simultaneous spatial and kinematical information obtained with H{\,\small I}\ observations is ideal for tracing and dating these events over relatively long time-scales. The reason is that in a galaxy merger or collision, part of the gas is often expelled in the form of large structures \citep[tidal tails, bridges, shells, etc.;][]{hib96,mih96,bar02}, which often have too low a surface density for massive star formation to occur. If the environment is not too hostile, parts of these gaseous structures remain bound to the host galaxy as relic signs of the galaxies' violent past, even long after optical stellar features directly associated with this encounter may have faded \citep[e.g.][]{hib96}. Several studies show that time-delays of tens to many hundreds of Myr between a merger event and the onset of the current episode of AGN activity do occur among active galaxies \citep{tad05,emo06,lab08}, making H{\,\small I}\ observations ideal for detecting evidence of galaxy mergers or collisions on these time-scales. \begin{table*} \centering \caption{Radio galaxy properties} \vspace{0.4cm} \label{tab:sourceproperties} \begin{tabular}{lllcccccclclc} Source & & \multicolumn{1}{c}{{\sl z}} & D & Opt. & $M_{V}$ & S(60$\mu$m) & S(100$\mu$m) & log$P_{\rm 1.4 GHz}$ & (ref.) & LS & (ref.) & Type \\ B2 name & other name & & (Mpc) & Mor.
& & (mJy) & (mJy) & (W/Hz) & & (kpc) & & source \\ \hline 0034+25 & &0.0318 &134 & E & -22.1 & $<$153 & $<$378 & 23.4 & (3) & 200 & (5,10) & FR{-\small I}\ \\ 0055+30 & NGC 315 & 0.0165 & 70 & E & -23.1 & 363$\pm$17 & 586$\pm$74 & 24.2 & (6) & 1200 & (11,15) & FR{-\small I}\ \\ 0104+32 & 3C31 & 0.0169 & 71 & S0 & -22.0 & 444$\pm$21 & 1720$\pm$57 & 24.0 & (7) & 484 & (10) & FR{-\small I}\ \\ 0206+35 & &0.0377 &159 & E & -22.6 & $<$126 & $<$284 & 24.8 & (2) & 69.9 & (2,10) & FR{-\small I}\ \\ 0222+36 & &0.0334 &141 & E & -22.2 & $<$126 & $<$315 & 23.7 & (4) & 4.8 & (5) & C \\ 0258+35 & NGC 1167 & 0.0165 & 70 & S0 & -21.7 & 177$\pm$47 & $<$441 & 24.0 & (4) & 1.4 & (12) & C \\ 0326+39 & &0.0243 &103 & E$^{\dag}$ & -21.9 & $<$140 & $<$410 & 24.2 & (7) & 202 & (10) & FR{-\small I}\ \\ 0331+39 & &0.0206 &87 & E & -22.4 & $<$140 & $<$410 & 23.9 & (2) & 29.1 & (2,10) & FR{-\small I}\ \\ 0648+27 & &0.0412 &174 & S0 & -23.2 & 2758$\pm$57 & 2419$\pm$57 & 23.7 & (4) & 1.3 & (14) & C \\ 0722+30 & & 0.0189 & 80 & S & -20.5 & 3190$\pm$21 & 5141$\pm$57 & 23.0 & (4) & 13.6 & (2,10) & FR{-\small I}\ \\ 0924+30 & &0.0253 &107 & E$^{\dag}$ & -21.9 & $<$126 & $<$315 & 23.8 & (7) & 435 & (10) & FR{-\small I}\ \\ 1040+31 & &0.0360 &152 & DB$^{\dag}$& -21.5 & $<$195 & $<$473 & 24.3 & (2) & 40.3 & (2,10) & FR{-\small I}\ \\ 1108+27 & NGC 3563 &0.0331 &140 & S0 & -21.1$^{\dag}$ & $<$153$^{\ddag}$ & $<$315$^{\ddag}$ & 23.3 & (2) & 1085 & (10) & FR{-\small I}\ \\ 1122+39 & NGC 3665 &0.0069 &29 & S0 & -22.8 & 1813$\pm$47 & 7014$\pm$116& 22.0 & (5) & 10.7 & (2,10) & C \\ 1217+29 & NGC 4278 &0.0022 &16.1$^{1}$ & E & -21.4 & 618$\pm$21 & 2041$\pm$57 & 22.2 & (6) & 0.009& (13) & C \\ 1321+31 & NGC 5127 &0.0162 &68 & E pec & -21.5 & $<$140 & $<$347 & 23.9 & (7) & 246 & (10) & FR{-\small I}\ \\ 1322+36 & NGC 5141 &0.0174 &73 & S0 & -21.5 & $<$153 & $<$378 & 23.7 & (2) & 19.1 & (2,10) & FR{-\small I}\ \\ 1447+27 & & 0.0306 & 129 & S0 & -21.4 & $<$112 & $<$284 & 23.6 & (6) & $<$2.3& (3) & C \\ 1658+30 
& 4C 30.31 & 0.0344 & 145 & E & -20.9 & $<$112 & $<$252 & 24.2 & (3) & 114 & (5,10) & FR{-\small I}\ \\ 2116+26 & NGC 7052 &0.0156 &66 & E & -21.9 & 538$\pm$17 & 1276$\pm$57 & 22.7 & (2) & 291 & (10) & FR{-\small I}\ \\ 2229+39 & 3C 449 &0.0171 &72 & E & -21.6 & 186$\pm$42 & 1029$\pm$137& 24.4 & (9) & 462 & (10) & FR{-\small I}\ \\ \vspace{-2mm} & & & & & & & & & & & & \\ 1557+26 & & 0.0442 & 187 & E & -22.1 & $<$126 & $<$315 & 23.1 & (2) & $\sim$2&(2) & C \\ \multicolumn{1}{c}{-}&NGC 3894 & 0.0108 & 46 & E & -22.4 & 140$\pm$59 & 480$\pm$158 & 23.0 & (6) & 1.6 & (15) & C \\ \end{tabular} \flushleft {Notes -- The distance D (col. 4) to the radio galaxy and the linear size of the radio source (LS -- col. 11) have been determined from the redshift ({\sl z} -- col. 2) and $H_{0}$ = 71 km\ s$^{-1}$\ Mpc$^{-1}$ (unless otherwise indicated). Redshifts and optical morphology are based on results from the NASA/IPAC Extragalactic Database (NED), unless otherwise indicated. $M_{V}$ and IRAS flux densities (columns 6 - 8) are from \citet[][and refs. therein]{imp93} and adjusted for $H_{0}$ = 71 km\ s$^{-1}$\ Mpc$^{-1}$ (unless otherwise indicated). $^{\dag}$Information taken from \citet{bur79}. $^{\ddag}$Data taken from \citet{imp90}. {\sl Refs:} {\sl 1.} \citet{ton01}, {\sl 2.} \citet{par86}, {\sl 3.} \citet{rui86}, {\sl 4.} \citet{fan86}, {\sl 5.} \citet{fan87}, {\sl 6.} \citet{whi92}, {\sl 7.} \citet{eke81}, {\sl 8.} \citet{sch83}, {\sl 9.} \citet{lai80}, {\sl 10.} \citet[][based on the data used in this paper]{emo06thesis}, {\sl 11.} \citet{mor09_NGC315}, {\sl 12.} \citet*{gir05a}, {\sl 13.} \citet{wil98}, {\sl 14.} \citet{mor03}, {\sl 15.} \citet{tay98}, \citet{bri79}} \end{table*} Another advantage of studying H{\,\small I}\ in radio galaxies is that H{\,\small I}\ can be traced in absorption against the bright radio continuum. It allows us to investigate the kinematics of H{\,\small I}\ gas located in front of the radio source, all the way down to the very nuclear region.
This provides important insight into the presence or absence of circum-nuclear discs and tori, or allows us to look for direct evidence of AGN fuelling/feedback in the form of gas infall/outflow \citep[see e.g.][]{gor89,mor01,mor08,ver03,mor05}. In this paper we study the large-scale H{\,\small I}\ properties of a {\sl complete} sample of nearby low-power radio galaxies and combine this with deep optical observations of the low-surface brightness stellar content of the H{\,\small I}-rich objects. The main aim is to investigate the importance of gas-rich galaxy mergers/interactions among low-power radio galaxies. We compare the H{\,\small I}\ properties of our sample of nearby radio galaxies with similar studies done on radio-quiet early-type galaxies \citep{oos07,oos09}. Our sample of nearby radio galaxies consists of low-power compact and FR{-\small I}\ radio sources. The current paper follows a first {\sl Letter} in this series \citep[][hereafter Paper{\,\small I}]{emo07}, in which some H{\,\small I}\ results related to the low-power compact sources in this sample were already discussed. While the current paper will briefly revisit the results from Paper{\,\small I}, it will also give the overall results and details on the entire sample. Results of a sample of more powerful FR{-\small II}\ radio sources as well as a discussion of the role of H{\,\small I}\ gas in the FR{-\small I}/FR{-\small II}\ dichotomy will be presented in a future paper.\\ \ \\ Throughout this paper we use $H_{0}$ = 71 km\ s$^{-1}$\ Mpc$^{-1}$. We calculate distances using the Hubble Law c$\cdot$$z$=H$_{0}$$\cdot$D (i.e., for simplicity we assume a single value for both the luminosity distance and the angular diameter distance, which is accurate to within a few percent at the redshifts of our sample sources).
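As an aside on this distance convention, the Hubble-law distance can be sketched in a few lines of Python; the function name is ours, not from the paper, and the example value reproduces the tabulated distance of B2~0055+30 (NGC 315) in Table~\ref{tab:sourceproperties}:

```python
# Sketch (not from the paper) of the low-redshift Hubble-law distance
# D = c*z / H0 adopted throughout: a single distance measure standing
# in for both the luminosity and angular diameter distance.
C_KMS = 299792.458   # speed of light [km/s]
H0 = 71.0            # adopted Hubble constant [km/s/Mpc]

def hubble_distance_mpc(z, h0=H0):
    """Distance in Mpc from the low-redshift Hubble law c*z = H0*D."""
    return C_KMS * z / h0

# B2 0055+30 (NGC 315) has z = 0.0165, giving D of about 70 Mpc,
# as tabulated.
print(round(hubble_distance_mpc(0.0165)))  # -> 70
```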
\section{The sample} \label{sec:sample} Our initial sample of nearby low-power radio galaxies consisted of 23 sources from the B2-catalogue \citep[][flux density limit $S_{\rm 408MHz} \ga 0.2$ Jy]{col70} with redshifts up to {\sl z} = 0.041. This initial sample is complete, with the restriction that we left out BL-Lac objects as well as sources in dense cluster environments (since here the gas content of galaxies is severely influenced by environmental effects [e.g. \citet{cay94,sol01,chu09}] and we expect that merger signatures may be wiped out on relatively short time-scales). Because of observational constraints, two sources were excluded from our initial sample (B2~1317+33 and B2~1422+26), but we do not expect that this will significantly alter our main results. Of our remaining sample of 21 radio galaxies, six have a compact radio source, while fifteen have an extended FR{-\small I}\ radio source (with `compact' defined as not extending beyond the optical boundary of the host galaxy, typically $\la 10$ kpc in diameter). Most of the compact sources have often been referred to as Low Power Compact (LPC) sources. The exception is B2~0258+35, which has been classified as a Compact Steep Spectrum (CSS) source \citep{san95}. In order to increase the number of compact sources in our sample, we observed two more radio galaxies with a compact source: B2~1557+26, a radio galaxy from the B2 catalogue with {\sl z} = 0.0442 (therefore just outside the redshift range of our complete sample) and NGC 3894 (a compact radio source that is comparable in power to our B2 sample sources, but with a declination outside the completeness limit of the B2 sample). While these two sources provide additional information on the H{\,\small I}\ content of nearby radio galaxies with a compact source, they are left out of the statistical analysis of our complete sample discussed in the remainder of this paper. 
All the sources in our sample have a radio power of 22.0 $\leq$ log ($P_{\rm 1.4\ GHz}$) $\leq$ 24.8 and their host galaxies were a priori classified as early-type galaxies \citep[with the exception of the late-type system B2~0722+30;][]{emo09}. Table \ref{tab:sourceproperties} lists the properties of the radio galaxies in our sample.\footnote{In this paper we use the B2 name for both the radio source as well as the host galaxy.} We note that our current sample contains no powerful FR{-\small II}\ radio galaxies. FR{-\small II}\ sources are generally located at a higher redshift and no FR{-\small II}\ source that meets our selection criteria is present in the B2 catalogue. \begin{table} \centering \caption{Radio observations} \vspace{0.4cm} \label{tab:obsparam} \begin{tabular}{llcc} B2 Source & Observatory & Obs. date(s) & t$_{\rm obs}$ \\ & & (dd/mm/yy)&(hrs) \\ \hline 0034+25 & WSRT & 09+11/08/07 & 24 \\ 0055+30 & WSRT$^{a}$& 30/06/00;05/09/01& 21 \\ 0104+32 & VLA-C & 22/12/02 & 4 \\ & WSRT & 23+29/08/07 & 24 \\ 0206+35 & WSRT & 29/08/04;22/08/07; & 31 \\ & & 11/09/07 & \\ 0222+36 & WSRT & 12+13/09/07 & 24 \\ 0258+35 & WSRT$^{b}$& 22/10/06;12/07/08; & 107 \\ & & 19/08/08;26+29/09/08; & \\ & & 01+02+07+10/10/08; \\ & & 07+12+17/11/08 \\ 0326+39 & WSRT & 17+18/09/07 & 24 \\ 0331+39 & WSRT & 19/08/04;19+20/09/07 & 30 \\ 0648+27 & WSRT$^{c}$& 12+15/08/02;28/12/02 & 36 \\ 0722+30 & VLA-C$^{d}$ & 23/12/02 & 4 \\ 0924+30 & WSRT & 07/08/04 & 9 \\ 1040+31 & WSRT & 31/07/04 & 7 \\ 1108+27 & VLA-C & 31/03/04 & 4 \\ 1122+39 & VLA-C & 20/04/04 & 2.5 \\ 1217+29 & WSRT$^{e}$& 04/02/04& 48 \\ 1321+31 & VLA-C & 02/11/02 & 4 \\ 1322+36 & VLA-C & 02/11/02 & 4 \\ 1447+27 & WSRT & 10/04/04 & 12 \\ 1658+30 & WSRT & 03+04/08/07 & 24 \\ 2116+26 & WSRT & 01+07/08/07 & 24 \\ 2229+39 & WSRT & 08+10/08/07 & 24 \\ & & & \\ 1557+26 & WSRT & 09/04/04 & 12 \\ NGC 3894& WSRT & 01/02/04 & 12 \\ \end{tabular} \flushleft {\sl Notes:} Although initially the WSRT and VLA observations were aimed at 
obtaining a uniform sensitivity, many of our sample sources were re-observed over the years, resulting in the varying observing times. t$_{\rm obs}$ (last column) is the total observing time. {{\sl References:} a). \citet{mor09_NGC315}; b). Struve et al. (in prep.); c). \citet{emo06}; d). \citet{emo09}; e). \citet{mor06b}} \end{table} \section{Observations} \label{sec:observations} \subsection{Neutral hydrogen gas} \label{sec:obsHI} Observations were done during various observing runs in the period Nov. 2002 -- Nov. 2008 with the Very Large Array (VLA) in C-configuration and the Westerbork Synthesis Radio Telescope (WSRT). The C-configuration of the VLA was chosen to optimise the observations for sensitivity to detect both extended H{\,\small I}\ emission as well as H{\,\small I}\ absorption against the radio continuum and to match the beam of the WSRT (in order to obtain as homogeneous an H{\,\small I}\ sample as possible). For the WSRT observations we used the 20 MHz bandwidth with 1024 channels in two intermediate frequency (IF) modes. For the VLA-C observations we used the 6.25 MHz band with 64 channels and two IF modes. Table \ref{tab:obsparam} gives the details of the observations. For the reduction, analysis and visualisation of the data we used the {\tt MIRIAD}, {\tt GIPSY} and {\tt KARMA} software. After flagging, a standard bandpass-, phase- and (if necessary) self-calibration was performed on the data. In order to minimise aberration effects of strong continuum point-sources in the field of our target sources, the model components of these strong point sources were removed from the data in the uv-domain. Continuum and line data sets were constructed by fitting a first- or second-order polynomial to the line-free channels in the uv-data, applying a Fourier transformation and subsequently cleaning and restoring the signal in the data in order to remove the beam-pattern.
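The continuum subtraction step (a first- or second-order polynomial fitted to the line-free channels) can be illustrated on a single spectrum; note that the fit described above is performed on the uv-data, so the per-spectrum version below, with function and variable names of our choosing, is only a sketch of the idea:

```python
import numpy as np

def subtract_continuum(spectrum, line_free, order=1):
    """Fit a low-order polynomial to the line-free channels of one
    spectrum and subtract it (the paper applies the analogous fit to
    the uv-data before Fourier transforming)."""
    chans = np.arange(spectrum.size)
    coeffs = np.polyfit(chans[line_free], spectrum[line_free], order)
    return spectrum - np.polyval(coeffs, chans)

# Toy example: a linear continuum plus a "line" in channels 4-6.
spec = 2.0 + 0.5 * np.arange(10.0)
spec[4:7] += 3.0
mask = np.ones(10, dtype=bool)
mask[4:7] = False                    # line-free channels
line = subtract_continuum(spec, mask)
print(np.allclose(line[mask], 0.0))  # continuum removed: True
print(np.allclose(line[4:7], 3.0))   # line amplitude preserved: True
```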
The resulting continuum images do not have the optimal sensitivity and resolution for studying the radio sources in detail and are, therefore, omitted from this paper \citep[see][\ for a collection of the continuum images]{emo06thesis}. We note, however, that the radio source structures and flux densities in these images agree with continuum observations from the literature \citep[][]{par86,rui86,fan86,fan87}. For the line-data we constructed data cubes with different weighting-schemes in order to maximise our sensitivity for various emission/absorption features. Uniform weighting has been used to study in detail H{\,\small I}\ in absorption against the radio continuum for most of our sample sources, while robust weighting \citep{bri95} provided the best results for tracing H{\,\small I}\ in emission. Table \ref{tab:linedataproperties} gives an overview of the properties of the data sets that we used for this paper. Total intensity maps of the line-data were made by summing all the signal that is present above (or, for absorption, below) a certain cut-off level in at least two consecutive channels. This cut-off level was set at a few times the noise level, with the exact value depending on the noise properties of the individual data-cubes (but typically 3$\sigma$). In cases where the signal is very weak, it was taken into account only when it appeared in both polarisations and in both the first and the last half of the observations. Further details on the data reduction of several individual objects that we previously published can be found under the references mentioned in Table \ref{tab:obsparam}.
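The two-consecutive-channel criterion used for the total intensity maps can be sketched with {\tt numpy}; the cube layout (velocity, y, x) and the function name are our assumptions, and absorption would be handled analogously with the inequality reversed:

```python
import numpy as np

def total_intensity_map(cube, sigma, nsigma=3.0):
    """Moment-0 map that keeps emission above nsigma*sigma only where
    it appears in at least two consecutive velocity channels.
    cube: array of shape (nchan, ny, nx); sigma: per-channel noise."""
    above = cube > nsigma * sigma
    keep = np.zeros_like(above)
    keep[1:] |= above[1:] & above[:-1]    # bright together with predecessor
    keep[:-1] |= above[:-1] & above[1:]   # bright together with successor
    return np.where(keep, cube, 0.0).sum(axis=0)

# Toy cube with one spatial pixel: the two consecutive bright channels
# are kept, the isolated bright channel in the last slot is rejected.
cube = np.array([1.0, 1.0, 0.1, 1.0]).reshape(4, 1, 1)
print(total_intensity_map(cube, sigma=0.1))  # [[2.]]
```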
\begin{table*} \centering \caption{Properties of the H{\,\small I}\ data} \vspace{0.4cm} \label{tab:linedataproperties} \begin{tabular}{lccclcclc} Source &$\Delta$v & \multicolumn{3}{l}{{\bf Uniform -- absorption}} & \multicolumn{4}{l}{{\bf Robust/Natural -- emission}} \\ B2 name & (km/s) & (1) & (2) & (3) & (1) & (2) & (3) & (4) \\ \hline 0034+25 & 16.5 & 0.43 & $26.2 \times 11.8$ & (-1.0) & 0.22 & $44.6 \times 25.6$ & (-2.7)& +2 \\ 0055+30$^{a}$ & 20 & - & - & - & 0.21 & $35 \times 18$ & (-5) &+0.5 \\ 0104+32{\small \ (VLA)} & 20.6 & 0.27 & $13.4 \times 10.8$ & (-39.2) & 0.20 & $18.8 \times 18.1$ & (-40.5)& +2 \\ 0104+32{\small \ (WSRT)} & 16.5 & 0.41 & $22.2 \times 14.3$ & (17.2) & 0.20 & $40.0 \times 26.7$ & (3.4) & +2 \\ 0206+35 & 16.5 & 0.34 & $21.4 \times 11.9$ & (9.2) & 0.16 & $42.2 \times 24.7$ & (8.6) & +2 \\ 0222+36 & 16.5 & 0.43 & $16.7 \times 15.6$ & (39.2) & 0.22 & $35.0 \times 30.4$ & (23.4) & +2 \\ 0258+35$^{b}$ & 16.5 & - & - & - & 0.13 & $28.9 \times 16.8$ & (1.5) & +0.4 \\ 0326+39 & 16.5 & 0.39 & $19.2 \times 13.4$ & (2.4) & 0.19 & $38.1 \times 26.8$ & (-1.2) & +2 \\ 0331+39 & 16.5 & 0.37 & $17.5 \times 14.7$ & (-11.1) & 0.18 & $38.7 \times 24.9$ & (1.3) & +2 \\ 0648+27$^{c}$ & 16.5 & 0.41 & $25.5 \times 11.2$ & (-0.4) & 0.14 & $48.1 \times 24.2$ & (1.0) & +1 \\ 0722+30$^{d}$ & 20.6 & 0.29 & $11.9 \times 10.6$ & (2.8) & 0.19 & $19.0 \times 17.5$ & (-24.5)& +2 \\ 0924+30 & 16.5 & 0.51 & $25.1 \times 8.6$ & (10.2) & 0.42 & $50.7 \times 19.0$ & (10.2) & +1 \\ 1040+31 & 16.5 & 1.18 & $29.8 \times 9.7$ & (19.0) & 0.52 & $60.5 \times 24.1$ & (20.7) & +2 \\ 1108+27 & 20.6 & 0.47 & $13.4 \times 11.1$ & (-41.8) & 0.31 & $19.5 \times 19.0$ & (88.8) & +2 \\ 1122+39 & 20.6 & 0.79 & $15.0 \times 10.5$ & (-62.8) & 0.50 & $21.7 \times 17.7$ & (-81.3)& +2 \\ 1217+29$^{e}$ & 16.5 & - & - & - & 0.37 & $28 \times 14$ & (11) & 0 \\ 1321+31 & 20.6 & 0.36 & $11.5 \times 11.3$ & (-25.6) & 0.28 & $16.5 \times 16.0$ & (-57.3)& +2 \\ 1322+36 & 20.6 & 0.30 & $13.3 \times 
12.5$ & (-72.8) & 0.26 & $15.1 \times 14.3$ & (-86.6)&+0.5 \\ 1447+27 & 16.5 & 0.70 & $34.4 \times 13.6$ & (-1.9) & 0.41 & $61.6 \times 24.6$ & (-0.9) & +2 \\ 1658+30 & 16.5 & 0.39 & $23.8 \times 12.6$ & (1.9) & 0.20 & $41.8 \times 24.6$ & (1.3) & +2 \\ 2116+26 & 16.5 & 0.38 & $25.2 \times 12.6$ & (-3.4) & 0.19 & $45.3 \times 24.6$ & (-1.7) & +2 \\ 2229+39 & 16.5 & 0.38 & $18.7 \times 13.2$ & (1.5) & 0.19 & $35.6 \times 25.9$ & (-2.4) & +2 \\ & & & & & & & & \\ 1557+26 & 16.5 & 0.64 & $23.9 \times 11.8 $& (-0.3) & 0.39 & $43.9 \times 20.7$ & (-0.1) & +1 \\ NGC~3894 & 8.2 & 0.71 & $13.2 \times 11.5$ & (0.4) & 0.48 & $46.1 \times 41.3$ & (0.0) & +2 \\ \end{tabular}\\ \flushleft {Notes -- $\Delta$v = channel separation; (1) = noise level (mJy beam$^{-1}$); (2) = beam-size (arcsec$^{2}$); (3) = position angle ($^{\circ}$); (4) = robustness parameter. {\sl References:} a). \citet{mor09_NGC315}; b). Struve et al. (in prep.); c). \citet{emo06}; d). \citet{emo09}; e). \citet{mor06b}} \end{table*} \subsection{Optical imaging} \label{sec:obsOpt} Deep optical B- and V-band images were taken for all the radio galaxies in our sample with large-scale H{\,\small I}\ gas detected in emission. The observations of all but one of our objects were done on 12, 13 and 14 March 2007 at the Hiltner 2.4m telescope of the Michigan-Dartmouth-MIT (MDM) observatory, located at the southwestern ridge of Kitt Peak, Arizona (USA). Imaging was done using the Echelle CCD, resulting in a field-of-view (F.o.V.) of $9.5 \times 9.5$ arcmin. B2~0258+35 was observed on 15 November 2006 at the Hiltner 1.3m MDM telescope with the Templeton CCD, resulting in a F.o.V. of $8.5 \times 8.5$ arcmin. All observations were taken in relatively good to moderate seeing ($1-2$ arcsec) and under photometric conditions. Table \ref{tab:optparam} summarises the observational parameters.
Because we are interested in studying very faint stellar features, detection of those features in both B- and V-band data assures their validity. In this paper we present the available B-band imaging, although we note that all the features presented and discussed in this paper are also detected in our V-band data.\footnote{For B2~0648+27 we present the co-added B+V-band data, as described in \citet[][]{emo08_0648}.} \begin{table} \centering \caption{Optical observations} \vspace{0.4cm} \label{tab:optparam} \begin{tabular}{llcccc} B2 Source & MDM & Obs. date & Int. time & airmass \\ \hline 0258+35 & 1.3m & 15/11/06 & 60 min& 1.0-1.1 \\ 0648+27 & 2.4m & 13/03/07 & 60 min& 1.0-1.1 \\ 0722+30 & 2.4m & 13/03/07 & 60 min& 1.0-1.2 \\ 1217+29 & 2.4m & 14/03/07 & 40 min& 1.1-1.2 \\ 1322+36 & 2.4m & 14/03/07 & 45 min& 1.1-1.3 \\ NGC~3894 & 2.4m & 14/03/07 & 50 min& 1.1 \\ \end{tabular} \end{table} We used the Image Reduction and Analysis Facility ({\tt IRAF}) to perform a standard data reduction (bias subtraction, flat-fielding, frame alignment and cosmic-ray removal). Probably due to minor shutter issues, a gradient was present in the background of the $9.5 \times 9.5$ arcmin CCD images obtained with the Echelle CCD. We were able to remove this effect to a significant degree by fitting a gradient to the background in the region surrounding our targeted objects and subsequently subtracting this background-gradient from our data. This method worked better for galaxies that covered only a small part of the CCD's F.o.V. (B2~0648+27 and B2~0722+30) than for galaxies that covered a large fraction of the CCD (B2~1217+29, NGC~3894 and B2~1322+36). The residual errors in the background subtraction are still visible in Fig. \ref{fig:HIsample}. This background issue made it impossible to obtain reliable flux and colour information from the B- and V-band images (in particular in the low-surface brightness regions). 
We therefore did not attempt an absolute or relative flux calibration of our sources. Using KARMA, we applied a world coordinate system to the images by identifying a few dozen foreground stars in a Sloan Digital Sky Survey (SDSS) image of the same region. The newly applied coordinate system agrees with that of the SDSS image to within 1 arcsec. This is good enough for comparing the optical with the H{\,\small I}\ data, since the latter have a much lower resolution (see Table \ref{tab:linedataproperties}). \section{Results} \label{sec:results} As can be seen in Table \ref{tab:hiproperties}, H{\,\small I}\ in emission is associated with seven of the 23 radio galaxies in our sample, while nine sources show indications for H{\,\small I}\ absorption against the radio continuum. Total intensity images of the H{\,\small I}\ emission-line structures are shown in Fig. \ref{fig:HIsample}, together with deep optical imaging of their host galaxies \citep[B2~0055+30 is presented in][and therefore not repeated in Fig. \ref{fig:HIsample}]{mor09_NGC315}. Table \ref{tab:hiradiogalaxies} summarises the H{\,\small I}\ properties. For five of the seven detections (B2~0258+35, B2 0648+27, B2 0722+30, B2 1217+29 and NGC 3894) the H{\,\small I}\ gas is distributed in a regularly rotating disc- or ring-like structure with a mass of a few $\times 10^{8} - 10^{10} M_{\odot}$ and a diameter of several tens to hundreds of kpc. For two radio galaxies (B2~0055+30 and B2~1322+36), patchy H{\,\small I}\ emission is observed, but the total mass associated with it is comparable to the upper limits that we derive for the non-detections. Since B2~0055+30 and B2~1322+36 are in the middle part of the redshift range of our sample sources, it is possible that limited sensitivity prevents the detection of similar patchy, low-mass H{\,\small I}\ emission in the higher-redshift sources in our sample.
\begin{table*} \centering \caption{H{\,\small I}\ emission and absorption results} \vspace{0.4cm} \label{tab:hiproperties} \begin{tabular}{l|ccc|ccc} Source & H{\,\small I}\ & H{\,\small I}\ mass & H{\,\small I}\ contours & H{\,\small I}\ & $\tau$ & $N_{\rm HI}$ ($T_{\rm spin} = 100$K) \\ B2 name& emission & ($\times 10^{8} M_{\odot})$ & ($\times 10^{20}$ cm$^{-2}$) & absorption & ($\%$) & $\times 10^{20}$ cm$^{-2}$ \\ \hline 0034+25 & - & $<$1.6 & - & - & $<$8.7 & $<$16 \\ 0055+30$^{a}$ & + & 0.66 & - & + & \multicolumn{2}{l}{\ \ \ \ 1\ \ \ \ \ (broad) \ \ \ \ 2.5} \\ & & & - & & \multicolumn{2}{l}{\ \ \ \ 5\ \ \ \ \ (narrow)\ \ \ 4.5} \\ 0104+32 & - & $<$0.41 & - & - & $<$0.3 & $<$0.6 \\ 0206+35 & - & $<$1.6 & - & - & $<$0.3 & $<$0.5 \\ 0222+36 & - & $<$1.8 & - & (+) & 1.0 & 1.0 \\ 0258+35$^{b}$ & + & 180 & 0.26, 0.77, 1.3, 1.8, 2.3, 2.8, 3.3, 2.9, 4.4 & + & 0.23 & 1.2 \\ 0326+39 & - & $<$0.82 & - & - & $<$1.5 & $<$2.8 \\ 0331+39 & - & $<$0.55 & - & - & $<$0.2 & $<$0.3 \\ 0648+27$^{c}$ & + & 85 & 0.22, 0.36, 0.52, 0.71, & + & 0.74 & 2.8 \\ & & & 0.95, 1.2, 1.5, 1.8, 2.1 & & & \\ 0722+30$^{d}$ & + & 2.3 & 0.63, 1.2, 1.8, 2.7, 3.3, 4.3, 5.3, 6.3, 7.7& + & 6.4 & 29 \\ 0924+30 & - & $<$2.0 & - & - & $<$38 & $<$69 \\ 1040+31$^{\dagger}$ & - & $<$4.9 & - & - & $<$1.1 & $<$2.0 \\ 1108+27 & - & $<$2.8 & - & - & $<$2.0 & $<$3.7 \\ 1122+39 & - & $<$0.19 & - & - & $<$11 & $<$20 \\ 1217+29$^{e}$ & + & 6.9 & 0.1, 0.25, 0.5, 1.0, 2.5 & - & $<$0.15$^{\ddagger}$ & $<$0.27 \\ 1321+31 & - & $<$0.59 & - & (+) & \multicolumn{2}{l}{\ \ \ 5.5\ \ \ (nuc.)\ \ \ \ \ \ \ \ 4.9} \\ & & & - & & \multicolumn{2}{l}{\ \ \ 25\ \ \ \ (lobe)\ \ \ \ \ \ \ \ 36} \\ 1322+36 & + & 0.69 & 1.7, 2.3, 2.8 & + & 1.3 & 3.0 \\ 1447+27 & - & $<$2.8 & - & (+) & 0.87 & 2.9 \\ 1658+30 & - & $<1.7$ & - & - & $<$1.3 & $<$2.3 \\ 2116+26 & - & $<$0.34 & - & - & $<$1.3 & $<$2.4 \\ 2229+39 & - & $<$0.40 & - & - & $<$0.7 & $<$1.2 \\ & & & - & & & \\ 1557+26 & - & $<$5.5 & - & - & $<$6.2 & $<$11 \\ NGC~3894 & + & 22 & 
0.17, 0.49, 0.87, 1.7, 3.2, 4.6 & + & 4.1 & 14 \\ \end{tabular} \flushleft {`+' = detection, `(+)' = tentative detection, `-' = non-detection; Column 4 lists the H{\,\small I}\ contours as shown in Figure \ref{fig:HIsample}; $^{\dagger}$We note that the H{\,\small I}\ data of B2 1040+31 (taken with WSRT during service time) are of poor quality. Given the peculiar radio continuum morphology of B2 1040+31 \citep{par86}, this system deserves further H{\,\small I}\ follow-up; $^{\ddagger}$Possible confusion with H{\,\small I}\ emission; {\sl References:} a). \citet{mor09_NGC315}; b). Struve et al. (in prep.); c). \citet{emo06}; d). \citet{emo09}; e). \citet{mor06b}} \end{table*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure1a_astroph.eps} \caption{Total intensity maps of H{\,\small I}\ emission and deep optical imaging of our H{\,\small I}-detected sample sources. Three plots are shown for each source; the left plot shows the H{\,\small I}\ contours overlaid onto our deep optical image, the middle shows the same H{\,\small I}\ contours overlaid onto a high-contrast representation of the deep optical image, the right plot emphasises the optical features of the main stellar body of the galaxy from our deep imaging. Contour levels of H{\,\small I}\ emission are given in Table \ref{tab:hiproperties}.} \label{fig:HIsample} \addtocounter{figure}{-1} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure1b_astroph.eps} \caption{{\sl -- continued.} The left plots show the contours of H{\,\small I}\ emission overlaid onto our deep optical image (for B2~1322+36) or an optical SDSS image (for B2~1217+29 and NGC~3894), which has a larger field-of-view than our deep optical image. The H{\,\small I}\ image of B2~1217+29 is taken from \citet{mor06b}. The left plot of B2~1322+36 also shows the contours of radio continuum (grey) and H{\,\small I}\ absorption (white).
For B2~1322+36 and NGC~3894, position-velocity (PV) plots of H{\,\small I}\ emission (black contours) and absorption (grey contours) are also shown, taken along the lines indicated in the left plots. The middle plots of B2~1217+29 and NGC~3894 show a high-contrast representation of our deep optical image. The right plots emphasise the optical features of the main stellar body of the host galaxies (in particular the faint dust lanes in B2~1217+29 and NGC~3894) as well as several companion systems from our deep imaging. For all plots, contour levels of H{\,\small I}\ emission are given in Table \ref{tab:hiproperties}. See \citet[][]{emo07} for more details on contour levels of H{\,\small I}\ absorption, radio continuum and the PV-plots of B2~1322+36 and NGC~3894.} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure2_astroph.eps} \caption{Central H{\,\small I}\ absorption profiles of our sample sources. The plot of B2~0055+30 is taken from \citet{mor09_NGC315}, while the plot of B2~0258+35 is from Struve et al. (in prep.). The velocities are given in optical definition. The bar indicates the systemic velocity traced with optical emission lines. Values of v$_{\rm sys}$ are taken from NED (unless otherwise indicated in Appendix \ref{app:hiproperties}). The arrow indicates the direction of increasing redshift velocity -- the right and left pointing arrows correspond to the WSRT and VLA data respectively. 
Our classification of `tentative' is based on the weakness of the `signal' in combination with the quality of the data-cubes.} \label{fig:absorption} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figure3_astroph.eps} \caption{Tentative H{\,\small I}\ absorption profile of B2~1321+31 against the outer western radio lobe.} \label{fig:absorption1321_extended} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure4_astroph.eps} \caption{Histogram of the H{\,\small I}-emission detections and non-detections in our sample regarding the total power of the radio source at 1.4 GHz {\sl (left)} as well as $M_{\rm V}$ {\sl (middle)} and morphological type {\sl (right)} of the host galaxy. (Values are taken from Table \ref{tab:sourceproperties}.)} \label{fig:histograms} \end{figure*} The upper limits for the non-detections are estimated assuming a potential 3$\sigma$ detection smoothed across a velocity range of 200 km\ s$^{-1}$\ (therefore resembling the large-scale H{\,\small I}\ structures that we detect): \begin{equation} \frac{M_{\rm upper}}{M_{\odot}} = 2.36 \cdot 10^{5} \times D^{2} \times S_{3\sigma} \times \Delta v \times \sqrt{\frac{200\ \rm {km/s}}{\Delta v}}, \label{eqn:3_upper} \end{equation} where $S_{3\sigma}$ is the 3$\sigma$ noise level per channel (in Jy beam$^{-1}$, from the robust weighted data), $D$ the distance to the galaxy (in Mpc) and $\Delta v$ the channel width (in km\ s$^{-1}$) -- see Tables \ref{tab:sourceproperties} and \ref{tab:linedataproperties}. H{\,\small I}\ absorption is unambiguously detected against the radio continuum of six of our sample sources, while another three show tentative evidence for absorption, which has to be confirmed with additional observations (see Fig. \ref{fig:absorption}). All sources for which H{\,\small I}\ has been detected in emission also show unambiguous H{\,\small I}\ absorption, except B2~1217+29.
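As a numerical illustration, the upper-limit estimate of Eq.~(\ref{eqn:3_upper}) can be sketched in a few lines of Python; the distance, noise level and channel width used here are purely illustrative assumptions, not values taken from our observations:

```python
# Minimal sketch of the upper-limit mass formula above; all input
# values are illustrative assumptions, not measured quantities.
import math

def hi_mass_upper_limit(distance_mpc, s_3sigma_jy, dv_kms, smooth_kms=200.0):
    """Upper limit on the H I mass (solar masses) for a non-detection,
    assuming a 3-sigma signal smoothed over `smooth_kms` km/s."""
    return (2.36e5 * distance_mpc**2 * s_3sigma_jy * dv_kms
            * math.sqrt(smooth_kms / dv_kms))

# Hypothetical example: D = 70 Mpc, 3-sigma noise of 1 mJy/beam per
# 16 km/s channel, smoothed over 200 km/s.
m_upper = hi_mass_upper_limit(70.0, 1.0e-3, 16.0)  # ~6.5e7 solar masses
```

The square-root factor accounts for the reduction in noise when the channels are smoothed to a 200 km\ s$^{-1}$\ wide profile.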
For all nine sources that show (tentative) absorption, an H{\,\small I}\ absorption profile is seen against the central region of the galaxy. For eight of the nine sources the central absorption is spatially unresolved; only for B2~1322+36 is the absorption slightly extended against the resolved radio continuum (see Fig. \ref{fig:HIsample}). Two sources (B2~0055+30 and B2~1321+31) show evidence for multiple absorption components. B2~0055+30 shows two components against the central radio continuum \citep[a broad and a narrow one, see][\ and Appendix \ref{app:hiproperties}]{mor09_NGC315}, while B2~1321+31 shows a second, spatially unresolved, tentative component against the outer edge of one of the radio lobes (see Fig. \ref{fig:absorption1321_extended}). Table \ref{tab:hiproperties} includes both components for these two sources, and lists the optical depth and column density of the absorption features. The optical depth ($\tau$) is calculated from: \begin{equation} e^{-\tau} = 1 - \frac{S_{\rm abs}}{S_{\rm cont}}, \label{eqn:3_tau} \end{equation} where $S_{\rm abs}$ is the peak flux density of the absorption and $S_{\rm cont}$ the flux density of the underlying radio continuum. Subsequently, the H{\,\small I}\ column density ($N_{\rm HI}$) is given by: \begin{equation} N_{\rm HI}\ (\rm cm^{-2}) = 1.8216 \cdot 10^{18} \times T_{\rm spin} \times \int \tau(v)\ dv, \label{eqn:3_NHI} \end{equation} where $v$ is the velocity and $T_{\rm spin}$ the typical spin temperature of the H{\,\small I}\ gas, assumed to be 100 K. The H{\,\small I}\ column densities have been derived assuming a covering factor of 1 for the gas that overlies these radio sources.
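For illustration, Eqs.~(\ref{eqn:3_tau}) and (\ref{eqn:3_NHI}) can be sketched as follows; the flux densities and line width below are hypothetical, and the velocity integral is approximated by the peak optical depth times the line width:

```python
# Minimal sketch of the optical-depth and column-density formulas
# above; the flux densities and line width are illustrative
# assumptions, not measured quantities.
import math

def optical_depth(s_abs, s_cont):
    """Peak optical depth tau from e^-tau = 1 - S_abs/S_cont;
    for weak absorption tau is close to S_abs/S_cont."""
    return -math.log(1.0 - s_abs / s_cont)

def hi_column_density(tau_peak, line_width_kms, t_spin=100.0):
    """H I column density (cm^-2), assuming a covering factor of 1 and
    approximating the integral of tau(v) by tau_peak * line width."""
    return 1.8216e18 * t_spin * tau_peak * line_width_kms

# Hypothetical example: a 5 mJy absorption dip against a 500 mJy
# continuum source, over a 100 km/s wide line, T_spin = 100 K.
tau = optical_depth(0.005, 0.5)        # ~0.01 (1 per cent)
n_hi = hi_column_density(tau, 100.0)   # ~1.8e20 cm^-2
```

At such low optical depths $\tau \approx S_{\rm abs}/S_{\rm cont}$, so a 1 per cent absorption dip translates directly into $\tau \approx 0.01$.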
For non-detections an upper limit is calculated assuming a potential 3$\sigma$ detection (uniform weighting) spread over 100 km\ s$^{-1}$.\\ \vspace{0mm}\\ Appendix \ref{app:hiproperties} gives a detailed description of the individual objects in which H{\,\small I}\ is detected in emission or absorption. In the remainder of this Section we will describe the general H{\,\small I}\ properties of the sample as a whole. Section \ref{sec:hiemission} will summarise the H{\,\small I}\ emission properties, followed by the H{\,\small I}\ absorption results in Sect. \ref{sec:hiabsorption}.\\ \subsection{H{\tiny \ }{\small I} emission} \label{sec:hiemission} H{\,\small I}\ emission has been detected in 7 of the 23 sample galaxies. When taking into account only the complete sample (so without B2 1557+26 and NGC 3894; Sect. \ref{sec:sample}), our detection rate is 29$\%$. In Sect. \ref{sec:radioquiet} we compare this detection rate with that of radio-quiet early-type galaxies. Before summarising the sample properties in detail, we first check whether the presence of large-scale H{\,\small I}\ emission-line gas depends on some important parameters of the galaxies in our sample. Figure \ref{fig:histograms} shows histograms of the distribution of both the H{\,\small I}\ detections and non-detections regarding the optical morphological class and absolute visual magnitude ($M_{\rm V}$) of the host galaxy, as well as the total power ($P_{\rm 1.4GHz}$) of the radio source. Although we have to be careful with our small-number statistics, there is no apparent bias in detecting H{\,\small I}\ regarding these various observables (see Sect. \ref{sec:hiemission_optical} for a more in-depth discussion on the optical morphologies of the host galaxies). 
It is interesting to note that the range of absolute visual magnitudes among our sample sources covers the intermediate- and high-mass end of the galaxies from the Sloan Digital Sky Survey used in the Colour-Magnitude relations by \citet{bal04}. Within this range, we do not observe a difference in H{\,\small I}\ content among early-type galaxies of different mass in our sample. Above an H{\,\small I}\ mass of $10^{8} M_{\odot}$, all H{\,\small I}\ structures detected in our sample are fairly regularly rotating discs or rings (although a varying degree of asymmetry is still visible in these structures). We find no clear evidence for {\sl ongoing} gas-rich mergers in the form of long gaseous tidal debris (although B2~0648+27 is clearly a post-merger system; see Appendix \ref{app:hiproperties}). One potential worry is that the sensitivity of our H{\,\small I}\ observations is not ideal for detecting low surface brightness tidal features that are not (yet) settled. \citet*{gre04} show that, as the sensitivity for detecting H{\,\small I}\ in emission decreases, complicated velocity structures in H{\,\small I}\ tend to wash out and the H{\,\small I}\ often takes on a smoother, more regularly rotating appearance. Nevertheless, in \citet{emo06thesis} we showed examples of galaxies within the F.o.V. of our radio sources (but physically unrelated to them) that do show extended and complex tidal H{\,\small I}\ structures, demonstrating that our observations are sensitive enough to detect such features at the redshift of our sample sources. Table \ref{tab:hiradiogalaxies} gives $M_{\rm H{\small\ I}}/L_{\rm V}$ for our H{\,\small I}\ detected radio galaxies.
The large spread in $M_{\rm H{\small\ I}}/L_{\rm V}$ for these galaxies (ranging from 0.0005 to 0.44 $M_{\odot}/L_{\odot}$) is consistent with a large spread of $M_{\rm H{\small\ I}}/L_{\rm B}$ found by \citet*{kna85} and \citet{mor06b} for elliptical galaxies (because our B-band data were not optimised for photometric studies, we had to rely on available V-band magnitudes from the literature; see Table \ref{tab:sourceproperties}). The large spread in $M_{\rm H{\small\ I}}/L_{\rm B}$ for elliptical galaxies compared to spiral galaxies has led \citet{kna85} and \citet{mor06b} to conclude that the H{\,\small I}\ gas in ellipticals is decoupled from the stars and has an external origin. The possible formation mechanism of the large-scale H{\,\small I}\ discs/rings in our sample sources will be discussed in detail in Sect. \ref{sec:HIrich}. Perhaps the most intriguing result from our H{\,\small I}\ study is that galaxies with large amounts of extended H{\,\small I}\ ($M_{\rm HI} \ga 10^{9} M_{\odot}$) all have a {\sl compact} radio source, while none of the host galaxies of the more extended FR-I type radio sources shows similar amounts of H{\,\small I}. This is illustrated in Fig. \ref{fig:HImasssize}, where we plot the total mass of H{\,\small I}\ detected in emission against the linear size of the radio sources. In Paper{\,\small I}\ we already presented and discussed this segregation in large-scale H{\,\small I}\ content between compact sources and extended FR{-\small I}\ sources, suggesting that {\sl there is a physical link between the properties of the central radio source and the large-scale properties of the ISM}. In Sect. \ref{sec:relation} we will briefly review our conclusions from Paper{\,\small I}. All but one (B2~1322+36) of the radio sources in our sample that contain H{\,\small I}\ emission are also detected in the infra-red (IR) at 60$\mu$m \citep[see Table \ref{tab:sourceproperties}; IR data are taken from][and references therein]{imp93}. 
However, as can be seen from Fig. \ref{fig:iras}, when taking into account the distance to the sample sources and hence converting the IR flux density to IR luminosity (using the simple conversion $L_{\nu} = 4 \pi D^{2} S_{\nu}$), there is no clear correlation between large-scale H{\,\small I}\ mass content and IR luminosity. It is interesting, though, that B2~0648+27 and B2~0722+30 have by far the highest IR luminosity of our sample sources (both at 60 and 100$\mu$m). The 60$\mu$m emission is expected to trace cool dust that could predominantly be heated by young stars \citep[e.g.][]{san96}. Indeed, spectral analysis revealed evidence for a prominent young stellar population throughout the host galaxy for both B2~0648+27 and B2~0722+30 \citep{emo06,emo09}. Based on the IR luminosity alone, we do not expect young stellar populations as prominent as in B2~0648+27 and B2~0722+30 to be present in the other radio sources in our sample \citep[although a smaller contribution from young stars cannot be ruled out; see e.g.][]{wil04}. \subsubsection{Optical morphology} \label{sec:hiemission_optical} Despite the morphological and kinematical similarity of the large-scale H{\,\small I}\ discs/rings, the optical host galaxy morphology of the H{\,\small I}\ detected radio galaxies varies significantly (see Fig. \ref{fig:HIsample}). While the merger remnant B2~0648+27 shows a distorted structure and a faint stellar ring, B2~0722+30 and B2~0258+35 contain a clear stellar disc. These three systems therefore contain a (faint) optical counterpart to the large-scale H{\,\small I}\ structure. By contrast, B2~1217+29 and NGC~3894 have the apparent morphology of dust-lane ellipticals and only a bulge component is visible in our deep optical imaging.
We note, however, that these two galaxies fill a substantial part of the CCD and serious limitations in the background subtraction of the optical data limit our ability to trace faint stellar light across the region where the H{\,\small I}\ stretches (see Sect. \ref{sec:observations}). Moreover, NGC~3894 contains a very faint dust-lane that stretches in the same direction as the H{\,\small I}\ disc. Since the H{\,\small I}\ disc is viewed edge-on, it is possible that significant extinction may obscure a very faint stellar counterpart to the large-scale H{\,\small I}\ disc. \begin{table*} \centering \caption{{\sl HI around radio galaxies.} Given is the name, the NGC number, the total H{\,\small I}\ mass detected in emission, the diameter of the H{\,\small I}\ structure (or distance to the host galaxy for B2 1322+36), the peak in H{\,\small I}\ surface density, the relative H{\,\small I}\ content and the morphology of the H{\,\small I}\ structure (D = disc, R = ring, C = cloud).} \label{tab:hiradiogalaxies} \begin{tabular}{llcccccc} $\#$ & B2 Name & NGC & M$_{\rm HI}$ & D$_{\rm HI}$ & $\Sigma_{\rm HI}$ & $M_{\rm HI}/L_{\rm V}$ & Mor. \\ & & & (M$_{\odot}$) & (kpc) &(M$_{\odot}$/pc$^{2}$) & ($M_{\odot}/L_{\odot})$ & H{\,\small I} \\ \hline \hline 1 & 0055+30$^{\ a}$ & 315 & 6.8$\times$10$^{7}$ & 10 & 1 & 0.0005 & C \\ 2 & 0258+35$^{\ b}$ & 1167 & 1.8$\times$10$^{10}$ & 160 & 2.7 & 0.44 & D \\ 3 & 0648+27$^{\ c}$ & - & 8.5$\times$10$^{9}$ & 190 & 1.7 & 0.052 & R \\ 4 & 0722+30$^{\ d}$ & - & 2.3$\times$10$^{8}$ & 15 & 4.1 & 0.017 & D \\ 5 & 1217+29$^{\ e}$ & 4278 & 6.9$\times$10$^{8}$ & 37 & 2.0 & 0.022 & D \\ 6 & 1322+36 & 5141 & 6.9$\times$10$^{7}$ & 20 & 3.7 & 0.002 & C \\ 7 & - & 3894 & 2.2$\times$10$^{9}$ & 105 & 3.8 & 0.028 & R/D \\ \hline \hline \end{tabular}\\ \vspace{1mm} {\sl References:} a). \citet{mor09_NGC315}; b). Struve et al. (in prep.); c). \citet{emo06}; d). \citet{emo09}; e). 
\citet{mor06b} \end{table*} \subsubsection{H{\tiny \ }{\small I} environment} \label{sec:environment} Many of our sample sources contain H{\,\small I}-rich galaxies in their environment. However, with the exception of the late-type galaxy B2 0722+30 \citep{emo09}, none of our targeted B2 radio galaxies shows any obvious evidence in H{\,\small I}\ for ongoing interactions with H{\,\small I}-rich companions (in the form of tidal-bridges, -arms or -tails). Features such as the clouds of H{\,\small I}\ gas in between B2~1322+36 and its companion, the faint tails of H{\,\small I}\ gas stretching off the disc in B2~1217+29 and the slight distortion in the H{\,\small I}\ discs around B2~0258+35 and NGC~3894 could, however, present more subtle indications for less violent, gas-poorer or older interactions. A quantitative study of gas-rich companions in the environment of our sample sources would be interesting for estimating the H{\,\small I}\ accretion rate/probability in nearby radio galaxies through such less violent galaxy encounters. However, this is beyond the scope of the current paper and will be presented in a future publication by Struve et al. (in prep). \subsection{H{\tiny \ }{\small I} absorption} \label{sec:hiabsorption} H\,{\sc i} is unambiguously detected in absorption against the radio continuum for 6 of the 23 sample sources, while three more sources show a tentative detection (see Fig. \ref{fig:absorption}). The detection rate of H{\,\small I}\ absorption in our complete sample is therefore $24-38 \%$ (depending on whether or not the three tentative detections are included). For the compact sources and extended FR{-\small I}\ sources, the detection rates are $33-67 \%$ and $20-27 \%$ respectively. Our detection rate of extended FR{-\small I}\ sources is slightly higher than that derived by \citet{mor01} (who detect H{\,\small I}\ absorption in 10$\%$ of FR{-\small I}\ sources from the 2-Jy sample). 
However, when excluding the rare disc-dominated radio galaxy B2~0722+30 from our statistics (see Appendix \ref{app:hiproperties}), our detection rate drops to $14-21 \%$. This is in reasonable agreement with the values found by \citet{mor01}, given the low number statistics in their 2-Jy sample and the fact that their upper limits on the optical depth are almost a factor 2 larger than those in our B2 sample. It is interesting to note, however, that the FR{-\small I}\ radio sources in the 2-Jy sample of \citet{mor01} are on average more than an order of magnitude more powerful at 1.4 GHz than the FR{-\small I}\ sources in our B2 sample. These H{\,\small I}\ absorption results therefore suggest that more powerful FR{-\small I}\ radio sources do {\sl not} have a higher detection rate of H{\,\small I}\ absorption compared with less powerful FR{-\small I}\ sources. Our detection rate of compact sources is in good agreement with the detection rate of $54 \%$ that \citet*{pih03} derive for a large sample of Gigahertz Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources, despite the significantly lower radio power of our sources. Interestingly, \citet{pih03}, and later \citet[][]{gup06}, detect an anti-correlation between the projected linear size of compact radio sources and the H{\,\small I}\ column density. Since this anti-correlation is attributed to a gradient in the distribution of the cool ISM in the central region of the radio galaxies, it seems that it is not immediately related to the segregation that we find in H{\,\small I}\ content between compact and extended sources (Sect. \ref{sec:hiemission}). For most cases, the peak of the H{\,\small I}\ absorption appears to coincide with the systemic velocity as derived from optical emission lines. For B2~0055+30, the H{\,\small I}\ absorption is clearly redshifted with respect to the systemic velocity and part of it could represent gas falling into the nucleus \citep{mor09_NGC315}. 
For two other sources (B2~0222+36 and B2~1321+31) there are also indications that the peak of the H{\,\small I}\ absorption is redshifted with respect to v$_{\rm sys}$, but these detections are only tentative and we argue that the uncertainty in v$_{\rm sys}$ determined from optical emission lines is too large to make any claims. Many of the H{\,\small I}\ absorption profiles in our sample are resolved in velocity and show both blue- and redshifted components, consistent with what is frequently observed in compact radio sources \citep{ver03,pih03,gup06}. In contrast to the H{\,\small I}\ emission results, there is no clear trend between radio source size and the presence of H{\,\small I}\ absorption. We note, however, that the strength of the underlying radio continuum as well as the geometry of the absorbing H{\,\small I}\ gas are important selection effects that influence our absorption results, while they are not relevant for detecting H{\,\small I}\ in emission. All the galaxies in our sample that are unambiguously detected in absorption also show H{\,\small I}\ emission-line structures at the same velocity. In fact, only one galaxy in the sample that is detected in emission is not detected in absorption, namely B2~1217+29, although \citet{mor06b} show that the large-scale H{\,\small I}\ matches the ionised gas in the central region of this galaxy very well. The remaining three H{\,\small I}\ absorption systems show only tentative detections.
{\sl This strongly suggests that at least a significant fraction of the H{\,\small I}\ gas in many nearby absorption systems is part of gaseous structures on scales larger than just the (circum)-nuclear region.} This is also in agreement with the idea that FR{-\small I}\ sources do not necessarily require a geometrically thick torus, as already suggested by \citet{mor01} from the above mentioned low detection rate of H{\,\small I}\ absorption among FR{-\small I}\ sources in the 2-Jy sample \citep[although we note that there are FR{-\small I}\ sources for which the AGN is hidden by dust, e.g.][]{lei09}. The H{\,\small I}\ absorption characteristics of B2~0055+30 and B2~1322+36 indicate that extended H{\,\small I}\ does occur in some FR{-\small I}\ radio galaxies, but generally in much lower amounts than that associated with a significant fraction of the compact sources in our sample. However, the low detection rate of off-nuclear H{\,\small I}\ detected in absorption against the extended radio lobes of FR{-\small I}\ sources also suggests that such extended H{\,\small I}\ structures are not a commonly observable feature among FR{-\small I}\ radio galaxies. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{figure5_astroph.eps} \caption{Total H{\,\small I}\ mass detected in emission plotted against the linear size (diameter) of the radio sources. In case of non-detection a tight upper limit (3$\sigma$ across 200 km s$^{-1}$) is given. The numbers correspond to the sources as they are given in Table \ref{tab:hiradiogalaxies}.} \label{fig:HImasssize} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figure6_astroph.eps} \caption{Total H{\,\small I}\ mass detected in emission plotted against the 60$\mu$m (left) and 100$\mu$m (right) IRAS luminosity. The arrows represent upper limits. 
The numbers correspond to the sources as they are given in Table \ref{tab:hiradiogalaxies}.} \label{fig:iras} \end{figure*} \section{Discussion} \label{sec:discussion} The H{\,\small I}\ results presented in this paper provide -- for the first time -- a systematic insight into the properties of cold gas in nearby, low-power radio galaxies. In this Section we discuss these H{\,\small I}\ results in order to investigate the nature of low-power radio galaxies in more detail. In Sect. \ref{sec:HIrich} we first summarise the H{\,\small I}\ characteristics of our sample and conclude that, generally, we find no clear evidence for ongoing gas-rich galaxy mergers and interactions among our sample sources. Section \ref{sec:relation} discusses the fact that large amounts of H{\,\small I}\ gas are only found around compact radio sources in our sample, while extended FR{-\small I}\ sources lack similar amounts of H{\,\small I}\ gas. The possible explanations for this segregation in H{\,\small I}\ mass with radio source size are revisited from Paper{\,\small I}. In Sect. \ref{sec:radioquiet} the H{\,\small I}\ properties of our sample of low-power radio galaxies are compared with those of radio-quiet early-type galaxies, and from that comparison we find no clear differences. Section \ref{sec:nature} summarises our understanding of the nature of low-power radio galaxies. \subsection{H{\,\small I}\ characteristics} \label{sec:HIrich} Large-scale H{\,\small I}\ is associated with seven of the radio galaxies in our sample. The overall H{\,\small I}\ properties of our sample sources show a morphological trend in the sense that toward the high-mass end all the H{\,\small I}\ structures are fairly regularly rotating discs/rings. Only the two H{\,\small I}\ detections at the low-mass end (several $\times 10^{7} M_{\odot}$) appear much more irregular/clumpy. 
The regular kinematics of the large-scale H{\,\small I}\ discs/rings in our sample suggest that the gas is either settled or in the process of settling. The morphology of these large-scale H{\,\small I}\ structures is fairly uniform, although their sizes differ significantly (from 15 to 190 kpc) and their optical host galaxies show a range of morphologies. In this Section we discuss in detail what physical processes may have formed the observed H{\,\small I}\ structures in our sample. This provides us with information about the evolutionary history of the host galaxy, which will be useful for the remainder of the Discussion.\\ \vspace{0mm}\\ {\sl Major merger or collision:}\\ We find no evidence for {\sl ongoing} gas-rich major mergers (i.e. mergers between galaxies with roughly equal mass) or massive galaxy collisions - in the form of large-scale tidal tails or bridges of H{\,\small I}\ gas - among our sample sources. However, as we described in detail in Paper{\,\small I}, the formation of a large-scale disc-like structure may be the natural outcome of a major merger between gas-rich galaxies over the time-scale of one to several Gyr \citep[which at the same time also results in the formation of an early-type host galaxy from the merging systems;][]{hib96}. This scenario has been unambiguously verified only for B2~0648+27, whose H{\,\small I}\ ring is gaseous tidal debris that is settling after a major merger occurred roughly 1.5 Gyr ago \citep[][see also Appendix \ref{app:hiproperties}]{emo06,emo08_0648}. For the other large-scale H{\,\small I}\ discs in our sample, such a formation history is not immediately obvious; their host galaxies appear to have a more regular optical morphology without evidence for prominent stellar tidal features (Fig. \ref{fig:HIsample}) and they do not show evidence for a young stellar population as prominent as in B2~0648+27 across the bulge region \citep{emo06thesis}. 
Nevertheless, the surface brightness of the large-scale H{\,\small I}\ discs/rings is probably too low for vigorous star formation to occur and hence they are likely to survive for many Gyr. It is possible that the large, regular disc of B2~0258+35 and the elliptical morphologies of NGC~3894 and B2~1217+29 reflect more evolved stages in the evolution of a merger system compared with B2~0648+27.\\ \vspace{-2mm}\\ {\sl Galaxy interactions:}\\ None of the radio galaxies in our sample shows clear signs of ongoing gas-rich interactions with nearby companions. The only exception is the rare disc-dominated radio galaxy B2~0722+30, which shows that -- perhaps under specific circumstances -- a classical radio source can occur in a system that is undergoing gas-rich interactions \citep[][see also Sect. \ref{sec:seyferts}]{emo09}. Nevertheless, our H{\,\small I}\ results indicate that low-power radio galaxies in general are not associated with violent, ongoing galaxy interactions that involve more than a few $\times 10^{8} M_{\odot}$ of H{\,\small I}\ gas. In Sect. \ref{sec:environment} we already mentioned that smaller amounts of patchy H{\,\small I}\ emission (such as the clouds of H{\,\small I}\ emission observed in B2~1322+36) or slight distortions in the large-scale H{\,\small I}\ discs could possibly be more subtle indications for less violent, gas-poorer or older galaxy interactions.\\ \vspace{-2mm}\\ {\sl Accretion of small companions:}\\ The presence of small amounts of H{\,\small I}\ gas ($\la \rm few \times 10^{8} M_{\odot}$), for example in the case of B2~0055+30, could potentially be the result of the (continuous) accretion of gas from small companions. Such events are unlikely to leave obvious observational evidence of the actual accretion (or even a noticeable change in the total amount of H{\,\small I}\ gas) at the sensitivity of our observations.
From a much more sensitive study of H{\,\small I}\ in early-type systems, \citet[][see Sect. \ref{sec:radioquiet}]{oos09} suggest that accretion of cold gas -- likely over long time-scales -- may be a common feature among field early-type galaxies. They estimate, however, that the typical observable {\sl total} H{\,\small I}\ accretion rate is smaller than 0.1 $M_{\odot}$ yr$^{-1}$ \citep[compared to at least 0.2 $M_{\odot}$ yr$^{-1}$ for field spiral galaxies;][]{san08}. We therefore argue that accretion of H{\,\small I}\ gas from small companion galaxies does not provide a sufficient explanation for the formation of the large-scale H{\,\small I}\ discs (with an H{\,\small I}\ mass of several $\times 10^9 - 10^{10} M_{\odot}$) that we detected in our sample: either the number of accretion events would have to be unphysically large, or the companion systems would have been large enough that the encounter would have resulted in a more violent galaxy-galaxy interaction or merger. It could, however, explain the presence of small clouds of H{\,\small I}\ gas within the host galaxy, as for example detected in B2~0055+30. As mentioned in Sect. \ref{sec:environment}, the rate at which accretion of small gas-rich companions occurs in low-power radio galaxies can potentially be estimated by studying in detail the environment of nearby low-power radio galaxies (and comparing this with their radio-quiet counterparts), but this is beyond the scope of the current paper.\\ \vspace{-2mm}\\ {\sl Cold accretion of the IGM:}\\ \citet{ker05} show that gas from the inter-galactic medium (IGM) can be cooled along filamentary structures without being shock-heated, resulting in the accretion of cold gas onto the host galaxy. According to \citet{ser06}, building a gaseous disc of about $10^{10} M_{\odot}$ through cold accretion is certainly viable and takes many Gyr.
On smaller scales, \citet{kau06} show that through the cooling of hot halo gas, cold gas can be assembled onto a galactic disc. Cold accretion thus seems a viable process for forming -- over long timescales -- the range of H{\,\small I}\ structures that we observe in our sample.\\ \vspace{0mm}\\ Of course, the galaxies in our sample are evolving continuously and it is certainly possible that a combination of the above-mentioned mechanisms has occurred during their formation history. For example, the regular appearance of the H{\,\small I}\ disc in B2~1217+29, combined with the typical elliptical morphology of the host galaxy, suggests that the system is old and that the H{\,\small I}\ disc was created a long time ago. However, the two tails of H{\,\small I}\ gas that stretch from either side of the disc \citep{mor06b} also suggest that the system is currently still accreting gas. The possibility that large-scale H{\,\small I}\ structures can be gradually assembled during the evolutionary history of early-type galaxies is supported by recent numerical simulations of `morphological quenching' by \citet{mar09}. They suggest that the transformation from stellar discs to spheroids will stabilise the gas disc, quench star formation and create a red and dead early-type system while gas accretion continues. We argue that a stabilising factor (whether from the transformation from discs to spheroids or from the bulges of the galaxies themselves), but certainly also the low column densities of the gas, mean that the presence and ongoing accretion of large reservoirs of cold gas may occur naturally in early-type galaxies. Despite the fact that the H{\,\small I}\ emission properties can be used to investigate the formation history of the gas-rich host galaxies in our sample, it is good to keep in mind that for the majority of our sample sources (71$\%$) {\sl no} H{\,\small I}\ emission-line structures have been detected.
As we will see in the next Section, this H{\,\small I}\ deficiency is particularly pronounced for the host galaxies of extended FR{-\small I}\ sources. In Sect. \ref{sec:nature} we will discuss in detail the nature of these H{\,\small I}-poor FR{-\small I}\ sources. \subsection{The `H{\tiny \ }{\small I} mass - radio size' segregation} \label{sec:relation} As mentioned in Sect. \ref{sec:hiemission}, large amounts of H{\,\small I}\ gas ($M_{\rm HI} \ga 10^9 M_{\odot}$) are only associated with the host galaxies of compact radio sources in our sample, while none of the host galaxies of the more extended FR{-\small I}\ radio sources shows similar amounts of large-scale H{\,\small I}. A well-known compact radio source from the literature that also contains a massive large-scale H{\,\small I}\ disc (59 kpc in diameter and with $M_{\rm HI} = 1.5 \times 10^{10} M_{\odot}$ for $H_{0} = 71$ km s$^{-1}$ Mpc$^{-1}$) is the nearby GPS source PKS~B1718-649 \citep[$P_{\rm 1.4 GHz} = 10^{24.2}$ W Hz$^{-1}$;][]{ver95,tin97}. In Paper{\,\small I}, we already discussed the observed segregation in large-scale H{\,\small I}\ mass content between compact and extended radio sources in our sample. It suggests that there is a physical link between the properties of the radio source and the presence of large-scale H{\,\small I}\ structures. In this Section, we will review the possible explanations for the observed segregation, as discussed previously in Paper{\,\small I}.\\ \ \\ {\large {\sl Radio source ionisation/heating}}\\ \vspace{-2mm}\\ In Paper{\,\small I}\ we discarded the possibility that large-scale H{\,\small I}\ discs/rings similar to those observed around our H{\,\small I}-rich compact radio sources are fully ionised when the radio jets propagate outward. Although such a process may be viable when the radio jets are aligned in the plane of the disc \citep[as has been seen for Coma~A;][]{mor02}, such a chance alignment is not expected to occur frequently.
Indications that radio jets propagating perpendicular to a large-scale H{\,\small I}\ structure are not efficient in ionising the neutral gas on tens to hundreds of kpc scale come from the recent discovery of an enormous H{\,\small I}\ disc in the nearby powerful radio galaxy NGC~612 \citep[][see also Sect. \ref{sec:NGC612}]{emo09} and from the presence of a 15 kpc wide central H{\,\small I}\ disc in the vicinity of fast propagating radio continuum jets in Centaurus~A \citep[][see also Sect. \ref{sec:nature}]{cro09}. In addition, extensive emission-line studies show that FR{-\small I}\ radio galaxies generally do not contain features of ionised gas as extended, massive and regularly rotating as the H{\,\small I}\ discs/rings that we find around a significant fraction of our compact radio sources \citep{bau88,bau89,bau92}. The situation may be different if the large-scale H{\,\small I}\ discs/rings originated from the hot IGM that cooled and condensed onto the host galaxy in its neutral state (see Sect. \ref{sec:HIrich}). For X-ray luminous clusters and galaxy groups, \citet{bir04} and \citet{mcn05} showed that expanding X-ray cavities, produced by powerful radio jets that interact with the hot IGM, can in many cases quench cooling of this hot gas. \citet{bes06} argued from empirical evidence that moderately powerful radio sources in particular (similar in power to the FR{-\small I}\ sources in our sample) are most effective in self-regulating the balance between cooling and heating of the hot gas surrounding these systems through recurrent activity. If this effect is strong enough and also occurs outside cluster environments, recurrent activity in extended FR{-\small I}\ radio galaxies may prevent H{\,\small I}\ structures from forming in the first place.
An argument against this scenario could be the recent discovery of a very extended relic structure of radio continuum in our compact sample source B2~0258+35 (to be published in a forthcoming paper by Struve et al. [in prep.]). This indicates that the (recurrent) radio source in B2~0258+35 has not remained compact over the long time-scales required to build up a large-scale H{\,\small I}\ disc through cold accretion.\\ \smallskip\\ {\large {\sl Confinement}}\\ \vspace{-2mm}\\ A possible explanation for the observed segregation between H{\,\small I}\ mass and radio source size is that the H{\,\small I}-rich compact radio sources do not grow into extended sources because they are confined or frustrated by ISM in the central region of the galaxy. If the large amounts of H{\,\small I}\ gas at large radii reflect the presence of significant amounts of gas in the central region \citep[e.g. as a result of a major merger, which both expels gas at large-scales and transports gas into the central kpc-scale region;][]{bar02}, the central ISM may be responsible for frustrating the radio jets if these are not too powerful. Interaction with the ambient medium has been suggested for each of the four most compact and most H{\,\small I}-rich radio sources in our sample \citep{gir05a,tay98,gir05b}. Although it is not clear how much gas is needed to confine a radio source \citep*[see e.g. the discussion by][]{hol03}, \citet{gir05a,gir05b} argue that the relatively low power radio sources in NGC 4278 and B2 0648+27 cannot bore through the local ISM.\\ \smallskip\\ {\large {\sl Fuelling efficiency}}\\ \vspace{-2mm}\\ Alternatively, while large amounts of cold gas may provide sufficient material for fuelling the AGN, its distribution may be clumpy and the fuelling process may be inefficient.
For example, while in a galaxy merger the geometry and the conditions of the encounter may be favourable to forming the observed large-scale H{\,\small I}\ structures and even deposit significant amounts of gas in the central kpc-scale region, they may perhaps not be efficient in continuously channelling gas to the very inner pc-scale region. This may prevent stable continuous fuelling of the AGN, so that large-scale radio structures do not develop. \citet*{sax01} argue that galaxy mergers can also temporarily interrupt the AGN fuelling process. They show that this likely happened in the nearest FR{-\small I}\ radio galaxy Centaurus~A (see Sect. \ref{sec:nature}), where the minor merger that formed the H{\,\small I}\ disc also likely shut down the radio-AGN for a period of $\sim 10^{2}$ Myr, until re-started activity formed the compact inner radio lobes that we see today. It is also possible that the radio jets drive out substantial amounts of H{\,\small I}\ gas from the centre [as observed in the nearby Seyfert galaxy IC~5063 \citep{oos00} as well as more powerful radio sources \citep{mor03apj593,mor05,mor05b}], terminating the fuelling process of the compact sources in our sample. \citet{gir05a} observed that the current radio source in B2~0258+35 displays variable levels of activity, suggestive of inefficient fuelling, and is therefore not expected to grow beyond the kpc-scale. As we will discuss in detail in Sect. \ref{sec:nature}, it has often been suggested that extended FR{-\small I}\ sources are fed through the accretion of hot circum-galactic gas.
This likely results in a steady fuel supply, which allows these sources to grow to their large size before feedback effects may kick in \citep[e.g.][]{all06}.\\ \smallskip\\ \noindent In conclusion, we therefore argue that the observed `H{\,\small I}\ mass - radio size' segregation in our sample is most likely the result of either confinement/frustration of compact radio jets by a central dense ISM, or inefficient fuelling of a significant fraction of compact jets compared with a more steady fuelling of extended FR{-\small I}\ sources. It is quite conceivable that a combination of both processes is at work in an environment where re-started radio sources repeatedly have to fight their way through a dense ISM until the fuelling process is temporarily halted.\\ \smallskip\\ \indent A similar segregation in H{\,\small I}\ mass with radio source size is found for high-$z$ radio galaxies by \citet{oji97}. While they detect strong H{\,\small I}\ absorption in the majority of the galaxies' Ly$\alpha$ halos, they also find that 90$\%$ of the smaller ($<$50 kpc) radio sources have strong associated H{\,\small I}\ absorption, whereas only a minority of the more extended sources contain detectable H{\,\small I}\ absorption. Van Ojik et al. prefer the explanation that these small radio sources reside in dense, possibly (proto) cluster environments, where large amounts of neutral gas can exist and where the radio source vigorously interacts with the ambient gaseous medium (although other possible scenarios are also discussed). Although the radio galaxies in our sample are much less powerful and were selected {\sl not} to lie in dense cluster environments, it is nevertheless intriguing that we find a similar segregation in H{\,\small I}\ content between compact and extended sources in the nearby Universe.
\subsection{Comparison with radio-quiet early-type galaxies} \label{sec:radioquiet} Over the past two decades, case studies of early-type galaxies have imaged large-scale H{\,\small I}\ structures associated with these systems \citep[see e.g.][]{dri88,dri89,sch94,sch95,mor97,gor97,sad00,oos01,oos02,ser06,don09}. Many of these large-scale H{\,\small I}\ structures have an H{\,\small I}\ mass and morphology similar to the structures that we find around the H{\,\small I}-rich radio galaxies in our sample. Even a significant fraction of early-type galaxies that optically can be classified as `dry' merger systems (i.e. systems that supposedly formed as a result of a major merger between red and dead galaxies without any gaseous component), is found to contain significant amounts of cool gas when observed with radio telescopes \citep{don07,ser10}. Recently, \citet{oos07,oos09} have completed two studies that obtained quantitative results on the occurrence and morphology of large-scale H{\,\small I}\ in early-type galaxies (not selected on radio loudness). These two studies are therefore ideally suited for a detailed comparison with the H{\,\small I}\ properties of our complete sample of B2 radio galaxies. The first study by \citet{oos07} involved follow-up imaging of H{\,\small I}\ in early-type galaxies detected by \citet{sad01} in the single-dish H{\,\small I}\ Parkes All-Sky Survey \citep[HIPASS;][]{bar01HIPASS,mey04}. The second study by \citet{oos09} involved deep H{\,\small I}\ imaging of 33 nearby early-type galaxies selected from a representative sample of early-type galaxies observed with the optical integral field spectrograph SAURON. Of these 33, 20 are field early-type galaxies \citep[an extension of earlier work done by][]{mor06b}, while 13 are Virgo-cluster systems. Since we excluded cluster sources from our B2 radio galaxy sample (Sect.
\ref{sec:sample}), we will only take into account the field early-type galaxies from the SAURON sample in the remainder of this Section. The early-type galaxies from the HIPASS and SAURON samples have a typical radio power $P_{\rm 1.4 GHz} < 10^{22}$ W Hz$^{-1}$, i.e. significantly lower than that of our B2 sample of radio galaxies. The B2 and the SAURON sample have one source in common, namely B2~1217+29, which is by far the strongest radio source in the SAURON sample and the second weakest source in our B2 sample. It is interesting to note that the only two objects in the HIPASS sample with $P_{\rm 1.4 GHz} > 10^{22.1}$ W Hz$^{-1}$ both have a {\sl compact} radio source as well as significant amounts of large-scale H{\,\small I}\ gas, in agreement with the trend that we find in our B2 sample (Sect. \ref{sec:relation}). \begin{table} \centering \caption{H{\,\small I}\ detection rates of the various samples of early-type galaxies} \label{tab:detectionrates} \begin{tabular}{l|c|c|c} & HIPASS & B2 & SAURON \\ & & & (field sample) \\ \hline \hline $\#$ galaxies & 818 & 21$^{*}$ & 20 \\ detection limit ($M_{\odot}$) & $\sim 10^{9}$ & ${\rm few} \times 10^{8}$ & ${\rm few} \times 10^{6}$ \\ detection rate ($\%$) & 9$^{\dag}$ & 29 & 70 \\ \hline $\%$ with $M_{\rm HI} > 10^{9} M_{\odot}$ & 9$^{\dag}$ & 10 & 10 \\ $\%$ with $M_{\rm HI} \ga 10^{8} M_{\odot}$ & - & 29 & 35 \\ \hline \hline \end{tabular}\\ \vspace{2mm} \flushleft{{\small $^{*}$ Complete B2 sample, does not include NGC~3894 and B2~1557+26 (see Sect. \ref{sec:sample}).\\ $^{\dag}$ From Sadler et al. (in prep.); for early results see \citet{sad01}.}} \end{table} Table \ref{tab:detectionrates} summarises both the H{\,\small I}\ detection limits and the H{\,\small I}\ detection rates of the HIPASS, B2 and SAURON samples. There is a substantial difference in sensitivity between the three samples, which makes a comparison of the H{\,\small I}\ detection rates difficult.
Nevertheless, when looking at the high-mass end, the percentage of sample sources with $M_{\rm HI} \ga 10^{9} M_{\odot}$ is roughly the same for the three samples. Towards the low-mass end, the percentage of radio-quiet galaxies in the SAURON field sample with $M_{\rm HI} \ga 10^{8} M_{\odot}$ is also very similar to that of our B2 sample of radio-loud galaxies. In the presence of a radio continuum source, low amounts of H{\,\small I}\ gas could be observed in absorption rather than emission. When including the additional three galaxies from our B2 sample with a tentative H{\,\small I}\ absorption detection, the H{\,\small I}\ detection rate for our sample of radio-loud early-type galaxies rises to 43$\%$. Therefore -- within the significant uncertainty due to non-uniform sensitivity and relatively low number statistics -- there does not appear to be a significant difference in the H{\,\small I}\ total mass content between the radio-loud and radio-quiet samples.\footnote{We note again that these detection rates are based on non-cluster early-type galaxies; \citet{dis07} and \citet{oos09} show that the H{\,\small I}\ detection rate of early-type galaxies in the Virgo Cluster is dramatically lower.} Regarding the morphology, about two-thirds of the H{\,\small I}\ structures that are imaged in the HIPASS follow-up study are large and regularly rotating discs or rings \citep{oos07}, similar to the H{\,\small I}\ structures at the high-mass end of the B2 sample. The morphology of the H{\,\small I}\ structures in the SAURON sample is diverse, with H{\,\small I}\ morphologies ranging from regular rotating discs to irregular clouds, tails and complex distributions. Here too, the strongest H{\,\small I}\ detections are often regular disc/ring-like structures, although the good sensitivity of these observations clearly reveals more complex kinematics than observed for the HIPASS and B2 samples \citep[][]{mor06b,oos09}.
At the low-mass end, the H{\,\small I}\ structures in the SAURON sample often have a much more irregular or clumpy appearance (as do B2~1322+36 and B2~0055+30 in our radio-loud B2 sample). Thus -- as far as we can tell from the limited comparison between the three systematic studies -- there appears to be no major difference in either H{\,\small I}\ detection rate or H{\,\small I}\ morphology between the radio-quiet and radio-loud early-type galaxies in these samples. Certainly, across the range of masses that we studied in this paper, there is no evidence that our radio-loud sample has a higher content of large-scale H{\,\small I}\ gas or contains more tidally distorted H{\,\small I}\ structures than the radio-quieter samples. {\sl If confirmed by larger samples with comparable sensitivity, this may indicate that the radio-loud phase could be just a short period that occurs at some point during the lifetime of many -- or maybe even all? -- early-type galaxies.} This would add to the growing evidence that radio-AGN activity can be an episodic or recurrent phenomenon \citep[see e.g.][for a review]{sai10}. These conclusions are in agreement with a recent study of CO in nearby radio galaxies by \citet{oca10}, who find no difference in the molecular hydrogen (H$_{2}$) mass content between their sample of nearby radio galaxies and a sample of genuine early-type galaxies by \citet*{wik95}. Our results also agree with the finding by \citet{bet01} and \citet{bet09} that -- regarding low-power radio AGN -- both radio and non-radio ellipticals follow the same Fundamental Plane and Core Fundamental Plane. \citet{cap06} \citep[also][]{cap05,bal06} show that radio-loud AGN occur only in early-type galaxies with a shallow inner cusp (`core-galaxies'), while those with steep (power-law) cusps solely harbour AGN that are radio-quiet. In that case, a radio-loud phase could be a common feature only among `core' early-type galaxies.
Nevertheless, \citet{cap06} also show that -- apart from the properties of the central cusp -- this radio-loud/radio-quiet dichotomy is not apparently related to other properties of the host galaxy. \subsection{The nature of low-power radio galaxies} \label{sec:nature} \subsubsection{FR{-\small I}\ sources} The lack of detectable amounts of H{\,\small I}\ in most FR{-\small I}\ radio galaxies is in agreement with the growing evidence that the AGN in these systems are {\sl not} associated with galaxy mergers, collisions or violent ongoing interactions that involve significant amounts of cool gas. While this was already suggested from optical studies by \citet{hec86} and \citet{bau92} (see Sect. \ref{sec:intro}), our H{\,\small I}\ results provide -- for the first time in a systematic way -- {\sl direct} evidence for the lack of cold gas that would be associated with such violent events. Various studies suggest that low-power FR{-\small I}\ radio sources generally also lack evidence for a thick torus and classical accretion disc \citep{chi99,mor01}. Furthermore, there is growing evidence that these low-power radio sources are likely fed through a quasi-spherical accretion of hot gas from the galaxy's halo or IGM directly onto the nucleus \citep{bes06,all06,balma08}. As we already mentioned in Sect. \ref{sec:intro}, \citet{har07} agree with such a scenario, but extend this idea in the sense that all low-excitation AGN (including almost all -- but not exclusively -- FR{-\small I}\ sources) may share this accretion mechanism \citep[see also][]{bal08}. We argue that the lack of large amounts of H{\,\small I}\ gas in extended FR{-\small I}\ sources, as well as the similarity in H{\,\small I}\ properties between our sample of low-power radio galaxies and radio-quiet(er) early-type galaxies, is in agreement with the growing evidence that the AGN in FR{-\small I}\ radio galaxies is generally fed through the steady accretion of hot circum-galactic gas.
Although accretion of hot IGM directly onto the central engine provides a good explanation for the H{\,\small I}\ properties of FR{-\small I}\ radio galaxies, other possible feeding mechanisms need to be considered. As mentioned in Sect. \ref{sec:HIrich}, our observations cannot rule out that much less violent, gas-poor or old interactions may be associated with FR{-\small I}\ galaxies. \citet{col95} argue that elliptical-elliptical mergers -- often referred to as `dry' mergers -- may occur frequently among FR{-\small I}\ sources. Since a mass accretion rate as little as $10^{-3} - 10^{-5} M_{\odot}$ yr$^{-1}$ may be sufficient to power a radio source \citep[e.g.][]{gor89}, even a relatively dry merger (which does not contain observable amounts of H{\,\small I}\ gas) could potentially still carry enough fuel to feed the radio source for a significant time. Given these low mass accretion rates, it may even be conceivable that stellar mass-loss processes \citep[e.g.][]{wil00} are able to deliver the potential AGN-fuel to the central region. It may also be possible that, over long time-scales, (continuous) accretion of gas can build up a concentration or even a disc of gas and dust in the central region \citep[which are known to exist in FR{-\small I}\ radio galaxies;][]{ver99,cap00,rui02}, which may potentially provide the fuel supply for the AGN. As mentioned in Sect. \ref{sec:HIrich}, either cold accretion from the IGM or minor accretion events of companion galaxies (which do not leave observable amounts of H{\,\small I}\ debris) are potential processes that may drive this accretion. \citet{oos09} find an intriguing trend that `normal' early-type galaxies that are detected in H{\,\small I}\ (but often with an H{\,\small I}\ mass lower than the detection limit in our sample) are more likely to contain a very faint and (in many cases) very compact radio continuum component compared with early-type galaxies that are not detected in H{\,\small I}. 
This suggests that the cold gas contributes -- at least to some extent -- to the feeding of a very low-power radio-AGN in some early-type galaxies. We note, however, that the radio continuum sources in these systems are several orders of magnitude less powerful than the classical `low-power' radio sources in our B2 sample, hence it is quite conceivable that there are substantial differences between these two types of AGN (though a more detailed comparison certainly deserves further attention). We therefore conclude that our H{\,\small I}\ results are in agreement with the growing evidence that classical FR{-\small I}\ radio sources are fed through the steady accretion of hot circum-galactic gas and {\sl not} by violent gas-rich galaxy mergers and interactions, but that there are other possible mechanisms that cannot be ruled out.\\ \vspace{2mm}\\ {\it Centaurus~A}\\ \vspace{-2mm}\\ The lack of detectable amounts ($\ga \rm few \times 10^{8} M_{\odot}$) of large-scale H{\,\small I}\ in nearby FR{-\small I}\ radio galaxies seems, at first sight, to be in contradiction with H{\,\small I}\ observations of Centaurus~A. Cen~A is by far the nearest FR{-\small I}\ radio galaxy and hence has been studied in much greater detail than any other radio galaxy. Cen~A has an extended (650 kpc) FR{-\small I}\ radio source and contains a total of $6 \times 10^8 M_{\odot}$ of H{\,\small I}\ gas \citep[$4.5 \times 10^8 M_{\odot}$ in a central 15 kpc disc and $1.5 \times 10^8 M_{\odot}$ in faint outer shells;][]{gor90,sch94,str09}. From X-ray observations, \citet{kra03} and \citet{cro07,cro09} argue that Cen~A may be a non-typical low-power radio galaxy in that it shares some properties (supersonic expansion of one of the inner radio lobes and a high intrinsic absorption of the nucleus) with more powerful FR{-\small II}\ radio galaxies, which are often associated with gas-rich galaxy mergers.
Indeed, it has been argued that the H{\,\small I}\ structures in Cen~A formed as a result of a minor merger \citep{sch94}, so it is certainly possible that Cen~A is not a `typical' FR{-\small I}\ radio galaxy. On the other hand, \citet[][]{str09} find no evidence from the H{\,\small I}\ properties that the minor merger event in Cen~A is responsible for fuelling the current episode of radio-AGN activity \citep[in fact,][\ suggest that this minor merger was responsible for temporarily shutting down the radio-AGN rather than triggering the current episode of radio-AGN activity]{sax01}. \citet{str09} also show that, if Cen~A were located at the average distance of our B2 sample sources, a significant part of the H{\,\small I}\ disc would not be detectable in emission but in absorption. In addition, at large distances the relatively compact continuum from the inner radio lobes is likely to dominate the radio-source structure. From the H{\,\small I}\ results presented in this paper, there is thus no unambiguous evidence either for or against the idea that Cen~A is not a typical FR{-\small I}\ radio galaxy, but it does serve as a strong reminder of the observational limitations of our current sample. \subsubsection{Low-power compact sources} Contrary to the extended FR{-\small I}\ sources, a significant fraction of the low-power compact sources in our sample do contain enormous discs/rings of H{\,\small I}\ gas, some of which could be related to a past gas-rich merger event. However, we saw in Sect. \ref{sec:HIrich} that these H{\,\small I}\ structures are at least one to several Gyr old. The lifetime of extended, low-power radio sources is generally believed to be not more than about $10^8$ yr \citep[e.g.][]{par02} and the compact radio sources in our sample are believed to be even significantly younger \citep{gir05a,tay98}.
This suggests that the onset of the current episode of radio-AGN activity started long after the initial formation of these H{\,\small I}\ discs. In \citet{emo06} (where we studied the case of B2~0648+27) we argued that, in (post-)merger systems, significant time-delays between the initial merger and the onset of the radio-AGN activity may not be uncommon. Also, it is possible that there have been previous episodes of AGN activity -- as we mentioned in Sect. \ref{sec:radioquiet}, growing evidence suggests that AGN activity could be episodic in nature \citep[e.g.][]{sai10} and there are indications that there is likely a high incidence of H{\,\small I}\ absorption associated with rejuvenated radio sources \citep{sai07,cha10}. Nevertheless, a direct causal connection between the formation of the H{\,\small I}\ discs/rings and the triggering of the {\sl current} episode of radio-AGN activity is not immediately apparent. Therefore, the feeding mechanism of these compact radio sources remains ambiguous. However, the `H{\,\small I}\ mass - radio size' segregation that we find (Sect. \ref{sec:relation}) indicates that the fuelling mechanism and/or the evolution of these H{\,\small I}-rich compact radio sources is fundamentally different from that of the extended FR{-\small I}\ sources and somehow related to the presence of large amounts of H{\,\small I}\ gas. \subsubsection{Comparison with Seyfert sources} \label{sec:seyferts} Our H{\,\small I}\ results on low-power radio galaxies are also interesting when compared with the properties of nearby Seyfert galaxies. Seyfert nuclei are often found in spiral galaxies and a significant fraction contains a very compact and low luminosity radio-AGN \citep[with a total radio power well below that of the low-power radio galaxies in our sample, e.g.][]{ho01}.
Thus, while disc-dominated Seyfert galaxies (with a prominent H{\,\small I}\ and stellar disc) often contain a very faint radio-AGN, we find that a significant fraction of much brighter compact radio sources is hosted by early-type galaxies with more diffuse large-scale H{\,\small I}\ discs and that classical, extended FR{-\small I}\ sources occur in early-type galaxies without a prominent disc. This may hint at a continuum in radio-source properties from late- to early-type galaxies. A more detailed study of the relation between radio-AGN activity and the host galaxy's disc properties across the full spectrum of galaxy morphologies deserves further investigation, but is beyond the scope of this paper. \citet{kuo08} and \citet{tan08} found from H{\,\small I}\ studies that local disc-dominated Seyfert galaxies generally show evidence for ongoing gas-rich interactions and that these interactions are important for the occurrence of the nuclear activity. From the lack of evidence of ongoing gas-rich interactions among our sample sources, we argue that the fuelling mechanism of low-power radio galaxies is likely fundamentally different from that of disc-dominated Seyfert galaxies. We note, however, that the Seyfert sources in the samples of \citet{kuo08} and \citet{tan08} all have a redshift comparable to the low-redshift range of our sample of radio galaxies, hence a more in-depth investigation would be necessary in order to determine to what extent sensitivity issues limit this comparison. Interestingly, the only disc-dominated FR{-\small I}\ radio galaxy in our sample (B2~0722+30) shows H{\,\small I}\ properties similar to those of local disc-dominated Seyfert systems (namely a regular H{\,\small I}/stellar disc and H{\,\small I}-rich interactions with companions).
This could mean that the host galaxy environment and AGN feeding mechanism of B2~0722+30 more closely resembles that of nearby Seyfert galaxies rather than that of low-power radio galaxies in general \citep[despite the clear evidence for a typical FR{-\small I}\ radio-AGN not commonly observed among Seyferts; see][]{emo09}. \subsubsection{Comparison with powerful (FR{-\small II}) sources} \label{sec:NGC612} As mentioned in Sect. \ref{sec:intro}, powerful radio galaxies with strong emission-lines have -- in contrast to our results on FR{-\small I}\ sources -- often been associated with gas-rich galaxy mergers or collisions. We recently found evidence for a large-scale (140 kpc) H{\,\small I}\ disc associated with the nearby powerful radio galaxy NGC~612 \citep[PKS~0131-36;][]{emo08_NGC612}. This radio source has clear FR{-\small II}\ properties and shows a faint H{\,\small I}\ bridge that stretches across 400 kpc toward a gas-rich companion galaxy, indicating that a collision between both systems likely occurred \citep{emo08_NGC612}. In a future paper we will investigate the large-scale H{\,\small I}\ properties of a small sample of nearby powerful FR{-\small II}\ radio galaxies, which will allow us to compare the general H{\,\small I}\ properties between low- and high-power (as well as low- and high-excitation) radio galaxies. \section{Conclusions} \label{sec:conclusions} From our study of large-scale H{\,\small I}\ in a complete sample of nearby low-power radio galaxies (compact and FR{-\small I}), we derive the following conclusions:\\ \vspace{-1mm}\\ {\sl i).} Our detection rate of H{\,\small I}\ emission directly associated with the radio galaxy is 29$\%$ (with a detection limit of $\sim 10^8 M_{\odot}$);\\ \vspace{-3mm}\\ {\sl ii).} We find {\sl no} evidence for {\sl ongoing} gas-rich galaxy mergers, collisions or violent interactions associated with the early-type host galaxies of low-power radio sources.
At the high-mass end, all the H{\,\small I}\ structures are fairly regularly rotating large-scale discs/rings, while at the low-mass end (several $\times 10^{7} M_{\odot}$) the H{\,\small I}\ distribution appears much more clumpy. The large-scale H{\,\small I}\ discs/rings are at least one to several Gyr old;\\ \vspace{-3mm}\\ {\sl iii).} There is a clear segregation in H{\,\small I}\ mass content between compact and extended radio sources in our sample. Large amounts of H{\,\small I}\ (with $M_{\rm HI} \ga 10^{9} M_{\odot}$) are only observed around host galaxies with a compact radio source, while none of the host galaxies of the more extended FR{-\small I}\ radio sources shows similar amounts of large-scale H{\,\small I}. This suggests that there is a physical link between the properties of the radio source and the presence of large-scale H{\,\small I}\ structures, which we ascribe most likely to either confinement/frustration of the compact radio sources by the presence of large amounts of gas, or to a lack of growth of the compact sources as a result of inefficient fuelling;\\ \vspace{-3mm}\\ {\sl iv).} Our H{\,\small I}\ results indicate that extended FR{-\small I}\ radio galaxies are generally hosted by H{\,\small I}-poor galaxies. Only low amounts of H{\,\small I}\ ($< 10^8 M_{\odot}$) have been detected in a small fraction of these systems. These results are in agreement with the growing belief that extended FR{-\small I}\ radio galaxies are fuelled through the accretion of their circum-galactic hot gas (although other mechanisms cannot be excluded);\\ \vspace{-3mm}\\ {\sl v).} From a limited comparison with samples of radio-quiet early-type galaxies, our complete sample of low-power radio galaxies shows no apparent difference in H{\,\small I}\ properties (detection rate, mass and morphology) compared with these radio-quiet samples.
If confirmed by larger samples with uniform sensitivity, this could mean that a classical low-power radio source may occur at some point during the lifetime of many -- or perhaps even all -- early-type galaxies (at least the ones with a shallow central cusp).\\ \section*{Acknowledgments} We would like to thank Jacqueline van Gorkom for her great help and useful discussions. Also many thanks to our referee Dhruba Saikia for valuable suggestions that improved this paper. BE thanks Columbia University, the Kapteyn Astronomical Institute and ASTRON for their hospitality during parts of this project and acknowledges the corresponding funding received from the University of Groningen and the Netherlands Organisation for Scientific Research - NWO (Rubicon grant 680.50.0508). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Westerbork Synthesis Radio Telescope is operated by the ASTRON (Netherlands Foundation for Research in Astronomy) with support from NWO. The Michigan-Dartmouth-MIT Observatory at Kitt Peak is owned and operated by a consortium of the University of Michigan, Dartmouth College, Ohio State University, Columbia University and Ohio University. The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \bibliographystyle{mn}
\section{Introduction} \vspace{2mm} It has now become fairly well established that the Universe is undergoing an accelerated expansion in recent times ($z \le 4$). Observational evidence for this mainly comes from supernovae Ia \cite{sn1a} and Cosmic Microwave Background anisotropies \cite{cmb}. Large scale structure formation \cite{lss}, baryon oscillations \cite{bao} and weak lensing \cite{weak} also suggest such an accelerated expansion of the Universe. One of the most challenging problems of modern cosmology is to identify the cause of this late time acceleration. Many theoretical approaches have been employed to explain the phenomenon of late time cosmic acceleration. A positive cosmological constant can lead to accelerated expansion of the universe, but it is plagued by the fine tuning problem \cite{lcdm}. The cosmological constant may either be interpreted geometrically as modifying the left hand side of Einstein's equation or as a kinematic term on the right hand side with the equation of state parameter $w=-1$. The second approach can further be generalized by considering a source term with an equation of state parameter $w<-1/3$. Such source terms have collectively come to be known as dark energy. Various scalar field models of dark energy have been considered in the literature \cite{scalar1,scalar2,scalar3,scalar4,scalar5,scalar6,scalar7,scalar8,scalar9,scalar10,scalar11,scalar12,scalar13,scalar14,scalar15,scalar16,scalar17}. As an alternative to dark energy as a source for the accelerated expansion, modification of the gravity part of the action has also been attempted \cite{fr}. In these models the gravity lagrangian contains, in addition to the scalar curvature $R$, a further term $f(R)$. The gravity action then becomes \begin{equation} {\cal S} = {1\over{2 \kappa^2}} \int d^{4}x \sqrt{-g} [R + f(R)]. \end{equation} However, in such models matter and gravity are still minimally coupled.
Despite the significant literature on such $f(R)$ models \cite{fr}, another interesting possibility which has not received due attention until recently is a non-minimal coupling between the scalar curvature and the matter lagrangian density \cite{nonmin}.\\ In this paper we study the evolution of the Hubble parameter for both minimal and non-minimal coupling between the scalar curvature and the matter lagrangian density. We use the exponential form for $f(R)$ proposed by Linder \cite{linder}: \begin{equation} f(R)=-C\left[ 1-\exp(-R/R_0)\right] \label{lind_R} \end{equation} where $C$ is the model parameter and $R_{0}$ is the present day curvature scale. We also attempt to place observational constraints on the parameters of this model for both minimal and non-minimal coupling of the scalar curvature with the matter lagrangian density. We find that there is an upper bound on the model parameter $C$ in the minimal coupling case. For non-minimal coupling between matter and gravity, the parameter $C$ is instead allowed to take values in a finite range. These bounds depend on the present day value of $q_0$ that we use. The paper is organised as follows: In section \ref{FR} we introduce the most general action for modified gravity. The equations of motion corresponding to this action are solved numerically for both minimal and non-minimal coupling of the scalar curvature with matter. We investigate the observational constraints on the model parameters in section \ref{obscon}. In section \ref{conclusions} we summarize the results. 
\section{$F(R)$ gravity models} \label{FR} \vspace{2mm} We start with the general action for modified gravity where the curvature is in general coupled with the matter lagrangian: \begin{equation} S = \int d^{4}x \sqrt{-g} \left[{1\over{2}} f_{1}(R) + [1+f_{2}(R)]{\cal{L}}_{m}\right], \label{action} \end{equation} where $f_{1}(R)$ and $f_{2}(R)$ are arbitrary functions of the Ricci scalar $R$ and ${\cal{L}}_{m}$ is the lagrangian density for matter, which we will assume to be non-relativistic. We use natural units, with the speed of light set to unity. The standard Einstein-Hilbert action is recovered with $f_{2}(R)=0$ and $f_{1}(R) = {R\over{\kappa^{2}}}$, where $\kappa^{2} = 8\pi G$. The standard $f(R)$ gravity class of models is recovered with $f_{2}(R) = 0$ and $f_{1}(R)= R+f(R)$, where $f(R)$ is an arbitrary function of $R$. In the latter case the gravity sector is modified while the matter is still minimally coupled to gravity. In what follows, we shall consider the cosmological evolution and the corresponding observational constraints for modified gravity models in both cases, namely, the one in which the curvature is coupled minimally as well as the one in which the curvature is non-minimally coupled with the matter lagrangian density. \begin{figure*}[t] \begin{tabular}{c@{\qquad}c} \epsfig{file=hzm1.eps,width = 7.5cm}& \epsfig{file=hzm2.eps,width = 8cm} \end{tabular} \caption{Behaviour of the Hubble parameter as a function of redshift. $\Omega_{m}=0.25$ and $q_{0} = -0.55$ for both panels. In the left panel, $c = 1.1,1.2,1.4$ from left to right, whereas in the right panel $c= 1.6,3,10,50$ from right to left.} \label{db_w_bf} \end{figure*} \subsection{Minimally coupled $f(R)$ gravity models} \vspace{2mm} As mentioned earlier, we recover the standard $f(R)$ gravity (i.e.\ the one with a minimal coupling of matter with gravity) if we set $f_{2}(R) = 0$ and $f_{1} = R + f(R)$ in the action defined in equation (\ref{action}). 
Now varying the action given in equation (\ref{action}) with respect to the metric tensor $g_{\mu\nu}$, we get the modified Einstein equation: \begin{equation} G_{\mu\nu}+f_RR_{\mu\nu}-(\frac{f}{2}-\Box f_R)g_{\mu\nu}-\nabla_\mu\nabla_\nu f_R=\kappa^2 T_{\mu\nu}, \label{mineins} \end{equation} \vspace{2mm} \noindent where $f_R \equiv \frac{df}{dR}$ and $f_{RR} \equiv \frac{d^2f}{dR^2}$. Assuming a flat Friedmann-Robertson-Walker spacetime with a scale factor $a(t)$: \begin{equation} ds^2 = -dt^2 +a^{2}(t)(dx^2 + dy^2 + dz^2), \label{frw} \end{equation} \vspace{2mm} the 0-0 component of the modified Einstein equation (\ref{mineins}) becomes \begin{equation} H^2+\frac {f}{6} -f_{R}(HH^\prime +H^2)+H^2f_{RR}R^\prime =\frac{\Omega_mH_0^2}{a^3}, \label{hmin} \end{equation} \vspace{2mm} \noindent where $\prime$ denotes differentiation with respect to $\ln a$, $\Omega_{m}$ is the matter energy density parameter today and $H_{0}$ is the Hubble parameter today. It is convenient to express the cosmological quantities involved in dimensionless form. Hence, we define the following dimensionless quantities: \begin{eqnarray} h &=& \frac{H}{H_{0}} \label{def}\\ x &=& \frac{R}{R_0} \\ {{\bar f}(x)} &=& \frac{f(R)}{R_{0}} \\ {{\bar f}_{x}} &=& \frac{d{\bar f}}{dx} = \frac{df}{dR} = f_{R} \\ {{\bar f}_{xx}} &=& \frac{d^2 {\bar f}}{dx^2} = R_{0}\frac{d^2 f}{dR^2}=R_{0} f_{RR} \\ \alpha &=& \frac{R_{0}}{H_{0}^2} = 6(1-q_{0}), \end{eqnarray} \vspace{2mm} \noindent where $R_{0}$ is the present curvature scalar and $q_{0}$ is the present day deceleration parameter. The evolution of $R$ with redshift $z$ is given by \begin{equation} \frac{dR}{dz}=6\left(3H\frac{dH}{dz}-(1+z)\left(\frac{dH}{dz}\right)^2-(1+z)H\frac{d^2H}{dz^2}\right). 
\end{equation} \vspace{2mm} \noindent With these definitions, one can write equation (\ref{hmin}) in terms of the redshift $z$ as \begin{eqnarray} h^2+(1-q_{\circ}){\bar f}+{\bar f}_x((1+z)h\frac{dh}{dz}-h^2)+& &\nonumber\\ \frac{(1+z)^2}{1-q_{\circ}}h^3 {\bar f}_{xx}(\frac{1}{h}(\frac{dh}{dz})^2 +\frac{d^2h}{dz^2} -\frac{3}{1+z}\frac{dh}{dz}) &=&{\Omega_{m}}(1+z)^3. \label{min} \end{eqnarray} Instead of working with a general $f(R)$, it is more useful to adopt a specific form. A simple choice is the ``Exponential Gravity'' form (\ref{lind_R}) proposed by Linder \cite{linder}. This can be expressed in dimensionless form by defining a dimensionless constant $c=C/R_0$. We then have \begin{equation} {\bar f}(x) = -c \left[ 1- \exp(-x)\right]. \label{lind} \end{equation} With this choice of ${\bar f}(x)$, we now solve equation (\ref{min}) to find $H(z)$. There are a number of investigations in which this equation has been solved and the model subsequently constrained by observational data. In all of these works, it is assumed that the universe behaves as a $\Lambda$CDM model in the past and subsequently deviates from that behaviour \cite{amna}. In this way, one sets the initial conditions for $h(z)$ and ${d h(z)}/{dz}$, assuming the model is close to $\Lambda$CDM in the past. One problem one usually faces in this approach is the choice of epoch at which to set the initial conditions. Depending upon the redshift at which one fixes the initial conditions, one may or may not get well-behaved solutions free of pathologies. In other words, one has to fine tune the initial conditions so as to get regular solutions. Although this issue arises in most of the relevant modified gravity models discussed in the literature, it has not, to the best of our knowledge, been sufficiently emphasized. 
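As an aside, the expression for $dR/dz$ used above follows from $R = 6(\dot H + 2H^2)$ together with $d/dt = -(1+z)H\,d/dz$; a quick finite-difference check (our own sketch, using an arbitrary smooth $\Lambda$CDM-like test function for $H(z)$, not a solution of the model) confirms it:

```python
import math

def H(z):
    # Any smooth test Hubble function will do; this LambdaCDM-like h(z)
    # is an arbitrary choice for the check, not a fit to the model.
    return math.sqrt(0.3 * (1 + z)**3 + 0.7)

def d(f, z, h=1e-5):
    # central first derivative
    return (f(z + h) - f(z - h)) / (2 * h)

def d2(f, z, h=1e-5):
    # central second derivative
    return (f(z + h) - 2 * f(z) + f(z - h)) / h**2

def R(z):
    # flat FRW: R = 6*(Hdot + 2H^2) with d/dt = -(1+z) H d/dz
    return 6 * (2 * H(z)**2 - (1 + z) * H(z) * d(H, z))

def dRdz_claim(z):
    # expression quoted in the text
    return 6 * (3 * H(z) * d(H, z)
                - (1 + z) * d(H, z)**2
                - (1 + z) * H(z) * d2(H, z))
```

Differentiating $R(z)$ numerically and comparing with the quoted expression agrees to well below the finite-difference error.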
To circumvent this problem, we take a different approach to solving equation (\ref{min}). We set the initial conditions at the present epoch, i.e.\ at $z=0$. In equation (\ref{min}), $h(z=0) = 1$ identically. For the second initial condition one can write ${dh}/{dz}\mid_{z=0} = 1+q_{0}$, where $q_{0}$ is the present day deceleration parameter, as mentioned before. Hence, the second initial condition is directly determined by the present day deceleration parameter, which will be one of the parameters in our model. We therefore have three parameters in our model: $c$, the parameter of the Exponential Gravity model, and two cosmological parameters, $\Omega_{m}$ and $q_{0}$. Using these conditions at today's epoch, we evolve our system from the present day (i.e., from $z=0$) backwards in time (i.e., towards increasing $z$). We stress that, in this approach, there is no extra assumption or fine tuning of the initial conditions needed to solve the evolution equation (\ref{min}). One of the initial conditions, $h(z=0)$, is fixed to $1$ by definition and the other one is related to $q_{0}$, which we shall constrain by observational data. We aim to investigate whether the cosmological evolution is well behaved as we go to earlier times, or whether it is plagued by singularities at high redshift. We further aim to study the role of the parameter $c$ in this context. As we discuss below, one indeed gets regular solutions up to arbitrarily high redshift for a certain range of values of the parameter $c$. For the remaining two parameters, we vary $\Omega_{m}$ between $0.25$ and $0.35$ and the present deceleration parameter $q_{0}$ between $-0.9$ and $-0.55$. We first investigate the behaviour of the normalized Hubble parameter ($h(z)=H(z)/H_0$) as a function of redshift for $c < 1.6$, as shown in figure 1. We have used three values of $c$, namely 1.1, 1.2 and 1.4, and we have assumed $\Omega_{m}= 0.25$ and $q_{0} = -0.55$. 
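The procedure just described can be sketched numerically. Solving equation (\ref{min}) for $d^2h/dz^2$, with $x = R/R_0 = \left[2h^2-(1+z)h\,dh/dz\right]/(1-q_0)$, gives a second-order ODE that can be integrated from $z=0$. The snippet below is an illustrative reconstruction, not the code used for the analysis; the use of scipy and the values $c=5$, $\Omega_m=0.25$, $q_0=-0.55$ are our own choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

c, Om, q0 = 5.0, 0.25, -0.55      # illustrative parameter choices
a = 1.0 - q0                      # shorthand for (1 - q0)

# Exponential gravity in dimensionless form: fbar(x) = -c (1 - e^{-x})
fbar = lambda x: -c * (1.0 - np.exp(-x))
fbar_x = lambda x: -c * np.exp(-x)
fbar_xx = lambda x: c * np.exp(-x)

def rhs(z, y):
    h, hp = y
    x = (2.0 * h**2 - (1 + z) * h * hp) / a        # x = R/R0
    A = (1 + z)**2 * h**3 * fbar_xx(x) / a         # coefficient of the h'' bracket
    hpp = ((Om * (1 + z)**3 - h**2 - a * fbar(x)
            - fbar_x(x) * ((1 + z) * h * hp - h**2)) / A
           - hp**2 / h + 3.0 * hp / (1 + z))
    return [hp, hpp]

# Initial conditions at z = 0: h = 1 by definition, dh/dz = 1 + q0
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 1.0 + q0],
                method='Radau', dense_output=True, rtol=1e-8, atol=1e-10)
h = lambda z: sol.sol(z)[0]
q = lambda z: (1 + z) * sol.sol(z)[1] / sol.sol(z)[0] - 1   # deceleration parameter
```

Some care is needed when pushing such an integration to high redshift, where the $e^{-x}$ factor in the coefficient of $d^2h/dz^2$ makes the equation increasingly stiff.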
It clearly shows that a singularity occurs at different redshifts for different values of $c$. Even if we vary $\Omega_{m}$ and $q_{0}$ in the range mentioned above, the overall behaviour does not change, in the sense that there is always a singularity at some redshift. However, the behaviour changes significantly once we assume $c \ge 1.6$, as shown in the right panel of figure 1, where we have again plotted $h(z)$ as a function of $z$ but with values of $c$ greater than 1.6. We evolve the system up to redshifts as high as $z=1000$, and $h(z)$ remains well behaved. As we mentioned earlier, although these plots are for specific choices of $\Omega_{m}$ and $q_{0}$, the behaviour of $h(z)$ is similar for any value of these two parameters in the range mentioned above, i.e., $0.25 < \Omega_m < 0.35$ and $-0.9 < q_0 < -0.55$. So we can conclude that for the model parameter $c > 1.6$, the model is regular up to arbitrarily high redshift. We also study the behaviour of the deceleration parameter $q(z)$ as a function of redshift. In terms of the redshift, $q(z)$ is given by \begin{equation} q(z) = (1+z)\frac{H'(z)}{H(z)}-1 \end{equation} The behaviour of $q(z)$ is shown in figure 2 for different values of the model parameter $c$, assuming $\Omega_{m} = 0.25$ and $q_{0} = -0.55$. It shows that in all cases the universe has an accelerated phase at present and, as we go back, it smoothly enters the decelerating phase. The lower the value of $c$, the earlier the universe enters the accelerated phase. \begin{figure}[t] \epsfig{file=qmin.eps,width=8 cm} \caption{Behaviour of the deceleration parameter $q(z)$ as a function of redshift. The lines are for $c = 2,5,10$ from bottom to top. We assume $\Omega_{m}=0.25$ and $q_{0}=-0.55$. } \label{qmin} \end{figure} \vspace{5mm} \subsection{Models with non-minimal matter-curvature coupling} \begin{figure}[t] \epsfig{file=hznm1.eps,width=8 cm} \caption{Behaviour of the Hubble parameter $h(z)$ as a function of redshift. 
The lines are for $c = 1.5,6,15,50$ from top to bottom. We assume $\Omega_{m}=0.25$ and $q_{0}=-0.55$. } \label{hznm} \end{figure} Here we extend the $f(R)$ gravity models by assuming a non-minimal coupling between the matter lagrangian ${\cal L}_{m}$ and the scalar curvature. We take the action to be \begin{equation} {\cal S} = \int d^{4}x \sqrt{-g} \left[{1\over{2\kappa^{2}}} R + (1+f(R)){\cal L}_{m}\right]. \label{action2} \end{equation} \vspace{2mm} Varying the above action with respect to $g_{\mu\nu}$ gives the modified version of Einstein's equation: \begin{equation} \phi R_{\mu\nu} -{1\over{2}}Rg_{\mu\nu} = \kappa^{2} (1+f(R)) T_{\mu\nu} + (\nabla_{\mu}\nabla_{\nu} - g_{\mu\nu}\Box)\phi, \label{einnon} \end{equation} where $\phi = 1 + 2\kappa^{2}{{\cal L}_{m}}{df\over{dR}}$. The Bianchi identity $\nabla^{\mu}G_{\mu\nu} = 0$ gives \begin{figure}[t!] \epsfig{file=compar.eps,width=8 cm} \caption{Behaviour of the Hubble parameter $h(z)$ as a function of redshift for the minimally coupled (solid line) and the non-minimally coupled case (dashed line). We assume $\Omega_{m}=0.25$, $q_{0}=-0.55$ and $c=5$. } \label{compar} \end{figure} \begin{equation} \nabla^{\mu}T_{\mu\nu} = {{df / {dR}}\over{1+f(R)}} (g_{\mu\nu} {\cal L}_{m} -T_{\mu\nu}) \nabla^{\mu}R, \label{cons} \end{equation} which implies the non-conservation of the matter energy momentum tensor. This is due to the non-minimal coupling between the matter lagrangian and the curvature, which results in an exchange of energy between the matter and the scalar degree of freedom present in the $f(R)$ gravity model. This exchange of energy is one important feature of this non-minimally coupled $f(R)$ gravity model. 
But once we assume the perfect fluid form for the matter energy momentum tensor (which is consistent with a homogeneous and isotropic universe) together with the form ${\cal L}_{m} = -\rho_{m}$ for the lagrangian density ($\rho_{m}$ being the matter energy density) \cite{rhom}, one can explicitly show that putting $\nu=0$ in the above equation, i.e.\ for the energy density conservation equation, one gets the usual equation ${\dot\rho_{m}} + 3H\rho_{m} = 0$. With this, together with the metric given by (\ref{frw}), one can write the 0-0 component of equation (\ref{einnon}) as \begin{equation} H \dot{\phi}+\frac{1}{6}f_1-(\dot H+H^2)\phi= H_{\circ}^2f_2 \Omega_m(1+z)^3+H_{\circ}^2 \Omega_m (1+z)^3, \label{hnon} \end{equation} \vspace{2mm} where the dot represents differentiation with respect to time. Further, comparing equations (\ref{action}) and (\ref{action2}), $f_1$ is identified with $R/{\kappa^2}$ and $f_2$ with the $f(R)$ given in equation (\ref{lind_R}). Changing the variable to redshift $z$, and expressing everything in terms of the dimensionless variables defined in (\ref{def}), one obtains from equation (\ref{hnon}) \begin{eqnarray} ((1+z)h\frac{dh}{dz}-h^2)(1-\frac{\Omega_m(1+z)^3 {\bar f}_{x}}{1-q_{\circ}})+(1-q_{\circ})x+\nonumber\\ h^3\frac{\Omega_m(1+z)^5{\bar f}_{xx}}{(1-q_{\circ})^{2}} (\frac{3}{1+z}\frac{dh}{dz}-\frac{1}{h}(\frac{dh}{dz})^2-\frac{d^2h}{dz^2})\nonumber\\ +\frac{3(1+z)^3\Omega_m {\bar f}_{x} h^2}{(1-q_{\circ})} = \Omega_m(1+z)^3(1+{\bar f}) \label{nonmin} \end{eqnarray}\vspace{2mm} \begin{figure}[t] \epsfig{file=qnmin.eps,width=8 cm} \caption{Behaviour of the deceleration parameter $q(z)$ as a function of redshift. The lines are for $c = 2,5,10$ from top to bottom. We assume $\Omega_{m}=0.25$ and $q_{0}=-0.55$. 
} \label{qnmin} \end{figure} \begin{figure*}[t] \begin{tabular}{c@{\qquad}c} \epsfig{file=contourmin.eps,width = 8cm}& \epsfig{file=contournonmin.eps,width = 8cm} \end{tabular} \caption{$1\sigma$ and $2\sigma$ contours in the $q_{0}-c$ plane for the minimally coupled (left panel) and non-minimally coupled (right panel) case using the observational data (explained in the text). The solid, dashed and dash-dotted lines are for $\Omega_{m} = 0.25, 0.3, 0.35$ respectively.} \end{figure*} where the subscript ``$x$'' denotes differentiation with respect to the variable $x$. The initial conditions are taken in a way similar to that described for the minimally coupled case in the previous section. The evolution of the Hubble parameter is shown in figure 3. Unlike the minimally coupled case, we do not find pathological behaviour for any value of the parameter $c$. To compare the minimally coupled and non-minimally coupled cases, we show the behaviour of the Hubble parameter for the two cases, for the same values of $c$, $\Omega_{m}$ and $q_{0}$, in figure 4. It shows that the Hubble parameter in the non-minimally coupled case evolves more slowly than its counterpart in the minimally coupled case. We also show the behaviour of the deceleration parameter for the non-minimally coupled case in figure 5. Here also the universe is in an accelerating phase at present and smoothly joins the decelerating regime in the past. Here, unlike the minimally coupled case, the higher the value of the parameter $c$, the earlier the universe enters the accelerating regime. \section{Observational Constraints} \label{obscon} In this section we investigate the observational constraints on our model parameters. We use the Supernova Type Ia data from the latest Union2 dataset consisting of 557 data points \cite{union}. The data consist of the distance modulus defined as $\mu=m-M=5\log_{10} d_{L}+25$, where $d_{L}(a)$ is the luminosity distance defined as \begin{equation} d_{L}(a)={a}^{-1} \int_{a}^{1}\frac{{\rm d}y}{y^{2}H(y)}. 
\end{equation} The other data we consider is the baryon acoustic oscillation scale imprinted at the decoupling surface by the interplay between the pressure of the baryon-photon fluid and gravity. For this we calculate the distance ratio $D_{v}(z=0.35)/D_{v}(z=0.2)$, where $D_{v}$ is given by \begin{equation} D_{v}(z) = \left[{z\over{H(z)}}\left(\int^{z}_{0}{dz'\over{H(z')}}\right)^2\right]^{1/3}. \end{equation} The SDSS observations give $D_{v}(z=0.35)/D_{v}(z=0.2) = 1.736 \pm 0.065$ \cite{ratio}. We use this measurement, together with the measurements of the distance modulus by Type Ia supernova observations, to constrain our model. In Figure 6, we show the $1\sigma$ and $2\sigma$ contours in the $q_{0}-c$ plane for different choices of the density parameter $\Omega_{m}$. For the minimally coupled case, there is always an upper bound on the parameter $c$ as well as on the present day deceleration parameter $q_{0}$. For example, with $\Omega_{m}= 0.25$ the $2\sigma$ upper bound on $c$ is around 1.8. This upper bound shifts towards higher values as one increases $\Omega_{m}$. On the other hand, for the non-minimally coupled case, things are different. Here there is an allowed range of $c$ for every value of $q_{0}$. For example, with $\Omega_{m} = 0.25$ and $q_{0}= -0.9$, the $2\sigma$ allowed range for $c$ is between $4.6$ and $6.2$. But as one increases $\Omega_{m}$, this range shifts towards smaller values of $c$. For the minimally coupled case, our constraint on the parameter $c$ also differs from that obtained by Ali et al.\ \cite{amna}. They solved the evolution equation in a different way from ours: they assumed that the universe is close to $\Lambda$CDM at high redshifts and set the initial conditions at early times, whereas we set the initial conditions at present and solve backwards. Moreover, one of our initial conditions, $q_{0}$, is itself one of the fitting parameters. 
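Both observables reduce to simple quadratures once $H(z)$ is known. As an illustration (a sketch of ours using a flat $\Lambda$CDM $h(z)$ with $\Omega_m=0.25$ in $H_0=1$ units, rather than the $f(R)$ solution), the distance ratio comes out close to the measured value:

```python
import math

Om = 0.25
h = lambda z: math.sqrt(Om * (1 + z)**3 + 1 - Om)   # flat LCDM, H0 = 1 units

def chi(z, n=2000):
    # comoving distance  int_0^z dz'/h(z')  by the trapezoidal rule
    dz = z / n
    s = 0.5 * (1.0 / h(0) + 1.0 / h(z))
    for i in range(1, n):
        s += 1.0 / h(i * dz)
    return s * dz

def Dv(z):
    # volume-averaged BAO distance defined in the text
    return (z / h(z) * chi(z)**2) ** (1.0 / 3.0)

ratio = Dv(0.35) / Dv(0.2)   # to be compared with the SDSS value 1.736 +/- 0.065
```

For this reference cosmology the ratio lands within roughly $1\sigma$ of the SDSS measurement, which is why the data are constraining for models whose $H(z)$ deviates from the $\Lambda$CDM shape.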
They also used the slightly older supernova data given by the Constitution set, which has 397 data points in comparison to the 557 data points of the Union2 set. Their analysis shows no constraint on $c$ at any level, which differs significantly from what we obtain. \vspace{5mm} \section{Conclusion} \label{conclusions} We have reinvestigated $f(R)$ gravity models both in the case where the curvature is minimally coupled with matter and in the case where it is non-minimally coupled. We have assumed Linder's exponential form for $f(R)$ in our analysis. We have fixed the initial conditions at the present epoch, so that no additional assumptions are needed. By definition, $H/H_{0} = 1$ at $z=0$, where $H_{0}$ is the present day Hubble parameter. The other initial condition fixes $q_{0}$, the present day deceleration parameter, and we take $q_{0}$ as one of our fitting parameters. Hence our method of solving the evolution equation does not involve any additional assumption. First, we find that for the minimally coupled case with $c \ge 1.6$, one can evolve the universe to arbitrarily high redshift without any pathological behaviour. For lower values of $c$, there are singular features in $H(z)$ at different redshifts depending upon the parameter choices. For the non-minimally coupled case this changes, and $H(z)$ is regular up to arbitrarily high redshift for any choice of parameter values (we have taken $\Omega_{m}$ between 0.25 and 0.35, $q_{0}$ between $-0.9$ and $-0.55$, and $c$ between 1 and 50). For the minimally coupled case, the constraint $c \ge 1.6$ needed to get regular solutions for $H(z)$ also satisfies the constraints coming from local gravity tests \cite{fr}. Also, the evolution of the universe is as expected, showing an accelerating universe at late times and a decelerating universe in the past. We next use the observational data coming from Type Ia supernova observations as well as the BAO peak measurements by SDSS. For the supernovae, we use the latest Union2 compilation consisting of 557 data points. 
For the minimally coupled case, the constraint on $c$ is completely different from that obtained by Ali et al.\ \cite{amna} earlier. We have obtained an upper bound on $c$, which shifts towards higher values as one increases $\Omega_{m}$. For the non-minimally coupled case, there is a range of allowed values for $c$, and this allowed range shifts towards smaller values as one increases $\Omega_{m}$. \section{Acknowledgement} A.A.S acknowledges the financial support provided by the University Grants Commission, Govt.\ of India, through the major research project grant (Grant No: 33-28/2007(SR)). A.A.S also acknowledges the financial support provided by the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, where part of the work was done. ST and TRS acknowledge the facilities provided by the Inter-University Centre for Astronomy and Astrophysics, Pune, India, through the IUCAA Resource Centre (IRC) at the Department of Physics and Astronomy, University of Delhi, New Delhi, India. \vspace{2mm}
\section{Introduction} \label{sec:intro} There is abundant and compelling evidence that the bulk of matter in the universe is made of massive, electrically neutral particles: dark matter (DM)~\cite{Bertone:2010zz}. While the density of DM has been precisely determined, the identity of the DM particle (or particles) is a complete mystery. Remarkably, many extensions of the Standard Model (SM) of electroweak and strong interactions---designed to address theoretical issues related to the breaking of the electroweak symmetry---require the introduction of new particles, some of which are excellent DM candidates. This is most notably the case in R-parity conserving supersymmetry (SUSY)~\cite{Jungman:1995df}. If the origin of DM is indeed new physics beyond the SM, in particular SUSY, there are high hopes that DM will be produced in abundance at the LHC, through cascade decays of the new matter particles produced in the pp collisions~\cite{Baer:2008uu}. The ambitious goal will then be to determine the properties of the DM candidate and reconstruct its thermal relic abundance from collider data. On the one hand this may then be used further to make testable predictions for direct and indirect DM searches. On the other hand, if the DM mass determined at colliders agrees with that from non-accelerator experiments, it may be used to constrain the cosmological model. This would enormously enhance the interplay between particle physics, astrophysics and cosmology. The LHC phenomenology of DM-motivated SUSY scenarios has hence been discussed in great detail in the literature~\cite{Jungman:1995df,Baer:2008uu}. Another piece of experimental evidence that the SM misses something fundamental is the observation of neutrino oscillations~\cite{Bilenky:1998dt}. This can be resolved by including right-handed neutrinos in the model. Current observations, however, do not allow one to establish the Majorana or Dirac nature of neutrinos. 
While the smallness of the neutrino mass can be naturally explained by introducing Majorana mass terms and making use of the see-saw mechanism, Dirac masses for neutrinos with very small Yukawa couplings are a viable and interesting alternative. In supersymmetric theories, one may naturally obtain very light Dirac neutrino masses from F-term SUSY breaking~\cite{ArkaniHamed:2000bq,Borzumati:2000mc}. In addition to providing an explanation for neutrino masses, this class of SUSY models offers an interesting alternative to the conventional neutralino DM candidate: the sneutrino. The crucial point is that in these models one can have a weak-scale trilinear $A_{\tilde\nu}$ term that is not proportional to the small neutrino Yukawa couplings and can hence induce a large mixing between left-handed (LH) and right-handed (RH) sneutrinos even though the Yukawa couplings are extremely small. The lightest sneutrino can thus become the lightest SUSY particle (LSP) and a viable thermal DM candidate. Note that the mainly RH sneutrino LSP is not sterile but couples to SM gauge and Higgs bosons through the mixing with its LH partner. Sufficient mixing provides efficient annihilation so that the mixed sneutrino can be a viable thermal DM candidate with a relic density of $\Omega h^2 \simeq 0.11$ as extracted from cosmological observations~\cite{Komatsu:2010fb,Jarosik:2010iu}. On the other hand the amount of mixing is constrained by limits on the spin-independent scattering cross-section, for which the LH sneutrino component receives an important contribution from Z exchange; this cross-section is suppressed by the sneutrino mixing angle. Because of the gauge and Higgs interactions, the presence of the mixed sneutrino can also significantly impact Higgs and SUSY signatures at the LHC~\cite{ArkaniHamed:2000bq}. In \cite{Belanger:2010cd}, some of us investigated the case of mixed sneutrinos as thermal DM with special emphasis on the mass range below $\sim$10~GeV. 
We examined the viable parameter space and discussed implications for direct and indirect dark matter searches, as well as consequences for collider phenomenology. Regarding the latter, we found that the SUSY signatures greatly differ from the expectations in the conventional Minimal Supersymmetric Standard Model (MSSM) with a $\tilde\chi^0_1$ LSP: while squarks and gluinos have the usual cascade decays through charginos and neutralinos, with the same branching ratios as in the corresponding MSSM case, the charginos and neutralinos decay further into the $\tilde{\nu}_1$ LSP. In particular, \begin{equation} \tilde\chi^0_{1,2}\rightarrow \nu\,\tilde{\nu}_1, \qquad \tilde\chi_1\rightarrow l^\pm\,\tilde{\nu}_1 \end{equation} with practically 100\% branching ratio over most of the parameter space. At the LHC, the typical cascade decays therefore are $\tilde q_R^{}\rightarrow q\tilde\chi^0_1\rightarrow q\nu\tilde{\nu}_1$, $\tilde q_L^{}\rightarrow q\tilde\chi^0_2\rightarrow q\nu\tilde{\nu}_1$ and $\tilde q_L^{}\rightarrow q\tilde\chi_1\rightarrow q'l^\pm\tilde{\nu}_1$, all giving different amounts of missing transverse energy, $p_T^{\rm miss}$. Moreover, gluino-pair production followed by decays into $qq'\tilde\chi_1$ through either on- or off-shell squarks leads to same-sign (SS) and opposite-sign (OS) dilepton events with equal probability. In addition, the light Higgs boson decays invisibly into a pair of LSPs, $h^0\rightarrow\tilde{\nu}_1\tilde{\nu}_1^{*}$. In this paper, we now perform a detailed study of the LHC potential to resolve the light sneutrino DM scenario. This includes in particular the determination of the DM mass from $\tilde g\rightarrow qq'\tilde\chi_1\rightarrow q'l^\pm\tilde\nu_{1}$ and/or $\tilde q\rightarrow q'\tilde\chi_1\rightarrow q'l^\pm\tilde\nu_{1}$ events. To this end we rely on the subsystem $m_{T2}$ method~\cite{Mt2sub}. Moreover, we address the question of measuring the masses of additional invisible sparticles. 
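The equal SS/OS rate follows from the Majorana nature of the gluino: each gluino of the pair decays into a chargino of either sign with equal probability, independently of the other, and the chargino then yields a single lepton of that sign. A toy Monte Carlo of this counting argument (purely illustrative, with hypothetical event counts, and ignoring acceptance effects) reproduces it:

```python
import random

random.seed(12345)

def dilepton_class():
    # each gluino decays to a chargino of random sign (Majorana gluino);
    # the chargino decays to l^{+-} + sneutrino with ~100% branching ratio
    s1 = random.choice([+1, -1])
    s2 = random.choice([+1, -1])
    return 'SS' if s1 == s2 else 'OS'

n_events = 100_000
n_ss = sum(dilepton_class() == 'SS' for _ in range(n_events))
frac_ss = n_ss / n_events   # expected to be close to 0.5
```

A significant SS dilepton excess over SM backgrounds is therefore a characteristic handle on gluino-pair production in this scenario.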
The case of a $\sim 100$~GeV mixed sneutrino LSP was studied in \cite{Thomas:2007bu}. The paper is organized as follows. First, in Section~\ref{sec:framework}, we briefly recall the main features of the mixed sneutrino model. In Section~\ref{sec:bench} we present three benchmark points and their characteristic signatures. This sets the framework of the analysis. We then go on in Section~\ref{sec:Disc} to study the discovery potential at the LHC with 7~TeV center-of-mass energy. Measurements at the 14~TeV LHC are discussed in detail in Section~\ref{sec:Masses}. A summary and conclusions are given in Section~\ref{sec:conclude}. \section{Mixed sneutrinos} \label{sec:framework} The framework for our study is the model of \cite{ArkaniHamed:2000bq} with only Dirac masses for sneutrinos. In this case, the usual MSSM soft-breaking terms are extended by \begin{equation} \Delta {\cal L}_{\rm soft} = m^2_{\tilde N_i} |\tilde N_i |^2 + A_{\tilde\nu_i} \tilde L_i \tilde N_i H_u + {\rm h.c.} \,, \end{equation} where ${m}^2_{\tilde{N}}$ and $A_{\tilde\nu}$ are weak-scale soft terms, which we assume to be flavor-diagonal. Note that the lepton-number violating bilinear term, which appears in case of Majorana neutrino masses, is absent. Neglecting the tiny Dirac masses, the $2\times2$ sneutrino mass matrix for one generation is given by \begin{equation} m^2_{\tilde\nu} = \left( \begin{array}{cc} {m}^2_{\widetilde{L}} +\frac{1}{2} m^2_Z \cos 2\beta \quad & \frac{1}{\sqrt{2}} A_{\tilde\nu}\, v \sin\beta\\ \frac{1}{\sqrt{2}} A_{\tilde\nu}\, v \sin\beta& {m}^2_{\widetilde{N}} \end{array}\right) \,. \label{eq:sneutrino_tree} \end{equation} Here ${m}^2_{\tilde{L}}$ is the SU(2) slepton soft term, $v^2=v_1^2+v_2^2=(246\;{\rm GeV})^2$ with $v_{1,2}$ the Higgs vacuum expectation values, and $\tan\beta=v_2/v_1$. 
The main feature of this model is that ${m}^2_{\widetilde{L}}$, ${m}^2_{\widetilde{N}}$ and $A_{\tilde\nu}$ are all of the order of the weak scale, and $A_{\tilde\nu}$ does not suffer any suppression from Yukawa couplings. In the following, we will always assume $m_{\widetilde N}<m_{\widetilde L}$ so that the lighter mass eigenstate, $\tilde\nu_1$, is mostly a $\tilde\nu_R$. This is in fact well motivated from renormalization group evolution, since for the gauge-singlet $\tilde\nu_R$ the running at 1~loop is driven exclusively by the $A_{\tilde\nu}$ term: \begin{equation} \frac{dm_{\widetilde N}^2}{dt} = \frac{2}{16\pi^2}A_{\tilde\nu}^2 \,, \end{equation} while \begin{equation} \frac{dm_{\widetilde L}^2}{dt} = -\frac{3}{16\pi^2}g_2^2M_2^2 -\frac{3}{80\pi^2}g_Y^2M_1^2 +\frac{1}{16\pi^2}A_{\tilde\nu}^2 \,. \end{equation} A large $A_{\tilde\nu}$ term in the sneutrino mass matrix will induce a significant mixing between the RH and LH states, \begin{equation} \left(\begin{array}{c} \tilde\nu_{1}\\ \tilde\nu_{2} \end{array}\right) = \left(\begin{array}{lr} \cos\theta_{\tilde\nu}\, & -\sin\theta_{\tilde\nu}\\ \sin\theta_{\tilde\nu} & \cos\theta_{\tilde\nu} \end{array}\right) \left(\begin{array}{c} \tilde\nu_{R}\\ \tilde\nu_{L} \end{array}\right) , \quad \sin2\theta_{\tilde\nu} = \frac{\sqrt{2} A_{\tilde\nu} v \sin\beta}{m_{\tilde\nu_2}^2 - m_{\tilde\nu_1}^2}\,, \end{equation} leading to mass eigenvalues \begin{equation} m_{\tilde\nu_{1,2}}^2 = \frac{1}{2} \left(m_{+}^2 \mp \sqrt{m_{-}^4 + 2 A_{\tilde\nu}^2 v^2 \sin^2\beta}\right) \end{equation} where $m_{\pm}^2 \equiv m_{\tilde{L}}^2 \pm m_{\tilde{N}}^2 + \frac{1}{2} m_Z^2 \cos 2\beta$, and $m_{\tilde\nu_1} < m_{\tilde\nu_2}$ by convention. Notice that a large value of $A_{\tilde\nu}$ can induce a large splitting between the two mass eigenstates even if ${m}^2_{\widetilde{L}}$ and ${m}^2_{\widetilde{N}}$ are of the same order, leading to scenarios where $m_{\tilde\nu_1} \ll m_{\tilde\nu_2},m_{\tilde{l}_L}$. 
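The closed-form eigenvalues and mixing angle can be cross-checked numerically. The sketch below (numpy; the soft-term values are illustrative choices of ours, picked so that the light state is mostly RH, and are not the paper's benchmark inputs) builds the mass matrix in the $(\tilde\nu_L,\tilde\nu_R)$ basis and compares against a direct diagonalization:

```python
import numpy as np

# Illustrative weak-scale soft terms in GeV (our choice, not a benchmark point)
mL, mN, Anu = 500.0, 100.0, 150.0
mZ, v, tanb = 91.19, 246.0, 10.0
beta = np.arctan(tanb)
c2b = np.cos(2.0 * beta)
off = Anu * v * np.sin(beta) / np.sqrt(2.0)      # off-diagonal LR entry

# Sneutrino mass-squared matrix, (nu_L, nu_R) ordering as in the text
M2 = np.array([[mL**2 + 0.5 * mZ**2 * c2b, off],
               [off, mN**2]])
evals, evecs = np.linalg.eigh(M2)                # ascending eigenvalues

# Closed form: m^2_{1,2} = (m_+^2 -/+ sqrt(m_-^4 + 2 A^2 v^2 sin^2 beta)) / 2
mp2 = mL**2 + mN**2 + 0.5 * mZ**2 * c2b
mm2 = mL**2 - mN**2 + 0.5 * mZ**2 * c2b
root = np.sqrt(mm2**2 + 2.0 * (Anu * v * np.sin(beta))**2)
m1sq, m2sq = 0.5 * (mp2 - root), 0.5 * (mp2 + root)

# Mixing angle from sin(2 theta) = sqrt(2) A v sin(beta) / (m2^2 - m1^2)
sin2t = np.sqrt(2.0) * Anu * v * np.sin(beta) / (m2sq - m1sq)
sint = np.sin(0.5 * np.arcsin(sin2t))
```

For these inputs the LH admixture of the light eigenstate, read off from the eigenvector, matches $\sin\theta_{\tilde\nu}$ from the closed-form expression.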
In this way, $\tilde\nu_1$ can naturally be driven much below the neutralino masses. Taking $m_{\tilde\nu_{1}}$, $m_{\tilde\nu_{2}}$ and $\theta_{\tilde\nu}$ as input, the soft terms $m_{\widetilde N}$, $m_{\widetilde L}$ and $A_{\tilde\nu}$ entering the sneutrino mass matrix Eq.~(\ref{eq:sneutrino_tree}) are fixed. This also fixes the corresponding LH charged slepton mass, $m_{\tilde l_L}^2= m_{\widetilde L}^2 + m_Z^2\cos 2\beta(\sin^2\theta_W-\frac{1}{2})$. For the RH one, $m_{\tilde l_R}^2= m_{\widetilde R}^2 - m_Z^2\cos 2\beta\sin^2\theta_W$, we assume $m_{\widetilde R}\equiv m_{\widetilde L}$ for simplicity. We use an appropriately modified version of {\tt SuSpect}~\cite{Suspect} for the spectrum calculation, which includes in particular radiative corrections induced by the $A_{\tilde\nu}$ term, as given in \cite{Belanger:2010cd}. A full scan of the relevant parameter space was done in \cite{Belanger:2010cd}, taking into account constraints from the Z invisible decay width, the Higgs and SUSY mass limits, as well as DM constraints from the relic abundance and direct and indirect DM searches. It was found that light mixed sneutrino DM consistent with all constraints populates the region \begin{equation} 1 \mbox{ GeV} \lesssim m_{\tilde\nu_{1}} \lesssim 8 \mbox{ GeV} \quad\mbox{and}\quad 0.1 \lesssim \sin\theta_{\tilde\nu} \lesssim 0.4 \,. \end{equation} Moreover, over most of the parameter space, $m_{\tilde\nu_2} \gtrsim 200$~GeV and $m_{\tilde{l}_{L,R}} > m_{\tilde\chi^0_1,\tilde\chi_1}$. For LHC physics this means that the final steps of the cascade decays will be dominated by $\tilde\chi^0_{1,2} \rightarrow \nu+\tilde{\nu}_1$ and $\tilde\chi_1 \rightarrow l+\tilde{\nu}_1$. The remaining relevant parameters are the squark and gluino masses, which determine the production cross sections. 
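Conversely, inverting the diagonalization fixes the soft terms from the physical inputs. As a numerical sketch (using $m_{\tilde\nu_1}=7.6$ GeV, $m_{\tilde\nu_2}=524$ GeV, $\sin\theta_{\tilde\nu}=0.225$ and $\tan\beta=10$ from the benchmark definition of the next section; the value $\sin^2\theta_W = 0.231$ is our assumption), this tree-level inversion approximately reproduces the quoted $A_{\tilde\nu}\simeq 348$ GeV, $m_{\widetilde L}\simeq 514$ GeV and $m_{\tilde e_L}\simeq 516$ GeV:

```python
import numpy as np

# Physical inputs from the benchmark definition (GeV); s2w is our assumption
m1, m2, sth = 7.6, 524.0, 0.225
mZ, v, tanb, s2w = 91.19, 246.0, 10.0, 0.231
beta = np.arctan(tanb)
c2b = np.cos(2.0 * beta)
s, c = sth, np.sqrt(1.0 - sth**2)

# Invert the diagonalization for the soft parameters
mN2 = m1**2 * c**2 + m2**2 * s**2
mL2 = m1**2 * s**2 + m2**2 * c**2 - 0.5 * mZ**2 * c2b
Anu = s * c * (m2**2 - m1**2) * np.sqrt(2.0) / (v * np.sin(beta))

# LH charged-slepton mass fixed by SU(2):
# m_lL^2 = m_L^2 + m_Z^2 cos(2 beta) (sin^2 theta_W - 1/2)
mlL = np.sqrt(mL2 + mZ**2 * c2b * (s2w - 0.5))

# Round trip: rebuild the mass matrix and re-diagonalize
off = Anu * v * np.sin(beta) / np.sqrt(2.0)
M2 = np.array([[mL2 + 0.5 * mZ**2 * c2b, off],
               [off, mN2]])
m1r, m2r = np.sqrt(np.sort(np.linalg.eigvalsh(M2)))
```

The small residual differences with respect to the quoted spectrum are expected, since the spectrum of the paper includes the $A_{\tilde\nu}$-induced radiative corrections via the modified {\tt SuSpect}.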
\section{Benchmark points and characteristic signatures} \label{sec:bench} In order to study the light mixed sneutrino DM (SNDM) scenario at the LHC, we pick a parameter point from~\cite{Belanger:2010cd} with a $\tilde\nu_1$ LSP of $7.6$~GeV as the DM candidate. The other sneutrino parameters are $m_{\tilde\nu_2}=524$~GeV, $\sin\theta_{\tilde\nu}=0.225$ and $\tan\beta=10$ ($A_\nu=348$~GeV). The neutralino/chargino sector is given by $M_2=2M_1=221$~GeV and $\mu=800$~GeV. Together with $m_{\widetilde R}=m_{\widetilde L}=514$~GeV, this fixes the properties of the weakly interacting sparticles. Note we assume flavor degeneracy. Based on this setup we define in Table~\ref{tab:bm} three benchmark points (SN1--SN3) with the same electroweak sector but different gluino--squark mass hierarchies. The squark and gluino production cross sections, computed using Pythia\cite{Pythia}, are given in Table~\ref{tab:xsects}, and the relevant decay branching ratios are given in Table~\ref{tab:decays}. \begin{table}\centering \begin{tabular}{lrrr} \hline & SN1 & SN2 & SN3 \\ \hline $m_{\tilde g}$ & 765 & 765 & 1000 \\ $m_{\tilde u_L}$ & 1521 & 775 & 700 \\ $m_{\tilde u_R}$ & 1521 & 776 & 700 \\ $m_{\tilde b_1}$ & 1514 & 766 & 689 \\ $m_{\tilde t_1}$ & 1441 & 675 & 584 \\ \hline $m_{\tilde\chi_2}$ & 811 & 807 & 805 \\ $m_{\tilde\nu_2}$ & 524 & 524 & 524 \\ $m_{\tilde e_{L,R}}$ & 516 & 516 & 516 \\ $m_{\tilde\tau_1}$ & 503 & 503 & 503 \\ $m_{\tilde\chi_1}$ & 227 & 228 & 228 \\ $m_{\tilde\chi^0_2}$ & 227 & 228 & 228 \\ $m_{\tilde\chi^0_1}$ & 109 & 109 & 109 \\ $m_{\tilde\nu_1}$ & 7.6 & 7.6 & 7.6 \\ \hline \end{tabular} \caption{Masses in~GeV units for the SNDM benchmark points SN1--SN3. The spectrum is computed at the EW scale using a modified version of {\tt Suspect}~\cite{Suspect}. 
For comparison with the MSSM case, we consider points MSSM1--MSSM3 with the same masses as SN1--SN3 but the ${\tilde\nu_1}$ removed from the spectrum.} \label{tab:bm} \end{table} \begin{table}\centering \begin{tabular}{l|rrr|rrr} & \multicolumn{3}{|c|}{7~TeV LHC} & \multicolumn{3}{c}{14~TeV LHC} \\ & $\tilde g\tg$ & $\tilde g\tilde q$ & $\tilde q\tq$ & $\tilde g\tg$ & $\tilde g\tilde q$ & $\tilde q\tq$ \\ \hline SN1 & 0.03 & 0.008 & 0.0 & 1.1 & 0.5 & 0.05 \\ SN2 & 0.02 & 0.2 & 0.2 & 1.0 & 4.3 & 2.3 \\ SN3 & 0.002 & 0.08 & 0.35 & 0.2 & 2.0 & 3.2 \\ \hline \end{tabular} \caption{Cross sections in [pb] for gluino and squark production at 7~TeV and 14~TeV. }\label{tab:xsects} \end{table} \begin{table}\centering \begin{tabular}{llrrr} \hline parent & daughters & SN1 & SN2 & SN3 \\ \hline \hline $\tilde g \rightarrow$ & $q\,\tilde q_{L,R}$ & --\quad\ & --\quad\ & 71\%\\ & $b\,\tilde b_{1,2}$ & --\quad\ & --\quad\ & 17\%\\ & $t\,\tilde t_{1,2}$ & --\quad\ & --\quad\ & 12\%\\ & $q\bar q'\,\tilde\chi_1$ & 52\% & 53\% & --\quad\ \\ & $q\bar q\,\tilde\chi^0_2$ & 33\% & 33\%& --\quad\ \\ & $q\bar q\,\tilde\chi^0_1$ & 15\% & 14\%& --\quad\ \\ \hline $\tilde q_L\rightarrow$ & $q \tilde g$ & 72\% & 0.4\% & --\quad\ \\ & $q'\tilde\chi_1$ & 18\% & 66\% & 66\% \\ & $q \tilde\chi^0_2$ & 9\% & 33\% & 33\% \\ \hline $\tilde q_R\rightarrow$ & $q \tilde g$ & 93\% & 2\% & --\quad\ \\ & $q \tilde\chi^0_1$ & 7\% & 98\% & 100\% \\ \hline $\tilde t_1\rightarrow$ & $t \tilde g$ & 64\% & --\quad\ & --\quad\ \\ & $b \tilde\chi_1$ & 9\% & 58\% & 59\%\\ & $t \tilde\chi^0_2$ & 4\% & 24\% & 23\%\\ & $t \tilde\chi^0_1$ & 3\% & 17\% & 18\%\\ \hline $\tilde\chi_1^\pm \rightarrow$ & $l^\pm\tilde\nu_1$ & 99\% & 99\% & 99\% \\ $\tilde\chi^0_2 \rightarrow$ & $\nu\tilde\nu_1$ & 100\% & 100\% & 100\% \\ $\tilde\chi^0_1 \rightarrow$ & $\nu\tilde\nu_1$ & 100\% & 100\% & 100\% \\ \hline \end{tabular} \caption{Most important decay channels for points SN1--SN3.}\label{tab:decays} \end{table} Employing GUT 
relations for gaugino-mass parameters, point SN1 has a gluino mass of $m_{\tilde g}=765$~GeV, while squark soft terms are set to 1.5~TeV, resulting in $m_{\tilde q}\simeq 2m_{\tilde g}$. Point SN1 is therefore characterized by heavy squarks and a light gluino, which decays through 3-body modes into charginos and neutralinos; these then decay exclusively to $l^{\pm} \tilde\nu_1$ and $\nu \tilde\nu_1$, respectively. We hence expect dominantly gluino-pair production leading to 4 jets plus $p_T^{\rm miss}$. Moreover, about half of the events will have an isolated charged lepton, and about 25\% of the events will have two leptons with uncorrelated flavor (assuming three roughly degenerate light sneutrinos, 2/3 of the leptons are electrons or muons and 1/3 are taus). Since the gluino is a Majorana particle, same-sign and opposite-sign dileptons should have equal rates. However, same-sign dileptons have less SM background. Promising signatures to look for are hence \begin{quote} (a) 4 jets, 0 leptons, large $p_T^{\rm miss}$; \\ (b) 4 jets, same-sign dileptons, moderate $p_T^{\rm miss}$; \\ (c) 4 jets, opposite-sign dileptons, moderate $p_T^{\rm miss}$. \end{quote} Point SN2 has the same gluino mass as SN1 but lighter squarks with $m_{\tilde q}\sim m_{\tilde g}$. Therefore SN2 has a much larger overall SUSY production rate, since squark-pair and gluino-squark associated production give the main contributions to the total cross-section, as shown in Table~\ref{tab:xsects}. The gluinos have the same decay modes as above, while the squarks decay dominantly through $q+\tilde\chi_1/\tilde\chi^0_{1,2}$. Events with only 2 or 3 jets are predominant and result from $\tilde q\tq$ or $\tilde g\tilde q$ production, with the squark decaying into a neutralino or chargino (99\% of the $\tilde q_L$ and 98\% of the $\tilde q_R$ decays). Finally, point SN3 is characterized by light squarks and a heavy gluino (achieved through non-universal gaugino masses).
We hence expect dominantly squark-pair production followed by decays into quarks plus charginos or neutralinos, see Table~\ref{tab:decays}. These events have 2 hard jets plus $p_T^{\rm miss}$, often accompanied by 1--2 leptons from $\tilde q_L\rightarrow q\tilde\chi_1\rightarrow ql\tilde\nu_1$. Again, events without leptons are expected to have larger $p_T^{\rm miss}$ on average than events with leptons. It is also interesting to compare the phenomenology of the three SNDM benchmark points with the corresponding MSSM cases. To this end we consider points MSSM1--MSSM3 with the same masses as SN1--SN3 in Table~\ref{tab:bm} but with the ${\tilde\nu_1}$ removed from the spectrum. The MSSM points thus have a neutralino LSP with a mass of 109~GeV. The $\tilde\chi_1$ decays exclusively into $\tilde\chi^0_1W$, while the $\tilde\chi^0_2$ decays into $\tilde\chi^0_1Z$ (12\%) or into $\tilde\chi^0_1h$ (88\%).\footnote{In the SNDM case, these decays are highly suppressed with respect to the decay into the sneutrino LSP. Furthermore, for SN1--SN3, $h^0\rightarrow \tilde{\nu}_1\lsp$ with practically 100\% branching ratio.} The larger rate of multilepton events in the SNDM scenario is a clear difference to the MSSM case. On the other hand one can expect MSSM events to have higher jet multiplicity. \FIGURE[t]{ \includegraphics[width=7cm]{SNDM7-CompetmissB1.eps} \includegraphics[width=7cm]{SNDM7-CompetmissB3.eps} \caption{Comparison of contributions to the $p_T^{\rm miss}$ spectrum at $\sqrt{s}=7$~TeV for SN1 and MSSM1 (left) and for SN3 and MSSM3 (right); SN2/MSSM2 gives almost the same picture as SN3/MSSM3. }\label{fig:competmiss}} Another general feature of the SNDM scenario is that, due to the invisible $\tilde\chi^0_2$ decay, we expect a harder $p_T^{\rm miss}$ spectrum as compared to a similar MSSM point. 
This is illustrated in Fig.~\ref{fig:competmiss}, where we show the $p_T^{\rm miss}$ distribution for the SN1/MSSM1 and SN3/MSSM3 points at detector level without any cuts (for details on the event simulation, see next Section). Notice moreover that the subset of events with $\tilde\chi_1\rightarrow l^\pm+{\rm LSP}$ has a slightly harder $p_T^{\rm miss}$ spectrum in the SNDM case. This is because the light $\tilde{\nu}_1$ LSP is more boosted than the $\tilde\chi^0_1$ LSP in the MSSM case; this difference becomes less significant if the LSP's mother already has a large boost. On the other hand, the $p_T^{\rm miss}$ spectrum coming from $\tilde q/\tilde g \rightarrow \tilde\chi^0_1$ events is identical in both models, since the lightest neutralino decays invisibly in the SNDM. Before studying the LHC phenomenology in more detail, a comment is in order concerning the assumption of flavor degeneracy. As discussed in \cite{Belanger:2010cd}, the exact mass splitting between the $\tilde{\nu}_1$'s of different flavors strongly influences the DM-allowed parameter space. There is, however, no difference between having one or three light sneutrinos if the mass splitting is $\gtrsim 1$~GeV. This is because a 1~GeV mass splitting is enough to suppress any co-annihilation contributions to $\Omega_{\tilde\nu}h^2$. Such a mass splitting is easily induced by small splittings in the soft terms, which are in fact rather generic even if one starts out with universal soft terms at a high scale. If we have, for instance, $m_{\tilde\nu_{1e,\mu}}>m_{\tilde\nu_{1\tau}}$, then the heavier $\tilde\nu_{1e,\mu}$ decays to the $\tilde\nu_{1\tau}$ LSP through 3-body modes~\cite{Kraml:2007sx}. The dominant decay is into neutrinos, while visible decays (e.g., $\tilde\nu_{1e}\rightarrow e^\mp\tau^\pm\tilde\nu_{1\tau}$) have at most a few percent branching ratio. From the perspective of LHC signatures, the difference from the case of three exactly degenerate light sneutrinos is negligible.
We therefore take $m_{\tilde\nu_{1e}}=m_{\tilde\nu_{1\mu}}=m_{\tilde\nu_{1\tau}}\equiv m_{\tilde{\nu}_1}$ for simplicity. Note, however, that interesting non-trivial flavor structures beyond the scope of this study may appear in the presence of lepton-flavor violation~\cite{Kumar:2009sf,MarchRussell:2009aq}. \section{Discovery Potential at LHC7} \label{sec:Disc} We now turn to the discovery potential of the SNDM model at the LHC, with 7 TeV CM energy and ${\cal O}(1)$ fb$^{-1}$ of integrated luminosity. The Monte Carlo simulation details are presented in Sec.~\ref{sec:MC} and the main distinct signatures are presented in Sec.~\ref{sec:lhc7dists}. \subsection{Event Simulation} \label{sec:MC} For the SM background, we include in our calculations all relevant $2 \rightarrow n$ processes for the multi-lepton and multi-jet searches. Since in this Section we restrict our results to the first LHC physics run ($\lesssim 1$~fb$^{-1}$ and $\sqrt{s} = 7$~TeV), we generate (at least) the equivalent of 1~fb$^{-1}$ of events for each process, except for our QCD samples (see Table~\ref{table:bgs}). For the simulation of the background events, we use {\tt AlpGen}~\cite{Alpgen} to compute the hard scattering events and {\tt Pythia}~\cite{Pythia} for the subsequent showering and hadronization. For the final states containing multiple jets (namely $Z(\rightarrow ll,\nu\nu) + jets$, $W(\rightarrow l\nu) + jets$, $b\bar{b} + jets$, $t\bar{t} + jets$, $Z + b\bar{b} + jets$, $Z + t\bar{t} + jets$, $W + b\bar{b} + jets$, $W + t\bar{t} + jets$ and QCD), we use the MLM matching algorithm~\cite{Alpgen} to avoid double counting. All the processes included in our analysis are shown in Table~\ref{table:bgs} as well as their total cross-sections and number of events generated.
The SNDM spectrum was generated with a modified version of {\tt Suspect}~\cite{Suspect}, which includes right-handed sneutrinos, while the decay table was computed with {\tt CalcHEP}~\cite{Calchep}/{\tt micrOMEGAs2.4}~\cite{Belanger:2006is,Belanger:2010cd}. Finally, using the SLHA~\cite{Skands:2003cj} interface, signal events were generated using {\tt Pythia6.4}~\cite{Pythia}. For the event generation, we use a toy detector simulation with calorimeter cell size $\Delta\eta\times\Delta\phi=0.05\times 0.05$ and rapidity $-5<\eta<5$. The HCAL (hadronic calorimetry) energy resolution is taken to be $80\%/\sqrt{E}+3\%$ for $|\eta|<2.6$, while the FCAL (forward calorimetry) resolution is $100\%/\sqrt{E}+5\%$ for $|\eta|>2.6$, where the two terms are combined in quadrature. The ECAL (electromagnetic calorimetry) energy resolution is assumed to be $3\%/\sqrt{E}+0.5\%$. We use the {\tt Isajet}~\cite{isajet} cone-type jet-finding algorithm to group the hadronic final states into jets. Jets and isolated leptons are defined as follows: \begin{itemize} \item Jets are hadronic clusters with $|\eta| < 3.0$, $R\equiv\sqrt{\Delta\eta^2+\Delta\phi^2}\leq0.4$ and $p_T(jet)>30$~GeV. \item Electrons and muons are considered isolated if they have $|\eta| < 2.5$, $p_T(l)>10$~GeV, and the visible activity within a cone of $\Delta R<0.2$ about the lepton direction satisfies $\Sigma p_T^{cells}<5$~GeV. \item We identify hadronic clusters as $b$-jets if they contain a B hadron with $p_T(B)>15$~GeV, $\eta(B)<3$ and $\Delta R(B,jet)< 0.5$. We assume a tagging efficiency of 60\%, while light-quark and gluon jets can be mis-tagged as a $b$-jet with a probability 1/150 for $p_T \leq 100$~GeV, 1/50 for $p_T \geq 250$~GeV, with a linear interpolation for 100 GeV $\leq p_T \leq 250$~GeV.
\end{itemize} \begin{table} \centering \begin{tabular}{|l|c|c|} \hline & Cross & number of \\ SM process & section & events \\ \hline QCD: $2$, $3$ and $4$ jets (40 GeV$<p_T(j1)<100$ GeV) & $2.6\times 10^9$ fb & 26M\\ QCD: $2$, $3$ and $4$ jets (100 GeV$<p_T(j1)<200$ GeV) & $3.9\times 10^8$ fb & 44M\\ QCD: $2$, $3$ and $4$ jets (200 GeV$<p_T(j1)<500$ GeV) & $1.6\times 10^7$ fb & 16M\\ QCD: $2$, $3$ and $4$ jets (500 GeV$<p_T(j1)<3000$ GeV) & $9.4\times 10^4$ fb & 0.3M\\ $t\bar{t}$: $t\bar{t}$ + 0, 1 and 2 jets & $1.6\times 10^5$ fb& 5M\\ $b\bar{b}$: $b\bar{b}$ + 0, 1 and 2 jets & $8.8\times 10^7$ fb& 91M\\ $Z$ + jets: $Z/ \gamma (\rightarrow l\bar{l},\nu \bar{\nu})$ + 0, 1, 2 and 3 jets & $8.6\times 10^6$ fb& 13M\\ $W$ + jets: $W^{\pm} (\rightarrow l\nu)$ + 0, 1, 2 and 3 jets & $1.8\times 10^7$ fb& 19M\\ $Z$ + $t\bar{t}$: $Z/ \gamma (\rightarrow l\bar{l},\nu\bar{\nu})$ + $t\bar{t}$ + 0, 1 and 2 jets & $53$ fb & 0.6M\\ $Z$ + $b\bar{b}$: $Z/ \gamma (\rightarrow l\bar{l},\nu\bar{\nu})$ + $b\bar{b}$ + 0, 1 and 2 jets & $2.6\times 10^3$ fb & 0.3M\\ $W$ + $b\bar{b}$: $W^{\pm} (\rightarrow all)$ + $b\bar{b}$ + 0, 1 and 2 jets & $6.4\times 10^3$ fb & 9M\\ $W$ + $t\bar{t}$: $W^{\pm} (\rightarrow all)$ + $t\bar{t}$ + 0, 1 and 2 jets & $1.8\times 10^2$ fb & 9M\\ $W$ + $tb$: $W^{\pm} (\rightarrow all)$ + $\bar{t}b(t\bar{b})$ & $6.8\times 10^2$ fb & 0.025M\\ $t\bar{t}t\bar{t}$ & $0.6$ fb & 1M\\ $t\bar{t}b\bar{b}$ & $1.0\times 10^2$ fb & 0.2M\\ $b\bar{b}b\bar{b}$ & $1.1\times 10^4$ fb & 0.07M\\ $WW$: $W^{\pm} (\rightarrow l\nu) + W^{\pm} (\rightarrow l\nu)$ & $3.0\times 10^3$ fb& 0.005M\\ $WZ$: $W^{\pm} (\rightarrow l\nu) + Z (\rightarrow all)$ & $3.4\times 10^3$ fb& 0.009M\\ $ZZ$: $Z (\rightarrow all) + Z (\rightarrow all)$ & $4.0\times 10^3$ fb& 0.02M\\ \hline \end{tabular} \caption{Background processes included in the discovery potential for LHC7, along with their total cross sections and number of generated events. 
All light (and {\it b}) partons in the final state are required to have $p_T> 40$~GeV. For QCD, we generate the hardest final parton jet in distinct bins to get a better statistical representation of hard events. For $Wtb$ production, additional multi-jet production is only via the parton shower because the AlpGen calculation including all parton emission matrix elements is not yet available. For this process, we apply the cut $|m(Wb)-m_t|\ge 5$~GeV to avoid double counting events from real $t\bar{t}$ production.} \label{table:bgs} \end{table} \subsection{Signal Distributions} \label{sec:lhc7dists} In Fig.~\ref{fig:sndmdists}a--c we show the $p_T^{\rm miss}$, $n(l)$ and $n(j)$ distributions for the benchmark points along with the SM background (BG) after the following set of cuts: \begin{itemize} \item $p_T^{\rm miss} > 400$ GeV, $n(j) > 3$, $p_T(j_1) > 150$ GeV, $p_T(j) > 50$ GeV and $S_T > 0.2$ \end{itemize} where $S_T$ is the transverse sphericity and $p_T(j_1)$ is the $p_T$ of the hardest jet. \FIGURE[t]{ \includegraphics[width=7cm]{SNDM7-etmiss.eps} \includegraphics[width=7cm]{SNDM7-nj.eps} \includegraphics[width=7cm]{SNDM7-nl.eps} \includegraphics[width=7cm]{SNDM7-eratio.eps} \caption{ $p_T^{\rm miss}$, $n(j)$, $n(l)$ and $\sum E_T(l)/\sum E_T(j)$ distributions for the model points in Table~\ref{tab:bm} along with SM background for LHC7. In frames $a$--$c$ the following cuts have been applied: $p_T^{\rm miss} > 400$ GeV, $n(j) > 3$, $p_T(j_1) > 150$ GeV, $p_T(j) > 50$ GeV and $S_T > 0.2$, while frame $d$ has weaker cuts: $p_T^{\rm miss} > 300$ GeV, $n(j) > 2$, $p_T(j_1) > 100$ GeV, $p_T(j) > 50$~GeV and $S_T > 0.2$. }\label{fig:sndmdists}} Despite having a softer $p_T^{\rm miss}$ spectrum than the SM BG, the point SN1 has a much harder $n(j)$ distribution, which peaks at $n(j) = 4$, as expected from the discussion in Sec.~\ref{sec:bench}. As shown in Fig.~\ref{fig:sndmdists}b, the signal exceeds the SM BG in the $n(j) = 6, 7$ bins.
Furthermore, with this set of cuts, the lepton number distribution for point SN1 is already at the BG level for $n(l) = 1$, easily surpassing the BG in the dilepton channel. However, due to its small cross section, the SN1 signal requires several fb$^{-1}$ of integrated luminosity in order to become visible. We estimate that approximately 5~(2)~fb$^{-1}$ are required to claim $5\sigma$ ($3\sigma$) evidence for the SN1 benchmark. As mentioned in Sec.~\ref{sec:bench}, point SN2 has signatures distinct from those of benchmark point SN1. While for the latter the sparticle production cross section is dominated by $\tilde g\tg$, SN2 has the bulk of its signal coming from $\tilde q\tq$ and $\tilde q\tilde g$ events. As a result, the signal has a softer jet distribution, due to squark decays to charginos and neutralinos, as shown by the decay branching ratios (BRs) in Table~\ref{tab:decays}. The same holds for the SN3 signal, which is dominated by squark pair production. This is clearly seen in the $n(j)$ distribution shown in Fig.~\ref{fig:sndmdists}b, once the overall signal normalization is taken into account. On the other hand, since the 2-body squark decay tends to produce boosted charginos and neutralinos, and because of the larger production cross-section, points SN2 and SN3 have a harder $p_T^{\rm miss}$ spectrum as compared to point SN1 (and the background), as shown in Fig.~\ref{fig:sndmdists}a. Therefore, LHC7 should be able to discover the SN2 and SN3 points with hard $p_T^{\rm miss}$ ($\gtrsim 300$~GeV) and $p_T(j_1)$ ($\gtrsim 250$~GeV) cuts in the $n(j) = 2$ or 3 channels. We estimate that $5\sigma$ evidence for both SN2 and SN3 points can be achieved with 1~fb$^{-1}$ of integrated luminosity. Furthermore, from Fig.~\ref{fig:sndmdists}c we see that both points are also visible in the dilepton channel.
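The scaling behind such luminosity estimates is simple: for Gaussian significance $Z = S/\sqrt{B}$ with $S = \mathcal{L}\,\sigma_s$ and $B = \mathcal{L}\,\sigma_b$, the required luminosity grows as $Z^2$. The sketch below uses hypothetical post-cut signal and background rates for illustration only (they are not the actual analysis numbers):

```python
def lumi_for_significance(Z, sigma_s_fb, sigma_b_fb):
    """Integrated luminosity (fb^-1) reaching significance Z = S/sqrt(B),
    with S = L*sigma_s and B = L*sigma_b (post-cut rates in fb)."""
    return Z**2 * sigma_b_fb / sigma_s_fb**2

# hypothetical post-cut rates, illustration only
sigma_s, sigma_b = 30.0, 50.0   # fb
L5 = lumi_for_significance(5.0, sigma_s, sigma_b)
L3 = lumi_for_significance(3.0, sigma_s, sigma_b)
```

In this approximation, going from $3\sigma$ to $5\sigma$ always costs a factor $25/9\approx 2.8$ in luminosity, roughly consistent with the 2 versus 5~fb$^{-1}$ quoted above for SN1.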
Since the SNDM signal is rich in boosted leptons, we show in Fig.~\ref{fig:sndmdists}d the ratio between the scalar sum of the lepton and jet $E_T$'s, $E_T(l/j) \equiv \sum E_T(l)/ \sum E_T(j)$, for a weaker set of cuts: \begin{itemize} \item $p_T^{\rm miss} > 300$ GeV, $n(j) > 2$, $p_T(j_1) > 100$ GeV, $p_T(j) > 50$ GeV and $S_T > 0.2$ \end{itemize} As we can see, both the signal and BG distributions peak at low $E_T(l/j)$ values. However, the SM BG falls much more sharply than the SN2 and SN3 signals, which are above background for $E_T(l/j) \gtrsim 0.2$. Since the signal and SM distributions have very distinct shapes, $E_T(l/j)$ can be used to discriminate between signal and background even for SNDM models with smaller signal cross-sections. Nonetheless, the SN1 signal remains too far below the SM BG to be seen at LHC7. So far, all of the SNDM signatures described above are also common in standard MSSM scenarios, including the CMSSM. Let us now discuss how the distributions shown in Fig.~\ref{fig:sndmdists} can help to distinguish between the MSSM and the SNDM models. For this purpose we also show the respective distributions for an MSSM case (MSSM3) with a spectrum identical to point SN3, but without the RH sneutrino.\footnote{In order to avoid a proliferation of lines in the plots we do not show the distributions for MSSM1 and MSSM2, but we have checked that the general features discussed here by means of SN3 versus MSSM3 hold also for the other configurations.} As expected, MSSM3 has a softer $p_T^{\rm miss}$ spectrum than the SNDM point SN3, although the difference after cuts is not very large. Another distinction between the MSSM and SN points is their jet multiplicity. In the SN case all charginos/neutralinos decay to $\tilde\nu+ l/\nu$, while this channel is absent for the MSSM points and neutralinos and charginos decay instead to the $\tilde\chi^0_1$ LSP plus $h$, $Z$ or $W$, see Sec.~\ref{sec:bench}.
Therefore we expect SNDM models to have a softer $n(j)$ distribution when compared to a similar MSSM case. This is confirmed in Fig.~\ref{fig:sndmdists}b, which shows that the $n(j)$ distribution for the MSSM3 model is suppressed in the $n(j)=3$ bin and enhanced in the higher bins when compared to the SN3 point (once the total signal normalization is taken into account). The main distinctive feature of the SNDM scenario appears, nevertheless, in the multilepton distribution. As discussed in Sec.~\ref{sec:bench}, SNDM models are expected to be rich in hard leptons coming almost exclusively from $\tilde\chi_1$ decays. As a result, the leptons in dilepton events will have uncorrelated flavors and kinematics. Furthermore, higher lepton multiplicities only appear, at much smaller rates, from top decays. In corresponding MSSM scenarios, the decays of $\tilde\chi_1$ and $\tilde\chi^0_2$ to leptons are sub-dominant: in the case of $\tilde\chi_1$, they are given by the BRs of the $W$, while in the case of $\tilde\chi^0_2$, they are suppressed by the high BR into $h^0$. To give concrete numbers, the ratio of dilepton/0-lepton events is $\sim 0.12,\ 0.06$ and $0.06$ for points SN1, SN2 and SN3, respectively, see Fig.~\ref{fig:sndmdists}c, while for the MSSM1, MSSM2 and MSSM3 points it is $\sim 0.04,\ 0.01$ and $0.01$. Furthermore, dilepton events in MSSM models often come from $\tilde\chi^0_2$ decays to $l^+l^-\tilde\chi^0_1$ and hence consist of same-flavor opposite-sign (OS) dileptons. Therefore the ratio of OS and same-sign (SS) dilepton events is another useful discriminant. For the set of cuts used in Fig.~\ref{fig:sndmdists}c we have SS/OS = $0.94$, $0.94$ and $0.62$ for points SN1, SN2 and SN3, respectively. The SS/OS ratio is considerably smaller for SN3 than for the other points, because its signal is dominated by squark pair production, leading to fewer SS dileptons.
Although the SN2 point also has a large $\tilde q \tilde q$ production cross-section, the $n(j) \geq 3$ cut suppresses the contribution from $\tilde q \tilde q$ events to the SN2 signal, enhancing the SS/OS ratio in this case. For the corresponding MSSM models and the same set of cuts, we have SS/OS = $0.75$, $0.54$ and $0.39$ for MSSM1, MSSM2 and MSSM3, respectively. As we can see, the SS/OS ratio is significantly enhanced in SNDM models, when compared to the MSSM. Furthermore, as seen in Fig.~\ref{fig:sndmdists}d, the $E_T(l/j)$ distribution for the MSSM is much softer than in the SNDM case, which, together with the $n(j)$ and $n(l)$ distributions, can also point to a light sneutrino LSP. We conclude that, while the $p_T^{\rm miss}$ and $n(j)$ distributions are more model dependent and more sensitive to NLO and detector systematics, the $n(l)$ and $E_T(l/j)$ distributions are promising candidates for an early distinction between the MSSM and SNDM cases. Although the first LHC run will probably not accumulate enough luminosity to make use of the dilepton invariant mass, depending on the signal cross-section, the dilepton channel may already indicate which is the relevant model. However, a more decisive answer will have to wait for the later physics run with higher energy. In the next section we discuss how LHC14 ($\sqrt{s}=14$~TeV) with an integrated luminosity of 100 fb$^{-1}$ can give information on the SNDM spectrum and therefore provide stronger evidence that the SNDM case is realized. \section{Measuring Masses at LHC14} \label{sec:Masses} Despite the distinct features of the SNDM model discussed in the previous sections, information on the sparticle spectrum can give more decisive evidence for sneutrino DM. While in most MSSM models a neutralino LSP consistent with DM and collider constraints implies $m_{\tilde\chi^0_1} \gtrsim 50$ GeV (assuming gaugino mass unification), the SNDM model has a much lighter LSP.
Since several fb$^{-1}$ are required for any mass measurement, from now on we will present all our results for LHC14 ($\sqrt{s} = 14$ TeV and $\mathcal{L} = 100$ fb$^{-1}$). We include both SUSY and SM backgrounds, the latter comprising all the relevant processes listed in Table~\ref{table:bgs}, but at $\sqrt{s} = 14$ TeV (for more details see \cite{Baer14}). \subsection{Method} Recently, much progress has been made on mass measurement methods at hadron colliders, and several distinct techniques have been suggested, such as kinematic endpoints and shapes (invariant mass, $s_{min}$, ...) and transverse mass methods ($m_{T2}$, $m_{CT}$,...). See \cite{Matchev:2009iw,Barr:2011xt} for comprehensive reviews and references. Here we adopt the subsystem $m_{T2}$ method, described in detail in Ref.~\cite{Mt2sub}. It consists of applying the $m_{T2}$ endpoint technique to particular subsets of the event topology. \subsubsection*{Gluino-pair production} First, we consider gluino-pair production, with each gluino going through the cascade decay: \begin{equation} \tilde g \rightarrow qq + \tilde\chi_1 \rightarrow qq + l + \tilde\nu \label{eq:evtop1} \end{equation} From this event topology we can form three subsystems: \begin{equation} \begin{array}{llcl} & \bf P^{(1)}+P^{(2)} & \rightarrow & \bf vis^{(1)}+ D^{(1)}+ vis^{(2)}+D^{(2)} \\[1mm] {\rm 1.}\; & \tilde g^{(1)}+\tilde g^{(2)} & \rightarrow & qql^{(1)}+\tilde\nu^{(1)}+qql^{(2)}+\tilde\nu^{(2)} \\ {\rm 2.} & \tilde g^{(1)}+\tilde g^{(2)} & \rightarrow & qq^{(1)} +l\tilde\nu^{(1)}+qq^{(2)}+l\tilde\nu^{(2)}\\ {\rm 3.} & \tilde\chi_1^{(1)}+\tilde\chi_1^{(2)} &\rightarrow& l^{(1)}+\tilde\nu^{(1)}+l^{(2)}+\tilde\nu^{(2)} \end{array}\label{eq:subs} \end{equation} where the upper index labels the decay branch and the final states are grouped into a visible (vis) and a daughter (D) component.
For instance, in the first case the gluinos are the parents (P), while $qql$ and $\tilde\nu$ are the visible and daughter components, respectively. On the other hand, for the third subsystem, the charginos are the parents and $l$ and $\tilde\nu$ are the visible and daughter components. The (subsystem) $m_{T2}$ variables are then constructed as defined in \cite{Lester1999, Barr2003}, but using the visible and daughter components defined within each subsystem: \begin{equation} m_{T2}(m_x) = \min_{\mathbf{p_T}({\rm D}_1) + \mathbf{p_T}({\rm D}_2) = - \mathbf{p_T}({\rm vis}_1) - \mathbf{p_T}({\rm vis}_2)} [{\rm max}(m_T^{(1)},m_T^{(2)})] \end{equation} where \begin{equation} m_T^{(i)} = \sqrt{m_{{\rm vis}_i}^2 + m_x^2 + 2(E_T({\rm vis}_i) E_T({\rm D}_i) - \mathbf{p_T}({\rm vis}_i)\cdot\mathbf{p_T}({\rm D}_i))} \end{equation} and $m_x$ is the trial daughter mass. Since $m_{T2}(m_x = m_{\rm D}) \leq m_{\rm P}$, the value of $m_{T2}^{max}(m_{\rm D})$ determines the parent's mass in each subsystem. For the above example we have: \begin{enumerate} \item $m_{T2}^{qql,max}(m_x = m_{\tilde\nu}) = m_{\tilde g}$ \item $m_{T2}^{qq,max}(m_x = m_{\tilde\chi_1}) = m_{\tilde g}$ \item $m_{T2}^{l,max}(m_x = m_{\tilde\nu}) = m_{\tilde\chi_1}$ \end{enumerate} where $m_{T2}^{qql}$, $m_{T2}^{qq}$ and $m_{T2}^{l}$ are the $m_{T2}$ subsystem variables for the subsystems 1, 2 and 3 defined in Eq.~(\ref{eq:subs}), respectively. Note that for the second subsystem the daughter is defined as $l\tilde\nu$, which has invariant mass $m_{\tilde\chi_1}$. However, as the above relations show, in order to obtain the parent's mass, the daughter mass has to be known. The strength of the $m_{T2}$ method comes from the fact that analytical expressions are known for the function $m_{T2}^{max}(m_x)$ and can be used to simultaneously extract both the parent and daughter masses from data.
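The minimization in the $m_{T2}$ definition has no closed form event-by-event, but it is straightforward to evaluate numerically. The following rough sketch (our own toy implementation, not the analysis code) scans the splitting of the missing transverse momentum on a grid and refines around the best point; the toy event has both parents at rest with purely transverse two-body decays, a configuration for which the transverse mass at the true splitting equals the parent mass:

```python
import numpy as np

def mT(vis, q, m_vis, m_x):
    # transverse mass of a visible system paired with a trial daughter momentum q
    ET_vis = np.sqrt(m_vis**2 + vis @ vis)
    ET_q = np.sqrt(m_x**2 + q @ q)
    return np.sqrt(max(m_vis**2 + m_x**2 + 2.0 * (ET_vis * ET_q - vis @ q), 0.0))

def mT2(vis1, vis2, ptmiss, m_x, n=81, span=500.0):
    # brute-force scan over the splitting ptmiss = q1 + q2: coarse pass + one refinement
    best, best_q = np.inf, np.zeros(2)
    for center, width in [(np.zeros(2), span), (None, 2.0 * span / (n - 1))]:
        cx, cy = best_q if center is None else center
        for qx in np.linspace(cx - width, cx + width, n):
            for qy in np.linspace(cy - width, cy + width, n):
                q1 = np.array([qx, qy])
                val = max(mT(vis1, q1, 0.0, m_x), mT(vis2, ptmiss - q1, 0.0, m_x))
                if val < best:
                    best, best_q = val, q1
    return best

# toy event: two parents of mass M at rest, each decaying to a massless visible
# particle and a daughter of mass m, with decay axes at generic transverse angles
M, m = 765.0, 7.6
p = (M**2 - m**2) / (2.0 * M)
vis1 = p * np.array([np.cos(0.4), np.sin(0.4)])
vis2 = p * np.array([np.cos(2.0), np.sin(2.0)])
ptmiss = -(vis1 + vis2)

mt2_val = mT2(vis1, vis2, ptmiss, m)   # bounded by M from above
mt_true = mT(vis1, -vis1, 0.0, m)      # equals M for this configuration
```

The defining property $m_{T2}(m_x = m_{\rm D}) \leq m_{\rm P}$ is visible here: the scan returns a value at or below the parent mass, which is saturated only for events at the kinematic endpoint.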
For the subsystems 1 and 2 in Eq.~(\ref{eq:subs}), we have: \begin{equation} m_{T2}^{qql,max}(m_x) = \left\{ \begin{array}{lr} \frac{m_{\tilde g}^2 - m_{\tilde\nu}^2}{2 m_{\tilde g}} + \sqrt{\left(\frac{m_{\tilde g}^2 - m_{\tilde\nu}^2}{2 m_{\tilde g}}\right)^2 + m_x^2} &\mbox{, if $m_x < m_{\tilde\nu}$}\\ m_{\tilde g}\left(1-\frac{m_{\tilde\chi_1}^2}{2m_{\tilde g}^2} -\frac{m_{\tilde\nu}^2}{2m_{\tilde\chi_1}^2}\right) + \sqrt{m_{\tilde g}^2\left(\frac{m_{\tilde\chi_1}^2}{2m_{\tilde g}^2}-\frac{m_{\tilde\nu}^2}{2m_{\tilde\chi_1}^2}\right)^2 + m_x^2} &\mbox{, if $m_x > m_{\tilde\nu}$} \end{array} \right. \label{eq:mt2qql} \end{equation} and \begin{equation} m_{T2}^{qq,max}(m_x) = \left\{ \begin{array}{lr} \frac{m_{\tilde g}^2 - m_{\tilde\chi_1}^2}{2 m_{\tilde g}} + \sqrt{\left(\frac{m_{\tilde g}^2 - m_{\tilde\chi_1}^2}{2 m_{\tilde g}}\right)^2 + m_x^2} &\mbox{, if $m_x < m_{\tilde\chi_1}$}\\ m_{\tilde g} - m_{\tilde\chi_1} + m_x &\mbox{, if $m_x > m_{\tilde\chi_1}$} \end{array} \right. \label{eq:mt2qq} \end{equation} The above expressions were derived in Refs.~\cite{Cho2007,Mt2sub} under the assumption of no initial state radiation (ISR), so the total parent $p_T$ ($|\mathbf{p_T}(\tilde g^{(1)})+\mathbf{p_T}(\tilde g^{(2)})|$) is zero. In Ref.~\cite{Mt2sub} it is shown that, unless $p_{T} \gtrsim m_{\tilde g}$, this is a reasonable approximation. Therefore for $m_{\tilde g} \gtrsim 700$ GeV, we can safely neglect the ISR effect. On the other hand, for the subsystem 3, the parent system ($\tilde\chi_1^{(1)}+\tilde\chi_1^{(2)}$) is expected to have large $p_T$, since they are produced from gluino decays and are much lighter than their parents ($m_{\tilde\chi_1} = 227$ GeV for the cases we consider). 
In this case we have to include the transverse momentum effect in $m_{T2}^{l,max}(m_x)$~\cite{Mt2sub}: \begin{equation} m_{T2}^{l,max}(m_x) = \left\{ \begin{array}{lr} \sqrt{\left(\mu_- + \sqrt{(\mu_- + \frac{p_T}{2})^2 + m_x^2}\right)^2 -\frac{p_T^2}{4}} &\mbox{, if $m_x < m_{\tilde\nu}$}\\ \sqrt{\left(\mu_+ + \sqrt{(\mu_+ - \frac{p_T}{2})^2 + m_x^2}\right)^2 -\frac{p_T^2}{4}} &\mbox{, if $m_x > m_{\tilde\nu}$} \end{array} \right. \label{eq:mt2l} \end{equation} where $\mu_{\pm} = \frac{m_{\tilde\chi_1}^2 - m_{\tilde\nu}^2}{2 m_{\tilde\chi_1}} \left(\sqrt{1 + \frac{p_T^2}{4 m_{\tilde\chi_1}^2}} \pm \frac{p_T}{2 m_{\tilde\chi_1}}\right)$ and $p_T$ is the chargino pair total transverse momentum. Note that, neglecting initial state radiation for the gluino pair, we have \begin{equation} p_T = |\vec{p}_{T}(q_1)+\vec{p}_{T}(q_2)+\vec{p}_{T}(q_3)+\vec{p}_{T}(q_4)| \equiv p_T^{trans} \end{equation} Therefore, selecting events with a fixed $p_T^{trans}$, Eq.~(\ref{eq:mt2l}) can be used to obtain $m_{\tilde\chi_1}$ and $m_{\tilde\nu}$. However, this cut considerably reduces the statistical significance of the $m_{T2}$ distribution. \subsubsection*{Squark production} The decay chain (\ref{eq:evtop1}) is relevant for point SN1, where the signal is dominated by gluino pair production, while for points SN2 and SN3, the sparticle production cross-section is dominated by squark pair production or squark-gluino production. In this case we will consider the following event topology: \begin{equation} \tilde q_L \rightarrow q + \tilde\chi_1 \rightarrow q + l + \tilde\nu \label{eq:evtop2} \end{equation} The subsystems in this case are analogous to the ones defined in Eq.~(\ref{eq:subs}), but with one less quark in the final state.
Using the same notation as before, we label them as $m_{T2}^{ql}$, $m_{T2}^{q}$ and $m_{T2}^{l}$, with: \begin{enumerate} \item $m_{T2}^{ql,max}(m_x = m_{\tilde\nu}) = m_{\tilde q}$ \item $m_{T2}^{q,max}(m_x = m_{\tilde\chi_1}) = m_{\tilde q}$ \item $m_{T2}^{l,max}(m_x = m_{\tilde\nu}) = m_{\tilde\chi_1}$ \end{enumerate} The new $m_{T2}^{max}$ functions now obey~\cite{Cho2007,Mt2sub}: \begin{equation} m_{T2}^{ql,max}(m_x) = \left\{ \begin{array}{lr} \frac{m_{\tilde q}^2 - m_{\tilde\nu}^2}{2 m_{\tilde q}} + \sqrt{\left(\frac{m_{\tilde q}^2 - m_{\tilde\nu}^2}{2 m_{\tilde q}}\right)^2 + m_x^2} &\mbox{, if $m_x < m_{\tilde\nu}$}\\ m_{\tilde q}\left(1-\frac{m_{\tilde\chi_1}^2}{2m_{\tilde q}^2} -\frac{m_{\tilde\nu}^2}{2m_{\tilde\chi_1}^2}\right) + \sqrt{m_{\tilde q}^2\left(\frac{m_{\tilde\chi_1}^2}{2m_{\tilde q}^2}-\frac{m_{\tilde\nu}^2}{2m_{\tilde\chi_1}^2}\right)^2 + m_x^2} &\mbox{, if $m_x > m_{\tilde\nu}$} \end{array} \right. \label{eq:mt2ql} \end{equation} \begin{equation} m_{T2}^{q,max}(m_x) = \left\{ \begin{array}{lr} \frac{m_{\tilde q}^2 - m_{\tilde\chi_1}^2}{2 m_{\tilde q}} + \sqrt{\left(\frac{m_{\tilde q}^2 - m_{\tilde\chi_1}^2}{2 m_{\tilde q}}\right)^2 + m_x^2} \end{array} \right. \label{eq:mt2q} \end{equation} while $m_{T2}^{l,max}(m_x)$ is identical to Eq.~(\ref{eq:mt2l}). Note that, for a pair of $m_x$ values ($m_x = 0$ and $m_x \gg m_D$), the gluino subsystems provide 6 constraints on the 3 masses involved, while the squark subsystems provide 5 constraints on 3 masses\footnote{If, as in Eq.~(\ref{eq:mt2l}), we assume a non-zero transverse momentum for the squark pair, Eq.~(\ref{eq:mt2q}) would have two branches and could provide an extra constraint on the masses. However, as in the gluino case, ISR effects for squark pair production are negligible.}. Therefore in both cases there is an arbitrariness on how to extract the mass values. 
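As a sanity check on these endpoint functions, note that the two branches of the expressions of the form of Eqs.~(\ref{eq:mt2qql})/(\ref{eq:mt2ql}) and Eq.~(\ref{eq:mt2qq}) meet at the true daughter mass, where both return the parent mass. A short numerical verification with SN1-like masses (a cross-check sketch, not analysis code):

```python
import math

def mt2_max_sub1(m_P, m_I, m_D, m_x):
    # endpoint function of the Eq.-(mt2qql)/(mt2ql) form:
    # visible = quarks + lepton, daughter = sneutrino of mass m_D
    if m_x < m_D:
        a = (m_P**2 - m_D**2) / (2.0 * m_P)
        return a + math.sqrt(a**2 + m_x**2)
    r1 = m_I**2 / (2.0 * m_P**2)
    r2 = m_D**2 / (2.0 * m_I**2)
    return m_P * (1.0 - r1 - r2) + math.sqrt(m_P**2 * (r1 - r2)**2 + m_x**2)

def mt2_max_sub2(m_P, m_I, m_x):
    # endpoint function of the Eq.-(mt2qq) form:
    # visible = quarks, daughter = intermediate chargino of mass m_I
    if m_x < m_I:
        a = (m_P**2 - m_I**2) / (2.0 * m_P)
        return a + math.sqrt(a**2 + m_x**2)
    return m_P - m_I + m_x

mg, mch, msn = 765.0, 227.0, 7.6   # SN1-like gluino, chargino, sneutrino masses
end1 = mt2_max_sub1(mg, mch, msn, msn)   # both branches give m_gluino here
end2 = mt2_max_sub2(mg, mch, mch)        # likewise at m_x = m_chargino
```

The kink at the true daughter mass is precisely what the simultaneous fit exploits to extract the daughter mass along with the parent mass.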
Since the $m_{T2}^{l}$ distribution requires an extra cut on $p_T$ in order to allow for the determination of $m_{\tilde\chi_1}$ and $m_{\tilde\nu}$, it will necessarily have a much smaller statistical significance than the other $m_{T2}$ distributions. Hence we will make use only of the first two subsystems, which are already sufficient to determine all the masses involved in the process. For that, we adopt the following procedure: \begin{itemize} \item For discrete values of $m_x$, we extract the value of $m_{T2}^{max}(m_x)$ for the subsystems 1 and 2, from the respective $m_{T2}$ distributions, following the algorithm defined in Appendix \ref{sec:extr}; \item We then simultaneously fit the results to the appropriate $m_{T2}^{max}$ functions, Eqs.~(\ref{eq:mt2qql}), (\ref{eq:mt2qq}), (\ref{eq:mt2ql}) or (\ref{eq:mt2q}); \item The mass values and their uncertainties are then extracted from the best fit result. \end{itemize} Although only two $m_{T2}^{max}$ measurements at two different $m_x$ values are already sufficient to obtain all masses, fitting the $m_{T2}^{max}(m_x)$ expressions for a wide range of $m_x$ values has two main advantages. First, it is less sensitive to uncertainties in extracting $m_{T2}^{max}$ from the $m_{T2}$ distributions. Second, it allows us to test our underlying model assumptions, since a poor fit would indicate that the events selected do not correspond to the topologies (\ref{eq:evtop1}) or (\ref{eq:evtop2}). \subsubsection*{Zero-lepton channel} In principle we can also determine the $\tilde\chi^0_1$ and $\tilde\chi^0_2$ masses if we look at the 0 lepton plus $p_T^{\rm miss}$ channel. For point SN1 this channel is dominated by gluino decays to neutralinos: \begin{equation} \tilde g \rightarrow qq + \tilde\chi^0_1/\tilde\chi^0_2 \rightarrow qq + \nu + \tilde\nu \end{equation} The $m_{T2}^{max}(m_x)$ function for this process obeys Eq.~(\ref{eq:mt2qq}) with $m_{\tilde\chi_1} \rightarrow m_{\tilde\chi^0_{1,2}}$.
In this case, the $m_{T2}$ distribution (for a fixed $m_x$) would present two endpoints, one from $\tilde g \rightarrow qq + \tilde\chi^0_2$ and one from $\tilde g \rightarrow qq + \tilde\chi^0_1$. Since the latter will necessarily be at higher $m_{T2}$ values, a simple extraction of $m_{T2}^{max}$ will always give the $\tilde\chi^0_1$ endpoint, with the $\tilde\chi^0_2$ endpoint partially obscured by the $m_{T2}$ distribution from $\tilde\chi^0_1$ events. We have verified that, for the SN1 point, the signal/BG ratio is too small in the 4 jets, 0 lepton plus $p_T^{\rm miss}$ channel, making such a measurement unviable for $\mathcal{L} \sim 100$~fb$^{-1}$. For points SN2 and SN3 the 0 lepton plus $p_T^{\rm miss}$ channel is dominated by $\tilde q$ decays: \begin{equation} \tilde q \rightarrow q + \tilde\chi^0_1/\tilde\chi^0_2 \rightarrow q + \nu + \tilde\nu \end{equation} and the $m_{T2}^{max}(m_x)$ function obeys Eq.~(\ref{eq:mt2q}) with $m_{\tilde\chi_1} \rightarrow m_{\tilde\chi^0_{1,2}}$. Since BR$(\tilde q_R \rightarrow \tilde\chi^0_1) \simeq 100\%$, while BR$(\tilde q_L \rightarrow \tilde\chi^0_2) \simeq 33\%$, the 2 jets, 0 lepton plus $p_T^{\rm miss}$ channel comes mainly from $\tilde q_R$ decays and will have a large cross section. Therefore, in this case it is possible to extract $m_{\tilde\chi^0_1}$, once $m_{\tilde q_R}$ is known. Hence, after determining the $m_{\tilde q}$, $m_{\tilde\chi_1}$ and $m_{\tilde\nu}$ values from $\tilde q \rightarrow \tilde\chi_1 + q$ decays, we will use the zero lepton channel to determine $m_{\tilde\chi^0_1}$ under the assumption that left and right-handed squarks are degenerate. The results obtained applying the procedure described here to the benchmark points SN1--SN3 are presented below in sections~\ref{mt2results1} and \ref{mt2results2}. In our analysis we include all relevant SUSY and SM backgrounds, as well as detector and combinatorics systematics. We always assume $\sqrt{s} = 14$ TeV and $\mathcal{L} = 100$ fb$^{-1}$.
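Before turning to results, the simultaneous-fit step of the procedure can be made concrete with a self-contained toy. The Python sketch below (illustrative only: noiseless endpoints, a coarse grid scan in place of a proper $\chi^2$ fit, and invented mass values) generates $m_{T2}^{max}(m_x)$ samples for the two squark subsystems, Eqs.~(\ref{eq:mt2ql}) and (\ref{eq:mt2q}), from assumed true masses and recovers them by minimizing the summed squared residuals.

```python
import math

def mt2_ql_max(mx, msq, mch, msn):
    # Eq. (mt2ql), piecewise in the trial mass mx
    if mx < msn:
        a = (msq**2 - msn**2) / (2 * msq)
        return a + math.sqrt(a**2 + mx**2)
    c = msq * (1 - mch**2 / (2 * msq**2) - msn**2 / (2 * mch**2))
    b = msq * (mch**2 / (2 * msq**2) - msn**2 / (2 * mch**2))
    return c + math.sqrt(b**2 + mx**2)

def mt2_q_max(mx, msq, mch):
    # Eq. (mt2q)
    a = (msq**2 - mch**2) / (2 * msq)
    return a + math.sqrt(a**2 + mx**2)

mx_grid = [10.0 * k for k in range(0, 51, 5)]   # trial masses 0..500 GeV
true = (775.0, 228.0, 10.0)                     # assumed (msq, mch, msn), toy values
data_ql = [(mx, mt2_ql_max(mx, *true)) for mx in mx_grid]
data_q  = [(mx, mt2_q_max(mx, true[0], true[1])) for mx in mx_grid]

def residual(msq, mch, msn):
    # summed squared deviations of the model curves from the "measured" endpoints
    r = sum((mt2_ql_max(mx, msq, mch, msn) - e) ** 2 for mx, e in data_ql)
    r += sum((mt2_q_max(mx, msq, mch) - e) ** 2 for mx, e in data_q)
    return r

# coarse simultaneous scan over the three masses (the truth lies on the grid)
best = min(((q, c, n) for q in range(700, 851, 5)
                      for c in range(180, 281, 4)
                      for n in range(0, 61, 5)),
           key=lambda m: residual(*m))
```

In the actual analysis the extracted endpoints carry statistical uncertainties and the minimization is a genuine fit, but the logic is the same: each individual curve constrains essentially one mass combination, and only the two subsystems together lift the degeneracy among $m_{\tilde q}$, $m_{\tilde\chi_1}$ and $m_{\tilde\nu}$.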
\subsection{Results for SN1} \label{mt2results1} \FIGURE[t]{ \includegraphics[width=10cm]{mt2-1.eps} \caption{The $m_{T2}$ distributions for the first two subsystems defined in Eq.~(\ref{eq:subs}) for point SN1. The dashed lines show the signal distributions at parton level, the shaded histogram shows the SM contribution and the solid lines show the signal plus background distributions at the detector level, after the cuts Eq.~(\ref{eq:cuts1}) have been applied. All distributions are normalized to unity. }\label{fig:mt2}} To apply the $m_{T2}$ subsystem method for the SN1 point, we consider the event topology Eq.~(\ref{eq:evtop1}), which can be selected with the following set of cuts: \begin{equation} \begin{array}{l} p_T^{\rm miss} > 100\;{\rm GeV},~ n(j) = 4,~ n(l) = 2,~ n(b) = 0,\\ p_T(j_1) > 100\;{\rm GeV},~ p_T(j) > 50\;{\rm GeV},~ p_T(l) > 30\;{\rm GeV}\,. \end{array} \label{eq:cuts1} \end{equation} Furthermore, to reduce the $Z/\gamma + jets$ background, we veto OS-dilepton events with \be 80\;{\rm GeV} < m(l^+l^-) < 100\;{\rm GeV\;\; or }\;\;m(l^+l^-) < 40\;{\rm GeV}\,. \ee The total SM BG for this set of cuts is 0.7 fb, while the SN1 signal is 4.6 fb. Due to the small BG level, our MC samples have only a few events, dominated by $t\bar{t}$ + 2 jets. Although we include the full SM background in our results, the effect is subdominant. A more relevant issue is the jet and lepton combinatorics. For subsystem 1 (2), defined in Eq.~(\ref{eq:subs}), it is necessary to group the 4 jets and 2 leptons (4 jets) into 2 visible groups, $vis^{(1)}$ and $vis^{(2)}$. Several methods have been proposed to deal with the combinatorics issue~\cite{Cho2007,Rajaraman2010}, which usually rely on kinematical correlations between the final states. However, since we are only interested in obtaining $m_{T2}^{max}$, for each event we select the grouping which gives the minimum $m_{T2}$ value for that event.
This way the $m_{T2} < m_{T2}^{max}(m_x)$ relation is still preserved even if the wrong grouping is selected. Nevertheless, initial state radiation (ISR), final state radiation (FSR), signal background and detector energy smearing still affect the $m_{T2}$ distribution, resulting in a tail for values above $m_{T2}^{max}(m_x)$. In Fig.~\ref{fig:mt2}, we present the $m_{T2}^{qql}$ and $m_{T2}^{qq}$ distributions for the SN1 signal plus the SM background, where the trial mass was chosen as the respective daughter mass. For comparison purposes we also present the exact parton level distributions (dashed lines) for the signal. The spikes in the (solid) $m_{T2}$ distributions come from MC fluctuations in the SM background. As can be seen from Fig.~\ref{fig:mt2}, both distributions have an edge at \begin{itemize} \item $m_{T2}^{qql} \sim 760$ GeV and $m_{T2}^{qq} \sim 760$ GeV\,. \end{itemize} The above values agree well with the expected value, $m_{\tilde g}$. Figure~\ref{fig:mt2} also shows that the $m_{T2}^{qql}$ and $m_{T2}^{qq}$ distributions are strongly affected by the cuts, ISR, FSR and energy smearing effects, when compared to the parton level distributions. Furthermore, our solution to the combinatorics usually shifts the $m_{T2}$ distribution to lower values, which diminishes the peak and results in a less evident edge. \FIGURE[t]{ \includegraphics[width=7.3cm]{mt2maxt-1.eps} \includegraphics[width=7.3cm]{mt2max-1.eps} \caption{$m_{T2}^{max}$ for point SN1, as a function of the trial daughter mass $m_x$, for the first two subsystems defined in Eq.~(\ref{eq:subs}). The dashed blue lines show the best fit result obtained using Eqs.~(\ref{eq:mt2qql})--(\ref{eq:mt2qq}). The solid lines show the exact result obtained at parton level. }\label{fig:mt2max}} The distributions in Fig.~\ref{fig:mt2} assume a specific value for the trial mass $m_x$, which was chosen to be the exact daughter mass in each case.
However, as discussed in Sec.~\ref{sec:Masses}, the value of $m_{T2}^{max}$ as a function of $m_x$ allows us to extract both the parent and daughter masses, through Eqs.~(\ref{eq:mt2qql}) and (\ref{eq:mt2qq}). Figure~\ref{fig:mt2max} shows the results obtained from fitting the $m_{T2}^{max}(m_x)$ functions to the $m_{T2}^{max}$ values extracted from the simulated data using the algorithm outlined in the Appendix. As can be seen from Figs.~\ref{fig:mt2max}a and b, the best fit for the $m_{T2}^{qql,max}$ and $m_{T2}^{qq,max}$ curves (dashed blue lines) agrees well with the exact solution (solid lines). The final result for the three masses, taken as the best simultaneous fit to both $m_{T2}^{max}$ curves, is shown in Table~\ref{table:masses}. The approximate precision for $m_{\tilde g}$ and $m_{\tilde\chi_1}$ is under a few percent, while the precision for the $\tilde{\nu}_1$ mass is much worse, around 50\%, with the central value $1.8\sigma$ away from the true one. This is due to its small mass, when compared to the other mass scales, which renders the $m_{T2}^{max}$ expressions weakly dependent on $m_{\tilde\nu}$. Nonetheless, the results clearly point to a very light LSP, with a mass scale much smaller than the other particles involved in the cascade decay. From the results in Fig.~\ref{fig:mt2max} and Table~\ref{table:masses} we conclude that, despite the lack of precision in determining the LSP mass, the $m_{T2}$ subsystem method can still show that the LSP state is much lighter than expected in most MSSM scenarios. This would provide strong evidence for a sneutrino DM scenario, at least for the case of a light gluino/heavy squark spectrum, such as in the SN1 point.
\begin{table}[t] \centering \begin{tabular}{|l|c|c|c|c|c|} \hline & $m_{\tilde g}$ & $m_{\tilde q_L}$ & $m_{\tilde\chi_1}$ & $m_{\tilde\chi^0_1}$ & $m_{\tilde\nu}$ \\ \hline SN1 & $771\pm 9$ & -- & $236\pm 11$ & -- & $34 \pm 15$ \\ Exact value & 765 & 1520--1523 & 227 & 109 & 7.6 \\ \hline SN2 & -- & $786\pm 4$ & $207\pm 10$ & $126\pm 13$ & $14\pm 8$ \\ Exact value & 765 & 775--779 & 228 & 109 & 7.6 \\ \hline SN3 & -- & $710 \pm 2$ & $222 \pm 5$ & $111\pm 7$ & $0 \pm 12$ \\ Exact value & 1000 & 700--704 & 228 & 109 & 7.6 \\ \hline \end{tabular} \caption{Measured mass values in GeV for the points SN1--SN3, obtained using the $m_{T2}$ subsystem method described in the text. The errors shown only include statistical uncertainties and assume an integrated luminosity of 100~fb$^{-1}$.} \label{table:masses} \end{table} \subsection{Results for SN2 and SN3} \label{mt2results2} For points SN2 and SN3 we must consider the squark cascade decay shown in Eq.~(\ref{eq:evtop2}). To select $\tilde q_L \rightarrow \tilde\chi_1 + q$ events we use the following set of cuts: \begin{equation} \begin{array}{l} p_T^{\rm miss} > 100\;{\rm GeV},~ n(j) = 2,~ n(l) = 2,~ n(b) = 0,\\ p_T(j_1) > 100\;{\rm GeV},~ p_T(j) > 30\;{\rm GeV},~ p_T(l) > 30\;{\rm GeV} \end{array} \label{eq:cuts2} \end{equation} and once again veto OS-dilepton events with $80\;{\rm GeV} < m(l^+l^-) < 100$~GeV or $m(l^+l^-) < 40$~GeV. The cross-sections after cuts for points SN2 and SN3 are 32~fb and 36~fb, respectively, while for the SM background we obtain 29~fb. Despite the large background, the bulk of the SM events are concentrated at low $m_{T2}$ values ($< 500$~GeV). Thus the extraction of the $m_{T2}$ endpoint position is almost unaffected by the SM BG, as seen in Fig.~\ref{fig:mt22}. For point SN2, only $\sim 65\%$ of the signal comes from squark pair production, due to contamination from gluino pair production and gluino-squark production.
On the other hand, the SUSY background is much smaller for point SN3, with $\sim 82\%$ of the signal coming from squark pair production. After repeating the same procedure used for extracting the masses in the SN1 case, but using the appropriate $m_{T2}^{max}$ expressions for the squark decay chain, Eqs.~(\ref{eq:mt2ql}) and (\ref{eq:mt2q}), we obtain the best fit result for $m_{\tilde q}$, $m_{\tilde\chi_1}$ and $m_{\tilde\nu}$ shown in Table~\ref{table:masses}. As we can see, for both points the statistical error bars are smaller than the ones for point SN1, due to the larger signal cross-section. Once again the least precise measurement corresponds to $m_{\tilde\nu}$, due to its small value. Nonetheless we can still conclude that the LSP is much lighter than the chargino and squark. \FIGURE[t]{ \includegraphics[width=10cm]{mt2-2.eps} \includegraphics[width=10cm]{mt2-3.eps} \caption{The $m_{T2}^{q}$ and $m_{T2}^{ql}$ distributions for the points SN2 and SN3. The dashed lines show the signal distributions at parton level, the shaded histogram shows the SM contribution and the solid lines show the signal plus background distributions at the detector level, after the cuts Eq.~(\ref{eq:cuts2}) have been applied. All distributions are normalized to unity. }\label{fig:mt22}} As mentioned before, the large $\tilde q\tilde q$ production cross-section for points SN2 and SN3 can still provide one more piece of information about the spectrum. From Table~\ref{tab:decays}, we have BR$(\tilde q_R \rightarrow q + \tilde\chi^0_1)\sim 100\%$. Therefore, if instead of $\tilde q_L$ pair production we consider the $\tilde q_R\tilde q_R$ events, we can use the usual $m_{T2}$ variable with $\tilde q_R$ as the parent, $\tilde\chi^0_1(\nu\tilde\nu)$ as the daughter and the jet as the visible component to measure the $\tilde\chi^0_1$ mass once $m_{\tilde q_R}$ is known.
In the scenarios considered we have $m_{\tilde q_L} \simeq m_{\tilde q_R}$, so we can use the $m_{\tilde q}$ value obtained from our previous results. To select the right-handed squark signal we require: \begin{equation} p_T^{\rm miss} > 200\;{\rm GeV},~ n(j) = 2,~ n(l) = 0,~ n(b) = 0,~ p_T(j_1) > 100\;{\rm GeV} \label{eq:cuts3b} \end{equation} The SN2 (SN3) signal in this channel is 133 (176)~fb, while for the SM BG we have 68~fb. Once again, the distribution for the SM background peaks at low $m_{T2}$ and has almost no impact on the $m_{T2}^{max}$ value. In Fig.~\ref{fig:mt23b} we show the $m_{T2}$ distribution for SN3 with $m_{x} = m_{\tilde\chi^0_1}$, where we see a clear edge at $m_{T2} \sim 690$ GeV, very close to the $m_{\tilde q_R}$ input value. \FIGURE[t]{ \includegraphics[width=10cm]{mt2-3B.eps} \caption{The $m_{T2}^{q}$ distribution for point SN3. The dashed line shows the signal distribution at parton level, while the solid line shows the signal plus background distribution at the detector level, after the $p_T^{\rm miss} > 200$ GeV, $n(j) = 2$, $n(l) = 0$, $n(b) = 0$, $p_T(j_1) > 100$ GeV cuts have been applied. All distributions are normalized to unity. }\label{fig:mt23b}} The $m_{T2}^{q,max}$ values as a function of $m_x$ for the SN3 point are shown in Fig.~\ref{fig:mt2max3b}. We see that the extracted endpoints are very close to their exact values and, due to the large signal statistics and low BG, the error bars are barely visible. Fitting Eq.~(\ref{eq:mt2q}) to the data points in Fig.~\ref{fig:mt2max3b} we obtain: \be \frac{m_{\tilde q_R}^2 - m_{\tilde\chi^0_1}^2}{2m_{\tilde q_R}} = 346~{\rm GeV} \ee which agrees extremely well with the theoretical value, 342~GeV.
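The remaining step — solving the fitted combination for the neutralino mass once the squark mass is fixed — is a one-line inversion of Eq.~(\ref{eq:mt2q}). The Python sketch below (our own naming, purely illustrative numbers, and simple linear error propagation for the uncertainties) makes it explicit.

```python
import math

def chi0_mass(a, msq):
    # invert a = (msq^2 - mchi0^2) / (2 msq) for the neutralino mass
    return math.sqrt(msq * (msq - 2.0 * a))

def chi0_mass_err(a, da, msq, dmsq):
    # linear propagation: mchi0^2 = msq^2 - 2 a msq, so
    # d(mchi0)/d(msq) = (msq - a)/mchi0 and d(mchi0)/d(a) = -msq/mchi0
    m = chi0_mass(a, msq)
    return math.sqrt(((msq - a) / m * dmsq) ** 2 + (msq / m * da) ** 2)

# round trip with illustrative masses (GeV)
msq, mchi0 = 700.0, 109.0
a = (msq**2 - mchi0**2) / (2.0 * msq)
```

Note that the sensitivity to the squark mass enters with the factor $(m_{\tilde q_R} - a)/m_{\tilde\chi^0_1}$, which is large for a light neutralino; this is why the uncertainty on $m_{\tilde q}$ dominates the error budget of this measurement.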
Assuming $m_{\tilde q_L}=m_{\tilde q_R}$ and using the $m_{\tilde q_L}$ value obtained from the $\tilde q_L\tilde q_L$ signal (see Table~\ref{table:masses}), we can compute $m_{\tilde\chi^0_1}$: \be m_{\tilde\chi^0_1} = 111 \pm 7\; {\rm GeV\; (SN3)} \ee where the error in $m_{\tilde q_R}$ has been included when computing the uncertainty on $m_{\tilde\chi^0_1}$. Repeating the same procedure for point SN2 we obtain: \be m_{\tilde\chi^0_1} = 126 \pm 13\; {\rm GeV\; (SN2)} \ee The result in this case is worse than for point SN3 because of the larger SUSY background present in the SN2 signal. This mainly affects the determination of $m_{\tilde q}$, as seen in Table~\ref{table:masses}, which propagates to the central value and uncertainty in $m_{\tilde\chi^0_1}$. Nonetheless, the $\tilde q_R\tilde q_R$ channel still shows that the measured spectrum has a neutral NLSP with mass $\approx m_{\tilde\chi_1}/2$. Such a neutral NLSP is consistent with a bino state, if gaugino mass unification is assumed. Therefore, the combined results of the $\tilde q_L\tilde q_L$ and $\tilde q_R\tilde q_R$ channels would point to the usual MSSM scenario with gaugino mass unification, but with an additional LSP state, which is neutral and very light. This would be further strong evidence for the SNDM model. \FIGURE[t]{ \includegraphics[width=10cm]{mt2max-3B.eps} \caption{$m_{T2}^{q,max}$ for point SN3, as a function of the trial daughter mass $m_x$, in the zero lepton, dijet channel, as discussed in the text. The dashed blue line shows the best fit result obtained using Eq.~(\ref{eq:mt2q}). The solid line shows the exact result obtained at parton level. }\label{fig:mt2max3b}} \subsection{Dilepton Invariant Mass at LHC14} From the results in Table~\ref{table:masses} we see that the $m_{T2}$ method can indicate the presence of a very light LSP neutral particle in the signal.
Moreover, in the case of SN2--3, the $m_{T2}^q$ distribution can give information on an additional invisible sparticle, consistent with the $\tilde\chi^0_1$ in the case of universal gaugino masses. Additional evidence for a sneutrino LSP can be obtained from the properties of dilepton events, as already discussed in Sec.~\ref{sec:lhc7dists}. Although at 7 TeV the dilepton signal is likely too small to allow for the use of the dilepton invariant-mass distributions, at LHC14 these may be exploited to probe the nature of the LSP. \FIGURE[t]{ \includegraphics[width=10cm]{SN1-dilepmass.eps} \includegraphics[width=10cm]{SN2-dilepmass.eps} \includegraphics[width=10cm]{SN3-dilepmass.eps} \caption{OS (red) and SS (blue) dilepton invariant masses for the SN1--3 points (solid) and the corresponding MSSM models (dashed). For frame {\it a)} the cuts in Eq.~(\ref{eq:cuts1}) have been applied, while for frames {\it b)} and {\it c)} the cuts in Eq.~(\ref{eq:cuts2}) were used instead.}\label{fig:dilepmass} } In Fig.~\ref{fig:dilepmass} we show the OS and SS dilepton invariant masses for the three SNDM benchmark points as well as for the corresponding MSSM models. The cuts applied are the same used for the $m_{T2}$ analyses, namely Eq.~(\ref{eq:cuts1}) for point SN1 and Eq.~(\ref{eq:cuts2}) for points SN2--3. We assume that the shape of the SM background distributions can be extracted from data and/or MC, so we neglect their contribution. As we can see, the $m_{l^{+}l^{-}}$ and $m_{l^{\pm}l^{\pm}}$ distributions are drastically different between the SNDM and MSSM scenarios. Besides the larger overall rate of dilepton events, the SNDM points show a much harder $m_{ll}$ distribution. This is due to the large $\tilde\chi_1$--$\tilde\nu$ mass gap, resulting in a much harder $p_T(l)$ spectrum.
Moreover, for the gaugino masses considered here, the $\tilde\chi^0_2$ has a significant BR to $Z+\tilde\chi^0_1$, giving rise to the Z-peak seen in the MSSM OS distributions; in the SNDM scenario, no such peak is present since BR$(\tilde\chi^0_2 \rightarrow \tilde\nu + \nu) \simeq 100\%$. \section{Conclusions} \label{sec:conclude} A mainly right-handed sneutrino with large L/R mixing is an excellent candidate for light cold dark matter with mass below $\sim 10$~GeV. If DM is indeed realized in the form of light sneutrinos, there are important consequences for the SUSY signatures at the LHC. In particular, neutralinos $\tilde\chi^0_1$ and $\tilde\chi^0_2$ appearing in squark and gluino cascades decay invisibly into $\tilde{\nu}_1\nu$, so that there can be up to three different invisible sparticles in an event. Charginos, on the other hand, decay dominantly into charged leptons plus the $\tilde{\nu}_1$ LSP. SUSY events will therefore present a harder $p_T^{\rm miss}$ distribution than expected in MSSM scenarios with a similar sparticle spectrum, and dilepton events will appear at much larger rates both in the OS and SS channels. During the first LHC run, at $\sqrt{s}=$7 TeV and with $\sim 1$~fb$^{-1}$, a signal could already be seen if gluinos and squarks have masses up to $\sim 1$~TeV. Signal distributions such as the lepton and jet multiplicities, as well as SS/OS dilepton rates, may already indicate a light sneutrino as the lightest SUSY particle in the early phase of LHC running. Precision measurements enabling model discrimination should be possible at higher energy and luminosity. For $\sqrt{s}=$14 TeV and $\mathcal{L} = 100$~fb$^{-1}$ we have shown that the sneutrino mass can be measured using the $m_{T2}$ technique with a $\sim 50\%$ precision, which is already sufficient to distinguish between the SNDM and MSSM scenarios with gaugino mass unification.
Furthermore, the dilepton invariant mass distributions can also point to the presence of a light LSP which carries lepton number. The presence of additional invisible sparticles in the decay chains may be inferred from the $p_T^{\rm miss}$ and transverse-mass distributions. We have shown that indeed, for $m_{\tilde q_R}^{}\approx m_{\tilde q_L}^{}$, the $\tilde\chi^0_1$ mass might be measurable with $\sim10\%$ precision. Regarding alternative scenarios with possibly similar signatures, a 7--8~GeV $\tilde\chi^0_1$ LSP in the MSSM with non-universal gaugino masses~\cite{Fornengo:2010mk} (see however \cite{Vasquez:2010ru}) could be distinguished from the case studied here by exploiting, e.g., same-flavor opposite-sign (SFOS) dileptons from $\tilde\chi^0_2\rightarrow \tilde\chi^0_1+Z$, which is absent in the SNDM case. Indeed, the absence of kinematical structure and flavor correlations, typical for the SNDM case, will point to $\tilde\chi_1\rightarrow l^\pm\tilde{\nu}_1$ decays.\footnote{In case of a strong hierarchy among the $\tilde{\nu}_1$ of different flavors there can of course be SFOS dileptons from cascades involving $\tilde\chi_1$'s; however in that case there should also appear the corresponding SFSS events.} Signals quite similar to the SNDM case can in principle arise in the next-to-MSSM, with $<10$~GeV neutralinos as viable DM candidates that can have large ($10^{-5}-10^{-4}$~pb) elastic scattering cross sections~\cite{Vasquez:2010ru,Draper:2010ew,Cao:2011re}. In this case one may have dominantly invisible $\tilde\chi^0_2$ decays through $\tilde\chi^0_2\rightarrow h_2\tilde\chi^0_1$ followed by $h_2\rightarrow\tilde\chi^0_1\tilde\chi^0_1$. Here possible ways of discrimination are, e.g., the $\tilde\chi_1\rightarrow W^\pm\tilde\chi^0_1$ decays and the presence of additional light Higgs states $h_1$ and $a_1$. We conclude that the LHC offers very good prospects to resolve the light mixed sneutrino DM case.
Finally, recall that a corroborating signal is expected in direct dark matter searches, as there is a lower limit on the spin-independent scattering cross section of $\sigma^{\rm SI}\gtrsim10^{-5}$~pb. \acknowledgments GB and AL thank the LPSC Grenoble for hospitality. AL would like to thank Xerxes Tata for useful discussions, and SK gratefully acknowledges discussions with Sanjay Padhi on dilepton searches in CMS. Last but not least, we thank A.~Pukhov for providing SHLA decay-table output in {\tt micrOMEGAs}. This research was supported in part by the U.S. Department of Energy, by the Fulbright Program and CAPES (Brazilian Federal Agency for Post-Graduate Education), and by the French ANR project {\tt ToolsDMColl}, BLAN07-2-194882.
\section{Nets and their executions}\label{sec: nets and their executions} We start by recalling some basic constructions of category theory and some basic facts about Petri nets and their categorical formalization. The notions of \emph{bicategory, pseudofunctor, lax functor and bimodule} are not strictly necessary to understand this paper: They only show up in results that we cite and that could in principle be taken for granted while skimming over the details. In any case, we list these notions here for the reader interested in parsing these results in full depth. The first definition we recall is the one of \emph{bicategory}. Intuitively, bicategories are categories where we also allow for ``morphisms between morphisms'', called \emph{2-cells}. This in turn allows one to define a version of the associativity and identity laws that is weaker than for usual categories, holding only up to isomorphism. \begin{definition}[Bicategory]\label{bicat} A \emph{(locally small) bicategory} $\mathcal{B}$ consists of the following data. \begin{enumerate} \item \label{bica:uno} A class $\mathcal{B}_o$ of \emph{objects}, denoted with Latin letters like $A,B,\dots$, also called \emph{0-cells}. \item \label{bica:due} A collection of (small) categories $\mathcal{B}(A,B)$, one for each $A,B\in \mathcal{B}_o$, whose objects are called \emph{1-cells} or \emph{arrows} with \emph{domain} $A$ and \emph{codomain} $B$, and whose morphisms $\alpha : f \Rightarrow g$ are called \emph{2-cells} or \emph{transformations} with domain $f$ and codomain $g$; the composition law $\circ$ in $\mathcal{B}(A,B)$ is called \emph{vertical composition} of 2-cells. \item A family of \emph{compositions} \[ \bullet_{\mathcal{B},ABC} : \mathcal{B}(B,C)\times\mathcal{B}(A,B) \to \mathcal{B}(A,C) : (g,f)\mapsto g\bullet f \] defined for any triple of objects $A,B,C$.
This is a family of functors between hom-categories, and its action on morphisms is called \emph{horizontal composition} of 2-cells, which we denote $\alpha\bullet\beta$. \item \label{bica:tre} For every object $A\in \mathcal{B}_o$ there is an arrow $\mathrm{id}_A\in \mathcal{B}(A,A)$, the \emph{identity} 1-cell. \end{enumerate} To this basic structure we add\index{associator!monoidal ---} \begin{enumerate} \item a family of invertible maps $\alpha_{fgh} : (f \bullet g) \bullet h \cong f \bullet (g \bullet h)$ natural in all its arguments $f,g,h$, which taken together form the \emph{associator} isomorphisms; \item a family of invertible maps $\lambda_f : \mathrm{id}_B \bullet f \cong f$ and $\varrho_f : f \bullet \mathrm{id}_A \cong f$ natural in its component $f : A \to B$, which taken together form the \emph{left unitor} and \emph{right unitor} isomorphisms. \end{enumerate} Finally, these data are subject to the following axioms. \index{horizontal composition} \index{_aaa_boxminus@$\bullet$} \begin{enumerate} \item For every quadruple of 1-cells $f,g,h,k$ we have that the diagram \[ \vcenter{\xymatrix{ ((f\bullet g)\bullet h)\bullet k \ar[d]_{\alpha_{f,g,h}\bullet k}\ar[r]^{\alpha_{fg,h,k}} & (f\bullet g)\bullet (h\bullet k) \ar[r]^{\alpha_{f,g,hk}} & f\bullet (g\bullet (h\bullet k))\\ (f\bullet (g\bullet h))\bullet k \ar[rr]_{\alpha_{f,gh,k}} && f\bullet ((g\bullet h)\bullet k)\ar[u]_{f\bullet \alpha_{g,h,k}} }} \] commutes. \item For every pair of composable 1-cells $f,g$, \[ \vcenter{\xymatrix{ (f \bullet \mathrm{id}_A)\bullet g\ar[dr]_{\varrho_f\bullet g}\ar[rr]^{\alpha_{f,\mathrm{id}_A,g}} && f\bullet(\mathrm{id}_A\bullet \, g)\ar[dl]^{f\bullet \lambda_g}\\ & f\bullet g }} \] commutes. \end{enumerate} \end{definition} \begin{definition}[2-category] A \emph{2-category} is a bicategory where the associator and unitors are the identity natural transformations.
In other words, a 2-category is precisely a bicategory where horizontal composition is strictly associative, and the identities $\mathrm{id}_A$ work as strict identities for the horizontal composition operation. \end{definition} Some sources call `2-category' what we call a bicategory, and `strict 2-category' what we call a 2-category. Something similar happens for monoidal categories: a monoidal category is called \emph{strict} if its associator and left/right unitors are identity natural transformations. This is not by chance: a (strict) monoidal category $\mathcal V$ is exactly a (strict) 2-category with a single object $\ast$ (so that the category $\mathcal V$ can be identified with the category of endomorphisms of $\ast$). \begin{example}\leavevmode \begin{itemize} \item There is a 2-category $\Cat$ where 0-cells are small categories, and the hom categories $\Cat(C,D)$ are the categories of functors and natural transformations. Composition of functors is strictly associative and unital. \item There is a bicategory of \emph{profunctors}, as defined in \cite{benabou2000distributors,cattani2005profunctors} and \cite[Ch. 5]{coend-calcu}. Composition of profunctors is associative up to a canonical isomorphism. \item Every category $\mathcal{C}$ is trivially a 2-category by taking the 2-cells to be identities. This is sometimes called the `discrete' 2-category obtained from a category $\mathcal{C}$. \item There is a 2-category where 0-cells are partially ordered sets $(P,\le)$, and where the category $\mathsf{Pos}(P,Q)$ is the partially ordered set of monotone functions $f : P \to Q$ and pointwise order ($f\preceq g$ iff $\forall p.fp\le gp$ in $Q$). Composition is strictly associative and unital. 
\end{itemize} \end{example} \begin{remark} The fact that for every bicategory $\mathcal B$ the maps \[ \bullet_{\mathcal{B},ABC} : \mathcal{B}(B,C)\times\mathcal{B}(A,B) \to \mathcal{B}(A,C) : (g,f)\mapsto g\bullet f \] are functors with domain a product category entails the following identity: \begin{quote} Given any diagram of 2-cells like \[\xymatrix{ A\ruppertwocell^{}{\alpha} \rlowertwocell_{}{\beta} \ar[r] & B \ruppertwocell^{}{\gamma} \rlowertwocell_{}{\delta} \ar[r] & C\\ }\] we have that $(\delta\bullet\beta)\circ (\gamma\bullet\alpha) = (\delta\circ\gamma) \bullet (\beta\circ\alpha)$. This is usually called the \emph{interchange law} in $\mathcal B$. \end{quote} \end{remark} Pseudofunctors and lax functors, defined below, are some of the most widely used notions of morphism between bicategories. These are useful to parse the deep results on which~\cref{sec: internalization} relies. \begin{definition}[Pseudofunctor, (co)lax functor]\label{colaxe}\index{functor!pseudo---} Let $\mathcal{B},\mathcal{C}$ be two bicategories; a \emph{pseudofunctor} consists of \begin{enumerate} \item a function $F_o : \mathcal{B}_o \to \mathcal C_o$, \item a family of functors $F_{AB} : \mathcal{B}(A,B) \to \mathcal{C}(FA, FB)$, \item an invertible 2-cell $\mu_{fg} : Ff \circ Fg \Rightarrow F(fg)$ for each $A \xrightarrow{g}B\xrightarrow{f} C$, natural in $f$ and $g$ (with respect to vertical composition), and an invertible 2-cell $\eta_A : \mathrm{id}_{FA} \Rightarrow F(\mathrm{id}_A)$ for each object $A$.
\end{enumerate} These data are subject to the following commutativity conditions, for every 1-cell $f : A \to B$ and every composable triple of 1-cells $f,g,h$: \[ \xymatrix{ Ff\circ \mathrm{id}_A \ar[r]^{\varrho_{Ff}}\ar[d]_{Ff * \eta} & Ff\ar[d]^{F(\varrho_f)} & \mathrm{id}_B \circ Ff\ar[d]_{\eta * Ff} \ar[r]^{\lambda_{Ff}}& Ff\ar[d]^{F(\lambda_f)}\\ Ff \circ F(\mathrm{id}_A)\ar[r]_{\mu_{f,\mathrm{id}_A}} & F(f \circ \mathrm{id}_A) & F(\mathrm{id}_B)\circ Ff\ar[r]_{\mu_{\mathrm{id}_B,f}} & F(\mathrm{id}_B \circ f)\\ (Ff\circ Fg) \circ Fh \ar[rrr]^{\alpha_{Ff,Fg,Fh}}\ar[d]_{\mu_{fg} * Fh} &&& Ff\circ (Fg\circ Fh)\ar[d]^{Ff * \mu_{gh}}\\ F(fg)\circ Fh \ar[d]_{\mu_{fg,h}} &&& Ff\circ F(gh)\ar[d]^{\mu_{f,gh}}\\ F((fg)h) \ar[rrr]_{F \alpha_{fgh}}&&& F(f(gh)) } \] (we invariably denote $\alpha,\lambda,\varrho$ the associator and unitors of $\mathcal{B},\mathcal{C}$). A \emph{lax} functor is defined by the same data, but both the 2-cells $\mu : Ff \circ Fg \Rightarrow F(fg)$ and $\eta : \mathrm{id}_{FA} \Rightarrow F(\mathrm{id}_A)$ can be non-invertible; the same coherence diagrams as above hold. A \emph{colax} functor reverses the direction of the cells $\mu,\eta$, and the commutativity of the diagrams above changes accordingly.\index{functor!lax ---} \end{definition} Another notion that we will make heavy use of is the one of \emph{comonad}. On the other hand, \emph{monads} and morphisms between them, called \emph{bimodules}, will only appear in~\cref{thm: zanasi}. We will only use a straightforward consequence of this theorem, and the hurrying reader need not linger too much on these definitions.
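Before stating the definition, it may help to keep a concrete instance in mind: the list monad on sets. The Python sketch below (an encoding of our own choosing, purely for illustration) takes $T$ to send a set to lists over it, $\eta$ to be the singleton map and $\mu$ to flatten one level of nesting; the monad laws of the next definition then become directly checkable identities.

```python
from itertools import chain

def T(f):
    # action of the list endofunctor on morphisms: T f = map f
    return lambda xs: [f(x) for x in xs]

def eta(x):
    # unit eta_X : X -> T X, the singleton map
    return [x]

def mu(xss):
    # multiplication mu_X : T T X -> T X, flattening one level of nesting
    return list(chain.from_iterable(xss))

xs = [1, 2, 3]                       # an element of T X
xsss = [[[1], [2, 3]], [[], [4]]]    # an element of T T T X
```

Here $\mu\circ(T*\mu)$ is computed as `mu(T(mu)(xsss))`, while $\mu\circ(\mu*T)$ is `mu(mu(xsss))`, since the component of $\mu*T$ at $X$ is just $\mu_{TX}$; associativity says the two agree, and the unit laws say that `mu(eta(xs))` and `mu(T(eta)(xs))` both return `xs`.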
\begin{definition}[Monad, comonad]\label{muso_da_mona} Let $\mathcal{C}$ be a category; a \emph{monad} on $\mathcal{C}$ consists of an endofunctor $T : \mathcal{C}\to \mathcal{C}$ endowed with two natural transformations \begin{itemize} \item $\mu : T\circ T\Rightarrow T$, the \emph{multiplication} of the monad, and \item $\eta : \mathrm{id}_\mathcal{C} \Rightarrow T$, the \emph{unit} of the monad, \end{itemize} such that the following axioms are satisfied: \begin{itemize} \item the multiplication is associative, i.e. the diagram \[ \vcenter{\xymatrix{ T\circ T\circ T\ar[r]^{T *\mu}\ar[d]_{\mu *T} & T\circ T \ar[d]^\mu\\ T\circ T \ar[r]_\mu & T } } \] is commutative, i.e. the equality of natural transformations $\mu\circ (\mu * T) = \mu \circ (T * \mu)$ holds; \item the multiplication has the transformation $\eta$ as unit, i.e. the diagram \[ \vcenter{\xymatrix{ T \ar[r]^{\eta *T}\ar@{=}[dr]& T\circ T \ar[d]_\mu & T\ar[l]_{T*\eta} \ar@{=}[dl]\\ & T & } } \] is commutative, i.e. the equality of natural transformations $\mu\circ (\eta *T)=\mu\circ (T * \eta)= \mathrm{id}_T$ holds. \end{itemize} Dually, let $\mathcal{C}$ be a category; a \emph{comonad} on $\mathcal{C}$ consists of an endofunctor $T : \mathcal{C}\to \mathcal{C}$ endowed with two natural transformations \begin{itemize} \item $\sigma : T \Rightarrow T\circ T$, the \emph{comultiplication} of the comonad, and \item $\epsilon : T\Rightarrow \mathrm{id}_\mathcal{C}$, the \emph{counit} of the comonad, \end{itemize} such that the following axioms are satisfied: \begin{itemize} \item the comultiplication is coassociative, i.e. the diagram \[ \vcenter{\xymatrix{ T\circ T\circ T\ar@{<-}[r]^{T *\sigma}\ar@{<-}[d]_{\sigma *T} & T\circ T \ar@{<-}[d]^\sigma\\ T\circ T \ar@{<-}[r]_\sigma & T } } \] is commutative; \item the comultiplication has the transformation $\epsilon$ as counit, i.e.
the diagram \[ \vcenter{\xymatrix{ T \ar@{<-}[r]^{\epsilon *T}\ar@{=}[dr]& T\circ T \ar@{<-}[d]_\sigma & T\ar@{<-}[l]_{T*\epsilon} \ar@{=}[dl]\\ & T & } } \] is commutative. \end{itemize} \end{definition} \begin{definition}[Bimodule] Given a bicategory $\mathcal B$ having finite colimits (in the 2-categorical sense of \cite{2catlimits}), define the 2-category $\cate{Mod}(\mathcal B)$ of \emph{bimodules} as in \cite[2.19]{Zanasi2018}: \begin{itemize} \item 0-cells are the monads in $\mathcal B$; \item 1-cells $T \to S$ are \emph{bimodules}, i.e. 1-cells $H : C \to D$ (assuming $T$ is a monad on $C$, and $S$ a monad on $D$) equipped with suitable action maps $\rho : HT \to H$ and $\lambda : SH\to H$, satisfying suitable axioms expressing the fact that $T$ acts on the right on $H$, via $\rho$ (resp., $S$ acts on the left on $H$, via $\lambda$); \item 2-cells are natural transformations $\alpha : H \Rightarrow K : T\to S$ compatible with the action maps. \end{itemize} \end{definition} \subsection{Categorical Petri nets} Having recalled some of the category theory we are going to use, we now summarize some needed definitions underlying the study of Petri nets from a categorical perspective. \begin{notation} Let $S$ be a set; a multiset is a function $S \to \Naturals$. Denote with $\Msets{S}$ the set of multisets over $S$. Multiset sum and difference (the latter only partially defined) are computed pointwise and will be denoted with $\oplus$ and $\ominus$, respectively. The set $\Msets{S}$ together with $\oplus$ and the empty multiset is isomorphic to the free commutative monoid on $S$. \end{notation} \begin{definition}[Petri net]\label{def: Petri net} A \emph{Petri net} is a pair of functions $T \xrightarrow{s,t} \Msets{S}$ for some sets $T$ and $S$, called the sets of transitions and places of the net, respectively. The functions $s,t$ are called \emph{input and output} functions, respectively, or equivalently \emph{source and target}.
A \emph{morphism of nets} is a pair of functions $f: T \to T'$ and $g: S \to S'$ such that the following square commutes, with $\Msets{g}: \Msets{S} \to \Msets{S'}$ the obvious lifting of $g$ to multisets: % % \begin{equation*} \begin{tikzcd} {\Msets{S}} & {T} & {\Msets{S}} \\ {\Msets{S'}} & {T'} & {\Msets{S'}} \arrow["{s}"', from=1-2, to=1-1] \arrow["{s'}", from=2-2, to=2-1] \arrow["{t'}"', from=2-2, to=2-3] \arrow["{t}", from=1-2, to=1-3] \arrow["{\Msets{g}}"', from=1-1, to=2-1] \arrow["{\Msets{g}}", from=1-3, to=2-3] \arrow["{f}" description, from=1-2, to=2-2] \end{tikzcd} \end{equation*} % Petri nets and their morphisms form a category, denoted $\Petri$. Details can be found in~\cite{Meseguer1990}. \end{definition} \begin{definition}[Markings and firings]\label{def: Petri net firing} A \emph{marking} for a net $T \xrightarrow{s,t} \Msets{S}$ is an element of $\Msets{S}$, representing a distribution of tokens in the net places. A transition $u$ is \emph{enabled} in a marking $M$ if $M \ominus s(u)$ is defined. An enabled transition can \emph{fire}, moving tokens in the net. Firing is considered an atomic event, and the marking resulting from firing $u$ in $M$ is $M \ominus s(u) \oplus t(u)$. Sequences of firings are called \emph{executions}. \end{definition} The main insight of categorical semantics for Petri nets is that the information contained in a given net is enough to generate a free symmetric strict monoidal category representing all the possible ways to run the net. There are multiple ways to do this~\cite{Sassone1995,Genovese2019c,Genovese2019b,Master2020, Baez2021}. In this work, we embrace the \emph{individual-token philosophy}, where tokens are considered distinct and distinguishable and thus require the category in~\cref{def: executions individual token philosophy} to have non-trivial symmetries. 
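For intuition, the token game of~\cref{def: Petri net firing} can be sketched in a few lines of Python. This is a toy encoding of our own (not part of the formal development), representing multisets as counters:

```python
# Toy sketch of the firing rule: multisets over places as collections.Counter.
from collections import Counter

# A net is a pair of functions s, t : T -> multisets over S;
# here both are dictionaries indexed by transition names.
s = {"u": Counter({"A": 2}), "v": Counter({"B": 1})}
t = {"u": Counter({"B": 1}), "v": Counter({"C": 1})}

def enabled(marking, u):
    """u is enabled in M iff M - s(u) is defined, i.e. s(u) <= M pointwise."""
    return all(marking[p] >= n for p, n in s[u].items())

def fire(marking, u):
    """Atomic firing: the resulting marking is M - s(u) + t(u)."""
    assert enabled(marking, u), f"{u} is not enabled"
    return marking - s[u] + t[u]

M = Counter({"A": 2})
M = fire(M, "u")   # consumes two tokens in A, produces one in B
M = fire(M, "v")   # consumes the token in B, produces one in C
print(M)           # Counter({'C': 1})
```

Note that `Counter` subtraction silently drops non-positive counts, which is why the `enabled` check must be performed before firing.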
\begin{definition}[Category of executions -- individual-token philosophy]\label{def: executions individual token philosophy} Let $N: T \xrightarrow{s,t} \Msets{S}$ be a Petri net. We can generate a \emph{free symmetric strict monoidal category (\textsc{fssmc}\xspace)}, $\Free{N}$, as follows: % % \begin{itemize} \item The monoid of objects is the free monoid generated by $S$. Monoidal product of objects $A,B$ is denoted with $A \otimes B$. \item Morphisms are generated by $T$: each $u \in T$ corresponds to a morphism generator $(u,su,tu)$, pictorially represented as an arrow $su \xrightarrow{u} tu$; morphisms are obtained by considering all the formal (monoidal) compositions of generators and identities. \end{itemize} % A detailed description of this construction can be found in~\cite{Master2020}. \end{definition} In this definition, objects represent markings of a net. For instance, the object $A \otimes A \otimes B$ means ``two tokens in $A$ and one token in $B$''. Morphisms represent executions of a net, mapping markings to markings. A marking is reachable from another one if and only if there is a morphism between them. An example is provided in~\cref{fig: execution of a net}. \begin{figure} \begin{small} \begin{center} \input{Images/petriNetExecution.tex} \end{center} \caption{Graphical representation of a net's execution.}\label{fig: execution of a net} \end{small} \end{figure} \section{Engineering perspective}\label{sec: engineering perspective} We deem it wise to spend a few words on why we consider this way of doing things advantageous from an application perspective. Petri nets have been considered as a possible way of producing software for a long time, with some startups even using them as a central tool in their product offering~\cite{StateboxTeam2019}. Providing some form of hierarchical calling is necessary to make the idea of ``Petri nets as a programming language/general-purpose design tool'' practical.
Our definition of hierarchy has the advantage of not making hierarchical nets more expressive than Petri nets. If this seems like a downside, notice that, as a consequence, the decidability of any reachability-related question is exactly as for Petri nets, which is a great advantage from the point of view of model checking. This assertion is justified by internalization, which allows us to reduce hierarchical nets back to Petri nets. A further advantage is that we can use already widespread tools for reachability checking~\cite{UniversityofTorino2018} to answer reachability questions for our hierarchical nets, without necessarily having to produce new ones. Moreover, and more importantly, our span formalism works really well in modelling net behaviour in a distributed setting. To better understand this, imagine an infrastructure where each Petri net is considered as a \emph{smart contract} (as it would be, for instance, if we were to implement nets as smart contracts on a blockchain). A smart contract is nothing more than a piece of code residing at a given address. Interaction with smart contracts is \emph{transactional}: One sends a request to the contract address with some data to be processed (for example a list of functions to be called on some parameters). The smart contract executes as per the transaction and returns the processed data. In our Petri net example things do not change: A user sends a message consisting of a net address, the transition the user intends to fire, and some transaction data. The infrastructure replies affirmatively or negatively depending on whether the transition can be fired, which amounts to accepting or rejecting the transaction. As we already stressed, this is particularly suitable for blockchain-related contexts, and it is how applications such as~\cite{StateboxTeam2017a} implement Petri nets in their services.
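The request/reply protocol just described can be sketched as follows. This is a hypothetical toy of our own making (the class name \texttt{NetContract}, the message format, and the \texttt{handle} method are inventions for illustration, not any actual blockchain API); it only mirrors the accept/reject behaviour described above:

```python
# Hypothetical sketch: a net deployed at an address, answering firing
# requests transactionally (accept iff the requested transition is enabled).
from collections import Counter

class NetContract:
    def __init__(self, address, s, t, marking):
        # s, t : transition name -> multiset of places (input/output functions)
        self.address, self.s, self.t = address, s, t
        self.marking = marking

    def handle(self, request):
        """Fire the requested transition atomically, or reject the transaction."""
        u = request["transition"]
        if all(self.marking[p] >= n for p, n in self.s[u].items()):
            self.marking = self.marking - self.s[u] + self.t[u]
            return {"status": "accepted"}
        return {"status": "rejected"}

# Addresses as in the figure: user dbbfe69836 asks net 832344009d to fire t1.
net = NetContract("832344009d",
                  s={"t1": Counter({"A": 1})},
                  t={"t1": Counter({"B": 1})},
                  marking=Counter({"A": 1}))
print(net.handle({"sender": "dbbfe69836", "transition": "t1"}))  # accepted
print(net.handle({"sender": "dbbfe69836", "transition": "t1"}))  # rejected: A is now empty
```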
\begin{figure}[ht!]\centering \input{Images/engineeringNet.tex} \caption{In this diagram we describe the interaction between a user and a net, with downward-pointing arrows representing the flow of time. The user, having id \texttt{dbbfe69836}, sends a request to a net having address \texttt{832344009d}. The user is requesting to fire transition $t_1$ in the net. As the transition is enabled and able to fire, the request is granted, the state of the net is updated, and a reply is sent to the user.} \end{figure} From this point of view, a hierarchical net would work exactly as a standard Petri net, with the exception that, in sending a transaction to the parent net, the user also has to specify, in the transaction data, a proper execution of the child net corresponding to the firing transition. \begin{figure*}[ht!]\centering \input{Images/engineeringHierarchicalNet.tex} \caption{In this diagram we describe the interaction between a user and a hierarchical net. This time the user, having id \texttt{dbbfe69836}, sends a request to a net having address \texttt{832344009d}. This net is hierarchical, so in calling transition $t_1$ in the parent net, the user also has to provide a valid execution for its child. This is provided as transaction data, in this case $u_1 \cp u_2$. The parent net stores the address of the child net corresponding to $t_1$, which in this case is \texttt{2f9b1ee0dc}. The request to fire $u_1$ and then $u_2$ is forwarded to \texttt{2f9b1ee0dc}, which changes its state and responds affirmatively. This means that \texttt{832344009d} can itself change its state and respond affirmatively to \texttt{dbbfe69836}.
Should any of these steps fail, the entire transaction is rejected and each net reverts to its previous state.} \end{figure*} Again, from a smart contract standpoint, this means that the smart contract corresponding to the parent net will call the contract corresponding to the child net with some execution data, and will respond affirmatively to the user only if the generated call resolves positively. Recalling the results in previous sections of this work, all the possible ways of executing the contracts above form a category, which is obtained by internalizing the hierarchical net via~\cref{thm: internalization}. Since internalized categories are free, they are presented by Petri nets, which we can feed to any mainstream model checker. Now, all sorts of questions about liveness and interaction of the contracts above can be analyzed by model-checking the corresponding internalized net. This provides an easy way to analyze complex contract interactions, relying on tools that have been debugged and computationally optimized for decades. \section{Local semantics for Petri nets}\label{sec: local semantics for Petri nets} We concluded the last section by pointing out reasons why defining a semantics for hierarchical nets is less intuitive than one would initially expect. Moreover, requiring the transition firings in the parent net to be considered as atomic events basically rules out the majority of the previous approaches to hierarchical Petri nets, such as the ones sketched in~\cite{Jensen2009, oswald1990environment}. Embracing an engineering perspective, we could get away with some ad-hoc solution to reconcile the fact that parent and child net topologies are unrelated. One possible way, for instance, would be imposing constraints on the shapes of the parent net and its children. However, when defining things ad hoc, the likelihood of unforeseen corner cases and situations we do not know how to handle becomes high.
To avoid this, we embrace a categorical perspective and define things up to some degree of canonicity. Making good use of the categorical work already carried out on Petri nets, our goal is to leverage it to arrive at a plausible definition of categorical semantics for hierarchical nets. Our strategy is to consider a hierarchical net as an extension of a Petri net: The parent net will be the Petri net we extend, whereas the children nets will be encoded in the extension. This is precisely the main idea contained in~\cite{Genovese2020}, that is, the idea of describing net extensions with different varieties of monoidal functors. Indeed, we intend to show how the theory presented in~\cite{Genovese2020}, which initially served a wholly different purpose, can be reworked to represent hierarchical nets with minimal effort. As for semantics, we will use strict monoidal functors, and we call the resulting semantics \emph{local} because the strict-monoidality requirement amounts to endowing tokens with properties that cannot be shared with other tokens. To understand this naming choice a little better, it may be worth comparing it with the notion of \emph{non-local semantics}, defined in terms of lax-monoidal-lax functors, which we gave in~\cite{Genovese2021a}. \begin{definition}[Local semantics for Petri nets]\label{def: PetriS} Given a strict monoidal category $\Semantics$, a \emph{Petri net with a local $\Semantics$-semantics} is a pair $\NetSem{N}$, consisting of a Petri net $N$ and a strict monoidal functor \[\Fun{N}: \Free{N} \to \Semantics.\] A morphism $F: \NetSem{M} \to \NetSem{N}$ is just a strict monoidal functor $F: \Free{M} \to \Free{N}$ such that $\Fun{M} = F \cp \Fun{N}$, where we denote composition in diagrammatic order; i.e.\ given $f\colon c\to d$ and $g\colon d\to e$, we denote their composite by $(f\cp g)\colon c\to e$.
Nets equipped with $\Semantics$-semantics and their morphisms form a monoidal category denoted $\PetriS{\Semantics}$, with the monoidal structure arising from the product in $\Cat$. \end{definition} In~\cite{Genovese2020}, we used local semantics to describe guarded Petri nets, using $\Span$ as our category of choice. We briefly summarize this, as it will become useful later. \begin{definition}[The category $\Span$] We denote by $\Span$ the 1-category of sets and spans, where isomorphic spans are identified. This category is symmetric monoidal. From now on, we will work with the \emph{strictified} version of $\Span$. \end{definition} \begin{notation} Recall that a morphism $f\colon A\to B$ in $\Span$ consists of a set $S$ and a pair of functions $A\leftarrow S\to B$. When we need to extract this data from $f$, we write % % \begin{equation*} A\From{f_1}S_f\To{f_2}B \end{equation*} % We sometimes consider the span as a function $f\colon S_f\to A\times B$; thus we may write $f(s)=(a,b)$ for $s\in S_f$ with $f_1(s)=a$ and $f_2(s)=b$. \end{notation} \begin{definition}[Guarded nets with side effects] \label{def: guarded net} A \emph{guarded net with side effects} is an object of $\PetriSpan$. A morphism of guarded nets with side effects is a morphism in $\PetriSpan$. \end{definition} \begin{example} Let us provide some intuition behind the definition of $\PetriSpan$. Given a net $N$, its places (generating objects of $\Free{N}$) are sent to sets. Transitions (generating morphisms of $\Free{N}$) are mapped to spans. Spans can be understood as \emph{relations with witnesses}, provided by elements in the apex of the span: Each path from the span domain to its codomain is indexed by some element of the span apex, as shown in \cref{fig: semantics in span}. Witnesses allow considering different paths between the same elements. These paths represent the act of processing the property a token is endowed with according to some side effect.
Indeed, an element in the domain can be sent to different elements in the codomain via different paths. We interpret this as \emph{non-determinism}: the firing of the transition is not only a matter of the tokens input and output; it also includes the chosen path, which we interpret as having side-effects interpreted outside of our model. \end{example} \begin{figure}[!ht]\centering \scalebox{0.8}{ \begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto] \node [place,colored tokens={red},] (1a) at (0,0){}; % \node [transition] (2a) at (1.5,0) {} edge [inarrow, pre] (1a); % \node [place,tokens=0] (3a) at (3,0) {} edge [inarrow, pre] (2a); % \node [transition] (4a) at (4.5,0) {} edge [inarrow, pre] (3a); % \node [place,tokens=0] (5a) at (6,0) {} edge [inarrow, pre] (4a); % \draw[thick, draw=black!75, fill=black!20] (0,2) ellipse (0.25cm and 0.75cm); \draw[thick, draw=black!75, fill=black!20] (3,2) ellipse (0.25cm and 0.75cm); \draw[thick, draw=black!75, fill=black!20] (6,2) ellipse (0.25cm and 0.75cm); \node[circle,fill=red, draw=gray!75, inner sep=0pt,minimum size=4pt] (red) at (0,1.7) {}; \node[circle,fill=blue, draw=gray!75, inner sep=0pt,minimum size=4pt] (blue) at (0,2.3) {}; \node[circle,fill=yellow, draw=gray!75, inner sep=0pt,minimum size=4pt] (yellow) at (3,1.7) {}; \node[circle,fill=green, draw=gray!75, inner sep=0pt,minimum size=4pt] (green) at (3,2.3) {}; \node[circle,fill=brown, draw=gray!75, inner sep=0pt,minimum size=4pt] (brown) at (5.9,2.4) {}; \node[circle,fill=purple, draw=gray!75, inner sep=0pt,minimum size=4pt] (purple) at (6.1,2) {}; \node[circle,fill=orange, draw=gray!75, inner sep=0pt,minimum size=4pt] (orange) at (5.9,1.6) {}; \draw[->, out=20, in=160] (blue) to node[midway, above] {$s_1$} (green); \draw[->, out=0, in=180] (red) to node[midway, above] {$s_2$} (green); \draw[->, out=-20, in=220] (red) to node[midway, below] {$s_3$} (green); \draw[->, out=0, in=180] (yellow) to node[midway, below] {$z_2$} (purple); \draw[->, out=0, 
in=180] (yellow) to node[midway, above] {$z_1$} (brown); \draw[dotted, -] (1a.north) -- (0,1.25); \draw[dotted, -] (3a.north) -- (3,1.25); \draw[dotted, -] (5a.north) -- (6,1.25); \node[transition,dotted,fill=none, draw=black!50, inner sep=0pt,minimum height=65pt] (left) at (1.5,2.1) {}; \node[transition,dotted,fill=none, draw=black!50, inner sep=0pt,minimum height=40pt] (right) at (4.5,1.9) {}; \draw[dotted, -] (2a.north) -- (left.south); \draw[dotted, -] (4a.north) -- (right.south); \end{tikzpicture}} \caption{Semantics in $\Span$}\label{fig: semantics in span} \end{figure} In \cref{fig: semantics in span} the composition of paths is the empty span: Seeing things from a reachability point of view, the process given by firing the left transition and then the right will never occur. This is because the rightmost transition has a guard that only accepts yellow tokens, so that a green token can never be processed by it. This is witnessed by the fact that there is no path connecting the green dot with any dot on its right. The relation with reachability can be made precise by recasting~\cref{def: executions individual token philosophy}. \begin{definition}[Markings for guarded nets]\label{def: marking guarded Petri net} Given a guarded Petri net with side effects $\NetSem{N}$, a \emph{marking} for $\NetSem{N}$ is a pair $(X, x)$ where $X$ is an object of $\Free{N}$ and $x \in \Fun{N}X$. We say that a marking $(Y,y)$ is \emph{reachable} from $(X,x)$ if there is a morphism $f: X \to Y$ in $\Free{N}$ and an element $s \in S_f$ such that $\Fun{N}f(s) = (x,y)$. \end{definition} \section{Semantics for hierarchical nets}\label{sec: semantics for hierarchical nets} In the span semantics we can encode externalities in the tips of the spans to which we send transitions. That is, given a bunch of tokens endowed with some properties, to fire a transition, we need to provide a \emph{witness} that testifies how these properties have to be handled. 
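The guard mechanism of \cref{fig: semantics in span} can be sketched concretely. Below is a toy encoding of our own devising, purely for intuition: a span of finite sets is a list of witnessed pairs, and composition is computed as a pullback, matching middle elements. Composing the two transitions of the figure yields the empty span, i.e.\ unreachability:

```python
# A span A <- S -> B of finite sets encoded as witnessed pairs (w, a, b).
# Left transition: blue/red tokens become green; right one accepts only yellow.
left = [("s1", "blue", "green"), ("s2", "red", "green"), ("s3", "red", "green")]
right = [("z1", "yellow", "brown"), ("z2", "yellow", "purple")]

def compose(f, g):
    """Pullback composition: a witness of the composite is a compatible pair."""
    return [((w1, w2), a, c)
            for (w1, a, b1) in f
            for (w2, b2, c) in g
            if b1 == b2]

print(compose(left, right))   # [] : no green token is ever accepted downstream
```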
The central intuition of this paper is that we can use side effects to encode the runs of some other net: To fire a transition in the parent net, we need to provide a \emph{trace} of the corresponding child net. In other words, to fire a transition in the parent net, a valid execution of the corresponding child net must be provided. Relying on the results in \cref{sec: nets and their executions}, we know that such valid executions are exactly the morphisms in the free symmetric strict monoidal category generated by the child net. Putting everything together, we want the tips of our spans to ``represent'' morphisms in the monoidal categories corresponding to the children nets. The following result makes this intuition precise, explaining how monoidal categories and spans are related: \begin{theorem}[{\cite[Section 2.4.3]{Zanasi2018}}]\label{thm: zanasi} Given a category $A$ with finite limits, a \emph{category internal in $A$} is a monad in $\Span(A)$. Categories are monads in $\Span$, whereas strict monoidal categories are monads in $\Span(\Mon)$, with $\Mon$ being the category of monoids and monoid homomorphisms. A symmetric monoidal category is a \emph{bimodule} in $\Span(\Mon)$. \end{theorem} It is worth pointing out, at least intuitively, how this result works: Given a category $\mathcal{C}$, we denote with $\Object{\mathcal{C}}$ and $\Arrow{\mathcal{C}}$ the sets\footnote{Here we are assuming that the objects and morphisms of our categories are not proper classes. This assumption is harmless in our context, unless one wants to consider a Petri net whose places and transitions, respectively, form a proper class.} of objects and morphisms of $\mathcal{C}$, respectively. Then we can form a span: \begin{equation*} \Object{\mathcal{C}} \xleftarrow{\Dom{}} \Arrow{\mathcal{C}} \xrightarrow{\Cod{}} \Object{\mathcal{C}} \end{equation*} where the legs send a morphism to its domain and codomain, respectively.
This is clearly not enough, since in a category we have a notion of identity and composition, but asking for a monad provides exactly this. For instance, the monad multiplication in this setting becomes a span morphism \begin{equation*} \scalebox{0.89}{ \begin{tikzpicture} \node (tip) at (0,3) {$\Arrow{\mathcal{C}} \times_{\Object{\mathcal{C}}} \Arrow{\mathcal{C}}$}; \node (tipl) at (-2,1.5) {$\Arrow{\mathcal{C}}$}; \node (tipr) at (2,1.5) {$\Arrow{\mathcal{C}}$}; \node (int) at (-4,0) {$\Object{\mathcal{C}}$}; \node (net) at (0,0) {$\Object{\mathcal{C}}$}; \node (outt) at (4,0) {$\Object{\mathcal{C}}$}; \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black}] \draw[->] (tip) to (tipl); \draw[->] (tip) to (tipr); \draw[->] (tipl) to node[] {$\Dom{}$} (int); \draw[->] (tipl) to node[] {$\Cod{}$} (net); \draw[->] (tipr) to node[] {$\Dom{}$} (net); \draw[->] (tipr) to node[] {$\Cod{}$} (outt); \path (tip) to node[near start, rotate=45] {$\urcorner$} (net); \node(explanation) at (6,1.5) {$\xrightarrow{\text{monad multiplication}}$}; \node (cat) at (10,1.5) {$\Arrow{\mathcal{C}}$}; \node (catl) at (8,1.5) {$\Object{\mathcal{C}}$}; \node (catr) at (12,1.5) {$\Object{\mathcal{C}}$}; \draw[->] (cat) to node[above] {$\Dom{}$} (catl); \draw[->] (cat) to node[above] {$\Cod{}$} (catr); \end{scope} \end{tikzpicture}} \end{equation*} which gives composition of arrows. Similarly, the monad unit singles out identities, and the monad laws witness the associativity and identity laws. In a similar way, monoidal categories are represented as above, but we furthermore require $\Object{\mathcal{C}}$ and $\Arrow{\mathcal{C}}$ to be endowed with a monoid structure (representing the action of the monoidal structure on the objects and morphisms of $\mathcal{C}$, respectively), and that this structure is preserved by the span legs, while the bimodule structure on top of the monad witnesses the monoidal symmetries. 
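As a hedged illustration (the names and encoding are ours, not from~\cite{Zanasi2018}), the span presentation above can be played with on the free category on $a \to b \to c$, where the monad multiplication is a composition function defined only on the pullback of composable pairs:

```python
# The span Ob <- Arr -> Ob of a tiny category: the free category on a -> b -> c.
Ob = {"a", "b", "c"}
dom = {"id_a": "a", "id_b": "b", "id_c": "c", "f": "a", "g": "b", "fg": "a"}
cod = {"id_a": "a", "id_b": "b", "id_c": "c", "f": "b", "g": "c", "fg": "c"}

comp = {("f", "g"): "fg"}        # the only non-trivial composite, f then g
for x in Ob:                     # identities are units for composition
    for h in dom:
        if dom[h] == x:
            comp[(f"id_{x}", h)] = h
        if cod[h] == x:
            comp[(h, f"id_{x}")] = h

def mult(h, k):
    """Monad multiplication: defined only on the pullback Arr x_Ob Arr."""
    assert cod[h] == dom[k], "not a composable pair"
    return comp[(h, k)]

print(mult("f", "g"))        # fg
print(mult("id_a", "f"))     # f   (left unit law)
print(mult("f", "id_b"))     # f   (right unit law)
```

Composition is written in diagrammatic order, matching the paper's convention for $\cp$.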
For our purposes, we recall that each Petri net $N$ generates a free symmetric strict monoidal category $\Free{N}$, which corresponds to a bimodule in $\Span(\Mon)$. So, in particular, we have a span of monoids\footnote{We are abusing notation, and writing $\Object{N}$, $\Arrow{N}$ in place of $\Object{\Free{N}}$, $\Arrow{\Free{N}}$, respectively.} \begin{equation*} \Object{N} \xleftarrow{\Dom{}} \Arrow{N} \xrightarrow{\Cod{}} \Object{N} \end{equation*} underlying a bimodule, where $\Object{N}$ and $\Arrow{N}$, representing the objects and arrows of the category respectively, are both free. We will refer to such a span as \emph{the \textsc{fssmc}\xspace $N$ (in $\Span(\Mon)$)}. \begin{definition}[Hierarchical nets -- External definition]\label{def: hierarchical nets external definition} A \emph{hierarchical net} is a functor $\Free{N} \to \Span(\Mon)$ defined as follows: % % \begin{itemize} \item Each generating object $A$ of $\Free{N}$ is sent to a set $FA$, called \emph{the set of accepting states for the place $A$}.
\item Each generating morphism $A \xrightarrow{f} B$ is sent to a span with the following shape: % % \begin{equation*} \scalebox{0.89}{ \begin{tikzpicture} \node (tip) at (0,4.5) {$(\Play{f} \mathrel{\times_{\Object{N_f}}} \Dom{f}) \times_{\Arrow{N_f}} (\Cod{f} \mathrel{\times_{\Object{N_f}}} \Stop{f})$}; \node (tipl) at (-2,3) {$(\Play{f} \mathrel{\times_{\Object{N_f}}} \Dom{f})$}; \node (tipr) at (2,3) {$(\Cod{f} \mathrel{\times_{\Object{N_f}}} \Stop{f})$}; \node (int) at (-4,1.5) {$FA$}; \node (net) at (0,1.5) {$\Arrow{N_f}$}; \node (outt) at (4,1.5) {$FB$}; \node (inl) at (-6,0) {$FA$}; \node (inr) at (-2,0) {$\Object{N_f}$}; \node (outl) at (2,0) {$\Object{N_f}$}; \node (outr) at (6,0) {$FB$}; \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black}] \draw[->] (tip) to (tipl); \draw[->] (tip) to (tipr); \draw[->] (tipl) to (int); \draw[->] (tipl) to (net); \draw[->] (tipr) to (net); \draw[->] (tipr) to (outt); \draw[-] (int) to (inl); \draw[->] (int) to node[midway] {$\Play{f}$} (inr); \draw[->] (net) to node[midway, inner sep=0pt] {\footnotesize $\Dom{f}$} (inr); \draw[->] (net) to node[midway, inner sep=0pt] {\footnotesize $\Cod{f}$} (outl); \draw[->] (outt) to node[midway] {$\Stop{f}$} (outl); \draw[-] (outt) to (outr); \path (tip) to node[near start, rotate=45] {$\urcorner$} (net); \path (tipl) to node[near start, rotate=45] {$\urcorner$} (inr); \path (tipr) to node[near start, rotate=45] {$\urcorner$} (outl); \end{scope} \end{tikzpicture}} \end{equation*} % The \textsc{fssmc}\xspace $N_f$ at the center of the span is called the \emph{child net associated to $f$}; the morphisms $\Play{f}$ and $\Stop{f}$ are called \emph{play $N_f$} and \emph{stop $N_f$}, respectively. \end{itemize} % \end{definition} Unrolling the definition, we are associating to each generating morphism $f$ of $\Free{N}$ -- the parent net -- a \textsc{fssmc}\xspace $N_f$ -- the child net.
As the feet of the spans corresponding to the child nets will, in general, vary with the nets themselves, we need to pre- and post-compose them with other spans to ensure composability: $\Play{f}$ and $\Stop{f}$ represent morphisms that select the \emph{initial and accepting states} of $N_f$, that is, markings of $N_f$ in which the computation starts, and markings of $N_f$ in which the computation is considered as concluded. Notice how this also solves the problems highlighted in~\cref{sec: hierarchical nets}, as $\Play{f}$ and $\Stop{f}$ mediate between the shape of inputs/outputs of the transition $f$ and the shape of $N_f$ itself. \begin{remark}\label{rem: interpreting hierarchical nets} Interpreting markings as in~\cref{def: marking guarded Petri net}, we see that to fire $f$ in the parent net we need to provide a triple $(a,x,b)$, where: % % \begin{itemize} \item $a$ is an element of $FA$, witnessing that the tokens in the domain of $f$ are a valid initial state for $N_f$. \item $x$ is an element of $\Arrow{N_f}$, that is, a morphism of $N_f$, and hence an execution of the child net. This execution starts from the marking $\Play{f}a$ and ends in the marking $\Stop{f}b$. \item $b$ is an element of $FB$, witnessing that the resulting state of the execution $x$ is \emph{accepting}, and can be lifted back to tokens in the codomain of $f$. \end{itemize} % \end{remark} \begin{definition}[Category of hierarchical Petri nets] Nets $\NetSem{N}$ in the category $\PetriSpan$ with $\Fun{N}$ having the shape of~\cref{def: hierarchical nets external definition} form a subcategory, denoted with $\PetriHier$, and called \emph{the category of hierarchical Petri nets}. \end{definition} \begin{remark} Using the obvious forgetful functor $\Mon \to \Set$ we obtain a functor $\Span(\Mon) \to \Span$, which allows us to recast our non-local semantics in a more liberal setting.
In particular, we could send a transition to spans whose components are \emph{subsets} of the monoids heretofore considered. We could select only a subset of the executions/states of the child net as valid witnesses to fire a transition in the parent. Everything we do in this work would go through smoothly, but we consider this approach less elegant; thus, we will not mention it again. \end{remark} \section{Introduction} \label{sec: introduction} This paper is the fourth instalment in a series of works~\cite{Genovese2020, Genovese2021a, Genovese2021} devoted to describing the semantics of extensions of Petri nets using categorical tools. Category theory has been applied to Petri nets starting in the nineties~\cite{Meseguer1990}; see also \cite{baldan2015modular,baldan2005relating,baldan2001compositional,baldan2011mpath2pn,baldan2018petri,baldan2010petri,baldan2014encoding,baldan2008open,baldan2019petri,baldan2015asynchronous}. The main idea is that we can use different varieties of free monoidal categories to describe the executions (or runs) of a net~\cite{Master2020,Genovese2019c}. These works have been influential, since they opened up an avenue for applying high-level methods to the study of Petri nets and their properties. For instance, in~\cite{Baez2020a} the categorical approach made it possible to describe the glueing of nets by leveraging colimits and double categories, while category-theory libraries such as~\cite{Genovese2019d} can be leveraged to implement nets in a formally verified way. These libraries implement category theory \emph{directly}, so that one can translate the categorical definitions of some model object directly and obtain an implementation. In~\cite{Genovese2020}, we started another line of research, in which we were able to define a categorical semantics for coloured nets employing monoidal functors.
The Grothendieck construction was then used to internalize this semantics, obtaining the well-known result that coloured nets can be ``compiled back'' to Petri nets. In~\cite{Genovese2021a, Genovese2021}, we extended these ideas further, and we were able to characterize bounded nets and mana-nets -- a new kind of net useful to model chemical reactions -- in terms of generalized functorial semantics. This approach, based on the correspondence between slice categories and lax monoidal functors to the category of spans~\cite{Pavlovic1997}, still has a lot to give. In this paper, we show how it can be used to model hierarchical nets. There are many different ways to define hierarchical nets \cite{Jensen2009,fehling1991concept,oswald1990environment,huber1989hierarchies,Buchholz1994HierarchicalHL}. The common idea is that we have one ``parent'' Petri net and a bunch of ``child'' nets. A transition firing in the parent net corresponds to some sort of run happening in a corresponding child net. The parent net serves to orchestrate and coordinate the executions of the many child nets in the layer below. This paper will contain very little new mathematics. Instead, we will reinterpret results obtained in~\cite{Genovese2020} to show how they can be used to model hierarchical nets, moreover in a way that makes sense from an implementation perspective. It is worth noting that category theory in this paper is used in a way that is slightly different from its usage in graph transformation research: We won't be using category theory to generalize definitions and proofs to different classes of graph(-related) objects. Instead, we will employ categorical concepts to actually build a semantics for hierarchical Petri nets. \section{Hierarchical nets}\label{sec: hierarchical nets} Now we introduce the main object of study of the paper, \emph{hierarchical nets}.
As we pointed out in~\cref{sec: introduction}, there are many different ways to model hierarchy in Petri nets~\cite{Jensen2009}, often incompatible with each other. We approach the problem from a developer's perspective, wanting to model the idea that ``firing a transition'' amounts to calling another process and waiting for it to finish. This is akin to calling subroutines in a piece of code. Moreover, we do not want to destroy the decidability of the reachability relation for our nets~\cite{Esparza1994}, as happens for other hierarchical models such as the nets-within-nets framework~\cite{Kohler-Bussmeier2014}. We consider this to be an essential requirement for practical reasons. We will postpone any formal definition to~\cref{sec: semantics for hierarchical nets}. For now, we focus on giving an intuitive explanation of what our requirements are. \begin{figure}[!ht]\centering \scalebox{0.7}{ \begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto] % \filldraw[thick, draw=black!75, fill=gray!25, rounded corners] (0,-1) rectangle (6,-3); \draw[black!75, thick] (3,1) -- (3,-1); \filldraw[thick, draw=black!75, fill=gray!25, rounded corners] (7,-1) rectangle (11,-3); \draw[black!75, thick] (9,1) -- (9,-1); % \node [place,tokens=1] (1a) at (0,1){}; \node [transition] (2a) at (3,1) {} edge [inarrow, pre, bend left] (1a) edge [inarrow, pre, bend right] (1a); \node [place,tokens=0] (3a) at (6,1) {} edge [inarrow, pre] (2a); \node [transition] (4a) at (9,1) {} edge [inarrow, pre] (3a); \node [place,tokens=0] (5a) at (12,1) {} edge [inarrow, pre] (4a); % % % \node [place,tokens=1] (c11a) at (0.5,-2){}; \node [transition] (c12a) at (1.75,-2) {} edge [inarrow, pre] (c11a); \node [place,tokens=0] (c13a) at (3,-2) {} edge [inarrow, pre] (c12a); \node [transition] (c14a) at (4.25,-2) {} edge [inarrow, pre] (c13a); \node [place,tokens=0] (c15a) at (5.5,-2) {} edge [inarrow, pre] (c14a); % \draw[black!74, dashed] (1a) -- (c11a); \draw[black!74, dashed]
(3a) -- (c15a); % % % \node [place,tokens=0] (c23a) at (7.5,-2) {}; \node [transition] (c24a) at (9,-1.5) {} edge [inarrow, pre] (c23a); \node [transition] (c24b) at (9,-2.5) {} edge [inarrow, pre] (c23a); \node [place,tokens=0] (c25a) at (10.5,-2) {} edge [inarrow, pre] (c24a) edge [inarrow, pre] (c24b); % \draw[black!74, dashed] (3a) -- (c23a); \draw[black!74, dashed] (5a) -- (c25a); % \end{tikzpicture}} \caption{A hierarchical net.}\label{fig: hierarchical net} \end{figure} Looking at the net in~\cref{fig: hierarchical net}, we see a net on the top, which we call the \emph{parent}. To each transition of the parent net is attached another net, which we call a \emph{child}. Each transition has only one child, but the parent net may have multiple transitions, and hence multiple children overall. By connecting the input and output places of a transition in the parent net with certain places in the corresponding child, we can represent the orchestration as follows: each time a transition in the parent net fires, its input tokens are transferred to the corresponding child net, which carries them through until they reach a place connected with the output place in the parent net. This way, the atomic act of firing a transition in the parent net results in an execution of the corresponding child.
\begin{figure}[!ht]\centering \scalebox{0.7}{ \begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto] % \node [place,tokens=1] (c11a) at (0.5,-2){}; \node [transition] (c12a) at (1.75,-2) {} edge [inarrow, pre] (c11a); \node [place,tokens=0] (c13a) at (3,-2) {} edge [inarrow, pre] (c12a); \node [transition] (c14a) at (4.25,-2) {} edge [inarrow, pre] (c13a); \node [place,tokens=0] (c15a) at (6,-2) {} edge [inarrow, pre] (c14a); % % % \node [place,tokens=0] (c23a) at (6,-2) {}; \node [transition] (c24a) at (9,-1.5) {} edge [inarrow, pre] (c23a); \node [transition] (c24b) at (9,-2.5) {} edge [inarrow, pre] (c23a); \node [place,tokens=0] (c25a) at (10.5,-2) {} edge [inarrow, pre] (c24a) edge [inarrow, pre] (c24b); % \end{tikzpicture}} \caption{Replacing transitions in the parent net of~\cref{fig: hierarchical net} with their children.}\label{fig: replacing transitions in the parent net with child nets} \end{figure} Notice that we are not interested in considering the semantics of such a hierarchical net to be akin to the one in~\cref{fig: replacing transitions in the parent net with child nets}, where we replaced transitions in the parent net with their corresponding children. Indeed, this way of doing things is similar to what happens in~\cite{oswald1990environment}: in this model, transitions in the parent net are considered as placeholders for the child nets. There are two reasons why we distance ourselves from this approach: first, we want to consider transition firings in the parent net as atomic events, and replacing nets as above destroys this property. Second, such a replacement is not conceptually straightforward, given that we impose no relationship between the topology of the parent net and those of its children. Indeed, the leftmost transition of the parent net in~\cref{fig: hierarchical net} consumes two inputs, while the corresponding leftmost transition in its child only takes one.
How do we account for this in specifying rewriting-based semantics for hierarchical nets? \section{Internalization}\label{sec: internalization} In~\cref{sec: semantics for hierarchical nets} we defined hierarchical nets as nets endowed with a specific kind of functorial semantics to $\Span$. As things stand now, Petri nets correspond to categories, while hierarchical nets correspond to functors. This difference makes it difficult to say what a Petri net with multiple levels of hierarchy is: intuitively, it is easy to imagine that the children of a parent net $N$ can themselves be parents of other nets, which are thus ``grandchildren'' of $N$, and so on. In realizing this intuition, we are blocked by having to map $N$ to hierarchical nets, which are functors and not categories. To make such an intuition viable, we need a way to \emph{internalize} the semantics in~\cref{def: hierarchical nets external definition} to obtain a category representing the executions of the hierarchical net. Luckily, there is a way to turn functors into categories, which relies on an equivalence between the slice 2-category over a given category $\mathcal{C}$, denoted $\Cat/\mathcal{C}$, and the 2-category of lax functors $\mathcal{C} \to \Span$~\cite{Pavlovic1997}. This is itself the ``1-truncated'' version of a more general equivalence between the slice of $\Cat$ over $\mathcal{C}$ and the 2-category of lax \emph{normal} functors to the bicategory $\cate{Prof}$ of profunctors (this was discovered by B\'enabou~\cite{Benabou1967}; a fully worked out exposition is in~\cite{coend-calcu}). Here we gloss over these abstract motivations and give a very explicit definition of what this means, since what we need is just a particular case of the construction we worked out for guarded nets in~\cite{Genovese2020}. \begin{definition}[Internalization]\label{def: internalization} Let $\NetSem{M}\in\PetriHier$ be a hierarchical net.
We define its \emph{internalization}, denoted $\Grothendieck{\Fun{M}}$, as the following category: % % \begin{itemize} \item The objects of $\GrothendieckS{M}$ are pairs $(X, x)$, where $X$ is an object of $\Free{M}$ and $x$ is an element of $\Fun{M}X$. Concisely: % % \begin{equation*} \Obj{(\GrothendieckS{M})} := \Suchthat{(X,x)}{(X \in \Obj{\Free{M}}) \wedge (x \in \Fun{M}X)}. \end{equation*} % % \item A morphism from $(X,x)$ to $(Y,y)$ in $\Grothendieck{\Fun{M}}$ is a pair $(f,s)$, where $f\colon X \to Y$ is a morphism in $\Free{M}$ and $s\in S_{\Fun{M}f}$ is an element of the apex of the corresponding span that connects $x$ to $y$. Concisely: % % \begin{align*} &\Hom{\GrothendieckS{M}}{(X,x)}{(Y,y)} :=\\ &\qquad :=\Suchthat{(f,s)}{(f \in \Hom{\Free{M}}{X}{Y}) \wedge (s\in S_{\Fun{M}f})\wedge(\Fun{M}f(s) = (x,y))}. \end{align*} % \end{itemize} \end{definition} The category $\GrothendieckS{N}$, called \emph{the Grothendieck construction applied to $\Fun{N}$}, produces a place for each element of the set a place is sent to, and a transition for each path between these elements, as shown in~\cref{fig:grote}. \begin{figure} \begin{small} \begin{center} \input{Images/yetAnotherFigure.tex} \end{center} \caption{The Grothendieck construction applied to $\Fun{N}$.} \label{fig:grote} \end{small} \end{figure} Notice that in~\cref{fig:grote}, on the left, each path between coloured dots is a triple $(a,x,b)$ as in~\cref{rem: interpreting hierarchical nets}. This amounts to promoting every possible trace of the child net -- together with a selection of initial and accepting states -- to a transition in the parent net. This interpretation is justified by the following theorem, which, again, we proved in~\cite{Genovese2020}: \begin{theorem}\label{thm: internalization} Given any strict monoidal functor $\Free{N} \xrightarrow{\Fun{N}} \Span$, the category $\GrothendieckS{N}$ is symmetric strict monoidal and free. Thus $\GrothendieckS{N}$ can be written as $\Free{M}$ for some net $M$.
Moreover, we obtain a \emph{projection functor} $\GrothendieckS{N} \to \Free{N}$ which turns $\Grothendieck{}$ into a functor, in that for each functor $F: \NetSem{M} \to \NetSem{N}$ there exists a functor $\LiftSpan{F}$ making the following diagram commute: % % \begin{equation*} \begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto] % \node (1) at (0,1.5) {$\GrothendieckS{M}$}; \node (2) at (0,0) {$\Free{M}$}; \node (3) at (3,1.5) {$\GrothendieckS{N}$}; \node (4) at (3,0) {$\Free{N}$}; \node (5) at (1.5, -1.5) {$\Span$}; % % \draw[->] (1)--(2) node [midway,left] {$\pi_M$}; % \draw[->] (3)--(4) node [midway,right] {$\pi_N$}; % \draw[->] (2)--(4) node [midway,above] {$F$}; % \draw[->, dashed] (1)--(3) node [midway,above] {$\LiftSpan{F}$}; % \draw[->] (2)--(5) node [midway,left] {$\Fun{M}$}; % \draw[->] (4)--(5) node [midway,right] {$\Fun{N}$}; % % \end{tikzpicture} \end{equation*} % \end{theorem} \cref{thm: internalization} defines a functor $\PetriSpan \to \FSSMC$, the category of {\textsc{fssmc}\xspace}s and strict monoidal functors between them. As $\PetriHier$ is a subcategory of $\PetriSpan$, we can immediately restrict~\cref{thm: internalization} to hierarchical nets. A net of the form $\GrothendieckS{N}$ for some hierarchical net $\NetSem{N}$ is called the \emph{internal categorical semantics for $N$} (compare this with~\cref{def: hierarchical nets external definition}, which we called \emph{external}). \begin{remark} Notice how internalization is \emph{very} different from just copy-pasting a child net in place of a transition in the parent net, as we discussed in~\cref{sec: hierarchical nets}. Here, each \emph{execution} of the child net is promoted to a transition, preserving the atomicity requirement of transitions in the parent net.
\end{remark} Clearly, we can now define hierarchical nets with a level of hierarchy higher than two by mapping a generator $f$ of the parent net to a span where $N_f$ is of the form $\GrothendieckS{N}$ for some other hierarchical net $N$; the process can be applied recursively, any finite number of times, for each transition. \section{Conclusion and future work}\label{sec: conclusion and future work} In this work, we showed how a formalism for guarded nets already worked out in~\cite{Genovese2020} can be used to define the categorical semantics of a particular variety of hierarchical nets, one which works particularly well from a model-checking and distributed-implementation point of view. Our effort is again part of a broader project focusing on characterizing the categorical semantics of extensions of Petri nets by studying functors from {\textsc{fssmc}\xspace}s to spans~\cite{Genovese2021a, Genovese2021}. As a direction of future work, we would like to obtain a cleaner way of describing recursively hierarchical nets. In this work, we relied on the Grothendieck construction to internalize a hierarchical net, so that we could use hierarchical nets as children of another parent net, recursively. This feels a bit like throwing all the carefully typed information that the external semantics gives us into the same bucket, and as such it is a bit unsatisfactory. Ideally, we would like to obtain a fully external semantics for recursively hierarchical nets, and generalize the internalization result to this case. Another obvious direction of future work is implementing the findings presented here, perhaps relying on a formally verified implementation of category theory such as~\cite{Genovese2019d}.
\section{Introduction}\label{s:intro} There are many similarities between electromagnetic (E\&M) radiation and gravitational radiation: both travel at the speed of light; both carry energy away from their sources; both consist of transverse waves with two polarizations. In addition, Einstein's general relativity, the theoretical underpinning of gravitational waves, can be put into a form remarkably parallel to Maxwell's electrodynamics, the theoretical underpinning of E\&M\ waves. Despite the many similarities, there are important differences, and focusing on those differences helps to give a deeper understanding of both kinds of radiation. Our mechanism for exploring those differences will be the visualization of the fields. Visualization of E\&M\ field lines has proven very helpful to student understanding \cite{johnsvispaper}. The visualization of gravitational fields has been a challenge, but the recently developed technique of using ``tendex'' and ``vortex'' lines\cite{CorntechPRL,CorntechPRD,CorntechZeroes} provides insights that may be of pedagogical value comparable to the plotting of electric or magnetic field lines in E\&M. Both in E\&M\ and in gravitation, the dynamic nature of radiation fields is of central importance to visualization, and in both E\&M\ and gravitation, waves carry energy. In E\&M\ it will turn out that a definite meaning can be given to the motion of field lines and to the transport of energy. By contrast, in the gravitational case, a definite meaning cannot be given to either of these concepts. This will help us understand some important ways in which gravitation fundamentally differs from E\&M. The rest of this paper is organized as follows. In Sec.~\ref{subsec:theoryem} we give a very brief review of E\&M\ theory to serve as a basis for comparison with the elements of gravitational theory that we subsequently present in Sec.~\ref{subsec:theorygrav}.
Section \ref{s:statvis} develops the principles of visualization of static fields, for E\&M in Sec.~\ref{sub:EM}, and for gravitation in Sec.~\ref{sub:GW}. In Sec.~\ref{s:statvis}, to illustrate both E\&M and gravitational static fields, we focus on a particular model that will be useful later, in the discussion of radiation. A dipole is the simplest model for a source of E\&M radiation, but a gravitational dipole cannot generate radiation. We therefore choose the simplest configuration that can generate both E\&M and gravitational waves: a point quadrupole. For simplicity of visualization, we eliminate the issues of visualizing truly three-dimensional fields by having the point quadrupole be axisymmetric. Time-changing sources give rise to radiation fields for both E\&M and gravitation. These dynamical fields and their visualization are discussed in Sec.~\ref{s:dynavis} with the example of oscillating E\&M and gravitational quadrupoles. We summarize and restate our conclusions in Sec.~\ref{s:conc}. A few words are in order about the choices that have been made for notation and conventions. The principle has been to produce a paper that can be understood by a reader with a minimum of mathematical preliminaries. To that end we have avoided certain practices that are common in advanced literature. The following points deserve particular notice. (i)~Papers involving relativity typically assume units in which $c$, the speed of light, is taken to be unity. In order to have expressions in which the dimensionality of quantities is more transparent, we do not make that choice; all factors of $c$ appear explicitly. (ii)~It is common to use the ``Einstein summation convention,'' in which summation is assumed for any repeated index. Adopting this convention would allow us to drop the explicit summation symbols in Eq.~(\ref{Lorentzforce}) and many subsequent equations. We have chosen, however, to have these summation symbols appear.
(iii)~To avoid the mathematical baggage of covariant differentiation we have been explicit in using only Cartesian coordinates and Cartesian components where expressions involve differentiation, as in Eq.~(\ref{eq:EMcartcomps}). (iv)~We avoid ``coordinate bases'' that are commonly used in computations with tensor fields. Rather, the components expressed, e.g., in Eq.~(\ref{staticcalE}), are with respect to the familiar spherical coordinate orthonormal basis, not the coordinate basis. One of the simplifications following from this choice is that indices on components are the same whether they are superscripts or subscripts; their location is chosen for convenience. \section{Electromagnetic and gravitational fields: introductory theory} \label{s:theory} \subsection{Electromagnetic fields}\label{subsec:theoryem} For comparison with the gravitational case, it is useful for us to mention the very roots of electromagnetic physics. The electric field ${\bf E}$ and magnetic field ${\bf B}$ are defined through the expression for the acceleration ${\bf a}$ due to the total electromagnetic force, the Lorentz force, acting on a point particle of mass $m$ and charge $q$, moving at velocity ${\bf v}$,\cite{noSRT} \begin{equation}\label{eq:Loraccel} {\bf a}=\frac{q}{m}\left({\bf E} +{\bf v}{\bs\times}{\bf B}\right)\,. \end{equation} The fields obey the Maxwell equations, which, in rationalized MKS units, take the form \begin{equation} \nabla\cdot {\bf E}=0\quad\quad \nabla\times{\bf E}+\frac{\partial {\bf B}} {\partial t}=0 \end{equation} \begin{equation} \nabla\times{\bf B}-\frac{1}{c^2}\frac{\partial {\bf E}}{\partial t}=0 \quad\quad \nabla\cdot{\bf B}=0\ . \end{equation} Here we have simplified the equations by assuming that they apply in a region devoid of sources and of material properties, i.e., we take the charge and current density to be zero, and we assume vacuum values of the dielectric constant and magnetic permeability.
It will be useful to rewrite these equations in component form (in which we assume a Cartesian basis): \begin{equation}\label{Lorentzforce} a^j=\frac{q}{m}\left(E^j+\sum_{k,p}\epsilon_{jkp}v^kB^p\right) \end{equation} \begin{equation}\label{eq:EMcartcomps} \sum_{j}\frac{\partial E_j}{\partial x^j}=0\quad\quad \sum_{j,k}\epsilon_{ijk}\,\frac{\partial E_k}{\partial x^j} +\frac{\partial B_i}{\partial t} =0 \end{equation} \begin{equation} \sum_{j,k}\epsilon_{ijk}\,\frac{\partial B_k}{\partial x^j}-\frac{1}{c^2}\, \frac{\partial E_i}{\partial t} =0 \quad\quad \sum_{j}\frac{\partial B_j}{\partial x^j}=0\,. \end{equation} Here the summations are over the indices of the Cartesian components and coordinates, $\{x^1,x^2,x^3\}$, or $\{x,y,z\}$. The symbol $\epsilon_{ijk}$ is the three-dimensional alternating symbol used in the construction of determinants and cross products. It vanishes if any of its indices is repeated (e.g., $\epsilon_{221}=0$), equals $+1$ for any even permutation of $1,2,3$ or $x,y,z$ (i.e., $\epsilon_{123}=\epsilon_{312}=\epsilon_{231}=1$), and equals $-1$ for any odd permutation (i.e., $\epsilon_{213}=\epsilon_{321}=\epsilon_{132}=-1$). The simplest solutions of the Maxwell equations are the time-independent solutions, especially the point multipole (point charge, dipole, quadrupole ...) solutions for electrostatic or magnetostatic fields. Of greatest interest in this paper will be the not-so-simple radiation solutions in which time variation is essential. Of considerable importance to these radiation solutions is the Poynting vector, the flux of electromagnetic power per unit cross-sectional area, \begin{equation} {\bf P}=\frac{1}{\mu_{0}}{\bf E}\times{\bf B}\,. \end{equation} In a general treatment, the computation of the radiation produced by a given distribution of time-changing charges and currents leads to retarded integrals over those sources.
Here, however, we are primarily interested in the description and visualization of the resulting fields, so we simply invoke radiation fields without being specific about the internal details of their source. For such purposes the choice usually made is that of a point dipole, but for our purpose here this choice has the disadvantage that it has no gravitational analog. The lowest-order multipole for gravitational radiation is the quadrupole\cite{whynodipole}. Accordingly, as our example of an electromagnetic radiating source we choose a point quadrupole, and we make the description and visualization as simple as possible by taking the quadrupole to be axisymmetric. \subsection{Gravitational fields}\label{subsec:theorygrav} To understand what is meant by ``gravitation'' in relativistic theories it is best to start with the simplest case, gravitostatics, gravitation for static configurations. In this case, if the gravitational fields are weak (if they do not drive particles to speeds comparable to $c$), then Newtonian ideas can be adapted, with minor modification, to relativistic gravitation. By ``gravitational field,'' in this approach, we do not mean, e.g., the downward acceleration at 9.8\,m/sec$^2$ near the surface of the Earth. More generally, if $\Phi_g$ is the usual Newtonian potential, $\nabla\Phi_g$ is not considered ``true'' gravitation. Since it affects all particles identically, its effects on particles undergoing their natural, freely falling motion disappear in a freely falling frame, the inertial frame in the relativistic view of spacetime. Gravitation, in the relativistic viewpoint, is the way in which the natural free-fall motions vary from place to place and time to time. In a static configuration, one in which there is no change from time to time, the information is contained in the way in which $\nabla\Phi_g$ varies from place to place.
The rate of variation of a vector is a tensor; in the case of $\nabla\Phi_g$ it is often called the gravitoelectric field\cite{paradigm}. (Because this tensor describes the raising of tides on astrophysical objects in a Newtonian setting it is also called the tidal tensor.) In a Cartesian basis the content of this tensor is the set of tensor components\cite{relversion} \begin{equation}\label{eq:gravitoelectricdef} {\cal E}_{jk} =\frac{\partial^2\Phi_g}{\partial x^j\partial x^k}\,. \end{equation} The trace of this gravitoelectric tensor is a familiar quantity \begin{equation}\label{eq:traceEkk} \sum_k {\cal E}_{kk}= \sum_k \frac{\partial^2\Phi_g}{\partial x^k\partial x^k} =\nabla^2\Phi_g\,. \end{equation} Just as the electric field is divergenceless outside sources, the gravitoelectric field is traceless outside sources. The relation in Eq.~(\ref{eq:traceEkk}) is valid even in the presence of sources; in that case the right-hand side has the familiar value $4\pi G$ times the mass density. This equation, then, is analogous to Coulomb's law for electromagnetism. It gives a definition of the field in terms of its sources, but only for a static field. To deal with radiation we need more general definitions, definitions based on the manifestations, the physical effects, of these fields. In the case of electromagnetism, this more general definition is given by Eq.~(\ref{eq:Loraccel}) or, equivalently, Eq.~(\ref{Lorentzforce}). In relativistic gravitation there is no concept of force {\em per se}. Rather, the manifestations of gravity are seen in the effects on two point particles separated by a small displacement ${\bf s}$, and with a relative velocity ${\bf v}$. The relative acceleration of the two particles, given by\cite{geodev} \begin{equation}\label{eq:geodev} \frac{d^2s^j}{dt^2}=-\sum_{k}{\cal E}_{jk}s^k-2\sum_{k,p,m}\epsilon_{jkp}{\cal B}_{pm}v^ks^m\,, \end{equation} defines the tensors $\bs{\cal E}$ and $\bs{\cal B}$.
The similarity to the Lorentz acceleration in Eq.~(\ref{Lorentzforce}) is striking, especially if one considers ${\cal E}_{jk}s^k$ and ${\cal B}_{jk}s^k$ to be vectors. There are, of course, differences of detail, one of which is very fundamental. In Eqs.~(\ref{eq:Loraccel}), or (\ref{Lorentzforce}), the factor $q/m$ describes the special features of the particle undergoing electromagnetic acceleration. By contrast, in Eq.~(\ref{eq:geodev}) there is no reference to any characteristic of the particles. In accordance with the so-called ``equivalence principle,'' gravitation acts in the same way on any particle. The gravitoelectric and gravitomagnetic fields defined by Eq.~(\ref{eq:geodev}) are symmetric ( ${\cal E}_{jk}={\cal E}_{kj}$, ${\cal B}_{jk}={\cal B}_{kj}$). Outside sources these fields are traceless ($\sum_k{\cal E}_{kk}$=$\sum_k{\cal B}_{kk}$=0 ) and obey the Maxwell-like relations\cite{bianchiident} \begin{equation} \sum_{j}\frac{\partial {\cal E}_{jk}}{\partial x^j}=0\quad\quad \frac{1}{2} \left(\sum_{jk}\epsilon_{pjk}\frac{\partial{\cal E}_{qk}}{\partial x^j} +\sum_{jk}\epsilon_{qjk}\frac{\partial{\cal E}_{pk}}{\partial x^j}\right) +\frac{\partial {\cal B}_{pq}}{\partial t} =0 \end{equation} \begin{equation} \frac{1}{2} \left(\sum_{jk}\epsilon_{pjk}\frac{\partial{\cal B}_{qk}}{\partial x^j} +\sum_{jk}\epsilon_{qjk}\frac{\partial{\cal B}_{pk}}{\partial x^j}\right) -\frac{1}{c^2}\,\frac{\partial {\cal E}_{pq}}{\partial t} =0 \quad\quad \sum_{j}\frac{\partial{\cal B}_{jk}}{\partial x^j}=0\,, \end{equation} which can be written as \begin{eqnarray} {\bs \nabla\cdot{\bs{\cal E}}}=0\quad\quad\quad {\bs\nabla\times{\bs{\cal E}}}+\frac{\partial{\bs{\cal B}}}{\partial t}=0\\ {\bs\nabla\times{\bs{\cal B}}}-\frac{1}{c^2}\,\frac{\partial{\bs{\cal E}}}{\partial t}=0 \quad\quad\quad {\bs \nabla\cdot{\bs{\cal B}} }=0\,,\label{eq:curlyBeqs} \end{eqnarray} with appropriate interpretation of the divergence and curl. 
Note that the divergence can be taken on either index (since the ``gravito-'' tensors are symmetric) and the curl in these equations is symmetrized\cite{c2factor}. Two ``theoretical'' points bear mentioning: (i) Just as the six independent components of ${\bf E}$ and ${\bf B}$ contain a complete description of the electromagnetic field at a point, the ten independent components of the two symmetric traceless tensors ${\bs{\cal E}}$ and ${\bs{\cal B}}$ contain a complete description of the gravitational field at a point. (ii) The vectors ${\bf E}$ and ${\bf B}$ are ``gauge invariant.'' They cannot, for instance, be made to vanish by a mathematical choice. In the same sense ${\bs{\cal E}}$ and ${\bs{\cal B}}$ are gauge invariant\cite{vsmetricperts}. This is the mathematical equivalent of the physical statement that these quantities are directly physically measurable. \section{Visualization of static electromagnetic and gravitational fields} \label{s:statvis} \subsection{Electromagnetism}\label{sub:EM} \subsubsection{General considerations for electromagnetic visualization}\label{subsec:genconsidEM} The electric and magnetic fields are vector fields, and hence in principle are simple to picture. One can use small arrows indicating field direction, with arrow length indicating vector magnitude. Alternatively one can sketch field lines, curves to which the vectors are tangent, with the density of these lines indicating the strength of the field. The limitations of spatial resolution in these methods are well known. Below we will be illustrating vector and line fields with a more modern technique: the line integral convolution (LIC) method of Cabral and Leedom\cite{LIC}. In this method the brightness or darkness of pixels is correlated along field lines. The method produces images with streaks showing the structure of the field lines in an intuitively appealing way and with resolution approaching that of the display. 
\subsubsection{Static electromagnetic point quadrupole } As the simplest example of an electromagnetic multipole source that will be generalizable to gravitation, we choose an axisymmetric quadrupole. A realization of such a source is shown in Fig.~\ref{fig:elecstatquad}: two equal positive charges symmetrically arranged on the $z$ axis about a double negative charge at the origin. Since there is no net charge, and no favored positive direction, the configuration has neither a monopole nor a dipole moment. The static configuration pictured then has a quadrupole as its lowest nonvanishing multipole. \begin{figure}[htb] \includegraphics[height=1.5in]{elecquad} \caption{ A simple model of an electric quadrupole. \label{fig:elecstatquad}} \end{figure} In general, the Cartesian components of an electric quadrupole are given by\cite{JacksonQ} \begin{equation}\label{quadelecdef} Q_{ij}=\int (3x_ix_j-r^2\delta_{ij})\rho({\bf x}) d^3x\,, \end{equation} where $\rho({\bf x})$ is charge density. For our model in Fig.~\ref{fig:elecstatquad}, the specific components are \begin{equation}\label{quadeleccomps} Q_{zz}=4d^2q\equiv4Q\quad\quad Q_{xx}= Q_{yy}= -2Q\,, \end{equation} with $Q_{ij}=0$ for $i\neq j$. Note that the quadrupole $Q_{ij}$ is itself a tensor describing the charge distribution of the source. It is not a tensor field. The electric {\em field} produced by that source is the vector field {\bf E}. The model in Fig.~\ref{fig:elecstatquad} is not a ``pure'' quadrupole; it has multipole moments of order 2, 4, 6\ldots. More important, it is not a ``point source''; it has a characteristic size $d$. To create a pure point quadrupole we use a limiting process analogous to that for defining a point dipole. We shrink $d$ to zero, while keeping finite the product $Q\equiv d^2 q$. The result is a point source with only a quadrupole moment. 
\begin{figure}[htb] \includegraphics[width=.35\textwidth]{ElecStat2}\hspace{12pt} \caption{A LIC of the electric field lines of the azimuthally symmetric static electric quadrupole described in the text.\label{fig:LICestat}} \end{figure} The static electric field for the point quadrupole is most simply computed from the electrostatic potential. A formal procedure\cite{JacksonQ} can be used to find the potential directly from the quadrupole components in Eq.~(\ref{quadeleccomps}), or a limiting procedure can be applied to the potential of the three point charges in Fig.~\ref{fig:elecstatquad}. The resulting electrostatic potential is \begin{equation} \Phi_e=\frac{Q}{4\pi\epsilon_0 r^3}\left(3\cos^2\theta-1\right)\,, \end{equation} and hence the electric field ${\bf E}=-{\bs\nabla}\Phi_e$ has spherical components \begin{equation}\label{statelec} E_r=\frac{6Q}{4\pi\epsilon_0r^4}\left(\frac{3}{2}\cos^2\theta-\frac{1}{2}\right) \quad\quad\quad\quad E_\theta=\frac{6Q}{4\pi\epsilon_0r^4} \cos\theta\,\sin\theta\ . \end{equation} The electric field topology for this case is illustrated in Fig.~\ref{fig:LICestat}. For this static electric configuration, there are no associated magnetic fields. \subsection{Gravitation}\label{sub:GW} \subsubsection{General considerations for gravitational visualization} The problem of visualizing the ${\cal E}_{jk}$ and ${\cal B}_{jk}$ fields is a special example of the question: how does one visualize tensorial fields? The ``gravito-'' fields, ${\cal E}_{jk}$ and ${\cal B}_{jk}$, are tensors, but at least they are the simplest nontrivial type of 3-dimensional tensor: they are second-rank (two-index) symmetric tensors. In this sense they are similar to the most familiar tensors of physics: the inertia tensor, the stress tensor, the dielectric tensor of an anisotropic material, etc.
In the case of a tensor like the inertia tensor of an extended massive object, a reasonable visualization is the inertia ellipsoid, a three-dimensional ellipsoid whose shape shows the directions and size of the principal axes of the moment of inertia of the massive object\cite{allpositive}. This ellipsoid for a second-rank symmetric tensor is very much the analog of the arrow for a vector. The moment of inertia ellipsoid describes a single tensor, not a tensor field. While a display of space filled with arrows has some usefulness for the visualization of a vector field, the same is probably not true of space filled with ellipsoids. But, just as arrows can be connected together to form field lines, the {\em principal axes} of tensorial ellipsoids can be connected to form a network of lines with visualization properties somewhat similar to field lines. The specific technique for visualization of a symmetric tensor $A_{jk}$ is to find, at each point in space, the principal directions, i.e., the vectors $v^k$ satisfying the eigenvector condition $\sum_k A_{jk}v^k=\lambda \,v^j$. If $A_{jk}$ is symmetric we are guaranteed that three such eigenvectors exist and can be chosen orthogonal. The three orthogonal sets of lines connecting these eigenvectors, the ``eigenlines,'' then give us visual information about our tensor field. This method has been used for some time in the visualization of stress (a tensor quantity) in fluid dynamics and solid mechanics\cite{stress}, and has recently been suggested for use in visualizing the gravitoelectric and gravitomagnetic fields\cite{CorntechPRL,CorntechPRD,CorntechZeroes}. Those advocating the application to gravitation give the name ``vortex'' lines to the eigenvectors of ${\bs{\cal B}}$ due to the role of that field in driving the precession of spin. The eigenvector field lines of ${\bs{\cal E}}$ are given the name ``tendex'' lines, suggestive of the role of the tidal distortions associated with ${\bs{\cal E}}$.
This approach to visualization holds the promise of giving the kind of intuitive insights that may suggest what configurations lead to strong emission of power and of linear momentum in gravitational waves. \subsubsection{Static gravitational point quadrupole} We now consider the gravitational equivalent of the configuration in Fig.~\ref{fig:elecstatquad}. Here the objects at $z=\pm d$ are points of mass $M$, rather than points of charge $q$. In analogy with Fig.~\ref{fig:elecstatquad}, we include a negative mass $-2M$ at the center\cite{neg2M}. \begin{figure}[htb] \includegraphics[width=.15\textwidth]{gravquad} \caption{ A simple model of a gravitational quadrupole. \label{fig:gravquad}} \end{figure} The quadrupole components for the gravitational case are given by the same integral as that in Eq.~(\ref{quadelecdef}), but with $\rho$ now representing mass density. The components therefore are those in Eq.~(\ref{quadeleccomps}) with $q$ replaced by $M$. The Newtonian gravitational potential for the mass configuration in Fig.~\ref{fig:gravquad} is the same as the electrostatic potential of the charge configuration in Fig.~\ref{fig:elecstatquad} after the replacements $q\rightarrow M$ and $1/4\pi\epsilon_0\rightarrow -G$. The limit of this potential, for $d\rightarrow0$ with $Q=Md^2$ fixed, is \begin{equation} \Phi_g=-\frac{GQ}{r^3}\left(3\cos^2\theta-1\right)\,. \end{equation} The components of the static gravitoelectric field are straightforward to compute from this potential\cite{fieldcalc}. For the static gravitational quadrupole, in analogy with the electric case, there are no gravitomagnetic fields\cite{whynocalB}. 
The components of the gravitoelectric fields, in a spherical basis, are given by \begin{eqnarray}\label{staticcalE} {\cal E}_{rr}&=& -\frac{12GQ}{r^5}\left(3\cos^2\theta-1\right)\nonumber \\ {\cal E}_{r{\theta}}&=&-\frac{24GQ}{r^5}\sin\theta\cos\theta\label{eq:calEstatcomps} \\ {\cal E}_{{\phi}{\phi}}-{\cal E}_{{\theta}{\theta}}&=&\frac{6GQ}{r^5}\,\sin^2\theta\nonumber \,. \end{eqnarray} The components ${\cal E}_{r\phi}$ and ${\cal E}_{\theta\phi}$ are zero by axisymmetry, and the individual components ${\cal E}_{{\phi}{\phi}},{\cal E}_{{\theta}{\theta}}$ follow from the last of Eqs.~(\ref{staticcalE}) and from the tracelessness condition ${\cal E}_{{\phi}{\phi}}+{\cal E}_{{\theta}{\theta}}=-{\cal E}_{rr}$. We now turn to the issue of visualizing these fields. The structure of the components of $\bs{\cal E}$ shows that two of the eigenvectors will be in the $r,\theta$ plane, and one in the $\phi$ direction. The eigenlines in the $\phi$ direction are simple azimuthal circles. The much more interesting eigenlines in the $r\theta$ plane are shown, as LIC images, in Fig.~\ref{fig:statgraveigens}. \vspace{.2in} \begin{figure}[htb] \hspace{0in}\includegraphics[width=.4\textwidth]{StatNeg}\hspace{12pt}\\ \vspace{.03in} \hspace{0in}\includegraphics[width=.4\textwidth]{StatPos}\\ \vspace{.5in} \caption{ Line Integral Convolutions showing the eigenvector fields for the vertically oriented static gravitational point quadrupole. The top image shows the field for negative eigenvalues, and the bottom image shows the field for positive eigenvalues. \label{fig:statgraveigens}} \end{figure} The comparison of Fig.~\ref{fig:statgraveigens} and Fig.~\ref{fig:LICestat} is very instructive and poses a sequence of questions. The electrostatic and gravitostatic potentials are identical aside from trivial replacements. Why are the visualizations so different? 
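To make the eigenline construction concrete, the components in Eqs.~(\ref{staticcalE}) can be assembled into a symmetric matrix and diagonalized numerically. The following is a minimal Python sketch, not part of the original analysis; it assumes units with $GQ=1$ and an orthonormal spherical basis. It confirms the tracelessness of the vacuum field and shows that one eigenvector lies exactly along $\phi$, so those eigenlines are simple azimuthal circles:

```python
import numpy as np

def calE_static(GQ, r, theta):
    """Matrix of the static gravitoelectric quadrupole field, Eqs. (staticcalE),
    in an orthonormal (r, theta, phi) basis (assumed normalization)."""
    s, c = np.sin(theta), np.cos(theta)
    Err = -12.0 * GQ / r**5 * (3.0 * c**2 - 1.0)
    Ert = -24.0 * GQ / r**5 * s * c
    diff = 6.0 * GQ / r**5 * s**2            # E_phiphi - E_thetatheta
    # tracelessness: E_phiphi + E_thetatheta = -E_rr
    Epp = 0.5 * (-Err + diff)
    Ett = 0.5 * (-Err - diff)
    return np.array([[Err, Ert, 0.0],
                     [Ert, Ett, 0.0],
                     [0.0, 0.0, Epp]])

M = calE_static(GQ=1.0, r=1.0, theta=np.pi / 3)
evals, evecs = np.linalg.eigh(M)   # symmetric => real eigenvalues, orthogonal eigenvectors
print(np.trace(M))                 # ~0: the vacuum field is traceless
print(evecs[:, np.argmax(np.abs(evecs[2, :]))])   # the eigenvector along phi
```

The remaining two eigenvectors lie in the $r\theta$ plane, which is why only that plane need be shown in Fig.~\ref{fig:statgraveigens}.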
The immediate answer is that in the electromagnetic case we are picturing ${\bf E}=-{\bs\nabla}\Phi_e$, a vectorial quantity with components $\partial\Phi_e/\partial x^j$. In the gravitational case we are picturing not the vector field ${\bf g}=-{\bs\nabla}\Phi_g$, with components $\partial\Phi_g/\partial x^j$, but the tensorial quantity ${\bs{\cal E}}$ with the components $\partial^2\Phi_g/\partial x^j\partial x^k$. The component notation correctly suggests that ${\bs{\cal E}}$ is the gradient\cite{neggrad} of ${\bf g}$. Why not, then, simply visualize gravitational fields with images of ${\bf g}$? For that matter, why not simply use potentials in both cases? In electromagnetism we know the answer. The electrostatic potential is useful for electrostatics, but the concept does not carry over to time-changing fields, and hence to radiation. For electromagnetic radiation the electric field ${\bf E}$ is a valid and important concept, but it is not simply the gradient of a scalar. The same turns out to be true in gravitation. For gravitational radiation ${\bs{\cal E}}$ is a valid and important concept, but its components are not the second derivatives of a scalar field. \section{Dynamic electromagnetic and gravitational fields and their visualization} \label{s:dynavis} \subsection{Electromagnetic fields} \label{sub:visdynaEM} \subsubsection{General visualization considerations} While LIC snapshots are very useful, they show the fields only at a moment of time, and do not capture the necessarily dynamic nature of the radiation fields. To address this, a new technique, Dynamic Line Integral Convolution (DLIC), has been developed\cite{johnsvispaper,Sundquist}. Quite aside from the technical challenges to be overcome is a fundamental question underlying the notion of field line movement. Animations can show how a certain field line changes from one moment to the next, but what do we mean by ``a certain field line''? 
How do we relate the field line at one moment to the ``same'' field line at another moment? How do we put unchanging ``tags'' on a field line\cite{BnO}? It turns out that there is a reasonable criterion for tagging field lines. At least in the case of axisymmetric sources, the mathematical nature of Maxwell's equations imposes these tags in a natural way. Furthermore, the same tagging can be justified on purely physical grounds. \begin{figure}[htb] \includegraphics[width=.35\textwidth]{Eflux2} \caption{ Flux of the electric field ${\bf E}$ through an area whose boundary is moving with velocity ${\bf v}$. \label{fig:Eflux}} \end{figure} The mathematical property is illustrated in Fig.~\ref{fig:Eflux}. The flux of electric field through an area $A$, with unit normal ${\bf n}$, changes in time according to $$ \frac{d}{dt}\int_{A} {\bf E}\cdot{\bf n}\,dA=\int_A \frac{\partial{\bf E}}{\partial t}\cdot{\bf n}\,dA+\int_C \left({\bf E}\times{\bf v}\right)\cdot d{\bf l} =\int_A c^2\nabla\times{\bf B}\cdot{\bf n}\,dA +\int_C \left({\bf E}\times{\bf v}\right)\cdot d{\bf l} $$\begin{equation} =\int_C \left(c^2{\bf B}+{\bf E}\times{\bf v}\right)\cdot d{\bf l}\,. \end{equation} We adopt flux conservation as our criterion for the motion of electric field lines. This requires that the field line velocity ${\bf v}$ satisfy \begin{equation}\label{Evconstraint} c^2{\bf B}+{\bf E}\times{\bf v}=0\,. \end{equation} For magnetic field lines, the equivalent condition is \begin{equation}\label{Bvconstraint} {\bf E}+{\bf v}\times{\bf B}=0\,. \end{equation} The component of ${\bf v}$ along the field lines is meaningless, so these conditions give one constraint on the components of ${\bf v}$ perpendicular to the field lines. In the case of axisymmetry, to be considered below, that is all we need. For a magnetic field configuration, e.g., an oscillating magnetic dipole, we can interpret Eq.~(\ref{Bvconstraint}) as the condition that a charged particle experience no net Lorentz force. 
Particles trapped in tight orbits around magnetic field lines {\em do}, in fact, move with field lines, so this condition has a very practical meaning in many plasma situations: the motion of field lines is equivalent to the motion of electrons trapped on tight orbits around field lines. For an electric field these same considerations apply if we replace the electric monopole charge motion with the motion of a (hypothetical) magnetic monopole charge. In the radiation field, and in many nonradiative configurations, the electric and magnetic fields are orthogonal, i.e., ${\bf E\cdot B}=0$. In this case the solutions to Eqs.~(\ref{Evconstraint}) and (\ref{Bvconstraint}) are, respectively, \begin{equation}\label{eq:vandPoynting} {\bf v}=c^2\frac{{\bf E\times B}}{E^2} \quad\quad\quad {\bf v}=\frac{{\bf E\times B}}{B^2}\,. \end{equation} It is interesting that the numerator in both cases is proportional to the Poynting flux, indicating that the flux-conserving motion of the field lines is compatible with the transport of electromagnetic energy. \subsubsection{Oscillating electric point quadrupole} We can convert the static quadrupole of Fig.~\ref{fig:elecstatquad} to a quadrupole source oscillating at frequency $\omega$ by replacing the static distances $d$ by the oscillating distances $d_0+\Delta d \cos{\omega t}$. The ``amplitude'' of the quadrupole then changes from $Q=qd^2$ to \begin{equation} Q(t)=q(d_0+\Delta d\,\cos{\omega t})^2=qd_0^2+2qd_0\Delta d\cos{\omega t} +q(\Delta d\cos{\omega t})^2\,. \end{equation} We now take only the part of this expression that oscillates at frequency $\omega$, and we define $Q\equiv2qd_0\Delta d$. 
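The algebra behind Eq.~(\ref{eq:vandPoynting}) is easy to verify numerically. The following sketch (an addition for illustration, in units with $c=1$; the fields are random but constructed to satisfy ${\bf E\cdot B}=0$, as in the radiation zone) checks that both velocities satisfy the flux-conservation constraints of Eqs.~(\ref{Evconstraint}) and (\ref{Bvconstraint}):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0   # units with c = 1 for this check

# A random pair of orthogonal E, B fields (E.B = 0), as in the radiation zone.
E = rng.normal(size=3)
B0 = rng.normal(size=3)
B = B0 - (B0 @ E) * E / (E @ E)   # project out the component along E

# Flux-conserving line velocities, Eq. (vandPoynting):
v_E = c**2 * np.cross(E, B) / (E @ E)   # for electric field lines
v_B = np.cross(E, B) / (B @ B)          # for magnetic field lines

# Both constraints, Eqs. (Evconstraint) and (Bvconstraint), are satisfied:
print(c**2 * B + np.cross(E, v_E))   # ~ (0, 0, 0)
print(E + np.cross(v_B, B))          # ~ (0, 0, 0)
```

The identity behind the check is the vector triple product: ${\bf E}\times({\bf E}\times{\bf B})=-E^2{\bf B}$ when ${\bf E\cdot B}=0$.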
With this meaning for $Q$, the spherical components of fields produced by this source are\cite{EMcalcs} \begin{equation}\label{eq:Erdynamic} E_r=-2\frac{Q k^2}{4\pi\epsilon_0\,r^2}\left[ \cos{(kr-\omega t)}\left(1-\frac{3}{k^2r^2}\right) -\frac{3}{kr}\sin{(kr-\omega t)} \right] \left(\frac{3}{2}\cos^2\theta-\frac{1}{2}\right) \end{equation} \begin{equation}\label{eq:Ethetadynamic} E_\theta=-\frac{Q k^2}{4\pi\epsilon_0\,r^2}\left[ \sin{(kr-\omega t)}\left(kr-\frac{6}{kr}\right) +\left(3-\frac{6}{k^2r^2}\right) \cos{(kr-\omega t)} \right] \cos\theta\,\sin\theta\,, \end{equation} \begin{equation}\label{eq:Bphidynamic} B_\phi=-\frac{Q k^3}{4\pi\epsilon_0 c\,r}\left[ \left(1-\frac{3}{k^2r^2}\right)\sin{(kr-\omega t)}+\frac{3}{kr}\cos{(kr-\omega t)} \right]\cos\theta\,\sin\theta\,, \end{equation} where $k\equiv\omega/c$. In the limit $kr\rightarrow0$ the dominant terms in the field are proportional to $1/r^4$ and these terms agree precisely with those in Eqs.~(\ref{statelec}). This leads to an important insight about visualization: As $kr$ becomes much smaller than unity, the dominant terms in the field are the $1/r^4$ terms, and these terms have the form of the static solution modulated by $\cos{\omega t}$. In other words, at distances from the source much less than a wavelength the field is a quasistatic field, a field with the structure of the static field, but with an amplitude oscillating in time. Far from the source, in the region $kr\gg1$, the story is very different. Here the radial electric field falls off as $\sim\!1/r^2$. The dominant fields are the $\sim\!1/r$ parts of $E_\theta$ and $B_\phi$: \begin{equation}\label{eq:Erad} E_\theta=-\frac{Q k^3}{4\pi\epsilon_0\,r} \sin{(kr-\omega t)} \cos\theta\,\sin\theta \quad\quad\quad B_\phi=-\frac{Q k^3}{4\pi\epsilon_0 c\,r} \sin{(kr-\omega t)} \cos\theta\,\sin\theta \quad\quad \mbox{for $kr\gg1$}\,. \end{equation} These are the radiation fields, the fields in the limit that $r$ is much larger than the radiation wavelength. 
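The approach to the radiation limit can be checked numerically. In this illustrative sketch (an addition, not from the original text) the overall constant $Qk^3/4\pi\epsilon_0$ is scaled out and $t=0$ is chosen; the full $E_\theta$ of Eq.~(\ref{eq:Ethetadynamic}) is compared with the radiation form of Eq.~(\ref{eq:Erad}) as a function of the dimensionless radius $kr$:

```python
import numpy as np

# Dimensionless far-zone check: overall constant Q k^3/(4 pi eps0) scaled out,
# phase t = 0, angle theta = pi/4 (all assumed, illustrative choices).
theta = np.pi / 4
ang = np.cos(theta) * np.sin(theta)

def E_theta_full(kr):
    """Eq. (Ethetadynamic) with the overall constant removed."""
    return -(1.0 / kr**2) * (np.sin(kr) * (kr - 6.0 / kr)
                             + (3.0 - 6.0 / kr**2) * np.cos(kr)) * ang

def E_theta_rad(kr):
    """Radiation limit, Eq. (Erad), with the same constant removed."""
    return -(1.0 / kr) * np.sin(kr) * ang

for kr in (10.0, 100.0, 1000.0):
    print(kr, abs(E_theta_full(kr) - E_theta_rad(kr)))
# The residual falls off as ~1/(kr)^2, while the radiation term itself falls
# only as 1/(kr): the field becomes purely transverse far from the source.
```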
The fact that the radial component does not contribute to this field is simply a statement that the radiation fields are transverse, orthogonal to the direction to the source. \begin{figure}[htb] \includegraphics[width=.5\textwidth]{Figure6NewFrame00} \caption{A Line Integral Convolution snapshot of the electric field given, at time $t=0$, by Eqs.~(\ref{eq:Erdynamic}) and (\ref{eq:Ethetadynamic}) for an oscillating electric quadrupole. The markers show points at one wavelength on either side of the origin. At distances from the origin small compared to one wavelength $cT$ the solutions approach those of the static solutions in Fig.~\ref{fig:LICestat}. This is one frame of a complete movie that can be found at {\tt http://web.mit.edu/viz/gravrad/visualizations/EM/EMintermediate/}. \label{fig:emradquad}} \end{figure} Figure~\ref{fig:emradquad} gives a LIC snapshot of the fields of the oscillating electric quadrupole at a single instant of time. The wavelength $\lambda=2\pi c/\omega=cT$ of the radiation is indicated on the figure, and it is particularly interesting to note that the central region of Fig.~\ref{fig:emradquad}, the region small compared to $\lambda$, is indistinguishable from Fig.~\ref{fig:LICestat}. In the opposite limit, the regions far from the central region, to the extent that these are included in Fig.~\ref{fig:emradquad}, show the tendency to take the radiation form, with transverse field lines. What is most interesting in Fig.~\ref{fig:emradquad}, however, is neither the near nor the far regions, but rather the intermediate region. The figure reveals details of the structure of the transition fields that cannot easily be inferred from the mathematical expressions in Eqs.~(\ref{eq:Erdynamic}) and (\ref{eq:Ethetadynamic}). It shows how the field structure makes the transition from the very different nature of the near and the far fields\cite{johnsvispaper}. 
A snapshot of a dynamic field, however, necessarily shows only a single phase of the radiation field. A full description requires an animation, and an animation requires a way of identifying field lines at different moments of time, as described in Sec.~\ref{sub:EM}. The identification principle introduced in that subsection gives a single constraint on the velocity of field lines orthogonal to the line itself. In the axisymmetric case, geometry gives us the other constraint. In particular, the electric field lines in Fig.~\ref{fig:elecstatquad}, by symmetry, can have no $\phi$ component. The condition in Eq.~(\ref{Evconstraint}) then completely fixes the line velocity ${\bf v}$. A DLIC animation of the dynamic electric field lines is available online\cite{onlineDLIC}. \subsection{Gravitational fields} \label{sub:visdynaGrav} \subsubsection{Oscillating gravitational point quadrupole} To create a point quadrupole gravitational source oscillating at frequency $\omega$, we can perform precisely the same procedure as for the electromagnetic case. We take the distances $d$ to have the form $d_0+\Delta d \cos{\omega t}$ and we take the symbol $Q$ to mean $2Md_0\Delta d$. 
The nonvanishing spherical components of ${\bs{\cal E}}$ and ${\bs{\cal B}}$, the analogs of Eqs.~(\ref{eq:Erdynamic}), (\ref{eq:Ethetadynamic}), are found to be \begin{eqnarray} {\cal E}_{rr}&=&\frac{-4GQ k^2}{r^3}\,\left[\left(-1+\frac{3}{k^2r^2}\right)\cos{(kr\!-\!\omega t)} +\frac{3\sin{(kr\!-\!\omega t)}}{ kr}\right]\left(3\cos^2\theta-1\right)\label{eq:calerr}\\ {\cal E}_{r{\theta}}&=&\frac{-4GQ k^2}{r^3}\,\left[\left(\frac{6}{(kr)^2}-3\right)\cos{(kr\!-\!\omega t)} -\left( kr-\frac{6}{ kr}\right)\sin{(kr\!-\!\omega t)}\right]\sin\theta\cos\theta\label{eq:calert}\\ {\cal B}_{r{\phi}}&=&\frac{-4GQ k^2}{c r^3}\,\left[-{3}\cos{(kr\!-\!\omega t)} -\left(k r-\frac{3}{kr}\right)\sin{(kr\!-\!\omega t)}\right]\sin\theta\cos\theta\label{eq:calbrp}\\ {\cal E}_{{\phi}{\phi}}-{\cal E}_{{\theta}{\theta}}&=&\frac{-2GQ k^2}{r^3}\,\left[\left( -\frac{3}{k^2r^2}+3-(k r)^2 \right)\cos{(kr\!-\!\omega t)} -\left( \frac{3}{kr}-2k r \right)\sin{(kr\!-\!\omega t)}\right]\sin^2\theta\label{eq:calepp}\\ {\cal B}_{{\theta}{\phi}}&=&\frac{-GQ k^2}{c r^3}\,\left[\left( -3+k^2r^2 \right)\cos{(kr\!-\!\omega t)} -\left( -\frac{3}{kr}+2kr \right)\sin{(kr\!-\!\omega t)}\right]\sin^2\theta\label{eq:calbtp}\,. \end{eqnarray} As was the case for the dynamical fields in Eqs.~(\ref{eq:Erdynamic}), (\ref{eq:Ethetadynamic}) there are interesting limits to these expressions for both $kr\ll1$ and $kr\gg1$. In the former case, we find that to the leading $1/r^5$ order ${\cal E}_{rr}$, ${\cal E}_{r\theta}$, and ${\cal E}_{{\phi}{\phi}}-{\cal E}_{{\theta}{\theta}}$, have the form of the values in Eqs.~(\ref{eq:calEstatcomps}), modulated by $\cos{\omega t}$, while the gravitomagnetic components vanish to this order. As is the case in electromagnetism, at distances from the source much less than a wavelength the fields are those of a quasistatic source, fields with the structure of a static field, but oscillating in time. 
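The quasistatic limit can be verified numerically for, e.g., the component ${\cal E}_{rr}$. In this illustrative sketch (an addition, with assumed units $GQ=1$ and an arbitrary phase $\omega t=0.7$), the dynamic Eq.~(\ref{eq:calerr}) is compared with the static value of Eqs.~(\ref{staticcalE}) modulated by $\cos\omega t$:

```python
import numpy as np

# Near-zone check: for kr << 1 the dynamic E_rr of Eq. (calerr) reduces to the
# static Eq. (staticcalE) value modulated by cos(omega t). Units GQ = 1 assumed.
theta, wt = np.pi / 5, 0.7          # omega*t = 0.7, an arbitrary phase
ang = 3.0 * np.cos(theta)**2 - 1.0

def calErr_dynamic(k, r):
    u = k * r
    br = (-1.0 + 3.0 / u**2) * np.cos(u - wt) + 3.0 * np.sin(u - wt) / u
    return -4.0 * k**2 / r**3 * br * ang

def calErr_quasistatic(r):
    return -12.0 / r**5 * ang * np.cos(wt)

for k in (1e-1, 1e-2, 1e-3):        # r = 1, so kr = k
    full, quasi = calErr_dynamic(k, 1.0), calErr_quasistatic(1.0)
    print(k, abs(full / quasi - 1.0))
# The relative difference shrinks like (kr)^2: deep inside a wavelength the
# field is the static pattern oscillating coherently in time.
```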
In the opposite limit, for $kr\gg1$, the dominant components, those that fall off as $1/r$, are \begin{eqnarray} {\cal E}_{{\phi}{\phi}}=-{\cal E}_{{\theta}{\theta}}&=&\frac{GQ\omega^4}{r}\cos{(kr\!-\!\omega t)}\sin^2\theta\label{eq:epprad}\\ {\cal B}_{{\theta}{\phi}}&=&-\frac{GQ\omega^4}{r}\cos{(kr\!-\!\omega t)}\sin^2\theta\,.\label{eq:btprad} \end{eqnarray} The form of these radiation fields suggests transverse fields, a suggestion that can be confirmed with the mathematics of general relativity and an appropriate set of definitions and constraints. For our purposes here it is sufficient to note that the eigenvector directions can immediately be inferred. From Eq.~(\ref{eq:epprad}), and treating ${\cal E}_{rr}$ as zero, we have that the matrix of gravitoelectric field components is diagonal and hence that there are eigenvectors of the gravitoelectric field in the $\theta$ direction, in the $\phi$ direction and (for zero eigenvalue) in the $r$ direction. From Eq.~(\ref{eq:btprad}) we infer that in the radiation zone there are two eigenvectors for the gravitomagnetic field in the $\theta\phi$ plane, each at 45 degrees from the $\theta$ and the $\phi$ directions. Again, there is an eigenvector field, for zero eigenvalue, in the radial direction. Snapshots of the eigenlines of ${\bs{\cal E}}$ are presented in Fig.~\ref{fig:LICGsnapshot}. As in the electromagnetic case, these snapshots show what we already learned from the mathematics. Close to the source the eigenline field is quasistatic, and far from the source it is transverse. Also as in the electromagnetic case, these figures can clarify what the mathematics cannot: the form of the fields in the intermediate zone, at distances from the source comparable to the wavelength $\lambda= cT$. In this zone the fields must make a transition from the quasistatic small-$r$ form to the radiating large-$r$ form. 
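The 45-degree orientation of the radiation-zone gravitomagnetic eigenvectors follows from elementary linear algebra: in an orthonormal $(\theta,\phi)$ basis the transverse part of ${\bs{\cal B}}$ is a purely off-diagonal symmetric matrix. A minimal sketch (an addition for illustration; the numerical value of ${\cal B}_{\theta\phi}$ is arbitrary):

```python
import numpy as np

# In the radiation zone the only nonvanishing gravitomagnetic component is
# B_thetaphi (Eq. (btprad)), so in the (theta, phi) basis the transverse field
# is a purely off-diagonal symmetric 2x2 matrix. The value of b is arbitrary.
b = -1.7
B = np.array([[0.0, b],
              [b, 0.0]])
evals, evecs = np.linalg.eigh(B)
print(evals)   # +/- |b|: equal and opposite eigenvalues (traceless)
for i in range(2):
    v = evecs[:, i]
    # angle of each eigenvector measured from the theta axis: 45 degrees
    print(np.degrees(np.arctan2(abs(v[1]), abs(v[0]))))
```

The third eigenvector, with zero eigenvalue, is radial, consistent with the transversality of the radiation field.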
\begin{figure}[htb] \includegraphics[width=.45\textwidth]{Figure7aNewNoBkPosFrame00}\\ \includegraphics[width=.45\textwidth]{Figure7bNewNoBkNegFrame00} \caption{LIC snapshots of the eigenlines of the gravitoelectric field ${\bs{\cal E}}$ for an oscillating point gravitational quadrupole described by Eqs.~(\ref{eq:calerr}), (\ref{eq:calert}) and (\ref{eq:calepp}) at time $t=0$. The markers show the points at a distance of one wavelength on either side of the origin. The top LIC shows the eigenlines for one family\cite{family} of eigenvectors; the LIC on the bottom shows the other family. At distances from the origin small compared to the wavelength $cT$, the fields, and hence the eigenlines, approach those of the two families of eigenlines for the static solution shown in Fig.~\ref{fig:statgraveigens}. The top (bottom) image is one frame of a complete movie that can be found at {\tt http://web.mit.edu/viz/gravrad/visualizations/GRAVinter/EposgravNoBkInt/} ({\tt http://web.mit.edu/viz/gravrad/visualizations/GRAVinter/EneggravNoBkInt/} ). \label{fig:LICGsnapshot}} \end{figure} Both in the mathematics and in Fig.~\ref{fig:LICGsnapshot} we see that the form of the fields in the intermediate zone is even richer for the oscillating gravitational quadrupole than for the oscillating electric quadrupole. In the latter case the radiation fields fall off as $1/r$, while the near-source quasistatic field has the character $1/r^4$. For the gravitational quadrupole the radiation fields have the same $1/r$ character, but the near-source quasistatic fields behave as $1/r^5$. There is an interesting feature of the gravitational fields that is associated with the mathematical expressions more than the visualizations. 
The expressions in Eqs.~(\ref{eq:Erdynamic}), (\ref{eq:Ethetadynamic}), and (\ref{eq:Bphidynamic}), for $E_r, E_\theta, B_\phi$, are identical to those for ${\cal E}_{rr},{\cal E}_{r{\theta}},{\cal B}_{r{\phi}}$ when the change $1/4\pi\epsilon_0 r^2 \rightarrow -2GQ/r^3$ is made; this holds not only in the near-source zone or the radiation zone, but for all values of $kr$! This means that, aside from a change in constants and a single factor of $r$, the electromagnetic solution, including radiation fields, is completely contained in the nonradiative part of the gravitational solution. Stated in the other direction: the electromagnetic solution contains all of the gravitational solution except for those components, ${\cal E}_{{\phi}{\phi}}-{\cal E}_{{\theta}{\theta}}$ and ${\cal B}_{{\theta}{\phi}}$, that carry radiation (cf.~Eqs.~(\ref{eq:epprad}) and (\ref{eq:btprad})). This, of course, is not a coincidence, nor is it an idiosyncrasy of the axially symmetric point quadrupole. Rather it is a consequence of the structure of classical theories that describe massless fields, i.e., fields that propagate at the speed of light. This structure is most apparent when the fields are described with the appropriate mathematics, a set of scalars that result from projecting the fields onto a set of basis vectors best suited to the analysis of propagating fields\cite{NP}. This relationship of the electromagnetic and linearized gravity fields gives us an additional possibility for visualization. If we project the gravitational tensors with the unit $r$ vector (equivalently take the dot product of $\bs{\cal E}$ and $\bs{\cal B}$ with the unit radial vector $\bs{\hat{r}}$) we get two vector fields, one with $r,\theta,\phi$ components ${\cal E}_{rr},{\cal E}_{r{\theta}},{\cal E}_{r{\phi}}$, the other with components ${\cal B}_{rr},{\cal B}_{r{\theta}},{\cal B}_{r{\phi}}$. 
The {\bf E} field differs from the vector with components ${\cal E}_{rr},{\cal E}_{r{\theta}},{\cal E}_{r{\phi}}$ only by a factor of $r$, and hence the direction of the two fields is the same at any point. The same is true for the corresponding {\bf B} field. The LICs of the {\bf E} and {\bf B} fields, as in, e.g., Fig.~\ref{fig:emradquad}, can therefore be considered to be LICs of all parts of the gravitational field except the parts transverse to the $r$ direction. It is those transverse parts, of course, that carry the radiation. \subsubsection{General visualization considerations} In linearized gravity, there is no equivalent of the principles that define or constrain field line motion in electromagnetism. What this means is that we can present a sequence of eigenlines, but we cannot say which line at one moment corresponds to which line at another. This fundamental problem turns out not to be a barrier if we only want to show the qualitative nature of the field pattern motions in gravitation. Intuitively useful flow fields can be guessed at qualitatively due to the existence of singularities. In plots of lines tangent to vector fields, the singularities are points at which the vector has zero magnitude, so that the direction of a tangent line is undefined. In Fig.~\ref{fig:emradquad}, such points can be seen both in the equatorial plane and along the symmetry axis. If the vector field illustrated were the velocity of a fluid, these singular points would be called stagnation points. It is simple to argue that lines of the electric field {\bf E} in electromagnetic quadrupole radiation moving according to the rules of Eq.~(\ref{Evconstraint}) do not cross singularities. 
Similarly, in the gravitational case, singularities are easily identified, and the motion of the singularities -- their displacement from one snapshot to another -- therefore gives us a coarse visual sketch of the motion of the field lines; once the position of the singularities is known at a new time, the remainder of the field lines can be approximately drawn in. Different choices of the details of how they are drawn make no difference in the qualitative content of the images. Note that the eigenline fields of gravitational radiation shown in Fig.~\ref{fig:LICGsnapshot} are richer in singularities than the electromagnetic vector fields shown in Fig.~\ref{fig:emradquad}. In addition to ``stagnation points,'' the eigenlines contain line singularities in the equatorial plane. In sequences of LIC snapshots the motion of these singularities, and the understanding that they ``drag'' the field lines, give approximate meaning to the evolution of the eigenlines. Such qualitative visual flow fields could be used in making the movies, i.e., sequences of LIC snapshots, that have been made available online\cite{onlineDLIC}. However, we chose not to do that even qualitatively, and have taken the underlying flow field to be zero in those movies. The eye of the observer provides a concept of motion in any case, and we felt that it would be overinterpretation to try to augment that perception by imposing a qualitative flow pattern that matches it. Therefore we do not evolve the underlying pattern at all in the gravitational radiation movies online. What may be the most interesting point about these visualizations of eigenline dynamics is that, unlike motion of fluid velocity fields and electric field lines, there is no ``correct'' and ``true'' motion of the eigenlines. 
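Tracking the singular points is straightforward numerically. In the equatorial plane $E_\theta$ vanishes identically, so the stagnation points of the oscillating electric quadrupole sit at the zeros of the radial bracket of Eq.~(\ref{eq:Erdynamic}). The following sketch (an illustrative addition; radii are in units of $1/k$, and the bisection helper is a stand-in for any root finder) locates them at two nearby phases and shows that they drift outward:

```python
import numpy as np

def bisect(f, a, b, tol=1e-12):
    """Simple bisection root finder (assumes a sign change on [a, b])."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def Er_bracket(u, wt):
    """Radial bracket of Eq. (Erdynamic) at u = kr, phase wt = omega*t.
    In the equatorial plane E_theta = 0, so zeros here are stagnation points."""
    return np.cos(u - wt) * (1.0 - 3.0 / u**2) - 3.0 * np.sin(u - wt) / u

def stagnation_points(wt, umax=20.0, n=4000):
    u = np.linspace(0.5, umax, n)
    f = Er_bracket(u, wt)
    return np.array([bisect(lambda x: Er_bracket(x, wt), u[i], u[i + 1])
                     for i in range(n - 1) if f[i] * f[i + 1] < 0])

r0 = stagnation_points(0.0)
r1 = stagnation_points(0.2)
print(r0)         # equatorial stagnation radii (units of 1/k) at omega*t = 0
print(r1 - r0)    # small positive shifts: the singularities move outward
```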
There is neither a physical process (e.g., the spiraling of charged particles around magnetic field lines, the identification of specific fluid elements in velocity flows), nor a mathematical criterion (e.g., flux conservation) available as a basis for defining or constraining the motion of lines. Exploiting singularity location is the best we can do. The intuitive feeling that there should be a well defined meaning to the motion of field lines may be rooted in the relationship of electromagnetic field line motion and energy flow, a relationship, expressed in Eq.~(\ref{eq:vandPoynting}), limited to regions, such as the radiation region, in which the {\bf E} and {\bf B} fields are orthogonal. This suggests the question of whether energy flow could be used as a guide to the motion of eigenlines, and hence the question whether there is a gravitational analog of the Poynting flux. There is, in fact, a pragmatic quantitative measure of energy flow in gravitational waves, the Landau-Lifschitz pseudotensor\cite{LL}, but this measure is neither definitive nor useful for our purposes. The fact that it is not definitive is important. Just as gravitational acceleration near the Earth's surface vanishes in a freely falling elevator, many other aspects of gravitation vanish in an appropriately chosen reference frame. A consequence of this is the fundamental impossibility of localizing energy in a gravitational wave. The Landau-Lifschitz pseudotensor can give only a sort of average energy flux over several wavelengths. This already suggests that such a measure is not useful for us; it cannot help us give meaning to line motion. The mathematical details confirm that suggestion; the pseudotensor cannot be inferred from the gravitoelectric and gravitomagnetic fields. In a very rough sense the pseudotensor is constructed from mathematical objects that are spacetime integrals of $\bs{\cal E}$ and $\bs{\cal B}$\cite{handcurlies}. 
\section{Conclusions} \label{s:conc} We have compared the mathematics and visualizations of electromagnetic and gravitational fields, working up from static field configurations to radiating oscillatory versions of these configurations. Most of the details of the mathematics and the images were illustrated with the examples of electric and gravitational point quadrupoles. Visualizations, using LICs, were based on the familiar field lines for the electromagnetic case. Visualizations of gravitational fields used the recently introduced tendex/vortex eigenline formalism\cite{CorntechPRL,CorntechPRD,CorntechZeroes}. We have found, as foretold in the Introduction, that this comparison shows both instructive similarities and instructive differences. An important similarity of radiation examples was the change, in both cases, from a quasistatic field structure at distances from the point source small compared to a wavelength, to a transverse, $1/r$, radiation field at distances large compared to a wavelength. The visualizations, in both cases, show the field structures that are not easily seen in the mathematics, and show how the fields make the transition from the near-source structure to the very different radiation structure. An important difference between the two cases lay in the visualization of dynamical fields. The motion of electromagnetic field lines has both mathematical and physical meaning, while the motion of gravitational eigenlines has neither. This difference can be partially ascribed to the difference between the nature of eigenlines for a tensor field and ``lines of force'' for a vector field. But the difference, especially regarding energy flow, underscores fundamental differences between electromagnetism and relativistic gravitation. In Sec.~\ref{sub:visdynaGrav} there was a rediscovery and illustration of an interesting mathematical relationship between the electromagnetic and gravitational fields. 
Aside from trivial replacements, the complete details of the electromagnetic solution -- including the radiation fields -- are contained within the gravitational field solution. To go from the electromagnetic solution to the gravitational one requires only adding the gravitational radiation fields. \begin{acknowledgments} For discussions of tendex/vortex lines we thank the groups at Caltech and Cornell, in particular Kip Thorne, Mark Scheel, Rob Owen, Jeff Kaplan, Fan Zhang and Aaron Zimmerman. \end{acknowledgments}
\section{Introduction} Let $A$ be an automorphism of the torus $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$. A $C^\infty$ function $\tau: \mathbb{T}^d\longrightarrow\mathbb{R}^s$, with integer $s\geqslant 1$, defines an isometric toral extension of $A$, which is a map $\mathcal{T}_{A,\tau}:\mathbb{T}^d\times\mathbb{T}^s\longrightarrow \mathbb{T}^d\times\mathbb{T}^s$ of the form \begin{equation}\label{gen_isoextension} \mathcal{T}_{A,\tau}(x,y)=(Ax,y+\tau(x) \textup{~mod~}\mathbb{Z}^{s}) \end{equation} where $\mathbb{T}^s=\mathbb{R}^s/\mathbb{Z}^s$. We can think of $\mathbb{T}^d\times\mathbb{T}^s$ as a (trivial) bundle over the base space $\mathbb{T}^d$, so that $\mathcal{T}_{A,\tau}$ is a skew product with an automorphism $A$ on the base $\mathbb{T}^d$ and a translation on each fiber $\{x\}\times\mathbb{T}^s$ with the translation vector $\tau(x)$. If $A$ is ergodic, then such isometric extensions provide a special class of volume preserving partially hyperbolic systems. They have been extensively studied in the literature, especially in the case when $A$ is hyperbolic. Our paper treats abelian group actions generated by multiple diffeomorphisms of the form \eqref{gen_isoextension}, and studies the rigidity properties of such actions under perturbations. In general, a smooth $\mathbb{Z}^k$ action $\rho$ on a compact nilmanifold (including the torus) $M$ is given by a group morphism $\rho:\mathbf{n}\mapsto \rho(\mathbf{n})$ from $\mathbb{Z}^k$ into the group $\textup{Diff}^\infty(M)$ of $C^\infty$ diffeomorphisms of $M$. The classification of smooth actions of higher rank on compact manifolds is one of the central problems in smooth dynamics. It originated from the Zimmer program of studying actions of higher rank groups and lattices \cite{Zimmer1987}. The action considered in this paper acts by automorphisms on the base $\mathbb{T}^d$ and acts isometrically on the fiber $\mathbb{T}^s$. 
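A minimal numerical sketch of the skew product \eqref{gen_isoextension} with $d=2$, $s=1$ is given below. The automorphism is taken to be the cat map and $\tau$ is an illustrative trigonometric function with average $[\tau]=0.3$; both choices are assumptions for the example, not taken from the text:

```python
import numpy as np

# Isometric toral extension T_{A,tau} of Eq. (gen_isoextension), d = 2, s = 1.
A = np.array([[2, 1],
              [1, 1]])   # hyperbolic (and hence ergodic) automorphism of T^2

def tau(x):
    # illustrative smooth fiber translation with average [tau] = 0.3
    return 0.3 + 0.1 * np.sin(2.0 * np.pi * x[0])

def T(x, y):
    """One step of the skew product (x, y) -> (A x mod Z^2, y + tau(x) mod Z)."""
    return (A @ x) % 1.0, (y + tau(x)) % 1.0

x, y = np.array([0.2, 0.7]), 0.0
for _ in range(5):
    x, y = T(x, y)
print(x, y)   # the orbit stays on the torus T^2 x T^1
```

The base dynamics is chaotic while each fiber moves by a rigid rotation, which is the partially hyperbolic structure described above.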
We start by considering a class of $\mathbb{Z}^2$ actions $\alpha=\langle\mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$, that is, $\alpha(\mathbf{n})=\mathcal{T}^{n_1}_{A_1,\tau_1}\circ \mathcal{T}^{n_2}_{A_2,\tau_2}$ for all $\mathbf{n}=(n_1, n_2)\in \mathbb{Z}^2$. We are motivated by an attempt to understand the smooth actions close to $\alpha$ in terms of their dynamics and geometry. Moreover, we assume that the $\mathbb{Z}^2$ action $\langle A_1, A_2\rangle$ on the base $\mathbb{T}^d$ is higher rank, which is closely tied to certain ergodic properties (see Remark \ref{Rem_ergod}). As a consequence, one can show that $ \mathcal{T}_{A_i,\tau_i}$, $i=1,2$, are simultaneously $C^\infty$-conjugate to $ \mathcal{T}_{A_i,[\tau_i]}$, see Proposition \ref{Pro_conj_ave}, where the constant vector \[[\tau_i]\overset{\Delta}=\int_{\mathbb{T}^d}\tau_i(x)\, dx\in \mathbb{R}^s\] denotes the average of $\tau_i$ over $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$, $i=1,2$. The smooth rigidity problem for such actions has attracted considerable interest; by the discussion above, it is related to the properties of the averages $[\tau_1]$ and $[\tau_2]$. We first recall some prior works. $\bullet$ If $s=0$ (i.e., no extension), then $\alpha$ becomes $\langle A_1, A_2\rangle$, and the local rigidity has been established by Katok and the second author. More precisely, \begin{The}\label{thmDamjKatok10}\cite{Damjanovic_Katok10} If the $\mathbb{Z}^2$ action $\alpha=\langle A_1, A_2\rangle$ is higher rank, then there exists an integer $l=l(\alpha)>0$ such that: any smooth action $\widetilde\alpha:\mathbb{Z}^2\to \textup{Diff}^\infty(\mathbb{T}^d)$ which is sufficiently close to $\alpha$ in the $C^l$ topology is $C^\infty$-conjugate to $\alpha$. \end{The} This still holds for any higher rank $\mathbb{Z}^k$, $k\geqslant 2$, action by toral automorphisms, cf. \cite{Damjanovic_Katok10}. 
$\bullet$ If $s\geqslant 1$, the action may enjoy local rigidity subject to constraints that some invariants are preserved. For $[\tau_1]$ and $[\tau_2]$ satisfying a certain Diophantine condition, the following form of local rigidity holds. \begin{The}\cite{Damjanovic_Fayad} Consider the $\mathbb{Z}^2$ action $\alpha=\langle \mathcal{T}_{A_1, \tau_1},\mathcal{T}_{A_2,\tau_2}\rangle$, where the averages $[\tau_1]$ and $[\tau_2]$ satisfy the simultaneous Diophantine condition and the action $\langle A_1, A_2\rangle$ on the base $\mathbb{T}^d$ is higher rank. Then, there exists an integer $l=l(\alpha)>0$ such that: for any smooth $\mathbb{Z}^2$ action $\widetilde{\alpha}=\langle F_1, F_2 \rangle$ that is sufficiently close to $\alpha$ in the $C^l$ topology, $\widetilde\alpha$ can be $C^\infty$-conjugate to $\alpha$ provided that each $F_i$, $i=1,2$, preserves an invariant probability measure $\mu_i$ whose translation vector along the fiber direction is equal to $[\tau_i]$. \end{The} The above assumption is inspired by Moser's local rigidity result \cite{Moser90_commuting} for commuting circle maps (see also \cite{Fayad_Khanin} for a global result). Nevertheless, very little is known in the rational case, i.e., $[\tau_i]\in \mathbb{Q}^s$, $i=1,2$. In this situation, $\mathcal{T}_{A_i,\tau_i}$, $i=1,2$, are \textbf{non-ergodic} on the total space $\mathbb{T}^d\times\mathbb{T}^s$, thus in order to study the rigidity aspect of such actions, some constraints need to be imposed on the class of perturbations. More precisely, one may ask the following question. \begin{Question} Let $[\tau_1]\in \mathbb{Q}^s$ and $[\tau_2]\in \mathbb{Q}^s$, and consider a smooth $\mathbb{Z}^2$ action $\widetilde\alpha$ that is close to $\alpha=\langle \mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$ in the $C^r$ topology with $r$ being suitably large. 
Under which conditions can we show that the perturbed action $\widetilde\alpha$ is $C^\infty$-conjugate to the original action $\alpha$? \end{Question} In this paper, we solve this problem by assuming only that the perturbed actions satisfy a certain topological assumption (see subsection \ref{subsecmainres}). This assumption is not only sufficient but also necessary. We stress that the actions considered here are not necessarily Anosov on the base $\mathbb{T}^d$, and therefore they are not the so-called fibered partially hyperbolic systems discussed in \cite{Damjanovic_Wilkinson_Xu2021_Duke}. In the case of an Anosov base, there are many geometric tools that can be used towards classifying perturbations, see e.g. \cite{Damjanovic_Wilkinson_Xu2021_Duke} and references therein. We also remark that for groups with more structure than $\mathbb{Z}^k$, the perturbations are better understood. In \cite{FM_2009}, Fisher and Margulis established local rigidity in full generality for quasi-affine actions by higher rank lattices in semisimple Lie groups. Prior to \cite{FM_2009}, the question of local rigidity of product actions of property (T) groups was addressed in \cite{Nitica_Torok_1995,Nitica_Torok_2001,Torok_2003}, which considered higher-rank lattice actions of the form $\rho\times id_{\mathbb{T}^1}$ with the subaction $\rho$ having a certain hyperbolic structure. However, the situations and methods in these works are very different from ours and depend on the acting group having Kazhdan's property (T). \subsection{Background on rigidity for smooth actions} To better explain the background and motivation of our result, we give a brief introduction to the rigidity problem of smooth actions, mainly from the viewpoint of dynamical systems. The interested reader can also refer to \cite{Fishier_2007Local} for a survey of the local rigidity problem for general group actions. Let $M$ be a compact manifold.
We refer to a homomorphism $\rho: \mathbb{Z}^k\to \textup{Diff}^\infty(M)$ as an action since it can be thought of as a $C^\infty$ action $\rho: \mathbb{Z}^k\times M\to M$. Briefly, we say a $\mathbb{Z}^k$ action $\rho$ is $C^\infty$-locally rigid if for any sufficiently small perturbation $\widetilde\rho$, there is a $C^\infty$ conjugacy $h$ such that $h\circ \widetilde\rho(\mathbf{n})\circ h^{-1}= \rho(\mathbf{n})$ for all $\mathbf{n}\in\mathbb{Z}^k$. Two $\mathbb{Z}^k$ actions $\rho$ and $\widetilde\rho$ are said to be $C^r$-close if the diffeomorphisms $\rho(\mathbf{e}_i)$ and $\widetilde\rho(\mathbf{e}_i)$ are close in the $C^r$ topology for all $i=1,\cdots, k$, where $\mathbf{e}_1,\cdots,\mathbf{e}_k$ are the generators of $\mathbb{Z}^k$. The dynamical motivation for investigating rigidity started with the study of structural stability. In contrast to structural stability, which preserves only the topological orbit structure, $C^\infty$-local rigidity preserves the entire differentiable orbit structure. By a classical result of Franks and Manning, any Anosov diffeomorphism on the torus is topologically conjugate to an affine Anosov automorphism. Nevertheless, this topological conjugacy, generically, cannot be improved to $C^1$. For example, one can easily perturb an Anosov diffeomorphism $f$ around a periodic point to a new Anosov diffeomorphism $\widetilde f$ which changes the eigenvalues of its differential at this periodic point, so $\widetilde f$ cannot be $C^1$-conjugate to $f$. In contrast to the rank-one (i.e., $\mathbb{Z}^1$ action) situation, higher rank abelian Anosov actions exhibit much more rigidity. Some special cases are studied in \cite{Katok_Lewis_1991,Hur_1992,Katok_Lewis_Zimmer_1996}. Katok and Spatzier first established the local rigidity of higher-rank algebraic Anosov actions with semisimple linear parts \cite{KS_1997}, which was later extended to some non-semisimple actions by Einsiedler and Fisher \cite{Einsiedler_Fisher2007}.
These results motivate the Katok-Spatzier global rigidity conjecture: \textit{every irreducible higher rank abelian smooth Anosov action on a compact manifold is smoothly conjugate to an algebraic action}. It is concerned with the classification of higher-rank smooth Anosov actions. Over the last two decades, significant progress has been made towards this conjecture; we list only a few works \cite{Kal_Sa_2006,Kalinin_Spatzier2007,RodriguezHertz_globalrigidity_2007,FKS_2013,RW_2014} and refer to the references therein. In particular, Rodriguez Hertz and Wang \cite{RW_2014} obtained the optimal global rigidity result on nilmanifolds and tori for $\mathbb{Z}^k$ Anosov actions without rank-one factors. This extended the earlier work \cite{FKS_2013}, which required that every Weyl chamber contain an Anosov element. However, the smooth classification of partially hyperbolic actions is much more complicated. Even local rigidity results are scarce. The major difficulty lies in the appearance of center foliations. The first breakthrough is the work \cite{Damjanovic_Katok10} on local rigidity of certain partially hyperbolic affine actions of abelian groups on tori using a KAM approach. A few more recent developments and rigidity results on partially hyperbolic actions can be found in \cite{Damjanovic_Katok2011,Damjanovic_Fayad,Vin_Wang_2019, Damjanovic_Wilkinson_Xu2021_Duke}, \textit{etc}. \subsection{The main results} \label{subsecmainres} The actions considered in this paper are a special class of partially hyperbolic actions: (I) the action has a partially hyperbolic part and a non-hyperbolic, isometric extension part; (II) all elements of the action are non-ergodic on the total space. To state our main results we need some basic definitions. The first one is the higher rank condition, which is a common condition in the rigidity problem of group actions.
\begin{defn} We say that a $\mathbb{Z}^k$, $k\geqslant 2$ action has \emph{a rank-one factor} if it factors to a $\mathbb{Z}^k$ action which is (up to a finite index subgroup of $\mathbb{Z}^k$) generated by a single diffeomorphism. Moreover, an action is said to be \emph{higher rank} if it has no rank-one factors. \end{defn} \begin{Rem}[Ergodicity]\label{Rem_ergod} From a dynamical viewpoint, saying that a $\mathbb{Z}^k$ action by toral automorphisms is higher rank is equivalent to saying that it contains a subgroup $L$ isomorphic to $\mathbb{Z}^2$ such that every element in $L$, except for the identity, is ergodic. See \cite{Starkov1999}. In particular, for a $\mathbb{Z}^2$ action, this condition is equivalent to saying that all non-trivial elements of the action are ergodic; see Lemma \ref{lemequihighrank} for an explanation. Consequently, these ergodic automorphisms are partially hyperbolic. \end{Rem} For our purpose we need the following notion. \begin{defn} A map $F(x,y): \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{T}^d\times\mathbb{T}^s$ is said to satisfy \emph{the intersection property} if \begin{enumerate} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumi}{\bf(IP)} \makeatletter \makeatother \item \label{condIP} for any $d$-dimensional torus $\Gamma$ that is $C^1$-close\footnote{It means that $\Gamma$ is of the form $\{(x,y): x\in \mathbb{T}^d,\quad y=y_0+\psi(x) ~\textup{mod}~\mathbb{Z}^s\}$ where $\psi\in C^1(\mathbb{T}^d,\mathbb{R}^s)$ and $\|\psi\|_{C^1}\leqslant \delta$ for an a priori fixed number $\delta>0$.} to $\mathbb{T}^d\times\{y_0\}$, one has $F(\Gamma)\cap \Gamma\neq \emptyset.$ \end{enumerate} \end{defn} For instance, the map $\mathcal{T}_{A,0}=A\times id_{\mathbb{T}^s}$ satisfies condition \ref{condIP}. This can be readily verified by using the fact that $x=0$ is always a fixed point of the automorphism $A:\mathbb{T}^d\to\mathbb{T}^d$.
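Spelled out, the verification for $\mathcal{T}_{A,0}$ reads as follows. If $\Gamma=\{(x,\,y_0+\psi(x)): x\in\mathbb{T}^d\}$, then
\[
\mathcal{T}_{A,0}(\Gamma)=\{(Ax,\; y_0+\psi(x)): x\in\mathbb{T}^d\},
\]
and evaluating at the fixed point $x=0$ of $A$ shows that the point $(0,\, y_0+\psi(0))$ belongs to both $\Gamma$ and $\mathcal{T}_{A,0}(\Gamma)$; hence $\mathcal{T}_{A,0}(\Gamma)\cap \Gamma\neq\emptyset$.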
Historically, the intersection property was used by Moser (see \cite{Siegel_Moser1971} or R\"ussmann's works), as an alternative to the area-preserving condition, to prove the existence of invariant circles for twist maps of a cylinder (Moser's twist map theorem). Subsequently, several high-dimensional versions of the intersection property were introduced to study the existence of invariant tori for certain non-symplectic maps in high-dimensional spaces. In the present paper, condition \ref{condIP} also plays an important role in the proof. Indeed, it is used to estimate the averaged perturbations during our KAM iteration process. Now we can state the first main result. \begin{THEalpha}\label{MainThm_0} Consider the smooth $\mathbb{Z}^2$ action $\alpha=\langle\mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$ on $\mathbb{T}^d\times\mathbb{T}^s$, where the averages $[\tau_1]$ and $[\tau_2]$ are rational and the action $\langle A_1, A_2\rangle$ on the base $\mathbb{T}^d$ is higher rank. Then, there exist $\varepsilon=\varepsilon(\alpha)>0$ and $\mu=\mu(\alpha)>0$ such that: for any smooth $\mathbb{Z}^2$ action $\widetilde{\alpha}=\langle \mathcal{F}_1, \mathcal{F}_2\rangle$, if \begin{enumerate}[(i)] \item $\textup{dist}_{C^\mu}(\widetilde{\alpha}, \alpha)<\varepsilon$; \item the finite set $\{(i,j)\in \mathbb{Z}^2: ~|i|\leqslant M_0, |j|\leqslant M_0\}$ contains two linearly independent elements $\mathbf{m}, \mathbf{n}$ such that $\widetilde{\alpha}(\mathbf{m})$ and $\widetilde{\alpha}(\mathbf{n})$ satisfy condition \ref{condIP}, where $M_0\geqslant 1$ is the minimal positive integer $\lambda$ such that $\lambda\,[\tau_1]\in \mathbb{Z}^s$ and $\lambda\,[\tau_2]\in \mathbb{Z}^s$, \end{enumerate} then the action $\widetilde{\alpha}=\langle \mathcal{F}_1, \mathcal{F}_2\rangle$ is $C^\infty$-conjugate to $\alpha=\langle\mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$.
\end{THEalpha} \begin{Rem} By assumption, the original action $\alpha$ acts ergodically on the base $\mathbb{T}^d$ and isometrically on the fiber $\mathbb{T}^s$. But $\alpha$ does not act ergodically on the total space $\mathbb{T}^d\times\mathbb{T}^s$ since $[\tau_1], [\tau_2]$ are rational. The generators of the perturbed action $\widetilde\alpha$, $\mathcal{F}_l(x,y)\in \textup{Diff}^\infty(\mathbb{T}^d\times\mathbb{T}^s)$, $l=1,2$, can be written in the form $\mathcal{F}_l=\mathcal{T}_{A_l, \tau_l}+f_l$, where the perturbation term $f_l(x,y)=(f_{l,1}(x,y), f_{l,2}(x,y))$ with $f_{l,1}\in C^\infty(\mathbb{T}^{d+s}, \mathbb{R}^d)$ and $f_{l,2}\in C^\infty(\mathbb{T}^{d+s}, \mathbb{R}^s)$. The above-mentioned $C^\mu$ distance between the two actions $\widetilde\alpha$ and $\alpha$ is defined by using the generators: \[\textup{dist}_{C^\mu}(\widetilde{\alpha}, \alpha):=\max\limits_{l=1,2}\|\mathcal{F}_l-\mathcal{T}_{A_l,\tau_l}\|_{C^\mu(\mathbb{T}^d\times\mathbb{T}^s)}.\] \end{Rem} \begin{Rem} Theorem \ref{MainThm_0} implies that there is a near-identity conjugacy $U\in \textup{Diff}^\infty(\mathbb{T}^d\times\mathbb{T}^s)$ such that $U\circ \widetilde\alpha(\mathbf{k})\circ U^{-1}= \alpha(\mathbf{k})$, for all $\mathbf{k}\in \mathbb{Z}^2$. As will be shown in subsection \ref{subsecproofA}, the construction of the conjugacy $U$ mainly consists of two parts: one is produced by the KAM scheme, and the other is obtained by solving a cohomology equation over periodic diffeomorphisms. \end{Rem} \begin{Rem} Condition \ref{condIP} cannot be removed; otherwise, the above result may fail. Although the local rigidity of the higher rank action $\langle A, B\rangle$ on $\mathbb{T}^d$ holds (see Theorem \ref{thmDamjKatok10}), it cannot be applied directly to the situation considered in the above theorem.
This is because the generating elements $\mathcal{F}_1(x,y), \mathcal{F}_2(x,y)$ of the perturbed action $\widetilde\alpha$ depend on both the base variable $x$ and the fiber variable $y$, and for each fixed $y$ the restriction maps $\pi_1\circ\mathcal{F}_1(\cdot,y), \pi_1\circ\mathcal{F}_2(\cdot, y):$ $\mathbb{T}^d\to \mathbb{T}^d$ of $\mathcal{F}_1$ and $\mathcal{F}_2$ to the base $\mathbb{T}^d$, generically, do not commute. Here, $\pi_1:(x,y)\mapsto x$ is the projection. \end{Rem} \subsubsection{Volume preserving actions} By a volume preserving diffeomorphism of $M$, we mean a diffeomorphism that preserves a volume form on $M$. For the actions considered here, if \textbf{the fiber dimension $s=1$} and the perturbations are within the class of volume preserving actions, we have the following result. \begin{THEalpha}\label{MainThm_1} Consider the smooth $\mathbb{Z}^2$ action $\alpha=\langle\mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$ on $\mathbb{T}^d\times\mathbb{T}^1$, where the averages $[\tau_1]$ and $[\tau_2]$ are rational and the action $\langle A_1, A_2\rangle$ on the base is higher rank. Then, there exist $\varepsilon=\varepsilon(\alpha)>0$ and $\mu=\mu(\alpha)>0$ such that: for any smooth $\mathbb{Z}^2$ action $\widetilde{\alpha}=\langle \mathcal{F}_1, \mathcal{F}_2\rangle$ with $\mathcal{F}_l$, $l=1,2$, preserving a volume form on $\mathbb{T}^{d+1}$, if \begin{enumerate}[(i)] \item $\textup{dist}_{C^\mu}(\widetilde{\alpha}, \alpha)<\varepsilon$; \item for $l=1,2$, letting $q_l$ denote the minimal positive integer $\lambda$ such that $\lambda\,[\tau_l]\in \mathbb{Z}$, each map $\mathcal{F}_l^{q_l}=\mathcal{F}_l\circ\cdots\circ\mathcal{F}_l$ admits an invariant $d$-dimensional torus homotopic to $\mathbb{T}^d\times\{0\}$, \end{enumerate} then the action $\widetilde{\alpha}=\langle \mathcal{F}_1, \mathcal{F}_2\rangle$ is $C^\infty$-conjugate to $\alpha=\langle\mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$.
\end{THEalpha} We also give a corresponding result for $\mathbb{Z}^k$, $k\geqslant 2$ actions. \begin{THEalpha}\label{MainThm_2} Let $\rho=\rho_0\times id_{\mathbb{T}^1}$ be a $\mathbb{Z}^k$, $k\geqslant 2$ action on $\mathbb{T}^d\times\mathbb{T}^1$, where $\rho_0$ is a $\mathbb{Z}^k$ higher rank action by automorphisms on $\mathbb{T}^d$. Then, there exist $\varepsilon=\varepsilon(\rho)>0$ and $\mu=\mu(\rho)>0$ such that: for any smooth $\mathbb{Z}^k$ action $\widetilde{\rho}$ which preserves a volume form on $\mathbb{T}^{d+1}$, if \begin{enumerate}[(i)] \item $\textup{dist}_{C^\mu}(\widetilde{\rho}, \rho)<\varepsilon$; \item $\widetilde{\rho}$ admits a common invariant $d$-dimensional torus homotopic to $\mathbb{T}^d\times\{0\}$, \end{enumerate} then $\widetilde{\rho}$ is $C^\infty$-conjugate to $\rho$. \end{THEalpha} \begin{Rem} In condition $(ii)$, that $\widetilde{\rho}$ admits a common invariant $d$-dimensional torus means that there exists a $d$-dimensional torus which is invariant under $\widetilde\rho({\mathbf{e}_i})$ for all generators $\mathbf{e}_1,\cdots, \mathbf{e}_k$ of $\mathbb{Z}^k$. \end{Rem} \subsubsection{The method.} In contrast to almost all existing local/global results for isometric extensions, where the base maps are Anosov, the situation considered here is only \textit{partially hyperbolic} on the base $\mathbb{T}^d$, so the geometric methods in previous works on isometric extensions of Anosov systems, or of partially hyperbolic, accessible systems, are not applicable in our situation. The main approach we will use is a generalization of the KAM (Kolmogorov-Arnold-Moser) method. In fact, finding a conjugacy between a perturbed action and the original one is a problem of inverting a nonlinear operator. Our strategy is to apply linearization and successive iterations to produce a solution to the nonlinear problem.
It is independent of methods which use hyperbolic dynamics on the base, so our arguments are analytic rather than geometric in nature. \subsection{Strategy of the proof} The higher rank condition for the action $\langle A_1, A_2\rangle$ on the base $\mathbb{T}^d$ plays a key role throughout our proofs. One immediate consequence is that $ \mathcal{T}_{A_i,\tau_i}$, $i=1,2$, are simultaneously $C^\infty$-conjugate to $\mathcal{T}_{A_i,[\tau_i]}$. The philosophy behind this fact is that the higher rank condition on the base implies that $\tau_i(x)-[\tau_i]$, $i=1,2$, is a coboundary with respect to the action $\langle A_1, A_2\rangle$, see Section \ref{Section_actionofproducttype}. Since $\mathcal{T}_{A_i,[\tau_i]}=A_i\times R_{[\tau_i]}$, if $q_i$ is the period of the rational vector $[\tau_i]$, then the $q_i$-fold composition becomes $\mathcal{T}_{A_i, [\tau_i]}^{q_i}=A_i^{q_i}\times id_{\mathbb{T}^s}$. Consequently, this reduces Theorem \ref{MainThm_0} to studying the local rigidity of a subaction generated by two automorphisms of the form $\mathcal{T}_{A,0}=A\times id_{\mathbb{T}^s}$ and $\mathcal{T}_{B,0}=B\times id_{\mathbb{T}^s}$, where $A, B$ still satisfy the higher rank condition. See Theorem \ref{Element_Thm1}. The proof of Theorem \ref{Element_Thm1} occupies Section \ref{Section_conjugequation}--Section \ref{Section_KAMscheme}. We adopt the KAM methodology, as in \cite{Damjanovic_Katok10}. The basic philosophy of this method is that one can reduce a nonlinear problem to a linear one and solve the linearized equations approximately; by iterating this process, the limit of the successive iterations finally produces a solution to the nonlinear problem. The proof includes obtaining tame solutions for the cohomological equations and constructing a tame splitting as well.
More precisely, the linear problem consists of solving approximately (with error quadratically small with respect to the error of the perturbation) two kinds of cohomological equations over a $\mathbb{Z}^2$ action by non-ergodic partially hyperbolic automorphisms: the twisted case \begin{equation}\label{sec1twis} \begin{aligned} u_1(Ax, y)-Au_1(x,y)=\mathbf{f}_1(x,y),\qquad u_1(Bx,y)-Bu_1(x,y)=\mathbf{g}_1(x,y), \end{aligned} \end{equation} and the untwisted case \begin{equation}\label{sec1untwis} u_2 (Ax,y)-u_2(x,y)=\mathbf{f}_2(x,y),\qquad u_2(Bx,y)-u_2(x,y)=\mathbf{g}_2(x,y). \end{equation} If $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)=0$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)=0$ (the operators $\mathcal{L}_1$ and $\mathcal{L}_2$ are defined in \eqref{fas1}-\eqref{fas2}), then thanks to the higher rank condition it is feasible to solve equations \eqref{sec1twis}-\eqref{sec1untwis} exactly and estimate the solution with a fixed loss of regularity. Since for a general perturbation $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\ne 0$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)\ne 0$, one needs (as in \cite{Damjanovic_Katok10}) to approximate the data given by the perturbation by data which satisfy $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)=0$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)=0$. To this end, we use the concrete constructions from \cite{Damjanovic_Katok10}, but we have to combine them with the idea of smooth dependence on parameters to give tame estimates (for the solutions) in the fiber direction as well as the base direction. See Section \ref{Section_conjugequation} and Section \ref{Section_Smoothdependparamt} for more details.
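To indicate the mechanism in the simplest instance, consider the first equation of the untwisted case \eqref{sec1untwis} alone, treating $y$ as a parameter. Expanding in Fourier series in $x$ and using $\widehat{(u\circ A)}_n=\widehat u_{A^*n}$, the equation becomes
\[
\widehat{u}_{2,A^*n}(y)-\widehat{u}_{2,n}(y)=\widehat{\mathbf{f}}_{2,n}(y),\qquad n\in\mathbb{Z}^d .
\]
For $n=0$ this forces the obstruction $[\mathbf{f}_2](y)=0$, while for $n\neq 0$ one has the formal solutions
\[
\widehat{u}_{2,n}=-\sum_{i\geqslant 0}\widehat{\mathbf{f}}_{2,(A^*)^i n}
\qquad\text{or}\qquad
\widehat{u}_{2,n}=\sum_{i\geqslant 1}\widehat{\mathbf{f}}_{2,(A^*)^{-i} n},
\]
obtained by telescoping along the (infinite) dual orbit of $n$. The point of the higher rank condition is that, by estimates of the type in Lemma \ref{lem_Abexp}, for each $n$ one of the two sums runs over dual frequencies of exponentially growing norm, so the smoothness of $\mathbf{f}_2$ yields convergence with a fixed loss of regularity. This is only a sketch; the actual construction below uses both generators $A$ and $B$ simultaneously.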
When it comes to verifying the convergence of the KAM scheme, one needs to handle the following problems:\\ (I)\, $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\neq 0$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)\neq 0$ in general;\\ (II)\, the fixed loss of regularity; \\ (III)\, the estimate of the averaged terms $[\mathbf{f}_2](y):=\int_{\mathbb{T}^d}\mathbf{f}_2(x,y)\, dx$ and $[\mathbf{g}_2](y):=\int_{\mathbb{T}^d}\mathbf{g}_2(x,y)\,dx$.\\ To tackle (I), the basic idea is to solve equations \eqref{sec1twis}--\eqref{sec1untwis} up to quadratic errors. This requires splitting $\mathbf{f}_i, \mathbf{g}_i$, $i=1,2$, into $\mathbf{f}_i=\mathcal{P}(\mathbf{f}_i)+\mathcal{E}(\mathbf{f}_i)$ and $\mathbf{g}_i=\mathcal{P}(\mathbf{g}_i)+\mathcal{E}(\mathbf{g}_i)$ in a tame way, such that $\mathcal{L}_i(\mathcal{P}(\mathbf{f}_i), \mathcal{P}(\mathbf{g}_i))=0$ and the remainder terms $\mathcal{E}(\mathbf{f}_i), \mathcal{E}(\mathbf{g}_i)$ are quadratically small with tame estimates. This issue also appeared in the work \cite{Damjanovic_Katok10}, but the new difficulty here is that we need to obtain a tame splitting in the fiber direction as well as in the base direction. This requires delicate analysis, and our arguments rely on a specific and explicit construction, see Section \ref{Section_tamesplit}. To compensate for the fixed loss of regularity (II), a standard device in the KAM method is to solve, at each iterative step, the linearized equations modified by smoothing operators in place of the original linearized equations. To solve problem (III), the intersection property enters the picture and causes the averaged terms to be of higher order. See subsection \ref{subsection_induclem}. Eventually, the iteration process is set up and carried out in subsection \ref{subsection_KAMscheme}, which proves Theorem \ref{Element_Thm1}.
Finally, we go back to prove Theorem \ref{MainThm_0}, which is concerned with the perturbation $\widetilde\alpha$ of the action $\alpha=\langle \mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2}\rangle$. As will be shown in Section \ref{Section_proofMainResult}, the construction of the conjugacy between $\widetilde\alpha$ and $\alpha$ mainly consists of two parts: one is produced by the KAM scheme, and the other is obtained by solving a cohomology equation over periodic diffeomorphisms. In general, the KAM scheme alone does not produce the exact conjugacy conjugating $\widetilde\alpha$ to $\alpha$. It only produces a conjugacy that conjugates the subgroup $\widetilde\alpha\big|_{\mathbf{m}\mathbb{Z}+\mathbf{n}\mathbb{Z}}$ to $\alpha\big|_{\mathbf{m}\mathbb{Z}+\mathbf{n}\mathbb{Z}}$, see Part 1 in the proof of Theorem \ref{MainThm_0}. To resolve this issue we need one more step (based on Lemma \ref{lem_periodid} and the commutation relation) to construct a diffeomorphism conjugating the whole action $\widetilde\alpha$ to $\alpha$, see Part 2 in the proof of Theorem \ref{MainThm_0}. Theorems \ref{MainThm_1}--\ref{MainThm_2} are obtained as corollaries of Theorem \ref{MainThm_0}. \subsection{Further discussion} Although some of the statements are true in a more general setting, we state them in this paper only for the situation on the torus. One may wonder whether the strategy in this paper could give results for more general actions of a similar kind. A general question could be: if one can show local rigidity via the KAM method for an action on some base manifold, under which conditions can the KAM method be applied to classify perturbations of fiber bundle extensions of such actions? More concretely, we may state the following problem. \begin{Problem} Let $M$ be a compact nilmanifold and $\rho_0: \mathbb{Z}^2\to \textup{Aut}(M)$ be a $\mathbb{Z}^2$ action of higher rank.
We consider the extension $\rho=\rho_0\times id_{\mathbb{T}^s}$ of $\rho_0$ on the bundle $M\times \mathbb{T}^s$. Suppose that $\rho_0$ is locally rigid via the KAM approach. Then, for any smooth action $\widetilde\rho:\mathbb{Z}^2\to \textup{Diff}^\infty(M\times\mathbb{T}^s)$ which is sufficiently close to $\rho$ and satisfies the intersection property, is $\widetilde\rho$ $C^\infty$-conjugate to $\rho$? \end{Problem} That $\rho_0$ is locally rigid via the KAM approach means that there is a tame splitting on the base. However, to apply the KAM approach to the extended action $\rho=\rho_0\times id_{\mathbb{T}^s}$, a tame splitting along the fiber direction is also required. In other words, one needs to deal with the following problem. \begin{Problem} Let $C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)$ be the space of all smooth functions $u(x,y): M\times\mathbb{T}^s\to \mathbb{R}^s$ which satisfy $[u](y):=\int_{M} u(x,y)\, dx=0$. Consider the two smooth tame linear operators \begin{align*} d^0: C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)&\longrightarrow C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)\times C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)\\ u &\longmapsto (u\circ \rho(\mathbf{e}_1)-u,~u\circ \rho(\mathbf{e}_2)-u) \end{align*} and \begin{align*} d^1: C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)\times C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)&\longrightarrow C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)\\ (u,v) &\longmapsto u\circ \rho(\mathbf{e}_2)-u-(v\circ \rho(\mathbf{e}_1)-v). \end{align*} Does the sequence \begin{equation*} C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s) \xrightarrow{~ d^0~} C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s)\times C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s) \xrightarrow{~d^1~} C_0^\infty(M\times\mathbb{T}^s,\mathbb{R}^s) \end{equation*} admit a tame splitting? \end{Problem} In the case of $M=\mathbb{T}^d$, we show in Section \ref{Section_tamesplit} that the tame splitting exists.
We have to say, however, that our proof relies on a specific and explicit construction via Fourier analysis, which allows us to obtain estimates for derivatives along the fiber direction as well as the base direction. Nevertheless, this concrete construction has not yet been generalized to general compact nilmanifolds. This remains a deep and open problem. It may be helpful to use the exponential mixing tool developed in \cite{Gorodnik_Spatzier2015}. \subsection{Organization of the paper} This paper is organized as follows. Section \ref{Section_prelim} reviews some basic facts and properties. In Section \ref{Section_actionofproducttype} we consider an action of product type, and state a corresponding local rigidity result (Theorem \ref{Element_Thm1}) for such actions. It plays a crucial role in the proof of Theorem \ref{MainThm_0}. Section \ref{Section_conjugequation}--Section \ref{Section_tamesplit} are mainly devoted to obtaining tame solutions for the cohomological equations and constructing a tame splitting. In Section \ref{Section_KAMscheme}, we prove Theorem \ref{Element_Thm1} by using the KAM scheme. Theorem \ref{MainThm_0}, Theorem \ref{MainThm_1} and Theorem \ref{MainThm_2} are finally proved in Section \ref{Section_proofMainResult}. \section{Preliminaries}\label{Section_prelim} \subsection{Ergodic toral automorphisms and partial hyperbolicity} An automorphism of the torus $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$ is determined by a $d\times d$ matrix $A\in \textup{GL}(d,\mathbb{Z})$ with integer entries and determinant $\pm 1$. In this paper, by a slight abuse of notation, we use $A$ to denote both the matrix $A$ and the induced automorphism of $\mathbb{T}^d$. The dual to $A$ is the automorphism $A^*:\mathbb{Z}^d\to\mathbb{Z}^d$ given by the matrix $A^*=(A^T)^{-1}$. In particular, the Fourier coefficients of any function $\phi\in C^\infty(\mathbb{T}^d,\mathbb{R})$ satisfy $\widehat{(\phi\circ A)}_n=\widehat\phi_{A^*n}$ for all $n \in \mathbb{Z}^d$.
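For the reader's convenience, the last identity follows from the change of variables $z=Ax$, which preserves Lebesgue measure since $|\det A|=1$:
\[
\widehat{(\phi\circ A)}_n=\int_{\mathbb{T}^d}\phi(Ax)\,e^{-2\pi i\, n\cdot x}\,dx
=\int_{\mathbb{T}^d}\phi(z)\,e^{-2\pi i\, n\cdot A^{-1}z}\,dz
=\int_{\mathbb{T}^d}\phi(z)\,e^{-2\pi i\, (A^{*}n)\cdot z}\,dz
=\widehat{\phi}_{A^{*}n},
\]
where we used $n\cdot A^{-1}z=\bigl((A^{-1})^{T}n\bigr)\cdot z$ and $(A^{-1})^{T}=(A^{T})^{-1}=A^{*}$.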
The following properties are classical, see for instance \cite{Katok_Nitica_2011}. \begin{Lem}\label{Lem_charergod} (i) An automorphism of $\mathbb{T}^d$ induced by a matrix $A$ is ergodic if and only if none of the eigenvalues of $A$ is a root of unity.\\ (ii) An automorphism of $\mathbb{T}^d$ induced by a matrix $A$ is ergodic if and only if for any $n\in\mathbb{Z}^d\setminus\{0\}$, the dual orbit $\mathcal{O}(n):=\{(A^*)^i n~:~ i\in\mathbb{Z} \}$ is infinite.\\ (iii) Any ergodic automorphism of $\mathbb{T}^d$ is partially hyperbolic. \end{Lem} We infer from Lemma \ref{Lem_charergod} (i) that if $A$ is ergodic, then the automorphism of $\mathbb{T}^d$ induced by $A^*=(A^T)^{-1}$ is also ergodic. In addition, by partial hyperbolicity, $A^*$ has an invariant splitting of the tangent space \[\mathbb{R}^d=E^u(A^*)\oplus E^c(A^*)\oplus E^s(A^*), \] and there are a rate $\rho>1$ and a positive constant $C$ such that \begin{equation}\label{parhyp_split} \begin{aligned} v\in E^u(A^*) ~&\Longleftrightarrow ~\|(A^*)^iv\|\geqslant C\rho^i\|v\|, \quad \textup{for all~} i\geqslant 0, \\ v\in E^s(A^*) ~& \Longleftrightarrow ~\|(A^*)^iv\|\geqslant C\rho^{-i}\|v\|, \quad \textup{for all~} i\leqslant 0,\\ v\in E^c(A^*) ~& \Longleftrightarrow ~\|(A^*)^iv\|\geqslant C\frac{\|v\|}{(1+|i|)^d}, \quad \textup{for all~}i\in\mathbb{Z}. \end{aligned} \end{equation} Here, the superscripts $c, u$ and $s$ stand for ``center'', ``unstable'' and ``stable'', respectively. $A^*$ expands $E^u(A^*)$ (resp. contracts $E^s(A^*)$) with the expansion (resp. contraction) rate being at least $\rho$. For $n\in\mathbb{Z}^d$ we can write $n=\pi_u(n)+\pi_s(n)+\pi_c(n),$ where $\pi_u(n), \pi_s(n)$ and $\pi_c(n)$ are the projections of $n$ to the subspaces $E^u(A^*)$, $E^s(A^*)$ and $E^c(A^*)$, respectively.
In this paper we say \begin{itemize} \item $n$ is \emph{mostly in} $E^u(A^*)$ and will write $n\hookrightarrow E^u(A^*)$, if $\|\pi_u(n)\|=\max\limits_{\iota=u,c,s}\|\pi_\iota (n) \|$; \item $n$ is \emph{mostly in} $E^s(A^*)$ and will write $n\hookrightarrow E^s(A^*)$, if $\|\pi_s(n)\|=\max\limits_{\iota=u,c,s}\|\pi_\iota (n) \|$; \item $n$ is \emph{mostly in} $E^c(A^*)$ and will write $n\hookrightarrow E^c(A^*)$, if $\|\pi_c(n)\|=\max\limits_{\iota=u,c,s}\|\pi_\iota (n) \|$. \end{itemize} Obviously, if $n\hookrightarrow E^\iota(A^*)$ with $\iota\in \{u,s,c\}$, then \begin{equation}\label{ineq_mostlyin} \frac{1}{3}\|n\|\leqslant \|\pi_\iota(n)\|\leqslant \|n\|. \end{equation} The following result comes from the Katznelson lemma \cite{Katznelson_1971}. See also \cite[Remark 5]{Damjanovic_Katok10}. \begin{Lem}\label{Lem_Katznelson} Let $M:\mathbb{T}^d\to\mathbb{T}^d$ be an ergodic automorphism, and let $V=E^s(M)\oplus E^c(M)$ be the linear subspace of $\mathbb{R}^d$ spanned by the contracting and neutral spaces. Then $V\cap \mathbb{Z}^d=\{0\}$ and there is a constant $\gamma>0$ such that for any nonzero $n\in \mathbb{Z}^d$, \[\qquad\|\pi_u (n)\|\geqslant \gamma \,\|n\|^{-d},\] where $\pi_u$ is the projection to the expanding space $E^u(M)$, and $\|\cdot\|$ is the Euclidean norm. \end{Lem} \subsection{Higher rank actions on tori} Let us consider higher rank actions by automorphisms of $\mathbb{T}^d$. Recall that a smooth $\mathbb{Z}^k$ action $\rho$ by automorphisms of $\mathbb{T}^d$ is given by a group homomorphism $\rho:\mathbf{n}\mapsto \rho(\mathbf{n})$ from $\mathbb{Z}^k$ into the group $\textup{Aut}(\mathbb{T}^d)$ of automorphisms of $\mathbb{T}^d$. It is a classical result that $\rho$ is higher rank if and only if $\rho(\mathbb{Z}^k)$ contains a subgroup isomorphic to $\mathbb{Z}^2$ such that all non-trivial elements in this subgroup are ergodic automorphisms, cf. \cite{Starkov1999}. In particular, in the case of $\mathbb{Z}^2$ actions we can say a little more.
The following result is elementary, and we give a proof for the reader's convenience. \begin{Lem}\label{lemequihighrank} Let $\rho_0=\langle A_1, A_2\rangle=\{A_1^l A_2^k: (l,k)\in \mathbb{Z}^2\}$ be a $\mathbb{Z}^2$ action generated by automorphisms $A_1$ and $A_2$ on $\mathbb{T}^d$. If $\rho_0$ is higher rank (i.e., has no rank-one factors), then for any nonzero $(l,k)\in\mathbb{Z}^2\setminus\{\mathbf{0}\}$, $A_1^l A_2^k$ is ergodic on $\mathbb{T}^d$. \end{Lem} \begin{proof} By assumption, there exists a subgroup $H=\langle \rho_0(\mathbf{i}),\rho_0(\mathbf{j})\rangle$, isomorphic to $\mathbb{Z}^2$, such that every non-trivial element in $H$ is ergodic. Here, $\mathbf{i}$ and $\mathbf{j}$ are integer vectors in $\mathbb{Z}^2$. Now assume, for the sake of contradiction, that for some $\mathbf{k}\in \mathbb{Z}^2\setminus\{0\}$, $\rho_0(\mathbf{k})$ is not ergodic. By Lemma \ref{Lem_charergod}, a toral automorphism is ergodic if and only if none of its eigenvalues is a root of unity. As a consequence, $\rho_0(n\mathbf{k})$ is non-ergodic for every $n\in\mathbb{Z}$. On the other hand, the subgroups $\langle \mathbf{k} \rangle$ and $\langle \mathbf{i}, \mathbf{j}\rangle$ are, respectively, of rank one and rank two in $\mathbb{Z}^2$; since $\langle \mathbf{i}, \mathbf{j}\rangle$ has finite index in $\mathbb{Z}^2$, the intersection of $\langle \mathbf{k} \rangle$ and $\langle \mathbf{i}, \mathbf{j}\rangle$ must be non-trivial (and of rank one). Thus, $H$ contains non-identity elements that are non-ergodic. This is a contradiction. \end{proof} \begin{Lem}\label{lem_Abexp} If the $\mathbb{Z}^2$ action $\langle A, B\rangle$ generated by automorphisms $A$ and $B$ on $\mathbb{T}^d$ is higher rank, then there exist constants $\kappa_0>0$ and $C>0$ such that for every non-zero $n\in \mathbb{Z}^d$, we have \begin{equation*} \|(A^*)^l(B^*)^k n\|\geqslant C e^{|(l,k)|\kappa_0}\,\|n\|^{-d},\qquad \textup{for all}~(l,k)\in \mathbb{Z}^2. \end{equation*} Here, the norm $|(l,k)|:=\max\{|l|, |k|\}$. \end{Lem} We refer to \cite{Katok_Katok2005} for the proof.
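As a simple illustration of the ergodicity criterion in Lemma \ref{Lem_charergod} (i), consider
\[
A=\begin{pmatrix}2&1\\1&1\end{pmatrix}\in \textup{GL}(2,\mathbb{Z}),\qquad \lambda_{\pm}=\frac{3\pm\sqrt{5}}{2}.
\]
Neither eigenvalue lies on the unit circle, so in particular neither is a root of unity, and hence $A$ induces an ergodic (indeed hyperbolic) automorphism of $\mathbb{T}^2$. Of course, a single automorphism generates only a rank-one action; the higher rank condition requires genuinely independent commuting automorphisms as above.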
\subsection{Fr\'echet spaces and tame linear maps} A Fr\'echet space $X$ is said to be \emph{graded} if the topology is defined by a family of semi-norms $\{\|\cdot\|_r\}_r$ satisfying $\|x\|_r\leqslant \|x\|_{r+k}$ for every $x\in X$, and $r, k\geqslant 0$. For example, the space $C^\infty(\mathbb{T}^n,\mathbb{R})$ with the topology given by the usual $C^r$ norms $\|g\|_r=\max_{0\leqslant|j|\leqslant r}\sup_{z\in\mathbb{T}^n}|\partial^j g(z)|$, $r\in \mathbb{N}$, is a graded Fr\'echet space. A map $L: U\to V$ between two graded Fr\'echet spaces $U$ and $V$ is said to be \emph{tame} if there exists a constant $\sigma\geqslant 0$ such that for any $u\in U$ and $r\geqslant 0$, \[\|L u\|_r\leqslant C_r\|u\|_{r+\sigma},\] where the constants $C_r$ may depend on $r$. Our KAM strategy needs the following classical result (see for instance \cite{Zeh_generalized1,Sal-Zeh} for its proof). It will be used to compensate for the loss of regularity during the KAM iteration. \begin{Lem}\label{Lem_trun} There exists a family of linear smoothing operators $\{\mathrm{S}_N\}_{N\geqslant 0}$ from $C^\infty(\mathbb{T}^n,\mathbb{R})$ into itself, such that for every $\psi\in C^\infty(\mathbb{T}^n,\mathbb{R})$, one has $\lim_{N\to\infty}\|\psi-\mathrm{S}_N \psi\|_{C^0}=0$, and \begin{align}\label{trun_est0} \|\mathrm{S}_N \psi\|_{C^l}&\leqslant C_{k,l} N^{l-k}\|\psi\|_{C^k}\qquad \text{for~} l\geqslant k, \end{align} and for the linear operator $\mathrm{R}_N\overset{\textup{def}}=id-\mathrm{S}_N$, it satisfies \begin{align} \label{trun_est1} \| \mathrm{R}_N \psi\|_{C^k}&\leqslant C_{k,l} \frac{\|\psi\|_{C^l}}{N^{l-k}} \qquad \text{for~} l\geqslant k. \end{align} Here, $C_{k,l}>0$ are constants depending on $k$ and $l$. \end{Lem} In fact, the smoothing operators $\mathrm{S}_N$ are constructed by convolving with appropriate kernels that decay rapidly at infinity.
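As a toy illustration of Lemma \ref{Lem_trun} (not the actual construction: we use sharp Fourier truncation on sampled data rather than the smooth convolution kernels the lemma requires), the following Python sketch realizes an operator $\mathrm{S}_N$ that keeps only the Fourier modes $|n|\leqslant N$ of a $1$-periodic function and checks that the remainder $\mathrm{R}_N\psi=\psi-\mathrm{S}_N\psi$ shrinks as $N$ grows; the helper names are our own.

```python
import cmath, math

def fourier_coeffs(samples):
    # discrete Fourier coefficients c_n, |n| < M/2, from M equispaced samples
    M = len(samples)
    return {n: sum(samples[j] * cmath.exp(-2j * math.pi * n * j / M)
                   for j in range(M)) / M
            for n in range(-(M // 2) + 1, M // 2)}

def smooth_SN(samples, N):
    # sharp truncation S_N: resynthesize keeping only the modes |n| <= N
    M = len(samples)
    c = fourier_coeffs(samples)
    return [sum(c[n] * cmath.exp(2j * math.pi * n * j / M)
                for n in c if abs(n) <= N).real
            for j in range(M)]

# psi with modes n = +-1 and +-3: S_2 removes exactly the n = +-3 part
M = 32
psi = [math.sin(2 * math.pi * j / M) + 0.3 * math.sin(6 * math.pi * j / M)
       for j in range(M)]
err2 = max(abs(a - b) for a, b in zip(psi, smooth_SN(psi, 2)))  # ~ 0.3
err3 = max(abs(a - b) for a, b in zip(psi, smooth_SN(psi, 3)))  # ~ 0
```

For a trigonometric polynomial the remainder vanishes once $N$ exceeds the top frequency, mirroring the decay \eqref{trun_est1} in its simplest form.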
As pointed out in \cite{Zeh_generalized1}, one important consequence of the existence of smoothing operators is the following family of interpolation inequalities (Hadamard convexity inequalities). \begin{Lem}\label{cor_intpest} For every $\psi\in C^\infty(\mathbb{T}^n,\mathbb{R})$ and any $m_1\leqslant m_2\leqslant m_3$, $m_2=(1-\lambda) m_1+\lambda m_3$ with $\lambda\in[0,1]$, \[\|\psi\|_{C^{m_2}}\leqslant C_{\lambda,m_1,m_3}\,\|\psi\|_{C^{m_1}}^{1-\lambda}\,\|\psi\|_{C^{m_3}}^{\lambda},\] where the positive constants $C_{\lambda,m_1,m_3}$ depend on $m_1,m_3$ and $\lambda$. \end{Lem} We have the following fact about inverse maps. See for instance \cite{Hamil_1982}. \begin{Lem}\label{Apdix_pro1} Let $u\in C^\infty(\mathbb{T}^n, \mathbb{R}^n)$ and suppose that $\|u\|_{C^1}\leqslant \frac{1}{4}$. Then, the map induced by $H=id+u:\mathbb{T}^n\to \mathbb{T}^n$ is a $C^\infty$ diffeomorphism. Moreover, the inverse map $H^{-1}$ satisfies \[\|H^{-1}-id\|_{C^r}\leqslant C_r\,\|u\|_{C^r}\] for every $r\geqslant 0$. \end{Lem} For the composition of two maps, the following estimates hold. \begin{Lem}\label{Apdix_linecont} Let $\psi_1 : B^m\to B^n $ and $\psi_2: B^l\to B^m$ be $C^\infty$ maps where $B^\iota\subset \mathbb{R}^\iota$, $\iota=m, n,l$ are bounded balls. Suppose that $\|\psi_1\|_{C^1}\leqslant M$ and $\|\psi_2\|_{C^1}\leqslant M$ for a constant $M>0$, then the composition $\psi_1\circ \psi_2$ satisfies: for all $r\geqslant 0$, \begin{align*} \|\psi_1\circ \psi_2\|_{C^r} \leqslant C_{M,r}\left(1+\|\psi_1\|_{C^r}+\|\psi_2\|_{C^r}\right), \end{align*} where the constants $C_{M,r}$ depend on $M$ and $r$. \end{Lem} We refer to \cite[Lemma 2.3.4]{Hamil_1982} for its proof. \section{Partially hyperbolic actions of product type}\label{Section_actionofproducttype} In this section, we show that our $\mathbb{Z}^2$ action $\alpha=\langle \mathcal{T}_{A_1,\tau_1},\mathcal{T}_{A_2,\tau_2}\rangle$ can, up to a smooth conjugacy, be reduced to an action of product type.
The philosophy behind this phenomenon is simple: the higher rank condition on the base space implies that $\tau_i(x)-[\tau_i]$ is a coboundary with respect to the base map $A_i$, $i=1,2$, where $[\tau_i]=\int_{\mathbb{T}^d}\tau_i(x)\,dx$. More precisely, we can obtain the following result. \begin{Pro}\label{Pro_conj_ave} If $\mathcal{T}_{A_1,\tau_1}$ commutes with $\mathcal{T}_{A_2,\tau_2}$, and $A_1^lA_2^k$ are ergodic automorphisms on $\mathbb{T}^d$ for all nonzero $(l,k)\in \mathbb{Z}^2$, then there exists a diffeomorphism $\mathfrak{H}\in \textup{Diff}^\infty(\mathbb{T}^d\times\mathbb{T}^s)$ which is of the form $\mathfrak{H}(x,y)=(x,y+\phi(x)~\textup{mod}~\mathbb{Z}^s)$ with $\phi\in C^\infty(\mathbb{T}^d,\mathbb{R}^s)$, such that \begin{equation}\label{fbkdufbe} \mathfrak{H}\circ \mathcal{T}_{A_1,\tau_1}\circ\mathfrak{H}^{-1}=\mathcal{T}_{A_1, [\tau_1]},\qquad \mathfrak{H}\circ \mathcal{T}_{A_2,\tau_2}\circ \mathfrak{H}^{-1}=\mathcal{T}_{A_2, [\tau_2]}, \end{equation} where $\mathfrak{H}^{-1}(x,y)=(x,y-\phi(x)~\textup{mod}~\mathbb{Z}^s)$, and $[\tau_i]=\int_{\mathbb{T}^d}\tau_i(x)\, dx$, $i=1,2$. \end{Pro} \begin{proof} Recall that the functions $\tau_1, \tau_2\in C^\infty(\mathbb{T}^d,\mathbb{R}^s)$. It is easy to check that $\mathcal{T}_{A_i,\tau_i}$, $i=1,2$, are conjugate to $\mathcal{T}_{A_i,[\tau_i]}$ via a common $C^\infty$ conjugacy of the form $\mathfrak{H}(x,y)=(x, y+\phi(x)~\textup{mod}~\mathbb{Z}^s)$ if and only if the smooth function $\phi:\mathbb{T}^d\to \mathbb{R}^s$ solves the following two cohomological equations \begin{equation}\label{comsolt} \phi(A_1x)-\phi(x)=-\tau_1(x)+[\tau_1],\qquad \phi(A_2x)-\phi(x)=-\tau_2(x)+[\tau_2]. \end{equation} Thus, to complete the proof we only need to show that \eqref{comsolt} admits a smooth solution.
By the commutation relation $\mathcal{T}_{A_1,\tau_1}\circ \mathcal{T}_{A_2,\tau_2}=\mathcal{T}_{A_2,\tau_2}\circ\mathcal{T}_{A_1,\tau_1}$, it is straightforward to see that $A_1$ commutes with $A_2$ and \begin{equation}\label{vnnaeb} \tau_1(A_2 x)-\tau_1(x)=\tau_2(A_1 x)-\tau_2(x). \end{equation} This, combined with the assumption that $A_1^lA_2^k$ are ergodic for all $(l,k)\in \mathbb{Z}^2\setminus\{0\}$, ensures the existence of $C^\infty$ solutions of equation \eqref{comsolt}. In fact, the proof, based on Fourier analysis, is contained in Lemma \ref{Lem_base_tame2}, which also provides tame estimates on the solutions. \end{proof} \begin{Rem}[Nilmanifold case] We point out that Proposition \ref{Pro_conj_ave} still holds when the base maps are automorphisms on compact nilmanifolds. The proof requires the exponential mixing of actions by automorphisms of nilmanifolds, which does not follow easily from Fourier analysis. We refer to the work \cite{Gorodnik_Spatzier2015} by Gorodnik and Spatzier. \end{Rem} Observe that $\mathcal{T}_{A_i,[\tau_i]}$, $i=1,2$, are actually maps of \textbf{product type} since \[\mathcal{T}_{A_i,[\tau_i]}=A_i\times R_{[\tau_i]}:\mathbb{T}^d\times\mathbb{T}^s\to \mathbb{T}^d\times\mathbb{T}^s\] with $R_{[\tau_i]}$ a translation map on $\mathbb{T}^s$. In particular, in the case of \textbf{rational} $[\tau_i]$, Proposition \ref{Pro_conj_ave} immediately implies the following result. \begin{Cor}\label{Cor_rationaltoid} Let $\mathfrak{H}$ be the conjugacy obtained in Proposition \ref{Pro_conj_ave}. If $[\tau_i]$, $i=1,2$, are both rational, i.e.
$[\tau_i]\in \mathbb{Q}^s$, then for any $q_i\in \mathbb{Z}$ satisfying $q_i\, [\tau_i]\in \mathbb{Z}^s$, the $q_i$-fold composition $\mathcal{T}_{A_i, \tau_i}^{q_i}:=\mathcal{T}_{A_i, \tau_i}\circ \cdots\circ \mathcal{T}_{A_i, \tau_i}$ is $C^\infty$-conjugate to $A_i^{q_i}\times id_{\mathbb{T}^s}$, that is, \begin{equation*} \mathfrak{H}\circ\mathcal{T}_{A_i, \tau_i}^{q_i}\circ\mathfrak{H}^{-1}=A_i^{q_i}\times id_{\mathbb{T}^s}, \end{equation*} for each $i=1,2$. \end{Cor} \begin{proof} By \eqref{fbkdufbe} one has $\mathfrak{H}\circ \mathcal{T}_{A_i, \tau_i}^{q_i}\circ \mathfrak{H}^{-1}=\mathcal{T}_{A_i^{q_i}, q_i [\tau_i]}$. Clearly, $\mathcal{T}_{A_i^{q_i}, q_i [\tau_i]}=\mathcal{T}_{A_i^{q_i}, 0}=A_i^{q_i}\times id_{\mathbb{T}^s}$. \end{proof} Consequently, Corollary \ref{Cor_rationaltoid} leads us to the rigidity phenomenon of a $\mathbb{Z}^2$ action generated by two commuting automorphisms of the form $\mathcal{T}_{A,0}$ and $\mathcal{T}_{B,0}$. They are maps of product type: $\mathcal{T}_{A,0}=A\times id$, $\mathcal{T}_{B,0}=B\times id: \mathbb{T}^d\times\mathbb{T}^s\longrightarrow \mathbb{T}^d\times\mathbb{T}^s$ \begin{equation}\label{form_FFGG} \begin{aligned} \mathcal{T}_{A,0}(x,y)=(Ax,y),&\qquad \mathcal{T}_{B,0}(x,y)=(Bx,y). \end{aligned} \end{equation} We need the higher rank condition on the action $\langle A, B\rangle$ on $\mathbb{T}^d$. This is equivalent to saying: \begin{enumerate} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumi}{\bf(HR)} \makeatletter \makeatother \item \label{condHR} $A^l B^k$ is ergodic on $\mathbb{T}^d$ for any nonzero $(l,k)\in\mathbb{Z}^2$. \end{enumerate} Such $A$ and $B$ are called ergodic generators. Then we have the following result. \begin{The}\label{Element_Thm1} Let the action $\langle A, B\rangle$ on the base $\mathbb{T}^d$ satisfy condition \ref{condHR}.
Then, there exist $\varepsilon_0=\varepsilon_0(A, B)>0$ and an integer $\mu_0=\mu_0(A, B)$ such that: given any smooth $\mathbb{Z}^2$ action $\langle\mathbf{F}, \mathbf{G}\rangle$ on $\mathbb{T}^d\times\mathbb{T}^s$, if $\mathbf{F}$ and $\mathbf{G}$ satisfy condition \ref{condIP} and \begin{align}\label{qqofnakd} \|\mathbf{F}-\mathcal{T}_{A,0}\|_{C^{\mu_0}}<\varepsilon_0,\qquad \|\mathbf{G}-\mathcal{T}_{B,0}\|_{C^{\mu_0}}<\varepsilon_0, \end{align} then $\langle\mathbf{F}, \mathbf{G}\rangle$ is $C^\infty$-conjugate to $\langle\mathcal{T}_{A,0}, \mathcal{T}_{B,0}\rangle$. \end{The} \begin{Rem} Let us say a little more about the smallness condition \eqref{qqofnakd}. In fact it suffices to require \begin{align*} \|\mathbf{F}-\mathcal{T}_{A,0}\|_{C^0}< \varepsilon,\quad \|\mathbf{G}-\mathcal{T}_{B,0}\|_{C^0}<\varepsilon,\qquad \|\mathbf{F}-\mathcal{T}_{A,0}\|_{C^{\mu_0}}<\varepsilon^{-\frac{3}{4}},\quad \|\mathbf{G}-\mathcal{T}_{B,0}\|_{C^{\mu_0}}<\varepsilon^{-\frac{3}{4}} \end{align*} with $\varepsilon$ suitably small. Through the interpolation estimates, this is enough for the convergence of our KAM scheme. See Lemma \ref{Lem_induc_ineq}. \end{Rem} The intersection property \ref{condIP} imposed on $\mathbf{F}$ and $\mathbf{G}$ is necessary; otherwise the above result may fail. For instance, for $\mathbf{F}=(Ax, y+c)$ and $\mathbf{G}=(Bx, y+c)$ with $c\neq 0$ an arbitrarily small constant vector, $\mathbf{F}$ and $\mathbf{G}$ cannot be conjugate to $\mathcal{T}_{A,0}$ and $\mathcal{T}_{B,0}$. On the other hand, it is easy to see that the unperturbed maps $\mathcal{T}_{A,0}$ and $\mathcal{T}_{B,0}$ indeed satisfy condition \ref{condIP}.
In fact, any $d$-dimensional torus $\Gamma$ that is $C^1$-close to $\mathbb{T}^d\times\{y_0\}$ can be written in the form $\Gamma=\{(x,y)~:~ y=y_0+\psi(x), x\in\mathbb{T}^d\}$ with $\psi\in C^1(\mathbb{T}^d,\mathbb{R}^s)$ a small function; then the point $(0,y_0+\psi(0))$ is exactly a fixed point of $\mathcal{T}_{A,0}=A\times id_{\mathbb{T}^s}$, which implies $\mathcal{T}_{A,0}(\Gamma)\cap \Gamma\neq \emptyset$. This is why the intersection property holds for $\mathcal{T}_{A,0}$. The same is true for $\mathcal{T}_{B,0}$. Theorem \ref{Element_Thm1} plays an essential role in proving Theorem \ref{MainThm_0}. In fact, the main task of Sections \ref{Section_conjugequation}--\ref{Section_KAMscheme} is to prove Theorem \ref{Element_Thm1}. The proof is based on the KAM approach. \section{The linearized conjugacy equations}\label{Section_conjugequation} \subsection{Cohomological equations over non-ergodic partially hyperbolic systems} In this subsection we produce the corresponding cohomological equations over a $\mathbb{Z}^2$ action by toral automorphisms. To prove Theorem \ref{Element_Thm1} one needs to find a smooth near-identity diffeomorphism $U$ such that \begin{equation}\label{eqnsano} U\circ \mathbf{F}= \mathcal{T}_{A,0}\circ U,\qquad U\circ \mathbf{G}=\mathcal{T}_{B,0}\circ U. \end{equation} We introduce in place of $U$ its inverse $H=U^{-1}$ and then write \eqref{eqnsano} in the form \begin{equation}\label{nciaoqm} \mathbf{F}\circ H= H\circ \mathcal{T}_{A,0} ,\qquad \mathbf{G}\circ H=H\circ\mathcal{T}_{B,0}.
\end{equation} Writing $H=id+\mathbf{h}$ with $\mathbf{h}=(\mathbf{h}_1,\mathbf{h}_2)$, $\mathbf{h}_1(x,y)\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^d)$ and $\mathbf{h}_2(x,y)\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^s)$, and $\mathbf{F}=\mathcal{T}_{A,0}+\mathbf{f}$, $\mathbf{G}=\mathcal{T}_{B,0}+\mathbf{g}$, we see that \eqref{nciaoqm} reduces to \begin{align*} \mathbf{h}_1\circ \mathcal{T}_{A,0}-A\mathbf{h}_1=\mathbf{f}_1\circ H,\qquad \mathbf{h}_2\circ \mathcal{T}_{A,0}-\mathbf{h}_2=\mathbf{f}_2\circ H \end{align*} and \begin{align*} \mathbf{h}_1\circ \mathcal{T}_{B,0}-B\mathbf{h}_1=\mathbf{g}_1\circ H,\qquad \mathbf{h}_2\circ \mathcal{T}_{B,0}-\mathbf{h}_2=\mathbf{g}_2\circ H, \end{align*} where $\mathbf{f}_1(x,y), \mathbf{g}_1(x,y)\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^d)$ and $\mathbf{f}_2(x,y), \mathbf{g}_2(x,y)\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^s)$. Further, the corresponding linearized equations are \begin{equation}\label{twis_LE} \begin{aligned} &\mathbf{h}_1(Ax, y)-A\mathbf{h}_1(x,y)=\mathbf{f}_1(x,y),\\ &\mathbf{h}_1(Bx,y)-B\mathbf{h}_1(x,y)=\mathbf{g}_1(x,y), \end{aligned} \end{equation} and \begin{equation}\label{untwis_LE} \begin{aligned} & \mathbf{h}_2 (Ax,y)-\mathbf{h}_2(x,y)=\mathbf{f}_2(x,y),\\ & \mathbf{h}_2(Bx,y)-\mathbf{h}_2(x,y)=\mathbf{g}_2(x,y). \end{aligned} \end{equation} Each equation in \eqref{twis_LE} is called a \textit{twisted cohomological equation}, and each equation in \eqref{untwis_LE} is called an \emph{untwisted cohomological equation}. We point out the following equivalence. \begin{Pro} Equations \eqref{twis_LE} are solvable in the $C^\infty$ category if and only if \begin{equation}\label{fas1} \mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\overset{\textup{def}}=\Big(\mathbf{f}_1(Bx,y)-B\mathbf{f}_1(x,y)\Big)-\Big( \mathbf{g}_1(Ax,y)-A\mathbf{g}_1(x,y)\Big)=0.
\end{equation} Equations \eqref{untwis_LE} are solvable in the $C^\infty$ category if and only if \begin{equation}\label{fas2} \mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)\overset{\textup{def}}=\Big(\mathbf{f}_2(Bx,y)-\mathbf{f}_2(x,y)\Big)-\Big(\mathbf{g}_2(Ax,y)-\mathbf{g}_2(x,y)\Big)=0 \end{equation} and $\int_{\mathbb{T}^d}\mathbf{f}_2(x,y)\,dx=\int_{\mathbb{T}^d}\mathbf{g}_2(x,y)\,dx=0$. \end{Pro} \begin{proof} It follows directly from Propositions \ref{Pro_LRS0}--\ref{Pro_LRS1111}, which will be shown in Section \ref{Section_Smoothdependparamt}. \end{proof} In general, however, $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)$ do not vanish. Instead, they are quadratically small; see Lemma \ref{Lem_comm_est} below. \begin{Lem}\label{Lem_comm_est} For the commuting maps $\mathbf{F}=\mathcal{T}_{A,0}+\mathbf{f}$ and $\mathbf{G}=\mathcal{T}_{B,0}+\mathbf{g}$, we have \begin{equation}\label{twoineq_L1L2} \| \mathcal{L}_1(\mathbf{f}_1,\mathbf{g}_1) \|_{C^r}\leqslant C_r \|\mathbf{f}, \mathbf{g}\|_{C^{r+1}} \|\mathbf{f}, \mathbf{g}\|_{C^r},\qquad \| \mathcal{L}_2(\mathbf{f}_2,\mathbf{g}_2) \|_{C^r}\leqslant C_r \|\mathbf{f}, \mathbf{g}\|_{C^{r+1}} \|\mathbf{f}, \mathbf{g}\|_{C^r}. \end{equation} \end{Lem} \begin{proof} By the commutation relation $\mathbf{F}\circ \mathbf{G}=\mathbf{G}\circ\mathbf{F}$, one has \begin{align*} A\mathbf{g}_1+\mathbf{f}_1\circ\mathbf{G}=B\mathbf{f}_1+\mathbf{g}_1\circ\mathbf{F},\qquad \mathbf{g}_2+\mathbf{f}_2\circ\mathbf{G}=\mathbf{f}_2+\mathbf{g}_2\circ\mathbf{F}.
\end{align*} This implies \begin{align} \mathcal{L}_1(\mathbf{f}_1,\mathbf{g}_1) =&\mathbf{g}_1\circ\mathbf{F}-\mathbf{g}_1\circ\mathcal{T}_{A,0}-(\mathbf{f}_1\circ\mathbf{G}-\mathbf{f}_1\circ\mathcal{T}_{B,0}) = \int_0^1 \Big[D\mathbf{g}_1(\mathcal{T}_{A,0}+t\mathbf{f})\,\mathbf{f}-D\mathbf{f}_1(\mathcal{T}_{B,0}+t\mathbf{g})\,\mathbf{g}\Big]\,dt \label{dgdfintgral} \end{align} and \begin{align} \mathcal{L}_2(\mathbf{f}_2,\mathbf{g}_2) =&\mathbf{g}_2\circ\mathbf{F}-\mathbf{g}_2\circ\mathcal{T}_{A,0}-(\mathbf{f}_2\circ\mathbf{G}-\mathbf{f}_2\circ\mathcal{T}_{B,0}) = \int_0^1 \Big[D\mathbf{g}_2(\mathcal{T}_{A,0}+t\mathbf{f})\,\mathbf{f}-D\mathbf{f}_2(\mathcal{T}_{B,0}+t\mathbf{g})\,\mathbf{g}\Big]\,dt, \label{dgdfintgral111} \end{align} so $\| \mathcal{L}_1(\mathbf{f}_1,\mathbf{g}_1) \|_{C^0}\leqslant C \|\mathbf{f}, \mathbf{g}\|_{C^1} \|\mathbf{f}, \mathbf{g}\|_{C^0}$ and $\| \mathcal{L}_2(\mathbf{f}_2,\mathbf{g}_2) \|_{C^0}\leqslant C \|\mathbf{f}, \mathbf{g}\|_{C^1} \|\mathbf{f}, \mathbf{g}\|_{C^0}$. This verifies \eqref{twoineq_L1L2} for $r=0$. Based on \eqref{dgdfintgral}--\eqref{dgdfintgral111}, the $C^r$-norm estimates follow similarly as in \cite[Lemma 4.7]{Damjanovic_Katok10}. \end{proof} In view of the quadratic estimates in Lemma \ref{Lem_comm_est}, we can construct approximate solutions of \eqref{twis_LE}--\eqref{untwis_LE} up to errors of higher order. This will be done in Section \ref{Section_tamesplit} and subsection \ref{subsection_induclem}, and it plays an important role in our KAM scheme. \subsection{Cohomological equations over the base map}\label{subsec_cohomeq_base} As a warm-up, we first investigate the cohomological equations in which none of the functions involved depends on the fiber variable $y$. The results stated here will be used as a ``black box'' for the more general situation discussed in Section \ref{Section_Smoothdependparamt}. In the sequel, $A$ and $B$ are commuting automorphisms of $\mathbb{T}^d$ satisfying condition \ref{condHR}.
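Before turning to the precise statements, it may help to see the Fourier-side mechanics of the untwisted equation $\omega(Ax)-\omega(x)=\theta(x)$ in a toy computation. The Python sketch below (helper names and the choice of dual map $A^*$ are ours, taken to be the cat map purely for illustration) builds $\widehat\theta$ as an exact coboundary of a known $\widehat\eta$ supported on finitely many frequencies, and recovers $\widehat\eta_n$ from the orbit sums $\widehat\omega_n=-\sum_{i\geqslant 0}\widehat\theta_{(A^*)^i n}$, which here are effectively finite because the $A^*$-orbit of each frequency eventually leaves the support of $\widehat\theta$.

```python
def mat_vec(M, n):
    return (M[0][0] * n[0] + M[0][1] * n[1],
            M[1][0] * n[0] + M[1][1] * n[1])

# dual action on frequencies: we simply take the cat map as A* (our choice)
AST = ((2, 1), (1, 1))
AST_INV = ((1, -1), (-1, 2))  # inverse of AST, det = 1

def orbit_sum_solution(theta_hat, n, depth=40):
    # w_hat(n) = -sum_{i>=0} theta_hat((A*)^i n): effectively a finite sum,
    # since theta_hat has finite support and the A*-orbit of n escapes it
    s, m = 0.0, n
    for _ in range(depth):
        s += theta_hat.get(m, 0.0)
        m = mat_vec(AST, m)
    return -s

# choose eta with finitely many modes and set theta_hat(n) = eta_hat(A*n) - eta_hat(n),
# so that theta is an exact coboundary with zero average
eta_hat = {(1, 0): 1.0, (0, 1): 2.0, (1, 1): -0.5}
support = set(eta_hat) | {mat_vec(AST_INV, n) for n in eta_hat}
theta_hat = {n: eta_hat.get(mat_vec(AST, n), 0.0) - eta_hat.get(n, 0.0)
             for n in support}

# the orbit sums telescope back to eta_hat, the unique zero-mean solution
recovered = {n: orbit_sum_solution(theta_hat, n) for n in eta_hat}
```

The telescoping seen here is the finite-support shadow of the obstruction analysis carried out in the next subsections, where the convergence of the infinite sums is the real issue.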
\subsubsection{Twisted cohomological equations over ergodic automorphisms of $\mathbb{T}^d$} For an ergodic automorphism $A:\mathbb{T}^d\to\mathbb{T}^d$ which is partially hyperbolic, we consider the following twisted cohomological equation over $A$, with an unknown function $u:\mathbb{T}^d\to \mathbb{R}^d$ and a given function $\Phi:\mathbb{T}^d\to\mathbb{R}^d$, \begin{equation*} u(Ax)-Au(x)=\Phi(x),\qquad x\in\mathbb{T}^d. \end{equation*} Sometimes, for simplicity we use the symbol $\Delta^A$ to denote $ \Delta^Au(x):=u(Ax)-Au(x)$. For commuting automorphisms $A$ and $B$ of $\mathbb{T}^d$, we recall the following result. \begin{Lem}\cite[Lemma 4.4]{Damjanovic_Katok10}\label{Lem_base_tame1} For $\Phi(x)\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$, if there exists a function $\Psi(x)\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$ such that $L(\Phi, \Psi):=\Delta^B \Phi-\Delta^A \Psi=0$, then the cohomological equation \begin{equation}\label{linequphi} \Delta^A u=\Phi \end{equation} has a unique $C^\infty$ solution $u(x)$, which also solves the equation $\Delta^B u=\Psi.$ Moreover, it satisfies \begin{equation}\label{tame_base1} \|u\|_{C^r}\leqslant C_r \|\Phi\|_{C^{r+\sigma_1}},\qquad \textup{for all~} r\geqslant 0, \end{equation} for some $\sigma_1>0$ depending only on the dimension $d$ and the eigenvalues of $A, B$. The constants $C_r$ depend on $r$. \end{Lem} \begin{Rem}[A remark on inequality \eqref{tame_base1}] In \cite[Lemma 4.4]{Damjanovic_Katok10}, it is stated that the solution satisfies $\|u\|_{C^r}\leqslant C_r \|\Phi, \Psi\|_{C^{r+\sigma_1}}$. However, an inspection of the proof there shows that $u$ can be controlled using only $\Phi$. The existence of $\Psi$ is only used to ensure that the obstruction to solving the linear equation \eqref{linequphi} vanishes. One can also understand this from another perspective: the relation $L(\Phi,\Psi)=0$ implies that $\Psi$ is a solution to the linear equation $\Delta^A \Psi= F$, where $F:=\Delta^B \Phi$.
Then, $\Psi$ can be controlled linearly by $F$, and hence by $\Phi$. \end{Rem} \subsubsection{Untwisted cohomological equations over ergodic automorphisms of $\mathbb{T}^d$} For the ergodic automorphism $A$ of $\mathbb{T}^d$, we consider the following untwisted cohomological equation over $A$, with an unknown function $u:\mathbb{T}^d\to\mathbb{R}^s$ and a given function $\Phi:\mathbb{T}^d\to\mathbb{R}^s$, \begin{equation*} u(Ax)-u(x)=\Phi(x),\qquad x\in\mathbb{T}^d. \end{equation*} Let $C^\infty_0(\mathbb{T}^d,\mathbb{R}^s)$ denote the space of all functions $f\in C^\infty(\mathbb{T}^d,\mathbb{R}^s)$ satisfying $\int_{\mathbb{T}^d} f(x)\,dx=0$. \begin{Lem}\label{Lem_base_tame2} For $\Phi(x)\in C_0^\infty(\mathbb{T}^d,\mathbb{R}^s)$, if there exists $\Psi(x)\in C_0^\infty(\mathbb{T}^d,\mathbb{R}^s)$ such that \begin{equation}\label{condvkf} \Phi(Bx)-\Phi(x)=\Psi(Ax)-\Psi(x), \end{equation} then the cohomological equation \begin{equation} u(Ax)-u(x)=\Phi(x) \end{equation} has a unique $C^\infty$ solution $u$ in $C^\infty_0(\mathbb{T}^d,\mathbb{R}^s)$, and it also solves the equation $u(Bx)-u(x)=\Psi(x).$ Moreover, for any $r\geqslant 0$ \begin{equation}\label{tame_base2} \|u\|_{C^r}\leqslant C_r \|\Phi\|_{C^{r+d+2}}, \end{equation} where the constants $C_r$ depend on $r$. \end{Lem} We will use the so-called \textit{higher-rank trick} developed in \cite{Damjanovic_Katok10} to prove it. \begin{proof} Condition \eqref{condvkf} and the cohomological equations $u(Ax)- u(x)=\Phi(x)$ and $u(Bx)-u(x)=\Psi(x)$ can be split into finitely many one-dimensional problems as follows: \begin{align}\label{akwoqoo} \theta(Bx)-\theta(x)= \psi(Ax)- \psi(x) \end{align} and the equations \begin{align} \omega(Ax)-\omega(x)=\theta(x),\qquad \omega(Bx)- \omega(x)=\psi(x), \end{align} where $\theta, \psi\in C_0^\infty(\mathbb{T}^d, \mathbb{R})$, so that the averages $[\theta]=[\psi]=0$.
Passing to Fourier coefficients, \eqref{akwoqoo} becomes \begin{align*} \widehat\theta_{B^*n}-\widehat\theta_n=\widehat\psi_{A^*n}-\widehat\psi_n,\quad \qquad\textup{for ~} n\in\mathbb{Z}^d\setminus\{0\}. \end{align*} By iterating this formula, for each integer $i\in\mathbb{Z}$ we obtain $\widehat\theta_{(A^*)^iB^*n}-\widehat\theta_{(A^*)^{i}n}=\widehat\psi_{(A^*)^{i+1}n}-\widehat\psi_{(A^*)^{i}n}$. Taking the sum over all $i$ we obtain \begin{equation}\label{winncnda} \sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^iB^*n}- \sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^{i}n}=\sum_{i\in\mathbb{Z}}\widehat\psi_{(A^*)^{i+1}n}-\sum_{i\in\mathbb{Z}}\widehat\psi_{(A^*)^{i}n}. \end{equation} The nonzero integer vector $n$ has nontrivial projections to the unstable subspace $E^u(A^*)$ and the stable subspace $E^s(A^*)$ since $A^*$ is partially hyperbolic, so all the sums involved in \eqref{winncnda} are absolutely convergent when $n\neq 0$ (see \cite[Lemma 4.3]{Damjanovic_Katok10}). Note that the right-hand side of \eqref{winncnda} equals zero, since its two sums are reindexings of the same absolutely convergent series; hence the left-hand side yields \begin{align*} \sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^iB^*n}=\sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^{i}n},\qquad\textup{for every~} n\in \mathbb{Z}^d\setminus\{0\}. \end{align*} By iterating this equation, for each $j\in\mathbb{Z}$, \begin{align}\label{rurdllw} \sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^i(B^*)^jn}=\sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^i(B^*)^{j-1}n}=\cdots=\sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^iB^*n}=\sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^{i}n}. \end{align} By condition \ref{condHR} and ergodicity, one can show that $\sum_{i\in\mathbb{Z}}\widehat\theta_{(A^*)^i(B^*)^jn}$ converges to zero as $j\to \infty$. Therefore, \eqref{rurdllw} implies that \begin{equation}\label{obvani} \sum_{i\in\mathbb{Z}}\widehat \theta_{(A^*)^{i}n}=0, \qquad\textup{for each~} n\in \mathbb{Z}^d\setminus\{0\}.
\end{equation} Now, we consider the cohomological equation $\omega(Ax)-\omega(x)=\theta(x)$. Passing to Fourier coefficients, it is equivalent to solving the following equations \begin{equation}\label{foucon} \widehat\omega_{A^*n}-\widehat\omega_{n}=\widehat\theta_n, \qquad \forall~n\in\mathbb{Z}^d. \end{equation} When $n=0$, we can set $\widehat\omega_0=0$ since we have assumed $\widehat\theta_0=0$. For $n\in \mathbb{Z}^d\setminus\{0\}$, from \eqref{obvani} we see that the obstruction to solving equation \eqref{foucon} vanishes, so we can take $\widehat\omega_n= \widehat\omega_n^+=\widehat\omega_n^-$, where \begin{equation}\label{adjsk} \widehat\omega_n^+=-\sum_{i=0}^{\infty}\widehat\theta_{(A^*)^i n},\qquad \widehat\omega_n^-=\sum_{i=-\infty}^{-1}\widehat\theta_{(A^*)^i n}. \end{equation} Consequently, we obtain a formal solution \[\omega=\sum_{n\in\mathbb{Z}^d} \widehat\omega_n\cdot e^{i2\pi\du{n,x}}\] of the equation $\omega(Ax)-\omega(x)=\theta(x)$. In order to show that $\omega\in C^\infty$, we need to estimate $\widehat\omega_n$ for every $n\neq 0$. If $n$ is mostly in $E^u(A^*)$, i.e. $n\hookrightarrow E^u(A^*)$, we use the form $\widehat \omega_n=\widehat \omega_n^+$; by \eqref{parhyp_split} and \eqref{ineq_mostlyin}, for any $k\in \mathbb{Z}^+$ we have \begin{align}\label{aanoi1} \begin{aligned} |\widehat \omega_n| \leqslant \sum_{i\geqslant 0}\frac{\|\theta\|_{C^k}}{\|(A^*)^in\|^{k}} \leqslant \sum_{i\geqslant 0}\frac{\|\theta\|_{C^k}}{\|(A^*)^i\,\pi_u(n)\|^{k}} \leqslant \sum_{i\geqslant 0}\frac{\|\theta\|_{C^k}}{\rho^{ik}\|\pi_u(n)\|^{k}} \leqslant & \frac{M_k\cdot\|\theta\|_{C^k}}{\|\pi_u(n)\|^{k}}\leqslant C_k \frac{\|\theta\|_{C^k}}{\|n\|^{k}}, \end{aligned} \end{align} where $\rho>1$ is the expanding rate and $C_k=3^k\cdot M_k$. Similarly, if $n$ is mostly in $E^s(A^*)$, i.e.
$n\hookrightarrow E^s(A^*)$, we use the form $\widehat \omega_n=\widehat \omega_n^-$ to obtain \begin{equation}\label{aanoi2} |\widehat \omega_n|\leqslant C_k \|\theta\|_{C^k}\,\|n\|^{-k},\qquad \forall~k\in\mathbb{Z}^+. \end{equation} If $n\hookrightarrow E^c(A^*)$, we use the form $\widehat \omega_n=\widehat \omega_n^+$. By the Katznelson lemma (see Lemma \ref{Lem_Katznelson}) one has $\|\pi_u(n)\|\geqslant \gamma \|n\|^{-d}$. Then, \begin{align*} \|(A^*)^i n\|\geqslant\|(A^*)^i\,\pi_u(n)\|\geqslant C\rho^{i}\|\pi_u(n)\|\geqslant C \gamma \rho^{i} \|n\|^{-d}\geqslant C\gamma \rho^{i-i_0} \|n\| \end{align*} for all $i\geqslant i_0$, where $i_0=\left[\frac{(d+1)\ln\|n\|}{\ln \rho}\right]+1$. For $0\leqslant i \leqslant i_0-1$, by \eqref{parhyp_split} and \eqref{ineq_mostlyin} we have \[\|(A^*)^i n\|\geqslant \|(A^*)^i \pi_c(n)\|\geqslant C(1+i)^{-d}\|\pi_c(n)\|\geqslant\frac{C}{3}(1+i)^{-d}\|n\|. \] Then it follows that for $k\in\mathbb{Z}^+$, \begin{align} |\widehat\omega_n| \leqslant\sum_{i\geqslant 0}\frac{\|\theta\|_{C^k}}{\|(A^*)^in\|^{k}} \leqslant C'\frac{\|\theta\|_{C^k}}{\|n\|^{k}}\left(\sum_{i=0}^{i_0-1}(1+i)^{dk}+ \sum_{i=i_0}^{\infty}\rho^{-k(i-i_0)}\right) \leqslant & C''\|\theta\|_{C^k} \|n\|^{-k}\cdot (i_0^{dk+1}+ c)\nonumber\\ \leqslant & C''' \|\theta\|_{C^k} \|n\|^{-k} \cdot(\ln\|n\|)^{dk+1}\nonumber\\ \leqslant & C_k \|\theta\|_{C^k}\, \|n\|^{-k+1}.\label{haomgn2} \end{align} Finally, for each $r\geqslant 0$, using \eqref{aanoi1}--\eqref{haomgn2} we obtain \begin{align*} \|\omega\|_{C^r}\leqslant (2\pi)^r \sum_{n\in\mathbb{Z}^d\setminus\{0\}} \|n\|^r\cdot|\widehat \omega_n|\leqslant (2\pi)^r \,C_k \sum_{n\in\mathbb{Z}^d\setminus\{0\}} \frac{\|\theta\|_{C^k}}{\|n\|^{k-r-1}}. \end{align*} By taking $k=r+d+2$ we obtain \[\|\omega\|_{C^r}\leqslant C'_r \|\theta\|_{C^{r+d+2}}.\] This is true for any $r\geqslant 0$, hence, $\omega\in C_0^\infty(\mathbb{T}^d,\mathbb{R})$ and solves the equation $\omega(Ax)-\omega(x)=\theta(x)$.
The uniqueness of solutions follows from the ergodicity of $A$. Moreover, since the ergodic generators $A$ and $B$ commute, by using the relation \eqref{akwoqoo} we can prove that $\omega$ also solves $\omega(Bx)-\omega(x)=\psi(x)$. The argument is elementary and similar to \cite[Lemma 4.4]{Damjanovic_Katok10}, so we do not repeat it here. \end{proof} \section{Smooth dependence on multi-dimensional parameters}\label{Section_Smoothdependparamt} The main goal of this section is to prove Propositions \ref{Pro_LRS0}--\ref{Pro_LRS1111}. In contrast with subsection \ref{subsec_cohomeq_base}, we will deal with cohomological equations over a non-ergodic partially hyperbolic automorphism $\mathcal{T}_{A,0}=A\times id_{\mathbb{T}^s}$ with $A$ ergodic on the base $\mathbb{T}^d$. Based on the results in subsection \ref{subsec_cohomeq_base}, we then employ the idea of smooth dependence on parameters to study the solutions of equations of the form \eqref{twis_LE}--\eqref{untwis_LE} and to estimate their derivatives in the fiber direction as well as in the base direction. A similar idea was used in \cite{delaLave_Marco_Moriyon1986} to study the parameter dependence of solutions of untwisted cohomological equations over Anosov diffeomorphisms. Throughout this section, $A:\mathbb{T}^d\to\mathbb{T}^d$ and $B:\mathbb{T}^d\to\mathbb{T}^d$ are commuting automorphisms satisfying condition \ref{condHR}. \subsection{The twisted case}\label{subsect_thetwistcase} We first consider the twisted cohomological equations. Let us introduce the following set in $C^\infty(\mathbb{T}^d,\mathbb{R}^d)$: \begin{equation*} \mathbb{V}:=\{\phi\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)~:\quad \exists~ \psi\in C^\infty(\mathbb{T}^d,\mathbb{R}^d) \textup{~such that~} \Delta^B \phi=\Delta^A \psi \}, \end{equation*} where the symbols $\Delta^A\,\psi(x)=\psi(Ax)-A \psi(x)$ and $\Delta^B\,\phi(x)=\phi(Bx)-B \phi(x)$.
According to subsection \ref{subsec_cohomeq_base}, it is easy to see that $\mathbb{V}$ is exactly the set of all functions $\phi\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$ for which the equation $\Delta^A u=\phi$ admits a smooth solution. For our purposes, we need to study $s$-dimensional parameters. The set of parameters will be an open ball $\mathcal{D}\subset \mathbb{R}^s$, and we use $y\in \mathcal{D}$ to denote the parameter variables. \begin{Lem}\label{Lem_Mcont} $\mathbb{V}$ has the following properties: \begin{enumerate}[(i)] \item $\mathbb{V}$ is a linear subspace of $C^\infty(\mathbb{T}^d, \mathbb{R}^d)$. There is a tame linear operator $\mathcal{H}: \mathbb{V}\longrightarrow C^\infty(\mathbb{T}^d,\mathbb{R}^d)$ which satisfies: for each $\phi\in \mathbb{V}$, \begin{equation}\label{teqkaslf1} \Delta^A \big(\mathcal{H}(\phi)\big)=\phi \quad \textup{~and~} \quad \|\mathcal{H}(\phi)\|_{C^r(\mathbb{T}^d)}\leqslant C_r \|\phi\|_{C^{r+\sigma_1}(\mathbb{T}^d)}, \end{equation} where $\sigma_1>0$ is the same constant given in Lemma \ref{Lem_base_tame1}. \item For $\xi(x,y)\in C^\infty(\mathbb{T}^d\times\mathcal{D}, \mathbb{R}^d)$, we denote $\xi^y(x):=\xi(x,y)$. If $\xi^y\in \mathbb{V}$ for every parameter $y\in \mathcal{D}$, then the map $y\longmapsto \mathcal{H}({\xi}^y)$ is continuous, i.e., for any $r\in\mathbb{N}$, \begin{align}\label{lemcontin} \lim_{y\to a}\| \mathcal{H}({\xi}^y)- \mathcal{H}({\xi}^{a})\|_{C^{r}(\mathbb{T}^d)}=0,\qquad \forall~a\in\mathcal{D}. \end{align} \end{enumerate} \end{Lem} \begin{proof} (i) The fact that $\mathbb{V}$ is a linear subspace follows readily from the definition. Moreover, \eqref{teqkaslf1} comes from Lemma \ref{Lem_base_tame1}. Namely, for each $\phi\in \mathbb{V}$, $\mathcal{H}(\phi)$ is the unique solution of $\Delta^A u=\phi$. (ii) Fix $r\in \mathbb{N}$, and assume by contradiction that there exists some point $a\in\mathcal{D}$ such that \eqref{lemcontin} fails.
Then there would exist a sequence $z_k\to a$ and a number $\delta>0$ such that \begin{equation}\label{xnbnfaa} \| \mathcal{H}({\xi}^{z_k})- \mathcal{H}({\xi}^a)\|_{C^{r}(\mathbb{T}^d)}> \delta,\quad \textup{for all}~ k. \end{equation} On the other hand, by item (i) we see that \begin{align*} \|\mathcal{H}({\xi}^{z_k})- \mathcal{H}({\xi}^a)\|_{C^{r+1}(\mathbb{T}^d)}\leqslant C_{r+1} \| \xi^{z_k}-\xi^{a}\|_{C^{r+1+\sigma_1}(\mathbb{T}^d)}. \end{align*} Then all the functions $\mathcal{H}({\xi}^{z_k})$ are uniformly bounded in the $C^{r+1}$ topology because $\xi(x,y)\in C^\infty$. Using the Arzel\`a-Ascoli theorem it is not difficult to show that, by taking a subsequence if necessary, $\mathcal{H}({\xi}^{z_k})$ converges to some $v$ in the $C^r(\mathbb{T}^d,\mathbb{R}^d)$ topology. Moreover, by continuity we have $\Delta^A v=\xi^a$. Hence the uniqueness of solutions implies $v=\mathcal{H}({\xi}^a)$, which contradicts \eqref{xnbnfaa}. This finishes the proof. \end{proof} For two smooth functions $f, g: \mathbb{T}^d\times\mathcal{D}\to \mathbb{R}^d$, we define \[\mathcal{L}_1(f, g):=\Big(f(Bx,y)-B f(x,y)\Big)-\Big(g(Ax,y)-A g(x,y)\Big).\] In the sequel, for a smooth function $\xi(x,y)$ we denote $(\partial_y^\beta\xi)^y(x):=\partial_y^\beta \xi(x,y)$. \begin{Lem}\label{Lem_xieta} Suppose that $\xi, \eta \in C^\infty(\mathbb{T}^d\times\mathcal{D},\mathbb{R}^d)$ and $\mathcal{L}_1(\xi,\eta)=0$. Then, \begin{enumerate}[(i)] \item for any differential operator $\partial^\beta_y$ with the multi-index $\beta\in\mathbb{N}^s$, we have $(\partial_y^\beta\xi)^y\in\mathbb{V}$; \item for any parameter $y\in\mathcal{D}$ and any index $j=1,\cdots, s$, \begin{align}\label{lemcnt2} \lim_{\varepsilon\to 0}\left\|\frac{\mathcal{H}\left(\xi^{y+\varepsilon \mathbf{e}_j}\right)-\mathcal{H}(\xi^{y})}{\varepsilon}-\mathcal{H} \big((\partial_{y_j} \xi)^y\big)\right\|_{C^r(\mathbb{T}^d)}=0,\quad\textup{for any~} r\in\mathbb{N}.
\end{align} where $(\mathbf{e}_1, \cdots,\mathbf{e}_j,\cdots,\mathbf{e}_s)$ is the standard orthogonal basis for $\mathbb{R}^s$. This also implies that the map $y\longmapsto \mathcal{H}({\xi}^y)\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$ is of class $C^1$. \end{enumerate} \end{Lem} \begin{proof} (i) It follows easily from the fact that $\mathcal{L}_1(\partial^\beta_{y} \xi, \partial_{y}^\beta \eta)=\partial_{y}^\beta\mathcal{L}_1(\xi,\eta)=0$. (ii) We only need to check the case $j=1$; the other cases are similar. If \eqref{lemcnt2} fails for some $y\in\mathcal{D}$ and some $r\in\mathbb{N}$, there would exist a nonzero sequence $\varepsilon_k\to 0$ in $\mathbb{R}$ such that \begin{align}\label{xiyedel} \left\|\frac{\mathcal{H}\left(\xi^{y+\varepsilon_k \mathbf{e}_1}\right)-\mathcal{H}(\xi^{y})}{\varepsilon_k}-\mathcal{H} \big((\partial_{y_1} \xi)^y\big)\right\|_{C^r(\mathbb{T}^d)}> \delta \end{align} for some number $\delta>0$. By the tame estimate \eqref{teqkaslf1} we have \begin{align*} \left\|\frac{\mathcal{H}\left(\xi^{y+\varepsilon_k \mathbf{e}_1}\right)-\mathcal{H}(\xi^{y})}{\varepsilon_k}-\mathcal{H} \big((\partial_{y_1} \xi)^y\big)\right\|_{C^{r+1}(\mathbb{T}^d)}=&\left\|\mathcal{H}\left(\frac{\xi^{y+\varepsilon_k \mathbf{e}_1}-\xi^{y}}{\varepsilon_k}- (\partial_{y_1} \xi)^y\right)\right\|_{C^{r+1}(\mathbb{T}^d)}\\ \leqslant & C_{r+1} \left\|\frac{\xi^{y+\varepsilon_k \mathbf{e}_1}-\xi^{y}}{\varepsilon_k}- (\partial_{y_1} \xi)^y\right\|_{C^{r+1+\sigma_1}(\mathbb{T}^d)}. \end{align*} Since $\xi\in C^\infty$, the quantity in the last line is uniformly bounded as $\varepsilon_k\to 0$. This implies $\frac{\mathcal{H}\left(\xi^{y+\varepsilon_k \mathbf{e}_1}\right)-\mathcal{H}(\xi^{y})}{\varepsilon_k}$ is uniformly bounded in the $C^{r+1}(\mathbb{T}^d,\mathbb{R}^d)$ topology.
Then, using the Arzel\`a-Ascoli theorem we can show that, taking a subsequence if necessary, $\frac{\mathcal{H}\left(\xi^{y+\varepsilon_k \mathbf{e}_1}\right)-\mathcal{H}(\xi^{y})}{\varepsilon_k}$ converges to some $w$ in the $C^r(\mathbb{T}^d,\mathbb{R}^d)$ topology. Due to \eqref{xiyedel}, we get \begin{align}\label{wxidel} \left\|w-\mathcal{H} \big((\partial_{y_1} \xi)^y\big)\right\|_{C^r(\mathbb{T}^d)}> \delta. \end{align} On the other hand, it is straightforward to see that \begin{align*} \Delta^A \left(\frac{\mathcal{H}\left(\xi^{y+\varepsilon_k \mathbf{e}_1}\right)-\mathcal{H}(\xi^{y})}{\varepsilon_k}\right)=\frac{\xi^{y+\varepsilon_k \mathbf{e}_1}-\xi^{y}}{\varepsilon_k}. \end{align*} Sending $\varepsilon_k\to 0$ yields $ \Delta^A w=(\partial_{y_1}\xi)^{y}$. Then the uniqueness of solutions implies that $w=\mathcal{H} \big((\partial_{y_1} \xi)^y\big)$. This contradicts \eqref{wxidel}. \end{proof} Since a smooth function $R(x,y)$ on $\mathbb{T}^d\times\mathbb{T}^s$ is also a smooth function on $\mathbb{T}^d\times\mathbb{R}^s$ that is $\mathbb{Z}^s$-periodic in $y$, we can apply the above lemmas to prove the following result. \begin{Pro}\label{Pro_LRS0} Suppose that $\mathcal{L}_1(R, S)=0$, where $R(x,y), S(x,y): \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{R}^d$ are $C^\infty$ functions. Then, there is a unique function $\Omega\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^d)$ that solves the equations \begin{align}\label{exten_twoeqs} \Delta^A \Omega(x,y)= R(x,y),\qquad \Delta^B \Omega(x,y)= S(x,y). \end{align} Here, $\Delta^A \Omega(x,y)=\Omega(Ax,y)-A \Omega(x,y)$ and $\Delta^B \Omega(x,y)=\Omega(Bx,y)-B \Omega(x,y)$. Moreover, \begin{align}\label{ext_omegest} \| \Omega \|_{C^r}\leqslant C_r \|R\|_{C^{r+\sigma_1}}, \end{align} where $\sigma_1>0$ is the same constant given in Lemma \ref{Lem_base_tame1}.
\end{Pro} \begin{proof} \noindent\textbf{(I) Existence of continuous solutions to equations \eqref{exten_twoeqs}.} We use the notation \[R^y(x):=R(x,y),\qquad S^y(x):=S(x,y).\] The condition $\mathcal{L}_1(R, S)=0$ implies that $R^y\in \mathbb{V}$, for any $y$. Then by Lemma \ref{Lem_base_tame1} we obtain a family of solutions $u^y:=\mathcal{H}( R^y)\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$, with parameter $y$, that solves \[ \Delta^A u^y=R^y,\qquad \Delta^B u^y= S^y, \qquad \textup{for each}~ y. \] Thus, we define \[\Omega(x,y):=u^y(x)=\mathcal{H}(R^y)(x).\] $\Omega(x,y)$ is continuous on $\mathbb{T}^d\times\mathbb{T}^s$. Indeed, for any point $(a,b)$, \begin{align*} |\Omega(x,y)-\Omega(a,b)|\leqslant & |\Omega(x,y)-\Omega(x,b)|+|\Omega(x,b)-\Omega(a,b)|\\ =& |u^y(x)-u^{b}(x)|+|u^{b}(x)-u^{b}(a)|\\ \leqslant & \|\mathcal{H}(R^y)-\mathcal{H}(R^{b})\|_{C^0(\mathbb{T}^d)}+|\mathcal{H}(R^{b})(x)-\mathcal{H}(R^{b})(a)| \end{align*} By Lemma \ref{Lem_Mcont} (ii), the last line tends to zero as $(x,y)\longrightarrow (a,b)$. Consequently, $\Omega(x,y)$ is a continuous solution to \eqref{exten_twoeqs}. More precisely, it is smooth in $x$ and continuous in $y$. \noindent\textbf{(II) $C^1$-regularity. } Applying Lemma \ref{Lem_xieta} (ii) to $R^y$ we obtain that for each $j=1,\cdots,s$, the partial derivative $\partial_{y_j}\Omega(x,y)$ exists and \[\partial_{y_j}\Omega(x,y)=\mathcal{H}\big((\partial_{y_j} R)^{y}\big)(x).\] Now, let us show the continuity of $(x,y)\longmapsto\partial_{y_j}\Omega(x,y)$. 
Indeed, for any point $(a,b)$, \begin{align*} \left|\partial_{y_j}\Omega(x,y)-\partial_{y_j}\Omega(a,b)\right|\leqslant & \left|\partial_{y_j}\Omega(x,y)-\partial_{y_j}\Omega(x, b)\right|+\left|\partial_{y_j}\Omega(x, b)-\partial_{y_j}\Omega(a, b)\right|\\ \leqslant & \left\|\mathcal{H}\big((\partial_{y_j} R)^{y}\big)-\mathcal{H}\big((\partial_{y_j} R)^{b}\big)\right\|_{C^0(\mathbb{T}^d)}\\ & \qquad\qquad\quad +\left| \mathcal{H}\big((\partial_{y_j} R)^{b}\big)(x)-\mathcal{H}\big((\partial_{y_j} R)^{b}\big)(a)\right|. \end{align*} Here, it is evident that $\left| \mathcal{H}\big((\partial_{y_j} R)^{b}\big)(x)-\mathcal{H}\big((\partial_{y_j} R)^{b}\big)(a)\right|$ converges to zero as $x\to a$. Meanwhile, as $(\partial_{y_j} R)^y\in\mathbb{V}$ for all $y$, using Lemma \ref{Lem_Mcont} (ii) with $\xi=\partial_{y_j} R$ we deduce that \[\left\|\mathcal{H}\big((\partial_{y_j} R)^{y}\big)-\mathcal{H}\big((\partial_{y_j} R)^{b}\big)\right\|_{C^0(\mathbb{T}^d)}\longrightarrow 0 \] as $y\to b$. Thus, $\partial_{y_j}\Omega(x,y)$ converges to $\partial_{y_j}\Omega(a,b)$ as $(x,y)\to (a,b)$. On the other hand, to prove the continuity of $(x,y)\longmapsto\partial_{x_i}\Omega(x,y)$, where $x_i$ is the $i$-th coordinate of $x$, we observe that \begin{align*} \left|\partial_{x_i}\Omega(x,y)-\partial_{x_i}\Omega(a,b)\right|\leqslant & \left|\partial_{x_i}\Omega(x,y)-\partial_{x_i}\Omega(x, b)\right|+\left|\partial_{x_i}\Omega(x, b)-\partial_{x_i}\Omega(a, b)\right|\\ \leqslant & \left\|\mathcal{H}\big(R^{y}\big)-\mathcal{H}\big(R^{b}\big)\right\|_{C^1(\mathbb{T}^d)} +\left| \partial_{x_i}\mathcal{H}\big(R^{b}\big)(x)-\partial_{x_i}\mathcal{H}\big(R^{b}\big)(a)\right|. \end{align*} When $(x,y)\longrightarrow(a,b)$, the last line converges to zero as a result of Lemma \ref{Lem_Mcont} (ii) and $\mathcal{H}\big(R^{b}\big)\in C^\infty(\mathbb{T}^d,\mathbb{R}^d)$. Therefore, we conclude that $\Omega(x,y)$ is of class $C^1$. \noindent\textbf{(III) $C^k$-regularity. 
} The higher regularity can be proved by induction. The case $r=1$ has been proved above. Suppose that $\Omega(x,y)$ is $C^r$ and that \[\partial_x^\alpha\partial_y^{\beta}\Omega(x,y)=\partial_x^\alpha \mathcal{H}\big((\partial^\beta_y R)^y\big)\] for all multi-indices $\alpha, \beta$ satisfying $|\alpha|+|\beta|=r$. We will show that $\Omega\in C^{r+1}$, namely, that every partial derivative of order $\leqslant r+1$ exists and is continuous. Since $\Omega$ is assumed to be $C^r$, one only needs to check that the partial derivatives $\partial_{x_i}\partial_x^\alpha\partial_y^{\beta}\Omega(x,y)$ and $\partial_{y_j}\partial_x^\alpha\partial_y^{\beta}\Omega(x,y)$, $|\alpha|+|\beta|=r$, exist and are continuous on $\mathbb{T}^d\times\mathbb{T}^s$. We first claim that the partial derivative $\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$, where $|\alpha|+|\beta|=r$, exists and \begin{equation}\label{bvnsb} \partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)=\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big). \end{equation} Let $(\mathbf{e}_1, \cdots,\mathbf{e}_j,\cdots,\mathbf{e}_s)$ denote the standard orthogonal basis for $\mathbb{R}^s$.
Then, for $\varepsilon\neq 0$, \begin{equation}\label{limtepej} \begin{aligned} &\left|\frac{\partial_x^\alpha\partial_y^\beta\Omega(x,y+\varepsilon\mathbf{e}_j)-\partial_x^\alpha\partial_y^\beta\Omega(x,y)}{\varepsilon}- \partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big)\right|\\ &\qquad\qquad\qquad = \left|\partial_x^\alpha\left(\frac{\partial_y^\beta\Omega(x,y+\varepsilon\mathbf{e}_j)-\partial_y^\beta\Omega(x,y)}{\varepsilon}-\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big)\right)\right|\\ &\qquad\qquad\qquad \leqslant \left\|\frac{\mathcal{H}\big((\partial_y^\beta R)^{y+\varepsilon\mathbf{e}_j}\big)-\mathcal{H}\big((\partial_y^\beta R)^{y}\big)}{\varepsilon}-\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big)\right\|_{C^{|\alpha|}(\mathbb{T}^d)} \end{aligned} \end{equation} Note that $\partial^\beta_y R(x, y)\in C^\infty$ and $\mathcal{L}_1(\partial^\beta_y R, \partial^\beta_y S)=0$. Then, applying Lemma \ref{Lem_xieta} (ii) with $\xi=\partial^\beta_y R$ we see that the last line of \eqref{limtepej} converges to zero as $\varepsilon\to 0$. Therefore, $\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$ exists and \eqref{bvnsb} holds. Next, we will show that $(x,y)\mapsto$ $\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$ is continuous. 
Indeed, for any point $(a, b)$, using \eqref{bvnsb} it follows that \begin{align*} &\left|\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)-\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(a,b)\right|\\ \leqslant & \left|\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)-\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,b)\right|+\left|\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,b)-\partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(a,b)\right|\\ = &\left|\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big)-\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)\right|+\left|\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)(x)-\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)(a)\right|\\ \leqslant & \left\|\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{y}\big)-\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)\right\|_{C^{|\alpha|}}+\left|\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)(x)-\partial^\alpha_x\mathcal{H}\big((\partial_{y_j}\partial_y^\beta R)^{b}\big)(a)\right|. \end{align*} Evidently, the second quantity of the last line converges to zero as $x\to a$. By applying Lemma \ref{Lem_Mcont} (ii) with $\xi=\partial_{y_j}\partial_y^\beta R$, the first quantity of the last line also converges to zero as $y\to b$. This verifies the continuity of $(x,y)\mapsto \partial_{y_j}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$. By a similar argument, we can also prove that the partial derivative $\partial_{x_i}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$, where $|\alpha|+|\beta|=r$, exists and is equal to $ \partial_{x_i}\partial^\alpha_x\mathcal{H}\big((\partial_y^\beta R)^{y}\big) $, and the function $(x,y)\mapsto \partial_{x_i}\partial_x^\alpha\partial_y^\beta\Omega(x,y)$ is continuous. Therefore, we can conclude that $\Omega$ is $C^{r+1}$.
Finally, by what we have shown above, equations \eqref{exten_twoeqs} have a unique $C^\infty$ solution $\Omega$. In addition, for any multi-indices $\alpha, \beta$ with $|\alpha|+|\beta|=r$, as $\partial_x^\alpha\partial_y^{\beta}\Omega(x,y)=\partial_x^\alpha \mathcal{H}\big((\partial^\beta_y R)^y\big)$, we can apply Lemma \ref{Lem_Mcont} to obtain that \begin{align*} |\partial_x^\alpha\partial_y^{\beta}\Omega|=|\partial_x^\alpha \mathcal{H}\big((\partial^\beta_y R)^y\big)|\leqslant \|\mathcal{H}\big((\partial^\beta_y R)^y\big)\|_{C^{|\alpha|}(\mathbb{T}^d)}\leqslant C_{|\alpha|} \|(\partial^\beta_y R)^y\|_{C^{|\alpha|+\sigma_1}(\mathbb{T}^d)} \leqslant C_r \|R\|_{C^{r+\sigma_1}}. \end{align*} This verifies estimate \eqref{ext_omegest}. \end{proof} \subsection{The untwisted case} The untwisted cohomological equations can be studied in the same spirit. Setting \[\mathcal{L}_2(R, S):=\Big(R(Bx,y)-R(x,y)\Big)-\Big(S(Ax,y)-S(x,y)\Big),\] we have the following result. \begin{Pro}\label{Pro_LRS1111} Suppose that $\mathcal{L}_2(R, S)=0$, where $R(x,y), S(x,y): \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{R}^s$ are $C^\infty$ functions and the averages over the base $\mathbb{T}^d$ vanish: $\int_{\mathbb{T}^d} R(x,y) \,dx=\int_{\mathbb{T}^d} S(x,y)\, dx=0$. Then, the equations \begin{align} \Omega(Ax,y)-\Omega(x,y)= R(x,y),\qquad \Omega(Bx,y)-\Omega(x,y)= S(x,y) \end{align} admit a unique solution $\Omega\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^s) $ satisfying $\int_{\mathbb{T}^d} \Omega(x,y)\,dx=0$, and \begin{align} \| \Omega \|_{C^r}\leqslant C_r \|R\|_{C^{r+d+2}}. \end{align} \end{Pro} This result can also be proved using the preceding idea of smooth dependence on parameters. In fact, thanks to Lemma \ref{Lem_base_tame2}, by a slight adaptation of the arguments in subsection \ref{subsect_thetwistcase} we can obtain analogues of Lemma \ref{Lem_Mcont} and Lemma \ref{Lem_xieta} for the (untwisted) operator $\mathcal{L}_2$.
Then, Proposition \ref{Pro_LRS1111} follows by arguments similar to those in the proof of Proposition \ref{Pro_LRS0}, so we will not repeat them here. We end this section by remarking that for the commuting maps $\mathbf{F}=\mathcal{T}_{A,0}+\mathbf{f}$ and $\mathbf{G}=\mathcal{T}_{B,0}+\mathbf{g}$ in Section \ref{Section_actionofproducttype}, $\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\neq 0$ and $\mathcal{L}_2(\mathbf{f}_2, \mathbf{g}_2)\neq 0$ in general (they are instead quadratic; see Lemma \ref{Lem_comm_est}). So Proposition \ref{Pro_LRS0} and Proposition \ref{Pro_LRS1111} cannot be applied directly to solve the cohomological equations in \eqref{twis_LE} and \eqref{untwis_LE}. Instead, we will split $\mathbf{f}_i, \mathbf{g}_i$, $i=1,2$, into $\mathbf{f}_i=\mathcal{P}(\mathbf{f}_i)+\mathcal{E}(\mathbf{f}_i)$ and $\mathbf{g}_i=\mathcal{P}(\mathbf{g}_i)+\mathcal{E}(\mathbf{g}_i)$ in a tame way, such that $\mathcal{L}_i(\mathcal{P}(\mathbf{f}_i), \mathcal{P}(\mathbf{g}_i))=0$ and $\mathcal{E}(\mathbf{f}_i), \mathcal{E}(\mathbf{g}_i)$ are suitably small. This will be done in Section \ref{Section_tamesplit}. \section{Tame Splitting}\label{Section_tamesplit} The goal of this section is to prove Proposition \ref{Pro_split}. It shows that, thanks to the commutation relations, the perturbation can be split into two terms: one for which the linearized equations are solvable, and the other ``quadratically small'' with tame estimates. In contrast to Section \ref{Section_Smoothdependparamt}, the arguments here are based on a specific and explicit construction. \subsection{Construction} \begin{Lem}\label{lem_defproj} Let $A$ be an ergodic automorphism of $\mathbb{T}^d$ and let $\eta\in\mathbb{C}$ be a nonzero number.
For any function $f(x,y)\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{C})$, we define a function $\omega=\omega(f)$ as follows: $ \omega(x,y)\overset{\textup{def}}=\sum\limits_{n\in \mathbb{Z}^d} a_n(y)\, e^{i2\pi\langle n,x\rangle}, $ where for $n\neq 0$, \begin{equation}\label{anydef} a_n(y)\overset{\textup{def}}= \begin{cases} -\sum\limits_{l\geqslant 0}\eta^{-(l+1)}\, \widehat f_{(A^*)^l\,n}(y), & \textup{~if~} n\hookrightarrow E^u(A^*),\, E^c(A^*)\\ \\ \quad\sum\limits_{l\leqslant-1}\eta^{-(l+1)}\, \widehat f_{(A^*)^l\,n}(y), & \textup{~if~} n\hookrightarrow E^s(A^*) \end{cases} \end{equation} For $n=0$, $a_0(y)\overset{\textup{def}}=(\eta-1)^{-1}\, \widehat f_0(y)$ if $\eta\neq 1$, and $a_0(y)\overset{\textup{def}}=0$ if $\eta=1$. Then, $\omega$ is $C^\infty$ and \begin{align}\label{omgr} \|\omega\|_{C^r}\leqslant C_r \|f\|_{C^{r+d+2+\tau}}, \end{align} where $\tau=(d+1)\frac{|\ln |\eta||}{\ln \rho}$ and $\rho>1$ is the expansion rate for $A^*$. \end{Lem} \begin{proof} In order to prove that $\omega$ is $C^r$ for each $r\in\mathbb{N}$, we consider any multi-indices $\alpha\in\mathbb{N}^d$ and $\beta\in\mathbb{N}^s$ satisfying $|\alpha|+|\beta|=r$, and differentiate $\omega$ formally to get \begin{equation}\label{dklhr} \partial_x^\alpha\partial_y^\beta\omega=\sum_{n\in \mathbb{Z}^d} \partial_y^\beta a_n(y)\cdot (i2\pi)^{|\alpha|}\cdot n^\alpha\cdot e^{i2\pi\langle n,x\rangle}. \end{equation} Then we only need to show that the formal sum \eqref{dklhr} is absolutely convergent.
If $n\hookrightarrow E^u(A^*)$, then using $\partial_y^\beta\Big(\widehat f_{(A^*)^l\,n} (y)\Big)=\widehat{(\partial_y^\beta f)}_{(A^*)^l\,n}(y)$ and \eqref{parhyp_split}, it follows that \begin{align*} |\partial_y^\beta a_n|\leqslant \sum\limits_{l\geqslant 0}|\eta|^{-(l+1)} \, \left|\widehat{(\partial_y^\beta f)}_{(A^*)^l\,n}\right|\leqslant \sum\limits_{l\geqslant 0} \frac{|\eta|^{-(l+1)} \|\partial_y^\beta f\|_{C^k}}{\|(A^*)^l\,n\|^k} \leqslant & \sum\limits_{l\geqslant 0} \frac{|\eta|^{-(l+1)}\|\partial_y^\beta f\|_{C^k}}{\|(A^*)^l\,\pi_u(n)\|^k}\\ \leqslant & C \sum\limits_{l\geqslant 0}|\eta|^{-(l+1)} \, \frac{\|\partial_y^\beta f\|_{C^k}}{\rho^{kl}\|\pi_u(n)\|^k}\\ \leqslant & C' \frac{\|\partial_y^\beta f\|_{C^k}}{\|\pi_u(n)\|^k}\leqslant 3^k C'\frac{\|\partial_y^\beta f\|_{C^k}}{\|n\|^k} \end{align*} provided $k>\frac{-\ln |\eta|}{\ln \rho}$. Indeed, the choice of $k$ ensures the convergence of $\sum_{l\geqslant 0} |\eta|^{-(l+1)}\rho^{-kl}$. Similarly, for $n\hookrightarrow E^s(A^*)$, the estimate $|\partial_y^\beta a_n| \leqslant C_k\frac{\|\partial_y^\beta f\|_{C^k}}{\|n\|^k}$ holds provided that $k>\frac{\ln |\eta|}{\ln \rho}$. When $n\hookrightarrow E^c(A^*)$, by the Katznelson lemma (see Lemma \ref{Lem_Katznelson}), $\|\pi_u(n)\|\geqslant \gamma \|n\|^{-d}$. Then, \begin{align*} \|(A^*)^l n\|\geqslant\|(A^*)^l\,\pi_u(n)\|\geqslant C\rho^{l}\|\pi_u(n)\|\geqslant C\gamma \rho^{l} \|n\|^{-d}\geqslant C\gamma \rho^{l-l_0} \|n\| \end{align*} for all $l\geqslant l_0$, where $l_0=\left[\frac{(d+1)\ln\|n\|}{\ln \rho}\right]+1$. Meanwhile, for $0\leqslant l\leqslant l_0-1$ we have \[ \|(A^*)^ln\|\geqslant \|(A^*)^l\pi_c(n)\|\geqslant C(1+l)^{-d} \|\pi_c(n)\|\geqslant\frac{C}{3}(1+l)^{-d} \|n\|.
\] Then we deduce that (only need to consider the worst case, $|\eta|<1$) \begin{align*} |\partial_y^\beta a_n| \leqslant & \sum_{l\geqslant 0} |\eta|^{-(l+1)} \, \left|\widehat{(\partial_y^\beta f)}_{(A^*)^l\,n}\right|\\ \leqslant & C'\|\partial^\beta_y f\|_{C^k}\left(\sum_{l=0}^{l_0-1}|\eta|^{-(l+1)}(1+l)^{dk}\|n\|^{-k}+ \sum_{l=l_0}^{+\infty}|\eta|^{-(l+1)}\rho^{-k(l-l_0)}\|n\|^{-k}\right)\\ \leqslant & C''\|\partial^\beta_y f\|_{C^k} \|n\|^{-k}\left(|\eta|^{-l_0}\cdot l_0^{dk+1}+ |\eta|^{-l_0}\sum_{i=0}^{+\infty}|\eta|^{-i}\rho^{-ki}\right)\\ \leqslant & C'''\|\partial^\beta_y f\|_{C^k} \|n\|^{-k}\cdot |\eta|^{-l_0}\left(l_0^{dk+1}+ c\right)\\ \leqslant & C'''' \|\partial^\beta_y f\|_{C^k} \|n\|^{-k}\cdot \|n\|^{\tau} \cdot(\ln\|n\|)^{dk+1}\leqslant C_k \|\partial^\beta_y f\|_{C^k}\, \|n\|^{-k+\tau+1} \end{align*} provided that $k>\frac{-\ln |\eta|}{\ln \rho}$. Here, $\tau=(d+1)\frac{|\ln|\eta||}{\ln\rho}$. Therefore, by \eqref{dklhr} we can estimate that \begin{align*} |\partial_x^\alpha\partial_y^\beta\omega|\leqslant (2\pi)^{|\alpha|}\sum_{n\in \mathbb{Z}^d} |\partial_y^\beta a_n| \cdot \|n\|^{|\alpha|}\leqslant (2\pi)^{|\alpha|} \, C_k \sum_{n\in \mathbb{Z}^d} \| f\|_{C^{k+|\beta|}}\, \|n\|^{-k+\tau+1+|\alpha|} \end{align*} provided that $k>\frac{|\ln |\eta||}{\ln \rho}$. By taking $k=|\alpha|+d+2+\tau$, we get \begin{align*} |\partial_x^\alpha\partial_y^\beta\omega|\leqslant C \| f\|_{C^{|\alpha|+|\beta|+d+2+\tau}}. \end{align*} It holds for each multi-indices $\alpha\in\mathbb{N}^d$ and $\beta\in\mathbb{N}^s$ satisfying $|\alpha|+|\beta|=r$. This verifies that $\omega$ is $C^r$ for any $r\geqslant 0$, and satisfies estimate \eqref{omgr}. \end{proof} Recall the following two operators: \begin{equation*} \mathcal{L}_1(f_1, g_1)=\Big(f_1(Bx,y)-Bf_1(x,y)\Big)-\Big( g_1(Ax,y)-Ag_1(x,y)\Big) \end{equation*} for functions $f_1, g_1 : \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{R}^d$. 
\begin{equation*} \mathcal{L}_2(f_2, g_2)=\Big(f_2(Bx,y)-f_2(x,y)\Big)-\Big( g_2(Ax,y)-g_2(x,y)\Big) \end{equation*} for functions $f_2, g_2 : \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{R}^s$. Then, the following tame splitting holds. \begin{Pro}[Tame Splitting]\label{Pro_split} Suppose that $\mathcal{L}_1(f_1, g_1)=\Phi_1$ and $\mathcal{L}_2(f_2, g_2)=\Phi_2$, where $ \Phi_1\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s, \mathbb{R}^d)$ and $\Phi_2\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s, \mathbb{R}^s)$. Then, \begin{enumerate}[\bf(I)] \item there exists a splitting: $ f_1=\mathcal{P}(f_1)+\mathcal{E}(f_1)$ and $g_1=\mathcal{P}(g_1)+\mathcal{E}(g_1)$ such that \[\mathcal{L}_1(\mathcal{P}(f_1),\mathcal{P}(g_1))=0,\qquad \mathcal{L}_1(\mathcal{E}(f_1),\mathcal{E}(g_1))=\Phi_1\] \begin{align*} \|\mathcal{P}(f_1), \mathcal{P}(g_1)\|_{C^r}\leqslant C_r \,\|f_1\|_{C^{r+\sigma_2}},\qquad \|\mathcal{E}(f_1), \mathcal{E}(g_1)\|_{C^r}\leqslant C_r \,\|\Phi_1\|_{C^{r+\sigma_2}} \end{align*} \item there exists a splitting: $ f_2=[f_2]+\mathcal{P}(f_2)+\mathcal{E}(f_2)$ and $g_2=[g_2]+\mathcal{P}(g_2)+\mathcal{E}(g_2)$ where $[f_2](y)$ and $[g_2](y)$ are the averages over the base $\mathbb{T}^d$, and \[\mathcal{L}_2(\mathcal{P}(f_2),\mathcal{P}(g_2))=0,\qquad \mathcal{L}_2(\mathcal{E}(f_2),\mathcal{E}(g_2))=\Phi_2\] \begin{align*} \|\mathcal{P}(f_2), \mathcal{P}(g_2)\|_{C^r}\leqslant C_r \,\|f_2\|_{C^{r+\sigma_2}}, \qquad \|\mathcal{E}(f_2), \mathcal{E}(g_2)\|_{C^r}\leqslant C_r \,\|\Phi_2\|_{C^{r+\sigma_2}} \end{align*} Moreover, the averages over the base $[\mathcal{P}(f_2)](y)=[\mathcal{P}(g_2)](y)=[\mathcal{E}(f_2)](y)=[\mathcal{E}(g_2)](y)=0$. \end{enumerate} Here, the integer $\sigma_2$ depends only on the dimensions $d>0, s>0$ and $A$ and $B$. 
\end{Pro} \begin{proof} \textbf{(I)} If $A$ and $B$ are semisimple, then by choosing a proper basis in which $A$ and $B$ are simultaneously diagonalized, the system $\mathcal{L}_1(f_1, g_1)=\Phi_1$ splits into finitely many one-dimensional equations of the following form \begin{align}\label{bmwiuqhw} \big(\theta(Bx,y)-\mu \theta(x, y)\big)-\big(\psi(Ax,y)-\lambda \psi(x, y)\big)=\phi(x,y) \end{align} where $\lambda\neq 1$ and $\mu\neq 1$ are eigenvalues of the ergodic automorphisms $A$ and $B$, respectively, and $\theta, \psi, \phi$ belong to $C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R})$. For simplicity, we introduce the notation \[\Delta_A^\lambda\psi:=\psi(Ax,y)-\lambda \psi(x,y),\qquad \Delta_B^\mu\theta:=\theta(Bx,y)-\mu \theta(x,y).\] Applying Lemma \ref{lem_defproj} with the number $\eta=\lambda$ and the function $f=\theta$, we can construct a $C^\infty$ function $\omega(x,y)$ satisfying \begin{align}\label{nxkanal} \|\omega\|_{C^r}\leqslant C_r \|\theta\|_{C^{r+r_0}}, \end{align} where $r_0=d+2+\tau$ with $\tau=(d+1)\frac{|\ln |\lambda||}{\ln \rho}$ and $\rho>1$ is the expansion rate for $A^*$. Now we construct the projections \begin{equation*} \mathcal{P}(\theta)\overset{\textup{def}}=\Delta_A^\lambda\omega=\omega(Ax,y)-\lambda \omega(x,y),\qquad \mathcal{P}(\psi)\overset{\textup{def}}=\Delta_B^\mu\omega=\omega(Bx,y)-\mu \omega(x,y). \end{equation*} As $A$ commutes with $B$, $\Delta_B^\mu \mathcal{P}(\theta)-\Delta_A^\lambda\mathcal{P}(\psi)=0$. Next, we set \begin{equation*} \mathcal{E}(\theta)\overset{\textup{def}}=\theta-\mathcal{P}(\theta),\qquad \mathcal{E}(\psi)\overset{\textup{def}}=\psi-\mathcal{P}(\psi). \end{equation*} Thus, by \eqref{bmwiuqhw} it follows that \begin{equation}\label{afqqq} \Delta_B^\mu \, \mathcal{E}(\theta)-\Delta_A^\lambda\, \mathcal{E}(\psi)=\phi. \end{equation} Note that all functions $\mathcal{P}(\theta)$, $\mathcal{P}(\psi)$, $\mathcal{E}(\theta)$ and $\mathcal{E}(\psi)$ are $C^\infty$ on $\mathbb{T}^d\times\mathbb{T}^s$.
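For the reader's convenience, we spell out the elementary computation behind the identity $\Delta_B^\mu \, \mathcal{P}(\theta)-\Delta_A^\lambda\,\mathcal{P}(\psi)=0$; it uses nothing beyond the commutation relation $AB=BA$:
\begin{align*}
\Delta_B^\mu \, \mathcal{P}(\theta)-\Delta_A^\lambda\,\mathcal{P}(\psi)
=&\ \Delta_B^\mu\Delta_A^\lambda\,\omega-\Delta_A^\lambda\Delta_B^\mu\,\omega\\
=&\ \Big(\omega(ABx,y)-\lambda\,\omega(Bx,y)-\mu\,\omega(Ax,y)+\lambda\mu\,\omega(x,y)\Big)\\
&-\Big(\omega(BAx,y)-\mu\,\omega(Ax,y)-\lambda\,\omega(Bx,y)+\lambda\mu\,\omega(x,y)\Big)=0.
\end{align*}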
\noindent\textbf{Estimates for $\mathcal{P}(\theta)$ and $\mathcal{P}(\psi)$.} Obviously, \eqref{nxkanal} implies that \begin{equation} \|\mathcal{P}(\theta), \mathcal{P}(\psi)\|_{C^r}\leqslant C_r \|\theta\|_{C^{r+r_0}}, \end{equation} where we enlarge the constant $C_r$ if necessary. \noindent\textbf{Estimates for $\mathcal{E}(\theta)$ and $\mathcal{E}(\psi)$.} We write $\mathcal{E}(\theta)$ in the following form of Fourier series expansion \begin{equation}\label{fexpan_calE} \mathcal{E}(\theta)(x,y)=\sum\limits_{n\in \mathbb{Z}^d} \widehat{\mathcal{E}(\theta)}_n(y)\cdot e^{i2\pi\langle n,x\rangle}, \end{equation} then by $\mathcal{E}(\theta)=\theta-\mathcal{P}(\theta)=\theta-\Delta_A^\lambda\omega$ and Lemma \ref{lem_defproj}, it is not difficult to check that $\widehat{\mathcal{E}(\theta)}_0=0$, and for $n\neq 0$, \begin{equation}\label{wotyoiy} \widehat{\mathcal{E}(\theta)}_n(y)= \begin{cases} \lambda \sum\limits_{l\in \mathbb{Z}}\lambda^{-(l+1)}\cdot \widehat \theta_{(A^*)^l\,n}(y), & \textup{if~} n\hookrightarrow E^s(A^*) \textup{~and~} A^*n\hookrightarrow E^u(A^*), E^c(A^*) \\ \quad 0, & \textup{otherwise} \end{cases} \end{equation} $\bullet$ Thus, we need to estimate, for every $n$ satisfying $n\hookrightarrow E^s(A^*)$ and $A^*n\hookrightarrow E^u(A^*), E^c(A^*)$, the sum \begin{equation}\label{dfgoga} \sum\nolimits^A \widehat \theta_{n}(y):=\sum\limits_{l\in \mathbb{Z}}\lambda^{-(l+1)}\cdot \widehat \theta_{(A^*)^l\,n}(y) \end{equation} Note that the formal sum \eqref{dfgoga} is always absolutely convergent for any $n\in \mathbb{Z}^d\setminus\{0\}$; this is easy to check because $n$ has non-trivial projections onto the expanding and contracting subspaces of the ergodic $A^*$. $\bullet$ We need to estimate the size of \eqref{dfgoga} with respect to $\|n\|$, for every $n$ satisfying $n\hookrightarrow E^s(A^*)$ and $A^*n\hookrightarrow E^u(A^*), E^c(A^*)$.
Recall that \eqref{bmwiuqhw} implies that the equation $\Delta_A^\lambda\psi= \Delta_B^\mu\theta-\phi$ holds, so the obstructions for $ \Delta_B^\mu\theta-\phi$ with respect to $A$ vanish, namely, for any nonzero $n\in\mathbb{Z}^d$, \begin{equation}\label{eq_nisia} \sum\nolimits^A \widehat\theta_{B^*n}(y)-\mu\sum\nolimits^A \widehat\theta_n(y)-\sum\nolimits^A \widehat\phi_n(y)=0. \end{equation} All formal sums in \eqref{eq_nisia} are absolutely convergent (the proof is the same as for \eqref{dfgoga}). Therefore, by iterating \eqref{eq_nisia} backward and forward we obtain \begin{equation}\label{doublesum} \sum\nolimits^A \widehat\theta_n(y)=-\sum\nolimits^B_+\sum\nolimits^A \widehat\phi_n(y)=\sum\nolimits^B_-\sum\nolimits^A \widehat\phi_n(y), \end{equation} where the notation \begin{equation*} \sum\nolimits^B_{\substack{+\\(-)}}\sum\nolimits^A \widehat\phi_n=\sum_{\substack{(l,k)\in H^+\\ ( (l,k)\in H^-)}}\lambda^{-(l+1)}\cdot \mu^{-(k+1)}\,\,\widehat\phi_{(A^*)^l(B^*)^k\,n} \end{equation*} with the sets $H^+=\{(l,k): l\in \mathbb{Z}, k\geqslant 0\}$ and $H^-=\{(l,k): l\in \mathbb{Z}, k< 0\}$. Consequently, estimating \eqref{dfgoga} is equivalent to estimating the double sum in \eqref{doublesum}. Since $n\hookrightarrow E^s(A^*) $ with $A^*n\hookrightarrow E^u(A^*), E^c(A^*)$, due to condition \ref{condHR} we have: either for all non-zero $(l,k)\in H^+$ or for all non-zero $(l,k)\in H^-$, the following polynomial estimates hold \begin{equation}\label{pollowbound} \|(A^*)^l(B^*)^k\,n\|\geqslant \frac{C}{|(l,k)|^{2d}} \|n\|, \end{equation} where $|(l,k)|:=\max\{|l|, |k|\}$. See \cite[page 1837]{Damjanovic_Katok10}. In the sequel, without loss of generality, we suppose that \eqref{pollowbound} holds on $H^+$.
Then, by \eqref{doublesum}, \begin{align*} \sum\nolimits^A \widehat \theta_{n}(y)=-\sum_{(l,k)\in H^+} \lambda^{-(l+1)}\cdot \mu^{-(k+1)}\,\,\widehat\phi_{(A^*)^l(B^*)^k\,n}(y) \end{align*} We will choose a suitable $M>0$ and split the above sum into two parts $S_{<M}(\phi)$ and $S_{\geqslant M}(\phi)$: one is the finite sum on $H^+_{<M}=\{(l,k)\in H^+, |(l,k)|< M \} $ and the other is the infinite sum on $H^+_{\geqslant M}=\{(l,k)\in H^+, |(l,k)|\geqslant M \}$. For $H^+_{<M}$ we use the polynomial estimates \eqref{pollowbound}, and for $H^+_{\geqslant M}$ we use the exponential estimates in Lemma \ref{lem_Abexp}. More precisely, by Lemma \ref{lem_Abexp} one has \begin{align}\label{hfgnvmow} \|(A^*)^l(B^*)^k n\|\geqslant C e^{|(l,k)|\kappa_0}\|n\|^{-d} \geqslant C e^{(|(l,k)|-M)\kappa_0}\|n\| \end{align} where we choose the integer $M=\big[\frac{d+1}{\kappa_0}\ln \|n\|\big]+1$. To estimate $S_{\geqslant M}(\phi)$, we set $m_0:=\max\{|\lambda|, |\mu|, |\lambda|^{-1},|\mu|^{-1}\}$, then using \eqref{hfgnvmow} it follows that for any $r\geqslant 0$, and any integer $p> a=\big[\frac{2(d+1)}{\kappa_0}\ln |m_0|\big]+1$, \begin{equation}\label{nannie} \begin{aligned} \left\|S_{\geqslant M}(\phi)\right\|_{C^r(\mathbb{T}^s)}\leqslant & C'\sum_{H^+_{\geqslant M}}|\lambda|^{-(l+1)} |\mu|^{-(k+1)}\frac{\|\phi\|_{C^{r+p}}}{\|(A^*)^l(B^*)^k\,n\|^p}\\ \leqslant &C'' \|\phi\|_{C^{r+p}}\|n\|^{-p}\sum_{H^+_{\geqslant M}} m_0^{2|(l,k)|} \,e^{-(|(l,k)|-M)\kappa_0 p}\\ \leqslant &C'' \|\phi\|_{C^{r+p}}\|n\|^{-p}\,m_0^{2M}\sum_{H^+_{\geqslant M}} \left(m_0^2\, e^{-\kappa_0 p}\right)^{|(l,k)|-M}\\ \leqslant &C''' \|\phi\|_{C^{r+p}}\|n\|^{-p+a}\sum_{H^+_{\geqslant M}} \left(m_0^2\, e^{-\kappa_0 p}\right)^{|(l,k)|-M}\leqslant C_{r,p} \|\phi\|_{C^{r+p}}\|n\|^{-p+a} \end{aligned} \end{equation} To estimate $S_{< M}(\phi)$, we use \eqref{pollowbound} and it follows that for any $r\geqslant 0$, and any integer $p> a$, \begin{equation}\label{fjslw} \begin{aligned} \left\|S_{< 
M}(\phi)\right\|_{C^r(\mathbb{T}^s)}\leqslant &C'\sum_{H^+_{< M}}|\lambda|^{-(l+1)} |\mu|^{-(k+1)}\frac{\|\phi\|_{C^{r+p}}}{\|(A^*)^l(B^*)^k\,n\|^p}\\ \leqslant &C'' \|\phi\|_{C^{r+p}}\|n\|^{-p}\sum_{H^+_{< M}} m_0^{2|(l,k)|}\, |(l,k)|^{2d p}\\ \leqslant &C'' \|\phi\|_{C^{r+p}}\|n\|^{-p} M^{2}m_0^{2M} M^{2dp}\\ \leqslant &C''' \|\phi\|_{C^{r+p}}\|n\|^{-p+a} M^{2+2dp}\leqslant C_{r,p} \|\phi\|_{C^{r+p}}\|n\|^{-p+a+1} \end{aligned} \end{equation} Thus, \eqref{nannie} together with \eqref{fjslw} give the $C^r$-estimate for $\sum\nolimits^A \widehat \theta_{n}(y)$. Combined with \eqref{wotyoiy}, we obtain \begin{equation}\label{auukjfh} \left\|\widehat{\mathcal{E}(\theta)}_n(y)\right\|_{C^r(\mathbb{T}^s)}\leqslant C_{r,p} \|\phi\|_{C^{r+p}}\|n\|^{-p+a+1} \end{equation} for any $p> a=\big[\frac{2(d+1)}{\kappa_0}\ln |m_0|\big]+1$. $\bullet$ Now, for any multi-indices $\alpha\in \mathbb{N}^d$ and $\beta\in \mathbb{N}^s$ satisfying $|\alpha|+|\beta|=r'$, we deduce from \eqref{fexpan_calE} and \eqref{auukjfh} that \begin{align*} \|\partial_x^\alpha\partial_y^\beta\mathcal{E}(\theta)\|_{C^0}\leqslant \sum_{n\in \mathbb{Z}^d} (2\pi\|n\|)^{|\alpha|} \left\|\widehat{\mathcal{E}(\theta)}_n(y)\right\|_{C^{|\beta|}(\mathbb{T}^s)}\leqslant \widetilde{C}_{|\beta|,p} \|\phi\|_{C^{|\beta|+p}}\sum_{n\in \mathbb{Z}^d} \|n\|^{-p+a+1+|\alpha|}. \end{align*} To ensure the convergence, we take $p=a+|\alpha|+d+2$ and get $ \|\partial_x^\alpha\partial_y^\beta\mathcal{E}(\theta)\|_{C^0}\leqslant C_{r'} \|\phi\|_{C^{r'+a+d+2}}. $ Since it holds for any multi-indices $\alpha\in \mathbb{N}^d$ and $\beta\in \mathbb{N}^s$, we thus obtain \begin{align*} \|\mathcal{E}(\theta)\|_{C^{r'}}\leqslant C_{r'} \|\phi\|_{C^{r'+a+d+2}}. \end{align*} for any $r'\geqslant 0$. As for $\mathcal{E}(\psi)$, we recall \eqref{afqqq} and find that $\mathcal{E}(\psi)$ satisfies the equation $\Delta_A^\lambda\, \mathcal{E}(\psi)=\Delta_B^\mu \, \mathcal{E}(\theta)-\phi$. 
Thus, we infer from Proposition \ref{Pro_LRS0} that \[ \|\mathcal{E}(\psi)\|_{C^{r'}}\leqslant K_{r'} \|\Delta_B^\mu \, \mathcal{E}(\theta)-\phi\|_{C^{r'+\sigma_1}}\leqslant\widetilde {K}_{r'} (\|\mathcal{E}(\theta)\|_{C^{r'+\sigma_1}}+\|\phi\|_{C^{r'+\sigma_1}})\leqslant C_{r', \sigma_1} \|\phi\|_{C^{r'+a+d+2+\sigma_1}} \] for any $r'\geqslant 0$. Therefore, we have finished the proof in the case of semisimple $A$ and $B$. If $A$ and $B$ are not semisimple, Jordan blocks may appear; then, instead of the one-dimensional equation \eqref{bmwiuqhw}, each Jordan block yields a system of equations. However, analogous to \cite[Lemma 4.5]{Damjanovic_Katok10}, this system of equations can be studied inductively in finitely many steps, starting from an equation of the form \eqref{bmwiuqhw}. We will not repeat the arguments here. \noindent \textbf{(II)} It can be proved in the same fashion as part \textbf{(I)}. In fact, the proof is simpler because the operator $\mathcal{L}_2(f_2, g_2)=\Big(f_2(Bx,y)-f_2(x,y)\Big)-\Big(g_2(Ax,y)-g_2(x,y)\Big)$ is untwisted, and there are no Jordan blocks. Then we directly apply Lemma \ref{lem_defproj} with $\eta=1$ and $f=f_2$ to construct a $C^\infty$ function $\omega\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^s)$. Now, we define \begin{equation*} \mathcal{P}(f_2)\overset{\textup{def}}=\omega(Ax,y)-\omega(x,y),\qquad \mathcal{P}(g_2)\overset{\textup{def}}=\omega(Bx,y)-\omega(x,y) \end{equation*} and \begin{equation*} \mathcal{E}(f_2)\overset{\textup{def}}=f_2-[f_2]-\mathcal{P}(f_2),\qquad \mathcal{E}(g_2)\overset{\textup{def}}=g_2-[g_2]-\mathcal{P}(g_2). \end{equation*} The remaining proof is similar to that of part \textbf{(I)}, so we will not repeat the arguments here.
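We only record the verification of the last claim of part \textbf{(II)}, namely that the base averages vanish. Since $A$ is an automorphism of $\mathbb{T}^d$, the map $x\longmapsto Ax$ preserves the Haar measure of $\mathbb{T}^d$, and hence
\[
[\mathcal{P}(f_2)](y)=\int_{\mathbb{T}^d}\Big(\omega(Ax,y)-\omega(x,y)\Big)\,dx=\int_{\mathbb{T}^d}\omega(x,y)\,dx-\int_{\mathbb{T}^d}\omega(x,y)\,dx=0.
\]
In the same way $[\mathcal{P}(g_2)](y)=0$, and consequently $[\mathcal{E}(f_2)](y)=[f_2](y)-[f_2](y)-[\mathcal{P}(f_2)](y)=0$, and likewise $[\mathcal{E}(g_2)](y)=0$.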
\end{proof} \subsection{Concluding remark} Let $\rho:\mathbb{Z}^2\to \textup{Diff}^\infty(\mathbb{T}^d\times\mathbb{T}^s)$ denote the $\mathbb{Z}^2$ action generated by $\rho(\mathbf{e}_1)=\mathcal{T}_{A,0}=A\times id_{\mathbb{T}^s}$ and $\rho(\mathbf{e}_2)=\mathcal{T}_{B,0}=B\times id_{\mathbb{T}^s}$. Let $\mathcal{V}$ be the space of all smooth functions $h=(h_1,h_2): \mathbb{T}^d\times\mathbb{T}^s\to \mathbb{R}^d\times\mathbb{R}^s$ which satisfy $\int_{\mathbb{T}^d} h_2(x,y)\, dx=0$. We define two smooth tame linear operators $\Delta: \mathcal{V}\to \mathcal{V}\times \mathcal{V}$ and $\mathcal{L}: \mathcal{V}\times\mathcal{V}\to\mathcal{V}$ by \begin{equation}\label{opedelcL} \begin{aligned} \Delta h:=(\Delta^{\mathbf{e_1}} h,~ \Delta^{\mathbf{e_2}} h),\qquad \mathcal{L}(f, g):= \Delta^{\mathbf{e_2}}f -\Delta^{\mathbf{e_1}}g \end{aligned} \end{equation} where the linear operators $\Delta^{\mathbf{e_1}}$ and $\Delta^{\mathbf{e_2}}$ are defined as follows: for each $h=(h_1, h_2)\in \mathcal{V}$ \begin{align*} \Delta^{\mathbf{e_1}}h:=h\circ \rho(\mathbf{e}_1) -\rho_*(\mathbf{e}_1) h &=(h_1(Ax,y)-A h_1(x,y),~ h_2(Ax,y)- h_2(x,y) ),\\ \Delta^{\mathbf{e_2}}h:=h\circ \rho(\mathbf{e}_2) -\rho_*(\mathbf{e}_2) h &=(h_1(Bx,y)-B h_1(x,y), ~h_2(Bx,y)- h_2(x,y) ). \end{align*} Since $A$ commutes with $B$, the following sequence is exact, i.e., $\mathcal{L}\circ \Delta =0$, \begin{equation}\label{exactsequence} \mathcal{V} \xrightarrow{\quad\Delta\quad} \mathcal{V}\times\mathcal{V} \xrightarrow{\quad\mathcal{L}\quad} \mathcal{V} \end{equation} Let $Y=\textup{Im} \mathcal{L}$ be the image of $\mathcal{L}$. As a consequence of Proposition \ref{Pro_split}, the following result holds. 
\begin{Cor} The exact sequence \eqref{exactsequence} admits the following tame splitting: there exist two smooth tame operators $\widetilde\Delta:\mathcal{V}\times\mathcal{V}\to \mathcal{V}$ and $\widetilde\mathcal{L}: Y\to\mathcal{V}\times\mathcal{V}$ such that $\Delta\circ\widetilde\Delta+\widetilde\mathcal{L}\circ\mathcal{L}=id$ on the space $\mathcal{V}\times\mathcal{V}$. Here, $Y=\mathcal{L}(\mathcal{V}\times\mathcal{V})$ is a linear subspace of $\mathcal{V}$. \end{Cor} \begin{proof} For any two elements $f=(f_1, f_2)$ and $g=(g_1, g_2)$ in $\mathcal{V}$, note that $\int_{\mathbb{T}^d}f_2(x,y)\,dx=\int_{\mathbb{T}^d}g_2(x,y)\,dx=0$; then we can apply Proposition \ref{Pro_split} to split $f$ and $g$ into \[f=\mathcal{P}(f)+\mathcal{E}(f), \qquad g=\mathcal{P}(g)+\mathcal{E}(g),\] where $\mathcal{P}(f)=(\mathcal{P}(f_1), \mathcal{P}(f_2))$ and $\mathcal{E}(f)=(\mathcal{E}(f_1), \mathcal{E}(f_2))$, and $\mathcal{P}(g)=(\mathcal{P}(g_1),\mathcal{P}(g_2))$ and $\mathcal{E}(g)=(\mathcal{E}(g_1),\mathcal{E}(g_2))$. Moreover, $\mathcal{L}_1(\mathcal{P}(f_1), \mathcal{P}(g_1))=0$ and $\mathcal{L}_2(\mathcal{P}(f_2), \mathcal{P}(g_2))=0$. This, combined with Propositions \ref{Pro_LRS0} and \ref{Pro_LRS1111}, implies that there exists a unique solution $h\in\mathcal{V}$ satisfying the equation $\Delta h = (\mathcal{P}(f), \mathcal{P}(g))$, where $\Delta$ is defined in \eqref{opedelcL}. As a consequence, we can define the operator $\widetilde\Delta: \mathcal{V}\times\mathcal{V}\to \mathcal{V}$ by $\widetilde\Delta (f, g)=h$. Clearly, it is a tame operator. On the other hand, for any $\Phi\in Y$, by definition there exists a pair $(f, g)$ such that $\mathcal{L}(f,g)=\Phi$, in other words, $\mathcal{L}_1(f_1, g_1)=\Phi_1$ and $\mathcal{L}_2(f_2,g_2)=\Phi_2$. Then, applying the construction in Proposition \ref{Pro_split} we define the operator $\widetilde\mathcal{L}: Y\to\mathcal{V}\times\mathcal{V}$ by $\widetilde\mathcal{L}(\Phi)=(f-\mathcal{P}(f), g-\mathcal{P}(g))$. 
To show that $\widetilde\mathcal{L}$ is well defined, suppose that there is another pair $(f', g')$ such that $\mathcal{L}(f',g')=\Phi$; then we claim that $(f'-\mathcal{P}(f'), g'-\mathcal{P}(g'))=(f-\mathcal{P}(f), g-\mathcal{P}(g))$. Indeed, we can write $f'=f+u$ and $g'=g+v$. Because $\mathcal{L}(f',g')=\mathcal{L}(f,g)$, we obtain $\mathcal{L}(u,v)=0$. Thus, according to the construction in the proof of Proposition \ref{Pro_split}, $\mathcal{P}(u)=u$ and $\mathcal{P}(v)=v$. By linearity, we get $f'-\mathcal{P}(f')=f+u-\mathcal{P}(f)-\mathcal{P}(u)=f-\mathcal{P}(f)$ and $g'-\mathcal{P}(g')=g-\mathcal{P}(g)$. Therefore, $\widetilde\mathcal{L}$ is well defined. Finally, $\Delta\circ\widetilde\Delta+\widetilde\mathcal{L}\circ \mathcal{L}=id_{\mathcal{V}\times\mathcal{V}}$ follows immediately from the above definitions. \end{proof} \section{The KAM scheme and proof of Theorem \ref{Element_Thm1}}\label{Section_KAMscheme} \subsection{The inductive step}\label{subsection_induclem} We first establish the inductive step of the KAM scheme. Note that at each step of the iterative process we need to deal with cohomological equations of the form \eqref{twis_LE}--\eqref{untwis_LE}. According to Proposition \ref{Pro_LRS0} and Proposition \ref{Pro_LRS1111}, a loss of regularity can occur. To overcome this fixed loss of derivatives, we use the smoothing operators $\mathrm{S}_N$ for functions on $\mathbb{T}^{d+s}$, and solve approximately the following (truncated) system:\begin{equation}\label{trun_twis_LE} \begin{aligned} \mathbf{h}_1\circ \mathcal{T}_{A,0}-A\mathbf{h}_1=\mathrm{S}_N\mathbf{f}_1,\qquad \mathbf{h}_1\circ \mathcal{T}_{B,0}-B\mathbf{h}_1=\mathrm{S}_N\mathbf{g}_1. 
\end{aligned} \end{equation} and \begin{equation}\label{trun_untwis_LE} \begin{aligned} \mathbf{h}_2 \circ \mathcal{T}_{A,0}- \mathbf{h}_2=\mathrm{S}_N\mathbf{f}_2,\qquad \mathbf{h}_2\circ \mathcal{T}_{B,0}-\mathbf{h}_2=\mathrm{S}_N\mathbf{g}_2.\ \end{aligned} \end{equation} To obtain approximate solutions we apply Proposition \ref{Pro_split} to $\mathrm{S}_N\mathbf{f}$ and $\mathrm{S}_N\mathbf{g}$, and get the splitting \begin{equation*} \mathrm{S}_N\mathbf{f}_1=\mathcal{P}(\mathrm{S}_N\mathbf{f}_1)+\mathcal{E}(\mathrm{S}_N\mathbf{f}_1),\qquad \mathrm{S}_N\mathbf{g}_1=\mathcal{P}(\mathrm{S}_N\mathbf{g}_1)+\mathcal{E}(\mathrm{S}_N\mathbf{g}_1) \end{equation*} \begin{equation*} \mathrm{S}_N\mathbf{f}_2=[\mathrm{S}_N(\mathbf{f}_2)]+\mathcal{P}(\mathrm{S}_N\mathbf{f}_2)+\mathcal{E}(\mathrm{S}_N\mathbf{f}_2),\qquad \mathrm{S}_N\mathbf{g}_2=[\mathrm{S}_N(\mathbf{g}_2)]+\mathcal{P}(\mathrm{S}_N\mathbf{g}_2)+\mathcal{E}(\mathrm{S}_N\mathbf{g}_2) \end{equation*} so that \[\mathcal{L}_1(\mathcal{P}(\mathrm{S}_N\mathbf{f}_1), \mathcal{P}(\mathrm{S}_N\mathbf{g}_1))=0,\qquad\mathcal{L}_2(\mathcal{P}(\mathrm{S}_N\mathbf{f}_2), \mathcal{P}(\mathrm{S}_N\mathbf{g}_2))=0\] with the averaged (w.r.t $\mathbb{T}^d$) terms $[\mathcal{P}(\mathrm{S}_N\mathbf{f}_2)](y)=[\mathcal{P}(\mathrm{S}_N\mathbf{g}_2)](y)=0$, and \[\mathcal{L}_1(\mathcal{E}(\mathrm{S}_N\mathbf{f}_1), \mathcal{E}(\mathrm{S}_N\mathbf{g}_1))=\mathcal{L}_1(\mathrm{S}_N\mathbf{f}_1, \mathrm{S}_N\mathbf{g}_1),\qquad \mathcal{L}_2(\mathcal{E}(\mathrm{S}_N\mathbf{f}_2), \mathcal{E}(\mathrm{S}_N\mathbf{g}_2))=\mathcal{L}_2(\mathrm{S}_N\mathbf{f}_2, \mathrm{S}_N\mathbf{g}_2).\] Then, by Proposition \ref{Pro_LRS0} and Proposition \ref{Pro_LRS1111} the system \begin{equation}\label{eq_psnfg1} \begin{aligned} \mathbf{h}_1\circ\mathcal{T}_{A,0}-A\mathbf{h}_1=\mathcal{P}(\mathrm{S}_N\mathbf{f}_1),\qquad \mathbf{h}_1\circ\mathcal{T}_{B,0}-B\mathbf{h}_1=\mathcal{P}(\mathrm{S}_N\mathbf{g}_1). 
\end{aligned} \end{equation} and the system \begin{equation}\label{eq_psnfg2} \begin{aligned} \mathbf{h}_2\circ\mathcal{T}_{A,0}-\mathbf{h}_2=\mathcal{P}(\mathrm{S}_N\mathbf{f}_2),\qquad \mathbf{h}_2\circ\mathcal{T}_{B,0}-\mathbf{h}_2=\mathcal{P}(\mathrm{S}_N\mathbf{g}_2).\ \end{aligned} \end{equation} have a solution $\mathbf{h}=(\mathbf{h}_1, \mathbf{h}_2)$, $\mathbf{h}_1\in C^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^d)$ and $\mathbf{h}_2\in C_0^\infty(\mathbb{T}^d\times\mathbb{T}^s,\mathbb{R}^s)$. In what follows, we set $\sigma=\max\{\sigma_1,\sigma_2, d+2\}$, where $\sigma_1$ and $\sigma_2$ are given in Proposition \ref{Pro_LRS0} and Proposition \ref{Pro_split}. \begin{Pro}\label{Pro_iterate} Consider the commuting maps $\mathbf{F}=\mathcal{T}_{A,0}+\mathbf{f}$ and $\mathbf{G}=\mathcal{T}_{B,0}+\mathbf{g}$ satisfying condition \ref{condIP}. Then, for $N>0$ there exists a $C^\infty$ function $\mathbf{h}=(\mathbf{h}_1,\mathbf{h}_2)$ solving the system \eqref{eq_psnfg1}--\eqref{eq_psnfg2}, and it satisfies \begin{equation}\label{hhhhnorm} \|\mathbf{h}\|_{C^r}\leqslant C_{r',r,\sigma}\, N^{r-r'+2\sigma}\| \mathbf{f},~\mathbf{g}\|_{C^{r'}},\qquad \text{for~}r\geqslant r'\geqslant 0. \end{equation} For the map defined by $H=id+\mathbf{h}$, if $\|\mathbf{h}\|_{C^1}\leqslant \frac{1}{4}$, then $H$ has a smooth inverse, and we obtain new $C^\infty$ maps $\widetilde\mathbf{F}=H^{-1} \circ\mathbf{F} \circ H$ and $\widetilde\mathbf{G}=H^{-1}\circ\mathbf{G} \circ H$. 
In addition, the new errors $ \widetilde \mathbf{f}=\widetilde\mathbf{F}-\mathcal{T}_{A,0}$ and $\widetilde\mathbf{g}=\widetilde\mathbf{G}-\mathcal{T}_{B,0}$ satisfy the following estimates, \begin{align} \|\widetilde\mathbf{f}, ~\widetilde\mathbf{g} \|_{C^0} \leqslant & C_{r,\sigma} \left( N^{2\sigma}\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}\right), \quad \textup{for~} r\geqslant 0,\label{wtf0norm}\\ \|\widetilde \mathbf{f},~\widetilde \mathbf{g}\|_{C^r}\leqslant & C_{r,\sigma}\,\Big(1+N^{2\sigma}\| \mathbf{f},~\mathbf{g}\|_{C^r}\Big),\qquad \textup{for~}r> 0.\label{wtf_rnorm} \end{align} \end{Pro} In the following proof, for simplicity we write $\|u\|_{C^r}\ll \|v\|_{C^s}$ if there exists a constant $C>0$ independent of $u, v$ such that $\|u\|_{C^r}\leqslant C\|v\|_{C^s}$. We also write $\|u\|_{C^r}\ll_{r,s} \|v\|_{C^s}$ in order to stress that the constant $C$ depends on $r$ and $s$. \begin{proof} By Proposition \ref{Pro_LRS0}, Proposition \ref{Pro_LRS1111} and the analysis above, there exists a smooth $\mathbf{h}=(\mathbf{h}_1,\mathbf{h}_2)$ such that $\mathbf{h}_1\in C^\infty(\mathbb{T}^{d}\times\mathbb{T}^s,\mathbb{R}^d)$ and $\mathbf{h}_2\in C_0^\infty(\mathbb{T}^{d}\times\mathbb{T}^s,\mathbb{R}^s)$ are solutions to \eqref{eq_psnfg1} and \eqref{eq_psnfg2} respectively. Moreover, combined with Proposition \ref{Pro_split} it follows that \begin{align}\label{bfhcr} \|\mathbf{h}\|_{C^r}\ll_{r,\sigma} \|\mathcal{P}(\mathrm{S}_N\mathbf{f}_1), \mathcal{P}(\mathrm{S}_N\mathbf{f}_2) \|_{C^{r+\sigma}}\ll_{r,\sigma} \|\mathrm{S}_N\mathbf{f}_1, \mathrm{S}_N\mathbf{f}_2 \|_{C^{r+2\sigma}} \ll_{r,r',\sigma} N^{2\sigma+r-r'} \|\mathbf{f}\|_{C^{r'}}, \end{align} where for the last inequality we used Lemma \ref{Lem_trun}. This proves the desired estimate \eqref{hhhhnorm}. 
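For later reference, we single out the two special cases of \eqref{hhhhnorm} that are used most often below, namely $r=r'=0$ and $(r,r')=(1,0)$:
\begin{align*}
\|\mathbf{h}\|_{C^0}\leqslant C_{\sigma}\, N^{2\sigma}\|\mathbf{f},~\mathbf{g}\|_{C^0},\qquad
\|\mathbf{h}\|_{C^1}\leqslant C_{\sigma}\, N^{2\sigma+1}\|\mathbf{f},~\mathbf{g}\|_{C^0}.
\end{align*}
In particular, the second inequality shows that, once $N$ is fixed, the smallness condition $\|\mathbf{h}\|_{C^1}\leqslant \frac{1}{4}$ can be guaranteed by a $C^0$-smallness assumption on the errors $\mathbf{f}, \mathbf{g}$ alone.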
Now, we form $H=id+\mathbf{h}:$ $(x,y)\mapsto (x+\mathbf{h}_1, y+\mathbf{h}_2)$. If $\|\mathbf{h}\|_{C^1}\leqslant \frac{1}{4}$, by Lemma \ref{Apdix_pro1} it is a smooth diffeomorphism. Under this conjugacy the original maps $\mathbf{F}, \mathbf{G}$ become \begin{align*} \widetilde\mathbf{F}\overset{\textup{def}}= H^{-1}\circ \mathbf{F}\circ H,\qquad \widetilde\mathbf{G}\overset{\textup{def}}= H^{-1}\circ \mathbf{G} \circ H. \end{align*} In the sequel we will estimate the new errors $\widetilde\mathbf{f}=\widetilde\mathbf{F}-\mathcal{T}_{A,0}$ and $\widetilde\mathbf{g}=\widetilde\mathbf{G}-\mathcal{T}_{B,0}$. \noindent\textit{\large (I) Estimate of $\widetilde\mathbf{f}, \widetilde\mathbf{g}$ in $C^0$.} We first estimate $\widetilde\mathbf{f}=(\widetilde\mathbf{f}_1, \widetilde\mathbf{f}_2)$. By the relation $H\circ\widetilde\mathbf{F}$ $=$ $\mathbf{F}\circ H$ it follows that $\widetilde\mathbf{f}_1=A \mathbf{h}_1+\mathbf{f}_1\circ H-\mathbf{h}_1\circ\widetilde\mathbf{F}$ and $\widetilde\mathbf{f}_2=\mathbf{h}_2+\mathbf{f}_2\circ H-\mathbf{h}_2\circ\widetilde\mathbf{F}$. 
As $\mathbf{h}=(\mathbf{h}_1,\mathbf{h}_2)$ solves the system \eqref{eq_psnfg1}--\eqref{eq_psnfg2}, we thus obtain \begin{equation}\label{tfexpre} \begin{aligned} \widetilde \mathbf{f}_i=-\mathcal{P}(\mathrm{S}_N\mathbf{f}_i)+\mathbf{f}_i\circ H-(\mathbf{h}_i\circ\widetilde\mathbf{F}-\mathbf{h}_i\circ\mathcal{T}_{A,0}),\qquad i=1,2. \end{aligned} \end{equation} Observe that $\mathbf{f}=(\mathbf{f}_1, \mathbf{f}_2)$ has the following splitting \begin{align*} \mathbf{f}_1=&\mathrm{S}_N\mathbf{f}_1+\mathrm{R}_N\mathbf{f}_1=\mathcal{P}(\mathrm{S}_N\mathbf{f}_1)+\mathcal{E}(\mathrm{S}_N\mathbf{f}_1)+\mathrm{R}_N\mathbf{f}_1\\ \mathbf{f}_2=&\mathrm{S}_N\mathbf{f}_2+\mathrm{R}_N\mathbf{f}_2 =[\mathrm{S}_N \mathbf{f}_2]+\mathcal{P}(\mathrm{S}_N\mathbf{f}_2)+\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)+\mathrm{R}_N\mathbf{f}_2 \end{align*} where $\mathrm{S}_N$ and $\mathrm{R}_N$ are the operators given in Lemma \ref{Lem_trun}. Hence, \eqref{tfexpre} becomes \begin{align}\label{wwleiw} \begin{array}{lll} \widetilde{\mathbf{f}}_1= & &\mathcal{E}(\mathrm{S}_N\mathbf{f}_1)+ \mathrm{R}_N\mathbf{f}_1+(\mathbf{f}_1\circ H-\mathbf{f}_1)-(\mathbf{h}_1\circ\widetilde\mathbf{F}-\mathbf{h}_1\circ\mathcal{T}_{A,0}),\\\\ \widetilde{\mathbf{f}}_2= &[\mathbf{f}_2]-[\mathrm{R}_N\mathbf{f}_2]+&\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)+\mathrm{R}_N\mathbf{f}_2+(\mathbf{f}_2\circ H-\mathbf{f}_2)-(\mathbf{h}_2\circ\widetilde\mathbf{F}-\mathbf{h}_2\circ\mathcal{T}_{A,0}). \end{array} \end{align} When estimating $\widetilde\mathbf{f}_2$, the difficult part is the averaged term $[\mathbf{f}_2](y)=\int_{\mathbb{T}^d}\mathbf{f}_2(x,y)\,dx$, which, without further information, is only of first order. It is here that the intersection property comes into play, forcing this term to be of higher order. 
More precisely, since $\widetilde\mathbf{F}(x,y)=(Ax+\widetilde\mathbf{f}_1, y+\widetilde\mathbf{f}_2)$ satisfies the intersection property, for each point $y$, \[\big(\mathbb{T}^d\times\{y\}\big) ~\cap ~\widetilde\mathbf{F}\big(\mathbb{T}^d\times\{y\}\big) \neq \emptyset. \] This implies that for every $y$, the map $x\mapsto \widetilde\mathbf{f}_2(x,y)$ has a zero, say $x_y$. Consequently, $|[\mathbf{f}_2](y)|=|[\mathbf{f}_2](y)-\widetilde\mathbf{f}_2(x_y,y)|\leqslant \|\widetilde\mathbf{f}_2-[\mathbf{f}_2]\|_{C^0}$, and the triangle inequality yields \begin{equation*} \|\widetilde\mathbf{f}_2\|_{C^0}\leqslant 2\| \widetilde\mathbf{f}_2-[\mathbf{f}_2]\|_{C^0}. \end{equation*} This combined with \eqref{wwleiw} gives \begin{equation}\label{tildebff_2} \begin{split} \|\widetilde\mathbf{f}_2\|_{C^0}\leqslant & 2\| \widetilde\mathbf{f}_2-[\mathbf{f}_2]\|_{C^0}\\ \leqslant & 2\Big( \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}+2\|\mathrm{R}_N\mathbf{f}_2\|_{C^0}+\|\mathbf{f}_2\circ H-\mathbf{f}_2\|_{C^0}+\|\mathbf{h}_2\circ \widetilde\mathbf{F}-\mathbf{h}_2\circ \mathcal{T}_{A,0}\|_{C^0}\Big)\\ \leqslant & 2\Big( \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}+2\|\mathrm{R}_N\mathbf{f}_2\|_{C^0}+\|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}+\|\mathbf{h}\|_{C^1}\|\widetilde \mathbf{f}\|_{C^0}\Big) \end{split} \end{equation} Meanwhile, \eqref{wwleiw} also gives the following preliminary estimate for $\widetilde\mathbf{f}_1$, \begin{align}\label{tildebff1} \|\widetilde \mathbf{f}_1\|_{C^0}\leqslant \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1)\|_{C^0}+\|\mathrm{R}_N\mathbf{f}_1\|_{C^0}+\|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}+\|\mathbf{h}\|_{C^1}\|\widetilde \mathbf{f}\|_{C^0} \end{align} As $\|\widetilde\mathbf{f}\|_{C^0}=\|\widetilde\mathbf{f}_1,~\widetilde\mathbf{f}_2\|_{C^0}$, \eqref{tildebff_2} and \eqref{tildebff1} together imply that \begin{equation*} \|\widetilde\mathbf{f}\|_{C^0} \leqslant 2\Big( \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1),~\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}+2\|\mathrm{R}_N\mathbf{f}\|_{C^0}+\|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}+\|\mathbf{h}\|_{C^1}\|\widetilde \mathbf{f}\|_{C^0}\Big), \end{equation*} 
which yields \begin{equation*} (1-2\|\mathbf{h}\|_{C^1})\cdot\|\widetilde\mathbf{f}\|_{C^0} \leqslant 2\Big( \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1),~\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}+2\|\mathrm{R}_N\mathbf{f}\|_{C^0}+\|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}\Big). \end{equation*} Using $\|\mathbf{h}\|_{C^1}\leqslant \frac{1}{4}$ yields \begin{equation}\label{dawobff} \|\widetilde\mathbf{f}\|_{C^0} \leqslant 4\Big( \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1),~\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}+2\|\mathrm{R}_N\mathbf{f}\|_{C^0}+\|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}\Big). \end{equation} Let us estimate the three terms on the right hand side of \eqref{dawobff}. \begin{itemize} \item To estimate $ \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1),~\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0}$ we apply Proposition \ref{Pro_split} to $\mathrm{S}_N\mathbf{f}_1$, $\mathrm{S}_N\mathbf{f}_2$, and obtain \begin{align}\label{qo_sjfk} \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1)\|_{C^0}\ll_{\sigma} \, \|\mathcal{L}_1( \mathrm{S}_N\mathbf{f}_1, \mathrm{S}_N\mathbf{g}_1 )\|_{C^\sigma}\, ,\qquad \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0} \ll_{\sigma} \|\mathcal{L}_2( \mathrm{S}_N\mathbf{f}_2, \mathrm{S}_N\mathbf{g}_2 )\|_{C^\sigma} \end{align} Note that \begin{align*} \mathcal{L}_1(\mathrm{S}_N \mathbf{f}_1, \mathrm{S}_N\mathbf{g}_1) = & \Delta^B(\mathbf{f}_1-\mathrm{R}_N\mathbf{f}_1)-\Delta^A(\mathbf{g}_1-\mathrm{R}_N\mathbf{g}_1)\\ =&\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)-\Delta^B\mathrm{R}_N\mathbf{f}_1 + \Delta^A\mathrm{R}_N\mathbf{g}_1\\ =&\mathrm{S}_N\Big(\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\Big)+\mathrm{R}_N \Big(\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\Big)-\Delta^B\mathrm{R}_N\mathbf{f}_1 + \Delta^A\mathrm{R}_N\mathbf{g}_1 \end{align*} Then, invoking Lemma \ref{Lem_comm_est} and Lemma \ref{Lem_trun} it follows that, for any $r\geqslant 0$, \begin{align} \|\mathcal{L}_1(\mathrm{S}_N \mathbf{f}_1, \mathrm{S}_N\mathbf{g}_1)\|_{C^\sigma}\ll_{\sigma, r}\, 
& N^\sigma\|\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\|_{C^0}+\frac{\|\mathcal{L}_1(\mathbf{f}_1, \mathbf{g}_1)\|_{C^{\sigma+r}}}{N^r}+\frac{\|\mathbf{f}_1, \mathbf{g}_1\|_{C^{\sigma+r}}}{N^r}\nonumber\\ \ll_{\sigma, r}\, & N^\sigma\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}\nonumber \end{align} Combined with \eqref{qo_sjfk}, we get \begin{align}\label{caleNf1} \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_1)\|_{C^0}\ll_{\sigma,r} \, N^\sigma\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}. \end{align} Similarly, we can also show that \begin{align}\label{caleNf2} \|\mathcal{E}(\mathrm{S}_N\mathbf{f}_2)\|_{C^0} \ll_{\sigma, r}\, N^\sigma\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}. \end{align} \item Using Lemma \ref{Lem_trun}, \begin{equation}\label{es_RNf1f2} \|\mathrm{R}_N\mathbf{f}\|_{C^0}\ll_{r} \|\mathbf{f}\|_{C^r}N^{-r}. \end{equation} \item Applying \eqref{bfhcr} with $r= r'=0$, we obtain \begin{equation}\label{es_f1h0} \|\mathbf{f}\|_{C^1}\|\mathbf{h}\|_{C^0}\ll_{\sigma}\, N^{2\sigma}\|\mathbf{f}\|_{C^1}\|\mathbf{f}\|_{C^0}. \end{equation} \end{itemize} Therefore, combining \eqref{caleNf1}--\eqref{es_f1h0} we have the following estimate for $\|\widetilde\mathbf{f}\|_{C^0}$, \begin{equation} \|\widetilde\mathbf{f}\|_{C^0} \ll_{\sigma,r} N^{2\sigma}\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}. 
\end{equation} On the other hand, to estimate $\widetilde\mathbf{g}$ we repeat similar arguments as above and obtain \begin{equation} \|\widetilde\mathbf{g}\|_{C^0} \ll_{\sigma,r} N^{2\sigma}\|\mathbf{f}, \mathbf{g}\|_{C^1}\|\mathbf{f}, \mathbf{g}\|_{C^0}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r+1}}^2}{N^r}+\frac{\|\mathbf{f}, \mathbf{g}\|_{C^{\sigma+r}}}{N^r}. \end{equation} This proves the desired estimate \eqref{wtf0norm}. \noindent\textit{\large (II) Estimate of $\widetilde\mathbf{f}, \widetilde\mathbf{g}$ in $C^r$.} Note that $\widetilde\mathbf{f}$ can be expressed as follows \begin{align*} \widetilde\mathbf{f}=H^{-1}\circ \mathbf{F}\circ H-\mathcal{T}_{A,0} =& (H^{-1}-id)\circ\mathbf{F}\circ H +(A\mathbf{h}_1, \mathbf{h}_2)+\mathbf{f}\circ H, \end{align*} which leads to \begin{equation*} \|\widetilde\mathbf{f}\|_{C^r}\ll \|(H^{-1}-id)\circ\mathbf{F}\circ H\|_{C^r} +\|\mathbf{h}\|_{C^r}+\|\mathbf{f}\circ H\|_{C^r}. \end{equation*} Note that $\widetilde\mathbf{f}(x,y)$ is a $\mathbb{Z}^{d+s}$-periodic function. By Lemma \ref{Apdix_linecont} it follows that \begin{align*} \|(H^{-1}-id)\circ\mathbf{F}\circ H\|_{C^r}\ll_{r} \, 1+\|(H^{-1}-id)\|_{C^r}+\|\mathbf{f}\|_{C^r}+\|\mathbf{h}\|_{C^r} \end{align*} and \begin{align*} \|\mathbf{f}\circ H\|_{C^r}\ll_r\, 1+\|\mathbf{f}\|_{C^r}+\|\mathbf{h}\|_{C^r} \end{align*} By Lemma \ref{Apdix_pro1}, $\|H^{-1}-id\|_{C^r}\ll_r \|\mathbf{h}\|_{C^r}$. Thus, combined with \eqref{bfhcr} we obtain \begin{align*} \|\widetilde\mathbf{f}\|_{C^r}\ll_r\, 1 +\|\mathbf{h}\|_{C^r}+\|\mathbf{f}\|_{C^r}\ll_r\, 1+N^{2\sigma} \|\mathbf{f}\|_{C^r}. \end{align*} As for $\|\widetilde\mathbf{g}\|_{C^r}$, it can be estimated in the same way as $\|\widetilde\mathbf{f}\|_{C^r}$. This finishes the proof. \end{proof} \subsection{Proof of Theorem \ref{Element_Thm1}}\label{subsection_KAMscheme} Based on Proposition \ref{Pro_iterate}, we now prove Theorem \ref{Element_Thm1}. 
\begin{proof}[Proof of Theorem \ref{Element_Thm1}] To begin the iterative process, we set up \[\mathbf{F}^{(0)}=\mathcal{T}_{A,0}+\mathbf{f}^{(0)},\quad \mathbf{G}^{(0)}=\mathcal{T}_{B,0}+\mathbf{g}^{(0)}; \quad H^{(0)}=id,\] where $\mathbf{f}^{(0)}=\mathbf{f}$ and $\mathbf{g}^{(0)}=\mathbf{g}$. Fix a sufficiently large integer $N_0>0$, and define $N_i$ inductively by \begin{align}\label{Nisetting} N_{i+1}=N_{i}^{\frac{3}{2}} \end{align} for all $i=0,1,2,\cdots$. We will construct $ \mathbf{F}^{(i)},\mathbf{G}^{(i)}, H^{(i)}$ inductively for every $i\geqslant 0$: suppose that we have already obtained $\mathbf{F}^{(i)},\mathbf{G}^{(i)}, H^{(i)}$; then at the $(i+1)$-th step we choose the smoothing operator $\mathrm{S}_{N_i}$ with $N_{i}>0$ given in \eqref{Nisetting} and apply Proposition \ref{Pro_iterate} to obtain $\mathbf{h}^{(i)}$. This produces a new smooth conjugacy \[H^{(i+1)}=id+\mathbf{h}^{(i)}.\] In what follows, we introduce the notation \begin{align*} \varepsilon_{i,r}\overset{\textup{def}}=\|\mathbf{f}^{(i)},~\mathbf{g}^{(i)}\|_{C^r}\,, \qquad \delta_{i,r}\overset{\textup{def}}=\|\mathbf{h}^{(i)}\|_{C^r}. \end{align*} By Proposition \ref{Pro_iterate}, we have \begin{align} \delta_{i,r}\ll_{r,r',\sigma} \, N_{i}^{r-r'+2\sigma}\,\varepsilon_{i,r'},\qquad \text{for~}r\geqslant r'\geqslant 0.\label{iterate_h_rnorm} \end{align} To proceed, we have to check that each $H^{(i+1)}$ is indeed invertible. This is true if the following condition is satisfied: \begin{equation}\label{smalonc1}\tag{D} \delta_{i,1}\leqslant\frac{1}{4}. \end{equation} Then we can define \[\mathbf{F}^{(i+1)}=\left(H^{(i+1)}\right)^{-1}\circ \mathbf{F}^{(i)}\circ H^{(i+1)}=\mathcal{T}_{A,0}+\mathbf{f}^{(i+1)},\] \[\mathbf{G}^{(i+1)}=\left(H^{(i+1)}\right)^{-1}\circ \mathbf{G}^{(i)}\circ H^{(i+1)}=\mathcal{T}_{B,0}+\mathbf{g}^{(i+1)},\] where $\mathbf{f}^{(i+1)}$ and $\mathbf{g}^{(i+1)}$ are the new errors. 
According to Proposition \ref{Pro_iterate}, \begin{align} \varepsilon_{i+1,0} \ll_{r,\sigma} &\, N_{i}^{2\sigma}\,\varepsilon_{i,1}\cdot\varepsilon_{i,0}+\frac{\varepsilon^2_{i,\sigma+r+1}}{ N_{i}^r}+\frac{\varepsilon_{i,\sigma+r}}{ N_{i}^r},\qquad \text{for~} r\geqslant 0.\label{iterate_wtf0norm}\\ \varepsilon_{i+1,r}\ll_{r,\sigma} &\, 1+ N_{i}^{2\sigma}\,\varepsilon_{i,r},\qquad \text{for~}r> 0.\label{iterate_wtf_rnorm} \end{align} To ensure the above condition \eqref{smalonc1} and the convergence of the KAM scheme, it suffices that the original error $\varepsilon_{0,0}=\|\mathbf{f}^{(0)},~\mathbf{g}^{(0)}\|_{C^0}$ is sufficiently small and that $\varepsilon_{0,\mu_0}=\|\mathbf{f}^{(0)},~\mathbf{g}^{(0)}\|_{C^{\mu_0}}$ is well controlled for some integer $\mu_0>0$. This is guaranteed by the following lemma. \begin{Lem}\label{Lem_induc_ineq} Let us set $\mu_0=20(\sigma+1)$ and $\kappa=6(\sigma+1)$. We can choose $N_0>0$ suitably large, such that if $\varepsilon_{0,0}=\|\mathbf{f}^{(0)},~\mathbf{g}^{(0)}\|_{C^0}\leqslant N_0^{-\kappa}$ and $\varepsilon_{0,\mu_0}=\|\mathbf{f}^{(0)},~\mathbf{g}^{(0)}\|_{C^{\mu_0}}\leqslant N_0^{\frac{3}{4}\kappa}$, then condition \eqref{smalonc1} holds for all $i\geqslant 0$. In addition, we have \begin{align}\label{3fchi} \varepsilon_{i,0}\leqslant N_i^{-\kappa},\qquad \varepsilon_{i,\mu_0}\leqslant N_i^{\frac{3}{4}\kappa},\qquad \delta_{i,1}\leqslant N_i^{-\frac{1}{2}\kappa}. \end{align} \end{Lem} \begin{proof} We prove it by induction. For $i=0$, by \eqref{iterate_h_rnorm} we have $\delta_{0,1}\ll N_0^{2\sigma+1}\varepsilon_{0,0}\leqslant N_0^{2\sigma+1-\kappa}$. As $N_0>0$ is large enough, it follows that $\delta_{0,1}\leqslant N_0^{-\frac{\kappa}{2}}$. Thus, by assumption, \eqref{3fchi} holds for $i=0$, and condition \eqref{smalonc1} for $i=0$ is satisfied provided that $N_0$ is large. Suppose now that \eqref{3fchi} holds for all steps $\leqslant i$; we verify these estimates at the $(i+1)$-th step. 
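For orientation (this is just arithmetic with the chosen constants), it may help to record how the thresholds in \eqref{3fchi} at step $i+1$ compare with powers of $N_i$; with $\mu_0=20(\sigma+1)$, $\kappa=6(\sigma+1)$ and $N_{i+1}=N_i^{\frac{3}{2}}$, one has
\begin{align*}
N_{i+1}^{-\kappa}=N_i^{-\frac{3}{2}\kappa}=N_i^{-9(\sigma+1)},\qquad
N_{i+1}^{\frac{3}{4}\kappa}=N_i^{\frac{9}{8}\kappa},\qquad
N_{i+1}^{-\frac{1}{2}\kappa}=N_i^{-\frac{3}{4}\kappa}=N_i^{-\frac{9}{2}(\sigma+1)}.
\end{align*}
These are the quantities against which the estimates below are compared.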
Note that by the interpolation inequalities (Lemma \ref{cor_intpest}), we get \begin{equation}\label{intp_inequ} \varepsilon_{i,1}\leqslant C_{\mu_0}\,\varepsilon_{i,0}^{1-\frac{1}{\mu_0}} \,\varepsilon_{i,\mu_0}^{\frac{1}{\mu_0}}\leqslant C_{\mu_0}\,N_i^{-\kappa(1-\frac{7}{4\mu_0})}. \end{equation} Then, using inequality \eqref{iterate_wtf0norm} with $r=\mu_0-\sigma-1$ we obtain \begin{align} \varepsilon_{i+1,0}\ll_{\sigma} N_{i}^{2\sigma}\varepsilon_{i,1}\cdot\varepsilon_{i,0}+\frac{\varepsilon^2_{i,\mu_0}}{N_{i}^{\mu_0-\sigma-1}}+\frac{\varepsilon_{i,\mu_0-1}}{N_{i}^{\mu_0-\sigma-1}} & \ll_{\sigma} \, N_{i}^{2\sigma}N_i^{-\kappa(2-\frac{7}{4\mu_0})} +\frac{N_i^{\frac{3}{2}\kappa}}{N_{i}^{\mu_0-\sigma-1}} \nonumber\\ & \ll_{\sigma} N_{i}^{2\sigma-\kappa(2-\frac{7}{4\mu_0})} + N_{i}^{-10(\sigma+1)}\nonumber\\ &< \, N_{i+1}^{-\kappa}\,.\label{cE_i1} \end{align} Next, applying \eqref{iterate_wtf_rnorm} with $r=\mu_0$, it follows that \begin{align}\label{rho} \varepsilon_{i+1,\mu_0}\ll_{\mu_0,\sigma} 1+N_{i}^{2\sigma}\varepsilon_{i,\mu_0} \ll_{\mu_0,\sigma} N_{i}^{2\sigma} N_i^{\frac{3}{4}\kappa} < N_{i+1}^{\frac{3}{4}\kappa}. \end{align} Finally, applying inequality \eqref{iterate_h_rnorm} with $r=1$ and $r'=0$, we have \begin{align}\label{cu_i11} \delta_{i+1,1} &\ll_{\sigma} N_{i+1}^{2\sigma+1}\varepsilon_{i+1,0} \ll_{\sigma} N_{i+1}^{2\sigma+1}N_{i+1}^{-\kappa}< N_{i+1}^{-\frac{1}{2}\kappa}. \end{align} This also implies that condition \eqref{smalonc1} holds at the $(i+1)$-th step, since $N_{i+1}=N_0^{\left(\frac{3}{2}\right)^{i+1}}$ is large. This proves Lemma \ref{Lem_induc_ineq}. \end{proof} Now, let us proceed with the proof of Theorem \ref{Element_Thm1}. 
By Lemma \ref{Lem_induc_ineq}, as long as $\varepsilon_{0,0}$ is suitably small and $\varepsilon_{0,\mu_0}\leqslant 1$, the sequences $\|\mathbf{f}^{(i)},~\mathbf{g}^{(i)}\|_{C^0}$ and $\|\mathbf{h}^{(i)}\|_{C^1}$ satisfy \[\|\mathbf{f}^{(i)},~\mathbf{g}^{(i)}\|_{C^0}\leqslant N_0^{-\left(\frac{3}{2}\right)^i\kappa},\quad \|\mathbf{h}^{(i)}\|_{C^1}\leqslant N_0^{-\frac{1}{2}\left(\frac{3}{2}\right)^i\kappa}, \] and hence converge rapidly to zero. This rapid convergence ensures that as $l\to\infty$, the composition \[\mathcal{H}_l =H^{(1)}\circ\cdots\circ H^{(l)}\] converges in the $C^1$ topology to a $C^1$ diffeomorphism $\mathcal{H}_\infty$ for which the following equations hold \[ \mathbf{F}\circ \mathcal{H}_\infty=\mathcal{H}_\infty \circ \mathcal{T}_{A,0},\qquad \mathbf{G}\circ \mathcal{H}_\infty=\mathcal{H}_\infty\circ \mathcal{T}_{B,0}.\] It remains to show that the above $C^1$ limit solution $\mathcal{H}_\infty$ is also of class $C^p$ for every $p>1$. In fact, as shown in \cite{Zeh_generalized1}, this can be achieved by making full use of the interpolation inequalities. More precisely, observe that for any $m>0$, applying \eqref{iterate_wtf_rnorm} with $r=m$ we get \[ \varepsilon_{i,m}\leqslant C_{m,\sigma} \big(1+N_{i-1}^{2\sigma}\,\varepsilon_{i-1,m}\big) \] for some constant $C_{m,\sigma}>1$. This also gives $1+\varepsilon_{i,m}\leqslant C_{m,\sigma} \, N_{i-1}^{2\sigma}\Big(1+\varepsilon_{i-1,m}\Big)$, from which we derive inductively that \begin{align*} \varepsilon_{i,m}\leqslant \left(1+\varepsilon_{0,m} \right)\prod_{j=0}^{i-1}\left(C_{m,\sigma} \,N_{j}^{2\sigma}\right) \leqslant \left(1+\varepsilon_{0,m} \right) C^i_{m,\sigma}\left(\prod_{j=0}^{i-1} N_j\right)^{2\sigma} &\leqslant M_m\cdot C^i_{m,\sigma}\cdot N_0^{\left(\frac{3}{2}\right)^i 4\sigma}\\ &= M_m\cdot C^i_{m,\sigma}\cdot N_i^{4\sigma} \end{align*} where we denote $M_m= (1+\varepsilon_{0,m})>1$. Now, for any given $p>1$, we choose $m=4p$. 
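The bound $\prod_{j=0}^{i-1}N_j\leqslant N_i^{2}$ implicitly used in the display above follows from summing the geometric series in the exponents: since $N_j=N_0^{\left(\frac{3}{2}\right)^j}$,
\begin{align*}
\prod_{j=0}^{i-1} N_j=N_0^{\sum_{j=0}^{i-1}\left(\frac{3}{2}\right)^{j}}=N_0^{\,2\left(\left(\frac{3}{2}\right)^{i}-1\right)}\leqslant N_0^{\,2\left(\frac{3}{2}\right)^{i}}=N_i^{2},
\qquad\text{and hence}\qquad \Big(\prod_{j=0}^{i-1} N_j\Big)^{2\sigma}\leqslant N_i^{4\sigma}.
\end{align*}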
Using the interpolation inequalities (Lemma \ref{cor_intpest}) and \eqref{3fchi} it follows that \begin{align*} \varepsilon_{i,p}\leqslant C_{p}\, \varepsilon_{i,m}^{\frac{1}{4}}\cdot \varepsilon_{i,0}^{\frac{3}{4}}\leqslant &C_p\left(M_m\, C^i_{m,\sigma}\right)^{\frac{1}{4}}\, N_i^{\sigma}\cdot N_{i}^{-\frac{3}{4}\kappa} \leqslant C_p\, M_m\, C_{m,\sigma}^{i}\cdot N_i^{-\frac{1}{2}\kappa}\, . \end{align*} Then, applying \eqref{iterate_h_rnorm} with $r=r'=p$ yields \begin{align*} \delta_{i,p}\leqslant C_{p,\sigma}\, N_{i}^{2\sigma}\varepsilon_{i,p} \leqslant & C_{p,\sigma} C_p\,M_m\, C^{i}_{m,\sigma}\cdot N_i^{2\sigma-\frac{\kappa}{2}} \leqslant L \cdot b^i\cdot N_i^{-1-\sigma} \end{align*} where the constants $L=C_{p,\sigma} C_p\,M_m$ and $b=C_{m,\sigma}>1$, with $m=4p$. Observe that although $b^i$ grows exponentially, the quantity $N_i^{-1-\sigma}$ decays super-exponentially. Hence, $\delta_{i,p}=\|\mathbf{h}^{(i)}\|_{C^p}\ll N_i^{-\sigma}$ still converges rapidly to zero. This implies the convergence of the sequence $\mathcal{H}_l$ in the $C^p$ topology and the limit is exactly $\mathcal{H}_\infty$. Also, note that the above argument is true for any given integer $p\geqslant 1$, therefore, $\mathcal{H}_\infty$ is of class $C^\infty$. This finally finishes the proof of Theorem \ref{Element_Thm1}. \end{proof} \section{Proofs of Theorems \ref{MainThm_0}, \ref{MainThm_1} and \ref{MainThm_2}}\label{Section_proofMainResult} \subsection{Proof of Theorem \ref{MainThm_0}}\label{subsecproofA} Based on Proposition \ref{Pro_conj_ave} and Theorem \ref{Element_Thm1} we proceed to prove Theorem \ref{MainThm_0}. We will also need the following lemma. \begin{Lem}\label{lem_periodid} Given $\theta\in \mathbb{Q}^s$ and an integer $q>0$ satisfying $q\theta\in \mathbb{Z}^s$. 
Suppose that $\Phi:\mathbb{T}^s\to\mathbb{T}^s$ is a $C^\infty$ diffeomorphism which satisfies: \begin{enumerate} \item $\Phi$ is $C^1$-sufficiently close to the toral translation $R_\theta: y\mapsto y+\theta$ (mod $\mathbb{Z}^s$). More precisely, we can write $\Phi=R_\theta+\omega$ with $\omega\in C^\infty(\mathbb{T}^s,\mathbb{R}^s)$ and $\|\omega\|_{C^1}\ll 1$. \item the $q$-fold composition $\Phi^q$ of $\Phi$ satisfies $\Phi^q=id_{\mathbb{T}^s}$. \end{enumerate} Then, $\Phi$ can be $C^\infty$-conjugated to $R_\theta$ via a conjugacy $V=id_{\mathbb{T}^s}+v$ with $v$ of the form \begin{equation*} v(y)=\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\omega\circ\Phi^{i}(y), \end{equation*} and hence $\|v\|_{C^1}\leqslant C\|\omega\|_{C^1}$. \end{Lem} \begin{Rem} We point out that the conjugacy between $\Phi$ and $R_{\theta}$ is not unique; here we just construct one having an explicit formula. The non-uniqueness is due to the non-ergodicity of $\Phi$. \end{Rem} \begin{proof} By assumption the diffeomorphism $\Phi$ can be written as $\Phi=R_\theta+\omega$, where $\omega\in C^\infty(\mathbb{T}^s,\mathbb{R}^s)$ with the $C^1$ norm $\|\omega\|_{C^1}$ sufficiently small. Note that $\Phi^i(y)=\Phi\circ\Phi^{i-1}(y)=\Phi^{i-1}(y)+\theta+\omega\circ \Phi^{i-1}(y)$. By iterating this formula for $i=1,\cdots,q$ and summing up, one gets \begin{equation*} \Phi^q(y)=y+q\theta+\sum_{i=0}^{q-1}\omega\circ\Phi^{i}(y)=R_{q\theta}+\sum_{i=0}^{q-1}\omega\circ\Phi^{i}. \end{equation*} Due to $q\theta\in \mathbb{Z}^s$ and $\Phi^q=id_{\mathbb{T}^s}$, we have $\sum_{i=0}^{q-1}\omega\circ\Phi^{i}=0$ mod $\mathbb{Z}^s$. As $\omega$ is sufficiently small, this implies that \begin{align}\label{qomegzero} \sum_{i=0}^{q-1}\omega\circ\Phi^{i}=0. \end{align} Now, we proceed to construct a near-identity conjugacy $V\in \textup{Diff}^\infty(\mathbb{T}^s)$ such that $V\circ \Phi=R_\theta\circ V$. 
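As a consistency check, one can verify directly that the function $v$ given in the statement of the lemma satisfies $v-v\circ\Phi=\omega$, which is precisely the conjugacy relation needed below. Indeed, $v\circ\Phi=\frac{1}{q}\sum_{j=1}^{q-1}(q-j)\,\omega\circ\Phi^{j}$, so
\begin{align*}
v-v\circ\Phi =\frac{1}{q}\Big((q-1)\,\omega-\sum_{j=1}^{q-1}\omega\circ\Phi^{j}\Big)
=\frac{1}{q}\big((q-1)\,\omega+\omega\big)=\omega,
\end{align*}
where the first equality collects the coefficients $(q-j-1)-(q-j)=-1$ for $1\leqslant j\leqslant q-2$ together with the two boundary terms, and the second uses \eqref{qomegzero} in the form $\sum_{j=1}^{q-1}\omega\circ\Phi^{j}=-\omega$.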
We write $V(y)=y+v(y)$ with $v\in C^\infty(\mathbb{T}^s,\mathbb{R}^s)$; then the conjugacy equation $V\circ \Phi=R_\theta\circ V$ reduces to the equation \begin{equation}\label{coeqvomega} v(y)-v\circ\Phi(y)=\omega(y). \end{equation} Using \eqref{qomegzero}, it is easy to check that equation \eqref{coeqvomega} has a $C^\infty$ solution given by \begin{equation} v(y):=\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\omega\circ\Phi^{i}(y). \end{equation} Indeed, shifting the summation index in $v\circ\Phi$ and telescoping gives \begin{equation*} v-v\circ\Phi=\frac{1}{q}\Big(q\,\omega-\sum_{i=0}^{q-1}\omega\circ\Phi^{i}\Big)=\omega, \end{equation*} where the last equality follows from \eqref{qomegzero}. The condition $\|\omega\|_{C^1}\ll 1$ ensures that $v$ is suitably small in the $C^1$ topology, and hence the map $V(y)=y+v(y)$ has a smooth inverse. Therefore, $\Phi$ is $C^\infty$-conjugate to $R_\theta$ via the near-identity conjugacy $V=id_{\mathbb{T}^s}+v$. \end{proof} \begin{proof}[Proof of Theorem \ref{MainThm_0}] Thanks to Proposition \ref{Pro_conj_ave}, the original perturbation $\alpha=\langle \mathcal{T}_{A_1,\tau_1}, \mathcal{T}_{A_2,\tau_2} \rangle$ is $C^\infty$-conjugate to the action $\langle \mathcal{T}_{A_1,[\tau_1]}, \mathcal{T}_{A_2,[\tau_2]} \rangle$. Hence it suffices to prove Theorem \ref{MainThm_0} in the case where $\tau_1(x)$ and $\tau_2(x)$ are constants, i.e., $\tau_1=[\tau_1]$ and $\tau_2=[\tau_2]$; for brevity we will denote \begin{equation*} \theta_1:=[\tau_1],\qquad \theta_2:=[\tau_2], \end{equation*} and from now on our unperturbed action is assumed to be $\alpha=\langle\mathcal{T}_{A_1,\theta_1}, \mathcal{T}_{A_2,\theta_2}\rangle$, where $\theta_1, \theta_2\in \mathbb{Q}^s$. In other words, $\alpha(\mathbf{n})=\mathcal{T}_{A_1^{n_1}A_2^{n_2}, n_1\theta_1+n_2\theta_2}$ for any $\mathbf{n}=(n_1,n_2)\in \mathbb{Z}^2$. \textsl{In the case of integer $(\theta_1, \theta_2)$.} If $(\theta_1, \theta_2)\in \mathbb{Z}^s\times\mathbb{Z}^s$, the unperturbed action $\alpha$ becomes $\alpha=\langle\mathcal{T}_{A_1,0},\mathcal{T}_{A_2,0}\rangle$, and our result follows immediately from Theorem \ref{Element_Thm1}.
\textsl{In the case of non-integer $(\theta_1, \theta_2)$.} If $(\theta_1,\theta_2)\in \mathbb{Q}^s\times\mathbb{Q}^s$ with $(\theta_1,\theta_2)\notin \mathbb{Z}^s\times\mathbb{Z}^s$, we split the proof into two parts. \noindent\textbf{\large Part 1.} Recall that $M_0\geqslant 1$ denotes the minimal positive integer $\lambda$ such that $\lambda\,\theta_1\in \mathbb{Z}^s$, $\lambda\, \theta_2\in \mathbb{Z}^s.$ In the sequel we denote \begin{align*} \mathbb{Z}^2_{M_0}:=\{(i,j)\in \mathbb{Z}^2: |i|\leqslant M_0,~ |j|\leqslant M_0 \}, \end{align*} and consider the two sets \begin{align*} \Sigma:=\{ (i,j)\in \mathbb{Z}^2_{M_0}: i\theta_1+j\theta_2\in \mathbb{Z}^s\},\qquad\Lambda:=\{ (i,j)\in \mathbb{Z}^2_{M_0}: i\theta_1+j\theta_2\notin \mathbb{Z}^s\}. \end{align*} Note that $\Sigma\neq \emptyset$ (for instance $(M_0,0)\in\Sigma$) and $\Lambda\neq\emptyset$ since $(\theta_1,\theta_2)$ is non-integer. Moreover, since $\Lambda$ is a finite set, there exists $\delta^*=\delta^*(\theta_1,\theta_2)>0$ such that \begin{equation}\label{mindistij} \textup{dist}(i\theta_1+j\theta_2, \mathbb{Z}^s)\geqslant \delta^*,\qquad \forall (i,j)\in \Lambda. \end{equation} Now, we consider the perturbed action $\widetilde\alpha$. For $r\geqslant 0$ we set \begin{equation*} d_{{C^r}}(\widetilde\alpha, \alpha; M_0):=\max_{\mathbf{k}\in \mathbb{Z}^2_{M_0}} \textup{dist}_{C^r}(\widetilde\alpha(\mathbf{k}), \alpha(\mathbf{k})), \end{equation*} the maximum of the $C^r$ distances $\textup{dist}_{C^r}(\widetilde\alpha(\mathbf{k}), \alpha(\mathbf{k}))$ over all $\mathbf{k}\in \mathbb{Z}^2_{M_0}$. Then, we have the following properties: \textbf{(I)} \textit{If $d_{C^0}(\widetilde\alpha, \alpha; M_0)<\frac{\delta^*}{2}$ and if for some nonzero $\mathbf{k}\in\mathbb{Z}^2_{M_0}$ the diffeomorphism $\widetilde\alpha(\mathbf{k})$ satisfies condition \ref{condIP}, then necessarily $\mathbf{k}\in \Sigma$}. Let us explain this. Suppose, to the contrary, that $\mathbf{k}\in \Lambda$.
Consider the $d$-dimensional torus $\Gamma=\mathbb{T}^d\times\{y=0\}$. By \eqref{mindistij}, the Hausdorff distance between $\Gamma$ and its image under the map $\alpha(\mathbf{k})=\mathcal{T}_{A_1^{k_1}A_2^{k_2}, k_1\theta_1+k_2\theta_2}$ is greater than $\delta^*$. Combined with $d_{C^0}(\widetilde\alpha, \alpha; M_0)<\frac{\delta^*}{2}$, the Hausdorff distance between $\Gamma$ and its image under $\widetilde\alpha(\mathbf{k})$ is greater than $\frac{\delta^*}{2}$, so they cannot intersect. This contradicts the intersection property of $\widetilde\alpha(\mathbf{k})$. \textbf{(II)} \textit{For any $(i,j)\in\Sigma$, one has $\alpha((i,j))=\mathcal{T}_{A_1,\theta_1}^i\circ\mathcal{T}_{A_2,\theta_2}^j=\mathcal{T}_{A_1^iA_2^j, ~0}=A_1^iA_2^j\times id_{\mathbb{T}^s}$}. \textbf{(III)} \textit{For any two linearly independent elements $(i, j)\in \Sigma$ and $(i',j')\in\Sigma$, we can apply Theorem \ref{Element_Thm1} to the ergodic generators $A=A_1^iA_2^j$ and $B=A_1^{i'}A_2^{j'}$ to obtain the corresponding two positive numbers $\varepsilon_0(A, B)$ and $\mu_0(A, B)$ for which the local rigidity holds. Let $\varepsilon^{*}>0$ be the minimum of all such $\varepsilon_0(A, B)$ where $(i, j)\in \Sigma$ and $(i',j')\in\Sigma$ are linearly independent, and let the integer $\mu>0$ be the maximum of all such $\mu_0(A, B)$.} Now we choose a sufficiently small $\varepsilon_1$ satisfying $\varepsilon_1<\min\left\{\frac{\delta^*}{2}, \varepsilon^*\right\}$. Thus, when \begin{equation*} d_{C^\mu}(\widetilde\alpha, \alpha; M_0)<\varepsilon_1, \end{equation*} and $\mathbb{Z}^2_{M_0}$ contains two linearly independent elements $\mathbf{m}, \mathbf{n}$ such that $\widetilde{\alpha}(\mathbf{m})$ and $\widetilde{\alpha}(\mathbf{n})$ satisfy condition \ref{condIP}, we must have $\mathbf{m}, \mathbf{n}\in \Sigma$, due to property \textbf{(I)} above.
Write $\mathbf{m}=(m_1,m_2)$, $\mathbf{n}=(n_1,n_2)$ and set \[A:=A_1^{m_1}A_2^{m_2},\qquad B:=A_1^{n_1}A_2^{n_2}.\] By property \textbf{(II)} above, $\alpha(\mathbf{m})=\mathcal{T}_{A,0}$ and $\alpha(\mathbf{n})=\mathcal{T}_{B,0}$. Note that $A$ and $B$ are still ergodic generators. Since $\|\widetilde{\alpha}(\mathbf{m})-\mathcal{T}_{A,0}\|_{C^\mu}<\varepsilon_1$, $\|\widetilde{\alpha}(\mathbf{n})-\mathcal{T}_{B,0}\|_{C^\mu}<\varepsilon_1$, and $\widetilde{\alpha}(\mathbf{m})$ and $\widetilde{\alpha}(\mathbf{n})$ satisfy condition \ref{condIP}, the above analysis and Theorem \ref{Element_Thm1} imply that there is a $C^\infty$ near-identity conjugacy $H$ such that \begin{equation}\label{qdqd} H\circ \widetilde{\alpha}(\mathbf{m})\circ H^{-1}= \mathcal{T}_{A,0}=\alpha(\mathbf{m}),\qquad H\circ \widetilde{\alpha}(\mathbf{n})\circ H^{-1}= \mathcal{T}_{B,0}=\alpha(\mathbf{n}). \end{equation} This also implies that, restricted to the subgroup $\mathbf{m} \mathbb{Z}+\mathbf{n}\mathbb{Z}\subset\mathbb{Z}^2$, $\widetilde{\alpha}$ is conjugate to $\alpha$ via $H$. \noindent\textbf{\large Part 2.} We still need to show that for any $\mathbf{k}\in \mathbb{Z}^2$, $\widetilde{\alpha}(\mathbf{k})$ can be conjugated to $\alpha(\mathbf{k})$. In fact, it suffices to verify this for the generators $\mathbf{e}_1=(1,0)$ and $\mathbf{e}_2=(0,1)$. As we will see below, in general $H\circ \widetilde{\alpha}(\mathbf{e}_i)\circ H^{-1}\neq \alpha(\mathbf{e}_i)$. Hence, our plan is to construct another conjugacy which conjugates $H\circ \widetilde{\alpha}(\mathbf{e}_i)\circ H^{-1}$ to $\alpha(\mathbf{e}_i)$. Let us write \[H\circ\widetilde{\alpha}(\mathbf{e}_1)\circ H^{-1}=\alpha(\mathbf{e}_1)+P,\] where $P(x,y)=(P_1(x,y), P_2(x,y))$ with $P_1\in C^\infty(\mathbb{T}^{d}\times\mathbb{T}^s,\mathbb{R}^d)$ and $P_2\in C^\infty(\mathbb{T}^{d}\times\mathbb{T}^s,\mathbb{R}^s)$.
By the commutation relation $\widetilde{\alpha}(\mathbf{e}_1)\circ\widetilde{\alpha}(\mathbf{m})=\widetilde{\alpha}(\mathbf{m})\circ\widetilde{\alpha}(\mathbf{e}_1)$ and \eqref{qdqd}, it follows that \begin{equation*} P_1(Ax,y)=AP_1(x,y),\qquad P_2(Ax,y)=P_2(x,y). \end{equation*} As $A$ is ergodic, we obtain $P_1=0$, and $P_2(x,y)=f(y)$ is a function independent of $x$. Hence, \begin{equation}\label{dsfaw1} H\circ \widetilde{\alpha}(\mathbf{e}_1)\circ H^{-1}(x,y)=(A_1x, F(y)) \end{equation} where $F(y)=R_{\theta_1}+f(y)\in \textup{Diff}^\infty(\mathbb{T}^s)$. Similarly, we can prove that \begin{equation}\label{dsfaw2} H\circ\widetilde{\alpha}(\mathbf{e}_2)\circ H^{-1}(x,y)=(A_2x, G(y)) \end{equation} where $G(y)=R_{\theta_2}+g(y)\in \textup{Diff}^\infty(\mathbb{T}^s)$ with $g\in C^\infty(\mathbb{T}^s,\mathbb{R}^s)$. Since $\widetilde{\alpha}(\mathbf{m})=\widetilde{\alpha}(m_1\mathbf{e}_1+m_2\mathbf{e}_2)=\big(\widetilde{\alpha}(\mathbf{e}_1)\big)^{m_1}\circ\big(\widetilde{\alpha}(\mathbf{e}_2)\big)^{m_2}$, it follows from \eqref{dsfaw1}--\eqref{dsfaw2} that \begin{align*} H\circ\widetilde{\alpha}(\mathbf{m})\circ H^{-1}=(A_1^{m_1}A_2^{m_2}x, F^{m_1}\circ G^{m_2}(y))=(Ax, F^{m_1}\circ G^{m_2}(y)). \end{align*} Similarly, we also have \begin{align*} H\circ\widetilde{\alpha}(\mathbf{n})\circ H^{-1}=(A_1^{n_1}A_2^{n_2}x, F^{n_1}\circ G^{n_2}(y))=(Bx,F^{n_1}\circ G^{n_2}(y)). \end{align*} Combined with \eqref{qdqd}, we obtain that $F^{m_1}\circ G^{m_2}=id_{\mathbb{T}^s}$ and $F^{n_1}\circ G^{n_2}=id_{\mathbb{T}^s}$. Since $F$ and $G$ commute (the maps $\widetilde{\alpha}(\mathbf{e}_1)$ and $\widetilde{\alpha}(\mathbf{e}_2)$ do, and $A_1A_2=A_2A_1$), this also implies that \begin{equation}\label{idTs} F^{q}=id_{\mathbb{T}^s},\qquad G^{q}=id_{\mathbb{T}^s}, \end{equation} where $q=|m_1n_2-m_2n_1|=|\mathbf{m}\times \mathbf{n}|>0$, as $\mathbf{m}$ and $\mathbf{n}$ are linearly independent. Moreover, as $m_1\theta_1+m_2\theta_2\in \mathbb{Z}^s$ and $n_1\theta_1+n_2\theta_2\in \mathbb{Z}^s$, it follows that $q\theta_1\in \mathbb{Z}^s$ and $q\theta_2\in \mathbb{Z}^s$.
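For the reader's convenience, the elimination behind \eqref{idTs} can be written out explicitly: since $F$ and $G$ commute, the relations $F^{m_1}\circ G^{m_2}=id_{\mathbb{T}^s}$ and $F^{n_1}\circ G^{n_2}=id_{\mathbb{T}^s}$ give

```latex
\begin{align*}
id_{\mathbb{T}^s} &= \left(F^{m_1}\circ G^{m_2}\right)^{n_2}\circ\left(F^{n_1}\circ G^{n_2}\right)^{-m_2} = F^{\,m_1n_2-n_1m_2},\\
id_{\mathbb{T}^s} &= \left(F^{m_1}\circ G^{m_2}\right)^{-n_1}\circ\left(F^{n_1}\circ G^{n_2}\right)^{m_1} = G^{\,n_2m_1-m_2n_1},
\end{align*}
```

so $F^{\pm q}=G^{\pm q}=id_{\mathbb{T}^s}$ with $q=|m_1n_2-m_2n_1|$, which is exactly \eqref{idTs}.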
Since $d_{C^\mu}(\widetilde\alpha, \alpha; M_0)<\varepsilon_1$, by taking $\varepsilon_1$ sufficiently small if necessary, we obtain the following Claim 1 and Claim 2. \textbf{Claim 1:} \textit{There exists a near-identity conjugacy $V\in \textup{Diff}^\infty(\mathbb{T}^s)$ such that \[V\circ F\circ V^{-1}=R_{\theta_1}.\] In addition, $\|V-id_{\mathbb{T}^s}\|_{C^1}\leqslant C \|F-R_{\theta_1}\|_{C^1}$.} In fact, since $F$ is $C^1$-sufficiently close to $R_{\theta_1}$, Claim 1 follows directly from $F^q=id_{\mathbb{T}^s}$ in \eqref{idTs} and Lemma \ref{lem_periodid}. Next, we set $\widetilde G:=V\circ G\circ V^{-1}$. \textbf{Claim 2:} \textit{There exists a near-identity conjugacy $\widetilde{V}\in \textup{Diff}^\infty(\mathbb{T}^s)$ such that \[\widetilde V\circ \widetilde G\circ \widetilde V^{-1}=R_{\theta_2},\qquad\widetilde V\circ R_{\theta_1}\circ \widetilde V^{-1}=R_{\theta_1}.\]} Indeed, note that $\widetilde G=R_{\theta_2}+\widetilde g$ satisfies $\|\widetilde g\|_{C^1}\ll 1$, and by \eqref{idTs}, $\widetilde G^q=id_{\mathbb{T}^s}$. Then, invoking Lemma \ref{lem_periodid} we can construct a near-identity conjugacy $\widetilde V(y)=id_{\mathbb{T}^s}+\widetilde v(y)$ with $\widetilde v\in C^\infty(\mathbb{T}^s,\mathbb{R}^s)$ of the form \begin{equation}\label{fnnsenk} \widetilde v=\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\widetilde g\circ \widetilde{G}^{i}, \end{equation} such that $\widetilde V\circ \widetilde G\circ \widetilde V^{-1}=R_{\theta_2}$. On the other hand, owing to the commutation relation $F\circ G=G\circ F$, it follows that $R_{\theta_1}\circ\widetilde G=\widetilde G\circ R_{\theta_1}$, and hence $\widetilde g=\widetilde g\circ R_{\theta_1}$.
This leads to \begin{equation}\label{djdqoil} \begin{aligned} \widetilde v\circ R_{\theta_1}=\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\widetilde g\circ \widetilde{G}^{i}\circ R_{\theta_1}=\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\widetilde g\circ R_{\theta_1}\circ \widetilde{G}^{i} =&\frac{1}{q}\sum_{i=0}^{q-2}(q-i-1)\,\widetilde g\circ \widetilde{G}^{i}\\ =& \widetilde v. \end{aligned} \end{equation} Since $\widetilde V=id_{\mathbb{T}^s}+\widetilde v$, identity \eqref{djdqoil} gives $\widetilde V\circ R_{\theta_1}(y)=y+\theta_1+\widetilde v(y+\theta_1)=y+\theta_1+\widetilde v(y)=R_{\theta_1}\circ \widetilde V(y)$, that is, $\widetilde V\circ R_{\theta_1}=R_{\theta_1}\circ\widetilde V$. This finally proves Claim 2. Now, we define a smooth conjugacy $\mathcal{V}\in \textup{Diff}^\infty(\mathbb{T}^d\times\mathbb{T}^s)$ as follows \begin{equation*} \mathcal{V}(x,y)=(x, \widetilde V\circ V(y)). \end{equation*} Using \eqref{dsfaw1}--\eqref{dsfaw2} together with Claim 1 and Claim 2 above, we obtain that \begin{align*} \mathcal{V}\circ H\circ \widetilde\alpha(\mathbf{e}_1)\circ H^{-1}\circ \mathcal{V}^{-1}=&A_1\times R_{\theta_1}=\alpha(\mathbf{e}_1),\\ \mathcal{V}\circ H\circ \widetilde\alpha(\mathbf{e}_2)\circ H^{-1}\circ \mathcal{V}^{-1}=&A_2\times R_{\theta_2}=\alpha(\mathbf{e}_2). \end{align*} Therefore, the action $\widetilde\alpha$ is $C^\infty$-conjugate to $\alpha$ via the conjugacy $U=\mathcal{V}\circ H$, provided that $d_{C^\mu}(\widetilde\alpha, \alpha; M_0)<\varepsilon_1$. Finally, it is easy to see that there exists $\varepsilon>0$ such that whenever $d_{C^\mu}(\widetilde\alpha, \alpha)<\varepsilon$, one has $d_{C^\mu}(\widetilde\alpha, \alpha; M_0)<\varepsilon_1$. This completes the proof. \end{proof} \subsection{Proof of Theorem \ref{MainThm_1}} For the perturbed action $\widetilde\alpha=\langle \mathcal{F}_1, \mathcal{F}_2 \rangle$, according to our assumption, for each $l=1,2$ the composition $\mathcal{F}_l^{q_l}=\mathcal{F}_l\circ \cdots\circ\mathcal{F}_l$ possesses an invariant $d$-dimensional torus homotopic to $\mathbb{T}^d\times\{0\}$ $\subset$ $\mathbb{T}^d\times\mathbb{T}^1$.
This, combined with the volume-preserving condition, implies that both $\mathcal{F}_1^{q_1}$ and $\mathcal{F}_2^{q_2}$ satisfy the intersection property, since the fiber is of dimension one. Therefore, based on Theorem \ref{Element_Thm1} and Lemma \ref{lem_periodid}, the rest of the proof is analogous to that of Theorem \ref{MainThm_0}. \subsection{Proof of Theorem \ref{MainThm_2}} Since the $\mathbb{Z}^k$ action $\rho_0: \mathbb{Z}^k\to \textup{Aut}(\mathbb{T}^d)$ is higher rank on the base $\mathbb{T}^d$, one can find a subgroup $\Sigma\subset \mathbb{Z}^k$ with $\Sigma\cong \mathbb{Z}^2$ such that $\rho_0(\mathbf{n})$ is ergodic on $\mathbb{T}^d$ for all $\mathbf{n}\in\Sigma\setminus\{0\}$. For the perturbed action $\widetilde\rho$, by assumption the generators $\widetilde\rho(\mathbf{e}_1),\cdots, \widetilde\rho(\mathbf{e}_k)$ have a common invariant $d$-dimensional torus homotopic to $\mathbb{T}^d\times\{0\}$ $\subset$ $\mathbb{T}^d\times\mathbb{T}^1$. Combined with the volume-preserving condition and the fact that the fiber is one-dimensional, this implies that for every $\mathbf{m}\in\mathbb{Z}^k$, the map $\widetilde\rho(\mathbf{m}): \mathbb{T}^d\times\mathbb{T}^1\to \mathbb{T}^d\times\mathbb{T}^1$ satisfies the intersection property. Then, by applying Theorem \ref{Element_Thm1} to the subgroup $\Sigma$, we obtain that the restriction $\widetilde\rho\big|_{\Sigma}$ of $\widetilde\rho$ to the subgroup $\Sigma$ can be $C^\infty$-conjugated to $\rho\big|_{\Sigma}=\rho_0\times id_{\mathbb{T}^1}\big|_{\Sigma}$ via a conjugacy $H$, provided that $\textup{dist}_{C^\mu}(\widetilde\rho,\rho)$ is sufficiently small. To complete the proof, it remains to show that for the other $\mathbf{m}\in \mathbb{Z}^k\setminus\Sigma$, $\widetilde\rho(\mathbf{m})$ is $C^\infty$-conjugate to $\rho(\mathbf{m})$. Indeed, it suffices to verify this for the generators $\mathbf{e}_1,\cdots, \mathbf{e}_k$ of $\mathbb{Z}^k$. Here we only check it for $\mathbf{e}_1$, the other cases being similar.
Using arguments analogous to the proof of Theorem \ref{MainThm_0}, the ergodicity of $\rho_0\big|_{\Sigma}$ on the base $\mathbb{T}^d$ and the commutativity imply that \[H\circ \widetilde\rho(\mathbf{e}_1)\circ H^{-1}=\rho(\mathbf{e}_1)+(0, f(y) ) \] for some $f(y)\in C^\infty(\mathbb{T}^1,\mathbb{R}^1)$. Then, as a consequence of the intersection property of $\widetilde\rho(\mathbf{e}_1)$, the function $f(y)$ must vanish. Therefore, $H\circ \widetilde\rho(\mathbf{e}_1)\circ H^{-1}=\rho(\mathbf{e}_1).$ This proves Theorem \ref{MainThm_2}. \medskip \noindent{\bf Acknowledgments.} Our work was supported by Swedish Research Council grant VR 2019-04641 and the Wallenberg Foundation grant for international postdocs 2020.
\section{Introduction} \label{sec:intro} Let $T > 0$ be fixed. In this work we are concerned with convergence of characteristics associated with stochastic Euler equations in vorticity form on the two-dimensional torus ${\mathbb{T}^2} \coloneqq \mathbb{R}^2/(2\pi \mathbb{Z}^2)$: \begin{align} \label{eq:euler_intro} d \xi^\epsilon_t + (v^\epsilon_t+u^\epsilon_t) \cdot \nabla \xi^\epsilon_t dt = - \epsilon^{-1} \xi^\epsilon_t dt + \epsilon^{-1} \sum_{k \in \mathbb{N}} \varsigma_k dW^k_t, \quad t \in [0,T], \end{align} where $\xi^\epsilon$ is the zero-mean unknown vorticity field, $u^\epsilon$ is the velocity field reconstructed from $\xi^\epsilon$ via the Biot-Savart kernel: $u^\epsilon_t = -\nabla^\perp (-\Delta)^{-1} \xi^\epsilon_t$, $v^\epsilon$ is a divergence-free external field with suitable regularity, $\varsigma_k : {\mathbb{T}^2} \to \mathbb{R}$ with zero average for every $k \in \mathbb{N}$, $(W^k)_{k \in \mathbb{N}}$ is a family of i.i.d. Wiener processes defined on a filtered probability space $(\Omega,\mathcal{F}_t,\mathbb{P})$, and $\epsilon \ll 1$ is a scaling parameter. Equation \eqref{eq:euler_intro} above aims to represent the small-scale component of a two-dimensional incompressible fluid \cite{BoEc12}, with the additive noise and damping on the right-hand side modelling the influence on the fluid of a possibly irregular boundary or topography. The choice of the parameter $\epsilon^{-1}$ in front of both noise and damping is appropriate from the point of view of a large-scale observer; see \cite{FlPa21} and \autoref{ssec:motivations} for details.
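As a quick reminder (with the sign convention $\nabla^\perp=(-\partial_2,\partial_1)$; other conventions flip the sign below), the Biot-Savart law $u^\epsilon_t = -\nabla^\perp (-\Delta)^{-1} \xi^\epsilon_t$ reads, in Fourier variables on ${\mathbb{T}^2}$,

```latex
\begin{equation*}
\widehat{u^\epsilon_t}(k) \;=\; -\,\frac{i\,k^\perp}{|k|^2}\,\widehat{\xi^\epsilon_t}(k),
\qquad k^\perp=(-k_2,k_1),\quad k\in\mathbb{Z}^2\setminus\{0\},
\end{equation*}
```

so that $k\cdot\widehat{u^\epsilon_t}(k)=0$, i.e. the reconstructed velocity is automatically divergence-free, and $u^\epsilon_t$ gains one derivative with respect to $\xi^\epsilon_t$.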
In view of this, it makes sense to couple \eqref{eq:euler_intro} with a large-scale scalar dynamics: \begin{align} \label{eq:large_eps_intro} d \Xi^\epsilon_t + (v^\epsilon_t + u^\epsilon_t) \cdot \nabla \Xi^\epsilon_t dt = \nu \Delta \Xi^\epsilon_t dt + q^\epsilon_t dt, \quad t \in [0,T], \end{align} either passive (in which case the external field $v^\epsilon$ should be interpreted as given a priori) or active (in which case the external field $v^\epsilon$ could depend on the large-scale dynamics itself, as for instance in the vorticity formulation of 2D Navier-Stokes equations, where $v^\epsilon_t = -\nabla^\perp (-\Delta)^{-1} \Xi^\epsilon_t$). In \eqref{eq:large_eps_intro} above, $\nu\geq 0$ is a fixed parameter that represents molecular diffusivity (passive dynamics) or viscosity (active dynamics), and $q^\epsilon$ is a given source term with suitable integrability. Let $(\tilde{\Omega},\tilde{\mathcal{F}}_t,\tilde{\mathbb{P}})$ be an auxiliary probability space and let $w$ be a standard $\mathbb{R}^2$-valued Wiener process defined on $(\tilde{\Omega},\tilde{\mathcal{F}}_t,\tilde{\mathbb{P}})$. The (stochastic) characteristics $\phi^\epsilon$ associated with problem \eqref{eq:euler_intro} are given by the family of maps $\phi^\epsilon_t:{\mathbb{T}^2} \to {\mathbb{T}^2}$ satisfying \begin{align} \label{eq:char_eps_intro} \phi^\epsilon_t(x) &= x + \int_0^t v^\epsilon_s(\phi^\epsilon_s(x)) ds + \int_0^t u^\epsilon_s(\phi^\epsilon_s(x)) ds + \sqrt{2\nu} w_t, \end{align} where $t \in [0,T]$, $x \in {\mathbb{T}^2}$. Since $v^\epsilon$ and $u^\epsilon$ are divergence-free and have sufficient regularity, the characteristics $\phi^\epsilon$ defined above constitute a \emph{stochastic flow of measure-preserving homeomorphisms}, in the sense of \autoref{def:stoch_flow} below. 
The interest in studying the solution of \eqref{eq:char_eps_intro} is motivated by the following representation formula for the solution of \eqref{eq:large_eps_intro}: \begin{align} \label{eq:repr_large_eps} \Xi^\epsilon_t &= \tE{ \Xi_0 \circ (\phi^\epsilon_t)^{-1} + \int_0^t q^\epsilon_s \circ \phi^\epsilon_s \circ(\phi^\epsilon_t)^{-1}ds}, \end{align} where $\tilde{\mathbb{E}}$ is the expectation on $\tilde{\Omega}$ with respect to $\tilde{\mathbb{P}}$ and we have tacitly assumed that the initial condition $\Xi^\epsilon|_{t=0} = \Xi_0$ is independent of $\epsilon$. See \autoref{def:sol} for more details on the notion of solution adopted in the present paper. The main purpose of this work - cf. \autoref{thm:char} - is to investigate conditions under which one can prove convergence in a suitable sense, as $\epsilon \to 0$, of $\phi^\epsilon$ towards the solution of: \begin{align} \label{eq:char_intro} \phi_t(x) &= x + \int_0^t v_s(\phi_s(x)) ds + \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi_s(x)) \circ dW^k_s + \sqrt{2\nu} w_t, \end{align} where $\sigma_k = -\nabla^\perp (-\Delta)^{-1} \varsigma_k$ and $v^\epsilon \to v$ in a certain sense. The notion of convergence $\phi^\epsilon \to \phi$ contained in \autoref{thm:char} allows us to prove a notion of weak convergence of the large-scale observable $\Xi^\epsilon$ given by \eqref{eq:repr_large_eps} towards \begin{align} \label{eq:repr_large} \Xi_t &= \tE{ \Xi_0 \circ (\phi_t)^{-1} + \int_0^t q_s \circ \phi_s \circ(\phi_t)^{-1}ds}, \end{align} which solves the large-scale dynamics with \emph{transport noise}: \begin{align} \label{eq:large_intro} d \Xi_t + v_t \cdot \nabla \Xi_t dt + \sum_{k \in \mathbb{N}} \sigma_k \cdot \nabla \Xi_t \circ dW^k_t &= \nu \Delta \Xi_t dt + q_t dt, \end{align} where $v$ is independent of $\Xi$ for passive dynamics, while it could depend on $\Xi$ itself for active dynamics, and $q^\epsilon \to q$ in a sense to be specified later.
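For orientation, and assuming the $\sigma_k$ are smooth with suitable summability so that the standard Stratonovich-to-It\^{o} conversion applies, \eqref{eq:large_intro} can formally be rewritten in It\^{o} form as

```latex
\begin{equation*}
d \Xi_t + v_t \cdot \nabla \Xi_t\, dt + \sum_{k \in \mathbb{N}} \sigma_k \cdot \nabla \Xi_t \, dW^k_t
= \nu \Delta \Xi_t\, dt + \frac{1}{2}\sum_{k \in \mathbb{N}} \sigma_k\cdot\nabla\left(\sigma_k\cdot\nabla\Xi_t\right) dt + q_t\, dt,
\end{equation*}
```

where the corrector $\frac{1}{2}\sum_{k}(\sigma_k\cdot\nabla)^2$ is a second-order (possibly degenerate) elliptic operator; this corrector is the mechanism behind the enhanced-dissipation phenomena associated with transport noise.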
The precise meaning of weak convergence is made rigorous in \autoref{thm:conv_large} below. We think that these results are fundamental for a proper interpretation of transport noise in SPDEs, at least for the two classes considered here. Several papers have considered transport noise so far, either in passive scalars (\cite{Dolgo}, \cite{FlaGaleLuoPTRSA}, \cite{Galeati}, \cite{Gess}, \cite{LeJan Ray}, \cite{MajdaK}, \cite{Sreen}), in passive vector fields (\cite{FlaMaurNek}, \cite{FlaOliv advection}, \cite{Krause Radler}, \cite{Zeldovich}), or in fluid mechanics equations themselves (\cite{BCF 91 mult noise}, \cite{BCF 92 mult noise}, \cite{BrFlMa16}, \cite{BrSl20+}, \cite{CrFlHo19}, \cite{Cruzeiro Torr}, \cite{DrivasHolm}, \cite{DrivasHolmLehaly}, \cite{FlaGaleLuoJEE}, \cite{FlaGalLuorate}, \cite{FlaPappWater}, \cite{Funaki}, \cite{Holm}, \cite{MiRo04}, \cite{MiRo05}, \cite{Yokohama}). In terms of consequences of transport noise, the aforementioned works prove several results concerning well-posedness, enhanced dissipation and mixing properties of fluid dynamics equations perturbed by transport noise, thus providing a good starting point towards a rigorous understanding of turbulence in fluids. However, unlike additive noise, which is widely accepted as a source of randomness, transport noise needs a more careful justification. One possible approach is given by Wong-Zakai approximation results, widely investigated both in and outside the realm of fluid dynamics (\cite{BrCaFl90}, \cite{Gy88}, \cite{Gy89}, \cite{HoLeNi19}, \cite{HoLeNi19+}, \cite{TeZa06}, \cite{Tw93}). Let us explain what the present paper adds to these works.
Concerning the passive dynamics, several Wong-Zakai type results of convergence to white-noise transport in Stratonovich form have been proved before (see also \cite{Pa21+} for a dissipation enhancement result due to the presence of a Stratonovich-to-It\^{o} corrector), but this seems to be the first work where the velocity field approximating the white-noise one is the solution of a nonlinear fluid mechanics equation. Concerning the active dynamics, the results contained in this paper extend and make more precise our previous work \cite{FlPa21}: i) some details in the proof of \cite[Proposition 4.1]{FlPa21}, which after publication turned out to be imprecise, are fixed here in \autoref{thm:char}; ii) more importantly, the term $u_{t}^{\epsilon}\cdot\nabla\xi_{t}^{\epsilon}$ was absent in \cite{FlPa21}, which therefore should be interpreted more along the research lines of model reduction, inspired by \cite{MaTiVE01}, than as a multiscale analysis of the full problem. \subsection{Examples} \label{ssec:ex} Throughout the paper we keep our setting as general as possible, so that our abstract results cover as many particular cases as possible. However, our work has been motivated by two main examples: \begin{itemize} \item \emph{Advection-diffusion equation}. Consider the following system, describing the evolution of the concentration $\rho^\epsilon$ of a passive scalar advected by the Euler flow and subject to the influence of an external source $q^\epsilon$: \begin{align*} \begin{cases} d\rho^\epsilon_t + (v_t + u^\epsilon_t) \cdot \nabla \rho^\epsilon_t dt = \nu \Delta \rho^\epsilon_t dt + q^\epsilon_t dt, \\ d \xi^\epsilon_t + (v_t+u^\epsilon_t) \cdot \nabla \xi^\epsilon_t dt = - \epsilon^{-1} \xi^\epsilon_t dt + \epsilon^{-1} \sum_{k \in \mathbb{N}} \varsigma_k dW^k_t, \\ u^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\xi^\epsilon_t.
\end{cases} \end{align*} We have taken $\nu\geq 0$ and $v^\epsilon=v$, independent of $\epsilon$, since the passive scalar does not affect the external field. In this setting, $\rho^\epsilon$ converges towards the solution of the limiting advection-diffusion equation with transport noise: \begin{align*} d\rho_t + v_t \cdot \nabla \rho_t dt + \sum_{k \in \mathbb{N}} \sigma_k \cdot \nabla \rho_t \circ dW^k_t &= \nu \Delta \rho_t dt + q_t dt. \end{align*} \item \emph{Navier-Stokes and Euler equations}. Consider the following system, describing the coupling between large-scale Navier-Stokes ($\nu > 0$) or Euler ($\nu=0$) equations and small-scale stochastic Euler equations: \begin{align*} \begin{cases} d\Xi^\epsilon_t + (v^\epsilon_t + u^\epsilon_t) \cdot \nabla \Xi^\epsilon_t dt = \nu \Delta \Xi^\epsilon_t dt + q^\epsilon_t dt, \\ d \xi^\epsilon_t + (v^\epsilon_t+u^\epsilon_t) \cdot \nabla \xi^\epsilon_t dt = - \epsilon^{-1} \xi^\epsilon_t dt + \epsilon^{-1} \sum_{k \in \mathbb{N}} \varsigma_k dW^k_t, \\ v^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\Xi^\epsilon_t, \\ u^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\xi^\epsilon_t. \end{cases} \end{align*} We take $q^\epsilon$ and $\Xi_0$ with zero spatial average, so that $\Xi^\epsilon$ is zero mean, too. Notice that in this case the field $v^\epsilon$ is generated by $\Xi^\epsilon$ itself through the Biot-Savart law $v^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\Xi^\epsilon_t$, in particular $v^\epsilon$ is random. On the other hand, the external source $q^\epsilon$ can be thought as given a priori and deterministic. In this setting, $\Xi^\epsilon$ converges towards the solution of the limiting Navier-Stokes or Euler equations with transport noise: \begin{align*} \begin{cases} d \Xi_t + v_t \cdot \nabla \Xi_t dt + \sum_{k \in \mathbb{N}} \sigma_k \cdot \nabla \Xi_t \circ dW^k_t = \nu \Delta \Xi_t dt + q_t dt, \\ v_t=-\nabla^\perp(-\Delta)^{-1}\Xi_t . 
\end{cases} \end{align*} It is worth mentioning that, also in the limit, the velocity field $v$ is still generated by $\Xi$ through the Biot-Savart law $v_t = -\nabla^\perp(-\Delta)^{-1}\Xi_t$. \end{itemize} \subsection{Motivations} \label{ssec:motivations} As already mentioned in the Introduction, \eqref{eq:euler_intro} aims to represent the small-scale component of a two-dimensional incompressible fluid, as seen by a large-scale observer. At large scales the fluid shows a turbulent behaviour, and its statistical properties are well described by solutions of stochastic equations, although the underlying continuum mechanics equations that govern the evolution of the fluid are deterministic. We refer to \cite[Chapter 2]{FlPa21} and references therein for a complete discussion of the equations under investigation in this paper and the interest in their asymptotic behaviour as $\epsilon \to 0$. \subsubsection{On the additive noise and damping} Additive noise in SPDEs is so common that apparently we do not need a justification for introducing it, as we have done in equation \eqref{eq:euler_intro} above. However, a short discussion may help to convince ourselves that it is very natural, and moreover to understand that the damping term is needed as well. Our opinion, described more extensively in the proposal of \cite[Chapter 1]{Fla libro}, is that an additive noise is a good compromise to take into account the vortices produced by obstacles and irregularities at the boundary or internal obstacles, which are not explicitly described in the mathematical formulation, often based on the torus geometry or a domain with smooth boundary.
Such obstacles introduce vortices, eddies, that could be idealized and described as a jump Markov process $W_{N}(t)$ in the Hilbert space $H$ of $L^{2}({\mathbb{T}^2})$ vorticity fields on the torus; the fluid equation perturbed by the creation of these new vortices takes a priori the form \[ \partial_{t}\xi_{t}+(v_{t}+u_{t})\cdot\nabla\xi_{t}=\partial_{t}W_{N}(t) \] where $\partial_{t}W_{N}(t)$ is a sum of Dirac deltas in time, with the effect that $\xi_{t}$ jumps at those times, namely (if $t_{i}$ denotes one such time) $\xi_{t_{i}^{+}}$ is equal to $\xi_{t_{i}^{-}}$ plus the created vortex. We have indexed $W_{N}(t)$ by $N$ to anticipate that we consider a regime with frequent creation of vortices of small amplitude. Scaling the parameters of $W_{N}(t)$ in the right way, see \cite{Fla libro}, under suitable assumptions of zero average of $W_{N}(t)$ and integrability, $W_{N}(t)$ converges in law to a Brownian motion $W(t)$ in $H$ with a suitable covariance. This is our motivation for the equation with additive noise \[ d\xi_{t}+(v_{t}+u_{t})\cdot\nabla\xi_{t}dt=dW(t) . \] However, as is easily seen by It\^{o} formula \cite[Chapter 2]{Fla libro}, such additive noise systematically introduces energy, a fact that is not acceptable from the physical viewpoint: the vortices created by obstacles do not increase the energy (at most, some energy is lost in thermal dissipation at the boundary). Therefore some sort of compensation is needed. The simplest solution is to assume that the forces responsible for the creation of vortices by the obstacles are somewhat similar to a friction. Thus we introduce a friction term to maintain equilibrium: \[ d\xi_{t}+(v_{t}+u_{t})\cdot\nabla\xi_{t}dt=-\lambda\xi_{t}dt+dW(t). \] This is the origin of the fluid model. The particular scaling attributed above to the terms $-\lambda\xi_{t}dt$ and $dW(t)$ is related to a different argument, explained below.
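The compensation just described can be made quantitative by a formal enstrophy balance (a sketch, assuming $W(t)=\sum_{k}\varsigma_k W^k_t$ and using that the transport term does not change the $L^2$ norm, the advecting velocity being divergence-free): applying It\^{o}'s formula to $\|\xi_t\|_{L^2}^2$ and taking expectations gives

```latex
\begin{equation*}
\frac{d}{dt}\,\mathbb{E}\|\xi_t\|_{L^2}^2
= -2\lambda\,\mathbb{E}\|\xi_t\|_{L^2}^2 + \sum_{k}\|\varsigma_k\|_{L^2}^2 ,
\end{equation*}
```

so without friction ($\lambda=0$) the mean enstrophy grows linearly in time, while for $\lambda>0$ it equilibrates at the level $\frac{1}{2\lambda}\sum_{k}\|\varsigma_k\|_{L^2}^2$.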
\subsubsection{On the parameter $\epsilon^{-1}$} An important feature of \eqref{eq:euler_intro} is the presence of the scaling parameter $\epsilon^{-1}$ in front of \emph{both} noise and damping, in contrast to the widely studied diffusive scaling given by a coefficient $\epsilon^{-1}$ in front of the damping and $\epsilon^{-1/2}$ in front of the noise. Let us briefly recall where this scaling comes from, referring to \cite{FlPa21} for further details. Suppose we have a small time scale $\mathcal{T}_{\text{S}}$, at which we observe the vorticity field $\xi$. At this scale, the small scales evolve according to deterministic equations, and the typical intensity and turnover time of $\xi$ are of order one. Let us now take an intermediate point of view on the system, say human-scale, $\mathcal{T}_{\text{M}} \coloneqq \epsilon^{-1} \mathcal{T}_{\text{S}}$. At this scale, fluctuations of $\xi = \xi^\epsilon$ look random and could be well modeled by the stochastic equations \eqref{eq:euler_intro}, with the crucial difference of a coefficient $\epsilon^{-1/2}$ in front of the noise rather than $\epsilon^{-1}$. Only when we look at the system with respect to a large time scale $\mathcal{T}_{\text{L}} \coloneqq \epsilon^{-1} \mathcal{T}_{\text{M}}$ does the scaling of \eqref{eq:euler_intro} appear. As a result of the theory developed here, from this point of view the small-scale fluctuations behave as a white noise of multiplicative type. We remark that, in our arguments, spatial scaling is less important than temporal scaling. As emerges from computations performed in \cite[Subsection 2.3]{FlPa21}, the spatial scaling only affects the spatial covariance of the noise in \eqref{eq:euler_intro}.
For the sake of concreteness, suppose that $W_{\text{M}}(\tilde{t},\tilde{x})$ is the noise perturbing $\xi^\epsilon$ at intermediate scales and $W_{\text{L}}(t,x)$ is the noise perturbing $\xi^\epsilon$ at large scales, with mesoscopic variables $\tilde{t},\tilde{x}$ related to macroscopic variables $t,x$ by the formulas $\tilde{t} = \epsilon^{-1} t$, $\tilde{x} = \epsilon_X^{-1} x$. Then the following equality in law holds: \begin{align*} W_{\text{M}}(\tilde{t},\tilde{x}) = \epsilon^{1/2} W_{\text{L}}\left(t,\epsilon^{-1}_X x\right). \end{align*} Moreover, assuming that the elements producing the noise (topography, boundaries \emph{et cetera}) are actually large-scale, we can suppose that the covariance of $W_{\text{M}}$ is slowly-varying with respect to $\tilde{x}$, or equivalently \begin{align*} W_{\text{L}}\left(t,\epsilon^{-1}_X x\right) = \sum_{k \in \mathbb{N}} \varsigma_k(x) W^k_t, \end{align*} with $\varsigma_k$ and $W^k$ as in \eqref{eq:euler_intro}. \subsection{Structure of the paper} In \autoref{sec:notations} we introduce some notation and recall classical results that will be frequently used in the remainder of the paper. This section contains, among other things: the main properties of the Biot-Savart kernel on the torus $-\nabla^\perp(-\Delta)^{-1}$; a useful Gronwall-type lemma for ODEs with log-Lipschitz drift; notions of solution and well-posedness results for the stochastic Euler equations \eqref{eq:euler_intro}, the equations of characteristics \eqref{eq:char_eps_intro} and \eqref{eq:char_intro}, and the large-scale dynamics \eqref{eq:large_eps_intro} and \eqref{eq:large_intro}. Also, here we introduce our main working assumptions (A1)-(A7), and in the last part of this section we state our two main results, concerning convergence of characteristics (\autoref{thm:char}) and subsequent convergence of large-scale dynamics (\autoref{thm:conv_large}).
In the first part of \autoref{sec:tech}, we define a linearized version of \eqref{eq:euler_intro}, where we neglect the nonlinear term. This approach is similar to that of \cite{FlPa21}, and the key idea is that, although the solution $\theta^\epsilon$ of the linearized equation is not close to the actual solution $\xi^\epsilon$ of \eqref{eq:euler_intro}, the characteristics generated by $\theta^\epsilon$ are close to the characteristics generated by $\xi^\epsilon$; in particular, they have the same limit as $\epsilon \to 0$. In the same section we present two main technical results, needed in the proof of \autoref{thm:char}. The first of these results is \autoref{prop:old}, which ensures that the linear part $\theta^\epsilon$ of the small-scale dynamics behaves as a Stratonovich white-in-time noise as $\epsilon \to 0$, at least in a distributional sense. The second result, \autoref{prop:sup_z}, instead aims to rigorously prove the closeness of the characteristics generated by $\theta^\epsilon$ and $\xi^\epsilon$, and it is one of the main novelties of this paper with respect to \cite{FlPa21}. The proof of \autoref{thm:char} is contained in \autoref{sec:conv_char}, and it is based on a Gronwall-type lemma and It\^{o} formula applied to a smooth approximation $g_\delta(x)$ of the absolute value $|x|$, $x \in \mathbb{R}^2$. The proof of \autoref{thm:conv_large} can be found in \autoref{sec:conv_large}, and it relies on the representation formulas \eqref{eq:repr_large_eps} and \eqref{eq:repr_large} and a measure-theoretic argument. Finally, in \autoref{sec:ex} we discuss how our main motivational examples (cf. \autoref{ssec:ex}) fit our abstract setting.
In particular, the non-trivial one is the coupled system given by deterministic Navier-Stokes equations at large scales plus stochastic Euler equations at small scales; we identify an additional but very natural condition (A8) on the limit external source $q$ that allows us to verify assumptions (A1)-(A7) for the system under consideration. \section{Notations, preliminaries and main results} \label{sec:notations} In this section we collect definitions, notations and classical results needed in the paper. Also, we introduce our main working assumptions (A1)-(A7), and state our main results. \subsection{Properties of the Biot-Savart kernel} Here we briefly recall some useful properties of the Biot-Savart kernel $K$. We refer to \cite{MaPu94,BrFlMa16} for details and proofs. First of all, the Biot-Savart kernel $K$ is defined as $K=-\nabla^\perp G = (\partial_2 G, - \partial_1 G)$, where $G$ is the Green function of the Laplace operator on the torus ${\mathbb{T}^2}$ with zero mean. For $p \in (1,\infty)$ and $\xi \in L^p(\mathbb{T}^2)$ with zero mean, the convolution with $K$ represents the Biot-Savart operator: \begin{align*} K \ast \xi = -\nabla^\perp (-\Delta)^{-1} \xi, \end{align*} which associates to every zero-mean $\xi \in L^p(\mathbb{T}^2)$ the unique zero-mean, divergence-free velocity vector field $u \in W^{1,p}(\mathbb{T}^2,\mathbb{R}^2)$ such that $\mbox{curl}\,u = \xi$. Moreover, for every $p \in (1,\infty)$ there exist constants $c$, $C$ such that for every zero-mean $\xi \in L^p(\mathbb{T}^2)$ \begin{align*} c\| \xi \|_{L^p(\mathbb{T}^2)} \leq \| K \ast \xi \|_{W^{1,p}(\mathbb{T}^2,\mathbb{R}^2)} \leq C\| \xi \|_{L^p(\mathbb{T}^2)}.
\end{align*} Also, recall that since $K \in L^1({\mathbb{T}^2},\mathbb{R}^2)$ the convolution $K \ast \xi$ is well-defined for every $\xi \in L^p(\mathbb{T}^2)$, $p \in [1,\infty]$, and the following estimate holds: \begin{align} \label{eq:K} \| K \ast \xi \|_{L^p(\mathbb{T}^2,\mathbb{R}^2)} \leq \| K \|_{L^1(\mathbb{T}^2,\mathbb{R}^2)} \| \xi \|_{L^p(\mathbb{T}^2)}. \end{align} Denote by $\gamma:[0,\infty) \to \mathbb{R}$ the concave function: \begin{equation*} \gamma(r) = r(1-\log r) \mathbf{1}_{\{0<r<1/e\}} + (r+1/e) \mathbf{1}_{\{r\geq 1/e\}}, \quad r \geq 0. \end{equation*} The following two lemmas are proved in \cite{MaPu94} and \cite{BrFlMa16}. \begin{lem} \label{lem:log_lip} There exists a constant $C$ such that: \begin{equation*} \int_{\mathbb{T}^2} \left| K(x-y) - K(x'-y)\right| dy \leq C \gamma(|x-x'|) \end{equation*} for every $x,x' \in \mathbb{T}^2$. \end{lem} \begin{lem} \label{lem:comp} Let $T>0$, $\lambda>0$, $a_0 \in [0,\exp(1-2e^{\lambda T})]$ be constants. Let $a:[0,T] \to \mathbb{R}$ be such that for every $t \in [0,T]$: \begin{align*} a_t \leq a_0 + \lambda \int_0^t \gamma(a_s) ds. \end{align*} Then for every $t \in [0,T]$ the following estimate holds: \begin{align*} a_t \leq e a_0^{\exp(-\lambda t)}. \end{align*} \end{lem} \subsection{Stochastic flows of measure-preserving homeomorphisms} As a convention, in the following we say that $\mathcal{N} \subset \Omega$ (respectively $\tilde{\mathcal{N}} \subset \tilde{\Omega}$) is \emph{negligible} if it is measurable and $\mathbb{P}(\mathcal{N}) = 0$ (respectively $\tilde{\mathbb{P}}(\tilde{\mathcal{N}}) = 0$), without explicit mention of the reference probability measure. Unless otherwise specified, we will always denote by $\mathcal{N}$ negligible sets in $\Omega$, and by $\tilde{\mathcal{N}}$ negligible sets in $\tilde{\Omega}$. Let us begin this paragraph with the following fundamental definition.
\begin{definition} \label{def:stoch_flow} A measurable map $\phi: \Omega \times \tilde{\Omega} \times [0,T] \times {\mathbb{T}^2} \to {\mathbb{T}^2}$ is a \emph{stochastic flow of measure-preserving homeomorphisms} provided there exist negligible sets $\mathcal{N} \subset \Omega$ and $\tilde{\mathcal{N}} \subset \tilde{\Omega}$ such that: \begin{itemize} \item for every $\omega \in \mathcal{N}^c$, $\tilde{\omega} \in \tilde{\mathcal{N}}^c$ and $t \in [0,T]$, the map $\phi(\omega,\tilde{\omega},t,\cdot):{\mathbb{T}^2} \to {\mathbb{T}^2}$ is a homeomorphism of the torus and \begin{align*} \int_{\mathbb{T}^2} f(x) dx = \int_{\mathbb{T}^2} f(\phi(\omega,\tilde{\omega},t,y)) dy \end{align*} for every $f \in L^1({\mathbb{T}^2})$; \item for every $\tilde{\omega} \in \tilde{\mathcal{N}}^c$ and $x \in {\mathbb{T}^2}$, the stochastic process $\phi(\cdot,\tilde{\omega},\cdot,x):\Omega \times [0,T] \to {\mathbb{T}^2}$ is progressively measurable with respect to the filtration $(\mathcal{F}_t)_{t \in [0,T]}$. \end{itemize} \end{definition} In some circumstances it can be useful to have the following: \begin{definition} A stochastic flow of measure-preserving homeomorphisms $\phi$ is called \emph{inviscid} if there exist negligible sets $\mathcal{N} \subset \Omega$ and $\tilde{\mathcal{N}} \subset \tilde{\Omega}$, and a measurable map $\psi:\Omega \times [0,T] \times {\mathbb{T}^2} \to {\mathbb{T}^2}$ such that for every $\omega \in \mathcal{N}^c$, $\tilde{\omega} \in \tilde{\mathcal{N}}^c$, $t \in [0,T]$ and $x \in {\mathbb{T}^2}$ \begin{align*} \phi(\omega,\tilde{\omega},t,x) = \psi(\omega,t,x). \end{align*} \end{definition} With a slight abuse of notation, hereafter we identify an inviscid stochastic flow of measure-preserving homeomorphisms $\phi$ with its $\tilde{\omega}$-independent representative $\psi$. Let us now clarify the meaning of \eqref{eq:char_eps_intro} and \eqref{eq:char_intro}.
A measurable map $\phi^\epsilon:\Omega \times \tilde{\Omega} \times [0,T] \times {\mathbb{T}^2} \to {\mathbb{T}^2}$ is a solution of \eqref{eq:char_eps_intro} if there exist negligible sets $\mathcal{N} \subset \Omega$ and $\tilde{\mathcal{N}} \subset \tilde{\Omega}$ such that for every $\omega \in \mathcal{N}^c$, $\tilde{\omega} \in \tilde{\mathcal{N}}^c$, $t \in [0,T]$ and $x \in {\mathbb{T}^2}$: \begin{align*} \phi^\epsilon(\omega,\tilde{\omega},t,x) &= x + \int_0^t v^\epsilon(\omega,s,\phi^\epsilon(\omega,\tilde{\omega},s,x)) ds \\ &\quad+ \int_0^t u^\epsilon(\omega,s,\phi^\epsilon(\omega,\tilde{\omega},s,x)) ds + \sqrt{2\nu} w(\tilde{\omega},t), \end{align*} where the previous identity can be interpreted as an equation on ${\mathbb{T}^2}$ since one can check $\phi^\epsilon(\omega,\tilde{\omega},t,x+2\pi\mathbf{e}) = \phi^\epsilon(\omega,\tilde{\omega},t,x)+2\pi\mathbf{e}$ for $\mathbf{e}=(1,0)$ and $\mathbf{e}=(0,1)$. Similarly, a measurable map $\phi:\Omega \times \tilde{\Omega} \times [0,T] \times {\mathbb{T}^2} \to {\mathbb{T}^2}$ is a solution of \eqref{eq:char_intro} if there exist negligible sets $\mathcal{N} \subset \Omega$ and $\tilde{\mathcal{N}} \subset \tilde{\Omega}$ such that for every $\tilde{\omega} \in \tilde{\mathcal{N}}^c$ and $x \in {\mathbb{T}^2}$, the stochastic process $\phi(\cdot,\tilde{\omega},\cdot,x):\Omega \times [0,T] \to {\mathbb{T}^2}$ is progressively measurable with respect to the filtration $(\mathcal{F}_t)_{t \in [0,T]}$, and for every $\omega \in \mathcal{N}^c$, $\tilde{\omega} \in \tilde{\mathcal{N}}^c$, $t \in [0,T]$ and $x \in {\mathbb{T}^2}$: \begin{align*} \phi(\omega,\tilde{\omega},t,x) &= x + \int_0^t v(\omega,s,\phi(\omega,\tilde{\omega},s,x)) ds \\ &\quad+ \sum_{k \in \mathbb{N}} \left(\int_0^t \sigma_k(\phi(\cdot,\tilde{\omega},s,x)) \circ dW^k_s \right)(\omega) + \sqrt{2\nu} w(\tilde{\omega},t).
\end{align*} Notice that progressive measurability of the process $\phi(\cdot,\tilde{\omega},\cdot,x):\Omega \times [0,T] \to {\mathbb{T}^2}$ is necessary to make sense of the Stratonovich stochastic integral appearing in the equation above. \subsection{Notions of solution and some well-posedness results} \subsubsection{Well-posedness of small-scale dynamics and characteristics} First we make the following assumptions on the external fields: \begin{itemize} \item[(\textbf{A1})] $v^\epsilon,v:\Omega \times [0,T] \times {\mathbb{T}^2} \to \mathbb{R}^2$ and for every $t \in [0,T]$ the maps $v^\epsilon,v |_{\Omega \times [0,t]}:\Omega \times [0,t] \times {\mathbb{T}^2} \to \mathbb{R}^2$ are $\mathcal{F}_t \otimes \mathcal{B}_{[0,t]} \otimes \mathcal{B}_{{\mathbb{T}^2}}$ measurable, where $\mathcal{B}$ denotes the Borel sigma-field; \item[(\textbf{A2})] there exist a constant $C$ and a negligible set $\mathcal{N} \subset \Omega$ such that, for every $\omega \in \mathcal{N}^c$, $\epsilon > 0$ and $t \in [0,T]$: $\mbox{div}\, v^\epsilon(\omega,t,\cdot) = \mbox{div}\, v(\omega,t,\cdot) = 0$, and \begin{gather*} |v^\epsilon(\omega,t,x)|\leq C, \quad |v^\epsilon(\omega,t,x) - v^\epsilon(\omega,t,y)| \leq C \gamma(|x-y|), \\ |v(\omega,t,x)|\leq C, \quad |v(\omega,t,x) - v(\omega,t,y)| \leq C \gamma(|x-y|), \end{gather*} for every $x,y \in {\mathbb{T}^2}$. \end{itemize} Also, we make the following assumption on the coefficients $(\varsigma_k)_{k \in \mathbb{N}}$: \begin{itemize} \item[(\textbf{A3})] there exists $\ell \geq 1$ such that $\varsigma_k \in W^{\ell,\infty}({\mathbb{T}^2})$ with zero-mean for every $k \in \mathbb{N}$, and moreover \begin{align*} \sum_{k \in \mathbb{N}} \| \varsigma_k \|_{W^{\ell,\infty}({\mathbb{T}^2})} < \infty.
\end{align*} \end{itemize} Similarly to what has been done in the Introduction, given a stochastic flow of measure-preserving homeomorphisms $\phi$ we will use $\phi_t(x)$ as a notational shortcut for $\phi(\omega,\tilde{\omega},t,x)$, thus making implicit the dependence on the randomness variables $\omega,\tilde{\omega}$. The same convention may be used for the fields $v,u,$ \emph{et cetera}. The next result can be proved by repeating the arguments contained in \cite{BrFlMa16} and \cite{FlPa21}. \begin{prop} \label{prop:well_posed_small} Assume (A1)-(A3). Then: \begin{itemize} \item for every $\epsilon>0$ there exists a unique Lagrangian solution $\xi^\epsilon$ of \eqref{eq:euler_intro}, namely there exists a unique stochastic process $\xi^\epsilon : \Omega \times [0,T] \to L^\infty({\mathbb{T}^2})$ weakly progressively measurable with respect to $(\mathcal{F}_t)_{t \in [0,T]}$ such that the equation \begin{align*} \psi^\epsilon_t(x) &= x + \int_0^t v^\epsilon_s(\psi^\epsilon_{s}(x)) ds + \int_0^t u^\epsilon_s(\psi^\epsilon_{s}(x)) ds, \end{align*} with $u^\epsilon=K\ast \xi^\epsilon$, admits a unique inviscid stochastic flow of measure-preserving homeomorphisms $\psi^\epsilon$ as a solution, and moreover \begin{align} \label{eq:xi_lagr} \xi^\epsilon_t(\psi^\epsilon_t(x)) &= \epsilon^{-1} \sum_{k \in \mathbb{N}} \int_0^t e^{-\epsilon^{-1}(t-s)} \varsigma_k(\psi^\epsilon_s(x)) dW^k_s; \end{align} \item for every $\epsilon>0$ there exists a unique stochastic flow of measure-preserving homeomorphisms $\phi^\epsilon$ solution of \eqref{eq:char_eps_intro}, with $u^\epsilon=K\ast \xi^\epsilon$; \item there exists a unique stochastic flow of measure-preserving homeomorphisms $\phi$ solution of \eqref{eq:char_intro}. \end{itemize} \end{prop} \begin{rmk} If $\nu=0$, then both $\phi^\epsilon$ and $\phi$ are inviscid stochastic flows of measure-preserving homeomorphisms, and actually $\phi^\epsilon=\psi^\epsilon$.
The terminology is thus justified, since $\nu=0$ corresponds to null diffusivity/viscosity in the equations for the large-scale dynamics \eqref{eq:large_eps_intro} and \eqref{eq:large_intro}. \end{rmk} \begin{rmk} Formula \eqref{eq:xi_lagr} above corresponds to the solution of \eqref{eq:euler_intro} with initial condition $\xi^\epsilon_0=0$, which we assume throughout this paper for the sake of simplicity. More general initial conditions, such as those considered in \cite{FlPa21}, can be taken into account by simply modifying \eqref{eq:xi_lagr} into \begin{align*} \xi^\epsilon_t(\psi^\epsilon_t(x)) &= e^{-\epsilon^{-1}t}\xi^\epsilon_0(x) + \epsilon^{-1} \sum_{k \in \mathbb{N}} \int_0^t e^{-\epsilon^{-1}(t-s)} \varsigma_k(\psi^\epsilon_s(x)) dW^k_s. \end{align*} \end{rmk} \subsubsection{Notion of solution to the large-scale dynamics} By \autoref{prop:well_posed_small}, under assumptions (A1)-(A3) we can use the Euler flow to represent the solutions of \eqref{eq:large_eps_intro} and \eqref{eq:large_intro}. To be more precise, our notion of solution is given exactly by those processes $\Xi^\epsilon$, $\Xi$ for which \eqref{eq:repr_large_eps} and \eqref{eq:repr_large} hold true, and it is inspired by the notion of generalized solution in \cite[Definition 2.2]{BrFl95}. \begin{definition} \label{def:sol} Assume (A1)-(A3), $q^\epsilon,q \in L^1([0,T],L^\infty({\mathbb{T}^2}))$ for every $\epsilon>0$ and $\Xi_0 \in L^\infty({\mathbb{T}^2})$.
Then: \begin{itemize} \item for every $\epsilon>0$, a measurable map $\Xi^\epsilon : \Omega \times [0,T] \times {\mathbb{T}^2} \to \mathbb{R}$ is called a \emph{generalized solution} to \eqref{eq:large_eps_intro} if it is compatible with $v^\epsilon$ and for every $t \in [0,T]$ it holds \begin{align*} \Xi^\epsilon_t &= \tE{ \Xi_0 \circ (\phi^\epsilon_t)^{-1} + \int_0^t q^\epsilon_s \circ \phi^\epsilon_s \circ(\phi^\epsilon_t)^{-1}ds}, \end{align*} as an equality in $L^\infty(\Omega \times {\mathbb{T}^2})$, where $\phi^\epsilon$ is the unique stochastic flow of measure-preserving homeomorphisms solution of \eqref{eq:char_eps_intro}; \item a measurable map $\Xi: \Omega \times [0,T] \times {\mathbb{T}^2} \to \mathbb{R}$ is called a \emph{generalized solution} to \eqref{eq:large_intro} if it is compatible with $v$ and for every $t \in [0,T]$ it holds \begin{align*} \Xi_t &= \tE{ \Xi_0 \circ (\phi_t)^{-1} + \int_0^t q_s \circ \phi_s \circ(\phi_t)^{-1}ds}, \end{align*} as an equality in $L^\infty(\Omega \times {\mathbb{T}^2})$, where $\phi$ is the unique stochastic flow of measure-preserving homeomorphisms solution of \eqref{eq:char_intro}. \end{itemize} \end{definition} Some remarks on the previous definition are appropriate. First, notice that this notion of solution immediately implies existence and uniqueness in the case of passive large-scale dynamics: indeed, the compatibility condition is vacuous, and $\Xi^\epsilon$ (resp. $\Xi$) depends only on the initial datum $\Xi_0$, the external sources $q^\epsilon$ (resp. $q$), and the characteristics $\phi^\epsilon$ (resp. $\phi$), the latter existing and being unique by \autoref{prop:well_posed_small}. For active dynamics this picture is not correct, since the compatibility condition between the external field and the large-scale variable is not encoded in the representation formula itself.
However, in this paper we will not investigate well-posedness for this notion of solution in full generality, but rather assume we are given generalized solutions $\Xi^\epsilon$, $\Xi$ to work with. Secondly, it is worth mentioning that every sufficiently smooth generalized solution of \eqref{eq:large_eps_intro} or \eqref{eq:large_intro} is also a classical solution, as can be proved following the lines of \cite[Theorem 2.2 and Proposition 2.7]{CoIy08}. On the other hand, our notion of generalized solution is weaker than the notion of $L^\infty$-weak solution contained in \cite{BrFlMa16}, which we recall now: \begin{definition} Assume (A1)-(A3), $q^\epsilon,q \in L^1([0,T],L^\infty({\mathbb{T}^2}))$ for every $\epsilon>0$ and $\Xi_0 \in L^\infty({\mathbb{T}^2})$. For $f,g:{\mathbb{T}^2} \to \mathbb{R}$, denote $\langle f, g \rangle \coloneqq \int_{\mathbb{T}^2} f(x)g(x)dx$. Then: \begin{itemize} \item for every $\epsilon>0$, a stochastic process $\Xi^\epsilon : \Omega \times [0,T] \to L^\infty({\mathbb{T}^2})$ is called an \emph{$L^\infty$-weak solution} of \eqref{eq:large_eps_intro} if it is weakly progressively measurable with respect to $(\mathcal{F}_t)_{t \in [0,T]}$ and for every smooth test function $f \in C^\infty({\mathbb{T}^2})$ it holds $\mathbb{P}$-a.s. for every $t \in [0,T]$: \begin{align*} \langle \Xi^\epsilon_t , f \rangle - \langle \Xi^\epsilon_0 , f \rangle &= \int_0^t \langle \Xi^\epsilon_s , (v^\epsilon_s + u^\epsilon_s) \cdot \nabla f \rangle ds \\ &\quad + \int_0^t \langle \Xi^\epsilon_s , \nu \Delta f \rangle ds + \int_0^t \langle q^\epsilon_s , f \rangle ds; \end{align*} \item a stochastic process $\Xi : \Omega \times [0,T] \to L^\infty({\mathbb{T}^2})$ is called an \emph{$L^\infty$-weak solution} of \eqref{eq:large_intro} if it is weakly progressively measurable with respect to $(\mathcal{F}_t)_{t \in [0,T]}$ and for every smooth test function $f \in C^\infty({\mathbb{T}^2})$ it holds $\mathbb{P}$-a.s.
for every $t \in [0,T]$: \begin{align*} \langle \Xi_t , f \rangle - \langle \Xi_0 , f \rangle &= \int_0^t \langle \Xi_s , v_s \cdot \nabla f \rangle ds + \sum_{k \in \mathbb{N}} \int_0^t \langle \Xi_s , \sigma_k \cdot \nabla f \rangle \circ dW^k_s \\ &\quad + \int_0^t \langle \Xi_s , \nu \Delta f \rangle ds + \int_0^t \langle q_s , f \rangle ds. \end{align*} \end{itemize} \end{definition} \begin{prop} Assume (A1)-(A3), $q^\epsilon,q \in L^1([0,T],L^\infty({\mathbb{T}^2}))$ for every $\epsilon>0$ and $\Xi_0 \in L^\infty({\mathbb{T}^2})$. Then every $L^\infty$-weak solution to \eqref{eq:large_eps_intro} is also a generalized solution to \eqref{eq:large_eps_intro}, and every $L^\infty$-weak solution to \eqref{eq:large_intro} is also a generalized solution to \eqref{eq:large_intro}. \end{prop} \begin{proof} The strategy of the proof is similar to \cite[Proposition 5.3]{BrFlMa16} and \cite[Theorem 20]{FlGuPr10}, and consists in taking the convolution of an $L^\infty$-weak solution with a smooth mollifier $\vartheta_\delta = \delta^{-2} \vartheta(\delta^{-1} \cdot)$, $\delta>0$, and then taking the limit as $\delta \to 0$. Let $\Xi^\epsilon$ be an $L^\infty$-weak solution of \eqref{eq:large_eps_intro} and $\Xi$ be an $L^\infty$-weak solution of \eqref{eq:large_intro}, in the sense of the previous definition.
Using $f = \vartheta_\delta(y-\cdot)$ as a test function, $y \in {\mathbb{T}^2}$, and denoting $\Xi^\epsilon_\delta \coloneqq \vartheta_\delta \ast \Xi^\epsilon$, $\Xi_\delta \coloneqq \vartheta_\delta \ast \Xi$ we get (omitting the parameter $\omega$) \begin{align*} \Xi^\epsilon_\delta(t,y) - \Xi^\epsilon_\delta(0,y) &= \int_0^t \int_{\mathbb{T}^2} \Xi^\epsilon(s,x) (v^\epsilon(s,x) + u^\epsilon(s,x)) \cdot \nabla_x \vartheta_\delta (y-x) dx ds \\ &\quad + \nu \int_0^t \int_{\mathbb{T}^2} \Xi^\epsilon(s,x) \Delta_x \vartheta_\delta (y-x) dx ds \\ &\quad + \int_0^t \int_{\mathbb{T}^2} q^\epsilon(s,x) \vartheta_\delta (y-x) dx ds, \end{align*} and \begin{align*} \Xi_\delta(t,y) - \Xi_\delta(0,y) &= \int_0^t \int_{\mathbb{T}^2} \Xi(s,x) v(s,x) \cdot \nabla_x \vartheta_\delta (y-x) dx ds \\ &\quad + \sum_{k \in \mathbb{N}} \int_0^t \int_{\mathbb{T}^2} \Xi(s,x) \sigma_k(x) \cdot \nabla_x \vartheta_\delta (y-x) dx \circ dW^k_s \\ &\quad + \nu \int_0^t \int_{\mathbb{T}^2} \Xi(s,x) \Delta_x \vartheta_\delta (y-x) dx ds \\ &\quad + \int_0^t \int_{\mathbb{T}^2} q(s,x) \vartheta_\delta (y-x) dx ds. 
\end{align*} Since $\Xi^\epsilon_\delta$, $\Xi_\delta$ are smooth functions in the variable $y$, we can write the equivalent expressions in differential notation \begin{align*} d\Xi^\epsilon_\delta(t,y) &+ \nabla\Xi^\epsilon_\delta(t,y) \cdot(v^\epsilon(t,y) + u^\epsilon(t,y)) dt \\ &= \int_{\mathbb{T}^2} \Xi^\epsilon(t,x) (v^\epsilon(t,x) + u^\epsilon(t,x)) \cdot \nabla_x \vartheta_\delta (y-x) dx dt \\ &\quad + \nu \int_{\mathbb{T}^2} \Xi^\epsilon(t,x) \Delta_x \vartheta_\delta (y-x) dx dt + \int_{\mathbb{T}^2} q^\epsilon(t,x) \vartheta_\delta (y-x) dx dt \\ &\quad + \nabla \Xi^\epsilon_\delta(t,y) \cdot (v^\epsilon(t,y) + u^\epsilon(t,y)) dt, \end{align*} and \begin{align*} d\Xi_\delta(t,y) &+ \nabla\Xi_\delta(t,y) \cdot v(t,y) dt + \sum_{k \in \mathbb{N}} \nabla\Xi_\delta(t,y) \cdot \sigma_k(y) \circ dW^k_t \\ &= \int_{\mathbb{T}^2} \Xi(t,x) v(t,x) \cdot \nabla_x \vartheta_\delta (y-x) dx dt \\ &\quad + \sum_{k \in \mathbb{N}} \int_{\mathbb{T}^2} \Xi(t,x) \sigma_k(x) \cdot \nabla_x \vartheta_\delta (y-x) dx \circ dW^k_t \\ &\quad + \nu\int_{\mathbb{T}^2} \Xi(t,x) \Delta_x \vartheta_\delta (y-x) dx dt + \int_{\mathbb{T}^2} q(t,x) \vartheta_\delta (y-x) dx dt \\ &\quad+ \nabla\Xi_\delta(t,y) \cdot v(t,y) dt + \sum_{k \in \mathbb{N}} \nabla\Xi_\delta(t,y) \cdot \sigma_k(y) \circ dW^k_t. \end{align*} Notice that the following formulas for the gradient of the convolution hold true: $\nabla \Xi^\epsilon_\delta(t,y) = -\int_{\mathbb{T}^2} \Xi^\epsilon(t,x) \nabla_x \vartheta_\delta(y-x) dx$, and $\nabla \Xi_\delta(t,y) = -\int_{\mathbb{T}^2} \Xi(t,x) \nabla_x \vartheta_\delta(y-x) dx$; also, $\Delta_x \vartheta_\delta (y-x) = \Delta_y \vartheta_\delta (y-x)$.
Substituting into the previous expressions, we get \begin{align*} d\Xi^\epsilon_\delta(t,y) &+ \nabla\Xi^\epsilon_\delta(t,y) \cdot(v^\epsilon(t,y) + u^\epsilon(t,y)) dt \\ &= \left[ - \vartheta_\delta \ast \left( \nabla \Xi^\epsilon_t \cdot \left( v^\epsilon_t + u^\epsilon_t \right) \right) + \left( v^\epsilon_t + u^\epsilon_t \right) \cdot \left( \vartheta_\delta \ast \nabla \Xi^\epsilon_t \right) \right] (y) dt \\ &\quad + \nu \Delta \Xi^\epsilon_\delta(t,y) dt + q^\epsilon_\delta(t,y) dt \\ &= R_\delta \left[ v^\epsilon_t + u^\epsilon_t , \Xi^\epsilon_t \right] (y) dt + \nu \Delta \Xi^\epsilon_\delta(t,y) dt + q^\epsilon_\delta(t,y) dt, \end{align*} and \begin{align*} d\Xi_\delta(t,y) &+ \nabla\Xi_\delta(t,y) \cdot v(t,y) dt + \sum_{k \in \mathbb{N}} \nabla\Xi_\delta(t,y) \cdot \sigma_k(y) \circ dW^k_t \\ &= \left[ - \vartheta_\delta \ast \left( \nabla \Xi_t \cdot v_t\right) + v_t \cdot \left( \vartheta_\delta \ast \nabla \Xi_t\right)\right] (y) dt \\ &\quad + \sum_{k \in \mathbb{N}} \left[ - \vartheta_\delta \ast \left( \nabla \Xi_t \cdot \sigma_k \right) + \sigma_k \cdot \left( \vartheta_\delta \ast \nabla \Xi_t\right)\right] (y) \circ dW^k_t \\ &\quad + \nu \Delta \Xi_\delta(t,y) dt + q_\delta(t,y) dt \\ &= R_\delta \left[ v_t , \Xi_t \right] (y) dt + \sum_{k \in \mathbb{N}} R_\delta \left[ \sigma_k , \Xi_t \right] (y) \circ dW^k_t \\ &\quad + \nu \Delta \Xi_\delta(t,y) dt + q_\delta(t,y) dt, \end{align*} where we have defined $q^\epsilon_\delta \coloneqq \vartheta_\delta \ast q^\epsilon$, $q_\delta \coloneqq \vartheta_\delta \ast q$ and the commutator \begin{align*} R_\delta \left[ v , \Xi \right] \coloneqq - \vartheta_\delta \ast \left( \nabla \Xi \cdot v \right) + v \cdot \left( \vartheta_\delta \ast \nabla \Xi \right). \end{align*} We have obtained differential equations for the spatially smooth processes $\Xi^\epsilon_\delta$ and $\Xi_\delta$. 
Applying the backward It\^{o} formula to the processes $s \mapsto \Xi^\epsilon_\delta(s,\phi^\epsilon_s ((\phi^\epsilon_t)^{-1}(y)))$ and $s \mapsto \Xi_\delta(s,\phi_s ((\phi_t)^{-1}(y)))$, for fixed $t \in [0,T]$, and taking the expectation with respect to $\tilde{\mathbb{P}}$, we obtain that the process $\Xi^\epsilon_\delta$ is given by \begin{align} \label{eq:Xi_esp_delta} \Xi^\epsilon_\delta(t,y) &= \tE{\Xi^\epsilon_\delta(0,(\phi^\epsilon_t)^{-1}(y)) + \int_0^t q^\epsilon_\delta(s,\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(y))) ds } \\ &\quad+ \nonumber \tE{ \int_0^t R_\delta \left[ v^\epsilon_s + u^\epsilon_s , \Xi^\epsilon_s \right] (\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(y))) ds}, \end{align} whereas the process $\Xi_\delta$ is given by \begin{align} \label{eq:Xi_delta} \Xi_\delta(t,y) &= \tE{\Xi_\delta(0,(\phi_t)^{-1}(y)) + \int_0^t q_\delta(s,\phi_s((\phi_t)^{-1}(y))) ds } \\ &\quad+ \nonumber \tE{ \int_0^t R_\delta \left[ v_s , \Xi_s \right] (\phi_s((\phi_t)^{-1}(y))) ds} \\ &\quad+ \sum_{k\in \mathbb{N}} \tE{ \int_0^t R_\delta \left[ \sigma_k , \Xi_s \right] (\phi_s((\phi_t)^{-1}(y))) \circ dW^k_s}. \nonumber \end{align} Let us focus on \eqref{eq:Xi_esp_delta}. By well-known properties of mollifiers, for every fixed $\omega \in \Omega$ and $t \in [0,T]$, the left-hand side $\Xi^\epsilon_\delta(\omega,t,\cdot) \to \Xi^\epsilon(\omega,t,\cdot)$ in $L^1({\mathbb{T}^2})$ as $\delta \to 0$.
Concerning the right-hand side, a commutator lemma \cite[Lemma 17]{FlGuPr10} yields for every fixed $\epsilon>0$ \begin{align*} \lim_{\delta \to 0} \int_{\mathbb{T}^2} \left|\tE{ \int_0^t R_\delta \left[ v^\epsilon_s + u^\epsilon_s , \Xi^\epsilon_s \right] (\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(y))) ds} \right| dy = 0, \end{align*} and by well-known properties of mollifiers and the Lebesgue dominated convergence theorem we can prove the convergence \begin{align*} &\tE{\Xi^\epsilon_\delta(0,(\phi^\epsilon_t)^{-1}) + \int_0^t q^\epsilon_\delta(s,\phi^\epsilon_s((\phi^\epsilon_t)^{-1})) ds } \\ &\quad+ \nonumber \tE{ \int_0^t R_\delta \left[ v^\epsilon_s + u^\epsilon_s , \Xi^\epsilon_s \right] (\phi^\epsilon_s((\phi^\epsilon_t)^{-1})) ds} \\ &\to \tE{\Xi^\epsilon(0,(\phi^\epsilon_t)^{-1}) + \int_0^t q^\epsilon(s,\phi^\epsilon_s((\phi^\epsilon_t)^{-1})) ds } \end{align*} in $L^1({\mathbb{T}^2})$ as $\delta \to 0$, for almost every $\omega \in \Omega$ and $t \in [0,T]$. Therefore, by \eqref{eq:Xi_esp_delta} and the uniqueness of the $L^1({\mathbb{T}^2})$ limit, we have, for almost every $\omega \in \Omega$, $t \in [0,T]$ and $y \in {\mathbb{T}^2}$: \begin{align*} \Xi^\epsilon(t,y) = \tE{\Xi^\epsilon(0,(\phi^\epsilon_t)^{-1}(y)) + \int_0^t q^\epsilon(s,\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(y))) ds }, \end{align*} which is exactly the desired representation formula \eqref{eq:repr_large_eps}. The argument for \eqref{eq:Xi_delta} is similar, with only a minor complication due to the stochastic integral, and we leave it to the reader. \end{proof} As a final remark, since we have seen that the notion of generalized solution is weaker than the notion of $L^\infty$-weak solution, our results are indeed very general: they can be applied at least to every $L^\infty$-weak solution. \subsection{Statement of main results} \subsubsection{Convergence of characteristics} Denote by $|x-y|$ the geodesic distance on the flat two-dimensional torus between points $x,y \in {\mathbb{T}^2}$.
To keep the notation simple, we define the following quantity associated with a measurable map $\phi:{\mathbb{T}^2} \to {\mathbb{T}^2}$: \begin{align*} \|\phi\|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})} \coloneqq \int_{\mathbb{T}^2} |\phi(x)| dx. \end{align*} Notice that $\| \cdot \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}$ is not a norm on the space of measurable maps $\phi:{\mathbb{T}^2}\to{\mathbb{T}^2}$; in particular, it is not positively homogeneous. However, $\| \cdot \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}$ induces a distance on the space $C({\mathbb{T}^2},{\mathbb{T}^2})$ of continuous maps $\phi:{\mathbb{T}^2} \to {\mathbb{T}^2}$. Similarly, we define $\| \cdot \|_{L^\infty({\mathbb{T}^2},{\mathbb{T}^2})}$ as \begin{align*} \| \phi \|_{L^\infty({\mathbb{T}^2},{\mathbb{T}^2})} \coloneqq \mbox{ess}\sup_{x \in {\mathbb{T}^2}} |\phi(x)|. \end{align*} In order to prove convergence of characteristics $\phi^\epsilon \to \phi$, it is clear that one needs some sort of control on the difference $v^\epsilon - v$. Therefore, we assume: \begin{itemize} \item[(\textbf{A4})] there exist a constant $C$ and a negligible set $\mathcal{N} \subset \Omega$ such that for every $\omega \in \mathcal{N}^c$, $\epsilon > 0$ and $t \in [0,T]$: \begin{align*} \| v^\epsilon(\omega,t,\cdot) - v(\omega,t,\cdot) \|_{L^1({\mathbb{T}^2},\mathbb{R}^2)} &\leq C \gamma\left( \tE{\| \phi^\epsilon_t - \phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} \right) \\ &\quad+ C \int_0^t \gamma\left( \tE{\| \phi^\epsilon_s - \phi_s \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} \right) ds + c_\epsilon, \end{align*} where $c_\epsilon \in \mathbb{R}$ vanishes as $\epsilon\to 0$, $\phi^\epsilon_t=\phi^\epsilon(\omega,\tilde{\omega},t,\cdot)$ is the unique solution of \eqref{eq:char_eps_intro}, and $\phi_t=\phi(\omega,\tilde{\omega},t,\cdot)$ is the unique solution of \eqref{eq:char_intro}.
\end{itemize} A little less clear, at this point, is our next assumption on the coefficients $(\varsigma_k)_{k \in \mathbb{N}}$: \begin{itemize} \item[(\textbf{A5})] for every $x \in {\mathbb{T}^2}$ it holds \begin{align*} \sum_{k \in \mathbb{N}} ((K\ast \varsigma_k) \cdot \nabla \varsigma_k)(x) = 0. \end{align*} \end{itemize} The motivation for assumption (A5) will become evident during the proof of \autoref{prop:sup_z} in \autoref{sec:tech}. We are ready to state our first main result: \begin{thm} \label{thm:char} Assume (A1)-(A5). Let $\hE{\cdot} \coloneqq \E{\tE{\cdot}}$ denote the expectation on $\hat{\Omega} \coloneqq \Omega \times \tilde{\Omega}$ with respect to the probability measure $\hat{\mathbb{P}} \coloneqq \mathbb{P} \otimes \tilde{\mathbb{P}}$. Then \begin{align*} \sup_{t \in [0,T]} \hE{ \| \phi_t^\epsilon-\phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} \to 0 \quad \mbox{ as } \epsilon \to 0. \end{align*} \end{thm} \subsubsection{Convergence of large-scale dynamics} Let $q^\epsilon,q : [0,T] \times {\mathbb{T}^2} \to \mathbb{R}$ be such that: \begin{itemize} \item[(\textbf{A6})] there exists a constant $C$ such that for every $\epsilon>0$ it holds $q^\epsilon,q \in L^1([0,T],L^\infty({\mathbb{T}^2}))$ and \begin{align*} \int_0^T \| q^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} ds \leq C, \qquad \int_0^T \| q_s \|_{L^\infty({\mathbb{T}^2})} ds \leq C; \end{align*} \item[(\textbf{A7})] $q^\epsilon-q$ converges to zero in $L^1([0,T],L^\infty({\mathbb{T}^2}))$. \end{itemize} Our second main result is the following: \begin{thm} \label{thm:conv_large} Assume (A1)-(A7) and $\Xi_0\in L^\infty({\mathbb{T}^2})$.
Then the solution $\Xi^\epsilon$ of \eqref{eq:large_eps_intro} converges towards the solution $\Xi$ of \eqref{eq:large_intro} in the following sense: for every $f \in L^1({\mathbb{T}^2})$ \begin{align*} \E{\left|\int_{\mathbb{T}^2} \Xi^\epsilon_t(x) f(x)dx -\int_{\mathbb{T}^2} \Xi_t(x) f(x)dx \right|} \to 0 \quad \mbox{ as } \epsilon \to 0, \end{align*} for every fixed $t\in [0,T]$ and in $L^p([0,T])$ for every finite $p$. Moreover, if $q \in L^1([0,T],Lip({\mathbb{T}^2}))$ then the previous convergence holds uniformly in $t \in [0,T]$ and $f \in Lip({\mathbb{T}^2})$ with Lipschitz constant $[f]_{Lip({\mathbb{T}^2})} \leq 1$ and $\|f\|_{L^\infty({\mathbb{T}^2})} \leq 1$. \end{thm} \section{Technical results} \label{sec:tech} In this section and in the rest of the paper, the symbol $\lesssim$ will indicate inequality up to an unimportant multiplicative constant $C$ not depending on $\epsilon$. \subsection{Linearized dynamics} For $\epsilon>0$, denote by $\theta^\epsilon$ the solution of the linear problem \begin{align*} d\theta^\epsilon_t=-\epsilon^{-1} \theta^\epsilon_t dt + \epsilon^{-1} \sum_{k \in\mathbb{N}} \varsigma_k dW^k_t, \end{align*} with initial condition $\theta^\epsilon|_{t=0}=0$. The process $\theta^\epsilon$ is explicitly given by the formula $\theta^\epsilon_t = \sum_{k \in \mathbb{N}} \varsigma_k \eta^{\epsilon,k}_t$, where \begin{align*} \eta^{\epsilon,k}_t \coloneqq \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} dW^k_s, \quad k \in \mathbb{N}, \end{align*} is the so-called Ornstein-Uhlenbeck process with null initial condition. 
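Indeed, the representation above is readily verified by the variation-of-constants formula: each $\eta^{\epsilon,k}$ satisfies \begin{align*} d\eta^{\epsilon,k}_t = -\epsilon^{-1} \left( \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} dW^k_s \right) dt + \epsilon^{-1} dW^k_t = -\epsilon^{-1} \eta^{\epsilon,k}_t dt + \epsilon^{-1} dW^k_t, \end{align*} with $\eta^{\epsilon,k}_0 = 0$, so that $\theta^\epsilon_t = \sum_{k \in \mathbb{N}} \varsigma_k \eta^{\epsilon,k}_t$ solves the linear problem above. 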
By \cite[Theorem 2.2]{JiZh20}, for every fixed $p \geq 1$ it holds uniformly in $k \in \mathbb{N}$ \begin{align} \label{eq:OU} \E{\sup_{t \in [0,T]} |\eta^{\epsilon,k}_t|^p } \lesssim \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}), \end{align} and therefore by assumption (A3) \begin{align} \label{eq:est_theta} \E{\sup_{t \in [0,T]} \|\theta^\epsilon_t\|^p_{W^{1,\infty}({\mathbb{T}^2})} } \lesssim \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}). \end{align} The difference $\zeta^\epsilon \coloneqq \xi^{\epsilon} - \theta^\epsilon$ between the small-scale vorticity $\xi^\epsilon$ and $\theta^\epsilon$ solves the equation \begin{align*} d\zeta^\epsilon_t + (v^\epsilon_t+u^\epsilon_t)\cdot\nabla \zeta^\epsilon_t dt = -\epsilon^{-1} \zeta^\epsilon_t dt - (v^\epsilon_t+u^\epsilon_t)\cdot\nabla \theta^\epsilon_t dt \end{align*} with initial condition $\zeta^\epsilon_0=0$, whose solution satisfies \begin{align} \label{eq:zeta_psi} \zeta^\epsilon_t(\psi^\epsilon_t(x)) &= -\int_0^t e^{-\epsilon^{-1}(t-s)} ((v^\epsilon_s + u^\epsilon_s) \cdot\nabla \theta^\epsilon_s)(\psi^\epsilon_s(x)) ds. \end{align} In the following, for $t \in [0,T]$ and $x \in {\mathbb{T}^2}$ we denote $z^\epsilon_t(x) = (K \ast \zeta^\epsilon_t)(x)$. \subsection{Main technical results} We are going to prove two main technical results, needed for the proof of \autoref{thm:char}. Since our strategy consists in replicating the proof of \cite[Proposition 4.1]{FlPa21}, the first result we need is the following: \begin{prop} \label{prop:old} Assume (A1)-(A3). Then the following inequality holds: \begin{align*} \hE{ \sup_{t \in [0,T]}\left\| \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(\cdot)) \eta^{\epsilon,k}_s ds - \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(\cdot)) \circ dW^k_s \right\|_{L^1({\mathbb{T}^2},\mathbb{R}^2)}} \\ \lesssim \epsilon^{1/42} \log^{47/42}(1+\epsilon^{-1}). 
\end{align*} \end{prop} In \cite[Section 4]{FlPa21} a similar estimate was proven along the way, using a considerable number of auxiliary lemmas and computations. In view of this, here we refrain from going again into full detail, and the proof of \autoref{prop:old} will only be sketched. On the other hand, the nonlinear term in \eqref{eq:euler_intro} produces a new term in the equation of characteristics, which was absent in \cite{FlPa21}. Although the final result is not affected by this new term, it is not trivial to actually prove so. We need the following: \begin{prop} \label{prop:sup_z} Assume (A1)-(A5). Then: \begin{align*} \hE{\sup_{t \in [0,T]}\left\|\int_0^t z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} \lesssim \epsilon^{1/12} \log^{11/12}(1+\epsilon^{-1}). \end{align*} \end{prop} This constitutes the main novelty with respect to \cite{FlPa21}. The proof of \autoref{prop:sup_z} relies strongly on assumption (A5) and the following It\=o Formulas, yielding for every fixed $t\in [0,T]$ and $k,h \in \mathbb{N}$: \begin{align*} \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t &= e^{-\epsilon^{-1} t }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 - \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &\quad +\epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s dW^h_s +\epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,h}_s dW^k_s \\ &\quad +\delta_{k,h} \frac{\epsilon^{-2}}{2} \int_0^t e^{-\epsilon^{-1}(t-s)} ds, \\ \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t &= \eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 - 2\epsilon^{-1} \int_0^t \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &\quad +\epsilon^{-1} \int_0^t \eta^{\epsilon,k}_s dW^h_s +\epsilon^{-1} \int_0^t \eta^{\epsilon,h}_s dW^k_s +\delta_{k,h} \frac{\epsilon^{-2} t}{2} , \end{align*} with $\delta_{k,h}$ being the Kronecker delta function, which allow us to control the time integral of the quadratic terms $\eta^{\epsilon,k}_s 
\eta^{\epsilon,h}_s$. \subsection{Proof of \autoref{prop:old}} In this paragraph we recall the argument contained in \cite{FlPa21}. Roughly speaking, \autoref{prop:old} is a sort of Wong-Zakai result for the Ornstein-Uhlenbeck process $\eta^{\epsilon,k}$ converging to a white-in-time noise, that is the formal time derivative of the Wiener process $W^k$. We need to exploit a discretization of \eqref{eq:char_eps_intro} to show the closeness, in a certain sense to be specified, between the Stratonovich-to-It\=o corrector $c:\mathbb{T}^2 \to \mathbb{R}^2$, given by: \begin{align*} c(x)=\frac12 \sum_{k \in \mathbb{N}} \nabla\sigma_k(x) \cdot \sigma_k(x), \quad x \in \mathbb{T}^2, \end{align*} coming from the stochastic integral, and the iterated time integral of the Ornstein-Uhlenbeck process. In order to discretize the problem, for every $\epsilon>0$ take a mesh $\delta>0$ such that $T/\delta$ is an integer. For any $n=0,\dots,T/\delta-1$ and fixed $x \in {\mathbb{T}^2}$, consider the following decomposition: \begin{align*} \sum_{k \in \mathbb{N}} \int_{n\delta}^{(n+1)\delta} \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds &= \sum_{k \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left( \int_{n\delta}^{s} \nabla\sigma_k(\phi^\epsilon_r(x)) \cdot v^\epsilon_r(\phi^\epsilon_r(x)) dr \right)\eta^{\epsilon,k}_s ds \\ &\quad+ \sum_{k \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left( \int_{n\delta}^{s} \nabla\sigma_k(\phi^\epsilon_r(x)) \cdot z^\epsilon_r(\phi^\epsilon_r(x)) dr \right)\eta^{\epsilon,k}_s ds \\ &\quad+ \sum_{k,h \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left( \int_{n\delta}^{s} \nabla\sigma_k(\phi^\epsilon_r(x)) \cdot \sigma_h(\phi^\epsilon_r(x)) \eta^{\epsilon,h}_r dr \right)\eta^{\epsilon,k}_s ds \\ &\quad+ \sum_{k \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left( \int_{n\delta}^{s} \nabla\sigma_k(\phi^\epsilon_r(x)) \cdot \sqrt{2\nu} dw_r \right)\eta^{\epsilon,k}_s ds \\ &\quad+ \sum_{k \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} 
\sigma_k(\phi^\epsilon_{n\delta}(x)) dW^k_s \\ &\quad- \sum_{k \in \mathbb{N}} \int_{n\delta}^{(n+1)\delta} \sigma_k(\phi^\epsilon_{n\delta}(x)) \epsilon d\eta^{\epsilon,k}_s \\ \eqqcolon&\, I^\epsilon_1(n) + I^\epsilon_2(n) + I^\epsilon_3(n) + I^\epsilon_4(n) + I^\epsilon_5(n) + I^\epsilon_6(n). \end{align*} Regarding the Stratonovich integral, we can rewrite: \begin{align*} \sum_{k\in \mathbb{N}} \int_{n\delta}^{(n+1)\delta}\sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s = &\sum_{k\in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left(\sigma_k(\phi^\epsilon_s(x))-\sigma_k(\phi^\epsilon_{n\delta}(x))\right) dW^k_s \\ &+ \sum_{k\in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \sigma_k(\phi^\epsilon_{n\delta}(x)) dW^k_s \\ &+ \int_{n\delta}^{(n+1)\delta} \left( c(\phi^\epsilon_s(x))- c(\phi^\epsilon_{n\delta}(x)) \right) ds \\ &+ \int_{n\delta}^{(n+1)\delta} c(\phi^\epsilon_{n\delta}(x)) ds \\ \eqqcolon&\, J^\epsilon_1(n) + J^\epsilon_2(n) + J^\epsilon_3(n) + J^\epsilon_4(n). \end{align*} The ingredients for the proof of \autoref{prop:old} are: \begin{itemize} \item a good estimate on $\hE{\sup_{t \in [0,T]} |z^\epsilon_t(\phi^\epsilon_t(x))|}$ (cf. \autoref{lem:zeta_log}), needed to control $I^\epsilon_2(n)$; \item a good estimate on $\hE{\sup_{\tau \leq \delta} |\phi^\epsilon_{\tau + n\delta}(x)-\phi^\epsilon_{n\delta}(x)|}$ (cf. \autoref{lem:phi_eps_incr}), needed to approximate $I^\epsilon_3(n)$ with \begin{align} \label{eq:I_3approx} \sum_{k,h \in \mathbb{N}} \nabla\sigma_k(\phi^\epsilon_{n\delta}(x)) \cdot \sigma_h(\phi^\epsilon_{n\delta}(x)) \int_{n\delta}^{(n+1)\delta} \left( \int_{n\delta}^{s} \eta^{\epsilon,h}_r dr \right)\eta^{\epsilon,k}_s ds ; \end{align} \item a better estimate on $\hE{|\phi^\epsilon_{(n+1)\delta}(x)-\phi^\epsilon_{n\delta}(x)|}$ (cf. \autoref{lem:phi_eps_incr_bis}), needed to control $I^\epsilon_6(n)$ with a discrete integration by parts. 
\end{itemize} Notice that $I^\epsilon_5(n) = J^\epsilon_2(n)$, and the expression in \eqref{eq:I_3approx}, which approximates $I^\epsilon_3(n)$, must be compensated by subtracting $J^\epsilon_4(n)$. \begin{lem} \label{lem:zeta_log} Assume (A1)-(A3). Then for every fixed $p\geq 1$ it holds \begin{align*} \E{\sup_{t \in [0,T]} \|\zeta^\epsilon_t\|^p_{L^\infty({\mathbb{T}^2})} } \lesssim \log^p(1+\epsilon^{-1}). \end{align*} In particular, since $z^\epsilon_t = K \ast \zeta^\epsilon_t$ we also have \begin{align*} \E{\sup_{t \in [0,T]} \|z^\epsilon_t\|^p_{L^\infty({\mathbb{T}^2})} } \lesssim \log^p(1+\epsilon^{-1}). \end{align*} \end{lem} \begin{proof} We first prove the weaker estimate: \begin{align} \label{eq:est_zeta_loose} \E{\sup_{t \in [0,T]} \|\zeta^\epsilon_t\|^p_{L^\infty({\mathbb{T}^2})} } \lesssim \epsilon^{-p}. \end{align} Since $\theta^\epsilon$ satisfies the bound above by \eqref{eq:est_theta}, it suffices to prove it for $\xi^\epsilon$. Denote $M^\epsilon_t(x) = \sum_{k \in \mathbb{N}} \int_0^t \varsigma_k(\psi^\epsilon_s(x)) dW^k_s$. Since for every $s,t \in [0,T]$ \begin{align*} \E{\| M^\epsilon_t - M^\epsilon_s \|^4_{L^\infty({\mathbb{T}^2})}} \lesssim \left( \sum_{k \in \mathbb{N}} \| \varsigma_k\|_{L^\infty({\mathbb{T}^2})}^2 \right)^2 (t-s)^2, \end{align*} by (A3) and the Kolmogorov continuity theorem the process $M^\epsilon:\Omega \times [0,T] \to L^\infty({\mathbb{T}^2})$ has a modification $\tilde{M}^\epsilon$ that is $\alpha$-H\"older continuous for every $\alpha<1/4$, with $\alpha$-H\"older constant $K_{\epsilon,\alpha}$ bounded in $L^p(\Omega)$ for every $p<\infty$ uniformly in $\epsilon$. Since $M^\epsilon$ has continuous trajectories, $M^\epsilon_t=\tilde{M}^\epsilon_t$ a.s. 
as random variables in $L^\infty({\mathbb{T}^2})$ and \begin{align*} \xi^\epsilon_t(\psi^\epsilon_t(x)) &= \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} dM^\epsilon_s(x) \\ &= \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} d(M^\epsilon_s(x)-M^\epsilon_t(x)) \\ &= \epsilon^{-1} \left[ e^{-\epsilon^{-1}(t-s)} (M^\epsilon_s(x)-M^\epsilon_t(x)) \right]^{s=t}_{s=0} \\ &\quad - \epsilon^{-2} \int_0^t e^{-\epsilon^{-1}(t-s)} (M^\epsilon_s(x)-M^\epsilon_t(x)) ds. \end{align*} Clearly $\|\xi^\epsilon_t\|_{L^\infty({\mathbb{T}^2})} = \| \xi^\epsilon_t \circ \psi^\epsilon_t \|_{L^\infty({\mathbb{T}^2})}$, and therefore \begin{align*} \| \xi^\epsilon_t \|_{L^\infty({\mathbb{T}^2})} &\leq \epsilon^{-1} e^{-\epsilon^{-1}t} \|M^\epsilon_t\|_{L^\infty({\mathbb{T}^2})} +\epsilon^{-1} K_{\epsilon,\alpha}, \end{align*} and \eqref{eq:est_zeta_loose} follows. Recalling \eqref{eq:zeta_psi}, the following inequality holds \begin{align} \label{eq:zeta_iterative} \|\zeta^\epsilon_t\|_{L^\infty({\mathbb{T}^2})} \leq \int_0^t e^{-\epsilon^{-1}(t-s)} \|(v^\epsilon_s + u^\epsilon_s) \cdot\nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} ds. 
\end{align} Using assumption (A2) and $u^\epsilon_s = K \ast \zeta^\epsilon_s + K \ast \theta^\epsilon_s$ we get \begin{align*} \|(v^\epsilon_s + u^\epsilon_s) \cdot\nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} &\lesssim \left( 1 + \| \zeta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} + \| \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} \right) \| \nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})}, \end{align*} which can be plugged back into \eqref{eq:zeta_iterative} to produce the recursive estimate \begin{align*} \|\zeta^\epsilon_t\|_{L^\infty({\mathbb{T}^2})} &\lesssim \int_0^t e^{-\epsilon^{-1}(t-s)} \left( 1 + \| \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} \right) \| \nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} ds \\ &\quad+ \int_0^t e^{-\epsilon^{-1}(t-s)} \|\zeta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} \| \nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} ds \\ &\lesssim \epsilon \left( 1 + \sup_{s \in [0,T]}\| \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} + \sup_{s \in [0,T]}\| \zeta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})} \right) \sup_{s \in [0,T]}\| \nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2})}. \end{align*} By H\"older's inequality and \eqref{eq:est_zeta_loose} we deduce from the previous inequality \begin{align*} \E{ \sup_{t \in [0,T]} \|\zeta^\epsilon_t\|^p_{L^\infty({\mathbb{T}^2})}} \lesssim \log^p(1+\epsilon^{-1}) + \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}), \end{align*} improving the bound \eqref{eq:est_zeta_loose} itself. Iterating the same argument one more time we obtain the desired estimate. \end{proof} \begin{lem} \label{lem:phi_eps_incr} Assume (A1)-(A3). Then for every fixed $p\geq 1$ and $\alpha \in (0,1/2)$ \begin{equation*} \hE{\sup_{\substack{t+\tau \leq T \\ \tau \leq \delta}} \| \phi^\epsilon_{t+\tau} - \phi^\epsilon_t \|^p_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)}} \lesssim \delta^p \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}) + \delta^{p\alpha}. 
\end{equation*} \end{lem} \begin{proof} The increment $\phi^\epsilon_{t+\tau}(x) - \phi^\epsilon_{t}(x)$ can be written as \begin{align*} \phi^\epsilon_{t+\tau}(x) - \phi^\epsilon_{t}(x) = &\int_{t}^{t+\tau} v^\epsilon_s(\phi^\epsilon_s(x)) ds + \sum_{k \in \mathbb{N}} \int_{t}^{t+\tau} \sigma_k(\phi^\epsilon_s(x))\eta^{\epsilon,k}_s ds \\ &+\int_{t}^{t+\tau} z^\epsilon_s(\phi^\epsilon_s(x)) ds + \sqrt{2\nu} (w_{t+\tau}-w_t), \end{align*} therefore, by assumption (A2) we have \begin{align*} \sup_{\substack{t+\tau \leq T}} \| \phi^\epsilon_{t+\tau} - \phi^\epsilon_t \|_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)} \lesssim\, &\tau + \tau\sum_{k \in \mathbb{N}} \|\sigma_k\|_{L^\infty({\mathbb{T}^2})} \sup_{s\in[0,T]} |\eta^{\epsilon,k}_s| \\ &+ \tau \sup_{s\in[0,T]}\|\zeta^\epsilon_s\|_{L^\infty({\mathbb{T}^2})} + K_\alpha \tau^\alpha, \end{align*} where $K_\alpha$ denotes the $\alpha$-H\"older constant of $w$. The thesis follows easily by (A3), \eqref{eq:OU} and \autoref{lem:zeta_log}. \end{proof} \begin{lem} \label{lem:phi_eps_incr_bis} Assume (A1)-(A3). Then for every fixed $p\geq 1$ and $\alpha \in (0,1/2)$ we have, uniformly in $n=0,\dots,T/\delta -1$: \begin{align*} \hE{ \| \phi^\epsilon_{(n+1)\delta} - \phi^\epsilon_{n\delta} \|^p_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)}} &\lesssim \delta^{2p} \epsilon^{-p} \log^p(1+\epsilon^{-1}) \\ &\quad+ \delta^{p(1+\alpha)} \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}) \\ &\quad+ \delta^{p/2} + \epsilon^{p/2} \log^{p/2}(1+\epsilon^{-1}). 
\end{align*} \end{lem} \begin{proof} The increment $\phi^\epsilon_{(n+1)\delta}(x) - \phi^\epsilon_{n\delta}(x)$ can be written as \begin{align*} \phi^\epsilon_{(n+1)\delta}(x) - \phi^\epsilon_{n\delta}(x) = &\int_{n\delta}^{(n+1)\delta} v^\epsilon_s(\phi^\epsilon_s(x)) ds \\ &+ \sum_{k \in \mathbb{N}}\int_{n\delta}^{(n+1)\delta} \left( \sigma_k(\phi^\epsilon_s(x))-\sigma_k(\phi^\epsilon_{n\delta}(x)) \right)\eta^{\epsilon,k}_s ds \\ &+ \sum_{k \in \mathbb{N}} \int_{n\delta}^{(n+1)\delta} \sigma_k(\phi^\epsilon_{n\delta}(x)) \eta^{\epsilon,k}_s ds \\ &\quad +\int_{n\delta}^{(n+1)\delta} z^\epsilon_s(\phi^\epsilon_s(x)) ds + \sqrt{2\nu} (w_{(n+1)\delta}-w_{n\delta}). \end{align*} The first, fourth and fifth terms are straightforward. The second one is bounded in $L^\infty(\mathbb{T}^2,\mathbb{T}^2)$ uniformly in $n$ by \begin{align*} \int_{0}^{\delta} \sum_{k \in \mathbb{N}} \|\nabla \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^4)} \sup_{\substack{t+s \leq T}} \| \phi^\epsilon_{t+s} - \phi^\epsilon_t \|_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)} \sup_{s\in[0,T]} |\eta^{\epsilon,k}_s| ds, \end{align*} and by (A3) and H\"older's inequality with exponent $q>1$ \begin{align*} &\hE{ \left( \int_{0}^{\delta} \sum_{k \in \mathbb{N}} \|\nabla \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^4)} \sup_{\substack{t+s \leq T}} \| \phi^\epsilon_{t+s} - \phi^\epsilon_t \|_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)} \sup_{s\in[0,T]} |\eta^{\epsilon,k}_s| ds \right)^p } \\ &\leq \delta^{p-1} \left( \sum_{k \in \mathbb{N}} \|\nabla \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^4)} \right)^{p-1} \int_{0}^{\delta} \sum_{k \in \mathbb{N}} \|\nabla \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^4)}\\ &\quad \times \hE{ \sup_{\substack{t+s \leq T}} \| \phi^\epsilon_{t+s} - \phi^\epsilon_t \|^{pq}_{L^{\infty}(\mathbb{T}^2,\mathbb{T}^2)} }^{1/q} \hE{ \sup_{s\in[0,T]} |\eta^{\epsilon,k}_s|^{pq'} }^{1/q'} ds \\ &\lesssim \delta^{p-1} \int_{0}^{\delta} \left( s^p \epsilon^{-p} \log^p(1+\epsilon^{-1}) + 
s^{p\alpha} \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}) \right) ds \\ &\lesssim \delta^{2p} \epsilon^{-p} \log^p(1+\epsilon^{-1}) + \delta^{p(1+\alpha)} \epsilon^{-p/2} \log^{p/2}(1+\epsilon^{-1}). \end{align*} The third term is bounded in $L^\infty(\mathbb{T}^2,\mathbb{R}^2)$ by \begin{align*} \sum_{k \in \mathbb{N}} \| \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^2)} \left| \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_s ds \right| &\leq \sum_{k \in \mathbb{N}} \| \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^2)} \left| W^{k}_{(n+1)\delta}-W^{k}_{n\delta}\right| \\ &\quad+ \sum_{k \in \mathbb{N}} \| \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^2)} \epsilon \left| \eta^{\epsilon,k}_{(n+1)\delta}-\eta^{\epsilon,k}_{n\delta} \right|, \end{align*} from which we deduce as usual \begin{align*} \hE{\left( \sum_{k \in \mathbb{N}} \| \sigma_k \|_{L^\infty(\mathbb{T}^2,\mathbb{R}^2)} \left| \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_s ds \right| \right)^p } \lesssim \delta^{p/2} + \epsilon^{p/2} \log^{p/2}(1+\epsilon^{-1}). \end{align*} Putting everything together, the thesis follows. \end{proof} \begin{proof}[Proof of \autoref{prop:old}] For any given $t \in [0,T]$, let $\lfloor t \rfloor \eqqcolon m\delta$ be the largest multiple of $\delta$ strictly smaller than $t$. 
We can therefore decompose \begin{align*} \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds &= \sum_{k \in \mathbb{N}} \int_0^{m\delta} \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds + \sum_{k \in \mathbb{N}} \int_{m\delta}^t \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds \\ &= \sum_{j = 1}^6 \sum_{n = 0}^{m-1} I^\epsilon_j(n) + \sum_{k \in \mathbb{N}} \int_{m\delta}^t \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds, \end{align*} and in a similar fashion \begin{align*} \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s &= \sum_{k \in \mathbb{N}} \int_0^{m\delta} \sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s + \sum_{k \in \mathbb{N}} \int_{m\delta}^t \sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s \\ &= \sum_{j = 1}^4 \sum_{n = 0}^{m-1} J^\epsilon_j(n) + \sum_{k \in \mathbb{N}} \int_{m\delta}^t \sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s. \end{align*} By \eqref{eq:OU}, the following estimate holds true \begin{align*} \hE{ \sup_{\substack{m = 0,\dots,T/\delta-1\\t \leq \delta}} \left\| \sum_{k \in \mathbb{N}} \int_{m\delta}^{m\delta+t} \sigma_k(\phi^\epsilon_s(\cdot)) \eta^{\epsilon,k}_s ds \right\|_{L^1({\mathbb{T}^2},\mathbb{R}^2)} } \lesssim \delta \epsilon^{-1/2} \log^{1/2}(1+\epsilon^{-1}). \end{align*} Also, by (A3) and the Kolmogorov continuity theorem, for every fixed $\alpha \in (0,1/2)$ we have \begin{align*} \hE{ \sup_{\substack{m = 0,\dots,T/\delta-1\\t \leq \delta}} \left\| \sum_{k \in \mathbb{N}} \int_{m\delta}^{m\delta+t} \sigma_k(\phi^\epsilon_s(\cdot)) \circ dW^k_s \right\|_{L^1({\mathbb{T}^2},\mathbb{R}^2)} } \lesssim \delta^\alpha. 
\end{align*} Finally, by calculations similar to those performed in Lemma 4.6 and Lemma 4.7 of \cite{FlPa21}, for every fixed $\alpha \in (0,1/2)$ \begin{align*} \hE{ \sup_{m = 0,\dots,T/\delta-1} \left\| \sum_{j = 1}^6 \sum_{n = 0}^{m-1} I^\epsilon_j(n) - \sum_{j = 1}^4 \sum_{n = 0}^{m-1} J^\epsilon_j(n) \right\|_{L^1({\mathbb{T}^2},\mathbb{R}^2)} } \\ \lesssim \delta \epsilon^{-1/2}\log^{3/2}(1+\epsilon^{-1}) + \delta^{\alpha-1} \epsilon^{1/2}\log(1+\epsilon^{-1}) \\ + \delta^2 \epsilon^{-3/2}\log^{3/2}(1+\epsilon^{-1}) + \delta^{1+\alpha} \epsilon^{-1}\log(1+\epsilon^{-1}) + \delta^{\alpha} . \end{align*} We conclude the proof by fixing $\alpha$ close to $1/2$ so that $(1+\alpha)^{-1} < 3/4 < (2-2\alpha)^{-1}$, for instance $\alpha = 3/8$, and optimizing over $\delta$: for $\delta = \epsilon^{16/21}\log^{-4/21}(1+\epsilon^{-1})$, the second and third terms above are both of order $\epsilon^{1/42} \log^{47/42}(1+\epsilon^{-1})$ and dominate the remaining ones, and the desired inequality follows \begin{align*} \hE{ \sup_{t \in [0,T]}\left\| \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(\cdot)) \eta^{\epsilon,k}_s ds - \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(\cdot)) \circ dW^k_s \right\|_{L^1({\mathbb{T}^2},\mathbb{R}^2)}} \\ \lesssim \epsilon^{1/42} \log^{47/42}(1+\epsilon^{-1}). \end{align*} \end{proof} \subsection{Proof of \autoref{prop:sup_z}} Recall the content of \autoref{prop:sup_z}: we need to prove, under assumptions (A1)-(A5) \begin{align*} \hE{\sup_{t \in [0,T]} \left\|\int_0^t z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} \lesssim \epsilon^{1/12} \log^{11/12}(1+\epsilon^{-1}). \end{align*} Comparing the desired inequality with \autoref{lem:zeta_log}, one realizes that time integration of the process $z^\epsilon_s(\phi^\epsilon_s(x))$ allows better control due to the cancellation of opposite-sign oscillations, even though the latter may become large in magnitude as $\epsilon$ goes to zero. 
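To illustrate this cancellation mechanism in its simplest instance, consider a single Ornstein-Uhlenbeck factor: for $r \leq s$ a direct computation gives \begin{align*} \E{\eta^{\epsilon,k}_s \eta^{\epsilon,k}_r} = \frac{\epsilon^{-1}}{2} \left( e^{-\epsilon^{-1}(s-r)} - e^{-\epsilon^{-1}(s+r)} \right), \end{align*} and therefore \begin{align*} \E{\left( \int_0^t \eta^{\epsilon,k}_s ds \right)^2} = \int_0^t \int_0^t \E{\eta^{\epsilon,k}_s \eta^{\epsilon,k}_r} ds dr \leq \frac{\epsilon^{-1}}{2} \int_0^t \int_0^t e^{-\epsilon^{-1}|s-r|} ds dr \leq t, \end{align*} uniformly in $\epsilon$, whereas by \eqref{eq:OU} the supremum of $|\eta^{\epsilon,k}|$ over $[0,T]$ is only controlled by $\epsilon^{-1/2} \log^{1/2}(1+\epsilon^{-1})$. 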
Concerning the strategy of the proof, we first prove the following: \begin{lem} \label{lem:z} For every fixed $t \in [0,T]$ it holds \begin{align*} \hE{\left\|\int_0^t z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} \lesssim \epsilon^{1/6} \log^{5/6}(1+\epsilon^{-1}). \end{align*} \end{lem} With the previous result at hand, the proof of \autoref{prop:sup_z} goes as follows: for some parameter $\delta=T/m>0$, $m \in \mathbb{N}$ to be chosen, write \begin{align*} \sup_{t \in [0,T]}\left\|\int_0^t z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)} &\leq \sup_{n=0,\dots,m-1} \left\|\int_0^{n\delta} z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)} \\ &\quad+ \sup_{\substack{n=0,\dots,m-1\\t \leq \delta}} \left\|\int_{n\delta}^{n\delta+t} z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)} \\ &\leq \sum_{n=0}^{m-1} \left\|\int_0^{n\delta} z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)} \\ &\quad+ \delta \sup_{s \in [0,T]} \|z^\epsilon_s(\phi^\epsilon_s(\cdot))\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}. \end{align*} Hence, by \autoref{lem:zeta_log} and \autoref{lem:z} \begin{align*} \hE{\sup_{t \in [0,T]} \left\|\int_0^t z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} &\leq \sum_{n=0}^{m-1} \hE{ \left\|\int_0^{n\delta} z^\epsilon_s(\phi^\epsilon_s(\cdot)) ds\right\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} \\ &\quad+ \delta \hE{\sup_{s \in [0,T]} \|z^\epsilon_s(\phi^\epsilon_s(\cdot))\|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}} \\ &\lesssim \delta^{-1} \epsilon^{1/6} \log^{5/6}(1+\epsilon^{-1}) + \delta \log(1+\epsilon^{-1}), \end{align*} and the thesis follows by optimizing the choice of $\delta$: taking $\delta$ of order $\epsilon^{1/12} \log^{-1/12}(1+\epsilon^{-1})$ balances the two terms and yields the desired bound. \begin{proof}[Proof of \autoref{lem:z}] We will work with fixed $x \in {\mathbb{T}^2}$. 
The reader can easily check that all the inequalities present in the proof hold uniformly in $x$. Recall $z^\epsilon_t = K \ast \zeta^\epsilon_t$, and for $\psi^\epsilon_{t,s}(x) \coloneqq \psi^\epsilon_s((\psi^\epsilon_t)^{-1}(x))$ the formula \begin{align*} \zeta^\epsilon_t(x) &= -\int_0^t e^{-\epsilon^{-1}(t-s)} ((v^\epsilon_s + K \ast \zeta^\epsilon_s)\cdot\nabla {\theta}^\epsilon_s )(\psi^\epsilon_{t,s}(x)) ds \\ &\quad- \int_0^t e^{-\epsilon^{-1}(t-s)} ((K \ast \theta^\epsilon_s)\cdot\nabla {\theta}^\epsilon_s )(\psi^\epsilon_{t,s}(x)) ds. \end{align*} For notational simplicity let $\Theta^\epsilon_s \coloneqq (K \ast \theta^\epsilon_s)\cdot\nabla {\theta}^\epsilon_s$, and rewrite \begin{align*} \zeta^\epsilon_t(x) &= -\int_0^t e^{-\epsilon^{-1}(t-s)} ((v^\epsilon_s + K \ast \zeta^\epsilon_s)\cdot\nabla {\theta}^\epsilon_s )(\psi^\epsilon_{t,s}(x)) ds \\ &\quad- \int_0^t e^{-\epsilon^{-1}(t-s)} \left( \Theta^\epsilon_s(\psi^\epsilon_{t,s}(x)) - \Theta^\epsilon_s(x) \right) ds \\ &\quad- \int_0^t e^{-\epsilon^{-1}(t-s)} \Theta^\epsilon_s(x) ds \\ &\quad \eqqcolon \zeta^{\epsilon,1}_t(x) + \zeta^{\epsilon,2}_t(x) + \zeta^{\epsilon,3}_t(x). \end{align*} Let us focus on the terms $\zeta^{\epsilon,j}$, $j=1,2,3$ separately. Concerning $\zeta^{\epsilon,1}$, \begin{align*} \|\zeta^{\epsilon,1}_t \|_{L^\infty({\mathbb{T}^2})} &\lesssim \int_0^t e^{-\epsilon^{-1}(t-s)} ds \left(1+\sup_{s \in [0,T]} \|\zeta^\epsilon_s\|_{L^\infty({\mathbb{T}^2})}\right) \\ &\qquad \times \sup_{s \in [0,T]}\| \nabla \theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)}, \end{align*} and thus the following holds by assumption (A2) and \autoref{lem:zeta_log} \begin{align} \label{eq:zeta1} \sup_{t \in [0,T]} \hE{\|\zeta^{\epsilon,1}_t\|_{L^\infty({\mathbb{T}^2})}} \lesssim \epsilon^{1/2} \log^{3/2}(1+\epsilon^{-1}). 
\end{align} Moving to $\zeta^{\epsilon,2}$, notice that $|\psi^\epsilon_{t,s}(x)-x| = |\psi^\epsilon_{t,s}(x)-\psi^\epsilon_{t,t}(x)|$, and letting $y=(\psi^\epsilon_t)^{-1}(x)$ we have \begin{align*} |\psi^\epsilon_{t,s}(x)-\psi^\epsilon_{t,t}(x)| &= |\psi^\epsilon_s(y)-\psi^\epsilon_t(y)| \\ &\leq \int_s^t |v^\epsilon_r(\psi^\epsilon_r(y))| dr + \int_s^t |u^\epsilon_r(\psi^\epsilon_r(y))| dr \\ &\lesssim |t-s| \left( 1+ \sup_{r \in [0,T]} \|\zeta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} +\sup_{r \in [0,T]} \|\theta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} \right), \end{align*} therefore \begin{align*} \|\zeta^{\epsilon,2}_t \|_{L^\infty({\mathbb{T}^2})} &\lesssim \int_0^t e^{-\epsilon^{-1}(t-s)} |t-s| ds \sup_{s \in [0,T]}\| \nabla \Theta^\epsilon_s \|_{L^\infty({\mathbb{T}^2},\mathbb{R}^2)} \\ &\qquad \times \left( 1+ \sup_{r \in [0,T]} \|\zeta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} +\sup_{r \in [0,T]} \|\theta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} \right), \end{align*} that implies \begin{align} \label{eq:zeta2} \sup_{t \in [0,T]}\hE{\|\zeta^{\epsilon,2}_t \|_{L^\infty({\mathbb{T}^2})}} &\lesssim \epsilon^{1/2} \log^{3/2}(1+\epsilon^{-1}). \end{align} Finally, let us consider the term $\zeta^{\epsilon,3}$, which requires a preliminary manipulation. Since $\theta^\epsilon_s(x) = \sum_{k \in \mathbb{N}} \sigma_k(x) \eta^{\epsilon,k}_s$, we can rewrite for every $x \in {\mathbb{T}^2}$ \begin{align*} \Theta^\epsilon_s(x) = \sum_{k,h \in \mathbb{N}} (\sigma_k\cdot\nabla \varsigma_h) (x) \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s \eqqcolon \sum_{k,h \in \mathbb{N}} \Theta_{k,h}(x) \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s, \end{align*} where we have used $\sigma_k = K \ast \varsigma_k$ and $\Theta_{k,h} \coloneqq \sigma_k\cdot\nabla \varsigma_h$. 
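Observe that, in this notation, assumption (A5) takes the compact form \begin{align*} \sum_{k \in \mathbb{N}} \Theta_{k,k}(x) = \sum_{k \in \mathbb{N}} ((K \ast \varsigma_k) \cdot \nabla \varsigma_k)(x) = 0 \quad \mbox{ for every } x \in {\mathbb{T}^2}, \end{align*} and it is precisely this cancellation that removes the Kronecker delta terms produced by the It\=o Formula below. 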
Also, rewrite: \begin{align*} \zeta^{\epsilon,3}_t(x) &= - \int_0^t e^{-\epsilon^{-1}(t-s)} \Theta^\epsilon_s(x) ds \\ &= - \sum_{k,h \in \mathbb{N}} \Theta_{k,h}(x) \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds. \end{align*} By It\=o Formula, for every fixed $t$ and $k,h \in \mathbb{N}$ it holds \begin{align*} \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t &= e^{-\epsilon^{-1} t }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 - \epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &\quad +\epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s dW^h_s +\epsilon^{-1} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,h}_s dW^k_s \\ &\quad +\frac{\epsilon^{-2}}{2} \delta_{k,h} \int_0^t e^{-\epsilon^{-1}(t-s)} ds, \end{align*} with $\delta_{k,h}$ being the Kronecker delta function: $\delta_{k,h}=1$ if $k=h$ and $\delta_{k,h}=0$ if $k\neq h$. In other words: \begin{align} \label{eq:int_eOU} \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds &= \epsilon \left( {e^{-\epsilon^{-1} t }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0-\eta^{\epsilon,k}_t \eta^{\epsilon,h}_t} \right) \\ &\quad \nonumber +\int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s dW^h_s +\int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,h}_s dW^k_s \\ &\quad +\frac{1-e^{-\epsilon^{-1} t}}{2} \delta_{k,h}. 
\nonumber \end{align} By \eqref{eq:int_eOU} and assumption (A5), for every $x \in {\mathbb{T}^2}$ we have \begin{align*} \zeta^{\epsilon,3}_t(x) &= \sum_{k,h \in \mathbb{N}} \Theta_{k,h}(x) \epsilon \left( \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t -e^{-\epsilon^{-1} t }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 \right) \\ &\quad- \sum_{k,h \in \mathbb{N}} \Theta_{k,h}(x) \left( \int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,k}_s dW^h_s +\int_0^t e^{-\epsilon^{-1}(t-s)} \eta^{\epsilon,h}_s dW^k_s\right), \end{align*} and therefore we can rewrite \begin{align*} \int_0^t (K \ast \zeta^{\epsilon,3}_s)(\phi^\epsilon_s(x)) ds &= \sum_{k,h \in \mathbb{N}} \int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &\quad- \sum_{k,h \in \mathbb{N}} \int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \epsilon e^{-\epsilon^{-1} s }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 ds \\ &\hspace{-112pt}\quad - \sum_{k,h \in \mathbb{N}} \int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \left( \int_0^s e^{-\epsilon^{-1}(s-r)} \eta^{\epsilon,k}_r dW^h_r +\int_0^s e^{-\epsilon^{-1}(s-r)} \eta^{\epsilon,h}_r dW^k_r \right) ds. 
\end{align*} The following estimates hold: \begin{align*} \hE{\left| \int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \epsilon e^{-\epsilon^{-1} s }\eta^{\epsilon,k}_0 \eta^{\epsilon,h}_0 ds\right|} &\lesssim \epsilon \log(1+\epsilon^{-1}), \end{align*} \begin{align*} &\hE{\left| \int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \int_0^s e^{-\epsilon^{-1}(s-r)} \eta^{\epsilon,k}_r dW^h_r ds \right|} \\ &\quad=\hE{\left| \int_0^t \left(\int_r^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) e^{-\epsilon^{-1}(s-r)} ds \right) \eta^{\epsilon,k}_r dW^h_r \right|} \\ &\quad\lesssim \hE{\left| \int_0^t \left(\int_r^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) e^{-\epsilon^{-1}(s-r)} ds \right) \eta^{\epsilon,k}_r dW^h_r \right|^2}^{1/2} \\ &\quad\lesssim \hE{ \int_0^t \left(\int_r^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) e^{-\epsilon^{-1}(s-r)} ds \right)^2 |\eta^{\epsilon,k}_r|^2 dr }^{1/2} \\ &\quad\lesssim \epsilon^{1/2} \log^{1/2}(1+\epsilon^{-1}). \end{align*} The last non-trivial term is manipulated as follows. Let $\delta=t/m>0$, $m \in \mathbb{N}$ to be suitably chosen. We have \begin{align} \label{eq:zeta3_last} \sum_{k,h \in \mathbb{N}} &\int_0^t (K \ast \Theta_{k,h})(\phi^\epsilon_s(x)) \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &= \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} \int_{n\delta}^{(n+1)\delta} \left( (K \ast \Theta_{k,h})(\phi^\epsilon_s(x))- (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \right) \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \nonumber \\ &\quad+ \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \int_{n\delta}^{(n+1)\delta} \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds. 
\nonumber \end{align} Recalling \eqref{eq:char_eps_intro}, for every $\alpha \in (0,1/2)$ it holds \begin{align*} |\phi^\epsilon_t(x)-\phi^\epsilon_s(x)| &\leq \int_s^t |v^\epsilon_r(\phi^\epsilon_r(x))| dr + \int_s^t |u^\epsilon_r(\phi^\epsilon_r(x))| dr + \sqrt{2\nu} |w_t-w_s| \\ &\lesssim |t-s| \left( 1+ \sup_{r \in [0,T]} \|\zeta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} +\sup_{r \in [0,T]} \|\theta^\epsilon_r\|_{L^\infty({\mathbb{T}^2})} \right) + |t-s|^\alpha, \end{align*} which implies \begin{align} \label{eq:delta1} &\hE{\left| \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} \int_{n\delta}^{(n+1)\delta} \left( (K \ast \Theta_{k,h})(\phi^\epsilon_s(x))- (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \right) \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \right|} \\ &\quad\lesssim \delta \epsilon^{-1/2} \log^{3/2}(1+\epsilon^{-1}) + \delta^\alpha \log(1+\epsilon^{-1}) \nonumber. \end{align} Also, we can apply It\=o Formula again to find an alternative representation for the time integral of the quadratics $\eta^{\epsilon,k}_s \eta^{\epsilon,h}_s$, similar to \eqref{eq:int_eOU}.
Indeed, \begin{align*} \eta^{\epsilon,k}_{(n+1)\delta} \eta^{\epsilon,h}_{(n+1)\delta} - \eta^{\epsilon,k}_{n\delta} \eta^{\epsilon,h}_{n\delta} &= -2\epsilon^{-1} \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t dt \\ &\quad +\epsilon^{-1} \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t dW^h_t +\epsilon^{-1} \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,h}_t dW^k_t \\ &\quad +\frac{\epsilon^{-2} \delta}{2} \delta_{k,h}, \end{align*} and rearranging the terms we obtain \begin{align} \label{eq:int_OU} \int_{n\delta}^{(n+1)\delta} \epsilon \eta^{\epsilon,k}_t \eta^{\epsilon,h}_t dt &= \frac{\epsilon^2}{2} \left( \eta^{\epsilon,k}_{n\delta} \eta^{\epsilon,h}_{n\delta} - \eta^{\epsilon,k}_{(n+1)\delta} \eta^{\epsilon,h}_{(n+1)\delta} \right) \\ &\quad +\frac{\epsilon}2 \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t dW^h_t +\frac{\epsilon}2 \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,h}_t dW^k_t +\frac{\delta}{4} \delta_{k,h} \nonumber. \end{align} Finally, making use of \eqref{eq:int_OU} above and assumption (A5) we can rewrite \begin{align*} \sum_{k,h \in \mathbb{N}} &\sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \int_{n\delta}^{(n+1)\delta} \epsilon \eta^{\epsilon,k}_s \eta^{\epsilon,h}_s ds \\ &= \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \frac{\epsilon^2}{2} \left( \eta^{\epsilon,k}_{n\delta} \eta^{\epsilon,h}_{n\delta} - \eta^{\epsilon,k}_{(n+1)\delta} \eta^{\epsilon,h}_{(n+1)\delta} \right) \\ &\hspace{-10pt}\quad+ \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \left(\frac{\epsilon}2 \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t dW^h_t +\frac{\epsilon}2 \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,h}_t dW^k_t \right). 
\end{align*} We have \begin{align} \label{eq:delta2} \hE{\left| \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \frac{\epsilon^2}{2} \left( \eta^{\epsilon,k}_{n\delta} \eta^{\epsilon,h}_{n\delta} - \eta^{\epsilon,k}_{(n+1)\delta} \eta^{\epsilon,h}_{(n+1)\delta} \right) \right|} \\ \lesssim \nonumber \delta^{-1} \epsilon \log(1+\epsilon^{-1}), \end{align} and \begin{align} \label{eq:delta3} &\hE{\left| \sum_{k,h \in \mathbb{N}} \sum_{n=0}^{m-1} (K \ast \Theta_{k,h})(\phi^\epsilon_{n\delta}(x)) \frac{\epsilon}2 \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t dW^h_t \right|} \\ &\quad\lesssim \sum_{n=0}^{m-1} \epsilon \hE{\left| \int_{n\delta}^{(n+1)\delta} \eta^{\epsilon,k}_t dW^h_t \right|^2}^{1/2} \lesssim \delta^{-1/2} \epsilon^{1/2} \log^{1/2}(1+\epsilon^{-1}). \nonumber \end{align} It only remains to choose $\delta$ in a suitable way, so that all the terms \eqref{eq:delta1}, \eqref{eq:delta2} and \eqref{eq:delta3} are infinitesimal in the limit $\epsilon \to 0$. Taking for instance $\alpha=1/3$ and optimizing over $\delta$ (the choice $\delta = \epsilon^{2/3} \log^{-2/3}(1+\epsilon^{-1})$ balances the first term of \eqref{eq:delta1} against \eqref{eq:delta3}, all other contributions being of smaller order) gives \begin{align} \label{eq:zeta3} \hE{\left| \int_0^t (K \ast \zeta^{\epsilon,3}_s)(\phi^\epsilon_s(x)) ds \right|} \lesssim \epsilon^{1/6} \log^{5/6}(1+\epsilon^{-1}). \end{align} Considering \eqref{eq:zeta1}, \eqref{eq:zeta2} and \eqref{eq:zeta3}, we finally get the desired estimate: the proof is complete. \end{proof} \section{Convergence of characteristics} \label{sec:conv_char} In this section we prove our first major result, \autoref{thm:char}. We take the opportunity to point out a mistake in \cite[Lemma 3.8]{FlPa21}, where the BDG inequality was applied incorrectly. The present proof corrects this previous mistake, and it is based on It\=o Formula for a smooth approximation $g_\delta(x)$ of the absolute value $|x|$. \begin{proof}[Proof of \autoref{thm:char}] The strategy of the proof is very similar to that of \cite[Proposition 4.1]{FlPa21}.
Indeed, the difference $\phi^\epsilon-\phi$ satisfies, $\hat{\mathbb{P}}$-a.s., for every $t \in [0,T]$ and $x \in {\mathbb{T}^2}$: \begin{align*} \phi_t^\epsilon(x)-\phi_t(x) &= \int_0^t v^\epsilon_s(\phi^\epsilon_s(x)) ds - \int_0^t v_s(\phi^\epsilon_s(x)) ds \\ &\quad + \int_0^t v_s(\phi^\epsilon_s(x)) ds - \int_0^t v_s(\phi_s(x)) ds \\ &\quad + \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds - \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x))\circ dW^k_s \\ &\quad + \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x)) \circ dW^k_s - \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi_s(x))\circ dW^k_s \\ &\quad +\int_0^t z^\epsilon_s(\phi^\epsilon_s(x)) ds. \end{align*} For $\delta>0$, introduce the smooth function $g_\delta:\mathbb{R}^2 \to \mathbb{R}$ defined by $g_\delta(x) \coloneqq (|x|^2+\delta)^{1/2}$. It holds $\partial_{x_j} g_\delta(x)= x_j g_\delta(x)^{-1}$ and $\partial_{x_j}\partial_{x_i} g_\delta(x)= g_\delta(x)^{-1}(\delta_{i,j}-x_i x_j g_\delta(x)^{-2})$ for every $x \in \mathbb{R}^2$ and $i,j=1,2$, and moreover $|x| \leq g_\delta(x) \leq |x| + \delta^{1/2}$. Denote \begin{align*} R^\epsilon_t(x) &\coloneqq \int_0^t z^\epsilon_s(\phi^\epsilon_s(x))ds + \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x)) \eta^{\epsilon,k}_s ds - \sum_{k \in \mathbb{N}} \int_0^t \sigma_k(\phi^\epsilon_s(x))\circ dW^k_s, \end{align*} and \begin{align*} Z^\epsilon_t(x) \coloneqq \phi_t^\epsilon(x)-\phi_t(x)-R^\epsilon_t(x), \end{align*} both seen as functions on the whole plane $\mathbb{R}^2$.
Applying It\=o Formula to $g_\delta(Z^\epsilon_t(x))$ yields: {\small \begin{align*} d g_\delta(Z^\epsilon_t(x)) &= Z^\epsilon_t(x) g_\delta(Z^\epsilon_t(x))^{-1} \cdot \left( v^\epsilon_t(\phi^\epsilon_t(x))-v_t(\phi^\epsilon_t(x)) \right) dt \\ &\quad+ Z^\epsilon_t(x) g_\delta(Z^\epsilon_t(x))^{-1} \cdot \left( v_t(\phi^\epsilon_t(x))-v_t(\phi_t(x)) \right) dt \\ &\quad+ \sum_{k \in \mathbb{N}} Z^\epsilon_t(x) g_\delta(Z^\epsilon_t(x))^{-1} \cdot \left( \sigma_k(\phi^\epsilon_t(x))-\sigma_k(\phi_t(x)) \right) dW^k_t \\ &\quad+ Z^\epsilon_t(x) g_\delta(Z^\epsilon_t(x))^{-1} \cdot \left( c(\phi^\epsilon_t(x))-c(\phi_t(x)) \right) dt \\ &\quad +\sum_{k \in \mathbb{N}} \sum_{i,j = 1}^2 g_\delta(Z^\epsilon_t(x))^{-1} (\delta_{i,j}-(Z^\epsilon_t(x))^i (Z^\epsilon_t(x))^j g_\delta(Z^\epsilon_t(x))^{-2}) \\ &\qquad \times \left( \sigma_k(\phi^\epsilon_t(x))-\sigma_k(\phi_t(x)) \right)^i \left( \sigma_k(\phi^\epsilon_t(x))-\sigma_k(\phi_t(x)) \right)^j dt , \end{align*}} and therefore \begin{align*} \hE{\left|\phi^\epsilon_t(x)-\phi_t(x)\right|} &\leq \hE{|Z^\epsilon_t(x)|} + \hE{\left|R^\epsilon_t(x)\right|} \leq \hE{g_\delta(Z^\epsilon_t(x))} + \hE{\left|R^\epsilon_t(x)\right|} \\ &\lesssim \delta^{1/2} + \hE{\left|R^\epsilon_t(x)\right|} + \hE{ \int_0^t \left| v^\epsilon_s(\phi^\epsilon_s(x))-v_s(\phi^\epsilon_s(x)) \right| ds} \\ &\quad+ \hE{ \int_0^t \left| v_s(\phi^\epsilon_s(x))-v_s(\phi_s(x)) \right| ds} \\ &\quad+ \hE{ \int_0^t \left| \phi^\epsilon_s(x)-\phi_s(x) \right| ds } + \delta^{-1/2} \hE{\sup_{t \in [0,T]} \left|R^\epsilon_t(x)\right|}, \end{align*} where in the last line we have used $g_\delta(Z^\epsilon_s(x))^{-1} \leq \delta^{-1/2}$ and $\left| \phi^\epsilon_s(x)-\phi_s(x) \right| \lesssim |Z^\epsilon_s(x)| + \left|R^\epsilon_s(x)\right|$.
Taking the integral over $x \in {\mathbb{T}^2}$ and using assumptions (A2), (A4), concavity of the function $\gamma$, Jensen inequality, \autoref{prop:old} and \autoref{prop:sup_z} we get \begin{align*} \hE{ \| \phi^\epsilon_t - \phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} &\lesssim \delta^{1/2} + \delta^{-1/2} \epsilon^{1/42} \log^{47/42}(1+\epsilon^{-1}) + c_\epsilon \\ &\quad+ \int_0^t \gamma \left( \hE{ \| \phi^\epsilon_s - \phi_s \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} \right) ds \end{align*} uniformly in $t \in [0,T]$ and $\delta>0$. Taking $\delta = \epsilon^{1/42} \log^{47/42}(1+\epsilon^{-1})$ we deduce the desired result by \autoref{lem:comp}. \end{proof} \section{Convergence of large-scale dynamics} \label{sec:conv_large} Recall the representation formulas for the solutions of \eqref{eq:large_eps_intro} and \eqref{eq:large_intro} \begin{align*} \Xi^\epsilon_t &= \tE{ \Xi_0 \circ (\phi^\epsilon_t)^{-1} + \int_0^t q^\epsilon_s \circ \phi^\epsilon_s \circ(\phi^\epsilon_t)^{-1}ds}, \\ \Xi_t &= \tE{ \Xi_0 \circ (\phi_t)^{-1} + \int_0^t q_s \circ \phi_s \circ(\phi_t)^{-1}ds}, \end{align*} with $\phi^\epsilon$ and $\phi$ solving respectively \eqref{eq:char_eps_intro} and \eqref{eq:char_intro}. As made clear by the following proof, these representation formulas are the key ingredient needed to show \autoref{thm:conv_large}, thus justifying our \autoref{def:sol} in terms of these identities. \begin{proof}[Proof of \autoref{thm:conv_large}] Let $f \in L^1({\mathbb{T}^2})$ and $t \in [0,T]$. 
We have \begin{align*} &\left|\int_{\mathbb{T}^2} \Xi^\epsilon_t(x)f(x)dx - \int_{\mathbb{T}^2} \Xi_t(x)f(x)dx\right| \\ &\leq \left|\int_{\mathbb{T}^2} \tE{ \Xi_0((\phi^\epsilon_t)^{-1}(x))} f(x)dx - \int_{\mathbb{T}^2} \tE{ \Xi_0((\phi_t)^{-1}(x))}f(x)dx\right| \\ &\quad+ \left|\int_{\mathbb{T}^2} \tE{ \int_0^t q^\epsilon_s(\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(x)))ds} f(x)dx - \int_{\mathbb{T}^2} \tE{ \int_0^t q_s(\phi_s((\phi_t)^{-1}(x)))ds}f(x)dx\right| \\ &= \left|\tE{ \int_{\mathbb{T}^2} \Xi_0((\phi^\epsilon_t)^{-1}(x)) f(x)dx - \int_{\mathbb{T}^2} \Xi_0((\phi_t)^{-1}(x))f(x)dx}\right| \\ &\quad+ \left|\tE{ \int_{\mathbb{T}^2} \int_0^t q^\epsilon_s(\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(x)))ds f(x)dx - \int_{\mathbb{T}^2} \int_0^t q_s(\phi_s((\phi_t)^{-1}(x)))ds f(x)dx}\right| \\ &= \left|\tE{ \int_{\mathbb{T}^2} \Xi_0(y) f(\phi^\epsilon_t(y))dy - \int_{\mathbb{T}^2} \Xi_0(y)f(\phi_t(y))dy}\right| \\ &\quad+ \left|\tE{\int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t^\epsilon(y))dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi_s(y))f(\phi_t(y))dyds }\right|. \end{align*} Taking expectation with respect to $\mathbb{P}$, the first summand is bounded by \begin{align} \label{eq:bound1} \E{\left|\tE{ \int_{\mathbb{T}^2} \Xi_0(y) f(\phi^\epsilon_t(y))dy - \int_{\mathbb{T}^2} \Xi_0(y)f(\phi_t(y))dy}\right|} \nonumber \\ \leq \|\Xi_0\|_{L^\infty({\mathbb{T}^2})} \hE{ \int_{\mathbb{T}^2} \left| f(\phi^\epsilon_t(y)) - f(\phi_t(y)) \right| dy}.
\end{align} As for the second term, we can rewrite \begin{align*} \int_0^t \int_{\mathbb{T}^2}& q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t^\epsilon(y))dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi_s(y))f(\phi_t(y))dyds \\ &= \int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t^\epsilon(y))dy ds - \int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t(y))dyds \\ &\quad+ \int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t(y)) dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi^\epsilon_s(y))f(\phi_t(y))dyds \\ &\quad+ \int_0^t \int_{\mathbb{T}^2} q_s(\phi^\epsilon_s(y)) f(\phi_t(y)) dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi_s(y))f(\phi_t(y))dyds, \end{align*} with estimates \begin{align} \label{eq:bound2} &\hE{\left|\int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t^\epsilon(y))dy ds - \int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t(y))dyds \right|} \nonumber \\ &\qquad \leq \int_0^t \|q^\epsilon_s\|_{L^\infty({\mathbb{T}^2})} ds \hE{\int_{\mathbb{T}^2} |f(\phi_t^\epsilon(y)) - f(\phi_t(y)) |dy} ; \end{align} \begin{align} \label{eq:bound3} &\hE{\left| \int_0^t \int_{\mathbb{T}^2} q^\epsilon_s(\phi^\epsilon_s(y)) f(\phi_t(y)) dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi^\epsilon_s(y))f(\phi_t(y))dyds \right|} \nonumber \\ &\qquad\leq \int_0^t \|q^\epsilon_s-q_s \|_{L^\infty({\mathbb{T}^2})} ds \| f\|_{L^1({\mathbb{T}^2})} ; \end{align} and \begin{align} \label{eq:bound4} &\hE{\left| \int_0^t \int_{\mathbb{T}^2} q_s(\phi^\epsilon_s(y)) f(\phi_t(y)) dy ds - \int_0^t \int_{\mathbb{T}^2} q_s(\phi_s(y))f(\phi_t(y))dyds \right|} \nonumber \\ &\qquad\leq \hE{\int_0^t \int_{\mathbb{T}^2} |q_s(\phi^\epsilon_s(y)) - q_s(\phi_s(y)) | |f(\phi_t(y))|dyds } \nonumber \\ &\qquad\eqqcolon \hE{\int_0^t \int_{\mathbb{T}^2} |q_s(\phi^\epsilon_s(y)) - q_s(\phi_s(y)) | d\mu(y)ds}, \end{align} where $d\mu(y)\coloneqq |f(\phi_t(y))|dy$ is a random Radon measure on ${\mathbb{T}^2}$. 
By assumptions (A6) and (A7), the terms \eqref{eq:bound1}, \eqref{eq:bound2} and \eqref{eq:bound3} go to zero as $\epsilon \to 0$, using the same reasoning as in \cite[Theorem 5.1]{FlPa21}. Therefore, here we restrict ourselves to the remaining term \eqref{eq:bound4}. Let us argue by contradiction. Suppose that there exists a subsequence $\epsilon_k \to 0$ such that \begin{align} \label{eq:absurd} \hE{\int_0^t \int_{\mathbb{T}^2} |q_s(\phi^{\epsilon_k}_s(y)) - q_s(\phi_s(y)) | d\mu(y)ds} \geq C \end{align} for some $C>0$ and for every $\epsilon_k$. Let $\mathcal{N}$ and $\tilde{\mathcal{N}}$ be negligible sets such that $\phi_t$ is measure preserving for every $\omega \in \mathcal{N}^c$ and $\tilde{\omega} \in \tilde{\mathcal{N}}^c$. Take $\delta>0$. By Lusin Theorem \cite[Theorem 2.23]{Ru70} there exists a measurable set $C_\delta \subset [0,t] \times {\mathbb{T}^2}$ with $\mathscr{L}_{[0,t]} \otimes \mathscr{L}_{\mathbb{T}^2} ([0,t] \times {\mathbb{T}^2} \setminus C_\delta)<\delta$ and a continuous function $Q_\delta \in C([0,t] \times {\mathbb{T}^2})$ that coincides with $q$ on $C_\delta$. Therefore \begin{align*} \int_0^t \int_{\mathbb{T}^2} |q_s(\phi^{\epsilon_k}_s(y)) - q_s(\phi_s(y)) | &d\mu(y)ds = \int_{C_\delta} |q_s(\phi^{\epsilon_k}_s(y)) - q_s(\phi_s(y)) | d\mu(y)ds \\ &\quad+ \int_{[0,t] \times {\mathbb{T}^2} \setminus C_\delta} |q_s(\phi^{\epsilon_k}_s(y)) - q_s(\phi_s(y)) | d\mu(y)ds \\ &\leq \int_{[0,t] \times {\mathbb{T}^2}} |Q_\delta(s,\phi^{\epsilon_k}_s(y)) - Q_\delta(s,\phi_s(y)) | d\mu(y)ds \\ &\quad+ 2\int_{[0,t] \times {\mathbb{T}^2} \setminus C_\delta} \|q_s\|_{L^\infty({\mathbb{T}^2})} d\mu(y)ds. \end{align*} Let us consider the second term first.
Recalling $d\mu(y)=|f(\phi_t(y))|dy$, we have \begin{align*} \int_{[0,t] \times {\mathbb{T}^2} \setminus C_\delta} \|q_s\|_{L^\infty({\mathbb{T}^2})} d\mu(y)ds &= \int_{[0,t] \times {\mathbb{T}^2} \setminus C_\delta} \|q_s\|_{L^\infty({\mathbb{T}^2})} |f(\phi_t(y))|dyds \\ &= \int_{\phi_t^{-1}(C_\delta^c)} \|q_s\|_{L^\infty({\mathbb{T}^2})} |f(y)|dyds, \end{align*} with $\phi_t^{-1}(C_\delta^c) \coloneqq \{ (s,y) : (s,\phi_t(y))\in C_\delta^c \}$. Since $\phi_t$ is measure preserving for every $\omega \in \mathcal{N}^c$ and $\tilde{\omega} \in \tilde{\mathcal{N}}^c$, it is easy to check \begin{align*} \mathscr{L}_{[0,t]} \otimes \mathscr{L}_{\mathbb{T}^2} (\phi_t^{-1}(C_\delta^c)) = \mathscr{L}_{[0,t]} \otimes \mathscr{L}_{\mathbb{T}^2} (C_\delta^c) < \delta \end{align*} $\hat{\mathbb{P}}$-almost surely, and since $\|q\|_{L^\infty({\mathbb{T}^2})}|f| \in L^1([0,t] \times {\mathbb{T}^2})$, absolute continuity of the Lebesgue integral gives the existence of $\delta>0$ such that for every $\omega \in \mathcal{N}^c$ and $\tilde{\omega} \in \tilde{\mathcal{N}}^c$ \begin{align*} \int_{[0,t] \times {\mathbb{T}^2} \setminus C_\delta} \|q_s\|_{L^\infty({\mathbb{T}^2})} d\mu(y)ds < C/3. \end{align*} We fix such a $\delta$ hereafter. For the first term we argue as follows: since we have proved \begin{align*} \sup_{t \in [0,T]} \hE{ \| \phi_t^{\epsilon_k}-\phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}} \to 0 \end{align*} as ${\epsilon_k} \to 0$, then for every fixed $s \in [0,T]$ there exists a subsequence (that we still denote $\epsilon_k$) such that the maps \begin{align*} \Phi^{\epsilon_k}_s:\hat{\Omega} \times {\mathbb{T}^2} &\to [0,T] \times {\mathbb{T}^2}, \\ \Phi^{\epsilon_k}_s(\hat{\omega},y)&=(s,\phi^{\epsilon_k}(\hat{\omega},s,y)) \end{align*} converge $\hat{\mathbb{P}} \otimes \mathscr{L}_{{\mathbb{T}^2}}$-almost everywhere to $\Phi_s$ given by $\Phi_s(\hat{\omega},y)=(s,\phi(\hat{\omega},s,y))$.
By almost sure continuity in time of $\Phi^{\epsilon_k}_s$ and $\Phi_s$, it is possible to extract a common subsequence $\epsilon_k \to 0$ such that $\Phi^{\epsilon_k}_s$ converges $\hat{\mathbb{P}} \otimes \mathscr{L}_{{\mathbb{T}^2}}$-almost everywhere to $\Phi_s$ simultaneously for all $s \in [0,T]$. Therefore, since $Q_\delta$ is continuous on $[0,t] \times {\mathbb{T}^2}$, also $Q_\delta(\Phi^{\epsilon_k})$ converges $\hat{\mathbb{P}} \otimes \mathscr{L}_{[0,t]} \otimes \mathscr{L}_{{\mathbb{T}^2}}$-almost everywhere to $Q_\delta(\Phi)$, and since $\mu$ is absolutely continuous with respect to $\mathscr{L}_{\mathbb{T}^2}$ for almost every $\hat{\omega} \in \hat{\Omega}$, the convergence is actually $\hat{\mathbb{P}} \otimes \mathscr{L}_{[0,t]} \otimes \mu_{\hat{\omega}}$-almost everywhere; moreover, $Q_\delta(\Phi^{\epsilon_k})$ is dominated by the constant $\sup_{s \in [0,t], y \in {\mathbb{T}^2}} |Q_\delta(s,y)|$, and Lebesgue dominated convergence yields convergence in $L^1(\hat{\Omega} \times [0,t] \times {\mathbb{T}^2},\hat{\mathbb{P}} \otimes \mathscr{L}_{[0,t]} \otimes \mu_{\hat{\omega}})$, that is \begin{align*} \hE{\int_{[0,t] \times {\mathbb{T}^2}} |Q_\delta(s,\phi^{\epsilon_k}_s(y)) - Q_\delta(s,\phi_s(y)) | d\mu(y)ds} \to 0, \end{align*} as ${\epsilon_k} \to 0$. This contradicts \eqref{eq:absurd}, and therefore we have proved: for every $f \in L^1({\mathbb{T}^2})$ \begin{align*} \E{\left|\int_{\mathbb{T}^2} \Xi^\epsilon_t(x) f(x)dx -\int_{\mathbb{T}^2} \Xi_t(x) f(x)dx \right|} \to 0 \quad \mbox{ as } \epsilon \to 0, \end{align*} for every fixed $t\in [0,T]$. Since $\| \Xi^\epsilon_t \|_{L^\infty({\mathbb{T}^2})}$ is bounded uniformly in $\epsilon>0$ and $t \in [0,T]$, pointwise convergence implies convergence in $L^p([0,T])$ for every finite $p$ by Lebesgue dominated convergence Theorem.
Finally, if $q \in L^1([0,T],Lip({\mathbb{T}^2}))$ and $f \in Lip({\mathbb{T}^2})$ with $[f]_{Lip({\mathbb{T}^2})} \leq 1$, we have \begin{align*} \hE{ \int_{\mathbb{T}^2} \left| f(\phi^\epsilon_t(y)) - f(\phi_t(y)) \right| dy} &\leq \hE{ \int_{\mathbb{T}^2} \left| \phi^\epsilon_t(y) - \phi_t(y) \right| dy} \\ &\leq \sup_{t \in [0,T]} \hE{ \| \phi_t^\epsilon-\phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}}, \end{align*} controlling \eqref{eq:bound1} and \eqref{eq:bound2} uniformly in $f$; also, since $\|f\|_{L^\infty({\mathbb{T}^2})} \leq 1$ it holds \begin{align*} &\hE{\int_0^t \int_{\mathbb{T}^2} |q_s(\phi^\epsilon_s(y)) - q_s(\phi_s(y)) | |f(\phi_t(y))|dyds } \\ &\leq \hE{\int_0^t \int_{\mathbb{T}^2} \|q_s\|_{Lip({\mathbb{T}^2})} |\phi^\epsilon_s(y)-\phi_s(y)| dyds } \\ &\leq \int_0^t \|q_s\|_{Lip({\mathbb{T}^2})} ds \sup_{s \in [0,T]} \hE{ \|\phi^\epsilon_s-\phi_s\|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}}, \end{align*} allowing us to bound \eqref{eq:bound4} in a simpler way. Putting everything together, we have proved the desired convergence uniformly in $t \in [0,T]$ and $f \in Lip({\mathbb{T}^2})$ with $[f]_{Lip({\mathbb{T}^2})} \leq 1$, $\|f\|_{L^\infty({\mathbb{T}^2})} \leq 1$. The proof is complete. \end{proof} \section{Examples} \label{sec:ex} In this final section, we discuss how assumptions (A1)-(A7) are fulfilled by our main motivational examples, namely advection-diffusion or Navier-Stokes equations at large scales coupled with stochastic Euler equations at small scales; cf. \autoref{ssec:ex} for details. First of all, notice that in the case of passive scalars, as in the advection-diffusion equations, there is nothing to actually prove, since all the subjects of assumptions (A1)-(A7) are given \emph{a priori}. On the other hand, in the Navier-Stokes system the fields $v^\epsilon$, $v$ are given by $v^\epsilon=K \ast \Xi^\epsilon$, $v=K \ast \Xi$, and therefore (A1), (A2) and (A4) need to be checked.
The verification of (A4) needs an additional requirement on the external source $q$: assume \begin{itemize} \item[(\textbf{A8})] there exists a constant $C$ such that for almost every $t \in [0,T]$ and almost every $x,y \in {\mathbb{T}^2}$ \begin{align*} |q(t,x)-q(t,y)| \leq C \gamma (|x-y|). \end{align*} \end{itemize} \begin{prop} \label{prop:NS} Let $\nu \geq 0$, $\Xi_0 \in L^\infty({\mathbb{T}^2})$ with zero spatial average and consider the Navier-Stokes ($\nu > 0$) or Euler ($\nu=0$) system \begin{align*} \begin{cases} d\Xi^\epsilon_t + (v^\epsilon_t + u^\epsilon_t) \cdot \nabla \Xi^\epsilon_t dt = \nu \Delta \Xi^\epsilon_t dt + q^\epsilon_t dt, \\ d \xi^\epsilon_t + (v^\epsilon_t+u^\epsilon_t) \cdot \nabla \xi^\epsilon_t dt = - \epsilon^{-1} \xi^\epsilon_t dt + \epsilon^{-1} \sum_{k \in \mathbb{N}} \varsigma_k dW^k_t, \\ v^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\Xi^\epsilon_t, \\ u^\epsilon_t = -\nabla^\perp(-\Delta)^{-1}\xi^\epsilon_t, \end{cases} \end{align*} and the limiting large-scale dynamics \begin{align*} \begin{cases} d \Xi_t + v_t \cdot \nabla \Xi_t dt + \sum_{k \in \mathbb{N}} \sigma_k \cdot \nabla \Xi_t \circ dW^k_t = \nu \Delta \Xi_t dt + q_t dt, \\ v_t=-\nabla^\perp(-\Delta)^{-1}\Xi_t . \end{cases} \end{align*} Assume (A3), (A5)-(A8) and take $q^\epsilon_t$, $q_t$ with zero spatial average for almost every $t \in [0,T]$. Then the velocity fields $v^\epsilon$, $v$ satisfy (A1), (A2) and (A4). \end{prop} \begin{proof} Concerning (A1), measurability can be deduced by $v^\epsilon=K \ast \Xi^\epsilon$, $v=K \ast \Xi$, representation formulas \eqref{eq:repr_large_eps} and \eqref{eq:repr_large}, and the fact that $\phi^\epsilon$, $\phi$ are stochastic flows of measure-preserving homeomorphisms. Assumption (A2) is given by $v^\epsilon=K \ast \Xi^\epsilon$, $v=K \ast \Xi$, \eqref{eq:K} and \autoref{lem:log_lip}. Finally, let us then verify (A4). 
Recall \begin{align*} v^\epsilon_t(x) &= \int_{\mathbb{T}^2} K(x-y) \Xi^\epsilon_t(y) dy \\ &= \int_{\mathbb{T}^2} K(x-y) \tE{ \Xi_0((\phi^\epsilon_t)^{-1}(y)) + \int_0^t q^\epsilon_s(\phi^\epsilon_s((\phi^\epsilon_t)^{-1}(y)))ds} dy \\ &= \tE{ \int_{\mathbb{T}^2} K(x-\phi^\epsilon_t(y)) \Xi_0(y) dy} + \tE{ \int_{\mathbb{T}^2} K(x-\phi^\epsilon_t(y)) \int_0^t q^\epsilon_s(\phi^\epsilon_s(y))ds dy} , \end{align*} and \begin{align*} v_t(x) &= \int_{\mathbb{T}^2} K(x-y) \Xi_t(y) dy \\ &= \int_{\mathbb{T}^2} K(x-y) \tE{ \Xi_0((\phi_t)^{-1}(y)) + \int_0^t q_s(\phi_s((\phi_t)^{-1}(y)))ds} dy \\ &= \tE{ \int_{\mathbb{T}^2} K(x-\phi_t(y)) \Xi_0(y) dy} + \tE{ \int_{\mathbb{T}^2} K(x-\phi_t(y)) \int_0^t q_s(\phi_s(y))ds dy} . \end{align*} We have {\small \begin{align*} \int_{\mathbb{T}^2} |v^\epsilon_t(x)-v_t(x)| dx &\leq \tE{ \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} \left| K(x-\phi^\epsilon_t(y)) - K(x-\phi_t(y)) \right| |\Xi_0(y)| dy dx } \\ &\hspace{-3cm}\quad+ \tE{ \int_{\mathbb{T}^2} \left| \int_{\mathbb{T}^2} K(x-\phi^\epsilon_t(y)) \int_0^t q^\epsilon_s(\phi^\epsilon_s(y))ds dy - \int_{\mathbb{T}^2} K(x-\phi_t(y)) \int_0^t q_s(\phi_s(y))ds dy \right| dx } \\ &\leq \tE{ \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} \left| K(x-\phi^\epsilon_t(y)) - K(x-\phi_t(y)) \right| |\Xi_0(y)| dy dx } \\ &\quad+ \tE{ \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} \left| K(x-\phi^\epsilon_t(y))-K(x-\phi_t(y)) \right| \left| \int_0^t q^\epsilon_s(\phi^\epsilon_s(y))ds \right| dy dx } \\ &\quad+ \tE{ \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} \left| K(x-\phi_t(y)) \right| \int_0^t \left|q^\epsilon_s(\phi^\epsilon_s(y))- q_s(\phi^\epsilon_s(y)) \right| ds dy dx } \\ &\quad+ \tE{ \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} \left| K(x-\phi_t(y)) \right| \int_0^t \left|q_s(\phi^\epsilon_s(y))- q_s(\phi_s(y)) \right| ds dy dx } \\ &\lesssim \gamma \left( \tE{\|\phi^\epsilon_t-\phi_t \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}}\right) + \int_0^t \|q^\epsilon_s - q_s \|_{L^\infty({\mathbb{T}^2})} ds \\ &\quad+ \int_0^t \gamma \left( \tE{\|\phi^\epsilon_s-\phi_s \|_{L^1({\mathbb{T}^2},{\mathbb{T}^2})}}\right) ds, \end{align*}} which is the desired estimate, since $\int_0^t \|q^\epsilon_s - q_s \|_{L^\infty({\mathbb{T}^2})} ds \to 0$ as $\epsilon \to 0$ by assumption (A7). \end{proof} \bibliographystyle{plain}
\section{Introduction} In order to extract dynamic response functions (spectral functions) from quantum Monte Carlo (QMC) simulations, the challenging task of numerical analytic continuation of imaginary-time dependent correlation functions to real frequency has to be performed. While significant progress has been made on this inverse problem during the past three decades, there are still questions remaining on what the best approach is for extracting the maximal amount of spectral information (frequency resolution) from a given set of QMC data. The maximum entropy (ME) method, of which there are several variants \cite{gull84,silver90,gubernatis91,jarrell96,boninsegni96,bergeron16}, has dominated the field for some time, but alongside it an alternative line of stochastic analytic continuation (SAC) methods (also called average spectrum methods) \cite{white91,sandvik98,beach04,vafay07,reichman09,syljuasen08,fuchs10,sandvik16,qin17,shao17,ghanem20a,ghanem20b} have been gaining ground and show promise to outperform the ME method. Here the goodness-of-fit $\chi^2(S)$ is used as an analog of the energy in a statistical mechanics problem, and the spectrum $S(\omega)$, suitably parametrized, is Monte Carlo sampled at a fictitious temperature $\Theta$ using the Boltzmann-like weight function $\exp(-\chi^2(S)/2\Theta)$. Some parametrizations in terms of $\delta$-functions are illustrated in Fig.~\ref{fig:spec} and will be explained in detail later. \begin{figure}[t] \centering \includegraphics[width=75mm, clip]{fig01.pdf} \vskip-1mm \caption{Parametrizations of spectra in terms of a large number $N_\omega$ of $\delta$-functions: (a) Variable (sampled) amplitudes on a fixed frequency grid. (b) Identical amplitudes and sampled frequencies in the continuum. (c) Variable frequencies and amplitudes.
(d) A ``macroscopic'' $\delta$-function with amplitude $A_0$ at $\omega_0$, followed by $N_\omega$ ``microscopic'' $\delta$-functions at $\omega_i > \omega_0$ with uniform amplitudes $A_i=(1-A_0)/N_\omega$. The amplitude $A_0$ is optimized but held fixed in a given sampling run, while $\omega_0$ is sampled. (e) Equal amplitudes with monotonically increasing spacing $d_i \equiv \omega_{i+1}-\omega_i$. The lowest frequency $\omega_1$ is sampled along with all other frequencies with the constraint $d_{i+1} > d_{i}$. The final spectrum in all cases is the mean amplitude density accumulated in a histogram.} \label{fig:spec} \end{figure} Building on the observation that suppression of configurational entropy with constraints on the sampled spectrum can significantly improve the fidelity of the average spectrum \cite{sandvik16}, we here make further progress on the SAC scheme. We primarily focus on low-temperature spectral functions with sharp features, e.g., narrow quasi-particle peaks and power-law edge singularities (the latter of which often arise in systems with fractionalized excitations). Through a series of systematic studies of different optimized constraints, with tests based on QMC and synthetic imaginary-time data, we demonstrate how spectra with generic, sharp features can be reconstructed almost perfectly using parametrizations such as those illustrated in Figs.~\ref{fig:spec}(d) and \ref{fig:spec}(e). These developments open opportunities for studies of quantitative features of spectral functions that have so far been out of reach of QMC studies, not only in the most obvious setting of condensed matter physics but also in lattice field theory \cite{ding18,aarts21,horak21}. 
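The sampling underlying all of these parametrizations is a standard Metropolis walk in the space of $\delta$-function configurations, with $\chi^2/2$ playing the role of energy and $\Theta$ that of temperature. The following is a minimal illustrative sketch (not the production code used in this work), assuming synthetic imaginary-time data with a diagonal covariance matrix and the fixed-amplitude parametrization of Fig.~\ref{fig:spec}(b); all variable names and parameter values here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "data": T=0 kernel G(tau) = int dw S(w) exp(-tau*w), with the true
# spectrum a single delta-function at w = 1. Real applications use QMC data
# and the full covariance matrix of the imaginary-time estimates.
tau = np.linspace(0.1, 2.0, 10)
G_bar = np.exp(-tau)                 # exact G(tau) for S(w) = delta(w - 1)
sigma = 1e-3 * np.ones_like(G_bar)   # synthetic error bars

N_w = 20                             # number of delta-functions, Fig. 1(b)
A = 1.0 / N_w                        # equal, fixed amplitudes
omega = rng.uniform(0.5, 1.5, N_w)   # sampled frequencies (the configuration)
Theta = 0.1                          # fictitious sampling temperature

def chi2(om):
    """Goodness of fit of the spectrum {(A, om_i)} to the data G_bar."""
    G = A * np.exp(-np.outer(tau, om)).sum(axis=1)
    return np.sum(((G - G_bar) / sigma) ** 2)

c_old = chi2(omega)
hist = np.zeros(100)                 # spectral-weight histogram on [0, 2]
for sweep in range(400):
    for i in range(N_w):             # Metropolis update of a single frequency
        trial = omega.copy()
        trial[i] = abs(trial[i] + rng.normal(0.0, 0.05))
        c_new = chi2(trial)
        # accept with probability min(1, exp(-(chi2' - chi2)/(2*Theta)))
        if c_new <= c_old or rng.random() < np.exp((c_old - c_new) / (2.0 * Theta)):
            omega, c_old = trial, c_new
    if sweep >= 100:                 # accumulate after equilibration
        idx = np.clip((omega * 50).astype(int), 0, 99)
        np.add.at(hist, idx, A)
S_avg = hist / hist.sum()            # normalized average spectrum
```

With these synthetic data the accumulated weight should concentrate near $\omega=1$; lowering $\Theta$ sharpens the average spectrum until over-fitting sets in, which is precisely the trade-off discussed in the following.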
To prepare for these developments of constrained SAC, we first study the role of the parametrization of unconstrained spectral functions, comparing $\delta$-functions on a fixed grid [Fig.~\ref{fig:spec}(a)], where the amplitudes constitute the sampled configuration space, and in the frequency continuum (with the spectral weight distribution collected in a discrete histogram). In the continuum, either only the locations of fixed-amplitude $\delta$-functions are sampled [Fig.~\ref{fig:spec}(b)] or the amplitudes are also updated along with the frequencies [Fig.~\ref{fig:spec}(c)]. The average spectrum exhibits significant differences between the three parametrizations, which we explain quantitatively by non-universal entropies originating from stochastic processes with different degrees of freedom. The fidelity of the continuous frequency approach in reproducing known spectral functions is invariably better, and, moreover, the sampling is also much more efficient in that case, typically requiring only of the order of a few minutes to achieve a smooth spectrum. We discuss a simple method of selecting the sampling temperature $\Theta$ in such a way that the resolution is maximized while still not being affected by over-fitting. The optimal sampling temperature can be related to the entropy of the spectral function in a given parametrization. We derive the entropy in the case of $\delta$-functions in the continuum and confirm its role in the dependence of the optimal sampling temperature on the number $N_\omega$ of $\delta$-functions used; $\Theta \propto 1/N_\omega$. The limit $\Theta \to 0$, corresponding to fitting by pure $\chi^2$ minimization, is also of relevance to our arguments. For a given set of imaginary-time data, we demonstrate that the positive-definite spectrum with lowest $\chi^2$ value defines a noise-limited effective number of fitting parameters. 
In this context we also study how a small fraction (as low as $10^{-4}$) of negative spectral weight can favorably affect the average spectrum at low $\Theta$. The main purpose of the present work is to introduce various constraints imposed within the continuous frequency-space parametrization. Such constraints, which represent some kind of auxiliary information, e.g., the existence of a sharp edge, can significantly improve the ability of the SAC method to resolve spectral details also at frequencies far away from the feature(s) directly associated with the constraint. In previous work on sharp edges \cite{sandvik16}, a fixed frequency grid was used and the constraints amounted to enforcing and optimizing lower and upper bounds, outside which there is no spectral weight. This method worked surprisingly well, e.g., when a single-peak condition was also imposed (without any further information on the location or shape of the peak) it was possible to closely reproduce the edge divergence at temperature $T=0$ of the structure factor of the Heisenberg chain---a feat that had been impossible with previous approaches. In practice, the optimization of a constraint, which is based on a simple statistical criterion of minimum $\langle \chi^2\rangle$ at fixed $\Theta > 0$, can be very time consuming. Here we introduce a variety of useful constraints within the continuous-frequency representation, where either no further optimization is required or the optimization process is much faster than in the previous approach. In a previous work with collaborators \cite{shao17}, we already implemented an SAC method incorporating a spectral edge consisting of a single $\delta$-function, whose relative weight $A_0$ was optimized and with the remaining weight $1-A_0$ divided over hundreds or thousands of ``microscopic'' $\delta$-functions to model a continuum; see Fig.~\ref{fig:spec}(d).
We here further explore the ability of the statistical optimization scheme to find the correct value of $A_0$. In particular, we investigate how the optimal value converges when the statistical errors of the underlying imaginary-time data are reduced. We also generalize the approach to a quasi-particle peak of finite width by splitting the weight $A_0$ over several sampled edge $\delta$-functions. This way, both broad and narrow quasi-particle peaks can be resolved to a degree far exceeding what is possible with conventional methods. Moving then to power-law and similar edge singularities, we introduce a constraint on the distances between the sampled $\delta$-functions, such that the mean density of $\delta$-functions must increase monotonically when the edge is approached. This parametrization, illustrated in Fig.~\ref{fig:spec}(e), most naturally describes a divergent edge. However, with different amplitude profiles and further constraints, both divergent and convergent spectral edges can be reproduced. We discuss the entropic pressures of the distance-monotonic parametrization and test its ability to reliably reproduce different types of edges. Again, we find a remarkable improvement in the fidelity of the method in resolving not just the edge, but the entire spectral function up to its high-frequency bound. We also generalize this approach to an arbitrary (not necessarily monotonically decaying) continuum above the edge. A key message of our study is that removal of distortions at the lower edge of a spectrum by means of constraints can also reveal features at higher frequencies at an unexpected level of detail. Thus, the imaginary-time data contain ``hidden information'' that is masked when a sharp edge is not treated correctly but is revealed once the primary distortions are removed by appropriate constraints.
Our comprehensive series of tests of different parametrizations of increasingly complex spectral functions (also beyond those in Fig.~\ref{fig:spec}) build up to a scheme capable of resolving a broad range of common edge features, with only minimal input beyond the imaginary-time data. The method in effect is a generic curve-fitting machinery, where the type of curve is only specified minimally (e.g., the edge takes an asymptotic power-law form, with the exponent not necessarily specified but optimized in the process) and beyond this information the statistically best average spectrum consistent with the input data is produced. As an application to systems for which previous numerical approaches have provided only very limited information, we use constrained SAC methods to study the dynamic structure factor of 2- and 3-leg spin-$1/2$ Heisenberg ladders \cite{dagotto96}. For the gapped 2-leg ladder, we extract not only the gap $\Delta \approx 0.5025$ and spectral weight (about $96.7\%$) of the dominant $\delta$-function at the gap, but also resolve the three-triplon contributions at $\omega > 3\Delta$. For the gapless 3-leg ladder we study the divergent spectral edge arising from deconfined spinons (as in the Heisenberg chain considered in many of our test examples) and also obtain the profile at higher energies (which is different from that of the single chain). These examples demonstrate how constraints corresponding to known spectral features at the lower edge (a $\delta$-function or a spinon edge) can deliver previously unknown details of the spectrum away from the edge, including features arising from composite excitations. The work reported here on SAC also led us to new insights into the ME method. In particular, we finally establish the complete formal mapping between the ME and SAC methods, which in some important respects is at variance with previous attempts \cite{beach04,bergeron16,ghanem20a}. 
We also introduce an SAC-inspired criterion for fixing the entropic weighting factor $\alpha$ in the ME method. With the proper relationship between the optimal $\alpha$ and $\Theta$ values, we demonstrate explicitly that SAC with different parametrizations of the spectrum corresponds exactly, when $N_\omega \to \infty$ (a generalized thermodynamic limit of SAC), to the ME approach with different functional forms of the entropy (i.e., the conventional Shannon information entropy is not universal). Thus, unrestricted SAC can often be used interchangeably with the ME method, and some of our constrained parametrizations can also in principle be translated to the ME framework. The sampling approach can still be necessary or preferred, e.g., with some types of constraints and in cases where an analytical form of the entropy is not available. The treatise is organized as follows: In Sec.~\ref{sec:problem} we define the analytic continuation problem in detail and discuss QMC computed imaginary-time correlation functions and the importance of taking covariance into account (which we do in practice by a basis transformation). We also explain how to generate realistic synthetic data, including covariance, for testing and benchmarking purposes. In Sec.~\ref{sec:methods} we briefly review some of the common analytic continuation methods, in particular data fitting based purely on $\chi^2$ minimization, the ME method, as well as conventional SAC methods. We also describe the different parametrizations of the spectrum suitable for SAC sampling and outline the further developments of optimized constraints that we present in detail in the later sections. In Sec.~\ref{sec:plainsampling} we discuss Monte Carlo sampling techniques for spectral functions in the different parametrizations without constraints [those illustrated in Figs.~\ref{fig:spec}(a)-(c)], including moment conserving updates used to increase the sampling efficiency. 
In Sec.~\ref{sec:theta} we motivate the statistical criterion for the optimal sampling temperature $\Theta$ and present several illustrative examples, using QMC data for the dynamic structure factor of the Heisenberg chain as well as synthetic data. We use a lower bound on the spectrum as an example of a constraint that reduces the sampling entropy and can be optimized by a simple criterion. We argue that an effective number of fitting parameters ($\ll N_\omega$) can be defined by considering the low-$\Theta$ limit of the spectrum obtained with a given set of the noisy imaginary-time data, and also explore an alternative way to regulate the entropy by optimizing the number of sampled $\delta$-functions at $\Theta=1$. In Sec.~\ref{sec:entropy} we first derive the entropy of the spectrum explicitly in the case of equal-amplitude $\delta$-functions in the frequency continuum and discuss related results from the literature. We then discuss and make use of a recent result for the entropy in fixed-grid SAC \cite{ghanem20a}. Based on these two entropy forms, we conjecture a new ``mixed entropy'' when both frequencies and amplitudes are sampled. We also demonstrate how the optimal sampling temperature delivered by our criterion relates to the extensive form of the entropy by $\Theta \propto 1/N_\omega$. In Sec.~\ref{sec:deltapeak} we discuss constrained SAC applied to a spectrum with a sharp peak at the lower edge, either a $\delta$-function as in Fig.~\ref{fig:spec}(d) or a generalization to a multi-$\delta$ parametrization for sampling a quasi-particle peak of finite width. We test the ability of the optimization method to converge to the correct values of the amplitude and width of the peak using synthetic spectral functions. We also discuss results for the dynamic structure factor of the two-dimensional (2D) $S=1/2$ Heisenberg antiferromagnet, where a sharp peak at the lower edge of the spectrum represents the dominant single-magnon excitation. 
In Sec.~\ref{sec:contedge1} we apply the monotonicity constraint on the distance between the $\delta$-functions, Fig.~\ref{fig:spec}(e), which is appropriate for reproducing a continuum with an edge singularity. We present test results for the dynamic structure factor of the Heisenberg chain, where deconfined spinons lead to a divergent power-law singularity, which can be resolved to a remarkable degree with our method. We also consider a synthetic spectrum with a non-divergent sharp edge and show how a parameter regulating the $\delta$-function spacing can be optimized, leading to high-fidelity results without any other input. In Sec.~\ref{sec:contedge2} we discuss the entropic pressures existing within the parametrization used in the previous section and introduce an optimized parameter to further improve the resolution of the edge singularity, specifically enabling determination of the exponent governing an asymptotic power-law divergent or convergent form. Here we also discuss a combination of the parametrization used for the sharp edge with that for a generic continuum, with which a completely arbitrary spectral function with power-law edge can be modeled with no other input. In Sec.~\ref{sec:ladders} we compute the dynamic structure factor of $S=1/2$ Heisenberg 2- and 3-leg ladders. In the former case, we optimize the leading isolated $\delta$-function at the gap, which arises from the triplon quasi-particle at momentum $q=\pi$, and further improve the results by also imposing the three-triplon gap. In the latter case, we employ the parametrization with the expected divergent spinon edge followed by an arbitrary continuum. In Sec.~\ref{sec:maxent} we demonstrate the exact relationships between SAC with different parametrizations of the spectrum and the ME method with the corresponding functional forms of the entropy.
By comparing ME and SAC results in the proper way following from our exact mapping, we demonstrate the correctness of the three forms of the entropy discussed in Sec.~\ref{sec:entropy}. We also discuss sampling versus probability maximization within the ME framework and point out a previously overlooked problem arising from extensive sampling entropy. Readers who are interested in these topics and who are familiar with the basic aspects of the SAC and ME methods can read Sec.~\ref{sec:maxent} essentially independently of the other parts of the paper (with just a few jumps back to referenced results of earlier sections). In Sec.~\ref{sec:discussion} we conclude with a brief summary and further comments. For future prospects, we discuss more general constrained parametrizations and present a proposal for machine learning to identify the best spectrum in a large set of SAC or ME spectra. We also suggest potential advantages of including a small fraction of negative spectral weight in SAC. In \ref{app:lowtheta} we report new insights into the $\chi^2$ minimization procedure corresponding to $\Theta \to 0$, explaining why the ultimate best-fit spectrum should consist of a small number of $\delta$-functions. We also discuss how this limit defines an effective number of fitting parameters for noisy data, when positive definiteness is enforced. In \ref{app:low2} we further discuss how the SAC spectrum at very low sampling temperatures changes in the presence of a small fraction of negative spectral weight, as a result of additional entropy contributions. Our preliminary results indicate that the SAC method may sometimes be further improved by exploiting negative spectral weight. In \ref{app:fluct} we discuss the fluctuations of the sampled spectral weight within a fixed frequency window, specifically arguing that these fluctuations cannot be translated into statistical errors on the average spectrum.
We also explicitly demonstrate the additivity of amplitude and frequency fluctuations of a spectrum sampled with $\delta$-functions. In \ref{app:statmech} we compare and contrast conventional statistical mechanics and the unrestricted SAC sampling problem, providing further arguments for an unusual thermodynamic limit ($N_\omega \to \infty$) of SAC, where the fluctuations of the spectrum about the maximum-probability ME solution vanish. \section{The numerical analytic continuation problem} \label{sec:problem} In Sec.~\ref{sec:defs} we outline the mathematical formalism of the analytic continuation problem, establishing definitions and notation used in the later sections. We discuss QMC generated imaginary-time data in Sec.~\ref{sec:qmcdata}, e.g., the choice of time grid and the characterization of the statistical errors and covariance. In Sec.~\ref{sec:syntdata} we discuss synthetic data, i.e., imaginary-time correlations generated for testing purposes from an artificial model spectrum, with correlated noise added to mimic the statistical fluctuations in typical QMC data. \subsection{Definition of the problem} \label{sec:defs} The correlation function computed in a QMC simulation is defined with some operator $O$ of interest (typically corresponding to some experimental probe) as \begin{equation} G(\tau) = \langle O^\dagger(\tau) O(0)\rangle, \label{gtaudef1} \end{equation} where the imaginary-time dependence is defined in the Heisenberg representation as (working in dimensionless units where $\hbar=1$) \begin{equation} O(\tau) = {\rm e}^{\tau H} O {\rm e}^{-\tau H}, \label{otau} \end{equation} with the Hamiltonian $H$ of the system under study. 
In the basis of eigenstates $|n\rangle$ and eigenvalues $E_n$ of $H$, the spectral function of $O$ at temperature $T=\beta^{-1}$ (setting $k_B=1$) is given by \begin{equation} S(\omega)=\frac{\pi}{Z} \sum_{m,n}{\rm e}^{-\beta E_n}|\langle m|O|n\rangle|^2 \delta(\omega - [E_m-E_n]), \label{somegasum} \end{equation} where $Z$ is the partition function. The relationship between this spectral function and the imaginary-time correlation function in Eq.~(\ref{gtaudef1}) is \begin{equation} G(\tau) = \frac{1}{\pi}\int_{-\infty}^\infty d\omega S(\omega){\rm e}^{-\tau \omega}. \label{contrel1} \end{equation} For a bosonic operator $O$, $G(\beta-\tau)=G(\tau)$ and we need only $\tau \in [0,\beta/2]$. Further, in the case of a bosonic function the spectral weight distributions at negative and positive frequencies are related according to \begin{equation} S(-\omega)={\rm e}^{-\beta \omega}S(\omega). \label{sminus} \end{equation} The relationship between $S(\omega)$ and $G(\tau)$ can therefore be modified so that integration is required only over positive frequencies, with Eq.~(\ref{contrel1}) written as \begin{equation} G(\tau) = \int_{0}^\infty d\omega S(\omega) K(\tau,\omega), \label{contrel2} \end{equation} where the kernel is given by \begin{equation} K(\tau,\omega) = \frac{1}{\pi}(e^{-\tau\omega} + e^{-(\beta-\tau)\omega}). \label{kernel1} \end{equation} We will here consider only bosonic spectral functions, but the fermionic case can be studied with very minor modifications of the methods. For examples and applications, we will study quantum spin models and synthetic spectral functions. A well known example of a bosonic spectral function is the dynamic spin structure factor $S^\alpha(q,\omega)$, measured, e.g., by magnetic inelastic neutron scattering as the cross section for momentum ($q$) and energy ($\omega$) transfer. 
In this case the operator in Eq.~(\ref{somegasum}) is $O=S_q^\alpha$, the Fourier transform of the $\alpha$-component $S_r^\alpha$ $(\alpha=x,y,z)$ of the real-space spin operators. We will test our improved (as well as previous) SAC schemes on the dynamic structure factor of spin-isotropic Heisenberg models in one and two dimensions, where the function $S(q,\omega)$ is independent of the direction $\alpha$. In addition, we will also consider synthetic imaginary-time data to test the ability to resolve a variety of spectral features. A QMC simulation delivers a statistical estimate $\bar G_i \equiv\bar G(\tau_i)$ of the true correlation function $G(\tau)$ for a set of imaginary times $\tau_i$, $i=1,\ldots,N_\tau$ (or one can work in Matsubara space with a set of Fourier components at frequencies $\omega_n = n2\pi/\beta$ \cite{bergeron16,smakov05}, but here we will consider only formulations in the original time space). Often a uniform grid of $\tau$ points is used, but other grids are sometimes preferable, as we will discuss further below in Sec.~\ref{sec:qmcdata}. Ideally, for a given system there are no other approximations than the unavoidable statistical errors of $\bar G_i$, the magnitudes of which, as usual, scale with the inverse square root of the length of the QMC run. We denote by $\sigma_i$ one standard deviation of the mean $\bar G_i$. Importantly, the statistical errors of different data points $i$ are correlated, and their full characterization requires the covariance matrix \cite{jarrell96}.
With the QMC data divided up into bins $b=1,2,\ldots,N_B$ (assumed to be statistically independent, which in practice is essentially satisfied if the bins represent sufficiently long simulation times), the covariance matrix is given by \begin{equation} C_{ij} = \frac{1}{N_B(N_B-1)}\sum_{b=1}^{N_B} (G^b_i-\bar G_i)(G^b_j-\bar G_j), \label{cijdef} \end{equation} where $G^b_i$ is the mean of the correlation function computed with the data of bin $b$ [or $b$ could represent bootstrap samples, as will be discussed further below, in which case $N_B$ is the number of samples and the factor $1/(N_B-1)$ should be removed above]. The diagonal elements of $C$, the variances, are the squares of the conventional statistical errors; $\sigma_i^2 = C_{ii}$. When applying a numerical analytic continuation procedure, some suitable (sufficiently flexible) parametrization of the spectral function $S(\omega)$ is optimized for compatibility with the QMC data. Alternatively, in the SAC approach, statistically acceptable instances of $S(\omega)$ are sampled. In either case, given $S(\omega)$ the corresponding $G_i$ values can be computed according to Eq.~(\ref{contrel2}) and the overall closeness of these to the QMC-computed values $\bar G_i$ is quantified in the standard way by the ``goodness of the fit'', \begin{equation} \chi^2 = \sum_{i=1}^{N_\tau}\sum_{j=1}^{N_\tau} (G_i-\bar G_i)C^{-1}_{ij}(G_j-\bar G_j). \label{chi2} \end{equation} Sometimes only the diagonal elements of $C$ are included, and the goodness of the fit then reduces to \begin{equation} \chi_d^2 = \sum_{i=1}^{N_\tau} \left ( \frac{G_i-\bar G_i}{\sigma_i} \right )^2. \label{chi2d} \end{equation} Here we will always use the full covariance matrix, which is necessary for the SAC method to be statistically sound. \subsection{QMC correlation functions} \label{sec:qmcdata} It is convenient to work with normalized spectral functions.
According to Eq.~(\ref{contrel1}) the normalization $\int S(\omega)d\omega$ is just the value $\pi G(0)$ and we therefore divide the QMC data $\bar G(\tau_i)$ by $\bar G(0)$ for a spectrum normalized to $\pi$. We point out that dividing by $\bar G(0)$ also cancels out some covariance and makes the standard statistical errors $\sigma(\tau)$ smaller for $\tau$ close to $0$ (vanishing as $\tau \to 0$). We use the bootstrap method for computing the covariance matrix, i.e., with a large number $M$ of samples of $N_B$ randomly chosen bins among all the $N_B$ bins. As mentioned, Eq.~(\ref{cijdef}) then holds if the sum is taken over the $M$ bootstrap samples and the denominator $N_B(N_B-1)$ is replaced by $M$. Normalizing each bootstrap sample, the normalization $\int S(\omega)d\omega=\pi$ is also enforced exactly in the sampled spectra, and we do not use the $\tau=0$ data point explicitly. The original normalization is put back in after the analytic continuation by just multiplying the spectrum by the original pre-normalization value of $\bar G(0)$. For temperatures $T>0$, the integral $\int S(\omega)d\omega$ includes spectral weight at both negative and positive frequencies, and with Eq.~(\ref{contrel2}) $G(0)$ does not correspond directly to a fixed normalization of $S(\omega)$ in the corresponding frequency range $\omega \in [0,\infty)$. The Monte Carlo sampling can be simplified by working with a different spectral function that is normalized to unity on the positive frequency axis. We therefore define a modified spectral function to use internally in the computer program, \begin{equation} A(\omega) = S(\omega)(1 + {\rm e}^{-\beta\omega})/\pi, \label{barelation} \end{equation} for which Eq.~(\ref{contrel1}) implies [with the convention $G(0)=1$] the desired normalization \begin{equation} \int_{0}^\infty d\omega A(\omega) = 1. 
\end{equation} Then, in place of Eqs.~(\ref{contrel2}) and (\ref{kernel1}) we use \begin{equation} G(\tau) = \int_{0}^\infty d\omega A(\omega) \bar K(\tau,\omega), \label{eir} \end{equation} where the kernel is given by \begin{equation} \bar K(\tau,\omega) = \frac{e^{-\tau\omega} + e^{-(\beta-\tau)\omega}}{1 + e^{-\beta\omega}}. \label{kernel2} \end{equation} We always convert back from $A(\omega)$ to $S(\omega)$ after the analytic continuation. Several considerations are involved in choosing the set of points $\{\tau_i\}$ at which to evaluate $G(\tau)$ in a QMC simulation. The behavior of $S(\omega)$ at high frequencies is predominantly reflected at short times, because of the kernel $\rm{e}^{-\tau\omega}$ in Eq.~(\ref{contrel1}) and the typical increase in the relative statistical errors with $\tau$. The low-frequency behavior is conversely most cleanly ``filtered out'' at the longest accessible time, before the relative errors increase so much that the data become useless in practice. Thus, we would like to have many $\tau$ points at very short times as well as at long times. However, we do not want too many points in total, because each one requires QMC computation time, and, moreover, the covariance matrix can become unstable (its inverse may become noise dominated) if the matrix is too large for a given statistical quality of the data. It should be noted that the computational effort of the analytic continuation procedure also increases with the number of $\tau$ points (linearly, as we will see). To sufficiently cover both short and long times $\tau \in [0,\beta/2]$ for large $\beta$, while limiting the total number of points, it is often convenient to use a non-linear $\tau$ grid, e.g., a quadratic grid where $\tau_i \propto i^2$. It is also sometimes useful to start with a linear grid for short times and switch to a quadratic grid at longer times.
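As a minimal numerical sketch (with illustrative values of $\beta$, the grid parameters, and a synthetic $A(\omega)$ built from equal-amplitude $\delta$-functions; none of these choices are tied to the production runs of this work), a quadratic $\tau$ grid and the evaluation of $G(\tau)$ via Eq.~(\ref{eir}) with the kernel of Eq.~(\ref{kernel2}) can be coded as:

```python
import numpy as np

beta = 10.0                                            # illustrative inverse temperature
tau = np.array([i**2 / 100.0 for i in range(1, 21)])   # quadratic grid, tau_i = i^2/100

def kbar(tau, omega, beta):
    # Modified kernel of Eq. (kernel2); note Kbar(0, omega) = 1 for all omega,
    # consistent with the normalization convention G(0) = 1.
    return (np.exp(-tau * omega) + np.exp(-(beta - tau) * omega)) \
           / (1.0 + np.exp(-beta * omega))

# A(omega): equal-amplitude delta functions at sampled frequencies, with
# total weight 1 [the unit normalization of A(omega) on the positive axis].
rng = np.random.default_rng(0)
omegas = rng.uniform(0.5, 3.0, size=1000)
amps = np.full(omegas.size, 1.0 / omegas.size)

# Discrete version of Eq. (eir): G(tau_i) = sum_k A_k * Kbar(tau_i, omega_k)
G = kbar(tau[:, None], omegas[None, :], beta) @ amps
```

Since $\bar K(0,\omega)=1$ for all $\omega$, the unit normalization of $A(\omega)$ directly enforces $G(0)=1$, and $G(\tau)$ decreases monotonically on $[0,\beta/2]$ when all spectral weight sits at $\omega>0$.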
Overall, our experience is, thankfully, that the end result for $S(\omega)$ is not very sensitive to exactly what $\tau$ grid is chosen, as long as a reasonably large number of points is used (typically tens of points) and they are spread over the range where the statistical errors are relatively small (which we typically take as below $10\%$ or $20\%$ relative error). In practice, when using the covariance matrix $C$, instead of inverting it and using Eq.~(\ref{chi2}) to compute $\chi^2$, it is better to diagonalize $C$ and transform $\bar G(\tau)$ and the kernel to the resulting eigenbasis. Then $\chi^2$ is given by a formula like Eq.~(\ref{chi2d}), with $\sigma^2_i$ replaced by the $i$th eigenvalue $\epsilon_i$ of the covariance matrix and $\{ \bar G_i\}$ being the elements of the vector obtained by transforming the original $\tau$ data, arranged in a vector, to the eigenbasis of $C$. To summarize the change of basis, with $U$ denoting the orthogonal matrix that transforms the covariance matrix $C$ to its diagonal form, $\epsilon = U^{\rm T} CU$, we make these substitutions when applying the above formulas: \begin{equation} \bar G \to U^{\rm T} \bar G,~~~~\bar K(\omega) \to U^{\rm T}\bar K(\omega), \label{basistransf} \end{equation} where on the right-hand side $\bar G$ denotes the vector of $N_\tau$ QMC-computed $G(\tau)$ points and $\bar K(\omega)$ similarly is the vector containing the $N_\tau$ kernel points evaluated at frequency $\omega$. After the transformation, the invariant goodness-of-fit, Eq.~(\ref{chi2}), is a single sum, \begin{equation} \chi^2 = \sum_{i=1}^{N_\tau} \left ( \frac{G_i-\bar G_i}{\sqrt{\epsilon_i}} \right )^2, \label{chi2eps} \end{equation} where now of course the vector $G$ is automatically produced in the transformed basis because it is computed with the transformed kernel according to Eq.~(\ref{eir}). To compute the correlation functions with QMC simulations, we here use the stochastic series expansion (SSE) method \cite{sandvik10}. 
It should be noted that there are no other approximations in this scheme beyond the statistical errors. We will not discuss how the correlation functions are calculated with the SSE method; see Refs.~\cite{sandvik92,sandvik96,dorneich01}. We refer to the recent review Ref.~\cite{sandvik19} for some further discussion and references. Fig.~\ref{fig:gtau}(a) shows an example of a QMC-computed correlation function on a quadratic grid of $\tau$ values, with data included up to the point before the relative error (the conventional standard deviation) exceeds $20\%$. Fig.~\ref{fig:gtau}(b) shows the corresponding statistical error versus $\tau$, as well as the eigenvalues of the covariance matrix graphed in ascending order (where we have taken the square-root so that the eigenvalues can be directly compared with the conventional error bars $\sigma_i$). The statistical errors $\sigma_i$ are typically only weakly dependent on $\tau_i$ except when $\tau_i$ are close to zero, as can be seen in Fig.~\ref{fig:gtau}(b). We will refer to the approximate overall magnitude of $\sigma_i$ [with the normalization $\bar G(0)=1$] close to the cut-off value of $\tau$ as the ``error level''. Thus, in Fig.~\ref{fig:gtau} the error level is about $6\times 10^{-6}$. An alternative simple definition of the error level could be the largest eigenvalue of the covariance matrix (which is a bit above $10^{-5}$ in the present case). With the SSE method, it is often possible to reach error levels of $10^{-5}$, or even $10^{-6}$, without too much effort. \begin{figure*}[t] \centering \includegraphics[width=105mm]{fig02.pdf} \vskip-1mm \caption{(a) Normalized imaginary-time spin correlations at momentum $q=4\pi/5$ generated in SSE QMC simulations of the Heisenberg chain of size $L=500$ at a temperature low enough ($\beta=1000$) to produce ground state results. The grid is quadratic, with $\tau_i=i^2/100$ for $i =1,\ldots,30$, with the cut-off representing a relative error exceeding $20\%$. 
(b) The statistical errors, same as those shown with error bars in (a). The square roots of the eigenvalues of the covariance matrix are graphed in increasing order in the inset.} \label{fig:gtau} \end{figure*} \begin{figure*} \centering \includegraphics[width=107mm]{fig03.pdf} \vskip-1mm \caption{Selected eigenfunctions corresponding to the data in Fig.~\ref{fig:gtau}. Panels (a)--(d) show the eigenfunctions $U_1$, $U_{10}$, $U_{20}$, and $U_{30}$, respectively (with smallest to largest eigenvalues).} \label{fig:geigen} \end{figure*} Some examples of eigenfunctions of $C$ are shown in Fig.~\ref{fig:geigen}. The eigenvector $U_1$ in Fig.~\ref{fig:geigen}(a) corresponds to the smallest eigenvalue $\epsilon_1$ in the inset of Fig.~\ref{fig:gtau}(b) and involves essentially only $\bar G(\tau_i)$ for the three shortest times, $i=1,2,3$, added out-of-phase. The subsequent eigenvectors involve most of the original $\tau$ points, with the weights exhibiting oscillations with increasing distance between maxima and minima. The eigenvector $U_{30}$ for the largest eigenvalue, Fig.~\ref{fig:geigen}(d), has only positive contributions, with significant weight for about the first half of the $\tau$ points. Thus, even after normalizing the data by $\bar G(0)$, which automatically eliminates some of the uniform covariance, the largest fluctuations are still in-phase, involving the points with $\tau \lesssim 3$ in this case. While these details of the covariance depend on the model and the spectral function considered, the behaviors observed in Figs.~\ref{fig:gtau} and \ref{fig:geigen} are qualitatively fairly typical for a wide range of quantum spin models and observables that we have studied (e.g., the 2D and 3D systems in Refs.~\cite{shao17} and \cite{qin17}, respectively). It has been argued that only data corresponding to ``large'' eigenvalues (or singular values) should be used \cite{jarrell96}, but we do not see any reason for truncation at this stage.
Some of the smaller eigenvalues correspond to the very small error bars at small $\tau$, e.g., in the case of the eigenvector $U_1$ in Fig.~\ref{fig:geigen}(a), and throwing away these data may be detrimental. It is also difficult to identify a clear boundary between ``large'' and ``small'' eigenvalues, as seen in the inset of Fig.~\ref{fig:gtau}(b). We here only make sure that the eigenvalues are stable with respect to the amount of QMC data used. Often problems with the covariance matrix can be detected at the analytic continuation stage, if a statistically acceptable $\chi^2$ value cannot be obtained (and we will later quantify the meaning of ``acceptable''). In such cases, we prune the data set by, e.g., only using every second $\tau$ point or by increasing the amount of QMC data until the eigenvalues and eigenvectors become reliable and $\chi^2$ is acceptable. \subsection{Synthetic data} \label{sec:syntdata} In order to rigorously test analytic continuation methods, models with exactly known spectral functions are invaluable. However, models that are exactly solvable, amenable to QMC simulations, and also have non-trivial spectral functions with interesting features are rare. Even in the case of the $S=1/2$ Heisenberg chain, which we will study frequently in the following sections, the dynamic spin structure factor has not been calculated entirely without approximations, though the results can still serve as very useful benchmarks \cite{caux05a,caux05b,pereira06}. Given the need to test methods for many different types of spectral functions, it is necessary to consider synthetic data, by which we mean imaginary-time correlation functions computed directly using Eq.~(\ref{contrel1}) with an artificial spectral function $S(\omega)$ constructed with the desired features to be tested. Several examples will be considered in the tests presented in the later sections. 
To mimic the statistical errors present in QMC data, noise should be added to the values $G(\tau_i)$ obtained from Eq.~(\ref{contrel1}). Since results for different $\tau$ points are always strongly correlated when they are computed on the basis of the same QMC configurations, we construct synthetic correlated noise in the way already described in Refs.~\cite{qin17} and \cite{shao17} and repeated here for completeness. We begin by generating normally distributed values $\sigma^0_i$ with zero mean and the same standard deviation $\sigma^0$ for all $i$ (for simplicity). To build in correlations between the noise for different $i$, we compute weighted averages over several points using an exponentially decaying weight function; \begin{equation} \sigma_i = \sum_{j} \sigma^0_j {\rm e}^{-|\tau_i-\tau_j|/\xi_\tau}, \label{corrnoise} \end{equation} which we then add to $G(\tau_i)$. We generate a large number of such noisy data sets (corresponding to QMC data bins) and run these through the same bootstrapping code that we use for real QMC data to compute the mean values and the covariance matrix. The correlation time $\xi_\tau$ and the common (in the simplest case) standard deviation $\sigma^0$ of the generated noise instances $\sigma^0_j$ in Eq.~(\ref{corrnoise}) are adjusted so that the eigenvalues of the covariance matrix are similar to those of real QMC data, e.g., those in Fig.~\ref{fig:gtau}(b). For all results presented in this work, we used $\xi_\tau=1$. The overall error level $\sigma$ (i.e., the approximate statistical errors for $\tau\gg \xi_\tau$, where the $\tau$ dependence is weak, as in Fig.~\ref{fig:gtau}) depends on the original (uncorrelated) standard deviation $\sigma^0$ and the number of bins, and these parameters can be adjusted for a desired error level $\sigma$. We note here that covariance should not necessarily be regarded as a detriment to analytic continuation.
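Returning to the noise construction of Eq.~(\ref{corrnoise}), a minimal sketch with illustrative grid and noise parameters (the values below are for demonstration only, apart from $\xi_\tau=1$ as used in this work) is:

```python
import numpy as np

rng = np.random.default_rng(2)
tau = np.array([i**2 / 100.0 for i in range(1, 31)])  # quadratic grid, as in Fig. 2
xi_tau = 1.0                                          # noise correlation time
sigma0 = 1e-5                                         # bare (uncorrelated) noise level

def noise_instance(tau, sigma0, xi_tau, rng):
    # Eq. (corrnoise): exponentially weighted average of independent Gaussian
    # deviates sigma0_j, producing covariant noise along the tau axis.
    s0 = rng.normal(0.0, sigma0, size=tau.size)
    weight = np.exp(-np.abs(tau[:, None] - tau[None, :]) / xi_tau)
    return weight @ s0

# Many such instances play the role of QMC data bins.
bins = np.array([noise_instance(tau, sigma0, xi_tau, rng) for _ in range(4000)])
corr_adjacent = np.corrcoef(bins[:, 0], bins[:, 1])[0, 1]
corr_distant = np.corrcoef(bins[:, 0], bins[:, -1])[0, 1]
```

The resulting bins can then be passed through the same bootstrap machinery as real QMC data; adjacent $\tau$ points show strongly correlated fluctuations, while well-separated points are nearly uncorrelated.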
In fact, covariance is what causes the small eigenvalues, e.g., in the inset of Fig.~\ref{fig:geigen}(b), and some of these small eigenvalues can translate into better frequency resolution compared to data with uniform statistical errors of magnitude similar to the larger eigenvalues. Using tests with synthetic data with and without covariance, but otherwise the same number of $\tau$ points and noise level, we have indeed found that the presence of covariance actually improves the outcome of analytic continuation. We will not discuss this aspect of covariance further, but it is important to keep it in mind when benchmarking methods with synthetic spectral functions. In our tests with synthetic data in this work, we set the temperature in Eq.~(\ref{contrel1}) to a low value in all cases (often $T=0$), since the improved SAC methods considered here are mainly intended for applications to systems at low temperatures, where sharp spectral features often appear. \section{Review of methods and outline of stochastic analytic continuation with constraints} \label{sec:methods} In perturbative diagrammatic many-body calculations, where imaginary-time correlations can be obtained to machine precision, Pad\'e approximants \cite{vidberg77,beach00} are commonly used for analytic continuation (though not always successfully---an apparently superior method was developed recently \cite{fei21}). The Pad\'e approach was applied also with QMC data, with some success for simple cases, e.g., in early work to extract single-mode information \cite{thirumvalai83}. In general this method is not stable when attempting to reproduce more complicated spectral functions, though some progress has been made in recent years \cite{schott16,han17,motoyama22}. The new Nevanlinna analytic continuation method \cite{fei21} has not yet been broadly explored with QMC data. 
To set the stage for the new developments presented in this treatise, we here first briefly review the most commonly used numerical methods for analytic continuation of QMC data. In Sec.~\ref{sec:methods_a} we summarize some of the maximum-likelihood techniques explored early on and also mention more recent related developments. We then focus on the commonly used ME method in Sec.~\ref{sec:methods_b}. In Sec.~\ref{sec:methods_c} we discuss the main ideas behind the conventional (unrestricted) SAC method (with more details and new insights to follow in Sec.~\ref{sec:plainsampling}). In Sec.~\ref{sec:methods_d} we discuss the motivations for restricted sampling (with optimized constraints) and give a preview of the further progress that will be presented in more detail in Secs.~\ref{sec:deltapeak}, \ref{sec:contedge1}, and \ref{sec:contedge2}. We defer discussion of the more recently explored machine learning methods for analytic continuation \cite{arsenault17,fournier20} to Sec.~\ref{sec:discussion}. \subsection{Data fitting based on $\chi^2$} \label{sec:methods_a} An early approach to analytic continuation of QMC data was to represent the spectrum as a positive-definite histogram with a small number of bins (four to eight) and optimize the distribution of weight by minimizing $\chi^2$ \cite{schuttler85a,schuttler85b} (where sum rules and computed moments of the spectrum were also included in addition to the correlation function). It was noted that the amount of information contained in the noisy QMC data did not warrant a larger number of histogram bins. If one attempts to use a large number of bins (or closely spaced $\delta$-functions with optimized amplitudes) the result is in general not smooth, but sharp spikes appear. 
Perhaps surprisingly, the spectrum truly minimizing $\chi^2$ typically consists of a few isolated $\delta$-functions \cite{sandvik98}, as we will also discuss further below when studying the dependence of a sampled spectrum on the fictitious temperature $\Theta$ in the SAC method. In \ref{app:lowtheta} we confirm that the ultimately best-fit spectrum is a small set of $\delta$-functions. The reason for this behavior can be found in the imposed positive-definiteness of the spectrum. In a further development of the $\chi^2$ fitting approach, a discrete gradient-squared term was minimized along with $\chi^2$, thus leading to smooth spectral functions \cite{white89}. While this represented an improvement and some useful results were obtained, the lack of a general criterion for the relative degree of smoothness and goodness of the fit made it hard to judge the validity of the results. Around the same time, the mathematically more appealing ME method \cite{gull84} was adapted to the QMC context \cite{silver90,gubernatis91,jarrell96} and quickly became the dominant analytic continuation technique. In principle, a very good solution to the analytic continuation problem would be to just use $\chi^2$ minimization with a suitable functional form, not based on a dense histogram or a large set of $\delta$-functions but still with enough flexibility to describe the expected spectrum. This approach has been applied with some success with functions depending on a small number of parameters motivated by physical insights, e.g., in Refs.~\cite{sandvik98b,sandvik01,katz14}. However, it is very difficult to construct a generic form with sufficient flexibility to reproduce {\it a priori} unknown spectral functions. Increasing the flexibility beyond some rather small number of parameters leads to problems similar to those encountered with histograms, unless the solution can be sufficiently constrained (regularized).
Some progress has been made recently along these lines, by combining rather complex, flexible functional forms with a Bayesian method to discriminate between results of different parametrizations \cite{linden19}. The spectral functions here are still not completely arbitrary but should be constructed based on prior knowledge. Analytic continuation based on singular-value decomposition is also possible \cite{gazit13}. As an alternative to minimizing $\chi^2$ (the $L_2$ norm), other norms defining the ``best'' solution can also be used, as in sparse modeling. Here the goal is to find a minimal number of parameters, in some representation of the spectrum, to model noisy imaginary-time (or Matsubara frequency) data. So far we have not seen any advantages of these approaches relative to the other methods reviewed here and further below, though the stated goal is precisely to solve problems analogous to the spiky histograms discussed above. We refer to the recent review Ref.~\cite{otsuki20} for further details on sparse modeling. \subsection{Maximum entropy methods} \label{sec:methods_b} In the ME method the spectrum is normally parametrized as amplitudes on a dense frequency grid, regularized by the information entropy $E(S)$ defined with respect to a default model $D(\omega)$. In the limit of a continuum \begin{equation} E(S)=-\int_{-\infty}^\infty d\omega S(\omega) \ln{\left (\frac{S(\omega)}{D(\omega)} \right )}, \label{esdef} \end{equation} which is maximal ($E=0$) when $S(\omega)=D(\omega)$. The mathematical foundation of the ME method is Bayes' theorem, \begin{equation} P(S|\bar G)P(\bar G)=P(\bar G|S)P(S). \label{bayes} \end{equation} Here the prior probability (i.e., the result in the absence of data) is given by \begin{equation} P(S) \propto {\rm exp}[\alpha E(S)], \label{meprior} \end{equation} where $\alpha$ is a constant to be determined [or in some cases integrated over with a suitable prior $P(\alpha)$].
The conditional probability $P(\bar G|S)$ of the data set $\{\bar G(\tau_i)\}$ given a spectrum $S$ (implicitly defined by the model simulated) is well known from the statistics of Gaussian fluctuations; \begin{equation} P(\bar G|S) \propto {\rm exp} \left (-\frac{\chi^2(\bar G,S)}{2} \right ). \label{psgme} \end{equation} Finally, $P(\bar G)$ in Eq.~(\ref{bayes}) acts as an irrelevant normalization, since $\bar G$ has already been generated when solving for $S$. Maximizing the probability $P(S|\bar G) \propto P(\bar G|S)P(S)$ amounts to minimizing the functional \begin{equation} F(S)=\chi^2(S)/2-\alpha E(S), \label{fsdef} \end{equation} which delivers the default model if $\alpha$ is large and corresponds to pure $\chi^2$ fitting when $\alpha \to 0$. These limits represent poor solutions to the problem for completely different reasons, and the ``best'' spectrum should be obtained at some intermediate value of $\alpha$. The entropy serves a similar purpose as the gradient-squared smoothing discussed above in Sec.~\ref{sec:methods_a}, but in the ME method the ambiguity in the choice of the parameter $\alpha$ has been partially resolved by using Bayesian inference arguments \cite{silver90,jarrell96}. The basic premises of those arguments are themselves not rigorous, however, and there are differing opinions on exactly how to best determine $\alpha$ or integrate results over $\alpha$. Recently, a completely different criterion, originating in the SAC approach \cite{beach04}, was put forward \cite{bergeron16} where the ``best'' $\alpha$ corresponds to a maximum of the derivative of $\ln(\chi^2)$ with respect to $\ln(\alpha)$ in the region where $\chi^2(\alpha)$ is close to its minimum value. However, as we will show in Sec.~\ref{sec:thetacrit} in the context of the SAC method, and also in Sec.~\ref{sec:maxent} when further discussing the ME method, this criterion cannot guarantee an acceptable fit to the data.
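To make the structure of the resulting optimization problem concrete, the following is a minimal sketch (not a production ME code) of minimizing $F(S)$ in Eq.~(\ref{fsdef}) by damped fixed-point iteration of the stationarity condition, with $\alpha$ simply fixed by hand. For convenience we use the generalized entropy $\sum_i \Delta\omega\,[S_i - D_i - S_i\ln(S_i/D_i)]$ common in ME implementations, which equals Eq.~(\ref{esdef}) when both $S$ and $D$ are normalized; the kernel, synthetic data, and all parameter values are illustrative assumptions:

```python
import numpy as np

def maxent_fixed_point(G_bar, tau, sigma, omega, D, alpha, n_iter=3000, eta=0.05):
    """Minimize F = chi^2/2 - alpha*E by damped iteration of the stationarity
    condition S_i = D_i exp(g_i/alpha), with g = K^T r and residuals
    r_j = [G_bar_j - (K S)_j dw]/sigma^2.  T=0 kernel K_ji = exp(-tau_j omega_i);
    generalized entropy E = sum_i dw [S_i - D_i - S_i ln(S_i/D_i)]."""
    dw = omega[1] - omega[0]
    K = np.exp(-np.outer(tau, omega))
    S = D.copy()
    for _ in range(n_iter):
        r = (G_bar - K @ S * dw) / sigma**2
        g = K.T @ r
        # damped multiplicative update (mixing in log space) for stability
        S = S**(1.0 - eta) * (D * np.exp(np.clip(g / alpha, -50.0, 50.0)))**eta
    chi2 = np.sum(((G_bar - K @ S * dw) / sigma)**2)
    return S, chi2

# Illustrative example: box spectrum on [1, 3], flat default model on [0, 5]
tau = np.linspace(0.05, 5.0, 40)
G_bar = (np.exp(-tau) - np.exp(-3.0 * tau)) / (2.0 * tau)
omega = np.linspace(0.0, 5.0, 101)
D = np.full(omega.shape, 0.2)            # flat, normalized default model
S, chi2 = maxent_fixed_point(G_bar, tau, sigma=0.01, omega=omega, D=D, alpha=2e4)
```

For very large $\alpha$ the iteration returns the default model, while decreasing $\alpha$ improves the fit at the cost of entropy; in practice $\alpha$ is then selected by one of the criteria discussed above.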
To improve the performance of the ME method in general, all prior available information should be incorporated to constrain the solution \cite{linden95}. A default model $D(\omega)$ in the entropy definition, Eq.~(\ref{esdef}), does not impose any hard constraints, but a good model (i.e., close to the true spectrum) often helps significantly to achieve a solution with minimal distortions. In the QMC context, frequency moments from sum rules can be imposed when constructing $D(\omega)$ by maximizing the entropy \cite{jarrell96}. Approximate results, e.g., from perturbation theory \cite{diamantis14,fanto21}, can also be used, but this approach can produce biased or misleading results, given that the perturbative results in most cases would be far from correct \cite{jarrell12}. Often, in the absence of strong constraints or a default model known to be very close to the solution sought, the best option is to use a flat default model extending from $\omega=0$ [in the case where the $\omega \ge 0$ formulation with the kernel in Eq.~(\ref{kernel2}) is used] up to some frequency beyond which the remaining spectral weight is negligible. If the lower bound $\omega_0>0$ is known, incorporating it can also significantly improve the resolution at higher frequencies, as was noted in the context of the SAC method \cite{sandvik16}. We will further discuss the ME method in Sec.~\ref{sec:maxent}, where we present transferable insights from the SAC approach. We propose a new SAC-inspired way of fixing the entropy factor $\alpha$. Most importantly, we formulate equivalences (not invoking any mean-field arguments \cite{beach04} or other approximations) between SAC sampling with different parametrizations and ME methods with associated functional forms of the entropy---not always the Shannon information entropy.
\subsection{Stochastic averaging} \label{sec:methods_c} It was realized early on \cite{white91} that a different way to achieve a smooth spectrum is to average over many solutions with reasonable $\chi^2$ values, though this method initially fell into the shadow of the ME method. The SAC approach (also called stochastic analytic inference or the average spectrum method) was introduced independently in a slightly different form \cite{sandvik98} several years later. Subsequent works further developed and explored the method \cite{beach04,syljuasen08,fuchs10,sandvik16,qin17,shao17,ghanem20a,ghanem20b}. Applications of SAC methods to various quantum lattice models abound; we list a representative sample of mostly recent works as Refs.~\cite{syljuasen08,qin17,shao17,feldner11,voll15,lohofer15,lohofer17,becker17,becker18,ying19,shu18,sun18,ma18,xu19,li20,raczkowski20,zhou21,sato21,cheng22,liu22}. In most SAC methods, the averaging is carried out by importance sampling of the parameters of a fully flexible positive (semi-)definite spectrum $S(\omega)$, e.g., the amplitudes of $\delta$-functions on a dense grid of frequencies or some large number of $\delta$-functions residing in continuous frequency space, as illustrated in Fig.~\ref{fig:spec}. Such a parametrization with large $N_\omega$ can be regarded as the configuration space of a statistical-mechanics problem, with $\chi^2(S)/2$ playing the role of the energy in a Boltzmann-like weight function. One can then carry out a Metropolis Monte Carlo simulation in this space at a fictitious temperature $\Theta$, whence the probability density of the spectrum $S(\omega)$, given the data $\bar G$ (and the covariance matrix), is \begin{equation} P(S|\bar G) \propto {\rm exp} \left (-\frac{\chi^2(S)}{2\Theta} \right ), \label{psg} \end{equation} where $\chi^2$ also depends on $\bar G$ and the covariance matrix according to Eq.~(\ref{chi2}).
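The sampling defined by Eq.~(\ref{psg}) can be sketched as follows, here with the equal-amplitude continuum parametrization of Fig.~\ref{fig:spec}(b) and, for simplicity, the $T=0$ kernel ${\rm e}^{-\tau\omega}$ and a diagonal covariance matrix (production codes sample in the eigenbasis of the full covariance matrix); the test data and all parameter values are illustrative only:

```python
import numpy as np

def sac_sample(G_bar, tau, sigma, n_omega=50, theta=0.05, n_sweeps=1000,
               omega_max=10.0, step=0.2, rng=None):
    """Minimal SAC sketch: Metropolis sampling of n_omega equal-amplitude
    delta-functions at continuous frequencies, with weight exp(-chi^2/2 Theta).
    Assumes the T=0 kernel exp(-tau*omega) and independent errors sigma."""
    rng = np.random.default_rng(rng)
    A = 1.0 / n_omega                              # equal amplitudes, unit total weight
    omegas = rng.uniform(0.0, omega_max, n_omega)
    G_S = A * np.exp(-np.outer(tau, omegas)).sum(axis=1)
    chi2 = np.sum(((G_bar - G_S) / sigma)**2)
    edges = np.linspace(0.0, omega_max, 101)
    hist = np.zeros(100)                           # accumulated average spectrum
    for sweep in range(n_sweeps):
        for i in rng.integers(0, n_omega, n_omega):
            new = omegas[i] + rng.uniform(-step, step)
            if not 0.0 <= new <= omega_max:
                continue                           # stay inside the frequency window
            dG = A * (np.exp(-tau * new) - np.exp(-tau * omegas[i]))
            chi2_new = np.sum(((G_bar - G_S - dG) / sigma)**2)
            # Metropolis acceptance at fictitious temperature Theta
            if chi2_new <= chi2 or rng.random() < np.exp((chi2 - chi2_new) / (2.0 * theta)):
                omegas[i], G_S, chi2 = new, G_S + dG, chi2_new
        if sweep >= n_sweeps // 2:                 # accumulate after equilibration
            hist += np.histogram(omegas, bins=edges)[0]
    return edges, hist / hist.sum(), chi2

# Demo: data from a single delta-function at omega = 1.5 (illustrative)
tau = np.linspace(0.1, 4.0, 40)
G_bar = np.exp(-1.5 * tau)
edges, spec, chi2 = sac_sample(G_bar, tau, sigma=0.01, theta=0.02, rng=1)
```

In practice $\Theta$ must also be chosen appropriately, which is discussed next.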
With $\Theta=1$, which was used by White~\cite{white91} and also advocated by Sylju{\aa}sen \cite{syljuasen08}, this conditional probability is exactly the likelihood function arising from Bayes' theorem, Eq.~(\ref{bayes}), if one assumes a prior probability $P(S)$ independent of $S$ (i.e., all combinations of the parameters defining the spectrum are {\it a priori} equally likely). Leaving out the unimportant (and unknown) normalization $P(\bar G)$ we then have $P(S|\bar G) \propto P(\bar G|S)$, i.e., Eq.~(\ref{psg}). As an alternative to fixing $\Theta=1$, it can be argued that $\Theta$ should be adjusted to make sure that the sampled average goodness of the fit $\langle \chi^2\rangle$ is close to its minimum value (obtained when $\Theta \to 0$). The fluctuations of the spectrum must still be large enough to produce a smooth average \cite{sandvik98,beach04}, and then some criterion balancing $\langle \chi^2\rangle$ minimization and sampling entropy must be devised. It has been shown explicitly \cite{sandvik16} that configurational entropy leads to a deteriorating spectrum (increasing $\langle \chi^2\rangle$) as the number of degrees of freedom of the spectrum is increased if the temperature is held fixed at $\Theta=1$. This effect can be counteracted by imposing constraints \cite{sandvik16}, as we will also discuss here, but in most cases it is still necessary to also suppress the entropy by reducing $\Theta$. The need to lower the sampling temperature below $\Theta=1$ leads to a problem similar to the selection of the entropy weighting factor $\alpha$ in the ME method. A commonly used method has been to identify the point of sharpest drop (peak in the derivative) of $\ln\langle \chi^2(\Theta)\rangle$ versus $\ln(\Theta)$, which, in a loose analogy with the specific heat \cite{beach04}, can be regarded as signaling a transition into a ``glassy'' state in which the spectrum becomes difficult to sample and is affected by the statistical errors.
Bayesian inference can also be applied to fix $\Theta$ \cite{fuchs10}, with the same caveats as in the case of $\alpha$ in the ME method. As demonstrated by Beach, the ME and SAC methods are in fact closely related \cite{beach04}---treating the SAC setup within a mean-field-like approximation gives exactly the ME method, with $\alpha=\Theta$. This fact initially appeared to lend further credence to SAC as superior to the ME method, as the fluctuations neglected in the ME solution can contain additional spectral structure \cite{fuchs10}. Here, in Sec.~\ref{sec:entropy} and \ref{sec:maxent}, we will further quantify the relationship between the two methods, which in some important aspects is at odds with the previous notions. As mentioned in the previous section, the likelihood function Eq.~(\ref{psg}) with $\Theta=1$ arises from Bayes' theorem with a constant prior $P(S)$. Alternatively, this likelihood function, with $\Theta=1$ or $\Theta \not= 1$, can be heuristically interpreted as a reasonable way to carry out averaging over a class of spectral functions compatible with the QMC data set $\{ \bar G_i\}$. Ultimately, however, a given parametrization of the spectrum can also be regarded as an entropic prior, i.e., there is some bias toward certain shapes of the spectrum due to specific entropic pressures associated with the sampled degrees of freedom (i.e., the details of the stochastic process). Investigating these entropic effects, formally quantifying them and counteracting them with constraints will be the major themes of this paper. There is another line of average-spectrum methods where the sampling is not carried out with a Boltzmann-like weight function, but some other mechanism is used to generate an ensemble of spectra with reasonable $\chi^2$ values \cite{mishchenko00,mishchenko02,mishchenko12,goulko17,bao16,krivenko2019}. We will here only discuss sampling with Eq.~(\ref{psg}), which has the advantage of many analogies with statistical mechanics.
Specifically, regularizing the average spectrum with the fictitious temperature $\Theta$, there is an adjustable competition between ``internal energy'', i.e., minimization of $\langle \chi^2\rangle$, and entropy, the latter of which can distort the spectrum but also counteracts overfitting (preventing freezing into an undesirable ground state corresponding to a noise dominated spectrum). Further analogies appear when constraints are imposed, whence the sampling maximizes the entropy within a restricted space. Optimizing the parameter regulating a constraint corresponds to $\langle \chi^2\rangle$ (internal energy) minimization by suppression of entropy at fixed $\Theta$. The non-thermal regularization parameters used in Refs.~\cite{mishchenko00,mishchenko02,mishchenko12,goulko17,bao16,krivenko2019} imply other, less understood, relationships between $\langle \chi^2\rangle$ and entropy. Sampling of the spectrum is also sometimes done within the ME approach \cite{boninsegni96,kora18}, using $e^{-F(S)}$ with $F(S)$ given by Eq.~(\ref{fsdef}) as the weight function. Thus, an entropic prior is used to weight the configurations, and at the same time there is also an often overlooked native entropy of the sampling space (normally the spectral weight distribution stored as a discrete histogram). The latter in principle causes diverging fluctuations in the continuum limit of the frequency space, as we will demonstrate explicitly in Sec.~\ref{sec:maxent}. \subsection{Stochastic analytic continuation; parametrizations and constraints} \label{sec:methods_d} It seems very difficult to completely avoid some bias in any numerical analytic continuation method. A main goal of the work presented here is to show how bias arising from entropic pressures can be removed by constraints. 
A simple example would be to fix a lower bound of a gapped spectrum at some frequency $\omega_0>0$, thus avoiding ``leaking'' of spectral weight below the bound, which inevitably takes place with unrestricted sampling. While the gap would typically not be known in advance, it can be determined by a statistical goodness-of-fit measure produced by SAC runs at different values of $\omega_0$. Constraints can also be helpful for reproducing more intricate sharp spectral features, especially at the lower edge, which will be our main focus in the later sections of this paper. In some cases, a certain prominent feature may be known, e.g., an edge $\omega_0>0$ at which the spectrum diverges asymptotically as a power law, $(\omega - \omega_0)^{-p}$ when $\omega \to \omega_0$ from above, with a known or unknown exponent $p$. It is then clearly legitimate to incorporate a constraint that implies such an asymptotic form but does not further hard-impose any details, i.e., allowing deviations from the native constrained form away from the asymptotic $\omega \to \omega_0$ limit. We will demonstrate that SAC procedures with such constraints, exemplified in the simplest case in Fig.~\ref{fig:spec}(e), can deliver previously unknown information, e.g., the value of $\omega_0$ and, ideally (if the imaginary-time data are sufficiently good) the exponent $p$ controlling the asymptotic divergence if it is not a priori known. Another example is a quasi-particle peak with some weight and width, for which Fig.~\ref{fig:spec}(d) is the simplest parametrization when the peak is extremely sharp. Provided that the spectrum beyond the edge feature imposed is not too complex, it can also be resolved in its entirety to a much better degree once the edge has been treated correctly. 
In cases where an edge feature is not known with certainty, imposing a specific constraint can be thought of as testing a hypothesis, and the optimization process should then also ideally be able to signal the inapplicability of an incorrect constraint. Though significant work may be required for selecting, testing, and optimizing constraints, the efforts can pay off with results that far exceed in quality what can be expected with other approaches. A main goal of this work is to provide some examples of what can be achieved with suitable constraints and parametrizations of the spectral function within the SAC approach. \subsubsection{Role of configurational entropy} As already mentioned, if one takes the Bayesian point of view, the natural assumption would be to keep $\Theta=1$ in the sampling weight, Eq.~(\ref{psg}) \cite{syljuasen08}. However, if the number of parameters of the spectrum is large, e.g., when using $\delta$-functions on a dense frequency grid, this leads to poor data fitting, i.e., the sampled average goodness of the fit $\langle \chi^2\rangle$ is much larger than would be statistically expected for a good fit \cite{sandvik16}. The root cause of this effect can be found in the analogy with statistical mechanics, where $\chi^2$ would be an unusual form of the energy. Most notably, $\chi^2$ is not extensive in the number of degrees of freedom sampled with, e.g., the $\delta$-functions as in Fig.~\ref{fig:spec}. This fact is most clearly reflected by the minimum value $\chi^2_{\rm min}$, which is positive and cannot change significantly when more $\delta$-functions are added after some threshold number. Note that $\chi^2_{\rm min}=0$ cannot be reached when positive-definiteness of the spectrum is imposed, as discussed in detail in \ref{app:lowtheta}. 
Therefore, unlike conventional statistical mechanics, when more degrees of freedom are added, to make the spectrum more flexible, the entropy will eventually, for any fixed value of $\Theta$, drive the spectrum toward some limit that maximizes the entropy while taking $\langle \chi^2\rangle$ further away from $\chi^2_{\rm min}$. We also note here that the form of the entropy depends on the parametrization used (as we discuss in detail in Sec.~\ref{sec:entropy1} and also when relating SAC to the ME method in Sec.~\ref{sec:maxent}) but it is always extensive in the sampled degrees of freedom. While the ``entropic catastrophe'' can be counteracted by choosing $\Theta < 1$ (adapted to $N_\omega$, as will be discussed in Sec.~\ref{sec:plainsampling}), in SAC methods with restricted sampling \cite{sandvik16,shao17} the entropy associated with distortions of sharp spectral features is also impeded by suitable optimized constraints. Then sampling at $\Theta=1$ can be appropriate, but, if the number of parameters used to sample the spectrum is large, it will still be necessary to also reduce $\Theta$. Some entropic effects on the average spectrum were investigated in Ref.~\cite{sandvik16}. In particular, for a spectrum with known lower and upper bounds, it was shown that spectral weight gradually leaks out further beyond these bounds when the number of $\delta$-functions is increased. This deteriorating effect of configurational entropy can then also be used to identify the correct frequency bounds, by monitoring $\langle \chi^2\rangle$ at fixed $\Theta > 0$ when the constraints are varied. Prohibiting spectral weight from certain frequency regions implies an entropy reduction. For a spectrum with a sharp lower edge, a clear minimum in $\langle \chi^2\rangle$ was found when the lower bound was close to the true location of the edge \cite{sandvik16}. The upper bound is normally less sharp, and the $\langle \chi^2\rangle$ signal is weaker but still useful. 
Further, in cases where the true spectrum has a single peak, it was shown that imposing a single peak in the spectrum at each stage of the sampling can dramatically improve the ability to resolve the shape of the spectrum (and one can expect this to generalize to two or more peaks). This quality boost was also explained by a reduction of configurational entropy when the fluctuations of the sampled spectrum are constrained. The above example of frequency bounds can be regarded as a certain prior probability $P(S)$. With no other restrictions imposed, sampling with one of the three parametrizations in Fig.~\ref{fig:spec}(a)-(c) in the absence of QMC data clearly results in a uniform spectral density. While this flat average spectrum can in some sense be regarded as a default model, as in the ME method, the different entropic pressures of the three parametrizations will still produce different outcomes when the spectra are weighted by $\chi^2$, as discussed in detail in Secs.~\ref{sec:theta} and \ref{sec:entropy}. These differences can also be reproduced with the ME method with a flat default model and different forms of the entropic prior, as we will demonstrate in Sec.~\ref{sec:maxent}. In other cases, it may be less useful to think of constraints as analogues of ME default models. For instance, imposing a single peak at the level of each sampled configuration can have a large impact in SAC, as mentioned above, even if the average spectrum has a single peak also in the absence of such a constraint (and that peak would then be broader than when the sampling constraint is imposed). The imposition of the sampling-level constraint has no direct translation (at least not an obvious one) into a default model for the most probable ME spectrum.
The difference between the SAC and ME approaches in this regard is that the SAC spectrum is an average, and constraints on how the average is taken (what configurations are sampled over) will affect the outcome, while in the ME method a single function is normally optimized and constraints can only be applied to this one function. An exception is the version \cite{boninsegni96,kora18} of the ME method in which the histogram representing the spectrum is not determined by minimizing $F(S)$ but is averaged over samples with probability $\propto {\rm e}^{-F(S)}$. Then a constraint on the individual sampled spectra could also in principle be built in exactly as in the SAC method implemented with $\delta$-functions on a frequency grid \cite{sandvik16}. However, as will be discussed in Sec.~\ref{sec:maxent2}, such a method may suffer from problems related to the extensive sampling entropy and ``double counting'' of entropy; in the continuum limit the average becomes ill-defined and cannot be sampled properly with the above ME probability distribution. In Ref.~\cite{shao17}, together with collaborators we showed how the amplitude $A_0$ of an isolated $\delta$-function at the lower edge of a continuum (i.e., a sharp quasi-particle peak) can be optimized within the SAC parametrization in Fig.~\ref{fig:spec}(d). In this case a default model analogy within the standard ME scheme would also be rather contrived, though an optimized $\delta$-function could certainly also be incorporated in a rather similar way within the ME approach, especially in light of our new results for the relationships between the SAC and ME method, detailed in Sec.~\ref{sec:maxent}. \subsubsection{Different parametrizations} \label{sec:parametr} One issue that has not been addressed in sufficient detail previously is how the averaged spectral function depends on the parametrization.
A commonly used SAC parametrization is a set of $\delta$-functions on a fixed frequency grid $\{\omega_i\}$ \cite{sandvik98,syljuasen08,sandvik16}, Fig.~\ref{fig:spec}(a), where the amplitudes $A_i$ are sampled. A non-uniform grid (not considered here) can also be used, in which case the local density of the grid points can be regarded as a default model \cite{ghanem20a}. With the spectrum defined in the frequency continuum, we consider either sampling of only the locations $\omega_i$ of equal-$A_i$ $\delta$-functions as in Fig.~\ref{fig:spec}(b) \cite{qin17}, or also including updates of the amplitudes as in Fig.~\ref{fig:spec}(c) \cite{beach04}. A related ``grid point sampling'' approach was developed in Ref.~\cite{ghanem20b}. The parametrization with fluctuating frequencies and amplitudes has been the most commonly used in SAC applications, e.g., Refs.~\cite{fuchs10,feldner11,voll15,lohofer15,lohofer17,becker17,becker18,ying19,raczkowski20,zhou21,sato21}, and the fixed-amplitude parametrization has also been used in several works, e.g., Refs.~\cite{shao17,ma18,cheng22,liu22}. As an alternative to all-equal amplitudes in Fig.~\ref{fig:spec}(b), some other predetermined distribution of amplitudes can also be used; the values $A_i$ are then still held fixed but they reside at unrestricted frequencies $\omega_i$ \cite{qin17}. Here we will describe all five parametrizations illustrated in Fig.~\ref{fig:spec}, and further variations of these cases will be introduced in later sections. In this section we focus on qualitative aspects of configurational entropy and constraints, and defer practical details on the Monte Carlo sampling algorithms to Secs.~\ref{sec:plainsampling} and \ref{sec:contedge1}. While we will perform some tests with the fixed-grid parametrization illustrated in Fig.~\ref{fig:spec}(a), in most of our tests and applications we consider $\delta$-functions in the frequency continuum.
The average spectrum is then defined as the mean amplitude density collected in a histogram with bin width suitably chosen to accommodate the details of the spectrum. With a large number $N_\omega$ of $\delta$-functions, the three parametrizations illustrated in Figs.~\ref{fig:spec}(a)--(c) are generic and suitable in principle to describe any kind of spectrum. They are, however, associated with different types of entropic pressures, i.e., for given $\bar G(\tau)$ data and a fixed value of $\Theta$, they will each favor different spectral profiles. Importantly, a result obtained with one parametrization can typically not be reproduced at any value of $\Theta$ with another parametrization. In Sec.~\ref{sec:plainsampling} we will compare results for several examples of spectral functions, following the average spectrum as a function of the sampling temperature $\Theta$. Using a moderately large number ($N_\omega=1000$) of $\delta$-functions at, say, $\Theta=1$, we find that $\langle \chi^2\rangle$ is significantly larger with the fixed-grid parametrization, Fig.~\ref{fig:spec}(a), i.e., the entropic pressure pushing the solution away from the correct one is larger than with $\delta$-functions in the continuum, Figs.~\ref{fig:spec}(b) and \ref{fig:spec}(c). In all cases, when $N_\omega$ is large [and exactly how large depends on the error level of the $\bar G(\tau)$ data], $\Theta$ has to be lowered from $1$ in order for the value of $\langle \chi^2\rangle$ to be statistically sound (and we will later define our criterion for statistical soundness). Unless the error level $\sigma$ is extremely low, the average spectra obtained with different parametrizations are all different. Based on our systematic investigations, we conclude that SAC in the frequency continuum has higher fidelity, and sampling such parametrizations is also much faster than on the fixed grid. The fixed-grid results in general have excessively sharp peaks.
There are also differences between continuous-frequency spectra sampled with or without amplitude updates, and we will show that sampling with amplitude updates often leads to better frequency resolution. However, sampling only frequencies may be safer from the standpoint of the standard information entropic arguments, according to which a spectrum with a larger Shannon entropy would be preferred. Indeed, we will also find (Sec.~\ref{sec:maxent}) that SAC with only frequency updates [Fig.~\ref{fig:spec}(b)] is exactly equivalent to the ME method with the conventional Shannon entropy in the limit $N_\omega \to \infty$, if $\Theta$ and $\alpha$ are chosen such that $\langle \chi^2(\Theta)\rangle$ in SAC equals $\chi^2(\alpha)$ of the ME spectrum. For other parametrizations, the equivalence between the methods requires different functional forms of the entropy in the prior probability used in the ME method. There are many possibilities to build in specific known or expected features of the spectrum, and doing so can lead to substantial reduction or elimination of entropic pressures that distort the averaged spectrum in the absence of such constraints. Imposing a constraint of course means that some piece of information beyond the imaginary-time data is supplied to the solution, but this information can be of a very generic form, e.g., the spectrum should have a sharp lower edge (which the constraints discussed in this paper will mostly be focused on). With a corresponding constraint, it should be possible to extract unknown information on the properties of the edge, e.g., its location and details on the shape (e.g., the width of a quasi-particle peak or the exponent governing a power-law singularity). As we will show, imposing a constraint on the edge can also greatly improve the fidelity of the other parts of the average spectrum.
In other words, without the constraint the broadened edge will be reflected in compensating distortions also at higher frequencies, in order for the spectrum to fit the imaginary-time data. Once the edge is treated correctly, the other parts of the spectrum are also better reproduced, often in surprising detail. \begin{figure} \centering \includegraphics[width=7.5cm, clip]{fig04.pdf} \vskip-1mm \caption{Schematic illustration of the optimization of a parameter $p$ regulating an appropriate constraint, with $p=0$ corresponding to unconstrained sampling using, e.g., the parametrization in Fig.~\ref{fig:spec}(b). When $p$ is increased, the configurational entropy of typical spectra is reduced, thereby allowing $\langle \chi^2\rangle$ to decrease even though the temperature $\Theta$ is kept fixed [at a value for which $\langle \chi^2(p=0)\rangle$ is clearly above the minimum value $\chi^2_{\rm min}$ attainable]. When $p$ exceeds its correct value, the spectrum becomes too constrained and the fit deteriorates rapidly. The two competing effects lead to an optimal value $p_{\rm opt}$ (minimizing $\langle \chi^2\rangle$), which is close to the correct $p$ of the spectrum provided that the constraint is applicable to the spectrum sought.} \label{fig:optim} \vskip-1mm \end{figure} Perhaps the simplest constraint is just an imposed lower frequency bound $\omega_0$ within the parametrizations in Figs.~\ref{fig:spec}(a)--(c), optimized in the generic way illustrated in Fig.~\ref{fig:optim} by running the SAC procedure for a range of values of the parameter $p=\omega_0$. Unless the true spectrum has a sharp finite step at the edge, this constraint is in general not sufficient to reproduce the actual shape of the edge, though the results can still be much better than without any constraints (as we will show with examples).
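The scan over the constraint parameter illustrated in Fig.~\ref{fig:optim} amounts to repeating the SAC run at fixed $\Theta$ for a grid of $p$ values and locating the minimum of $\langle \chi^2\rangle$. A minimal sketch of this outer loop follows; the function `mean_chi2` is a hypothetical mock standing in for a full SAC run, constructed only to mimic the qualitative shape of Fig.~\ref{fig:optim}:

```python
import numpy as np

def mean_chi2(p, p_true=0.5, chi2_min=100.0):
    """Hypothetical stand-in for <chi^2>(p) from SAC runs at fixed Theta,
    mimicking the shape in the schematic figure: a slow decrease as entropy
    is removed for p < p_true and a rapid increase once p exceeds p_true."""
    entropic = 20.0 * np.exp(-3.0 * p / p_true)        # entropy removed as p grows
    overconstrained = 500.0 * max(0.0, p - p_true) ** 2  # fit deteriorates past p_true
    return chi2_min + entropic + overconstrained

# Scan a grid of constraint parameters and locate the minimum of <chi^2>(p).
p_grid = np.linspace(0.0, 1.0, 101)
chi2_vals = np.array([mean_chi2(p) for p in p_grid])
p_opt = float(p_grid[np.argmin(chi2_vals)])
```

With the mock shape chosen here, the located minimum $p_{\rm opt}$ lands slightly above the "true" parameter value, as expected from the competition between the two effects sketched in the figure.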
The first more sophisticated constraint studied here, following Ref.~\cite{shao17}, imposes a macroscopic $\delta$-function at the edge frequency $\omega_0$, with an adjustable (optimized) amplitude $A_0$. The edge is the lower bound for a set of $N_\omega$ microscopic $\delta$-functions with total weight $1-A_0$, i.e., their individual amplitudes are $A_i=(1-A_0)/N_\omega$. This constrained parametrization is illustrated in Fig.~\ref{fig:spec}(d), where all the frequencies $\omega_i$ are sampled, including $\omega_0$ (unlike the simpler edge constraints discussed above, where $\omega_0$ is optimized but held fixed during the sampling process). If the $\delta$-edge ansatz is appropriate, $\omega_0$ will fluctuate relatively weakly close to the correct position once $A_0$ has been optimized in the way illustrated in Fig.~\ref{fig:optim} (with the generic parameter $p=A_0$). A different type of edge-tailored constraint, depicted in Fig.~\ref{fig:spec}(e), is intended to model a sharp edge at $\omega=\omega_1$ followed by a continuum with monotonically decreasing spectral weight. The constraint on the equal-weight $\delta$-functions is a monotonically increasing spacing $d_i \equiv \omega_{i+1}-\omega_i$. If the number $N_\omega$ of $\delta$-functions is sufficiently large, the average density becomes essentially continuous and, as we will show, can well reproduce arbitrary monotonically decreasing spectral weight. The entropic pressures in this parametrization naturally favor a divergent peak, and in certain cases no further optimization has to be carried out. The sampling automatically adapts the spectral weight to the correct frequency window. A non-divergent edge can be obtained by modifications of this parametrization, with a single parameter optimized for the edge shape (including arbitrary divergent or non-divergent power law). 
We will also combine the parametrizations in Figs.~\ref{fig:spec}(b) and \ref{fig:spec}(e) for a generic spectrum consisting of a sharp edge followed by an arbitrary (i.e., not necessarily monotonically decaying) continuum. We will also consider some modifications of these parametrizations and comment on other potentially useful constraints. As test examples, in addition to synthetic spectral functions we will present results for the dynamic structure factor of the $S=1/2$ Heisenberg models in one and two dimensions. In Sec.~\ref{sec:ladders} we present initial results of a study of Heisenberg ladders, which have less well-known dynamic structure factors that are expected to host sharp edge features suited for constrained SAC. Other parametrizations in the continuous frequency space have also been used in SAC and related methods, e.g., a number of boxes whose locations and shapes are sampled \cite{mishchenko00,mishchenko02,mishchenko12,bao16,goulko17}. SAC with an orthogonal-polynomial expansion has also been tested \cite{wu13}. In this work we only consider parametrizations based on $\delta$-functions, but it would be interesting to further explore other parametrizations as well. \subsubsection{Optimization of constraints} An important question is of course how to optimize a constraint. The general principle was laid out in Ref.~\cite{sandvik16} and is illustrated schematically in Fig.~\ref{fig:optim}. It was shown that the removal of entropy by the imposition of an appropriate constraint leads to a reduction of the mean goodness-of-fit $\langle \chi^2\rangle $ if the sampling temperature $\Theta$ has been suitably chosen (in the way further discussed below). Eventually, when the parameter regulating the constraint, e.g., the weight $A_0$ of the leading $\delta$-function in Fig.~\ref{fig:spec}(d), becomes too large, a good fit is no longer possible (i.e., the constraint is too constraining) and then $\langle \chi^2\rangle$ begins to increase.
Thus, there will be a specific value of $A_0$ for which the fit is optimal. This method was used in Ref.~\cite{shao17} to study the magnon pole of the dynamic structure factor of the 2D Heisenberg model, as well as a generalized model in which the magnon pole eventually vanishes due to fractionalization into spinons (see also Ref.~\cite{ma18}). Here we will further characterize the convergence of the $\langle \chi^2\rangle$ minimum to the correct value of the constraining parameter $A_0$ as the data quality increases (the error level $\sigma$ decreases) and the sampling temperature is adjusted. We will also investigate the behavior when the edge peak has finite width, which we model by dividing the peak weight $A_0$ among $N_p > 1$ edge $\delta$-functions. In this case two parameters $A_0$ and $N_p$ must be optimized, which is more time consuming but still feasible with the techniques we have developed. Using the parametrization with monotonically increasing distance between the $\delta$-functions in Fig.~\ref{fig:spec}(e), we will study the dynamic structure factor of the Heisenberg chain. We achieve excellent agreement with Bethe ansatz (BA) results---better than Ref.~\cite{sandvik16} and without the time-consuming work to locate the edges (optimizing the lower and upper frequency bounds) that was necessary there. Both the lower edge and the high-frequency bound are automatically found by the sampling, thanks to the very minor tendency to artificial broadening with the new constrained parametrization. We will show how the entropic pressures within the space of increasing distances between the $\delta$-functions instead naturally favor a divergent peak, which has a specific shape in the absence of data but is morphed into a shape close to the true spectral profile when sampled according to $\chi^2$ with good data. If the actual peak is not divergent, an optimized constraint on the distance $\omega_2-\omega_1$ between the first two $\delta$-functions can be used. 
As we will show with a synthetic-data example, when the edge feature has been properly treated, the other parts of the spectrum are also reproduced with remarkable fidelity. We will also discuss a modification of the parametrization in Fig.~\ref{fig:spec}(e), in which the amplitudes $A_i$ depend on the index $i$ of the increasing frequencies $\omega_i$ in a way that can be directly related to the exponent controlling an asymptotic power-law behavior $(\omega-\omega_0)^p$ with positive or negative $p$. The exponent can be extracted surprisingly precisely when $p<0$ (using the generic approach in Fig.~\ref{fig:optim}) in tests with synthetic data of reasonable statistical quality, while the case $p>0$ is more challenging (for reasons that we also explain). Finally, we will use synthetic data to study spectral functions with edge divergencies followed by arbitrary continua, using combinations of the parametrizations depicted in Figs.~\ref{fig:spec}(b) and \ref{fig:spec}(e). The relative weight of the two groups of $\delta$-functions then has to be optimized, along with the exponent $p$ if need be. We stress here that our method for optimizing constraints [Fig.~\ref{fig:optim}] is purely based on $\chi^2$ minimization at fixed $\Theta$, motivated by analogies with statistical mechanics and considering the obvious effects of over-constraining. In principle, Bayesian inference could also be used, in a way similar to the treatment of the sampling temperature in Ref.~\cite{fuchs10}. However, then a prior probability for the optimized parameter ($p$ in Fig.~\ref{fig:optim}) also has to be postulated, and the procedures become much more complicated computationally. We will show numerous examples of success with the much simpler maximum-likelihood approach. Naturally, an applied constraint must be suitable for the spectrum sought, and a clearly unsuitable constraint will be reflected in the inability to reach an acceptable goodness-of-fit.
For a satisfactory result, the constrained spectrum must not have additional strong entropic pressures that push the sampled configurations away from the correct solution. To the extent additional features of the spectrum are known (or suspected), further constraints can be applied in principle to improve the results. In the examples to be reported here, we will optimize up to two different parameters regulating constraints. \section{SAC with unrestricted sampling} \label{sec:plainsampling} Here we discuss simple Monte Carlo algorithms for sampling the spectral function with the fixed-grid and continuous frequency parametrizations illustrated in Figs.~\ref{fig:spec}(a), \ref{fig:spec}(b), and \ref{fig:spec}(c). The fixed-grid case is considered in Sec.~\ref{sec:gridsamp}, and in Sec.~\ref{sec:contsamp} we discuss continuous frequencies without and including amplitude updates. We focus on the principles of unrestricted sampling and defer results and incorporation of constraints to the later sections. \subsection{Fixed-grid spectrum} \label{sec:gridsamp} At first sight, a fixed frequency grid, with a number $N_\omega$ of $\delta$-functions at frequencies $\omega_i=i\Delta_\omega$, $i=1,\ldots,N_\omega$, and variable amplitudes $A_i$ corresponding to Eq.~(\ref{barelation}), is perhaps the most natural parametrization of the spectrum. It was indeed used in many of the previous works on the SAC method \cite{white91,sandvik98,vafay07,fuchs10,syljuasen08,sandvik16}. Non-uniform frequency grids have also been considered \cite{ghanem20a,ghanem20b}. While importance sampling of the amplitudes may seem like a trivial task formally, in practice simple moves of weight among the amplitudes $A_i$ can evolve the spectrum only slowly, and collecting smooth averages is time consuming.
When the quality of the imaginary-time data is good, say, with the error level $\sigma$ (defined in Sec.~\ref{sec:qmcdata}) of order $10^{-6} \sim 10^{-5}$ or less, only very small changes will be accepted at a good rate. In Ref.~\cite{sandvik98} multi-amplitude moment conserving updates were developed that help considerably in this regard, but still a smooth spectrum may require sampling from several minutes to many hours, depending on the number of frequencies used, the sampling temperature $\Theta$, and the error level of the imaginary-time data (with better data implying less efficient sampling, thus requiring longer runs). The nature of the spectrum sought will implicitly affect the sampling efficiency as well. Since a fixed normalization is used here with the kernel in Eq.~(\ref{kernel2}), the sum of the amplitudes is conserved, $\sum_i A_i=1$, and the simplest Monte Carlo move corresponds to a random re-distribution of the sum $A_i+A_j$ of the amplitudes at two chosen grid points $i$ and $j$. The move is accepted or rejected using the standard Metropolis algorithm with the weight function in Eq.~(\ref{psg}). The change in the weight, which is needed for the acceptance probability, involves the changes in $G(\tau_k)$ at all time points $\tau_k$, and the new value of $\chi^2$ is computed according to Eq.~(\ref{chi2eps}) in the basis where the covariance matrix is diagonal. The change in $G(\tau_k)$ only depends on the affected amplitudes $A_i$ and $A_j$ and can easily be obtained from Eq.~(\ref{eir}) with the kernel pre-computed. In updates involving a set of $n$ amplitudes $\{A_i\}$, $n-1$ moments of the spectrum can be conserved. Unless some moments are exactly known (e.g., because of some sum rule) or approximately known to much higher precision than what is implicit in the imaginary-time data, the purpose here is not for the sampling to actually conserve any moments (beyond the fixed normalization), but to speed up the sampling. 
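A minimal sketch of such a two-amplitude move, with the incremental $O(N_\tau)$ update of $G(\tau_k)$ and of $\chi^2$ in the diagonal basis, is given below. All arrays are random stand-ins for the transformed kernel and data, and the sampling weight is assumed to take the common SAC form $P \propto {\rm e}^{-\chi^2/2\Theta}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N_tau, N_w = 20, 50
K = rng.random((N_tau, N_w))   # kernel columns, already transformed to the eigenbasis
Gbar = rng.random(N_tau)       # transformed QMC data points (stand-in values)
eps = np.full(N_tau, 0.01)     # statistical errors in the eigenbasis
A = np.full(N_w, 1.0 / N_w)    # amplitudes with fixed normalization, sum_i A_i = 1
G = K @ A                      # G(tau_k) of the current sampled spectrum

def chi2(G):
    # Single sum over transformed time points (diagonal-covariance chi^2)
    return float(np.sum(((G - Gbar) / eps) ** 2))

Theta = 1.0
i, j, dA = 3, 17, 0.004        # attempted move A_i -> A_i + dA, A_j -> A_j - dA
if A[i] + dA >= 0.0 and A[j] - dA >= 0.0:      # reject if positivity is violated
    G_new = G + dA * (K[:, i] - K[:, j])       # O(N_tau) incremental update of G
    dchi2 = chi2(G_new) - chi2(G)
    if dchi2 <= 0.0 or rng.random() < np.exp(-dchi2 / (2.0 * Theta)):
        A[i] += dA                              # Metropolis acceptance
        A[j] -= dA
        G = G_new
```

The essential point is that the proposed $G(\tau_k)$ is obtained from the two affected kernel columns only, so the cost per attempt is linear in $N_\tau$, consistent with the scaling stated above.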
The idea \cite{sandvik98} is that an update conserving several moments may reduce the changes in $G(\tau)$ while still involving significant spectral weight re-distribution, thus improving the acceptance rate. It is in principle easy to compute the line in an $n$-dimensional hypercube corresponding to conserving $n-1$ moments, but ensuring positive definiteness complicates the expressions. In the tests here, we only carried out simultaneous updates of two or three amplitudes, conserving only the normalization in the first case and also the first frequency moment in the second case. In moves of weight between two amplitudes, we use two ways of selecting the grid points $i$ and $j$: (a) with $i$ chosen at random among $i \in \{1,\ldots,N_\omega-1\}$ and $j=i+1$ or (b) with both $i$ and $j \not=i$ chosen at random among all possibilities. We redistribute the total weight $A_i+A_j$ within a window of the allowed values, i.e., the updated amplitudes are \begin{equation} A'_i=A_i+\delta_A,~~~A'_j=A_j-\delta_A, \label{aprimedelta} \end{equation} with $\delta_A$ uniformly generated with zero mean within a window adjusted so that the acceptance rate is close to $0.5$. Moves violating the positivity constraint are rejected, while those satisfying the constraint are accepted using the standard Metropolis probability with the weight function in Eq.~(\ref{psg}). In principle, the window for the attempted shifts $\delta_A$ can depend on the frequencies, but we here use the same window for all $i,j$. It is useful to also attempt some updates with the full allowed range $\delta_A \in [-A_i,A_j]$ in Eq.~(\ref{aprimedelta}), even when the acceptance rate is low. When simultaneously updating three amplitudes $A_i$, $A_j$, $A_k$, the indices $i,j,k$ are here assumed to be ordered so that $\omega_i < \omega_j < \omega_k$ (which is necessary for some of the formulas below to be correct).
We again choose either consecutive grid points or randomly select them from the set of $N_\omega$ points. The conserved quantities are \begin{subequations} \begin{eqnarray} m_0 & = & A_i+A_j+A_k \\ m_1 & = & A_i\omega_i+A_j\omega_j+A_k\omega_k, \end{eqnarray} \end{subequations} and the procedure for uniformly sampling among all possible new amplitudes $A'_i$, $A'_j$, $A_k'$ is as follows: First, for positive definiteness of all three updated amplitudes, $A_i'$ is allowed only within the window $[a_i,b_i]$, where \begin{subequations} \begin{eqnarray} a_i & = & {\rm max}\left [0,\frac{m_0\omega_j-m_1}{\omega_j-\omega_i}\right ],\\ b_i & = & {\rm min}\left [m_0,\frac{m_0\omega_k-m_1}{\omega_k-\omega_i}\right ]. \end{eqnarray} \end{subequations} The new amplitudes are then constructed in order $i,j,k$ according to \begin{subequations} \begin{eqnarray} A'_i & \in & [a_i,b_i],\\ A'_j & = & [m_0\omega_k-m_1-A'_i(\omega_k-\omega_i)]/(\omega_k-\omega_j),~~\\ A'_k & = & m_0 - A'_i - A'_j. \end{eqnarray} \label{amp3updates} \end{subequations} The choice of $A_i'$ in the allowed window $[a_i,b_i]$ is made at random and the expressions for $A_j'$ and $A'_k$ ensure conservation of $m_0$ and $m_1$. If the acceptance rate is low, it can be improved by choosing $A'_i$ in a smaller window centered at the present value $A_i$, instead of selecting the new value from the full allowed range. As in the $n=2$ case, moves within a fixed symmetric window around $A_i$ may in some cases generate negative values, which are rejected immediately. The initial convergence of the spectrum, in particular, benefits from the full-range moves, which can be accepted at a good rate before the sampling has fully equilibrated. The above $n=2$ and $n=3$ amplitude updates are sufficient for the simple tests presented here. The formal scaling of the computational effort of one updating sweep, each with $\propto N_\omega$ attempts with $n=2$ and $n=3$, is linear in both $N_\tau$ and $N_\omega$.
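The three-amplitude construction can be sketched as follows; the routine draws $A'_i$ uniformly in the full allowed window $[a_i,b_i]$ and builds $A'_j$ and $A'_k$ from Eqs.~(\ref{amp3updates}) (the Metropolis acceptance step, identical to the two-amplitude case, is omitted; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def amp3_update(A, w, i, j, k):
    """Propose new amplitudes for three delta-functions at ordered frequencies
    w[i] < w[j] < w[k], conserving their total weight m0 and partial first
    moment m1, with all new amplitudes non-negative."""
    m0 = A[i] + A[j] + A[k]
    m1 = A[i] * w[i] + A[j] * w[j] + A[k] * w[k]
    a = max(0.0, (m0 * w[j] - m1) / (w[j] - w[i]))   # lower bound on A'_i
    b = min(m0, (m0 * w[k] - m1) / (w[k] - w[i]))    # upper bound on A'_i
    Ai = rng.uniform(a, b)                           # uniform in the allowed window
    Aj = (m0 * w[k] - m1 - Ai * (w[k] - w[i])) / (w[k] - w[j])
    Ak = m0 - Ai - Aj                                # enforces m0 conservation
    return Ai, Aj, Ak

w = np.array([0.5, 1.2, 2.0])    # ordered frequencies (illustrative)
A = np.array([0.2, 0.5, 0.3])    # current amplitudes (illustrative)
Ai, Aj, Ak = amp3_update(A, w, 0, 1, 2)
```

By construction, any draw within $[a_i,b_i]$ yields non-negative $A'_j$ and $A'_k$ with both moments exactly conserved, so only the $\chi^2$-based acceptance decision remains.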
The overall sampling efficiency also depends in a less quantifiable way on the frequency spacing $\Delta_\omega$ (thus on $N_\omega$), as larger relative weight can be transferred between the $\delta$-functions at a good acceptance rate if $\Delta_\omega$ is smaller, but the individual amplitude fluctuations are then also larger. The smoothness of the average spectrum can be improved post-sampling by averaging the mean amplitudes over several of the native grid points of the sampled spectrum. In Ref.~\cite{ghanem20a} an improved sampling algorithm for grid-based spectra was constructed, with a singular-value decomposition used to transform the kernel to the principal axes of $\chi^2$. The different frequency modes still cannot be sampled independently, as they are coupled through the condition of positive definiteness. A scheme of sampling in different ``blocked'' bases was developed and appears to be much more efficient than the above simple updates. In our further developments and practical applications of SAC, we do not use the fixed grid but prefer to sample $\delta$-functions in continuous frequency space, for the entropic reasons that were briefly mentioned in Sec.~\ref{sec:parametr} and which will become apparent in our examples. The Monte Carlo sampling in the continuous representations, Figs.~\ref{fig:spec}(b) and \ref{fig:spec}(c), is also simple and in practice much more efficient than sampling amplitudes on the fixed grid (without the basis transformation used in Ref.~\cite{ghanem20a}, which we have not implemented). In Sec.~\ref{sec:maxent} we will use the fixed-grid updates described here for stochastic optimization of spectra with the ME method, where the inefficiencies are less severe. \subsection{Continuous frequency space} \label{sec:contsamp} With the continuous frequency space, the spectrum is the amplitude-weighted mean density of a number $N_\omega$ of $\delta$-functions whose frequencies are sampled.
The amplitudes $A_i$ can be kept fixed, as in Fig.~\ref{fig:spec}(b), or importance sampled along with the frequencies as in Fig.~\ref{fig:spec}(c). The $\omega$ space can be approximated with a very fine ``micro grid'' of many (e.g., millions of) equally spaced frequencies $\omega_j$ up to a cut-off. The much smaller number $N_\omega$ of actual $\delta$-functions (typically thousands, in some cases more than $10^5$) occupies only a small fraction of the grid. The kernel can be precomputed on the micro grid points, and the integer-valued grid indices $j$ can be used in the sampling procedure and easily converted to the actual frequencies $\omega_j$ when needed (which is only at the stage of final output of the results). Alternatively, floating-point numbers can be used for the frequencies, but then the kernel, which involves time-consuming exponential functions, cannot be precomputed exactly. More importantly, the transformation in Eq.~(\ref{basistransf}) to the eigenbasis of the covariance matrix would have to be performed on the fly with the current frequencies $\{\omega_i\}$, or else $\chi^2$ has to be computed in the original basis using the double-sum over $N_\tau$ in Eq.~(\ref{chi2}), instead of performing the single sum, Eq.~(\ref{chi2eps}), in the transformed basis. In either case, the computational effort will then scale as $N_\tau^2$ instead of $N_\tau$ with the discrete method. To achieve the best of both worlds, realizing the frequency continuum in practice, we still use a grid of points on which the kernel and its frequency derivative are precomputed. The kernel can then be evaluated rapidly for the continuous (double-precision floating-point) frequencies $\omega_i$ by interpolation using the two closest grid points, corrected with the aid of the stored derivatives. With a fine grid, a practically exact kernel can be evaluated faster in this manner than with its mathematically exact expression.
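As an illustration of this interpolation strategy, consider the following sketch with a toy kernel $K(\tau,\omega)={\rm e}^{-\tau\omega}$ standing in for the actual kernel of Eq.~(\ref{kernel2}); for simplicity a single-point Taylor correction is used here, whereas a two-point scheme as described above is slightly more accurate:

```python
import numpy as np

tau = 0.75                            # one imaginary-time point (illustrative value)
dw = 1e-4                             # spacing of the precomputed kernel grid
w_grid = np.arange(0.0, 10.0 + dw, dw)
K_grid = np.exp(-tau * w_grid)        # precomputed kernel values on the grid
dK_grid = -tau * K_grid               # precomputed frequency derivatives

def kernel(w):
    """Evaluate K(tau, w) at a continuous frequency w by a first-order Taylor
    correction from the nearest lower grid point, using the stored derivative."""
    j = int(w / dw)
    return K_grid[j] + (w - w_grid[j]) * dK_grid[j]

w = 3.141592                          # an arbitrary continuous frequency
approx = kernel(w)
exact = np.exp(-tau * w)
```

With this grid spacing the interpolation error is of order $dw^2$, i.e., far below typical QMC error levels, while only a table lookup and one multiply-add are needed per evaluation.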
Both the kernel and its derivatives are transformed to the eigenbasis of the covariance matrix, and the $O(N_\tau)$ scaling is maintained. \subsubsection{Sampling frequencies} In the simplest case, all the $\delta$-functions have the same amplitude, $A_i=1/N_\omega$. At a given instant, the frequencies of the $\delta$-functions are $\omega_i$, $i \in \{1,\ldots,N_\omega\}$, and these frequencies are integer multiples of a micro grid spacing $\delta_\omega$ (typically we use $10^{-5}$ or $10^{-6}$). In most cases, we have not observed any differences between the discrete and full floating-point representations, which is natural if the spectrum is featureless on the micro grid scale $\delta_\omega$. When the spectrum has fine structure in the form of a very sharp edge and we use the monotonicity constraint in Fig.~\ref{fig:spec}(e) (sampling of which is discussed in Sec.~\ref{sec:hbergedge}), the discrete representation can introduce anomalies, however, and it is then better to use floating-point frequencies. The results in this section and Secs.~\ref{sec:deltapeak} and \ref{sec:contedge1} were obtained with $\delta_\omega=10^{-5}$, which represents the continuum sufficiently well according to our tests. The double-precision floating-point representation was used in Sec.~\ref{sec:contedge2}. In a single-frequency update, an index $i \in \{1,\ldots,N_\omega\}$ is selected at random and the corresponding frequency is changed, $\omega_i \to \omega_i + R$, by a random amount $R \in [-\epsilon,\epsilon]$ with $\epsilon$ adjusted to give an acceptance rate of approximately $0.5$ (in the discrete representation with the index itself, $i \to i + I$ with an integer $I \not=0$ similarly drawn within a suitably adjusted window---in the following we use only the frequency representation for clarity).
The calculation of the weight ratio for the Metropolis acceptance probability again requires re-computation of the contributions to all $G(\tau_k)$ from the moved frequency $\omega_i$, demanding an $O(N_\tau)$ computational effort for a single update and $O(N_\tau N_\omega)$ for an entire sweep of $\propto N_\omega$ updates. With two frequencies updated simultaneously, indices $i \not= j$ are randomly selected and the corresponding frequencies are changed as \begin{equation} \omega'_i = \omega_i + R,~~~ \omega'_j = \omega_j - R, \label{omegaupdate} \end{equation} with a random displacement $R$ with zero mean in a window adjusted for good acceptance rate. This update of course conserves the first frequency moment since we here consider equal amplitudes. In most of the cases we have considered here, updating one and two frequencies suffices for effective sampling when $N_\omega$ is large. In some cases, there are signs of poor ergodicity; specifically when sampling narrow quasi-particle peaks with the method presented in Sec.~\ref{sec:deltanp}. Simultaneous updates of three frequencies resolve this issue and can speed up the sampling also in other cases. To conserve the first moment when three frequencies are changed, we generalize the two-frequency update Eq.~(\ref{omegaupdate}) to \begin{equation} \omega'_i = \omega_i + 2R,~~~ \omega'_j = \omega_j - R,~~~ \omega'_k = \omega_k - R, \label{omegaupdate2} \end{equation} and then take $R$ such that the second moment is also conserved; \begin{equation} R = \frac{1}{3}(\omega_k + \omega_j - 2\omega_i). \label{romegaupdate2} \end{equation} Here it should be noted that $R$ may not correspond exactly to an integer number of the units $\delta_\omega$ of the frequency grid, but, the spacing being very small, the second moment is still conserved to a high degree.
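A sketch of the equal-amplitude frequency moves of Eqs.~(\ref{omegaupdate})--(\ref{romegaupdate2}), here with floating-point frequencies so that both moments are conserved to machine precision (the Metropolis acceptance step based on the updated $G(\tau)$ is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)

def two_freq_move(w, i, j, eps):
    """w_i -> w_i + R, w_j -> w_j - R with random R in [-eps, eps];
    conserves the first moment for equal amplitudes."""
    R = rng.uniform(-eps, eps)
    w_new = w.copy()
    w_new[i] += R
    w_new[j] -= R
    return w_new

def three_freq_move(w, i, j, k):
    """w_i -> w_i + 2R, w_j -> w_j - R, w_k -> w_k - R, with R fixed so
    that the second moment is conserved as well."""
    R = (w[k] + w[j] - 2.0 * w[i]) / 3.0
    w_new = w.copy()
    w_new[i] += 2.0 * R
    w_new[j] -= R
    w_new[k] -= R
    return w_new

w = np.array([0.3, 1.1, 2.7, 4.0])   # frequencies of equal-amplitude delta-functions
w2 = two_freq_move(w, 0, 3, eps=0.05)
w3 = three_freq_move(w, 0, 1, 2)
```

Note that in the three-frequency move $R$ is not drawn randomly but fixed by the chosen triplet, so the move size is controlled implicitly by which indices are selected.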
As long as the rounding to the nearest higher or lower grid point is applied consistently, so that an update with a rounded $R$ has a reverse update with exactly $-R$, detailed balance is not affected. Of course Eq.~(\ref{omegaupdate2}) is not the only way to conserve the first moment, but it is a simple and efficient way in combination with Eq.~(\ref{romegaupdate2}). A Monte Carlo updating sweep consists of a sequence of $N_\omega$ single-frequency moves followed by $N_\omega/2$ two-frequency updates and, optionally, $N_\omega/3$ three-frequency moves. After each sweep, the amplitudes are added to the histogram in which the spectral density is accumulated. During the sampling stage, in the discrete representation the working histogram is one with exactly the same frequency spacing $\delta_\omega$ as in the sampling space. At the output stage, the small bins are combined into much larger bins, of size $\Delta_\omega$ small enough so that all features of the spectrum can be well represented while large enough for sampling noise to be averaged out sufficiently. In the tests reported here we used $\Delta_\omega=0.01$ or $\Delta_\omega=0.005$ in most cases. In a modification of the fixed-amplitude scheme, the amplitudes $A_i$ are initialized with some dispersion, e.g., $A_i = c + di$ with $c$ and $d$ constants such that the normalization $\sum_i A_i = 1$ is obeyed. These amplitudes are kept constant during the sampling process and, because there are no constraints on the locations of the $\delta$-functions, smaller amplitudes can migrate to areas with lower spectral weight. The two-frequency move of the form Eq.~(\ref{omegaupdate}) does not conserve the first moment if the two amplitudes are different, but we can still use it (optionally in combination with amplitude swaps).
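The accumulation of the average spectrum in fine bins of width $\delta_\omega$, followed by the combination into output bins of width $\Delta_\omega$, can be sketched as follows (the sampled frequencies are replaced by random stand-ins, and all parameter values are merely illustrative, coarser than those quoted above):

```python
import numpy as np

delta_w = 1e-3        # micro-bin width used during sampling (illustrative)
Delta_w = 1e-2        # output bin width: here 10 micro bins per output bin
w_max = 5.0
n_micro = int(round(w_max / delta_w))
N_w, n_sweeps = 200, 100

rng = np.random.default_rng(3)
hist = np.zeros(n_micro)                  # accumulated amplitude density
for sweep in range(n_sweeps):
    # Stand-in for one sweep of frequency updates: random delta-function positions.
    freqs = rng.uniform(0.0, w_max, N_w)
    idx = (freqs / delta_w).astype(int)
    np.add.at(hist, idx, 1.0 / N_w)       # add the equal amplitudes A_i = 1/N_w

hist /= n_sweeps                          # average over sweeps
ratio = int(round(Delta_w / delta_w))
coarse = hist.reshape(-1, ratio).sum(axis=1)  # combine micro bins into output bins
```

The normalization $\sum_i A_i = 1$ is preserved exactly by the rebinning, since the coarse bins are simple sums of the micro bins.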
As long as $N_\omega$ is sufficiently large and the dispersion of $A_i$ is not too large, we find no significant differences in the final averaged spectrum between the methods of uniform or varying (but not updated) amplitudes. A more significant dispersion leads to increased fluctuations and changes in the entropic pressures of the method and slightly different averaged spectra. The sampling efficiency can be better in a parametrization with varying amplitudes if swaps of two amplitudes are included, $A'_i = A_j$, $A'_j = A_i$, with $i$ and $j$ chosen randomly among all the $\delta$-functions. When sampling spectra with two or more peaks with little weight between them, this update leads to considerable improvements in the efficiency of transferring weight between the peaks (especially when the spectrum is not yet equilibrated), as was found in Ref.~\cite{qin17}. Here we will use equal amplitudes when only the frequencies are sampled. \subsubsection{Sampling amplitudes and frequencies} If the amplitudes are also sampled, i.e., with the parametrization in Fig.~\ref{fig:spec}(c) (discussed in the previous literature, e.g., in Refs.~\cite{beach04,fuchs10,ghanem20b}), we supplement the frequency moves with the updates described in Sec.~\ref{sec:gridsamp} for the case of amplitudes on the fixed grid. For updating two amplitudes, with the corresponding frequencies unchanged, it is clear that Eq.~(\ref{aprimedelta}) works. The algorithm for changing three amplitudes is also the same as before, as long as the frequencies $\omega_i,\omega_j,\omega_k$ are labeled in increasing order in Eq.~(\ref{amp3updates}). With the individual amplitudes not conserved, we can also modify the two-frequency update to conserve the first frequency moment.
After the proposed move of the frequencies in Eq.~(\ref{omegaupdate}), we change the amplitudes with the normalization conserved as in Eq.~(\ref{aprimedelta}), but now with the unique $\delta_A$ that also conserves the first moment; \begin{equation} \delta_A = \frac{R(A_j-A_i)}{2R + \omega_i - \omega_j}. \label{daconserving} \end{equation} If this $\delta_A$ leads to negative $A'_i$ or $A'_j$ the move is rejected, otherwise it is accepted with the Metropolis probability. We mix these moves with updates of only the frequencies as in Eq.~(\ref{omegaupdate}), with the windows for selecting $R$ adapted individually in the two cases for good acceptance rates. In multi-frequency updates with $n>2$ $\delta$-functions, which we have constructed explicitly only for $n=3$ in Eqs.~(\ref{omegaupdate2}) and (\ref{romegaupdate2}), the corresponding amplitudes can also be updated along with the frequencies to ensure conservation of moments. Then $n$ new frequencies are first generated according to Eq.~(\ref{omegaupdate2}) with $R$ in Eq.~(\ref{romegaupdate2}) (or generalized formulas for larger $n$), after which conservation of $n-1$ frequency moments is ensured by changing the corresponding amplitudes according to a more complicated generalization of Eq.~(\ref{daconserving}). In practice, the $n\le 3$ updates we have presented above allow rather effective sampling already and we do not go to higher $n$. In general, when the amplitudes are also sampled the number $N_\omega$ of $\delta$-functions can be smaller than when only frequency moves are performed, as some amplitudes can always become very small and reach far into thin spectral tails. However, the sampling efficiency appears to be generally worse, likely because some amplitudes occasionally become large and more difficult to move. Not performing amplitude updates also of course saves time.
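A sketch of this combined frequency-and-amplitude move follows; rather than inserting a closed-form expression for $\delta_A$, the code solves the first-moment conservation condition directly, which makes the conservation property explicit (the frequencies and amplitudes are arbitrary illustrative values):

```python
import numpy as np

def combined_move(w, A, i, j, R):
    """Shift w_i -> w_i + R, w_j -> w_j - R and redistribute amplitude,
    A_i -> A_i + dA, A_j -> A_j - dA, with dA the unique value that leaves
    the first frequency moment A_i*w_i + A_j*w_j unchanged. Returns None
    if the move must be rejected (positivity violated or degenerate)."""
    wi, wj = w[i] + R, w[j] - R
    if wi == wj:
        return None
    # Solve (A_i + dA)*wi + (A_j - dA)*wj = A_i*w_i + A_j*w_j for dA:
    dA = (A[i] * (w[i] - wi) + A[j] * (w[j] - wj)) / (wi - wj)
    Ai, Aj = A[i] + dA, A[j] - dA
    if Ai < 0.0 or Aj < 0.0:
        return None
    return wi, wj, Ai, Aj

w = np.array([1.0, 2.5])
A = np.array([0.4, 0.6])
res = combined_move(w, A, 0, 1, R=0.1)
```

Since the normalization $A_i + A_j$ is untouched by construction, the accepted configuration conserves both the total weight and the first frequency moment.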
The amplitude updates are helpful when sampling at very low $\Theta$, to obtain a good approximation for the best goodness of fit $\chi^2_{\rm min}$ (which we need for setting the optimal sampling temperature, as discussed below in Sec.~\ref{sec:theta}). Sufficiently good values can also be easily obtained with only frequency updates if $N_\omega$ is large enough. Whether or not to include amplitude moves is not in the end only a matter of sampling efficiency, since the entropy is also affected. We will demonstrate this fundamental difference between the two parametrizations both in practice, in the test cases presented below in Sec.~\ref{sec:theta}, and formally when comparing the functional forms of the entropies in Sec.~\ref{sec:entropy}. We will also argue that better results (in particular, better resolved peak widths) are in general obtained when amplitude moves are included. If the averaged spectrum sampled in the absence of QMC data is regarded as a default model, then the continuum representations considered here correspond to an infinitely stretched out spectrum when the $\delta$-functions can migrate without bounds (at least in principle). In contrast, the fixed-grid spectrum always has an upper bound, and then the default average spectrum is flat with the amplitude equal to the inverse of the frequency range. In practice, the formal difference in the frequency bound on the grid and in the continuum should not be very important; once the sampling is restricted by the imaginary-time data, $\chi^2$ imposes a similar de facto upper bound in both cases---under the assumption that $\Theta$ is correctly chosen (as explained in the next section) in each case. It is worth noting that a frequency cut-off has to be imposed in the computer program only for the purpose of precomputing the kernel and for storing the histogram. If the cut-off is high enough, it does not imply any restriction on the migration of the $\delta$-functions. 
The high-frequency tail is in practice dictated by the QMC data, as a $\delta$-function moving up very high in frequency will ruin $\chi^2$ and not be accepted. One might then worry that increasing $N_\omega$ (i.e., reducing the individual amplitudes) would affect the tail, as the $\chi^2$ cost of migrating high up diminishes, especially in the fixed-amplitude case where the unit of spectral weight is $1/N_\omega$. However, in practice we have not noticed any issues with the tail of the spectrum even for very large $N_\omega$ (more than $10^5$ in some tests) when the sampling temperature $\Theta$ is fixed according to the scheme discussed below in Sec.~\ref{sec:theta}. There is also no fundamental reason to expect any problems when $N_\omega \to \infty$, because of the exact mapping of the SAC to the ME method in this limit (Sec.~\ref{sec:maxent}). We will show examples of the effect of varying $N_\omega$ in Sec.~\ref{sec:nomega}.

\section{Optimal sampling temperature}
\label{sec:theta}

An important aspect of SAC is how to select the sampling temperature $\Theta$. The general situation \cite{sandvik98,beach04} prevailing with unrestricted sampling is that the spectrum at low $\Theta$ freezes into a stable or metastable $\chi^2$ minimum. In the $\Theta \to 0$ limit the spectrum consists of only a few sharp peaks, for reasons that will be mentioned below and further elucidated in \ref{app:lowtheta}. High $\Theta$ values lead to smooth featureless spectra with large $\langle \chi^2\rangle$. There is a range of $\Theta$ over which $\langle \chi^2\rangle$ is small, close to its minimum value $\chi^2_{\rm min}$, but the fluctuations are still significant and smoothen the average spectrum. It is not possible to reach $\chi^2=0$, even in principle, when positive definiteness is imposed; again see \ref{app:lowtheta}, and also \ref{app:low2}, where we include a small fraction of negative spectral weight.
There is still no consensus on exactly how $\Theta$ should best be chosen, but overall the different schemes proposed in the literature produce rather similar results in most cases. The $\Theta$-fixing issue is similar to the various ways in which the entropic weighting parameter $\alpha$ of the spectrum can be chosen in the ME method \cite{jarrell96,bergeron16}. Some criteria proposed for optimal $\Theta$ were inspired by the physics analogy of a phase transition between ``data fitting'' and ``noise fitting'' \cite{sandvik98,beach04}. However, there is no rigorous motivation for such physics-inspired criteria, and statistically acceptable $\langle \chi^2\rangle$ values cannot be guaranteed. In this section we discuss our alternative optimal-$\Theta$ criterion, show examples of its application, and also discuss related issues that further motivate and support the criterion. We begin with a brief outline of the subsections to follow. The criterion we advocate for the optimal value of $\Theta$ involves a balance between entropy and goodness-of-fit, formulated in a simple way motivated by the properties of the $\chi^2$ distribution, namely, $\Theta$ is adjusted so that
\begin{equation}
\langle \chi^2(\Theta) \rangle \approx \chi^2_{\rm min} + a\sqrt{2\chi^2_{\rm min}},
\label{eq:chi2}
\end{equation}
where $\chi^2_{\rm min}$ is the minimum value of $\chi^2$, which can be reached in a simulated annealing \cite{kirkpatrick83} process to very low $\Theta$, and $a$ is of order $1$. This criterion was first applied in Ref.~\cite{qin17} (though it was stated in a slightly different way) and will be further motivated here in Sec.~\ref{sec:thetacrit}. In the later subsections we present various results illustrating the applicability of the criterion when used with the different parametrizations of the spectrum.
We stress already here that finding $\chi^2_{\rm min}$ to sufficient precision is not difficult with the sampling methods we have developed in continuous frequency space [Sec.~\ref{sec:contsamp}]; in typical cases annealing runs of a few minutes suffice, and we have not encountered any severe problems with getting stuck in local minima. In principle, conventional grid-based non-negative least squares fitting methods could also be used for this purpose \cite{bergeron16,Koch18,Ghanemthesis}. Given that simulated annealing works well and uses the same programming elements as the subsequent main sampling runs, we do not incorporate any other optimization methods here. Before providing a detailed motivation for Eq.~(\ref{eq:chi2}), we stress that it is a criterion based purely on the goodness of the fit and the properties of the $\chi^2$ distribution. It is also further justified by the existence of an effective number $N_{\rm para}$ of parameters that the positive definite spectrum can provide to fit the noisy $\bar G(\tau)$ data. We define $N_{\rm para}$ below and demonstrate its origin in more detail in \ref{app:lowtheta}. Bayesian inference arguments can also be used \cite{fuchs10}, but then a postulated prior $\Theta$-distribution is also required. We here stay away from such ambiguous input, instead favoring more concrete statistical arguments. Many of the SAC results in this section will be based on SSE-QMC data for the antiferromagnetic Heisenberg spin-$1/2$ chain, defined by the Hamiltonian
\begin{equation}
H = J \sum_{i=1}^L {\bf S}_i \cdot {\bf S}_{i+1},
\label{heisenberg}
\end{equation}
with periodic boundary conditions and $J=1$. In the first test, in Sec.~\ref{sec:example1}, we study the same $16$-spin system that was used in Ref.~\cite{sandvik98}, where SAC results for the dynamic structure factor were compared with exact diagonalization results at rather high temperatures.
For the second example, in Sec.~\ref{sec:example2} we consider a Heisenberg chain with $L=500$ spins at inverse temperature $\beta=500$, which effectively corresponds to $T=0$ for the structure factor computed. The results will be compared with a numerical BA calculation for the same $L$ \cite{caux05a,cauxdata}. In Sec.~\ref{sec:example3} we consider a case of a synthetic spectrum with non-trivial shape to further illustrate the dependence of the average spectrum on the parametrization used. We also discuss the convergence toward the correct profile with decreasing error level $\sigma$ of the imaginary-time data. We study error levels as low as $\sigma=10^{-8}$, which would in most cases be unrealistic in QMC simulations but may be of interest in potential applications of SAC to problems where there are no statistical errors in the imaginary-time data. We also discuss the convergence of the spectrum with increasing $N_\omega$ at fixed error level. In Sec.~\ref{sec:nomega} we will show how spectra sampled in continuous frequency space evolve with $N_\omega$ when using the Bayesian sampling temperature, $\Theta=1$. The main question here is whether an optimal $N_\omega$ can be identified, as a potential alternative to optimizing $\Theta$ for arbitrary (normally large) $N_\omega$. We conclude that there is no obvious practical way to choose an optimal $N_\omega$, and it is better to aim for the $N_\omega \to \infty$ limit by using sufficiently large $N_\omega$ with $\Theta$ appropriately chosen. We note already here that Eq.~(\ref{eq:chi2}) implies $\Theta \propto 1/N_\omega$, which will be demonstrated formally in Sec.~\ref{sec:entropy}. We will not show any statistical errors on the spectral functions presented in the tests in this section or elsewhere in the paper.
In fact, in Sec.~\ref{sec:maxent} (and further in \ref{app:fluct} and \ref{app:statmech}) we argue that there are no fluctuations in the average spectrum in the limit $N_\omega \to \infty$, and that the fluctuations for finite $N_\omega$ also do not correspond to meaningful statistical errors reflecting the errors in $\bar G(\tau)$. We will also further discuss the issue of statistical errors in Sec.~\ref{sec:errors}. For now, we note that the statistical errors related to typical noise levels of $\bar G(\tau)$, preferably obtained using bootstrapping, are very small compared to the differences between spectra sampled with different parametrizations. \subsection{Optimal-$\Theta$ criterion} \label{sec:thetacrit} The imposed ``thermal noise'' level implied by Eq.~(\ref{eq:chi2}) is in spirit similar to the cut-off in Tikhonov regularization \cite{tikhonov63}, which has recently been discussed in the context of analytic continuation of QMC data by singular-value decomposition \cite{Ghanemthesis}. In practice, the criterion is implemented in SAC as follows: A simulated annealing procedure is first carried out to find $\chi^2_{\rm min}$. The value only needs to be reasonably close to the true minimum [considering the fact that the factor $a \lesssim 1$ in Eq.~(\ref{eq:chi2}) should not have to be specified exactly], which is difficult to reach exactly but which can be rather easily approached to the degree needed here. After this initial step, a second annealing procedure is carried out where $\Theta$ is lowered until the mean value $\langle \chi^2\rangle$ satisfies Eq.~(\ref{eq:chi2}), with $a \approx 0.5$ typically. Alternatively, the $\Theta$ criterion can be applied with saved data from the first stage, but we often do the second run at a slower annealing rate in order to obtain a more precise $\langle \chi^2(\Theta)\rangle$ form (i.e., with smaller statistical errors). 
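The two-stage procedure just described can be summarized in the following sketch, which is illustrative only; \texttt{sample\_chi2} stands in for a full SAC sampling run at a given $\Theta$ that returns the measured $\langle \chi^2\rangle$ (a hypothetical interface, not part of our program), and the geometric annealing schedule is one simple choice:

```python
import math

def anneal_chi2_min(sample_chi2, theta0=10.0, ratio=1.1, n_steps=200):
    """Stage 1: simulated annealing in Theta to estimate chi2_min.
    sample_chi2(theta) is a stand-in for a full SAC sampling run
    at temperature theta, returning the measured <chi^2>."""
    theta, chi2_min = theta0, float("inf")
    for _ in range(n_steps):
        chi2_min = min(chi2_min, sample_chi2(theta))
        theta /= ratio                    # geometric cooling schedule
    return chi2_min

def fix_theta(sample_chi2, chi2_min, a=0.5, theta0=10.0, ratio=1.1):
    """Stage 2: lower Theta until Eq. (eq:chi2) is satisfied,
    <chi^2(Theta)> <= chi2_min + a*sqrt(2*chi2_min)."""
    target = chi2_min + a * math.sqrt(2.0 * chi2_min)
    theta = theta0
    while sample_chi2(theta) > target:
        theta /= ratio
    return theta
```

In practice each evaluation of \texttt{sample\_chi2} would itself comprise a large number of Monte Carlo sweeps, and the second stage would typically use a slower schedule; only the structure of the driver is the point here.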
With a single slow annealing run, the average spectra accumulated at all $\Theta$ values can also be saved for later examination, e.g., to study in detail the sensitivity of the spectrum to the value of $\Theta$. In some cases, e.g., when the number of $\delta$-functions is small or when constraints are imposed, the optimum according to Eq.~(\ref{eq:chi2}) may be for $\Theta > 1$ (though never much above $1$ in our experience). In such cases it could be argued that $\Theta=1$ should be used, since there would be no reason to introduce more fluctuations than with the original probability from Bayes' theorem. However, the need to use $\Theta < 1$ when $N_\omega$ is large already demonstrates the failure of the application of Bayes' theorem in this context (as further elaborated in Sec.~\ref{sec:pathology}), and then it is also legitimate to use $\Theta > 1$ when $\Theta=1$ leads to an overly constrained spectrum. To further motivate the way in which $\langle \chi^2\rangle$ is fixed above the minimum value $\chi^2_{\rm min}$ in Eq.~(\ref{eq:chi2}), we note that the mean $E(\chi^2)$ and variance $V(\chi^2)$ of the $\chi^2$ distribution are
\begin{equation}
E(\chi^2) = N_{\rm dof},~~~~V(\chi^2) = 2N_{\rm dof},
\label{evchi2}
\end{equation}
which applies to a best-fit procedure (with an applicable fitting function) where the number of independent degrees of freedom is
\begin{equation}
N_{\rm dof} = N_{\rm data} - N_{\rm para},
\label{ndofdef}
\end{equation}
where $N_{\rm data}$ is the number of fitted data points and $N_{\rm para}$ the number of optimized parameters in the fitting function. In the case of SAC, $N_{\rm data}=N_\tau$, but the number of sampled frequencies or amplitudes (or both) cannot be regarded as $N_{\rm para}$. First, since typically $N_\omega \gg N_\tau$, formally $N_{\rm dof}=N_\tau-N_\omega$ would be negative and nonsensical.
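The textbook values in Eq.~(\ref{evchi2}) are easy to confirm numerically; the following sketch draws $\chi^2$-distributed samples and checks that the sample mean and variance approach $N_{\rm dof}$ and $2N_{\rm dof}$:

```python
import numpy as np

# Empirical check of Eq. (evchi2): a chi^2 variable with N_dof degrees
# of freedom has mean N_dof and variance 2*N_dof.
rng = np.random.default_rng(1)
ndof = 30                       # an illustrative number of degrees of freedom
samples = rng.chisquare(ndof, size=1_000_000)
print(samples.mean())           # close to 30
print(samples.var())            # close to 60
```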
Second, with positive definite amplitudes, when more $\delta$-functions (or other parameters defining the positive definite spectrum) are added, at some point $\chi^2 \to \chi^2_{\rm min}>0$ and the value can no longer be reduced. Thus, while a larger number of $\delta$-functions implies a more flexible spectrum, these cannot all independently contribute to minimizing $\chi^2$---instead, one can regard them as collectively forming an effective (rather small) number of parameters. Indeed, when minimizing $\chi^2$ by, e.g., simulated annealing, the optimal spectrum is essentially the same as that of data-fitting with some small number (2-6 typically) of $\delta$-functions (and we argue in \ref{app:lowtheta} that no better positive definite fitting function exists), only with some small broadening of the peaks or small spurious peaks, with both types of flaws diminishing with improved optimization. A number $N_\delta$ of $\delta$-functions (or sharp peaks) corresponds to $N_{\rm para}=2N_\delta$ effective fitting parameters (frequencies and amplitudes). Further, according to Eq.~(\ref{evchi2}) we can regard $\chi^2_{\rm min}$ as a proxy for the effective number of degrees of freedom, and similarly $2\chi^2_{\rm min}$ as a proxy for the variance of the $\chi^2$ distribution with this number of degrees of freedom. In practice, a well defined number $N_\delta$ of sharp peaks appears only when $\chi^2$ is extremely close to its perfect minimum, and our method of fixing $\Theta$ only needs a reasonably good approximation to $\chi^2_{\rm min}$, say to within $1\%$ of the optimal value (where the average spectrum may still look very ill-converged unless the annealing process is extremely slow). We can then still consider $\chi^2_{\rm min} \approx N_{\rm dof}$.
According to Eq.~(\ref{ndofdef}), the effective number of parameters should then also be given by $N_{\rm para} \approx N_\tau - \chi^2_{\rm min}$, which can be used as a statistical test when comparing with $2N_\delta$ (after a sufficiently slow annealing); we expect \begin{equation} \chi^2_{\rm min} \approx N_{\tau} - 2N_{\delta}, \label{chi2expected} \end{equation} to be satisfied within a well defined uncertainty stemming from the variance of the $\chi^2$ distribution, which is sufficiently well approximated by $2\chi^2_{\rm min}$. It is not necessary to always perform such tests, but we will present examples in \ref{app:lowtheta} when discussing the limit $\Theta \to 0$ in detail. As mentioned above, for typical QMC data quality $N_\delta$ is normally small ($2-6$), and if $N_\tau \gg 2N_\delta$ we should of course simply have $\chi^2_{\rm min}/N_\tau \approx 1$. However, since $N_\delta$ is also itself related to the data quality and the data points are highly correlated, in practice we have found that there is no gain in using very large $N_\tau$ (especially considering the increase in computational effort with $N_\tau$). In our tests, we typically take $N_\tau$ in the range $10$ to $50$, and $\chi^2_{\rm min}/N_\tau$ then only very rarely exceeds $1$. With $a=1$ in Eq.~(\ref{eq:chi2}) the sampled spectrum, on average, has $\chi^2$ deviating from $\chi^2_{\min}$ by one standard deviation. This level of fluctuations should remove distortions arising from ``fitting to the noise'' (a concept that we also make more precise in the case of SAC in \ref{app:lowtheta}). We will show below that the criterion Eq.~(\ref{eq:chi2}) with $a$ of order $1$ indeed produces excellent spectra in tests both on synthetic imaginary-time data and on actual QMC results for systems with known spectral functions. We typically use $a=0.5$, which further reduces the rare fluctuations of $\chi^2$ up to values corresponding to several standard deviations away from the mean. 
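The consistency test based on Eq.~(\ref{chi2expected}) can be coded directly. As an illustration we use the numbers from Example 1 below ($N_\tau=8$, $\chi^2_{\rm min}/N_\tau \approx 0.479$, and $N_\delta=2$ sharp peaks in the $\Theta \to 0$ limit); the tolerance of two standard deviations is a choice made here, not a prescription from the text:

```python
import math

def consistent_peak_count(chi2_min, n_tau, n_delta, n_sigma=2.0):
    """Test Eq. (chi2expected): chi2_min should agree with
    N_tau - 2*N_delta to within n_sigma standard deviations,
    with the variance approximated by 2*chi2_min."""
    dev = abs(chi2_min - (n_tau - 2 * n_delta))
    return dev <= n_sigma * math.sqrt(2.0 * chi2_min)

# Numbers from Example 1 (Sec. example1): N_tau = 8,
# chi2_min/N_tau ~ 0.479, and N_delta = 2 peaks as Theta -> 0.
print(consistent_peak_count(0.479 * 8, 8, 2))   # True
```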
With $a=0.25$, we have sometimes obtained spectra with clear signs of overfitting. We will illustrate the optimal-$\Theta$ criterion here in SAC with both the fixed-grid and continuum parametrizations of the spectrum. We often normalize the goodness of the fit by the number $N_\tau$ of $\tau$ points. Because $N_\tau$ is always larger than the effective number of degrees of freedom, Eq.~(\ref{ndofdef}), the statistical expectation is that $\chi_{\rm min}^2/N_\tau < 1$. Here it is worth pointing out that the rarely used ``historical'' variant of the ME method imposes $\chi^2/N_\tau = 1$ (in that case there is no averaging) as the criterion for determining the entropic coefficient $\alpha$. As has been pointed out \cite{jarrell96}, it is not always possible, because of ``unlucky'' statistical fluctuations, to reach this $\chi^2$ value, and, moreover, by arguments similar to those above, in general $\chi^2=N_\tau$ will be statistically too high (though only marginally so). In principle, a failsafe criterion like Eq.~(\ref{eq:chi2}) can also be used with the ME method, but we are not aware of any works using it so far. We will demonstrate its applicability with the ME method in Sec.~\ref{sec:maxent}.

\subsection{Example 1: 16-spin Heisenberg chain}
\label{sec:example1}

The exact spectrum $S(q,\omega)$ for a finite system is a sum of $\delta$-functions, Eq.~(\ref{somegasum}), and for only $16$ spins the number of $\delta$-functions with significant weight is not very large (decreasing to just a few when $T \to 0$). No realistic numerical analytic continuation method can resolve many individual $\delta$-functions, and considerable smoothing has to be imposed in order to compare results. As in Ref.~\cite{sandvik98}, we here use a histogram with relatively large bins to represent the exact spectral weight distribution of $S(q,\omega)$, in this case for $q=\pi/2$ at $T=J/2$.
\begin{figure}[t] \centerline{\includegraphics[width=8.25cm, clip]{fig05.pdf}} \vskip-1mm \caption{Examples of the evolution of the goodness of the fit with the sampling temperature, shown here on a log-log plot for three different parametrizations of the spectrum [Figs.~\ref{fig:spec}(a)--(c)]; on a fixed frequency grid (open black circles), in the continuum with both frequencies and amplitudes sampled (blue solid circles), and in the continuum with fixed amplitudes $A_i=1/N_\omega$ and sampled frequencies (red solid circles). The system is the $16$-site Heisenberg chain at inverse temperature $\beta=2$, with time spacing $\Delta_\tau=0.125$ in the QMC data (i.e., $N_\tau=8$) and statistical error level $\sigma \approx 10^{-5}$. The dashed blue line corresponds to the criterion for optimal sampling temperature, Eq.~(\ref{eq:chi2}) with the constant $a=0.5$. The inset shows the logarithmic derivatives extracted from differences between data at $\Theta_i$ and $\Theta_{i+1}$.} \label{fig:x2} \end{figure} Figure \ref{fig:x2} shows how $\langle \chi^2\rangle/N_\tau$ evolves with $\Theta$ in SAC simulated annealing runs. In the fixed-grid case, a frequency spacing $\Delta_\omega=0.005$ was used for $\omega \in (0,5)$ and with the continuous-$\omega$ parameterizations $1000$ $\delta$-functions were sampled. In all cases, the annealing processes started at $\Theta=10$ and the temperature was gradually lowered by dividing $\Theta$ by $1.1$ between each of the points in Fig.~\ref{fig:x2}. For each $\Theta$, of the order of $10^6$ full Monte Carlo updating sweeps were carried out and all the accumulated spectra were saved. With all three parameterizations, $\langle \chi^2\rangle$ decreases monotonically with decreasing sampling temperature, eventually converging to, for all practical purposes, the same minimum value, $\langle \chi^2\rangle/N_\tau \approx 0.479$, indeed well below $1$ as expected. 
It can be noted that the three $\langle \chi^2(\Theta)\rangle$ curves differ considerably at higher $\Theta$, pointing to different entropy contents of the spectra defined with these parameterizations.

\begin{figure*}
\centerline{\includegraphics[width=16cm, clip]{fig06.pdf}}
\vskip-1mm
\caption{Spectral functions (dynamic structure factor at $q=\pi/2$) corresponding to Fig.~\ref{fig:x2} for selected values of $\langle \chi^2\rangle/N_\tau$. The three rows correspond to the three different $\delta$-function parameterizations illustrated in Fig.~\ref{fig:spec}(a)--(c); a fixed frequency grid with sampled amplitudes in (a), a continuum with only frequencies sampled (equal fixed amplitudes) in (b), and with both frequencies and amplitudes sampled in (c). The columns correspond to different goodness-of-fit values, indicated on top of each column, which are realized at different $\Theta$ values according to Fig.~\ref{fig:x2}. The histograms (shown with black lines) represent the spectral weight distribution calculated according to the exact form, Eq.~(\ref{somegasum}), for the same $16$-spin Heisenberg chain at the same temperature $T=1/2$.}
\label{sw1}
\vskip-2mm
\end{figure*}

The inset of Fig.~\ref{fig:x2} shows the $\ln(\Theta)$ derivative of $\ln \langle \chi^2(\Theta)/N_\tau \rangle$, where we observe broad maxima in all three cases. Such maxima have been likened to heat capacity peaks \cite{beach04}, and it was argued that the peak location is a good way to choose $\Theta$ (arguably representing the ``phase transition'' between data fitting and noise fitting). However, in Fig.~\ref{fig:x2} the peak locations correspond to rather large $\langle \chi^2\rangle$ values, well above $\chi^2_{\rm min}$ and outside the range that would be considered acceptable based on the width of the $\chi^2$ distribution.
Therefore, the peak location should not, in general, be an optimal criterion, though it has often produced good results \cite{feldner11,voll15,lohofer15,lohofer17,becker17,becker18,ying19,raczkowski20,sato21}. In contrast, adjusting $\Theta$ according to our criterion in Eq.~(\ref{eq:chi2}), indicated by a dashed line corresponding to the factor $a=0.5$ in Fig.~\ref{fig:x2}, guarantees that the spectrum represents a statistically good fit to $\bar G(\tau)$. Figure \ref{sw1} shows examples of average spectra obtained at different sampling temperatures in the same runs as above, with the columns labeled by the corresponding normalized $\langle \chi^2\rangle$ values, with fixed-grid and continuum-frequency results shown at matching values, i.e., at different values of $\Theta$ according to Fig.~\ref{fig:x2}. With the $\delta$-functions in continuous frequency space, the spectral weight density was accumulated in histograms of the same bin size, $\Delta_\omega=0.005$, as in the fixed-grid spectrum. For comparison, each panel in Fig.~\ref{sw1} also shows exact diagonalization results represented by a histogram with bin width $\Delta_\omega=0.1$. Even at the rather high (physical) temperature $T=J/2$ used here, the histogram has a jagged structure that one should not expect to reproduce (and of course the true spectral function is even less smooth, consisting of some hundreds of $\delta$-functions). A realistic hope would be to reproduce the overall shape of the frequency distribution without the fine details. For a given $\langle \chi^2\rangle/N_\tau$ value in Fig.~\ref{sw1}, the SAC spectra obtained with the different parameterizations look distinct in their details, though the overall distribution of spectral weight is rather similar and close to the exact histogram when $\langle \chi^2\rangle/N_\tau = 0.6$. The fixed grid produces the sharpest peaks, while the continuum with only frequencies sampled leads to the overall broadest spectra.
The continuous-$\omega$ results clearly have better overall shapes when $\langle \chi^2\rangle/N_\tau > 0.6$, where the peaks of the fixed-grid spectra are too narrow and do not represent satisfactory envelopes of the histogram. There is also a broad spurious maximum at low frequency. The continuous-$\omega$ results are overall broader and represent more satisfactory weight distributions. The fact that the spectra sampled with all three parametrizations differ significantly from each other, even for matching values of $\langle \chi^2\rangle$, implies that their entropies do not just differ by constant factors but are affected by their respective stochastic processes in some more fundamental way, which we will investigate in detail in Secs.~\ref{sec:entropy} and \ref{sec:maxent}. Without a quantitative criterion for the match to the exact result, which would not be very useful here given that the exact spectrum has fine structure that can never be reproduced, it is not possible to say definitely which parametrization is the best one here. Overall, it appears that the fixed-amplitude spectra for $1 \ge \langle \chi^2\rangle/N_\tau \ge 0.6$ [Fig.~\ref{sw1}(b)] represent somewhat broadened envelopes of the histograms, and that the added amplitude sampling [Fig.~\ref{sw1}(c)] causes the peak to narrow significantly and visually brings it closer to the histogram. However, the bin size of the histogram is largely arbitrary. Preceding a peak-split at lower temperatures, a broadening is seen in Fig.~\ref{sw1}(c) at $\langle \chi^2\rangle/N_\tau = 0.5$, and a precursor to the splitting is also present in Fig.~\ref{sw1}(a) at $\langle \chi^2\rangle/N_\tau = 0.6$.
With all three parametrizations, the spectrum eventually splits into two peaks below a non-universal value of $\Theta$, which can be regarded as a consequence of overfitting with a spectrum constrained to be positive definite, as further discussed in \ref{app:lowtheta} (and also in \ref{app:low2}, where we relax the constraint and include negative spectral weight). As already mentioned, we also establish that each sharp peak corresponds to two effective parameters that the positive definite spectrum provides for fitting the noisy imaginary-time data. Thus, in the present case there are four parameters, and only with significantly improved data quality would this number at some point increase to six. As $\Theta$ is lowered, the splitting into two peaks occurs in Fig.~\ref{sw1} at the highest value of $\langle \chi^2\rangle$ in the fixed-grid case (though at a much lower value of $\Theta$, as seen in Fig.~\ref{fig:x2}), followed by the continuum with both frequency and amplitude updates. In Ref.~\cite{sandvik98}, it was shown that the fixed-grid spectrum actually is the best right before the point where the split occurs, when the single maximum broadens out before the peak at lower frequency emerges; this broadening is manifested in the fixed-grid panel for $\langle \chi^2\rangle/N_\tau = 0.6$ in Fig.~\ref{sw1}(a). A broadening before the split also takes place with the combined amplitude and frequency sampling, as seen in the $\langle \chi^2\rangle/N_\tau = 0.5$ panel of Fig.~\ref{sw1}(c). The shape of the spectrum here is very different from the fixed-grid case in Fig.~\ref{sw1}(a), however. In Ref.~\cite{sandvik98}, the peak splitting was related to a local entropy maximum, which was suggested as a criterion for selecting the best $\Theta$ value. It has proven difficult, however, to clearly identify such an entropy maximum in general. 
The $\Theta$ fixing based on $\langle \chi^2\rangle$ in Eq.~(\ref{eq:chi2}) is both easier to satisfy in practice and better motivated by statistical arguments. With $a=1$, the optimal sampling temperature according to Eq.~(\ref{eq:chi2}) and Fig.~\ref{fig:x2} corresponds to $\langle \chi^2\rangle/N_\tau \approx 0.82$. In practice, our experience is that somewhat smaller $a$ values produce better results (though the differences are normally only marginal). We typically use $a=0.5$ (corresponding to the horizontal line in Fig.~\ref{fig:x2}), which in the present case corresponds to $\langle \chi^2\rangle/N_\tau \approx 0.65$. Even $a=0.25$ (here $\langle \chi^2\rangle/N_\tau \approx 0.55$) typically produces good spectra without obvious effects of overfitting in the frequency continuum (while the grid-based spectra seem to suffer earlier from overfitting). If the underlying imaginary-time data are good enough, this detail of the $\Theta$ criterion is not critical, as the spectrum changes very little over a wide range of temperatures before the effects of overfitting (peak splitting) become visible. This insensitivity to $\Theta$ variations is apparent in the set of continuous-frequency results in Figs.~\ref{sw1}(b) and \ref{sw1}(c) for $\langle \chi^2\rangle/N_\tau > 0.5$, while the corresponding fixed-grid results in Fig.~\ref{sw1}(a) evolve more significantly with $\Theta$. Overall, a reasonable conclusion that can be drawn from the results in Fig.~\ref{sw1} is that frequency-only sampling is the safest way to avoid too much structure in the spectrum when the $\Theta$ criterion Eq.~(\ref{eq:chi2}) is applied with a reasonable value of the factor $a\lesssim 1$. However, the profiles obtained when also the amplitudes are sampled visually appear closer to the true weight distribution, not only in the peak shape but also in the way the tail of the spectrum is well reproduced.
The generality of these behaviors is of course not clear based on just this test, but we have found very similar differences between spectra in other cases where the correct spectrum consists of a single broad maximum. It is interesting that the spectra for $\langle \chi^2\rangle/N_\tau > 0.5$ in Fig.~\ref{sw1}(c) are very close to an analytical result based on a high-temperature expansion in Ref.~\cite{starykh97} (Fig.~4, $T/J=0.5$ panel). While that result is for the thermodynamic limit, at this high temperature the spectrum obtained with SAC evolves very little from the $L=16$ form when $L$ increases. This excellent agreement with a presumably very good analytical form provides additional support for the favorable effects of amplitude fluctuations on the average spectrum. Moreover, the results in Fig.~\ref{sw1}(b) are closer to the ME result also shown in Fig.~4 of Ref.~\cite{starykh97}. The closeness of the ME result to that of frequency-only sampling will be explained in Sec.~\ref{sec:maxent} based on a mapping between the two methods. The sampling time was a few minutes for each $\Theta$ point in Fig.~\ref{fig:x2} (several hours for a total of 200 $\Theta$ values, down to $\Theta$ much lower than shown in the figure). This is longer than typically required to just establish $\chi^2_{\rm min}$ to sufficient accuracy with the continuous frequency parametrizations. It was done in this way here in order to obtain the smooth goodness-of-fit curves in Fig.~\ref{fig:x2} and sufficiently smooth spectra when $\langle\chi^2\rangle\approx \chi^2_{\rm min}$ in Fig.~\ref{sw1}. All 200 spectra were saved in this process and some of them were selected for Fig.~\ref{sw1}. If such detailed information is not needed, the entire first annealing process can often be carried out in a few minutes and still produces sufficiently converged $\chi^2_{\rm min}$ estimates. 
It is clear that a slight over-estimation of $\chi^2_{\rm min}$ in practice just corresponds to the effective value of $a$ in Eq.~(\ref{eq:chi2}) being slightly higher than the target value, and, as we have seen, the end result is not sensitive to minor variations in $a$. For the second annealing procedure, which stops when the criterion in Eq.~(\ref{eq:chi2}) is satisfied, it is better to sample a bit longer for each $\Theta$ (and also the rate of lowering $\Theta$ can be slower), so that the error bars on $\langle \chi^2\rangle$ are sufficiently small for reliably applying the optimal-$\Theta$ criterion. For the final sampling when $\Theta$ has been fixed (and the sampling continues from the last configuration of the annealing process), the sampling time can be adapted to the desired smoothness of the spectrum. \begin{figure*}[t] \centerline{\includegraphics[width=16cm,clip]{fig07.pdf}} \vskip-1mm \caption{SAC results for the dynamic spin structure factor (red curves) at $q=4\pi/5$ organized as in Fig.~\ref{sw1} but for $L=500$ spins at $T=J/500$, sufficiently low for ground state results. The spectra are compared with a $T=0$ numerical BA calculation \cite{caux05a,cauxdata} for the same system size (black curves). Results are shown at several values of the goodness of the fit based on simulated annealing with gradually lowered $\Theta$ values, similar to Fig.~\ref{fig:x2}. In this case $\chi^2_{\rm min}/N_\tau=0.619$.} \label{sw2} \vskip-1mm \end{figure*} \subsection{Example 2: 500-spin Heisenberg chain} \label{sec:example2} Next, we present a similar study of a much larger Heisenberg chain: $L=500$ spins simulated with the SSE method at $T=J/500$. We consider the dynamic structure factor at momentum $q=4\pi/5$ and compare with $T=0$ results calculated numerically using the BA wave function \cite{caux05a,caux05b,pereira06,cauxdata} for the same system size. 
It should be noted again that these BA-based results are not exact, but include the contributions from only two- \cite{muller81,bougourzi96,karbach97} and four-spinon \cite{caux05a} processes. The total spectral weight being known exactly, it can be inferred that the BA spectrum contains about $98\%$ of the total weight in the case $q=4\pi/5$ that we consider here. In Sec.~\ref{sec:contedge1} we will also study other $q$ values (with a different SAC parametrization), where the captured weight is slightly different. The lower edge of the BA spectrum is the exact spinon dispersion relation \cite{cloizeaux62}. The correlation function $\bar G(\tau)$ was computed on a uniform imaginary-time grid with spacing $\Delta_\tau=0.25$ between points. For the particular momentum considered here, good data (using 20\% relative error as the cut-off) were obtained up to $\tau=9.5$; a total of $33$ data points and the corresponding covariance matrix were thus used in the SAC runs. Data obtained at $T=J/1000$ with a similar level of error on a nonlinear $\tau$ grid are shown in Fig.~\ref{fig:gtau}. For this calculation, the two different $\tau$ grids produced essentially identical results, thus demonstrating that the choice of $\tau$ grid is not critical when a reasonably large number of points is used. In general, once a certain number of $\tau$ points is used, additional points do not contribute substantially more information, due to strong covariance, unless the data quality is further improved. There are also no signs of differences in the $\tau$-dependence at the level of the error bars with these two data sets calculated at two different very low temperatures, and we expect that there are no finite-temperature effects in the spectral functions presented below.
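The 20\% relative-error cutoff used above to truncate the $\tau$ grid can be sketched as follows (a hypothetical helper, not the actual analysis code; the mock data in the test are illustrative only):

```python
def truncate_tau_grid(tau, g, sigma, max_rel_err=0.2):
    """Keep G(tau) points up to the first tau at which the relative
    statistical error sigma/G exceeds max_rel_err (20% in the text).
    Returns the retained (tau, G, sigma) triples."""
    kept = []
    for t, gv, sv in zip(tau, g, sigma):
        if sv > max_rel_err * gv:
            break
        kept.append((t, gv, sv))
    return kept
```

Since $\bar G(\tau)$ decays with $\tau$ while the absolute QMC error stays roughly constant, the relative error grows monotonically and a single cutoff point is well defined.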
\begin{figure*}[t] \centerline{\includegraphics[width=115mm,clip]{fig08.pdf}} \vskip-1mm \caption{Continuous frequency SAC results (red curves) for the dynamic structure factor of the $L=500$ Heisenberg chain (same as in Fig.~\ref{sw2}) with the lower edge of the spectrum fixed at the known frequency $\omega_q \approx 0.923$ for $q=4\pi/5$. The BA result is shown in black. In (a) only frequency updates were carried out while in (b) also the amplitudes were updated. In both (a) and (b), $\Theta$ was adjusted to give $\langle \chi^2\rangle/N_\tau \approx 0.75$ corresponding to $a = 0.5$ in the criterion in Eq.~(\ref{eq:chi2}) when applied with the lower bound imposed (in which case $\chi^2_{\rm min}/N_\tau\approx 0.65$). The insets show results for the goodness-of-fit vs the edge location, where in both cases $\Theta$ was fixed at a higher value than in the main graphs, so that $\langle \chi^2\rangle/N_\tau \approx 1.5$ in both cases at $\omega_q=0$ (and there are no appreciable changes until $\omega_q \approx 0.7$). The correct edge location is indicated by the vertical dashed lines.} \label{w0fix} \vskip-2mm \end{figure*} Simulated annealing runs were carried out and produced consistent goodness-of-fit curves similar to those in Fig.~\ref{fig:x2}, with convergence to $\chi^2_{\rm min}/N_\tau \approx 0.619$ for all three parametrizations. Fig.~\ref{sw2} shows spectral functions obtained at five different values of the mean goodness-of-fit, graphed together with the BA result. The exact dynamic structure factor of the Heisenberg chain in the thermodynamic limit has a power-law singularity at the lower edge $\omega_q$, of the form $S(\omega) \propto (\omega-\omega_q)^{-1/2}$ (with a logarithmic correction that we will discuss later in Sec.~\ref{sec:contedge2}). The BA results also being calculated on a finite chain, a broadening of the $\delta$-functions has been imposed \cite{caux05a,cauxdata} and the divergence is thus quenched. 
Unrestricted SAC sampling naturally cannot reproduce the sharp edge at $\omega_q$. As is apparent in Fig.~\ref{sw2}, because of the spectral weight appearing below the true edge there is a compensatory effect by the sampling procedure (in order to produce a good fit to the imaginary-time data) that shifts the tip of the rounded peak to higher frequency. The optimal $\Theta$ criterion in Eq.~(\ref{eq:chi2}) with $a = 0.5 \sim 1$ corresponds to $\langle \chi^2\rangle/N_\tau = 0.7 \sim 0.8$, where it can be seen in Fig.~\ref{sw2} that there is always a second spurious broad maximum (often referred to as ``ringing'' in the literature) at $\omega\approx 2$. Clearly, overall the results are unsatisfactory with all parametrizations. When $\Theta$ is taken very low, a spectrum with a few sharp peaks emerges with all parametrizations. With the fixed grid, it is very difficult to reach the ultimate spectrum with true minimum $\chi^2$ value, though the peaks seen in Fig.~\ref{sw2} still sharpen considerably when $\Theta$ is further lowered. While it is difficult to reach the true $\chi^2_{\rm min}$ value also with the other parametrizations, the narrow peaks at low $\Theta$ are again suggestive of the actual $\chi^2$-minimizing positive definite spectrum consisting of four $\delta$-functions. In \ref{app:lowtheta} we show further evidence of this behavior in results at much lower $\Theta$ than in Fig.~\ref{sw2}. To demonstrate that the spurious second maximum, which is present at all reasonable values of $\Theta$ in Fig.~\ref{sw2}, indeed is caused by the inability of the method to resolve the sharp edge, in Fig.~\ref{w0fix} we show results obtained with the continuous frequency parametrizations when the lower edge of the spectrum has been fixed at its known value, which is \cite{cloizeaux62} $\omega_q = \pi\sin(q)/2$ in the thermodynamic limit (and not significantly different for $L=500$ \cite{caux05a}).
Here we observe a very sharp peak at the edge and a far less pronounced second maximum. Overall, the results are much closer to the BA spectrum, though some distortions are still visible, and more so when both frequencies and amplitudes are sampled. The distortions in the broad tail portion of the spectrum can still be thought of as induced by the primary distortions close to the edge. With no weight below the true edge, the secondary distortions are also much milder. As a first example of optimization of a constraint according to the principles illustrated in Fig.~\ref{fig:optim}, the insets of Fig.~\ref{w0fix} show how the goodness of the fit changes with the location $\omega_q$ of the imposed lower bound. Here the sampling temperature was held fixed at a value higher than prescribed by our $\Theta$ criterion, but still low enough to give reasonable fits of the spectral functions to the data. With the slightly elevated $\Theta$ value, there is room for the fit to improve when entropy is removed by the constraint, until the fit again deteriorates when the spectrum becomes over-constrained by an excessively high lower bound. In the inset of Fig.~\ref{w0fix}(a) we observe that the $\langle \chi^2 \rangle$ minimum is well developed at $\omega_q$ close to the correct value, while in Fig.~\ref{w0fix}(b) the minimum is much shallower and further away from the correct value. Moreover, in Fig.~\ref{w0fix}(a) the best value of $\langle \chi^2 \rangle/N_\tau$ is well below $1$, in the realm of satisfying our $\Theta$ criterion, while in Fig.~\ref{w0fix}(b) the value always stays high above $1$. In both cases (in particular in the latter case), we can of course improve the goodness of the fit by lowering $\Theta$, but then the minimum becomes shallower. 
\begin{figure*}[t] \centerline{\includegraphics[width=135mm, clip]{fig09.pdf}} \vskip-1mm \caption{A synthetic $T=1/16$ spectrum (black curves) reproduced by continuous-frequency SAC with only frequency moves in the upper row and also including amplitude moves in the lower row. In both cases, results are shown for five different error levels; from $\sigma=10^{-4}$ in (a) and (b) to $10^{-8}$ in (i) and (j), as indicated on top of each column. The $\sigma=10^{-4}$ and $\sigma=10^{-5}$ $\bar G(\tau)$ data sets included 40 $\tau$ points at spacing $\Delta_\tau=0.2$, while 80 points with $\Delta_\tau=0.1$ were used in the other cases. The number of sampled $\delta$-functions ranged from $10^3$ with both parametrizations at $\sigma=10^{-4}$ to $2\times 10^5$ with frequency updates only at $\sigma=10^{-8}$ ($5\times 10^{4}$ with the amplitude updates included).} \label{syntcomp} \vskip-2mm \end{figure*} Here it should be noted that just imposing the lower bound still does not enable the sampling to reproduce the correct edge shape. This inability to resolve a very sharp feature, because of further entropic pressures not impeded by the simplest edge constraint, should be responsible for the imperfect determination of the bound $\omega_q$ in this case. The $\langle \chi^2 \rangle$ minimum should also only be expected to reflect the correct value of the constraining parameter in the limit of very low error level $\sigma$ of the $\bar G(\tau)$ data, as we will discuss further in Sec.~\ref{sec:delta2} (and we will also further explain how to choose $\Theta$ when scanning over a parameter). Nevertheless, even if it is not yet a perfect constraint, imposing the lower bound (at the known $\omega_q$ or at the imperfect optimized value) clearly improves the fidelity of the method, especially with the equal-amplitude parametrization. 
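The constraint-optimization principle used in the insets of Fig.~\ref{w0fix} amounts to a one-dimensional scan over the trial lower bound, which can be sketched as follows (the stand-in $\langle\chi^2\rangle$ function in the test is a hypothetical smooth curve, not SAC output):

```python
def scan_lower_bound(chi2_of_bound, omega_grid):
    """Scan the imposed lower edge omega_q over a grid of trial values
    and locate the minimum of the sampled <chi^2> at fixed Theta.

    chi2_of_bound(omega_q) stands in for an equilibrated SAC run with
    all delta-functions restricted to omega >= omega_q.
    """
    curve = [(w, chi2_of_bound(w)) for w in omega_grid]
    w_opt, chi2_opt = min(curve, key=lambda p: p[1])
    return w_opt, chi2_opt, curve
```

In practice each evaluation of the goodness of the fit is itself a full sampling run at the slightly elevated $\Theta$ discussed above, so the scan is the expensive outer loop of the procedure.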
The less favorable effects of the constraint when amplitude updates are included (quantitatively seen in $\langle \chi^2 \rangle/N_\tau$ in the insets of Fig.~\ref{w0fix}) indicate that there are other entropic effects, beyond leakage of weight below $\omega_q$, that distort the spectrum in this case, more so than when only the frequencies are sampled. This conclusion is supported by the differences in the spectral functions in Fig.~\ref{w0fix}, where the amplitude updates cause more ringing in the tail of the spectrum. In Ref.~\cite{sandvik16}, the fidelity of the SAC method for edge-divergent spectral functions was further improved significantly by imposing the constraint of a single maximum in the amplitudes versus frequency in the fixed-grid parametrization (i.e., only amplitude updates maintaining the single-peak structure were carried out). Such a constraint clearly impedes the entropic effects tending to flatten the peak and also suppresses the ringing behavior, e.g., non-monotonic undulations are no longer possible. Tests were carried out using the same $L=500$ chain studied here. In Secs.~\ref{sec:contedge1} and \ref{sec:contedge2}, we will present even more powerful methods for treating edge singularities with continuous-frequency SAC representations. \subsection{Example 3: Synthetic spectrum} \label{sec:example3} We next discuss an example based on synthetic data, a spectrum constructed from three Gaussians, shown as the black curves in Fig.~\ref{syntcomp}. Two of the Gaussians are broad and close to each other, so that their individual maxima do not appear and instead a single flatter maximum forms. The third peak is narrower and located clearly below the other two. The low-frequency parts of the broader Gaussians are further damped by a fast exponential decay below the maximum of the taller peak. 
The peak width is not extremely narrow, so that there is some hope of reconstructing the entire spectrum from imaginary-time data at error levels achievable in QMC calculations. When converting the spectrum to $G(\tau)$ according to Eq.~(\ref{contrel1}), we here set the inverse temperature to $\beta=16$, in light of the fact that the tall peak of this spectrum could mimic a temperature-broadened quasi-particle peak or a $T>0$ version of a singular dynamic structure factor such as the one in Example 2 above. \begin{figure*}[t] \centerline{\includegraphics[width=150mm, clip]{fig10.pdf}} \vskip-1mm \caption{Results for the same synthetic spectrum as in Fig.~\ref{syntcomp}, obtained with data of error level $\sigma=10^{-7}$ with sampling only of the frequencies. Results are shown for several different choices of the number $N_\omega$ of $\delta$-functions, and in each case two independent runs were carried out (red and black curves). The spectra were sampled at roughly equal computational effort, using $4 \times 10^{9}/N_\omega$ Monte Carlo sweeps (as defined in Sec.~\ref{sec:contsamp}). In all cases, $\Theta$ was adjusted to give $\langle \chi^2\rangle/N_\tau \approx 0.79$, corresponding to the criterion Eq.~(\ref{eq:chi2}) with $a=0.5$.} \label{nconv} \end{figure*} \subsubsection{Dependence on the data quality} Having already concluded that the continuous-frequency parametrizations are better than the fixed grid, we here only consider the former, comparing the case of frequency-only updates (equal-weight $\delta$-functions) with that of both frequency and amplitude updates. We consider noise levels from $\sigma=10^{-4}$ down to $10^{-8}$. When sampling at very low noise levels, the advantages of using large $N_\omega$ are apparent, and we went as high as $N_\omega=2\times 10^5$ in the case of frequency-only updates at $\sigma=10^{-8}$, for which the complete SAC process needed many hours of annealing and final sampling at $\Theta=0.025$.
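The generation of synthetic data of this kind can be sketched as follows. The kernel $K(\tau,\omega)=e^{-\tau\omega}+e^{-(\beta-\tau)\omega}$ used below is an illustrative assumption standing in for Eq.~(\ref{contrel1}), which is not reproduced here, and the Gaussian parameters of the actual spectrum are likewise not reproduced:

```python
import math
import random

def gaussian(w, w0, width, amp):
    """One Gaussian component of a synthetic spectrum."""
    return amp * math.exp(-(w - w0) ** 2 / (2 * width ** 2))

def g_tau(spectrum, tau, beta, wmax=10.0, n=4000):
    """Trapezoidal estimate of G(tau) = int_0^wmax dw K(tau,w) S(w),
    with an assumed finite-T kernel K = exp(-tau*w) + exp(-(beta-tau)*w)."""
    dw = wmax / n
    total = 0.0
    for i in range(n + 1):
        w = i * dw
        weight = 0.5 if i in (0, n) else 1.0
        k = math.exp(-tau * w) + math.exp(-(beta - tau) * w)
        total += weight * k * spectrum(w) * dw
    return total

def noisy_data(spectrum, taus, beta, sigma, seed=1):
    """Add Gaussian noise of standard deviation sigma, mimicking
    statistical QMC errors of a given level on the synthetic G(tau)."""
    rng = random.Random(seed)
    return [g_tau(spectrum, t, beta) + rng.gauss(0.0, sigma) for t in taus]
```

A three-Gaussian spectrum would be built by summing three `gaussian` terms (with the low-frequency damping applied on top), and the error level $\sigma$ is then dialed from $10^{-4}$ down to $10^{-8}$ as in Fig.~\ref{syntcomp}.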
At the smallest error levels the process took only on the order of ten minutes. The results are shown in Fig.~\ref{syntcomp} for both continuous frequency parametrizations and may be considered acceptable already at the highest noise levels, especially when both frequency and amplitude moves are used. The rather flat region where the broad maximum merges with the peak is the most challenging feature to reproduce, with the SAC results instead exhibiting excessive broadening of the right side of the peak followed by ringing behavior. Reducing the noise level improves the fidelity of this part of the spectrum, though ringing persists in the case of frequency-only updates even at $\sigma = 10^{-8}$. The gradual improvements with decreasing $\sigma$ appear more systematic when both frequencies and amplitudes are sampled, and even though some deviations persist the spectra at $\sigma = 10^{-7}$ and $10^{-8}$ are very close to correct. Overall, it is clear that the amplitude updates have a favorable effect on the fidelity, with the peak height very close to correct and smaller deviations overall also for the other parts of the spectrum for all $\sigma$ values. However, it should be noted that in some cases the tip of the peak is actually slightly too tall, which may indicate that the sampling entropy in this parametrization overly favors sharp peaks. Still, in Fig.~\ref{syntcomp} it is rather obvious that the spectra obtained with the amplitude updates are better. In all cases, the remaining distortions of the broad maximum and the region between the two peaks can be traced back to imperfect resolution of the tall peak, which leads to secondary distortions like ringing at higher frequencies. There then has to be a compensating effect in order for the spectrum to reproduce $\bar G(\tau)$, which implies a distortion at higher frequency, in particular in the rather flat region where the broad maximum merges with the peak.
In Sec.~\ref{sec:deltanp} we will provide further, concrete evidence of this interpretation of distortions propagating from the sharpest spectral feature. The tail portion above $\omega=2$ is rather well reproduced even at the highest error levels and improves systematically as $\sigma$ is reduced with both parametrizations. QMC simulations would not normally be able to produce data with errors $\sigma=10^{-8}$, or even $\sigma=10^{-7}$, but such data are still relevant for exploring the performance of the method in such extreme cases. There are potential applications of SAC also to analytic continuation of data without statistical errors (but often with some systematic errors instead), e.g., results produced with density matrix renormalization group (DMRG) calculations in imaginary time \cite{linden20}, which is a potentially promising alternative to calculations performed directly in real time \cite{barthel09,yang21}. To process such data by SAC, artificial noise can be added and systematically reduced as far as possible to converge to the best possible positive definite spectrum. With some more computational effort, sampling millions of $\delta$-functions, the methods used here can likely be pushed to error levels much below $\sigma=10^{-8}$; we have not yet established the practical limitations. \subsubsection{Dependence on $N_\omega$} The above example also provides a good opportunity to test the dependence of the results and the sampling efficiency on $N_\omega$. Fig.~\ref{nconv} shows results for the same synthetic spectrum as in Fig.~\ref{syntcomp}, now at fixed error level $\sigma=10^{-7}$ and sampled only with frequency updates. The number of $\delta$-functions ranges from $N_\omega=100$ to $1600$, i.e., in all cases smaller than in Fig.~\ref{syntcomp}, where $N_\omega=10^4$ in the case $\sigma=10^{-7}$.
The number of sampling sweeps was chosen proportional to $1/N_\omega$, so that the computational effort is roughly the same in all cases---about 30 single-core CPU minutes for each, after a simulated annealing from higher $\Theta$ lasting over an hour. We show results for two independent runs in each case to check the consistency of the procedures. Three key observations can be made: (i) When $N_\omega$ is too small for the given data quality, the relatively large units $1/N_\omega$ of spectral weight (in the normalized, sampled spectrum) cannot migrate sufficiently, either to the low or the high tail of the spectrum. As a consequence, a small number of them will form separate small spikes to roughly account for the spectral weight in the tails and, thus, produce a good $\langle \chi^2\rangle$ value [which is the same for all the cases shown and the same as in Fig.~\ref{syntcomp}(e)]. Note that these spectral features for small $N_{\omega}$ do not simply result from poor sampling efficiency (though indeed the sampling is also slow), but from the inability of a small number of relatively large-amplitude $\delta$-functions to sufficiently approximate the spectral weight distribution. The artificial spikes gradually move and shrink away as $N_\omega$ is increased. \begin{figure*}[t] \centering \includegraphics[width=100mm]{fig11.pdf} \vskip-2mm \caption{Examples of spectra obtained with a relatively small number $N_\omega$ of $\delta$-functions in the continuum, with the sampling temperature fixed at $\Theta=1$ and using the same imaginary-time data underlying Figs.~\ref{sw1} and \ref{sw2}; (a) for $L=16,T=J/2$ with $N_\omega=8$ and (b) for $L=500,T=J/500$ with $N_\omega=64$. In both cases, the red curve was obtained solely with frequency updates, while for the blue curve also amplitude updates were performed. The dependence of $\langle \chi^2\rangle$ on $N_\omega$ is graphed in (c) for $L=16$ and in (d) for $L=500$.
Red and blue symbols correspond to sampling without and with amplitude updates, respectively.} \vskip-2mm \label{ndep} \end{figure*} (ii) With larger $N_\omega$ the sampling is much more efficient, leading to smoother curves. For small $N_\omega$, it is very expensive in $\chi^2$ to move a unit $1/N_\omega$ of spectral weight a substantial distance, while for large $N_\omega$ the small units can move significantly. There are clear differences between the two independent runs for both $N_\omega=100$ and $200$, illustrating the very slow evolution of the spectrum and trapping behavior (note, however, that the anomalous edge spikes are quite reproducible in all cases). It is clear from this example that the larger moves in frequency for larger $N_\omega$ very well compensate for the fact that the unit of spectral weight $1/N_\omega$ is smaller and the number of Monte Carlo sweeps is lower (though the total number of frequency update attempts was the same for all cases). (iii) The profiles converge to a limiting shape for increasing $N_\omega$. Looking again at Fig.~\ref{syntcomp}(e), where a much larger $N_\omega=10^4$ was used, and comparing with the results for $N_\omega=800$ and $1600$ in Figs.~\ref{nconv}(d) and \ref{nconv}(e), the latter spectra still have a more pronounced shoulder at $\omega\approx 2$. This feature smoothens out when $N_\omega$ increases, and for $N_\omega = 5000$ and larger this part is very close to the exact spectrum and nothing changes when going to much larger $N_\omega$. It can also be noted that the height of the dominant peak systematically decreases with $N_\omega$ and Fig.~\ref{syntcomp}(e) represents the converged profile also in this respect. 
Thus, as should be expected based on the analogy with statistical mechanics, there is a ``thermodynamic limit'' of the spectrum, though in this case, because of the intensive property of $\chi^2$ and extensive entropy (which we will demonstrate explicitly for this parametrization in Sec.~\ref{sec:entropy}), the parameter that regulates this limit is not $\Theta$ but $\Theta/N_\omega$. The thermodynamic-limit analogy of $N_\omega \to \infty$, which has some unusual aspects related to $\chi^2$ as an energy and the high-density limit of the $\delta$-functions (``particles''), is discussed in detail in \ref{app:statmech}. Based on this test, we can conclude that it should always be better to use relatively large $N_\omega$. If also the amplitudes are sampled, there are no artificial spikes beyond the main part of the spectrum even for small $N_\omega$ and the spectrum converges faster with increasing $N_\omega$. There is still no harm in using larger $N_\omega$, and the sampling efficiency is then improved also in this case. It should also be noted that there are two sources of non-smoothness in the sampled spectral functions: (i) $N_\omega$ being fundamentally too small and (ii) insufficient averaging. Both effects can be seen in Fig.~\ref{nconv} when comparing with Fig.~\ref{syntcomp}(e). \subsection{Optimal number of $\delta$-functions at $\Theta=1$} \label{sec:nomega} Figure \ref{fig:x2} illustrates how SAC fails when using the Bayesian sampling temperature $\Theta=1$ in Eq.~(\ref{psg}), because of entropic effects when the number of sampled degrees of freedom is large. A useful remedy is to introduce the optimized sampling temperature $\Theta$ as explained above, and from the extensive entropy (Sec.~\ref{sec:entropy}) we know that $\Theta \propto 1/N_\omega$ should be expected.
In Ref.~\cite{sandvik16}, another remedy for the entropic catastrophe was proposed---the use of optimized constraints to prohibit the spreading of spectral weight that is the root cause of the problem. Since the entropic catastrophe is related to large $N_\omega$, a different route to solving the problem may be to work with the smallest possible $N_\omega$. The use of very large $N_\omega$ is largely motivated by improvements in sampling efficiency, as we discussed above, as well as to reach the $N_\omega \to \infty$ limit if that is desired. Here we will present some results for the evolution of the spectrum with $N_\omega$ at $\Theta=1$ in the regime where the changes with $N_\omega$ are still substantial. We will in particular investigate whether an optimal $N_\omega$ exists in the sense of a minimum $\langle \chi^2(N_\omega)\rangle$. In general, we find that this approach does not work well. Results for the same $L=16$ and $L=500$ Heisenberg chains as in Figs.~\ref{sw1} and \ref{sw2} are shown in Fig.~\ref{ndep}, now with $\Theta=1$. The previous results were obtained with $N_\omega=1000$ in both cases, while here spectral functions obtained with continuous frequency are graphed for $N_\omega=8$ ($L=16$) and $N_\omega=64$ ($L=500$) in Fig.~\ref{ndep}(a) and \ref{ndep}(b), respectively. Figs.~\ref{ndep}(c) and \ref{ndep}(d) show plots of $\langle \chi^2\rangle/N_\tau$ versus $N_\omega$ for the two systems. In the case of $L=16$ there is no discernible minimum in $\langle \chi^2\rangle$, while for $L=500$ there is in fact a rather well-defined minimum. Let us discuss the $L=16$ case first. For the smallest values of $N_\omega$, the spectrum (not shown) exhibits two sharp peaks, similar to the results in Fig.~\ref{sw1} for the smaller $\langle \chi^2\rangle$ values. The spectrum broadens out as $N_\omega$ increases, with $\langle \chi^2\rangle$ slowly increasing as well, as seen in Fig.~\ref{ndep}(c).
Even with $N_\omega$ as small as $8$, the average spectral weight in Fig.~\ref{ndep}(a) reproduces quite well the exact diagonalization histogram, though the match at small $\omega$ is worse, with both parametrizations, than the optimal-$\Theta$ results in Fig.~\ref{sw1}. The result obtained with both frequency and amplitude sampling is clearly better, especially at the high-frequency tail. With the fixed amplitudes, it is very costly in $\chi^2$ for even one unit of weight to migrate up to the thin-tail part of the spectrum, as in Fig.~\ref{nconv} for the smaller values of $N_\omega$. However, in this case the consequence is just the sharp drop in mean spectral weight above $\omega=2$ in Fig.~\ref{ndep}(a), with no artificial spike at higher frequency because of the very large amplitude $1/N_\omega$ when $N_\omega=8$. As $N_\omega$ increases, the sharp edge gradually vanishes, and for $N_\omega \approx 50$ and higher the tail is well reproduced. Since there is no minimum in $\langle \chi^2\rangle$, the optimization approach with $N_\omega$ as the parameter in Fig.~\ref{fig:optim} is not applicable in this case. The $L=500, N_\omega=64$ spectra in Fig.~\ref{ndep}(b) look very similar to the optimal ones in Fig.~\ref{sw2}, except for a small third peak at high frequency when only the frequencies are sampled. This peak is again a consequence of the rather large unit of spectral weight $1/N_\omega$, which prevents the density of $\delta$-functions from properly reproducing the tail. When the amplitudes are also sampled, the tail bump is not present. In this case, $N_\omega=64$ is in the region of the $\langle \chi^2\rangle$ minimum observed in Fig.~\ref{ndep}(d), but the results are arguably still worse than the (also far from satisfactory) results obtained with optimized $\Theta$ and large $N_\omega$ in Fig.~\ref{sw2}. Even the best $\langle \chi^2\rangle$ values in Fig.~\ref{ndep}(c) exceed the optimum discussed in Sec.~\ref{sec:thetacrit}.
Thus, with $\Theta=1$ the entropy dominates too much over the goodness of the fit (though the $\langle \chi^2\rangle$ values still represent reasonable fits). This problem can be traced back to the fact that the number of effective parameters, discussed above in Sec.~\ref{sec:thetacrit} and further in \ref{app:lowtheta}, is always much smaller than the minimum number of $\delta$-functions required to obtain a smooth averaged spectrum. With unrestricted sampling it is in practice necessary to optimize (lower) $\Theta$, while, as we will see further below (and as was also found in Ref.~\cite{sandvik16}), in some cases constrained sampling with $\Theta=1$ (or even $\Theta > 1$) is valid, with good $\langle \chi^2\rangle$ values even for large $N_\omega$. In a related investigation of the optimal density of frequencies in ``grid sampling'' \cite{ghanem20b}, a generic behavior of the goodness of the fit similar to Fig.~\ref{ndep}(c) was noted (and a flatter goodness-of-fit can also be observed in Fig.~1 of Ref.~\cite{sandvik16} before a logarithmic increase sets in at higher grid point density). It was argued \cite{ghanem20b} that the optimal density is in the flat region close to the minimum. In that case the sampling temperature was also not adjusted ($\Theta=1$) and reasonable $\langle \chi^2\rangle$ values were still obtained in the optimal regime. Though an optimal number of degrees of freedom at $\Theta=1$ can exist with some parametrizations, because of particular $\chi^2$ versus entropy competition, in our experience with the parametrizations used here it is better to use large $N_\omega$ and adjust $\Theta$. \section{Sampling entropy} \label{sec:entropy} In the case of sampled frequencies with constant equal amplitudes, the configurational entropy of a given spectral density profile (the $\delta$-function amplitudes collected in a histogram) can be easily calculated exactly, which we will do in Sec.~\ref{sec:entropy1}.
We also discuss the non-universality (dependence on the parametrization) of the functional form of the entropy. In Sec.~\ref{sec:entropy2}, we demonstrate that the extensive property of the entropy is consistent with our criterion Eq.~(\ref{eq:chi2}) for determining the sampling temperature. \subsection{Different entropy forms} \label{sec:entropy1} Consider the situation in Fig.~\ref{fig:spec}(b), where $N$ ``particles'' labeled $i=1,\ldots,N$ occupy positions $\omega_i$ in the continuum (where now, in this subsection, we suppress the $\omega$ subscript on $N$ for simplicity of the notation). It will not be necessary to provide any bounds on these positions, and in principle they can be both positive and negative though in the present applications all $\omega_i \ge 0$. For a particle configuration $(\omega_1,\ldots,\omega_N)$ and a histogram with bins $b=1,2,\ldots$ of width $\Delta$, let there be $n_b$ particles in bin $b$. The number of particle configurations corresponding to the same histogram is, from combinatorics, \begin{equation} N_C = \frac{N!}{\prod_b n_b!}. \end{equation} Given that each particle can move within its bin of width $\Delta$ without changing the bin assignment, the total configurational volume corresponding to a set of occupation numbers is $V_C=\Delta^N N_C$, and the equal-amplitude (EA) entropy is \begin{equation} E_{\rm EA}=\ln(V_C) = N \ln(\Delta)+\ln(N!) - \sum_b \ln(n_b!). \end{equation} Applying Stirling's formula in the form $\ln(n!)=n\ln(n)-n$ and using $\sum_b n_b = N$, we obtain \begin{equation} E_{\rm EA}= - \sum_b n_b \ln \left ( \frac{n_b}{N\Delta}\right ). \label{eeaderiv1} \end{equation} Stirling's formula is a good approximation when $n_b$ is large, but is also completely correct for $n_b=0$, which is reassuring because bins beyond the tail of the spectrum will have $n_b=0$. For the regions where there is finite spectral weight, $n_b \propto N$.
Thus, the effects of the approximation will diminish for large $N$ and the above entropy becomes exact when $N \to \infty$ for small $\Delta$. The mean occupation numbers will then also be proportional to $\Delta$, and we can define amplitudes $A_b$ such that $n_b=N\Delta A_b$. Eq.~(\ref{eeaderiv1}) then becomes \begin{equation} E_{\rm EA}= - N\Delta \sum_b A_b \ln (A_b). \end{equation} Converting to an integral, $\Delta A_b \to d\omega A(\omega)$, we have \begin{equation} E_{\rm EA}= - N \int d\omega A(\omega) \ln [A(\omega)]. \label{eea} \end{equation} Apart from the factor $N$, the entropy then looks like the standard Shannon information entropy. Note that $\int d\omega A(\omega)=1$ follows from the definitions above. The entropy clearly favors configurations with smooth distribution $A(\omega)$. Unlike the way the entropy is defined in the ME method, Eq.~(\ref{esdef}), here there is no default model dividing $A(\omega)$ under the logarithm (though it can be incorporated as well if so desired \cite{beach04}). The above standard result for the entropy was also derived in the SAC context by Bergeron and Tremblay \cite{bergeron16}, and previously also by Beach \cite{beach04}. However, these works did not explicitly consider a specific SAC parametrization but analyzed a stochastic process that is in practice very similar to our fixed-amplitude parametrization. The number $N$ was not regarded as specifically related to the SAC sampling space, but as a device introduced for the purpose of the calculation that drops out when relating the entropy to the ME method. Apparently, the results were also regarded as generic and suggestive \cite{bergeron16} of a relationship between the SAC and the ME method (in a mean-field sense in Ref.~\cite{beach04}). The details of the stochastic process are in fact crucial, and different SAC parametrizations realize different stochastic processes with different functional forms of the entropy.
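The accuracy of the Stirling approximation leading from the exact expression $E_{\rm EA}=\ln(\Delta^N N!/\prod_b n_b!)$ to Eq.~(\ref{eeaderiv1}) is easy to check numerically; a minimal sketch (the occupation numbers used in the check are arbitrary illustrative values):

```python
import math

def exact_entropy(n_b, delta):
    """E_EA = ln(Delta^N * N! / prod_b n_b!) for occupations n_b,
    evaluated with lgamma to avoid overflow: ln(n!) = lgamma(n+1)."""
    N = sum(n_b)
    s = N * math.log(delta) + math.lgamma(N + 1)
    for n in n_b:
        s -= math.lgamma(n + 1)
    return s

def stirling_entropy(n_b, delta):
    """Eq. (eeaderiv1): -sum_b n_b ln(n_b / (N*Delta)),
    with empty bins (n_b = 0) contributing zero."""
    N = sum(n_b)
    return -sum(n * math.log(n / (N * delta)) for n in n_b if n > 0)
```

With occupations of a few hundred particles per bin the two expressions already agree to about one percent, and the relative deviation shrinks as the occupations grow at fixed spectral profile, consistent with the $N\to\infty$ statement above.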
Ghanem and Koch \cite{ghanem20a} recently calculated the entropy in the case of a fixed grid (including a default model), where a more sophisticated functional-integral method was required. Interestingly, the GK entropy is formally equivalent to the Shannon entropy of the default model with respect to the sampled spectrum. This difference from the conventional Shannon entropy with respect to the default model might seem insignificant, but is actually quite dramatic. With a flat default, the GK entropy can be written as \begin{equation} E_{\rm GK}= N \int d\omega \ln [A(\omega)], \label{egk} \end{equation} up to a factor which would only affect (implicitly) the value of $\Theta$ in our approach (fixing $\Theta$ based on $\langle\chi^2\rangle$). This entropic form is quite different from Eq.~(\ref{eea}), and it was suggested that it imposes higher entropic penalties on deviations from the default model \cite{ghanem20a}. This statement seems to run counter to our results in Fig.~\ref{sw1}, where the grid sampling produced the sharpest peaks (i.e., largest deviations from the flat effective default). As pointed out already, overall scale factors are absorbed in our SAC formulation by the sampling temperature $\Theta$, and what matters for determining the shape of the spectrum $A$ is the functional-integral form of the entropy $E(A)$ irrespective of overall factors. Without more detailed analysis, it is difficult to predict what effects a given entropy will have in combination with the $\chi^2$ weighting when sampling the spectrum. We next consider the most complicated of the three basic SAC parametrizations, Fig.~\ref{fig:spec}(c), where both frequencies and amplitudes are sampled. This is also the parametrization first considered by Beach within a mean-field approximation \cite{beach04} of the entire SAC problem including the $\chi^2$ weighting. He assumed the applicability of the conventional Shannon information entropy, i.e., without the factor $N$ in Eq.~(\ref{eea}).
We have not managed to compute the entropy exactly in this case but have constructed what should be a good approximation. We take the approach used in \ref{app:fluct}, where, in a simplified example, we compute the variance of the spectral weight fluctuations by summing the independent contributions from amplitude and frequency fluctuations. Similarly, we argue that the entropy should be the sum of the entropies from frequency-only updates, Eq.~(\ref{eea}), and amplitude-only updates, Eq.~(\ref{egk}), at least to a first approximation. In this case, we initially have to keep the default model $D(\omega)$ in the sum of the two entropies, because it appears in different ways in the two forms. Thus, we propose the {\it mixed entropy} $E_{\rm MX}$ defined as \begin{equation} E_{\rm MX} = - N \int d\omega [A(\omega)-D(\omega)] \ln \left ( \frac{A(\omega)}{D(\omega)} \right ). \label{emxw} \end{equation} Here it should be noted that both terms are negative semi-definite and, thus, the maximum mixed entropy is also zero. Taking a flat default model (within the relevant range of $\omega$) and calling its constant value $\gamma$, we obtain a simpler form of the mixed entropy that we will consider here: \begin{equation} E_{\rm MX} = - N \int d\omega [A(\omega)-\gamma] \ln[A(\omega)]. \label{emx} \end{equation} It is interesting to note that the integrand here is closer to that for the conventional entropy when $A(\omega)>\gamma$, while being closer to the GK form when $A(\omega) < \gamma$. We conjecture that the mixed entropy for some value of $\gamma$ corresponds closely to the de facto entropic weight when sampling both the frequencies $\omega_i$ and the associated amplitudes $A_i$ with the SAC method. In SAC, we do not use any explicit default model, and, as we have mentioned before, in some sense the default model formally stretches out to $\omega=\infty$ when the continuous frequency space is used.
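Returning to Eq.~(\ref{emx}), it can be verified directly (a simple algebraic decomposition, stated here for clarity) that the flat-default form separates into the two constituent entropies, \begin{equation} E_{\rm MX} = - N \int d\omega \, A(\omega) \ln[A(\omega)] + \gamma N \int d\omega \ln[A(\omega)] = E_{\rm EA} + \gamma E_{\rm GK}, \end{equation} i.e., the mixed entropy is the sum of the frequency-only entropy Eq.~(\ref{eea}) and the amplitude-only entropy Eq.~(\ref{egk}), the latter weighted by the default amplitude $\gamma$.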
However, the true spectrum covers only a limited range of frequencies and $\gamma$ may then be close to the inverse of an effective width of the spectrum. The ambiguity in defining such a width implies that $E_{\rm MX}$, for any value of $\gamma$, cannot be the exact expression for the entropy. There may be some effective non-flat default model $D(\omega)$ that is generated by the ``confining potential'' imposed implicitly by the $\bar G(\tau)$ data in the SAC process. In Sec.~\ref{sec:maxent} we will demonstrate exact mappings between the SAC method with different parametrizations and corresponding ME methods with different forms of the entropy used in the prior probability $P(S)$ in Eq.~(\ref{meprior}). With these mappings, we can test the three forms of the entropies discussed above. We will find that ME calculations with $E_{\rm EA}$ in Eq.~(\ref{eea}) and $E_{\rm GK}$ in Eq.~(\ref{egk}) match essentially perfectly the results of SAC with equal-amplitude $\delta$-functions and fixed grid, respectively, when the goodness-of-fit values of the two methods match. Moreover, SAC results obtained with both amplitude and frequency updates included are well described by the mixed entropy, $E_{\rm MX}$ in Eq.~(\ref{emx}), when $\gamma$ is a number closely corresponding to the inverse width of the spectrum. The role of the parametrization was discussed qualitatively in our previous works with collaborators \cite{qin17,shao17}, where it was pointed out that SAC with $\delta$-functions in the frequency continuum shows less ``entropic bias'' than the fixed-grid sampling. Ghanem and Koch \cite{ghanem20b} likewise found advantages when ``relaxing the grid'' and sampling frequencies. Thanks to the now-available explicit results for the entropy in different parametrizations, especially the non-Shannon $E_{\rm GK}$ and $E_{\rm MX}$ forms, we have a more quantitative understanding of the different parametrizations.
In particular, with the mixed entropy we can understand why spectra obtained with both amplitude and frequency updates, e.g., in Figs.~\ref{sw1} and \ref{sw2}, fall between those when only amplitudes or only frequencies are sampled. More detailed insights will be presented in Sec.~\ref{sec:maxent}. The increase in entropy with $N$ was noted first in the fixed-grid parametrization in Ref.~\cite{sandvik16}, and Eqs.~(\ref{eea}) and (\ref{egk}) demonstrate the extensive property explicitly. Any parametrization with $N$ sampled parameters (here with $N=N_\omega$ when only frequencies are sampled and $N=2N_\omega$ when the amplitudes are also sampled) should have an extensive entropy for large $N$. The extensive property is of course expected if we think of the $\delta$-functions as a set of particles. However, while we here treat the analytic continuation as a statistical mechanics problem, the energy in the form of $\chi^2/2$ does not have the normal extensive property---for example, the minimum value $\chi^2_{\rm min}$ changes little once $N_\omega$ exceeds some value, instead approaching a non-zero constant. Thus, when using the Boltzmann-like probability distribution Eq.~(\ref{psg}) with $\Theta=1$, the sampling will become increasingly entropy-dominated when $N_\omega$ grows, causing a gradual smoothing of the average spectrum whose ``internal energy'' $\langle \chi^2\rangle$ diverges with $N_\omega$. This deterioration of the goodness of the fit can be observed in the $\Theta=1$ tests in Fig.~\ref{ndep}. To counteract the entropic catastrophe and keep the average spectrum $N_\omega$-independent for large $N_\omega$, the sampling temperature will have to be reduced as $1/N_\omega$ (which we will explicitly confirm below), or constraints have to be introduced that sufficiently suppress the entropy, as was done in Ref.~\cite{sandvik16}. For any parametrization, the entropy eventually, for large $N_\omega$, has to be suppressed by lowering $\Theta$.
With $\Theta =\theta/N_\omega$, the SAC sampling problem looks very similar to a statistical mechanics problem at temperature $\theta$, though with some important subtle differences that are spelled out in detail in \ref{app:statmech} and which impact the mapping between SAC and the ME method (Sec.~\ref{sec:maxent}). In the grid case, an imposed density of points acts as a default model \cite{ghanem20a}. In the continuum case, the a priori effective default model in the absence of imposed bounds is a spectrum spread out to infinity, and if bounds are imposed a flat default within those bounds is realized. In principle, other default models can be implemented in the continuum by adding a potential on the ``particles'', which was done by Beach \cite{beach04} and later also by Ghanem and Koch \cite{ghanem20b} in their ``grid sampling'' method. We will not consider the conventional default model approach here and instead, starting from Sec.~\ref{sec:deltapeak}, we will introduce various hard constraints that modify the entropic pressures affecting the average spectrum. This approach can also in some sense be regarded as the introduction of generalized, optimized default models, though we will not use that language. We will make connections to default models when discussing future prospects in Sec.~\ref{sec:conc3}. \subsection{Consistency with optimal $\Theta$} \label{sec:entropy2} In practice, use of the criterion Eq.~(\ref{eq:chi2}) for the sampling temperature guarantees a good fit even though $N_\omega$ is not referenced directly. We next show that this criterion indeed represents the optimal balance between $\chi^2$ and entropy, in the sense that it delivers $\Theta \propto 1/N_\omega$, so that the factor $N_\omega$ in the entropy is in effect canceled. We use a synthetic spectral function with two separated Gaussian peaks, as shown with the red curve in the inset in the lower-right corner of Fig.~\ref{nw-th}.
We have tested prefactors $a=1$, $2$, and $4$ in Eq.~(\ref{eq:chi2}), with the number of $\delta$-functions $N_\omega$ ranging from $1000$ to $4000$. With a slow enough annealing process, the sampling temperature is determined for each given $N_\omega$, as shown in the upper-left inset of Fig.~\ref{nw-th}. We then calculated the slope $P$ of $\ln\Theta$ versus $\ln N_\omega$, on the basis of pairs of points obtained with $N_\omega=N$ and $N_\omega=2N$ $\delta$-functions. Results for the slope versus $1/N_\omega$ are shown in the main part of Fig.~\ref{nw-th}, where the error bars were obtained using bootstrapping of the $\langle\chi^2\rangle$ data. The values deviate by at most $10\%$ from the expected slope $P=-1$, corresponding to the scaling $\Theta \propto 1/N_\omega$ for which the entropy and $\chi^2$ should be balanced according to the discussion in Sec.~\ref{sec:entropy1}. The systematic corrections to the exponent $P=-1$ are close to linear in $1/N_\omega$, as shown with the red lines in Fig.~\ref{nw-th}, and the agreement can be further improved by including small second-order corrections (not shown). Thus, the optimal $\Theta$ versus $N_\omega$ should have a power-series form $\Theta = b_1/N_\omega + b_2/N_\omega^2 + \ldots$. \begin{figure}[t] \centering \includegraphics[width=70mm]{fig12.pdf} \vskip-1mm \caption{Slope of the $N_\omega$ dependence of the sampling temperature $\Theta$ determined using Eq.~(\ref{eq:chi2}) with $a=1,2$, and $4$. A synthetic spectrum consisting of two Gaussians, shown as the red curve in the lower-right inset, was used to generate imaginary-time correlations, to which noise at level $\sigma=10^{-5}$ was added. The slope $P$, shown vs $1/N_\omega$ in the main graph, is defined on the basis of results for the number of frequencies being $N_\omega=N$ and $N_\omega=2N$ with the results for $\Theta(N_\omega)$ shown in the upper-left inset.
The red lines show consistency with $P=-1$ for $N_\omega \to \infty$, thus confirming the predicted impact of the expected $\propto N_\omega$ scaling of the entropy. The spectrum obtained with $N_\omega=4000$, $a=2$ is shown in the inset as well (black curve). The spectra obtained with $a=1$ and $a=4$ are almost indistinguishable from the $a=2$ result.} \vskip-1mm \label{nw-th} \end{figure} This demonstration of course only confirms that the optimal $\Theta$ should scale as $1/N_\omega$, and does not tell us what the factor $a$ should be in the fixing criterion, Eq.~(\ref{eq:chi2}). However, the properties of the $\chi^2$ distribution dictate that $a$ should be of order $1$. For reasonably good QMC data, as we have demonstrated, e.g., in Fig.~\ref{sw1}, the final result does not depend significantly on $a$ as long as the value is reasonable; in practice we often use $a=0.5$ as mentioned. The insensitivity of the average spectrum to $a$ is also exemplified in Fig.~\ref{nw-th}, where in the lower-right inset we graph the result for $N_\omega=4000$ obtained with $a=2$. This spectrum, as well as those for $a=1$ and $a=4$ (not shown), falls almost on top of the synthetic spectrum. The dependence of the average spectrum on $\Theta$ (and hence on $a$) can be expected to be more significant if the imaginary-time data are barely good enough for resolving some specific feature of the spectrum. \section{Quasi-particle peaks} \label{sec:deltapeak} In this section we begin a detailed discussion of how to adapt the SAC method to spectral functions with sharp features. The prototypical first example is a $\delta$-function edge: a quasi-particle peak that often appears at the lower edge of the spectrum (in most cases with some broadening). In Sec.~\ref{sec:delta1} we review the procedures that we developed in previous work \cite{shao17} to optimize the amplitude of such a $\delta$-function.
In Sec.~\ref{sec:delta2} we present a systematic study of the statistical optimization criterion, focusing on the convergence properties as the data quality is improved. We also explain how a finite width of a peak can be detected if the parametrization with a $\delta$-peak is imposed when this form is strictly not appropriate. In Sec.~\ref{sec:deltanp} we propose a new high-resolution method for quasi-particle peaks of finite width. In Secs.~\ref{sec:delta1} and \ref{sec:delta2} we use exclusively the continuum parametrization in Fig.~\ref{fig:spec}(d), with equal-amplitude ``microscopic'' $\delta$-functions to model the continuum and a larger ``macroscopic'' one at the lower edge: the quasi-particle peak with weight $A_0$. In Sec.~\ref{sec:deltanp} we will generalize this parametrization by distributing the quasi-particle weight $A_0$ over a relatively small number $N_p \ll N_\omega$ of $\delta$-functions (where $N_\omega$ is the total number of $\delta$-functions), with some feature of the peak (e.g., its lower or upper edge or its mean frequency) serving as a lower bound for the continuum. In principle, the microscopic $\delta$-functions could also have fluctuating amplitudes. However, in this case we have found that it is in general better to use fixed equal amplitudes, which is the parametrization that produces the least spectral details in the case of unrestricted sampling (e.g., in Figs.~\ref{sw1} and \ref{syntcomp}). We saw this already in Fig.~\ref{w0fix}, in the context of the lower spectral bound, where the equal amplitudes gave a better spectrum when the correct bound was imposed. It is of course not clear how generic these observations are, and likely amplitude updates could also be useful if there is significant structure (sharp peaks) within the continuum. Here we will consider rather smooth continua.
\begin{figure}[t] \centering \includegraphics[width=70mm]{fig13.pdf} \vskip-1mm \caption{Results of unrestricted SAC with the frequency-continuum representation, Fig.~\ref{fig:spec}(b), applied to a synthetic spectrum (shown in black) with a $\delta$-function containing $25\%$ of the total weight at $\omega=2$ and a broad Gaussian peak centered at $\omega=6$. The other curves show SAC results obtained with noise added to the synthetic correlation function $G(\tau)$, with the levels of noise $\sigma$ indicated in the legends. At $\sigma=10^{-5}$, the Gaussian maximum is reproduced almost perfectly and partially covers this part of the synthetic spectrum.} \vskip-1mm \label{delta-free} \end{figure} \subsection{Delta-function and continuum} \label{sec:delta1} We first illustrate the limitations of the unrestricted SAC by tests with a synthetic spectral function, graphed in Fig.~\ref{delta-free}, where a dominant $\delta$-peak containing $25\%$ of the total spectral weight is separated from a broad Gaussian continuum. After calculating $G(\tau)$ on a linear grid with $\Delta_\tau=0.05$ from the synthetic spectrum, we added correlated noise as described in Sec.~\ref{sec:syntdata}, using three different noise levels: $\sigma=1\times10^{-4}$, $3\times10^{-5}$, and $1\times10^{-5}$. With the cut-off in $\tau$ set at a relative error of $20\%$, the maximum $\tau$ values were $2.8$, $3.4$, and $4.0$, respectively. We sampled $2000$ equal-weight $\delta$-functions, with $\Theta$ adjusted according to the criterion stated in Eq.~(\ref{eq:chi2}). As seen in Fig.~\ref{delta-free}, when the data quality is good enough, in this case $\sigma=10^{-5}$, the continuum is resolved almost perfectly even though the sharp $\delta$-peak can never be resolved with unrestricted sampling. The lower peak does narrow systematically (likely logarithmically) with improving data quality.
When the data quality is insufficient, not only is the tip of the lower peak shifted down from the location of the $\delta$-function (to compensate for the overall more substantial broadening on the high-$\omega$ side of the peak), but also the shape of the continuum is excessively broadened. By also including amplitude updates, a narrower peak can be produced, and the resolution also improves if the peak and continuum are further separated; an example of a very narrow isolated peak produced by unrestricted sampling will be presented in Sec.~\ref{sec:hladd2}: the ``triplon'' peak of the 2-leg Heisenberg ladder model. The good resolution of the continuum in Fig.~\ref{delta-free} for $\sigma=10^{-5}$ is aided by the gap between the two spectral features, which of course is frequently not present in physical spectral functions. As an example, the dynamic structure factor of the square-lattice Heisenberg antiferromagnet is known to have a dominant $\delta$-function representing the single-magnon excitation (likely with some extremely small broadening even at $T=0$ \cite{chernyshev09}) followed by a continuum. Unlike the spectrum in Fig.~\ref{delta-free}, the continuum apparently extends all the way down to the quasi-particle peak, with no gap or significant reduction of spectral weight between the two features \cite{shao17}. In this case, unrestricted SAC cannot easily (unless the underlying imaginary-time data are extremely good) resolve even the continuum part of the spectrum, because the entropic broadening at the edge is compensated by distortions of the connected continuum (similar to the results in Fig.~\ref{sw2}). Examples of such distortions of synthetic spectral functions will be given in Sec.~\ref{sec:delta2}. First we discuss our implementation of the $\delta$-edge spectrum and present an illustrative result for the 2D Heisenberg model.
To resolve a $\delta$-edge followed by an arbitrary continuum, we parameterize the spectrum as in Fig.~\ref{fig:spec}(d), with a single $\delta$-function with amplitude $A_0$ and location $\omega_0$, followed by $N_\omega$ microscopic $\delta$-functions with equal amplitude $A_i=(1-A_0)/N_\omega$, with the constraint $\omega_i > \omega_0$ for $i>0$. Typically we use $N_\omega=500$--$1000$ here, and the results do not change significantly for larger $N_\omega$. If the amplitude $A_0$ is sampled, its mean will not be close to the correct value, because its effect on $\chi^2$ is small. Even though, as we will demonstrate explicitly below, increasing $A_0$ will reduce $\langle \chi^2\rangle$, the entropy will drive $A_0$ to small values and push $\omega_0$ below the true edge. Therefore, $A_0$ has to be fixed in the sampling process, and its value must be optimized in some way. The way to optimize a generic constraint, regulated by a parameter $p$, is illustrated in Fig.~\ref{fig:optim}, and we already gave a simple (though imperfect) example of optimization of the lower spectral bound in Fig.~\ref{w0fix}. We applied the optimum-$\langle \chi^2\rangle$ approach to the $\delta$-edge, with the optimized parameter $p=A_0$, in previous work \cite{shao17}. We review the details of the method in this case next, in preparation for the further insights on the optimization protocol that we present in Sec.~\ref{sec:delta2}. Before the special amplitude $A_0$ is introduced, we first perform a standard unrestricted SAC simulated annealing procedure (formally we set $A_0=0$), as explained in Sec.~\ref{sec:plainsampling}, resulting in a minimum goodness-of-fit $\chi^2_{\rm min}$. For the next step, we fix the sampling temperature $\Theta$ not according to the statistical criterion in Eq.~(\ref{eq:chi2}), but at a somewhat higher value.
The elevated sampling temperature is preferred in this context because the optimization procedure involves scanning over $A_0$ to find a minimum in $\langle \chi^2\rangle$, which will be more pronounced if $\Theta$ is higher. The detailed effects of the sampling temperature will be discussed in Sec.~\ref{sec:delta2}, and here we simply state that we use $\langle \chi^2(\Theta) \rangle \approx \chi^2_{\rm min} + aN_\tau$ instead of Eq.~(\ref{eq:chi2}). Then unrestricted sampling still produces a reasonably good fit, $\langle \chi^2\rangle/N_\tau \lesssim 2$, if the prefactor $a=1$ is used, though strictly speaking the fit is suboptimal. When $A_0$ is turned on gradually, $\langle \chi^2\rangle$ decreases and the minimum value (when $A_0$ is close to its optimum) often satisfies the conventional $\Theta$ criterion. With $\Theta$ being fixed, a scan of $A_0$ is carried out on a grid (running sequentially or in parallel), with typically tens of points, from $A_0=0$ up to $A_0=1$ (or some smaller window can be chosen if there is already some knowledge on the magnitude of $A_0$). For a given $A_0\neq0$, the sampling now includes two steps of a simple modification of the method presented in Sec.~\ref{sec:contsamp}: (i) the standard updates of the $N_\omega$ equal-amplitude $\delta$-functions with their lower bound being $\omega_0$ (i.e., any move taking $\omega_i$ with $i>0$ below $\omega_0$ is rejected); and (ii) updating the location $\omega_0$ of the edge $\delta$-function (where any move taking it above any of the other $\delta$-functions is rejected). The acceptance rates for $\omega_0$ and $\omega_{i>0}$ are tracked separately, and the sizes of the moves (frequency windows) are adapted to give rates close to $0.5$. With the fixed $A_0$ value, the entropy-driven suppression of its value is avoided. Moreover, when $A_0>0$ entropy is removed from the spectrum. Simply put, the larger $A_0$, the less the spectrum can fluctuate overall.
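The two update steps can be sketched in a few lines of Python (our own illustration with assumed names, not the actual implementation; in practice $G(\tau)$, and hence $\chi^2$, is updated incrementally rather than recomputed for every move):

```python
import math, random

def sweep(omegas, omega0, chi2, theta, win_cont, win_edge):
    """One Metropolis sweep for the delta-edge parametrization.
    omegas: frequencies of the equal-amplitude microscopic delta-functions
    omega0: location of the macroscopic edge delta-function
    chi2:   placeholder callable returning the goodness of fit"""
    c = chi2(omegas, omega0)
    acc = 0
    # (i) standard updates of the microscopic delta-functions, with the
    #     hard constraint omega_i >= omega0
    for i in range(len(omegas)):
        new = omegas[i] + win_cont * (2.0 * random.random() - 1.0)
        if new < omega0:
            continue                       # move below the edge: reject
        old, omegas[i] = omegas[i], new
        cn = chi2(omegas, omega0)
        # accept with the Boltzmann-like weight exp(-chi2 / 2 Theta)
        if random.random() < math.exp(min(0.0, (c - cn) / (2.0 * theta))):
            c, acc = cn, acc + 1
        else:
            omegas[i] = old
    # (ii) update of the edge location, rejected if it would move above
    #      any of the other delta-functions
    new0 = omega0 + win_edge * (2.0 * random.random() - 1.0)
    if new0 <= min(omegas):
        cn = chi2(omegas, new0)
        if random.random() < math.exp(min(0.0, (c - cn) / (2.0 * theta))):
            omega0 = new0
    return omegas, omega0, acc / len(omegas)
```

In a full implementation the windows `win_cont` and `win_edge` would be adjusted on the fly, using the returned acceptance rates, to keep both rates close to $0.5$.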
In particular, the edge frequency $\omega_0$ will move up as $A_0$ increases, thereby restricting the fluctuations of the continuum. As illustrated schematically in Fig.~\ref{fig:optim} for a general parameter $p$ with similar effect, this entropy suppression initially leads to a reduction of $\langle \chi^2\rangle$. However, when $p$ (here $A_0$) becomes too large, a spectrum representing a good fit to $\bar G(\tau)$ can no longer form and $\langle \chi^2\rangle$ then starts to grow sharply with $A_0$. Thus, there will be a minimum in $\langle \chi^2\rangle$ that represents the point where a further entropy reduction leads to a deteriorated fit, and it is natural to take $A_0$ at this point as the optimal value. Of course, this can be strictly true only in the limit of vanishing statistical errors in the imaginary-time data, but we invariably obtain good results also with realistic error levels. In a sense, this $\langle \chi^2\rangle$ minimization procedure could be referred to as a ``minimum entropy method'', as the entropy of the statistical-mechanics system, consisting of the large number of $\delta$-functions, is minimized under the condition of also optimizing the mean goodness-of-fit (which acts as the internal energy of the problem). Of course the entropy is still maximized by the sampling within a fixed constraint (here the value of $A_0$). Since $\omega_0$ is also sampled, the spectrum accumulated in a histogram may exhibit a slightly broadened peak instead of a $\delta$-function. Unless $A_0$ is very small, the peak is typically still sharp, however, often occupying only a single bin of the accumulated histogram. As shown in Ref.~\cite{shao17}, at the $\langle \chi^2\rangle$ minimum both the amplitude $A_0$ and the mean location $\langle \omega_0\rangle$ are typically very close to their correct values (for good enough data, as will be further discussed in Sec.~\ref{sec:delta2}).
The lack of fluctuations of the peak location will be further explained in Sec.~\ref{sec:deltanp}. After $A_0$ has been optimized, $\langle \chi^2\rangle$ is often statistically acceptable and the spectrum accumulated there can simply be used. However, we often still carry out a second annealing step (starting from the $\Theta$ value used in the scan) to find a new $\Theta$ according to the standard criterion in Eq.~(\ref{eq:chi2}). The differences in the results are typically minimal at this stage. As an example, we here consider the dynamic structure factor of the 2D Heisenberg model at wave-vector $q=(\pi/2,\pi/2)$, with $\bar G(\tau)$ computed with the SSE QMC method for a system with $32 \times 32$ spins at a temperature low enough to give ground state results in practice (see Ref.~\cite{shao17} for further technical details). In this case we used a quadratic $\tau$ grid, similar to Fig.~\ref{fig:gtau}, from $\tau=0.01$ to $\tau=3.61$. The error level was $\sigma \approx 10^{-5}$. To illustrate the stability of the optimization procedure, in the inset of Fig.~\ref{2dsw} we show results of several independent scans of $A_0$ in the neighborhood of the shallow minimum produced in this case. Because of fluctuations in the individual points, the minimum is best located by carrying out a polynomial fit (here of third order) to several data points and extracting the minimum from the fitted function. The minima extracted in the four independent runs in Fig.~\ref{2dsw} are all very close to each other at $A_0 \approx 0.70$, and of course the procedure can be further improved by sampling longer and using a finer grid of $A_0$ points. Here we carried out short runs on a coarse grid in order to better illustrate the fluctuations and interpolation method. 
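The interpolation step is straightforward; the following self-contained sketch (ours, with a made-up toy scan rather than the actual data of Fig.~\ref{2dsw}) locates the minimum of a cubic fit within the scanned window:

```python
import numpy as np

# Illustrative: fit a cubic polynomial to a coarse, noisy <chi^2> scan
# over A0 and extract the location of the minimum of the fitted function.
def fit_minimum(a0_values, chi2_values, deg=3):
    p = np.poly1d(np.polyfit(a0_values, chi2_values, deg))
    # evaluate the fit on a fine grid, take the minimum inside the scan range
    grid = np.linspace(min(a0_values), max(a0_values), 10001)
    return grid[np.argmin(p(grid))]

a0 = np.linspace(0.55, 0.85, 13)
rng = np.random.default_rng(1)
# toy scan: shallow minimum near A0 = 0.70 plus small statistical noise
chi2 = 1.0 + 25.0 * (a0 - 0.70) ** 2 + rng.normal(0.0, 0.005, a0.size)
print(fit_minimum(a0, chi2))   # close to 0.70
```

With realistic fluctuations, repeating the fit on independent scans (as in the inset of Fig.~\ref{2dsw}) indicates the stability of the extracted minimum.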
\begin{figure}[t] \centering \includegraphics[width=7cm]{fig14.pdf} \caption{Dynamic structure factor of the 2D Heisenberg model with $32 \times 32$ spins at $q=(\pi/2,\pi/2)$, determined using SAC with a leading $\delta$-peak. The inset shows how $\langle\chi^2\rangle$ varies in four different scans over $A_0$, with cubic polynomial fits used to extract the optimal value at the $\langle \chi^2\rangle$ minimum; in all cases here $A_0 \approx 0.70$ as indicated by the solid colored lines matching the fitted curves. For clarity we have not plotted error bars on the data points; they are of order $0.01$ below $A_0=0.7$ and smaller for larger $A_0$. The dashed lines indicate other $A_0$ values at which the spectra in the main graph were sampled, shown with the same color coding.} \label{2dsw} \end{figure} A spectral function obtained with $A_0$ close to its optimal value is shown in red in Fig.~\ref{2dsw}; the results at the four different estimated minima are all very close to this curve and are not shown separately. For reference, we also show results of sampling with $A_0$ away from its estimated optimal value. As seen in the inset of the figure, for $A_0$ larger than the optimum the goodness-of-fit deteriorates rapidly and the statistical fluctuations are small (which follows from the fact that $A_0$ over-constrains the spectrum here). Therefore, this method is more likely to underestimate $A_0$ (as we will also discuss in more detail below in Sec.~\ref{sec:delta2}), and we show examples of suboptimal spectra in this region. As expected, the spectral weight close to the $\delta$-edge increases as $A_0$ is reduced, but other parts of the continuum are little affected. \begin{figure*}[t] \includegraphics[width=150mm]{fig15.pdf} \caption{Demonstration of the sensitivity of the optimal leading $\delta$-function amplitude to the error level $\sigma$ (with two cases, $\sigma=10^{-4}$ and $\sigma=10^{-5}$ as indicated) and the sampling temperature $\Theta$.
The results also demonstrate the characteristic signals of a broadened edge peak. The synthetic spectral functions with peaks of different widths, labeled $S_1,\ldots,S_4$ in order of increasing width (all with $80\%$ of the weight in the edge peak), are shown in the rightmost panel. The other panels show $\langle \chi^2\rangle/N_\tau$ versus $A_0$ in SAC runs with a $\delta$-function edge, sampled at different $\Theta$ values corresponding to the initial unrestricted $\langle \chi^2\rangle$ values in the way stated in the legends to the right. For $S_4$ with $\sigma=10^{-5}$ the $\langle \chi^2\rangle$ minimum is outside the figure, at $A_0=0$ for all values of $a$, and this is the case also for $S_3$ with $a=0.5$ and $1$, thus indicating that the data quality is good enough (and $a$ is low enough) to determine that the true spectra do not host a $\delta$-function edge.} \label{broadened-1} \end{figure*} We can in principle also estimate error bars on $A_0$ by a bootstrapping approach, with optimized $A_0$ based on several random samples of the data set (instead of repeating the optimization steps with the same data sets, as we did for Fig.~\ref{2dsw}). As will be discussed in Sec.~\ref{sec:delta2}, the (rather small) dependence of the result on the value of $\Theta$ in the optimization step should also be considered for such a procedure to produce correct error bars. It should also be noted, however, that statistical errors are not very meaningful, as there are always some systematic errors from various entropic pressures as well. For example, as we will discuss further below in Sec.~\ref{sec:delta2}, $A_0 > 0$ extracted from the $\langle \chi^2\rangle$ minimum is typically somewhat below the correct value. \subsection{Detecting a finite peak width} \label{sec:delta2} Spectral functions with $\delta$-peaks, or extremely narrow peaks, at the lower edge are very common, e.g., spin-wave excitations in magnetically ordered states as in the example above.
However, these peaks become broadened due to finite-temperature effects, and in most cases there would be at least some broadening also at $T=0$, due to the quasi-particle not being an exact eigenstate of the Hamiltonian. Nevertheless, the spectral information carried by the quasi-particle $\delta$-peak is often the most prominent and important feature of the spectrum. In principle, the method described above for optimizing a $\delta$-edge can be extended to optimizing its width as well---there should be a minimum in $\langle \chi^2\rangle$ in the 2D space of amplitude and width of the peak. We will explore such an approach in Sec.~\ref{sec:deltanp}. Here we first show how a finite peak width can be detected within the framework of the $\delta$-edge parametrization. We will show this while also providing further insights into the existence of the $\langle \chi^2\rangle$ minimum: how it evolves with the sampling temperature $\Theta$ and the data quality. As mentioned in the previous section, to see a pronounced minimum value of $\langle \chi^2\rangle$ in the scan over $A_0$, $\Theta$ should be chosen slightly above the value at which the free sampling ($A_0=0$) satisfies the criterion in Eq.~(\ref{eq:chi2}). At the minimum, $\langle \chi^2\rangle$ may then still satisfy Eq.~(\ref{eq:chi2}) with $a\lesssim 1$, but this cannot be guaranteed. It is therefore useful to carry out the optimization process at gradually lower values of $\Theta$ and monitor the evolution of $A_0$ at the $\langle \chi^2\rangle$ minimum. In general, one should expect lower $\Theta$ to produce better results, to the extent that a minimum can be clearly identified (which of course is easier with better data quality). Here we will demonstrate that the scanning procedure can also signal the inapplicability of the $\delta$-edge when, at a given error level of the $\bar G(\tau)$ data, there is enough broadening of the peak for our statistical criteria to signal deviations from the $\delta$-function.
We carry out $A_0$ scans with four different imaginary-time data sets, obtained from synthetic spectra $S_1$, $S_2$, $S_3$, and $S_4$, all consisting of a dominant Gaussian centered at $\omega=2$, containing $80\%$ of the spectral weight, and a second broader Gaussian centered at $\omega=6$ and truncated at $\omega=2$ (see the rightmost panel of Fig.~\ref{broadened-1}). The dominant Gaussian has width (standard deviation) $0.01$, $0.1$, $0.15$, and $0.2$, in $S_1,\ldots,S_4$, respectively. For all these cases we use the same SAC parametrization with a $\delta$-edge, which is strictly not correct since all the spectra have a finite-width peak. We set the inverse temperature to $\beta=32$ when converting the spectral functions to imaginary-time functions $G(\tau)$ according to Eq.~(\ref{contrel1}), with $\Delta_\tau=0.1$ and noise added at levels $\sigma=10^{-4}$ and $10^{-5}$. The maximum $\tau$ values were $\tau_{\rm max}=3.0$ and $4.2$, respectively. \begin{figure*}[t] \centering \includegraphics[width=100mm]{fig16.pdf} \caption{The synthetic spectral functions $S_1,\ldots,S_4$, used for the tests in Fig.~\ref{broadened-1}, are shown as black curves in (a)--(d). These spectra are compared to SAC results at the corresponding optimal $A_0$ values at error level $\sigma=10^{-5}$ (red curves). In the cases $S_3$ and $S_4$, where the optimal weight is $A_0=0$ for $a=0.5$ in Fig.~\ref{broadened-1}, only the results of the equivalent unrestricted (free) sampling are shown (blue curves), while in the other cases both free-sampling and optimal-$A_0$ (determined at $a=0.5$) results are shown.} \label{broadened-1-sw} \end{figure*} We carried out systematic scans over $A_0$ at three or four different sampling temperatures, fixing $\Theta$ such that $\langle \chi^2\rangle = \chi^2_{\rm min} + aN_\tau$ at $A_0=0$ for different values of the factor $a$. 
Since the location of the edge-$\delta$ is not fixed in the SAC procedure, there is some broadening of the peak in the mean sampled density accumulated in the histogram when $A_0 > 0$. However, this peak width is very small when the weight of the $\delta$-function is as large as it is in this example. The question we want to address here is how the finite width of the leading peak of the synthetic spectrum is reflected in the functional form of $\langle \chi^2\rangle$ versus $A_0$, and how this form changes when the sampling temperature and the data quality (noise level) are varied. The test results are collected in Fig.~\ref{broadened-1}, where the top and bottom rows of panels correspond to the two error levels and the four panels in each row show $\langle \chi^2\rangle/N_\tau$ versus $A_0$ for the spectra $S_1,\ldots,S_4$. The different data sets correspond to the factors $a=0.5,1,2,3$ used when fixing $\Theta$ so that $\langle \chi^2\rangle = \chi^2_{\rm min} + aN_\tau$ at $A_0=0$. It is clear that sampling at a higher value of $\Theta$ (larger $a$) has a similar effect as a higher noise level, since in either case the function $G(\tau)$ corresponding to the sampled spectra is pushed further from its true value. This intuitive understanding is confirmed by Fig.~\ref{broadened-1}, where for $\sigma=10^{-4}$ we see clearly that increasing $a$ leads to a sharper $\langle \chi^2\rangle$ minimum, but if $a$ is large the minimum value of $\langle \chi^2\rangle$ is too high for a statistically sound fit. Moreover, the value of $A_0$ at the minimum is consistently too small ($<0.8$) but shifts toward $0.8$ as $a$ is decreased. The $A_0$ value also moves closer to $0.8$ when the error level is decreased from $\sigma=10^{-4}$ to $10^{-5}$ in the case of $S_1$ (the spectrum with the narrowest peak). If the true spectrum indeed has a $\delta$-edge or a very narrow peak, a good fit cannot be obtained when $A_0$ is larger than its correct value $A_{0,{\rm true}}$. 
Too small a value of $A_0$ can still in principle be compensated for by the distribution of the other sampled $\delta$-functions, but entropic broadening leads to a suboptimal fit (though not to the same extent as does severe overconstraining). Thus, for small $\sigma$ one can expect a minimum with a typically sharp increase for $A_0 > A_{0,{\rm true}}$ and a less dramatic increase for $A_0 < A_{0,{\rm true}}$. As $a$ is reduced, the feature for $A_0 < A_{0,{\rm true}}$ should become more shallow, as the starting $A_0$ value of $\langle \chi^2\rangle$ is closer to the minimum value $\chi^2_{\rm min}$, while for $A_0 > A_{0,{\rm true}}$ an even sharper increase should be expected because of the higher sensitivity to excessive weight in the $\delta$-function. When $\sigma$ is decreased, both sides of the minimum should become sharper, though the variations on the $A_0 < A_{0,{\rm true}}$ side are still limited by the fixing of $\Theta$ using $\langle \chi^2\rangle$ at $A_0=0$ (or some other value of $A_0$ well below the optimal value). While the above behavior is expected strictly for a $\delta$-edge, clearly a very narrow peak will result in the same kind of features, as observed at $\sigma=10^{-5}$ in Fig.~\ref{broadened-1}, unless the error level $\sigma$ is extremely low. \begin{figure*}[t] \centering \includegraphics[width=120mm]{fig17.pdf} \caption{Tests similar to those in Fig.~\ref{broadened-1} but for spectral functions (shown in the rightmost panel) where the continuum is connected to the dominant peak. 
The other panels show $\langle \chi^2\rangle$ versus $A_0$ in scans with different values of the factor $a$ used in fixing $\Theta$ and two different error levels as in Fig.~\ref{broadened-1}.} \label{broadened-2} \end{figure*} Turning now to cases where deviations from the $\delta$-edge shape can be detected, for $S_4$ (the broadest peak), the optimal value of $A_0$ is $0$ (outside the graph boundary) for all cases of $a$ in Fig.~\ref{broadened-1} when the error level is $\sigma=10^{-5}$, while at $\sigma=10^{-4}$ there is still a clear minimum for $A_0$ between $0.7$ and $0.8$. Thus, with the better data quality the method can correctly detect the inapplicability of the parametrization with $A_0>0$. In this case, the final sampling is equivalent to the unrestricted SAC, and, as shown in Fig.~\ref{broadened-1-sw}, the resulting spectral function is very close to the correct (synthetic) spectrum. In the case of $S_3$, the scans for error level $\sigma=10^{-5}$ with $a=2$ and $a=4$ in Fig.~\ref{broadened-1} produce minima, but the optimal $A_0$ value is smaller for $a=2$ than for $a=4$, and for $a=1$ and $a=0.5$ the minimum is again at $A_0=0$. This case shows clearly how choosing a larger $a$ has an effect similar to that of a larger error level $\sigma$, so that for $a=2$ and $a=4$ there is an apparent optimal $A_0>0$, as also for all values of $a$ in the case $\sigma=10^{-4}$. It is only when the fit becomes tight enough, for sufficiently small $a$ and $\sigma$, that the procedure becomes sensitive to the finite peak width and results in unrestricted sampling ($A_0=0$) being optimal. In general, for this type of spectral function, once the inapplicability of the $\delta$-peak can be unambiguously detected, the unrestricted SAC produces spectra that match the correct profile reasonably well, as observed in Fig.~\ref{broadened-1-sw} for both $S_3$ and $S_4$.
In the case of $S_2$, the behavior of the $\langle \chi^2\rangle$ minimum for $\sigma=10^{-5}$ in Fig.~\ref{broadened-1} also hints at the absence of a strict $\delta$-edge, since the optimal $A_0$ value moves slightly to the left when $a$ is decreased and the minimum is very shallow for $a=0.5$. However, the unrestricted sampling cannot resolve the correct peak width of $S_2$ at the error levels considered here, as shown in Fig.~\ref{broadened-1-sw}. We next consider a harder case, where a sharp peak that still contains 80\% of the spectral weight is followed by a broad ``half Gaussian'' continuum, as shown in the rightmost panel of Fig.~\ref{broadened-2}. The peaks in the two synthetic spectral functions $S_1$ and $S_2$ are Gaussians of width $0.01$ and $0.1$, respectively. We have again used a linear $\tau$ grid with $\Delta_\tau=0.1$ and set the inverse temperature to $\beta=32$. The four other panels of Fig.~\ref{broadened-2} show $A_0$ scans in the same kind of arrangement as previously in Fig.~\ref{broadened-1}. As we already discussed in Sec.~\ref{sec:delta1}, when there is no clear separation between the edge peak and the continuum the optimization of $A_0$ is more challenging. In Fig.~\ref{broadened-2}, the very shallow minima seen in the results for error level $\sigma=10^{-4}$ reflect this difficulty. It is still clear that the location of the minimum moves toward the correct value $A_0=0.8$ as the factor $a$ is reduced. With the higher data quality, $\sigma=10^{-5}$, the minima are quite sharp and better converged to $A_0\approx 0.8$. In these tests the two synthetic spectra both have narrow Gaussian edge peaks and the data quality is not sufficient to detect the finite width. With increasing data quality we expect behaviors similar to those seen in Fig.~\ref{broadened-1} when the optimal $A_0$ eventually starts to move toward $0$.
\begin{figure*}[t] \centering \includegraphics[width=110mm]{fig18.pdf} \caption{The two synthetic spectral functions $S_1$ and $S_2$ (black curves) used in the tests in Fig.~\ref{broadened-2} compared with SAC results at optimal $A_0$ (determined at $a=0.5$ in Fig.~\ref{broadened-2}) in those tests (red curves) as well as results of unrestricted sampling (blue).} \label{broadened-2-sw} \end{figure*} We show results for the above two synthetic spectral functions obtained with the data of error level $10^{-5}$ in Fig.~\ref{broadened-2-sw}. With unrestricted sampling, the leading peak is much too broad in both cases and the continuum is poorly reproduced. With the $\delta$-edge, the peak is too narrow but the continuum is much closer to the correct shape except very close to the peak. Thus, while the edge is not perfect, the incorporation of a sharp peak still improves the resolution at higher frequency, because of the absence of distorting effects of spectral weight below the peak when the sampling is unrestricted. \subsection{Broadened quasi-particle peaks} \label{sec:deltanp} A macroscopic $\delta$-function at the lower edge of a spectrum is an extreme case of a quasi-particle peak. Typically such a peak has some finite width due to decay processes at $T=0$ or $T>0$. It would clearly be of great value if also the width of a narrow peak could be reliably determined. As we saw above, our method of optimizing the weight of the $\delta$-peak can also in principle, if the $\bar G(\tau)$ data are good enough, detect the unsuitability of this imposed form if the peak actually has finite width. If the peak is broad enough, it can be reproduced by unrestricted sampling, and of course the better the imaginary-time data the narrower the peak that can be properly resolved. 
Since the location $\omega_0$ of the leading $\delta$-edge in the parametrization, Fig.~\ref{fig:spec}(d), is also sampled, one might imagine that its fluctuations could result in a broadening of the peak collected in a histogram, and that the profile would then correspond to the actual width of a narrow quasi-particle peak when it is not strictly a $\delta$-function. In practice, however, as observed in the tests in Sec.~\ref{sec:delta2}, the fluctuation-broadened peak always comes out too narrow (unless the true peak is extremely narrow). This lack of fluctuations fundamentally follows from the entropic downward pressure on the single macroscopic $\delta$-peak from the microscopic continuum contributions. In the limit $N_\omega \to \infty$, this pressure can be expected to completely localize the edge-$\delta$ at a frequency where the entropy is best balanced by $\langle \chi^2\rangle$. The diminishing fluctuations of the SAC spectrum overall when $N_\omega \to \infty$ are discussed in detail in Sec.~\ref{sec:maxent} and Appendix~\ref{app:statmech}. The inability of the single macroscopic $\delta$-function to model a quasi-particle peak of finite width can be understood in view of the composition of the individual sampled spectral configurations: If the true peak has significant width, then each individual configuration should be able to reproduce this width, at least approximately. The single macroscopic $\delta$-function clearly does not allow such flexibility, given that the continuum contributions are spread out by entropic pressures and cannot concentrate sufficiently within the peak region, unless the peak is very broad. There are potentially many ways to construct suitable parametrizations generalizing the $\delta$-edge in Fig.~\ref{fig:spec}(d) to a finite peak width. For example, we could replace the isolated $\delta$-function by a Gaussian, whose width $\sigma_0$ is also optimized.
The SAC would then involve a scan of $\langle \chi^2\rangle$ to optimize the peak in the two-dimensional parameter space $(A_0,\sigma_0)$. The way in which the Gaussian is merged with the microscopic $\delta$-functions belonging to the continuum then also has to be specified. One option would be to let the center of the Gaussian serve as the lower bound of the $\delta$-functions. Of course the real peak may not have a Gaussian shape, and imposing this form from the outset is then not ideal. \subsubsection{Multi-$\delta$ peak parametrization} We here proceed with a different generalization of the $\delta$-edge, illustrated in Fig.~\ref{peakfig}, using a relatively small number $N_p>1$ of $\delta$-functions to model the peak, each of these contributions having amplitude $A_0/N_p$. For the continuum, we use $N_c$ contributions, each of weight $(1-A_0)/N_c$, for a total of $N_\omega=N_p+N_c$ units of spectral weight. We again wish to model a relatively narrow peak located at the lower frequency edge and anticipate that $N_p \ll N_c$, so that the entropic pressures broadening the leading peak will be relatively low. We will devise methods for optimizing the peak by scanning over $A_0$ and $N_p$ for given $N_c$. Constraining the continuum contributions in some way by a lower bound associated with the peak location will ensure that the continuum cannot migrate below the peak. Such a constraint, depending on exactly how it is implemented, can also impose entropic pressures that squeeze the peak from above and prevent the individual pieces from migrating too far away from each other into the continuum. However, it may also be beneficial to allow the continuum to at least partially penetrate down among the main peak contributions. \begin{figure}[t] \centering \includegraphics[width=70mm]{fig19.pdf} \caption{Parametrizations of the spectrum in terms of two sets of $\delta$-functions, with $N_\omega = N_p + N_c$.
The set of $N_p$ contributions at the lower end of the spectrum, with amplitudes $A_0/N_p$, is intended to model the quasi-particle peak (here $N_p=4$, shown in red), while the remaining $N_c$ $\delta$-functions (blue) with amplitudes $(1-A_0)/N_c$ will account primarily for an arbitrary continuum. Typically $N_p \ll N_c$. In (a) and (b), the highest and lowest, respectively, of the peak frequencies act as the lower bound for the continuum contributions, while in (c) the lower bound is given by the first moment of the peak (vertical dashed line).} \label{peakfig} \end{figure} Figure \ref{peakfig} illustrates three slightly different constraining mechanisms regulating the merger of the peak and the continuum. In Fig.~\ref{peakfig}(a), the highest of the peak $\delta$-functions serves as the lower bound for the continuum. Given that $N_p \ll N_c$, the entire peak is pushed down in frequency by the entropy of the continuum. This entropic squeezing of the peak can become excessive when $N_c$ is large, leading to a very abrupt boundary between the peak and the continuum: an asymmetric averaged peak that is rounded at the lower edge but very sharp at the upper edge. We therefore do not use this constraint in practice, though we will show some test results below. The opposite extreme is represented by the constraint illustrated in Fig.~\ref{peakfig}(b), where the lowest peak frequency acts as the constraining lower bound for the continuum contributions. Except for the lowest $\delta$-function, the peak elements are not explicitly constrained in their migration up in frequency, which in some cases causes them to split off from the lower edge and produce a double peak in the averaged spectrum.
A compromise between the two extreme constraints is shown in Fig.~\ref{peakfig}(c), where the lower continuum bound is given by the mean frequency of the peak $\delta$-functions; thus, the overall peak location is pushed down by entropic pressures but the peak width can fluctuate without explicit entropic coupling to the continuum. Both the low-edge constraint in Fig.~\ref{peakfig}(b) and the peak-center constraint in Fig.~\ref{peakfig}(c) allow the continuum to extend down into the peak region, which further contributes flexibility to the peak shape. While the parametrizations with constraints discussed here do not assume that the true peak has a certain shape, as always there are implicit entropic pressures that to some extent will be reflected in the average spectrum---less so for better data quality, so that an arbitrary peak shape can, in principle, emerge in the limit of very small error bars on $\bar G(\tau)$. We will demonstrate that narrow quasi-particle peaks can be reproduced at a fidelity not practically attainable with unconstrained SAC. As indicated above, the $\delta$-functions within the peak and continuum groups will have fixed amplitudes in the tests reported here. Varying amplitudes can of course also be used to generalize the parametrizations in Fig.~\ref{peakfig}. However, our tests on synthetic spectral functions show that better resolution of the peak, in particular, is obtained when sampling only the frequencies and keeping fixed amplitudes. The entropic pressures when the peak amplitudes are sampled along with the frequencies push the spectrum to develop an excessively sharp peak, which is consistent with the tendencies found in Sec.~\ref{sec:theta} but is more pronounced here in typical cases when $N_p$ is small. It is still possible that varying amplitudes could be advantageous in some cases.
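The three constraining rules can be summarized compactly; the following sketch (with hypothetical function and variable names) returns the lower frequency bound imposed on the continuum $\delta$-functions for a given configuration of peak frequencies:

```python
import numpy as np

def continuum_lower_bound(peak_freqs, constraint):
    """Lower bound for the continuum delta-functions, following the
    three rules of Fig. 19: 'high' uses the highest peak frequency (a),
    'low' the lowest (b), and 'center' the first moment of the peak (c).
    Since all peak delta-functions carry the same amplitude A0/Np, the
    first moment reduces to the plain mean of the peak frequencies.
    """
    if constraint == 'high':
        return np.max(peak_freqs)
    if constraint == 'low':
        return np.min(peak_freqs)
    if constraint == 'center':
        return np.mean(peak_freqs)
    raise ValueError(f"unknown constraint: {constraint}")

peak = np.array([1.9, 2.0, 2.05, 2.25])   # Np = 4 sampled peak frequencies
bounds = {c: continuum_lower_bound(peak, c) for c in ('high', 'low', 'center')}
```

A sampling update is then rejected whenever it would move a continuum $\delta$-function below the bound (or violate the bound by moving the peak).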
\subsubsection{Peak optimization} For given $N_c$, sufficiently large to model the continuum (depending on the QMC data quality), we have to optimize $\langle \chi^2\rangle$ over the 2D space $(A_0,N_p)$. We will later see that the task can be simplified by performing only a small number of line scans through limited portions of the space, but we begin by demonstrating that a well-defined minimum can indeed be identified at which the average spectrum represents a close-to-optimal reproduction of the correct shape. In these tests, we exclusively use synthetic spectral functions. Sampling of the $\delta$-functions can be carried out in the same way as described in Sec.~\ref{sec:contsamp}, but with the two- and three-frequency moves only involving $\delta$-functions within either the quasi-particle part or the continuum part. Moves violating the mutual peak-continuum constraint are of course rejected. The three-frequency moves significantly improve the ergodicity of the peak part when $N_p \ge 3$, and without such moves there are sometimes signs of meta-stability.\footnote{Meta-stability is reflected in the average spectral function changing when three-frequency updates are included. We have not encountered meta-stability in any of the parametrizations used in other sections, though the sampling efficiency can often be improved significantly by including three-frequency updates.} It is easy to incorporate the constraints of Fig.~\ref{peakfig}. Since moves involving more than one frequency conserve the first frequency moment, the lower bound of the continuum with the constraint in Fig.~\ref{peakfig}(c) only changes in the single-frequency updates of the peak. The maximum attempted movement of the $\delta$-functions in the single- and two-frequency updates should be adjusted separately for the peak and continuum groups, to keep the acceptance rate close to $0.5$ in either case.
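To make the sampling step concrete, the following minimal sketch implements single-frequency Metropolis moves and a first-moment-conserving two-frequency move (the three-frequency moves and the full covariance treatment are omitted; the kernel $e^{-\tau\omega}$ is an assumed simple form, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def sweep(freqs, amps, tau, G_bar, sigma, Theta, step, lower_bound=0.0):
    """One sweep of single-frequency Metropolis moves.

    Returns updated frequencies, the current chi^2, and the acceptance
    rate, which should be kept near 0.5 by tuning `step` separately
    for the peak and continuum groups.
    """
    def chi2(f):
        G = np.exp(-np.outer(tau, f)) @ amps   # assumed kernel e^(-tau*omega)
        return np.sum(((G - G_bar) / sigma) ** 2)

    c, accepted = chi2(freqs), 0
    for i in range(len(freqs)):
        trial = freqs.copy()
        trial[i] = freqs[i] + rng.uniform(-step, step)
        if trial[i] < lower_bound:             # constraint violated: reject
            continue
        c_new = chi2(trial)
        # Metropolis acceptance at sampling temperature Theta
        if c_new <= c or rng.random() < np.exp((c - c_new) / (2 * Theta)):
            freqs, c = trial, c_new
            accepted += 1
    return freqs, c, accepted / len(freqs)

def pair_move(freqs, i, j, d):
    """Two-frequency move conserving the first moment (for equal
    amplitudes): omega_i -> omega_i + d, omega_j -> omega_j - d."""
    out = freqs.copy()
    out[i] += d
    out[j] -= d
    return out

# Toy run: data from a single delta-function at omega = 2, modeled by
# six equal-amplitude delta-functions started at random frequencies.
tau = np.linspace(0.1, 2.0, 10)
G_bar = np.exp(-2.0 * tau)
amps = np.full(6, 1.0 / 6)
freqs = rng.uniform(0.0, 4.0, size=6)
cs = []
for _ in range(300):
    freqs, c, rate = sweep(freqs, amps, tau, G_bar,
                           sigma=1e-3, Theta=0.05, step=0.1)
    cs.append(c)   # chi^2 relaxes toward its minimum at low Theta
```

In the actual parametrization, separate sweeps with separate step sizes would be run for the peak and continuum groups, with moves crossing the mutual peak-continuum bound rejected as described above.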
To test the method, we first consider a synthetic spectrum consisting of three Gaussians, with the lowest one much sharper than the other two and with the higher ones exponentially damped for $\omega$ below the center of the sharp peak---similar to the spectrum considered before, e.g., in Fig.~\ref{syntcomp}, but now with a much narrower peak that cannot be reproduced by unrestricted SAC for reasonable imaginary-time data quality. The synthetic spectrum is shown in the inset of Fig.~\ref{x2np}(a) along with the result of unrestricted (free) sampling. The peak is clearly too narrow to be properly resolved by free sampling even at the low error level $\sigma = 10^{-6}$ of the data used here. \begin{figure*}[t] \centering \includegraphics[width=120mm]{fig20.pdf} \caption{Mean goodness-of-fit versus the peak weight $A_0$ obtained with the parametrization in Fig.~\ref{peakfig}(b) for the synthetic spectrum shown with the red curve in the inset of (a), where the SAC result obtained with unrestricted frequency and amplitude sampling is also shown (black curve, obtained with $N_\omega=5000$ $\delta$-functions at $\Theta=0.02$, where $\langle \chi^2\rangle/N_\tau \approx 0.498$). The noise level of the imaginary time data (53 points uniformly spaced at $\Delta_\tau=0.2$) was $10^{-6}$. Results for fixed $N_c=500$ and $N_p=1,2,4,8$ and $16$ are shown in (a), and in (b) details of the same data are shown in the neighborhood of the minimum with matching symbol colors. In (b) results are also shown for $N_p=20$, where $\langle \chi^2\rangle$ overall is elevated and the statistical noise is much higher. Corresponding spectral functions, displayed in Fig.~\ref{swnc500}, show that the qualitative change in behavior is caused by the main peak splitting above $N_p=18$. The sampling temperature in all cases was $\Theta=0.5$. 
Unrestricted sampling gave $\chi^2_{\rm min}/N_\tau \approx 0.43$ and almost the same value can be reached with all $N_p$ used here when $\Theta \to 0$.} \label{x2np} \end{figure*} We first consider the constraint of Fig.~\ref{peakfig}(b), where the continuum can extend down to the lowest frequency of the peak. Figure~\ref{x2np}(a) shows scans over $A_0$ with $N_c=500$ for several different choices of $N_p$ up to $N_p=16$. In all cases, a shallow minimum in $\langle \chi^2\rangle$ is observed, which is shown in more detail in Fig.~\ref{x2np}(b). The location of the minimum is only weakly dependent on $N_p$, and the minimum is typically followed by a maximum at higher $A_0$. A significant change between $N_p=16$ and $N_p=20$ is seen in Fig.~\ref{x2np}(b), with the overall goodness of the fit deteriorating significantly (relatively speaking) for $N_p=20$ and the statistical fluctuations of $\langle \chi^2\rangle/N_\tau$ also increasing dramatically, as evidenced by the significant scattering of the data points. The different behavior of the goodness of the fit when $N_p=20$ can be traced to a qualitative change in the average spectral function, which is shown in Fig.~\ref{swnc500} for all even values of $N_p$ between $2$ and $20$, with $A_0$ fixed in each case at the respective $\langle \chi^2\rangle$ minimum. The peak part of the spectrum has a single maximum up to $N_p=18$, while a small secondary peak at the lower edge has emerged for $N_p=20$. A precursor of the splitting in the form of broadening at the lower edge can be seen already for $N_p=18$. \begin{figure*}[t] \includegraphics[width=150mm]{fig21.pdf} \caption{SAC spectral functions (red curves) obtained with $N_p$ ranging from $2$ to $20$ in steps of $2$, compared with the exact synthetic spectrum (black curves).
The continuum is constrained by the lowest peak $\delta$-function, as illustrated in Fig.~\ref{peakfig}(b), with $A_0$ chosen according to the $\langle \chi^2\rangle$ minima shown for selected cases of $N_p$ in Fig.~\ref{x2np}.} \label{swnc500} \end{figure*} In general, peak splitting with the low-edge constraint is a consequence of competition between different entropies: the continuum exerts a downward pressure on its constraining lower bound, i.e., the lowest $\delta$-function in the peak group. The other elements of the peak are not subject to any such pressure but tend to spread out to higher frequencies by their own entropy. When $N_p$ is small and the total peak weight $A_0$ is well below the optimal value, the peak must be localized in a narrow region in order to keep $\chi^2$ low. When $A_0$ increases, the weight at the lower edge, which is pushed down by the entropy of the continuum, becomes too high, and it is then advantageous for the rest of the peak to migrate up. This upward shift is further amplified by the entropy of the peak group when $N_p$ increases. Eventually, these effects all conspire to cause the observed peak split when $N_p$ is large and $A_0$ exceeds its optimal value. Indeed, as we will see below, for large $N_c$ the peak often splits even for rather small $N_p$ and $A_0$ below its optimal value. The spectrum with the lowest $\langle \chi^2\rangle$ in the case under consideration here is that for $N_p=15$, which is not included in Fig.~\ref{swnc500} but which naturally falls between those for $N_p=14$ and $16$. The correct spectrum is reproduced remarkably well. There is some dependence of the result (including also the optimal $A_0$ and $N_p$) on the exact value of the sampling temperature $\Theta$, but as long as the standard $\langle \chi^2\rangle$ criterion is satisfied the variations with $\Theta$ are again only mild.
In the case at hand, at the sampling temperature used, Eq.~(\ref{eq:chi2}) is satisfied with $a$ slightly below $0.5$. \begin{figure*}[t] \centering \includegraphics[width=150mm]{fig22.pdf} \caption{Test with the peak-center constraint of Fig.~\ref{peakfig}(c), using the same synthetic spectrum as in Fig.~\ref{x2np}; here shown in (a) as the black curve. The almost identical red curve shows the optimal SAC spectrum, obtained with $N_c=1000$, $N_p=25$, and $A_0=0.232$, the latter corresponding to the $\langle \chi^2\rangle$ minimum in the inset. Color maps of $\langle \chi^2\rangle$ in the relevant range of ($A_0,N_p$) close to the optimum are shown in (b) and (c), with black corresponding to the minimum and lighter colors showing gradually larger values up to a cut-off. The cut-off in (b) is the largest $\langle \chi^2\rangle$ value in the parameter window shown, while in (c) it is $10\%$ of the range from the minimum to maximum value.} \label{chicolor1} \end{figure*} Next, we test the peak-center constraint of Fig.~\ref{peakfig}(c). Fig.~\ref{chicolor1} shows results for the same synthetic spectrum as above [the inset of Fig.~\ref{x2np}(a)], based on runs with $N_c=1000$, for which the optimal number of peak elements is (as we will show below) $N_p \approx 25$. Again, a clear $\langle \chi^2\rangle$ minimum is observed [inset of Fig.~\ref{chicolor1}(a)] and at the corresponding $A_0$ value the resulting average spectrum is very close to the correct profile [main Fig.~\ref{chicolor1}(a)]. To illustrate the well-defined optimum in the full space $(A_0,N_p)$, a 2D color-coded $\langle \chi^2\rangle$ map is shown in Fig.~\ref{chicolor1}(b) in the region close to the minimum. Here black corresponds to the smallest $\langle \chi^2\rangle$ value within the window and lighter colors indicate larger values on a linear (in a standard RGB coding) scale. 
To see the minimum clearly, it is necessary to stretch the scale, as we have done in Fig.~\ref{chicolor1}(c) by assigning the lightest color (white) to all cases where $\langle \chi^2\rangle$ exceeds the minimum by more than $1/10$ of the full $\langle \chi^2\rangle$ range spanned by the points in the window. Here the individual runs were relatively short (about 2 hours of CPU time each) and statistical fluctuations make it impossible to extract the optimum with absolute certainty (though function fitting, which we have not performed here, would clearly help in this regard). However, the differences in the average spectrum are small within the region of the darkest colors. The spectrum in Fig.~\ref{chicolor1}(a) is the one corresponding to the smallest $\langle \chi^2\rangle$ value obtained in these particular runs, where $N_p=25$ and the $\langle \chi^2\rangle$ minimum at $A_0=0.232$ is seen more clearly in the scan in the inset of Fig.~\ref{chicolor1}(a). \begin{figure*}[t] \centering \includegraphics[width=115mm]{fig23.pdf} \caption{SAC results for the synthetic spectrum shown as the black curves in (a)-(c) (the same as in Figs.~\ref{x2np} and \ref{chicolor1}), obtained with the three different constraints in Figs.~\ref{peakfig}(a)-(c), respectively. In all cases $N_c=5000$ and $N_p=30$. The peak weight $A_0$ in each case corresponds to the $\langle \chi^2\rangle$ minimum for the given $N_p$ ($A_0 \approx 0.23$). Panels (d)-(f) show separately the peak and continuum contributions to the results in (a)-(c).} \label{pcompare} \end{figure*} It should be pointed out here that the $\langle \chi^2\rangle$ minimum corresponding to the optimal spectrum is not always the global minimum in the space $(A_0,N_p)$. As seen clearly in Fig.~\ref{x2np}(a), for some $N_p$ values there is a second minimum at $A_0$ above the more consistent (among a range of $N_p$ values) minimum that we have associated with the optimal fit.
A small second local minimum is also seen in the inset of Fig.~\ref{chicolor1}(a). In some cases, with all three constraints, we have found that a second minimum at a larger $A_0$ value can be lower than the first one, but the corresponding spectrum is nevertheless much worse, with the peak either excessively broadened or split. For $N_p=1$, there is never any second minimum above the correct one. Based on many tests, we conclude that it is always the minimum for the lowest $A_0$ that should be used when $N_p>1$, even if it is not the global minimum. A good strategy to approximately locate the correct minimum is to first perform a scan over $A_0$ with $N_p=1$, for which the global $\langle \chi^2\rangle$ minimum is invariably close to that for small $N_p > 1$. Other approaches for rapid searches for the relevant region, in $A_0$ as well as $N_p$, will be discussed further below. The second $\langle \chi^2\rangle$ minimum for fixed $N_p$, whether local or global, reflects a locally optimal re-distribution of the peak weight when $A_0$ is larger than the true spectral weight of the peak. At this minimum the fluctuations of the spectrum are suppressed, and $\langle \chi^2\rangle$ can then be low for reasons similar to why the $\Theta \to 0$ spectrum in unrestricted sampling represents a good fit to the imaginary-time data even though the spectral weight distribution is poor (as discussed in Sec.~\ref{sec:theta}). Given the role of the entropic pressures exerted on the peak by the continuum, it should also be expected that the results to some extent depend on the number $N_c$ of $\delta$-functions in the continuum part. While an appropriate value of the sampling temperature $\Theta$ regulates the overall entropy, the relative entropies of the peak and the continuum are also clearly important. 
Though the optimal $N_p$ value increases with $N_c$, the ratio $N_p/N_c$ is not simply constant (at least for small $N_c$), and the optimal $A_0$ value also shows a weak dependence on $N_c$. It is difficult (not practical) to search for the optimum in the space ($A_0,N_p,N_c$), especially since our strategy of determining $\Theta$ is based on fixed $N_c$, with $\Theta$ held fixed in the subsequent scans over $A_0$ and $N_p$. While increasing $N_c$ initially can lead to lower $\langle \chi^2\rangle$ values at fixed $\Theta$, for very large $N_c$ the sampling temperature eventually has to be lowered, for the same reasons discussed in Sec.~\ref{sec:entropy2}. Our criterion for optimizing parameters by minimizing $\langle \chi^2\rangle$ at fixed $\Theta$ then cannot be applied to $N_c$. In the case of the peak-center constraint, we expect (for the same reasons as in Sec.~\ref{sec:entropy1}) that the relevant entropies should eventually scale as $N_p$ and $N_c$ for the peak and continuum, respectively, and then only the ratio $N_p/N_c$ will have to be optimized when $N_c$ is sufficiently large. In contrast, with the low-edge constraint the lone edge-$\delta$ is pushed down by an entropy scaling as $N_c$ without any compensating upward entropic pressure, and one can therefore expect problems for large $N_c$ in this case. The extent to which the final spectrum depends on $N_c$ is indeed highly dependent on which of the three constraints in Fig.~\ref{peakfig} is used. In our tests, with the above synthetic spectrum and many others, we find that all three constraints produce reasonable results when $N_c$ is rather small, say $N_c=500$. When increasing $N_c$, both the high- and low-edge constraints produce distortions of the peak, and there is no good agreement with the true spectrum for any ($A_0,N_p$). However, the peak-center constraint, Fig.~\ref{peakfig}(c), works well also when $N_c$ becomes much larger.
The different behaviors of the three parametrizations when $N_c$ is relatively large, $N_c=5000$, are demonstrated in Fig.~\ref{pcompare}, using the same synthetic spectrum as before. In this case only the peak-center constraint produces good results, and we therefore fix $N_p$ at the optimal value, $N_p=30$, for said constraint. This $N_p$ value is also close to optimal in the other cases. The $\langle \chi^2\rangle$ minimum for all three constraints is located at $A_0\approx 0.23$, and the results shown are for the respective optimal $A_0$, with Figs.~\ref{pcompare}(a)-(c) corresponding to the constraints depicted in parts (a)-(c) of Fig.~\ref{peakfig}. \begin{figure*}[t] \includegraphics[width=130mm]{fig24.pdf} \caption{Illustration of the line-scan procedure to determine the optimal $A_0$ and $N_p$, using the same $G(\tau)$ data at error level $\sigma=10^{-5}$ as in Figs.~\ref{syntcomp}(a) and \ref{syntcomp}(b); the synthetic spectrum is shown as the black curve in (a). In all cases, $N_c=2\times 10^4$ and the sampling temperature was $\Theta=0.05$. A scan over $A_0$ with the low-edge constraint and $N_p=2$ fixed produced the goodness-of-fit results shown in the inset of (a). The optimal two-spike spectrum with $A_0=0.53$ is shown as the red curve in (a). In (b), $A_0$ was fixed at the optimal value from the scan in (a) and a scan over $N_p$ in steps of $10$ was carried out with the peak-central constraint. The optimal $N_p=730$ (extracted approximately from a fit to the data close to the optimum) was used to produce the spectrum shown in red in (c), where the true spectrum is shown in black. The inset of (c) shows a final scan over $A_0$ with $N_p=730$, giving an improved optimal peak weight $A_0=0.54$.
The spectrum obtained here (not shown) is almost identical to the one for $A_0=0.53$.} \label{broadsynt} \end{figure*} As seen in Fig.~\ref{pcompare}(c), with the peak-central constraint the result again agrees very well with the true spectrum, the SAC result being very similar to that obtained with $N_c=1000$ and $N_p=25$ in Fig.~\ref{chicolor1}(a). With the high-edge constraint used in Fig.~\ref{pcompare}(a) the peak is too narrow, with a very sharp edge on its right side for the entropic reasons already discussed. With the low-edge constraint, Fig.~\ref{pcompare}(b), the peak has split, with a very sharp small peak below the true peak (corresponding to a single $\delta$-function of the peak group) and the main part of the peak shifted up relative to the true peak. These behaviors indicate that the entropic pressures exerted by the continuum become too severe for large $N_c$, except in the case of the peak-center constraint. The latter does not influence the peak width but only keeps the center down in frequency. This pressure is well controlled because an entire peak located too far down in frequency will lead to a large penalizing $\chi^2$ value. Thus, we conclude that, while the low- and high-edge constraints can often produce acceptable results when $N_c$ is relatively small (more so with the low-edge constraint), the peak-center constraint is much less sensitive to the number $N_c$ of continuum elements and invariably seems to be the best option among the three. Figures \ref{pcompare}(d)-\ref{pcompare}(f) show individually the peak and continuum contributions to the full average spectra in Figs.~\ref{pcompare}(a)-\ref{pcompare}(c). Here the ways in which the different constraints affect the continuum are seen clearly. The edge of the continuum is always sharp when $N_c$ is large and becomes sharper with increasing $N_c$ (eventually forming a step function).
In the case of our favored peak-center constraint, Fig.~\ref{pcompare}(f), the step feature causes a small asymmetry that is barely visible at the tip of the peak in Fig.~\ref{pcompare}(c). This distorting feature is more serious when the peak is broad (as we will see explicitly below). Even with the peak-center constraint, there is a detectable difference in the results obtained with $N_c=1000$ in Fig.~\ref{chicolor1}(a) and those with $N_c=5000$ in Fig.~\ref{pcompare}(c): The continuum between $\omega \approx 1.1$ and $1.3$ falls slightly more below the correct profile in the latter case, and this distortion can be traced to a slightly larger $A_0$ when $N_c$ is increased. It is possible that the peak asymmetry associated with the peak-center constraint is at least partially responsible for the slight suppression of weight in the continuum close to the peak, an effect compensating for the larger weight on the upper half of the peak. This effect is less obvious for smaller $N_c$, when the peak center fluctuates more. We will present additional evidence for this distortion mechanism further below. \begin{figure*}[t] \centering \includegraphics[width=130mm]{fig25.pdf} \caption{Test of the peak-center constraint, Fig.~\ref{peakfig}(c), for a spectrum with a very sharp quasi-particle peak, shown at full height in the inset of (a) as the black curve along with the results of unrestricted SAC shown as the red curve (obtained with both frequency and amplitude updates). The error level of the synthetic data was $10^{-6}$. The main figure shows the SAC result for $N_p=2$ (red curve) compared with the true spectrum (black curve). The 2D plot in (b) shows a mapping of $\langle \chi^2\rangle$ as in Fig.~\ref{chicolor1}, but here there is no isolated minimum within the window of reasonable values of $A_0$ and $N_p$, due to the inability of the method to reproduce the sharp peak at the present error level.
This test was done with $N_c=5000$, sampled at $\Theta=0.5$.} \label{chicolor2} \end{figure*} We have not comprehensively studied the behavior versus $N_c$ and how to choose an optimal $N_c$. However, all tests that we have carried out suggest that $N_c$ should not be much larger than needed in order to find a clear $\langle \chi^2\rangle$ minimum in the space $(A_0,N_p)$---the minimum becomes more pronounced with increasing $N_c$, but at some point the entropic pressures by the continuum can become too large and the results deteriorate. This behavior is very apparent with the low- and high-edge constraints but less obvious with the peak-center constraint, which we strongly prefer in most cases. When the quasi-particle peak is broad $N_p$ has to be larger, which implies that also $N_c$ may have to be large. We next consider the same spectrum with a broad peak that we tested extensively with unrestricted sampling in Fig.~\ref{syntcomp}. To test a possible advantage of the multi-$\delta$ peak parametrization also for such a broad peak, we use the $\bar G(\tau)$ data at error level $\sigma = 10^{-5}$, the same as in Figs.~\ref{syntcomp}(c) and \ref{syntcomp}(d). In this case $N_c=1000$ gives an optimal $N_p \approx 50$, which is not enough to produce a smooth lower peak edge. The results are good for $N_c=5000$ and above, and in Fig.~\ref{broadsynt} we present results for $N_c=2\times 10^4$. Here we also demonstrate a systematic way to efficiently locate the minimum in the $(A_0,N_p)$ space. While we have argued that the low-edge constraint is not appropriate when $N_c$ is large, we here employ this constraint in another way, illustrated in Fig.~\ref{broadsynt}(a). Applying the low-edge constraint and sampling with $N_p=2$ produces two sharp spikes, one on either side of the true peak. 
In general, we have found that the optimal $N_p=2$ peak weight $A_0$ is very close to that of the optimum in the full space $(A_0,N_p)$, slightly larger than the best $A_0$ obtained for small $N_p$ with the peak-center constraint. The reason for this improvement is likely that the first $\delta$-function is pushed down more with the low-edge constraint, so that the two $\delta$-functions can bracket the peak in a more optimal way. After performing a scan over $A_0$ with $N_p=2$ and the low-edge constraint, the optimal $A_0$ so obtained is used in a subsequent scan over $N_p$ with $A_0$ held fixed, now using the peak-central constraint. In the present case, with $A_0=0.53$ as illustrated in Fig.~\ref{broadsynt}(b), the optimum is for $N_p\approx 730$. The resulting spectral function is shown in Fig.~\ref{broadsynt}(c). Overall, the result is better than those of free sampling in Figs.~\ref{syntcomp}(c) and \ref{syntcomp}(d), though in this case the asymmetric tip of the peak, stemming from the peak-center constraint as discussed above, is seen clearly. To further check the optimum, another scan over $A_0$ can be carried out with $N_p$ fixed, which in the present case gives $A_0=0.54$ [inset of Fig.~\ref{broadsynt}(c)], only slightly different from the original $A_0=0.53$ from the $N_p=2$ scan. The resulting spectral function at $A_0=0.54$ and $N_p=730$ essentially overlaps to within the curve thickness with the profile for $A_0=0.53$ in Fig.~\ref{broadsynt}(c). Additional scans, not shown, confirm the location of the minimum. This method of line scans also works with narrow peaks, e.g., we can reproduce the results in Fig.~\ref{chicolor1}(a) with much less effort. In Fig.~\ref{broadsynt}(c) it is more obvious than before that the peak asymmetry associated with the peak-center constraint may be responsible for the mild suppression of the continuum for $\omega \approx 1.5$, to compensate for the slight excess weight on the right side of the peak.
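The line-scan strategy described above can be summarized schematically in code. In the following Python sketch, \texttt{avg\_chi2(a0, n\_p)} is a hypothetical stand-in for a full constrained SAC sampling run at fixed $\Theta$ that returns $\langle\chi^2\rangle$; the function and grid names are our own illustrative choices, not part of the method itself.

```python
import numpy as np

def line_scan_optimum(avg_chi2, a0_grid, np_grid):
    """Two-stage line scan for the optimum in the (A_0, N_p) plane.

    Stage 1: scan the peak weight A_0 with N_p = 2 fixed (low-edge
    constraint).  Stage 2: scan N_p at the optimal A_0 from stage 1
    (peak-center constraint).  avg_chi2(a0, n_p) is assumed to run
    the constrained sampling at fixed Theta and return <chi^2>.
    """
    # Stage 1: A_0 scan with a two-delta peak
    chi_a0 = [avg_chi2(a0, 2) for a0 in a0_grid]
    a0_opt = a0_grid[int(np.argmin(chi_a0))]

    # Stage 2: N_p scan at the fixed optimal A_0
    chi_np = [avg_chi2(a0_opt, n_p) for n_p in np_grid]
    np_opt = np_grid[int(np.argmin(chi_np))]

    return a0_opt, np_opt
```

A final check scan over $A_0$ at the resulting $N_p$, as described in the text, can be appended as a third stage in exactly the same manner.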
This excess weight is a consequence of the continuum contributions extending significantly only into the right half of the peak, while the peak contributions themselves must be sufficiently large to account completely for the left side of the peak. In principle, the constraint could be modified to allow the continuum to extend also slightly below the peak center (but not all the way to the lower edge to avoid the peak splitting associated with the low-edge constraint). The asymmetry would then be alleviated. Alternatively, a soft potential could be introduced instead of the hard constraint to achieve a similar effect. However, in general we favor less adjustable features, and the peak-center constraint should already be sufficiently good in most cases of moderately narrow quasi-particle peaks. \subsubsection{Resolution and self-consistent test} There must of course be some limit to the ability of this method to resolve a very narrow quasi-particle peak. As an example of excessive broadening, we next consider a synthetic spectrum with a much sharper peak than the previous examples; see the inset of Fig.~\ref{chicolor2}(a), where the unsatisfying result of unrestricted SAC is also shown for comparison. The color map in Fig.~\ref{chicolor2}(b) reveals no isolated minimum; instead, the minimum value of $\langle \chi^2(A_0,N_p)\rangle$ versus $A_0$ decreases slowly as $N_p$ is increased while the corresponding location $A_0$ of the minimum drifts toward larger values. The spectrum obtained for $N_p=2$ is shown in Fig.~\ref{chicolor2}(a) and already has a peak broader than that of the true spectrum. The peak width grows with $N_p$, and at the upper edge of the color map in Fig.~\ref{chicolor2}(b), $N_p=35$, the peak of the best-$A_0$ spectrum (not shown) is already wider than the peak of the previous synthetic spectrum in Fig.~\ref{chicolor1}(a).
The failure of the method to produce a minimum in the space ($A_0,N_p$) corresponding to close agreement with the true spectrum can be understood as a sign that the peak is too sharp to resolve at the given error level of the imaginary-time data. Because the spectrum broadens further as $N_p$ is increased above $2$, there is no $N_p>2$ for which there is a local optimum in the match between the true and sampled peak widths, which, correspondingly, is reflected in the lack of a local $\langle \chi^2\rangle$ minimum in the space ($A_0,N_p$). Thus, when a well-defined minimum cannot be found, a reasonable assumption is that the spectrum sampled with $N_p=2$ represents a resolution-limited result, where the peak produced is broader than the true peak. Conversely, in cases where a $\langle \chi^2\rangle$ minimum is present, we assert that the quasi-particle peak should be well resolved, without artificial broadening (though there can still of course be some distortions). As already discussed in the context of the second $\langle \chi^2\rangle$ minimum in Figs.~\ref{x2np} and \ref{chicolor1}, $\langle \chi^2\rangle$ can also exhibit a ``false'' minimum locally when $A_0$ is larger than the true peak weight. The same mechanism should be at play here as well in causing a false optimum located at increasing $A_0$ with increasing $N_p$ as observed in Fig.~\ref{chicolor2}(b). A second minimum is also apparent for $N_p=2$ in Fig.~\ref{chicolor2}(b) (and also exists for larger $N_p$ outside the edge of the graph)---the spectrum at that minimum has two well separated spikes instead of a single peak. It is actually possible to self-consistently test whether a sampled spectrum has a peak of width larger than the resolution limit set by the imaginary-time data. To see this, we note that the sampled spectrum in the multi-$\delta$ peak parametrization used here has two contributions that can be collected separately during the sampling process: the peak and the continuum.
Thus, we can write the result for the sampled normalized spectrum $A(\omega)$ as \begin{equation} A(\omega)=A_0A_p(\omega)+(1-A_0)A_c(\omega). \end{equation} Now consider a hypothetical spectrum obtained from $A(\omega)$ by removing the peak part, whose first moment we call $\omega_\delta$, and replacing it by a macroscopic $\delta$-function with the same weight at $\omega_\delta$. Thus, we define \begin{equation} A_\delta(\omega)=A_0\delta(\omega-\omega_\delta)+(1-A_0)A_c(\omega) \end{equation} and similarly the final output spectrum $S_\delta(\omega)$ according to Eq.~(\ref{barelation}) as usual. This new spectral function corresponds to a modified QMC imaginary-time function according to Eq.~(\ref{contrel2}): \begin{equation} \bar G_\delta(\tau_i)=\bar G(\tau_i)+A_0\int_0^\infty d\omega [\delta(\omega-\omega_\delta)-A_p(\omega)]\bar K(\tau_i,\omega), \label{gdelta} \end{equation} which in the basis corresponding to the transformation according to Eq.~(\ref{basistransf}) has the same form with $\bar G_\delta(\tau_i) \to \bar G_\delta(i)$ and $\bar K(\tau_i,\omega) \to \bar K(i,\omega)$, where the index $i$ corresponds to the eigenvalue $\epsilon_i$ of the covariance matrix (Sec.~\ref{sec:qmcdata}). We can now process this imaginary-time function, using the same value of $A_0$ (and other parameters of the sampling) as before but with different values of $N_p$, to find a new optimal $N_p$ and the corresponding approximation to $A_\delta(\omega)$ (which cannot have an exact $\delta$-function because of the limitations of the method). If the original SAC spectrum has a peak of width resolvable by the method, the new peak (representing the $\delta$-function) should be sharper, and, if so, it indicates the degree to which the method can resolve a $\delta$-function when the continuum is that of the original spectrum. Thus, the procedure will deliver the ``instrumental resolution'' of the method (given the noise level in the input data).
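As a concrete illustration, the peak-replacement step of Eq.~(\ref{gdelta}) can be carried out on a discretized frequency grid as in the following minimal Python sketch, which assumes a uniform grid and arrays for the kernel and the separately accumulated peak density; all array and function names here are our own illustrative choices.

```python
import numpy as np

def replace_peak_by_delta(g_bar, kernel, omega, a_peak, a0):
    """Peak-replacement step: remove the sampled peak density A_p(w)
    from the imaginary-time data and insert a macroscopic
    delta-function of the same weight A_0 at the peak's first
    moment omega_delta.

    g_bar  : array (n_tau,)          -- averaged QMC data G(tau_i)
    kernel : array (n_tau, n_omega)  -- K(tau_i, omega) on the grid
    omega  : array (n_omega,)        -- uniform frequency grid
    a_peak : array (n_omega,)        -- normalized peak density A_p
    a0     : float                   -- peak weight A_0
    """
    dw = omega[1] - omega[0]                    # uniform grid assumed
    omega_delta = np.sum(omega * a_peak) * dw   # first moment of peak
    # kernel column at omega_delta (nearest grid point)
    k_delta = kernel[:, np.argmin(np.abs(omega - omega_delta))]
    # integral of K(tau_i, w) A_p(w) dw
    k_peak = kernel @ a_peak * dw
    return g_bar + a0 * (k_delta - k_peak), omega_delta
```

The returned data would then be transformed to the covariance eigenbasis and supplemented with synthetic noise as described in the text before re-running the constrained sampling.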
The new $\delta$-peak should also be associated with statistical noise, and it is necessary to introduce additional normal-distributed perturbations to $\bar G_\delta$ beyond the transformation in Eq.~(\ref{gdelta}). Even though the statistical errors associated with the replaced broadened peak are still present in the imaginary-time data and cannot be removed, just adding the $\delta$-function without new noise would not correspond to a realistic situation (and indeed, in our tests the $\delta$-function is then resolved to an unrealistically good degree). Thus, we add Gaussian noise of standard deviation $\propto \sqrt{\epsilon_i}$ to the imaginary-time QMC data $\bar G_\delta$ expressed in the eigenbasis of the covariance matrix, keeping the covariance matrix itself unchanged. To fix an appropriate constant of proportionality of the noise, we note that the contribution to $\bar G_\delta(i)$ from the macroscopic $\delta$-function is $A_0 \bar K(i,\omega_\delta)$. Lacking detailed knowledge of how the statistical errors should be associated with the different parts of the spectral function, a reasonable assumption is that the $\delta$-peak should be given synthetic noise in proportion to its relative weight in $\bar G_\delta$. Thus, we add normal-distributed random numbers with the following standard deviation to $\bar G_\delta(i)$: \begin{equation} \sigma_\delta(i) = A_0 \frac{\bar K(i,\omega_\delta)}{\bar G_\delta(i)}\sqrt{\epsilon_i}. \end{equation} This additional noise (beyond the noise from the QMC simulation) implies that $\langle \chi^2\rangle$ should be somewhat higher than in the original SAC processing of $\bar G(i)$. \begin{figure}[t] \centering \includegraphics[width=70mm]{fig26.pdf} \caption{Self-consistent resolution test of the SAC spectrum in Fig.~\ref{chicolor1}(a). 
The original optimal spectrum with $N_p=25$ is shown with the black dashed curve, and those obtained from the peak-replaced imaginary-time data, Eq.~(\ref{gdelta}), with $N_p=2$ and $N_p=4$ are shown with the red and black solid curves, respectively. The inset shows the goodness of the fit in sampling runs where all other parameters are kept the same as in the original procedure but $N_p$ is varied. The best goodness-of-fit values at very small $N_p$ and the corresponding narrow peaks in $S_\delta(\omega)$ show that the method is capable of properly resolving the original peak width.} \label{swdelt} \end{figure} In Fig.~\ref{swdelt} we present results of this type of resolution test, using the data underlying the successfully reproduced spectrum in Fig.~\ref{chicolor1}. The original optimal spectrum was sampled with $N_p=25$, and, with the peak replaced as described above, we keep all parameters the same except for $N_p$, which is scanned over the range $N_p \in [1,25]$. The inset of Fig.~\ref{swdelt} shows that the goodness of the fit systematically decreases as $N_p$ is reduced, and there is no discernible local minimum beyond the statistical errors (judging from the scatter of the data points). The best estimate of the intrinsic resolution of the method is then obtained with $N_p=2$. In Fig.~\ref{swdelt} we show results with $N_p=2$ and $N_p=4$, demonstrating the very weak dependence of the peak width on $N_p$ for these small values. These peaks are clearly much sharper than the original peak, thus independently confirming that the original spectrum is reliable as to its peak width, i.e., the peak is wider than the resolution limit of the method with the data quality at hand. The tests presented above demonstrate that even rather narrow quasi-particle peaks can be resolved to a much higher degree than what may have been expected.
We have also seen again how the high-energy continuum can be much better reproduced once a low-energy feature has been treated better than what is possible with either unrestricted sampling in SAC or with the ME method. This aspect of the improved resolution is very striking when comparing the spectrum in Fig.~\ref{chicolor1} with the result of unrestricted sampling in the inset of Fig.~\ref{x2np}(a), where ringing behavior is present far above the much too broad peak. We conclude that ``hidden information'' in $\bar G(\tau)$ makes it possible to reproduce the true spectrum once the primary edge distortions have been removed. While there is still room to improve the parametrization in Fig.~\ref{peakfig}, specifically to allow some migration of the continuum $\delta$-functions also below the peak center, the method as presented here should already be adequate for many applications. In the context of models discussed in this paper, it could be used to investigate a possible quasi-particle broadening in the 2D Heisenberg model, where so far (Ref.~\cite{shao17} and Fig.~\ref{2dsw}) we have only used the sharp $\delta$-edge. \section{Edge singularities I} \label{sec:contedge1} In some of the most interesting quantum many-body systems, conventional quasi-particles fractionalize into two or more objects. These fractionalized objects propagate essentially independently, and, as a consequence, the spectral functions of relevance to experiments reflect the collective contributions from two (in the simplest case) particles. At a fixed total momentum ${\bf q}$ of a state excited in, e.g., inelastic neutron scattering, combinations of the individual quasi-particle momenta ${\bf q}_1$ and ${\bf q}_2$ with ${\bf q}_1+{\bf q}_2={\bf q}$ (often with a ``semion'' condition on the individual momenta) produce $S({\bf q},\omega)$ with a continuum of spectral weight in the range of available values of $\omega=\omega_1 + \omega_2$.
Typically, because of density of states and matrix element effects, the spectral functions of interest exhibit power-law singularities at the lower edge of the continuum. Perhaps the best known and understood example of such an edge-singular spectral function is the dynamic spin structure factor $S(q,\omega)$ corresponding to spin-$1$ excitations of the $S=1/2$ Heisenberg chain at $T=0$, accessed with the operator $O=S^z_{\bf q}$ in Eq.~(\ref{somegasum}). The predominant contributions to $S(q,\omega)$ arise from pairs of spinons (each carrying spin $1/2$) \cite{karbach97} and there are contributions also from four, six, etc.~spinons within the BA framework \cite{caux05a,caux05b,pereira06}. An example of this spectral function was already shown in Fig.~\ref{sw2}, where the difficulties in resolving the sharp edge and associated power-law singularity with the unrestricted SAC method, or any other conventional analytic continuation method, are apparent. In order for analytic continuation to reproduce a sharp edge, this feature has to be imposed in some way from the outset, by which we mean that the parametrization should be robust in the way this aspect of the spectrum is sampled and not suppressed by entropic pressures. An extreme example is the macroscopic edge $\delta$-function discussed in Sec.~\ref{sec:deltapeak}. For a generic edge at $\omega=\omega_0$, with continuously divergent or non-divergent behavior when $\omega \to \omega_0$ from above, we would like to use a parametrization that only imposes a cut-off at $\omega=\omega_0$. The value of $\omega_0$ is not necessarily known in advance but should be determined by the process, which ideally should also not in any other way dictate the shape of the spectrum. More realistically, the parametrization should only have a weak bias that can largely be counteracted by imaginary-time QMC data of achievable quality.
The simplest way to impose an edge is to optimize the lower bound $\omega_0$, an example of which we already discussed in Sec.~\ref{sec:example2} (Fig.~\ref{w0fix}) in the continuous-frequency representations (and in the fixed-grid representation in Ref.~\cite{sandvik16}; see also Ref.~\cite{shao17}). While this approach produces much better results than unrestricted sampling for sharp-edge spectra, there is still rounding of a divergent peak and poor fidelity at higher frequencies. A method for reproducing a sharp peak within the fixed-grid SAC representation was to impose a single-peak constraint at the level of the sampling, i.e., the amplitudes $A_i$ have a single maximum versus $\omega_i$ in each valid configuration \cite{sandvik16}. The dynamic structure factor of the Heisenberg chain was then reproduced at unprecedented fidelity. A drawback of this method was that the upper and lower bounds also had to be optimized (using the generic scanning approach illustrated in Fig.~\ref{fig:optim}), which was time consuming. The motivation for using the parametrization depicted in Fig.~\ref{fig:spec}(e) to model an edge singularity (a divergence in the case shown) is that the increasing distance between the $\delta$-functions corresponds to the spectral weight density decreasing monotonically after the edge. Though a monotonically decaying continuum is of course not completely general, such spectra are, in fact, quite common, and it is worth exploring this case before moving on to the more general spectra with sharp edges. A monotonic behavior of the averaged spectrum is not completely guaranteed with the parametrization in Fig.~\ref{fig:spec}(e), as in principle there could be fluctuations that cause multiple peaks. In practice, however, the fluctuations are strongly suppressed and we have found a monotonic decay in all cases tested.
With this parametrization, the upper and lower frequency bounds do not have to be optimized---there are no significant entropic pressures to broaden the spectrum when reasonably good $\bar G(\tau)$ data are used (in fact, as we will show, the entropic tendency is often rather to slightly narrow the spectrum) and the sampling converges to the correct bounds to a remarkable degree. This approach is therefore much more efficient than the scheme in Ref.~\cite{sandvik16}, and the results are also better. We could in this case also consider a peak at some arbitrary location in the spectrum (or even consider several peaks, which we will not do here but will discuss qualitatively in Sec.~\ref{sec:pdm}), by modifying the constraint in Fig.~\ref{fig:spec}(e) to monotonically increasing distances on either side of a minimum distance. To describe a non-monotonic continuum, we can mix the parametrizations of Figs.~\ref{fig:spec}(e) and \ref{fig:spec}(b), at a ratio that is again optimized as a generic constraint. In this section we will develop the machinery for a monotonic decay and consider the general case of an unrestricted continuum in Sec.~\ref{sec:contedge2}. Here, in Sec.~\ref{sec:hbergedge}, we again use the dynamic structure factor of the Heisenberg chain as an example to illustrate the sampling technique and the kind of results that are produced with just the basic monotonicity condition and no further optimization. In Sec.~\ref{sec:triangle} we study a synthetic sharp-edge spectrum with a non-divergent continuum, where a parameter regulating the edge sharpness is optimized. \subsection{Sampling with monotonicity constraint} \label{sec:hbergedge} We start the sampling process from a configuration with monotonically increasing spacing between the equal-amplitude $\delta$-functions, of the kind illustrated in Fig.~\ref{fig:spec}(e).
Typically we initialize the frequencies $\omega_i$ ($i=1,\cdots,N_\omega$) with quadratically increasing distance, $d_i=\omega_{i+1}-\omega_{i}$, within reasonable lower and upper bounds (e.g., based on initial free-sampling runs such as those in Fig.~\ref{sw1}) or aided by sum rules. We stress that we do not optimize any parameter in the most basic application of this parametrization, and the constraint is just the monotonicity condition that is maintained during every step of the sampling. This approach is suitable when the continuum diverges at the edge, and non-divergent edges will require a modification of the monotonicity constraint with an optimized parameter, which we will demonstrate in Sec.~\ref{sec:triangle} (and later in a different way in Sec.~\ref{sec:contedge2}). \subsubsection{Single-$\delta$ update} \begin{figure}[t] \centering \includegraphics[width=75mm]{fig27.pdf} \caption{Illustration of the allowed window $\Delta\omega_i$ for moving the $i$th $\delta$-function (here with $2 < i < N_\omega$) from its current frequency $\omega_i$ with the space-increasing constraint maintained. The right boundary is $(\omega_{i-1}+\omega_{i+1})/2$ and the left boundary is ${\text{max}}(\omega_{i-1}+d_{i-2},\omega_{i+1}-d_{i+1})$, where, as marked in the drawing, $d_{i-2}=\omega_{i-1}-\omega_{i-2}$. For $i \in \{1,2,N_\omega\}$ slightly different rules apply, as described in the text.} \label{fig:edge-1} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=90mm]{fig28.pdf} \caption{Principles of collective updating of a group of $n$ $\delta$-functions with the distance-monotonic constraint maintained. A list of distances $d_i=\omega_{i+1}-\omega_i$ is used. 
By randomly selecting $i$ and $j$, and with $d=d_i+d_j$ kept fixed (to maintain the sum of all distances, indicated with the dashed line below the horizontal axis), a new value $d'_i$ is chosen randomly between ${\text{max}}(d_{\rm min},d-d_{\rm max})$ and ${\text{min}}(d_{\rm max},d-d_{\rm min})$, and then $d'_j=d-d'_i$. This process is repeated $n/2$ times with different pairs $i,~j$, after which the updated distances are sorted for monotonicity before being converted to new frequency values. The update is accepted or rejected in the standard (Metropolis) way based on the change in $\chi^2$.} \label{edge-2} \end{figure*} A straightforward sampling algorithm amounts to randomly choosing one of the $\delta$-functions and moving it anywhere within the window $\Delta\omega_i$ determined from the monotonically space-increasing constraint, as illustrated in Fig.~\ref{fig:edge-1}. The cases $i=1$, $2$, and $N_\omega$ require special treatment, and we first consider the generic cases $i \in \{3,\ldots,N_\omega-1\}$. The window $\Delta\omega_i$ within which $\omega_i$ is chosen is situated as follows: The upper boundary is $(\omega_{i-1}+\omega_{i+1})/2$ in order for the condition $d'_{i-1} \le d'_i$ on the new distances to hold true after the move. To satisfy $d_{i-2} \le d'_{i-1}$ and $d'_i \le d_{i+1}$, the left boundary must be the larger of $\omega_{i-1}+d_{i-2}$ and $\omega_{i+1}-d_{i+1}$. For moves within the allowed window, the Metropolis acceptance probability is applied. In the case of $\omega_2$, there is no constraining distance $d_{i-2}$ and only the second of the left-boundary conditions applies. As for $i=1$ and $i=N_\omega$, fixed windows centered at $\omega_i$ can be used, with their widths adjusted in the standard way to achieve an acceptance rate close to $0.5$. Any move violating the constraint is of course rejected outright. In practice the allowed windows $\Delta\omega_i$ are small and of course scale approximately as $1/N_\omega$.
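As an illustration of these rules, the following Python sketch (our own illustrative code, using 0-based array indexing) constructs the quadratically spaced initial configuration mentioned above and the allowed window for a generic single-$\delta$ move; the special cases $i \in \{1,2,N_\omega\}$ and the Metropolis accept/reject step itself are omitted.

```python
import numpy as np

def init_frequencies(n_omega, w_min, w_max):
    """Initial configuration with quadratically increasing distances
    d_i = omega_{i+1} - omega_i between equal-amplitude delta-functions."""
    t = np.linspace(0.0, 1.0, n_omega)
    return w_min + (w_max - w_min) * t**2

def allowed_window(w, i):
    """Window for moving omega_i (generic case, 0-based index i with
    2 <= i <= len(w) - 3) while maintaining the monotonically
    space-increasing constraint.

    Right boundary: (omega_{i-1} + omega_{i+1}) / 2, so that the new
    distances satisfy d'_{i-1} <= d'_i.
    Left boundary : max(omega_{i-1} + d_{i-2}, omega_{i+1} - d_{i+1}),
    so that d_{i-2} <= d'_{i-1} and d'_i <= d_{i+1}.
    """
    right = 0.5 * (w[i - 1] + w[i + 1])
    d_im2 = w[i - 1] - w[i - 2]          # d_{i-2}
    d_ip1 = w[i + 2] - w[i + 1]          # d_{i+1}
    left = max(w[i - 1] + d_im2, w[i + 1] - d_ip1)
    return left, right
```

A move then consists of drawing a new $\omega_i$ uniformly in this window and applying the Metropolis probability based on the resulting change in $\chi^2$.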
The change in $\chi^2$ in a tentative move is then also small, thus leading to a large acceptance rate---over $90\%$ if $N_\omega$ is $80$ or larger in the example that we will consider below. With the acceptance rate being large, one might think that the configuration should change significantly after one updating sweep of $N_\omega$ moves. However, $\Delta\omega_i$ is typically much smaller than the distance $d_i$, and in practice each $\delta$-function tends to fluctuate around a mean location. The edge frequency $\omega_1$ also moves very little once equilibrium has been reached, while $\omega_{N_{\omega}}$ has the largest (though still typically small) fluctuations. The limited fluctuations do not just stem from inefficiencies in the sampling algorithm but reflect the way the space-increasing constraint suppresses the configurational entropy. As we will see below, the low entropy is at the heart of the benefit of the parametrization in its ability to resolve details of a spectrum with an edge followed by a monotonically decaying continuum. The restricted fluctuations also cause some problems, as we will discuss further below. \subsubsection{Multi-$\delta$ update} A more efficient algorithm for updating the frequencies $\omega_2,\cdots,\omega_{N_\omega-1}$ is to collectively move several consecutive $\delta$-functions. There are many ways to accomplish this, and here we only discuss what is perhaps the simplest (and likely most efficient) method. For a group of $n$ $\delta$-functions at current locations $\omega_k,\ldots,\omega_{k+n-1}$, we work with a corresponding list of distances $d_i$ as illustrated in Fig.~\ref{edge-2}. In a series of moves performed before applying the Metropolis probability, we randomly select $d_i$ and $d_j$ to be updated with $d=d_i+d_j$ conserved. The minimum and maximum new distances are dictated by the distances to the left and right of the group, $d_{\rm min}=d_{k-1}$ and $d_{\rm max}=d_{k+n}$, respectively.
The goal of this update is to keep the distance between $\omega_{k-1}$ and $\omega_{k+n}$ unchanged, and a complete update consists of several such random pair moves (so that most of the $n$ distances change). Since both the updated distances $d'_i$ and $d'_j$ have to be in $[d_{\rm min},d_{\rm max}]$, the updated distance $d'_i$ is chosen randomly in the interval $[\max(d_{\rm min},d-d_{\rm max}),\min(d_{\rm max},d-d_{\rm min})]$, and then $d'_j=d-d'_i$. After repeating this procedure $n/2$ times, the updated distances are sorted into the required monotonically increasing order, after which they are converted into updated frequencies $\omega_i$ in the group. The change in $\chi^2$ is computed for the acceptance probability. It can easily be verified that the above algorithm satisfies detailed balance, as it redistributes in an unbiased way how a frequency segment is cut up into pieces. With $i$ and $j$ chosen at random among all the pieces, the ordering aspect plays no role in the selection process. We define one sweep of multi-$\delta$ updates as consisting of $N_\omega/n$ attempts. The acceptance rate decreases with the set size $n$, and we choose this size in the standard way so that the acceptance rate is approximately $0.5$. For given $n$, the acceptance rate depends strongly on the location of the group (its first index $k$) of $\delta$-functions within the spectrum, and it is therefore useful for overall efficiency to adapt the group size to the location, using the $k$-dependent acceptance rate. \subsubsection{Example and practical considerations} \label{sec:heismono} \begin{figure*}[t] \centering \includegraphics[width=110mm]{fig29.pdf} \caption{SAC spectral function $S(4\pi/5,\omega)$ for the $L=500$ Heisenberg chain (red curves) obtained with the distance-monotonicity constraint in Fig.~\ref{fig:spec}(e).
Different numbers $N_\omega$ of $\delta$-functions were used, as indicated in the respective panels, and the average spectral density was collected in histograms with the same bin size in all cases. The results are compared with the BA calculation for the same system size (black curves). The oscillations reflect the tendency of the individual $\delta$-functions to fluctuate narrowly around mean positions. With increasing $N_\omega$, the fluctuations of the frequencies relative to their mean separation become large enough to produce smooth average spectra. The noise in the $N_\omega = 320$ spectrum is partially due to the very small histogram bins used in order to show the oscillations present in the other cases.} \label{nobroaden} \end{figure*} While the multi-$\delta$ updates considerably speed up the sampling, there are still some details that have to be addressed in order to obtain good results. In discussing a number of issues below, we will point to Fig.~\ref{nobroaden}, which shows $S(q,\omega)$ of the $L=500$ Heisenberg chain at $q=4\pi/5$, i.e., the same case as the results in Fig.~\ref{sw2} for unrestricted sampling. The spectra presented here were obtained using multi-$\delta$ updates for $N_\omega=40,80,160$, and $320$, and are graphed together with the BA result. We stress again here that the sampling with the constraint is by itself sufficient to locate the edge of the spectrum reasonably precisely (to within less than $1\%$ in Fig.~\ref{nobroaden}), and there is no other input to the process beyond the $\bar G(\tau)$ data and the use of this specific parametrization. Once the spectrum has converged the edge fluctuations are also very small (thus resulting in a histogram with a sharp peak) at the sampling temperature $\Theta$ satisfying the same criterion, Eq.~(\ref{eq:chi2}), as used previously in Sec.~\ref{sec:theta}. 
When judging the results, it should again be kept in mind that the BA-computed $S(q,\omega)$ is not complete but contains about $98\%$ of the total spectral weight in the case of $q=4\pi/5$. Moreover, even though a chain of $500$ spins may seem quite large, the exact $T=0$ spectrum, Eq.~(\ref{somegasum}), likely still consists of fewer than $100$ distinct $\delta$-functions with significant spectral weight. Results obtained with matrix product states indicate only about $10$ $\delta$-functions for $L=64$, $q=4\pi/5$ \cite{wang19}, and a similar number of sharp peaks for $L=100$, $q=\pi$ \cite{xie18}. The BA results have been broadened to produce a continuum \cite{caux05a,cauxdata}. Given these caveats, the agreement is very good for the entire spectrum for $N_\omega=320$. For smaller $N_\omega$, oscillations in the tail part of the spectrum are observed that will be explained below. To apply the criterion for the sampling temperature $\Theta$, Eq.~(\ref{eq:chi2}), the minimum goodness-of-fit $\chi^2_{\rm min}$ is again determined using simulated annealing. The ultimate form of the spectrum with a small number of sharp peaks in the unrestricted sampling (discussed in detail in \ref{app:lowtheta}) cannot be realized with the monotonicity-constrained parametrization, and therefore $\chi^2_{\rm min}$ will be slightly larger in this case. The goodness-of-fit should still be statistically acceptable if the parametrization is appropriate, i.e., if the actual spectrum can be well described as a sharply divergent edge and monotonically decaying tail, which the parametrization mimics for large $N_\omega$. Since the distance monotonicity represents a very strong constraint on the fluctuations of the spectrum, the changes with $\Theta$ are much smaller than with unrestricted sampling. The same criterion for $\Theta$ as before still leads to optimal results, with only weak sensitivity to the exact value of the factor $a\lesssim 1$ used in Eq.~(\ref{eq:chi2}).
We will show examples of the $\Theta$ dependence in Sec.~\ref{sec:monop}, where we apply the method to a synthetic spectrum for which the SAC results can be evaluated without any uncertainties on the true shape of the spectrum. The first problem encountered due to the rather inefficient constrained sampling is that it can be difficult to reach good $\langle \chi^2\rangle$ values, especially $\chi^2_{\rm min}$, when $N_\omega$ is large and the sampling starts from a typical large-$\chi^2$ initial configuration. This problem stems primarily from the difficulty for the edge to migrate to its correct position when starting the sampling with $\omega_1$ far from the true edge. Therefore, we have found it helpful to start with a small number $N_\omega$ of $\delta$-functions ($N_\omega=10$, say), in which case the edge converges relatively rapidly. Having stored the configuration with the smallest $\chi^2$ value, at the next stage we double $N_\omega$ by inserting one $\delta$-function between every two in the old configuration (and one more at the end), setting their amplitudes to half the previous value. This doubled configuration becomes the starting point for a new sampling run. The doubling is repeated until the desired $N_\omega$ is reached. This procedure can be performed at a rather high value of $\Theta$, before the simulated annealing used to fix $\Theta$ is carried out. Another option is to carry out an initial 2D scanning procedure over the lower and upper bounds of a spectrum without sampling, saving the configuration with the best $\chi^2$ value and subsequently using it to start the annealing procedure. Another issue, which can be observed for the small-$N_\omega$ cases in Fig.~\ref{nobroaden}, is that the collected spectral function $S(q,\omega)$ exhibits a complicated, unrealistic tail structure. The oscillations correspond to the individual $\delta$-functions fluctuating narrowly around their mean frequencies $\langle \omega_i \rangle$.
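The successive doubling of $N_\omega$ described above can be sketched as follows (Python; the choice of inserting the new $\delta$-functions at midpoints is our own assumption, as the text does not specify the insertion positions):

```python
def double_configuration(omega, amp):
    """Double N_omega: insert one delta-function between every two in the old
    configuration (here at the midpoints, an assumption) plus one more at the
    end, setting all amplitudes to half the previous (equal) value amp."""
    new_omega = []
    for i in range(len(omega) - 1):
        new_omega.append(omega[i])
        new_omega.append(0.5 * (omega[i] + omega[i + 1]))
    new_omega.append(omega[-1])
    # one more past the last delta-function, continuing the last spacing
    new_omega.append(omega[-1] + 0.5 * (omega[-1] - omega[-2]))
    new_amp = [0.5 * amp] * len(new_omega)
    return new_omega, new_amp
```

Midpoint insertion has the convenient property that the halved spacings inherit the monotonically increasing order of the old configuration, so the doubled configuration is immediately valid as a starting point for further sampling.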
When $N_\omega$ is large enough, the fluctuations of $\omega_i$ eventually do become larger than the mean separation $\langle d_i \rangle = \langle \omega_{i+1}-\omega_i \rangle$ for all $i$, and a smooth spectrum is then obtained if the sampling is sufficiently long. The oscillating behavior for a range of frequencies is observed clearly in Fig.~\ref{nobroaden} for $N_\omega=40$ and $80$. Though the peaks broaden somewhat with increasing sampling time, the peak structure is real and does not vanish until $N_\omega$ becomes larger; there is still some hint of oscillations even for $N_\omega=160$ in Fig.~\ref{nobroaden}, while for $N_\omega=320$ the spectrum appears rather noisy, due to the small histogram bins used, without any obvious oscillations. A simple way to circumvent the oscillations is to broaden the $\delta$-functions at the stage of accumulating the histogram. Since the spacing between the $\delta$-functions, for appropriate choices of $N_\omega$, is small on the scale of the true features of the spectrum, such a procedure is a reasonable way to obtain a smooth continuum. A convenient way to impose appropriate broadening automatically adapted to the scale of details of the spectrum is to regard the $\delta$-function at $\omega_i$ as a uniform distribution of the weight $1/N_\omega$ between $\omega_{i}$ and $\omega_{i+1}$, as we did for the remaining results presented in this section. In principle, the spectrum could also be regarded mathematically [in the integration of $S(\omega)$ giving $G(\tau)$ for the $\chi^2$ calculation] as a sum over such finite-width boxes. This would be much more time-consuming, however, and for reasonably large $N_\omega$ the differences would be negligible. In Sec.~\ref{sec:contedge2}, we will introduce a way of directly relating the mean spacing $\langle d_i\rangle =\langle \omega_{i+1}-\omega_i\rangle$ to a local spectral density.
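The box-broadening accumulation described above amounts to spreading each $\delta$-function's weight uniformly over the interval to its right neighbor when filling the histogram. A minimal Python sketch (the histogram layout and function name are ours; the last $\delta$-function, which has no right neighbor, is simply skipped here):

```python
def accumulate_boxes(hist, w_min, bin_width, omega, amp):
    """Add one sampled configuration to the histogram, spreading the weight
    amp[i] of the delta-function at omega[i] uniformly over [omega[i], omega[i+1]].
    The last delta-function has no box partner and is skipped in this sketch."""
    for i in range(len(omega) - 1):
        lo, hi = omega[i], omega[i + 1]
        if hi <= lo:
            continue
        density = amp[i] / (hi - lo)               # weight per unit frequency
        b_lo = int((lo - w_min) // bin_width)
        b_hi = int((hi - w_min) // bin_width)
        for b in range(max(b_lo, 0), min(b_hi, len(hist) - 1) + 1):
            left = max(lo, w_min + b * bin_width)
            right = min(hi, w_min + (b + 1) * bin_width)
            if right > left:
                hist[b] += density * (right - left)  # overlap of box and bin
    return hist
```

Because the box width tracks the local spacing, the broadening is automatically small near the edge (where the $\delta$-functions are dense) and larger in the dilute tail.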
\begin{figure}[t] \centering \includegraphics[width=70mm]{fig30.pdf} \caption{SAC spectral functions $S(q,\omega)$ (red curves) for selected momenta $q$ of the Heisenberg chain, sampled with $N_\omega=80$ $\delta$-functions. The broadening strategy explained in the text was used to smooth out the oscillations seen in the $q=4\pi/5$ results in Fig.~\ref{nobroaden} for small $N_\omega$. The results are compared with the corresponding BA results (black curves), which have also been subject to smoothing \cite{cauxdata} (even more so than the SAC results, resulting in more rounded peaks).} \label{sw-1d} \end{figure} Figure \ref{sw-1d} shows results for several momenta, again comparing with BA results for the same Heisenberg chain of size $L=500$. In all cases, we used only a modest number of $\delta$-functions, $N_\omega=80$, and the $\delta$-broadening strategy explained above was employed. With the exception of the case of the smallest momentum, $q=\pi/5$, both the edge and the tail are reproduced very well. For $q=\pi/5$ the SAC result is somewhat too narrow, though the result is still much better than what normally would be expected with numerical analytic continuation. Here it should again be noted that the BA spectra are broadened and the edges are therefore not divergent. The internal broadening applied in the SAC calculations at the stage of accumulating the spectrum is very small close to the edge, and the peaks are therefore much sharper. Results such as these simply cannot be obtained with conventional SAC or ME methods. The total computation time (single core) used for each of the spectral functions in Fig.~\ref{sw-1d} was only of the order of a few minutes.
We also stress again that there was no other input to these calculations beyond the QMC imaginary-time data and the choice of parametrization of the spectrum; the method of successive doubling of $N_\omega$ from $10$ to $80$ automatically adapted the edge location, and the value of $\langle \chi^2\rangle/N_\tau$ [again fixed according to Eq.~(\ref{eq:chi2}) after simulated annealing to find $\chi^2_{\rm min}$] was statistically good ($<1$) in all cases. It is clear that the parametrization used here favors an edge with a sharp peak, and the same entropic effect also sometimes tends to compress the overall width of the spectrum, as seen clearly in the $q=\pi/5$ result in Fig.~\ref{sw-1d}. In Sec.~\ref{sec:monoentropy} we will demonstrate explicitly that the entropic pressures produce an inverse square-root singularity natively, when a spectrum is sampled within fixed bounds without imaginary-time data input. This form is close to the singularity of $S(q,\omega)$ for $0 < q < \pi$ of the Heisenberg chain in the thermodynamic limit, but in the latter there is also a multiplicative logarithmic correction to the power law \cite{bougourzi96,karbach97}. In the presence of the QMC data, the shape of the peak is sufficiently pulled away from the native form to match the BA data remarkably well in Figs.~\ref{nobroaden} and \ref{sw-1d}. Moreover, the overall shape of the spectrum, the tail of which depends strongly on $q$, is highly non-trivial, and it was not a priori clear that any analytic continuation with QMC data of typical quality would be able to resolve it as well as we have demonstrated here. Results almost as good were obtained in Ref.~\cite{sandvik16}, but at much higher cost from explicitly optimizing the lower and upper spectral bounds on a frequency grid [the parametrization in Fig.~\ref{fig:spec}(a)].
\subsection{Edge optimization} \label{sec:triangle} Given that the parametrization used here favors a divergent edge, a relevant question is whether a sharp non-divergent edge can also be reproduced in this way. The answer is yes, but the entropic pressure favoring the divergence has to be suppressed. To do this, we here introduce a simple parameter that acts to regularize the edge, namely, a minimum distance $\Delta\omega$ between the first two $\delta$-functions, \begin{equation} d_1 \equiv \omega_2-\omega_1 \ge \Delta\omega. \label{dconstraint} \end{equation} The parameter $\Delta\omega$ is optimized by scanning as in Fig.~\ref{fig:optim} and held fixed during the sampling, with the increasing spacing between all $\delta$-functions maintained as before; $d_{i+1} \ge d_i$ for all $i \in \{1,\ldots,N_\omega-2\}$. The modifications of the sampling algorithm described in the previous section are trivial. Here we use a synthetic test spectrum that may appear contrived from the physics standpoint but illustrates very well the power of the method---a triangular profile illustrated in Fig.~\ref{tri}. We constructed $G(\tau)$ from this spectrum and added correlated noise at the level $\sigma=10^{-6}$. If we sample with only the distance-increasing constraint as described in the previous section, a very sharp edge peak is obtained, as shown with the blue curve in Fig.~\ref{tri}, and after this peak some ringing can be observed. The inability to correctly resolve the shape of the edge reflects the fact that the parametrization has an inherent entropic pressure favoring a sharp edge peak, which we will discuss further in Sec.~\ref{sec:monoentropy}. \begin{figure}[t] \centering \includegraphics[width=70mm]{fig31.pdf} \caption{Test with synthetic data where the correct spectral function has a triangular shape (shown in black). The result of SAC sampling with only the distance-increasing constraint is shown in blue.
The goodness-of-fit scan over the minimum edge distance $\Delta\omega$ of the $\delta$-functions is shown in the inset, with the optimal value indicated by the red lines. The red and green curves in the main graph show spectra sampled with the optimal value of the parameter and at a slightly ($10\%$) higher value (the green line in the inset), respectively. In all cases, $N_\omega=160$ $\delta$-functions were used, and when collecting the histogram the spectral weight at $\omega_i$ was distributed uniformly within the range $[\omega_{i},\omega_{i+1}]$.} \label{tri} \end{figure} We next optimize the edge parameter $\Delta\omega$ by scanning and identify a minimum in $\langle \chi^2\rangle$, as shown in the inset of Fig.~\ref{tri}. Here the minimum appears to be rather shallow, but note that the figure zooms in on a narrow range of $\Delta\omega$ values. The red and green curves in Fig.~\ref{tri} show the spectrum sampled at the $\Delta\omega$ value corresponding to the $\langle \chi^2\rangle$ minimum and a slightly larger value, respectively. At and close to the optimum, the entire triangle is resolved well, including the almost linear decay terminating at the correct upper bound. For this non-divergent spectral function, the edge optimization within the otherwise unrestricted SAC, illustrated in Fig.~\ref{w0fix}, can also be used with very good results. Here we used the distance-monotonic parametrization to demonstrate that it can also reproduce a non-divergent edge (of an otherwise monotonically decaying spectrum). The non-divergent edge can be essentially of any shape with this approach, as long as it is monotonic. Applying the same edge optimization to $S(q,\omega)$ of the Heisenberg chain, we also find a minimum in $\langle \chi^2 \rangle$, but in this case the optimal value of $\Delta\omega$ is very close to $0$. The spectrum (not shown here) at the optimal point is almost indistinguishable from the results in Figs.~\ref{nobroaden} and \ref{sw-1d}. 
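The scanning optimization of a constraint parameter such as $\Delta\omega$ reduces to a simple generic skeleton (Python; `chi2_of` stands in for a full SAC sampling run at the given parameter value and is a placeholder of our own):

```python
def scan_parameter(values, chi2_of):
    """Scan a constraint parameter over a grid of values, running the
    (expensive) constrained sampling at each one, and return the value
    minimizing the mean goodness of fit <chi^2>, along with that minimum."""
    results = [(chi2_of(v), v) for v in values]
    best_chi2, best_v = min(results)   # tuple comparison: smallest chi^2 wins
    return best_v, best_chi2
```

In practice each call to `chi2_of` would itself involve equilibration and averaging of $\langle\chi^2\rangle$ at fixed $\Delta\omega$, so the grid of values should be kept modest and refined around the emerging minimum.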
This example, too, illustrates the success of the constraint optimization method. \section{Edge singularities II} \label{sec:contedge2} Here we present further insights into and generalizations of the SAC method for edge singularities. In Sec.~\ref{sec:density} we first discuss an alternative way of defining the average spectrum as a local density of $\delta$-functions, which allows more straightforward access to the exponent governing a power-law singularity. In Sec.~\ref{sec:monoentropy} we demonstrate that the native entropy-driven form of a spectrum parametrized with monotonically increasing distances produces a singularity of the form $S(\omega) \to (\omega - \omega_0)^{p}$ with $p=-1/2$ when $\omega \to \omega_0$. In order to model a different asymptotic form of the singularity, in Sec.~\ref{sec:monop} we modify the parametrization in Fig.~\ref{fig:spec}(e) by making the formerly constant amplitudes $A_i$ dependent on $i$ in a way that can produce a native power-law form with any exponent $p$ (positive or negative). Finally, in Sec.~\ref{sec:monomixed} we discuss how to mix the parametrizations in Figs.~\ref{fig:spec}(b) and \ref{fig:spec}(e) to model a power-law singularity followed by an arbitrary (not necessarily monotonically decaying) continuum. \subsection{Spectral density and asymptotic form} \label{sec:density} As we observed in Sec.~\ref{sec:heismono} (Fig.~\ref{nobroaden}), with the distance-monotonicity imposed, the locations $\omega_{i}$ of the individual $\delta$-functions fluctuate only weakly around well-defined mean values $\langle \omega_{i}\rangle$. In contrast, in a continuum sampled without restrictions, even if the frequencies are ordered so that $\omega_i \ge \omega_{i-1}$ at the stage when expectation values are accumulated, the fluctuations in $\omega_i$ are large compared to the mean separation $\langle \omega_i-\omega_{i-1}\rangle$.
\begin{figure}[t] \centering \includegraphics[width=75mm]{fig32.pdf} \caption{Constrained SAC results (red curve) for the dynamic structure factor at $q=4\pi/5$ of the Heisenberg chain with $L=500$ spins, sampled with $N_\omega = 400$ distance-monotonic $\delta$-functions and using the self-generated grid definition Eq.~(\ref{sdensity}) for the spectral weight density. The corresponding BA result \cite{caux05a,caux05b,cauxdata} is shown as the black curve. The inset shows results for different numbers $N_\omega$ of sampled $\delta$-functions on a log-log scale after subtraction of the sampled mean edge position $\omega_0=\langle\omega_1\rangle$ from $\omega$. The blue line shows the form $(\omega-\omega_0)^{-1/2}$. The sampling temperature was $\Theta=1$ for $N_\omega=100$ and slightly lower for $N_\omega=200$ and $400$ [determined according to Eq.~(\ref{eq:chi2}) with $a=0.5$].} \label{fig:hbergsdens} \end{figure} Because of the small frequency fluctuations in the monotonicity-constrained SAC, it is appropriate to convert the distance between successive mean frequencies into a local spectral density by \begin{equation} S(\omega_{i+1/2})=\frac{(A_{i}+A_{i+1})/2}{\langle \omega_{i+1}\rangle-\langle \omega_{i}\rangle}, \label{sdensity} \end{equation} where we have distributed half of the spectral weight at $\omega_i$ and $\omega_{i+1}$ in the window between these frequencies and assigned this weight to the midpoint, \begin{equation} \omega_{i+1/2} = \frac{\langle \omega_{i+1}\rangle+\langle \omega_{i}\rangle}{2}. \label{selfomega} \end{equation} We will later generalize the equal-amplitude $\delta$-functions, $A_i=1/N_\omega$, to $i$-dependent amplitudes and therefore include these in the definition Eq.~(\ref{sdensity}) of the spectral density. We also note that the spectrum should finally be multiplied by the pre-normalization value of $\pi\bar G(\tau=0)$ to correct for the normalization used in the sampling, as discussed in Sec.~\ref{sec:qmcdata}.
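In code, Eqs.~(\ref{sdensity}) and (\ref{selfomega}) amount to the following conversion (a Python sketch with our own naming):

```python
def spectral_density(mean_omega, amp):
    """Convert mean delta-function positions <omega_i> and amplitudes A_i into
    a spectral density on the self-generated midpoint grid:
    S(omega_{i+1/2}) = [(A_i + A_{i+1})/2] / (<omega_{i+1}> - <omega_i>)."""
    grid, dens = [], []
    for i in range(len(mean_omega) - 1):
        dw = mean_omega[i + 1] - mean_omega[i]
        grid.append(0.5 * (mean_omega[i + 1] + mean_omega[i]))  # Eq. (selfomega)
        dens.append(0.5 * (amp[i] + amp[i + 1]) / dw)           # Eq. (sdensity)
    return grid, dens
```

The resulting grid is nonuniform by construction, dense where the $\delta$-functions crowd together near the edge and sparse in the tail.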
The mean frequencies defined in Eq.~(\ref{selfomega}) form a self-generated nonlinear grid, which automatically adjusts to the singularity by forming an increasing density of points at the same rate as $S(\omega)$ diverges. We have confirmed that this manner of defining the average spectrum agrees extremely well with the histogram method where they can be compared (i.e., when the histogram bins are sufficiently small on the scale of the relevant variations in the spectral function). Figure~\ref{fig:hbergsdens} shows results for the Heisenberg chain with 500 spins at $q=4\pi/5$. Here the QMC data $\bar G(\tau_i)$ were calculated on the non-linear grid shown in Fig.~\ref{fig:gtau} (the data used are exactly those in the figure) at temperature $T=10^{-3}$. A total of $N_\omega=400$ $\delta$-functions were used in the SAC averaging, and the definition Eq.~(\ref{sdensity}) was used for the final spectrum displayed in the figure. The lower bound is within $0.5\%$ of the BA result and the upper bound is also very closely reproduced. Here it should be noted that the edge is by definition completely sharp when the spectrum is graphed based on the density definition in Eq.~(\ref{sdensity}). However, the lowest frequency $\omega_1$ fluctuates very little, and the edge is sharp (and the peak very tall) also when the spectral density is collected in a histogram in the standard way (as we did in Fig.~\ref{nobroaden}). With the histogram, it is more difficult to investigate the asymptotic form of the spectrum, since a small bin size is needed close to the edge and the results are more affected by noise. With the self-generated frequency points becoming very dense as the edge is approached, it is indeed possible to use Eq.~(\ref{sdensity}) to investigate the asymptotic form of the divergence. As shown in the inset of Fig.~\ref{fig:hbergsdens}, the SAC perfectly captures the known power-law form $(\omega-\omega_0)^{-1/2}$ when $\omega -\omega_0 \lesssim 0.1$.
However, there should also be a multiplicative logarithmic correction to this form in the Heisenberg chain \cite{bougourzi96,karbach97}, which is not apparent in the SAC result. We will address this issue further in Sec.~\ref{sec:monop}. \subsection{Entropic pressure} \label{sec:monoentropy} It is clear that the $\delta$-functions constrained by the monotonically increasing spacing will favor a spectrum with a sharp edge at some frequency $\omega_0$, with only small fluctuations when the $\bar G(\tau)$ data are good. It is not a priori clear, however, exactly what kind of edge forms asymptotically for $\omega \to \omega_0$. If the frequencies are sampled in the absence of data without any bounds, they will clearly diffuse out and cover the infinite frequency range, and to test for a native shape of the average spectrum in the absence of $\bar G(\tau)$ data we have to impose bounds. \begin{figure}[t] \centering \includegraphics[width=77mm]{fig33.pdf} \caption{Spectral functions obtained by purely entropic sampling of $N_\omega=50$ (black dots) and $N_\omega=100$ (red dots) $\delta$-functions with the monotonic distance constraint [Fig.~\ref{fig:spec}(e)] within the bounds $\omega_1=0$ and $\omega_{N_\omega}=1$. The spectral density, graphed on a log-log scale, was extracted from the mean frequencies $\langle \omega_i\rangle$ according to Eq.~(\ref{sdensity}) on the grid defined by the same data points according to Eq.~(\ref{selfomega}). The blue line has slope $-1/2$, demonstrating that $S(\omega)$ takes the asymptotic form $\omega^{-1/2}$ for $\omega \to 0$.
The inset shows the same $N_\omega=100$ result (for clarity using line segments connecting the data points in the main graph) compared with $S(\omega)$ collected in a histogram with bin width $\Delta=0.005$ (with the points located at the center of the histogram bins).} \label{n50n100} \end{figure} In the case of unconstrained sampling, the mean spectral density will trivially be uniform between any applied fixed bounds, effectively providing a flat default model between those bounds. To investigate a corresponding native form when the monotonicity constraint is imposed, we sample this parametrization within the bounds $\omega \in [0,1]$ for different numbers $N_\omega$ of $\delta$-functions and define the spectral density from the resulting mean frequencies according to Eq.~(\ref{sdensity}). Results are shown in Fig.~\ref{n50n100} for $N_\omega=50$ and $N_\omega=100$, for which the profile is already well converged except very close to the lower bound. We clearly observe an $\omega^{-1/2}$ behavior for small $\omega$. The inset shows a comparison of the $N_\omega=100$ results with the standard way of collecting the density in a histogram. The two representations of $S(\omega)$ agree fully, except for the lowest-$\omega$ histogram bin, for which the total weight in that bin naturally cannot capture the correct divergent form. There are also deviations at the highest frequencies, where the histogram weights show large oscillations stemming from the fact that the last few $\delta$-functions do not fluctuate much on the scale of their typical spacing (as in Fig.~\ref{nobroaden}, which is not easy to see on the log scale in Fig.~\ref{n50n100} because these points are piled up at the upper bound). The density definition Eq.~(\ref{sdensity}) circumvents this issue by construction.
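The entropic $\omega^{-1/2}$ form can also be checked without any MCMC: a configuration of monotonically increasing distances in $[0,1]$ drawn with uniform weight is equivalent to the sorted spacings of uniformly distributed points (a standard order-statistics identity for the Dirichlet simplex), so the native density can be estimated by direct sampling. A Python sketch (all parameter values are ours):

```python
import math
import random

def entropic_edge_slope(n_delta=200, n_samp=2000, i_max=20, seed=7):
    """Estimate the power-law exponent of the native (purely entropic) spectral
    density near the lower edge for the monotonic-distance parametrization with
    omega_1 = 0 and omega_N = 1 fixed; the expected result is -1/2."""
    rng = random.Random(seed)
    mean_omega = [0.0] * n_delta
    for _ in range(n_samp):
        pts = sorted(rng.random() for _ in range(n_delta - 2))
        spac = [b - a for a, b in zip([0.0] + pts, pts + [1.0])]
        spac.sort()                  # uniform sample of the ordered simplex
        w = 0.0
        for i in range(1, n_delta):
            w += spac[i - 1]         # omega_i = cumulative sorted spacings
            mean_omega[i] += w / n_samp
    # density via Eq. (sdensity) with equal amplitudes 1/N, near the edge
    xs, ys = [], []
    for i in range(1, i_max):
        dw = mean_omega[i + 1] - mean_omega[i]
        xs.append(math.log(0.5 * (mean_omega[i + 1] + mean_omega[i])))
        ys.append(math.log((1.0 / n_delta) / dw))
    # least-squares slope of log S versus log omega
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```

Running this yields a fitted slope close to $-1/2$, consistent with the MCMC results in Fig.~\ref{n50n100}.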
These results show that, within a fixed frequency window, the entropic pressures of the monotonicity constraint induce a singularity of the form $(\omega-\omega_1)^{-1/2}$, i.e., the same form found in Fig.~\ref{fig:hbergsdens} for the Heisenberg chain. Thus, the correct edge type (apart from the expected logarithmic correction) had been implicitly supplied to the process, along with the monotonic decay above the edge. Note, however, that the $(\omega-\omega_0)^{-1/2}$ form is exhibited only very close to the edge (see the inset of Fig.~\ref{fig:hbergsdens}), with deviations setting in already at $\omega-\omega_0 \approx 0.1$; good agreement with the BA spectrum is nevertheless observed over the full profile (except very close to the edge, where the BA result has been broadened). The native shape obtained with this parametrization, when sampled within a fixed frequency window, also exhibits the asymptotic form only very close to the lower edge, as seen in Fig.~\ref{n50n100}. While in both cases the decay is faster than $(\omega-\omega_0)^{-1/2}$ away from the edge, the shapes of the tails are clearly different in Figs.~\ref{fig:hbergsdens} and \ref{n50n100}. Thus, the $\bar G(\tau)$ data, when the statistical errors are small enough, can pull the edge away from its native entropic form toward the correct form, except very close to the edge, where the entropy imposes the native form. It is again noteworthy how even the upper bound of the spectrum is very well reproduced in Fig.~\ref{fig:hbergsdens}. \subsection{Singularity with arbitrary exponent} \label{sec:monop} An important question now is whether the method introduced here is only applicable to singular edges of the form $(\omega-\omega_0)^{-1/2}$.
We already saw in Sec.~\ref{sec:triangle} that the divergence can be quenched by constraining the first spacing $d_1 \equiv \omega_2-\omega_1$ according to Eq.~(\ref{dconstraint}), but here we are interested in modeling more general divergent forms, and also non-divergent power-law singularities. It should be noted that the asymptotic form of the peak is not always of critical importance as it typically applies only very close to the edge (as seen in Fig.~\ref{fig:hbergsdens}) and it may be more important to accurately determine the location of the edge and the profile away from the very close proximity of the peak. A formally incorrect exponent for the divergence is also much preferable to the drastic rounding of the peak and associated distortions at higher frequencies introduced with other methods (as in the unrestricted sampling results in Fig.~\ref{sw2}, and even with the correct lower bound imposed in Fig.~\ref{w0fix}). Nevertheless, it would be additional icing (or whipped cream) on the cake if asymptotic power laws could also be reproduced with the correct exponent. We here show that this is actually possible with a further generalization of the monotonicity constraint, by introducing an amplitude profile with an adjustable parameter to accommodate generic asymptotic power-law edges with positive or negative exponents. \begin{figure*}[t] \centering \includegraphics[width=140mm]{fig34.pdf} \caption{Synthetic $T=0$ spectrum with edge divergence exponent $p=-1/3$ (black curves) reproduced with SAC (red curves) using $N_\omega=400$ $\delta$-functions with monotonicity constraint and the dynamic form Eq.~(\ref{lnamp}) of the amplitudes, with $c=2p+1=1/3$, $\epsilon=0$, and $n_0$ importance sampled along with the frequencies. 
Results are shown at four sampling temperatures: (a) $\Theta=10^4$ ($\langle \chi^2\rangle/N_\tau \approx 2.5\times 10^4$), (b) $\Theta=1$ ($\langle \chi^2\rangle/N_\tau \approx 0.99$), (c) $\Theta=10^{-1}$ ($\langle \chi^2\rangle/N_\tau \approx 0.86$), (d) $\Theta=10^{-3}$ ($\langle \chi^2\rangle/N_\tau \approx 0.83$). The noise level in $G(\tau)$ was $4\times 10^{-6}$ and 47 $\tau$ values on a roughly quadratic grid between $\tau=0.05$ and $\tau=16.2$ were used.} \label{sw33} \end{figure*} Consider an arbitrary power-law edge, $S(\omega \to \omega_0^+) \propto (\omega-\omega_0)^{p}$. We first make use of the fact that the native entropic pressure on the $\delta$-functions constrained as in Fig.~\ref{fig:spec}(e) corresponds to $p=-1/2$. For large $N_\omega$, we can consider the denominator of Eq.~(\ref{sdensity}) as the derivative of $\omega$ with respect to $i$. Setting $\omega_0=0$ for simplicity, we then have $S(\omega) \propto (d\omega/di)^{-1}$. Assuming $\omega_{i+1}-\omega_i = d\omega/di \propto i^a$, we have $\omega \propto i^{a+1}$ and $d\omega/di \propto S^{-1}(\omega) \propto \omega^{a/(a+1)}$. Given that the actual exponent here should be $a/(a+1)=1/2$, we have $a=1$, i.e., for constant $A_i$ we will obtain $\langle\omega_{i+1}-\omega_i\rangle \propto i$ and $\omega \propto i^2$ for $i \ll N_\omega$ and large $N_\omega$. If we now let the amplitudes vary with the index $i$ as \begin{equation} A_i \propto i^c, \label{anform} \end{equation} then, using $i \propto \omega^{1/2}$, the spectral density as defined in Eq.~(\ref{sdensity}) gives \begin{equation} S(\omega) \propto (\omega-\omega_0)^{(c-1)/2}, \label{scform} \end{equation} by entropy alone for $\omega$ close to $\omega_0$. Thus, if we want a specific exponent $p$, we should choose $c=2p+1$ in Eq.~(\ref{anform}).
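The relation $c=2p+1$ can be verified numerically: sampling the ordered simplex entropically (again via sorted spacings of uniform points, an equivalence we assume here in place of the MCMC) and weighting the density with $A_i \propto i^c$ should yield an edge exponent $(c-1)/2$. A Python sketch (parameter values are ours):

```python
import math
import random

def weighted_edge_exponent(c, n_delta=200, n_samp=2000, i_max=20, seed=11):
    """Edge exponent of the entropic density when the amplitudes vary as
    A_i ~ i^c; the prediction of Eq. (scform) is p = (c - 1)/2."""
    rng = random.Random(seed)
    mean_omega = [0.0] * n_delta
    for _ in range(n_samp):
        pts = sorted(rng.random() for _ in range(n_delta - 2))
        spac = sorted(b - a for a, b in zip([0.0] + pts, pts + [1.0]))
        w = 0.0
        for i in range(1, n_delta):
            w += spac[i - 1]          # cumulative sorted spacings give omega_i
            mean_omega[i] += w / n_samp
    xs, ys = [], []
    for i in range(1, i_max):
        dw = mean_omega[i + 1] - mean_omega[i]
        amp = 0.5 * ((i + 1) ** c + (i + 2) ** c)  # A_i ~ i^c (unnormalized)
        xs.append(math.log(0.5 * (mean_omega[i + 1] + mean_omega[i])))
        ys.append(math.log(amp / dw))
    # least-squares slope of log S versus log omega near the edge
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```

For $c=1/3$ the fitted slope comes out close to $-1/3$, as the relation $p=(c-1)/2$ predicts.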
This entropy-favored form should prevail also when sampling without explicit frequency bounds in the presence of imaginary-time data with the probability distribution Eq.~(\ref{psg}), as it did in the case of the Heisenberg chain with constant amplitudes (Fig.~\ref{fig:hbergsdens}), because the lower edge is always eventually, for sufficiently large $N_\omega$, entropy dominated. The original constrained parametrization with constant $A_i$ has the advantage that it can easily reproduce an arbitrary monotonic decay. To preserve this property for arbitrary $p$, we let the modified amplitudes $A_i \propto i^c$ cross over into the constant form, which we implement by \begin{equation} \ln(A_i)=b+cx_i/2 \pm \sqrt{(cx_i/2)^2+\epsilon^2},~~~~x_i \equiv \ln ({i}/{n_0}), \label{lnamp} \end{equation} where $b$ is the constant normalizing the sum of amplitudes and $+$ and $-$ correspond, respectively, to $c<0$ and $c>0$. The parameter $n_0$ in the definition of $x_i$ is real-valued in the range $n_0 \in [1,N_\omega]$ and sets the point of cross-over to the constant amplitudes. Where this cross-over takes place should be dictated by the QMC data, and therefore $n_0$ is sampled along with the other Monte Carlo updates of the spectrum (note that all amplitudes have to be re-normalized when $n_0$ is changed). The parameter $\epsilon$ in Eq.~(\ref{lnamp}) imposes a rounding of the cross-over, but we have found that such a parameter is mostly not needed, as the fluctuations in the sampling of $n_0$ impose a de facto natural (data-imposed) small rounding as well; thus we use $\epsilon=0$ henceforth, though in some cases it may still be better to use some small non-zero value. The eventual goal of this modified parametrization will be to determine the best value of the exponent $p$ according to the generic scheme in Fig.~\ref{fig:optim}. We will show that such an optimization can indeed be carried out and produces surprisingly good results.
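Eq.~(\ref{lnamp}) can be implemented as follows (Python sketch, function name ours). For $\epsilon=0$ and $c>0$ the form reduces exactly to $A_i \propto i^c$ for $i \le n_0$ and to a constant for $i \ge n_0$:

```python
import math

def amplitudes(n_omega, c, n0, eps=0.0):
    """Amplitude profile ln(A_i) = b + c*x/2 +/- sqrt((c*x/2)^2 + eps^2),
    x = ln(i/n0); '+' applies for c < 0 and '-' for c > 0; the constant b
    normalizes the amplitudes to unit sum."""
    sign = 1.0 if c < 0 else -1.0
    raw = []
    for i in range(1, n_omega + 1):
        x = math.log(i / n0)
        raw.append(math.exp(c * x / 2 + sign * math.sqrt((c * x / 2) ** 2 + eps ** 2)))
    norm = sum(raw)                  # fixes the normalization constant b
    return [a / norm for a in raw]
```

A nonzero $\epsilon$ simply rounds the kink at $i=n_0$; in the sampling, a call like this would be repeated whenever $n_0$ is updated, since all amplitudes must then be re-normalized.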
We first present some results where the correct exponent is imposed from the outset, with the aim of investigating how well the edge location and the overall profile can be determined when reproducing a synthetic, non-trivial spectrum. In Fig.~\ref{sw33} we present tests for a synthetic spectrum with edge exponent $p=-1/3$, with the power law cut off at high frequency by an exponential tail. To make the profile more complicated we also added a broad Gaussian, cut off at the edge and with other parameters chosen such that the spectrum is still monotonically decaying above the edge at $\omega_0=1/2$. We present results at four different sampling temperatures $\Theta$, to investigate the role of the optimal-$\Theta$ criterion with this parametrization (which we postponed in Sec.~\ref{sec:contedge2}). \begin{figure*}[t] \begin{center} \includegraphics[width=110mm]{fig35.pdf} \caption{Results of five independent runs for the same synthetic spectrum and $\bar G(\tau)$ data as in Fig.~\ref{sw33}, all at $\Theta=1$. The mean amplitudes (here multiplied by $N_\omega$, so that the average is $1$) resulting from the sampling of $n_0$ according to Eq.~(\ref{lnamp}) with $\epsilon=0$ and $c=2p+1=1/3$ are shown in (a) vs the corresponding mean frequencies $\omega=\langle \omega_i\rangle$. The five runs have converged to two different groups; the amplitudes in the first group (three of the runs) are shown in black and those of the second group (two runs) are shown in red. Within these groups, the data points are indistinguishable on the scale shown here. The corresponding spectral functions are shown using the same color coding in (b), with the exact spectrum drawn in blue.
All the curves fall very close to each other and are hard to distinguish.} \label{n0runs} \end{center} \end{figure*} At $\Theta=10^4$, Fig.~\ref{sw33}(a), where $\langle \chi^2\rangle$ is very far from acceptable, the spectral weight is still roughly in the correct window but the edge is a bit too high and there are clear deviations from the exact spectrum everywhere. In contrast, at $\Theta=1$ and $\Theta=0.1$, Figs.~\ref{sw33}(b) and \ref{sw33}(c), the $\langle \chi^2 \rangle$ values are good and no significant deviations from the exact spectrum can be seen on the scale of the figures (the edge in both cases is less than $0.5\%$ above the correct frequency $\omega_0$). Close to the $\Theta \to 0$ limit, in Fig.~\ref{sw33}(d) the spectrum for $\Theta=10^{-3}$ again has deteriorated, but the edge is still very close to its correct location. Unlike the case of free sampling (e.g., in Fig.~\ref{sw2}), with the monotonicity constraint no multi-peak structure can appear as $\Theta \to 0$, and instead a series of still monotonically decaying shoulders has formed. The main messages here are that (i) $\Theta > 0$ is still required for the best fidelity and (ii) the sampled spectrum is very insensitive to the exact value of $\Theta$ in the range of reasonable values $a \le 1$ in the $\langle \chi^2\rangle$ criterion, Eq.~(\ref{eq:chi2}), here corresponding to $\Theta \lesssim 1$. We next demonstrate in more detail how the cross-over between the imposed asymptotic power-law form, Eq.~(\ref{anform}), and the constant amplitudes is realized when $n_0$ in Eq.~(\ref{lnamp}) is sampled together with the other degrees of freedom of the spectrum (as was also done for Fig.~\ref{sw33}). Fig.~\ref{n0runs}(a) shows results for the mean values of amplitudes from five independent $\Theta=1$ runs with the same $\bar G(\tau)$ data as in Fig.~\ref{sw33}, graphed versus the corresponding mean frequencies.
The cross-over behavior is rather sharp, but no apparent anomalies are seen (on the scale of the plot) at the corresponding cross-over frequencies in the spectral functions in Fig.~\ref{n0runs}(b). The spectral function is less sensitive to $n_0$ than the amplitudes (i.e., the sampling can compensate for a sub-optimal $A_i$ form), which is illustrated by the fact that the amplitudes of the five runs converged to two groups of slightly different mean values, but no such obvious grouping can be observed in the spectral functions. In all cases, the broad tail of the synthetic spectrum is reproduced essentially perfectly, illustrating that the mean sampled frequencies adapt correctly to the amplitudes. If the simulated annealing process is sufficiently slow, and the final spectrum is sampled long enough, the amplitudes (i.e., the cross-over point $n_0$) should also converge consistently. The runs here were rather short (less than a minute for each of 150 annealing steps and about 30 minutes spent at $\Theta=1$). If the exponent $p$ is not known, it can be determined at least approximately by monitoring $\langle \chi^2\rangle$ at fixed $\Theta$, as illustrated in Fig.~\ref{p33scan} for runs using the same $p=-1/3$ data as above. As shown in the left inset of the figure, a minimum forms in $\langle \chi^2(p)\rangle$ very close to the correct value of the exponent when sampling at $\Theta=1$. At a higher temperature, $\Theta=3$ (where we have subtracted a constant from $\langle \chi^2\rangle/N_\tau$ in order to better visualize the two data sets in the same graph), the minimum is shifted to $p \approx -0.38$. Thus, we see a trend similar to that in the tests with the $\delta$-function edge in Sec.~\ref{sec:delta2}, where the correct value of the constraint is approached when the sampling temperature is sufficiently low (if the data quality is also sufficiently good), though the optimum is also harder to discern for lower $\Theta$.
The location of the edge, shown versus $p$ in the right inset of Fig.~\ref{p33scan}, also takes its best value (within $0.5\%$ of the correct value when $\Theta=1$) when the exponent $p$ is in the close neighborhood of its correct value. While the input value of $c=2p+1$ in the amplitude form, Eq.~(\ref{anform}), dictates the asymptotic form of the edge singularity, the shape of the spectrum away from the very close vicinity of the peak is not very sensitive to the exact value of $c(p)$ used. The main part of Fig.~\ref{p33scan} shows spectra for both $p=-0.3$ and $p=-0.4$, but these curves are too close to each other and the correct profile to be distinguishable in the plot. The fact that the value of $p$ at the $\langle \chi^2\rangle$ minimum (left inset of Fig.~\ref{p33scan}) is considerably better at $\Theta=1$ than at $\Theta=3$ indicates that the data quality in this case is barely sufficient to extract the correct asymptotic exponent, though the resolution of the overall profile, except for the very tip of the divergent peak, is already very stable, with almost indistinguishable results obtained at $\Theta=1$ and $\Theta=3$ (not shown). We stress again that the error level is realistic for actual (long) QMC simulations: $\sigma = 4\times 10^{-6}$, which is close to the noise level in the QMC data for the Heisenberg chain used previously and graphed in Fig.~\ref{fig:gtau}. Here it should be noted that very long sampling runs are required in order for $\langle \chi^2\rangle$ to be determined precisely enough to detect the $\langle \chi^2\rangle$ minimum---about 30 CPU hours were used for each $p$ value (in parallel runs) in the case shown in Fig.~\ref{p33scan}, in order to reach sufficiently small error bars. It is nevertheless remarkable that information on the value of $p$ is contained in the $\bar G(\tau)$ data. With still lower noise level, the optimal exponent becomes easier to identify.
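In practice, the minimum of the noisy $\langle \chi^2(p)\rangle$ scan must be located from a discrete set of independent runs. One simple way to do this (our own post-processing suggestion, not a prescription of the method itself) is a local parabolic fit around the smallest measured value:

```python
import numpy as np

def chi2_minimum(p_values, chi2_means):
    """Estimate the location of the <chi^2>(p) minimum from scan data by
    fitting a parabola to the points around the smallest measured value
    (a hypothetical helper; the chi2 values come from separate runs)."""
    p = np.asarray(p_values, dtype=float)
    y = np.asarray(chi2_means, dtype=float)
    k = int(np.argmin(y))
    lo, hi = max(0, k - 2), min(len(p), k + 3)   # small window around the min
    a, b, _ = np.polyfit(p[lo:hi], y[lo:hi], 2)  # highest degree first
    return -b / (2 * a)                          # vertex of the parabola

# synthetic scan data mimicking the shape of the left inset of the figure
# (illustrative numbers only, not the measured values):
p_scan = np.linspace(-0.6, -0.1, 11)
chi2 = 0.99 + 4.0 * (p_scan + 1.0/3.0)**2        # minimum at p = -1/3
```

With error bars on $\langle \chi^2\rangle$, a weighted fit over the same window would give an uncertainty estimate for the optimal $p$ as well.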
We also point out the well known fact that the effort of generating good imaginary-time data with QMC simulations normally far exceeds what is required even in slow SAC procedures such as the above. We stress that the results shown here are typical in how edge exponents can be resolved with data of similar noise levels and with monotonically decaying continua of various shapes. With decreasing (more negative) exponent $p$ (sharper divergence), the sampling of the spectrum becomes less efficient, however, and the slower convergence of $\langle \chi^2\rangle$ (larger error bars for given sampling time) can pose an obstacle for determining the exponent to good precision when $p \lesssim -0.8$. Even so, as exemplified above, even if the exponent is formally not quite correct, the overall profile of the spectrum, including the location of the edge, is invariably well reproduced. We have also carried out scans similar to those in Fig.~\ref{p33scan} with the Heisenberg simulation data (same as in Fig.~\ref{fig:hbergsdens}). We find that the optimal exponent is $p\approx -0.7$ in this case. The more negative value than the naively expected $p=-0.5$ can be understood on account of the fact that there is a multiplicative correction to the power-law edge \cite{bougourzi96,karbach97}. Apart from minor changes in the peak form very close to $\omega_q$, the spectrum overall is essentially unchanged (and therefore not shown here) from the results in Fig.~\ref{fig:hbergsdens}. \begin{figure}[t] \begin{center} \includegraphics[width=80mm]{fig36.pdf} \caption{Synthetic spectrum with edge exponent $p=-1/3$ (black curve) and almost indistinguishable results from SAC with $p=-0.3$ and $-0.4$ (blue and red curves), sampled with $N_\omega=400$ at $\Theta=1$.
The left inset shows the mean goodness of the fit obtained by sampling with different exponents $p$ at $\Theta=1$ (red circles) and $\Theta=3$ (blue circles, where $0.26$ has been subtracted from the results); the vertical dashed line indicates the correct value $p=-1/3$. The right inset shows the location of the edge versus the exponent for the same $\Theta$ values; the correct value $1/2$ is indicated with the horizontal dashed line.} \label{p33scan} \end{center} \end{figure} We elaborate a bit more on the asymptotic divergence of $S(q,\omega)$ in the case of the Heisenberg chain. It is well known that logarithmic corrections in general (e.g., in the distance dependence of static correlation functions) can lead to ``effective'' power laws with slightly different exponents when analyzed within a finite range of the argument (see, e.g., results for the correlation function of the Heisenberg chain in Ref.~\cite{sandvik10}). It is therefore not surprising that analytic continuation based on a small imaginary-time data set can produce a similar, larger (in magnitude) effective exponent in the dynamic structure factor, in the above example $p \approx -0.7$ instead of $p=-0.5$, to account for the substantial correction from the also divergent log factor. It should also again be noted that, even for $L=500$, the $T=0$ spectrum for the Heisenberg chain is a sum of a not very large number of $\delta$-functions \cite{wang19}, and this deviation from a strictly continuous spectrum may also have some impact on the analytic continuation with a parametrization that mimics a true continuum to a higher degree when $N_\omega$ is large. We discuss one more example, where the edge is not divergent but is controlled by a power law with positive exponent. We test the form $(\omega -1)^{1/2}$, which we further multiply by an exponential function ${\rm e}^{-(\omega-3)}$ for $\omega > 3$; this synthetic spectrum is shown in the inset of Fig.~\ref{m50}. 
Here the SAC sampling was first run with the correct input exponent $c=2$ corresponding to $p=0.5$ in the amplitude form Eq.~(\ref{anform}). The resulting spectrum agrees very well with the exact profile. The cross-over between the power-law increase and the exponential decay is rather sharp in the synthetic spectrum, and the error in the SAC spectrum is also the largest at the peak (which is seen more clearly on the linear scale used in the inset of Fig.~\ref{m50}), with a rather sharp kink corresponding to only small fluctuations in the cross-over parameter $n_0$ in Eq.~(\ref{lnamp}). It is still noteworthy that the sampling of $n_0$ converges the cross-over to the correct frequency region. Using a small non-zero value of $\epsilon$ in the cross-over form leads to a more rounded maximum, but here we want to avoid additional parameters and consistently show results for $\epsilon=0$. The far tail of the exponential decay is somewhat suppressed, though this is not seen very clearly on the scales used in the plots. This minor deficit is due to the fact that $\langle n_0\rangle \approx 660$ is close to $N_\omega=800$, and the number of $\delta$ functions available for the tail part is therefore rather small. For $N_\omega=1600$ (not shown here), the tail resolution is further improved but the sharp kink at the peak still persists. \begin{figure}[t] \centering \includegraphics[width=81mm]{fig37.pdf} \caption{Synthetic spectrum with edge of the form $(\omega-\omega_0)^{1/2}$ (black curve) together with SAC results with the correct edge exponent $p=0.5$ (red curve), as well as $p=0.4$ (blue dashed curve) and $p=0.6$ (blue solid curve). In order to discern asymptotic power-law behaviors in this log-log plot, the results are graphed vs $\omega-\omega_0$, where the edge location $\omega_0$ is determined from each individual SAC spectrum ($\omega_0 \approx 1.015$ for $p=0.4$, $\omega_0 \approx 1.003$ for $p=0.5$, and $\omega_0 \approx 0.992$ for $p=0.6$). 
A result obtained with unrestricted frequency updates above the imposed correct edge location $\omega_0=1$ [shown on a linear scale in Fig.~\ref{scanm50}(b)] is also included (black dashed curve). The inset shows the exact synthetic and $p=0.5$ SAC spectra vs $\omega$ on a linear scale. The error level of the QMC data was $\sigma = 10^{-6}$ and $N_\omega=800$.} \label{m50} \end{figure} To test the sensitivity of the spectrum overall to the amplitude exponent $c$ used, we also carried out runs with $c$ corresponding to other edge exponents $p$. Results for $p=0.4$ and $p=0.6$ are shown along with the $p=0.5$ spectrum in the main log-log plot in Fig.~\ref{m50}. The location of the edge $\omega_0$ is determined to better than $0.5\%$ when the correct value $p=0.5$ of the exponent is used, while using $p=0.4$ and $p=0.6$ delivers $\omega_0$ about $1.5\%$ too high and $1\%$ too low, respectively. To observe the asymptotic power-law behaviors at the edge, the spectra are graphed versus $\omega-\omega_0$ in Fig.~\ref{m50}, with the edge location $\omega_0(p)$ defined as $\omega_0=\langle \omega_1\rangle$ for given exponent $p$ [i.e., slightly below the first frequency in the grid definition Eq.~(\ref{selfomega})]. For small $\omega-\omega_0$, the spectra follow the different power-law behaviors dictated by the imposed exponent $c$ of the amplitudes in Eq.~(\ref{anform}), but for $\omega-\omega_0 \gtrsim 0.2$ even the $p=0.4$ and $p=0.6$ spectra cross over into the correct $p=0.5$ form, which persists up to the peak of the spectrum at $\omega=3$. \begin{figure*}[t] \centerline{\includegraphics[width=160mm, clip]{fig38.pdf}} \vskip-1mm \caption{Results of SAC (red curves) with frequency-only updates [the parametrization in Fig.~\ref{fig:spec}(b)] for the same synthetic spectrum as in Fig.~\ref{m50} (black curves).
The spectrum obtained with unrestricted sampling is shown in (a), while a lower bound was imposed in the other cases: $\omega_0=1$ (the correct bound) in (b), $\omega_0=1.04$ (where $\langle \chi^2\rangle$ is minimized) in (c), and $\omega_0=1.1$ in (d). Panel (e) shows the goodness of fit vs the lower bound at two sampling temperatures: $\Theta=1$ (red circles) and $\Theta=2$ (blue circles). The spectral functions in (a)--(d) were obtained with $\Theta=1$.} \label{scanm50} \vskip-2mm \end{figure*} The results in Fig.~\ref{m50} were obtained by sampling at $\Theta=1$, resulting in $\langle \chi^2\rangle/N_\tau \approx 0.92$ for $p=0.5$. In a scan over many values of $p$, the $\langle \chi^2\rangle$ value does not change appreciably until $p < 0.15$ or $p > 0.60$. Thus, in this case, it is much more difficult (than in the case of a divergent edge) to determine the exponent if it is not known, especially to determine a lower bound of $p$. This difficulty can be understood because the spectral weight at the edge is very small when $p > 0$, and a change in $p$ from the correct value can then be largely compensated for (to fit the data) by a shift in the edge location, unless the noise level of $\bar G(\tau)$ is very low (much lower than $\sigma = 10^{-6}$ used here). Despite the difficulties of independently determining $p$ in this case, the level of fidelity at the edge when $p$ is at or close to the correct value would be impossible to obtain with standard analytic continuation methods. To further illustrate this point, Fig.~\ref{scanm50}(a) shows results for the same synthetic spectrum obtained with unrestricted SAC, using equal-amplitude $\delta$-functions. While the agreement with the correct form is actually quite good, even somewhat better than the results in the inset of Fig.~\ref{m50} at the peak and high-frequency tail, there is no clear asymptotic power-law behavior. The edge is rounded and there is associated mild ringing at higher frequencies.
Though overall the edge distortion is not very dramatic, it would still be difficult to extract an exponent. The method of just constraining the sampling with a lower bound $\omega_0$ also does not work very well. If the exact lower bound is used, as in Fig.~\ref{scanm50}(b), the agreement improves over Fig.~\ref{scanm50}(a), but now there is a small jump at the edge and noticeable (though milder) ringing behavior. The same spectrum is also shown in the log-log plot in Fig.~\ref{m50} (dashed black curve), where the deviations from the asymptotic power-law behavior can be observed more clearly. Like the other results, the correct form is still well reproduced above $\omega-\omega_0 = 0.2$. Independently determining the edge location is difficult in this case. Results with $\omega_0=1.04$ and $1.1$ are shown in Figs.~\ref{scanm50}(c) and \ref{scanm50}(d), respectively. The former case actually corresponds closely to the $\langle \chi^2\rangle$ minimum, as shown in Fig.~\ref{scanm50}(e) for two different sampling temperatures $\Theta$, while the latter is clearly too high according to $\langle \chi^2\rangle$. The reason why this simple lower-bound method fails here is again that the spectral weight close to the edge is very small. In some other cases we have tested, the small sharp spike at the edge in Fig.~\ref{scanm50}(d) appears even before the $\langle \chi^2\rangle$ minimum. The example again shows how entropic pressures can lead to distortions when insufficient constraints are imposed, as we saw also for a case of a divergent edge sampled with the simplest type of constraint in Fig.~\ref{w0fix}. In this particular case illustrated in Fig.~\ref{scanm50}, the unrestricted sampling above the edge cannot reproduce the power-law form well enough for the simple optimization of the lower bound to work well, even at a low noise level of the $\bar G(\tau)$ data.
In general, for any spectrum that is expected to host an edge, it is useful to first carry out optimization of just the lower bound (after the very first step of carrying out unrestricted SAC). Even if the results may still not be satisfactory, they can give hints as to what the correct edge shape may be, e.g., a sharp peak indicating a divergent edge, as in Fig.~\ref{w0fix}, or a jump (in some cases even a small spike), as in Fig.~\ref{scanm50}(b), suggesting the possibility of a power law with a positive exponent. Tests with better parametrizations can then follow. \subsection{Generic continuum} \label{sec:monomixed} For spectra that are not monotonically decaying above the edge, we can mix the parametrizations with the monotonicity constraint (part A with $N_A$ $\delta$-functions) and the equal-amplitude unconstrained $\delta$-functions (part B with $N_B$ $\delta$-functions); see Fig.~\ref{fig:mixed}. In the simplest case, the A part corresponds to a divergent edge with exponent $p=-1/2$, in which case the amplitudes are $A_i=W_A/N_A$ for part A and $B_j=W_B/N_B$ for part B, where $W_A+W_B=1$ and the A-part weight $W_A \in (0,1)$ should be determined as an optimized constraint. When sampling, the B part is not completely unrestricted, as the lower edge of the A part serves as a lower bound for the B $\delta$-functions. Other edge exponents can also be imposed or optimized, though in the latter case a 2D scan over both the exponent $p$ and the mixing parameter $W_A$ would be required. \begin{figure}[t] \centering \includegraphics[width=75mm]{fig39.pdf} \caption{Parametrization of the spectrum as two sets of $\delta$-functions.
The first set A, with amplitudes $A_i=W_A/N_A$ is sampled under the monotonicity constraint $\omega_{A,i+1}-\omega_{A,i} > \omega_{A,i}-\omega_{A,i-1}$ (here illustrated with all equal amplitudes, corresponding to the edge exponent $p=-1/2$), while the second set with weights $B_j=W_B/N_B$ is only constrained by ${\rm min}\{\omega_{B,j}\} > \omega_{A,1}$ (the vertical dashed line). The limiting cases $W_A=0$ and $W_B=0$ correspond to free sampling and monotonically decaying continuum, respectively.} \label{fig:mixed} \end{figure} To still take advantage of the definition of the spectral weight density in Eq.~(\ref{sdensity}), we combine this form for the A part of the spectrum with the B part recast into a histogram based on the self-generated grid points in Eq.~(\ref{selfomega}). This merger of the two different representations of the spectral density only has to be performed at the output stage. The working histogram for the B part during the sampling process is defined with a very small frequency spacing, the same $\delta_\omega$ as used for the micro grid on which the precomputed kernel and its derivative are stored when the continuous (double precision) frequencies $\omega_i$ are used, as detailed in Sec.~\ref{sec:contsamp}. For any part of the B continuum extending above the highest frequency on the self-generated grid, a regular histogram with equal-size bins is used. \begin{figure}[t] \centering \includegraphics[width=75mm]{fig40.pdf} \caption{Synthetic spectrum with edge singularity of the form $(\omega-\omega_0)^{-1/2}$ followed by two broad maxima (black curve) reproduced using the SAC parametrization in Fig.~\ref{fig:mixed}. The error level of the $\bar G(\tau)$ data was $\sigma = 10^{-6}$. The inset shows the goodness of the fit vs the relative weight $W_A$ of the $A$ (singular) part. The optimum is at $W_A \approx 0.45$. In the main figure the spectrum is shown at $W_A=0.30$, $0.45$, and $0.60$. 
All the results were obtained with $N_A=400$, $N_B=200$, and with the sampling temperature set to $\Theta=1$.} \label{twobump} \end{figure} Fig.~\ref{twobump} shows an example of a challenging synthetic spectrum where there are two maxima following the edge. Here the edge exponent is $p=-1/2$, and we provide this as input, i.e., using equal $\delta$-function amplitudes as in Fig.~\ref{fig:mixed}(a). An important aspect of this test is that the spectrum is not very sensitive to the exact value of $W_A$ as long as the goodness of the fit is reasonable and not too far from the optimal value. The edge location is less than $0.7\%$ too low at the optimal (within the resolution of the test) $W_A=0.45$, slightly worse at $W_A=0.3$, and slightly better at $W_A=0.6$. The first maximum is also better reproduced at $W_A=0.6$, suggesting that it may be better to use $W_A$ slightly higher than indicated by the best goodness of fit. However, we have not been able so far to define a universal concrete criterion beyond the $\langle \chi^2\rangle$ minimum. Even with the imperfect mixing parameter, the results still capture the edge well and reproduce the two higher maxima reasonably well. In general, we find that the results of this parametrization are also not very sensitive to the ratio $N_B/N_A$ of the number of $\delta$-functions in the two different sets, as long as both are sufficiently large. In Fig.~\ref{twobump} we used $N_A=400$ and $N_B=200$. As in the case of the unrestricted parametrizations, the sampling efficiency is better with larger $N_B$. In principle, a non-divergent edge singularity ($p>0$) can also be mixed with the generic continuum. However, the $\delta$-functions of the continuum will then have an outsize influence when they appear at the edge, causing a discontinuous jump instead of a smooth power-law decay to zero. In contrast, for $p<0$ the divergent spectral weight far outweighs any small contributions from the B-set close to the edge. 
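The sampling constraints of this mixed parametrization are simple to state programmatically. The following check (a hypothetical helper of our own, not the actual update routine, which would enforce the constraints incrementally for efficiency) verifies that a proposed configuration satisfies both the A-set monotonicity condition and the B-set lower bound:

```python
import numpy as np

def valid_mixed_config(omega_A, omega_B):
    """Check the constraints of the mixed parametrization (a sketch):
    the A-set spacings omega_{A,i+1} - omega_{A,i} must increase
    monotonically with i (monotonically decaying edge part), and every
    B-set frequency must lie above the edge frequency omega_{A,1}."""
    wA = np.sort(np.asarray(omega_A, dtype=float))
    d = np.diff(wA)
    monotonic_edge = np.all(np.diff(d) > 0)    # strictly increasing spacings
    above_edge = np.min(omega_B) > wA[0]       # B bounded below by the edge
    return bool(monotonic_edge and above_edge)
```

In the limiting cases $W_A=0$ and $W_B=0$ only one of the two conditions is active, recovering free sampling and the purely monotonic continuum, respectively.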
One promising approach for edges with $p>0$, that we have not yet explored fully, is to consider ordered frequencies $\omega_{j+1} > \omega_j$ for the continuum part and impose monotonically increasing weights, $B_{j+1} > B_j$ with $B_j \to 0$ for $j \to 1$ (e.g., linearly increasing, $B_j \propto j$). If the order is maintained during the sampling of the frequencies, the jump at the edge is avoided. To remedy potentially excessively large (relative) amplitudes for $j \to N_B$, a cross-over form similar to Eq.~(\ref{lnamp}) can be implemented. \section{Heisenberg ladders} \label{sec:ladders} Here we present applications of constrained SAC methods to 2- and 3-leg Heisenberg spin ladders. These systems have been studied for a long time as examples of even-odd effects in the ``dimensional cross-over'' from 1D to 2D antiferromagnetism \cite{barnes93,dagotto96}. For ladders with an even number $n$ of legs (chains) the ground state is gapped, with the gap decreasing exponentially with increasing $n$ (similar to the Haldane gap of integer-spin chains \cite{khveshchenko94}), while for odd $n$ the spectrum is gapless. An $n$-leg ladder consists of $n$ chains of length $L$ with intra- and inter-chain couplings $J_\parallel$ and $J_\perp$, respectively, with Hamiltonian \begin{equation} H = J_\parallel\sum_{c=1}^n \sum_{i=1}^L {\bf S}_{c,i} \cdot {\bf S}_{c,i+1} + J_\perp \sum_{c=1}^{n-1} \sum_{i=1}^L{\bf S}_{c,i} \cdot {\bf S}_{c+1,i}, \label{hhladder} \end{equation} where ${\bf S}_{c,i}$ is a spin-1/2 operator on site $i$ of chain $c$. Here we only consider the case $J_\parallel=J_\perp=1$ and apply periodic boundary conditions on the chains (but not for the interactions between the chains). 
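For concreteness, the geometry of Eq.~(\ref{hhladder}) with periodic chains and open rungs corresponds to the following bond list (a sketch with our own site labeling, not code from the simulation program):

```python
def ladder_bonds(n, L):
    """Bond list for the n-leg Heisenberg ladder of Eq. (hhladder): chains
    are periodic (position L-1 couples back to position 0), rungs are open
    across the legs. Sites are labeled s = c*L + i with chain index c in
    [0, n) and chain position i in [0, L)."""
    bonds = []
    for c in range(n):
        for i in range(L):
            # intra-chain coupling J_parallel, periodic along each chain
            bonds.append(("J_par", c * L + i, c * L + (i + 1) % L))
    for c in range(n - 1):
        for i in range(L):
            # inter-chain coupling J_perp, open between adjacent chains
            bonds.append(("J_perp", c * L + i, (c + 1) * L + i))
    return bonds
```

The bond count is $nL$ intra-chain plus $(n-1)L$ inter-chain terms, matching the two sums in Eq.~(\ref{hhladder}).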
Although the basic mechanism of the gap formation for even $n$ and the gapless spinon spectrum for odd $n$ (which is similar to that of the Heisenberg chain, $n=1$) is well understood \cite{barnes93,white94}, the dynamic structure factor contains contributions from different low-energy processes and is expected to be rather complicated. It is very difficult to analytically predict the full line shape for fixed longitudinal momentum $q$ (and even or odd parity sector, which we will define further below), though some results are available for band edges in the 2-leg ladder versus $q$ \cite{sushkov98,knetter01}. Numerical calculations of excitations have largely focused on the 2-leg case \cite{barnes93,yang98,schmidiger12}. A dominant isolated $\delta$ function has been identified at the $q$ dependent gap for $q$ at and close to the gap minimum at $q=\pi$. Much smaller spectral weight has been detected at higher energies, far above the gap when $q\approx \pi$, but the profile above the gap has not been resolved in detail. For the $3$-leg ladder, we are not aware of any previous numerical results for the dynamic structure factor. We only present a limited number of preliminary calculations here, for both $n=2$ and $n=3$, as illustrations of the power of constrained SAC. We leave further results for a future publication. \subsection{$S(q,\omega)$ of the 2-leg ladder} \label{sec:hladd2} The 2-leg ladder hosts a reflection symmetry; the Hamiltonian Eq.~(\ref{hhladder}) for $n=2$ is invariant under permutation of the chain index, $c \in \{1,2\} \to \{2,1\}$. Thus, there are even and odd excitations of the even ground state. The system is gapped and the lowest excitation for given parity sector and momentum $q$ can be understood as a propagating rung triplet (often referred to as a triplon), out of which composite excitations at higher energy can also be formed \cite{barnes93}. 
Here we focus on the odd triplet channel, i.e., with the operator \begin{equation} O_q=S^z_{1,q}-S^z_{2,q} \end{equation} in Eq.~(\ref{gtaudef1}). We consider only $q=\pi$, where the lowest triplet excitation defines the gap $\Delta$ of the system. Early studies extracted the gap $\Delta \approx 0.50$ using Lanczos exact diagonalization \cite{barnes93}, which in this case converges rapidly as a function of the ladder length $L$ because the correlation length is only a few lattice spacings. Extrapolations using $L \le 14$ in Ref.~\cite{yang98} gave $\Delta=0.502$. Higher excitations are formed at $q\approx \pi$ from an odd number of the elementary triplons. While bound states have been predicted in the two-triplon sector \cite{sushkov98}, they appear in parts of the Brillouin zone that are not relevant to the lowest three-triplon excitations at $q=\pi$. Thus, the odd-sector dynamic spin structure factor at $q=\pi$ should consist of an isolated $\delta$-function at the gap $\Delta$, and there should be a second gap $3\Delta$ above which a continuum of three-triplon states form. It is well known \cite{barnes93} that the first $\delta$-function completely dominates the spectral weight at $q=\pi$, containing $96.7\%$ of the total weight in a chain of length $L=12$ \cite{yang98}. In the exact diagonalization results \cite{barnes93,yang98} spectral weight above the gap starts at $\omega \approx 2.5 \approx 5\Delta$ and extends up to $\omega \approx 5$. The lack of spectral weight close to the three-triplon bound $3\Delta$ can likely be explained by strong finite-size effects in the form of repulsive interactions between triplons that push the energy of the lowest three-triplon state to higher energy. A more recent DMRG study \cite{schmidiger12} for an experimentally relevant case of $J_\parallel < J_\perp$ showed the presence of spectral weight far above the $\delta$-function at $\omega=\Delta$. 
However, also in this case there is no spectral weight close to $3\Delta$, even though $L$ was as large as $200$. The open boundary conditions used in DMRG studies may play some role in pushing the three-triplon states to higher energies, unless $L$ is much larger. \begin{figure*}[t] \centering \includegraphics[width=105mm]{fig41.pdf} \caption{SAC results for the odd-mode $q=\pi$ dynamic structure factor of the 2-leg Heisenberg ladder of length $L=256$. Results of unrestricted sampling with both frequency and amplitude updates [the parametrization in Fig.~\ref{fig:spec}(c)] are shown in (a), with the main panel focusing on the small spectral weight above the dominant peak and the inset showing the entire peak on a scale where the rest of the spectrum is invisible. The spectrum obtained when imposing a macroscopic edge $\delta$-function [Fig.~\ref{fig:spec}(d)] is shown in (b), at $A_0=0.9668$, corresponding to the location of the minimum in $\langle \chi^2\rangle/N_\tau$ at sampling temperature $\Theta \approx 0.07$, shown with red circles in the left inset; the blue circles show results at $\Theta \approx 0.036$, where there is no minimum in this range of $A_0$. The right inset shows the corresponding locations of the $\delta$-function (the gap).} \label{hladd2a} \end{figure*} We here present results obtained for a ladder of length $L=256$ with periodic boundary conditions at inverse temperature $\beta=128$, which for all practical purposes puts the system in the ground state since $T/\Delta \approx 0.016$. For the imaginary-time correlation functions, we used a linear grid of $\tau$ points at spacing $\Delta_\tau = 0.06$ for $\tau<1$, followed by a roughly quadratically increasing $\tau$ spacing for a total of 28 points up to $\tau \approx 17.6$. The error level of $\bar G(\tau)$ was $\sigma \approx 5\times 10^{-6}$. 
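A $\tau$ grid of this type can be generated, e.g., as follows. This is one plausible construction only; the precise rule for the quadratic part is not specified above, so the details below are our own illustrative choice:

```python
def tau_grid(dt=0.06, tau_lin=1.0, tau_max=17.6):
    """Illustrative tau grid (an assumption, not the exact grid used):
    linear spacing dt below tau_lin, then points tau = dt*j^2, giving a
    roughly quadratically increasing spacing at larger tau."""
    grid = [dt * k for k in range(1, int(tau_lin / dt) + 1) if dt * k < tau_lin]
    j = 1
    while True:
        t = dt * j * j
        if t > tau_max:
            break
        if t >= tau_lin:       # quadratic part takes over above tau_lin
            grid.append(t)
        j += 1
    return grid
```

Any monotonic grid with denser points at small $\tau$, where $G(\tau)$ varies most rapidly, serves the same purpose.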
Figure \ref{hladd2a}(a) shows SAC results based on unrestricted sampling with $N_\omega=2 \times 10^4$ at $\Theta \approx 7 \times 10^{-4}$, which satisfies the criterion Eq.~(\ref{eq:chi2}) with the standard value of the coefficient, $a\approx 0.5$ (the minimum goodness-of-fit being $\chi^2_{\rm min}/N_\tau \approx 0.73$). We used both frequency and amplitude updates, which is preferred especially when the spectrum has two almost separated features and it is more difficult for frequency-only sampling to converge to the optimal spectral weight distribution. As we have seen in many of our tests in the preceding sections, amplitude updates also lead to sharper peaks, and this is also the case with the leading peak here. We indeed observe a very sharp peak at $\omega \approx 0.50$ as expected, as well as the likewise expected (based on the previous Lanczos calculations \cite{barnes93,yang98}) much smaller and broadly distributed spectral weight up to $\omega \approx 5$. With only frequency updates (results not shown), the main peak is much broader and the continuum exhibits two peaks, likely as a result of distortions induced by the more significant broadening of the main peak. Given the expected perfect $\delta$-function at the gap, we next use the parametrization in Fig.~\ref{fig:spec}(d) and the method of optimizing the edge-$\delta$ weight $A_0$, described in Sec.~\ref{sec:deltapeak}. In this case we sampled only with frequency updates, as we normally do when a constraint is imposed, with $N_\omega=1000$. Since the optimal amplitude $A_0$ is large, $\chi^2$ is very sensitive to the location $\omega_0$, which therefore fluctuates only very little about its optimum. It is then important to use a sufficiently small spacing $\delta_\omega$ in the micro grid representing the continuous frequency space (discussed in Sec.~\ref{sec:contsamp}). In this case we used $\delta_\omega = 10^{-6}$. Results are shown in Fig.~\ref{hladd2a}(b). 
In the left inset we show the goodness of fit versus $A_0$ at two sampling temperatures $\Theta$. The spectral function in the main graph is the result obtained at the $\langle \chi^2\rangle$ minimum, $A_0=0.9668$, at the higher $\Theta$ value, which corresponds closely to the optimal-$\Theta$ criterion with $a = 0.5$. The shape of the continuum above the $\delta$-function is slightly different from that obtained with unrestricted sampling in Fig.~\ref{hladd2a}(a), which is expected because the broadening of the $\delta$-peak, caused by the entropy effects in unrestricted sampling, also induces secondary distortions. While a well-pronounced minimum is seen at the higher $\Theta$ value in Fig.~\ref{hladd2a}(b), at the lower value the minimum is significantly shifted to the left and is not seen in the graph. The rapid shift of the minimum when $\Theta$ is lowered is reminiscent of the tests in Fig.~\ref{broadened-1}, where the minimum eventually vanishes when the $\delta$-function parametrization is ill suited; in that case because of the large width of the quasi-particle peak of the synthetic spectral function studied. In the present case we have no reason to expect a quasi-particle peak of finite width, but there is another reason for the constraint applied here to be insufficient: there should be a second gap at $3\Delta$. Without imposing the second gap, some spectral weight leaks out into the region between the two gaps, which clearly is the case in Fig.~\ref{hladd2a}(b). We know that such a distortion also propagates to ``compensating'' (in the sense of fitting) re-distribution of weight in other parts of the spectrum. Thus, we are motivated to apply the constraint on the continuum in a different way from before: instead of the microscopic $\delta$-functions having a lower bound $\omega_0$, they should now not be allowed to fall below $3\omega_0$.
\begin{figure}[t] \centering \includegraphics[width=75mm]{fig42.pdf} \caption{Results based on the same imaginary-time data as in Fig.~\ref{hladd2a}, but with the second gap $3\Delta$ implemented in SAC as a modified constraint on the microscopic $\delta$-functions (here with $N_\omega=1000$) where, for given sampled location $\omega_0$ of the macroscopic $\delta$-function, the lower bound is at $3\omega_0$. Only the continuum above the second gap is graphed here, for two values of $A_0$ as indicated in the legends. These $A_0$ values correspond to the two minima in the left inset, where the sampling temperatures were $\Theta=0.30$ (red circles) and $0.017$ (blue circles). The horizontal solid line corresponds to the standard optimal-$\Theta$ criterion, while the dashed curve indicates the refined criterion, Eq.~(\ref{chi2a0}). The right inset shows the corresponding $A_0$ dependence of the mean location of the $\delta$-peak.} \label{hladd2b} \end{figure} With this very simple modification of the $\delta$-constraint, we obtain the results for the continuum weight shown in Fig.~\ref{hladd2b}. Now a sharp edge has emerged above the second gap, but the exact shape of the edge is strongly dependent on $A_0$, as illustrated with the two different cases shown. The very sharp $\langle \chi^2\rangle$ minimum shifts very slightly to lower $A_0$ as $\Theta$ decreases, as shown in the left inset of the figure, but the effect on the distribution of the small amount of spectral weight in the continuum ($\approx 3\%$ of the total) is significant. In light of this sensitivity of the continuum, we apply a more sophisticated version of the optimal-$\Theta$ criterion, Eq.~(\ref{eq:chi2}). While normally we take $\chi^2_{\rm min}$ to be the lowest possible goodness-of-fit value with any parametrization, we can also use the minimum value $\chi^2_{\rm min}(A_0)$ that is attainable for a given $A_0$, which we again obtain by simulated annealing to very low $\Theta$.
Thus, the target value is \begin{equation} \langle \chi^2(A_0)\rangle = \chi^2_{\rm min}(A_0) + a\sqrt{2\chi^2_{\rm min}(A_0)}. \label{chi2a0} \end{equation} In all the test cases studied previously, the sensitivity of $\chi^2_{\rm min}(A_0)$ to the value of $A_0$ when it is close to optimal was not as significant and we did not consider fine details of the spectrum depending so strongly on $A_0$. The sensitivity in the present case is clearly caused by the very large isolated $\delta$-peak. The dashed curve in the left inset of Fig.~\ref{hladd2b} shows the refined criterion for $\langle \chi^2(A_0)\rangle$ with $a=0.5$ in Eq.~(\ref{chi2a0}). In the simulated annealing processes for several values of $A_0$ we also saved the mean values $\langle \chi^2(A_0,\Theta)\rangle$ (and also the spectral functions at all $A_0$ values studied), thus allowing us to find $\Theta$ such that the computed and target values coincide at a point that is also a $\langle \chi^2(A_0)\rangle$ minimum. The red points in the left inset of Fig.~\ref{hladd2b} show results at $\Theta \approx 0.30$, where the conditions are fulfilled. Moreover, $\langle \chi^2(A_0)\rangle/N_\tau$ is safely below $1$ at this point, thus indicating statistical soundness of the procedure. For reference, we also show results at $\Theta=0.017$ (blue points), where the $\langle \chi^2(A_0)\rangle$ minimum (to within the resolution $\Delta_\Theta=0.0001$ used here) is at the value corresponding to $\chi^2_{\rm min}/N_\tau \approx 0.73$ for free sampling. As regards the spectral functions in Fig.~\ref{hladd2b}, at the higher $\Theta$ (red curve) there is only a small step at the edge, while a sharp peak has formed at the lower value (blue curve). Most likely, this spectrum is already showing signs of overfitting, given that the fluctuations relative to $\chi^2_{\rm min}$ are too restricted, as seen in the left inset of Fig.~\ref{hladd2b}.
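As a minimal numerical illustration of this refined criterion, the target value of Eq.~(\ref{chi2a0}) can be tabulated as a function of the annealed minimum; the grid of $A_0$ values and the $\chi^2_{\rm min}(A_0)$ numbers below are purely hypothetical placeholders, not our actual data:

```python
import numpy as np

def chi2_target(chi2_min, a=0.5):
    """Refined criterion, Eq. (chi2a0): target <chi2(A0)> given the
    annealed minimum chi2_min(A0) for a fixed edge weight A0."""
    return chi2_min + a * np.sqrt(2.0 * chi2_min)

# Hypothetical annealing output on a grid of A0 values (illustrative only).
A0 = np.array([0.9664, 0.9666, 0.9668, 0.9670, 0.9672])
chi2_min_A0 = np.array([21.5, 20.8, 20.3, 20.9, 21.9])

target = chi2_target(chi2_min_A0, a=0.5)
# The working point is where the sampled <chi2(A0, Theta)> minimum touches
# this target curve; here we simply locate the minimum of the target.
A0_best = A0[np.argmin(target)]
```

In practice one compares the sampled $\langle \chi^2(A_0,\Theta)\rangle$ curves against this target for a sequence of $\Theta$ values, as done for the data in the left inset of the figure.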
The distance between the dashed curve and the point at $A_0 = 0.9669$ corresponds to the $\Theta$-fixing criterion Eq.~(\ref{chi2a0}) with $a \approx 0.2$, well below our standard value $0.5$. The high sensitivity of the continuum to the value of $a$ of course still implies that the data quality is not quite good enough for reliably studying the continuum, though it is clear that our results are consistent with the continuum starting at $\omega = 3\Delta$. With the present data quality we cannot completely exclude a spectrum with an edge peak at $\omega=3\Delta$, but we have also found that the trend with improving data quality is for $A_0$ to increase somewhat (though unlikely going significantly above the present range of values), which makes a peak even less likely. It is possible that the small step (as in the $A_0=0.9670$ curve in Fig.~\ref{hladd2b}) eventually also vanishes (in case of a further small increase in the optimal $A_0$ value), though we expect spectral weight to extend all the way to $\omega=3\Delta$. When the edge peak is sharp in Fig.~\ref{hladd2b}, for $A_0=0.9669$, it looks vaguely similar to the spinon edge in the Heisenberg chain when only the lower bound is imposed in Fig.~\ref{w0fix}. In the ladder, it is the momenta of three triplons (instead of two spinons) that have to add up to the total momentum, here $q=\pi$. It should be noted that, while bound states of two triplons have been predicted \cite{sushkov98,knetter01}, these states do not appear when the total momentum is close to $0$ (i.e., both triplons have individual $q$ close to $\pi$), and all calculations agree that the two-triplon states at $q=0$ have a lower edge starting exactly at $\omega = 2\Delta$.
Similarly, we do not expect any bound states of the lowest three-triplon $q=\pi$ states; thus, the lower edge should indeed be exactly at the imposed energy $\omega=3\Delta$ (for sufficiently large $L$, so that any effects of weakly repulsive triplon-triplon interactions are small). For $q$ close to $\pi$, the triplon is known to be only weakly dispersing \cite{sushkov98,knetter01}, and therefore there is a high density of states right at the $3\Delta$ edge. However, matrix elements play a crucial role in the shape of the edge, and most likely, as we have argued above, there is no divergence at the edge, which instead could have a non-divergent power-law form (if the step feature in Fig.~\ref{hladd2b} eventually goes away). The broad maximum centered at $\omega \approx 3$ in Fig.~\ref{hladd2b} likely arises from a bound triplon pair together with a third triplon. Five (and higher) triplon contributions should have extremely small spectral weight. The first gap is determined very precisely, as seen in the right inset of Fig.~\ref{hladd2b}, with realistic values of $\Delta$ between $0.50245$ and $0.50255$ based on a conservatively acceptable range of $A_0$ values. While this result and the relative spectral weight agree with previous results (but are more precise), there are important differences in the continuum. Previous studies have not resolved the second gap $3\Delta$, as we discussed above. While the Lanczos results can be explained by finite-size effects, the DMRG results are harder to interpret. Fig.~1(a) of Ref.~\cite{schmidiger12} shows spectral weight above the gap, starting at $\approx 5\Delta$ for $q=\pi$ (where $\Delta$ is smaller than in our case because of the different coupling ratio). We also observe significant weight in this regime---the larger broad peak in Fig.~\ref{hladd2b}.
Given that we have only enforced known aspects of the spectral function, the weight extending all the way to $3\Delta$ should correctly represent the three-triplon excitations, though with details of the edge that cannot be conclusively determined based on the current $\bar G(\tau)$ data. Reference \cite{schmidiger12} did not offer any physical interpretation of the weight observed far above the expected second gap $3\Delta$ close to $q=\pi$, and it remains a mystery why the results differ so much from ours in this regard. We are not aware of any other reliable studies of the dynamic structure factor of the ladder, and future studies, with DMRG and other techniques, are called for to clarify this issue. We note again that the system size we have used here, $L=256$, is about 100 correlation lengths \cite{sandvik10}, and it is unlikely that there are significant finite-size effects left, considering also the periodic boundary conditions (in contrast to the open boundaries used in the DMRG calculations). The SSE QMC calculations do not introduce any systematic errors in $\bar G(\tau)$. We plan to further improve the imaginary-time data and carry out studies for other values of $q$ as well as of the even rung mode. \subsection{$S(q,\omega)$ of the 3-leg ladder} \label{sec:hladd3} \begin{figure*}[t] \centering \includegraphics[width=105mm]{fig43.pdf} \caption{SAC results for the dynamic structure factor in the even parity channel of the 3-leg ladder at $q=7\pi/8$ in (a) and $\pi/4$ in (b). In both cases, unrestricted sampling with $N_\omega=2000$ $\delta$-functions was carried out, with both frequency and amplitude updates included. The sampling temperature $\Theta$ was adjusted according to the standard criterion, Eq.~(\ref{eq:chi2}) with $a=0.5$, corresponding to $\langle \chi^2\rangle /N_\tau = 0.91$ and $0.84$ in (a) and (b), respectively.
The inset of (a) shows a more detailed view of the tail of the $q=7\pi/8$ spectrum.} \label{hladd3a} \end{figure*} For the 3-leg ladder, $n=3$ in the Hamiltonian Eq.~(\ref{hhladder}), we are not aware of specific numerical results for $S(q,\omega)$, while there are many prior results for static correlations and thermodynamic properties, e.g., Refs.~\cite{frischmuth96,sandvik10}. Thus, in this case we do not have any previous benchmarks for comparison. Though qualitatively it is clear what type of low-energy elementary excitations to expect \cite{dagotto96,white94}, the contributions to the dynamic structure factor from composite excitations at higher energy have not been studied to our knowledge. Given that each rung in isolation has a two-fold degenerate ground state, the low-energy excitations will map onto those of the single $S=1/2$ Heisenberg chain. As in the 2-leg ladder, there are again two distinct modes corresponding to even and odd parity, defined as reflection about the central chain in the Hamiltonian Eq.~(\ref{hhladder}), i.e., permuting the chain index, $c \in \{1,2,3\} \to \{3,2,1\}$. For the mapping, which we will not carry out formally here, it is useful to first consider the gapless points, $q=0$ and $q=\pi$, of the Heisenberg chain. In the 3-leg ladder, the very lowest excitation should correspond to fully antiferromagnetic fluctuations, which would be most naturally accessed with the staggered rung $i$ operator $S^z_{1,i}-S^z_{2,i}+S^z_{3,i}$. The uniform long-wavelength fluctuations, $q\to 0$, should also be gapless, and here the in-phase rung operator $S^z_{1,i}+S^z_{2,i}+S^z_{3,i}$ may seem like the best option. However, both the staggered and uniform operators above are even under reflection, and, thus, we can select one of them to study all $q \in (0,\pi)$ between the gapless points (and we could also in principle just consider the central-chain spins $S^z_{2,i}$).
For any $q$ value, the spectral function in the even sector should have the same asymptotic $\omega \to \omega_q$ edge shape as that of the Heisenberg chain. Given that the dominant fluctuations are antiferromagnetic, we expect the largest total spectral weight with the staggered rung operator (which is also borne out by our results for the total spectral weight; the static structure factor), which we therefore use here. After Fourier transformation, \begin{equation} O_q=S^z_{q,1}-S^z_{q,2}+S^z_{q,3} \end{equation} is the operator used in the imaginary-time correlation function in Eq.~(\ref{gtaudef1}). We here report results for the dynamic spin structure factor based on $\bar G_q(\tau)$ calculated with the SSE method on a 3-leg ladder system of length $L=512$ at inverse temperature $\beta=8192=16L$. The very low temperature is chosen in order to effectively obtain $T=0$ results even for $q$ very close to the gapless points. Here we will not consider extreme cases, however, and study only $q=\pi/4$ and $q=7\pi/8$ as two characteristic examples with different features in the dynamic structure factor. In both cases, the error level of the imaginary-time data is $\sigma \approx 10^{-5}$. We used a linear $\tau$ grid with $\Delta_\tau=0.05$ up to $\tau=1$ and a roughly quadratic grid thereafter, for $\tau$ up to about $6.5$ ($40$ $\tau$ points in total) and $14$ ($52$ points) for $q=\pi/4$ and $q=7\pi/8$, respectively [with the $\tau$ cut-off set at relative error of $\approx 20\%$ in $\bar G_q(\tau)$]. \begin{figure*}[t] \centering \includegraphics[width=105mm]{fig44.pdf} \caption{Constrained SAC results for the same 3-leg ladder structure factors as in Fig.~\ref{hladd3a}. In (a), for $q=7\pi/8$ the distance-monotonic parametrization with all equal amplitudes [Fig.~\ref{fig:spec}(e)] was used with $N_\omega=400$.
The main graph focuses on the tail of the spectrum and the inset shows the entire spectrum (red curve) in a log-log plot [with the blue line showing the asymptotic $(\omega-\omega_q)^{-1/2}$ form for reference]. In (b), for $q=\pi/4$ a generic continuum had to be included, using the mixed parametrization in Fig.~\ref{fig:mixed} with a total of $N_\omega=800$ $\delta$-functions, of which $400$ are in the B (continuum) part. The goodness of the fit vs the weight in the A (edge) part is shown in the inset for two sampling temperatures: $\Theta=0.3$ (blue circles) and $\Theta=0.1$ (red circles). The two spectra in the main graph were sampled with $W_A=0.34$ (black curve) and $W_A=0.40$ (red curve), both at $\Theta=0.1$.} \label{hladd3b} \end{figure*} We again start with unrestricted SAC sampling, using both amplitude and frequency updates. Results are shown in Fig.~\ref{hladd3a}. As expected, the overall spectral weight is much larger for $q=7\pi/8$, Fig.~\ref{hladd3a}(a), where there is a single sharp peak followed by two shoulder-like features. This spectrum can be compared to that obtained with unrestricted SAC for the Heisenberg chain at the only slightly different momentum $q=4\pi/5$ in Fig.~\ref{sw2}, where there is also a sharp peak but in that case followed by a second distinct peak. We know that the second peak is an artifact of the unrestricted sampling, and the shoulders seen in Fig.~\ref{hladd3a}(a) are likely also produced by entropic distortions of the expected sharp spectral edge. In Fig.~\ref{hladd3a}(b), the spectrum for $q=\pi/4$ looks very different, with a small low-frequency peak followed by a larger broad maximum. Here it is unlikely that distortions of the edge would artificially induce such a large second maximum; thus, both features should be real though distorted by entropic effects.
In the Heisenberg chain, the correct spectral function never has two peaks; for any $q$ there is a sharp edge followed by a monotonic decay of spectral weight (Fig.~\ref{sw-1d}). Thus, the result of unrestricted sampling already suggests a more complex continuum for the 3-leg ladder in some parts of the Brillouin zone. Moving on to constrained SAC, here it is of course natural to test the edge parametrization, Fig.~\ref{fig:spec}(e), which we explored extensively in Secs.~\ref{sec:contedge1} and \ref{sec:contedge2}. We keep the edge exponent at $p=-0.5$, which we know works well for the Heisenberg chain even though the logarithmic correction to the power-law form is not captured correctly very close to the edge. For $q=7\pi/8$, this parametrization indeed works very well, with $\chi^2_{\rm min}/N_\tau$ obtained by simulated annealing being well below $1$. Applying the standard criterion for the final sampling temperature $\Theta$, we obtained the spectrum shown in Fig.~\ref{hladd3b}(a). Comparing with Fig.~\ref{hladd3a}(a), there are no shoulders left, only a smooth decay away from the edge. For $q=\pi/4$, the same parametrization does not work, which is expected because of the double-peak structure with dominant second maximum in Fig.~\ref{hladd3a}(b). Instead, we have to use the mixed parametrization illustrated in Fig.~\ref{fig:mixed}. The inset of Fig.~\ref{hladd3b}(b) shows the goodness of the fit versus the weight in the A (edge) part of the spectrum, obtained at two different $\Theta$ values. There is a clear minimum in both cases, at marginally higher $W_A$ at the lower $\Theta$. The main graph in Fig.~\ref{hladd3b}(b) shows spectra obtained at $W_A=0.34$ (corresponding to the optimum at the lower $\Theta$ value) and at $W_A=0.40$ (slightly above both of the optima).
The two-peak structure revealed already in Fig.~\ref{hladd3a}(b) persists, but now of course the edge is sharp, while the second maximum is only slightly different from the unrestricted SAC result. There is also a flat portion of spectral weight between the two peaks. We have also carried out SAC runs including the mixed parametrization at $q=7\pi/8$, where, as mentioned above, $\langle \chi^2\rangle$ is statistically very good even with only the A part of the spectrum. When including a B part at just $0.1\%$ of weight ($W_A=0.999$), $\langle \chi^2\rangle$ already increases significantly (though the fit is still acceptable), thus indicating that there is no statistical advantage of the B part here. The resulting spectrum in this case is also still monotonically decaying above the edge. Thus, the monotonic form here is completely sufficient and the profile in Fig.~\ref{hladd3b}(a) should be very close to correct (similar to our good results for the Heisenberg chain). These results demonstrate consistency with the expected edge in the spectral function, which arises from deconfined spinons according to a single-chain description of the $S=1/2$ rung degrees of freedom. They also illustrate the contributions from other excitations not present in the Heisenberg chain. The fact that the tail of the continuum at $q=7\pi/8$ is much thinner than in the Heisenberg chain can likely be understood as the initial effects of the dimensional cross-over into the 2D Heisenberg model \cite{dagotto96}. In the 2D case, there is a sharp magnon quasi-particle followed by a broad continuum \cite{shao17}, and the thinning of the edge peak in the 3-leg ladder can perhaps be understood as an evolution with $n$ (odd) of the peak to a very narrow magnon quasi-particle peak (essentially a $\delta$-function \cite{shao17,chernyshev09}). The second maximum for $q=\pi/4$ originates from rung degrees of freedom not present in the single chain.
As in the many test cases studied in Secs.~\ref{sec:contedge1} and \ref{sec:contedge2}, we expect that the edge frequency $\omega_q$ is very close to the actual value. It will be very interesting to study the dispersion relation as well as the systematic emergence and evolution of the second maximum. We plan to study these features in detail in the future, along with the odd-mode spectral functions. \section{Maximum entropy methods} \label{sec:maxent} Here we discuss the relationships between SAC and the ME method and also present SAC-inspired potential improvements of the latter. We gave a very brief review of the ME method in Sec.~\ref{sec:methods_b} and here first elaborate further on some key facts. In terms of the spectral function $A(\omega)$, which is defined only for $\omega \ge 0$ in Eq.~(\ref{barelation}), the postulated probability distribution given the QMC data $\bar G$ is \begin{equation} P(A|\bar G) \propto {\rm e}^{-\chi^2(A)/2+\alpha E(A)}, \label{pmaxent2} \end{equation} where $A(\omega)$ is related to $\bar G$ according to Eq.~(\ref{eir}) and $E(A)$ is the Shannon information entropy with respect to a default model $D(\omega)$, Eq.~(\ref{esdef}). Normally the entropy is expressed in terms of the original spectral function $S(\omega)$, which is defined on the entire frequency axis $\omega \in (-\infty,\infty)$ and is related to $A(\omega)$ with $\omega \ge 0$ according to Eq.~(\ref{barelation}). The default model should also satisfy the same bosonic relationship as $S(\omega)$, $D(-\omega)={\rm e}^{-\beta\omega}D(\omega)$, but even then the entropy computed according to Eq.~(\ref{esdef}) with $S$ replaced by $A$ is not exactly the same as the original definition with $S$, except in the limit $T \to 0$ (and with very small differences for a gapped spectrum if $T$ is below the gap). With SAC, we normally parametrize and sample $A(\omega)$ [though in principle $S(\omega)$ can also be used].
In order to be able to compare results from the two approaches, we also use $A(\omega)$ in the ME method. We always use a flat default model, which can be neglected (i.e., $D=1$) in the entropy expression, and in the discrete computer implementation with $N_\omega$ histogram bins [here strictly speaking with $\delta$-functions at frequencies $\omega_i=(i+1/2)\Delta_\omega$] we use the Shannon entropy in the form \begin{equation} E_{\rm SH} = - \sum_{i=1}^{N_\omega} A_i \ln (A_i), \label{esh2} \end{equation} where the amplitudes are normalized as $\sum_i A_i = 1$. The entropic weighting factor $\alpha$ in Eq.~(\ref{pmaxent2}) can be adjusted according to one of several proposed criteria \cite{gull84,silver90,gubernatis91,jarrell96}, and one normally seeks the spectrum $A(\omega)$ that maximizes the probability at that $\alpha$. Alternatively, in Bryan's method \cite{bryan90} the probability is augmented by a prior distribution of $\alpha$ values, typically $P(\alpha) \propto 1/\alpha$: \begin{equation} P(A,\alpha|\bar G)= P(A|\bar G)P(\alpha). \label{pmaxenta2} \end{equation} Then either $A(\omega)$ and $\alpha$ are optimized together for maximum likelihood or $A_\alpha(\omega)$, which maximizes the probability for given $\alpha$, is integrated over $\alpha$. For a fixed value of $\alpha$, obtaining the maximum-probability spectrum corresponds to minimizing the functional \begin{equation} F(A)=\chi^2(A)/2 - \alpha E(A), \label{functional} \end{equation} which can be done either by some stochastic approach, a Newton-type optimizer \cite{bergeron16}, or a more sophisticated non-negative least squares method \cite{Koch18,Ghanemthesis}. In a slightly different approach \cite{boninsegni96,kora18}, instead of aiming for the maximum-probability solution, the amplitudes (and optionally also $\alpha$) are sampled for the average spectrum, as in SAC, using one of the above probability distributions, either Eq.~(\ref{pmaxent2}) or Eq.~(\ref{pmaxenta2}).
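For concreteness, the quantities entering Eq.~(\ref{functional}) can be sketched as follows; this sketch assumes a discretized kernel matrix $K$ with $G(\tau_j)=\sum_i K_{ji}A_i$ and, for simplicity, uncorrelated errors $\sigma(\tau_j)$ (a diagonal covariance matrix), which is a simplification of the general case:

```python
import numpy as np

def shannon_entropy(A):
    """E_SH = -sum_i A_i ln(A_i), Eq. (esh2), for a flat default model;
    amplitudes are normalized to sum_i A_i = 1."""
    A = A / A.sum()
    return -np.sum(A * np.log(A))

def chi2(A, K, Gbar, sigma):
    """Goodness of fit of G = K A against the data Gbar, assuming
    uncorrelated error bars sigma (diagonal covariance for this sketch)."""
    return np.sum(((K @ A - Gbar) / sigma) ** 2)

def F(A, K, Gbar, sigma, alpha):
    """ME functional, Eq. (functional): F(A) = chi2(A)/2 - alpha E(A)."""
    return 0.5 * chi2(A, K, Gbar, sigma) - alpha * shannon_entropy(A)
```

In the actual methods, the covariance matrix of the QMC data is of course not diagonal in general, and $\chi^2$ is evaluated in the eigenbasis of that matrix.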
Here, in Sec.~\ref{sec:maxent1} we propose a new simple way to determine an appropriate $\alpha$ value, inspired by our criterion for $\Theta$ in SAC. In Sec.~\ref{sec:sacme} we present new insights into the relationship between SAC and ME methods based on the different forms of the entropy corresponding to different SAC parametrizations (continuing the discussion started already in Sec.~\ref{sec:entropy1}). In Sec.~\ref{sec:maxent2} we address the issue of maximizing the ME probability versus sampling $A(\omega)$ over the full distribution. \subsection{Criterion for the factor $\alpha$} \label{sec:maxent1} Besides the standard arguments for optimizing $\alpha$ \cite{jarrell96}, it has also been proposed to fix $\alpha$ at the value maximizing $d\ln \chi^2(\alpha)/d\ln(\alpha)$ \cite{bergeron16}, in analogy with the proposal by Beach \cite{beach04} to fix the sampling temperature $\Theta$ in SAC at the value maximizing the logarithmic derivative of $\langle \chi^2(\Theta)\rangle$. However, as mentioned in Sec.~\ref{sec:theta}, in SAC a broad maximum is often located at $\Theta$ high enough to cause a suboptimal fit if sampling there. We will see below that the logarithmic derivative of $\chi^2(\alpha)$ has a qualitatively similar form, and, therefore, this way of determining $\alpha$ cannot in general be the best course of action. Our method of fixing the sampling temperature using the $\langle \chi^2\rangle$ criterion in Eq.~(\ref{eq:chi2}) can also be directly taken over into the ME method, by minimizing the functional in Eq.~(\ref{functional}) for a range of decreasing $\alpha$ values and monitoring $\chi^2$ as it converges close to its minimum value $\chi^2_{\rm min}$. The minimum value $\chi^2(\alpha \to 0) \to \chi^2_{\rm min}$ should of course be the same (in practice very close to the same) as obtained in the SAC simulated annealing procedure, where $\langle \chi^2(\Theta \to 0)\rangle \to \chi^2_{\rm min}$.
Using the same arguments that we discussed at length in Sec.~\ref{sec:theta}, the value of $\alpha$ corresponding to an optimal balance between entropy maximization and data fitting is postulated as: \begin{equation} \chi^2(\alpha) = \chi^2_{\min} + a\sqrt{2\chi^2_{\min}}. \label{eq:chi2me} \end{equation} This criterion of course guarantees a statistically acceptable fit from the outset with the factor $a \lesssim 1$. We here test the $\alpha$-fixing scheme using the same $L=16$ and $L=500$ Heisenberg chain data that we used previously in the tests of unrestricted SAC sampling in Sec.~\ref{sec:theta}. To find the spectrum minimizing the functional $F(A)$, Eq.~(\ref{functional}), we use a uniform frequency grid in the range from $\omega=0$ to $\omega=5$; the same cut-off value as used with the SAC method in Figs.~\ref{sw1}(a) and \ref{sw2}(a). To optimize the spectrum, we employ the same Monte Carlo procedures with 2- and 3-amplitude updates as for the SAC fixed-grid sampling (described in Sec.~\ref{sec:gridsamp}), but in this case at $\Theta=0$ (i.e., only updates leading to higher probability are accepted). It can still be beneficial to start the process by first sampling at $\Theta>0$, with some initial large value of $\alpha$, and annealing to some low $\Theta$ to obtain a good starting configuration for the subsequent $\Theta=0$ run. After this initial step, $\alpha$ is reduced in a way similar to the simulated annealing process (dividing by $1.05$ or $1.1$ each time and with a large number of updating sweeps for each $\alpha$ value). Given the ability of the 3-amplitude moves, in particular, to re-distribute spectral weight among different parts of the spectrum, we do not expect any issues related to potential local minima of $F(A)$. Indeed, in test runs with different random number seeds, the same spectral functions were obtained versus $\alpha$, as long as sufficiently many Monte Carlo sweeps were carried out.
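The $\Theta=0$ optimization over a decreasing $\alpha$ ladder can be sketched as follows with weight-conserving 2-amplitude moves only; the step size, the $\alpha$ schedule, and the generic callable `F` are illustrative choices rather than our exact implementation (which also includes 3-amplitude updates):

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_alpha(A, F, alpha0=10.0, ratio=1.1, n_alpha=40, sweeps=100, step=0.1):
    """Greedy (Theta = 0) minimization of F(A, alpha) over a decreasing
    ladder of alpha values, using 2-amplitude moves that conserve the
    total spectral weight.  F is a callable F(A, alpha); the numerical
    parameters here are illustrative, not the exact values of the text."""
    alpha = alpha0
    history = []
    for _ in range(n_alpha):
        f = F(A, alpha)
        for _ in range(sweeps * len(A)):
            i, j = rng.choice(len(A), size=2, replace=False)
            d = step * rng.uniform(-1.0, 1.0) * min(A[i], A[j])
            if A[i] + d <= 0.0 or A[j] - d <= 0.0:
                continue                  # keep all amplitudes positive
            A[i] += d
            A[j] -= d                     # weight-conserving 2-amplitude move
            f_new = F(A, alpha)
            if f_new <= f:                # Theta = 0: accept only improvements
                f = f_new
            else:                         # reject: restore the amplitudes
                A[i] -= d
                A[j] += d
        history.append((alpha, f))
        alpha /= ratio                    # next step of the alpha ladder
    return A, history
```

The recorded `history` of $(\alpha,\chi^2)$-type pairs is what would be monitored when applying the criterion of Eq.~(\ref{eq:chi2me}).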
\begin{figure}[t] \centering \includegraphics[width=75mm]{fig45.pdf} \caption{Goodness-of-fit vs the entropy weighting parameter obtained by maximizing the probability Eq.~(\ref{pmaxent2}). For the $L=16$ case ($\beta=2$, $q=\pi/2$) the same Heisenberg $\bar G(q,\tau)$ data as in Figs.~\ref{fig:x2} and \ref{sw1} were used, and the $L=500$ ($\beta=500$, $q=4\pi/5$) results are based on the same data as in Fig.~\ref{sw2}. The inset shows the corresponding numerical log derivatives.} \label{x2_me} \end{figure} Figure \ref{x2_me} shows $\chi^2$ versus $\alpha$ for the Heisenberg chains, with the log derivatives shown in the inset. Similar to the SAC results in Fig.~\ref{fig:x2}, for $L=16$ the maximum derivative corresponds to a suboptimal fit with $\chi^2/N_\tau \approx 2.5$ and for the $L=500$ system the value is higher still. Below the global maximum, a second broad maximum can be observed at much lower $\alpha$ in the case of $L=16$ (and going beyond the left edge of the figure, there is also additional nontrivial structure), while for $L=500$ there are several small but sharp local maxima and shoulders. The minimum $\chi^2$ values reached at the end of the process are fully compatible with the SAC $\Theta$ annealing results. The ultimate low-$\alpha$ ME spectra also exhibit the same few-peak structure that we discussed previously in Sec.~\ref{sec:theta} and illustrated in the $L=500$ case in Fig.~\ref{sw2} (see also Appendix~\ref{app:lowtheta} for results at still lower $\Theta$). The good agreement between the two different ways of approaching the pure $\chi^2$ fitting limit also supports the ability of our method to correctly find the maximum-probability ME spectrum versus $\alpha$. Even though the small-$\alpha$ limit is correct, one might perhaps suspect that the rather sharp features seen in the log derivative for $L=500$ in Fig.~\ref{x2_me} could be associated with metastability.
However, the results are fully reproducible when carrying out the $\alpha$ annealing from different starting values and with different types of Monte Carlo updates included. Thus, we believe that there are no problems with the stochastic optimization algorithm. Since the process results in a single spectrum maximizing the probability (as opposed to sampling an average), it is also not too surprising that there could be certain $\alpha$ values at which rather sudden minor re-organizations of the spectrum take place. These sharp features are only clearly visible in the derivatives, however, and will not in any way be detrimental when applying our new criterion, Eq.~(\ref{eq:chi2me}), to determine the ``best'' $\alpha$. \begin{figure}[t] \centering \includegraphics[width=84mm]{fig46.pdf} \caption{ME results for the spectral functions of the same Heisenberg chains as in Figs.~\ref{sw1} and \ref{sw2}, $L=16$ at $\beta=2$ in (a) and $L=500$ at $\beta=500$ in (b), shown in red. The corresponding exact diagonalization and BA results are shown in black. The criterion in Eq.~(\ref{eq:chi2me}) was used with $a=0.5$ to fix the value of the parameter $\alpha$, based on the data shown in Fig.~\ref{x2_me}.} \label{sw_me} \end{figure} In Fig.~\ref{sw_me} we show the spectral functions obtained with $a=0.5$ in Eq.~(\ref{eq:chi2me}), for both $L=16$ and $L=500$. Note that here we show $S(\omega)$, obtained from $A(\omega)$ according to Eq.~(\ref{barelation}). It is striking how similar these results are to those of the unrestricted SAC in Figs.~\ref{sw1}(b) and \ref{sw2}(b), i.e., those obtained with equal-weight $\delta$-functions sampled in the frequency continuum in the range of good values of $\langle \chi^2(\Theta)\rangle/N_\tau$. 
In fact, the curves for matching goodness-of-fit values fall on top of each other, not only in the cases shown in Fig.~\ref{sw_me} (where we only graph the ME results) but for any pairs of values $(\alpha,\Theta)$ such that $\chi^2(\alpha) = \langle \chi^2(\Theta)\rangle$. These essentially identical results from the two methods are explained by an {\it exact} mapping between them, as we explain next. \subsection{Relationship to SAC} \label{sec:sacme} In Sec.~\ref{sec:entropy}, we computed the exact entropy of the SAC spectrum with equal-amplitude $\delta$-functions accumulated in a histogram, resulting in $E_{\rm EA}$ in Eq.~(\ref{eea}). Except for the factor $N=N_\omega$, this expression is the information entropy $E_{\rm SH}$ used in the ME method with a flat default model. We also discussed the GK entropy $E_{\rm GK}$, Eq.~(\ref{egk}), pertaining to amplitudes sampled on a fixed grid, and our conjectured mixed entropy $E_{\rm MX}$, Eq.~(\ref{emx}), for a spectrum with both frequencies and amplitudes sampled. For the sake of convenience and uniformity of the notation, we here repeat the three entropy expressions but now in the discrete summation form \begin{subequations} \begin{eqnarray} E_{\rm EA} & = & - \sum_{i=1}^{N_\omega} A_i \ln (A_i) ~=~ E_{\rm SH} \label{eea2} \\ E_{\rm GK} & = & + \sum_{i=1}^{N_\omega} \ln (A_i), \label{egk2} \\ E_{\rm MX} & = & - \sum_{i=1}^{N_\omega} (A_i-\gamma/N_\omega) \ln (A_i), \label{emx2} \end{eqnarray} \label{allentropies} \end{subequations} where again $\sum_i A_i=1$ and we have left out the factor $N_\omega$ in front of the expressions, for convenience below when we relate the SAC and ME methods to each other. In the case of SAC, we sample at a temperature $\Theta$ using Eq.~(\ref{psg}), while in the ME probability Eq.~(\ref{pmaxent2}) there is no such adjustable temperature (i.e., $\Theta=1$). 
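For concreteness, the three discrete entropies in Eqs.~(\ref{allentropies}) are straightforward to evaluate numerically. The following is an illustrative sketch (the function names are ours, not part of any established code), assuming a normalized histogram with $\sum_i A_i = 1$ and all $A_i > 0$:

```python
import numpy as np

def entropy_ea(A):
    """Equal-amplitude (Shannon) entropy, Eq. (eea2): -sum_i A_i ln A_i."""
    A = np.asarray(A, dtype=float)
    return -np.sum(A * np.log(A))

def entropy_gk(A):
    """Ghanem-Koch fixed-grid entropy, Eq. (egk2): +sum_i ln A_i."""
    return np.sum(np.log(np.asarray(A, dtype=float)))

def entropy_mx(A, gamma):
    """Conjectured mixed entropy, Eq. (emx2): -sum_i (A_i - gamma/N) ln A_i."""
    A = np.asarray(A, dtype=float)
    return -np.sum((A - gamma / len(A)) * np.log(A))
```

For a flat normalized histogram, $A_i = 1/N_\omega$, these evaluate to $E_{\rm EA} = \ln N_\omega$, $E_{\rm GK} = -N_\omega \ln N_\omega$, and $E_{\rm MX} = (1-\gamma)\ln N_\omega$.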
In Sec.~\ref{sec:entropy} we found that the optimal sampling temperature scales with the inverse of the number of $\delta$-functions, and we therefore introduce a normalized temperature $\theta$ through \begin{equation} \Theta = \theta/N_\omega. \end{equation} The SAC-generated spectral-weight distribution is always, for any of the parametrizations, collected in a histogram, and we know the forms of the entropies $N_\omega E_{\rm X}$ [where X is one of EA, GK, or MX in Eqs.~(\ref{allentropies})] of the spectra represented by those histograms. We can therefore write down probability distributions for those histograms defined by amplitudes $A_i$ in the same way as done in the ME method. The probability distributions for the ME and SAC spectra are \begin{subequations} \begin{eqnarray} P_{\rm ME,X}(A) & \propto & {\rm e}^{-[\chi^2(A)/2-\alpha E_{\rm X}(A)]},\label{pme} \\ P_{\rm SAC,X}(A) & \propto & {\rm e}^{-N_\omega[\chi^2(A)/2-\theta E_{\rm X}(A)]/\theta}. \label{psac} \end{eqnarray} \end{subequations} The factor $N_\omega$ multiplying the exponent in Eq.~(\ref{psac}) is very important, originating from the sampling entropies corresponding to the different parametrizations (Sec.~\ref{sec:entropy}). Its presence here implies that increasing $N_\omega$ in SAC eventually completely enforces the minimization of $\chi^2/2-\theta E_{\rm X}$. Thus, for large $N_\omega$, sampling the original spectrum in some specific parametrization X leads exactly to the same spectrum as does minimization of $\chi^2/2-\alpha E_{\rm X}$ in the ME method with $\alpha=\theta$.
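The role of the factor $N_\omega/\theta$ can be made explicit with a small numerical sketch (all values illustrative): under Eq.~(\ref{psac}), the log-probability of a spectrum that does not minimize $\chi^2/2-\theta E_{\rm X}$, relative to the minimizer, decreases linearly with $N_\omega$, while under Eq.~(\ref{pme}) it is $N_\omega$-independent:

```python
import numpy as np

def log_w_me(chi2, E, alpha):
    # log of the ME weight, Eq. (pme), up to an additive constant
    return -(chi2 / 2.0 - alpha * E)

def log_w_sac(chi2, E, theta, n_omega):
    # log of the SAC weight, Eq. (psac): the same functional as in the
    # ME case (with alpha -> theta), multiplied by N_omega/theta
    return -(n_omega / theta) * (chi2 / 2.0 - theta * E)

# Two hypothetical spectra: 'opt' minimizes chi^2/2 - theta*E, 'sub' does not
theta = 0.5
opt = dict(chi2=10.0, E=4.0)   # functional value 10/2 - 0.5*4.0 = 3.0
sub = dict(chi2=11.0, E=4.2)   # functional value 11/2 - 0.5*4.2 = 3.4

def suppression(n_omega):
    """log of P(sub)/P(opt) under the SAC weight."""
    return (log_w_sac(sub["chi2"], sub["E"], theta, n_omega)
            - log_w_sac(opt["chi2"], opt["E"], theta, n_omega))
```

With the illustrative numbers above, the relative log-weight of the suboptimal spectrum is $-0.8\,N_\omega$ under the SAC distribution, so its probability vanishes as $N_\omega \to \infty$, whereas under the ME distribution with $\alpha=\theta$ it remains a constant $-0.4$.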
The probability distribution in Eq.~(\ref{psac}) looks like that in a conventional statistical mechanics problem (with $\chi^2$ corresponding to the energy density, $E_{\rm X}$ to the relevant form of the entropy density, $N_\omega$ to the number of particles, and $\theta$ to the temperature $T$), and the claim (mentioned in passing several times) that fluctuations play no role in Eq.~(\ref{psac}) when $N_\omega \to \infty$ may therefore seem unfounded. However, there are fundamental differences between the configuration spaces in SAC and conventional statistical mechanics. In particular, as we discuss in more detail in \ref{app:statmech}, in SAC there is in practice no analogy to low-energy infrared fluctuations when $N_\omega \to \infty$, because the spectrum has well-defined upper and lower bounds, corresponding to a volume that does not grow with $N_\omega$. Thus, the ``thermodynamic limit'' here is an infinite-density limit, in contrast to the normally fixed finite density in statistical mechanics. All possible fluctuations when $N_\omega \to \infty$ are then suppressed. This is one of the most important previously neglected aspects of the problem of relating SAC to the ME method. Since two of the entropies, $E_{\rm GK}$ and $E_{\rm MX}$, also have constant factors in front that we have neglected (discussed in Sec.~\ref{sec:entropy}), the exact relationship between the SAC and ME parameters is $\alpha=x\theta$ with some $N_\omega$-independent factor $x$. We can circumvent this uncertainty by comparing results for which $\langle \chi^2(\Theta)\rangle$ in the SAC case equals $\chi^2(\alpha)$ in the ME case. The mapping between the two methods is then complete and testable in practice. We next confirm this statement by actually performing comparisons with all three parametrizations in SAC and the corresponding entropies, Eqs.~(\ref{allentropies}), in the ME method.
\begin{figure*}[t] \centering \includegraphics[width=110mm]{fig47.pdf} \caption{Dynamic structure factor for the same Heisenberg chain as in Fig.~\ref{sw1}, obtained by SAC with amplitudes sampled on a fixed grid with 1000 points at spacing $\Delta_\omega=0.005$ (black curves) and with the ME method with the fixed-grid entropy, Eq.~(\ref{egk2}) (red curves). Results are shown at three values of $\chi^2/N_\tau$ in the ME method, and these values match $\langle \chi^2\rangle /N_\tau$ in the SAC calculations. The values are $\chi^2/N_\tau=1.0$ in (a), $0.6$ in (b), and $0.5$ in (c). The red ME curve in (a) almost completely covers the SAC result.} \label{newent} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=112mm]{fig48.pdf} \caption{Results as in Fig.~\ref{newent} but for SAC with both amplitude and frequency updates (black curves) and the ME method with the mixed entropy, Eq.~(\ref{emx2}), with $\gamma=0.44$ (red curves). In (c), results obtained at $\langle \chi^2\rangle/N_\tau=0.51$ are also shown (the blue curve).} \label{mixent} \end{figure*} \subsubsection{Tests with Heisenberg QMC data} We already concluded in Sec.~\ref{sec:maxent1} that SAC calculations for the Heisenberg chains based on sampling the equal-amplitude $\delta$-functions resulted in spectra almost identical to those obtained with the ME method [comparing results in Figs.~\ref{sw1} and \ref{sw2} with those in Fig.~\ref{sw_me}]. The fact that this SAC parametrization indeed generates the same Shannon form of the entropy used in the conventional ME method explains this remarkable agreement between the two approaches when $\alpha$ and $\Theta$ are chosen consistently so that $\langle \chi^2(\Theta)\rangle = \chi^2(\alpha)$. To make SAC different from the conventional ME method, another parametrization has to be used.
Indeed, from the results in Figs.~\ref{sw1} and \ref{sw2}, it is clear that sharper peaks are formed when the amplitudes are also sampled, and the results, e.g., those in Figs.~\ref{sw1}(a) and \ref{sw1}(c), cannot be reproduced by any choice of $\alpha$ in the conventional ME method. Whether or not the amplitude fluctuations will have a favorable effect on the average spectrum may not always be clear, though in Sec.~\ref{sec:theta} we presented evidence from several examples that pointed to better results overall. Sampling amplitudes on a grid consistently appears to be the least preferred option in practice (with peaks often being too sharp). To test the fixed-grid case, results with $E_{\rm GK}$ in the ME method are shown in Fig.~\ref{newent} for the $16$-site Heisenberg chain, along with the previous grid SAC results from Fig.~\ref{sw1}(a). We consider three different sampling temperatures $\Theta$ and corresponding $\alpha$ values, matching the two methods by the goodness of the fit as explained above. Note that we here again present results for the original spectral function $S(\omega)$ according to Eq.~(\ref{barelation}). For each goodness-of-fit value $\chi^2(\alpha)$ in the ME case, we observe excellent agreement with the SAC results at the matching value of $\langle \chi^2(\Theta)\rangle$. The small deviations are primarily due to some jaggedness of the SAC spectra in Figs.~\ref{newent}(b) and \ref{newent}(c), which is a consequence of the rather inefficient sampling on the grid at low $\Theta$. There may also be some very small effects of the goodness-of-fit values not being perfectly matched, and also of the fact that the $N_\omega \to \infty$ limit is not completely realized in the SAC case (grid sampling being slow, we only obtained well-converged results up to $N_\omega=1000$). These results give further credence to the rather involved functional-integral calculations leading to $E_{\rm GK}$ by Ghanem and Koch \cite{ghanem20a}.
The variations among the results for different parametrizations also illustrate perfectly that details of the SAC stochastic process impact the functional form of the entropy, not just an unimportant factor. This aspect of SAC was not always recognized in the past, e.g., in Refs.~\cite{beach04,bergeron16}. Beyond analytic continuation, the Shannon entropy is also not a universal form of the entropy that should always be assigned to a curve---see, e.g., Ref.~\cite{balestrino09}, where both the Shannon entropy and a form analogous to the GK entropy appear in a completely different context. In the case of sampling with both frequency and amplitude updates, with the parametrization in Fig.~\ref{fig:spec}(c), our conjectured form $E_{\rm MX}$ in Eq.~(\ref{emx2}) contains the unknown ``mixing factor'' $\gamma$. We also proposed in Sec.~\ref{sec:entropy} that $\gamma^{-1}$ should be approximately the effective width of the spectrum, though it is not clear exactly how such a width should be defined. In the tests here, we adjusted $\gamma$ so that the peak height of the spectrum (using the same $L=16$ Heisenberg structure factor as before) is the same in the results of both methods when $\chi^2(\alpha)/N_\tau=\langle \chi^2(\Theta)\rangle/N_\tau=1$, which gives roughly $\gamma=0.44$. The inverse of this number, $\gamma^{-1} \approx 2.27$, indeed represents a reasonable effective width of the spectrum. \begin{figure*}[t] \centering \includegraphics[width=110mm]{fig49.pdf} \caption{Comparison of SAC (black curves) and ME results (red curves) for the synthetic spectrum previously studied in Fig.~\ref{syntcomp}, here shown with dim gray curves in order not to obscure the SAC and ME results.
Three different SAC parametrizations and associated entropies in the ME method were used with the data at error level $\sigma=10^{-6}$: Amplitudes on a fixed grid and the GK entropy, Eq.~(\ref{egk2}), in (a), equal-weight $\delta$-functions and the Shannon entropy, Eq.~(\ref{eea2}), in (b), and both frequency and amplitude sampling in (c), where the mixed entropy, Eq.~(\ref{emx2}) with $\gamma=0.5$, was used. The goodness of fit in all cases is $\chi^2_{\rm ME}/N_\tau=\langle \chi^2\rangle_{\rm SAC}/N_\tau=0.97$.} \label{syntme} \end{figure*} As seen in Fig.~\ref{mixent}(a), the entire spectral function matches the SAC result well, though not as perfectly as in Fig.~\ref{newent}. Keeping the same value of $\gamma$, the result at $\chi^2/N_\tau=0.6$ in Fig.~\ref{mixent}(b) has not changed much and the match between the SAC and ME spectra is still good. At $\chi^2/N_\tau=0.5$, Fig.~\ref{mixent}(c), the ME result deviates more from the SAC spectrum, with a precursor of the eventual splitting of the maximum into two peaks already setting in. In the SAC case the splitting starts at lower sampling temperature and is very pronounced when $\langle \chi^2\rangle/N_\tau=0.48$ in Fig.~\ref{sw1}. In Fig.~\ref{mixent}(c) we also show the ME result at slightly larger $\alpha$, corresponding to $\chi^2/N_\tau=0.51$, and this result matches the SAC spectrum at $\langle \chi^2\rangle/N_\tau=0.50$ quite well. It should be noted that this particular profile is not realized with any of the other parametrizations that we have used in SAC. It is also possible to match the spectra better at the same goodness-of-fit values by changing the value of $\gamma$ slightly between the different cases in Fig.~\ref{mixent}. Based on these tests we conclude that the mixed entropy with fixed $\gamma$ indeed appears to be realized in SAC when sampling both the frequencies and the amplitudes.
However, since the entropy $E_{\rm MX}$ is not exact, the perfect match between the sampled spectrum and the one maximizing the ME probability is spoiled when $\Theta$ and $\alpha$ are adjusted for $\chi^2_{\rm ME}(\alpha)/N_\tau=\langle \chi^2(\Theta)\rangle_{\rm SAC}/N_\tau$. The closest match is obtained with some small (at least in the example studied) deviation from this equality. In other words, for a series of spectra $S_\Theta$ and $S_\alpha$ computed with the two methods, with the same $\bar G(\tau)$ data and with optimal entropy mixing $\gamma$, there should still be some function $\alpha(\Theta)$ such that $S_\Theta \approx S_\alpha(\Theta)$ holds to a good approximation (but clearly the spectra cannot be exactly the same if the goodness-of-fit values are not the same). Alternatively, the best $\gamma$ may have a weak dependence on $\alpha$, but we have not systematically explored how to change $\gamma$ for the best match between the SAC and ME spectra. The mixed entropic form with the parameter $\gamma$ also offers a broader range of ME methods, with $E_{\rm MX}(\gamma)$ versus $\gamma$ smoothly interpolating between the entropies exactly corresponding to grid sampling and equal-amplitude frequency sampling. To use this entropy in practice, we need a way to determine the optimal $\gamma$ value. Heuristically, we can just set $\gamma^{-1}$ equal to the length of a box (corresponding to a flat default) with the same first and second moments as the spectrum, which can be determined to a good approximation from $\bar G(\tau)$. The moments correspond to derivatives of $\bar G(\tau)$ at $\tau=0$ \cite{sandvik98b} and are, thus, encoded in the short-time correlation function. We can also determine the moments from the spectrum itself during the initial stages of the optimization procedure, or from a previous SAC calculation with any parametrization.
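The moment-matching heuristic for $\gamma$ can be summarized in a few lines of code. This is an illustrative sketch (the function name is ours): a box of length $W$ has variance $W^2/12$, so $\gamma^{-1} = \sqrt{12\,{\rm Var}(\omega)}$, assuming the spectrum (or its first two moments) is available on a fine grid:

```python
import numpy as np

def gamma_from_moments(omega, S):
    """Heuristic mixing parameter for the mixed entropy: 1/gamma is the
    length of a box (flat default) with the same first and second moments
    as the spectrum. A box of length W has variance W^2/12."""
    S = np.asarray(S, dtype=float)
    S = S / S.sum()                       # normalize the spectral weight
    m1 = np.sum(S * omega)                # first moment
    m2 = np.sum(S * omega ** 2)           # second moment
    width = np.sqrt(12.0 * (m2 - m1 ** 2))
    return 1.0 / width
```

For example, for a flat spectrum on $\omega \in [0,2]$ this returns $\gamma \approx 0.5$, since the matching box is simply the support of the spectrum itself.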
In the above case, this approach gives $\gamma=0.36$, for which the peak height of the spectrum at $\chi^2/N_\tau=0.6$ (which is in the acceptable range according to our criterion) is only slightly less than in Fig.~\ref{mixent} and the result overall matches the exact diagonalization result well. As we will see below in Sec.~\ref{sec:maxent2}, the mixed entropy will also be relevant in the case of sampling spectra within the ME approach. \subsubsection{Tests with synthetic data} For our final test of the three different entropic forms, we consider the synthetic spectrum previously investigated in Sec.~\ref{sec:example3} by SAC calculations with the two continuous-frequency parametrizations. We consider the case of imaginary-time data with error level $\sigma=10^{-6}$, for which SAC results were shown in Figs.~\ref{syntcomp}(e) and \ref{syntcomp}(f), and compare these results with maximum-probability spectra obtained with the ME method with the corresponding entropic forms Eqs.~(\ref{allentropies}). Here we also include the fixed-grid case that was not studied in Sec.~\ref{sec:example3}. We match the goodness-of-fit values in the two methods according to our criteria in Eqs.~(\ref{eq:chi2}) and (\ref{eq:chi2me}), with $a=0.5$ as before, i.e., we again have $\chi^2(\alpha)/N_\tau=\langle \chi^2(\Theta)\rangle/N_\tau$ but now focusing only on the statistically optimal spectra. According to the rigorous mappings between the SAC and ME methods when sampling either equal-amplitude $\delta$-functions or fixed-grid amplitudes, which lead to entropies given by Eqs.~(\ref{eea2}) and (\ref{egk2}), respectively, the two methods should produce identical results as long as $N_\omega$ is sufficiently large. In the case of sampling both frequencies and amplitudes, we again expect the mixed entropy, Eq.~(\ref{emx2}), to be a good approximation to its unknown exact form if $\gamma$ is appropriately chosen.
Here we just use the method described above of matching the first and second moments of a flat default model, which gives $\gamma=0.50$. Results are shown in Fig.~\ref{syntme}. Indeed, the agreement between the SAC and ME results is almost perfect in the case of the fixed grid and frequencies-only sampling, Figs.~\ref{syntme}(a) and \ref{syntme}(b). In the case of combined amplitude and frequency sampling, Fig.~\ref{syntme}(c), the agreement is also quite good, thus providing further support for the mixed entropy form, and also for the simple heuristic way of determining the mixing coefficient $\gamma$. \subsubsection{Conclusions on SAC to ME mapping} The test results presented above leave little doubt that the unconstrained SAC method delivers exactly the same spectrum as the maximum-probability ME method, provided that the appropriate form of the entropy is used in the ME method and that $\Theta$ and $\alpha$ are chosen for matching the goodness of fit of the two spectra. Further, $N_\omega$ must be sufficiently large for convergence to the $N_\omega \to \infty$ limit in the SAC case (and naturally a fine enough grid has to be used for the ME calculations, though the convergence here appears to be typically faster). Thus, we have finally established the exact relationship between the two methods. While some of these relationships were anticipated in previous works, they were never spelled out in a definite, consistent, and systematic manner. Some aspects of the claims are incomplete or incorrect. Before gathering and further discussing our conclusions on the mapping, we summarize some of the key prior insights and views. Beach, in his pioneering work on the ME--SAC equivalence \cite{beach04}, used a mean-field approach that did not produce the factor $N_\omega$ of the entropy and also involved the wrong functional form of the entropy for the parametrization used (both frequencies and amplitudes sampled). 
The Shannon entropy was derived without taking into account the details of the stochastic process, which instead should result approximately in the mixed form introduced here. The lack of $N_\omega$ scaling is natural within mean-field theory, and the Shannon entropy was assumed to be generic relative to an arbitrary default model (which was also first incorporated into SAC by Beach). Bergeron and Tremblay correctly noted the factor $N_\omega$ in their derivation of the Shannon entropy, but regarded it not as arising from a parametrization with $\propto N_\omega$ degrees of freedom but as a conceptual device. They also did not discuss the details of the mapping beyond stating that the entropy derivation ``suggests a connection'' \cite{bergeron16} between SAC and the ME method. Since they worked with a histogram representation, the form of the entropy was again incorrect (i.e., not $E_{\rm GK}$), and there was no mention of different entropic forms depending on the SAC parametrization. Fuchs, Pruschke, and Jarrell \cite{fuchs10} explicitly stated that SAC minimizes a functional including an entropy, as in Eq.~(\ref{functional}). However, they regarded this entropy as a macroscopic thermodynamic entropy (i.e., a single number generated by the averaging process under given conditions), not one directly expressed for each spectrum $A$ as $E(A)$ originating from the degrees of freedom of the stochastic process. Their main aim was to fix the sampling temperature $\Theta$ (or eliminate it from the problem by integration) by using Bayesian inference. The final conclusion was that SAC is better than the ME method because the exact probability distribution is sampled instead of maximized. Here we have shown that the two approaches, under the prevailing conditions of unrestricted sampling used in Ref.~\cite{fuchs10}, give exactly the same results (strictly speaking when $N_\omega \to \infty$, where the fluctuations of the average spectrum vanish).
The entropy $E_{\rm GK}$ \cite{ghanem20a} was extremely useful in our work presented above, but Ghanem and Koch also did not take the final step to show the exact equivalence between the methods under different conditions, although they realized the non-universality \cite{ghanem20b} of results obtained in different sampling spaces. The crucial role of the sampling space in producing different entropic pressures was indeed also the motivation for switching from a fixed frequency grid to $\delta$-functions in continuous frequency space in earlier work by one of us and collaborators \cite{qin17}, but it was not realized that the fixed-amplitude case maps to the conventional ME method and that it is actually the fixed frequency grid that leads to a new form, Eq.~(\ref{egk}), of the entropy \cite{ghanem20a}. The mixed entropy, Eq.~(\ref{emxw}), that we have proposed here is not based on a formal derivation and is likely not exact in general. It still offers an interesting perspective on the case of fluctuating amplitudes and frequencies, including its use in generalized ME methods. It would be interesting to further analyze the entropy in this space using analytical approaches. While a functional-integral representation may not formally exist in this case, as pointed out by Ghanem and Koch \cite{ghanem20a}, the stochastic process of SAC is always well defined. There is no dependence on the discretization as long as the histogram used to collect the spectrum has sufficiently small bins to capture all details of the average. We have computed the entropy of the spectrum exactly in the case of all-equal amplitudes [Sec.~\ref{sec:entropy1}] by using a ``particles on a line'' method but have not yet been able to incorporate the fluctuating amplitudes analytically within this approach.
The entropic forms also formally require the limit $N_\omega \to \infty$, which in practice is realized for values of $N_\omega$ that can be easily reached at least with the parametrizations in the frequency continuum (with larger $N_\omega$ required as the error level of the imaginary-time data decreases). We here take the point of view that the unrestricted (without any constraints) SAC methods should be considered in the large-$N_\omega$ limit, which is also conceptually different from the approach of Ghanem and Koch \cite{ghanem20b}, who focused on sampling at $\Theta=1$ in a range of $N_\omega$ for which the fit is still good (similar in this regard to Ref.~\cite{sandvik16}). Both points of view are valid, and it is possible that additional spectral structure can be resolved with an optimal $N_\omega$ that is not yet in the ME limit, so that there are still fluctuations about the maximum-probability ME spectrum. In our $\Theta=1$ tests here, presented in Fig.~\ref{ndep}, we did not observe any advantage of taking $N_\omega$ away from the large-$N_\omega$ limit (with $\Theta$ adjusted), but any gain in resolution may depend on the problem studied. The use of a default model may also play some role. As a final remark on the three different representations and associated entropies, we note again that the underlying sampling of both frequencies and amplitudes appears to generate the best balance, with better resolution (sharper peaks) than frequency-only sampling while avoiding overly sharp features. We have seen several examples of this behavior in our tests in the preceding sections. These apparently generic behaviors are also well represented in Fig.~\ref{syntme}, where the combined amplitude and frequency sampling (mixed entropy) clearly gives the best match with the exact spectrum. The frequencies-only sampling (conventional Shannon entropy) results in the broadest spectrum, while the fixed-grid sampling (GK entropy) results in a much too sharp peak.
Here it is also worth emphasizing that the error level in the tests in Fig.~\ref{syntme}, $\sigma = 10^{-6}$, is achievable in QMC simulations, but normally only at great effort. The fact that the three different parametrizations and corresponding entropies still produce such different results then clearly demonstrates that it is important to use the best possible SAC parametrization (or entropic form in the ME method). For work with unrestricted sampling, all indications are that the combined amplitude and frequency sampling should be the best option. Similarly, in the ME method the mixed entropy should be optimal. To fix the value of $\gamma$, our simple spectrum-width approach with a flat default model works well. Using the mixed entropy with this $\gamma$ is likely the best ME method with a flat default. Other default models could also be used with the more general form, Eq.~(\ref{emxw}), of the mixed entropy. We leave further exploration of the ME method with the mixed entropy for the future. With constrained SAC, we typically sample only the frequencies, since it appears advantageous to minimize the remaining entropic pressures once the constraint has been optimized, as exemplified in Fig.~\ref{w0fix}. It is still possible that continua with more spectral details would benefit from also including amplitude updates. We also note that some of the SAC constraints and optimization methods that we have developed can also be adapted to the ME method. The above considerations will then also apply as to what form of the entropy to use in that case. \subsection{Maximum probability versus sampling} \label{sec:maxent2} As an alternative to finding the spectrum minimizing the functional, Eq.~(\ref{functional}), as typically done in the ME method (sometimes with further integration over $\alpha$), the spectrum can also be averaged by sampling as in SAC.
Then either the probability distribution Eq.~(\ref{pme}) for a fixed value of $\alpha$ (optimized in some way) can be used, or $\alpha$ can also be sampled using a version of Eq.~(\ref{pmaxenta2}) (with the prior distribution normally taken $\propto \alpha^{-1}$, as discussed, e.g., in Ref.~\cite{jarrell96}). The sampling approach may from the outset seem like the better option, because the use of the optimal spectrum appears to presuppose a distribution with a sharp maximum. In practice, the maximum may not be very sharp, and including the contributions from near-optimal solutions will then significantly affect the spectrum. The ME sampling approach has been used, in particular, in the context of the dynamic structure factor of $^4$He \cite{boninsegni96,kora18}. It is interesting to compare and contrast the sampling of the spectrum within the ME framework and the SAC method. When sampling, there is a native configurational entropy of the spectrum that depends on the parametrization used, as we discussed exhaustively in the previous subsection. The fact that the entropy is always extensive implies that it will ultimately (for a large number $N_\omega$ of sampled degrees of freedom) drive the SAC spectrum to a bad fit, unless the sampling temperature $\Theta$ is lowered at the rate $1/N_\omega$. When sampling the ME probability, the Shannon entropy is explicitly used in weighting the configurations according to Eq.~(\ref{pme}), but there is also configurational entropy of the sampling space, exactly as in the SAC method.
Thus, with no $\Theta$ to adjust, the sampled ME spectrum should eventually, when a large number $N_\omega$ of parameters is sampled, flow to a poor solution, in this case to the default model (which typically would correspond to a very large $\langle \chi^2\rangle$ value). In the case of sampling amplitudes on a grid, which as far as we are aware is the only parametrization that has been considered with the ME approach, the sampling generates the GK entropy $E_{\rm GK}$, Eq.~(\ref{egk2}), which implicitly combines with the Shannon entropy used for the prior weight. Thus, it can be expected that the total entropy of the spectrum in this case is exactly the mixed entropy $E_{\rm MX}$ defined in Eq.~(\ref{emx2}), again for some value of $\gamma$. Here we will not be concerned with the exact form of the combined entropy, though it would also be interesting to further explore the mixed entropy in this context. We will focus on the fact that the sampling part of the entropy is extensive, which has been ignored so far in this context. There is no reason to expect that $\Theta=1$ sampling according to the distribution in Eq.~(\ref{pmaxent2}) would avoid the entropic catastrophe when the number of histogram bins is large. As an example demonstrating this potential problem, we consider the $16$-site Heisenberg chain as before. We used the distribution Eq.~(\ref{pme}) to sample a histogram in the range $\omega \in [0,5]$ (effectively with a flat default model in this range), using different numbers $N_\omega$ of bins. \begin{figure}[t] \centering \includegraphics[width=80mm]{fig50.pdf} \caption{Goodness of fit vs the entropy weighting parameter obtained by sampling the probability Eq.~(\ref{pmaxent2}) with spectra defined on grids $\{ \omega_1,\ldots,\omega_{N_\omega}\}$ with $\omega_{N_\omega}=5$ and $N_\omega=10,20,40,80,160$. The system parameters are the same as for the $L=16$ results in Fig.~\ref{x2_me}.
The inset shows the averaged spectral functions at $\alpha \approx 50$. The dim gray histogram bins represent the exact-diagonalization result as before (Fig.~\ref{sw1}). Note that the sampling with small $N_\omega$ is very slow (as demonstrated with SAC in Fig.~\ref{nconv}), which causes the noisy behavior of $\langle \chi^2\rangle/N_\tau$ for $N_\omega=10$ and $20$.} \label{x2_alpha} \end{figure} The main graph in Fig.~\ref{x2_alpha} shows $\langle \chi^2\rangle/N_\tau$ versus $\alpha$ and the inset shows the corresponding spectral functions together with the histogram based on exact diagonalization as, e.g., in Fig.~\ref{sw_me}(a). The behavior overall versus $N_\omega$ is similar to what we found previously with $\Theta=1$ SAC in Fig.~\ref{ndep}; for given $\alpha$, $\langle \chi^2\rangle$ increases with $N_\omega$ and eventually grows beyond the statistically acceptable bound. Interestingly, for the smallest $N_\omega$ cases (up to at least $N_\omega=40$) shallow minima can be seen between $\alpha=50$ and $\alpha=100$. While the histogram is very coarse for $N_\omega=10$, those for $N_\omega=20$ and $40$ capture the overall shape quite well. For $N_\omega=160$ there is no $\chi^2$ minimum, and a small spurious maximum has formed in the spectrum at low frequency, similar to the SAC results in Fig.~\ref{sw1} for the larger goodness-of-fit values. One could argue that $\alpha$ should be set based on the $\chi^2$ minimum where possible, but it should be noted that $\chi^2$ is somewhat too high already with the small values of $N_\omega$ for which a minimum exists. We have also not investigated this behavior in detail for other systems and spectral functions. These results confirm our expectation that the probability distribution that forms the basis of the ME method, Eq.~(\ref{pme}), is flawed when a histogram with a large number of bins is used.
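The entropic drift underlying Fig.~\ref{x2_alpha} is easy to reproduce in a toy model. The following Metropolis sketch (all model parameters, the error level, and the function names are our own illustrative choices, not from the calculations above) samples grid amplitudes with an Eq.~(\ref{pme})-type weight containing the Shannon entropy; running it for increasing numbers of bins at fixed $\alpha$ lets one observe how the extensive sampling entropy pushes $\langle \chi^2\rangle$ up:

```python
import numpy as np

def sample_me_histogram(n_bins, alpha, sweeps=200, seed=7):
    """Toy Metropolis sampling of the weight exp(-[chi^2/2 - alpha*E_SH])
    for amplitudes A_i on a fixed grid (flat default model).
    Illustrative sketch only; all parameters are arbitrary."""
    rng = np.random.default_rng(seed)
    taus = np.linspace(0.1, 1.0, 10)
    # synthetic "data" from two delta-functions, with an assumed error level
    G_data = 0.7 * np.exp(-0.5 * taus) + 0.3 * np.exp(-1.5 * taus)
    sigma = 1e-3
    om = np.linspace(0.0, 3.0, n_bins)
    K = np.exp(-np.outer(taus, om))            # kernel matrix K(tau, omega)
    A = np.full(n_bins, 1.0 / n_bins)          # start from the flat default
    G = K @ A
    chi2 = lambda G: np.sum(((G - G_data) / sigma) ** 2)
    ent = lambda A: -np.sum(A * np.log(A))     # Shannon entropy, Eq. (eea2)
    c2, S = chi2(G), ent(A)
    chi2_samples = []
    for sw in range(sweeps):
        for _ in range(n_bins):
            i, j = rng.integers(n_bins, size=2)
            d = rng.uniform(0.0, 0.5) * A[i]   # move part of A_i to A_j
            A_new = A.copy(); A_new[i] -= d; A_new[j] += d
            G_new = G + d * (K[:, j] - K[:, i])
            c2n, Sn = chi2(G_new), ent(A_new)
            # Metropolis acceptance for the weight exp(-[chi^2/2 - alpha*E])
            if rng.random() < np.exp(min(0.0, -(c2n - c2) / 2 + alpha * (Sn - S))):
                A, G, c2, S = A_new, G_new, c2n, Sn
        if sw >= sweeps // 2:                  # discard first half as burn-in
            chi2_samples.append(c2)
    return np.mean(chi2_samples) / len(taus), A
```

The weight-transfer update preserves both the normalization $\sum_i A_i=1$ and the positivity of the amplitudes, so no separate rejection for unphysical configurations is needed.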
There is an implicit assumption, not explicitly stated in this context as far as we are aware, that the number of bins (or in general some number of parameters) must be rather small, so that the inherently extensive entropy of the configuration space does not come significantly into play. In a recent application \cite{kora18}, where $\alpha$ was also sampled, $\langle \chi^2\rangle$ is indeed marginally higher than would be expected for good statistical fits, though the effect on the spectral function is likely small and unlikely to impact any conclusions. It should also be noted that a good default model reduces the detrimental effects of the extensive sampling entropy. In principle, the entropy increase with $N_\omega$ can be counteracted by introducing a factor similar to $N_\omega/\theta$ in the SAC distribution in Eq.~(\ref{psac}), but then both $\alpha$ and $\theta$ have to be optimized. Eventually, for large $N_\omega$, the sampled spectrum would again become identical to that maximizing the probability, as discussed above in Sec.~\ref{sec:sacme}, in this case with the mixed entropy. Alternatively, $N_\omega$ should just not be too large, so that the entropic effect does not yet affect the result. In practice, it is easier to use SAC or the maximum-probability version of the ME method, with the mixed entropy if so desired. Pathologies of the ME probability distribution are further discussed in Sec.~\ref{sec:pathology}. \section{Conclusions and future prospects} \label{sec:discussion} An overarching theme of this work has been the critical role of the sampling space (parametrization) of the spectral function within the SAC approach to analytic continuation. We have emphasized the fact that all parametrizations are associated with their different inherent entropic pressures under Monte Carlo sampling.
Thus, while good underlying QMC data will ensure that the main spectral features are well reproduced, details of the results will depend significantly on the parametrization when the imaginary-time data are of statistical quality typical for QMC simulations. We have identified the main problems and advantages with three different basic parametrizations, using $\delta$-functions on a grid or in the frequency continuum [Figs.~\ref{fig:spec}(a)--(c)]. Following studies of unrestricted sampling, we introduced a number of constraints intended to eliminate or reduce entropic effects that otherwise cause smearing of sharp spectral features. We focused on the lower frequency bound, which physically reflects signatures of the effective conventional quasi-particles or fractionalized excitations that are of primary interest in experimental and theoretical studies of quantum matter. Our results demonstrate a remarkable ability of the methods to locate edges and determine their properties---even widths of narrow quasi-particle peaks and exponents governing power-law singularities. The proper treatment of an edge also translates into significant fidelity enhancement at higher frequencies, since edge distortions appearing in the absence of constraints (with all conventional analytic continuation methods) propagate and cause distortions of the spectrum away from the edge. The new methods open opportunities to study quantitative aspects of spectral functions that have been out of reach until now. To illustrate the power of these methods, we presented (Sec.~\ref{sec:ladders}) new results for the dynamic structure factor of 2- and 3-leg $S=1/2$ Heisenberg ladders, uncovering previously unexplored features. While the main focus of our work has been on analytic continuation within the SAC framework, along the way we have also obtained new insights into the ME method and its relationship to SAC.
We have found that the Bayesian-based probability distribution underlying the ME method has a flaw in that it describes divergent fluctuations in the continuum limit, though the most probable spectrum is still well defined. We have also revisited the formal relationship between the ME and SAC methods, finally achieving a rigorous mapping between the two. We demonstrated the relationship concretely by comparing SAC calculations with different parametrizations to ME calculations with the corresponding entropic priors. These insights and tests prove that the conventional Shannon information entropy used in the traditional ME method is not universal, and that generalized ME methods with other entropic priors may produce better resolution. Since the treatise is extensive, with progress reported on several interrelated fronts, we here concisely summarize and remark further on some of the key developments and discoveries, not necessarily in the order presented previously but with references to relevant sections, equations, and figures. In Sec.~\ref{sec:conca1} we focus on the developments of the SAC method itself, while in Sec.~\ref{sec:conca2} we discuss the more formal entropic aspects of the sampling approach, our new insights into the ME method, and the relationships between the SAC and ME methods. These two sections do not merely review material from the previous sections but include substantial commentary and additional conclusions from synthesizing different results. In Sec.~\ref{sec:conc3} we discuss future prospects, including further developments of constraints (which can be regarded as generalized default models), the possibility to use machine learning to augment SAC or ME calculations, as well as potential advantages of including a small fraction of negative spectral weight in SAC sampling.
\subsection{Stochastic analytic continuation} \subsubsection{Optimal $\Theta$ and $\Theta \to 0$ limit} \label{sec:conca1} An important aspect of SAC is how to determine the optimal sampling temperature $\Theta$. We have here expanded on the statistical motivations for the previously suggested (in our work with collaborators in Refs.~\cite{qin17} and \cite{shao17}) simple criterion in Eq.~(\ref{eq:chi2}), which relates $\langle \chi^2\rangle$ to the optimal $\Theta$. By an exact calculation of the configurational entropy in one of the parametrizations (Sec.~\ref{sec:entropy1}), we demonstrated that $\Theta \sim 1/N_\omega$ is necessary, with $N_\omega$ being the number of $\delta$-functions in the sampling space. This scaling behavior, which follows from the fact that the entropy is extensive in $N_\omega$ while $\chi^2$ acts as an unusual intensive energy in a statistical-mechanics analogy (see \ref{app:statmech}), is realized automatically through our procedures. We have investigated the $\Theta \to 0$ limit (in detail in \ref{app:lowtheta}), which defines the best goodness-of-fit $\chi^2_{\rm min}$ that also enters in the $\Theta$-criterion. We explain why the spectrum in this limit consists of a small number $N_\delta$ of $\delta$-functions (about 2--6 for QMC data of typical quality), and how these define an effective number of fitting parameters $N_{\rm para}=2N_\delta$ (from the position and amplitude of each $\delta$-function) that can be collectively produced by a positive definite spectrum, given the noise in the imaginary-time data (which corresponds to some violations of positive definiteness). The identification of the number $N_{\rm para}$, along with $\chi^2_{\rm min}$ serving as a proxy for the number of effective degrees of freedom of the fit, closes the circle on the formal statistical applicability of the $\Theta$ criterion.
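For orientation, the central relations entering this analysis can be collected in compact form. The explicit expression for the criterion below is only a hedged sketch (the precise form is given in Eq.~(\ref{eq:chi2})), while the last relation is the standard goodness-of-fit expectation that underlies Eq.~(\ref{chi2expected}):

```latex
% Hedged sketch; exact forms are Eqs.~(\ref{eq:chi2}) and (\ref{chi2expected}).
\begin{equation*}
\langle \chi^2(\Theta)\rangle \simeq \chi^2_{\rm min}
 + a\sqrt{2\chi^2_{\rm min}}\,,\quad a \sim 1, \qquad
\Theta \sim N_\omega^{-1}, \qquad
\langle \chi^2_{\rm min}\rangle \approx N_\tau - N_{\rm para},
\quad N_{\rm para} = 2N_\delta .
\end{equation*}
```

Here the square-root form reflects the use of $\chi^2_{\rm min}$ as a proxy for the number of degrees of freedom, the standard deviation of a $\chi^2$ distribution with $k$ degrees of freedom being $\sqrt{2k}$.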
These insights also enable an a posteriori check of the statistical soundness of the covariance matrix for the QMC data set, in the form of the expected relationship Eq.~(\ref{chi2expected}) between $\chi^2_{\rm min}$ and $N_{\rm para}$. The designation of $N_{\rm para}$ based on the best-fit spectrum also provides a possible resolution to a mystery that most practitioners of numerical analytical continuation have undoubtedly pondered: When improving the quality of the imaginary-time data, existing peaks and other features typically evolve somewhat (as we have seen many examples of here, e.g., in Fig.~\ref{syntcomp}). However, it is commonly very difficult to achieve large qualitative improvements, i.e., to observe the emergence of new previously unresolved spectral features (e.g., peaks or minima), even when increasing the data quality substantially. This slowly evolving behavior, possibly followed by ultimately large changes in the limit of very good data, is indeed what should be expected in light of the smallness of $N_{\rm para}=2N_\delta$. If we consider the low-$\Theta$ behavior, a spectrum with a very small number $N_\delta$ of spikes, say $N_\delta=2$ or $N_\delta=3$, is typically very stable, with $N_\delta$ itself as well as the locations and amplitudes of the spikes not very sensitive to variations in $\bar G(\tau)$. For a given observed $N_\delta$, it may take orders-of-magnitude improvements in the data quality to increase $N_\delta$ even by $1$, considering that the problem is exponentially hard in the noise level. It is likely only when $N_\delta$ increases that additional significant spectral features also emerge in the sampled spectrum with $\Theta$ in the optimal range. We have not yet systematically investigated this aspect of the analytic continuation problem, but it would be very interesting to pursue further.
In this regard, we also point to the possibility that a very small fraction of negative weight in the spectrum may in some cases lead to larger $N_\delta$ (\ref{app:low2}), which would suggest that sampling with negative weight can perhaps improve the resolution of SAC in some cases. We have also demonstrated that a previously often used $\Theta$ criterion based on a maximum log derivative of $\chi^2(\Theta)$ in general results in a spectrum with a suboptimal fit to the data, while our method guarantees a good fit by construction. Moreover, our criterion is much easier to use in practice, because the derivative is much noisier (i.e., requires longer sampling) than just $\langle \chi^2\rangle$ and $\chi^2_{\rm min}$. Our criterion is also much easier to implement than the Bayesian approach suggested in Ref.~\cite{fuchs10}, which we have not tested. \subsubsection{Resolving sharp spectral edges} \label{sec:conca2} Constraining the sampling space in some way implies that certain additional pieces of information have been provided to the SAC process, in addition to the QMC data. Thus, some specific aspect of the spectrum is built in, and this aspect should ideally correspond exactly to a known feature in the spectrum sought, e.g., the existence of a sharp quasi-particle peak or a power-law singularity at the lower edge of the spectrum. The method can also be regarded as a form of hypothesis testing. In some cases, a treatment without constraint may already hint at a sharp peak or edge, and it is then worth trying constrained parametrizations to further explore possible functional forms at the edge. An imposed constraint often involves an optimized parameter that provides quantitative information not supplied as input. Such information is frequently not just the optimized parameter itself, but additional quantitative features that can be reliably extracted from the optimal spectrum.
For instance, when optimizing the amplitude of a quasi-particle peak, also its location can be extracted at a level of precision that would otherwise be impossible. We stress that the amount of information provided when imposing a constraint is seemingly very limited, but the implications in terms of improved resolution and extraction of quantitative information not supplied as input can be dramatic. We also point out that it would be very difficult to construct generically useful fitting functions with the degree of flexibility offered by our most generic constrained SAC parametrizations, e.g., a power-law edge followed by an arbitrary continuum (Fig.~\ref{fig:mixed} and further generalizations). What our constrained methods provide is a generic machinery for optimizing spectra of specific types based on statistical criteria and minimal information provided. Functions with close to the same degree of flexibility as SAC would still require a large number of parameters, say tens of parameters. Though this would be much less than typically $N_\omega$ in SAC, the parameters would still be numerous enough to prohibit optimization in conventional ways, for similar reasons as why $\chi^2$ minimization does not work with the parametrizations used here. The forms would have to be regularized in some way, e.g., by sampling parameters, and then the task starts to resemble SAC with parametrizations beyond those considered here. Further complications may arise from the likely complicated inherent entropic pressures of such forms, because of the way different parameters can affect the function very differently in general. The parametrizations with large $N_\omega$ used here have the advantage of similarity to statistical mechanics, including well-defined generalized thermodynamic limits.
Following our initial work on spectra with sharp quasi-particle peaks in Ref.~\cite{shao17}, we have clarified (Sec.~\ref{sec:delta2}) how the quasi-particle weight converges toward its correct value with increasing data quality. We have also established the statistical signatures of a quasi-particle peak of finite width when a $\delta$-function is incorrectly imposed in the parametrization, i.e., the method can detect the inapplicability of the constraint (to an extent set by the data quality). As an application of this parametrization and optimization to a problem with still open questions, we investigated the dynamic structure factor of the 2-leg Heisenberg spin-1/2 ladder (Sec.~\ref{sec:hladd2}), where, at momenta close to $q=\pi$, there is an isolated $\delta$-peak at the gap $\Delta$, originating from the triplon quasi-particle, and a second gap $3\Delta$ is also expected. We built in the second gap as a further constraint, which revealed new spectral features from three-triplon excitations. We have further generalized the $\delta$-peak method to a quasi-particle peak of finite width, using a parametrization, Fig.~\ref{peakfig}(c), where the peak weight is split up among $N_p$ $\delta$-functions, the number of which is much smaller than the total number $N_\omega$ (and their amplitudes conversely are much larger). Both $N_p$ and the total peak weight $A_0$ are parameters that have to be optimized, for which we have proposed a reasonably efficient scheme. We have presented promising test results for synthetic spectra with both narrow and broad peaks, showing that narrow peaks can be reproduced at a level far beyond what can be achieved with conventional SAC or ME methods. 
With an SAC parametrization specifically tailored to power-law singularities---the distance-monotonic $\delta$-functions depicted in Fig.~\ref{fig:spec}(e)---we have demonstrated unprecedented fidelity in reproducing various synthetic spectral functions as well as the dynamic structure factor of the Heisenberg chain, which hosts a spinon continuum with edge divergence. It should be noted that, as in the case of the quasi-particle peak discussed above, prior knowledge of the location of the edge of the spectrum is not required, though if available such information can be used as input as well. Otherwise the sampling process equilibrates to a stable edge very close to the correct location, typically to better than $1\%$ error in our tests (see, e.g., Fig.~\ref{fig:hbergsdens}). With the basic distance-monotonic constraint, we can reproduce continuous spectra with a strongly divergent edge followed by monotonic decay. The divergence can be quenched by introducing an optimized constraint on the minimum $\delta$-function separation. With a further generalization of the parametrization involving varying amplitudes, the exponent governing a divergent or convergent power-law edge can also be extracted by optimizing a parameter (as in the example in Fig.~\ref{p33scan}). In our most complete modeling of a spectrum with arbitrary continuum and power-law edge, we combine unrestricted SAC sampling with the edge parametrization discussed above. Following promising tests with a synthetic spectrum (Fig.~\ref{fig:mixed} and other cases not presented here), this parametrization was used to study the dynamic structure factor of the 3-leg Heisenberg ladder in Sec.~\ref{sec:hladd3}. While there is still limited ability to resolve more than one or two peaks in the continuum, the constrained SAC methods offer unprecedented access to spectral edges originating from quasi-particle fractionalization. We note that there is really no other way to reliably extract sharp edges.
The most competitive alternative method is time-dependent DMRG, which in some cases can deliver spectral functions with much more structure (more peaks) than what is possible in practice with analytic continuation of QMC data. However, sharp edges pose a problem for DMRG calculations, and we have also pointed out apparent difficulties with resolving the three-triplon contributions to the dynamic structure factor of the 2-leg Heisenberg ladder (which we did detect in Sec.~\ref{sec:hladd2}). While spectral weight above the isolated $\delta$-peak at the gap $\Delta$ has been detected in DMRG work \cite{schmidiger12}, this weight does not extend down to $\omega = 3\Delta$ but, for reasons that remain unresolved, exists only at higher energy. There are many other applications of time-dependent DMRG and related methods directly targeting the frequency space. In relation to models discussed in this work, the dynamic structure factor of the standard 1D Heisenberg chain has been studied at both high and low temperatures \cite{barthel09}, where in the latter case the sharp edge is still rounded by $T>0$ effects. There are also effects of the approximations made in the time evolution. A method based on matrix product states and Chebyshev polynomials (in frequency space) at $T=0$ also leads to rounding of the edge \cite{xie18}, partially because of the need to smoothen a rather small number of $\delta$-functions that appear in the exact (or well approximated) finite-size spectrum (see also Ref.~\cite{wang19}). Even for the system size $L=500$ that we have considered for $T=0$ Heisenberg tests here, the ultimately discrete spectrum may still come into play in some way. However, the sharp-edge SAC method also implicitly provides a natural smoothing mechanism for the continuum, while still preserving the sharp edge, in a way that is fully appropriate for reaching the thermodynamic limit with increasing system size.
In the presence of trimerization, a more intricate spectral function forms in the Heisenberg chain as a consequence of coexistence of spinons with propagating internal trimer excitations. Similar spectral features have been resolved using both QMC-SAC \cite{cheng22} and DMRG \cite{bera22} (and were also recently observed experimentally \cite{bera22}), but the larger periodic systems accessible with QMC-SAC make it possible to resolve the low-energy spinons, in particular, to a much higher degree even with unrestricted sampling. We also point to a DMRG study of a chain with long-range interactions \cite{yang21}, where the sharp (expected) magnon peak is likely broadened by the finite time evolution, and the continuum exhibits (likely) artificial oscillations, at least partially because of true finite-size structure. It will be interesting to apply the method discussed above for a quasi-particle of finite width to this case. The DMRG and related matrix product methods \cite{aristov10,xie18,shu18} are also largely limited to 1D systems (though some impressive 2D results have also been obtained recently \cite{verresen18,sherman22}), while the methods discussed here are applicable to higher-dimensional systems as well (of course within the limitations set by the QMC sign problem). Here we point to Ref.~\cite{ma18}, where unconstrained SAC was applied to a 2D quantum-critical system with deconfined spinon excitations. It would be interesting to study the same model (and other models exhibiting spinon deconfinement) with the power-law edge parametrization of the spinon spectrum. \subsection{Maximum entropy methods} \label{sec:conc2} \subsubsection{Equivalence between sampling and maximum probability} We have derived the exact entropy of spectra collected in histograms in SAC with equal-amplitude $\delta$-functions in the frequency continuum (Sec.~\ref{sec:entropy}).
The result, Eq.~(\ref{eea}), is the well known Shannon entropy, apart from the prefactor $N_\omega$ that we discussed above in Sec.~\ref{sec:conca2}. With the temperature $\Theta$ scaled as $1/N_\omega$ in order to reach good $\chi^2$ values for the sampled spectra, we define the intensive temperature $\theta=\Theta N_\omega$. Then, if we identify $\theta=\alpha$, the probability distribution, Eq.~(\ref{psac}), is the same as that in the ME method, Eq.~(\ref{pme}), apart from the overall factor $N_\omega/\theta$ in the exponential. This factor is of critical importance, as it makes the ``free energy'' minimum increasingly deep as $N_\omega$ increases, thus effectively enforcing the minimization of the functional $\chi^2(A)/2-\theta E(A)$ when $N_\omega \to \infty$. The average spectrum in SAC then becomes exactly the maximum-probability ME spectrum with $\alpha=\theta$. In \ref{app:statmech} we further discuss the reason why the fluctuations are completely suppressed in this case, unlike in conventional statistical mechanics. An important consequence of the effective enforcement of maximum probability with increasing $N_\omega$ is that the sampling fluctuations of the SAC spectrum for finite $N_\omega$ cannot be used as reliable statistical errors on the average spectrum (as is sometimes implied \cite{beach04}). We will discuss error calculations further below in Sec.~\ref{sec:errors}. We have confirmed explicitly (Sec.~\ref{sec:sacme}) that SAC with different parametrizations corresponds to the ME method with different entropies in the prior probability; the Shannon entropy is generated only in the above discussed case of sampling the frequencies of equal-amplitude $\delta$-functions. With amplitudes sampled on a grid, the entropy Eq.~(\ref{egk}) previously calculated by Ghanem and Koch \cite{ghanem20a} applies.
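Schematically, suppressing normalizations and the precise definitions (which are those of Eqs.~(\ref{psac}) and (\ref{pme})), the correspondence can be summarized as

```latex
% Schematic restatement of the SAC--ME mapping discussed in the text.
\begin{equation*}
P_{\rm SAC}(A) \propto
 \exp\left\{-\frac{N_\omega}{\theta}
 \left[\frac{\chi^2(A)}{2}-\theta E(A)\right]\right\},
\qquad
P_{\rm ME}(A) \propto
 \exp\left\{-\left[\frac{\chi^2(A)}{2}-\alpha E(A)\right]\right\},
\end{equation*}
```

so that, upon identifying $\theta=\alpha$, the two exponents differ only by the overall factor $N_\omega/\theta$; for $N_\omega \to \infty$ the SAC average therefore collapses onto the spectrum minimizing $\chi^2(A)/2-\theta E(A)$, i.e., the maximum-probability ME spectrum.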
When both the frequencies and the amplitudes are sampled, we have conjectured, based on additivity of entropies, that a mixed entropy, Eq.~(\ref{emx}), applies to a good approximation. This form can interpolate smoothly between the Shannon and GK entropies by tuning the factor $\gamma$. To match the actual entropy generated by sampled frequencies and amplitudes, we have proposed a simple heuristic of setting $\gamma$ to the inverse width of the spectrum, suitably defined (e.g., using frequency moments). Interestingly, the mixed entropy is also generated when the conventional ME probability distribution is sampled stochastically (similar to SAC, but with prior-entropy weighting included), instead of solving for the most probable spectrum (Sec.~\ref{sec:maxent2}). Though entropy calculations in previous works had pointed to formal relationships between SAC and the ME method \cite{beach04,bergeron16,ghanem20a,ghanem20b}, they had never been pursued to the level of the full equivalence demonstrated here. Our results also differ in important respects from some of the previous suggestions, as we discussed in Sec.~\ref{sec:maxent} and address further in Sec.~\ref{sec:pathology}. The equivalence between SAC with different sampled parametrizations and the maximum-probability ME method with a corresponding entropy is fully manifested when directly matching the goodness-of-fit of results obtained with the two methods, $\langle \chi^2_{\rm SAC}(\Theta)\rangle = \chi^2_{\rm ME}(\alpha)$, and we have also developed similar methods to determine the optimal $\alpha$ or $\Theta$ value. Given this equivalence between the two methods, a relevant question is which one is better in practice.
While the ME spectrum probably can be optimized faster in many cases, the average spectrum can also be sampled very efficiently with the continuous-frequency representations when $N_\omega$ is large (despite the fact that the computational effort of the sampling algorithm in principle also scales as $N_\omega$). The high efficiency is in part related to the fact that the variance of the fluctuations relative to the average spectrum decays as $1/N_\omega$. Based on our tests, with the continuum representations it is also normally easier to extract the best goodness-of-fit, $\chi^2_{\rm min}$, which is needed to fix the value of $\Theta$ or $\alpha$. Another aspect to consider is that the entropy is not exactly known in the case where both amplitudes and frequencies are sampled, which, we have argued, is the best parametrization with SAC. Still, the mixed entropy with the simple $\gamma$ fixing produces similar results---and in general better results than the conventional Shannon entropy. Overall, it is essentially a matter of taste which method to use for unconstrained spectral functions and flat default models. We have not investigated any other default models in our work reported here, and it would be interesting, in particular, to further explore the mixed entropy, Eq.~(\ref{emxw}), with other default models. We note here that regularizing functionals different from the Shannon entropy have also previously been discussed in the context of the generic ME method \cite{bryan86} (though not in the QMC context as far as we are aware), including a form like $E_{\rm GK}$. However, those proposed alternative forms were not derived from stochastic processes. Though $E_{\rm SH}$ can be considered superior for suppressing certain correlations, it has also been recognized that it does not necessarily produce the best solutions \cite{bryan86}.
Our finding that the mixed entropy may be the best option therefore does not contradict any fundamental notion of the ME framework, though these methods have been completely dominated by the use of $E_{\rm SH}$. Constrained SAC can also in some cases be translated directly to the ME framework, e.g., the optimized plain lower frequency bound or a $\delta$-function edge. Parameters would be optimized by minimizing the goodness of the fit at constant $\alpha$, in direct analogy with fixed $\Theta$ in SAC. The multi-$\delta$ peak parametrizations in Fig.~\ref{peakfig} can perhaps be converted to the ME formalism by considering two components of the spectrum with different relative spectral weight, parametrized by the (relative) peak weight $A_0$ (and corresponding continuum weight $1-A_0$), with the continuum and peak parts mutually constrained in some way analogous to Fig.~\ref{peakfig}. Instead of $N_p$ and $N_c$ in SAC, the peak and continuum parts should be given different (optimized) entropy factors $\alpha_p$ and $\alpha_c$. After setting $\alpha_c$ in the same way as $\Theta$ in SAC, the peak weight and the ratio $\alpha_p/\alpha_c$ would be the adjustable parameters in the quasi-particle ME scheme. The monotonic distance constraint used for power-law edges does not appear to have any obvious counterpart in the ME method, since it effectively introduces a completely different entropy and is not directly tied to any conventional default model. In Sec.~\ref{sec:pdm}, we will discuss how to formulate this constraint, and generalizations of it, as a new type of default model. It is still not obvious how to translate these concepts to the maximum-probability ME scheme, however. 
\subsubsection{Pathological probability distribution} \label{sec:pathology} The ME method has been the standard analytical continuation tool for more than 30 years, since it was adapted \cite{silver90,gubernatis91} from its more general use in statistics \cite{gull84,bryan90} to extracting spectral functions from QMC data. However, it is apparent from the relationships that we have derived between the SAC and ME methods in Sec.~\ref{sec:maxent} that the underlying probability distribution, Eq.~(\ref{pmaxent2}), has a fundamental flaw. While the maximum-probability spectrum, i.e., the spectrum minimizing the functional $\chi^2(S)/2-\alpha E(S)$, is well defined, the fluctuations about this well-defined optimal spectrum are not. This problem arises because the mean spectrum in the continuum limit is defined as a functional integral over all possible spectral functions, and when parametrizing the spectrum in some way (typically as a histogram), the entropy is extensive in the number of parameters (histogram bins) $N_\omega$. Thus, if the spectrum is actually sampled, as it is in some implementations of the ME method \cite{boninsegni96,kora18}, it will approach the default model with increasing $N_\omega$, thus maximizing the entropy and leading to a typically very large $\langle \chi^2(S)\rangle$ value, regardless of the choice of $\alpha$. We observed this entropic catastrophe by sampling within the ME method in Sec.~\ref{sec:maxent2}. The same mechanism affects the SAC method as well, but in that context it was realized early on \cite{sandvik98,beach04} that the sampling temperature $\Theta$ can be adjusted so that a statistically good fit is always obtained. As discussed above, $\Theta$ must be chosen to achieve balance between the extensive entropy and the intensive goodness-of-fit $\chi^2$ (which corresponds to the energy in statistical mechanics, even though its form is unusual; see further discussion in \ref{app:statmech}).
In Sec.~\ref{sec:entropy2} we demonstrated that our $\Theta$ criterion, Eq.~(\ref{eq:chi2}), indeed satisfies the expected scaling form $\Theta \sim 1/N_\omega$. In the absence of an adjustable temperature, the maximum-probability ME spectrum is still well defined, because minimizing the functional only relies on the explicitly imposed entropic prior and is not affected by the entropy inherent in the entire space of spectral functions. In a sense, the probability-maximizing ME spectrum is a mean-field solution (as realized by Beach \cite{beach04}, though we partially disagree with some important details of his conclusions) which is unstable once the fluctuations are taken into account. Hence, it is only the fluctuations about the most probable spectrum that are problematic, which may not be that serious in practice (except for aspects discussed below in Sec.~\ref{sec:errors}) but is still of interest for properly understanding the method and its relationship to SAC. The original ME method \cite{gull84} has been extremely successful in many areas of statistics, and the pathological probability distribution has not been noted before, as far as we are aware. The root cause of the problem in the context of spectral functions, i.e., the extensive entropy, most likely does not arise in many other contexts where ME methods are applied. It arises here because of the large number of degrees of freedom $N_\omega$ of the spectrum---infinite in the limit of continuous curves. In other applications of the ME method, the number of degrees of freedom of the solution would typically be finite, and with no need to approach a continuum there would be no entropic catastrophe. Thus, there is nothing wrong with the ME method per se, but the conditions under which it is applied to spectral functions lead to a pathological distribution. 
As we have discussed (in detail in \ref{app:lowtheta}), in analytic continuation the problem is further exacerbated by the very small number $N_{\rm para}$ of effectively independent fitting parameters that a positive definite spectrum can realize, given the noisy and highly correlated QMC data points. The fact that the number of effective parameters is very small has of course been recognized before, e.g., Jarrell and Gubernatis \cite{jarrell96} discuss a different but in spirit similar concept of a number of ``good measurements'', which is determined using the eigenvalues of the covariance matrix and which we expect to be close to $N_{\rm para}$. The underdetermination of the spectrum is the well known fact that numerical analytic continuation is an ``ill posed'' problem. However, it should be noted that the formally divergent fluctuations of the probability distribution Eq.~(\ref{pme}) in the limit of a continuous spectral function are a feature of the ME method, not of the analytic continuation problem inherently (though the ill-posedness is of course inherent). It has been regarded as essentially a matter of taste whether to use the maximum-probability spectrum or the average spectrum defined by the distribution \cite{jarrell96}. What we have shown here is that the non-trivial maximum-probability spectrum exists in the continuum limit, but the average spectrum flows toward the imposed default model. Given that the maximum-probability ME solution is still well defined (and equivalent to the SAC average spectrum in the large-$N_\omega$ limit) even in the continuum limit, one may ask why the pathologies of the probability distribution even matter.
One can argue that the ME method with the standard probability distribution Eq.~(\ref{pme}) is perfectly fine also for sampling (as in many cases it is \cite{kora18}), as long as the number of sampling degrees of freedom is not taken too large, i.e., a histogram with, say, $N_\omega=100$ bins (and also a good default model will help in this regard). Then the distribution can still be sampled, if so desired, with an acceptable $\langle \chi^2\rangle$ value, and with or without sampling, an error analysis based on fluctuations (discussed further below in Sec.~\ref{sec:errors}) can be performed. However, the results will to some extent depend on $N_\omega$, and it also seems fundamentally unsatisfying to work with a distribution that does not have the desired continuum limit. It is in this sense that we categorize the ME probability distribution as pathological. \subsubsection{Fluctuations and statistical errors} \label{sec:errors} In practice, the main problem with the ME probability distribution is that it is not possible to use it for computing meaningful statistical errors of the spectrum when the fluctuations are divergent. Error analysis has been carried out \cite{jarrell96,linden96} by formally analyzing fluctuations about the maximum-probability spectrum. It was noted \cite{jarrell96} that the correlation function $\langle \delta A_i \delta A_j\rangle$ of the deviations of the amplitudes (in a histogram) from their mean values is proportional to elements of an inverse matrix $\Gamma^{-1}_{ij}$. The matrix $\Gamma$ (the definition of which we do not need here) is expected to have some small eigenvalues, i.e., there are directions in amplitude space where the probability density is very flat, and it was pointed out that this could lead to large fluctuations \cite{jarrell96}. For a finite discretization, the fluctuations defined by $\Gamma$ are still well defined and can formally be used to compute error bars, as was done in examples in Ref.~\cite{jarrell96}.
It was also correctly pointed out that well-defined (finite) error bars can only be computed in this way for the spectral weight integrated over some frequency window. What we have emphasized here (and what was also noted in Refs.~\cite{sandvik16,ghanem20a}) is that the entropy of the spectrum increases with increasing density of grid points, to the extent that the mean spectrum flows toward the default model when $N_\omega \to \infty$. A leading-order analysis of the fluctuations about the most probable spectrum in an essentially completely flat space is then insufficient, and it is doubtful whether results for finite discretization have any well-defined statistical meaning, since the fluctuations about the optimal spectrum, also in finite frequency windows, will grow with $N_\omega$. In SAC, the situation is the opposite, in the sense that the factor $N_\omega$ in the exponent of the probability distribution Eq.~(\ref{psac}) causes diminishing fluctuations about the average spectrum when $N_\omega \to \infty$ (and, as we summarized above, this average spectrum becomes the same as the most probable ME spectrum). Thus, the method of directly using spectral-weight fluctuations during the sampling process to estimate statistical errors \cite{beach04} (as has also been done in the sampling version of the ME method \cite{kora18}) is also flawed, as those statistical errors depend on $N_\omega$ and vanish when $N_\omega \to \infty$ (and in the ME case the sampled average also deteriorates). In both SAC and ME calculations, the purely statistical errors propagated to the spectrum from the noise in $\bar G(\tau)$ can be calculated by bootstrapping. The analytic-continuation procedure is then carried out also for some number (ten or more) of different bootstrap realizations of $\bar G(\tau)$, to compute the statistical fluctuations about the spectrum obtained with the original full data set.
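The bootstrapping procedure just described can be sketched in a few lines. Here `analytic_continuation` is a trivial pass-through stand-in for a full SAC or ME run (purely an illustrative assumption, not the actual method), and the synthetic binned data replace real QMC bin averages:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_realizations(g_bins, n_boot=10):
    """Generate bootstrap realizations of G(tau) by resampling QMC bins.

    g_bins: array of shape (n_bins, n_tau), one row per QMC bin average.
    Returns an array of shape (n_boot, n_tau) of resampled bin means.
    """
    n_bins = g_bins.shape[0]
    idx = rng.integers(0, n_bins, size=(n_boot, n_bins))
    return g_bins[idx].mean(axis=1)

# Stand-in for the full SAC/ME run (the real procedure samples a spectrum);
# here we pass the data through just to illustrate the error propagation.
def analytic_continuation(g_tau):
    return g_tau  # placeholder "spectrum"

g_bins = rng.normal(1.0, 0.01, size=(200, 5))   # synthetic binned G(tau) data
spectra = np.array([analytic_continuation(g)
                    for g in bootstrap_realizations(g_bins, n_boot=20)])
error_bars = spectra.std(axis=0)  # fluctuations about the full-data result
```

In an actual application each bootstrap realization would be run through the complete (and much more expensive) continuation procedure, so ten to twenty realizations is a typical practical compromise.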
However, in our experience, such statistical errors are often much smaller than remaining errors originating from various sources of bias in the analytic continuation process. Thus, it remains difficult to estimate all the errors in some absolute way, and it is important to consider the evolution of extracted quantities with the data quality. Benchmark studies, based on exact model solutions or synthetic data, are very useful, as we have seen in many examples in this work (e.g., the convergence tests illustrated in Fig.~\ref{broadened-1}). Our main focus here has been on edge features, and we have seen that they are amenable to quantitative analysis. Statistical errors and bias related to completely unresolvable (given typical QMC data quality) features above the edge, e.g., multiple sharp peaks, can likely never be estimated properly in the absence of further information. \subsection{Future prospects} \label{sec:conc3} We foresee a wealth of applications of the methods discussed here to quantum many-body systems accessible to QMC simulations, including not only spin Hamiltonians and other quantum lattice models, but also dynamical mean-field calculations \cite{georges96,song20}. Analytic continuation of simulation data is also a key aspect of lattice field theory that is growing in importance \cite{ding18,aarts21,horak21}, and the improved SAC and ME techniques may find applications there as well. The methods may also potentially be useful for analytic continuation of imaginary-time DMRG data \cite{linden20}. We do not discuss specific applications here, but outline possible future technical developments.
\subsubsection{Default peak structures and profile default models} \label{sec:pdm} The distance-monotonic parametrization, Fig.~\ref{fig:spec}(e), can be regarded as a special case of a more general {\it default peak structure} (DPS), where density maxima are built in by requiring a certain number $n$ of local minima in the separation $d_i = \omega_{i+1}-\omega_i$ between the $\delta$-functions. In the case in Fig.~\ref{fig:spec}(e), there is only one minimum, at the left edge of the sampled spectrum, but in general the minima can be located anywhere. An example with $n=2$ is shown in Fig.~\ref{dps}. This type of DPS will clearly entropically favor $n$ sharp peaks when the number of $\delta$-functions is large, and, by a generalization of the edge regulator we discussed in Sec.~\ref{sec:triangle}, the asymptotic inverse-square divergences when $N_\omega \to \infty$ can be quenched by setting bounds $\Delta\omega_i$ on the minimum separations. These bounds can either be optimized, which would be difficult and time-consuming if different values $\Delta\omega_i$ are considered for several peaks, or they can just be imposed without optimization to avoid excessively narrow peaks. The locations of the minima should be allowed to migrate during the sampling process, keeping $n$ fixed (otherwise entropy would lead to large $n$---a proliferation of peaks). Thus, one of the peaks may (if the imaginary-time data so dictate) migrate to the lower edge of the spectrum and form the kind of one-sided singularity that we have investigated in detail. \begin{figure}[t] \centering \includegraphics[width=75mm]{fig51.pdf} \caption{Example of a DPS constrained to have two peaks (density maxima), with optional minimum distances controlled by $\Delta\omega_1$ and $\Delta\omega_2$.} \label{dps} \end{figure} We have already tried $n=2$ (Fig.~\ref{dps}) for a synthetic spectrum with two peaks.
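The $n$-peak constraint underlying a DPS can be sketched as a simple counting function over the separations $d_i$; a sampling update that would change the count away from the imposed $n$ would be rejected. This is our own illustrative sketch (the function name and the convention of counting edge minima are assumptions, chosen to match the distance-monotonic case with its single minimum at the lower edge):

```python
import numpy as np

def n_density_maxima(omegas):
    """Count local minima of the separations d_i = omega_{i+1} - omega_i.

    Each minimum of the separation sequence corresponds to a density
    maximum (peak) of the delta-function configuration.  Edge minima,
    where separations grow monotonically away from a spectral edge, are
    counted as well, as in the distance-monotonic parametrization.
    """
    d = np.diff(np.sort(np.asarray(omegas, dtype=float)))
    # interior local minima of the separation sequence
    count = int(np.sum((d[1:-1] < d[:-2]) & (d[1:-1] < d[2:])))
    if d[0] < d[1]:
        count += 1   # minimum at the lower edge
    if d[-1] < d[-2]:
        count += 1   # minimum at the upper edge
    return count

# One-sided edge peak: separations grow away from the lower edge (n = 1)
one_peak = np.cumsum([0.0, 1.0, 2.0, 3.0, 4.0])
# Two density maxima: one at the lower edge, one in the interior (n = 2)
two_peaks = np.cumsum([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0])
```

In a sampling run this check (or an incremental local version of it, which only inspects the separations adjacent to a moved $\delta$-function) would gate each Metropolis update.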
As in the case of the edge singularity, the sampling adapts to the correct bounds of the spectrum and the peaks appear in the correct locations. We did not optimize $\Delta\omega_1$ and $\Delta\omega_2$, and the peaks were much sharper than the Gaussians used in our tests. Spectra with several sharp peaks are also common and have been the subject of recent methods specifically adapted to resolving a number of such peaks \cite{linden19}. The DPS approach appears to be competitive for such spectra. When the number of peaks is unknown, $n$ can be gradually increased until a satisfactory $\langle \chi^2\rangle$ value can be obtained, thus signaling that the data at the present error level does not support additional peaks. Compared to the method with a sharp edge and generic continuum, Sec.~\ref{sec:monomixed}, the DPS with $n>1$ should be better for reproducing multiple sharp peaks, while the generic continuum should be better when the spectrum is rather featureless beyond the edge (or a single peak somewhere in the middle of the spectrum). We envision a number of further developments of the DPS parametrization. For instance, it could be combined with amplitude sampling to better model the behavior between the peaks---perhaps also to better morph the peaks into their correct shapes, though that may be more difficult because of the large entropic pressures toward sharp peaks, unless the $\Delta\omega$ regulators are imposed as well. In this context, it may be useful to adopt the method of not updating the amplitudes but only swapping them, $A_i,A_j \leftrightarrow A_j,A_i$, after initializing a set of varying amplitudes \cite{qin17}, e.g., $A_i \propto i$ or $A_i \propto {\rm constant} + i$. Then, to improve the fit in case the DPS by itself is sub-optimal, the sampling can distribute the available amplitudes optimally among the frequencies.
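A swap-only amplitude update of the kind mentioned above can be sketched with a synthetic kernel and data standing in for the real SAC setup (all sizes, values, and names below are illustrative assumptions, not the published implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the SAC setup: kernel K maps amplitudes A_i at
# fixed frequencies to G(tau); all dimensions and values are illustrative.
n_tau, n_w = 8, 20
K = rng.random((n_tau, n_w))
A = np.arange(1.0, n_w + 1)      # initialized varying amplitudes, A_i ~ i
A /= A.sum()
G_data = K @ A + rng.normal(0.0, 1e-3, n_tau)
sigma = np.full(n_tau, 1e-3)
Theta = 1.0                       # sampling temperature

def chi2(a):
    r = (K @ a - G_data) / sigma
    return float(r @ r)

def swap_sweep(a):
    """One Metropolis sweep of swap-only updates A_i, A_j <-> A_j, A_i.

    The total weight and the multiset of amplitude values are conserved
    by construction; only their assignment to frequencies changes."""
    c = chi2(a)
    for _ in range(n_w):
        i, j = rng.integers(0, n_w, size=2)
        a[i], a[j] = a[j], a[i]
        c_new = chi2(a)
        if rng.random() < np.exp(min(0.0, -(c_new - c) / (2.0 * Theta))):
            c = c_new                      # accept the swap
        else:
            a[i], a[j] = a[j], a[i]        # reject: swap back
    return a, c

A_init = np.sort(A).copy()
A, c_final = swap_sweep(A)
```

The attraction of the swap-only move is visible in the invariants: normalization and the set of available amplitude values never change, so the entropic pressure toward sharp peaks from the amplitudes themselves is absent by construction.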
This method of including some amplitude fluctuations, but not fully sampling the space of all possible amplitudes, may be preferable here because there is less entropic pressure from the amplitudes themselves favoring sharper peaks (as in our tests, e.g., in Fig.~\ref{sw1}). Of course the DPS can also in principle be combined with the amplitudes adapted for specific exponents of the singularities, as in Sec.~\ref{sec:monop}, though this approach may become too complicated if there are several peaks to optimize. We emphasize that there is an important difference between default models used in the ME method (and often also within SAC \cite{beach04,fuchs10,ghanem20a,ghanem20b}) and constrained SAC parametrizations, specifically the DPS proposed here but also other cases that we have discussed (such as the $\delta$-peak). A conventional default model is locked to the frequency axis, i.e., it dictates where specific structures are favored. In contrast, the DPS is self-adapting, in that it does not correspond to a well-defined spectrum in the absence of data (in which case it completely spreads out over the entire frequency space) but finds the frequency bounds inherent in the imaginary-time data once $\chi^2$ is used in the sampling. In the case of $n=1$ with the density maximum at the edge [Fig.~\ref{fig:spec}(e)], we found this self-adaptation of the bounds to be remarkably precise, and we expect that to be the case also for $n>1$. There are also other potential ways to achieve self-adaptation of the bounds with generalized default models. A DPS can be regarded as an extreme case of hard-core interactions between the $\delta$-function ``particles'' (in this case a rather odd interaction that enforces $n$ density maxima). Other, ``softer'' particle-particle potentials could be imposed instead---imagine the spectral amplitudes residing on a string of particles connected by some kind of nonlinear springs with tunable properties.
Such potentials would change the entropic pressures and lead to different native spectral-weight distributions. The native shape can be regarded as a {\it profile default model} (PDM), and it should be noted that a PDM does not define the location of the spectrum in frequency space. However, unlike the DPS, the shape can be defined also in the absence of data if the interactions overall form a finite equilibrium shape, with bounds that can be regulated by a single optimized parameter (the overall mean distance between the particles). Once $\chi^2$ is included in the sampling, the shape can be further modified from the native shape, similar to how the optimal spectrum in the ME method is distorted relative to the default model. This overall approach of introducing potentials bears some resemblance to the way of combining $\chi^2$ fitting with a gradient-squared term imposed on the spectrum \cite{white89}, but in continuous frequency space instead of the grid and with the possibility of more general potentials between the particles, as well as the fact that the locations $\omega_i$ (and possibly also the amplitudes $A_i$) are sampled, not just statically optimized. \subsubsection{Machine learning} Machine learning (ML) methods are making inroads on many fronts in quantum many-body physics, including applications to the analytic continuation problem \cite{arsenault17,fournier20,yoon18,song20,huang22,yao21,zhang22}. Without going into details, the attempts so far typically use a large data set of known spectral functions $S(\omega)$ and their associated imaginary-time correlation function $\bar G(\tau)$. The neural network is trained to deliver $S(\omega)$ given $\bar G(\tau)$ (with noise if appropriate for the intended application). These methods show some promise, e.g., it was claimed that they can resolve a Gaussian peak better than the ME method \cite{fournier20,zhang22}, and examples of spectra with sharp-edged features have also been presented \cite{yao21}.
However, most of what has been produced so far can also be achieved with standard ME or SAC methods, and much better results can likely be produced with the improved methods developed here. There are reasons to be skeptical about ML as a universal unbiased analytic continuation tool. While it is certainly possible to consider some class of spectral functions, e.g., sums of small numbers of Gaussians \cite{zhang22}, more generally the possibilities are endless and it is difficult to imagine constructing a complete training set. A specific problem with analytic continuation is that a given set of noisy imaginary-time data would normally be consistent with a vast number of different spectral functions, as we have seen amply in the examples studied here. While regularization can be built in at the stage of training (i.e., only including ``regular'' spectral functions), still many spectra regarded as regular would be equally consistent with the data, and some additional criterion has to be applied to select the ``best'' spectrum. It is not clear why this would be any better than using SAC or ME methods (or even function fitting with a relatively small number of parameters), though of course one could decide to categorize different types of training spectra, e.g., with sharp edge features, and then select which set to use for a given application, in analogy with selecting constraints for SAC. We propose that ML could be used in a different way, in combination with SAC or ME results. The basic idea is that input to the neural network can go beyond imaginary-time data $\bar G(\tau)$. As an example, looking at all our results in Fig.~\ref{sw1} (and for an even wider range of $\langle \chi^2\rangle$ values), is it possible to determine which of the many spectral functions is the closest to the correct result?
Thus, we envision that the neural network is trained not just with a set of spectra $S(\omega)$ and corresponding $G(\tau)$ data, but with a whole series of spectra produced for a range of $\Theta$ ($\alpha$) values by SAC (ME) runs with different parametrizations (different entropy forms). Importantly, results for $\langle \chi^2(\Theta)\rangle$ or $\chi^2(\alpha)$ could also be included in the training and decision making, as these functions may contain information that could also improve the outcome, i.e., resulting in the best spectrum among those with acceptable $\langle \chi^2\rangle$ values. The imaginary-time data set of course implicitly already contains all available information and, strictly speaking, a whole set of spectra produced under different conditions does not contain any new information beyond $\bar G(\tau)$ (and its covariance matrix). Coming in the form of a rather small set of floating-point numbers, however, the information on $S(\omega)$ is contained in $\bar G(\tau)$ in a rather subtle manner, in the details of these numbers, which are not ideal for ML to recognize. With pre-processing by SAC or ME procedures, information ``expanded'' from the original $\bar G(\tau)$ data sets is produced. This new representation of the same information should be easier for ML to handle. The training would have to be done at different levels of noise added to the $G(\tau)$ data, and the network later used at a noise level corresponding to the given QMC input data. Here a complication is the fact that the QMC data are correlated, but the majority of these correlations can likely be modeled just by an autocorrelation time, as we do for synthetic data, Eq.~(\ref{corrnoise}). Going beyond basic unrestricted parametrizations, a series of training spectra can further be produced with various constraints imposed, and, along the same lines as above, the ML process would determine which constraint is the most appropriate.
Characteristic ways in which output spectra depend on the parameters in SAC would very likely be helpful for discriminating between various spectral features used in constrained sampling. Thus, for a single spectral function to be included in a training set, SAC runs would be carried out for several different parametrizations and values of constraining parameters. After training with spectra with different types of edge features (or other prominent features), the ML process may be able to identify the proper constraint and the optimal value of any associated parameter $p$ based on just curves $\langle \chi^2(p)\rangle$ generated by SAC for different types of constraints. These ideas could be initially explored, e.g., by testing whether ML could distinguish between a power-law edge singularity and a sharp quasi-particle peak. This problem is important in the context of fractionalization of excitations, where cross-overs between these two types of spectral edges can be expected in realistic situations \cite{shao17,zhou21}. As an alternative to full-fledged ML approaches, simpler validation methods developed in statistical learning theory \cite{mehta21} may be useful for optimizing parameters and for quantifying the suitability of various parametrizations and constraints. Here the original data set is divided into two groups, and in the present context the SAC would be run with one of these subsets and the other subset used to statistically validate the results. A simple form of such a cross-validator has already been tested in the SAC context, with promising results \cite{efremkin21} that motivate further explorations along these lines.
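The two-subset validation idea can be illustrated schematically. Here the ``fit'' is simply the mean of the training bins rather than an actual SAC run on the training subset (an assumption purely for illustration), and the score is a $\chi^2$-like mismatch per $\tau$ point evaluated on the held-out bins:

```python
import numpy as np

rng = np.random.default_rng(3)

def cross_validation_score(g_bins):
    """Split the QMC bins in two; 'fit' on one half, validate on the other.

    The fit here is just the training-set mean of G(tau); in an actual
    application it would be the G(tau) reconstructed from a SAC run on
    the training subset.  Returns a chi^2-like score per tau point,
    which is O(1) when the fit is statistically consistent with the
    held-out data and grows when it is not.
    """
    n = g_bins.shape[0]
    perm = rng.permutation(n)
    train, valid = perm[: n // 2], perm[n // 2:]
    fit = g_bins[train].mean(axis=0)            # stand-in for SAC on subset 1
    resid = g_bins[valid].mean(axis=0) - fit    # validate against subset 2
    err = g_bins[valid].std(axis=0) / np.sqrt(len(valid))
    return float(np.mean((resid / err) ** 2))

g_bins = rng.normal(1.0, 0.05, size=(100, 10))  # synthetic binned G(tau)
score = cross_validation_score(g_bins)
```

Averaging the score over several random splits, and comparing it across parametrizations or constraint values, would be the natural way to use such a validator for model selection.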
\subsubsection{Negative sampling weights} \label{sec:prospnegative} The imposed positive definiteness is responsible for the anomalies appearing in the spectrum when sampling at very low $\Theta$ (corresponding to $\chi^2$ minimization), i.e., a small number of spikes that would become true $\delta$-functions if the $\Theta \to 0$ limit could be realized in practice (\ref{app:lowtheta}). The entropy at $\Theta > 0$ (specifically for $\Theta$ in a range satisfying our statistical criterion) is what removes these anomalies and leads to a dense spectrum. We have further demonstrated this point by relaxing the constraint, introducing a small fraction of negative spectral weight in the sampling (\ref{app:low2}). Even a very small negative relative weight (we tested $10^{-4}$ and $10^{-3}$) leads to a dramatic lowering (by orders of magnitude) of the $\Theta$ value at which the spike anomalies appear. Thus, negative spectral weight is another mechanism that can supply entropy to the system and remove anomalies. Our criterion for fixing $\Theta$, Eq.~(\ref{eq:chi2}), is intended to remove the anomalies by thermal fluctuations. In the presence of a non-thermal smoothing mechanism---negative spectral weight---not only can $\Theta$ be lowered further but $\langle \chi^2\rangle$ remains low and can even be below the lowest acceptable value attainable with only positive weight without causing overfitting (Fig.~\ref{fig:nega}). Thus, the negative weights appear to be a mechanism for taking into account those noise features in $\bar G(\tau)$ that cause anomalies. The negative weight is of course itself an unphysical feature, but at a small fraction its presence is inconsequential and can be neglected. Though we have not yet constructed a suitable criterion for fixing both the fraction of negative spectral weight and $\Theta$, it is possible that the resolution of the SAC method can be further improved in this way.
In particular, the resolution may be improved at high physical temperatures $T$ (small $\beta=1/T$), where the imaginary-time correlations are available only in a small range $\tau \in [0,\beta/2]$. The effective number of parameters is then typically very small, often $N_{\rm para}=4$, i.e., two spikes appear in the $\Theta \to 0$ spectrum (as in the case of the $L=16$ Heisenberg chain at $\beta=2$, which we have used in many tests). The limited information is still sufficient to resolve the rough distribution of spectral weight, but we have also seen (Fig.~\ref{sw1}) that the details of the profile depend significantly on the parametrization used. Some of these details are very important, e.g., the $\omega \to 0$ limit dictates the spin-lattice relaxation rate in NMR experiments (see Ref.~\cite{shu18} for an SAC application in this context). If the number of effective parameters can be increased by just one step by sampling with some negative spectral weight, e.g., from $N_{\rm para}=4$ to $N_{\rm para}=6$, as is in fact the case in the test reported in \ref{app:low2}, the reliability of the spectrum for small $\omega$ may be improved significantly. The small fraction of unphysical negative weight typically appears at high frequencies (at least for gapless spectra) and would not directly be included in the final result. For spectra where $N_{\rm para}$ is larger, the impact of negative weights is likely in general smaller. In addition to the tests reported in Fig.~\ref{fig:nega}, we have investigated the impact of negative weights with the synthetic spectrum in Fig.~\ref{syntcomp}. In this case the number of low-$\Theta$ spikes is five with all-positive amplitudes and remains the same when a small fraction of negative spectral weight is included. We plan more extensive studies of SAC with negative amplitudes. \section*{Acknowledgments} We would like to thank Kevin Beach, Khaldoon Ghanem, and Erik Koch for their detailed comments on the manuscript. 
We also thank Adrian Feiguin for discussions about the time dependent DMRG method and Markus Holzmann for pointing out the cross-validation method. H.S. was supported by the National Natural Science Foundation of China under Grants No.~12122502, No.~11904024, and No.~11734002, and by National Key Projects for Research and Development of China under Grant No.~2021YFA1400400. A.W.S. was supported by the Simons Foundation under Simons Investigator Award No.~511064. Most of the computations were performed using the Shared Computing Cluster administered by Boston University's Research Computing Services.
\section{Introduction} The non-linear structure of the scalar sectors of the maximal supergravities has been extended to formulate the non-gravitational bosonic field equations as non-linear realizations in \cite{julia1,julia2}. The coset formulation of the scalars is improved to cover the other bosonic fields as well. The method of \cite{julia1,julia2} includes the dualisation of the field content and the construction of a Lie superalgebra which generates the doubled coset element whose Cartan form leads to the original field equations by satisfying the Cartan-Maurer equation. After the determination of the algebra structure it is possible to express the first-order field equations as a twisted self-duality condition which the dualized Cartan form satisfies. In \cite{west1,west2,west3} a more general coset formulation of the IIA \cite{2A1,2A2,2A3}, the IIB \cite{2B1,2B2,2B3} and the D=11 \cite{d=11} supergravity theories is introduced to include gravity as well. The scalar sectors of a wide class of supergravities, in particular the scalar sectors of all the pure and the matter coupled $N>2$ extended supergravities in $D=4,5,6,7,8,9$ dimensions as well as the maximally extended supergravities in $D\leq11$, can be formulated as symmetric space sigma models. The global symmetry groups $G$ of the scalar sectors, and also of the bosonic sectors, of the lower-dimensional Kaluza-Klein descendant supergravities of the D=11 supergravity (the maximal supergravities) are semi-simple split real forms (maximally non-compact). For this reason the scalar coset manifolds $G/K$, where $K$ is the maximal compact subgroup of $G$, are Riemannian globally symmetric spaces \cite{hel} and they can be parameterized by the Borel subalgebra of $G$.
In general, especially for the matter coupled supergravities, the scalar coset manifolds $G/K$ are based on non-split \footnote{By non-split we mean that $G$ is a non-compact real form of a semi-simple Lie group but it is not necessarily maximally non-compact (split).} global symmetry groups $G$. In this case one has to use the solvable Lie algebra gauge \cite{fre} to parameterize the Riemannian globally symmetric space scalar coset manifold $G/K$. In \cite{nej2} the $G/K$ symmetric space sigma model is discussed in detail when the global symmetry group $G$ is, in general, a non-compact semi-simple real form. The dualisation and the first-order formulation of the general non-split symmetric space sigma model is also performed in \cite{nej2}. In this work we consider the coupling of other fields to the scalar coset Lagrangian of the general non-split $G/K$ symmetric space sigma model. We will perform the complete dualisation of the fields and the first-order formulation when there is coupling of other ($m-1$)-form matter fields to the scalar coset $G/K$. We will construct the dualized coset element which will realize the field equations of the scalar coset which is coupled to the ($m-1$)-form fields. We will assume the most general non-split scalar coset case which is discussed in \cite{nej2,ker2,ker1}. Besides the scalar fields there will be a number of $m$-form field strengths whose number is fixed by the dimension of the fundamental representation of the Lie algebra $\mathbf{g}_{0}$ of $G$. As it will become clear in the next section, the dimension of the representation and the number of the coupling fields must be the same so that the coupling kinetic term between the scalar coset and the matter fields in the Lagrangian can be constructed within an appropriate representation of the global symmetry group $G$ \cite{ker2,ker1}.
We will follow the standard dualisation method of \cite{julia1,julia2} by introducing auxiliary dual fields and by assigning generators to the original and the dual fields. The first objective of this work will be to derive the Lie superalgebra structure which generates the doubled coset element. The first-order formulation will then be presented as a twisted self-duality equation \cite{julia1,julia2} by using the derived algebra structure and by calculating explicitly the doubled field strength. The dualisation method presented in \cite{julia1,julia2} is the non-linear realization of the respective supergravity theory; it is also another manifestation of the Lagrange multiplier methods, in which the dual fields correspond to the Lagrange multipliers which are introduced to construct the Bianchi Lagrangians. For this reason the Cartan form which is generated by the dualized coset element not only realizes the original second-order field equations of the matter coupled scalar coset by satisfying the Cartan-Maurer equation but also yields the first-order field equations via a twisted self-duality equation \cite{julia1,julia2,nej2}. This first-order formulation corresponds to the construction of the dualized Lagrangian by adding the Bianchi terms to the Lagrangian of the original fields and consequently to the derivation of the first-order algebraic field equations of the original fields in terms of the Lagrange multiplier (dual) fields \cite{pope}. We start by discussing the Lagrangian and deriving the field equations in Section two. In Section three we work out the dualisation, construct the algebraic structure which realizes the field equations, and finally obtain the first-order field equations. \section{The Symmetric Space Sigma Model and the Couplings} The scalar sectors of a wide class of supergravity theories are formulated as $G/K$ symmetric space sigma models \cite{julia1,julia2,ker2,ker1}.
The group $G$ is the global symmetry group of the corresponding scalar Lagrangian and it is a non-compact real form of a semi-simple Lie group. The local symmetry group $K$ is the maximal compact subgroup of $G$. The coset space $G/K$ is a Riemannian globally symmetric space for all the possible $G$-invariant Riemannian structures on $G/K$ \cite{hel}. There is a legitimate parametrization of the coset representatives by using the solvable Lie algebra of $G$ \cite{hel,fre}. If $\mathbf{h_{k}}$ is the subalgebra of the Cartan subalgebra $\mathbf{h}_{0}$ of $\mathbf{g}_{0}$ (the Lie algebra of $G$) which generates the maximal R-split torus in $G$ \cite{hel,fre,nej2,ker2}, let $\{H_{i}\}$, $i=1,...,r$, be the generators of $\mathbf{h_{k}}$ and also let $\{E_{m}\}$ be the subset of the positive root generators of $\mathbf{g}_{0}$ such that $m\in\Delta_{nc}^{+}$. The roots in $\Delta_{nc}^{+}$ are the non-compact roots with respect to the Cartan involution $\theta$ which is induced by the Cartan decomposition \begin{equation}\label{cartandecomp} \mathbf{g}_{0}=\mathbf{k}_{0}\oplus\mathbf{u}_{0}, \end{equation} where $\mathbf{k}_{0}$ is the Lie algebra of $K$ and $\mathbf{u}_{0}$ is a vector subspace of $\mathbf{g}_{0}$ \cite{hel,nej2}. The positive root generators $\{E_{m}\}$ generate a nilpotent Lie subalgebra $\mathbf{n_{k}}$ of $\mathbf{g}_{0}$ \cite{ker2}. The coset representatives of $G/K$, which are the image points of the map from the $D$-dimensional spacetime (we assume $D>2$ so that the dualisation analysis of the next section is meaningful, and we take the signature of the spacetime to be ($-,+,+,...$)) into the group $G$, can be expressed as \begin{equation}\label{nu} \nu (x)=e^{\frac{1}{2}\phi ^{i}(x)H_{i}}e^{\chi ^{m}(x)E_{m}}. \end{equation} This is called the solvable Lie algebra parametrization \cite{fre}.
We should state that we make use of the Iwasawa decomposition \begin{subequations}\label{iwasawa} \begin{align} \mathbf{g}_{0}&=\mathbf{k}_{0}\oplus \mathbf{s}_{0}\notag\\ \notag\\ &=\mathbf{k}_{0}\oplus \mathbf{h_{k}}\oplus \mathbf{n_{k}},\tag{\ref{iwasawa}} \end{align} \end{subequations} where $\mathbf{s}_{0}$ is the solvable Lie subalgebra of $\mathbf{g}_{0}$ which is isomorphic to $\mathbf{u}_{0}$ as a vector space \cite{hel,nej2}. The diffeomorphism from $\mathbf{u}_{0}$ onto the Riemannian globally symmetric space $G/K$ \cite{hel} enables the construction of the parametrization in \eqref{nu}. An involutive automorphism $\theta\in Aut(\mathbf{g}_{0})$ of a semi-simple real Lie algebra $\mathbf{g}_{0}$ is called a Cartan involution if the induced bilinear form $B_{\theta}(X,Y)=-B(X,\theta(Y))$, where $B$ is the Killing form on $\mathbf{g}_{0}$, is strictly positive definite $\forall X,Y\in \mathbf{g}_{0}$. If the semi-simple complex Lie algebra $\mathbf{g}=\mathbf{g}_{0}^{C}$ is the complexification of $\mathbf{g}_{0}$, then the set of elements $\mathbf{t}$ of $\mathbf{g}$ which is generated as \begin{equation}\label{ch3315} \mathbf{t}=\mathbf{k}_{0}+i\mathbf{u}_{0}, \end{equation} through the complexification of $\mathbf{g}_{0}$, is a compact real form of $\mathbf{g}$ whose conjugation will be denoted by $\tau$. We should bear in mind that $\mathbf{g}_{0}$ has images in $\mathbf{g}=\mathbf{g}_{0}^{C}$ whose realizations in $\mathbf{g}_{0}\times \mathbf{g}_{0}$ are isomorphic to $\mathbf{g}_{0}$. In this way $\mathbf{k}_{0}$ and $\mathbf{u}_{0}$ can be considered as subsets of $\mathbf{g}$ and then $\mathbf{t}$, which is a subset of $\mathbf{g}$, is also a subset of one of the images of $\mathbf{g}_{0}$ in $\mathbf{g}$. Thus under the realization of $\mathbf{g}$, $\,\mathbf{t}^{R}$ corresponds to a subalgebra of $\mathbf{g}_{0}$.
The real semi-simple Lie algebra $\mathbf{g}_{0}$ is also a real form of its complexification $\mathbf{g}$, so that we may define $\sigma$ as the conjugation of $\mathbf{g}$ with respect to $\mathbf{g}_{0}$. The map $\theta=\sigma\cdot \tau=\tau\cdot \sigma$ is an involutive automorphism of $\mathbf{g}$. In fact $\theta$ is a Cartan involution of $\mathbf{g}$. The ${\Bbb{R}}$-linear restriction of $\theta$ on the image of $\mathbf{g}_{0}$ in $\mathbf{g}$ induces a Cartan involution on $\mathbf{g}_{0}$ which we will again denote by $\theta$. After the introduction of the Cartan involution $\theta$ we can easily define the roots in $\Delta_{nc}^{+}$. For each element $\alpha\in \mathbf{h}_{0}^{\ast}$, the dual space of the Cartan subalgebra $\mathbf{h}_{0}$ of $\mathbf{g}_{0}$, we can define the element $\alpha^{\theta}\in \mathbf{h}_{0}^{\ast}$ such that $\alpha^{\theta}(H)=\alpha(\theta(H))$, $\,\forall \,H\in \mathbf{h}_{0}$. If $\alpha\in \Delta$ then $\alpha^{\theta}\in \Delta$ as well. Thus we have defined \begin{equation}\label{delta} \Delta_{nc}^{+}=\{\alpha\mid \alpha\in\Delta^{+},\,\alpha\neq\alpha^{\theta}\}. \end{equation} The scalar Lagrangian is defined in terms of the internal metric $\mathcal{M=}\nu ^{\#}\nu $ where we have introduced the generalized transpose $\#$, which is over the Lie group $G$ such that $(exp(g))^{\#}=exp(g^{\#})$ $\forall g\in \mathbf{g}_{0}$. It is induced by the Cartan involution $\theta$ over the Lie algebra $\mathbf{g}_{0}$ of $G$ ($g^{\#}=-\theta(g)$) \cite{julia1,ker1,nej2}. Thus in terms of the internal metric $\mathcal{M}$ the globally $G$-invariant and the locally $K$-invariant scalar Lagrangian \cite{julia1,julia2} is \begin{equation}\label{scalag} \mathcal{L}_{scalar}=\frac{1}{4}tr(d\mathcal{M}^{-1}\wedge \ast d\mathcal{M}). \end{equation} The $G/K$ symmetric space sigma model is studied in detail in \cite{nej2}.
Thus referring to \cite{nej2} we can calculate the Cartan form $\mathcal{G}_{0}=d\nu \nu ^{-1}$ generated by the map \eqref{nu} as \begin{equation}\label{g0} \mathcal{G}_{0}=\frac{1}{2}d\phi ^{i}H_{i}+\overset{\rightharpoonup }{ \mathbf{E}^{\prime }}\:\mathbf{\Omega }\:\overset{\rightharpoonup }{d\chi }. \end{equation} We have used that $[H_{i},E_{\alpha}]=\alpha_{i}E_{\alpha}$. The row vector $\overset{\rightharpoonup }{ \mathbf{E}^{\prime }}$ has the components $(\overset{\rightharpoonup }{ \mathbf{E}^{\prime }})_{\alpha}=e^{\frac{1}{2}\alpha _{i}\phi ^{i}}E_{\alpha}$. The column vector $\overset{\rightharpoonup }{d\chi }$ is $(d\chi ^{\alpha})$. We have also defined the matrix $\mathbf{\Omega}$ as \begin{equation}\label{omega} \begin{aligned} \mathbf{\Omega}&=\sum\limits_{n=0}^{\infty }\dfrac{\omega ^{n}}{(n+1)!}\\ \\ &=(e^{\omega}-I)\,\omega^{-1} \end{aligned} \end{equation} where $\omega _{\beta }^{\gamma }=\chi ^{\alpha }\,K_{\alpha \beta }^{\gamma }$ with the structure constants $K_{\alpha \beta }^{\gamma }$ defined as $[E_{\alpha },E_{\beta }]=K_{\alpha \beta }^{\gamma }\,E_{\gamma }$. Here both $\mathbf{\Omega}$ and $\omega$ are $n\times n$ matrices where $n$ is the number of roots in $\Delta_{nc}^{+}$ \cite{nej2}. We will consider the coupling of $(m-1)$-form potential fields $\{A^{l}\}$ to the $G/K$ scalar coset where the number of the coupling fields is determined such that they form a fundamental representation of $\mathbf{g}_{0}$. The quadratic terms due to this coupling which must be added to the scalar Lagrangian \eqref{scalag} are the combinations of the internal metric $\mathcal{M}$ and the field strengths $F^{l}=dA^{l}$ \begin{equation}\label{lagm} \begin{aligned} \mathcal{L}_{m}&=-\frac{1}{2}\mathcal{M}_{kl} F^{k}\wedge\ast F^{l}\\ \\ &=-\frac{1}{2}F\wedge\mathcal{M} \ast F. \end{aligned} \end{equation} As is clear from the above, $\mathcal{M}$ and $\nu$ are in an appropriate representation (i.e.
fundamental representation of $\mathbf{g}_{0}$) which is compatible with the number of the coupling fields. Thus the total Lagrangian becomes \begin{equation}\label{lag} \mathcal{L}=\frac{1}{4}tr( d\mathcal{M}^{-1}\wedge \ast d\mathcal{M}) -\frac{1}{2}F\wedge\mathcal{M} \ast F. \end{equation} The Cartan involution $\theta$ induced by the Cartan decomposition \eqref{cartandecomp} is an involutive automorphism of $\mathbf{g}_{0}$; for this reason it has two eigenspaces $\theta^{+}$, $\theta^{-}$ with eigenvalues $\pm 1$. The Cartan involution $\theta$ induces the eigenspace decomposition of the Lie algebra $\mathbf{g}_{0}$ as \begin{equation}\label{eigen} \mathbf{g}_{0}=\theta^{+}\oplus \theta^{-}. \end{equation} The elements of $\theta^{+}$ are called compact while the elements of $\theta^{-}$ are called non-compact. If the subgroup of $G$ generated by the compact generators is an orthogonal group then in the fundamental representation the generators can be chosen such that $\mathbf{g}^{\#}=\mathbf{g}^{T}$. Therefore $\#$ coincides with the ordinary matrix transpose and $\mathcal{M}$ becomes a symmetric matrix in the representation we choose. We will assume this case in our further analysis, bearing in mind that for the general case higher dimensional representations are possible in which we can still take $\mathbf{g}^{\#}=\mathbf{g}^{T}$ \cite{ker1}. By following the analysis of \cite{nej2,ker2,ker1} and by using \eqref{g0} we can derive the field equations for the coupling potentials $\{A^{k}\}$, the axions $\{\chi^{m}\}$ and the dilatons $\{\phi^{i}\}$ of the Lagrangian \eqref{lag}.
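Before proceeding, we note that the resummation in \eqref{omega} can be verified numerically: truncating the series $\sum_{n\geq 0}\omega^{n}/(n+1)!$ reproduces the closed form $(e^{\omega}-I)\,\omega^{-1}$ whenever $\omega$ is invertible. A minimal sketch (the matrix below is arbitrary test data rather than an actual structure-constant matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.normal(size=(4, 4))  # generic (numerically invertible) stand-in for omega

def expm(A, terms=60):
    # Matrix exponential via its power series (adequate for small matrices)
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Series form: Omega = sum_{n>=0} omega^n / (n+1)!
# The n-th term equals the previous one times omega/(n+1).
Omega_series, term = np.eye(4), np.eye(4)
for n in range(1, 60):
    term = term @ omega / (n + 1)
    Omega_series = Omega_series + term

# Closed form: (e^omega - I) omega^{-1}
Omega_closed = (expm(omega) - np.eye(4)) @ np.linalg.inv(omega)

assert np.allclose(Omega_series, Omega_closed)
```

Since $\mathbf{\Omega}$ is an analytic function of $\omega$, the two expressions commute with $\omega$ and either ordering of the closed form is equivalent.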
Thus the corresponding field equations are \begin{equation}\label{fielde} \begin{aligned} d(\mathcal{M}_{kl}\ast F^{l})&=0,\\ \\ d(e^{\frac{1}{2}\gamma _{i}\phi ^{i}}\ast U^{\gamma })&=-\frac{1}{2}\gamma _{j}e^{\frac{1}{2}\gamma _{i}\phi ^{i}}d\phi ^{j}\wedge \ast U^{\gamma }\\ \\ &+\sum\limits_{\alpha -\beta =-\gamma }e^{\frac{1}{2} \alpha _{i}\phi ^{i}}e^{\frac{1}{2}\beta _{i}\phi ^{i}}N_{\alpha ,-\beta }U^{\alpha }\wedge \ast U^{\beta },\\ \\ d(\ast d\phi ^{i})&=\frac{1}{2}\sum\limits_{\alpha\in\Delta_{nc}^{+}}^{}\alpha _{i}e^{\frac{1}{2}\alpha _{i}\phi ^{i}}U^{\alpha }\wedge e^{\frac{1}{2}\alpha _{i}\phi ^{i}}\ast U^{\alpha }\\ \\ &+(-1)^{D+1}\frac{1}{2}((H_{i})_{nl}\nu_{m}^{n}\nu_{j}^{l})F^{j}\wedge\ast F^{m} \end{aligned} \end{equation} where $i,j=1,...,r$ and $\alpha,\beta,\gamma\in\Delta_{nc}^{+}$. The roots in $\Delta_{nc}^{+}$ and their corresponding generators $\{E_{m}\}$ are assumed to be enumerated. We have also defined the vector $U^{\alpha}=\mathbf{\Omega}_{\beta}^{\alpha}\,d\chi^{\beta}$. Furthermore the matrices $\{(H_{i})_{nl}\}$ are the representatives of the Cartan generators $\{H_{i}\}$ under the representation chosen. We use the notation $[E_{\alpha },E_{\beta }]=N_{\alpha,\beta }E_{\alpha+\beta}$. We should remark that in the dilaton equation in \eqref{fielde} the contribution from the coupling fields $\{A^{k}\}$ is expressed in terms of the original fields rather than their weight expansions, unlike the expressions in \cite{ker2,ker1}. For notational convenience we raise or lower the indices of the matrices by using a Euclidean metric. \section{Dualisation and the First-Order Formulation} In this section we will adopt the method of \cite{julia1,julia2} to establish a coset formulation and to derive the first-order field equations for the Lagrangian \eqref{lag}. Basically we will improve the analysis presented for the non-split scalar coset in \cite{nej2} to the case when there is matter field coupling to the non-split scalar coset.
We will first define a Lie superalgebra which will realize the doubled coset element. We assign the generators $\{H_{i},E_{m},V_{j}\}$ to the fields $\{\phi^{i},\chi^{m},A^{j}\}$ respectively. We assume that $\{H_{i},E_{m}\}$ are even generators within the superalgebra structure since the corresponding fields are scalars and thus have even rank. The generators $\{V_{j}\}$ are even or odd according to whether the rank of the coupling fields $\{A^{j}\}$, namely $(m-1)$, is even or odd. The next step is to introduce the dual fields $\{\widetilde{\phi}^{i},\widetilde{\chi}^{m},\widetilde{A}^{j}\}$ which would arise as a result of the local integration of the field equations \eqref{fielde}. The first two are $(D-2)$-forms and the last ones are $(D-m-1)$-forms. We also assign the dual generators $\{\widetilde{H}_{i},\widetilde{E}_{m},\widetilde{V}_{j}\}$ to these dual fields respectively. The dual generators are even or odd depending on $D$ and $m$, in other words according to the rank of the dual fields they are assigned to. We will derive the structure of the Lie superalgebra generated by the original and the dual generators we have introduced so that it will enable a coset formulation for the Lagrangian \eqref{lag}. Similar to the non-linear coset structure of the scalars presented in the last section we can define the map \begin{equation}\label{doublenu} \nu^{\prime}=e^{\frac{1}{2}\phi^{i}H_{i}}e^{\chi^{m}E_{m}}e^{A^{j}V_{j}} e^{\widetilde{A}^{j}\widetilde{V}_{j }}e^{\widetilde{\chi}^{m}\widetilde{E}_{m}} e^{\frac{1}{2}\widetilde{\phi}^{i}\widetilde{H}_{i}}, \end{equation} which can be considered as the parametrization of a coset via the differential graded algebra \cite{julia2} generated by the differential forms on the $D$-dimensional spacetime and the Lie superalgebra of the original and the dual generators we propose.
We are not intending to identify the group theoretical structure of this coset; rather we will only aim to construct the Lie superalgebra of the original and the dual generators which function in the parametrization \eqref{doublenu}. If one knows the structure constants of this algebra one can calculate the Cartan form $\mathcal{G}^{\prime}=d\nu^{\prime}\nu^{\prime-1}$ which is induced by the map \eqref{doublenu}. Due to its definition the Cartan form $\mathcal{G}^{\prime}$ obeys the Cartan-Maurer equation \begin{equation}\label{cm} d\mathcal{G}^{\prime}-\mathcal{G}^{\prime}\wedge\mathcal{G}^{\prime}=0. \end{equation} By following the outline of \cite{julia1,julia2} the structure constants of the Lie superalgebra will be chosen so that when we calculate the Cartan form $\mathcal{G}^{\prime}$ it will lead us to the second-order field equations \eqref{fielde} via the identity \eqref{cm} and it will satisfy the twisted self-duality equation $\ast\mathcal{G}^{\prime}=\mathcal{SG}^{\prime}$, where the action of the pseudo-involution $\mathcal{S}$ \cite{julia2} on the generators is taken as \begin{gather}\label{s} \mathcal{S}H_{i}=\widetilde{H}_{i}\quad,\quad\mathcal{S}E_{m}= \widetilde{E}_{m}\quad ,\quad\mathcal{S} \widetilde{E}_{m}=(-1)^{D}E_{m}\quad,\quad\mathcal{S}\widetilde{H}_{i}=(-1)^{D}H_{i} ,\notag\\ \notag\\ \mathcal{S}V_{j}=\widetilde{V}_{j}\quad,\quad\mathcal{S}\widetilde{V}_{j}=(-1)^{m(D-m)+1}V_{j}. \end{gather} We know that the twisted self-duality equation will give us the locally integrated first-order field equations which can be obtained from \eqref{fielde} by extracting an overall exterior derivative operator on both sides of the equations \cite{julia1,julia2,nej2,pope}. This local integration produces auxiliary fields which are the dual fields we introduce.
The dualisation method is nothing but another manifestation of the Lagrange multiplier method: the dual fields correspond to the Lagrange multiplier fields which are introduced to construct the Lagrange multiplier Lagrangian terms for the Bianchi identities of the original field strengths \cite{pope}. We may first calculate the Cartan form $\mathcal{G}^{\prime}=d\nu^{\prime}\nu^{\prime-1}$ from the map \eqref{doublenu} in terms of the unknown structure constants of the Lie superalgebra of the original and the dual generators. We intend to construct an algebraic structure so that the Cartan form satisfies the twisted self-duality equation $\ast\mathcal{G}^{\prime}=\mathcal{SG}^{\prime}$. In a sense the twisted self-duality equation would correspond to the equation of motion of the dualized Lagrangian \cite{julia2}. At this stage we will assume that the Lie superalgebra of the original and the dual generators has a general structure in which the commutator or the anti-commutator of two original generators gives another original generator, an original and a dual generator leads to a dual generator, while two dual generators vanish under the algebra product. When we calculate the structure constants of the Lie superalgebra which generates the correct Cartan form $\mathcal{G}^{\prime}$ leading to the field equations \eqref{fielde} in \eqref{cm}, we will see that they indeed obey such a general scheme. We may use the proposed twisted self-duality property of the dualized Cartan form primarily to write it only in terms of the original fields because, as is clear from \eqref{s}, the pseudo-involution sends the original generators to the dual ones and the dual ones to the originals with a sign factor.
Thus by using the formulas \begin{equation}\label{formul} \begin{aligned} de^{X}e^{-X}&=dX+\frac{1}{2!}[X,dX]+\frac{1}{3!}[X,[X, dX]]+\cdots,\\ \\ e^{X}Ye^{-X}&=Y+[X,Y]+\frac{1}{2!}[X,[X,Y]]+\cdots, \end{aligned} \end{equation} effectively and by applying the twisted self-duality condition $\ast\mathcal{G}^{\prime}=\mathcal{SG}^{\prime}$, the calculation of the Cartan form $\mathcal{G}^{\prime}=d\nu^{\prime}\nu^{\prime-1}$ only in terms of the original fields yields \begin{subequations}\label{firstcartan} \begin{align} {\mathcal{G}}^{\prime}&=\frac{1}{2}d\phi ^{i}H_{i}+\overset{\rightharpoonup}{{\mathbf{E}}^{\prime }}\:{\mathbf{\Omega }}\:\overset{\rightharpoonup}{d\chi }+\overset{\rightharpoonup}{ {\mathbf{V}}}e^{{\mathbf{U}}}e^{{\mathbf{B}}}\overset{\rightharpoonup }{{\mathbf{dA}}}\notag\\ \notag\\ &\quad +\frac{1}{2}(-1)^{D}\ast d\phi^{i}\widetilde{H}_{i}+(-1)^{D}e^{\frac{1}{2}\alpha_{i} \phi^{i}}{\mathbf{\Omega} }_{\beta }^{\alpha }\,\ast d\chi ^{\beta }\widetilde{E}_{\alpha}\notag\\ \notag\\ &\quad +(-1)^{(m(D-m)+1)}\overset{\rightharpoonup }{\widetilde{{\mathbf{V}}}}e^{{\mathbf{U}}}e^{{\mathbf{B}}}\ast\overset{\rightharpoonup }{{\mathbf{dA}}}.\tag{\ref{firstcartan}} \end{align} \end{subequations} We have defined the yet unknown structure constants as \begin{equation}\label{structurecons2} [H_{i},V_{n}]=\theta_{in}^{t}V_{t}\quad,\quad[E_{m},V_{j}]=\beta_{mj}^{l}V_{l}. \end{equation} The matrices ${\mathbf{U}}$ and ${\mathbf{B}}$ in \eqref{firstcartan} are \begin{equation}\label{matrice1} ({\mathbf{U}})_{v}^{n}=\frac{1}{2}\phi^{i}\theta_{iv}^{n}\quad,\quad ({\mathbf{B}})_{n}^{j}=\chi^{m}\beta_{mn}^{j}. \end{equation} We introduce the row vectors $\overset{\rightharpoonup }{{\mathbf{V}}}$ and $\overset{\rightharpoonup }{\widetilde{{\mathbf{V}}}}$ as $(V_{i})$ and $(\widetilde{V}_{j})$ respectively; the column vector $\overset{\rightharpoonup }{{\mathbf{dA}}}$ is $(dA^{i})$. We have also taken \begin{equation}\label{veq} [V_{m},V_{n}\}=0.
\end{equation} In \eqref{firstcartan} we have made use of the results of \cite{nej2} in the calculation of the scalar sector of the Cartan form $\mathcal{G}^{\prime}=d\nu^{\prime}\nu^{\prime-1}$. Now inserting the Cartan form \eqref{firstcartan} (which is written only in terms of the original fields by primarily applying the twisted self-duality condition) in the Cartan-Maurer identity \eqref{cm} should result in the second-order field equations \eqref{fielde} \cite{julia2,nej2}. This main feature of the coset formulation enables us to derive the commutation and the anti-commutation relations of the original generators which are already encoded in \eqref{firstcartan} and the commutators and the anti-commutators of the dual and the mixed (an original and a dual) generators which arise in the calculation of \eqref{cm} within the graded differential algebra structure of the differential forms and the generators. Thus a straightforward calculation of \eqref{cm} by inserting \eqref{firstcartan} and then the comparison of the result with the second-order field equations \eqref{fielde} gives us the desired structure constants of the commutators and the anti-commutators. 
We have \begin{gather}\label{coms} [H_{j},E_{\alpha }]=\alpha _{j}E_{\alpha }\quad ,\quad [E_{\alpha },E_{\beta }]=N_{\alpha ,\beta }E_{\alpha+\beta},\notag\\ \notag\\ [H_{l},V_{i}]=(H_{l})_{i}^{k}V_{k}\quad,\quad[E_{\alpha},V_{i}]=(E_{\alpha})_{i}^{j}V_{j},\notag\\ \notag\\ [H_{j},\widetilde{E}_{\alpha }]=-\alpha _{j}\widetilde{E}_{\alpha }\quad,\quad [E_{\alpha },\widetilde{E}_{\alpha }]=\frac{1}{4}\,{\sum}_{j=1}^{r}\alpha _{j}\widetilde{H}_{j},\notag\\ \notag\\ [E_{\alpha },\widetilde{E}_{\beta }]=N_{\alpha ,-\beta }\widetilde{E}_{\gamma },\quad\quad\alpha -\beta =-\gamma,\;\alpha \neq \beta,\notag\\ \notag\\ [H_{i},\widetilde{V}_{k}]=-(H_{i}^{T})_{k}^{l}\widetilde{V}_{l}\quad,\quad [E_{\alpha},\widetilde{V}_{k}]=-(E_{\alpha}^{T})_{k}^{l}\widetilde{V}_{l},\notag\\ \notag\\ [V_{l},\widetilde{V}_{k}\}=(-1)^{D-m}\,\frac{1}{4}\,{\sum}_{i}\,(H_{i})_{lk}\,\widetilde{H}_{i}, \end{gather} where the indices of the Cartan generators and their duals are $i,j,l=1,...,r$ and $\alpha,\beta,\gamma\in\Delta_{nc}^{+}$. The matrices ($(E_{\alpha})_{i}^{j}$, $(H_{l})_{i}^{j}$) above are the representatives of the corresponding generators ($E_{\alpha}$, $H_{l}$). Also ($(E_{\alpha}^{T})_{i}^{j}$, $(H_{l}^{T})_{i}^{j}$) are the matrix transposes of ($(E_{\alpha})_{i}^{j}$, $(H_{l})_{i}^{j}$). We should state once more that the dimension of the matrices above, namely the dimension of the fundamental representation of $\mathbf{g}_{0}$, is equal to the number of the coupling fields and their corresponding generators since this is how we have defined and constructed the coupling of the matter fields $A^{k}$ to the scalar coset $G/K$ in the Lagrangian \eqref{lag}. The remaining commutators or anti-commutators of the original and the dual generators which are not listed in \eqref{coms} indeed vanish.
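The series formulas \eqref{formul} used throughout these manipulations can themselves be checked numerically, by truncating the adjoint expansion $e^{X}Ye^{-X}=\sum_{n\geq 0}\frac{1}{n!}\,\mathrm{ad}_{X}^{n}(Y)$ and comparing with the directly evaluated conjugation. A minimal sketch with generic matrices (the matrices and truncation orders are arbitrary test choices, not part of the construction above):

```python
import numpy as np

rng = np.random.default_rng(1)
X = 0.3 * rng.normal(size=(3, 3))  # small norm for fast series convergence
Y = rng.normal(size=(3, 3))

def expm(A, terms=60):
    # Matrix exponential via its power series
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Left side: e^X Y e^{-X}
lhs = expm(X) @ Y @ expm(-X)

# Right side: sum_n ad_X^n(Y) / n!  with ad_X(Y) = [X, Y]
rhs, ad_term, fact = np.zeros((3, 3)), Y.copy(), 1.0
for n in range(40):
    rhs = rhs + ad_term / fact
    ad_term = X @ ad_term - ad_term @ X  # next power of ad_X
    fact *= (n + 1)

assert np.allclose(lhs, rhs)
```

The same truncation strategy applies to the $de^{X}e^{-X}$ expansion, which is what is being resummed into the matrix $e^{\mathbf{U}}e^{\mathbf{B}}$ and $e^{\mathbf{\Gamma}}e^{\mathbf{\Lambda}}$ factors appearing in the Cartan forms.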
We observe that, as we have assumed before, the Lie superalgebra we have constructed in \eqref{coms} has the general form \begin{gather}\label{comsorigin} [O,\widetilde{D}\}\subset\widetilde{D}\quad,\quad [O,O\}\subset O,\notag\\ \\ [\widetilde{D},\widetilde{D}\}=0,\notag \end{gather} where $O$ is the set of the original and $\widetilde{D}$ is the set of the dual generators. Now that we have determined the structure constants of the algebra generated by the original and the dual generators we can explicitly calculate the Cartan form $\mathcal{G}^{\prime}=d\nu^{\prime}\nu^{\prime-1}$ in terms of both the original and the dual fields. By using the identities in \eqref{formul} together with the structure constants given in \eqref{coms} we have \begin{equation}\label{expg} \begin{aligned} \mathcal{G}^{\prime}&=\frac{1}{2}d\phi ^{i}H_{i}+\overset{\rightharpoonup }{ \mathbf{E}^{\prime }}\:\mathbf{\Omega }\:\overset{\rightharpoonup }{d\chi}+\overset{\rightharpoonup }{\widetilde{\mathbf{T}}}e^{\mathbf{\Gamma}}e^{\mathbf{\Lambda}}\overset{\rightharpoonup }{\widetilde{\mathbf{S}}}\\ \\ &+\overset{\rightharpoonup }{\mathbf{V}}\,\nu\,\overset{\rightharpoonup }{\mathbf{dA}}+\overset{\rightharpoonup }{\widetilde{\mathbf{V}}}\,(\nu^{T})^{-1}\,\overset{\rightharpoonup }{\mathbf{d\widetilde{A}}}\\ \\ &+(-1)^{m(D-m)}\,\overset{r}{\underset{i=1}{\sum}}\,\frac{1}{4}\,(H_{i})_{kl}A^{k}\wedge d\widetilde{A}^{l}\widetilde{H}_{i}. \end{aligned} \end{equation} In addition to the definitions given in Section two we have introduced the row vectors $\overset{\rightharpoonup }{\mathbf{V}}$ and $\overset{\rightharpoonup }{\widetilde{\mathbf{V}}}$ as $(V_{k})$ and $(\widetilde{V}_{l})$ respectively. The column vectors $\overset{\rightharpoonup }{\mathbf{dA}}$ and $\overset{\rightharpoonup }{\mathbf{d\widetilde{A}}}$ are $(F^{k})$ and $(d\widetilde{A}^{l})$.
Besides we have the row vector of the duals of the solvable Lie algebra generators of $G$ as $\widetilde{\mathbf{T}}_{i}=\widetilde{H}_{i}$ for $i=1,...,r$ and $\widetilde{\mathbf{T}}_{r+\alpha}=\widetilde{E}_{\alpha}$ for $\alpha\in\Delta_{nc}^{+}$. The column vector $\overset{\rightharpoonup }{\widetilde{\mathbf{S}}}$ is defined as $\widetilde{\mathbf{S}}^{i}=\frac{1}{2}d\widetilde{\phi}^{i}$ for $i=1,...,r$ and $\widetilde{\mathbf{S}}^{r+\alpha}=d\widetilde{\chi}^{\alpha}$ for $\alpha\in\Delta_{nc}^{+}$. We have introduced the matrices $\mathbf{\Gamma}$ and $\mathbf{\Lambda}$ as \begin{equation}\label{matrice2} \mathbf{\Gamma }_{n}^{k}=\frac{1}{2}\phi ^{i}\,\widetilde{g}_{in}^{k}\quad,\quad \mathbf{\Lambda }_{n}^{k}=\chi ^{m}\widetilde{f}_{mn}^{k}. \end{equation} Here the structure constants $\{\widetilde{g}_{in}^{k}\}$ and $\{\widetilde{f}_{mn}^{k}\}$ are defined by \begin{equation}\label{gfstruc} [E_{\alpha },\widetilde{T}_{m}]=\widetilde{f}_{\alpha m}^{n}\widetilde{T} _{n}\quad,\quad[H_{i},\widetilde{T}_{m}]=\widetilde{g}_{im}^{n} \widetilde{T}_{n}. \end{equation} They can directly be read from \eqref{coms}. If one inserts \eqref{expg} in the Cartan-Maurer equation \eqref{cm} one would obtain the second-order field equations and the Bianchi identities of the original fields in terms of the original and dual fields, which are the Lagrange multipliers \cite{julia2,pope}. One can use the twisted self-duality equation which \eqref{expg} obeys and which gives the first-order equations to eliminate the dual fields and then write the second-order field equations solely in terms of the original fields, namely one would reach \eqref{fielde}. This is analogous to what we have done in the derivation of the algebra structure.
We have obtained the second-order field equations in terms of the structure constants of the algebra by inserting \eqref{firstcartan} in \eqref{cm} and then we have compared the result with \eqref{fielde} to read the structure constants. The second-order field equations in terms of the structure constants that are mentioned above do not contain the dual, Lagrange multiplier fields since we have primarily used the twisted self-duality condition that relates the dual fields to the original ones and we have written the Cartan form $\mathcal{G}^{\prime}$ only in terms of the original fields in \eqref{firstcartan}. Since we have obtained the explicit form of the Cartan form $\mathcal{G}^{\prime}$ in \eqref{expg} we can use the twisted self-duality equation $\ast\mathcal{G}^{\prime}=\mathcal{SG}^{\prime}$ to find the first-order field equations of the Lagrangian \eqref{lag}. The validity of the twisted self-duality equation is justified by the fact that we have primarily assumed that $\mathcal{G}^{\prime}$ obeys it when we derived the structure constants, which are chosen such that they give the correct Cartan form $\mathcal{G}^{\prime}$ leading to the second-order field equations \eqref{fielde} in \eqref{cm}. Therefore directly from \eqref{expg} the twisted self-duality equation $\ast\mathcal{G}^{\prime}=\mathcal{SG}^{\prime}$ yields \begin{gather} \nu_{l}^{k}\ast dA^{l}=(-1)^{m(D-m)+1}((\nu^{T})^{-1})_{l}^{k}d\widetilde{A}^{l},\notag\\ \notag\\ e^{\frac{1}{2}\alpha_{i}\phi^{i}}(\mathbf{\Omega})_{l}^{\alpha+r}\ast d\chi^{l}=(-1)^{D}(e^{\mathbf{\Gamma}} e^{\mathbf{\Lambda}})_{j}^{\alpha+r}\,\widetilde{\mathbf{S}}^{j},\notag\\ \notag\\ \frac{1}{2}\ast d\phi^{i}=(-1)^{D}(e^{\mathbf{\Gamma}}e^{\mathbf{\Lambda}})_{j}^{i}\widetilde{\mathbf{S}}^{j}+ (-1)^{m(D-m)+D}\,\frac{1}{4}\,(H_{i})_{kl}A^{k}\wedge d\widetilde{A}^{l}. \end{gather} The exterior differentiation of (3.14) gives the second-order field equations \eqref{fielde} indeed.
We should remark once more that the roots in $\Delta_{nc}^{+}$ and the corresponding generators $\{E_{\alpha}\}$ are enumerated. We can also express equation (3.14) in a more compact form as \begin{gather} \mathcal{M}\ast \overset{\rightharpoonup }{\mathbf{dA}}=(-1)^{m(D-m)+1}\,\overset{\rightharpoonup }{\mathbf{d\widetilde{A}}},\notag\\ \notag\\ \ast \overset{\rightharpoonup }{\mathbf{\Psi }}=\overset{\rightharpoonup }{\mathbf{P}}+ (-1)^{D}\,e^{\mathbf{\Gamma }}e^{\mathbf{\Lambda }}\overset{\rightharpoonup }{\widetilde{\mathbf{S}}} \end{gather} where we define the column vector $\overset{\rightharpoonup }{\mathbf{\Psi }}$ as \begin{equation}\label{psi} \begin{aligned} \mathbf{\Psi} ^{i}&=\frac{1}{2}d\phi ^{i}\quad\quad\text{for} \quad\quad i=1,...,r,\\ \\ \mathbf{\Psi} ^{\alpha +r}&=e^{\frac{1}{2}\alpha _{i}\phi ^{i}}\mathbf{\Omega }_{l}^{\alpha}d\chi ^{l}\quad\quad\text{for} \quad\quad\alpha\in \Delta_{nc}^{+}. \end{aligned} \end{equation} Also the vector $\overset{\rightharpoonup }{\mathbf{P}}$ is \begin{equation}\label{p} \begin{aligned} \mathbf{P}^{i}&=(-1)^{m(D-m)+D}\,\frac{1}{4}\,(H_{i})_{kl}\,A^{k}\wedge d\widetilde{A}^{l}\quad\quad\text{for} \quad\quad i=1,...,r,\\ \\ \mathbf{P}^{\alpha+r}&=0\quad\quad\text{for}\quad\quad \alpha\in \Delta_{nc}^{+}. \end{aligned} \end{equation} \section{Conclusion} After a concise discussion of the symmetric space sigma model with its algebraic background we have defined the coupling of $m$-form field strengths to the scalar Lagrangian in Section two. We have also obtained the field equations following the outline of \cite{ker2,ker1}. In Section three we have adopted the dualisation method of \cite{julia1,julia2} to establish a coset formulation of the theory and to explore the Lie superalgebra which leads to the first-order equations of motion as a twisted self-duality condition.
The validity of the twisted self-duality property of the Cartan form is implicitly justified by our construction of the algebra since, besides using the second-order field equations and the Cartan-Maurer equation, we have also assumed that the Cartan form obeys the twisted self-duality equation in expressing it only in terms of the original fields during the derivation of the structure constants of the algebra. As a result we have constructed a coset element by defining a Lie superalgebra structure and we have shown that both the first- and the second-order field equations can be directly obtained from the Cartan form of the coset element. This work can be considered as an extension of the results which are obtained in \cite{nej2}. The dualisation of the $G/K$ symmetric space sigma model is performed in \cite{nej2} when the global symmetry group is a non-split semi-simple real form. Here we have studied the dualisation of the non-split scalar coset when it is coupled to other matter fields. We have constructed a framework in which the dualisation analysis of \cite{nej2} is improved to include the coupling matter fields. As a result we have obtained a general scheme which can be effectively used in the coset realizations of the whole set of matter coupled supergravities. The formulation given in this work assumes a general non-split scalar coset $G/K$ in $D>2$ spacetime dimensions. The coupling potentials are assumed to be $(m-1)$-forms. As is clear from the construction, the results are general and they are applicable to a wide class of supergravity theories which contain similar couplings. In \cite{matter} the bosonic sector of the ten dimensional simple supergravity which is coupled to $N$ Abelian gauge multiplets is compactified on the Euclidean tori $T^{10-D}$ and the resulting theories in various dimensions have scalar cosets with couplings based on global symmetry groups which are non-compact real forms of some semi-simple Lie groups.
Therefore the results presented here are applicable to them. One can improve the dualized coset formulation presented here by including gravity and the Chern-Simons terms as well. This would extend the algebra structure obtained here. The group theoretical aspects of the coset formulation and the symmetry properties of the first-order equations, which are not considered in this work, also need to be examined. One can also study the Kac-Moody symmetry scheme \cite{west1,west2,west3} of the matter coupled scalar cosets.
\section{Introduction} Quantum field theories (QFTs) in de Sitter space are of interest for many reasons. Some of these include the increasingly precise measurements of the cosmic microwave background (CMB) \cite{Komatsu:2008hk} which have prompted many to study predictions of the CMB spectrum beyond the Born approximation (see, e.g. \cite{Weinberg:2005vy,Weinberg:2006ac,Seery:2005wm,Seery:2007we, Cheung:2007st,Senatore:2009cf,Seery:2010kh,Giddings:2010nc,Giddings:2011zd}). Others include a growing interest in understanding local measurements in eternal inflation (e.g., \cite{Bousso:2006aa, Guth:2007aa,Hartle:2010dq}), as well as a renewed interest in approaches to de Sitter quantum gravity \cite{Seery:2006tq,Anninos:2009yc,Anninos:2010gh,Anninos:2011vd} inspired by dS/CFT \cite{Witten:2001kn,Strominger:2001pn}. In addition, the fact that de Sitter (dS) is a maximally symmetric (and thus relatively simple) example of a spacetime where horizons limit the observations of freely-falling observers makes QFTs on dS of interest in their own right. One of the chief concerns with QFTs in de Sitter has been their infrared stability (see e.g. \cite{ Nachtmann:1968aa,Myhrvold:1983hx,Hu:1985uy,Hu:1986cv,Boyanovsky:1996ab,Bros:2006gs, Polyakov:2007mm,Akhmedov:2008pu,Higuchi:2008tn,Higuchi:2009zza,Higuchi:2009ew,Akhmedov:2009ta,Polyakov:2009nq, Burgess:2010dd,Marolf:2010zp,Rajaraman:2010zx,Youssef:2010dw,Boyanovsky:2011xn,Hollands:2010pr, Marolf:2010nz,Krotov:2010ma}). Two recent papers \cite{Hollands:2010pr,Marolf:2010nz} have established that at least massive scalar field quantum field theories are infra-red stable (at all orders of perturbation theory) in a particular sense. In order to state their results precisely, we first introduce the (interacting) Hartle-Hawking state $\ket{\rm HH}$ \cite{Hartle:1976tp} defined by analytically continuing all correlation functions from Euclidean signature. 
Next, consider a normalized state constructed by the application of smeared field operators on $\ket{\rm HH}$: \eq{ \label{eq:genericPsi} \ket{\Psi} = \int_{y_1}\cdots\int_{y_n} f(y_1,\dots,y_n) \phi(y_1)\cdots\phi(y_n) \ket{\rm HH} , } with $f(y_1,\dots,y_n)$ a smooth smearing function of compact support. By the Reeh-Schlieder theorem for curved spacetimes \cite{strohmaier:5514} the set of states of the form (\ref{eq:genericPsi}) is dense in the Hilbert space. The works \cite{Hollands:2010pr,Marolf:2010nz} show that, at all orders of perturbation theory, the correlation functions of $\ket{\Psi}$ reduce to those of the Hartle-Hawking state when evaluated in the asymptotic future/past of de Sitter space: \eq{\label{eq:nh} \C{\phi(x_1)\cdots\phi(x_n)}_\Psi \to \C{\phi(x_1)\cdots\phi(x_n)}_{\rm HH} . } In particular, de Sitter invariance of $|{\rm HH} \rangle $ means that all one-point functions approach constants (whether the associated operators are elementary or composite). Ref.\ \cite{Hollands:2010pr} calls the result (\ref{eq:nh}) a `quantum cosmic no hair theorem,' while in the language of \cite{Marolf:2010zp} one says that $|{\rm HH} \rangle$ is an attractor state for local correlators. Although stated as a result concerning QFTs in exact de Sitter space, the no-hair theorem just described may also be usefully applied to more interesting scenarios. Since it relies only on the asymptotic behavior, it should be valid in the asymptotic region of any asymptotically-de Sitter spacetime, or within the causal patch of an observer who finds herself in a locally de Sitter spacetime. One expects physically relevant states to take the form (\ref{eq:genericPsi}) within the de Sitter region so that the theorem applies. Our main purpose here is to provide evidence that this is indeed the case by studying (at the one-loop level) a particular scenario recently discussed by Krotov and Polyakov \cite{Krotov:2010ma}.
In this scenario, the spacetime is again exact de Sitter but the theory is time dependent. The particular model involves a cubic interaction $g(x) \phi^3(x)$ with time-dependent coupling $g(x)$, taken to vanish in the far past and to approach some constant $g_f$ in the far future. The state of the system is taken to be the free Bunch-Davies vacuum in the region where $g(x) =0$ and we take the coupling to turn on at some fixed time. We explicitly compute the $O(g^2)$ (one-loop) corrections to the 2-point function in this model and verify that they approach those of the de Sitter Hartle-Hawking state in the far future. We work entirely in Lorentz signature, in part to counter concerns \cite{Polyakov:2007mm,Polyakov:2009nq,Krotov:2010ma} about the Euclidean techniques used in \cite{Hollands:2010pr,Marolf:2010nz}. In addition, we note that the current work provides an explicit example of the renormalization techniques used in \cite{Marolf:2010nz} which combine Pauli-Villars regularization with Mellin-Barnes representations. Our techniques also apply to other examples where the coupling does not depend on time but in which the spacetime is de Sitter only after some finite time. Although \cite{Krotov:2010ma} concluded that de Sitter QFT is ``unstable,'' it is useful to point out that our technical results are completely consistent with those of \cite{Krotov:2010ma}. As stated below their equation (17), for fixed $g(x)$ their approximations are not valid for correlators computed at late times. Instead, \cite{Krotov:2010ma} focused on correlators defined at some {\it fixed} time in the limit where $g(x)$ turns on at very early times. The divergence they find is in fact to be expected from the results of \cite{Hollands:2010pr,Marolf:2010nz}, which suggest that correlators in well-behaved states approach those of $|{\rm HH} \rangle$ in the far past.
Using the free vacuum when $g(x)=0$ and taking $g(x)$ to turn on very early ensures that correlators at such early times differ significantly from those in $|{\rm HH} \rangle$. As a result, one already expects from \cite{Hollands:2010pr,Marolf:2010nz} that the state is not well-behaved under the limit taken in \cite{Krotov:2010ma}. We emphasize that the present work considers only perturbative effects in massive theories. Non-perturbative effects can yield qualitatively different behavior (though see \cite{Shlaer:2009aa}), and including massless fields (whether scalar or tensor\footnote{The 2-point functions of Maxwell fields in dS are known to behave like those of massive scalars \cite{Allen:1985wd,Tsamis:2006gj} and so provide no new subtleties.}) would raise new issues. As a result, our analysis does not directly address the much-discussed possibility of novel infrared effects in de Sitter quantum gravity (e.g., \cite{Tsamis:1992sx,Tsamis:1994ca,Mottola:2006ew,Antoniadis:2006wq, Garriga:2007zk,Tsamis:2007is,Urakawa:2010it,Tsamis:2011uq,Giddings:2010nc,Giddings:2011zd}). Nevertheless, a detailed study of theories on a fixed de Sitter background helps to disambiguate those effects which are truly quantum gravitational from those that are generic for quantum fields in de Sitter. We begin in \S\ref{sec:prelims} with a brief review of linear quantum field theory on de Sitter. We then analyze time-dependent couplings in global de Sitter in \S\ref{sec:time}. Section \ref{sec:phi2} studies a simple theory with a time-dependent $\phi^2(x)$ interaction (i.e., a time-dependent mass perturbation). We study the more complicated model of a time-dependent $\phi^3(x)$ interaction in \S\ref{sec:phi3}, relying at times upon the results derived in the $\phi^2(x)$ model. Some further calculational details are presented in appendices. We provide a concluding discussion in \S\ref{sec:disc}.
\section{Preliminaries} \label{sec:prelims} We begin by reviewing some basic features of {\it free} quantum fields in de Sitter space. Recall that global de Sitter may be described by the metric \eq{ \label{eq:metric} ds^2 = \ell^2\left[ - \frac{1}{1+\eta^2} d\eta^2 + (1+\eta^2) d\Omega_{D-1}^2 \right] , } where $\ell$ is the de Sitter radius, $\eta$ is a time coordinate with range $-\infty < \eta < + \infty$, and $d\Omega^2_{D-1}$ is the metric of the unit sphere $S^{D-1}$. The coordinate $\eta$ is related to the more familiar global de Sitter coordinate $t$ (for which $g_{tt}=-1$) via $\eta = \sinh(t/\ell)$. In these coordinates the volume element is $\sqrt{-g(x)} d^Dx = \ell^D (1+\eta^2)^{(D-2)/2} d\eta \,d\Omega_{D-1}({\vec{x}})$ with ${\vec{x}}$ a unit vector in $\mathbb{R}^{D}$ parameterizing $S^{D-1}$. The Euclidean section of de Sitter is the Euclidean sphere $S^D$ with radius $\ell$. Free massive scalar field theories on de Sitter have been well understood for decades (e.g., \cite{Allen:1985ux,Birrell:1982ix}). Such theories may be described by the classical Lagrangian density \eq{ \mathcal{L} = \frac{1}{2} \nabla_\mu \phi \nabla^\mu \phi + \frac{M^2}{2}\phi^2 , } from which it follows that the classical equation of motion is the Klein-Gordon equation; correspondingly, in the quantum theory the Schwinger-Dyson equations at non-coincident points are \eq{ \label{eq:FreeSDEquations1} (\Box_i - M^2) \CP{\phi(x_1)\cdots\phi(x_i)\cdots\phi(x_N)} = 0 . } The free theory admits a unique Hadamard de Sitter-invariant state known as the Bunch-Davies, Euclidean, or (free) Hartle-Hawking state, which we denote by $\ket{0}$. The latter two names come from the fact that the correlation functions of this state may be defined by the analytic continuation from the Euclidean section. 
We denote the time-ordered, anti-time-ordered, and Wightman 2-point functions of this state by \eqn{ \label{eq:Greens} G_\sigma(x_1,x_2) &:=& \C{ T \phi_\sigma(x_1)\phi_\sigma(x_2) }_0 , \nonumber \\ G^*_\sigma(x_1,x_2) &:=& \C{ \overline{T} \phi_\sigma(x_1)\phi_\sigma(x_2) }_0 , \nonumber \\ W_\sigma(x_1,x_2) &:=& \C{ \phi_\sigma(x_1)\phi_\sigma(x_2)}_0 . } In these expressions we have introduced a label $\sigma$ to keep track of the bare mass $M$. We define $\sigma$ by \eq{ \sigma = - \left(\frac{D-1}{2}\right) + \left[\frac{(D-1)^2}{4} - M^2\ell^2 \right]^{1/2} , } from which it follows that $M^2 \ell^2 = - \sigma(\sigma + D-1)$. At times it will be convenient to expand the scalar Green's functions (\ref{eq:Greens}) in Klein-Gordon modes $\phi_{\sigma {\vec{L}}}(x)$. These modes are orthonormal with respect to the Klein-Gordon inner product: \eq{ \label{eq:KGnorm} \left( \phi_{\sigma {\vec{L}}},\,\phi_{\sigma {\vec{M}}} \right)_{\rm KG} := - i \ell^{D-1} (1+\eta^2)^{(D-1)/2} \int d\Omega_{D-1}({\vec{x}}) \,n^\mu \left[ \phi_{\sigma {\vec{L}}}(\eta,{\vec{x}}) \overleftrightarrow{\nabla}_\mu \phi^*_{\sigma {\vec{M}}}(\eta,{\vec{x}}) \right] = \delta_{{\vec{L}} {\vec{M}}} . } Here $n^\mu$ is the future-directed normal vector ($n^\mu = (1+\eta^2)^{1/2} \delta^\mu_\eta/\ell$) to an $\eta={\rm const.}$ surface and $A \overleftrightarrow{\nabla}_\mu B := A \nabla_\mu B - B \nabla_\mu A$. 
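For completeness, we note that the quadratic relation between $\sigma$ and the mass follows immediately from the definition, since $\sigma$ and $\sigma+D-1$ differ only in the sign of $(D-1)/2$: \eq{ \sigma\left(\sigma+D-1\right) = \left( \left[\frac{(D-1)^2}{4} - M^2\ell^2\right]^{1/2} - \frac{D-1}{2} \right) \left( \left[\frac{(D-1)^2}{4} - M^2\ell^2\right]^{1/2} + \frac{D-1}{2} \right) = - M^2\ell^2 . } In particular, $\sigma$ is real for $M^2\ell^2 \le (D-1)^2/4$ and complex with ${\rm Re}\,\sigma = -(D-1)/2$ for heavier fields. 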
The Klein-Gordon modes may be written explicitly as \eq{ \label{eq:KGmodes} \phi_{\sigma {\vec{L}}}(x) = \ell^{(2-D)/2} u_{\sigma L}(\eta) Y_{{\vec{L}}}({\vec{x}}) , } with $Y_{\vec{L}}({\vec{x}})$ spherical harmonics on $S^{D-1}$ and $u_{\sigma L}(\eta)$ given by \eq{ \label{eq:u} u_{\sigma L}(\eta) = N_{\sigma L} (1+\eta^2)^{-(D-2)/4} \left[\frac{1-i\eta}{1+i\eta}\right]^{(L+(D-2)/2)/2} F_{\sigma L}(\eta) , } where $F_{\sigma L}(\eta)$ is a Gauss hypergeometric function \eq{ \label{eq:F} F_{\sigma L}(\eta) = \2F1{\sigma+\frac{D}{2}}{1 - \sigma -\frac{D}{2}}{L+\frac{D}{2}}{\frac{1-i\eta}{2}} , } and the normalization coefficient is \eq{ \label{eq:N} N_{\sigma L} = \frac{1}{\G{L+\frac{D}{2}}} \left[ \frac{\GG{L-\sigma,L+\sigma+D-1}}{2} \right]^{1/2} . } Using these modes we may expand, e.g., the Wightman function \eqn{ \label{eq:Wmodes} W_\sigma(x_1, x_2) &=& \ell^{2-D} \sum_{\vec{L}} \phi_{\sigma {\vec{L}}}(x_1) \phi^*_{\sigma {\vec{L}}}(x_2) \nonumber \\ &=& \ell^{2-D} \frac{\G{\frac{D-2}{2}}}{4 \pi^{D/2}} \sum_{L=0}^\infty (2L+D-2) u_{\sigma L}(\eta_1) u^*_{\sigma L}(\eta_2) C_L^{(D-2)/2}({\vec{x}}_1\cdot{\vec{x}}_2) . } Here $C_L^{(D-2)/2}(z)$ is a Gegenbauer polynomial and once again ${\vec{x}}_1,{\vec{x}}_2$ are unit vectors in $\mathbb{R}^{D}$ parameterizing the $S^{D-1}$. To obtain the last equality we sum over angular momenta via the useful identity \eq{ \sum_{\vec{j}} Y_{\vec{L}}({\vec{x}}_1) Y^*_{\vec{L}}({\vec{x}}_2) = \frac{\G{\frac{D-2}{2}}}{4 \pi^{D/2}} (2L+D-2) C_L^{(D-2)/2}({\vec{x}}_1\cdot{\vec{x}}_2), \quad {\vec{L}} = (L,{\vec{j}}) . } It will be useful to note some qualitative features of the Klein-Gordon mode function $u_{\sigma L}(\eta)$. 
First, as one might expect from the fact that the volume of the $S^{D-1}$ is smallest at $\eta =0$, the mode functions are bounded by their values at $\eta = 0$: \eqn{ \label{eq:ubound} |u_{\sigma L}(\eta)|^2 \le |u_{\sigma L}(0)|^2 &=& 2^{-(2L+D-1)}\pi \frac{\GG{L-\sigma, L+\sigma+D-1}} {\left(\GG{\frac{1+L-\sigma}{2}, \frac{1+L+\sigma+D-1}{2}}\right)^2} \nonumber \\ &=& \frac{1}{2L}\left(1 + O(L^{-1})\right) , \quad {\rm when\;} L \gg 1, \;\; \sigma\; {\rm fixed \;} , } (see eq. (56) of \cite{Marolf:2008hg}). From this we see that $|u_{\sigma L}(\eta)|$ may be bounded by a function of $L$ that decreases as $L \to \infty$. Second, the expansion of the universe and the ensuing growth of the physical wavelength at fixed $L$ suggest that at large $|\eta|$ all modes behave like the $L=0$ mode. Indeed, as derived in detail in Appendix \ref{app:asymptotic}, the following asymptotic expansion for $u_{\sigma L}(\eta)$ is valid for large $|\eta|$ when $\sigma$, $L$, and $D$ are held fixed: \eqn{ \label{eq:uLargeEta} u_{\sigma L}(\eta) &=& \frac{N_{\sigma L}}{2^{\sigma+(D-2)/2}} \GGG{L+\frac{D}{2}, 2\sigma+D-1}{L+\sigma+D-1, \sigma+\frac{D}{2}} \exp\left[i\frac{\pi}{2}(L+\sigma+D-2)\right] (\eta)^{\sigma} \left[1 + O\left(\frac{(L-\sigma)}{\eta}\right)\right] \nonumber \\ & & + (\sigma \to - (\sigma+D-1)) , \quad {\rm for\;} |\eta| \gg 1, \;\; |\eta| \gg (L-\sigma) . } Now, while (\ref{eq:uLargeEta}) gives the correct asymptotic behavior for a given mode at large $|\eta|$ (with all other parameters fixed), it does not correctly reproduce the behavior of the mode function at some arbitrarily large-but-finite $|\eta|$ as $L \to \infty$. To understand the behavior of the mode function in this regime, we instead use the WKB approximation valid when \eq{ f(\eta) := \frac{L(L+D-2)}{(1+\eta^2)} + M^2\ell^2 \gg 1 . 
} The WKB approximation is derived in appendix~\ref{app:WKB} and is given by \eq{ \label{eq:WKB} u_{\sigma L}(\eta) \approx \frac{1}{\sqrt{2}} (1+\eta^2)^{(1-D)/4} \left[ f(\eta) \right]^{-1/4} e^{\pm i \Upsilon(\eta) } , } with $\Upsilon(\eta)$ satisfying \eq{ \frac{d}{d\eta} \Upsilon(\eta) = \left[\frac{f(\eta)}{(1+\eta^2)}\right]^{1/2} . } The key feature of this expression is that $\Upsilon(\eta)$ is large in the regime of validity, so (\ref{eq:WKB}) is a highly oscillatory function of $\eta$. Let us now turn the discussion to interacting theories. The Hartle-Hawking state is constructed by analytically continuing all correlation functions from the Euclidean section. We denote the Hartle-Hawking state constructed perturbatively in an interacting theory by $\ket{\rm HH}$, and reserve $\ket{0}$ to denote the Hartle-Hawking state of the free theory. The state $\ket{\rm HH}$ has been studied in detail for massive scalar field theories in \cite{Higuchi:2009ew,Marolf:2010zp,Hollands:2010pr,Marolf:2010nz, Higuchi:2010aa} by performing the relevant analytic continuations. However, in the current work we perform all calculations explicitly in Lorentz signature using standard Schwinger-Keldysh (a.k.a.\ ``in-in'', ``real-time'', ``closed time path'') perturbation theory (for original works see \cite{Schwinger:1960qe,Keldysh:1964ud}; for more tractable introductions see \cite{Jordan:1986ug,Paz:1990jg,Vilkovisky:2007ny} and the appendix of \cite{Weinberg:2005vy}). \section{Time-dependent couplings in de Sitter} \label{sec:time} \begin{figure} \centering \includegraphics{figdS.pdf} \caption{The Penrose diagram of de Sitter. We consider the state $\ket{\Psi}$ defined by the Bunch-Davies vacuum on a Cauchy surface $\Sigma$. The interaction turns on across the region $\mathcal{R}$ via a smooth coupling function $g(x)$ such that $g(x)=g_f$ in the causal future of $\mathcal{R}$ and $g(x)=0$ in the causal past of $\mathcal{R}$. 
We compute the time-ordered 2-point function $\CP{T\phi(x_1)\phi(x_2)}$ of two points in the distant future. } \label{fig:dS} \end{figure} \renewcommand{\thefigure}{\arabic{figure}} Consider a massive scalar field on global de Sitter with a time-dependent self-interaction. In particular, let the self-interaction vanish in the asymptotic past but turn on smoothly across a spacetime region $\mathcal{R}$. We require the coupling function $g(x)$ to satisfy \eq{ g(x) = \left\{ \begin{array}{ll} 0 & \quad {\rm for\;} x \in J^-(\mathcal{R})/\mathcal{R} \\ g_f & \quad {\rm for\;} x \in J^+(\mathcal{R})/\mathcal{R} \end{array} \right. , } where as usual $J^\pm$ denotes the causal past and future of a set and $A/B$ denotes the set of points in $A$ that do not lie in $B$. We sketch the scenario in Fig.~\ref{fig:dS}. We wish to compute correlation functions with respect to the state $\ket{\Psi}$ which coincides with the free Bunch-Davies vacuum $\ket{0}$ in the past region $J^-(\mathcal{R})/\mathcal{R}$. For small coupling $g(x)$ the time-ordered 2-point function with respect to $\ket{\Psi}$ can be expanded as \eq{ \label{eq:2pt} \CP{ T \phi(x_1)\phi(x_2) } = \C{ T \phi(x_1)\phi(x_2) }_0 + \sum_{n=1}^\infty \CP{T \phi(x_1)\phi(x_2)}^{(n)} , } where $\CP{T \phi(x_1)\phi(x_2) }^{(n)}$ is of $O(g^n)$. We choose a surface $\Sigma$ in the past of $\mathcal{R}$ as our initial Cauchy surface -- see Fig.~\ref{fig:dS}. Given the above choice of state, the appropriate Green's functions to use in the Schwinger-Keldysh formalism are those of the Bunch-Davies vacuum (\ref{eq:Greens}). The main result of this section is to show that, when it is evaluated at $x_1$, $x_2$ in the far future of $\mathcal{R}$, the 2-point function (\ref{eq:2pt}) reduces to that of the Hartle-Hawking state of the analogous theory with constant coupling $g_f$. 
We consider both quadratic and cubic couplings ($g(x)\phi^2(x)$ and $g(x) \phi^3(x)$) below, showing in each case that the perturbative corrections $\C{T \phi(x_1)\phi(x_2)}_\Psi^{(n)}$ approach those of the Hartle-Hawking state at late times up to order $n=2$. Although it may be of less physical interest, the analysis of the quadratic coupling in section \ref{sec:phi2} will help to greatly simplify our 1-loop treatment of the cubic coupling in section \ref{sec:phi3}. \subsection{Example: $\phi^2(x)$ interaction} \label{sec:phi2} \begin{figure} \centering \includegraphics{figphi2.pdf} \caption{Corrections to the time-ordered 2-point function $\CP{T \phi(x_1)\phi(x_2)}$ due to the interaction $-\frac{1}{2}g(x)\phi^2(x)$. The $O(g(x))$ correction is depicted in Fig. (a) while the $O(g^2(x))$ correction is depicted in Fig. (b). It is convenient for computation to label each leg of the diagram by a distinct mass parameter $\sigma_i$.} \label{fig:phi2} \end{figure} \renewcommand{\thefigure}{\arabic{figure}} In our first example we consider the simple quadratic interaction term \eq{ \mathcal{L}_{\rm int}[\phi] = - \frac{g(x)}{2} \phi^2(x) . \label{eq:Lp2}} Treating this term perturbatively yields only tree-level diagrams; no regularization or renormalization is needed. We compute both the $O(g)$ and the $O(g^2)$ corrections to the two-point function of this model below. As we will see in section \ref{sec:phi3}, the $O(g^2)$ correction from (\ref{eq:Lp2}) is closely related to the 1-loop correction from a $\phi^3$ interaction. As a result, the results below will greatly simplify the manipulations in section \ref{sec:phi3}. 
\subsubsection{The $O(g)$ correction} \label{sec:Og} The $O(g)$ correction is depicted in the Feynman diagram shown in Fig.~\ref{fig:phi2}~(a) and is given by the expression \eq{ \label{eq:phi2} \CP{T \phi_{\sigma_1}(x_1) \phi_{\sigma_2}(x_2)}^{(1)} = i \int_y g(y)\left[ G_{\sigma_1}(y,x_1) G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) W_{\sigma_2}(y,x_2)\right] . } We denote by $\int_y\dots$ an integral over the future of $\Sigma$. For the moment it is convenient to let each Green's function have a distinct mass; we will take the limit of equal masses later. Consider the first term in (\ref{eq:phi2}): \eq{ \label{eq:T1} T_1(x_1,x_2) := i \int_y g(y) G_{\sigma_1}(y,x_1) G_{\sigma_2}(y,x_2) . } Making use of the Green's functions' equations of motion \eqn{ (\Box_x - M^2) G_\sigma(x,y) &=& (\Box_y - M^2) G_\sigma(x,y) = i \delta(x,y), \nonumber \\ (\Box_x - M^2) W_\sigma(x,y) &=& (\Box_y - M^2) W_\sigma(x,y) = 0 , } we may usefully re-write (\ref{eq:T1}) as \eqn{ T_1(x_1,x_2) &=& \frac{i}{M_1^2 - M_2^2} \int_y g(y) \bigg[ (\Box_y G_{\sigma_1}(y,x_1))G_{\sigma_2}(y,x_2) \nonumber \\ & & - G_{\sigma_1}(y,x_1)(\Box_y G_{\sigma_2}(y,x_2)) - i \delta(x_1,y) G_{\sigma_2}(y,x_2) + i \delta(x_2,y) G_{\sigma_1}(y,x_1) \bigg] \nonumber \\ &=& \frac{1}{M_1^2-M_2^2}\left[ g(x_1) G_{\sigma_2}(x_1,x_2) - g(x_2) G_{\sigma_1}(x_1,x_2) \right] \nonumber \\ & & + \frac{i}{M_1^2 - M_2^2} \int_y g(y) \bigg[ (\Box_y G_{\sigma_1}(y,x_1))G_{\sigma_2}(y,x_2) - G_{\sigma_1}(y,x_1)(\Box_y G_{\sigma_2}(y,x_2)) \bigg] . 
\nonumber \\ } Making the simple re-arrangement \eqn{ g(y) (\Box G_{\sigma_1}(y,x_1)) G_{\sigma_2}(y,x_2) &=& \nabla^\mu \big[ g(y) (\nabla_\mu G_{\sigma_1}(y,x_1)) G_{\sigma_2}(y,x_2)\big] \nonumber \\ & & - (\nabla^\mu g(y)) (\nabla_\mu G_{\sigma_1}(y,x_1)) G_{\sigma_2}(y,x_2) \nonumber \\ & & - g(y) (\nabla_\mu G_{\sigma_1}(y,x_1)) (\nabla^\mu G_{\sigma_2}(y,x_2)) , } we obtain \eqn{ \label{eq:T1b} T_1(x_1,x_2) &=& \frac{1}{M_1^2-M_2^2}\left[ g(x_1)G_{\sigma_2}(x_1,x_2) - g(x_2)G_{\sigma_1}(x_1,x_2) \right] \nonumber \\ & & - \frac{i}{M_1^2-M_2^2} \int_y \nabla^\mu \left[ g(y) G_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) \right] \nonumber \\ & & + \frac{i}{M_1^2-M_2^2} \int_y (\nabla^\mu g(y) ) \left[ G_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) \right] . } One can perform the same manipulations for the second term in (\ref{eq:phi2}). The only difference is that the Wightman function satisfies the homogeneous equation of motion, so there are no analogs of the terms on the top line of (\ref{eq:T1b}). All together we obtain the expression \eqn{ \label{eq:phi2b} & &\CP{T \phi_{\sigma_1}(x_1) \phi_{\sigma_2}(x_2)}^{(1)} = \frac{1}{M_1^2-M_2^2}\left[ g(x_1)G_{\sigma_2}(x_1,x_2) - g(x_2) G_{\sigma_1}(x_1,x_2) \right] \nonumber \\ & & - \frac{i}{M_1^2-M_2^2} \int_y \nabla^\mu \left[ g(y) \left( G_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right) \right] \nonumber \\ & & + \frac{i}{M_1^2-M_2^2} \int_y (\nabla^\mu g(y) ) \left[ G_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right] . } In the second line of (\ref{eq:phi2b}) there is an integral of a total derivative. By Stokes' theorem this integral can be expressed as an integral over the boundary of the region to the future of $\Sigma$. 
This boundary is simply the union of $\Sigma$ and future infinity $I^+$. Now, the combination of Green's functions in the integrand is such that the integrand has support only on the union of the past light cones of $x_1$ and $x_2$, so the integral over $I^+$ vanishes. Furthermore, the coupling function $g(y)$ vanishes on $\Sigma$, so the integral over $\Sigma$ vanishes as well. We conclude that the integral in the second line of (\ref{eq:phi2b}) is identically zero: \eqn{ \label{eq:phi2c} & &\CP{T \phi_{\sigma_1}(x_1) \phi_{\sigma_2}(x_2)}^{(1)} = \frac{1}{M_1^2-M_2^2}\left[ g(x_1) G_{\sigma_2}(x_1,x_2) - g(x_2) G_{\sigma_1}(x_1,x_2) \right] \nonumber \\ & & + \frac{i}{M_1^2-M_2^2} \int_y (\nabla^\mu g(y) ) \left[ G_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right] . } We are interested in $x_1,x_2$ in the future region $J^+(\mathcal{R})/\mathcal{R}$. Since the gradient $\nabla^\mu g$ has support only in $\mathcal{R}$, we need not allow $y$ in (\ref{eq:phi2c}) to coincide with $x_1,x_2$ or to lie in the future of either point. We may therefore replace $G_{\sigma_1}, G_{\sigma_2}$ by appropriate Wightman functions in the integral and write the second line of (\ref{eq:phi2c}) in the form \eqn{ \label{eq:T2} T_{2\,\sigma_1\sigma_2}(x_1,x_2) &:=& \frac{i}{M_1^2-M_2^2} \int_y (\nabla^\mu g(y) ) \left[ W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(x_2,y) - W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right] \nonumber \\ &=& \frac{2}{M_1^2 - M_2^2} {\rm Im}\,\left\{ \int_y (\nabla^\mu g(y) ) W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right\} . } To keep the computation simple we choose $g(y)$ to be a smooth function of the time coordinate $\eta$ alone, i.e., \eq{ \label{eq:geta} \nabla^\mu g(y) = - \ell^{-2} (1+\eta^2) g'(\eta) \delta^\mu_\eta . 
} By expanding the Wightman functions in Klein-Gordon modes as in (\ref{eq:Wmodes}) we compute \eqn{ T_{2\,\sigma_1\sigma_2}(x_1,x_2) &=& - \frac{2 \ell^{D-2}}{M_1^2 - M_2^2} {\rm Im}\,\left\{ \int d\eta \, (1+\eta^2)^{(D-1)/2} g'(\eta) \int d\Omega_{D-1} W_{\sigma_1}(y,x_1) \overleftrightarrow{\partial_n} W_{\sigma_2}(y,x_2) \right\} \nonumber \\ &=& - \ell^{2-D} \frac{\G{\frac{D-2}{2}}}{2\pi^{D/2}} {\rm Im}\, \sum_{L=0}^\infty \bigg\{ (2 L+D-2) \chi_{\sigma_1\sigma_2}(L)\, u^*_{\sigma_1 L}(\eta_1) u^*_{\sigma_2 L}(\eta_2) C^{(D-2)/2}_L({\vec{x}}_1\cdot{\vec{x}}_2) \bigg\} , \nonumber \\ \label{eq:T2again} } with \eq{ \label{eq:chi} \chi_{\sigma_1\sigma_2}(L) = \frac{1}{(M_1^2-M_2^2)} \int d\eta\, (1+\eta^2)^{(D-1)/2} g'(\eta) \left[ u_{\sigma_1 L}(\eta) \overleftrightarrow{\partial_n} u_{\sigma_2 L}(\eta) \right] , } where $\partial_n$ denotes a derivative along the unit (future-pointing) timelike normal to the surface $\eta = {\rm const}$. The integral in (\ref{eq:chi}) is guaranteed to be finite for any pair $u_{\sigma_1 L}(\eta)$, $u_{\sigma_2 L}(\eta)$ as the harmonics are bounded and $g'(\eta)$ is smooth with compact support. It is also finite in the limit $M_2^2 \to M_1^2$, as can be seen by using l'H\^opital's rule. It remains, however, to determine the convergence of the sum over $L$ in (\ref{eq:T2again}). This is governed by the behavior of $\chi_{\sigma_1\sigma_2}(L)$ at large $L$. Since $g'(\eta)$ has compact support it must vanish for $|\eta|$ greater than some $|\eta|_{\rm max}$, and for $L(L+D-2) \gg 1 + \eta_{max}^2$ we may use the WKB approximation (\ref{eq:WKB}). Since the phase $\Upsilon(\eta)$ in (\ref{eq:WKB}) is highly oscillatory at large $L$, we expect $\chi_{\sigma_1\sigma_2}(L)$ to decrease rapidly with $L$. 
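To make the equal-mass limit explicit, note that $A \overleftrightarrow{\partial_n} A = 0$ identically for any function $A$, so the integral in (\ref{eq:chi}) vanishes as $M_2^2 \to M_1^2$ and l'H\^opital's rule (differentiating numerator and denominator with respect to $M_2^2$) yields the finite limit \eq{ \lim_{M_2^2 \to M_1^2} \chi_{\sigma_1\sigma_2}(L) = - \int d\eta\, (1+\eta^2)^{(D-1)/2} g'(\eta) \left[ u_{\sigma_1 L}(\eta) \overleftrightarrow{\partial_n} \left( \partial_{M^2} u_{\sigma L}(\eta) \right)\Big|_{\sigma = \sigma_1} \right] , } where the mass derivative acts on $u_{\sigma L}$ through its dependence on $\sigma$. 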
To verify that this is the case, suppose that in fact $L(L+D-2) \gg M^2 \ell^2 (1 + \eta_{max}^2)$ so that we may also expand the phases $\Upsilon_\sigma(\eta)$ (where we have added the label $\sigma$ to indicate the dependence on mass) as \begin{equation} \Upsilon_\sigma = \sum_{n \ge 0} \left( L(L+D-2) \right)^{(1-n)/2} \Upsilon_{n,\sigma}. \end{equation} Noting that $\Upsilon_{0,\sigma}$ is independent of $\sigma$, we now introduce a new time coordinate $\tilde \eta$ defined by i) $\tilde \eta$ is a smooth strictly increasing function of $\eta$, ii) $\tilde \eta = \Upsilon_{0,\sigma}$ in the region where $g'(\eta)\neq 0$, and iii) $\tilde \eta = \eta$ at large $|\eta|$. Then for large $L$ (\ref{eq:chi}) becomes ($L^{-1}$ times) the Fourier transform of a smooth $L$-independent function of $\tilde \eta$ and, as a result, decays faster than any power law in $L$. (Here we use the fact that the sub-leading terms in the WKB expansion correct (\ref{eq:WKB}) by multiplying (\ref{eq:WKB}) by functions which become essentially constant at large $L$ for $|\eta| < \eta_{max}$.) For later use we note that for $L \gg |\eta_{max}| M \ell$ we have shown that $|\chi_{\sigma_1\sigma_2}(L)| \le 1/L^n$ and, as a result, that for all $L$ and $n$ we have a bound \begin{equation} \label{eq:fb} |\chi_{\sigma_1\sigma_2}(L)| \le C_n \left( \frac{M \ell}{L}\right)^n \end{equation} for appropriate $C_n$ determined only by $g(x)$ (and in particular, for which the $C_n$ do not depend on $M, L$). It follows that the sum over $L$ in (\ref{eq:T2again}) is absolutely convergent and yields a finite result, even when $x_1=x_2$. To see this, we bound the harmonics $|u^*_{\sigma L}(\eta_1) u^*_{\sigma L}(\eta_2)| \le |u_{\sigma L}(0)|^2$ as in (\ref{eq:ubound}), and we bound the Gegenbauer polynomial by its value at coincidence: \eq{ C^{(D-2)/2}_L(1) = \GGG{L+D-2}{L+1, D-2} , } (see eq. (44) of \cite{Marolf:2008hg}). For $L \gg 1$ this behaves like $L^{D-3}$. 
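The quoted $L^{D-3}$ growth follows from the standard large-argument behavior of ratios of Gamma functions, $\G{L+a}/\G{L+b} = L^{a-b}\left(1 + O(L^{-1})\right)$: \eq{ C^{(D-2)/2}_L(1) = \GGG{L+D-2}{L+1, D-2} = \frac{L^{D-3}}{\G{D-2}} \left(1 + O(L^{-1})\right) . } 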
From these bounds it follows that the summand in (\ref{eq:T2again}) may be bounded at large $L$ by $L^{D-3} |\chi_{\sigma_1\sigma_2}(L)|$. Since $\chi_{\sigma_1\sigma_2}(L)$ decays faster than any polynomial as $L\to \infty$, the sum is absolutely convergent. Furthermore, we can show that when $|\eta_1|,\,|\eta_2| \gg |\eta|_{\rm max}^2$ the expression $T_{2\,\sigma\sigma}(x_1,x_2)$ decays like $(\eta_1\eta_2)^\sigma$. For $\eta_1$, $\eta_2$ in this regime, let us choose some $L_{\rm cut}$ such that ${\rm min}(|\eta_1|, |\eta_2|) \gg 4 L^2_{\rm cut} \gg |\eta|_{\rm max}^2$. We then split the sum over $L$ in (\ref{eq:T2again}) into two parts: One part $T_{2,<}(x_1,x_2)$ is the finite series containing the terms below $L_{\rm cut}$. The other part $T_{2,>}(x_1,x_2)$ is the infinite series containing the terms with $L \ge L_{\rm cut}$. For each term in $T_{2,<}(x_1,x_2)$ we may approximate $u^*_{\sigma L}(\eta_1)$ and $u^*_{\sigma L}(\eta_2)$ by the asymptotic expansion (\ref{eq:uLargeEta}). It follows that each term in the series $T_{2,<}(x_1,x_2)$ decays like $(\eta_1\eta_2)^\sigma$, and so $T_{2,<}(x_1,x_2)$ does as well. On the other hand, we can bound the contribution of the infinite series $T_{2,>}(x_1,x_2)$ by \eqn{ \label{eq:T2>} |T_{2,>}(x_1,x_2)| &\le & \ell^{2-D} \frac{\G{\frac{D-2}{2}}}{2\pi^{D/2}} \sum_{L=L_{\rm cut}}^{\infty} \bigg\{ (2 L+D-2) |\chi_{\sigma_1\sigma_2}(L)| |u_{\sigma L}(0)|^2 C^{(D-2)/2}_L(1) \bigg\} . } This series converges absolutely and (due to the rapid decay of $\chi_{\sigma_1\sigma_2}(L)$) decreases faster than any power law as $L_{\rm cut}$ is increased. Thus, as $|\eta_{1,2}| \to \infty$ we may increase $L_{\rm cut}$ (taking $L_{\rm cut}$ to be, e.g., any geometric mean of $|\eta|_{max}$ and $\min(|\eta_1|,|\eta_2|)$) and the contribution due to $T_{2,>}(x_1,x_2)$ becomes negligible. We conclude that the full expression $T_{2\,\sigma\sigma}(x_1,x_2)$ decays like $(\eta_1\eta_2)^\sigma$. 
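It is also useful to record explicitly the equal-mass limit of the local terms in (\ref{eq:phi2c}). For $x_1, x_2$ in the far future we have $g(x_1) = g(x_2) = g_f$, so these terms take the difference-quotient form \eq{ \lim_{M_2^2 \to M_1^2} \frac{g_f \left[ G_{\sigma_2}(x_1,x_2) - G_{\sigma_1}(x_1,x_2) \right]}{M_1^2 - M_2^2} = - g_f\, \partial_{M^2} G_{\sigma}(x_1,x_2) \Big|_{\sigma=\sigma_1} . } 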
For later use, let us also consider $T_{2\,\sigma,\mu}(x_1,x_2)$ for all ${\rm Re}\, \mu < {\rm Re}\, \sigma$. Of course, the same argument shows that each $T_{2\,\sigma,\mu}(x_1,x_2)$ decays in the same way, and in particular that it is bounded by $C|(\eta_1\eta_2)^\sigma|$. Using (\ref{eq:fb}) we see that one can choose the constant $C$ to grow with $\mu$ at most as some polynomial whose order depends on $D$ through the power $n$ in (\ref{eq:fb}) required to show convergence of the mode sums. To summarize, the $O(g)$ correction to the time-ordered 2-point function may be written \eq{ \CP{T \phi_\sigma(x_1) \phi_\sigma(x_2)}^{(1)} = - g_f \partial_{M^2} G_{\sigma}(x_1,x_2) + T_{2\,\sigma,\sigma}(x_1,x_2) . } The first term is precisely the $O(g)$ correction to the Hartle-Hawking state \cite{Marolf:2010zp}. At sufficiently late times $|\eta_{1,2}| \gg |\eta|^2_{\rm max}$ the second term decays like $|\eta_1\eta_2|^\sigma$; i.e., exponentially in the usual global time coordinate $t$. Furthermore, the coefficient of the decay term grows at large $\sigma$ no faster than a power law in $\sigma$, and similarly for all $ T_{2\,\sigma,\mu}(x_1,x_2)$ with ${\rm Re}\, \mu \le {\rm Re}\, \sigma$. \subsubsection{The $O(g^2)$ correction} Let us now compute the $O(g^2)$ correction to the time-ordered 2-point function. Although it is somewhat tedious, the effort will be worthwhile as we will make use of this result in studying the 1-loop $\phi^3$ correction in the next section. 
The $O(g^2)$ correction is depicted in Fig.~\ref{fig:phi2}~(b) and is given by the expression \eqn{ & &\CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} \nonumber \\ &=& i^2 \int_{y} \int_{{\overline{y}}} g(y) g({\overline{y}}) \bigg\{ G_{\sigma_1}(y,x_1) G_{\sigma_2}({\overline{y}},x_2) G_{\sigma_3}(y,{\overline{y}}) - W_{\sigma_1}(y,x_1) G_{\sigma_2}({\overline{y}},x_2) W_{\sigma_3}(y,{\overline{y}}) \nonumber \\ & & \phantom{i^2 \int_{y} \int_{{\overline{y}}} g(y) g({\overline{y}}) \bigg\{ } + W_{\sigma_1}(y,x_1) W_{\sigma_2}({\overline{y}},x_2) G^*_{\sigma_3}(y,{\overline{y}}) - G_{\sigma_1}(y,x_1) W_{\sigma_2}({\overline{y}},x_2) W_{\sigma_3}({\overline{y}},y) \bigg\} . \nonumber \\ } This expression may be organized into two terms, each of which contains the $O(g)$ corrections to an appropriate 2-point function: \eqn{ \label{eq:A0} \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} &=& i \int_{{\overline{y}}} g({\overline{y}}) G_{\sigma_2}({\overline{y}},x_2) \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_3}({\overline{y}})}^{(1)} \nonumber \\ & & - i \int_{{\overline{y}}} g({\overline{y}}) W_{\sigma_2}({\overline{y}},x_2) \CP{\phi_{\sigma_3}({\overline{y}})\phi_{\sigma_1}(x_1)}^{(1)} . } The $O(g)$ correction to the time-ordered correlator (first line of (\ref{eq:A0})) was studied in section \ref{sec:Og} above, and the $O(g)$ correction to the Wightman correlator (second line of (\ref{eq:A0})) can be analyzed similarly. 
In particular, using manipulations analogous to those that led to (\ref{eq:phi2b}), one obtains \eqn{ \label{eq:A2} & &\CP{\phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(1)} = i \int_y g(y) \left[ W_{\sigma_1}(x_1,y) G_{\sigma_2}(x_2,y) - G_{\sigma_1}^*(y,x_1) W_{\sigma_2}(y,x_2) \right] \nonumber \\ & & = \frac{1}{M_1^2-M_2^2}\left[ g(x_1) W_{\sigma_2}(x_1,x_2) - g(x_2) W_{\sigma_1}(x_1,x_2)\right] \nonumber \\ & & \phantom{= } + \frac{i}{M_1^2-M_2^2} \int_y(\nabla^\mu g(y)) \left[ W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu G_{\sigma_2}(x_2,y) - G^*_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2) \right] . } After inserting (\ref{eq:phi2c}) and (\ref{eq:A2}) into (\ref{eq:A0}) and rearranging terms one can again recognize the $O(g)$ corrections which we have already computed: \eqn{ \label{eq:A3} & &\CP{T \phi_{\sigma_1}(x_1) \phi_{\sigma_2}(x_2)}^{(2)} \nonumber \\ &=& \frac{1}{M_1^2-M_3^2} \left\{ g(x_1) \CP{T \phi_{\sigma_3}(x_1)\phi_{\sigma_2}(x_2)}^{(1)} - \CP{T\phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(1')} \right\} \nonumber \\ & & + \frac{i}{M_1^2 - M_3^2} \int_y (\nabla^\mu g(y)) \bigg\{ - \CP{T\phi_{\sigma_3}(y)\phi_{\sigma_2}(x_2)}^{(1)}\overleftrightarrow{\nabla}_\mu G_{\sigma_1}(y,x_1) \nonumber \\ & & \phantom{+\frac{i}{M_1^2-M_3^2} \int_y (\nabla^\mu g(y))\bigg\{} + \CP{\phi_{\sigma_3}(y)\phi_{\sigma_2}(x_2)}^{(1)}\overleftrightarrow{\nabla}_\mu W_{\sigma_1}(y,x_1) \bigg\} . } Here $\CP{T\phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(1')}$ denotes the same integral expression (\ref{eq:phi2}) as $\CP{T\phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(1)}$ but with $g(y)$ replaced by $g^2(y)$. Once again we may use (\ref{eq:phi2c}) and (\ref{eq:A2}) to simplify this expression. 
The result may be written \eqn{ \label{eq:phi2gg} \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} &=& g^2_f \bigg[ \frac{G_{\sigma_1}(x_1,x_2)}{(M_1^2-M_2^2)(M_1^2-M_3^2)} + \frac{G_{\sigma_2}(x_1,x_2)}{(M_2^2-M_1^2)(M_2^2-M_3^2)} \nonumber \\ & & \phantom{g^2 \bigg[} + \frac{G_{\sigma_3}(x_1,x_2)}{(M_3^2-M_1^2)(M_3^2-M_2^2)} \bigg] + T_{3\,\sigma_1\sigma_2\sigma_3}(x_1,x_2) , } with $T_{3\,\sigma_1\sigma_2\sigma_3}(x_1,x_2)$ the collection of integration terms \eq{ \label{eq:T3} T_{3\,\sigma_1\sigma_2\sigma_3}(x_1,x_2) = (g\nabla g {\rm \; terms}) + (\nabla g \overline{\nabla} g {\rm \; terms}) , } where \eqn{ \label{eq:gdelg} (g \nabla g {\rm \; terms}) &:=& \frac{1}{(M_1^2-M_3^2)} i \int_y \nonumber \\ & & \bigg\{ (\nabla^\mu g(y)) g(x_1) \left[\frac{ G_{\sigma_3}(y,x_1)\overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_3}(y,x_1)\overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2)}{(M_3^2-M_2^2)} \right] \nonumber \\ & & \phantom{\bigg\{\;} + (\nabla^\mu g^2(y)) \left[ \frac{ G_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2)}{(M_2^2-M_1^2)} \right] \nonumber \\ & & \phantom{\bigg\{\;} + (\nabla^\mu g(y)) g(y) \left[\frac{ G_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu W_{\sigma_2}(y,x_2)}{(M_3^2-M_2^2)} \right] \nonumber \\ & & \phantom{\bigg\{\;} + (\nabla^\mu g(y)) g(x_2) \left[ \frac{ G_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu G_{\sigma_3}(y,x_2) - W_{\sigma_1}(y,x_1)\overleftrightarrow{\nabla}_\mu W_{\sigma_3}(y,x_2)}{(M_2^2-M_3^2)} \right] \bigg\} , \nonumber \\ } \eqn{ \label{eq:delgdelg} (\nabla g \overline{\nabla}g \;{\rm terms}) &:=& \frac{1}{(M_1^2-M_3^2)(M_2^2-M_3^2)} \int_y \int_{\overline{y}} (\nabla^\mu g(y))(\nabla^{{\overline{\nu}}}g({\overline{y}})) \nonumber \\ & & \bigg\{ W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu 
G_{\sigma_3}({\overline{y}},y) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}(x_2,{\overline{y}}) \nonumber \\ & & \phantom{\bigg\{ \;} - W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu W_{\sigma_3}({\overline{y}},y) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}({\overline{y}},x_2) \nonumber \\ & & \phantom{\bigg\{ \;} - W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu W_{\sigma_3}(y,{\overline{y}}) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}(x_2,{\overline{y}}) \nonumber \\ & & \phantom{\bigg\{ \;} + W_{\sigma_1}(y,x_1) \overleftrightarrow{\nabla}_\mu G^*_{\sigma_3}(y,{\overline{y}}) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}({\overline{y}},x_2) \bigg\} . } Let us simplify the lengthy expressions (\ref{eq:gdelg}) and (\ref{eq:delgdelg}). The terms in (\ref{eq:gdelg}) are of the same form as $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$ above; indeed, after a few simple manipulations we may write (\ref{eq:gdelg}) as \eqn{ (g \nabla g {\rm \; terms}) &=& \frac{g_f}{(M_1^2 - M_3^2)} T_{2\,\sigma_3,\sigma_2}(x_1,x_2) + \frac{g_f}{(M_2^2 - M_3^2)} T_{2\,\sigma_1,\sigma_3}(x_1,x_2) \nonumber \\ & & + \frac{(M_1^2+M_2^2-2M_3^2)}{2(M_3^2-M_2^2)(M_1^2-M_3^2)} T_{2'\,\sigma_1\sigma_2}(x_1,x_2) . } Here $T_{2'\,\sigma_1\sigma_2}(x_1,x_2)$ denotes the same integral expression (\ref{eq:T2}) as $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$ but with $g^2(y)$ in place of $g(y)$. 
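It is worth noting that the bracketed combination of Green's functions in (\ref{eq:phi2gg}) is precisely a second divided difference of $G_\sigma(x_1,x_2)$ with respect to the mass-squared variable. Since $G_\sigma$ depends smoothly on $M^2$, the standard divided-difference limit gives \eq{ \lim_{M_2^2,\,M_3^2 \to M_1^2} \left[ \frac{G_{\sigma_1}(x_1,x_2)}{(M_1^2-M_2^2)(M_1^2-M_3^2)} + \frac{G_{\sigma_2}(x_1,x_2)}{(M_2^2-M_1^2)(M_2^2-M_3^2)} + \frac{G_{\sigma_3}(x_1,x_2)}{(M_3^2-M_1^2)(M_3^2-M_2^2)} \right] = \frac{1}{2}\, \partial^2_{M^2} G_{\sigma}(x_1,x_2) , } which is the combination that appears in the coincident-mass limit at the end of this section. 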
The $(\nabla g \overline{\nabla} g {\rm \; terms})$ may be decomposed into the sum of the two terms \eqn{ \label{eq:TW} & & T_{W\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) := \frac{-2}{(M_1^2-M_3^2)(M_2^2-M_3^2)} \nonumber \\ & & \;\;\; {\rm Re}\, \left\{ \int_y \int_{\overline{y}} (\nabla^\mu g(y))(\nabla^{{\overline{\nu}}}g({\overline{y}})) W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu W_{\sigma_3}({\overline{y}},y) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}({\overline{y}},x_2) \right\} , \\ \label{eq:TG} & & T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) := \frac{2}{(M_1^2-M_3^2)(M_2^2-M_3^2)} \nonumber \\ & & \;\;\; {\rm Re}\, \left\{ \int_y \int_{\overline{y}} (\nabla^\mu g(y))(\nabla^{{\overline{\nu}}}g({\overline{y}})) W_{\sigma_1}(x_1,y) \overleftrightarrow{\nabla}_\mu G_{\sigma_3}({\overline{y}},y) \overleftrightarrow{\nabla}_{{\overline{\nu}}} W_{\sigma_2}(x_2,{\overline{y}}) \right\}. } Expanding the Green's functions in modes, one easily obtains \eqn{ \label{eq:TW2} & & T_{W\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) \nonumber \\ &=& \ell^{2-D} \frac{\G{\frac{D-2}{2}}}{2\pi^{D/2}} {\rm Re}\, \left\{ \sum_{L=0}^\infty (2L+D-2) \chi_{\sigma_1 \sigma_3}^*(L) \chi_{\sigma_3 \sigma_2}(L) u_{\sigma_1 L}(\eta_1) u^*_{\sigma_2 L}(\eta_2) C_L^{(D-2)/2}({\vec{x}}_1\cdot{\vec{x}}_2) \right\}. \nonumber \\ } This leaves only $T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$, which one may treat similarly using $G_\sigma(\bar y,y) = \theta(\bar \eta - \eta) W_\sigma(\bar y, y) +\theta(\eta - \bar \eta) W^*_\sigma(\bar y, y) $ and the mode sum (\ref{eq:Wmodes}). 
One finds \eqn{ \label{eq:TG2} & & T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) \nonumber \\ &=& \ell^{2-D} \frac{\G{\frac{D-2}{2}}}{2\pi^{D/2}(M_2^2-M_3^2)} {\rm Re}\, \Bigg\{ \sum_{L=0}^\infty C_L^{(D-2)/2}({\vec{x}}_1\cdot{\vec{x}}_2) u_{\sigma_1 L}(\eta_1) u_{\sigma_2 L}(\eta_2) \cr &\times& \frac{1}{L^2}\left( \int d \bar \eta (1+\bar \eta^2)^{(D-1)/2} g'(\bar \eta) [u^*_{\sigma_3 L} (\bar \eta) \overleftrightarrow{\partial_n} u^*_{\sigma_2 L} (\bar \eta)] \zeta_{L \sigma_1 \sigma_3} (\bar \eta) + (\sigma_1 \leftrightarrow \sigma_2) \right) \Bigg\}, \nonumber \\ } where \eq{ \label{eq:zeta} \zeta_{L \sigma_1\sigma_3}(\bar \eta) = \frac{L^2}{(M_1^2-M_3^2)} \int_{\bar \eta}^\infty d\eta\, (1+\eta^2)^{(D-1)/2} g'(\eta) \left[ u^*_{\sigma_1 L}(\eta) \overleftrightarrow{\partial_n} u_{\sigma_3 L}(\eta) \right] . } Note that the integral in (\ref{eq:zeta}) converges since the integrand is smooth and $g'(\eta)$ has compact support. Using (\ref{eq:WKB}), we see that the large $L$ limit of (\ref{eq:zeta}) is a smooth function of $\bar \eta$ which is in fact independent of $L,M_1,M_3.$ As a result, the $\bar \eta$ integral in (\ref{eq:TG2}) once again gives a function of $L$ which decreases faster than any power of $L$. In particular, as with $\chi_{\sigma_1 \sigma_2}(L)$, for any $p > 0$ it may be bounded by $C_p(\sigma_1,\sigma_3) L^{-p}$ where $C_p(\sigma_1,\sigma_3)$ is a polynomial in $\sigma_1,\sigma_3$ whose order and coefficients are determined by $p$. It is clear that (\ref{eq:TW2}) and (\ref{eq:TG2}) are similar in form to (\ref{eq:T2}). As a result, $T_{W\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$ and $T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$ can be shown to decay like $\eta_1^{\sigma_1}\eta_2^{\sigma_2}$ via arguments analogous to those used for $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$.
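The superpolynomial fall-off in $L$ invoked here is essentially the statement that a smooth weight function has rapidly decaying oscillatory integrals. As a generic numerical illustration (using the periodic function $e^{\cos x}$, whose Fourier coefficients are the modified Bessel values $I_n(1)$ — a stand-in, not the de Sitter mode integrals themselves), one can watch the coefficients collapse faster than any fixed power:

```python
import numpy as np

# Fourier coefficients of the smooth periodic function exp(cos x).
# Smoothness forces |c_n| to fall off faster than any power of n;
# here in fact c_n = I_n(1) ~ (1/2)^n / n!.
N = 256
x = 2 * np.pi * np.arange(N) / N
c = np.abs(np.fft.fft(np.exp(np.cos(x)))) / N

for n in (2, 4, 8):
    print(n, c[n])

# decay is much faster than, e.g., the fixed power n^{-4}:
assert c[8] < c[2] * (2 / 8) ** 4
```

The same mechanism is at work above: the $\bar\eta$ integral pairs oscillatory mode functions against the smooth, compactly supported $g'(\bar\eta)$.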
Furthermore, just as with $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$, we find that these functions are in fact bounded by $C |(\eta_1)^{\sigma_1}(\eta_2)^{\sigma_2}|$ for some polynomial function $C(\sigma_1,\sigma_2)$. Collecting our results, we conclude that the $O(g^2)$ correction to the time-ordered 2-point function is \eqn{ \label{eq:result} & & \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} \nonumber \\ &=& g^2_f \bigg[ \frac{G_{\sigma_1}(x_1,x_2)}{(M_1^2-M_2^2)(M_1^2-M_3^2)} + \frac{G_{\sigma_2}(x_1,x_2)}{(M_2^2-M_1^2)(M_2^2-M_3^2)} + \frac{G_{\sigma_3}(x_1,x_2)}{(M_3^2-M_1^2)(M_3^2-M_2^2)} \bigg] \nonumber \\ & & + \frac{g_f}{(M_1^2 - M_3^2)} T_{2\,\sigma_3,\sigma_2}(x_1,x_2) + \frac{g_f}{(M_2^2 - M_3^2)} T_{2\,\sigma_1,\sigma_3}(x_1,x_2) \nonumber \\ & & + \frac{(M_1^2+M_2^2-2M_3^2)}{2(M_3^2-M_2^2)(M_1^2-M_3^2)} T_{2'\,\sigma_1\sigma_2}(x_1,x_2) + T_{W\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) + T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2) . \nonumber \\ } We remind the reader that $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$ is defined in (\ref{eq:T2}), $T_{2'\,\sigma_1\sigma_2}(x_1,x_2)$ is (\ref{eq:T2}) with $g^2(y)$ in place of $g(y)$, $T_{W\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$ is defined in (\ref{eq:TW}), and $T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$ is defined in (\ref{eq:TG}). In (\ref{eq:result}), the term in square brackets is the $O(g^2)$ correction to the Hartle-Hawking state. The remaining terms are bounded for all $x_1,x_2 \in J^+(\mathcal{R})/\mathcal{R}$ and in particular are finite at coincidence ($x_1=x_2$). In addition, these terms all decay like $\eta_1^{\sigma_1} \eta_2^{\sigma_2}$ when $|\eta_{1,2}| > |\eta|_{\rm max}$. One may readily verify that the full expression (\ref{eq:result}) is regular in the limit of coincident masses. In this limit, and at late times, we obtain \eq{ \CP{T \phi_{\sigma}(x_1)\phi_{\sigma}(x_2)}^{(2)} = \frac{1}{2} g_f^2 \partial^2_{M^2} G_{\sigma}(x_1,x_2) + O\left((\eta_1\eta_2)^\sigma\right) .
} \subsection{Example: $\phi^3(x)$ interaction} \label{sec:phi3} \begin{figure} \centering \includegraphics{figphi3.pdf} \caption{The $O(g^2(x))$ correction to the time-ordered 2-point function $\CP{T\phi(x_1)\phi(x_2)}$ in a theory with a $g(x)\phi^3(x)$ interaction. Once again we label each leg of the diagram by a distinct mass parameter $\sigma_i$.} \label{fig:phi3} \end{figure} \renewcommand{\thefigure}{\arabic{figure}} In this section we consider a cubic self-interaction \eq{ \mathcal{L}_{\rm int}[\phi] = - \frac{g(x)}{3!}\phi^3(x) , } and compute the $O(g^2)$ correction to the time-ordered 2-point function. The relevant Feynman diagram is shown in Fig.~\ref{fig:phi3}. In spacetime dimension $D\ge 4$ this correction contains ultraviolet divergences, so we will need to regulate our computation and renormalize the theory. We restrict attention to $D \le 6$ for which $\phi^3$-theory is power-counting renormalizable; for these dimensions, and to $O(g^2)$, the counterterms needed for renormalization are \eq{ \label{eq:Lct} \mathcal{L}_{\rm ct}[\phi] = -\frac{(Z_\phi(x)-1)}{2} \nabla_\mu\phi(x)\nabla^\mu\phi(x) - \frac{(Z_M(x)-1)M^2}{2}\phi^2(x) , } with $Z_\phi(x)$ and $Z_M(x)$ given by $Z_i = 1 + O(g^2)$. The renormalization coefficients are position-dependent as a result of our position-dependent coupling $g(x)$. However, no renormalization is required for $D=2,3$. To regulate our computation of the diagram (Fig.~\ref{fig:phi3}) we replace the internal Green's functions with Pauli-Villars regulated Green's functions.
Thus the full $O(g^2)$ correction is given schematically by \eq{ \label{eq:phi32pt} \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} = ({\rm diag}) + ({\rm c.t.}) , } where $({\rm diag})$ is the regulated Feynman diagram given by the expression \eqn{ \label{eq:diag} ({\rm diag}) &=& i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2) \bigg\{ G_{\sigma_1}(y_1,x_1) G_{\sigma_2}(y_2,x_2) G^{\rm reg}_{\sigma_3}(y_1,y_2) G^{\rm reg}_{\sigma_4}(y_1,y_2) \nonumber \\ & & \phantom{i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2)\bigg\{} - W_{\sigma_1}(y_1,x_1) G_{\sigma_2}(y_2,x_2) W^{\rm reg}_{\sigma_3}(y_1,y_2) W^{\rm reg}_{\sigma_4}(y_1,y_2) \nonumber \\ & & \phantom{i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2)\bigg\{} + W_{\sigma_1}(y_1,x_1) W_{\sigma_2}(y_2,x_2) G^{{\rm reg}\,*}_{\sigma_3}(y_1,y_2) G^{{\rm reg}\,*}_{\sigma_4}(y_1,y_2) \nonumber \\ & & \phantom{i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2)\bigg\{} - G_{\sigma_1}(y_1,x_1) W_{\sigma_2}(y_2,x_2) W^{\rm reg}_{\sigma_3}(y_2,y_1) W^{\rm reg}_{\sigma_4}(y_2,y_1) \bigg\} , } and $(\rm c.t.)$ are the counterterms generated by (\ref{eq:Lct}): \eqn{ \label{eq:cts} ({\rm c.t.}) &=& - (Z_\phi(x_2)-1) G_{\sigma_1}(x_1,x_2) \nonumber \\ & & + i M_2^2 \int_y \left(Z_\phi(y) + Z_M(y) - 2 \right) \left[ G_{\sigma_1}(y,x_1) G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) W_{\sigma_2}(y,x_2) \right] .\quad } At the end of our computation we will set the masses to be equal. A useful way to proceed is to make use of the linearization formulae for Bunch-Davies Green's functions: \eq{ \label{eq:lin} H^{\rm reg}_{\sigma_1}(x,y)H^{\rm reg}_{\sigma_2}(x,y) = \int_\mu f(\mu) H_\mu(x,y) . } Here $H_\sigma(x,y)$ and $H^{\rm reg}_\sigma(x,y)$ may be taken to be any Bunch-Davies Green's function (e.g., time-ordered, Wightman, etc.). In addition to $\mu$, the function $f(\mu)$ implicitly depends upon $\sigma_1$, $\sigma_2$, and the spacetime dimension, as well as the collection of Pauli-Villars masses for the two Green's functions. 
To derive an expression for the function $f(\mu)$ it is sufficient to construct the linearization formula for the Euclidean Green's function $\Delta_\sigma(x,y)$. Since the Bunch-Davies Green's functions are given by the Euclidean Green's function with an appropriately chosen prescription for avoiding the cut in the complex $Z$ plane, the extension of the linearization formula to these Green's functions follows immediately. We derive the linearization formulae in Appendix~\ref{app:linearization} following \cite{Marolf:2010nz}; here we will simply state the results. For the time-ordered Green's function we have \eqn{ \label{eq:Glin} G^{\rm reg}_{\sigma_1}(x,y)G^{\rm reg}_{\sigma_2}(x,y) &=& \ell^{2-D} \int_\mu \left(f_{\sigma_1\sigma_2}(\mu)+f^{\rm van}(\mu)\right) G_\mu(x,y) , \quad\quad\quad\quad\quad\quad\quad\quad\quad (D=2,3) \nonumber \\ G^{\rm reg}_{\sigma_1}(x,y)G^{\rm reg}_{\sigma_2}(x,y) &=& \ell^{2-D} \int_\mu \left(f_{\sigma_1\sigma_2}(\mu)+f^{\rm van}(\mu)\right) G_\mu(x,y) - i \ell^{4-D} c_0 \delta(x,y) , \quad\quad (D=4,5) \nonumber \\ G^{\rm reg}_{\sigma_1}(x,y)G^{\rm reg}_{\sigma_2}(x,y) &=& \ell^{2-D} \int_\mu \left(f_{\sigma_1\sigma_2}(\mu)+f^{\rm van}(\mu)\right) G_\mu(x,y) \nonumber \\ & & -i \ell^{4-D} c_0 \delta(x,y) -i \ell^{6-D} c_1 \Box \delta(x,y) , \quad\quad (D=6,7) . } where $f_{\sigma_1\sigma_2}(\mu)$ depends only on $\sigma_1,\sigma_2$ and is independent of the regulator masses. In the complex $\mu$ plane the function $(f_{\sigma_1\sigma_2}(\mu)+ f^{\rm van}(\mu))$ decays exponentially away from the real axis and is analytic in the strip ${\rm Re}\,(\sigma_1+\sigma_2) < {\rm Re}\, \mu$. The contours of integration (\ref{eq:Glin}) and (\ref{eq:Wlin}) lie within the strip ${\rm Re}\,(\sigma_1+\sigma_2) < {\rm Re}\, \mu < 0$. The coefficients $c_0$ and $c_1$ in (\ref{eq:Glin}) are real functions of the Pauli-Villars masses but do not depend on $x,y$ or the integration variable $\mu$. 
These expressions have been organized to make it easy to take the limit of large Pauli-Villars regulator masses. As discussed in the Appendix, in this limit the function $f_{\sigma_1\sigma_2}(\mu)$ remains, $f^{\rm van}(\mu)$ vanishes, and $c_0$ and $c_1$ diverge. The explicit expressions for $f_{\sigma_1\sigma_2}(\mu)$, $f^{\rm van}(\mu)$, $c_0$ and $c_1$ can be found in the Appendix, but we will not need them here. For the Wightman Green's function we have \eq{ \label{eq:Wlin} W^{\rm reg}_{\sigma_1}(x,y)W^{\rm reg}_{\sigma_2}(x,y) = \ell^{2-D} \int_\mu (f_{\sigma_1\sigma_2}(\mu)+ f^{\rm van}(\mu)) W_\mu(x,y) , \quad ({\rm all}\; D) . } This expression is finite when the Pauli-Villars masses are taken to infinity. This reflects the fact that the product of such Wightman functions $W_{\sigma_1}(x,y)\cdots W_{\sigma_n}(x,y)$ is positive/negative frequency with respect to $x$/$y$. Let us start with the simple cases $D=2,3$, where there are no ultraviolet divergences at this order. In this case we may set $Z_M = Z_\phi = 1+O(g^4)$ and immediately take the limit of large regulator masses\footnote{We could also have done the full computation without ever involving regulators. The unregulated Green's functions for $D=2,3$ satisfy the analogues of (\ref{eq:Glin}), (\ref{eq:Wlin}) with $f^{\rm van} =0$.} (which sends $f^{\rm van}(\mu)$ to zero) in the linearization formulae (\ref{eq:Glin}) and (\ref{eq:Wlin}).
The full correction to the 2-point function (\ref{eq:phi32pt}) then becomes \eqn{ \label{eq:phi3tree} & & \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} = \ell^{2-D} \int_\mu f_{\sigma_3\sigma_4}(\mu) \bigg\{ \nonumber \\ & & i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2) \bigg[ G_{\sigma_1}(y_1,x_1) G_{\sigma_2}(y_2,x_2) G_{\mu}(y_1,y_2) - W_{\sigma_1}(y_1,x_1) G_{\sigma_2}(y_2,x_2) W_{\mu}(y_1,y_2) \nonumber \\ & & \phantom{i^2 \int_{y_1} \int_{y_2} g(y_1) g(y_2) \bigg\{ } + W_{\sigma_1}(y_1,x_1) W_{\sigma_2}(y_2,x_2) G^*_{\mu}(y_1,y_2) - G_{\sigma_1}(y_1,x_1) W_{\sigma_2}(y_2,x_2) W_{\mu}(y_2,y_1) \bigg]\bigg\} . \nonumber \\ } The astute reader will recognize the term in braces as the expression for the $O(g^2)$ correction to the 2-point function in the $\phi^2$-theory discussed in \S\ref{sec:phi2}. Inserting our final expression (\ref{eq:result}) for that correction we obtain \eqn{ \label{eq:int} & & \CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)} = \nonumber \\ & & \ell^{2-D} \int_\mu f_{\sigma_3\sigma_4}(\mu) \bigg\{ g^2_f \bigg[ \frac{G_{\sigma_1}(x_1,x_2)}{(M_1^2-M_2^2)(M_1^2-M_\mu^2)} + \frac{G_{\sigma_2}(x_1,x_2)}{(M_2^2-M_1^2)(M_2^2-M_\mu^2)} + \frac{G_{\mu}(x_1,x_2)}{(M_\mu^2-M_1^2)(M_\mu^2-M_2^2)} \bigg] \nonumber \\ & & \phantom{ \ell^{-1} \int_\mu f_{\sigma_1\sigma_2}(\mu) \bigg\{} + T_{3\,\sigma_1\sigma_2\mu}(x_1,x_2) \bigg\} \\ &=& \ell^{2-D} \int_\mu f_{\sigma_3\sigma_4}(\mu) \bigg\{ g^2_f \frac{G_{\mu}(x_1,x_2)}{(M_\mu^2-M_1^2)(M_\mu^2-M_2^2)} + g_f \frac{T_{2\,\mu \sigma_2}(x_1,x_2)}{(M_1^2 - M_\mu^2)} + g_f \frac{T_{2\,\sigma_1 \mu}(x_1,x_2)}{(M_2^2 - M_\mu^2)} \nonumber \\ & & \phantom{\ell^{-1} \int_\mu f_{\sigma_3\sigma_4}(\mu) \bigg\{\;\;} + T_{W\,\sigma_1\sigma_2\mu}(x_1,x_2) + T_{G\,\sigma_1\sigma_2\mu}(x_1,x_2) \bigg\}, } for $x,y \in J^+(\mathcal{R})/\mathcal{R}$. 
To obtain the second line we have inserted the definition of $T_{3\,\sigma_1\sigma_2\mu}(x_1,x_2)$ and noted that terms in the integrand of (\ref{eq:int}) whose only dependence upon $\mu$ is a factor of $1/(M_i^2 - M_\mu^2)$ make no contribution to the integral over $\mu$. This is because $1/(M_i^2 - M_\mu^2)$ contributes only poles to the left of the integration contour. For these terms the integration contour may be closed to the right. But there are no poles contained in the right half-plane, so these integrals vanish. In higher dimensions the only additional complication is that we must take care to cancel the ultraviolet divergences contained in (\ref{eq:diag}) with our available counterterms. Our use of Pauli-Villars regularization as well as our linearization formulae make this procedure quite transparent. Consider first the case of $D=4,5$. After utilizing our linearization formulae (\ref{eq:Glin}) and (\ref{eq:Wlin}) the divergent terms in $\CP{T \phi_{\sigma_1}(x_1)\phi_{\sigma_2}(x_2)}^{(2)}$ are \eq{ + c_0 \ell^{4-D} i \int_y g^2(y) \left[ G_{\sigma_1}(y,x_1) G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) W_{\sigma_2}(y,x_2) \right] . } Comparing this expression to our counterterms (\ref{eq:cts}) we see that these terms are cancelled by setting \eq{ Z_M(x) = 1 - \frac{c_0 \ell^{4-D}}{M_2^2} g^2(x) + O(g^4), \quad Z_\phi(x) = 1 + O(g^4), \quad (D = 4,5) . } In $D=6$ dimensions the divergent terms are \eq{ - c_1 g_f^2 G_{\sigma_1}(x_1,x_2) + \left(c_0 \ell^{-2}+ c_1 M_2^2 \right) i \int_y g^2(y) \left[ G_{\sigma_1}(y,x_1) G_{\sigma_2}(y,x_2) - W_{\sigma_1}(y,x_1) W_{\sigma_2}(y,x_2) \right] , } which may be canceled by setting \eq{ Z_M(x) = 1 - \left(\frac{c_0}{M_2^2\ell^2} + c_1 \right) g^2(x) + O(g^4), \quad Z_\phi(x) = 1 - c_1 g_f^2 + O(g^4) , \quad (D = 6) . } With all divergent terms cancelled by the counterterms we may take the Pauli-Villars masses to infinity, so that $f^{\rm van}(\mu) \to 0$, and then proceed as for $D=2,3$ above. 
For all $2 \le D \le 6$ we find \eqn{ \label{eq:phi3final} \CP{T \phi_{\sigma}(x_1)\phi_{\sigma}(x_2)}^{(2)} &=& \ell^{2-D} \int_\mu f_{\sigma\s}(\mu) \bigg\{ g^2_f \frac{G_{\mu}(x_1,x_2)}{(M_\mu^2-M_\sigma^2)^2} + g_f \frac{T_{2\,\mu \sigma}(x_1,x_2)}{(M_\sigma^2 - M_\mu^2)} + g_f \frac{T_{2\,\sigma \mu}(x_1,x_2)}{(M_\sigma^2 - M_\mu^2)} \nonumber \\ & & \phantom{\ell^{-1} \int_\mu f_{\sigma\s}(\mu) \bigg\{\;\;} + T_{W\,\sigma\s,\mu}(x_1,x_2) + T_{G\,\sigma\s,\mu}(x_1,x_2) \bigg\}, } where we have now taken the limit in which all masses become equal. Note that the units are correct: mass${}^2$ has units $\ell^{-2}$, $g^2(x)$ has units $\ell^{D-6}$, and $G_\sigma(x_1,x_2)$, $T_{2\,\sigma_1\sigma_2}(x_1,x_2)$, $T_{W\,\sigma_1\sigma_2\mu}(x_1,x_2)$, and $T_{G\,\sigma_1\sigma_2\mu}(x_1,x_2)$ each has units $\ell^{2-D}$. The first term in (\ref{eq:phi3final}) gives precisely $\C{T \phi_{\sigma}(x_1)\phi_{\sigma}(x_2)}^{(2)}_{HH,g_f}$, the associated correction to the correlator in the Hartle-Hawking vacuum of the theory with $g=g_f={\rm const.}$ for all time. Each of the remaining terms inside the braces was analyzed in detail in section \ref{sec:phi2}. Choosing the $\mu$ contour to satisfy ${\rm Re}\, \mu \le {\rm Re}\, \sigma$, we see that each such term is bounded by $C(\mu)|\eta_1\eta_2|^\sigma$. Recalling that $C$ depends at most polynomially on $\mu$ while $f_{\sigma\s}$ decays exponentially at large imaginary $\mu$ we see that the integral of these terms over $\mu$ is bounded by $\tilde C |\eta_1\eta_2|^\sigma$ for some constant $\tilde C$; i.e., \eq{ \label{eq:phi3finallate} \CP{T \phi_{\sigma}(x_1)\phi_{\sigma}(x_2)}^{(2)} = g_f^2 \ell^{2-D} \int_\mu f_{\sigma\s}(\mu) \frac{G_{\mu}(x_1,x_2)}{(M_\mu^2-M_\sigma^2)^2} + O\left( (\eta_1\eta_2)^\sigma \right) , } where the first term is precisely the one-loop correction \cite{Marolf:2010nz} $\C{T \phi_{\sigma}(x_1)\phi_{\sigma}(x_2)}^{(2)}_{HH,g_f}$ to the Hartle-Hawking vacuum of the theory with constant $g = g_f$.
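The regularity of this coincident-mass limit can be made transparent: the three-mass bracket of (\ref{eq:result}) is a second divided difference in the squared masses, which tends to $\frac{1}{2}\partial^2_{M^2}$ acting on the propagator as the masses merge. A quick symbolic check of this identity, with $e^x$ as a stand-in for the mass-squared dependence (our own illustration, not part of the computation above):

```python
import sympy as sp

# The bracket  f(a)/((a-b)(a-c)) + f(b)/((b-a)(b-c)) + f(c)/((c-a)(c-b))
# is the second divided difference of f at (a, b, c); as b, c -> a it
# tends to f''(a)/2.  Here a, b, c play the role of the squared masses
# and f = exp is a stand-in for the propagator's mass dependence.
a, e = sp.symbols('a epsilon')
f = sp.exp
b, c = a + e, a + 2 * e

S = (f(a) / ((a - b) * (a - c))
     + f(b) / ((b - a) * (b - c))
     + f(c) / ((c - a) * (c - b)))

lim = sp.limit(sp.simplify(S), e, 0)
print(lim)  # exp(a)/2
```

The apparent poles at coincident masses thus cancel pairwise, leaving exactly the $\frac{1}{2}\partial^2_{M^2}G_\sigma$ structure quoted at the end of the previous subsection.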
We emphasize that the $O\left( (\eta_1\eta_2)^\sigma \right)$ term is finite at coincidence ($x_1=x_2$). It follows immediately that the one point function of the composite operator $\phi^2(x_1)$ in our state can again be written as the sum of two terms, the first being its (finite) value in the Hartle-Hawking state for constant $g$ and the other decaying like $\eta_1^{2\sigma}$. In particular, defining $\phi^2(x_1)$ via any de Sitter-invariant regulator gives a result of the form \begin{equation} \label{eq:phi2res} \CP{\phi_{\sigma}^2(x_1)}^{(2)} = {\rm const.} + O((\eta_1)^{2\sigma}). \end{equation} \section{Discussion} \label{sec:disc} We have shown that, in the time-dependent model of section \ref{sec:time}, the two-point function at one-loop level approaches that of the constant coupling Hartle-Hawking state in the limit where its arguments are evaluated at late times $\eta_1,\eta_2$. A key feature of this model is a time-dependent cubic coupling $g(x) \phi^3$ that vanishes before some fixed initial (global) time, which we may call $\eta_0$. We also chose $g(x)$ to be a constant ($g_f$) to the future of some Cauchy surface. The state was taken to coincide with the (free) Bunch-Davies vacuum in the region where $g(x)=0$, so that the state in the region with $g=g_f$ is determined by the particular way in which the coupling turns on. Since we required only that $g(x)$ be smooth in this transition region, this scenario describes a wide range of possible states. Our results were established by working entirely in Lorentz signature and in global coordinates. Our main result is that the difference between our two-point function and that of the $g=g_f$ Hartle-Hawking state is negligible for $\eta_1,\eta_2 \gg |\eta_0|$. In other words, while the discrepancy may be significant (and perhaps even large!)
for some period of time, it decreases rapidly once the universe enters its expanding phase and the size of the spheres becomes significantly larger than they were when the coupling turned on. This is precisely what one would expect based on the free theory, in which perturbations rapidly disperse: after this time, their effect on local correlators decays rapidly. This behavior was shown in \cite{Hollands:2010pr,Marolf:2010nz} to hold to all orders in perturbation theory for a dense set of states; our results here indicate that this dense domain allows for physically interesting initial conditions. The above model was recently considered by Krotov and Polyakov \cite{Krotov:2010ma}. While they characterized the model as ``unstable,'' we remind the reader that their technical results are completely consistent with ours. As stated in their paper (below their equation (17)), their analysis applies in the regime $\eta_0 \ll - |\eta_1|, -|\eta_2|$ (our notation); i.e., in precisely the complementary regime to that studied here. As noted in the introduction, the divergence they find as $\eta_0 \rightarrow -\infty$ is not only consistent with, but in fact is naturally expected from, the results found here and in \cite{Hollands:2010pr,Marolf:2010nz}. Despite various technical features of our analysis, we see that the approach to the Hartle-Hawking state at late times followed from a few simple ingredients. First, for quadratic perturbations $g(x)\phi^2$, one can write the full two-point function at each order as a sum of the Hartle-Hawking two-point function and a `boundary term' associated with the transition region (see (\ref{eq:T1b}), (\ref{eq:A3})). These manipulations involve only integrations by parts and will clearly hold in a general spacetime which is asymptotically de Sitter to the future.
The rapid expansion of the universe at late times then i) causes any given mode to decay as a power law in $\eta$ and ii) implies that the modes which have not yet decayed at some late time $\eta$ correspond to very high $L$. As a result, at the early time $\eta_0$ when $g(x)$ was time dependent, these high $L$ modes were very high frequency. Since quadratic perturbations do not lead to loops, the Green's functions that appear in this boundary term are all positive frequency, at least at the key step (see the discussion of $T_{G\,\sigma_1\sigma_2,\sigma_3}(x_1,x_2)$ surrounding (\ref{eq:TG2})). As a result, in the boundary term these modes appear with coefficients involving what is effectively the Fourier transform of $g(x)$ at large momentum, which vanishes rapidly since $g(x)$ is smooth. Thus the effect of the boundary term decays with time, leaving only the Hartle-Hawking term in the two-point function. For the cubic perturbation $g(x) \phi^3$, we used the linearization formulae (\ref{eq:lin}) to make renormalization straightforward and to reduce the one-loop calculation to the quadratic-perturbation calculation described above. It is clear that analogous results follow immediately in any context where similar linearization formulae hold for the associated free Green's functions. While such formulae are not obvious for general fields in general spacetimes, they must hold for conformally coupled free fields in general spherically symmetric spacetimes (which are necessarily conformal to dS), at least after inserting powers of the appropriate conformal factor $\Omega(x)$. Note that this argument requires conformal invariance only for the free theory about which we perturb\footnote{Recall that conformally coupled free fields correspond to $\sigma = -(D-2)/2$ in de Sitter.} and that there is no restriction on the interaction. The only impact of these extra factors of $\Omega(x)$ is to provide what is in effect an extra time-dependence in the coupling. 
Thus, to the extent that one can study a theory of general spacetime-dependent mass $m^2(x)$ by perturbing the free conformally coupled theory by a quadratic perturbation $g(x) \phi(x)^2$, our cubic results extend for all $M^2 > 0$ to any spherically symmetric spacetime which is asymptotically de Sitter in the far future. It is useful to comment on the special case where the spacetime is taken to be the Einstein Static Universe (ESU) $S^3 \times {\mathbb R}$ to the past of some $S^3$. Since the ESU is static and spatially compact, we can take a limit where the coupling $g(x)$ is turned on adiabatically slowly (after which it is $g=g_f = {\rm const.}$) and in the distant past. We are then guaranteed that, in the ESU region, the state is given by the interacting ESU vacuum. As a result, subject to the same qualifiers as above, we may consider the theory with $g=g_f= {\rm const.}$ for all time. Taking the state to be the (interacting) ESU vacuum at early times, we see that at late times correlators again approach those of the de Sitter Hartle-Hawking vacuum. This result can then be further generalized to either $n$-particle or thermal states in the ESU, all of which approach the same de Sitter Hartle-Hawking vacuum at late times. In summary, we have shown at the one-loop level that a wide class of physically-motivated initial conditions lead to two-point functions which approach that of the Hartle-Hawking state at late times. This suggests that states defined by general physical initial conditions lie in the dense set of states where the cosmic quantum no hair theorems of \cite{Hollands:2010pr,Marolf:2010nz} apply. We expect that this can be explicitly checked by extending the calculations reported here to all orders in perturbation theory. After all, the techniques used above were essentially Lorentz-signature versions of the Euclidean methods applied in \cite{Marolf:2010zp,Marolf:2010nz}.
So by adapting further such techniques to Lorentz signature we expect to obtain all-orders results analogous to those found in \cite{Hollands:2010pr,Marolf:2010nz}. Some readers may feel a lingering uneasiness with these results due to the well-established infrared divergences (see e.g. \cite{Sasaki:1992ux,Polyakov:2007mm,Polyakov:2009nq,Higuchi:2009ew}) associated with in-out perturbation theory in global de Sitter. In particular, as noted in e.g. \cite{Marolf:2010zp}, at sufficient loop orders such divergences occur in the future expanding region even for very large masses. This certainly indicates that {\it some} quantity is becoming large in the infrared. However, the key point to realize is that the quantity need not be local. In particular, we suggest that it is merely the operator relating the (free) Bunch-Davies vacuum $|0\rangle$ to the interacting Hartle-Hawking state $|HH\rangle$ which becomes large at late times. This operator involves integrals over an entire $S^3$ at each time and can become large as the $S^3$ grows in size. In Minkowski space, a corresponding IR divergence is forbidden due to the exponential decay of massive propagators at large spacelike separations. But since the volume element also grows exponentially in dS, such IR divergences can occur. Indeed, the operator relating $|0\rangle$ and $|HH \rangle$ is closely related to the vacuum to $n$-particle amplitudes of in-out perturbation theory noted to diverge above. This stands in sharp contrast to the good IR behavior of (unintegrated) $n$-point functions as established here and in \cite{Marolf:2010zp,Hollands:2010pr,Marolf:2010nz}. \vspace{2cm} \noindent{\bf Acknowledgements:} It is a pleasure to thank Atsushi Higuchi, Viatcheslav Mukhanov, Alexander Polyakov, Albert Roura, and Richard Woodard for useful discussions. DM and IM are supported in part by the US National Science Foundation under NSF grant PHY08-55415 and by funds from the University of California.
\subsection*{Acknowledgments} The authors thank Marc Abrahams and Nigel Snoad for helpful discussion, and Prof. Per Bak for providing a continual source of inspiration. CRS thanks Prof. David Griffeath and the undergraduate students at Madison for providing financial support, and Prof. Yuri Klimontovich, whose book (\cite{Klimontovich}) first alerted him to the possibilities of simple models of evolution by physicists. WAT has already thanked an undisclosed set of people of analytically determined size, and expects eventually to be thanked by others in kind due simply to expansionary propagation of that original thankfulness (described in a subsequent paper, now in preparation).
\section{Introduction} \setcounter{equation}{0} There is currently much interest in quantum spin systems which exhibit frustration. This has been stimulated in particular by the work on the magnetic properties of the cuprates which become high T$_c$ superconductors when doped. The frustration in these 2D materials arises because of antiferromagnetic exchange across the diagonals of the squares as well as along the edges. Other 2D frustrated systems are the triangular and Kagom\'e lattices. In this paper we study a simple 1D spin system which is also frustrated for some range of its parameters. This is a spin-1/2 model with isotropic nearest and next-nearest neighbour exchange given by \begin{equation} {\cal H}=\cos \omega \sum_l{\bf s}_l\cdot{\bf s}_{l+1}+\sin \omega \sum_l{\bf s}_l\cdot{\bf s}_{l+2}, \end{equation} \noindent where the sum over $l$ is over all $N$ atoms with periodic boundary conditions. We shall also use the notation $J_1=\cos \omega $ and $J_2=\sin \omega $. The $T=0$ phase diagram of this model is given in Fig. 1. The antiferromagnetic (AF) phase extends over the region $-\pi /2<\omega<\omega _{MG}$, where $\omega _{MG}=\tan ^{-1}(1/2)$. The point $\omega _{MG}$ corresponds to the Majumdar-Ghosh (MG) Hamiltonian (Majumdar and Ghosh 1969a,b. See also Haldane 1982), at which the ground state consists of dimerised singlets with a gap to the excited states. In a recent paper by two of the present authors (Zeng and Parkinson 1995), a dimer variational wave function was proposed which is exact at $\omega _{MG}$ and gives good results for a large range around this point. Much of the recent work on this system has focused on the transition from a gapless `spin-liquid' state which is known exactly at $\omega =0$ to a dimerised regime with a gap which is also known exactly at $\omega =\omega_{MG}$. The transition occurs at $J_2/J_1=\tan\omega_c=0.2411(1)$ ($\omega_c=0.2366(1)$) (Okamoto and Nomura 1992).
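The Majumdar-Ghosh statements above are easy to check by exact diagonalization of small rings. The following minimal sketch (our own illustration: dense matrices and an $N=8$ ring, chosen purely for speed) verifies that at $\omega_{MG}=\tan^{-1}(1/2)$ the ground-state energy per site of (1.1) takes the dimer value $-\frac{3}{8}\cos\omega_{MG}$:

```python
import numpy as np
from math import atan, cos, sin

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site ring."""
    m = np.eye(1, dtype=complex)
    for j in range(n):
        m = np.kron(m, op if j == i else id2)
    return m

def hamiltonian(omega, n):
    """H = cos(w) sum_l s_l.s_{l+1} + sin(w) sum_l s_l.s_{l+2}, periodic."""
    h = np.zeros((2**n, 2**n), dtype=complex)
    for coupling, d in ((cos(omega), 1), (sin(omega), 2)):
        for l in range(n):
            for s in (sx, sy, sz):
                h += coupling * site_op(s, l, n) @ site_op(s, (l + d) % n, n)
    return h

n = 8
omega_mg = atan(0.5)                   # Majumdar-Ghosh point, J2 = J1/2
e0 = np.linalg.eigvalsh(hamiltonian(omega_mg, n)).min()
print(e0 / n, -0.375 * cos(omega_mg))  # both approx -0.33541
```

Each nearest-neighbour singlet contributes $-\frac{3}{4}J_1$ and all other bonds average to zero in the dimer product state, which is why the ring energy per site is exactly $-\frac{3}{8}J_1$.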
The same authors have also studied the phase diagram in the vicinity of this transition in the anisotropic version of this model (Nomura and Okamoto, 1993,1994). The frustrated regime is given by $\omega _{MG}<\omega <\omega _{FF}$, where $\omega _{FF}=\pi +\tan ^{-1}(-1/4)=2.8966$ is the point at which a first order transition to a ferromagnetic regime occurs. This was first studied numerically by Tonegawa and Harada (1987) who found evidence of a change in the position of the peak of the correlation function as a function of $\omega $. Here we shall use a variety of methods to investigate the whole of the frustrated regime, including $\omega >\pi /2$. It will be useful to compare our results with those of the classical Hamiltonian. In this regime the minimum classical energy is obtained by forming a spiral with a pitch angle $\theta $ between neighbouring spins where $\theta =\cos ^{-1}(-J_1/4J_2)$. The classical boundary with the AF phase is at $\omega _C=\tan ^{-1}(1/4)=0.2450$. The real-space periodicity thus increases monotonically from 2 at the AF boundary to infinity at the ferromagnetic boundary. \section{The CCM formalism} \setcounter{equation}{0} In a recent paper (Farnell and Parkinson, 1994, referred to as I), the coupled-cluster method (CCM) was applied by two of the present authors to the antiferromagnetic (AF) phase. For a description of the CCM applied to spin systems see Bishop {\it et al. }(1991) and the references in I. In the AF phase the natural choice of a model state for the CCM is the N\'eel state used in I. \noindent For the frustrated regime, however, this model state is physically unrealistic and the CCM based upon it gives poor results. One possible choice is suggested by the fact that when $\omega =\pi /2$ we have $J_1=0 $ and $J_2=1$, so the Hamiltonian (1.1) describes two uncoupled antiferromagnetic Heisenberg chains.
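The classical pitch angle and periodicity are simple to tabulate numerically. In the sketch below (our own illustration) we take the ferromagnetic boundary on the branch of $\tan^{-1}(-1/4)$ lying in $(\pi/2,\pi)$, and clip the argument of $\cos^{-1}$ to $[-1,1]$ so that the N\'eel ($\theta=\pi$) and ferromagnetic ($\theta=0$) endpoints are included:

```python
from math import acos, atan, cos, sin, pi

def pitch_angle(omega):
    """Classical pitch angle theta = arccos(-J1/(4 J2)), with the argument
    clipped to [-1, 1] so the Neel and ferromagnetic endpoints are included."""
    j1, j2 = cos(omega), sin(omega)
    x = -j1 / (4 * j2)
    return acos(max(-1.0, min(1.0, x)))

omega_af = atan(0.25)       # classical AF boundary: theta = pi, period 2
omega_ff = pi - atan(0.25)  # classical FM boundary: theta -> 0, period -> infinity
for w in (omega_af, 1.0, pi / 2, 2.0, omega_ff):
    theta = pitch_angle(w)
    period = 2 * pi / theta if theta > 1e-12 else float('inf')
    print(w, theta, period)
```

Note in particular that at $\omega=\pi/2$ the pitch angle is $\pi/2$, i.e. a real-space period of 4, consistent with two interpenetrating N\'eel chains.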
At this point a `double-N\'eel' model state with a periodicity of 4 unit cells would be appropriate and would lead to precisely the same results as for the single chain $(J_1=1,J_2=0)$ with suitable scaling factors. We did carry out CCM calculations based on this model state and obtained reasonable results for a range of $\omega $ around $\pi /2$. These results will be described briefly later. Another possible model state is suggested by the classical ground state in this regime. For this reason we have performed CCM calculations based on a spiral model state in which the pitch angle $\theta $ is taken as a variational parameter. A necessary condition to perform CCM calculations is the existence of a complete set of mutually commuting creation operators so that an arbitrary state of the system can be constructed starting from the model state. We obtain these as follows. The spiral model state is taken to have all spins aligned in the $XZ$ plane with the $n$th spin making an angle $n\theta $ with the $Z$ axis. We then introduce local axes such that each atom is in the quantum spin state $\mid ->$. We use the usual notation $\mid \pm >$ for the states with eigenvalues of $s^z$ equal to $\pm {\frac 12}$. Using the local axes the Hamiltonian (1.1) becomes \[ {\cal H}=J_1/4\sum_i\{[\cos (\theta )-1](s_i^{-}s_{i+1}^{-}+s_i^{+}s_{i+1}^{+})+[\cos (\theta )+1](s_i^{-}s_{i+1}^{+}+s_i^{+}s_{i+1}^{-}) \] \[ +2\sin (\theta )(s_i^{-}+s_i^{+})(s_{i+1}^z-s_{i-1}^z)+4\cos (\theta )s_i^zs_{i+1}^z\} \] \[ +J_2/4\sum_i\{[\cos (2\theta )-1](s_i^{-}s_{i+2}^{-}+s_i^{+}s_{i+2}^{+})+[\cos (2\theta )+1](s_i^{-}s_{i+2}^{+}+s_i^{+}s_{i+2}^{-}) \] \begin{equation} +2\sin (2\theta )(s_i^{-}+s_i^{+})(s_{i+2}^z-s_{i-2}^z)+4\cos (2\theta )s_i^zs_{i+2}^z\}. \end{equation} This equation contains terms which have an odd number of spin-flips multiplied by a coefficient $\sin (\theta )$ or $\sin (2\theta )$.
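The bond structure of this transformed Hamiltonian can be verified numerically on a single pair of sites. The sketch below is our own check (relative rotation taken about the $y$ axis; the overall sign of the $\sin\theta$ term depends on the orientation convention) that the rotated-frame dot product reproduces the $[\cos\theta\mp 1]$ coefficients above:

```python
import numpy as np

# Single-bond check: with site 2's local axes rotated by theta about y
# relative to site 1,
#   s1 . s2 = cos(t)(XX + ZZ) + YY + sin(t)(XZ - ZX),
# whose raising/lowering form carries the [cos(t) -/+ 1] coefficients
# appearing in the transformed Hamiltonian.
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
sp_ = sx + 1j * sy  # s^+
sm_ = sx - 1j * sy  # s^-

t = 0.7
ct, st = np.cos(t), np.sin(t)

# dot product with site-2 operators expressed in the rotated local frame
lhs = (np.kron(sx, ct * sx + st * sz)
       + np.kron(sy, sy)
       + np.kron(sz, -st * sx + ct * sz))

# the per-bond raising/lowering form
rhs = (0.25 * ((ct - 1) * (np.kron(sm_, sm_) + np.kron(sp_, sp_))
               + (ct + 1) * (np.kron(sm_, sp_) + np.kron(sp_, sm_)))
       + ct * np.kron(sz, sz)
       + st * (np.kron(sx, sz) - np.kron(sz, sx)))

print(np.allclose(lhs, rhs))  # True
```

Summing the per-bond $\sin\theta$ pieces over the ring and shifting the site index reproduces the $(s_{i+1}^z-s_{i-1}^z)$ form used in the displayed equation.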
By symmetry the ground-state energy $E_g$ will be an even function of $\theta $, which suggests that these terms should not contribute to $E_g$. We have confirmed explicitly that this is correct for the CCM approximation scheme described in the following section, and for clarity we shall omit these terms from ${\cal H}$ from now on. \subsection{Approximation schemes.} \setcounter{equation}{0} We shall work with Pauli spin operators $\sigma _i^\alpha $, related to the spin angular momentum operators in the usual way: $\sigma _i^z=2s_i^z$ and $\sigma _i^{\pm }=s_i^{\pm }$. These definitions apply to all sites as there is no partition into different sublattices in this scheme. The Hamiltonian of Eq.(1.1) becomes \[ {\cal H}=J_1/4\sum_i\{[\cos (\theta )-1] (\sigma _i^{-}\sigma_{i+1}^{-}+\sigma _i^{+}\sigma _{i+1}^{+})+ [\cos (\theta )+1](\sigma_i^{-}\sigma _{i+1}^{+} +\sigma _i^{+}\sigma _{i+1}^{-}) \] \[ +\cos (\theta )\sigma _i^z\sigma _{i+1}^z\}+J_2/4\sum_i\{[\cos (2\theta )-1](\sigma _i^{-}\sigma _{i+2}^{-}+\sigma _i^{+}\sigma _{i+2}^{+}) \] \begin{equation} +[\cos (2\theta )+1](\sigma _i^{-}\sigma _{i+2}^{+}+ \sigma _i^{+}\sigma_{i+2}^{-})+ \cos (2\theta )\sigma _i^z\sigma _{i+2}^z\}. \end{equation} In the CCM the true ground state is written \begin{equation} \mid \Psi >=e^S\mid \Phi >. \end{equation} \noindent The CCM correlation operator $S$ is constructed entirely out of creation operators with respect to the model state, i.e. out of a sum of terms containing all possible $C_I^{+}$, where $C_I^{+}$ is a product of creation operators from $\{\sigma _i^{+}\}$ consistent with the conserved quantities. The Hamiltonian of Eq.(2.1) contains only terms which involve an even number of spin flips. This means that all terms in $e^S$ and hence in $S$ should only involve even numbers of $\sigma ^{+}$ operators. Note that this would not be true had the $\sin (\theta )$ and $\sin (2\theta )$ terms not been neglected, and this point is considered further below.
We shall use the following approximation schemes, all of which were described in I. \noindent 1) Full SUB2. In this scheme $S$ includes all possible products of two spin-flip operators: \begin{equation} S={\frac 12}\sum_i\sum_rb_r\sigma _i^{+}\sigma _{i+r}^{+}, \end{equation} \noindent where $i$ runs over all $N$ sites and $r$ is a positive or negative integer with $\mid r\mid \le N/2$. By symmetry $b_{-r}=b_r$. \noindent 2) SUB2-3. This is a subset of full SUB2 in which all $b_r$ are set to zero except $b_{\pm 1}$ and $b_{\pm 2}$: \begin{equation} S=b_1\sum_i\sigma _i^{+}\sigma _{i+1}^{+}+b_2\sum_i\sigma _i^{+}\sigma _{i+2}^{+}. \end{equation} Using the same notation as in I we calculate the similarity transform with respect to $S$ of the spin operators. For example \begin{equation} \tilde \sigma _i^{+}=e^{-S}\sigma _i^{+}e^S. \end{equation} \noindent Using these the transformed Hamiltonian $\tilde {{\cal H}}$ can be obtained. Operating on the ground state Schr\"odinger equation \begin{equation} \tilde {{\cal H}}\mid \Phi >=E_g\mid \Phi > \end{equation} with $<\Phi \mid $ then gives the following expression for the ground-state energy per spin in either approximation: \begin{equation} E_g/N=J_1/4\{\cos (\theta )+[\cos (\theta )-1]b_1\}+J_2/4\{\cos (2\theta )+[\cos (2\theta )-1]b_2\}. \end{equation} To find $b_1$ and $b_2$ we obtain a set of coupled non-linear equations for the coefficients retained in each of the approximation schemes by operating on Eq.(2.6) with $<\Phi \mid C_I$, where $C_I$ is the Hermitian conjugate of one of the strings of creation operators (combinations of $\sigma _i^{+}$) present in $S$. Lastly in this section we note that if odd numbers of spin flips had been allowed there would be a term in $S$ of the form \[ a\sum_i\sigma _i^{+}. \] \noindent We have performed calculations in the SUB2-3 approximation in which the extra $\sin (\theta )$ and $\sin (2\theta )$ terms were retained in the Hamiltonian.
In this case $a=0$ is the only physically reasonable solution, and the extra terms give zero contribution to the ground-state energy. \section{The Coupled Non-linear Equations.} \setcounter{equation}{0} Using the $S$ given by Eq.(2.4), we operate on Eq.(2.6) with $<\Phi \mid \sum_i\sigma _i^{-}\sigma _{i+t}^{-}$ and obtain the full SUB2 equations \[ J_1\sum_\rho (1-\delta _{r,0})\{A_1\delta _{r,\rho }+B_1b_r+2[\cos (\theta )+1]b_{r+\rho }+[\cos (\theta )-1]\sum_sb_{r+s+\rho }b_s\} \] \begin{equation} +J_2\sum_\delta (1-\delta _{r,0})\{A_2\delta _{r,\delta }+B_2b_r+2[\cos (2\theta )+1]b_{r+\delta }+[\cos (2\theta )-1]\sum_sb_{r+s+\delta }b_s\}=0, \end{equation} \noindent where \begin{equation} A_1=[\cos (\theta )-1](1+2b_1^2)+4b_1\cos (\theta ), \end{equation} \begin{equation} A_2=[\cos (2\theta )-1](1+2b_2^2)+4b_2\cos (2\theta ), \end{equation} \begin{equation} B_1=-4\cos (\theta )+4[1-\cos (\theta )]b_1, \end{equation} \begin{equation} B_2=-4\cos (2\theta )+4[1-\cos (2\theta )]b_2, \end{equation} \noindent with $\rho =\pm 1$, $\delta =\pm 2$ and $s$ any positive or negative integer. The solution of Eq.(3.1) is given in section 4. For the SUB2-3 approximation scheme Eq.(3.1) reduces to the pair of coupled non-linear equations \[ J_1\{[\cos (\theta )-1](1+2b_2^2-3b_1^2)-4b_1\cos (\theta )+2b_2[\cos (\theta )+1]\} \] \begin{equation} +J_2\{[1-\cos (2\theta )]4b_1b_2-8b_1\cos (2\theta )+2b_1[\cos (2\theta )+1]\}=0 \end{equation} and \[ J_1\{[1-\cos (\theta )]4b_1b_2-8b_2\cos (\theta )+2b_1[\cos (\theta )+1]\} \] \begin{equation} +J_2\{[\cos (2\theta )-1](1+2b_1^2-3b_2^2)-4b_2\cos (2\theta )\}=0. \end{equation} Eqs.(3.6) and (3.7) can be solved numerically and hence $E_g/N$ obtained in the SUB2-3 approximation for a given $\theta $. Finally $\theta $ is varied to find a minimum value for $E_g/N$. The results for $\theta $ as a function of $\omega $ are shown in Fig. 2.
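The numerical procedure just described lends itself to a short sketch: solve the pair of coupled SUB2-3 equations above for $b_1$, $b_2$ at fixed $\theta$, then evaluate the energy expression of section 2. The Newton iteration, finite-difference Jacobian and starting guess below are our own choices, not part of the paper, and we assume the parametrisation $J_1=\cos\omega$, $J_2=\sin\omega$ implied by the $\tan\omega$ boundaries quoted earlier.

```python
import math

def sub23_equations(b1, b2, theta, j1, j2):
    """Residuals of the two coupled SUB2-3 equations at fixed theta."""
    c, c2 = math.cos(theta), math.cos(2.0 * theta)
    f1 = (j1 * ((c - 1.0) * (1.0 + 2.0 * b2**2 - 3.0 * b1**2)
                - 4.0 * b1 * c + 2.0 * b2 * (c + 1.0))
          + j2 * ((1.0 - c2) * 4.0 * b1 * b2 - 8.0 * b1 * c2
                  + 2.0 * b1 * (c2 + 1.0)))
    f2 = (j1 * ((1.0 - c) * 4.0 * b1 * b2 - 8.0 * b2 * c
                + 2.0 * b1 * (c + 1.0))
          + j2 * ((c2 - 1.0) * (1.0 + 2.0 * b1**2 - 3.0 * b2**2)
                  - 4.0 * b2 * c2))
    return f1, f2

def solve_sub23(theta, j1, j2, b=(0.1, 0.3), tol=1e-12):
    """Newton iteration with a finite-difference Jacobian."""
    b1, b2 = b
    h = 1e-7
    for _ in range(100):
        f1, f2 = sub23_equations(b1, b2, theta, j1, j2)
        if abs(f1) + abs(f2) < tol:
            break
        j11 = (sub23_equations(b1 + h, b2, theta, j1, j2)[0] - f1) / h
        j12 = (sub23_equations(b1, b2 + h, theta, j1, j2)[0] - f1) / h
        j21 = (sub23_equations(b1 + h, b2, theta, j1, j2)[1] - f2) / h
        j22 = (sub23_equations(b1, b2 + h, theta, j1, j2)[1] - f2) / h
        det = j11 * j22 - j12 * j21
        b1 -= (f1 * j22 - f2 * j12) / det
        b2 -= (f2 * j11 - f1 * j21) / det
    return b1, b2

def energy_per_spin(theta, j1, j2):
    """Ground-state energy per spin in the SUB2-3 approximation."""
    b1, b2 = solve_sub23(theta, j1, j2)
    c, c2 = math.cos(theta), math.cos(2.0 * theta)
    return j1 / 4.0 * (c + (c - 1.0) * b1) + j2 / 4.0 * (c2 + (c2 - 1.0) * b2)

# Sanity check: at omega = pi/2 (J1 = 0, J2 = 1) and theta = pi/2 the
# equations decouple and give b1 = 0, b2 = 1/3, E/N = -5/12 by hand.
print(solve_sub23(math.pi / 2, 0.0, 1.0))
print(energy_per_spin(math.pi / 2, 0.0, 1.0))
```

Scanning $\theta$ and keeping the minimum of `energy_per_spin` then reproduces the variational step used for Fig. 2.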
We observe that the value of $\theta $ obtained by this method remains close to $\pi /2$ over a much wider range of $\omega $ than in the classical calculation. We mentioned earlier that calculations based on a `double-N\'eel' model state have been carried out. As can now be easily understood, the results were in good agreement with the ones based on the spiral model state over quite a wide range of $\omega $ around $\pi /2$. The results for the ground-state energy per spin are shown in Fig. 3 and are compared with the values obtained by direct diagonalisation of a chain of 20 spins, the results of spin-wave theory (SWT), and also with a `classical' result which is the expectation value of the Hamiltonian in the classical ground state. The exact results at $\omega =\omega _{MG}$ and $\omega =\pi $ are also shown. The full SUB2 equations can be solved numerically by first performing a Fourier transform as in I. Details are given in Appendix 1. The results are similar to the SUB2-3 results except for the existence of `terminating points', which are also shown on the figures. \section{DMRG study of the periodicity} \setcounter{equation}{0} We next turn to the density matrix renormalisation group (DMRG) method in order to perform a numerical study of the periodicity which can be compared with the CCM results discussed above. We achieve this by accurately calculating the position of the peak of the Fourier transformed ground state correlation function (Bursill {\em et al} 1995). \subsection*{The DMRG method} The DMRG was introduced in a series of papers by White and coworkers (White and Noack 1992, White 1992 and 1993) and a highly successful application to the spin-1 antiferromagnetic chain (White and Huse 1993) established the DMRG as the method of choice for studying the low energy physics of quantum lattice systems in one dimension.
Efficient algorithms for calculating low-lying energies and correlation functions of spin chains are described in great detail in (White 1993) so we will only briefly describe the method here. We restrict our discussion to the infinite lattice algorithm (White 1993) which was used in our calculations. The DMRG is an iterative, truncated basis procedure whereby a large chain (or superblock) is built up from a single site by adding a small number of sites at a time. At each stage the superblock consists of system and environment blocks (determined from previous iterations) in addition to a small number of extra sites. Also determined from previous iterations are the matrix elements of various operators such as the block Hamiltonians and the spin operators for the sites (at the end(s) of the blocks) with respect to a truncated basis. Tensor products of the states of the system block, the environment block and the extra sites are then formed to provide a truncated basis for the superblock. The ground state $\left| \psi \right\rangle $ (or other targeted state) of the superblock is determined by a sparse matrix diagonalization algorithm. At this point, correlation functions, local energies and other expectation values are calculated with respect to $\left|\psi\right>$. Next, a basis for an augmented block, consisting of the system block and a specified choice of the extra sites, is formed from tensor products of system block and site states. The augmented block becomes the system block in the next iteration. However, in order to keep the size of the superblock basis from growing, the basis for the augmented block is truncated. We form a density matrix by projecting $\left|\psi\right>\left<\psi\right|$ onto the augmented block which we diagonalise with a dense matrix routine. 
We retain the {\em most probable} eigenstates (those with the largest eigenvalues) of the density matrix in order to form a truncated basis for the augmented block that is around the same size as the system block basis. Matrix elements for the Hamiltonian and active site operators, together with any other operators that are required for, say, correlation functions are then updated. The environment block used for the next iteration is usually chosen to be a reflected version of the system block. The initial system and environment blocks are chosen to be single sites. The accuracy and computer requirements of the scheme are fixed by $n_{\mbox{\protect\scriptsize s}}$, the number of states retained per block (of good quantum numbers) at each iteration. $n_{\mbox{\protect\scriptsize s}}$ determines the truncation error, which is the sum of the eigenvalues of the density matrix corresponding to states which are shed in the truncation process. The error in quantities such as the ground state energy scales linearly with the truncation error (White and Huse 1993). \subsection*{Application of the DMRG to the frustrated spin-$1/2$ chain} We have applied the infinite lattice DMRG algorithm to (1.1) using a number of superblock configurations and boundary conditions. All the interactions (intrablock, interblock and superblock Hamiltonians) commute with the total $z$ spin $S_{\mbox{\protect\scriptsize T}}^z\equiv \sum_iS_i^z$, so $S_{\mbox{\protect\scriptsize T}}^z$ is a good quantum number which can be used to block diagonalize the system, environment and super blocks. For even numbers of sites, the ground state of the superblock $\left| \psi \right\rangle $ is a singlet with zero total spin so we only need to consider superblock states with $S_{\mbox{\protect\scriptsize T}}^z=0$. We found that the most CPU-efficient configuration was the standard open-ended superblock of the form system-site-site-environment (White 1993).
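The density-matrix truncation step described above can be sketched in a few lines. Here a random normalised state stands in for the superblock ground state (in a real run $\psi$ comes from the sparse diagonalization, and its density-matrix spectrum decays far more rapidly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Superblock ground state written as a matrix psi[i, j] over tensor-product
# states |i>_system |j>_environment.
dim_sys, dim_env = 64, 64
psi = rng.normal(size=(dim_sys, dim_env))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the augmented (system) block.
rho = psi @ psi.T

# Keep the n_s most probable eigenstates; the truncation error is the
# summed weight of the states that are shed.
n_s = 16
w, v = np.linalg.eigh(rho)   # eigenvalues in ascending order
kept = v[:, -n_s:]           # truncated basis for the augmented block
truncation_error = w[:-n_s].sum()

# The eigenvalues are probabilities: they sum to one, and the truncation
# error lies between 0 and 1.
print(w.sum(), truncation_error)
```

For a random state the discarded weight is large; for ground states of gapped chains the spectrum falls off rapidly with $n_{\mbox{\protect\scriptsize s}}$, which is what makes the truncation accurate.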
As mentioned, in applying the DMRG to (1.1), we are concerned with the correlation function \begin{equation} C_{jl}\equiv \left< S_{j}^{z}S_{l}^{z} \right> \end{equation} and hence its Fourier transform \begin{equation} \tilde{C}(q)= \frac{1}{V}\sum_{jl}C_{jl}e^{iq(j-l)}. \end{equation} We are particularly interested in $q^{*}$, the value of $q$ where $\tilde{C}(q)$ has its peak. This leads to a natural (working) identification of the ground state periodicity with $2\pi/q^{*}$ which was given in (Bursill {\em et al} 1995), where another frustrated spin model, the spin-1 model with bilinear and biquadratic exchange, was studied. In practice, $C_{jl}$ is calculated with $j$ and $l$ approximately equidistant from the centre of the superblock and far from the ends of the block so as to avoid end effects. In forming $\tilde C(q)$ we calculate $C_{jl}$ for $0\leq |j-l|\leq 60$. The algorithm is iterated until these quantities converge. We test the algorithm by exactly calculating $C_{jl}$ for finite chains of up to 20 sites using the Lanczos method and ensuring that these results are reproduced by the DMRG. In (Bursill {\em et al} 1995) it was noted that there are two impediments to an accurate calculation of (4.2). Firstly, for given $j$ and $l$, we must have $n_{\mbox{\protect\scriptsize s}}$ sufficiently large that $C_{jl}$ is accurately determined. Secondly, for given $q$, we must retain enough accurately calculated $C_{jl}$ in truncating the infinite series to ensure an accurate result. It was found (Bursill {\em et al} 1995) that if the system has a significant energy gap and exponentially decaying correlation functions with a short correlation length, then the $C_{jl}$ converge rapidly with $n_{\mbox{\protect\scriptsize s}}$ and the Fourier series converges very rapidly.
On the other hand, in critical or near-critical regions, where the energy gap is small or zero and the correlation functions decay algebraically or have a large correlation length, convergence is very slow. By choosing $n_{\mbox{\protect\scriptsize s}}$ up to 90, it is found that the main source of inaccuracy in calculating $\tilde{C}(q)$ in these regions is Fourier series truncation. We plot $\tilde{C}(q)$ as a function of $q$ for various values of $J_{2}/J_{1}$ in Fig. 4. As mentioned, it was determined using exact diagonalization and finite size scaling methods (Okamoto and Nomura 1992) that the model is critical (gapless with algebraically decaying correlation functions) for $0\leq J_2/J_1\leq\tan\omega_{\mbox{\protect\scriptsize c}}$ and gapped beyond this region, where $\tan\omega_{\mbox{\protect\scriptsize c}}=0.2411(1)$. Correspondingly, we find that $\tilde{C}(q)$ converges slowly and has oscillations due to Fourier series truncation in and around the critical region. In the region $0.3\leq J_2/J_1\leq 2$ we find that $\tilde{C}(q)$ converges rapidly to a smooth function. Now at the extreme point where $J_1=0$ we have two decoupled Heisenberg chains and so $C_{jl}$ vanishes if $j$ and $l$ lie on different sublattices but $C_{jl}$ decays algebraically on a given sublattice. We in fact find that $\tilde{C}(q)$ converges slowly for $J_2/J_1\geq 2.5$, indicating that there may be a finite interval around the $J_1=0$ point where the model is critical. We next turn to the question of periodicity in the ground state. As mentioned, we define the periodicity in terms of the position $q^{*}$ at which $\tilde C(q)$ has its peak. A plot of $q^{*}$ as a function of $\omega$ is included in Fig. 2. We see that the simple, analytical CCM result for the pitch angle improves dramatically upon the classical result.
Also, we see that the dimer variational wavefunction (Zeng and Parkinson 1995) gives an excellent estimate of the pitch angle in a region to the right of the solvable point. $q^{*}$ converges very rapidly with $n_{\mbox{\protect\scriptsize F}}$ (the number of Fourier coefficients used in forming (4.2)) and $n_{\mbox{\protect\scriptsize s}}$ in the region $0.3<J_2/J_1<2$ and we can accurately determine the threshold (the onset of the spiral phase) $\tilde \omega $ at which $q^{*}$ begins to move away from $\pi $ (as the periodicity begins to change from 2 to 4). Such a threshold was found in (Bursill {\em et al} 1995) as the biquadratic interaction was increased relative to the bilinear interaction. Again $q^{*}$ could be accurately determined near the threshold. Using the same analysis as in (Bursill {\em et al} 1995), we find \begin{equation} \tan \tilde \omega =0.52063(6). \end{equation} This is to be compared with the classical threshold (0.25) and the terminating point from the CCM theory (0.557). In a recent preprint, Chitra {\it et al} (1994) have studied the extension of (1.1) in which there is also dimerization $\delta $ such that nearest neighbour exchange carries a factor of $1+\delta $ and $1-\delta $ on successive bonds. They conjectured that there is a disorder line given by $J_2/J_1=\frac 12(1-\delta )$ such that, in the $\delta $-$J_2/J_1$ plane, the structure factor has its peak at $\pi $ below the line, the peak position decreasing from $\pi $ to $\pi /2$ as $J_2/J_1$ is increased above the line. In the case of (1.1) with no dimerisation $(\delta =0)$, this gives $\tan \tilde \omega =1/2$ (i.e. the threshold would be the exactly solvable point). Now at the solvable point the ground state is a perfect dimer where spins form a singlet with their dimer pair but are otherwise uncorrelated.
The correlation function is \[ C_{ij}=\left\{ \begin{array}{l} \frac{1}{4}\mbox{ for }i=j \\ -\frac{1}{4}\mbox{ for }i\mbox{ and }j\mbox{ on the same dimer} \\ 0\mbox{ otherwise} \end{array} \right. \] The Fourier transform is therefore \begin{equation} \tilde C(q)=\frac 14(1-\cos q), \end{equation} whence $\tilde C^{\prime \prime }(\pi )=-1/4\ne 0$, so, unless $\tilde C^{\prime \prime }(\pi )$ is highly singular at the threshold, the threshold (i.e. the point where $\tilde C^{\prime \prime }(\pi )$ vanishes) cannot occur at the solvable point. This is borne out by our result (4.3). \subsection*{Further interpretation of the spiral phase} We have defined the ground state periodicity and the spiral phase ($\tilde \omega $) in terms of the peak position of the Fourier transformed correlation function. It has, however, been shown by (Schollw\"ock {\it et al} 1995) that further insight into disorder and incommensurate spin distortions in the ground state can be gained by investigating the correlation function in real space. In Table 1 we list the correlation function in real space $C(r)$ for $J_2/J_1=$ 0.49, 0.5 (the solvable point), 0.51 and 0.5206\ldots (the threshold). (As we shall see, in the gapped region the ground state has broken translational symmetry and $C(r)$ is defined to be the average of $C_{j\;j+r}$ over a number of the sites $j$ in the middle of the chain.) We see that modulations begin to appear for $J_2/J_1$ values between the solvable point and the threshold where $C(2)$ changes sign. That is, the Majumdar-Ghosh point {\it is} a disorder point, separating phases of commensurate and incommensurate correlations (in real space). Following (Schollw\"ock {\it et al} 1995), the threshold $\tilde \omega $, where incommensurate spin oscillations would begin to be observed (experimentally) in the structure factor, is identified as a Lifshitz point.
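The closed form above is easy to check by pushing the dimer correlations through the truncated Fourier sum used in (4.2). The site-averaged correlator ($C(1)=-1/8$, since each site belongs to one singlet bond) is our reading of the dimer state; the grid search for $q^{*}$ mirrors the peak-finding described earlier.

```python
import math

# Site-averaged dimer correlations at the Majumdar-Ghosh point:
# C(0) = 1/4, C(1) = -1/8 (half of the bonds are singlet bonds),
# and C(r) = 0 for |r| >= 2.
def c_dimer(r):
    r = abs(r)
    if r == 0:
        return 0.25
    if r == 1:
        return -0.125
    return 0.0

def structure_factor(q, rmax=60):
    """Truncated Fourier sum over 0 <= |j - l| <= rmax, as in Eq.(4.2)."""
    return c_dimer(0) + 2.0 * sum(c_dimer(r) * math.cos(q * r)
                                  for r in range(1, rmax + 1))

# Reproduces (1 - cos q)/4; the peak sits at q* = pi.
qs = [i * math.pi / 1000 for i in range(1001)]
qstar = max(qs, key=structure_factor)
print(qstar, structure_factor(qstar))  # pi, 0.5

# Finite-difference check of the curvature at the peak: C''(pi) = -1/4.
h = 1e-4
d2 = (structure_factor(math.pi - h) - 2.0 * structure_factor(math.pi)
      + structure_factor(math.pi + h)) / h**2
print(d2)  # close to -0.25
```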
We would expect that in the limit of large spin $S$, the classical disorder point, the quantum disorder point and the Lifshitz point would merge, there being a single point separating commensurate and incommensurate phases both in terms of real and momentum space. \subsection*{Translational symmetry breaking in the ground state---a dimer order parameter} As mentioned, (Okamoto and Nomura 1992) calculated the critical point $\tan\omega _{\mbox{\protect\scriptsize c}}=0.2411(1)$ separating gapped from gapless phases. (Chitra {\it et al} 1994) calculated the energy gap using the DMRG and deduced $\tan\omega _{\mbox{\protect\scriptsize c}}=0.298(1)$, a result which is incompatible with that of (Okamoto and Nomura 1992). It is however known (White 1993, Bursill {\em et al} 1995 and Schollw\"ock {\em et al} 1995) that it is difficult to obtain accurate energies with the DMRG for critical or near-critical systems. This is again borne out when we apply the DMRG to the calculation of another order parameter which characterizes this phase transition. It is known that the ground state of the Heisenberg model $J_2=0$ has no symmetry breaking, whereas at the Majumdar-Ghosh point $J_2=J_1/2$ the ground state has broken translational symmetry, the correlator $C_{j\;j+1}$ taking the values $0$ and $-1/4$ on successive bonds $(j,j+1)$. To measure this broken symmetry, we define a dimer order parameter $D$ by \[ D(N)\equiv \left| C_{N/2-1\;N/2}-C_{N/2\;N/2+1}\right| \] and $D\equiv \lim_{N\rightarrow \infty }D(N)$, where $N$ is the size of an even, open chain. $D(N)$ converges very slowly in and around the critical region $0\leq J_2/J_1<0.35$ and rapidly (with respect to both $N$ and $n_{\mbox{\protect\scriptsize s}}$) around the threshold $0.45\leq J_2/J_1<1$. A plot of $D$ versus $J_2/J_1$ for $n_{\mbox{\protect\scriptsize s}}=40$ is given in Fig. 5. We note that $D$ is maximal at around $J_2/J_1\approx 0.58$, i.e. at neither the disorder point (0.5) nor the Lifshitz point (0.52\ldots ).
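At the Majumdar-Ghosh point the definition of $D$ can be evaluated exactly from the alternating bond correlators, which pins $D(N)=1/4$ there for any even $N$ (which sublattice of bonds carries the singlets is a convention):

```python
# Bond correlators C_{j, j+1} at the Majumdar-Ghosh point alternate between
# -1/4 (singlet bonds) and 0; here the singlets sit on even bonds.
def bond_correlator(j):
    return -0.25 if j % 2 == 0 else 0.0

def dimer_order_parameter(n):
    """D(N) = |C_{N/2-1, N/2} - C_{N/2, N/2+1}| for an even open chain."""
    return abs(bond_correlator(n // 2 - 1) - bond_correlator(n // 2))

print(dimer_order_parameter(12))  # 0.25
```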
The fact that $D$ exceeds 1/4 to the right of the disorder point is indicative of the incommensurate oscillations whereby the values of $C_{j\;j+1}$ on successive bonds $(j,j+1)$ can have opposite sign. We see that the critical point is not well defined and only qualitative information about the phase transition can be deduced from this procedure. We shall attempt to address the question of how the DMRG can be adapted to study critical phenomena in future publications. \section{The Marshall sign results.} \setcounter{equation}{0} An additional method of studying the periodicity of the ground state in the frustrated phase is by means of the Marshall-Peierls (Marshall, 1955) sign criterion. Preliminary results were reported in an earlier paper (Zeng and Parkinson, 1995) so a detailed description will not be given here. We have now obtained results for an open chain of 16 atoms and these confirm and extend those of shorter chains. In Fig. 6 we show the parameter $\rho _i$ for $i=1,2,3,4$, corresponding to a periodicity of $2i$ in the 16 atom chain. This parameter will be close to $1$ if the ground state `conforms' to the given periodicity and will be close to $0.5$ if the conformity is poor. The main features are as follows. For $\omega $ in the range $0\leq \omega \leq \omega _{MG}$ (outside the frustrated regime) $\rho _1$ is very close to $1$. For $\omega _{MG}\leq \omega $ there is an extended region in which $\rho _2$ is closest to $1$. An interesting and totally unexplained feature is the shallow double minimum in the value of $\rho _2$ for $\omega $ near $\pi /4$, which was also observed for shorter chains. At $\omega \approx 2.74$ there is a smooth crossover to a state in which $\rho _3$ is largest and finally a more complicated behaviour as $\omega $ approaches $\omega _{FF}$. An enlarged picture of the latter region is shown in Fig. 7. 
The sharp changes in $\rho _3$ at $\omega \approx 2.82$ and $2.85$ are caused by the crossing of a quintuplet state to become the ground state between these two values. This may be a `small N' effect, although even here $\rho _3$ is larger than the other $\rho _i$. Finally we observe a region closer to $\omega _{FF}$ in which $\rho _4$ is the largest. Results in this area are difficult to obtain because there are many states lying close to the ground state and convergence is extremely slow. Nevertheless, these results do suggest that the periodicity in the frustrated regime increases as the ferromagnetic boundary is approached. At present, the quantum system looks rather different to the classical as the change in periodicity occurs as a sequence of crossovers rather than smoothly. However the chains are still relatively short and it may well be that in the large $N$ limit the behaviour would approximate more closely to the classical. \section{Conclusion} \setcounter{equation}{0} The quantum mechanical behaviour of the frustrated phase of this system is clearly rather complex. The picture that is beginning to emerge is that the variation in periodicity with $\omega $ that is characteristic of the classical ground state may well survive partially in the quantum system. However, there are clearly many differences in detail and also some completely new features. The main difference in detail is that the periodicity of the quantum system, as predicted by the Coupled-cluster method and the variational method and confirmed by the DMRG results, remains closer to $\pi /2$ over a much wider range of $\omega $ than does the classical system. Another difference, suggested by the Marshall sign calculations, is that the changes in periodicity close to the ferromagnetic boundary may occur less smoothly. The behaviour of the quantum system close to the Majumdar-Ghosh point is quite different, as there is no classical analogue of the highly dimerised nature of the ground state.
\section{Introduction} Activity recognition is the ``automatic recognition of physical activities'' from sensor data~\cite{Bulling2014}. The most common application of wearable sensor-based activity recognition is in fitness trackers, but research has also focused on more complex activities such as daily living activities~\cite{Riboni2019709,8945220} or work-related activities~\cite{Inoue2016,DiPietro2019}. However, the accuracy of recognition for such complex activities is still low. This is due to several factors such as the inter-class similarity, the difficulty in defining each activity and its boundaries, and the lack of open datasets. A complex activity is composed of actions~\cite{Liu2015}, while physical activities are made of a periodic repetition of the same action. For example, walking entails the repetition of steps. An activity such as ``cooking'', however, involves actions like ``cut'', ``take'' or ``mix'' in different order and frequency. We call these actions ``micro-activities'' and we name the complex activities ``macro-activities''. Identifying micro-activities is useful for reasoning about macro-activities: to identify if all the required micro-activities were followed, to identify if they were in the correct order, and to identify differences among macro-activities or in the execution of any macro-activity. To show how the recognition of micro-activities can aid reasoning, consider a nursing scenario. Nurses usually take blood from their patients. The steps involved are ``hand washing'', ``opening injection'', ``blood collection'', ``healing'' and ``disposing waste''. In this example, nurses must wash their hands prior to any other step to prevent infections, but they can change the order of the last two steps without much problem. For the activity ``changing diaper'', ``hand washing'' must be done both at the beginning and at the end.
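Reasoning of the kind illustrated by the nursing example can be made concrete as ordering constraints over micro-activity sequences. The sketch below is purely illustrative: the rule encoding and the rules themselves are ours, not part of any dataset described here.

```python
# Hypothetical ordering constraints for two macro-activities:
# each rule names the micro-activity that must open or close the sequence.
RULES = {
    "blood collection": {"first": "hand washing"},
    "changing diaper": {"first": "hand washing", "last": "hand washing"},
}

def check_macro(macro, micros):
    """True if the observed micro-activity sequence satisfies the
    ordering constraints of the given macro-activity."""
    rules = RULES.get(macro, {})
    ok = bool(micros)
    if "first" in rules:
        ok = ok and micros[0] == rules["first"]
    if "last" in rules:
        ok = ok and micros[-1] == rules["last"]
    return ok

# Hand washing first, the last two steps in either order: acceptable.
print(check_macro("blood collection",
                  ["hand washing", "opening injection", "blood collection",
                   "disposing waste", "healing"]))  # True
# Changing diaper must also end with hand washing.
print(check_macro("changing diaper",
                  ["hand washing", "changing diaper"]))  # False
```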
Identifying micro-activities, as well as the macro-activity of which they are part, is an important step in reasoning about complex activities. Few open datasets have more than one level of labels~(Section~\ref{sec:related}) despite the importance of these relations. In this paper, we present a new dataset with two-level labels: one level for the macro-activities and the other for the micro-activities~(Section~\ref{sec:summary}). The main contributions of this dataset are the multi-modality and multi-sensor data and the two-level labeling approach. The data was collected in a cooking scenario using recipes as macro-activities. The micro-activities, steps in a recipe, appear in a different order and frequency depending on the macro-activity. We chose this setting because cooking is an important activity of daily living. The recognition and analysis of cooking activities can offer insights into health, well-being and the ability to live independently. In addition, it offers an easy setup to study the relation between micro- and macro-activities. We analyze the use of traditional activity recognition pipelines~\cite{Bulling2014} for the recognition of micro-activities (Section~\ref{sec:baseline}). We show that the recognition of micro-activities must consider new models, different from those of physical activity recognition, because of the lack of periodicity, data imbalances and differences among subjects. \section{Related datasets} \label{sec:related} Although recently there has been increasing interest in sharing datasets, the diversity and compatibility of open datasets remains a problem. Most datasets with wearable devices focus on locomotion activities. Among the datasets focusing on other activities, the diversity of activities is high, making it difficult to combine different datasets into a single one with the same activities. A review of sensor-based activity recognition datasets can be found in~\cite{De-La-Hoz-Franco2018}.
As no one dataset can cover all possible scenarios, or have enough participants, we believe that creating compatible datasets, with similar sensors and similar activities, is a good way of creating collaborative data. With similar sensors and activities, it will be possible to combine several datasets for different research purposes. Our dataset thus aims to create an additional source for researchers looking into cooking activity recognition, gesture recognition, and other scenarios that might benefit from the different granularity of activity labels. We use several sensors, including optical motion capture, to make it diverse. Although it has a small number of participants and recipes, the large number of sensors sets the dataset apart, since other datasets lack body positions or other data that can help to make sense of the recordings. In this section, we describe other datasets that are publicly available and comprise cooking scenarios and/or micro- and macro-activities. \textbf{CMU Lifestyle dataset:}~\cite{cmudataset} This dataset was recorded using three different modalities: video, audio, and 5 inertial measurement units placed on the subjects' back, legs, and arms. The main dataset contains 18 subjects cooking five different recipes: brownies, pizza, sandwich, salad and scrambled eggs. Labels are given for detailed activities including their object, for example, ``put-bakingpan-into-oven'', ``walk-to-counter'' or ``open-browniebag''. Although the dataset planned to use motion capture, it is only available for one subject. \textbf{Cooking activities dataset}~\cite{cookingdataset}: The data was collected using a motion capture system based on wearable inertial measurement units (five positions). The scope was the activities done during meal time, labeling different micro-activities and five macro-activities: cooking meal, setting table, eating meal, cleaning up and putting away utensils.
The subjects of this experiment followed an experimenter who described the tasks, so there was a slight dependency, although subjects were free to choose the order of their actions. Only one recipe is performed. \textbf{Opportunity dataset}~\cite{5573462}: This benchmark dataset contains information about gestures that occur during some high-level activities, similar to micro- and macro-activities. The data was collected in an environment rich with sensors. There were 72 sensors in the environment and on the body of the subjects. The dataset includes a complex cooking activity mainly associated with breakfast (preparing a sandwich). Four subjects performed 17 micro-activities in a predefined scenario. The main limitation is the number of recipes (only one). \textbf{Unmodified Kitchen Dataset}~\cite{Mohammad2017}: This dataset was collected in real kitchens with 10 women participants. It consists of 2 recipes cooked freely, and 74 basic activity labels given for each hand separately. Accelerometer data from two smartwatches, one on each wrist, was collected. The labeled activities include ``Mix with chopsticks'', ``Move aside'' and ``Shake''. This dataset provides realistic data both in terms of the preparation of recipes and the data, which was collected using commercial devices. The limitation is in the number of recipes and, due to the detailed granularity of labels, the small number of samples for most of them. \section{Data Collection Experiment Design} \label{sec:experiment} The dataset was recorded in the Smart Life Care Unit of the Kyushu Institute of Technology in Japan~\cite{openlaburl}. This unit is located in the Wakamatsu Campus of the University, has optical and inertial-sensor-based motion capture equipment as well as sensors such as EMG, EEG and eye-movement sensors, and was equipped with furniture so as to simulate a kitchen environment, as shown in Figure~\ref{fig:studio}. The experiment was held on November 19, 22, and 25, 2019.
\begin{figure}[hbt] \centering \includegraphics[width=0.75\linewidth]{figures/pictures/laboratory_front.jpeg} \includegraphics[width=0.75\linewidth]{figures/pictures/laboratory_side.jpeg} \caption{Smart Life Care Unit of Kyushu Institute of Technology equipped as a kitchen. View from the front (top) and side (bottom)} \label{fig:studio} \end{figure} In this section, we detail the activities, sensors, and data collection protocol used in the experiment. \subsection{Activities} We recorded data for 10 micro activities and 3 macro activities. Each macro activity corresponds to one recipe. For the success of the experiment, we imposed the following restrictions when choosing the recipes: \begin{itemize} \item Recipes should not involve heating, as it cannot be done in the laboratory for safety reasons. \item Each micro activity should be present in at least 2 macro activities. \item The duration of the recipe should be short and it should be easy to prepare, as volunteers have no cooking experience. \end{itemize} Considering the previous restrictions, we chose the following recipes: \begin{itemize} \item \textbf{\textit{Sandwich}}: A cheese and vegetables (lettuce and tomato) sandwich. Although it includes a 'spread' micro activity for spreading mayonnaise, we do not consider it since it is not part of any other recipe. \item \textbf{\textit{Cereal}}: Pouring milk and cereal into a bowl. We include a banana in the cereal to add the cut and peel micro activities. \item \textbf{\textit{Fruit salad}}: A fruit salad including 3 fruits (banana, apple and tangerine) that must be peeled, cut and then mixed with yogurt. \end{itemize} We designed the micro activities in accordance with the labels of previous datasets~(Section~\ref{sec:related}). In those datasets, the action is accompanied by the object, if it is relevant. To make our dataset compatible with previous ones, we also collect object information as part of the micro activity label.
Figure~\ref{fig:matrix} shows a summary of the micro activities involved in each recipe. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/micro_activity.pdf} \caption{Micro activities involved in each macro activity} \label{fig:matrix} \end{figure} As a micro activity has a semantic rather than a motion-related meaning, we expect some of them to show intra-class variability in the sensor data. For instance, the peel activity is expected to involve very different movements depending on the object of the activity, i.e.\ peeling a banana is significantly different from peeling an apple. However, it is the semantic meaning that is interesting to recognize. Similarly, cutting can be slightly different when cutting fruits than when cutting vegetables due to the different forces applied. As another example, we expect ``Take'' to differ significantly across the three objects (shelf, drawer and refrigerator). Please note that in this dataset ``Take'' involves opening, taking and closing the container, whereas in other datasets such as the CMU dataset, this would be labeled as three actions. In our case, ``Open'' refers to opening a food package such as a milk carton or yogurt pot. Other micro activities were labeled but are not considered as they appear in only one of the recipes; these include ``Spread'' and ``Add''. The following is a description of each activity. \begin{itemize} \item \textit{Peel}: Removing the skin of a fruit. In this dataset it can refer to peeling a banana, apple or tangerine. For the apple, a knife is used, whereas for the banana and tangerine the hands are used. \item \textit{Take}: Taking an ingredient or object from either the shelf, refrigerator or drawer. For the shelf, raising the arms might be involved depending on the height of both the object's location and the participant. Taking from the drawer requires going low (either bending or squatting).
\item \textit{Wash}: A simulation of washing a fruit or vegetable. Since there was no real water connection for the sink, this activity was mainly simulated by turning on the tap, placing the object beneath it, and turning it off. \item \textit{Cut}: Cutting a fruit (banana, apple, tangerine) or vegetable (lettuce, tomato). \item \textit{Pour}: Pouring yogurt from pot to bowl or pouring milk from carton to bowl. \item \textit{Put}: Adding one ingredient to a mix of the others. This includes putting cheese, tomato or lettuce on the bread for the sandwich; putting fruits into the bowl for the fruit salad; and putting cereal into the bowl. \item \textit{Mix}: Combining the fruits in the salad with a spoon, or the cereal with the milk. \item \textit{Open}: Includes opening packages such as the milk carton, a cheese slice (packaged individually), and the yogurt pot. \item \textit{other}: Refers to micro activities that were not part of at least two recipes and were not considered as labels for the final dataset, for example, 'Spread'. It might also refer to the static poses at the beginning and end of the recordings. \end{itemize} \subsection{Sensors} We collected data from motion capture, Open Pose and accelerometer sensors. Video was also recorded for each run, but it is not released due to privacy concerns of the participants. The description of each data source is given below. \textbf{Motion Capture:} We used the motion capture system from Motion Analysis Company~\footnote{\underline{http://motionanalysis.com/movement-analysis/}}. The setup consisted of 29 body markers located as in Figure~\ref{fig:markers}. The markers are tracked using 16 infrared cameras (Kestrel Digital Real Time). The three-dimensional position of each marker was recorded at a frequency of 100Hz. The markers may be labeled incorrectly in some cases due to the complex setting, which sometimes occludes some markers.
\begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/marker_position.png} \caption{Motion capture markers used in this dataset} \label{fig:markers} \end{figure} \textbf{Open Pose:}~\cite{cao2018openpose} Open Pose is an open-source system to detect 135 keypoints of the human body, hands, face, and feet using a single COTS camera. We use the 25 body keypoints. The main purpose is to compare the results of motion capture with those of Open Pose, which may be used as a low-cost, low-precision body-tracking system. \textbf{Accelerometer sensors:} As accelerometer sensors, we used two smartphones and two smartwatches placed as in Figure~\ref{fig:accelerometers}. The smartphones have a sensitivity of $\pm1g$ and a sampling rate of approximately 70Hz on average. The smartwatches are TicWatch E devices, which use Google Wear and connect to the smartphones via Bluetooth. The sampling rate was set to the maximum possible, but it varied greatly during the experiment, from approximately 50Hz to 250Hz. Their sensitivity is $\pm2g$. All measurements are given in $m/s^2$. The left-hip smartphone (connected to the left-wrist smartwatch) was a Samsung Galaxy S9 SCV38 and the right-arm smartphone (connected to the right-wrist smartwatch) was a Huawei P20 Lite. Both smartphones used Android version 9 as their operating system. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figures/accelerometer.pdf} \caption{Placement of the acceleration sensors for the experiment} \label{fig:accelerometers} \end{figure} \subsection{Protocol} During the experiment, there were a director, four observers and one subject. The director read out the next steps of the recipe so that the subject could perform the activities in the desired order and without forgetting any step. Each observer controlled one recording device (one of the two smartphones, the motion capture system, or the video camera) to start and stop sensing at the same time.
A single observer labeled each step with an application designed for this purpose. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/protocol.png} \caption{Protocol to record each trial in the experiment} \label{fig:protocol} \end{figure} Based on previous experience in data collection experiments, the following practices were adopted. \textit{Sensor synchronization}: Synchronization was achieved by clock synchronization. We also use initial and end calibration poses so that the start of the action can be better synchronized. \textit{Orientation calibration pose}: In many datasets, the orientation of the sensor is not known, which makes analysis difficult since the measurements depend on the orientation. We use an initial pose held for 0.5 seconds so that the initial data can be used for orientation calibration of all accelerometers. \subsection{Participants} All four participants are male students between 18 and 25 years old who took part voluntarily and without compensation in this experiment. \section{Data Summary} \label{sec:summary} In this section we describe statistics of the dataset. We first describe statistics related to the time distribution of the dataset. Then we examine the accelerometer sensor reading distributions to analyze the differences and similarities in the distribution for each micro activity. Finally, we analyze the quality of the data by estimating the label quality, the missing data rate, and the timestamp synchronization quality.
\subsection{Analysis by time} \label{sec:time} The following analysis aims to answer the following questions:\newline \textit{\bf TQ.1: How much time was recorded for each activity?}\newline \textit{\bf TQ.2: How much time was recorded for each subject?}\newline \textit{\bf TQ.3: What are the average durations of each activity?}\newline The dataset comprises almost three hours of data, nearly evenly distributed across recipes~(Figure~\ref{fig:time_recipe}) and across subjects~(Figure~\ref{fig:time_subject}). Only one subject (subject 1) has less recorded time due to an error during the experiment; for this reason, subject 1 has only 2 trials for the fruit salad recipe. For all other recipes and all other subjects, there are 5 trials per subject per recipe. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/timeactivity.png} \caption{Time distribution by recipe} \label{fig:time_recipe} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/timesubject.png} \caption{Time distribution by subject} \label{fig:time_subject} \end{figure} When looking at the time distribution across micro activities~(Figure~\ref{fig:time_microactivity}), the dataset is imbalanced: the activities 'Take', 'Peel', 'Put', and 'Cut' make up almost two thirds of the data. The 'Take' activity is the most frequent one and the 'Peel' activity was the most difficult to perform, so they account for the longest times. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/timemicroactivities.pdf} \caption{Time distribution by micro activity} \label{fig:time_microactivity} \end{figure} The average duration of each activity is shown in Table~\ref{tab:avg_duration}. It confirms that 'Peel' is the activity with the longest duration, due to its difficulty. The shortest activity is 'Open'.
\begin{table}[] \caption{Average duration of each micro activity} \label{tab:avg_duration} \centering \begin{tabular}{|c|c|c|} \hline \textbf{Activity} & \textbf{Avg. duration (seconds)} & \textbf{SD of duration (seconds)}\\ \hline ``Peel''&25.42&18.32\\ ``Spread''&16.62&2.65\\ ``Take''&13.98&3.48\\ ``Wash''&10.92&3.54\\ ``Cut''&8.74&3.48\\ ``Pour''&8.56&2.26\\ ``Put''&7.49&6.20\\ ``Mix''&6.10&1.90\\ ``Open''&4.93&0.79\\ ``other''&6.70&5.95\\ \hline \end{tabular} \end{table} \subsection{Accelerometer readings analysis} \label{sec:measurements} The following analysis is based only on the accelerometer sensor values and aims to answer the following questions:\newline \textit{\bf MQ.1: Do the accelerometer readings have different distributions for each activity?}\newline \textit{\bf MQ.2: What statistical measures will most likely aid in the classification process?}\newline Analyzing the distribution of sensor readings can guide feature selection and help assess whether different activities have different measurement distributions, which would make them easier or harder to classify. We show in Figure~\ref{fig:dist_lw} the distribution of left-wrist accelerometer measurements for six different activities, and in Figures~\ref{fig:dist_rw}, \ref{fig:dist_lh} and \ref{fig:dist_ra} those of the right-wrist, left-hip and right-arm sensors, respectively. Notice that the distributions of the right-wrist sensor show the most significant differences, suggesting that this sensor might be the most important for classification. All participants are right-handed, which helps explain this result. Although the distributions look similar, their ranges are different, as can be seen from the x-axis limits in the graphs, as are their standard deviations. This suggests that the standard deviation, maximum value, minimum value and mode might be better features than the mean value.
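To make this concrete, the following Python sketch computes the summary statistics suggested above for two synthetic accelerometer traces with equal means but different spreads. The signals and the binned-mode estimate are illustrative, not part of the released data or tools:

```python
import random
import statistics

random.seed(0)


def summary_features(samples, bins=20):
    """Summary statistics suggested by the distribution analysis:
    standard deviation, maximum, minimum, and a mode estimated
    from a coarse histogram (the signal is continuous-valued)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    mode_bin = statistics.mode(int((x - lo) / width) for x in samples)
    return {
        "std": statistics.stdev(samples),
        "max": hi,
        "min": lo,
        "mode": lo + (mode_bin + 0.5) * width,  # center of the modal bin
    }


# Two toy traces centered at zero, mimicking a low-energy activity
# ('Mix') and a high-energy one ('Cut'); values are synthetic.
mix = [random.gauss(0.0, 1.0) for _ in range(400)]
cut = [random.gauss(0.0, 4.0) for _ in range(400)]

for name, trace in (("Mix", mix), ("Cut", cut)):
    feats = summary_features(trace)
    print(name, {k: round(v, 2) for k, v in feats.items()})
```

Both traces have means near zero, so the mean barely separates them, while the standard deviation and the range (maximum and minimum) do, which is the point made above.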
Notice also that most distributions are centered at zero, which is explained by the multiple zero crossings of the signal (transitions from positive to negative values), as shown in Figure~\ref{fig:acc_series}. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Take_sw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Put_sw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Peel_sw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Cut_sw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Mix_sw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Wash_sw.pdf} \caption{Distribution of the sensor measurements during different micro activities on the left wrist} \label{fig:dist_lw} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/example_acc_series.png} \caption{Example of the time series of sensor measurements for the right wrist sensor during the fruit salad activity} \label{fig:acc_series} \end{figure} \subsection{Data Quality Analysis} \label{sec:missing} Due to the irregular sampling rates caused by Android and some Bluetooth communication problems, we experience some ``missing data'' problems when the time series is resampled to achieve a regular sampling rate. In this section we analyze the missing data rate when the accelerometer series are resampled at 20Hz. We calculate the expected number of non-null samples based on the total time of each micro activity (based on label times) and obtain the number of non-null samples after resampling~\footnote{We used resampling with an interpolation limit of 5 samples, so runs of more than 5 consecutive null values are not interpolated.}. Figure~\ref{fig:missing_data} shows the rate of missing samples per sensor. It is evident that the left sensors had a larger missing data rate. There are several possible causes for this.
First, communication between the smartwatch and the smartphone may have been interrupted or delayed, causing data to be lost. Second, when the battery of the smartwatch was low, the sampling rate decreased significantly, causing missing data. Third, operation errors when saving the data may have caused missing files (each smartphone was operated by a different person). Due to these problems, we do not recommend the use of the left-wrist data, but we make it available for research in such scenarios. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/misisng_data_rate_sensor.pdf} \caption{Missing data rate of each accelerometer sensor. Low battery and communication with the phone are the main causes of missing data.} \label{fig:missing_data} \end{figure} The state of the smartwatch battery deserves special mention: we noticed that as the battery drained, the missing data rate increased. This is an important consideration for wearable activity recognition, as most reports are based on complete, clean data. In real life, activity recognition will not achieve the same accuracy when the battery of the device is low. This is one reason for the high dropout rates of wearable devices. If we can build models that work well even under low-battery conditions, then user acceptance could potentially increase. \section{Activity Recognition Evaluation} \label{sec:baseline} An important application is the recognition of micro activities. With a correct activity estimation, it is possible to assist a user during cooking or to verify that all steps have been followed. In this section, we evaluate activity recognition for the micro activities in the dataset. For this, we follow a typical activity recognition pipeline and evaluate different features, window sizes and algorithms. \subsection{Activity Recognition Pipeline Description} As a first step we resample all signals to 20Hz.
This corrects for the different sampling rates of the sensors and the sampling rate variability caused by the Android API. Then we segment the data into sliding windows with 50\% overlap. We use window sizes of 1, 2, 3, 4 and 5 seconds. We chose these sizes considering the average duration of the activities, as longer windows would possibly contain traces of a large number of micro activities~(Figure~\ref{fig:window_labeling}). Even with short windows, the situation depicted in Figure~\ref{fig:window_labeling} may occur. In such cases, we choose the label for the window based on the time covered by each label, so that the label with the longest time within the window wins. For example, in Figure~\ref{fig:window_labeling}, window 2 is labeled as Wash and window 3 as Cut. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/example_windows.png} \caption{A window may contain traces of different activities. In this example both windows 2 and 3 contain traces of 'Wash' and 'Cut', though in different proportions. } \label{fig:window_labeling} \end{figure} For each window we extract features using two common feature extraction techniques for activity recognition. \textit{Statistical features}~\cite{Zhang2011, Huynh2005}: summary statistics calculated over the frequency and/or time domains. We use only time-domain features: mean, standard deviation, maximum, minimum, kurtosis, skew, interquartile range, mean of the derivative, and standard deviation of the derivative. \textit{Distribution-based features}~\cite{Hammerla2013}: The empirical cumulative distribution function (ECDF) was proposed to capture the characteristics of the distribution of the input data. Instead of representing the input as its time-sequence values, the cumulative distribution is calculated and the values of the function at equally spaced points are used as features. We use 30 points to represent the ECDF of each axis.
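The segmentation and labeling procedure just described can be sketched as follows. This is an illustrative Python implementation assuming signals already resampled to 20Hz; the function names and the toy label stream are ours, and the ECDF features are computed as sample values at equally spaced ranks, which is one common reading of the distribution-based features cited above:

```python
def sliding_windows(n_samples, rate_hz, win_s, overlap=0.5):
    """Yield (start, end) sample-index pairs for sliding windows
    of win_s seconds with the given fractional overlap."""
    size = int(win_s * rate_hz)
    step = int(size * (1 - overlap))
    for start in range(0, n_samples - size + 1, step):
        yield start, start + size


def window_label(labels, start, end):
    """Label a window with the activity covering the most samples
    in it: the label with the longest time within the window wins."""
    counts = {}
    for lab in labels[start:end]:
        counts[lab] = counts.get(lab, 0) + 1
    return max(counts, key=counts.get)


def ecdf_features(samples, n_points=30):
    """Inverse ECDF evaluated at n_points equally spaced levels:
    the sorted sample values taken at evenly spaced ranks."""
    s = sorted(samples)
    last = len(s) - 1
    return [s[round(k * last / (n_points - 1))] for k in range(n_points)]


# Toy per-sample label stream at 20 Hz: 5 s of 'Wash' then 7 s of 'Cut'.
labels = ["Wash"] * 100 + ["Cut"] * 140
for start, end in sliding_windows(len(labels), rate_hz=20, win_s=4):
    # e.g. the window (40, 120) is labeled 'Wash' (60 of 80 samples),
    # while (80, 160) is labeled 'Cut' (60 of 80 samples).
    print((start, end), window_label(labels, start, end))
```

On this toy stream, the 4-second window spanning seconds 2--6 is labeled Wash and the one spanning seconds 4--8 is labeled Cut, mirroring the situation depicted in Figure~\ref{fig:window_labeling}.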
We evaluated three models: Support Vector Machines with linear and RBF kernels, and Random Forests. These models have shown good results in the previous literature. Although deep learning approaches have also shown strong results, we consider that the amount of data in this experiment is not large enough for a deep learning model. \subsection{Results} We evaluated our results using the macro- and micro-averaged F1-Score. The macro-averaged F1-Score is penalized by the large imbalance in the dataset, whereas the micro average reduces the weight of the minority classes. Taking the micro average results in higher scores, largely due to the better recognition of the majority classes. We evaluated three classifiers with two different feature sets each. The results for all classifiers using the statistical features and the ECDF features are shown in Figures~\ref{fig:f1stat} and~\ref{fig:f1ecdf}, respectively. The number of windows for each activity when using 4-second windows is shown in Figure~\ref{fig:windows-4seconds}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/f1score-compare-stat.png} \caption{Micro activity classification F1-Score (micro and macro average) for classifiers using statistical features.} \label{fig:f1stat} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/F1Score-compare.png} \caption{Micro activity classification F1-Score (micro and macro average) for classifiers using ECDF features.} \label{fig:f1ecdf} \end{figure} We observe that, for all classifiers, the statistical features perform better than the ECDF features. Due to the imbalance, the micro-averaged F1-Score is higher than the macro-averaged score. As a reference, a classifier that assigns all windows to the majority class (Take) would achieve a macro F-Score of approximately 6\% and a micro F-Score of 36\%~\footnote{When the window size is 5 seconds.}.
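The two averages differ only in how per-class scores are combined, which the following self-contained Python sketch makes explicit. The class counts below are illustrative, chosen so that 'Take' covers 36\% of the windows; they are not the dataset's exact counts:

```python
def f1_scores(y_true, y_pred):
    """Per-class F1 combined two ways: macro (unweighted mean over
    classes) and micro (pooled counts; for single-label multi-class
    problems the micro-averaged F1 equals the accuracy)."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = dict.fromkeys(classes, 0)
    fp = dict.fromkeys(classes, 0)
    fn = dict.fromkeys(classes, 0)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1

    def f1(c):
        denom = 2 * tp[c] + fp[c] + fn[c]
        return 2 * tp[c] / denom if denom else 0.0

    macro = sum(f1(c) for c in classes) / len(classes)
    micro = sum(tp.values()) / len(y_true)
    return macro, micro


# Majority-class baseline: every window predicted as 'Take', with
# 'Take' covering 36 of 100 windows and nine other classes sharing
# the rest (illustrative counts, not the dataset's exact ones).
y_true = (["Take"] * 36 + ["Peel"] * 10 + ["Put"] * 9 + ["Cut"] * 8
          + ["Pour"] * 8 + ["Wash"] * 7 + ["Mix"] * 7 + ["Open"] * 6
          + ["Spread"] * 5 + ["other"] * 4)
y_pred = ["Take"] * len(y_true)
macro, micro = f1_scores(y_true, y_pred)
print(f"macro F1 = {macro:.3f}, micro F1 = {micro:.2f}")
```

On this toy distribution the baseline scores about 5\% macro versus 36\% micro, matching the approximate figures quoted above: only the majority class has a non-zero F1, and the macro average divides it by the number of classes.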
Trained models show a better macro F-Score but little improvement in the micro score. We also observe that the peak performance is usually reached with windows of 4 seconds. Nonetheless, considering the average duration of the micro activities~(Table~\ref{tab:avg_duration}), when 4-second windows are used the shortest activities might be hard to distinguish, because they have a high probability of sharing a window with another activity. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{figures/windows-4second.png} \caption{Number of windows per micro-activity when the window length is 4 seconds.} \label{fig:windows-4seconds} \end{figure} \section{Conclusions} \label{sec:learned_rec} In this paper we have introduced a new dataset for activity recognition based on body-movement analysis that combines visual and inertial sensor data. The dataset has been labeled at two different granularity levels: macro activities, which represent different recipes, and micro activities, which represent the steps followed in preparing the recipes. The combination of these two levels can provide researchers with data to study algorithms that jointly recognize actions, which are shared by many activities, and activities, which are composed of similar actions in different orders or frequencies. This dataset also combines visual and inertial sensors for measuring body movement, with two different types of sensors for each modality: as visual sensors, we have used motion capture and Open Pose; as inertial measurement sensors, we used smartphones and smartwatches. This combination also allows researchers to experiment and to measure model performance degradation when data quality is lower. In addition, as multiple body positions are tracked by each type of sensor, performance under different combinations of sensors can also be studied. We have described the protocol for data collection. Our interest was to collect realistic data.
Therefore we used commercial smartphones and smartwatches instead of special-purpose sensors, which can be more accurate and have a constant sampling rate. We observed a high missing data rate for the left smartwatch, despite it being the same model as the right one. The missing data rate is an important consideration when designing applications for real-life use. The 'Cooking Dataset' introduced in this paper poses several challenges for the activity recognition community. Not only are there two levels of granularity in the labels, but there is also a high imbalance at the micro-activity level. This imbalance comes from the different durations and frequencies of the activities. Understanding how this plays a role in the metrics and in final applications is an interesting research direction that we would like to study. \section*{Appendix} \label{sec:appendix} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Take_mw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Put_mw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Peel_mw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Cut_mw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Mix_mw.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Wash_mw.pdf} \caption{Distribution of the sensor measurements during different micro activities on the right wrist sensor} \label{fig:dist_rw} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Take_mp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Put_mp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Peel_mp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Cut_mp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Mix_mp.pdf}
\includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Wash_mp.pdf} \caption{Distribution of the sensor measurements during different micro activities on the forearm sensor} \label{fig:dist_ra} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Take_sp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Put_sp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Peel_sp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Cut_sp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Mix_sp.pdf} \includegraphics[width=0.3\textwidth]{figures/sensor_distribution/Wash_sp.pdf} \caption{Distribution of the sensor measurements during different micro activities on the hip sensor} \label{fig:dist_lh} \end{figure} \section*{Acknowledgment} The authors would like to thank the participants of the experiment for their time and collaboration during data collection. \bibliographystyle{plain}
\section{Introduction} The purpose of this note is to explain some results on duality for complex and real $K$-theory and their ramifications for the $K(1)$-local Picard group at the prime 2. The results obtained herein are not new to experts, and the major ingredients can be found in the literature. However, to the best of the authors' knowledge, those ingredients have not been previously blended together in the manner presented in this paper. That perspective and approach were of great use as a guiding example for the second author in her work on duality for topological modular forms~\cite{stojanoska2012duality}. There are two main forms of duality which algebraic topologists have used in $K(n)$-local settings, namely, Spanier-Whitehead duality, which is simply the monoidal duality, and Gross-Hopkins duality, which is a $K(n)$-local analogue of Pontryagin or Brown-Comenetz duality. We will be interested in an integral version of the latter, namely Anderson duality. This is not $K(n)$-local, though upon such localization it is closely related to Gross-Hopkins duality. Instead, it is defined on the category of spectra and makes surprising appearances in geometry and the study of quantum field theories \cite{hopkins2005quadratic, FreedMooreSegal}. Consider the $C_2$-action on the periodic $K$-theory spectrum $KU$ via complex conjugation. The main computational result in this paper is the following \theoremstyle{plain} \newtheorem*{thm:Andersondual}{\Cref{thm:Andersondual}} \begin{thm:Andersondual} The Anderson dual $I_\mathbb{Z} KU$ is $C_2$-equivariantly equivalent to $\Sigma^4KU$. \end{thm:Andersondual} The real $K$-theory spectrum $KO$ is the spectrum of homotopy fixed points $KU^{hC_2}$, and the above duality is reflected in the following result as a ``wrong side computation.''
\newtheorem*{thm:ushriek}{\Cref{thm:ushriek}} \begin{thm:ushriek} The forgetful map \[u_*:(KO\text{-mod})\to (S\text{-mod})\] has a right adjoint $u^!=F(KO,-)$ such that for a spectrum $A$ \[ u^!A =F(KO, A) \simeq F ( I_\mathbb{Z} A, \Sigma^4 KO).\] \end{thm:ushriek} This theorem has a derived algebro-geometric interpretation which we pursue in~\Cref{sec:andersonduality}. The first step of the proof of~\Cref{thm:Andersondual} consists of a simple calculation that $KU$ is Anderson self-dual. We then proceed to develop a descent strategy for deducing the above-stated result. The fact that $KO$ is the homotopy fixed points of $KU$ under the conjugation action is insufficient for descending duality. Notwithstanding, we show that $KO$ is also equivalent to the homotopy orbits of $KU$ under the same $C_2$ action by proving that the associated Tate spectrum vanishes. This allows a calculation of the Anderson dual of $KO$, and from there we complete the proof of~\Cref{thm:Andersondual}. As an application of the above theorems, we use the relationship between Anderson duality and $K(1)$-local Gross-Hopkins duality to independently conclude the well-known fact that the $K(1)$-local category contains ``exotic" elements in its Picard group. The organization of this paper is as follows. In~\Cref{sec:andersonbasics} we define Anderson duality, and in~\Cref{sec:andersonduality} we give a detailed algebro-geometric interpretation of the duality in general, as well as an interpretation of~\Cref{thm:ushriek} using derived algebraic geometry. Note that the proof of~\Cref{thm:ushriek}, while included in~\Cref{sec:andersonduality}, depends on the results of Sections \ref{sec:norm} through \ref{sec:KOanderson}; these Sections build to the proof of~\Cref{thm:Andersondual}. In particular, in~\Cref{sec:HFPSS} we use equivariant homotopy theory to deduce, from scratch, the differentials in the homotopy fixed point spectral sequence for $KO\simeq KU^{hC_2}$. 
Finally, in~\Cref{sec:picard} we study the implications of~\Cref{thm:Andersondual} for the invertible $K(1)$-local spectra and deduce that an exotic such spectrum must exist. \begin{rem} Where possible we have included information that, though known to experts, is perhaps not easily or at all found in the literature, such as the use of equivariant homotopy to determine a differential in a homotopy fixed point spectral sequence which we learned from Mike Hopkins and Mike Hill. \end{rem} \section*{Conventions} We will denote homotopy fixed point and orbit spectral sequences by $\operatorname{HFPSS}$ and $\operatorname{HOSS}$ respectively. In all spectral sequence diagrams a box represents a copy of $\mathbb{Z}$ whilst a dot is a copy of $\mathbb{Z}/2$. Vertical lines represent multiplication by 2, whilst lines of slope 1 represent multiplication by $\eta$. All spectral sequences will be Adams indexed; that is, a class in cohomological degree $s$ and internal degree $t$ will be drawn in position $(t-s,s)$. By a commutative ring spectrum we will always mean a highly structured commutative ring. \section*{Acknowledgements} The authors thank Paul Goerss for his invaluable suggestions to greatly improve earlier versions of this work. The second author also thanks Jeremiah Heller for helpful discussions about $C_2$-equivariant homotopy theory and for a careful proofreading of a draft of this document. \section{Background on Anderson duality}\label{sec:andersonbasics} The functor on spectra $X \mapsto \mathop{\mathrm{Hom}}(\pi_{-*}X,\mathbb{Q}/\mathbb{Z})$ is a cohomology theory since $\mathbb{Q}/\mathbb{Z}$ is injective. Let $I_{\mathbb{Q}/\mathbb{Z}}$ denote the representing spectrum. The Brown-Comenetz dual of $X$ is then defined as the function spectrum $I_{\mathbb{Q}/\mathbb{Z}}X = F(X,I_{\mathbb{Q}/\mathbb{Z}})$. 
In a similar way we can define $I_\mathbb{Q}$ to be the spectrum representing the functor $X \mapsto \mathop{\mathrm{Hom}}(\pi_{-\ast} X,\mathbb{Q})$, and there is a natural map $I_{\mathbb{Q}} \to I_{\mathbb{Q}/\mathbb{Z}}$. We denote the homotopy fiber of this map by $I_\mathbb{Z}$. Of course, $I_\mathbb{Q}$ is the rational Eilenberg-MacLane spectrum. For a spectrum $X$ we then define $I_\mathbb{Z} X$ as the function spectrum $F(X,I_\mathbb{Z})$. This contravariant functor $I_\mathbb{Z}$ on the category of spectra is known as Anderson duality, first used by Anderson~\cite{anderson1969universal}; see also~\cite{yosimura1975universal,hopkins2005quadratic}. We shall see that the representing spectrum $I_\mathbb{Z}$ is the dualizing object in the category of spectra (see \Cref{sec:andersonduality}). \begin{example} If $A$ is a finite abelian group, then the Brown-Comenetz dual of the Eilenberg-MacLane spectrum $HA$ is again $HA$, via a non-canonical isomorphism between $A$ and its Pontryagin dual. Since there are no non-zero maps $A\to \mathbb{Q}$, it follows that the Anderson dual $I_{\mathbb{Z}} HA$ is equivalent to $\Sigma^{-1} HA$. \end{example} For a spectrum $X$, we can use a short spectral sequence to calculate the homotopy of $I_\mathbb{Z} X$. By considering the long exact sequence in homotopy of the fiber sequence \begin{equation}\label{eq:fibSeq} I_\mathbb{Z} X \to I_\mathbb{Q} X \to I_{\mathbb{Q}/\mathbb{Z}} X, \end{equation} we can form an exact couple and hence a spectral sequence, \begin{equation}\label{eq:andersonSS} \text{Ext}^s_\mathbb{Z}(\pi_tX,\mathbb{Z}) \Rightarrow \pi_{-t-s}I_\mathbb{Z} X, \end{equation} where the spectral sequence is only non-zero for $s=0,1$. Often this leads to very simple calculations, for example in the case of $KU$. Here, of course, $\pi_* KU$ is the ring of Laurent polynomials $\mathbb{Z}[u^{\pm 1}]$ on the Bott element $u$ of degree $2$.
Hence we easily conclude that the homotopy groups $\pi_* I_\mathbb{Z} KU$ are $\mathbb{Z}$ in even degrees and $0$ in odd degrees. We can do more by observing the additional available structure; note that if $R$ is a ring spectrum, then $I_\mathbb{Z} R=F(R,I_\mathbb{Z})$ is a module over $R$. The following easy observation has crucial applications. \begin{prop}\label{prop:moduleeq} Let $R$ be a homotopy ring spectrum\footnote{i.e. a ring object in the homotopy category} and let $M$ be an $R$-module such that $\pi_* M$ is, as a graded module, free of rank one over $\pi_* R$, generated by an element in $\pi_t M$. Then $M\simeq \Sigma^t R$. \end{prop} \begin{proof} The generating element is represented by a map of spectra $S^t \to M$ which, by the standard adjunction, can be extended to a map of $R$-modules $\varphi\colon S^t\wedge R \to M$ using the $R$-module structure of $M$. The assumptions ensure that $\varphi$ is an equivalence. \end{proof} Since $ \mathop{\mathrm{Hom}}_\mathbb{Z}(\mathbb{Z}[u^{\pm 1}],\mathbb{Z})$ is a free $\mathbb{Z}[u^{\pm 1}]$-module of rank one, an immediate corollary of this proposition is that $I_\mathbb{Z} KU$ is equivalent to $KU$, i.e. $KU$ is Anderson self-dual. Similarly, we can run the spectral sequence~\eqref{eq:andersonSS} for $I_\mathbb{Z} KO $ and conclude that $\pi_k I_\mathbb{Z} KO$ is $\mathbb{Z}$ when $k$ is divisible by $4$, $\mathbb{Z}/2$ when $k$ is congruent to $5$ or $6$ modulo $8$, and zero otherwise. This indicates that perhaps $I_\mathbb{Z} KO$ is a fourfold suspension of $KO$, which we show is true in~\Cref{theorem:andersonKO}. However, determining the $KO$-module structure is tricky from the direct approach that works for $I_\mathbb{Z} KU$ because multiplication by $\eta$ on the dual non-torsion classes cannot be detected algebraically.
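In more detail: since the spectral sequence~\eqref{eq:andersonSS} is concentrated in the columns $s=0,1$ and the $s=0$ groups $\mathop{\mathrm{Hom}}_\mathbb{Z}(\pi_t KO,\mathbb{Z})$ are free, it degenerates into split short exact sequences, and the additive statement above reduces to the following routine check.

```latex
% pi_* KO is 8-periodic: Z, Z/2, Z/2, 0, Z, 0, 0, 0 in degrees 0 through 7.
\[
\pi_k I_\mathbb{Z} KO
\cong \mathop{\mathrm{Hom}}_\mathbb{Z}(\pi_{-k}KO,\mathbb{Z})
\oplus \mathrm{Ext}^1_\mathbb{Z}(\pi_{-k-1}KO,\mathbb{Z})
\cong
\begin{cases}
\mathbb{Z} & k\equiv 0 \pmod 4,\\
\mathbb{Z}/2 & k\equiv 5,6 \pmod 8,\\
0 & \text{otherwise},
\end{cases}
\]
% using Hom(Z/2,Z) = 0 and Ext^1(Z/2,Z) = Z/2: the copies of Z in degrees
% divisible by 4 contribute through Hom, while the eta-torsion groups
% pi_1 KO and pi_2 KO contribute through Ext^1, shifted down by one degree.
```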
Thus we proceed using a more sophisticated approach.\footnote{The reader interested in the direct approach is referred to the appendix of \cite{FreedMooreSegal} for details.} A natural question that arises at this point is whether we can use the self-duality of $KU$ to deduce any information about the Anderson dual of the real $K$-theory spectrum $KO$, using (only) that $KO$ is the homotopy fixed point spectrum $KU^{hC_2}$, where $C_2 = \langle c \mid c^2=1 \rangle$ acts on $KU$ by complex conjugation. In other words, we would like to identify \begin{equation}\label{eq:AD} I_\mathbb{Z} KO \simeq I_\mathbb{Z} KU^{hC_2} \simeq F(KU^{hC_2},I_\mathbb{Z}) \end{equation} as a $KO$-module. Note that homotopy fixed points are the right, not the left, adjoint to the forgetful functor, hence we cannot deduce much solely from the definitions and the self-duality of $KU$. However, we show in~\Cref{sec:tate} that $KO$ is also the homotopy orbit spectrum $KU_{hC_2}$, and then proceed to the identification of $I_\mathbb{Z} KO$. \section{Algebro-geometric meaning of Anderson duality}\label{sec:andersonduality} Before proceeding to the computational specifics for $K$-theory, we would like to put the above duality discussion in a broader perspective. Namely, we claim that Anderson duality arises naturally from a categorical viewpoint, as it is related to the existence of ``wrong side'' adjoints, or from an algebro-geometric viewpoint, as such adjoints signify the compactness of certain geometric objects.\footnote{For example, compact manifolds have Poincar\'e duality, and proper smooth schemes have Serre duality.} Perhaps the first notion of duality one encounters is that for vector spaces over a field $k$; the dual $V^\vee$ of a vector space $V$ is the space of linear maps from $V$ to the base field $k$, and has the universal property that linear maps $W\to V^\vee$ are in bijection with bilinear pairings $W\otimes V\to k$.
If we restrict our attention to finite dimensional vector spaces, we can recover $V$ from its dual as $V\cong (V^\vee)^\vee$. As is well known, if we try to directly imitate this situation in the more general case of (finitely generated) modules over a ring $R$ and define a ``naive'' dual of $M$ as $\mathop{\mathrm{Hom}}_R(M,R)$, perhaps the first problem we encounter is the inexactness of the functor $\mathop{\mathrm{Hom}}$. For example, if $R=\mathbb{Z}$ and $M$ is finite, $\mathop{\mathrm{Hom}}_\mathbb{Z}(M,\mathbb{Z})$ is zero, so the naive dual cannot recover $M$. The initial obstacle is easily resolved by passing to the derived category of (complexes of) $R$-modules. One observes that for this to work properly, $R$ ought to have finite injective dimension as a module over itself. If it does not, we may still be able to define good dual modules by mapping into some other object instead of $R$. Hence we arrive at the following definition. \begin{definition}\label{def:dualizingalg} An $R$-module\footnote{i.e. an object of the derived category of complexes of $R$-modules} $D$ is dualizing if \begin{enumerate} \item\label{c1} for any finitely generated module $M$, the natural double duality map \[M\to \mathrm{Ext}_R(\mathrm{Ext}_R(M,D),D)\] is an isomorphism, and \item\label{c2} $D$ has finite injective dimension; in other words, if a module $M$ is bounded below, then its $D$-dual $\mathrm{Ext}_R(M,D)$ is bounded above. \end{enumerate} Then $I_D M:=\mathrm{Ext}_R(M,D)$ is called the dual of $M$, and the contravariant functor $I_D$ represented by $D$ is called a (representable) duality functor. \end{definition} Note that if condition \eqref{c2} holds, and the map $R\to \mathrm{Ext}_R(D,D)$ is an isomorphism, i.e. if condition \eqref{c1} holds for $M=R$, then \eqref{c1} holds for any $M$; thus checking whether an object $D$ is dualizing requires relatively little work.
\begin{example}\label{exam:dualz} The following example may provide some motivation for the definition of the Anderson dual as the fiber of $I_\mathbb{Q} \to I_{\mathbb{Q}/\mathbb{Z}}$. The complex $\mathbb{Q} \to \mathbb{Q}/\mathbb{Z}$ is an injective resolution of $\mathbb{Z}$; hence we can use it to compute $I_\mathbb{Z}(M)=\mathrm{Ext}_\mathbb{Z}(M,\mathbb{Z})$ for any abelian group $M$ as the two-term complex $I_\mathbb{Q}(M)=\mathop{\mathrm{Hom}}_\mathbb{Z}(M,\mathbb{Q})\xrightarrow{\pi} \mathop{\mathrm{Hom}}_\mathbb{Z}(M,\mathbb{Q}/\mathbb{Z})=I_{\mathbb{Q}/\mathbb{Z}}(M)$. If $M$ is a finite abelian group, then $I_\mathbb{Z}(M)$ and $ I_{\mathbb{Q}/\mathbb{Z}}(M)$ are equal, up to a shift, so the double duality map $M\to I_\mathbb{Z}(I_\mathbb{Z}(M))$ is an isomorphism. For finitely generated non-torsion $M$, the map $\pi$ is surjective, so $I_\mathbb{Z}(M)=\mathop{\mathrm{Hom}}_{\mathbb{Z}}(M,\mathbb{Z})$ and the double duality map $M\to I_\mathbb{Z}(I_\mathbb{Z}(M))$ is again an isomorphism. Hence $\mathbb{Z}$ is a dualizing $\mathbb{Z}$-module. \end{example} A dualizing $R$-module in the non-derived setting need not exist; for example the category of $\mathbb{Z}$-modules (i.e.~abelian groups) does not have a dualizing module. Nevertheless, its derived category does have a dualizing module: $\mathbb{Z}$ itself serves that role (as in~\Cref{exam:dualz}). This is a significant example of duality in algebra as $\mathbb{Z}$ is an initial object in rings, and dualities for (derived) categories of $R$-modules can be obtained by studying the map $u:\mathbb{Z}\to R$ (or rather, the induced map on the corresponding categories). To be more precise, consider the forgetful functor $u_*$ between the derived categories of $R$-modules and $\mathbb{Z}$-modules. It always has a left adjoint $u^*$ given by tensoring up with $R$. However, suppose $u_*$ also has a right adjoint $u^!$.
Then for any abelian group $A$ and $R$-module $M$, there is an isomorphism \[\mathrm{Ext}_\mathbb{Z}(u_*M,A)\cong \mathrm{Ext}_R(M,u^!A), \] which immediately gives that $u^!\mathbb{Z}$ is a good candidate for a dualizing $R$-module. Indeed, we have that \[I_{u^!\mathbb{Z}} (I_{u^! \mathbb{Z}} M) = \mathrm{Ext}_R( \mathrm{Ext}_R(M, u^!\mathbb{Z} ), u^!\mathbb{Z} ) \cong \mathrm{Ext}_\mathbb{Z}( \mathrm{Ext}_\mathbb{Z}(M, \mathbb{Z} ),\mathbb{Z} ) \cong M.\] In the above discussion, there was no need to restrict ourselves to modules over commutative rings. Indeed, considering quasi-coherent modules over the structure sheaf of a scheme gave rise to Grothendieck-Serre duality; in the algebro-geometric world, a dualizing module is defined precisely as in the algebraic Definition \ref{def:dualizingalg}. Moreover, given a map of schemes $f:X\to Y$, such that $Y$ has a dualizing module $D_Y$ and the pushforward functor $f_*$ is faithful and has a right adjoint $f^!$, we get that $f^!D_Y$ is a dualizing module over $X$. For details, the reader is referred to \cite{hartshorne,neeman,fauskhumay}. Dualizing objects can also be constructed in the category of ring spectra. Here the sphere is an initial object, and the category of $S$-modules is the category of spectra. (Note that this is already derived.) The following definition is due to Lurie~\cite{LurieDAGXIV}. \begin{definition} Let $A$ be a connective commutative ring spectrum, and let $K$ be an $A$-module (for example, as in~\cite{kriz2007rings}). Then $K$ is a dualizing $A$-module if \begin{enumerate} \item\label{item1} Each homotopy group $\pi_n K$ is a finitely generated module over $\pi_0 A$ and $\pi_n K$ vanishes for $n \gg 0$. \item\label{item2} (Dualizing property) The canonical map $A \to F_A(K,K)$ is an equivalence. \item\label{item3} The module $K$ has finite injective dimension: there is an integer $n$ such that if $M$ is an $A$-module with $\pi_i M = 0$ for $i > n$, then $\pi_iF_A(M,K) = 0$ for $i < 0$.
\end{enumerate} \end{definition} \begin{example} We immediately see that the sphere is not a dualizing module over itself (in particular it does not satisfy the vanishing condition), but the Anderson spectrum $I_\mathbb{Z}$ plays this role. The proof is easy and can be found in~\cite[Example 4.3.9]{LurieDAGXIV}. Indeed, \eqref{item1} and \eqref{item3} are obvious from the definition, and \eqref{item2} follows from the fact that $\mathbb{Z}$ is the dualizing object in the derived category of $\mathbb{Z}$-modules. More precisely, duality in $\mathbb{Z}$-modules tells us that the homotopy groups $\pi_*F(I_\mathbb{Z},I_\mathbb{Z})\cong \mathrm{Ext}_\mathbb{Z}(\mathrm{Ext}_\mathbb{Z}(\pi_*S,\mathbb{Z}),\mathbb{Z})$ are isomorphic to $\pi_*S$ as a $\pi_*S$-module. Similarly, the category of $p$-local spectra, i.e. modules over the $p$-local sphere spectrum, has a dualizing object which is given by the spectrum $I_{\mathbb{Z}_{( p)}}$, the homotopy fiber of the natural map $I_{\mathbb{Q}} \to I_{\mathbb{Q}/\mathbb{Z}_{( p)}}$. \end{example} \begin{rem}\label{rem:duals} The above definition is suitable only for modules over \emph{connective} ring spectra, and therefore cannot be applied as is to $KU$ or $KO$-modules. However, if $R$ is \emph{any} ring spectrum, we can study dualizing $R$-modules relative to the unit map $u: S\to R$. Namely, the forgetful functor $u_*$ from $R$-modules to spectra has both a left and a right adjoint, the right one given by taking the function spectrum $F(R,-)$. Thus by the above formal reasoning, $u^! I_\mathbb{Z} = F(R,I_\mathbb{Z})$ will have the dualizing property for $R$-modules. \end{rem} \subsection{Homotopical duality for stacks} We would like to patch together the geometric and homotopical notions of duality; thus we need a notion of a derived scheme, that is, a space locally ringed by spectra.
In fact, we shall not restrict ourselves to schemes, but consider (very simple) stacks, as we would like to have an object $BG$ for a finite group $G$. For our purposes, the following definition (which is in fact a theorem of Lurie's) will suffice. \begin{definition} A derived stack is an ordinary stack $(\mathcal{X},\mathcal{O}_0)$ equipped with a sheaf of commutative ring spectra $\OX$ on its small \'etale site such that \begin{enumerate} \item The pair $(\mathcal{X},\pi_0 \OX)$ is equivalent to $(\mathcal{X},\mathcal{O}_0)$, and \item $\pi_k \OX$ is a quasi-coherent sheaf of $\pi_0 \OX$-modules. \end{enumerate} Here $\pi_k\OX$ is the sheafification of the presheaf $U\mapsto \pi_k(\OX(U))$. In this case we say that $(\mathcal{X},\OX)$ or, abusively, $\mathcal{X}$, is a derived stack. \end{definition} \begin{example} A commutative ring spectrum $R$ defines a derived stack $\Spec R$ whose underlying ordinary ``stack'' is the affine scheme $\Spec \pi_0 R$. In particular, both $\Spec S$ and $\Spec H\mathbb{Z}$ are derived versions of $\Spec \mathbb{Z}$. In fact, $\Spec H\mathbb{Z}$ is the usual $\Spec \mathbb{Z}$, but it is no longer terminal; now $\Spec S$ is the terminal object. \end{example} By an $\OX$-module we will mean a quasi-coherent sheaf $\mathcal{F}$ of $\OX$-module spectra, i.e. a sheaf of $\OX$-modules such that for any diagram \[ \xymatrix{ U \ar[rr]\ar[rd] &&V\ar[ld]\\ &\mathcal{X}, }\] in which $U$ and $V$ are derived affine schemes \'etale over $\mathcal{X}$, we get an equivalence \[ \mathcal{F}(V) \underset{\OX(V)}{\wedge} \OX(U) \xrightarrow{\sim} \mathcal{F}(U). \] We will consider the following situation. Let $(\mathcal{X},\OX)$ be a derived stack over the sphere spectrum $S$ and let \[f:\mathcal{X}\to \Spec S\] be its structure map. The push-forward or global sections functor $f_*: (\OX\text{-mod}) \to (S\text{-mod})$ always has a left adjoint, $f^*$, the constant sheaf functor.
However, in some situations, it also has a right adjoint, usually denoted $f^!$. When such $f^!$ exists, $f^!\IZ$ is a perfect candidate for a dualizing module for $\OX$-modules.\footnote{Compare~\Cref{rem:duals}.} \begin{example} Let $R$ be a commutative ring spectrum, and let $u:\Spec R\to \Spec S$ be the structure map. Then $u_*$, which is the forgetful functor from $R$-modules to spectra, has both a left adjoint $u^* = - \wedge R$ and a right adjoint $u^! = F(R, -)$. Hence $u^! I_\mathbb{Z} = F(R,I_\mathbb{Z}) $ has the dualizing property for $R$-modules. \end{example} \begin{example} Let $R$ be a commutative ring spectrum with an action by a finite group $G$. Then $(BG,R)$ is an example of a derived stack, whereby modules over the structure sheaf are $R$-modules with a compatible $G$-action. If $M$ is such a module, then $f_* M = M^{hG}$, and $f_*$ has a left adjoint given by smashing with $R$. However, $f_*$ may or may not have a right adjoint for non-trivial $G$. The existence of such an adjoint is related to the Tate spectra that we recall in the next section. Note that in a $K(n)$-local setting the right adjoint of such an $f_*$ always exists by a theorem of Greenlees-Sadofsky \cite{GreenleesSadofsky}, recently generalized by Hopkins-Lurie \cite{HopkinsLurie}. \end{example} \subsection{Duality for $K$-theory, geometrically} We now describe how $K$-theory fits into this perspective; to begin with, we recall the setup from \cite[Appendix A]{lawson2012strictly}. The reader is referred to loc.cit. for more details and proofs. Let $\mathcal{M}_{\mathbb{G}_m}/\operatorname{Spec}{\mathbb{Z}}$ denote the stack classifying group schemes which become isomorphic to $\mathbb{G}_m$ after a faithfully flat extension. These are also called forms of the multiplicative group.
Since the automorphism group of $\mathbb{G}_m$ over $\mathbb{Z}$ is $C_2$, we get that $\mathcal{M}_{\mathbb{G}_m}$ is equivalent to the stack $BC_2$ classifying principal $C_2$-torsors; explicit details can be found in loc.cit. There exists a sheaf $\mathcal{O}^\text{top}$ of commutative ring spectra on the stack $\mathcal{M}_{\mathbb{G}_m}$ such that to each \'etale map $\Spec R \to \mathcal{M}_{\mathbb{G}_m}$, the sheaf $\mathcal{O}^\text{top}$ assigns a complex orientable weakly even periodic commutative ring spectrum $\mathcal{O}^\text{top}( R)$ such that \begin{enumerate}[(a)] \item $\pi_0 \mathcal{O}^\text{top}( R) = R$, and \item the formal completion of the form of $\mathbb{G}_m$ classified by $\Spec R \to \mathcal{M}_{\mathbb{G}_m}$ is isomorphic to the formal group of $\mathcal{O}^\text{top}( R)$. \end{enumerate} Lawson-Nauman show that the spectrally ringed stack $(\mathcal{M}_{\mathbb{G}_m}, \mathcal{O}^{\text{top}})$ (over the sphere spectrum $\Spec S$) is equivalent to $(BC_2,KU)$, where $C_2$ acts on $KU$ (by commutative ring maps) via complex conjugation. In particular, the category of $\mathcal{O}^{\text{top}} $-modules is equivalent to the category of $C_2$-equivariant $KU$-modules. More specifically, let $\Spec R \to \mathcal{M}_{\mathbb{G}_m}$ be an \'etale map, and let $\tilde R$ denote the ring which fits in a pullback square \[\xymatrix{ \Spec \tilde R \ar[r] \ar[d] & \Spec R \ar[d]\\ \Spec \mathbb{Z} \ar[r] & \mathcal{M}_{\mathbb{G}_m} \cong BC_2, } \] where the bottom map is the usual covering map of $BC_2$ (which, in particular, is an affine map).
The ring $\tilde R $ constructed in this way has a free $C_2$-action and as such has a homotopically unique $C_2$-equivariant realization $S(\tilde R) $ according to \cite{BakerRichter}, with \[\pi_* S(\tilde R) \cong \pi_* S\otimes_{\mathbb{Z}} \tilde R.\] The value that the sheaf $\mathcal{O}^{\text{top}}$ takes on the affine $\Spec R \to \mathcal{M}_{\mathbb{G}_m}$ is then given by $(KU\wedge S(\tilde R))^{hC_2}$, where $C_2$ acts diagonally on the smash product. We have \[\pi_* \mathcal{O}^{\text{top}}( R) = H^0(C_2, \pi_*KU \otimes \tilde R). \] For example, if $R=\mathbb{Z}$ and the map is the covering $\Spec \mathbb{Z} \to \mathcal{M}_{\mathbb{G}_m}$ (i.e. the map classifying $\mathbb{G}_m$ itself), $\tilde R \cong \mathbb{Z}[C_2] $, and $\mathcal{O}^{\text{top}}(\mathbb{Z}) = (KU[C_2])^{hC_2} \simeq KU$. Thus the global sections of $\mathcal{O}^{\text{top}}$ over $\mathcal{M}_{\mathbb{G}_m}$ are the homotopy fixed points $KU^{hC_2}$, i.e. the real $K$-theory spectrum $KO$. In fact, using Galois descent or the general affineness machinery of Mathew-Meier \cite[Ex.~6.1]{MeierMatthew}, one shows that the category of $\mathcal{O}^{\text{top}}$-modules, which we saw is equivalent to $C_2\text{-}KU$-modules, is further equivalent to the category of $KO$-modules as, by loc.cit., the derived stack $(BC_2,KU)$ is equivalent to $\Spec KO$. Note that in the category of $C_2\text{-}KU$-modules, the internal $\mathop{\mathrm{Hom}}$ is given by the function spectrum of non-equivariant $KU$-module maps equipped with the $C_2$-action by conjugation; we will denote this function spectrum by $F_{C_2\text{-}KU}$. \begin{theorem}\label{thm:ushriek} The following three equivalent statements are true. \begin{enumerate}[(i)] \item Let $u$ be the structure map $ \Spec KO \to \Spec S$. The push forward (i.e.
forgetful) map \[u_*:(KO\text{-mod})\to (S\text{-mod})\] has a right adjoint $u^!=F(KO,-)$ such that for a spectrum $A$ \[ u^!A =F(KO, A) \simeq F ( I_\mathbb{Z} A, \Sigma^4 KO).\] In particular, $u^! I_\mathbb{Z} \simeq \Sigma^4 KO $ is the relative dualizing $KO$-module. \item\label{part:thm:g} The push forward $g_*$ along the structure map $g:(BC_2,KU)\to \Spec S$ has a right adjoint $g^!$ given by \[ g^! A = F_{C_2\text{-}KU} (g^* (I_\mathbb{Z} A), \Sigma^4 KU ). \] \item The push forward $f_*$ along $(\mathcal{M}_{\mathbb{G}_m},\mathcal{O}^{\text{top}}) \to \Spec S$ has a right adjoint $f^!$ such that \[ f^! A = \mathop{\mathrm{Hom}}_{\mathcal{O}^{\text{top}}} (f^*(I_\mathbb{Z} A),\Sigma^4 \mathcal{O}^{\text{top}} ).\] In particular, $f^! I_\mathbb{Z} \simeq \Sigma^4\mathcal{O}^{\text{top}} $ is the relative dualizing $\mathcal{O}^{\text{top}}$-module. \end{enumerate} \end{theorem} \begin{rem} The proof of this theorem depends on~\cref{thm:Andersondual}, which is the main result of this paper and to whose proof~\cref{sec:norm} through~\cref{sec:KOanderson} are devoted. Thus the logical placement of the proof of this theorem is at the end of~\cref{sec:KOanderson}, but we have decided to include it here for better readability. The reader is advised to come back to the proof after consulting the statements of~\cref{thm:Andersondual} and~\cref{cor:FixedHtpyFixed}. \end{rem} \begin{proof} That the statements are equivalent is immediate from the above discussion. We will prove~\cref{part:thm:g}. What we need to show is that for a spectrum $A$ and a $C_2\text{-}KU$ module $M$, there is a natural equivalence \begin{align}\label{eq:adjunction} g_* F_{C_2\text{-}KU}(M, F_{C_2\text{-}KU} (g^* (I_\mathbb{Z} A), \Sigma^4 KU )) \simeq F(g_* M, A). \end{align} We split the proof into two parts.
\begin{enumerate}[(a)] \item When $A$ is the Anderson spectrum $I_\mathbb{Z}$, \eqref{eq:adjunction} reduces to showing that \[ F_{KU}(M,\Sigma^4 KU)^{hC_2} \simeq F(M^{hC_2}, I_\mathbb{Z} ).\] However, $\Sigma^4 KU$ is equivariantly equivalent to $I_\mathbb{Z} KU$ by~\Cref{thm:Andersondual}, and $M^{hC_2}$ is equivalent to $M_{hC_2}$ by~\Cref{cor:FixedHtpyFixed}. The latter gives equivalences $F(M^{hC_2},I_\mathbb{Z}) \simeq F(M_{hC_2},I_\mathbb{Z}) \simeq F(M,I_\mathbb{Z})^{hC_2}$ which in turn imply that we have a chain of equivalences \[ F_{KU}(M,\Sigma^4 KU)^{hC_2} \simeq F_{KU}(M,F(KU,I_\mathbb{Z}) )^{hC_2} \simeq F(M,I_\mathbb{Z})^{hC_2} \simeq F(M^{hC_2}, I_\mathbb{Z} ).\] \item For a general $A$, we note that the map $A \to F( F(A,I_\mathbb{Z}),I_\mathbb{Z})$ is an equivalence, so we may replace $A$ with its double dual. For a $C_2\text{-}KU$ module $M$, since $M_{hC_2}\simeq M^{hC_2} $ (by~\Cref{cor:FixedHtpyFixed}) we have that \[ M^{hC_2} \wedge F(A,I_\mathbb{Z}) \simeq (M\wedge F(A,I_\mathbb{Z}))^{hC_2},\] implying \begin{align*} F(M^{hC_2}, A) & \simeq F\big(M^{hC_2}, F(F(A,I_\mathbb{Z}),I_\mathbb{Z}) \big) \simeq F(M^{hC_2}\wedge F(A,I_\mathbb{Z}),I_\mathbb{Z})\\ & \simeq F\big( (M\wedge F(A,I_\mathbb{Z}))^{hC_2},I_\mathbb{Z} \big). \end{align*} By part (a), $ F\big( (M\wedge F(A,I_\mathbb{Z}))^{hC_2},I_\mathbb{Z} \big) $ is equivalent to \[ F_{KU} \big( M\wedge F(A,I_\mathbb{Z}), \Sigma^4 KU \big)^{hC_2}, \] proving our result. \end{enumerate} \end{proof} \section{The norm cofibration sequence}\label{sec:norm} Suppose a finite group $G$ acts on a module $M$. The map on $M$ given by \[m \mapsto \sum_{g\in G} gm\] factors through the orbits $M_G$ and lands in the invariants $M^G$, giving the norm $N_G:M_G \to M^G$.
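As a quick illustration (a routine check from the definitions), take $G=C_2=\langle c\rangle$ acting on $M=\mathbb{Z}$ either trivially or by sign; these are exactly the two cases that occur for the homotopy of $KU$ below.

```latex
% Trivial action: M_G = M^G = Z and N(m) = m + cm = 2m, so the norm is
\[
N_{C_2}\colon \mathbb{Z} \xrightarrow{\ 2\ } \mathbb{Z},
\qquad \operatorname{coker}(N_{C_2}) = \mathbb{Z}/2, \quad \ker(N_{C_2})=0.
\]
% Sign action: M^G = 0, M_G = Z/(m-(-m)) = Z/2 and N(m) = m + cm = 0, so
\[
N_{C_2}\colon \mathbb{Z}/2 \xrightarrow{\ 0\ } 0,
\qquad \operatorname{coker}(N_{C_2}) = 0, \quad \ker(N_{C_2})=\mathbb{Z}/2.
\]
```

These cokernels and kernels give the Tate groups in degrees $0$ and $-1$, and periodicity then yields $\hat H^n(C_2;\mathbb{Z})=\mathbb{Z}/2$ for $n$ even and $\hat H^n(C_2;\mathbb{Z}_{sgn})=\mathbb{Z}/2$ for $n$ odd.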
The Tate cohomology groups are then defined as \begin{equation}\label{eq:tateDef} \hat{H}^n(G;M) = \begin{cases} H^n(G;M) &\text{ for } n \ge 1 \\ \operatorname{coker}(N_G) &\text{ for } n = 0 \\ \operatorname{ker} (N_G) &\text{ for } n = -1 \\ H_{-n-1}(G;M) &\text{ for } n \le -2. \end{cases} \end{equation} In the same vein, Greenlees and May~\cite{greenlees1995generalized} associate a Tate spectrum to a $G$-spectrum $X$. The story starts with the cofiber sequence \[ EG_+ \to S^0 \to \tilde EG, \] which gives rise to the following commutative diagram of equivariant maps \[\xymatrix{ EG_+ \wedge X \ar[r] \ar[d] & X \ar[r] \ar[d] & \tilde EG \wedge X \ar[d] \\ EG_+ \wedge F(EG_+,X) \ar[r] & F(EG_+,X) \ar[r] & \tilde EG \wedge F(EG_+,X), } \] in which the rows are cofiber sequences. The left vertical arrow is always a $G$-equivalence. Upon taking $G$-fixed points and using the Adams isomorphism \cite[XVI.5.4]{may1996equivariant}, the diagram becomes \begin{align}\label{eq:NormCofibration} \xymatrix{ X_{hG} \ar[r] \ar@{=}[d] & X^G \ar[r] \ar[d] & \Phi^GX\ar[d] \\ X_{hG} \ar[r] & X^{hG} \ar[r] & X^{tG}, } \end{align} where $\Phi^G X$, which we define as $(\tilde EG \wedge X)^G$, is the spectrum of geometric fixed points, and $X^{tG}=( \tilde EG \wedge F(EG_+,X))^G$ is the Tate spectrum of $X$. The map $X_{hG} \to X^{hG}$ is often called the ``norm map'', and can be thought of as the spectrum-level version of the algebraic norm map considered above. Thus the Tate spectrum is the cofiber of the norm map. Associated to the homotopy orbit, homotopy fixed point, and Tate spectrum are spectral sequences \begin{equation}\label{ss:GreenleesMay} \begin{aligned} H_s(G,\pi_t(X)) \Rightarrow \pi_{t-s} X_{hG} \\ H^s(G,\pi_t(X)) \Rightarrow \pi_{t-s} X^{hG} \\ \hat H^s(G,\pi_t(X)) \Rightarrow \pi_{t-s} X^{tG}.
\end{aligned} \end{equation} If $R$ is a ring spectrum, then so are $R^{hG}$ and $R^{tG}$; moreover, the homotopy fixed point and Tate spectral sequences are spectral sequences of differential algebras and there is a natural map between them that is compatible with the differential algebra structure. From the definition it is clear that $X^{tG}$ is contractible if and only if the norm map is an equivalence $X_{hG} \simeq X^{hG}$ between homotopy orbits and homotopy fixed points. We will show that this is the case for $X=KU$ with the $C_2$-action by complex conjugation. \section{The homotopy fixed point spectral sequence for $KO\simeq KU^{hC_2}$}\label{sec:HFPSS} In the equivalence $KO = KU^{hC_2}$, $C_2$ acts on $KU$ by complex conjugation. This action has been made ``genuinely'' equivariant by Atiyah \cite{atiyah1966k}, who constructed a $C_2$-equivariant version of $KU$, often denoted by $K\mathbb{R}$, which is indexed by representation spheres. In particular, the fixed points $K\mathbb{R}^{C_2}$ are equivalent to the homotopy fixed points $KU^{hC_2}=KO$. In view of this, we will abandon the notation $K\mathbb{R}$ and refer to this ``genuine'' $C_2$-spectrum by the name $KU$ as well. The $E_2$-term of the homotopy fixed point spectral sequence for $KU^{hC_2}$ is given by $H^*(C_2;KU_*)$, with the generator $c$ acting on $KU_*=\mathbb{Z}[u^{\pm 1}]$ via $c(u)= -u$. The standard projective resolution \[ \cdots \xrightarrow{1-c} \mathbb{Z}[C_2]\xrightarrow{1+c} \mathbb{Z}[C_2]\xrightarrow{1-c} \mathbb{Z}[C_2] \xrightarrow{\varepsilon} \mathbb{Z} \] of $\mathbb{Z}$, with $\varepsilon$ the augmentation, immediately gives that the $E_2$-term of the HFPSS can be described as \[ E_2^{s,t} = \mathbb{Z}[\eta,t^{\pm 1}] /(2\eta), \] with $|\eta| = (1,1)$ and $|t| = (4,0)$, where $t = u^2$ is the square of the Bott class and $\eta$ is the class that ends up representing the Hopf element $S^1\to KO$. It has become somewhat standard to compute the differentials in the HFPSS as follows.
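For completeness, here is the routine group-cohomology computation behind the displayed $E_2$-term; we record it since the same pattern recurs in the Tate spectral sequence below.

```latex
% Applying Hom_{Z[C_2]}(-, pi_t KU) to the resolution above yields the
% two-periodic complex  pi_t KU --(1-c)--> pi_t KU --(1+c)--> pi_t KU --> ...
% On pi_t KU = Z{u^{t/2}} (t even) the generator c acts by (-1)^{t/2}, so
% the pair of maps (1-c, 1+c) is (0, 2) for t = 0 mod 4 and (2, 0) for
% t = 2 mod 4, giving
\[
H^s(C_2;\pi_t KU)\cong
\begin{cases}
\mathbb{Z} & t\equiv 0 \pmod 4,\ s=0,\\
\mathbb{Z}/2 & t\equiv 0 \pmod 4,\ s>0 \text{ even},\\
\mathbb{Z}/2 & t\equiv 2 \pmod 4,\ s \text{ odd},\\
0 & \text{otherwise}.
\end{cases}
\]
% The generator of H^1(C_2; pi_2 KU) is eta and the generator of
% H^0(C_2; pi_4 KU) is t = u^2; every non-zero group above is generated by
% the evident monomial eta^s t^k, recovering E_2 = Z[eta, t^{+-1}]/(2 eta).
```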
Via Bott periodicity we know that $KO$ is an 8-periodic ring spectrum, hence an analysis of the spectral sequence forces there to be a differential $d_3(t)=\eta^3$, and the algebra structure determines the rest of the differentials. Even though this is a famous differential, we include here a proof from scratch that we learned from Mike Hopkins and Mike Hill and which only uses the fact that $KO\simeq KU^{hC_2} \simeq KU^{C_2}$, as well as the equivariant form of Bott periodicity. The importance of the argument lies in its potential to be generalized to higher chromatic analogues of $KO$, such as $TMF$ or the $EO_n$'s, where less is known a priori. The HFPSS is the Bousfield-Kan spectral sequence for the cosimplicial spectrum $ F((EC_2)_+, KU)$, i.e. \[\xymatrix{ KU \ar@<0.5ex>[r]^-c\ar@<-0.5ex>[r]_-1 & F((C_2)_+, KU) \ar@<0ex>[r] \ar@<1ex>[r]\ar@<-1ex>[r] & F((C_2)^2_+, KU) \ar@<0.5ex>[r]\ar@<-0.5ex>[r] \ar@<1.5ex>[r]\ar@<-1.5ex>[r] &\cdots }\] Its associated normalized complex (or $E_1$-page of the spectral sequence) is \[\xymatrix{KU \ar[r]^{1-c} & KU \ar[r]^{1+c} & KU \ar[r]^{1-c} &\cdots }\] Let $\rho$ denote the regular representation of $C_2$; then $\rho=1\oplus \sigma$, with $\sigma$ the sign representation. The one-point compactification $S^{\rho}$ of $\rho$ is $C_2$-equivalent to $\mathbb{C} P^1$. As such, it gives a $C_2$-map $S^{\rho} \hookrightarrow BU(1) \to BU \to KU$ which we will denote by $v_1$. This $v_1$ defines an element in $\pi_{\rho} KU $ which corresponds to a $KU$-module map \[S^{\rho }\wedge KU \to KU, \] which is the $C_2$-equivalence known as Bott periodicity~\cite[XIV.3.2]{may1996equivariant}. An equivalent way to construct $v_1$ is by observing that the composition $S^1 \xrightarrow{\eta} KO \to KU$ is null and $C_2$-equivariant (where, of course, $S^1 $ and $KO$ have trivial actions). Thus it extends over the disk $D^2$ bounded by $S^1$ in two ways; if $\tau:D^2\to KU$ is one of them, then $c \circ \tau:D^2\to KU$ is the other.
By construction, $\tau$ and $c\circ \tau$ agree on the boundary $S^1$ of $D^2$, hence can be glued to give a map $S^{\rho} \to KU$ which is precisely $v_1$, as $cu=-u$. Our goal is to show that in the HFPSS for $KU$, $d_3 (v_1^2)=\eta^3$, and our method is to look at the map of HFPSSs induced by $v_1^2:S^{2\rho} \to KU$. This will be fruitful because the HFPSS for $S^{2\rho}$ is determined by the cell decomposition of $(S^{2\rho})^{hC_2}$. For any integer $n$, we have that the homotopy orbits $(S^{-n\sigma})_{hC_2}$ are equivalent to the Thom spectrum $(\mathbb{R} P^\infty)^{-n\xi}$, where $\xi$ is the tautological line bundle of $\mathbb{R} P^\infty$. This Thom spectrum is briefly denoted $\mathbb{R} P^\infty_{-n}$, is called a stunted projective space, and has been extensively studied. In particular, its cell structure is well known. The schematic of the cell structure for $n=2$ is as follows \begin{align}\label{eq:celldiagram1} \xymatrix{ \underset{-2}{\bullet} \ar@{-}@/^1pc/[rr]& \underset{-1}\bullet \ar@{-}@/_1pc/[rr] \ar@{-}[r] & \underset{0}\bullet &\underset{1}\bullet \ar@{-}[r] & \underset{2}\bullet \ar@{-}@/^1pc/[rr] & \ar@{-}[r] \underset{3}\bullet \ar@{-}@/_1pc/[rr] & \underset{4}\bullet &\ar@{-}[r] \underset{5}\bullet & \underset{6}\bullet \cdots } \end{align} where the bullets denote cells in dimension given by the corresponding number, straight short lines between two bullets indicate that the cells are attached via multiplication by $2$, and the curved lines indicate attaching maps $\eta$. The pattern is four-periodic. Let $DX$ denote the Spanier-Whitehead dual of $X$. We have a chain of equivalences \[(S^{2\rho})^{hC_2} \simeq (D S^{-2\rho})^{hC_2} \simeq D((S^{-2\rho})_{hC_2}) \simeq D(\Sigma^{-2} (S^{-2\sigma})_{hC_2} ) \simeq \Sigma^2 D(\mathbb{R} P^\infty_{-2}). 
\] Hence the cell diagram of $(S^{2\rho})^{hC_2} $ is \begin{align} \xymatrix{ \cdots \underset{-4}\bullet \ar@{-}[r] & \underset{-3}\bullet \ar@{-}@/_1pc/[rr] & \underset{-2}\bullet \ar@{-}[r] \ar@{-}@/^1pc/[rr] & \underset{-1}\bullet &\underset{0}\bullet \ar@{-}[r] &\underset{1}\bullet \ar@{-}@/_1pc/[rr] &\underset{2}\bullet \ar@{-}[r] \ar@{-}@/^1pc/[rr] &\underset{3}\bullet &\underset{4}\bullet } \end{align} i.e. it is the diagram in~\eqref{eq:celldiagram1} flipped and shifted by $2$. This determines the $E_1$-page of the HFPSS for $S^{2\rho}$ and all differentials in the HFPSS for $KU$. Let us describe how that works. \Cref{fig:HFPSS1} shows the relevant parts of the $E_1$-terms for the HFPSS for $S^{2 \rho}$ and $KU$, the first following from the cell decomposition above. In particular, the $d_1$ differential is non-zero where shown, in which case it is simply given by multiplication by $2$. The map $v_1^2$ relates the spectral sequences, and in particular, as noted, the generator labelled $e_4$ maps to the generator labelled $t:=v_1^2$. It is a simple matter now to determine the relevant parts of the $E_2$-page. Note that there is a possible $d_2$ differential on the $S^{2 \rho}$ spectral sequence, whilst sparsity precludes such a differential on $KU^{hC_2}$. In fact, the attaching map $\eta$ in the cell decomposition above determines the $d_2$ differential $d_2(e_4) = \eta h_1^2$ on the $S^{2 \rho}$ spectral sequence, as shown in~\Cref{fig:HFPSS2}. Under the map of spectral sequences induced by $v_1^2$, the classes $e_4$ and $h_1^2$ map to the classes $v_1^2=t$ and $\eta^2$, respectively. Moreover, multiplication by $\eta$ remains multiplication by $\eta$ under $v_1^2$, but it is a map of cohomological degree $1$ in the HFPSS for $KU$. In particular, we get the differential $d_3(v_1^2) = \eta^3$ as expected; multiplicativity now gives that \[ d_3(v_1^{2+4k} \eta^m) = v_1^{4k}\eta^m d_3(v_1^2) = v_1^{4k} \eta^{m+3}, \] i.e.
the usual differential pattern depicted in~\Cref{fig:e3ss}. \begin{figure}[tbh] \centering \includegraphics[scale=0.93]{HFPSS1a.pdf} \includegraphics[scale=0.93]{HFPSS1b.pdf} \caption{The $E_1$-pages for the HFPSS for $S^{2 \rho}$ and $KU$. By equivariant Bott periodicity, $v_1^2$ induces an isomorphism in degree $(4,0)$.}\label{fig:HFPSS1} \end{figure} \begin{figure}[tbh] \centering \includegraphics[scale=0.93]{HFPSS2a.pdf} \includegraphics[scale=0.93]{HFPSS2b.pdf} \caption{The $E_2$-pages for the HFPSS for $S^{2 \rho}$ and $KU$. Note that there is a $d_2$-differential on the $E_2$-page for $S^{2 \rho}$ but there is no room for a $d_2$ differential for $KU$.}\label{fig:HFPSS2} \end{figure} \begin{figure}[tbh] \centering \includegraphics{HFPSS3.pdf} \caption{The $E_3$-page for the HFPSS for $KU$.}\label{fig:e3ss} \end{figure} \section{The Tate spectrum and the Euler class}\label{sec:tate} In addition to the spectral sequences \eqref{ss:GreenleesMay}, there is a conceptual way to compute the homotopy of a Tate spectrum from that of the homotopy fixed points and it involves a (co)homology class called the Euler class, described below. The calculations in this section bear some similarity to those of Fajstrup~\cite{tatefajstrup} who calculates that the generalized Tate spectrum associated to $K\mathbb{R}$ is equivariantly contractible. The key to both calculations is the fact that the generator $\eta \in \pi_1 KO$ is nilpotent. We now define the Euler class. Think of the sign representation $\sigma$ as a $C_2$-bundle over a point. 
Then the Euler class is a characteristic class \[e_\sigma \in H^1_{C_2}(\ast) = H^1(\mathbb{R} P^\infty) \simeq \mathbb{Z}/2.\] Alternatively, the Euler class $e_{\sigma} $ is the element of \[\pi_{-\sigma}^{C_2} KU = [S^0, \Sigma^{\sigma} KU]^{C_2}\cong [S^1,\Sigma^{\rho}KU]^{C_2} \cong [S^1,KU]^{C_2} \cong \pi_1 KO=\mathbb{Z}/2,\] which is defined as the composite of the inclusion of fixed points $S^0 \to S^{\sigma}$ with the $\sigma$-suspension of the unit map $S^0\to KU$. Note that in the chain of equivalences above, we have used the equivariant form of Bott periodicity. We claim that $e_{\sigma}$ is the non-trivial element of $\pi_{-\sigma}^{C_2} KU$; indeed, the inclusion $S^0\to S^{\sigma}$ is (unstably) the identity on $C_2$-fixed points, and the fixed points of the unit map $S^0\to KU $ contain the unit map for $KU^{C_2}\simeq KO$, which is non-trivial. Thus we conclude that $e_{\sigma} = \eta \in \pi_1 KO$. \begin{prop}\label{thm:tatevanish} The Tate spectrum $KU^{tC_2}$ as well as the geometric fixed point spectrum $\Phi^{C_2} KU$ are both contractible. \end{prop} \begin{proof} By~\cite[XVI.6.8]{may1996equivariant}, $\Phi^{C_2} KU \simeq KU^{C_2}[e_{\sigma}^{-1}] $ and $KU^{tC_2}\simeq KU^{hC_2}[e_{\sigma}^{-1}]$. But $e_{\sigma}=\eta$ is nilpotent in $KO\simeq KU^{C_2} \simeq KU^{hC_2}$, whence the claim. \end{proof} We can also show that the Tate spectrum $KU^{tC_2} $ vanishes by a direct computation. This will have the advantage of making the homotopy orbit spectral sequence easy to calculate. From the above discussion, the $E_2$-term of the Tate spectral sequence for $KU$ can be described as \[ \hat E_2^{s,t} = E_2^{s,t}[\eta^{-1}]= \mathbb{Z}/2[\eta^{\pm 1},t^{\pm 1}]. \] We proved above that $\eta^3=0$ in the homotopy of $KO$. Since the map between the homotopy fixed point and Tate spectral sequences is compatible with the differential algebra structure, we also have that $\eta^3=0$ in the Tate spectral sequence.
Thus we must have $d_3(t) = \eta^3$ and the differentials then follow the pattern shown in~\Cref{fig:TateSS} because of multiplicativity. In particular the $E_4$-page is 0, hence the Tate spectrum is contractible as the Tate spectral sequence is conditionally convergent. \begin{figure}[hbt] \centering \includegraphics{TateSS.pdf} \caption{Tate spectral sequence for $KU$}\label{fig:TateSS} \end{figure} The vanishing of the Tate spectrum allows us to prove the following two corollaries which we used in~\Cref{sec:andersonduality}. \begin{prop} Let $M$ be a $C_2$-equivariant $KU$ module. Then $M^{tG}$ and $\Phi^G M$ are contractible. \end{prop} \begin{proof} The spectra $M^{tG}$ and $\Phi^G M$ are modules over $KU^{tG}$ and $\Phi^G KU$, respectively, and these are contractible by~\Cref{thm:tatevanish}. \end{proof} \begin{cor}\label{cor:FixedHtpyFixed} Let $M$ be a $C_2$-equivariant $KU$ module. Then the fixed points $M^{C_2}$ and the homotopy fixed points $M^{hC_2}$ are equivalent. \end{cor} \begin{proof} Follows immediately from the diagram \eqref{eq:NormCofibration}, in which the two rightmost spectra become contractible by the above proposition. \end{proof} \section{The homotopy orbit spectrum for $KO$} To calculate the homotopy orbit spectral sequence we can use the Tate cohomology groups as given in~\Cref{eq:tateDef} as well as the calculations shown in~\Cref{fig:TateSS}. We just need to calculate the co-invariants; for the trivial action $H_0(C_2;\mathbb{Z}) \simeq \mathbb{Z}$, whilst for the sign-representation $H_0(C_2,\mathbb{Z}_{sgn}) = \mathbb{Z}/2$. We draw the homotopy orbit spectral sequence with cohomological grading, i.e. set $E_2^{s,t} = H_{-s}(C_2,\pi_tKU)$. The differentials can be inferred from those in the Tate spectral sequence. The resulting spectral sequence is shown in~\Cref{fig:HOSS}. Note that there has to be an additive extension as shown with the dashed line, since $\pi_0 KO \simeq \mathbb{Z}$ is torsion free. 
\begin{figure}[tbh] \centering \includegraphics{HOSS.pdf} \caption{The homotopy orbit spectral sequence for $KU$.}\label{fig:HOSS} \end{figure} \section{The Anderson dual of $KO$}\label{sec:KOanderson} From~\Cref{eq:AD,thm:tatevanish}, we get the following sequence of equivalences \[ I_\mathbb{Z} KO \simeq F(KU^{hC_2},I_{\mathbb{Z}}) \simeq F(KU_{hC_2},I_{\mathbb{Z}}) \simeq F(KU,I_{\mathbb{Z}})^{hC_2}. \] Note that even though $I_{\mathbb{Z}} KU \simeq KU$, this does not imply that $(I_\mathbb{Z} KU)^{hC_2} \simeq KO$, as the action may have changed. However, the action on the homotopy groups of $I_\mathbb{Z} KU$ is unchanged as both the trivial representation and the sign representation of $C_2$ are self-dual. Hence the $E_2$-terms of the HFPSS associated to $KU$ and $I_\mathbb{Z} KU$ are isomorphic, though a twist in the action could still alter the differential pattern. Since the $E_2$-terms of these HFPSSs are 4-periodic, and the HFPSS of $I_{\mathbb{Z}} KU$ is a module over the one for $KU$, we have that $I_\mathbb{Z} KO$ is either $KO$ or $\Sigma^4 KO$. We will show that it is the latter by determining the pattern of differentials. \begin{theorem}\label{theorem:andersonKO} The Anderson dual of $KO$ is $\Sigma^4 KO$. \end{theorem} \begin{proof} The argument here will follow the proof of Theorem 13.1 of~\cite{stojanoska2012duality}. Our goal is to show that the spectral sequence computing $F(KU_{hC_2},I_{\mathbb{Z}})$ is the linear dual of the homotopy orbit spectral sequence for $KO = KU_{hC_2}$.
As in the proof of~\cite[Thm.13.1]{stojanoska2012duality} (which in turn is modeled after Deligne's ``Lemma of two filtrations"~\cite{Deligne}), we can construct a commuting square of spectral sequences \[ \xymatrix{ \mathrm{Ext}^v_\mathbb{Z}(H_h(C_2,\pi_t KU),\mathbb{Z}) \ar@{=>}[r]^*+[Fo]{B} \ar@{=>}[d]_*+[Fo]{A} & \mathrm{Ext}^v_\mathbb{Z}(\pi_{t+h}(KU)_{hC_2},\mathbb{Z}) \ar@{=>}[d]^*+[Fo]{D} \\ H^{h+v}(C_2,\pi_{-t}I_\mathbb{Z} KU) \ar@{=>}[r]_*+[Fo]{C} & \pi_{-t-h-v} I_\mathbb{Z}(KU_{hC_2}). } \] Here $B$ is dual to the homotopy orbit spectral sequence, $A$ is obtained via the Grothendieck composite functor spectral sequence associated to the functors $\mathop{\mathrm{Hom}}$ and coinvariants, and $D$ is the Anderson duality spectral sequence given by~\Cref{eq:andersonSS}. We want to show that the spectral sequences $B$ and $C$ are isomorphic. As explained in~\cite{stojanoska2012duality}, the differentials in $B$ are compatible with the filtration giving $C$ if and only if $A$ collapses, which holds in our case. Consequently, $B$ and $C$ are isomorphic. Thus we apply $\text{Ext}_{\mathbb{Z}}(-,\mathbb{Z})$ to the homotopy orbit spectral sequence to obtain the homotopy fixed point spectral sequence for computing the Anderson dual of $KO$, and we can read the differentials straight from the homotopy orbit spectral sequence. A diagram of the spectral sequence can be seen in~\Cref{fig:HFPSS}. This gives an isomorphism on homotopy $\pi_*(I_\mathbb{Z} KO) \simeq \pi_*(\Sigma^4 KO)$. Invoking~\cref{prop:moduleeq} now produces an equivalence $\Sigma^4KO \simeq I_\mathbb{Z} KO$. \end{proof} \begin{figure}[h!] \centering \includegraphics{AndSS.pdf} \caption{The homotopy fixed point spectral sequence for $I_\mathbb{Z} KU$}\label{fig:HFPSS} \end{figure} We are finally in a position to prove~\Cref{thm:Andersondual}. \begin{theorem}\label{thm:Andersondual} The Anderson dual $F(KU, I_\mathbb{Z})$ is $C_2$-equivariantly equivalent to $\Sigma^4KU$.
\end{theorem} \begin{proof} By~\Cref{prop:moduleeq}, we can construct a non-equivariant equivalence $\Sigma^4 KU \to I_\mathbb{Z} KU$ which is nevertheless a $KU$-module map. We claim that this map has an equivariant refinement, or equivalently, that the spectrum map $d: S^4 \to I_\mathbb{Z} KU$ generating the dualizing class has an equivariant refinement. To see that, consider the $C_2$-cofiber sequence \[ C_{2+} \to S^0 \to S^{\sigma},\] which, after mapping into $I_\mathbb{Z} KU$ and taking $\pi_4^{C_2}$, gives an exact sequence \[ [S^4, I_\mathbb{Z} KU ]_{C_2} \to [S^4, I_\mathbb{Z} KU ] \to [S^{\sigma+3},I_\mathbb{Z} KU ]_{C_2}. \] It suffices to show that $d \in [S^4, I_\mathbb{Z} KU]$ maps to zero in $[S^{\sigma+3},I_\mathbb{Z} KU ]_{C_2}$. But using equivariant Bott periodicity and~\Cref{cor:FixedHtpyFixed}, the latter is isomorphic to \[[S^2, I_\mathbb{Z} KU]_{C_2} \cong \pi_2 (I_\mathbb{Z} KU)^{C_2}\cong \pi_2 (I_\mathbb{Z} KU)^{hC_2}, \] which by~\Cref{theorem:andersonKO} is zero. \end{proof} \section{Applications to the $K(1)$-local Picard group}\label{sec:picard} Fix a prime $p$; in this section we use the usual notation for the chromatic players. So, $K(n) $ and $E_n$ denote the Morava $K$ and $E$-theory spectra at height $n$; $\mathbb{G}_n$ is the (big) Morava stabilizer group, $L_n$ denotes localization with respect to $E_n$\footnote{or equivalently with respect to the Johnson-Wilson spectrum $E(n)$}, and $M_n$ is the monochromatic layer functor, i.e. the homotopy fiber of the map $L_n\to L_{n-1}$. As currently defined, even if $X$ is a $K(n)$-local spectrum, $I_{\mathbb{Q}/\mathbb{Z}}X$ need not be. The relevant details to properly extend Brown-Comenetz duality to the $K(n)$-local category have been worked out by Gross and Hopkins~\cite{hopkins1994rigid}, and this form of duality also goes by the name Gross-Hopkins duality.
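For orientation, we recall the defining property of the Brown-Comenetz spectrum $I_{\mathbb{Q}/\mathbb{Z}}$: writing $I_{\mathbb{Q}/\mathbb{Z}}X = F(X,I_{\mathbb{Q}/\mathbb{Z}})$, for any spectrum $X$ there are natural isomorphisms \[ \pi_{-n} I_{\mathbb{Q}/\mathbb{Z}} X \cong \operatorname{Hom}(\pi_n X, \mathbb{Q}/\mathbb{Z}), \] with no $\mathrm{Ext}$ correction term since $\mathbb{Q}/\mathbb{Z}$ is an injective abelian group.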
For $K(n)$-local $X$, the Gross-Hopkins dual of $X$ is defined as $$I_nX = F(M_nX,I_{\mathbb{Q}/\mathbb{Z}}).$$ This spectrum is $K(n)$-local; indeed by~\cite[Proposition 2.2]{stojanoska2012duality} it is in fact equivalent to $L_{K(n)}F(L_nX,I_{\mathbb{Q}/\mathbb{Z}})$. Since $I_\mathbb{Q} X$ becomes contractible after $K(n)$-localisation (for $n \ge 1$) we get that, if $X$ is $E_n$-local, the fiber sequence in~\Cref{eq:fibSeq} gives \begin{equation}\label{eq:andersonBC} I_nX = L_{K(n)}\Sigma I_{\mathbb{Z}} X. \end{equation} Since $KO$ is $E_1$-local we immediately obtain the following statement. \begin{cor}\label{cor:K1localBCdual} The $K(1)$-local Brown-Comenetz dual of $KO$ is given by \[ I_1 KO \simeq \Sigma^5 KO. \] \end{cor} The other main type of duality in the $K(n)$-local category is $K(n)$-local Spanier-Whitehead duality, defined by $D_nX = F(X,L_{K(n)}S^0)$, and now we relate it to Gross-Hopkins duality. To do so we need the Picard group of the $K(n)$-local homotopy category, introduced by Hopkins~\cite{hopkins1994constructions}. We call a $K(n)$-local spectrum $X$ invertible if there exists a spectrum $X^{-1}$ such that $L_{K(n)}(X \wedge X^{-1}) \simeq L_{K(n)}S^0$; the Picard group $\text{Pic}_n$ is the group of isomorphism classes of invertible objects. There is a map $\epsilon: \text{Pic}_n \to H^1(\mathbb{G}_n,(E_n)_0^\times)$, and if $p$ is large compared to $n$ then this map is injective~\cite[Prop.7.5]{hopkins1994constructions}. \addtocounter{footnote}{1} Hopkins, Mahowald and Sadofsky further show that a $K(n)$-local $X$ being invertible is equivalent to $(E_n)_*X$ being a free $(E_n)_*$-module of rank 1.\footnote{Note that we follow the usual convention in that $(E_n)_*X:=\pi_*L_{K(n)}(E_n \wedge X)$.} Noting that there is an equivalence $M_n X \simeq M_n L_{K(n)}X$, let $I_n$ denote $I_n L_{K(n)}S^0$.
By work of Gross and Hopkins~\cite{hopkins1994rigid}, $(E_n)_*I_n$ is a free $(E_n)_*$-module of rank 1 and hence $I_n $ defines an element of $ \text{Pic}_n$. In particular $I_n$ is dualisable, so there is a $K(n)$-local equivalence \begin{equation}\label{eq:grossSW} I_nX \simeq F(X,I_n) \simeq D_nX \wedge I_n . \end{equation} In fact more can be said by incorporating the action of the Morava stabilizer group $\mathbb{G}_n$. We recall briefly that there is a reduced determinant map $\mathbb{G}_n \to \mathbb{Z}_p^\times$, and if $M$ is a $(E_n)_*$-module with a compatible action of $\mathbb{G}_n$, then we write $M \langle \text{det} \rangle$ for the module with $\mathbb{G}_n$ action twisted by the determinant map. Then Hopkins and Gross in fact prove the stronger statement (see also~\cite{strickland2000gross}) that \[ (E_n)_*I_n \simeq \Sigma^{n^2-n}(E_n)_* \langle \text{det} \rangle. \] There is a `twisted sphere' spectrum $S^0 \langle \text{det} \rangle$ (see~\cite[Remark 2.7]{goerss2012hopkins}) such that $(E_n)_* S^0 \langle \text{det} \rangle \simeq (E_n)_* \langle \text{det} \rangle$ and thus whenever $\epsilon:\text{Pic}_n \to H^1(\mathbb{G}_n,(E_n)_0^\times)$ is injective there is an equivalence \begin{equation}\label{eq:hopgross} I_n \simeq \Sigma^{n^2-n} S^0\langle \text{det} \rangle. \end{equation} We will show that at $n=1,p=2$ this does \emph{not} hold, and hence that the map $\epsilon$ has non-trivial kernel, traditionally denoted $\kappa_1$ and called the `exotic' $K(1)$-local Picard group. We note that at $n=1$, $S^0\langle\text{det} \rangle \simeq S^2$, so trivial kernel would imply that $I_1 \simeq S^2$. From now on, we set $p=2$ and omit $K(1)$-localization from the notation with the understanding that everything is in the $K(1)$-local category.
Note that the Morava $E$-theory spectrum $E_1$ is now the $2$-completion\footnote{equivalently, $K(1)$-localization} of $KU$, and the Morava stabilizer group $\mathbb{G}_1$ is $\mathbb{Z}_2^\times$, with its elements corresponding to $2$-adic Adams operations. The subgroup $C_2=\{ \pm 1\} \subset \mathbb{Z}_2^\times $ is a maximal finite subgroup whose non-trivial element acts on $KU$ by $\psi^{-1}$, which is complex conjugation. Let $l$ be a topological generator of $\mathbb{Z}_2^{\times}/\{\pm 1\}$; it is well known (e.g.~\cite{bou79}) that the $K(1)$-local sphere sits in a fiber sequence \[ S \to KO \xrightarrow{\psi^l-1} KO.\] Taking the Anderson dual of this sequence gives \[ \Sigma^4KO \xrightarrow{I_\mathbb{Z}\psi^l -1} \Sigma^4KO \to \Sigma^{-1}I_1. \] Let $P$ denote $\Sigma^{-2}I_1$; hence $P$ is an element of $\Pic_1$ such that $P\neq S$ if and only if $I_1 \neq S^2$. \begin{lemma}\label{lem:dualAdams} The Anderson dual of the Adams operation $\psi^l$ is \[I_\mathbb{Z}\psi^l=\Sigma^4(l^{-2}\psi^{1/l}) = \Sigma^{-4}(l^2 \psi^{1/l}) .\] \end{lemma} \begin{proof} The idea of the proof is that the self-maps of $KO$ are detected by their effect on homotopy. To be more precise, we recall that there is an equivalence \[ KO[[\mathbb{Z}_2^\times/\{\pm 1\}]] \to F(KO,KO), \] where the left hand side is defined as $\holim_H (KO \wedge H)$, where $H$ runs over the finite quotients of the profinite group $\mathbb{Z}_2^\times/\{\pm 1\}$ (a reference is, for example, \cite[Prop.2.6]{GHMR}). In particular, this describes the homotopy classes of maps $KO \to KO$ as the completed group ring $\mathbb{Z}_2[[\mathbb{Z}_2^\times/\{\pm1\}]]$, i.e. power series in the $2$-adic Adams operations of $KO$, and these can be distinguished by what they do on the non-torsion elements of $KO_*$. Let $t \in KO_{4n}$ be an arbitrary non-torsion element of $KO_*$. Then the Adams operation $\psi^l$ acts on $t$ as \[ \psi^l(t) = l^{2n}t.
\] Note that therefore, $\Sigma^{8m} \psi^l:KO\to KO$ is $l^{-4m} \psi^l$. Since the linear dual of multiplication by $m$ is again multiplication by $m$, we see that \[ I_{\mathbb{Z}} \psi^l:I_\mathbb{Z} KO \to I_\mathbb{Z} KO \] is given on the non-torsion homotopy by \[ I_\mathbb{Z} \psi^l(t^\vee) = l^{2n} t^\vee. \] Identifying $\Sigma^{-4}I_\mathbb{Z} KO$ with $KO$, we get that \[ \Sigma^{-4}I_\mathbb{Z} \psi^l:KO \to KO \] takes an element $t\in KO_{4n} \cong (I_\mathbb{Z} KO)_{4n+4}$ and maps it to $ l^{-2n-2} t = l^{-2} \psi^{1/l}(t) $. \end{proof} Note that, in particular, this lemma provides resolutions of $P$ as \begin{equation}\label{eq:P-resolution} \begin{aligned} P \to \Sigma^4KO \xrightarrow{\Sigma^4(l^{-2}\psi^{1/l})-1} \Sigma^4 KO, \text{ or} \\ P \to\Sigma^{-4}KO \xrightarrow{\Sigma^{-4}(l^{2}\psi^{1/l})-1} \Sigma^{-4} KO, \end{aligned} \end{equation} which are a $K(1)$-local analogue (up to a suspension shift) of the resolution of the $K(2)$-local Brown-Comenetz spectrum of Goerss-Henn~\cite{goerss2012brown}. We smash the second resolution with $\Sigma^{4} KO$ to deduce the following result. \begin{prop}\label{prop:PKO} The smash product $P\wedge KO$ is $\Sigma^4 KO$. Consequently, $P$ defines a non-trivial element of $\kappa_1$. \end{prop} \begin{proof} We consider the fiber sequence of $KO$-modules \[ \Sigma^{4}P \wedge KO \to KO\wedge KO \xrightarrow{\alpha} KO\wedge KO,\] from which we will determine $\pi_*\Sigma^{4}P\wedge KO$, where for brevity we have denoted by $\alpha$ the map $(l^{2}\psi^{1/l}-1) \wedge 1$.
The homotopy groups of $KO\wedge KO$ are (see \cite[Prop.1]{hopkins1998k} or \cite[Prop.2.4]{GHMR}) \[ \pi_* (KO\wedge KO) \cong KO_0KO \otimes_{KO_0} KO_* \cong \text{Map}^c(\mathbb{Z}_2^\times/\{ \pm 1\},\mathbb{Z}_2) \otimes_{\mathbb{Z}_2} KO_*, \] and the map $\alpha$ sends $f \otimes x \in \text{Map}^c(\mathbb{Z}_2^\times/\{ \pm 1\},\mathbb{Z}_2) \otimes_{\mathbb{Z}_2} KO_* $ to $g\otimes x$, where $g \in \text{Map}^c(\mathbb{Z}_2^\times/\{ \pm 1\},\mathbb{Z}_2)$ is the function determined by \[ g(k) = l^2 f(k/l) - f(k). \] Consequently, $f\otimes x$ is in the kernel of $\pi_*\alpha$ if and only if for every $k\in \mathbb{Z}_2^\times/\{\pm 1\}$, we have $f(k)=l^2 f(k/l)$. In particular, the kernel of $\pi_0 \alpha$ is $\mathbb{Z}_2$, where $a\in \mathbb{Z}_2$ corresponds to the continuous function $f_a$ determined by $f_a(1)=a$ and the relation $f_a(k)=l^2 f_a(k/l)$; thus the kernel of $\pi_* \alpha $ is $\Ker \pi_0\alpha \otimes KO_*$. Next, we claim that $\pi_0 \alpha$, and therefore $\pi_*\alpha$, is surjective. Indeed, let $g$ be a function $\mathbb{Z}_2^\times/\{\pm 1 \} \to \mathbb{Z}_2$; then the function defined by \[ f(k) = \sum_{n=1}^\infty \frac{1}{l^{2n}} g(l^nk ) \] is such that $\pi_0(\alpha)(f)=g$. We conclude that $\pi_*\Sigma^{4}P\wedge KO \cong \Ker\pi_0\alpha \otimes KO_* \cong KO_*$ as a $KO_*$-module. Appealing to~\cref{prop:moduleeq} now gives us that $\Sigma^{4}P\wedge KO \simeq KO$. \end{proof} \begin{prop}\label{prop:ordertwo} The element $P $ of $\kappa_1$ has order two, i.e. $P\wedge P \simeq S$. \end{prop} \begin{proof} The proof is similar to the one for~\cref{prop:PKO}; we smash the resolution of $P$ in~\cref{eq:P-resolution} with $P$ and use the description of $P\wedge KO$ from the proof of~\cref{prop:PKO}. Denote again by $\alpha$ the map $(l^2\psi^{1/l}-1)\wedge 1:KO\wedge KO \to KO\wedge KO$, and let $\beta$ denote the map $1\wedge (l^2\psi^{1/l}-1):KO\wedge KO \to KO\wedge KO$.
Consider the commutative diagram \[\xymatrix{ \Sigma^{-4}KO\wedge\Sigma^{-4}KO \ar[r]^{\Sigma^{-8}{\beta}} \ar[d]_{\Sigma^{-8}\alpha} & \Sigma^{-4}KO\wedge\Sigma^{-4}KO\ar[d]^{\Sigma^{-8}\alpha}\\ \Sigma^{-4}KO\wedge\Sigma^{-4}KO \ar[r]^{\Sigma^{-8}{\beta}} & \Sigma^{-4}KO\wedge\Sigma^{-4}KO; } \] we know that the vertical fibers are $P\wedge \Sigma^{-4} KO\simeq KO$, and the fiber of the induced map between them is $P\wedge P$. Let us again identify the homotopy groups of $KO\wedge KO$ with $\text{Map}^c(\mathbb{Z}_2^\times/\{\pm 1\},\mathbb{Z}_2)\otimes KO_*$; then (as in~\cite{hopkins1998k}) $\beta$ takes a function $f$ to the function $\beta f$ defined by \[\beta f(k) = l^2 \psi^{1/l}(f(kl)) - f(k). \] Now suppose that $f$ is in the kernel of $\alpha$, i.e. it is in the homotopy of the fiber of $\alpha$; then for every $k$, $f(kl)=l^2f(k)$, whence \[\beta f(k) =l^4 \psi^{1/l}(f(k)) -f(k). \] This implies that the restriction of $\Sigma^{-8}\beta$ to the fibers of $\Sigma^{-8}\alpha$ is \[\Sigma^{-8}(l^4\psi^{1/l}- 1): P\wedge\Sigma^{-4} KO \simeq KO \to P\wedge\Sigma^{-4} KO \simeq KO. \] But we have that, as in the proof of~\Cref{lem:dualAdams}, \[\Sigma^{-8}(l^4\psi^{1/l} ) = \frac{1}{l^4} l^4 \psi^{1/l} = \psi^{1/l}, \] so our fiber sequence becomes \[ P\wedge P \to KO \xrightarrow{\psi^{1/l}-1 } KO, \] showing that $P\wedge P\simeq S$, as $1/l$ is also a topological generator of $\mathbb{Z}_2^\times/\{\pm 1 \}$. \end{proof} The effect of~\Cref{prop:PKO,prop:ordertwo} is that we have produced a subgroup $\mathbb{Z}/2$ of $\kappa_1$ generated by $P$. It is known by the work of Hopkins-Mahowald-Sadofsky~\cite{hopkins1994constructions} that in fact this is all of $\kappa_1$; for completeness we include here what is essentially their construction of a surjective map $\kappa_1\to \mathbb{Z}/2$, thus recovering the following result. \begin{prop} The exotic Picard group $\kappa_1$ is $\mathbb{Z}/2 $ generated by $P$. 
\end{prop} \begin{proof} Let $Z$ be an arbitrary element of $\kappa_1$; the key point is that $KU_*Z $ and $ KU_*$ are isomorphic as $KU_*$-modules, and the isomorphism respects the $\mathbb{Z}_2^\times$-action. (Recall that we are working $K(1)$-locally, so $KU$ really means the $2$-completion of $KU$.) Therefore the $E_2$-term for the $K(1)$-local Adams-Novikov spectral sequence for $Z$ coincides with that for the sphere, although the differentials may be different. By~\cite[Rem.3.4]{goerss2012hopkins} (see also the work of~\cite{hoveysadofsky,shimomura}) there is a group homomorphism \[\tau:\kappa_1 \to H^3(\mathbb{Z}_2^\times,(KU)_2) \simeq \mathbb{Z}/2,\] defined in the following way. Let $\iota_Z \in H^0(\mathbb{Z}_2^\times,(KU)_0Z) \simeq \mathbb{Z}_2$ be the identity element. The first (and indeed the only) possible differential in the $K(1)$-local ANSS for $Z$ is a $d_3$. Given a choice of a $\mathbb{Z}_2^\times$-equivariant isomorphism $f:KU_* \xrightarrow{\simeq} KU_*Z$, we have a diagram \[ \xymatrix{ H^0(\mathbb{Z}_2^\times,(KU)_0) \ar@{-->}[r]^{\phi} \ar[d]_{f_*}^{\simeq} &H^3(\mathbb{Z}_2^\times,(KU)_2) \ar[d]_{\simeq}^{f_*} \\ H^0(\mathbb{Z}_2^\times,(KU)_0Z) \ar[r]_{d_3} & H^{3}(\mathbb{Z}_2^\times,(KU)_2Z). } \] and we define $\tau(Z) = \phi(\iota_Z) = f_*^{-1} d_3 f_*(\iota_Z)$. This does not depend on the choice of $f$ and defines the claimed group homomorphism $\kappa_1 \to H^3(\mathbb{Z}_2^\times,(KU)_2) \simeq \mathbb{Z}/2$. Note that $Z \simeq L_{K(1)}S^0$ if and only if $\iota_Z$ survives the spectral sequence. The $E_2$-term of the spectral sequence for $Z$ is a free module of rank one over the $E_2$-term of that for the sphere, generated by the class $\iota_Z$, and this fully determines the $d_3$ differential for $Z$. Standard calculations (for example~\cite{hoveysadofsky}) show that $E_4=E_\infty$ and therefore the only differential possible is a $d_3$. 
Since $P \not\simeq L_{K(1)}S^0$ by~\Cref{prop:PKO}, this implies that $P$ maps to the non-trivial element of $\mathbb{Z}/2$ under the map $\tau$, and so $\tau$ is a surjection. It is also injective, for if $d_3(\iota_Z) = 0$, then $\iota_Z$ is a permanent cycle and the resulting map extends to an equivalence $Z \simeq L_{K(1)}S^0$. \end{proof} As an additional application, we can use~\Cref{prop:PKO,prop:ordertwo} to compute the $K(1)$-local Spanier-Whitehead dual of $KO$, thus recovering the result of~\cite[Lemma 8.16]{hahn2007iwasawa}. \begin{cor} The $K(1)$-local Spanier-Whitehead dual of $KO$ is given by $D_1(KO) \simeq \Sigma^{-1} KO$. \end{cor} \begin{proof} We have the series of equivalences \begin{align*} D_1KO &\simeq F(KO,S) \simeq F(KO,P) \wedge P \\ &\simeq F(KO,\Sigma^{-1}I_\mathbb{Z}) \wedge P \simeq \Sigma^{-1} \Sigma^4 KO \wedge P \\ &\simeq \Sigma^{-1} KO. \end{align*} \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} Let us consider the classical internal energy of the bosonic membrane, which in a light-cone description in orthonormal gauge can be written as (for more details see e.g. \cite{relmemb}) \begin{align} \mathbb{M}^2&= \int_{\Sigma} \left( \frac{\vec{p}^2}{\rho}+\rho\sum_{i<j}\lbrace x_i, x_j\rbrace^2\right) d^{2}\varphi, \end{align} with the constraints \begin{equation} \sum_{i=1}^d \lbrace x_i,p_i\rbrace=0,\label{contraints continuum} \end{equation} where the integral is performed over a 2-dimensional compact manifold $\Sigma$ and $\lbrace f,g\rbrace:=\frac{1}{\rho(\varphi)}(\partial_1 f \partial_2g-\partial_2 f \partial_1 g)$ denotes the Poisson bracket. It is convenient to use the mode expansions $x_i(\varphi)= x_{i \alpha} Y_{\alpha}(\varphi),~~ p_j(\varphi)= p_{j \alpha} Y_{\alpha}(\varphi)$ in terms of the eigenfunctions $\lbrace Y_{\alpha}\rbrace_{\alpha=1}^{\infty}$ of the Laplace operator on $\Sigma$, where the zero modes are subtracted. This allows us to rewrite $\mathbb{M}^2$ as an infinite sum over the internal modes \begin{align} \mathbb{M}^2&=p_{i \alpha}p_{i \alpha} +\frac{1}{2}g_{\alpha \beta \gamma} g_{\alpha \beta' \gamma'}x_{i\beta} x_{i\beta'}x_{j\gamma} x_{j\gamma'},\label{field theory}\\ g_{\alpha \beta \gamma}&:=\int{Y_{\beta}\epsilon^{a b}\partial_{a}Y_{\alpha}\partial_{b}Y_{\gamma}d^{2}\varphi},~~i=1,...,d, ~~~\alpha, \beta, \gamma=1,...,\infty. \end{align} It has been shown by Goldstone and Hoppe \cite{Hoppe_phd, hoppe02} that the full field-theoretic Hamiltonian (\ref{field theory}) admits a regularization procedure, where the classical phase-space variables $x_i(\varphi), p_j(\varphi)$ are replaced by $n$-dimensional matrices, the Poisson bracket by the matrix commutator and integrals over $\Sigma$ by the matrix trace.
The original volume-preserving diffeomorphism symmetry of $\Sigma$, represented by (\ref{contraints continuum}), is recovered in the $n \rightarrow \infty$ limit from the $SU(n)$ invariance of its matrix regularizations. The family of $n$-dimensional matrix models constructed in this way reads \begin{equation} H_N= Tr(\vec{P}^2)-(2 \pi n)^2 n\sum_{i<j}^d Tr([X_i,X_j]^2), \label{matrix model} \end{equation} with the $SU(n)$ invariance constraints \begin{equation} \sum_{i=1}^d [X_i,P_i]=0, \end{equation} where $P_i, X_i$ are hermitian traceless $n \times n$ matrices. The scaling factor in front of the quartic potential is chosen in such a way that $\lim_{N \rightarrow \infty} H_N=\mathbb{M}^2$. Using a basis of $su(n)$, $T_a$, $a=1,...,n^2-1:=N$, with $Tr(T_a T_b)= \delta_{ab}$ and $[T_a,T_b]=i \hbar_n \frac{1}{\sqrt{n}} f^{(n)}_{a b c} T_c$, $\hbar_n = \frac{1}{2 \pi n}$, $f^{(n)}_{abc}=\frac{2 \pi n^{\frac{3}{2}}}{i}Tr(T_a \left[T_b,T_c \right])$, we can rewrite $H_N$ (and the constraints) in terms of $d(n^2-1)$ canonical pairs $p_{ia}, x_{ia}$ ($X_i=x_{ia}T_a, P_i=p_{ia}T_a$) as a \emph{finite} sum over the matrix modes, cf.~(\ref{field theory}), \begin{align} H_N(p,x) &= p_{ia} p_{ia} +\frac{1}{2} f^{(n)}_{abc}f^{(n)}_{ab'c'}x_{ib}x_{ib'}x_{jc}x_{jc'}, \label{classmembr} \\ f_{abc}^{(n)}x_{ib}p_{jc}&=0. \label{constraints} \end{align} In contrast to string theory, the Hamiltonian of the membrane with its quartic interaction makes the problem of quantisation rather difficult. One approach, which was proposed in the literature many years ago \cite{Hoppe_phd, hoppe02}, is to take advantage of the symmetry-preserving matrix regularizations (\ref{matrix model})/(\ref{classmembr}), quantize it for finite $N$ and then take the limit $N \rightarrow \infty$.
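As a consistency check on the normalizations (a routine computation, recorded here for the reader's convenience), note that in the basis above \[ [X_i,X_j] = \frac{i}{2\pi n^{3/2}}\, f^{(n)}_{bcd}\, x_{ib}x_{jc}\, T_d, \] so that, using $Tr(T_d T_{d'})=\delta_{dd'}$, \[ -(2 \pi n)^2 n\sum_{i<j} Tr([X_i,X_j]^2) = \frac{(2\pi n)^2 n}{(2\pi n^{3/2})^2}\sum_{i<j} f^{(n)}_{bcd}f^{(n)}_{b'c'd}\, x_{ib}x_{jc}x_{ib'}x_{jc'} = \frac{1}{2} f^{(n)}_{abc}f^{(n)}_{ab'c'}x_{ib}x_{ib'}x_{jc}x_{jc'}, \] since $(2\pi n)^2 n = (2\pi n^{3/2})^2$, the summand is symmetric under $i \leftrightarrow j$ and vanishes for $i=j$ (so $\sum_{i<j}$ may be replaced by $\frac{1}{2}\sum_{i,j}$), and the structure constants $f^{(n)}_{abc}$ are cyclically symmetric. This is exactly the quartic term of (\ref{classmembr}).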
While it has been proved that all finite $N$ expressions are well-defined Schrödinger operators \footnote{See \cite{douglas thesis} for a discussion of the spectrum, including the supersymmetric version of the model and related issues.} with purely discrete spectrum \cite{simon,Luscher}, it seems that almost nothing is known about the large $N$ limit (apart from the case $d=1$, where the quartic potential vanishes \cite{33}). In particular, it is interesting to ask whether the spectrum remains purely discrete in the limit and how it scales with $N$ and $d$. In this paper we study the large $N$ behaviour of the canonically quantized family of Hamiltonians (\ref{classmembr}) based on a Fock space description in order to approach those questions. \\ According to common knowledge, in order to quantize a system with many degrees of freedom one should rescale the quartic interaction by a suitable power\footnote{The power of $n$ for a quartic interaction is usually argued to be $-1$ (the 't Hooft coupling \cite{tHooft}); however, note that due to the definition of $f_{abc}^{(n)}$, which contains an explicit factor of $n^{\frac{3}{2}}$, the coupling constant in front of our potential should be multiplied by $n^{-4}$. In Section 4 we will show that this also follows from our construction.} of $n$ to make the quadratic part competitive at large $n$. This can be realized as a rescaling of the classical phase-space variables preserving the canonical Poisson-commutation relations (and the form of the constraints), i.e. $x_{ia} \rightarrow N^{-\alpha} x_{ia}, p_{ia} \rightarrow N^{\alpha} p_{ia}$, which leads to the following classical energy \begin{equation} H_N(p,x)=p_{ia} p_{ia} +N^{-\gamma}\frac{1}{2} f^{(n)}_{abc}f^{(n)}_{ab'c'}x_{ib}x_{ib'}x_{jc}x_{jc'}, \label{generalresc} \end{equation} where by abuse of notation we denote the rescaled operator $N^{-2 \alpha} H_N$ by the same symbol $H_N$ and $\gamma:=6 \alpha$.
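Explicitly, under $x_{ia} \rightarrow N^{-\alpha} x_{ia}$, $p_{ia} \rightarrow N^{\alpha} p_{ia}$ the quadratic term of (\ref{classmembr}) picks up a factor $N^{2\alpha}$ while the quartic term picks up $N^{-4\alpha}=N^{2\alpha}\cdot N^{-6\alpha}$, so that \[ N^{-2\alpha}H_N = p_{ia} p_{ia} + N^{-6\alpha}\,\frac{1}{2} f^{(n)}_{abc}f^{(n)}_{ab'c'}x_{ib}x_{ib'}x_{jc}x_{jc'}, \] which is (\ref{generalresc}) with $\gamma=6\alpha$.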
The canonically quantized expression corresponding to (\ref{generalresc}) is a priori ill-defined in the large $n$ limit due to possible divergences coming from the infinite vacuum energy and therefore it needs to be renormalized by subtracting a multiple of the identity operator. This is a rather common property of quantum models with many degrees of freedom, so in order to see the general pattern let us consider a whole class of Hamiltonians parametrized by a sequence of real tensors $c^{(N)}_{IJKL}$, $I,J,K,L=1,...,N$: \begin{equation} H_N(p,x)=\sum_{I \in \mathcal{J}_N} p_I p_I + \sum_{I \in \mathcal{J}_N} \omega_{0I}^2 x_I x_I + N^{- \gamma} \sum_{I,J,K,L \in \mathcal{J}_N} c^{(N)}_{IJKL}x_I x_J x_K x_L, \label{general} \end{equation} where $ \mathcal{J}_N$ is the index set having the form of a cartesian product of two discrete sets $ \mathcal{J}_N= \mathcal{D} \times \mathcal{K}_n$, with $|\mathcal{D}|=d=const$, $|\mathcal{K}_n|=N \nearrow \infty$. For instance, for the Membrane Matrix Models (MMM) (\ref{classmembr}) we have $N=n^2-1$, $\mathcal{K}_n= \lbrace1,...,n^2-1 \rbrace $, $\mathcal{D}=\lbrace 1,...,d\rbrace $, $\omega_{0I}=0$ and $c_{IJKL}\equiv c_{(bi)(b'i')(cj)(c'j')}=\frac{1}{2} f_{abc}f_{ab'c'}\delta_{ii'}\delta_{jj'}$.\\ It is known that a Fock space description of local Hamiltonians of the form $p^2+q^2+V(x)$ is rather inconvenient due to the occurrence of terms containing only annihilation or only creation operators, which implies that the so-called single trace sector is not invariant under the action of the Hamiltonian and the true vacuum is not a simple gaussian in the variables $x_I$ (see \cite{Fock N} for a review). Therefore the problem of finding eigenvalues becomes in most cases extremely difficult due to the number of relevant degrees of freedom, which increases rapidly with $N$.
We will however show that despite all of that, one can get certain spectral properties of local Hamiltonians (\ref{general}) using Fock space methods, such as upper and lower bounds for the spectrum in the planar limit, as well as a qualitative picture of the subtle interplay between the quadratic terms and the interaction leading to a redefinition of the vacuum energy and mass. The tools described in this paper are of high relevance especially for multi-matrix models, where other methods, based on the diagonalisation of the matrix degrees of freedom and integrating out the ``angle'' variables, are not directly applicable, like for the Membrane Matrix Models (MMM). The rest of the paper is organized as follows. In the next section we will introduce the notion of optimized Fock space, providing a very convenient decomposition of the quantum Hamiltonian corresponding to (\ref{general}) and giving directly the gaussian variational bound for the ground state energy. The idea consists in tuning the frequencies of the harmonic oscillators whose eigenfunctions are used as a basis of the corresponding Fock space. We give an algorithm for choosing an optimal set of these frequencies such that the form of the Hamiltonian (\ref{general}) expressed in the language of creation and annihilation operators is the simplest possible. This almost trivial observation (though usually not spelled out explicitly in the literature; see however \cite{beyond}), based on the fact that the optimal Fock space frequencies in the quantum representation of (\ref{general}) are not always the $\omega_{0I}$'s, is the starting point of our study of the MMM (\ref{generalresc}), a system where there is no obvious choice of a basis consisting of harmonic oscillators since all the $\omega_{0I}$'s are equal to zero. Thus one has to explore the quartic potential in more detail, which gives birth to a mass term (quadratic in $x_I$) in a properly chosen Fock space.
In Section 3 we consider a toy model, the $U(n)$-invariant anharmonic oscillator (AO), and present Fock space-based techniques to get upper and lower bounds for the ground state energy in the planar limit, which agree with the exact answer \cite{planar,collective,singlet spectrum mondello onofri,shapiro, Yaffe, planar limit marchesini onofri}. In Section 4 we apply the method to the MMM. Finally, in Section 5 we show that the perturbative expansion suggested by the optimized Fock space decomposition serves as a very good approximation for the vacuum energy and for the spectral gap of the AO at least up to the third order, even in the strong-coupling regime $g \rightarrow \infty $, restoring the correct scaling of the spectrum with the coupling constant $\propto g^{\frac{1}{3}}$. We also observe that the MMM contains a small hidden parameter (an effective coupling constant) and we perform the corresponding expansion, which turns out to be consistent with our lower and upper bounds for the vacuum energy, justifying the validity of the planar limit for this model. In particular, we find that the effective coupling constant is proportional to $\frac{1}{d-1}$, hence the perturbative expansion has better convergence properties in higher dimensions. Moreover, the perturbation series for the first $SO(d)\times SU(n)$ invariant excited state indicates that the spectral gap is finite at large $n$. Although the purely bosonic model (\ref{quantum membrane}) is interesting in its own right, more attention has recently been paid to its supersymmetric extensions, in particular for $d=9$ dimensions (resp. for $d=9$ matrices), see e.g. \cite{supermembrane} and \cite{douglas thesis} for a more recent review of this topic. Remarkably, the spectrum of the supersymmetric version of (\ref{quantum membrane}) turns out to be continuous and equal to the interval $[0, \infty)$, \cite{unstable supermembrane}.
However, there is also evidence for the existence of discrete eigenvalues embedded in the continuous spectrum \cite{akm, coexisting wosiek}. In particular, normalizable zero energy states are of high importance for the supermembrane as well as in the context of the BFSS conjecture of M-Theory \cite{BFSS}. Despite a number of profound results concerning zero energy eigenfunctions, e.g. an explicit construction in the pure fermionic sector for the $n=2$ model \cite{wosiek}, large $x$ behaviour \cite{largex}, or existence results on compact regions \cite{compact}, the question posed in full generality remains open. We believe that the approach discussed in this paper can shed new light on this problem, since our results correspond to the purely bosonic sector of the supermembrane and our construction can possibly be extended to the fermionic sectors, allowing one to study the embedded part of the spectrum of the supermembrane. Moreover, as our method simplifies the Fock space representation of the Hamiltonian, it should also allow one to optimize cut-off Fock space algorithms for studying various quantum mechanical systems related to (Super)Yang-Mills theories, introduced in \cite{cut-off}. \section{Optimized Fock space} \label{opt Fock space} The canonically quantized Hamiltonian (\ref{general}) formally becomes a Schrödinger operator acting on $\otimes_{I=1}^{N} L^2(\mathbb{R},dx^I)$ with the classical coordinates replaced by operators in the usual way, i.e. $p_I=-i \partial_I$ and $x_I$ being the multiplication operator.
Since we are interested in the large $N$ limit, it is convenient to embed $\otimes_{I=1}^{N} L^2(\mathbb{R},dx^I)$ in the standard bosonic Fock space $\mathcal{H}_{\omega}$, defined as the Hilbert space generated by states of the form \begin{equation} \psi_{\lbrace I_1,...,I_k \rbrace}:= a_{I_1}^{\dagger}(\omega_{I_1}) ...a_{I_k}^{\dagger}(\omega_{I_k})\Psi_0(\omega),~~k ~\text{finite}, \label{elmentary} \end{equation}where \begin{align} a_I(\omega_I)=\frac{1}{\sqrt{2}}\left(\frac{\partial_I}{\sqrt{\omega_I}}+ \sqrt{\omega_I} x_I \right), \nonumber \\ a_I^{\dagger}(\omega_I)=\frac{1}{\sqrt{2}}\left(-\frac{\partial_I}{\sqrt{\omega_I}} + \sqrt{\omega_I} x_I \right), \end{align} with \begin{equation} [a_I(\omega_I),a_J^{\dagger}(\omega_J)]= \delta_{I J} \mathbb{I}. \label{alg} \end{equation} We denote the norm and scalar product in $\mathcal{H}_{\omega}$ by $||.||$ and $\langle.,.\rangle $ respectively. The vacuum state $\Psi_0(\omega)$ has the form of an infinite product $\Psi_0(\omega):= \prod_{I=1}^{\infty} \psi_{\omega_I}(x_I) $ with $\psi_{\omega_I}(x_I):= \sqrt[4]{\frac{ \omega_I }{\pi}} e^{-\frac{1}{2} \omega_I x_I^2}, ~a_I(\omega_I)\Psi_0(\omega)=0~ \forall I $. \\ Then \begin{align} p_I(\omega_I)=-\frac{i \sqrt{\omega_I}}{\sqrt{2}}(a_I- a_I^{\dagger}), \nonumber \\ x_I(\omega_I)=\frac{1}{\sqrt{2 \omega_I}}(a_I + a_I^{\dagger}). \label{pandx} \end{align} Since $\otimes_{I=1}^{N} L^2(\mathbb{R},dx^I) $ is isomorphic to the subspace $\mathcal{H}_{\omega,N} \subset \mathcal{H}_{\omega}$ generated by the first $N$ creation operators $a_I^{\dagger}, I=1,...,N$, the action of $H_N$ can be naturally extended to the whole Fock space by taking the tensor product of $H_N$ with the identity acting on $\mathcal{H}_{\omega,N}^{\perp}$.
Therefore the operator induced on $\mathcal{H }_{\omega}$ by $H_N$ has the form \begin{equation} H_N=\left(\sum_I p_I p_I + \sum_{I}\omega_{0I}^2 x_I x_I+ N^{-\gamma} \sum_{I,J,K,L}c^{(N)}_{IJKL}x_I x_J x_K x_L \right ) \otimes \mathbb{I}_{\mathcal{H}_{\omega,N}^{\perp}}. \label{fullquantfN} \end{equation} $H_N$ is well defined for finite $N$ (if $c^{(N)}_{IJKL}$ are finite), but in general not for $N=\infty$. In order to ensure that the domain of $H_N$ contains more than the zero vector, one has to subtract the divergent ground state energy by adding a multiple of the identity operator $\beta_N \mathbb{I}$ to the Hamiltonian. We expect that for $\gamma$ large enough (and $ c^{(N)}_{IJKL}$ not too pathological) one should get at least a finite limit of $ ||(H_{N}+ \beta_N \mathbb{I})\psi||$ for a generic $\psi\in \mathcal{H}_{\omega}$ and some $\beta_N$. This is unfortunately not always the case, since the kinetic energy contains terms proportional to $a_I a_I$ and $a_I^{\dagger} a_I^{\dagger}$, which are divergent at large $N$ because e.g. $||\sum_I a_I^{\dagger} a_I^{\dagger} \psi||\rightarrow \infty$ for every non-zero $\psi \in \mathcal{H}_{\omega}$. As a consequence, in order to simplify the large $N$ behaviour of the theory one should cancel such terms by finding corresponding counter-terms in the potential. We will see that this is possible for a relatively large class of models including all models with a large symmetry group, e.g. $SO(n)$ symmetric vector models or $U(n)/SU(n)$ symmetric matrix models, where the interaction has the form of a trace. The idea is that the sequence of the Fock space frequencies $\lbrace \omega_I \rbrace$ has to be adjusted to the Hamiltonian, i.e. the choice of the subspace $\mathcal{H}_{\omega} \subset \otimes_{I=1}^{\infty}L_2(\mathbb{R},dx^I)$ where the limit is taken depends on the operator, and in most cases the optimal choice is not the natural one $\omega_I= \omega_{0I}$ (especially when all $\omega_{0I}=0$, as for the MMM).
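The divergence of the $a_I^{\dagger} a_I^{\dagger}$ terms can be seen concretely on a truncated Fock space: the states $a_I^{\dagger}a_I^{\dagger}\Psi_0$ are mutually orthogonal, so $||\sum_{I=1}^N a_I^{\dagger}a_I^{\dagger}\Psi_0||=\sqrt{2N}\rightarrow\infty$. A minimal numerical sketch in Python (the cutoff and mode counts are illustrative choices, not taken from the text):

```python
import numpy as np
from functools import reduce

cut = 3                                           # single-mode Fock-space cutoff
adag = np.diag(np.sqrt(np.arange(1, cut)), -1)    # truncated creation operator
ident = np.eye(cut)

def norm_sum_adag_sq(N):
    """Norm of sum_I a_I^dag a_I^dag acting on the Fock vacuum of N modes."""
    vac = np.zeros(cut ** N)
    vac[0] = 1.0                                  # |0,...,0>
    op = sum(reduce(np.kron, [adag @ adag if k == i else ident for k in range(N)])
             for i in range(N))
    return np.linalg.norm(op @ vac)

for N in (1, 2, 4, 6):
    # the terms a_I^dag a_I^dag |0> are mutually orthogonal, so the norm is sqrt(2N)
    print(N, norm_sum_adag_sq(N), np.sqrt(2 * N))
```

The cutoff of 3 suffices here because $a_I^{\dagger}a_I^{\dagger}$ acting on the vacuum never populates levels above the second.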
Let us see how it works in detail. $H_N$ rewritten in terms of $a_I$ and $a_I^{\dagger}$ becomes (for the sake of transparency, we leave out the $\mathbb{I}_{\mathcal{H}_{\omega,N}^{\perp}}$ part of (\ref{fullquantfN})) \begin{align} H_N \equiv T_N+V_N^{(2)}+V_N^{(4)} = \frac{1}{2}\sum_I \omega_I(2a_I^{\dagger}a_I-a_I^{\dagger}a_I^{\dagger}-a_I a_I + \mathbb{I})+\frac{1}{2}\sum_I \frac{\omega_{0I}^2}{\omega_I}(2a_I^{\dagger}a_I+a_I^{\dagger}a_I^{\dagger}+a_I a_I + \mathbb{I})\nonumber \\+\frac{ N^{-\gamma}}{4 }\sum_{IJKL}\frac{c_{IJKL}}{\sqrt{\omega_I\omega_J \omega_K \omega_L}}(a_I a_J a_K a_L +a_I^{\dagger} a_J a_K a_L+a_I a_J^{\dagger} a_K a_L+a_I a_J a_K^{\dagger} a_L+a_I a_J a_K a_L^{\dagger}\nonumber \\ +a_I^{\dagger} a_J^{\dagger} a_K a_L+a_I^{\dagger} a_J a_K^{\dagger} a_L+a_I^{\dagger} a_J a_K a_L^{\dagger}+a_I a_J^{\dagger} a_K^{\dagger} a_L+a_I a_J^{\dagger} a_K a_L^{\dagger}+a_I a_J a_K^{\dagger} a_L^{\dagger}\nonumber \\ +a_I^{\dagger}a_J^{\dagger}a_K^{\dagger}a_L+a_I^{\dagger}a_J^{\dagger}a_K a_L^{\dagger} +a_I^{\dagger}a_Ja_K^{\dagger}a_L^{\dagger}+a_I a_J^{\dagger}a_K^{\dagger}a_L^{\dagger} \nonumber \\ +a_I^{\dagger}a_J^{\dagger}a_K^{\dagger}a_L^{\dagger}).
\end{align} We rewrite the quartic potential using the commutation relations (\ref{alg}) \begin{align} V_N^{(4)}= N^{-\gamma}\frac{1}{4 } \sum_{IJK}\frac{1}{\omega_K\sqrt{\omega_I\omega_J }}\left(\frac{1}{2} a_I^{\dagger} a_J c_{(I J KK)}+ \frac{1}{4}(a_I a_J+a_I^{\dagger} a_J^{\dagger})c_{(I J KK)}\right)\\ +\frac{ N^{-\gamma}}{4} \sum_{I,J}\frac{1}{\omega_I \omega_J}(c_{IIJJ}+c_{IJIJ}+c_{IJJI}) \mathbb{I} + N^{-\gamma}:V_N: \\ \equiv N^{-\gamma} A^{(N)}_{IJ} a_I^{\dagger} a_J+ N^{-\gamma}\frac{1}{2} A^{(N)}_{IJ} (a_I a_J + a_I^{\dagger} a_J^{\dagger})+\\ +N^{-\gamma}f(N)\mathbb{I}+ N^{-\gamma}:V_N:, \end{align} where we have defined $A^{(N)}_{IJ}:=\sum_{K}\frac{c^{(N)}_{(I J KK)}}{8 \omega_K \sqrt{\omega_I\omega_J}}$ and $f(N):=\sum_{I,J}\frac{c^{(N)}_{IIJJ}+c^{(N)}_{IJIJ}+c^{(N)}_{IJJI}}{4 \omega_I \omega_J} $, the round brackets $c_{(I_1,...,I_k)}:=\sum_{\pi \in S_k} c_{I_{\pi(1)},...,I_{\pi(k)}}$ denote symmetrization over the enclosed indices, and $::$ is the normal ordering with respect to $\Psi_0({\omega})$.
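The factor $1/8$ in $A^{(N)}_{IJ}$ (and $1/4$ in $f(N)$) reflects the fact that, for indices of the form $(I,I,J,J)$, the sum over all $4!=24$ permutations collapses to four copies of the six distinct index arrangements. A quick sanity check for a generic tensor (random illustrative values, not taken from the text):

```python
from itertools import permutations
import random

random.seed(0)
cvals = {}
def c(idx):
    """Generic 4-index tensor with no symmetry, generated lazily."""
    if idx not in cvals:
        cvals[idx] = random.random()
    return cvals[idx]

I, J = 0, 1
# full symmetrization: sum over all 24 permutations of (I, I, J, J)
full_sym = sum(c(tuple(p)) for p in permutations((I, I, J, J)))
# each of the 6 distinct arrangements occurs 2! * 2! = 4 times
reduced = 4 * (c((I, I, J, J)) + c((I, J, I, J)) + c((I, J, J, I))
               + c((J, J, I, I)) + c((J, I, J, I)) + c((J, I, I, J)))
print(full_sym, reduced)
```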
We get \begin{align} H_N+\beta_N \mathbb{I}=\sum_{I,J} \lbrace ((\omega_I+\frac{\omega_{0I}^2}{\omega_I}) \delta_{IJ} + N^{-\gamma} A^{(N)}_{IJ})a_I^{\dagger}a_J + \frac{1}{2}(N^{-\gamma} A^{(N)}_{IJ} -(\omega_I-\frac{\omega_{0I}^2}{\omega_I}) \delta_{IJ})(a_I a_J + a_I^{\dagger} a_J^{\dagger}) \rbrace\\+ \left(\frac{1}{2}\sum_I (\omega_I+\frac{\omega_{0I}^2}{\omega_I})+N^{-\gamma}f(N)+\beta_N \right)\mathbb{I}+ N^{-\gamma}:V_N: \\ \equiv \sum_{I,J} \left( A^{(N+)}_{IJ} a_I^{\dagger}a_J + \frac{1}{2} A^{(N-)}_{IJ}(a_I a_J + a_I^{\dagger} a_J^{\dagger})\right)+ N^{-\gamma}:V_N:, \end{align} where we have chosen $\beta_N:= -\frac{1}{2}\sum_I(\omega_I+\frac{\omega_{0I}^2}{\omega_I})-N^{-\gamma}f(N)$ and have introduced two more matrices $A^{(N+)}_{IJ}:=N^{-\gamma} A^{(N)}_{IJ}+(\omega_I+\frac{\omega_{0I}^2}{\omega_I}) \delta_{IJ}$ and $A^{(N-)}_{IJ}:=N^{-\gamma} A^{(N)}_{IJ} -(\omega_I-\frac{\omega_{0I}^2}{\omega_I}) \delta_{IJ}$.\\ Let us now assume that there exists a sequence $\lbrace \tilde{\omega}_I \rbrace$, called the optimized Fock space frequencies (with $\mathcal{H}_{\tilde{\omega}}$ called the optimized Fock space), s.t. \begin{equation} \lim_{N \rightarrow \infty} A^{(N-)}_{IJ}=0,~~\forall I,J, \label{diagonallimit} \end{equation} i.e. that the matrix $A^{(N)}_{IJ}$ is diagonal at large $N$ (this is in fact the case for the above-mentioned $SO(n)$ symmetric vector models or $U(n)/SU(n)$ symmetric matrix models\footnote{One way to see it is to notice that for these models all double contractions of the tensor defining the quartic interaction produce Kronecker deltas, i.e. $c_{IJKK}^{(N)}\propto \delta_{IJ}$} ). The smallest $\gamma$ for which this is possible we call $\gamma_{crit}$. From eq.
(\ref{diagonallimit}) we get that $ \lim_{N \rightarrow \infty} N^{- \gamma}A^{(N)}_{IJ}=diag((\tilde{\omega}_1-\frac{\omega_{01}^2}{\tilde{\omega}_1}),(\tilde{\omega}_2-\frac{\omega_{02}^2}{\tilde{\omega}_2}),...)$ and thus $ \lim_{N \rightarrow \infty} A^{(N+)}_{IJ}=diag(2 \tilde{\omega}_1, 2\tilde{\omega}_2,...)$. Moreover the sequence $\beta_N$ exhibits a very nice property given in the following lemma. \begin{lemma} \label{lemma1} ({\bf Optimized Fock space decomposition}) Assume that $ A_{IJ}^{(N-)}\simeq O(\frac{1}{N})$. Then the optimized Fock space frequencies $\tilde{\omega}_I$ coincide with the optimized gaussian vacuum frequencies at large $n$ and $-\frac{\beta_N}{N}$ converges to the upper gaussian variational bound $e_{0,N}^{(0)}$ for the ground state energy of $\frac{H_N}{N}$ as $n \rightarrow \infty$. Moreover, the Hamiltonian (\ref{fullquantfN}) admits the following decomposition in $\mathcal{H}_{\tilde{\omega}}$ \begin{equation} H_N=\left( 2\sum_{I=1}^N \tilde{\omega}_I a_I^{\dagger}a_I+N^{- \gamma}:V_{N}:+ N e_{0}^{(0)} \mathbb{I} \right) \otimes \mathbb{I}_{\mathcal{H}_{\tilde{\omega},N}^{\perp}} + R_N, \end{equation} where $e_{0}^{(0)}= -\lim_{n \rightarrow \infty} \frac{\beta_N}{N} =\lim_{n \rightarrow \infty} e_{0,N}^{(0)}$ is given by the condition $\lim_{n \rightarrow \infty} A_{IJ}^{(N-)}=0$, and \\$\lim_{n \rightarrow \infty} ||R_N \psi||=0,~ \forall \psi \in \mathcal{H}_{\tilde{\omega}}$.
\end{lemma} \begin{proof} By noting that $\langle \Psi_0(\omega), H_N \Psi_0(\omega)\rangle=-\beta_N$ and using the variational principle, we get that the optimized gaussian frequencies, satisfying \begin{align} 0=-\frac{\partial \beta_N}{\partial \omega_I}=\frac{1}{2}(1-\frac{\omega_{0I}^2}{\omega_I^2})-\frac{N^{- \gamma}}{4 \omega_I^2}\sum_J\frac{1}{\omega_J}(c^{(N)}_{IIJJ}+c^{(N)}_{JJII}+c^{(N)}_{IJJI}+c^{(N)}_{JIIJ}+c^{(N)}_{IJIJ}+c^{(N)}_{JIJI}), \end{align} or equivalently \begin{align} \omega_I^2=\omega_{0I}^2 +\frac{N^{-\gamma}}{8}\sum_J\frac{1}{\omega_J}c^{(N)}_{(IIJJ)}, \label{renormalized frequencies} \end{align} converge to the solutions of (\ref{diagonallimit}). Moreover, the rest term $R_N$ originates from the matrix $ A_{IJ}^{(N-)}$, which means that $R_N \simeq \frac{const.}{N}\sum_I (a_I a_I + a_I^{\dagger} a_I^{\dagger} )$ and therefore $ ||R_N \psi||_{\tilde{\omega} } \rightarrow 0 ~\forall \psi \in \mathcal{H}_{\tilde{\omega}}$. \end{proof} Note that this result does not imply that the system described by the Hamiltonian (\ref{general}) necessarily becomes a system of decoupled harmonic oscillators, despite the fact that, according to Lemma \ref{lemma1}, \begin{equation} \lim_{N \rightarrow \infty} \langle \psi, (H_N+\beta_N \mathbb{I}) \phi \rangle=\langle \psi,2 \sum_{I} \tilde{\omega}_I a_I^{\dagger}a_I \phi \rangle, \end{equation} $ \forall \phi, \psi \in \mathcal{H}_{\tilde{\omega}}$ containing a finite number of elementary excitations (\ref{elmentary}). This means that, in order to avoid such a trivialisation, one has to employ states with infinitely many oscillatory modes and treat the optimized Fock space as the first step towards the correct quantum description of the model. This is the case for the two matrix models considered here, as we will see in the next sections, where the relevant space of interest is the space of $SU(n)$ invariants.
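In practice the gap equation (\ref{renormalized frequencies}) can be solved by fixed-point iteration. A single-mode caricature, $\omega^2=\omega_0^2+c/\omega$ (the values of $\omega_0$ and $c$ below are purely illustrative), converges rapidly; note that for $\omega_0=1$ and $c=4g$ it reproduces the anharmonic-oscillator condition $\omega^3=\omega+4g$ derived in the next section:

```python
def optimized_frequency(w0, c, tol=1e-12, max_iter=10_000):
    """Solve w^2 = w0^2 + c / w for w > 0 by fixed-point iteration."""
    w = max(w0, 1.0)                       # positive starting point
    for _ in range(max_iter):
        w_new = (w0 ** 2 + c / w) ** 0.5   # iterate the defining relation
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    raise RuntimeError("fixed-point iteration did not converge")

g = 0.5
w = optimized_frequency(w0=1.0, c=4 * g)   # w0 = 1, c = 4g: the AO gap equation
print(w, w ** 3 - w - 4 * g)               # residual of w^3 = w + 4g is ~ 0
```

The iteration map is contracting near the positive root here, so no damping is needed for these parameter values.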
\section{$U(n)$-invariant anharmonic oscillator} Let us consider the $U(n)$ symmetric matrix model \begin{align} 2 H_N=Tr(P^2)+Tr( M^2+ \frac{2 g}{n} M^4), \label{1matrix} \end{align} where $M$ is a hermitian $n \times n$ matrix and $P$ its conjugate momentum. Since the exact value of the ground state energy of the model is known, and even the $U(n)$ symmetric sector has been solved in the large $n$ limit, one could expect that the Fock space approach should allow one to rederive these results. Unfortunately, due to various technical difficulties, such as the fact that the Fock space representation of the Hamiltonian (\ref{1matrix}) does not annihilate the Fock space vacuum and does not preserve the single trace sector, there have been no exact solutions based on a Fock space formalism for this model so far. Nevertheless, one can still get some information about the spectrum using Fock space methods, which is of high importance for models where other methods fail. The Hamiltonian (\ref{1matrix}) is an excellent laboratory for testing our large $N$ techniques before approaching the Membrane Matrix Models (\ref{matrix model}), since it exhibits all the properties which make the quantization of a system with a large number of degrees of freedom cumbersome.\\ Let us start with the optimized Fock space decomposition for (\ref{1matrix}). By expanding $M$ and $P$ in the basis $\lbrace T_a \rbrace_{a=1}^{n^2}$ consisting of $N:=n^2$ generators of $U(n)$ with normalisation $Tr(T_a T_b)=\delta_{ab}$, satisfying the completeness relation \begin{equation} (T_a)_{ij}(T_a)_{kl}=\delta_{jk} \delta_{il}, \label{completeness relation1} \end{equation} we arrive at $M=T_a x_a, P=T_a p_a$ ($N=n^2,~\mathcal{K}_N = \lbrace 1,...,n^2 \rbrace, ~ d=1,~ \omega_{0I}=1$) and \begin{align} 2 H_N= p_a p_a + x_a x_a + c_{abcd} x_a x_b x_c x_d, \label{q1matrix} \end{align} with $c_{abcd}=\frac{2 g}{n} Tr(T_a T_b T_c T_d)$.
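The normalisation and the completeness relation (\ref{completeness relation1}) are straightforward to verify numerically for an explicit orthonormal hermitian basis of $u(n)$; the construction below is one standard choice (diagonal matrix units together with symmetric and antisymmetric off-diagonal combinations), not necessarily the basis used elsewhere in the text:

```python
import numpy as np

def u_n_basis(n):
    """Orthonormal hermitian basis T_a of u(n) with Tr(T_a T_b) = delta_ab."""
    basis = []
    for i in range(n):                        # diagonal matrix units
        E = np.zeros((n, n), dtype=complex)
        E[i, i] = 1.0
        basis.append(E)
    for i in range(n):                        # off-diagonal combinations
        for j in range(i + 1, n):
            S = np.zeros((n, n), dtype=complex)
            S[i, j] = S[j, i] = 1 / np.sqrt(2)
            A = np.zeros((n, n), dtype=complex)
            A[i, j], A[j, i] = -1j / np.sqrt(2), 1j / np.sqrt(2)
            basis += [S, A]
    return basis

n = 3
Ts = u_n_basis(n)
gram = np.array([[np.trace(Ta @ Tb).real for Tb in Ts] for Ta in Ts])
assert np.allclose(gram, np.eye(n * n))       # Tr(T_a T_b) = delta_ab
comp = sum(np.einsum('ij,kl->ijkl', T, T) for T in Ts)
target = np.einsum('jk,il->ijkl', np.eye(n), np.eye(n))
assert np.allclose(comp, target)              # (T_a)_ij (T_a)_kl = d_jk d_il
print("completeness relation verified for n =", n)
```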
The action of $H_N$ in the Fock space $\mathcal{H}_{\omega}$, given in terms of the creation and annihilation operators, becomes (assuming that $\omega_a= \omega~ \forall a$) \begin{align} 2H_N= \sum_{a,b} \left( A^{(N+)}_{ab} a_a^{\dagger}a_b + \frac{1}{2} A^{(N-)}_{ab}(a_a a_b + a_a^{\dagger} a_b^{\dagger})\right)+ \frac{2 g}{n} :Tr(M^4):- \beta_N \mathbb{I}. \end{align} As mentioned previously, the matrix $A_{ab}^{(N)}$ is diagonal\footnote{$A_{IJ}^{(N)}\simeq O(1)$ and thus $\gamma_{crit}=0$. Note also that we perform the calculation for $2 H_N$ instead of $H_N$ in order to match it with the conventions from Section 2 and in the end we divide the results by 2 to compare them with the work of Brezin et al.} and \begin{align} A^{(N+)}_{ab}&=(\omega +\frac{1}{\omega}+\frac{4g}{\omega^2})\delta_{ab}+\frac{2g}{n} \delta_{a0} \delta_{b0},\\ A^{(N-)}_{ab}&=(-\omega +\frac{1}{\omega}+\frac{4g}{\omega^2})\delta_{ab}+\frac{2g}{n} \delta_{a0} \delta_{b0},\\ \beta_N&=-\frac{n^2}{2}(\omega+\frac{1}{ \omega}+\frac{2g}{ \omega^2}+O(\frac{1}{n^3})). \label{betatilden} \end{align} The condition $\lim_{n \rightarrow \infty} A^{(N-)}_{ab}=0$ (or equivalently $\lim_{n \rightarrow \infty} \frac{\partial \beta_N}{\partial \omega}=0$) implies the equation \begin{equation} \omega^3=\omega+4 g, \label{opt freq} \end{equation} whose real solution $\tilde{\omega}$ is obviously \emph{different} from the natural choice $\omega=1$ suggested by the original quadratic term. As we will see in Section \ref{perturbations diagrams}, $\tilde{\omega}$ provides a crude approximation of the spectral gap for this model. According to Lemma \ref{lemma1}, the Hamiltonian in the optimized Fock space $\mathcal{H}_{\tilde{\omega}}$ takes the following form \begin{align} 2H_N= 2\sum_{a} \tilde{\omega} a_a^{\dagger}a_a + \frac{2 g}{n} :Tr(M^4):+ n^2 e_0^{(0)} \mathbb{I}+R_N, \label{quantum anharmonic} \end{align} where $||R_N \psi|| \rightarrow 0~~ \forall \psi \in \mathcal{H}_{\tilde{\omega}}$.
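The real root of (\ref{opt freq}) and the resulting variational energy $-\beta_N/(2n^2)$ from (\ref{betatilden}) are easy to evaluate numerically; the sketch below also checks the strong-coupling coefficient $\frac{3}{2}\,4^{-2/3}\approx 0.59527$:

```python
import numpy as np

def gaussian_bound(g):
    """Real root of w^3 = w + 4g and the variational energy e0^(0)/2."""
    roots = np.roots([1.0, 0.0, -1.0, -4.0 * g])
    w = max(r.real for r in roots)          # the positive real root
    return w, w / 4 + 1 / (4 * w) + g / (2 * w ** 2)

for g in (0.01, 0.1, 0.5, 1.0, 50.0, 1000.0):
    w, e = gaussian_bound(g)
    print(f"g = {g:8.2f}:  w~ = {w:8.4f},  e0^(0)/2 = {e:8.3f}")

# strong coupling: e0^(0)/2 -> (3/2) * 4**(-2/3) * g**(1/3), i.e. ~ 0.59527 g^(1/3)
g = 1e12
print(gaussian_bound(g)[1] / g ** (1 / 3))
```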
Inserting $\tilde{\omega}$ into (\ref{betatilden}) gives the gaussian variational upper bound for the ground state energy \begin{equation} \frac{e_0^{(0)}(g)}{2}=\frac{\tilde{\omega}}{4}+\frac{1}{4 \tilde{\omega}}+\frac{g}{2 \tilde{\omega}^2}, \label{gaussian bound AO} \end{equation} which is in excellent agreement with the result of \cite{planar} (even for a large coupling $g$), where the authors obtained the exact value. Asymptotically, for large $g$, they have $e_0(g) \simeq 0.58993 g^{\frac{1}{3}}$. Our variational bound $\frac{e_0^{(0)}(g)}{2}\simeq 0.59527 g^{\frac{1}{3}}$ is off by at most $\approx 9 \permil$ (see Table \ref{table1}). \subsection{Spectral bounds} \label{spectral bounds AO} In order to produce a lower bound for the spectrum, one has to take into account the interaction term $\frac{1}{n}:V_N:$. This involves quite technical but instructive estimates of matrix elements of the Hamiltonian between $U(n)$ invariant wave functions, which we present in this subsection. Proceeding along the lines of \cite{Fock Space methods}, we introduce a basis of the $U(n)$ invariant subspace $\mathcal{I}_{\tilde{\omega}}^{(n)} \subset \mathcal{H}_{\tilde{\omega}}$ spanned by $U(n)$-invariant linear combinations of the first $N=n^2$ creation operators, called the partitions basis \begin{equation} \psi_{\lambda}:= \mathcal{N}_{\lambda} (a^{\dagger})^{\lambda} \Psi_0(\tilde{\omega}) \equiv \mathcal{N}_{\lambda} Tr(a^{\dagger \lambda_1}) Tr(a^{\dagger \lambda_2})...Tr(a^{\dagger \lambda_m}) \Psi_0(\tilde{\omega}), \label{partitions basis} \end{equation} where $Tr(a^{\dagger \lambda_i}):=Tr(T_{b_1}...T_{b_{\lambda_i}})a^{\dagger}_{b_1}...a^{\dagger}_{b_{\lambda_i}}$ and $\lambda=(1^{\lambda_1},2^{\lambda_2},...,m^{\lambda_m})$ is a partition of a certain natural number $k=|\lambda|:= \sum_i i \lambda_i$.
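For small $k$ the labels of the partitions basis are easy to enumerate; the sketch below lists partitions as non-increasing tuples of parts rather than in the multiplicity notation $(1^{\lambda_1},2^{\lambda_2},\dots)$ used above:

```python
def partitions(k):
    """All partitions of k, written as non-increasing tuples of parts."""
    if k == 0:
        return [()]
    out = []
    def rec(rest, mx, acc):
        if rest == 0:
            out.append(tuple(acc))
            return
        for part in range(min(rest, mx), 0, -1):
            rec(rest - part, part, acc + [part])
    rec(k, k, [])
    return out

print(partitions(4))      # (4), (3,1), (2,2), (2,1,1), (1,1,1,1)
print([len(partitions(k)) for k in range(1, 8)])
```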
The partitions basis becomes orthonormal at $N= \infty$ for properly chosen normalisation factors $\mathcal{N}_{\lambda} \propto n^{-\frac{|\lambda|}{2}}$ (generically), see \cite{33}. As shown in \cite{Fock Space methods}, the matrix elements of the Hamiltonian (\ref{quantum anharmonic}) in the partitions basis contain three groups of divergent terms \begin{enumerate} \item the vacuum expectation value $\langle\Psi_0, H_N \Psi_0\rangle\propto n^2$ corresponding to our $\beta_N \mathbb{I}$, \item $\langle\psi_{\lambda}, H_N \psi_{\delta}\rangle\propto n$, where $\lambda_2=\delta_2 \pm 1$, coming from the $Tr(a^{\dagger 2}+a^2)$ part of $H_N$, \item $\langle\psi_{\lambda}, H_N \psi_{\delta}\rangle\propto n$, where $\lambda_4=\delta_4 \pm 1$, coming from the $\frac{1}{n} Tr(a^{\dagger 4}+a^4)$ part of $H_N$. \end{enumerate} Renormalization of $H_N$ is based on a proper choice of basis (in particular the vacuum) such that the second and third group of divergent matrix elements can be ``absorbed'' into the first one as a constant shift of the whole spectrum. The divergent vacuum energy can then be easily subtracted. According to Lemma \ref{lemma1}, the proper choice of the Fock space frequencies allows one to absorb Group 2 into the ground state energy by eliminating the $Tr(a^{\dagger 2}+a^2)$ part from the description. Then, for the suitable $\beta_N$, the only divergent matrix elements of $H_N+\beta_N \mathbb{I}$ belong to Group 3, but they are much more difficult to handle since the resulting vacuum is no longer a standard Fock vacuum (there are no counterterms which would cancel $Tr(a^{\dagger 4}+a^4)$ after an appropriate choice of the Fock space frequencies, as happened with $Tr(a^{\dagger 2}+a^2)$). In order to approach this problem let us point out that in the planar limit one can interpret $Tr(a^{\dagger 4})$ and $ Tr(a^4)$ as composite creation-annihilation operators \begin{align} A&:=Tr(T_a T_b T_c T_d) a_a a_b a_c a_d \\ \left[A, A^{\dagger} \right]&= 4 n^4 \mathbb{I}+O(n^2).
\end{align} Intuition suggests that one should treat $A$ and $A^{\dagger}$ in a special way in the Fock space description. This can be realized as follows. Let us introduce a new basis, completely equivalent to the partitions basis, consisting of \begin{equation} \psi_{\lambda, k}:= \mathcal{N}_{k,\lambda} (a^{\dagger})^{\lambda} (A^{\dagger})^k \psi_0,~ k=0,1,2...,~ \lambda_4=0, \label{Abasis} \end{equation} where $ \mathcal{N}_{k,\lambda} \propto \mathcal{N}_{\lambda}n^{-2k}$. It turns out that the states $\psi_{\lambda,k}$ exhibit very useful properties, which we explore below. \begin{lemma} \label{lemma2} The action of the operator \begin{equation} H_N= \alpha a_a^{\dagger} a_a+ \frac{\beta}{n}(A+A^{\dagger} + \gamma Tr(a^{\dagger}a^{\dagger} a a)) \label{operator1} \end{equation} on the states $\psi_{ \lambda k}$, asymptotically \footnote{ All operator inequalities are meant here in the usual sense, i.e. $H_1 \geq H_2$ iff $\langle \psi,H_1 \psi \rangle \geq \langle \psi,H_2 \psi \rangle~~ \forall \psi \in Dom(H_1) \subset Dom(H_2)$. Also, terms of order $O(\frac{1}{n})$ are meant in the sense of the norm in $\mathcal{H}_{\tilde{\omega}}$, and asymptotically equal terms differ by terms of order at most $O(\frac{1}{n})$. We say that an operator $T_N$ is of order $O(n^k)$ iff $||T_N \psi|| \leq const.(\psi) n^k, \forall \psi \in \mathcal{H}_{\omega} $. The key observation which allows one to compute the leading terms at large $n$ (i.e. of order $O(1)$, $O(n)$ and $O(n^2)$ ) is the fact that they originate from Wick contractions of adjacent operators sitting in one $U(n)/SU(n)$ trace, resp. corresponding to planar contractions in the diagrammatic representation (see Section \ref{perturbations diagrams}), and we refer to the large $n$ limit taken by neglecting all non-planar contributions, resp. all subleading terms (i.e. of order $O(\frac{1}{n})$ and lower), as the planar limit.
In Section 5 we give a perturbative justification of this limit up to the third order}, at large $n$, becomes \begin{equation} H_N \psi_{\lambda,k}= \left( \frac{\Omega}{n^4} B^{\dagger} B+ (\tilde{e}_0 n^2 +G(\lambda))\mathbb{I} \right) \psi_{\lambda,k} \end{equation} with $G(\lambda)= \alpha|\lambda| + \beta \gamma \sum_{i=2} i \lambda_i $, $\Omega= \alpha + \beta \gamma$, $\tilde{e}_0= -\frac{\beta^2}{\alpha + \beta \gamma}$ and $ B=A+\frac{\beta n^3}{\alpha+ \beta \gamma} \mathbb{I}$. \end{lemma} \begin{proof} First we prove that \begin{equation} a^{\dagger}_a a_a \psi_{\lambda k} \simeq \left( \frac{1}{n^4} A^{\dagger} A + |\lambda| \right)\psi_{\lambda,k} \label{adaggera_AdaggerA exchange} \end{equation} Indeed, $a^{\dagger}_a a_a \psi_{\lambda k}= (4 k+|\lambda|)\psi_{\lambda k}$ and \begin{align} \frac{1}{n^4} A^{\dagger} A \psi_{\lambda k} = \frac{\mathcal{N}_{k,\lambda}}{n^4} A^{\dagger} [A, (a^{\dagger})^{\lambda} (A^{\dagger})^k] \psi_{0} = \frac{\mathcal{N}_{k,\lambda}}{n^4} \left(A^{\dagger} [A,(a^{\dagger})^{\lambda}](A^{\dagger})^k + A^{\dagger}(a^{\dagger})^{\lambda}[A,(A^{\dagger})^k] \right) \psi_0\\ = \frac{\mathcal{N}_{k,\lambda}}{n^4} A^{\dagger} [A,(a^{\dagger})^{\lambda}](A^{\dagger})^k \psi_0 + 4k \psi_{\lambda,k}+O(\frac{1}{n^2})\label{first term} \end{align} One has to show that the first term in (\ref{first term}) converges to $0$ in norm at large $n$ $\forall \lambda$. According to Wick's theorem there are four cases \begin{itemize} \item \emph{single contractions}: we get $n^{-\frac{1}{2}}$ from the fact that the partition $\lambda$ has been shortened by 1. Then we are left with three annihilation operators acting on $(A^{\dagger})^k \psi_0$, which prolongs $\lambda $ by 1 and thus gives a factor of $n^{\frac{1}{2}}$ as well as $n^2$ (at most, when we perform a planar contraction, resp. when adjacent indices of $U(n)$ are contracted) from the contraction with $A^{\dagger}$.
The total factor is then $n^{-2} \rightarrow 0$; \item \emph{double contractions}: we get $n^1$ (at most) from the double contraction, $n^{-1}$ from the fact that the partition $\lambda$ has been shortened by 2. Then we are left with two annihilation operators acting on $(A^{\dagger})^k \psi_0$, which prolongs $\lambda $ by 2 and thus gives a factor of $n^1$ as well as one more $n^1$ (at most) from the contraction with $A^{\dagger}$. The total factor is then $n^{-2} \rightarrow 0$; \item \emph{triple contractions}: we get $n^2$ (at most) from the triple contraction, $n^{-\frac{3}{2}}$ from the fact that the partition $\lambda$ has been shortened by 3. Then we are left with one annihilation operator acting on $(A^{\dagger})^k \psi_0$, which prolongs $\lambda $ by 3 (and thus gives a factor of $n^{\frac{3}{2}}$). The total factor is $n^{-2} \rightarrow 0$; \item \emph{quadruple contractions}: we get $n^3$ (at most) from the quadruple contraction, $n^{-2}$ from the fact that the partition $\lambda$ has been shortened by 4 and $n^2$ coming from the fact that $k$ has increased by 1. The overall factor becomes then $n^{-4} n^3 n^{-2} n^2= n^{-1} \rightarrow 0 $, \end{itemize} which proves (\ref{adaggera_AdaggerA exchange}). Then we observe (using a similar justification as above) that at large $n$ we get \begin{align} \frac{1}{n}(a^{\dagger}a^{\dagger} a a)\psi_{\lambda,k} \simeq \left(\sum_{i=2}^{\infty} \lambda_i + 4 k \right)\psi_{\lambda,k}\simeq \left(\sum_{i=2}^{\infty} \lambda_i + \frac{1}{n^4} A^{\dagger} A \right)\psi_{\lambda,k}. \end{align} After introducing a new operator $B=A+\frac{\beta n^3}{\alpha+ \beta \gamma} \mathbb{I}$, we obtain \begin{equation} H_N \psi_{\lambda,k}= \left( \frac{\Omega}{n^4} A^{\dagger} A+ \frac{\beta}{n}(A+A^{\dagger}) +G(\lambda)\mathbb{I} \right) \psi_{\lambda,k}=\left( \frac{\Omega}{n^4} B^{\dagger} B+ (\tilde{e}_0 n^2 +G(\lambda))\mathbb{I} \right) \psi_{\lambda,k}.
\end{equation} \end{proof} Now we come back to the full Hamiltonian, which can be rewritten as \begin{align} H_N=\tilde{\omega} (1- \epsilon) a_a^{\dagger} a_a +\frac{g}{4 n \tilde{\omega}^2} (A+A^{\dagger}+4 Tr(a^{\dagger} a^{\dagger} a a))\label{Hneps1}\\ + \epsilon \tilde{ \omega} a_a^{\dagger} a_a+ \frac{g}{n\tilde{\omega}^2} (Tr(a^{\dagger} a^{\dagger} a^{\dagger} a)+Tr(a^{\dagger} a a a))\label{Hneps2}\\ +\frac{g}{ 2 n \tilde{\omega}^2}:Tr(a^{\dagger}a a^{\dagger}a):+\frac{e_0^{(0)}}{2}n^2 \mathbb{I},\label{Hneps3} \end{align} for some $0< \epsilon <1$. The first part of the Hamiltonian, i.e. (\ref{Hneps1}), is exactly of the form considered in Lemma \ref{lemma2}, while the first term in (\ref{Hneps3}) is of order $O(\frac{1}{n})$, hence it does not contribute to the planar limit, see Section \ref{perturbations diagrams}. Now we will determine for which values of $\epsilon$ the middle part of $H_N$, namely (\ref{Hneps2}), is asymptotically non-negative, which will allow us to bound the whole Hamiltonian from below by an operator of the form (\ref{Hneps1}). \begin{lemma} Let $\omega, \delta >0$. Then the operator \begin{equation} H_{n}= \omega a_a^{\dagger} a_a+ \frac{\delta}{n}\left( Tr( a^{\dagger} a^{\dagger} a^{\dagger} a)+Tr(a^{\dagger} aaa) \right) \end{equation} is non-negative definite at large $n$ if $\frac{\omega}{\delta} > 2$. \end{lemma} \begin{proof} Consider the following non-negative definite operator ($(abc):=Tr(T_a T_b T_c)$) \begin{equation} (\alpha (\epsilon a b)a_a a_b+ \beta (\epsilon a b) a_a^{\dagger}a_b)^{\dagger} (\alpha (\epsilon c d)a_c a_d + \beta (\epsilon c d ) a_c^{\dagger} a_d) \geq 0,~~~~\alpha, \beta \in \mathbb{R}.
\end{equation} After expanding the product, using the completeness relation (\ref{completeness relation1}), and noting that $(a^{\dagger}a) \geq \frac{1}{n}(a^{\dagger}a^{\dagger} aa) + O(\frac{1}{n})$ we get \begin{align} -\frac{\alpha \beta}{n}((a^{\dagger}a a a )+(a^{\dagger}a^{\dagger}a^{\dagger}a)) \leq \frac{1}{n}(\alpha^2 + \beta^2 )(a^{\dagger}a) +O(\frac{1}{n}). \end{align} By taking $-\alpha \beta<0$ and choosing the optimal constants $\alpha, \beta$ we get \begin{equation} \frac{1}{n}(Tr( a^{\dagger} a^{\dagger} a^{\dagger} a)+Tr(a^{\dagger} aaa)) \geq -2 Tr(a^{\dagger} a)+O(\frac{1}{n}) \end{equation} and thus $H_n \geq \delta (\frac{\omega}{\delta}-2) a_a^{\dagger} a_a +O(\frac{1}{n}) \geq O(\frac{1}{n})$ provided $\frac{\omega}{\delta} > 2$. \end{proof} Using Lemma 2 with $\alpha=\tilde{ \omega} (1 - \epsilon)$, $\beta =\frac{g}{4 \tilde{\omega}^2}$ and $\gamma=4$, and Lemma 3 with $\omega= \tilde{ \omega}\epsilon $, $\delta= \frac{g}{ \tilde{\omega}^2} $ and $\epsilon >\frac{g}{\tilde{\omega}^3}$, we get quite a useful bound. \begin{theorem} The Hamiltonian of the $U(n)$ invariant anharmonic oscillator (\ref{1matrix}) is bounded below at large $n$ by the following operator \begin{align} H_{N}^{-}:= \frac{\Omega}{n^4} B^{\dagger} B+ \sum_{\lambda} G(\lambda) P_{\lambda}+ \frac{1}{2} \left(e_0^{(0)}+ \tilde{e}_0\right) n^2 \mathbb{I} +O(\frac{1}{n}) \end{align} with $G(\lambda)= \alpha|\lambda| + 4 \beta \sum_{i=2} i \lambda_i $, $\Omega= \alpha + 4 \beta $ and $\tilde{e}_0= -\frac{2 \beta^2}{\alpha + 4 \beta }$, $\alpha=\tilde{ \omega}(1-\epsilon)$, $\beta =\frac{g}{4 \tilde{\omega}^2}$, $ B=A+\frac{\beta n^3}{\alpha+ 4 \beta} \mathbb{I}$ and $\epsilon >\frac{g}{\tilde{\omega}^3}$, where $P_{\lambda}$ is the orthogonal projection onto the subspace $\mathrm{span}(\psi_{\lambda,k}),~k=0,1,...$ with $\lambda_4=0$.
\end{theorem} Since $H_{N}^{-} \geq \frac{1}{2} \left(e_0^{(0)}+ \tilde{e}_0\right) n^2 \mathbb{I} +O(\frac{1}{n})$, we can easily find a lower bound for $\frac{e_0}{2}$ (and for the whole spectrum) in the planar limit \begin{align} \frac{e_0^{(lower)}}{2}:=\frac{e_0^{(0)}}{2} -\frac{\beta^2}{\alpha+4 \beta}. \end{align} This also exhibits the mechanism by which the $\frac{1}{n} Tr(a^{\dagger 4} + a^4)$ part of the Hamiltonian (i.e. terms of order $O(n)$) shifts the whole spectrum by $const.\, n^2$. We refer the reader to Table \ref{table1} for several numerical values.\\ \begin{table}[H] \caption{Comparison of the exact ground state energies $e_0$ \cite{planar} with the optimized Fock space ground state energy (the gaussian upper bound $e_0^{(0)}/2$) and the lower bound $ e_0^{(lower)}/2$ from Section \ref{spectral bounds AO} for several values of the coupling constant $g$, at $n= \infty$.} \label{table1} \begin{center} \begin{tabular}{ | l | l | l | l |} \hline g & $e_0^{(0)}/2$& $ e_0^{(lower)}/2 $ & $e_0$ \\ \hline 0.01 &0.505 & 0.505 & 0.505 \\ \hline 0.1 & 0.543&0.542&0.542 \\ \hline 0.5 &0.653& 0.651& 0.651 \\ \hline 1.0 & 0.743& 0.740 & 0.740 \\ \hline 50 & 2.235& 2.214& 2.217 \\ \hline 1000 & 5.968 & 5.907& 5.915 \\ \hline $g \rightarrow \infty $ & 0.59527 $g^{\frac{1}{3}}$ & 0.589075 $g^{\frac{1}{3}}$ & 0.58993 $g^{\frac{1}{3}}$ \\ \hline \end{tabular} \end{center} \end{table} \section{Membrane Matrix Models} \label{chapter mmm} We will start with the optimized Fock space decomposition for the MMM (\ref{generalresc}) and obtain an upper bound for the ground state energy for arbitrary $d$. Later, we will produce a lower bound for the spectrum based on a direct generalization of Theorem 1 (and the preceding lemmas) to the multi-matrix case. We restrict the Hamiltonian to the subspace of $\mathcal{H}_{\omega}$ constrained by (\ref{constraints}), i.e.
to the $SU(n)\times SO(d)$ invariant sector and thus it is natural to assume that the optimized Fock space frequencies are all the same $ \tilde{\omega}_{I} \equiv \tilde{\omega}_{a_I i_I}= \tilde{\omega} ~~ \forall I, I=(a_I,i_I),~~a_I=1,...,n^2-1,~~i_I=1,...,d$ and consider only Fock spaces with all the frequencies equal. Using \begin{equation} c_{IJKL} =\frac{1}{2} f_{a a_I a_K}f_{a a_J a_L}\delta_{i_I i_J}\delta_{i_K i_L} \end{equation} with $f^{(n)}_{abc}=\frac{2 \pi n^{\frac{3}{2}}}{i}Tr(T_a \left[T_b,T_c \right])$ and the completeness relation \begin{equation} (T_a)_{ij}(T_a)_{kl}=\delta_{jk} \delta_{il}-\frac{1}{n}\delta_{ij} \delta_{kl}, \label{completeness relation} \end{equation} we get \begin{align} \sum_{K}\frac{c^{(N)}_{KK IJ}}{\omega_K \sqrt{\omega_I \omega_J}}=\sum_{K}\frac{c^{(N)}_{ IJ KK}}{\omega_K \sqrt{\omega_I \omega_J}}=(2 \pi)^2 n^4 \delta_{a_I a_J}\delta_{i_I i_J}\frac{d}{\omega^2},\\ \sum_{K}\frac{c^{(N)}_{K IJK}}{\omega_K \sqrt{\omega_I \omega_J}}=\sum_{K}\frac{c^{(N)}_{IKKJ}}{\omega_K \sqrt{\omega_I \omega_J}}=-(2 \pi)^2 n^4 \delta_{a_I a_J}\delta_{i_I i_J}\frac{1}{\omega^2},\\ c^{(N)}_{KIKJ}=c^{(N)}_{IKJK}=0,\\ \sum_{IJ}\frac{c^{(N)}_{II JJ}}{\omega_I \omega_J}=(2 \pi)^2 n^4(n^2-1)\frac{d^2}{\omega^2},\\ \sum_{IJ} \frac{c^{(N)}_{I JJI}}{\omega_I \omega_J}=-(2 \pi)^2 n^4(n^2-1) \frac{d}{\omega^2},\\ \end{align} and therefore \begin{align} f(n)=\frac{\pi^2 d(d-1)(n^2-1)}{\omega^2} n^4,\\ A_{IJ}^{(N)}=\frac{4\pi^2 (d-1)}{\omega^2} n^4 \delta_{IJ},~~~\gamma_{crit}=2,\\ A_{IJ}^{(N \pm)}=\frac{4\pi^2 (d-1)}{\omega^2} \frac{n^4}{(n^2-1)^2} \delta_{IJ} \pm \omega \delta_{IJ}. 
\label{MMM tensors} \end{align} Then the condition (\ref{diagonallimit}) is equivalent to \begin{align} \omega =\lim_{n \rightarrow \infty } (n^2-1)^{-2}A^{(N)}_{II}=\lim_{n \rightarrow \infty } n^{-4}\sum_{K}\frac{c^{(N)}_{(I I KK)}}{8 \omega_K \omega_I} = (2 \pi)^2 \frac{1}{\omega^2}(d-1), \label{optimized frequencies MMM} \end{align} which shows that $\tilde{\omega} = \sqrt[3]{4 \pi^2 (d-1)}$. Therefore for the choice $\omega=\tilde{\omega}, ~\beta_N= -\frac{1}{2}d(n^2-1)\tilde{\omega}-\frac{\pi^2 d(d-1)(n^2-1)}{\tilde{\omega}^2}$ the renormalized Hamiltonian becomes \begin{align} H_N+\beta_N \mathbb{I}=2 \tilde{\omega} \sum_{i,a} a_{ia}^{\dagger}a_{ia} +n^{-4}:V_N:+R_N\\ =2 \tilde{\omega} \sum_{i,a} a_{ia}^{\dagger}a_{ia} +\frac{4 \pi^2}{n}\left(Tr(abcd)-Tr(acbd)\right):x_{ia}x_{ib}x_{jc}x_{jd}:+R_N, \label{quantum membrane} \end{align} where $R_N \propto \frac{1}{n^2}\sum_{i,a} (a_{ia}^{\dagger}a_{ia}^{\dagger}+a_{ia}a_{ia})\propto O(\frac{1}{n^2})$, hence $||R_N \psi|| \rightarrow 0~~ \forall \psi \in \mathcal{H}_{\tilde{\omega}}$, and the ground state energy for the optimized Fock space approximation (i.e. the gaussian upper bound) is given by \begin{equation} e_0^{(0)}= -\lim_{n\rightarrow \infty} \frac{\beta_N}{n^2}=\frac{ \tilde{\omega}d }{2}+\frac{\pi^2d (d-1)}{\tilde{\omega}^2}=\frac{3 d(d-1)^{\frac{1}{3}}\pi ^{\frac{2}{3}}}{ 2^{\frac{4}{3}}}. \label{gaussian bound MMM} \end{equation} Let us point out that the scaling given by $\gamma_{crit}$ leads in fact to 't Hooft's scaling, since the quartic potential in the rescaled Hamiltonian (\ref{quantum membrane}) is multiplied by $\frac{1}{n}$. Moreover, this is the only scaling for which the optimized Fock space frequency is non-trivial, i.e. $0< \tilde{\omega} < \infty$ (cp. (\ref{opt freq}) for the AO, where 't Hooft's coupling is the only one leading to a non-trivial redefinition of the optimized mass/frequency). 
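Let us also record a quick consistency check of the closed form in (\ref{gaussian bound MMM}): using the defining relation $\tilde{\omega}^3=4 \pi^2 (d-1)$, the potential contribution can be absorbed into the kinetic one, \begin{align} \frac{\pi^2 d (d-1)}{\tilde{\omega}^2}=\frac{d}{4}\cdot \frac{4 \pi^2 (d-1)}{\tilde{\omega}^2}=\frac{d \tilde{\omega}}{4}, ~~~~~~ e_0^{(0)}=\frac{\tilde{\omega} d}{2}+\frac{d \tilde{\omega}}{4}=\frac{3 d \tilde{\omega}}{4}=\frac{3 d (d-1)^{\frac{1}{3}} \pi^{\frac{2}{3}}}{2^{\frac{4}{3}}}, \end{align} so the gaussian bound is simply $\frac{3}{4} d \tilde{\omega}$.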
As we will see in Section \ref{perturbations diagrams}, $4\tilde{\omega}$ provides a crude approximation for the mass gap in the $SU(\infty) \times SO(d) $ invariant sector. \subsection{Spectral bounds} \label{spectral bounds MMM} In this section we will present estimates for matrix elements of various parts of the Hamiltonian (\ref{quantum membrane}) leading to a lower bound for the ground state energy in the planar limit, which is a direct generalization of the procedure introduced in Section \ref{spectral bounds AO} for the AO. In order to obtain a lower bound for the spectrum, one has to take into account the quartic interaction term $n^{-4}:V_N:$ in the $SO(d) \times SU(n)$ invariant sector. As for the AO, one can show that the only divergent matrix elements of $H_N+\beta_N \mathbb{I}$ (\ref{quantum membrane}) in the appropriately modified partitions basis of the space of invariants are those coming from the operators $Tr(a^{\dagger 4}), Tr(a^4)$, since the optimal choice of the Fock space frequencies (\ref{optimized frequencies MMM}) eliminates the $Tr(a^{\dagger 2}+a^2)$ part altogether. As previously, a special treatment of the $Tr(a^{\dagger 4}+a^4)$ part of $H_N$ is the main point of our construction. Let us introduce two composite annihilation operators \begin{align} A&:= Tr(abcd)a_{ai}a_{bi}a_{cj}a_{dj} \equiv (iijj),\\ B&:=Tr(abcd)\left(\frac{1}{d+1} a_{ai}a_{bi}a_{cj}a_{dj}-\frac{1}{2} a_{ai}a_{bj}a_{ci}a_{dj}\right) \equiv \frac{1}{d+1} (iijj)-\frac{1}{2} (ijij),\\ \left[A, A^{\dagger} \right]&=2 d n^4 (d+1)+O(n^2),\\ \left[B, B^{\dagger} \right]&=\frac{d(d+2)(d-1)}{d+1}n^4+O(n^2),\\ \left[A, B^{\dagger} \right]&=\left[B, A^{\dagger} \right]=O(n^2), \end{align} and the following $SU(n)\times SO(d)$ invariant states, keeping the dependence on $A$ and $B$ explicit, cp.
(\ref{Abasis}), \begin{equation} \psi_{kl\Lambda}:= \mathcal{N}_{kl\Lambda}(a^{\dagger})^{\Lambda}(A^{\dagger})^k (B^{\dagger})^l \psi_0,~~ \mathcal{N}_{kl\Lambda} \propto n^{-\frac{|\lambda_{\Lambda}|+4k+4l}{2}} \label{partitions basis2} \end{equation} where $\Lambda= \lbrace \lambda_{\Lambda}, I_{\Lambda}\rbrace$ consists of a partition $\lambda_{\Lambda}$ of some $m \in \mathbb{N}$, i.e. $\lambda_{\Lambda} \vdash m$, equipped with the $SO(d)$ invariant structure of the state $\psi_{kl\Lambda}$. $I_{\Lambda}$ is a sequence containing the information about $SO(d)$ indices ``compatible'' with the $SU(n)$ partition structure\footnote{The Hamiltonian asymptotically stabilizes a subspace of $SU(n) \times SO(d)$ invariants, where contractions between $SO(d)$ indices do not occur between different $SU(n)$ traces. One way to see this is to use (\ref{consise potential1})-(\ref{consise potential}) and the completeness relation (\ref{completeness relation}) and note that the only terms violating the invariance of this subspace are those coming either from the ``$\frac{1}{n}$ part'' of the completeness relation or from contractions between non-adjacent indices producing subleading terms in $n$.}, e.g. for $\lambda= (2^2)$, i.e. $\lambda_2=2, \lambda_i=0, ~i\neq 2 $, a possible $I$ could be $I=(ii,jj)$, but not $I=(ij,ij)$. The corresponding allowed state $\psi_{kl\Lambda}$ would then be \begin{align} \psi_{kl\Lambda}= \mathcal{N}_{kl\Lambda}Tr(ab)Tr(cd) a^{\dagger}_{ia} a^{\dagger}_{ib} a^{\dagger}_{jc} a^{\dagger}_{jd}(A^{\dagger})^k (B^{\dagger})^l \psi_0, ~~~I_{\Lambda}=(ii,jj), \end{align} while the disallowed state would read \begin{align} \psi_{kl\Lambda}= \mathcal{N}_{kl\Lambda}Tr(ab)Tr(cd) a^{\dagger}_{ia} a^{\dagger}_{jb} a^{\dagger}_{ic} a^{\dagger}_{jd}(A^{\dagger})^k (B^{\dagger})^l \psi_0, ~~~I_{\Lambda}=(ij,ij). \end{align} We denote the subspace spanned by $\psi_{kl \Lambda}$ by $\mathcal{I}_{\tilde{\omega}}^{(n)}$.
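The scaling of the normalization constants $\mathcal{N}_{kl\Lambda}$ in (\ref{partitions basis2}) can be read off from the commutators of $A$ and $B$ listed above; e.g.\ for the pure $A$-excitations one finds, at leading order in $n$, \begin{align} ||(A^{\dagger})^k \psi_0||^2= k!\left(2d(d+1)n^4\right)^k\left(1+O(n^{-2})\right), \end{align} so these states are normalized by a constant $\propto n^{-2k}$, in agreement with the exponent $-\frac{4k}{2}$ above.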
Note that the basis (\ref{partitions basis2}) is not orthonormal and at large $n$ we have the orthogonality relation \begin{equation} \langle \psi_{kl\Lambda}, \psi_{k'l'\Lambda'}\rangle=\delta_{k k'}\delta_{l l'} \delta_{\lambda_{\Lambda} \lambda_{\Lambda'}} G(I_{\Lambda},I_{\Lambda'}), \end{equation} where $G(I_{\Lambda},I_{\Lambda'})$ is the Gram matrix of (\ref{partitions basis2}) restricted to $span(\psi_{kl\Lambda})$ with $k,l,\lambda_{\Lambda}$ fixed. The normal ordered quartic potential in (\ref{quantum membrane}) can be rewritten as \begin{align} V_n= \frac{\pi^2}{n \tilde{\omega}^2}\left( (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger})+(iijj) -(i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger}) - (ijij)\right)\label{consise potential1}\\ + \frac{2\pi^2}{n \tilde{\omega}^2}\left( (i^{\dagger} i^{\dagger} jj) + (i^{\dagger} j^{\dagger} ji)-2(i^{\dagger} j^{\dagger} ij) -:(i^{\dagger} j i^{\dagger} j):+\frac{1}{2}:(i^{\dagger} i j^{\dagger} j):+\frac{1}{2}:(i^{\dagger} j j^{\dagger} i):\right) \\+ \frac{2 \pi^2}{n \tilde{\omega}^2}\left((i^{\dagger}i^{\dagger}j^{\dagger}j)+(j^{\dagger}i^{\dagger}i^{\dagger}j)-2(i^{\dagger}j^{\dagger}i^{\dagger}j)+ (i^{\dagger}ijj)+(j^{\dagger}iij)-2(i^{\dagger}jij) \right), \label{consise potential} \end{align} and therefore it is useful to introduce the following operators asymptotically stabilizing the subspaces $ span(\psi_{kl\Lambda},\psi_{k+1, l-1,\Lambda},\psi_{k-1, l+1,\Lambda})_{\lambda_{\Lambda}=\lambda} $ (which means that they only mix up the $SO(d)$ indices and not the $SU(n)$ partition structure) \begin{align} \frac{1}{n} S_1:= \frac{1}{n} Tr(abcd)a_{ai}^{\dagger}a^{\dagger}_{bi}a_{cj}a_{dj} \equiv \frac{1}{n}(i^{\dagger} i^{\dagger} j j),\\ \frac{1}{n} S_2:=\frac{1}{n} Tr(abcd)a_{ai}^{\dagger}a^{\dagger}_{bj}a_{cj}a_{di}\equiv \frac{1}{n} (i^{\dagger} j^{\dagger} j i),\\ \frac{1}{n} S_3:=\frac{1}{n} Tr(abcd)a_{ai}^{\dagger}a^{\dagger}_{bj}a_{ci}a_{dj} \equiv \frac{1}{n} (i^{\dagger} j^{\dagger} i j) \end{align}
as well as \begin{align} \frac{1}{n} T_1:=\frac{1}{n}Tr(abcd)a_{ai}^{\dagger}a_{bi}a_{cj}a_{dj} \equiv \frac{1}{n}(i^{\dagger} i j j),\\ \frac{1}{n} T_2:=\frac{1}{n}Tr(abcd)a_{ai}^{\dagger}a_{bj}a_{cj}a_{di}\equiv \frac{1}{n}(i^{\dagger} jj i),\\ \frac{1}{n} T_3:=\frac{1}{n}Tr(abcd)a_{ai}^{\dagger}a_{bj}a_{ci}a_{dj} \equiv \frac{1}{n}(i^{\dagger} j i j), \end{align} which also change the $SU(n)$-partition structure (the $\lambda_{\Lambda}$ part). Now we come back to the full Hamiltonian, which can be rewritten in terms of the operators introduced above \begin{align} H_N=2 (1-\epsilon) \tilde{\omega}(i^{\dagger} i) +\frac{2\pi^2}{\tilde{\omega}^2n}\left(\frac{d-1}{2(d+1)}( A+A^{\dagger})+B+B^{\dagger}+ S_1+S_2-2S_3\right)\label{ABSoperator} \\ + 2 \epsilon \tilde{\omega}\left[ (i^{\dagger} i)+ \frac{ \pi^2}{ n \tilde{\omega}^3 \epsilon }\left(T_1+T_1^{\dagger}+T_2+T_2^{\dagger}-2(T_3+T_3^{\dagger})\right)\right] \label{Toperator}\\ + \frac{\pi^2}{n \tilde{\omega}^2}:\left( (i^{\dagger} i j^{\dagger} j)+(i^{\dagger} j j^{\dagger} i)-2(i^{\dagger} j i^{\dagger} j)\right):+e^{(0)}_0 n^2 \mathbb{I},\label{lastpart} \end{align} for some $0<\epsilon <1$, cp. (\ref{Hneps1})-(\ref{Hneps3}). Following the strategy established for the AO, we will express the action of the first part of the Hamiltonian, i.e.\ (\ref{ABSoperator}), in terms of $A,B$ and the representation of $S_1+S_2-2S_3$ in the finite dimensional blocks $P_{\lambda,k,l} \mathcal{I}_{\tilde{\omega}}:=span(\psi_{kl \Lambda})_{\lambda_{\Lambda}=\lambda}$, then we will determine for which value of $\epsilon$ the middle part (\ref{Toperator}) is non-negative, which will allow us to bound the full Hamiltonian from below by an operator of the form (\ref{ABSoperator}) modulo terms of order $O(\frac{1}{n})$ which do not affect the divergent vacuum energy in the planar limit. Let us perform this in several steps.
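Before doing so, note that the decomposition (\ref{ABSoperator})-(\ref{lastpart}) indeed reproduces the Hamiltonian (\ref{quantum membrane}), since the quadratic and quartic coefficients recombine as \begin{align} 2(1-\epsilon)\tilde{\omega}+2 \epsilon \tilde{\omega}=2 \tilde{\omega}, ~~~~~~ 2 \epsilon \tilde{\omega} \cdot \frac{\pi^2}{n \tilde{\omega}^3 \epsilon}=\frac{2 \pi^2}{n \tilde{\omega}^2}, \end{align} the latter matching the overall factor of the $S$- and $T$-terms in (\ref{consise potential1})-(\ref{consise potential}).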
\begin{lemma} \label{lemma5} The action of the operator \begin{equation} H_N= \omega a_{ia}^{\dagger} a_{ia}+ \frac{1}{n}(\epsilon_1(A+A^{\dagger}) + \epsilon_2(B+B^{\dagger}) +g(S_1+S_2-2S_3) ) \label{operator11} \end{equation} on the states $\psi_{k l \Lambda }$, asymptotically at large $n$, becomes \begin{equation} H_N \psi_{ k l \Lambda}= \left[\Omega_1 \tilde{A}^{\dagger}\tilde{A}+\Omega_2 \tilde{B}^{\dagger} \tilde{B} +\Omega_3( \tilde{B}^{\dagger}\tilde{A}+ \tilde{A}^{\dagger}\tilde{B})+(\tilde{e}_0 n^2 +g G(\lambda_{\Lambda})+\omega |\lambda_{\Lambda}|)\mathbb{I} \right] \psi_{ k l \Lambda}, \end{equation} where $G(\lambda_{\Lambda})$ is the finite dimensional representation of $S_1+S_2-2S_3$ in $P_{\lambda_{\Lambda},k,l} \mathcal{I}_{\tilde{\omega}}$, \begin{align} \Omega_1=\Omega_1(\omega,g)&=\frac{g}{n^4}\left( \frac{1}{d} +\frac{2}{d(d+1)}-\frac{2(d+3)}{d(d+1)^2} \right)+\frac{2 \omega}{d(d+1) n^4},\\ \Omega_2=\Omega_2(\omega,g)&=\frac{4g(d+3)}{d (d-1)(d+2) n^4}+\frac{4 \omega(d+1)}{d(d-1)(d+2)n^4}, \\ \Omega_3=\Omega_3(\omega,g)&=\frac{4g}{d(d+1) n^4 },\\ n^2 \tilde{e}_0=n^2\tilde{e}_0(\omega,\epsilon_1,\epsilon_2,g)&= \Omega_1 \alpha^2 + \Omega_2 \beta^2 + 2 \Omega_3 \alpha \beta + 2 \frac{\epsilon_1 \alpha}{n} + 2 \frac{\epsilon_2 \beta}{n}, \end{align} and $\tilde{A}=A+\alpha \mathbb{I},~~\tilde{B}=B+\beta \mathbb{I}$ with \begin{equation} \label{matrixcondition} \left( \begin{array}{c} \alpha \\ \beta \\ \end{array} \right)=-\frac{1}{n}\left( \begin{array}{cc} \Omega_1 & \Omega_3 \\ \Omega_3 & \Omega_2 \\ \end{array} \right)^{-1} \left( \begin{array}{c} \epsilon_1 \\ \epsilon_2 \\ \end{array} \right). 
\end{equation} \end{lemma} \begin{proof} Repeating the argument from Lemma 1, as the $SO(d)$ structure does not affect the order in $n$, one gets immediately that \begin{equation} a_{ia}^{\dagger} a_{ia} \psi_{kl\Lambda}\simeq \left[ \frac{4}{n^4}\left( \frac{1}{2d(d+1)} A^{\dagger}A +\frac{d+1}{d(d-1)(d+2)} B^{\dagger}B\right)+|\lambda_{\Lambda}| \right] \psi_{kl\Lambda}, ~~ d>1. \end{equation} Using the relations (holding at large $n$) \begin{align} [S_1,(i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger})]& \simeq 2n(d+1)(i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger}), \label{S1} \\ [S_1,(i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger})]&\simeq 4n (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger}),\\ [S_2,(i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger})]&\simeq4n (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger}),\\ [S_2,(i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger})]&\simeq 4n (i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger}),\\ [S_3,(i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger})]&\simeq 2n (i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger})+2n (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger}),\\ [S_3,(i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger})]&\simeq 4n (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger}), \label{S3} \end{align} one can express the action of $S_1, S_2, S_3$ in terms of $A^{\dagger}A, B^{\dagger}B$ and $ A^{\dagger}B+B^{\dagger}A$ \begin{align} \frac{1}{n} S_1\psi_{kl\Lambda} &\simeq\left( \frac{1}{d n^4} A^{\dagger}A+G_1(\lambda_{\Lambda})\right) \psi_{kl\Lambda},\label{S1a}\\ \frac{1}{n} S_2\psi_{kl\Lambda} &\simeq a_{ia}^{\dagger} a_{ia} \psi_{kl\Lambda} \simeq \left[ \frac{4}{n^4}\left( \frac{1}{2d(d+1)} A^{\dagger}A +\frac{d+1}{d(d-1)(d+2)} B^{\dagger}B\right)+|\lambda_{\Lambda}|\right] \psi_{kl\Lambda},\\ \frac{1}{n} S_3\psi_{kl\Lambda} &\simeq \left[\frac{d+3}{d(d+1)^2n^4} A^{\dagger}A - \frac{2}{d(d+1)n^4}( B^{\dagger}A+ A^{\dagger}B)-\frac{4}{d(d-1)(d+2)n^4}B^{\dagger}B +G_3(\lambda_{\Lambda})\right] \psi_{kl\Lambda}, \label{S3a} \end{align} where 
$G_1(\lambda_{\Lambda}),G_3(\lambda_{\Lambda})$ are the finite dimensional representations of $S_1, S_3$ in the subspaces $P_{\lambda_{\Lambda},k,l} \mathcal{I}_{\tilde{\omega}}$ corresponding to the partition $\Lambda$. Now, by combining (\ref{S1a})-(\ref{S3a}), one can express the action of $S_1+S_2-2S_3$ in terms of $A^{\dagger}A, B^{\dagger}B, A^{\dagger}B+B^{\dagger}A$ and the finite dimensional representation $G(\lambda_{\Lambda})$ of $\frac{1}{n}( S_1+S_2-2S_3)$ in $P_{\lambda_{\Lambda},k,l} \mathcal{I}_{\tilde{\omega}}$, which determines $\Omega_1, \Omega_2, \Omega_3$. \\ In order to eliminate the terms linear in $A+A^{\dagger}$ and $B+B^{\dagger}$ we introduce two new operators \begin{align} \tilde{A}=A+\alpha \mathbb{I},~~~~\tilde{B}=B+\beta \mathbb{I}, \end{align} and require that the coefficients in front of the $A$- and $B$-linear terms vanish, which gives exactly (\ref{matrixcondition}) as well as the shift of the vacuum energy $\tilde{e}_0$. \end{proof} Note that in the planar limit, $S_2-S_3 \geq 0$, because $S_2 \simeq (\epsilon ij)^{\dagger}(\epsilon ij)$ and $S_{3}\simeq (\epsilon ji)^{\dagger}(\epsilon ij),~~~ (\epsilon ij)= (\epsilon ab)a_{ia}^{\dagger}a_{jb}^{\dagger} $, and thus \begin{align} 0 \leq (\alpha (\epsilon ij)^{\dagger}+\beta (\epsilon ji)^{\dagger})(\alpha (\epsilon ij)+\beta (\epsilon ji))\simeq (\alpha^2 + \beta^2)S_2 + 2 \alpha \beta S_3. \end{align} By taking $(\alpha^2 + \beta^2)=1$ and $2 \alpha \beta = -1$, i.e. $\alpha=-\beta=\frac{1}{\sqrt{2}}$, we get $S_2-S_3\geq 0$. It turns out that the operator $S_1-S_3$ is not positive definite, i.e. the eigenvalues of $G(\lambda)$ can be negative. Since the operator in Lemma \ref{lemma5} will be used as a lower bound for the full Hamiltonian, one has to ensure that (\ref{operator11}) is non-negative definite. This is the case when $\omega \geq g$ because asymptotically $S_1+S_2-2S_3= S_1 +2(S_2-S_3)-S_2 \geq -S_2 \simeq -na_{ia}^{\dagger}a_{ia}=: -n(i^{\dagger}i)$. We also need further bounds.
\begin{lemma} \label{lemma6} We have \begin{equation} \frac{1}{n}S_1 \leq d (i^{\dagger}i) + O(\frac{1}{n}). \end{equation} \end{lemma} \begin{proof} Consider the following non-negative operator \begin{equation} 0 \leq (\alpha (\epsilon k k)\delta_{ij} + \beta (\epsilon j i))^{\dagger}(\alpha (\epsilon k k)\delta_{ij} + \beta (\epsilon j i)),~~ \alpha, \beta \in \mathbb{R} \end{equation} where $(\epsilon ji):=Tr(T_{\epsilon} T_a T_b)a_{ja}a_{ib}$. Multiplying the parentheses and using the completeness relation (\ref{completeness relation}) gives asymptotically \begin{equation} (d \alpha^2 + 2 \alpha \beta)S_1+\beta^2 S_2\geq 0, ~~ \alpha,\beta \in \mathbb{R}. \end{equation} Therefore by taking $\beta= -\frac{\alpha d}{ 2}-\epsilon, ~ \epsilon >0$ and $\alpha > 0 $ we get at large $n$ \begin{equation} \frac{1}{n} S_1 \leq \frac{(\frac{\alpha d}{2}+\epsilon)^2}{2 \epsilon \alpha } (i^{\dagger}i). \end{equation} The optimal value of the constant is $\min_{\alpha, \epsilon >0} \frac{(\frac{\alpha d}{2}+\epsilon)^2}{2 \epsilon \alpha } = d$, attained at $\epsilon=\frac{\alpha d}{2}$ by the AM-GM inequality. \end{proof} Using the last lemma we can prove \begin{lemma} \label{lemma7} Let $g>0$. Then the operator \begin{equation} H_N= (i^{\dagger} i) + \frac{g}{n}(T_1+ T_1^{\dagger}+T_2+ T_2^{\dagger}-2(T_3+ T_3^{\dagger}) ) \end{equation} is non-negative definite at large $n$ for $g < \frac{\sqrt{d}}{2(d-1)} $. \end{lemma} \begin{proof} Using a similar strategy as in Lemma \ref{lemma6}, we consider another non-negative operator of the form $K_{\epsilon ij}^{\dagger}K_{\epsilon ij}$ with \begin{align} K_{\epsilon ij}:=(\alpha( \epsilon kk)\delta_{ij}+ \beta (\epsilon ij)+ \gamma_1 (\epsilon k^{\dagger}k)\delta_{ij}+\gamma_2(\epsilon i^{\dagger}j)+\gamma_3 (\epsilon j^{\dagger}i)).
\end{align} By computing $K_{\epsilon ij}^{\dagger}K_{\epsilon ij}$ explicitly and using the completeness relation (\ref{completeness relation}) we get at large $n$ \begin{align} \frac{1}{n}\left[(\alpha (\gamma_1 d+\gamma_2 + \gamma_3) +\gamma_1 \beta)(T_1+ T_1^{\dagger})+ \beta \gamma_2 (T_2+ T_2^{\dagger})+ \beta \gamma_3 (T_3+ T_3^{\dagger}) \right] \\ \geq \frac{1}{n}\left[-(\alpha^2 d+2 \alpha \beta)S_1- \beta^2 S_2 -(\gamma_1^2 d +2 \gamma_1 \gamma_2 +2 \gamma_1 \gamma_3)(i^{\dagger}i j^{\dagger}j) -2 \gamma_2 \gamma_3 (i^{\dagger}j i^{\dagger}j)-(\gamma_2^2+\gamma_3^2)(i^{\dagger}j j^{\dagger}i) \right]\\ \geq -\left[ \alpha^2 d^2+2 \alpha \beta d+ \beta^2 +\gamma_1^2 d +2\gamma_1 \gamma_2 +2 \gamma_1 \gamma_3+2\gamma_2 \gamma_3 +d(\gamma_2^2+\gamma_3^2) \right] (i^{\dagger} i) +O(\frac{1}{n}), \label{boundt1t3a} \end{align} where in the last step we employ the large $n$ relations \begin{align} (i^{\dagger}i j^{\dagger}j)=:(i^{\dagger}i j^{\dagger}j):+n(i^{\dagger} i),\\ (i^{\dagger}j i^{\dagger}j)= :(i^{\dagger}j i^{\dagger}j):+n(i^{\dagger} i),\\ (i^{\dagger}j j^{\dagger}i)= :(i^{\dagger}j j^{\dagger}i): + d n(i^{\dagger} i), \end{align} the fact that $0 \leq \frac{1}{n} S_2 \leq (i^{\dagger} i) $ and Lemma \ref{lemma6}. Constrained minimization of the constant in (\ref{boundt1t3a}) \begin{align} \min_{\alpha, \gamma_1, \gamma_2, \gamma_3, \beta} \left[ \alpha^2 d^2+2 \alpha \beta d+ \beta^2 +\gamma_1^2 d +2\gamma_1 \gamma_2 +2 \gamma_1 \gamma_3+2\gamma_2 \gamma_3 +d(\gamma_2^2+\gamma_3^2) \right]= \frac{2 (d-1)}{\sqrt{d}},\\ (\alpha (\gamma_1 d+\gamma_2 + \gamma_3) +\gamma_1 \beta)=1 \label{constr1},\\ \beta \gamma_2=1, ~~~~\beta \gamma_3=-2 \label{constr2}, \end{align} gives the bound \begin{equation} \frac{1}{n}(T_1+ T_1^{\dagger}+T_2+ T_2^{\dagger}-2(T_3+ T_3^{\dagger}) )\geq -2\frac{d-1}{\sqrt{d}} (i^{\dagger} i) + O(\frac{1}{n}). 
\label{boundt1t3} \end{equation} As a consequence, $H_N \geq (1-2 g \frac{d-1}{\sqrt{d}} ) (i^{\dagger} i) \geq O(\frac{1}{n})$, provided $g<\frac{\sqrt{d}}{2(d-1)}$. \end{proof} Lemma \ref{lemma7} implies that the operator in (\ref{Toperator}) is non-negative for $\epsilon >\frac{2 \pi^2 (d-1)}{\tilde{\omega}^3 \sqrt{d}}$. The operator $\frac{\pi^2}{n \tilde{\omega}^2}:\left( (i^{\dagger} i j^{\dagger} j)+(i^{\dagger} j j^{\dagger} i)-2(i^{\dagger} j i^{\dagger} j)\right):$ is of order $O(\frac{1}{n})$ and thus it does not contribute to the planar limit. Finally, by using Lemma \ref{lemma5} with $\epsilon_1=\frac{\pi ^2 (d-1)}{(d+1) \tilde{\omega}^2},~~\epsilon_2=\frac{2 \pi ^2}{ \tilde{\omega}^2}$, $g=\frac{2 \pi^2}{\tilde{\omega}^2}$ and $\omega= 2 (1 -\epsilon)\tilde{\omega}$ we get the following bound. \begin{theorem} The Hamiltonian of the MMM (\ref{quantum membrane}), asymptotically at large $n$, is bounded below by the following non-negative operator \begin{align} H_N^{-} :=\Omega_1 \tilde{A}^{\dagger}\tilde{A}+\Omega_2 \tilde{B}^{\dagger} \tilde{B} +\Omega_3( \tilde{B}^{\dagger}\tilde{A}+ \tilde{A}^{\dagger}\tilde{B}) +(e_0^{(0)}+\tilde{e}_0) n^2 \mathbb{I} +\sum_{\lambda,\lambda_4=0,k,l}(gG(\lambda) +\omega |\lambda|)P_{\lambda,k,l}+O(\frac{1}{n})\label{theoremlower} \end{align} where $\Omega_1:=\Omega_1(\omega,\frac{2 \pi ^2}{\tilde{\omega}^2}),~\Omega_2:=\Omega_2(\omega,\frac{2 \pi ^2}{\tilde{\omega}^2}),~\Omega_3:=\Omega_3(\omega,\frac{2 \pi ^2}{\tilde{\omega}^2})$, $\tilde{e}_0:=\tilde{e}_0(\omega, \epsilon_1, \epsilon_2, \frac{2 \pi ^2}{\tilde{\omega}^2})$, and $\tilde{A}, \tilde{B}$ are defined in Lemma \ref{lemma5}. \end{theorem} \begin{proof} It remains to prove that $H_N^{-} \geq 0$ at large $n$.
Note that $H_N^{-}$ can be rewritten, after performing a Bogoliubov transformation \begin{align} \tilde{A}&= (\cos(x) \hat{A} +\sin(x) \hat{B})\sqrt{2d(d+1)},\\ \tilde{B}&=( -\sin(x) \hat{A}+ \cos(x)\hat{B}) \sqrt{\frac{d(d+2)(d-1)}{d+1}} , \end{align} with $[\hat{A},\hat{A}^{\dagger}]=[\hat{B},\hat{B}^{\dagger}]=n^4 \mathbb{I}+O(n^2)$ and $[\hat{B},\hat{A}^{\dagger}]=[\hat{A},\hat{B}^{\dagger}]=O(n^2)$, as a sum of non-negative operators. Indeed, by demanding that the mixed terms $\hat{A}^{\dagger}\hat{B}+\hat{B}^{\dagger}\hat{A}$ vanish, one finds the value of $x$ such that \begin{align} H_N^{-}=\tilde{\Omega}_1 \hat{A}^{\dagger} \hat{A}+\tilde{\Omega}_2 \hat{B}^{\dagger} \hat{B} + \sum_{\lambda,\lambda_4=0,k,l}(gG(\lambda) +\omega |\lambda|) P_{\lambda,k,l} +(e_0^{(0)}+\tilde{e}_0) n^2 \mathbb{I}+O(\frac{1}{n}), \end{align} for some $\tilde{\Omega}_1>0, \tilde{\Omega}_2>0$. Moreover, $g G(\lambda)+\omega|\lambda| \geq 0 ~ \forall \lambda$ since $\omega> g$. \end{proof} Since $H_N^{-} \geq (e_0^{(0)}+\tilde{e}_0) n^2 \mathbb{I} + O(\frac{1}{n})$, we get a lower bound for $e_0$ in the planar limit \footnote{In Section \ref{perturbations diagrams} we give a perturbative argument, by constructing a perturbative series up to the third order in a certain effective coupling constant, that the planar limit is valid for this model, i.e. the neglected operators of order $O(\frac{1}{n})$ do not affect the ground state energy.} \begin{align} e_0^{(lower)}:=e_0^{(0)}+\tilde{e}_0. \end{align} We refer the reader to Table \ref{table2} for several numerical values.\\ \section{Perturbative expansion} \label{perturbations diagrams} We have seen that the Hamiltonian of an interacting model with a quartic interaction satisfying the assumption in Lemma 1 can be rewritten as a sum of a non-interacting Hamiltonian $\hat{H}_{0,N}:= H_{0,N}+ N e_0^{(0)} \mathbb{I}$ and a normal ordered interaction (w.r.t. the optimized Fock vacuum $\Psi_0(\tilde{\omega})$) \begin{equation} H_N=\hat{H}_{0,N} + \frac{1}{N^{\gamma}}:V_N:.
\label{perturbed_H} \end{equation} This suggests a possible perturbative expansion around the spectrum of $\hat{H}_{0,N}$. Denote the eigenvalues and eigenstates of $\hat{H}_{0,N}$ by $E_{k,N}^{(0)}$ and $\psi_{k,N}^{(0)}$ respectively, where the index $k$ takes values in the appropriate index space $\mathcal{J}_N$. Using the standard technique of stationary perturbations in quantum mechanics (Rayleigh-Schrödinger perturbation theory), i.e. assuming that the actual eigenvalues and eigenstates of the full Hamiltonian, $E_{k,N}$ and $\psi_{k,N}$, can be expanded in a power series in $\epsilon_N:= \epsilon N^{-\gamma}$, where $\epsilon$ is a bookkeeping parameter\footnote{In fact $\epsilon$ is not only a bookkeeping parameter but also a manifestation of the tacit assumption that a small parameter is hidden in the model, which in general need not exist. However, we will see that for both matrix models such a parameter exists.}, one can compute the first few terms of the perturbative expansion for finite $N$, take the limit $N \rightarrow \infty$, and in the end put $\epsilon=1$. The corrections to the eigenvalue $E_{k,N}$ of $H_N$ up to the third order read \begin{align} E_{k,N}&= E_{k,N}^{(0)}+\epsilon_N E_{k,N}^{(1)} + \epsilon_N^2 E_{k,N}^{(2)}+\epsilon_N^3 E_{k,N}^{(3)}+ O(\epsilon^4),\\ E_{k,N}^{(1)}&=\langle \psi_k^{(0)},:V_N: \psi_k^{(0)} \rangle, \label{perturbative expansion}\\ E_{k,N}^{(2)}&=\langle \psi_k^{(0)},:V_N: \frac{Q_0}{E_{k,N}^{(0)}-\hat{H}_{0,N} }:V_N: \psi_k^{(0)}\rangle,\\ E_{k,N}^{(3)}&=\langle \psi_k^{(0)},:V_N: \frac{Q_0}{E_{k,N}^{(0)}-\hat{H}_{0,N} }:V_N:\frac{Q_0}{E_{k,N}^{(0)}-\hat{H}_{0,N} } :V_N: \psi_k^{(0)}\rangle \\&-E_{k,N}^{(1)}\langle \psi_k^{(0)},:V_N: \frac{Q_0}{(E_{k,N}^{(0)}-\hat{H}_{0,N} )^2}:V_N: \psi_k^{(0)}\rangle, \end{align} where $Q_0$ is the orthogonal projection on $span(\psi_k^{(0)})^{\perp}$ and $\frac{Q_0}{E_{k,N}^{(0)}-\hat{H}_{0,N} }:=Q_0(E_{k,N}^{(0)}-\hat{H}_{0,N})^{-1}Q_0$.
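Note that the first order correction to the ground state energy vanishes identically: $:V_N:$ is normal ordered with respect to the optimized Fock vacuum $\Psi_0(\tilde{\omega})$, and every normal ordered monomial of non-zero degree has vanishing vacuum expectation value, hence \begin{align} E_{0,N}^{(1)}=\langle \psi_0^{(0)},:V_N: \psi_0^{(0)} \rangle=0. \end{align}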
It is of course not clear whether the series converges, nor even whether its separate terms do in the limit $n \rightarrow \infty$; one usually expects such a series to be merely asymptotic. However, we shall see below that the first three terms turn out to provide an astonishingly accurate approximation for the ground state energy and the spectral gap for the AO, and to be consistent with the bounds obtained for the MMM. We will also show that the spectral gap for the MMM remains finite at large $n$, at least up to the $2^{nd}$ order. Another important aspect of the perturbative expansion considered here is that it establishes a natural connection between the calculations presented in the previous sections and the planar limit in gauge field theories. In the next subsection we will give examples of planar and non-planar contributions (leading and subleading, respectively) to the vacuum energy using a diagrammatic representation of certain terms appearing in the above perturbative series (\ref{perturbative expansion}), which also justifies why one can neglect the operators of order $O(\frac{1}{n})$ in (\ref{Hneps3}) and (\ref{lastpart}), at least in the vacuum energy calculations. \subsection{Vacuum energy corrections} Let us recall that the $0^{th} $ term gives the gaussian variational upper bound and point out that the $1^{st}$ order term is always zero. We will see that (at least up to $3^{rd}$ order) the leading terms of the perturbative contributions to the vacuum energy for the models considered in this note correspond to \emph{connected} planar diagrams in the diagrammatic representation and they are proportional to $n^2$. Let us compute the second and third order corrections to the ground state energy of the AO (\ref{1matrix}).
We put $E_{0,N}=N e_0$ and $E_{0,N}^{(0)}=N e_0^{(0)}$. \begin{itemize} \item $2^{nd}$ order correction for the AO \begin{align} \frac{e_{0,N}^{(2)}}{2}=\lim_{n \rightarrow \infty}\frac{\epsilon_N^2}{N}\langle \psi_0,:V_N: \frac{Q_0}{E_{0,N}^{(0)}-\hat{H}_{0,N} }:V_N: \psi_0 \rangle=\\-\lim_{n \rightarrow \infty} \frac{g^2 \epsilon^2}{ 64 n^4 \tilde{\omega}^5} \langle \psi_0,A A^{\dagger} \psi_0 \rangle = -\frac{g^2 \epsilon^2}{ 16 \tilde{\omega}^5}, \label{2ndorder} \end{align} where we have used that $\hat{H}_{0,N} A^{\dagger }\psi_0= (4 \tilde{\omega} +N e_0^{(0)})A^{\dagger }\psi_0$. Let us point out the connection between our perturbative expansion and the topological expansion, whose leading, genus zero, order is the planar limit. The contraction occurring in (\ref{2ndorder}) has the form $Tr(T_a T_b T_c T_d) Tr(T_{\tilde{a}} T_{\tilde{b}} T_{\tilde{c}} T_{\tilde{d}}) \delta_{a \tilde{b}}\delta_{b \tilde{a}} \delta_{d \tilde{c}} \delta_{c \tilde{d}}$ and it can be represented graphically as \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt, inner sep=0pt] \draw[ultra thick] (0,0) node[circle, minimum height=1cm,minimum width=1cm,draw] (1) {}; \draw (1/1.41,1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (2) [label=right:$a$] {}; \draw (-1/1.41,1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (3) [label=right:$b$] {}; \draw (-1/1.41,-1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (4) [label=right:$c$] {}; \draw (+1/1.41,-1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (5) [label=right:$d$] {}; \draw[ultra thick] (1) -- (2); \draw[ultra thick] (1) -- (3); \draw[ultra thick] (1) -- (4); \draw[ultra thick] (1) -- (5); \draw[ultra thick] (3,0) node[circle, minimum height=1cm,minimum width=1cm,draw] (6) {}; \draw (3+1/1.41,+1/1.41) node (7) [label=right:$\tilde{a}$] {}; \draw (3-1/1.41,+1/1.41) node (8) [label=right:$\tilde{b}$] {};
\draw (3-1/1.41,-1/1.41) node (9) [label=right:$\tilde{c}$] {}; \draw (3+1/1.41,-1/1.41) node (10) [label=right:$\tilde{d}$] {}; \draw[ultra thick] (6) -- (7); \draw[ultra thick] (6) -- (8); \draw[ultra thick] (6) -- (9); \draw[ultra thick] (6) -- (10); \draw (3) to [out=60,in=120] (7); \draw (4) to [out=-60,in=-120] (10); \draw (2) to [out=60,in=120] (8); \draw (5) to [out=-60,in=-120] (9); \end{tikzpicture} \end{center} where a ring with 4 outgoing branches represents the trace of a product of 4 matrices $T_a$, a circle at the end represents an annihilation operator while a square stands for a creation operator carrying the indicated index. Thin lines between vertices represent Kronecker deltas and Wick contractions between annihilation and creation operators, respectively. It is clear that the maximal power of $n$ is attained when adjacent indices in the two traces are contracted, which corresponds exactly to the planar contraction drawn above. If one instead contracts e.g.\ $b$ with $\tilde{b}$ and $a$ with $\tilde{a}$, then the contribution to (\ref{2ndorder}) is subleading and the resulting diagram cannot be drawn in the plane without intersections of the contraction lines. \item $3^{rd}$ order correction for the AO \begin{align} \frac{e_{0,N}^{(3)}}{2}=\lim_{n \rightarrow \infty} \frac{\epsilon_N^3}{N} \langle \psi_0,:V_N: \frac{Q_0}{E_{0,N}^{(0)}-\hat{H}_{0,N} }:V_N:\frac{Q_0}{E_{0,N}^{(0)}-\hat{H}_{0,N} } :V_N: \psi_0\rangle \\ = \lim_{n \rightarrow \infty} \frac{g^3 \epsilon^3}{ 4^5 n^5 \tilde{\omega}^8} \langle \psi_0,A (a^{\dagger}a^{\dagger} a a) A^{\dagger}\psi_0 \rangle = \frac{g^3 \epsilon^3}{ 4^2 \tilde{\omega}^8}.
\label{3rdorder} \end{align} A graphical representation of this contribution has the following form \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt, inner sep=0pt] \draw[ultra thick] (0,0) node[circle, minimum height=1cm,minimum width=1cm,draw] (1) {}; \draw (1/1.41,1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (2) [label=right:$a$] {}; \draw (-1/1.41,1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (3) [label=right:$b$] {}; \draw (-1/1.41,-1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (4) [label=right:$c$] {}; \draw (+1/1.41,-1/1.41) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (5) [label=right:$d$] {}; \draw[ultra thick] (1) -- (2); \draw[ultra thick] (1) -- (3); \draw[ultra thick] (1) -- (4); \draw[ultra thick] (1) -- (5); \draw[ultra thick] (3,0) node[circle, minimum height=1cm,minimum width=1cm,draw] (6) {}; \draw (3+1/1.41,+1/1.41) node (7) [label=right:$\tilde{a}$] {}; \draw (3-1/1.41,+1/1.41) node (8) [label=right:$\tilde{b}$] {}; \draw (3-1/1.41,-1/1.41) node (9) [label=right:$\tilde{c}$] {}; \draw (3+1/1.41,-1/1.41) node (10) [label=right:$\tilde{d}$] {}; \draw[ultra thick] (6) -- (7); \draw[ultra thick] (6) -- (8); \draw[ultra thick] (6) -- (9); \draw[ultra thick] (6) -- (10); \draw[ultra thick] (1.5,-3) node[circle, minimum height=1cm,minimum width=1cm,draw] (11) {}; \draw (1.5+1/1.41,-1/1.41-3) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (12) [label=right:$e$] {}; \draw (1.5+1/1.41,1/1.41-3) node[diamond, minimum height=0.3cm,minimum width=0.3cm,draw] (13) [label=right:$f$] {}; \draw (1.5+-1/1.41,1/1.41-3) node (14) [label=right:$g$] {}; \draw (1.5-1/1.41,-1/1.41-3) node (15) [label=right:$h$] {}; \draw[ultra thick] (11) -- (12); \draw[ultra thick] (11) -- (13); \draw[ultra thick] (11) -- (14); \draw[ultra thick] (11) -- (15); \draw (3) to [out=60,in=120] (7); \draw (9) to [out=225,in=45] (13); \draw (2) 
to [out=60,in=120] (8); \draw (10) to [out=-45,in=0] (12); \draw (5) to [out=-45,in=180] (14); \draw (4) to [out=225,in=180] (15); \end{tikzpicture} \end{center} where the bottom circle represents the operator $(a^{\dagger} a^{\dagger}a a)=Tr(T_e T_f T_g T_h) a^{\dagger}_e a^{\dagger}_f a_g a_h $. If one replaces it with the operator $:(a^{\dagger}a a^{\dagger} a): Tr(T_e T_f T_g T_h) a^{\dagger}_e a^{\dagger}_g a_f a_h$ then the resulting diagram is no longer planar and it leads to a subleading contribution to (\ref{3rdorder}). These corrections decrease the error of determining the ground state energy at large $g$ to $1.5 \permil $ and $1.1 \permil$ resp. (see Table \ref{table3}). \begin{table}[H] \caption{Comparison of the exact ground state energies $e_0$ \cite{planar} with the optimized Fock space ground state energies up to $3^{rd}$ order of the perturbation expansion $e_0^{(0)}/2,e_0^{(2)}/2, e_0^{(3)}/2$ for several values of the coupling constant $g$ for the AO.} \label{table3} \begin{center} \begin{tabular}{ | l | l | l |l |l |l | p{5cm} |} \hline g & $e_0^{(0)}/2$& $e_0^{(2)}/2$ & $e_0^{(3)}/2$ & $e_0$ \\ \hline 0.01 &0.505 &0.505& 0.505 & 0.505 \\ \hline 0.1 & 0.543&0.542& 0.542&0.542 \\ \hline 0.5 &0.653& 0.651& 0.651& 0.651 \\ \hline 1.0 & 0.743& 0.740& 0.740 & 0.740 \\ \hline 50 & 2.235&2.214& 2.219& 2.217 \\ \hline 1000 & 5.968&5.907& 5.922&5.915 \\ \hline $g \rightarrow \infty $ & 0.59527 $g^{\frac{1}{3}}$ & 0.589075 $g^{\frac{1}{3}}$ & 0.59062 $g^{\frac{1}{3}}$& 0.58993 $g^{\frac{1}{3}}$ \\ \hline \end{tabular} \end{center} \end{table} For the MMM one can easily repeat the calculation \item $2^{nd}$ order correction for the MMM \begin{align} e_{0,N}^{(2)}=-\lim_{n \rightarrow \infty}\frac{1}{n^4}\frac{\pi^4}{8 \tilde{\omega}^5}\langle \psi_0,\left((iijj)-(ijij) \right)\left( (k^{\dagger}k^{\dagger}l^{\dagger}l^{\dagger})-(k^{\dagger}l^{\dagger}k^{\dagger}l^{\dagger})\right) \psi_0 \rangle =-\frac{6d(d-1)\pi^4}{8 \tilde{\omega}^5} \end{align} \item 
$3^{rd}$ order correction for the MMM \begin{align} e_{0,N}^{(3)}= \lim_{n \rightarrow \infty} \frac{1}{n^5}\frac{\pi^6}{32\tilde{\omega}^8} \langle \psi_0,\lbrace (iijj)-(ijij)\rbrace \lbrace S_1+S_2 -2 S_3 \rbrace \lbrace (i^{\dagger}i^{\dagger}j^{\dagger}j^{\dagger})-(i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger}) \rbrace\psi_0 \rangle \\ = d \left(d^2+10 d-11\right) \frac{\pi^6}{8 \tilde{\omega}^8}, \end{align} where we have used the asymptotic formulas (\ref{S1})-(\ref{S3}). A diagrammatic representation of these contributions is qualitatively the same as for the AO, which leads to the conclusion that the operator $:((i^{\dagger}i j^{\dagger}j)+(i^{\dagger}j j^{\dagger} i)-2(i^{\dagger}j i^{\dagger} j)):$ in (\ref{lastpart}) is negligible in the planar limit. \end{itemize} The reader is referred to Table \ref{table2} for several numerical values, which show that the proposed perturbative expansion is consistent with our upper and lower bounds. One can see that the operators $Tr(a^{\dagger4}), Tr(a^4)$, i.e. $A,A^{\dagger}$ for the AO and $(iijj), (ijij),(i^{\dagger} i^{\dagger}j^{\dagger}j^{\dagger}), (i^{\dagger}j^{\dagger}i^{\dagger}j^{\dagger})$ for the MMM are the main sources of corrections to the vacuum energy for both models.
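Both tables can be reproduced numerically. In the sketch below, the second- and third-order MMM corrections are the ones quoted above; the optimized frequencies and zeroth-order energies ($\tilde{\omega}^3=\tilde{\omega}+4g$ and $e_0^{(0)}/2=\tilde{\omega}/4+1/(4\tilde{\omega})+g/(2\tilde{\omega}^2)$ for the AO with unit mass, $\tilde{\omega}^3=4\pi^2(d-1)$ and $e_0^{(0)}=3d\tilde{\omega}/4$ for the MMM) as well as the AO corrections $-g^2/(16\tilde{\omega}^5)$ and $+g^3/(16\tilde{\omega}^8)$ are inferred here from the tabulated values rather than quoted from the text, and should be treated as assumptions:

```python
import math

def ao_freq(g):
    """Real root of w^3 - w - 4g = 0 (assumed form of the optimal AO frequency)."""
    lo, hi = 1.0, 2.0 + (4 * g) ** (1 / 3)   # f(lo) < 0 < f(hi) for all g > 0
    for _ in range(200):                      # bisection
        mid = 0.5 * (lo + hi)
        if mid ** 3 - mid - 4 * g > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def ao_energies(g):
    """(e0^(0)/2, e0^(2)/2, e0^(3)/2) for the anharmonic oscillator."""
    w = ao_freq(g)
    e0 = w / 4 + 1 / (4 * w) + g / (2 * w ** 2)
    e2 = e0 - g ** 2 / (16 * w ** 5)
    e3 = e2 + g ** 3 / (16 * w ** 8)
    return e0, e2, e3

def mmm_energies(d):
    """(e0^(0), e0^(2), e0^(3)) for the membrane matrix model, using the
    second- and third-order corrections quoted in the text."""
    w = (4 * math.pi ** 2 * (d - 1)) ** (1 / 3)
    e0 = 3 * d * w / 4
    e2 = e0 - 6 * d * (d - 1) * math.pi ** 4 / (8 * w ** 5)
    e3 = e2 + d * (d ** 2 + 10 * d - 11) * math.pi ** 6 / (8 * w ** 8)
    return e0, e2, e3
```

With these assumptions, `mmm_energies(3)` reproduces the first column of Table \ref{table2} to the displayed precision, and `ao_energies(g)` matches the Table \ref{table3} entries to within $10^{-3}$.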
\begin{table}[H] \caption{Comparison of the ground state energies for the MMM in the $0^{th}$, $2^{nd}$ and $3^{rd}$ order as well as the lower bound $e_0^{(lower)}$ found in Section \ref{spectral bounds MMM}} \label{table2} \begin{center} \begin{tabular}{ | l | l | l |l |l | l | p{5cm} |} \hline d & 3 & 9 & 15 & 25 & 35 \\ \hline $e_0^{(0)}$ & 9.653 & 45.968 & 92.324 & 184.158 & 289.562 \\ \hline $e_0^{(2)}$ & 9.351 & 45.609 & 91.912 & 183.679 & 289.030 \\ \hline $e_0^{(3)}$ & 9.439 & 45.646 & 91.944 &183.709 & 289.060\\ \hline $e_0^{(lower)}$ & 9.349 &45.583 &91.887& 183.658 &289.013 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Spectral gap corrections} The spectrum of the singlet sector for the AO consists of a divergent vacuum energy and an infinite family of equally spaced excited states, i.e.\ with a constant, finite energy gap $\omega(g)$ \cite{shapiro,Yaffe,planar limit marchesini onofri}: \begin{equation} E_{\lambda}= e_0 n^2+ \omega(g) \sum_{j} j \lambda_j. \end{equation} The degeneracy of an energy level $E_k=n^2 e_0 + k \omega(g)$ is given by the number of partitions of $k$. This result is recovered in our perturbative expansion already in the $0^{th}$ order, i.e.\ for $U(n)$ invariant eigenstates of $\hat{H}_{0,N}= \tilde{\omega}(a^{\dagger}a)+ \frac{e_0^{(0)}}{2} n^2 \mathbb{I}$. For later convenience we choose the second excited eigenstate of $\hat{H}_{0,N}$, i.e. $\psi_{\lambda}= \mathcal{N}_{\lambda} (a^{\dagger} a^{\dagger})\Psi_0(\tilde{\omega})$, $\lambda_i=0, ~i\neq 2,~\lambda_2=1$, as a starting point for the perturbative expansion.
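The degeneracies above, i.e.\ the number of partitions of $k$, can be tabulated with a few lines of code (an illustrative sketch, not part of the derivation):

```python
def partition_count(k):
    """Number of partitions of k, i.e. the degeneracy of the level E_k."""
    p = [1] + [0] * k
    for j in range(1, k + 1):          # allow parts of size j
        for m in range(j, k + 1):
            p[m] += p[m - j]           # p[m]: partitions of m into parts <= j
    return p[k]
```

For instance `partition_count(4)` returns 5, so the level $E_4 = n^2 e_0 + 4\,\omega(g)$ is five-fold degenerate.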
The first order correction is of order $O(1)$ \begin{align} E_{\lambda}^{(1)}= \lim_{n \rightarrow \infty} \frac{g}{\tilde{\omega}^2n} \langle \psi_{\lambda},(a^{\dagger}a^{\dagger} a a)\psi_{\lambda}\rangle =\lim_{n \rightarrow \infty} \mathcal{N}_{\lambda}^2 \frac{g}{\tilde{\omega}^2n} \langle \psi_{\lambda}, (aa)(a^{\dagger}a^{\dagger} a a) (a^{\dagger}a^{\dagger}) \psi_{\lambda}\rangle =\frac{g}{\tilde{\omega}^2}. \end{align} The second and third order contributions $E_{\lambda}^{(2)},E_{\lambda}^{(3)}$ obtained in this way diverge like $n^2$. However, the divergent parts are exactly equal to the corresponding corrections to the vacuum energy and thus the renormalized energies are finite. \begin{align} & E_{\lambda,R}^{(2)}:=\lim_{n \rightarrow \infty}(E_{\lambda}^{(2)}-e_0^{(2)}n^2) \label{canc1} \\ &= \lim_{n \rightarrow \infty}\left[ \frac{g^2}{4^2\tilde{\omega}^4 n^2} \left(16 \langle \psi_{\lambda},(a^{\dagger}a a a) \frac{Q_0}{E_{\lambda,N}^{(0)}-\hat{H}_{0,N}}(a^{\dagger}a^{\dagger} a^{\dagger} a)\psi_{\lambda}\rangle + \langle \psi_{\lambda},A \frac{Q_0}{E_{\lambda,N}^{(0)}-\hat{H}_{0,N} }A^{\dagger}\psi_{\lambda}\rangle \right) -e_0^{(2)}n^2\right] \\ & =-\frac{ 5 g^2}{ \tilde{\omega}^5}, \end{align} and similarly the $3^{rd}$ order renormalized term \begin{align} E_{\lambda,R}^{(3)}:=\lim_{n \rightarrow \infty}(E_{\lambda}^{(3)}-e_0^{(3)}n^2) = \frac{310 g^3}{32\tilde{\omega}^8}. \label{canc2} \end{align} Therefore the spectral gap $\omega(g)$ up to the third order becomes \begin{equation} \omega(g)=\frac{1}{2}(2 \tilde{\omega}+E_{\lambda,R}^{(1)}+E_{\lambda,R}^{(2)}+E_{\lambda,R}^{(3)})=\tilde{\omega}+\frac{g}{\tilde{\omega}^2}-\frac{5 g^2}{2 \tilde{\omega}^5}+\frac{155 g^3}{32 \tilde{\omega}^8}. \end{equation} Table \ref{table4} shows several numerical values. The accuracy of our approximation is very good, also in the strong coupling limit $g \rightarrow \infty$. The reason why it works so well is the following. 
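(As a quick numerical aside, the gap series above reproduces the $\omega^{(k)}$ columns of Table \ref{table4}. The script below assumes that the optimized frequency is the real root of $\tilde{\omega}^3-\tilde{\omega}-4g=0$; this is consistent with the tabulated $\omega^{(0)}$ values but is inferred here rather than quoted from the text.)

```python
import math

def opt_freq(g):
    """Real root of w^3 - w - 4g = 0 (assumed optimal-frequency equation)."""
    w = max(1.0, (4 * g) ** (1 / 3))
    for _ in range(60):                   # Newton iteration on a convex f
        w -= (w ** 3 - w - 4 * g) / (3 * w ** 2 - 1)
    return w

def gap_orders(g):
    """Successive approximations omega^(0), ..., omega^(3) to the gap omega(g)."""
    w = opt_freq(g)
    o0 = w
    o1 = o0 + g / w ** 2
    o2 = o1 - 5 * g ** 2 / (2 * w ** 5)
    o3 = o2 + 155 * g ** 3 / (32 * w ** 8)
    return o0, o1, o2, o3
```

For instance `gap_orders(2)` returns approximately $(2.17,\,2.59,\,2.38,\,2.46)$, matching the first row of Table \ref{table4} up to rounding.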
In the optimized Fock space formulation the Hamiltonian takes the form, cp. (\ref{Hneps1})-(\ref{Hneps3}), \begin{align} H_N +\beta_N \mathbb{I}= \tilde{\omega}\left[(a^{\dagger}a) + \frac{g}{4\tilde{\omega}^3} (\text{interaction terms}) \right], \end{align} hence the effective coupling constant is not $g$, but $\tilde{g}:=\frac{g}{4\tilde{\omega}^3}$. By using eq.\ (\ref{opt freq}), one gets $\max_g \tilde{g}=\frac{1}{16}$, attained in the limit $g \rightarrow \infty$, which shows that the system in the optimized Fock space is coupled weakly even if the coupling $g$ in the original formulation is strong. Let us also point out that each order in $\tilde{g}$ gives the correct scaling of the spectrum as \begin{align} \tilde{\omega}\tilde{g}^k \propto \frac{g^k}{\tilde{\omega}^{3k-1}} \propto g^{\frac{1}{3}}, ~~~ g\rightarrow \infty. \end{align} \begin{table}[H] \caption{Comparison of the exact value of the spectral gap $\omega(g)$ \cite{singlet spectrum mondello onofri} with the perturbative expansion in the optimized Fock space $\omega^{(0)}=\tilde{\omega}$, $\omega^{(1)}$, $\omega^{(2)}$, $\omega^{(3)}$.} \label{table4} \begin{center} \begin{tabular}{ | l | l | l |l |l |l | p{5cm} |} \hline g & $\omega(g)$& $\omega^{(0)}$ & $\omega^{(1)}$ & $\omega^{(2)}$ & $\omega^{(3)}$ \\ \hline 2 &2.454 &2.166& 2.592 & 2.382&2.463 \\ \hline 50 & 6.811 &5.905&7.340& 6.468&6.878 \\ \hline 200 &10.76& 9.319& 11.62& 10.20&10.88 \\ \hline 1000 & 18.37&15.90&19.85 & 17.39&18.58 \\ \hline \end{tabular} \end{center} \end{table} Let us repeat the calculation for the MMM. As the starting point for the perturbation expansion we choose an analogue of the state $\psi_{\lambda}$ considered previously for the AO, i.e. the first $SO(d)\times SU(n)$ invariant excited state of $\hat{H}_{0,N}= 2 \tilde{\omega}(i^{\dagger} i) + e_0^{(0)}n^2 \mathbb{I}$, namely $\psi_{\Lambda}=\mathcal{N}_{\Lambda} (i^{\dagger}i^{\dagger})\Psi_0(\tilde{\omega}) $ and thus $E_{\Lambda}^{(0)}=4 \tilde{\omega}+e_0^{(0)}n^2$.
The first order correction reads \begin{align} E_{\Lambda}^{(1)}=\lim_{n \rightarrow \infty}\frac{2 \pi^2}{\tilde{\omega}^2 n} \langle \psi_{\Lambda},(S_1+S_2-2S_3)\psi_{\Lambda}\rangle =\frac{4 \pi^2}{\tilde{\omega}^2}(d-1), \end{align} and the second order is again divergent with the singular part equal to the corresponding correction to the vacuum energy $e_0^{(2)}n^2$. Thus the renormalized contribution \begin{align} E_{\Lambda,R}^{(2)}&:=\lim_{n \rightarrow \infty}(E_{\Lambda}^{(2)} - e_0^{(2)}n^2) \label{canc3}= \lim_{n \rightarrow \infty}[- \frac{1}{4\tilde{\omega}} \frac{4 \pi^4}{\tilde{\omega}^4 n^2} \langle \psi_{\Lambda},(T_1+T_2-2T_3)(T_1^{\dagger}+T_2^{\dagger}-2T_3^{\dagger})\psi_{\Lambda}\rangle \\&-\frac{1}{8 \tilde{\omega}}\frac{\pi^4}{\tilde{\omega}^4 n^2} \langle \psi_{\Lambda},((iijj)-(ijij))((k^{\dagger} k^{\dagger} l^{\dagger} l^{\dagger})-(k^{\dagger}l^{\dagger}k^{\dagger}l^{\dagger}))\psi_{\Lambda}\rangle - e_0^{(2)}n^2] \\&= -\frac{40 \pi^4}{\tilde{\omega}^5}(d-1)-\frac{\pi^4}{\tilde{\omega}^5}(d^2+7), \end{align} is finite at large $n$. Table \ref{table5} contains several numerical values of the renormalized energies $E_{\Lambda,R}^{(k)}=\sum_{i=0}^{k}(E_{\Lambda}^{(i)}-e_0^{(i)}n^2),~k=0,1,2$. It is apparent that the perturbative series has better convergence properties for higher dimensions $d$, which can be justified by identifying an expansion parameter in the Hamiltonian, cp. (\ref{ABSoperator})-(\ref{lastpart}), \begin{align} H_N +\beta_N \mathbb{I}= 2\tilde{\omega}\left[(i^{\dagger}i) + \frac{\pi^2}{\tilde{\omega}^3} (\text{interaction terms}) \right], \end{align} hence the effective coupling constant is $\tilde{g}:=\frac{\pi^2}{\tilde{\omega}^3}=\frac{1}{4(d-1)}$.
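The first two rows of Table \ref{table5} follow directly from $\tilde{\omega}^3=4\pi^2(d-1)$ (the form of the optimized MMM frequency implied by $\tilde{g}=1/(4(d-1))$): $E_{\Lambda,R}^{(0)}=4\tilde{\omega}$ and, since the first-order correction $4\pi^2(d-1)/\tilde{\omega}^2$ then equals $\tilde{\omega}$ itself, $E_{\Lambda,R}^{(1)}=5\tilde{\omega}$. A quick check (an illustrative sketch; the closed form of $\tilde{\omega}$ is an inference, not a quotation):

```python
import math

def mmm_gap_orders(d):
    """E_Lambda,R^(0) and E_Lambda,R^(1) for the first SO(d)xSU(n) invariant
    excited state, with the optimized frequency w^3 = 4 pi^2 (d-1)."""
    w = (4 * math.pi ** 2 * (d - 1)) ** (1 / 3)
    e0 = 4 * w
    e1 = e0 + 4 * math.pi ** 2 * (d - 1) / w ** 2   # equals 5 * w
    return e0, e1
```

For instance `mmm_gap_orders(3)` returns approximately $(17.16,\,21.45)$, the $d=3$ column of Table \ref{table5}.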
\begin{table}[H] \caption{The perturbative expansion for the renormalized energy (the vacuum energy subtracted) of the first $SO(d)\times SU(n)$ invariant excited state for the MMM at large $n$} \label{table5} \begin{center} \begin{tabular}{ | l | l | l |l |l |l | p{5cm} |} \hline d &3 & 9 &15& 25& 35 \\ \hline $E_{\Lambda,R}^{(0)}$ &17.16 &27.24& 32.82& 39.29& 44.12 \\ \hline $E_{\Lambda,R}^{(1)}$ & 21.45 &34.05&41.03&49.11&55.15 \\ \hline $E_{\Lambda,R}^{(2)}$ &16.09& 31.92& 39.57& 48.09&54.34 \\ \hline \end{tabular} \end{center} \end{table} \section{Concluding remarks} We have seen in Section \ref{opt Fock space} that the special choice of the Fock vacuum (resp. Fock space frequencies, $\tilde{\omega}_I$) eliminates one of the divergent parts of the Hamiltonian, i.e. $Tr(a^2+a^{\dagger 2}) \propto O(n)$, and minimizes the vacuum expectation value $\langle \Psi_0(\omega), H_N \Psi_0(\omega) \rangle=e_0^{(0)} n^2$, thus providing a rigorous upper bound for the ground state energy at large $n$ in a concise way for a rather general family of Hamiltonians with a quartic potential (\ref{general}). A further study of the remaining divergent operators, i.e. $\frac{1}{n} Tr(a^{\dagger 4}+ a^4)$, allows us to produce astonishingly good lower bounds for the ground state energy for the AO as well as for the MMM (Theorems 1 and 2) in the planar limit. A similar mechanism should lead to the true vacuum in Fock space; however, this task is rather challenging. Our results suggest that one should consider bases consisting of non-linear coherent states instead of polynomials in simple $U(n)/SU(n)$ invariants (\ref{partitions basis}), (\ref{partitions basis2}), which would fully capture the divergent behaviour of $\frac{1}{n} Tr(a^{\dagger 4}+ a^4)$.
For instance, a possible vacuum state $\phi_0$ for the operator $\frac{1}{n^4} B^{\dagger}B$ from Lemma \ref{lemma2} has the following form \begin{equation} \phi_0:= \mathcal{N}e^{\tilde{\gamma} C^{\dagger}} \psi_0,~~ [A,C^{\dagger}]=\mathbb{I}, ~~\tilde{\gamma}=-\frac{\beta n^3}{\alpha+ \beta \gamma}, \label{renormalized vacuum} \end{equation} and $C$ can be found recursively as an infinite sum of single-trace operators with increasing length, $C=\sum_{k=0}C_k$, $C_0= \frac{1}{4n^4}A$. Nevertheless, the two matrix models considered here admit a perturbative formulation, which provides evidence for the validity of the planar limit and seems to be efficient in approximating the spectrum, including the strong-coupling regime for the AO, allowing us to rederive the well-known result of Brezin et al.\ \cite{planar} for the ground state energy with high accuracy. In particular, the Hamiltonian for the Membrane Matrix Models (\ref{classmembr}), originally not containing a quadratic term, gains an effective mass in the optimized Fock space and can be treated as a perturbation of a non-interacting Hamiltonian by the original normal-ordered quartic potential with the effective coupling constant $\tilde{g}= \frac{1}{4(d-1)}$, which means that the model contains a small hidden parameter. This is perhaps not surprising in the context of the established validity of $1/D$ expansions (where $D=d+1$ is the dimension of space-time) in various Yang-Mills theories; see \cite{1 over d expansion, 1 over d expansion 2}. Our numerical results, Tables \ref{table2} and \ref{table5}, show that the perturbative expansion is stable, that the results become more accurate for larger dimensions $d$ of the embedding space, and that it leads to a finite energy gap at least up to the second order.
One expects that cancellations of the vacuum contributions (corresponding to vacuum diagrams), as in formulas (\ref{canc1}), (\ref{canc2}) and (\ref{canc3}), should occur at every order, provided the linked cluster theorem holds for this model. Therefore one can conjecture that the spectrum of the properly rescaled family of Membrane Matrix Models (\ref{quantum membrane}) in the large $n$ limit remains purely discrete with a finite spectral gap and a divergent vacuum energy, i.e. a generic energy level $E_{\Lambda}$ can be written as \begin{equation} E_{\Lambda}= e_0 n^2 + e_{\Lambda},~~ 0< e_{\Lambda} < \infty. \end{equation} \section*{Acknowledgements} The author would like to thank J. Hoppe, G. Lambert, E. Langmann, D. Lundholm and O.T. Turgut for valuable discussions, T. Morita and E. Onofri for e-mail correspondence, as well as KTH and the Swedish Research Council for financial support.
\section{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex}{2.3ex plus .2ex}{\large\bf}} \def\arabic{section}.{\arabic{section}.} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \long\def\@makefntext#1{\parindent 0cm\noindent \hbox to 1em{\hss$^{\@thefnmark}$}#1} \def\let\@currentlabel=\theequation\refstepcounter{equation{\let\@currentlabel=\arabic{section}.\arabic{equation}\refstepcounter{equation} \global\@eqnswtrue \global\@eqcnt\z@\tabskip\@centering\let\\=\@eqncr $$\halign to \displaywidth\bgroup\@eqnsel\hskip\@centering $\displaystyle\tabskip\z@{##}$&\global\@eqcnt\@ne \hfil${{}##{}}$\hfil &\global\@eqcnt\tw@ $\displaystyle\tabskip\z@{##}$\hfil \tabskip\@centering&\llap{##}\tabskip\z@\cr} \def\lefteqn#1{\hbox to 4\arraycolsep{$\displaystyle #1$\hss}} \begin{document} \begin{titlepage} \vspace{.5in} \begin{flushright} UCD-98-18\\ hep-th/9812013\\ November 1998\\ revised January 1999\\ \end{flushright} \vspace{.5in} \begin{center} {\Large\bf Black Hole Entropy\\[1ex] from Conformal Field Theory\\[1.2ex] in Any Dimension}\\ \vspace{.4in} {S.~C{\sc arlip}\footnote{\it email: [email protected]}\\ {\small\it Department of Physics}\\ {\small\it University of California}\\ {\small\it Davis, CA 95616}\\{\small\it USA}} \end{center} \vspace{.5in} \begin{center} {\large\bf Abstract} \end{center} \begin{center} \begin{minipage}{5.2in} {\small Restricted to a black hole horizon, the ``gauge'' algebra of surface deformations in general relativity contains a Virasoro subalgebra with a calculable central charge. The fields in any quantum theory of gravity must transform accordingly, i.e., they must admit a conformal field theory description. Applying Cardy's formula for the asymptotic density of states, I use this result to derive the Bekenstein-Hawking entropy. This method is universal---it holds for any black hole, and requires no details of quantum gravity---but it is also explicitly statistical mechanical, based on counting microscopic states. 
} \end{minipage} \end{center} \end{titlepage} \addtocounter{footnote}{-1} \section{Introduction} Since the discovery that black holes behave as thermal objects, an outstanding open question has been whether black hole thermodynamics has a ``statistical mechanical'' description in terms of microscopic states. An answer could shed light on broader problems of quantum gravity; at a minimum, black hole thermodynamics provides an important consistency check for any candidate theory of quantum gravity. Until recently, we had no convincing model of microscopic black hole states. Today, we have a plethora of possibilities, from D-brane states in string theory \cite{Dbrane} to spin network states in loop quantum gravity \cite{Ashtekar}. A fundamental issue remains, however, perhaps best described as the problem of universality \cite{Strominger1}. The Bekenstein-Hawking entropy can be computed entirely within the framework of quantum field theory in a fixed curved background. It is hard to see how such a calculation could ``know'' the details of a microscopic gravitational theory. Rather, it seems more likely that some unknown universal mechanism forces {\em any\/} suitable quantum theory to give the standard result. A major step toward finding a universal mechanism was taken by Strominger \cite{Strominger2}, who reanalyzed the (2+1)-dimensional black hole \cite{BTZ} using conformal field theory methods. Brown and Henneaux had shown that the asymptotic symmetry algebra for this solution was a Virasoro algebra, implying that any theory of microscopic states should be a conformal field theory \cite{Brown}. Strominger observed that the Cardy formula \cite{Cardy} for the asymptotic growth of states could thus be used to compute the entropy, and that the result agreed with the usual Bekenstein-Hawking expression. 
This analysis was subsequently extended to a number of higher-dimensional black holes with near-horizon geometries resembling that of the (2+1)-dimensional black hole (see \cite{Carlip1} for a partial list of references). But while many black holes have the appropriate near-horizon geometry for such an analysis, others do not. Moreover, the Virasoro algebra of Brown and Henneaux is an algebra of asymptotic symmetries at spatial infinity, while black hole entropy should arguably be a more local property of horizons. In this paper, I generalize Strominger's approach by looking at the symmetries of the horizon of an arbitrary black hole. The relevant algebra of surface deformations contains a physically important Virasoro algebra, essentially consisting of deformations of the $r$--$t$ plane that leave the horizon fixed. The analysis of Brown and Henneaux can be extended to find the central charge of this algebra. I show that the Cardy formula then yields the correct Bekenstein-Hawking entropy, independent of the details of the black hole. \section{Metric and Boundary Terms} Let us start with a general black-hole-like metric in $n$ spacetime dimensions,\footnote{Greek letters from the middle of the alphabet are spacetime indices, Roman letters are spatial indices, and Greek letters from the beginning of the alphabet are ``boundary'' or ``angular'' indices.} \begin{equation} ds^2 = -N^2 dt^2 + f^2(dr + N^r dt)^2 + \sigma_{\alpha\beta}(dx^\alpha + N^\alpha dt)(dx^\beta + N^\beta dt) , \label{a1} \end{equation} with a lapse function $N$ that vanishes at a horizon $r=r_+$ and behaves near $r=r_+$ as \begin{equation} N^2 = h(x^\alpha)(r-r_+) + O(r-r_+)^2 , \qquad n^a\partial_a N = {2\pi /\beta} , \label{a2} \end{equation} where $n^a$ is the unit normal to $r=r_+$ on a constant $t$ slice. For a stationary black hole with coordinates such that $N^r=0$, $\beta$ is the inverse Hawking temperature, and is constant on the horizon. 
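As a concrete illustration (not in the original text), the Schwarzschild metric in static coordinates, with $N^2 = 1-2GM/r$, $f = N^{-1}$ and $r_+=2GM$, satisfies the conditions (\ref{a2})-(\ref{a3}) with the familiar surface gravity:

```latex
% Schwarzschild: N^2 = 1 - 2GM/r, f = N^{-1}, r_+ = 2GM
N^2 = 1-\frac{2GM}{r} = \frac{1}{2GM}\,(r-r_+) + O\bigl((r-r_+)^2\bigr)
\quad\Longrightarrow\quad h = \frac{1}{2GM}\,,
\qquad
n^a\partial_a N = \frac{1}{f}\,\partial_r N = \frac{h}{2} + O(r-r_+)
= \frac{1}{4GM} = \frac{2\pi}{\beta}\,,
```

so $\beta = 8\pi GM$ is indeed the inverse Hawking temperature, and $\beta h/4\pi = 1$, so the fall-off $f = \frac{\beta h}{4\pi}N^{-1}+O(1)$ in (\ref{a3}) reduces to the exact Schwarzschild value $f = N^{-1}$.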
I will treat $r=r_+$ as a boundary---or, more precisely, a surface at which certain fields are fixed, and at which boundary terms are therefore needed in a variational principle \cite{Carlip2}---and will assume that the metric approaches that of a standard, momentarily stationary, black hole near this boundary. Suitable fall-off conditions near $N=0$ are \begin{eqnarray} &&f = {\beta h\over4\pi}N^{-1} + O(1) , \qquad N^r = O(N^2) , \qquad \sigma_{\alpha\beta} = O(1) , \qquad N^\alpha = O(1), \nonumber\\ &&(\partial_t - N^r\partial_r) g_{\mu\nu} = O(N) g_{\mu\nu}, \qquad \nabla_\alpha N_\beta + \nabla_\beta N_\alpha= O(N) . \label{a3} \end{eqnarray} The last condition is essentially the requirement that angular velocity be constant on the horizon. The extrinsic curvature of a slice of constant time then behaves as \begin{equation} K_{rr} = O(1/N^3) , \qquad K_{\alpha r} = O(1/N) , \qquad K_{\alpha\beta} = O(1) \label{a4} \end{equation} near the horizon. (Note that $\partial_r N = O(1/N)$.) In the Hamiltonian (ADM) formulation of general relativity, the group of ``gauge'' symmetries is the surface deformation group, generated by the quantity \begin{equation} H[{\hat\xi}] = \int_\Sigma d^{n-1}x\, {\hat\xi}^\mu{\cal H}_\mu + \hbox{\em boundary terms} , \label{a5} \end{equation} where $\{ {\cal H}_t, {\cal H}_a \}$ are the Hamiltonian and momentum constraints. The parameters ${\hat\xi}^\mu$ are almost, but not quite, identical to parameters $\xi^\mu$ labeling infinitesimal spacetime diffeomorphisms; the two are related by \cite{Brown2} \begin{equation} {\hat\xi}^t = N\xi^t, \qquad {\hat\xi}^a = \xi^a + N^a\xi^t . \label{a6} \end{equation} As usual, the variation of the ``volume piece'' of $H[{\hat\xi}]$ contains surface terms at the boundary---in this case, the horizon---which take the standard form \cite{Brown} \begin{equation} -{1\over16\pi G} \int_{r=r_+}\!\! 
d^{n-2}x\, \left\{ \sqrt{\sigma}\left(\sigma^{ac}n^b - \sigma^{ab}n^c\right) \left({\hat\xi}^t\nabla_c\delta g_{ab} - \nabla_c{\hat\xi}^t\delta g_{ab}\right) + 2 {\hat\xi}^a\delta\pi_a{}^r - {\hat\xi}^r \pi^{ab}\delta g_{ab} \right\} , \label{a7} \end{equation} where $\pi^{ab} = f\sqrt{\sigma}(K^{ab} - g^{ab}K )$ is the momentum conjugate to $g_{ab}$. To have a well-defined symmetry generator, we must add a boundary term to $H[{\hat\xi}]$ to cancel the variation (\ref{a7}). Note that the fall-off conditions (\ref{a3})--(\ref{a4}) necessitate that \begin{equation} {\hat\xi}^r = O(N^2), \qquad {\hat\xi}^t = O(N), \qquad {\hat\xi}^\alpha = O(1) , \label{a9} \end{equation} since otherwise surface deformations would change the metric near $N=0$. It is straightforward to check that a term \begin{equation} J[{\hat\xi}] = {1\over8\pi G}\int_{r=r_+}\!\! d^{n-2}x\, \Bigl\{ n^a\nabla_a{\hat\xi}^t\sqrt{\sigma} + {\hat\xi}^a\pi_a{}^r + n_a{\hat\xi}^a K\sqrt{\sigma} \Bigr\} \label{a10} \end{equation} added to the generator (\ref{a5}) yields a variation \begin{equation} \delta\left(H[{\hat\xi}] + J[{\hat\xi}]\right) = \hbox{\em bulk terms} + {1\over8\pi G} \int_{r=r_+}\!\! d^{n-2}x\, \left( \delta n^r\partial_r{\hat\xi}^t + {1\over f}{\hat\xi}^r\delta K_{rr} + \delta n_r{\hat\xi}^r K\right)\sqrt{\sigma} . \label{a11} \end{equation} In contrast to more familiar variational problems, the normal $n^a$ need not be fixed at the boundary, but $\delta n^r$ can be computed from the requirement that $\delta(g_{ab}n^an^b) = 0$. If we now restrict our variations to those satisfying $\delta f/ f = O(N)$ and $\delta K_{rr}/ K_{rr} = O(N)$, the boundary term in (\ref{a11}) vanishes, as required. A useful check of eqn.\ (\ref{a10}) can be obtained by specializing to variations $\delta H$ that are themselves surface deformations. Let $L[{\hat\xi}] = H[{\hat\xi}]+J[{\hat\xi}]$ be the full generator of surface deformations.
Then the deformation of $L[{\hat\xi}]$ should itself be generated by $L[{\hat\xi}]$: that is, it should be given by the Poisson bracket \cite{Brown} \begin{equation} \delta_{{\hat\xi}_2}L[{\hat\xi}_1] = \left\{ L[{\hat\xi}_2], L[{\hat\xi}_1] \right\} = L[\{ {\hat\xi}_1, {\hat\xi}_2 \}_{\hbox{\scriptsize SD}}] + K[{\hat\xi}_1,{\hat\xi}_2] \label{a13} \end{equation} where $K[{\hat\xi}_1,{\hat\xi}_2]$ is a possible central term in the algebra. Here $\{ {\hat\xi}_1, {\hat\xi}_2 \}_{\hbox{\scriptsize SD}}$ is the Lie bracket for the algebra of surface deformations, given by \cite{Brown2} \begin{eqnarray} \{ {\hat\xi}_1, {\hat\xi}_2 \}_{\hbox{\scriptsize SD}}^t &=& {\hat\xi}_1^a\partial_a{\hat\xi}_2^t - {\hat\xi}_2^a\partial_a{\hat\xi}_1^t \nonumber\\ \{ {\hat\xi}_1, {\hat\xi}_2 \}_{\hbox{\scriptsize SD}}^a &=& {\hat\xi}_1^b\partial_b{\hat\xi}_2^a - {\hat\xi}_2^b\partial_b{\hat\xi}_1^a + g^{ab}\left( {\hat\xi}_1^t\partial_b{\hat\xi}_2^t - {\hat\xi}_2^t\partial_b{\hat\xi}_1^t \right) . \label{a13a} \end{eqnarray} The equality (\ref{a13}) will be used in the next section to compute the central charge. For now, let us note that when evaluated at a stationary black hole metric in standard coordinates, for which $K_{rr} = 0 = K_{\alpha\beta}$, the boundary term in (\ref{a11}) becomes \begin{equation} -{1\over8\pi G} \int_{r=r_+}\!\! d^{n-2}x\, \sqrt{\sigma}\left\{ {1\over f^2}\left[ \partial_r(f{\hat\xi}_2^r)\partial_r{\hat\xi}_1^t - \partial_r(f{\hat\xi}_1^r)\partial_r{\hat\xi}_2^t \right] + {1\over f}\partial_r\left[ {\hat\xi}_1^r\partial_r{\hat\xi}_2^t - \delta_{{\hat\xi}_2}{\hat\xi}_1^t \right] \right\} . \label{a14} \end{equation} If we assume, as suggested by eqn.\ (\ref{a13a}), that $\delta_{{\hat\xi}_2}{\hat\xi}_1^t = {\hat\xi}_2^a\partial_a{\hat\xi}_1^t$, then this expression is antisymmetric in ${\hat\xi}_1$ and ${\hat\xi}_2$, as required by eqn.\ (\ref{a13}). 
Our boundary terms are thus consistent with the interpretation of $L[{\hat\xi}]$ as the generator of surface deformations in the presence of a horizon. \section{The Virasoro Algebra} In the preceding section, we considered general variations of a general black-hole-like metric. Let us now specialize to the case of an axially symmetric black hole, with an adapted angular coordinate $\phi$ such that $\partial_\phi g_{\mu\nu} = 0$. For simplicity, I will assume that only the component $N^\phi$ of the shift vector is nonzero; the higher-dimensional generalization to more than one rotational Killing vector is straightforward. We now focus our attention on a particular subalgebra of the surface deformation algebra with the following three properties: \begin{enumerate} \item The surface deformations are restricted to the $r$--$t$ plane. This specialization is inspired by the path integral approach to black hole thermodynamics, in which it is clear that the $r$--$t$ plane has the central role in determining the entropy \cite{Banados}. \item The diffeomorphism parameter $\xi^t = {\hat\xi}^t/N$ ``lives on the horizon,'' in the sense that near $r=r_+$ it depends on $t$ and $r$ only in the combination $t-r_*$, where $fdr = Ndr_*$ in the time-slicing such that $N_r=0$. For the Kerr black hole, $t-r_*$ is essentially the standard Eddington-Finkelstein retarded time, up to corrections of order $r-r_+$. \item The lapse function $N^2$ is fixed at the horizon. The horizon is physically defined by the condition $N^2=0$, while our boundary term (\ref{a10}) is written at $r=r_+$; this condition ensures that the boundary remains at the horizon. \end{enumerate} Condition $1$ and eqn.\ (\ref{a6}) imply that the diffeomorphism parameter $\xi^\phi$ has the form \begin{equation} \xi^\phi = -N^\phi\xi^t = -{N^\phi\over N}{\hat\xi}^t . 
\label{b1} \end{equation} Condition $2$ requires that \begin{equation} \partial_r\xi^t = -{f\over N}\partial_t\xi^t \label{b2} \end{equation} at $r=r_+$, allowing us to write radial derivatives at the horizon in terms of time derivatives. To impose condition $3$, we can examine diffeomorphisms of $g^{tt} = -1/N^2$. With initial coordinates chosen so that $N_r=0$, we find that \begin{equation} \delta g^{tt} = 0 = {2\over N^2}\left( \partial_t - N^\phi\partial_\phi\right)\xi^t + {h\over N^4}\xi^r . \label{b23} \end{equation} This structure suggests that we split our diffeomorphisms into left-moving modes $\xi^t$, for which $\partial_t\xi^t = \Omega\partial_\phi\xi^t$, and right-moving modes $\tilde\xi^t$, for which $\partial_t{\tilde\xi}^t = -\Omega\partial_\phi{\tilde\xi}^t$, where $\Omega = -N^\phi(r_+)$ is the angular velocity of the horizon. Then \begin{equation} \xi^r = -{4 N^2\over h}\partial_t\xi^t , \qquad {\tilde\xi}^r = 0. \label{b4} \end{equation} Note that $\xi^{r_*} = (f/N)\xi^r$ is, like $\xi^t$, a function of retarded time $t-r_*$ at the horizon. We can use these results to write the left-moving modes at the horizon as \begin{equation} \xi^t_n = {T\over4\pi}\exp\left\{ {2\pi i n\over T} \left( t - r_* + \Omega^{-1}\phi \right) \right\} , \label{b5} \end{equation} where $T$ is an arbitrary period. (A possible choice is $T=\beta$, which matches the periodicity of the Euclidean black hole, but as we shall see, $T$ drops out of the final expression for the entropy.) The normalization (\ref{b5}) has been fixed by the requirement that the surface deformation algebra (\ref{a13a}) reproduce the $\hbox{Diff}\,S^1$ algebra \begin{equation} \{ {\hat\xi}_m, {\hat\xi}_n \}_{\hbox{\scriptsize SD}}^t = i(n-m){\hat\xi}_{m+n}^t . 
\label{b6} \end{equation} Substituting the modes (\ref{b5}) into the boundary term (\ref{a14}), we obtain \begin{equation} \delta_{{\hat\xi}_m} L[{\hat\xi}_n] = \hbox{\em bulk terms} + {A\over8\pi G}{\beta\over T}in^3\delta_{m+n} , \label{b8} \end{equation} where $A$ is the area of the boundary at $r=r_+$. We can now use a trick of Brown and Henneaux to evaluate the central term $K[{\hat\xi}_m, {\hat\xi}_n]$. When evaluated on shell, the Hamiltonian and momentum constraints vanish, so $H[{\hat\xi}]=0$. Equation (\ref{a13}) thus reduces to a collection of boundary terms, \begin{equation} {A\over8\pi G}{\beta\over T}in^3\delta_{m+n} = J[\{ {\hat\xi}_m, {\hat\xi}_n \}_{\hbox{\scriptsize SD}}] + K[{\hat\xi}_m,{\hat\xi}_n] = i(n-m)J[{\hat\xi}_{m+n}] + K[{\hat\xi}_m,{\hat\xi}_n] . \label{b9} \end{equation} {}From eqn.\ (\ref{a10}), it is easily checked that \begin{equation} J[{\hat\xi}_p] = {A\over16\pi G}{T\over\beta}\delta_{p0} \label{b10} \end{equation} on shell. Hence \begin{equation} K[{\hat\xi}_m,{\hat\xi}_n] = {A\over8\pi G}{\beta\over T}in(n^2-{T^2\over\beta^2})\delta_{m+n} , \label{b11} \end{equation} the correct form\footnote{The $n$ dependence in (\ref{b11}) can be made the usual $n(n^2-1)$ by shifting $L_0$ by a constant. This alters the eigenvalue (\ref{b10}), but also changes the ``effective central charge'' so that the entropy (\ref{c2}) is unaffected.} for the central term of a Virasoro algebra with central charge \begin{equation} c = {3A\over2\pi G}{\beta\over T} . \label{b12} \end{equation} \section{Counting States} The models we are investigating are not two-dimensional. Nevertheless, the results above imply that the quantum states that characterize a black hole horizon must transform under a Virasoro algebra with central charge (\ref{b12}). This is sufficient to permit the use of powerful methods from conformal field theory to count states. 
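As a sanity check (not in the original), the algebra (\ref{b9})--(\ref{b12}) can be verified numerically, together with the entropy count that follows. In the sketch below the overall factor of $i$ is stripped from the central terms, and the numerical values of $A$, $G$, $\beta$, $T$ are arbitrary:

```python
import math

def central_term_from_b9(n, A, G, beta, T):
    # K[xi_n, xi_{-n}] extracted from eqn (b9): the boundary value
    # (A beta / 8 pi G T) n^3 minus the surface-deformation piece 2n * J_0.
    J0 = A * T / (16 * math.pi * G * beta)      # J from eqn (b10), on shell
    return A * beta / (8 * math.pi * G * T) * n ** 3 - 2 * n * J0

def central_term_b11(n, A, G, beta, T):
    # Virasoro form of eqn (b11), again with the overall i stripped.
    return A * beta / (8 * math.pi * G * T) * n * (n ** 2 - (T / beta) ** 2)

A, G, beta, T = 7.3, 1.9, 2.4, 0.8              # arbitrary positive test values
for n in range(-4, 5):
    assert abs(central_term_from_b9(n, A, G, beta, T)
               - central_term_b11(n, A, G, beta, T)) < 1e-12

c = 3 * A * beta / (2 * math.pi * G * T)        # central charge, eqn (b12)
L0 = A * T / (16 * math.pi * G * beta)          # L_0 eigenvalue, eqn (b10)
entropy = 2 * math.pi * math.sqrt(c * L0 / 6)   # Cardy growth of states
assert abs(entropy - A / (4 * G)) < 1e-12       # Bekenstein-Hawking value
```

Note that the arbitrary period $T$ and the inverse temperature $\beta$ cancel between $c$ and $L_0$, leaving $A/4G$.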
In particular, a conformal field theory with a central charge $c$ has a density of states $\rho(L_0)$ that grows asymptotically as \begin{equation} \log\rho(L_0) \sim 2\pi\sqrt{ {c_{\hbox{\scriptsize eff}}L_0\over6} } , \label{c1} \end{equation} where $c_{\hbox{\scriptsize eff}}$ is an ``effective central charge'' \cite{Cardy,Kutasov}. If the spectrum satisfies reasonable, although not universal, conditions \cite{Carlip1}---notably that the ground state is an eigenstate of $L_0$ with eigenvalue zero---then $c_{\hbox{\scriptsize eff}} = c$. Following Strominger \cite{Strominger2}, let us assume these conditions are satisfied in quantum gravity. Then from eqns.\ (\ref{b10}) and (\ref{b12}), \begin{equation} \log\rho(L_0) \sim {A\over4G} , \label{c2} \end{equation} recovering the standard Bekenstein-Hawking entropy. In general, right-moving modes may make an additional contribution to the density of states, but it is clear from eqn.\ (\ref{b4}) that the central charge for those modes vanishes, so eqn.\ (\ref{c2}) gives the full entropy. \section{Four Questions and Two Answers} The analysis above strongly suggests that any quantum description of black hole horizon states must yield the standard Bekenstein-Hawking entropy. Here, I will briefly address some details of this analysis and raise several remaining questions.\\[-.8ex] \noindent {\bf 1.}\ What is the significance of the boundary condition $N=0$?\\[-.8ex] For a stationary black hole, in the coordinates $N^r=0$, $f\sim 1/N$ that we used to evaluate the central charge, this is the condition for an apparent horizon. In other coordinates, however, the apparent horizon condition is considerably more complicated. An investigation of possible alternative boundary conditions might help answer the question of what kind of ``horizon'' is needed for black hole entropy. 
It may also be possible to extend this analysis to more general gravitational actions along the lines of Ref.\ \cite{Brown3}.\\ \noindent {\bf 2.}\ The Cardy formula (\ref{c1}) comes from two-dimensional conformal field theory. What are the two relevant dimensions here?\\[-.8ex] The Cardy formula requires a modular invariant partition function of the form \begin{equation} Z = \hbox{Tr}\,\exp\{ i (J\phi + Et)\} , \label{d1} \end{equation} where $J$ and $E$ are conserved charges associated with translations in $\phi$ and $t$. For an axially symmetric black hole, $\phi$ and $t$ are determined by the two Killing vectors, and modular invariance is a consequence of diffeomorphism invariance. The two ``preferred'' directions are thus picked out by the symmetries. (For a black hole in more than four dimensions with more than one axial Killing vector, the left-moving modes are determined by the condition $\partial_t\xi^t = -N^\alpha\partial_\alpha\xi^t$, so the shift vector $N^\alpha$ picks out an angular direction.)\\ \noindent {\bf 3.} What specific degrees of freedom account for the entropy (\ref{c2})?\\[-.8ex] Like Strominger's derivation \cite{Strominger2}, this computation does not address this question, but rather uses symmetry arguments to derive the behavior of any microscopic theory of black hole horizon states. This is both good and bad: good because it provides a universal explanation of black hole statistical mechanics, bad because it offers little further insight into quantum gravity. One possible picture of the microscopic degrees of freedom comes from considering the dimensional reduction of Einstein gravity to the $r$--$t$ plane near a horizon. The resulting action contains a scalar field, essentially $\sqrt{\sigma}$, that couples to the two-dimensional scalar curvature ${}^{(2)}R$. 
The action is not conformally invariant, but we know from the $c$ theorem \cite{Zam} that it must flow to a conformal field theory---presumably a Liouville theory---under the renormalization group. Since the dimensionally reduced action has a prefactor of $A/16\pi G$, the central charge of such a Liouville theory is likely to be proportional to $A/16\pi G$, and might reproduce the central charge (\ref{b12}). De Alwis has considered a similar renormalization group flow in a somewhat different context \cite{DeAlwis}, and Solodukhin has recently proposed a related analysis of dimensionally reduced gravity \cite{Solo}. It also seems plausible that the description of black hole entropy here is related to the picture of microscopic states as ``would-be pure gauge'' degrees of freedom that become dynamical at a boundary \cite{Carlip2,Bal}. The existence of a central charge is an indication that the algebra of surface deformations has become anomalous, and that invariance cannot be consistently imposed on all states. This connection has not been developed, however.\\ \noindent {\bf 4.}\ Does the relevant conformal field theory satisfy the technical conditions required for the Cardy formula? In particular, does the ground state have $L_0=0$?\\[-.8ex] Without a much more detailed description of the conformal field theory, this question cannot be answered. My approach differs from Strominger's in an important respect. Strominger's boundary conditions were those of anti-de Sitter space, offering the possibility that anti-de Sitter space is the ground state. The boundary conditions of this paper are those of a specific black hole, and depend on the horizon metric. This difference is hard to avoid, since without the extra length scale provided by a cosmological constant it is difficult to write down a dimensionless central charge independent of the boundary values of the metric. 
(One could, of course, choose $T$ to be proportional to $A/\beta$ in eqn.\ (\ref{b12}), but there seems to be little physical justification for such a choice.) There is, however, a plausible candidate for the ground state in the model developed here: the extremal black hole, which is typically characterized by a lapse function behaving as $N^2\sim(r-r_+)^2$ near the horizon. Such a configuration satisfies the boundary conditions assumed in this paper, but in contrast to the nonextremal result (\ref{b10}), eqn.\ (\ref{a10}) now gives $J[{\hat\xi}_0]=0$, implying that at least the classical contribution to $L_0$ vanishes. \vspace{1.5ex} \begin{flushleft} \large\bf Acknowledgements \end{flushleft} I would like to thank Rob Meyers for pointing out some errors in the first version of this paper. This work was supported in part by National Science Foundation grant PHY-93-57203 and Department of Energy grant DE-FG03-91ER40674.
\section{Parametric Noise Injection (PNI)} \label{sec:PNI} In this section we give an overview of the PNI~\cite{Rakin2019ParametricNI} adversarial defense technique. This method injects noise into different components or locations within the DNN in the following way: \begin{equation} \label{eq:PNI} \begin{aligned} &\Tilde{v}_i = f_{\text{PNI}}(v_i) = v_i + \alpha_i \cdot \eta, \\ &\eta \sim \mathcal{N}(0,\sigma^2), \\ &\sigma = \sqrt{\frac{1}{N} \sum_i (v_i - \mu)^2}, \end{aligned} \end{equation} where $v_i$ is an element of a noise-free tensor $v$, and $v$ represents the input/weight/inter-layer tensor. Next, $\mu$ is the estimated mean of $v$, and $\eta$ is the additive noise term, which is a random variable following the Gaussian distribution. Finally, $\alpha_i$ is the coefficient which scales the magnitude of the injected noise, and it is a learnable parameter which is optimized for the network's robustness. The default setting in~\cite{Rakin2019ParametricNI} is to apply PNI to weight tensors of convolutional and fully-connected layers (denoted as PNI-W), and to share the element-wise noise coefficient $\alpha_i$ for all elements of a specific weight tensor (denoted as layerwise). We also evaluate the setting where the PNI is applied to tensors which are outputs of the convolutional and fully-connected layers (denoted as PNI-A-a), because it has also shown good results~\cite{Rakin2019ParametricNI}. Furthermore, we also explore sharing $\alpha_i$ just for different channels inside the tensor (denoted as channelwise), or having different $\alpha_i$ for different elements (denoted as elementwise). \section{Experiments} \label{sec:experiments} \subsection{Experimental setup} \noindent\textbf{Adversarial attack strategies.} In general, adversarial attacks exploit the differentiability of the network $f(x)$ and its prediction loss $\mathcal{L}(f(x),l)$, with respect to the input image $x$, where $l$ is the label.
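Before turning to the attacks, the injection \eqref{eq:PNI} itself can be sketched in a few lines of NumPy (the helper name, shapes, and $\alpha$ values are our own illustration, not from~\cite{Rakin2019ParametricNI}):

```python
import numpy as np

def pni(v, alpha, rng):
    # Eq. (1): v_i -> v_i + alpha_i * eta, with eta ~ N(0, sigma^2)
    # and sigma the standard deviation of the noise-free tensor v.
    sigma = np.sqrt(np.mean((v - v.mean()) ** 2))
    eta = rng.normal(0.0, sigma, size=v.shape)
    return v + alpha * eta

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))                # a conv weight tensor
w_layer = pni(w, alpha=0.25, rng=rng)             # layerwise: one shared alpha
w_chan = pni(w, alpha=rng.uniform(0, 0.5, size=(64, 1, 1, 1)), rng=rng)  # channelwise
w_elem = pni(w, alpha=rng.uniform(0, 0.5, size=w.shape), rng=rng)        # elementwise
```

In~\cite{Rakin2019ParametricNI}, $\alpha_i$ is trained jointly with the network weights; here it is held fixed only for illustration.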
The attack aims to slightly modify the input $x$ to maximize the prediction loss for the correct label $l$. To craft stronger adversarial samples, the fast gradient sign method (FGSM)~\cite{Goodfellow2015ExplainingAH} is repeated $K$ times with a step size of $\alpha$, followed by a projection to an $\epsilon$-hypercube around the initial sample $x$, $\hat{x}^{k} = \Pi_{x, \epsilon} \left[\hat{x}^{k-1} + \alpha \, \text{sgn}(\nabla_x \mathcal{L}(f(\hat{x}^{k-1}), l))\right]$. This is known as the projected gradient descent (PGD-$K$) attack~\cite{Madry2018TowardsDL}. Furthermore, the expectation-over-transformation (EoT) gradient estimation is usually more effective when dealing with noise inside the network, because of common gradient obfuscations~\cite{Athalye2018ObfuscatedGG}. This can be viewed as using PGD~\cite{Madry2018TowardsDL} with the proxy gradient, $\mathbb{E}_{q(z)} \left[\nabla_x \mathcal{L}(f(\hat{x}^{k-1}, z), l)\right] \approx \frac{1}{T} \sum_{t=1}^{T} \nabla_x \mathcal{L}(f(\hat{x}^{k-1}, z_t), l)$, where $q(z)$ represents the distribution of the noise ${z\sim q(z)}$ injected into the randomized classifier $f(x,z)$. \noindent\textbf{Adversarial vulnerability metrics.} In order to evaluate adversarial robustness, we craft adversarial examples for each image in the validation set with the aforementioned attacks, and test the model's accuracy on those adversarial examples. When crafting each adversarial example, we initialize the attack's starting point randomly inside the $\epsilon$-hypercube centered on $x$. We restart this procedure $R$ times to find the strongest attack and always set the step size $\alpha=1$. \noindent\textbf{Dataset.} For conducting experiments, we use the ILSVRC-2012 ImageNet dataset, containing 1.2M training and 50,000 validation images grouped into 1000 classes~\cite{Russakovsky2015ImageNet}. \noindent\textbf{Adversarial training.} PNI is trained with the help of adversarial training~\cite{Rakin2019ParametricNI}.
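The PGD-$K$ update with the EoT proxy gradient can be sketched as follows (a self-contained NumPy illustration; the toy noisy-gradient oracle stands in for $\nabla_x \mathcal{L}(f(x,z),l)$ and is our own construction, not a real classifier):

```python
import numpy as np

def pgd_eot(x0, grad_fn, eps, alpha, K, T, rng):
    """PGD-K with EoT: average T noisy gradient samples per step, take a
    signed ascent step, then project back onto the eps-cube around x0."""
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)  # random start in the cube
    for _ in range(K):
        g = np.mean([grad_fn(x, rng) for _ in range(T)], axis=0)  # EoT estimate
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)   # step + projection
    return x

# toy randomized loss L(x, z) = ||x + z||^2 / 2, so grad_x L = x + z, z ~ N(0, 0.1^2)
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + rng.normal(0.0, 0.1, size=x.shape)
x_adv = pgd_eot(np.zeros(4), noisy_grad, eps=8 / 255, alpha=1 / 255, K=10, T=25, rng=rng)
```

With $T=1$ this reduces to ordinary PGD; increasing $T$ averages out the injected noise in the gradient estimate.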
Since we use the more computationally demanding ImageNet dataset, we employ the recent efficient adversarial training procedure described in~\cite{Wong2020FastIB}. Following~\cite{Wong2020FastIB}, the step sizes during adversarial training are $\alpha= \{\frac{2.5}{255}, \frac{5}{255} \}$ for $\epsilon= \{ \frac{2}{255}, \frac{4}{255}\}$, respectively. The models are evaluated for the same $\epsilon$ used during the training. \noindent\textbf{Baselines.} For all experiments, the baseline is the original network, without the PNI stochasticity. In a way, this baseline serves as an ablated model. A significant improvement over this baseline is necessary to claim any improvement in robustness. More importantly, the same must hold even with EoT gradient estimation so as to ensure that the robustness is not mainly due to gradient obfuscation. \noindent\textbf{Implementation details.} For the main experiments, we use the ResNet architecture, where every convolutional layer has been extended with the PNI, as described in~\eqref{eq:PNI}. More specifically, we use the ResNet-50, as it provides a good trade-off between performance and computational complexity, and we train it from scratch. We use 100 randomly selected classes, because of the high computational demand of the ImageNet dataset, adversarial training, and adversarial evaluation altogether. During training, we use hyperparameters recently proposed in~\cite{Touvron21aDeIT}, which have been shown to work well for ResNets~\cite{Wightman2021ResnetStrikes}. We train for 150 epochs, because it turns out to be sufficient in this setting with 100 classes. Furthermore, we also perform experiments on the DeiT-S transformer~\cite{Touvron21aDeIT}, since transformer architectures are becoming very popular and relevant in computer vision. The DeiT-S has a parameter count and computational complexity similar to those of a ResNet-50.
We extend the fully-connected layer, just after the activation (in the MLP block), with the PNI on its weights (PNI-W-fc2). This experiment also analyzes the case of less aggressive noise, since PNI is not used in all parametrized blocks of the transformer. The initial experiments, conducted as described for the ResNet, did not perform as well, probably because of the data-hungry nature of transformers. Therefore, we use the whole ImageNet dataset for this experiment. However, because of high computational demand, we start from pre-trained models on ImageNet, such as the ones described in~\cite{Touvron21aDeIT}. During the fine-tuning, which lasts for $20$ epochs, we use the AdamW optimizer and a cosine scheduler with a learning rate of $10^{-5}$, which gradually decays to $10^{-6}$. During every evaluation, we restart the attack 5 times to construct a stronger attack and we use 10 steps for the PGD attack (PGD-10). The number of samples for the EoT estimation is 25 in the case of ResNet50 on 100 classes (EoT-25), and 5 in the case of DeiT-S on all 1000 classes (EoT-5). \subsection{Results} \label{ssec:results} \input{tables/paper/PNI_merged} In Table~\ref{tab:PNI_merged} we see that inserting various forms of PNI improves the adversarial robustness of both the ResNet50 and DeiT-S over the respective baselines, for both FGSM and PGD-10 attacks. In contrast, when EoT is used to estimate the gradients during the attacks, the effect of PNI is even detrimental, weakening the desired robustness. Note that being effective against regular PGD, but ineffective against PGD with EoT, is clear evidence for gradient obfuscation being the main source of robustness. Rather than strengthening the robustness of the visual features, such defenses merely make it harder to find the adversarial example with gradient-based attacks. EoT, however, uncovers the adversarial direction by averaging multiple noisy gradient samples, and exposes the original vulnerability of the network.
Note that the results of Table~\ref{tab:PNI_merged} (a) and (b) cannot be directly compared, due to differences in the following aspects: number of classes, number of EoT samples, network backbones, and the training protocols. Nevertheless, both (a) and (b) support our conclusions. \section{Introduction} \label{sec:introduction} Deep Neural Networks (DNNs) have achieved astonishing results in the last decade, resulting in breakthroughs in processing images, videos, speech, audio, and natural language~\cite{lecun2015deep}. These networks have the potential to serve as the core of solutions to many real-world problems. However, it was discovered that a small adversarial perturbation in pixel intensities can cause a severe drop in the performance of DNNs, or worse, make them give a specific false prediction desired by the adversary~\cite{Szegedy2014IntriguingPO,Goodfellow2015ExplainingAH}. This can have severe consequences in applications like autonomous vehicles, healthcare, etc. Therefore, it is very important to design defense mechanisms to make DNNs more robust to adversarial attacks, as well as to thoroughly understand the vulnerabilities of a certain model to these attacks. Numerous approaches have been proposed to defend against adversarial attacks. Some of the main defense categories are adversarial training~\cite{Kurakin2017AdversarialML,Wong2020FastIB,Madry2018TowardsDL,Zhang2019TheoreticallyPT}, certified robustness~\cite{Cohen2019CertifiedAR,Croce2019ProvableRO,Salman2019ProvablyRD} and gradient regularization~\cite{Ciss2017ParsevalNI,Ross2018ImprovingTA,Jakubovitz2018ImprovingDR,Gu2015TowardsDN,Finlay2019ScaleableIG}. Another popular category of adversarial defenses is noise injection~\cite{Rakin2019ParametricNI,Kundu2021HIRE_SNN,Eustratiadis2021WeightCovAlign,Lecuyer2019CertifiedDifferential}, where some form of stochastic noise is introduced, in an attempt to increase robustness.
Most of the popular adversarial attack methods exploit the network's differentiability to craft the sought adversarial examples~\cite{Szegedy2014IntriguingPO,Goodfellow2015ExplainingAH,Madry2018TowardsDL,Athalye2018SynthesizingRA,mihajlovic2018CommonAdv}. Introducing stochastic noise usually weakens these attacks by obfuscating the gradients, creating only apparent robustness. \emph{Athalye et al.}~\cite{Athalye2018ObfuscatedGG} showed that Expectation over Transformation (EoT), a simple method for gradient estimation, suffices to unveil the obfuscated gradients in those scenarios. In other words, after using the EoT gradient estimation, many defenses become ineffective. Furthermore, \emph{Athalye et al.} identified five characteristics that commonly occur when the improvement in robustness is mainly caused by gradient obfuscation. It has since become a trend to use these five characteristics as a sufficient test to determine whether or not gradient obfuscation is the main source of robustness. We empirically show by a counterexample that these characteristics do not characterize all existing cases of gradient obfuscation. Therefore, we argue that the gradient obfuscation checklist test gives a false sense of security. \section{Detecting Gradient Obfuscation} \noindent We list the five common characteristics, as observed by \mbox{\emph{Athalye et al.}~\cite{Athalye2018ObfuscatedGG}}, in the following. \begin{enumerate}[font=\itshape] \Myitem One-step attacks perform better than iterative attacks. \Myitem Black-box attacks are better than white-box attacks.
\Myitem Unbounded attacks do not reach $100\%$ success. \Myitem Random sampling finds adversarial examples. \Myitem Increasing distortion bound does not increase success. \end{enumerate} In fact, \emph{Athalye et al.} also mentioned that the above list may not perfectly characterize all the cases of gradient obfuscation. Despite that, it has recently become a trend to use these five characteristics as the criteria of a \mbox{``checklist''} to determine whether or not the success of a stochastic defense is mainly caused by obfuscating the gradients~\cite{Rakin2019ParametricNI,Kundu2021HIRE_SNN,Eustratiadis2021WeightCovAlign,Lecuyer2019CertifiedDifferential,Yang2022FeatureUncrt,Jeddi_2020_CVPR,Addepalli_2021_CVPR,Lee2021GradDivAR}. As a result, any given defense is claimed to provide robustness beyond gradient obfuscation, if none of these five characteristics is observed. In this work, we empirically show that such a claim cannot be made. Our empirical study unveils a counterexample to the claim. In particular, we show that the Parametric Noise Injection (PNI) defense~\cite{Rakin2019ParametricNI}, which does not exhibit any of the five characteristics, is still vulnerable to attacks with the EoT gradient estimation. Therefore, its improvement in robustness is mostly based on the obfuscation of gradients. This indicates that the five characteristics are insufficient, in general, for determining the contribution of gradient obfuscation to robustness. \section{Discussion} Gradient obfuscation is a problem when it comes to determining the robustness of a model. However, when it comes to just defending against adversarial attackers, there are scenarios where gradient obfuscation can be viewed as more of an advantage than a weakness. Firstly, performing the EoT gradient estimation inside an attack is usually significantly more computationally demanding than the regular attack, making the attack itself more difficult to perform.
Secondly, the attacker might be oblivious to the presence of the noise in the model, and therefore they might not use the EoT estimation. In the case of PNI, the noise scaling coefficient $\alpha_i$ can be tuned by the user instead, so the architecture would remain the same, which would make the presence of noise in the model more difficult to detect. Furthermore, there are scenarios where the attackers deploy universal adversarial examples, designed to fool various models at once, instead of tailoring the attacks to the specific model/defense. Finally, the defense designer can attempt to design noise distributions and methods that make the gradient estimation more difficult. \section{Conclusion} \label{sec:conclusion} In this paper, we reflect on the problem of gradient obfuscation in the case of stochastic defense techniques against adversarial attacks. Athalye et al.~\cite{Athalye2018ObfuscatedGG} observed five characteristics that commonly occur when the improvement in robustness is mainly caused by gradient obfuscation. They also stated that ``these behaviors may not perfectly characterize all cases of masked gradients''. Despite this, it has become a trend to claim that obfuscated gradients are not the main source of improvements in robustness, if none of these five characteristics hold true~\cite{Rakin2019ParametricNI,Kundu2021HIRE_SNN,Eustratiadis2021WeightCovAlign}. We refute such claims on a large-scale dataset by providing a counterexample. In particular, we have shown that the popular Parametric Noise Injection (PNI) exploits gradient obfuscation to improve robustness, despite passing the five characteristics checklist test. The exploitation of gradient obfuscation is unveiled based on the following observations: \begin{itemize}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item PNI passes the five characteristics checklist test~\cite{Rakin2019ParametricNI}. \item Adding PNI improves the adversarial robustness towards FGSM and PGD attacks.
\item Adding PNI is detrimental for robustness towards attacks using gradients estimated with EoT. \end{itemize} \noindent This counterexample allows us to conclude that the gradient obfuscation checklist test is not sufficient to determine whether or not gradient obfuscation is the main source of robustness improvements. Therefore, only using the gradient obfuscation checklist test gives us a false sense of security. A single counterexample suffices to establish this conclusion. Hence, we recommend including EoT-based attacks in the gradient obfuscation test. Please note that even with the EoT criterion, not all cases of obfuscated gradients may be perfectly covered.
\section{Introduction} FU Orionis systems are a class of exceptionally luminous young stellar objects found in star-forming regions (Hartmann \& Kenyon 1996). Originally identified from their very large increases in optical brightness over timescales of years or less (Herbig 1977), a larger group of heavily extincted probable members of the class have been identified by their characteristic infrared spectra, which strongly differ from those of typical T Tauri stars (Reipurth \& Aspin 1997; Aspin \& Reipurth 2003; Reipurth \etal 2007). Additional support for the identification of these heavily embedded objects comes from recent high-resolution infrared spectroscopy, which indicates that many of these sources are rapidly rotating and exhibit double-peaked absorption line profiles as observed in FU Ori (Greene, Aspin, \& Reipurth 2008). The accretion disk model for FU Ori objects (Hartmann \& Kenyon 1996, and references therein) rests fundamentally on the need to explain the peculiar spectral energy distributions (SEDs) of these objects, which are much broader than that of a single temperature blackbody or star, and which exhibit a continuously varying spectral type as a function of wavelength. The disk model naturally accounts for these properties, as observations at longer wavelengths probe increasingly cooler disk regions with later spectral types; our detailed model for FU Ori matches the SED from optical wavelengths to the mid-infrared region (Zhu \etal 2007, 2008). In addition, the disk model predicts that differential rotation should be observed, with slower rotation seen at longer wavelengths, which arise from outer disk radii, and this has been confirmed by comparing optical ($\sim 0.6 \mu$m) and near-infrared ($\sim 2.2 \mu$m) spectral line profiles (Hartmann \& Kenyon 1987a,b; Kenyon, Hartmann, \& Hewett 1988). A desirable feature in a model or theory is an ability to make predictions that can be tested observationally. 
The disk model predicts that the observed rotational spectral line broadening should be even smaller at $\lambda \sim 5 \mu$m (using the fundamental CO vibrational transitions) than at $2.2 \mu$m (using the first overtone CO vibrational lines). In this paper we present a high-resolution spectrum of FU Ori in the $5 \mu$m region which matches the predictions of the disk model. We also show that some discrepancies seen in optical spectra of FU Ori in comparison with simple disk models are alleviated by a decrease in the maximum disk temperature which we proposed in Zhu \etal (2007). \section{Observations} A high-resolution spectrum of FU Ori at $4.9 \mu$m was obtained at UT 01:00:29 on 2007 February 4 using the Phoenix spectrometer (Hinkle \etal 1998, 2000, 2003) on the 8-m Gemini South telescope. Observations were taken with a two-pixel slit (0.17\arcsec) for a resolution of $\lambda/\delta\lambda=75,000$ over the wavelength range $4.808-5.050 \mu$m. We observed FU Ori at two positions along the slit for eight 2-minute exposures. We also observed the B2 III star HR1790 for telluric line correction and took 10 flat-field and dark images. We reduced the data using IRAF.\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} We averaged the flat-field and dark images and subtracted the average dark image from the average flat-field image. This averaged, dark-subtracted flat-field image was then divided into the target spectra. Images at different positions of the slit were differenced to remove the sky and dark backgrounds. We then extracted the spectra using the IRAF $apall$ routine and later combined and flattened the spectra using $splot$ in IRAF. The spectrum of the hot star HR1790 was used to divide out the telluric lines from the FU Ori spectrum.
Wavelength calibration was computed using telluric lines from the Arcturus atlas of Hinkle \etal (1995). We also obtained a high-resolution spectrum of FU Ori at UT 17:59:15 on 2008 November 14 using the Magellan Inamori Kyocera Echelle (MIKE) spectrograph on the 6.5 m Magellan Clay telescope at Las Campanas Observatory (Bernstein \etal 2003). MIKE is a double echelle spectrograph which delivers full wavelength coverage from about $3350-5000$~\AA\ (blue side) and $4900-9500$~\AA\ (red side). The data were obtained in subarcsecond seeing with a 0.7\arcsec slit, binning $2\times2$, and an exposure time of 120 s. The resolutions were $\sim$40,000 and $\sim$30,000 for the blue and red sides, respectively. The MIKE data were reduced using the MIKE Redux IDL pipeline.\footnote{http://web.mit.edu/$\sim$burles/www/MIKE/} \section{Results} Figure 1 shows the reduced Phoenix spectrum of FU Ori. Strong telluric features mean that regions around wavenumber 2009.3 ${\rm cm^{-1}}$ are not usable, and small residuals from the telluric correction can be seen at wavenumbers 2010.4, 2010.9, 2011.9, and 2013.5 ${\rm cm^{-1}}$. The spike around wavenumber 2012.5 ${\rm cm^{-1}}$ is due to a bad pixel. Outside of these regions, an absorption spectrum is clearly present, with relatively broad features and substantial blending. Most of the lines are due to the P-branch of the fundamental rotational-vibrational transitions of CO. To interpret the results, we calculated a synthetic disk spectrum using the methods described by Zhu \etal (2007). The model parameters were those used by Zhu \etal (2007) to fit the SED of FU Ori and to match the rotational broadening observed at optical and near-infrared wavelengths: central star mass $\sim$ 0.3 M$_{\odot}$, mass accretion rate of the inner high $\dot{M}$ disk $\sim$2.4$\times$10$^{-4}\msunyr$, disk inner radius $\sim$5 $R_{\odot}$, outer radius of the inner high $\dot{M}$ disk $\sim$ 1 AU, and the disk inclination angle $\sim$55$^\circ$.
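These parameters fix the Keplerian velocity field of the model; as a back-of-the-envelope illustration (our own sketch, with 0.5 AU chosen as a representative outer emitting radius, not a fitted quantity), the orbital speed $\sqrt{GM/R}$ around a 0.3 M$_\odot$ star can be computed directly:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
AU = 1.496e11       # astronomical unit, m

def v_kepler_kms(m_msun, r_au):
    """Circular Keplerian speed sqrt(G M / R), returned in km/s."""
    return math.sqrt(G * m_msun * M_SUN / (r_au * AU)) / 1e3

v_half_au = v_kepler_kms(0.3, 0.5)               # ~23 km/s at 0.5 AU
v_inner = v_kepler_kms(0.3, 5 * R_SUN / AU)      # ~107 km/s at the 5 R_sun inner radius
```

The order-of-magnitude drop in Keplerian speed between the inner radius and $\sim$0.5 AU is what underlies the wavelength-dependent line widths discussed below.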
(See Figure 8 of Zhu \etal (2007) for the fit to the $2.2 \mu$m CO lines). We emphasize that we have {\em not} changed or adjusted any parameters from the Zhu \etal (2007) FU Ori model; these are predicted spectra. The lower dotted curve in Figure 1 shows the synthetic disk spectrum observed pole-on, so that individual spectral lines can be seen without the blending that occurs due to the large rotational broadening. Comparison of the nonrotating spectrum with the observations shows that FU Ori has significantly larger line widths, consistent with rapid rotation, and unlike profiles of M giants and supergiants. The upper dotted curve shows the synthesized spectrum using the inclination and central mass used to obtain a match to the $2.2 \mu$m CO line widths. The agreement between synthetic and real spectra is quite good, except near $2010.7 {\rm cm^{-1}}$ where we are missing the CO 5-4 P7 line in the model. This could be due to adopting too small an oscillator strength.\footnote{This work employed the line lists from Kurucz CD ROM-15.} The half-width at half-depth (HWHD) of the lines in this spectral region (5 $\mu$m) is $\sim 22 \kms$, considerably smaller than the line widths measured at $2.2 \mu$m (HWHD $\sim$ 36 km s$^{-1}$) (Hartmann \& Kenyon 1987a; Hartmann, Hinkle, \& Calvet 2004; Zhu \etal 2007). This HWHD at 5 $\mu$m is close to the Keplerian velocity at 0.5 AU around a 0.3 M$_{\odot}$ central star. All the strong lines have been identified in the pole-on model spectrum. Though some water lines are present in this spectrum, they are washed out or blended in the broadened spectrum (upper dotted curve). Only the strong CO fundamental lines can be identified with certainty. It is also worth noting that some of the unblended line profiles exhibit evidence for double-peaked shapes predicted by simple disk models (e.g., Hartmann \& Kenyon 1987a,b). In addition, some of the blends show sharp features (e.g., the
2011.2 cm$^{-1}$ line) which are the result of overlapping double-peaked lines. These features arise naturally in a disk model but would not be seen in rotating star models (unless large polar spots are invoked; see below). In Figure 2 we display a segment of the MIKE spectrum of FU Ori in the wavelength range $7030 - 7100$~\AA\ for comparison with the synthetic Keplerian disk spectrum. We again find good agreement between model and observation, demonstrating that there has been no change in the estimated optical rotational velocity of the object between the observations used in Zhu \etal (2007) to set the disk parameters and those presented here. The HWHD of the optical lines in this wavelength range is $\sim$65$\pm$5 km s$^{-1}$, consistent with the HWHD $\sim$ 62$\pm$5 km s$^{-1}$ measured by Petrov \& Herbig (2008; PH08). Thus, compared with the HWHD of the 2 micron CO first-overtone lines, $\sim$ 36$\pm$3 km s$^{-1}$, and the HWHD of the 5 micron CO fundamental lines, $\sim$ 22$\pm$2 km s$^{-1}$, the differential rotation in FU Ori observed over nearly an order of magnitude in wavelength is consistent with Keplerian rotation. The slow rotation observed at $5 \mu$m implies spectral formation at radii out to $\sim 0.5$~AU, in agreement with our SED modeling; this supports our conclusion in Zhu \etal (2007, 2008) that the extent of the high-accretion rate disk is larger than can be explained by pure thermal instability models for outbursts \citep{1994ApJ...427..987B}. \section{Discussion} The consistency of the variation of rotational velocities observed over $\lambda \sim 0.7 - 5 \mu$m with Keplerian rotation seemingly provides strong evidence for the accretion disk interpretation of FU Ori. However, PH08 recently argued that while the infrared spectrum is that of an accretion disk, the optical spectrum is produced by a rapidly-rotating star with a dark polar spot.
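As a quick numerical check (our illustration, using only quantities quoted above), the measured HWHDs can be inverted into characteristic Keplerian radii, $R = GM/v^2$, for the adopted 0.3 M$_{\odot}$ central mass:

```python
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

M = 0.3 * M_SUN    # central mass from the Zhu et al. (2007) model

def keplerian_radius_au(v_kms):
    """Radius [AU] at which the circular Keplerian speed equals v_kms."""
    v = v_kms * 1e3
    return G * M / v**2 / AU

# HWHDs quoted in the text for the three wavelength regions
for band, v in [("0.7 micron", 65.0), ("2.2 micron", 36.0), ("5 micron", 22.0)]:
    print(f"{band}: {v:4.0f} km/s  ->  R ~ {keplerian_radius_au(v):.2f} AU")
```

The 5 $\mu$m lines then form out to roughly 0.5 AU, consistent with the radius quoted from the SED modeling.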
The PH08 argument against a pure disk model for a central star rests on three main points: there is no evidence for a variation of absorption line width as a function of excitation potential over the wavelength range $\lambda \sim 0.52 - 0.86 \mu$m; there is no evidence for a variation of rotational velocity with wavelength over that wavelength range; and the observed line profiles are more ``boxy'' or flat-bottomed than the double-peaked profiles of the disk model. It is important to recognize that the above-listed effects expected for a disk spectrum require not only differential rotation but a temperature gradient as well. A Keplerian disk exhibiting a constant effective temperature would not show any effect of rotational velocity with either excitation or wavelength; and the double-peaked behavior of line profiles only occurs because the outer, slowly rotating regions do not fill in the profile at line center, as these regions are too cool to emit significantly at the wavelength of observation. While the standard steady disk temperature distribution $T_{eff}^4 \propto [1 - (R_i/R)^{1/2}] R^{-3}$ is a power law at large radii, it is relatively flat at distances within about twice the inner radius $R_i$. Therefore, observations at long wavelengths which probe the outer disk where the temperature falls rapidly with radius will exhibit stronger rotational velocity variations and more double-peaked line profiles than observations at short wavelengths probing the inner, more nearly isothermal disk. In Zhu \etal (2007) we were forced to use a maximum disk temperature $T_{max} = 6420$~K to match the SED of FU Ori, which is lower than the 7200 K maximum temperature adopted in the model used by PH08 (which was based on the earlier model by Kenyon, Hartmann, \& Hewett 1988). Lowering the maximum temperature has the effect of making the flatter part of the accretion disk temperature distribution more dominant at optical wavelengths. 
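The flatness claimed above is easy to quantify. A minimal sketch (ours) evaluates $T_{eff}/T_{max}$ from the steady-disk law $T_{eff}^4 \propto [1 - (R_i/R)^{1/2}] R^{-3}$, whose maximum lies at $R = (49/36) R_i$:

```python
def t_over_tmax(x):
    """T_eff/T_max of the steady disk law at x = R/R_i (x > 1)."""
    f = lambda u: (1.0 - u ** -0.5) * u ** -3   # proportional to T_eff^4
    return (f(x) / f(49.0 / 36.0)) ** 0.25      # maximum is at x = 49/36

for x in (1.2, 49.0 / 36.0, 2.0, 3.0, 5.0):
    print(f"R = {x:4.2f} R_i : T/T_max = {t_over_tmax(x):.2f}")
```

Inside $R \approx 2 R_i$ the temperature stays within roughly 10\% of $T_{max}$, while by $5 R_i$ it has fallen to about half, which is why lines formed in the inner disk show weaker rotational signatures.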
As shown in Table 1 and Figure 3, this model predicts essentially no variation of line width with lower-level excitation potential and only a very slight dependence on wavelength in the optical region. (Note that PH08 predict a much larger dependence of line width on excitation potential than Welty \etal (1992) for what should be essentially the same disk model; the reason for the large discrepancy is unclear.) In any case, measurement of rotation is best done through cross-correlation using suitable templates, as many of the lines used by PH08 are blends and introduce very large scatter into the model predictions (see Table 1 and Figure 3). PH08 also noted that their disk model predicts very strong TiO absorption bands at $\sim 7054$ and $7087$~\AA\ which are not observed. However, as shown in Figure 2, our lower-temperature disk model does not predict strong TiO absorption bands in this region. In addition, there is evidence for $7087$~\AA\ bandhead absorption in our MIKE spectra, at the level predicted by our disk model. Once again, this difference in the predicted disk model spectra arises simply from reducing the maximum disk temperature, which increases the importance of the hot inner disk relative to the cool outer disk at the wavelength of observation. It has long been known that many optical line profiles in FU Ori are less double-peaked than predicted by simple quiescent disk models (Hartmann \& Kenyon 1985). There are, however, alternative possibilities to explain the profiles which do not demand abandonment of the accretion disk hypothesis. If, as currently thought, ionized disks accrete through the action of the magnetorotational instability (MRI; Balbus \& Hawley 1998), such disks must be turbulent. The disk models with resolved vertical structure computed by Miller \& Stone (2000) predict that turbulence driven by the MRI in the central layers of the disk produces waves which propagate outward and shock in the upper layers.
It would be surprising if the MRI did not produce significant turbulent motions in the upper atmospheric layers of the disk, which would tend to wash out the double profile structure. Hartmann, Hinkle, \& Calvet (2004) found that some mildly supersonic turbulence was needed to explain the $^{12}$CO first-overtone lines of FU Ori. It should be noted that the standard steady disk structure may not be completely applicable in the innermost disk. Standard thin-disk models predict that accretion onto a slowly rotating star should give rise to boundary-layer emission with roughly half the system luminosity; this is not observed in FU Ori (Kenyon \etal 1989). Popham \etal (1996) considered disk models which suppress boundary layer radiation; such models exhibit less strongly doubled line profiles in inner disk regions, largely because the angular velocity of the disk departs from Keplerian values near the inner disk boundary. To explain the optical spectrum of FU Ori with a central star, the star would be required to have essentially the same total system luminosity $L \sim 230 \lsun$, and would need to have a radius roughly twice the inner radius of the disk model, $R \sim 10 \rsun$ (Zhu \etal 2007). Assuming Keplerian rotation for the infrared disk, Zhu \etal estimated a central mass $M \sim 0.3 \msun$. Such a star cannot be an isolated product of stellar evolution, as it has an implausibly short Kelvin-Helmholtz contraction time $\sim G M^2 R^{-1} L^{-1} \sim 1200$~yr. The energy to power the star would have to come from disk accretion, which would also potentially explain the outburst (Larson 1983). However, as the ratio of optical-to-infrared rotational velocities is consistent with a Keplerian profile, any central star would have to be rotating nearly at breakup; this means that the assumption of solid-body rotation in the PH08 model is unlikely to be correct.
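The Kelvin-Helmholtz timescale quoted above follows directly from the stated stellar parameters (a back-of-the-envelope check, ours):

```python
G = 6.674e-11                                     # [m^3 kg^-1 s^-2]
M_SUN, R_SUN, L_SUN = 1.989e30, 6.96e8, 3.828e26  # SI units
YEAR = 3.156e7                                    # seconds per year

M = 0.3 * M_SUN     # central mass inferred from Keplerian rotation
R = 10.0 * R_SUN    # stellar radius required by the PH08 interpretation
L = 230.0 * L_SUN   # total system luminosity

t_kh = G * M**2 / (R * L) / YEAR
print(f"t_KH ~ {t_kh:.0f} yr")   # of order 1200 yr, as quoted
```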
It is also unclear whether the accretion of a large amount of hot disk material would add enough angular momentum to spin up the outer layers of the star to breakup velocity as it expanded the outer atmosphere. In summary, the accretion disk model for FU Ori provides a coherent explanation of the observed spectral energy distribution and differential rotation over more than a decade in wavelength. The slow rotation observed at $5 \mu$m supports our previous result that the high mass accretion rate disk could extend to $0.5-1$~AU, which is significantly larger than predicted by pure thermal instability theory \citep{1994ApJ...427..987B}. On the other hand, the theory incorporating both gravitational and magnetorotational instabilities \citep{1999ASPC..160..122G,2005AAS...207.7417B,2009} successfully predicts an AU-scale, high mass accretion rate inner disk during outbursts \citep{2009}. With the advent of more powerful computers and sophisticated magnetohydrodynamic codes, and the assumption of MRI-driven accretion, it should be possible to explore the possibility that atmospheric turbulence and/or nonstandard inner thin disk structure can explain details of the optical line profiles. This work is supported in part by NASA grant NNX08AI39G and is based in part on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Minist\'erio da Ci\^encia e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnolog\'ia e Innovaci\'on Productiva (Argentina). The observations were obtained with the Phoenix infrared spectrograph, which was developed and is operated by the National Optical Astronomy Observatory.
The Gemini/Phoenix spectra were obtained as part of program GS-2007A-C-4.
\section{Introduction} Text matching is an important research area in several natural language processing (NLP) applications, including, but not limited to, information retrieval, natural language inference, question answering and paraphrase identification. In these applications, a model estimates the similarity or relations between two input text sequences, and two problems arise in the process. The first, common to many NLP tasks, is how to efficiently model or represent texts. The second, specific to the text matching task, is how to bridge the information gap between two text sequences of non-comparable lengths. Text matching approaches have successfully introduced many encoder methods or constructed hybrids of them to represent texts. Although these representation methods have significantly advanced the field of natural language processing as well as its downstream tasks, including text matching applications, they have limitations in transferring information from the inputs to the output representations. Some of them lose important information while handling a fairly long sequence of words, while others that focus on learning local features are inadequate for representing complex long-form documents. For text matching tasks, it is crucial that the text representations retain as much useful information of the input data as possible. The other problem of text matching is how to bridge the information gap between two text sequences whose lengths differ in scale, as in short-short text matching, long-long text matching, and short-long text matching. In all these settings, the core information is hard to extract from texts, not only because of the text representation problem above, but also because of different text structures. Recently, interest has shifted toward mutual information (MI) maximization of representations across multiple domains, including computer vision and NLP.
To efficiently model or represent both sides of text pairs in text matching, a natural idea is to train a representation-learning network to maximize the MI between text inputs and representation outputs before matching. However, MI is difficult to estimate, especially in high-dimensional and continuous representation spaces. Fortunately, recent theoretical breakthroughs have made it possible to effectively compute MI between high dimensional input/output pairs of deep neural networks \cite{Belghazi2018,hjelm2018}. Early attempts have been made to solve NLP tasks like text generation \cite{qian2019} and other tasks like cross-modal retrieval \cite{wei2019} with MI maximization. In this paper, we introduce the deep mutual information estimation technique, also known as Deep InfoMax (DIM, \cite{hjelm2018}), into the text matching task. We design a deep MI estimation module to maximize the MI between input text pairs and their learned high-level representations. We start with the text matching neural network model of \cite{yang2019simple} and design a wrapping-mode training architecture. In our architecture, we take the whole text matching network as the encoder, while MI between the inputs and the outputs is estimated and maximized so that the learned representations can retain information of the input data to a great extent. Moreover, maximizing MI between the input data and the encoder output (global MI) is often insufficient for learning useful representations. Recently, a method for maximizing the local MI between the representation and local regions of the input (e.g. patches rather than the complete text) was presented \cite{hjelm2018}, in which the representation is encouraged to have high MI with all the patches.
So, to preserve the complex structural information and address the structural difficulty of text matching on varying-length texts, we split input texts into segments as local features, and then maximize the average MI between the high-level representation and local patches of the input text. Our proposed method works effectively and efficiently according to experimental results. The main contributions of this paper are summarized as follows: \begin{itemize} \item We propose a deep neural network with deep mutual information estimation to solve problems of text matching. To the best of our knowledge, this work is the first attempt to apply mutual information neural estimation to improving both representation quality and the diversity of text structures in text matching tasks. \item We integrate global and local mutual information maximization for texts to help preserve information in the mapping from input to output representation. Our model has fewer parameters and does not rely on pretraining on external data, compared to large representation models. This is meaningful for different text matching tasks. \item Experimental results on four benchmark datasets across four different tasks are all on par with or even above the state-of-the-art methods, which demonstrates the high effectiveness of our method on text matching tasks. \end{itemize} \begin{figure}[H] \centering \includegraphics[width=90mm]{figures/Figure-Overview.pdf} \caption{\textbf{Architecture overview of TIM. } The DIM Encoder will be detailed later in section \ref{sec:dim-encoder} and is shown in Figure \ref{fig-TIM-RE2}.} \label{fig-TIM-overview} \end{figure} \section{Related Work} The first-generation encoder methods for word embeddings, for instance Word2Vec (\cite{Mikolov2013}) and Doc2Vec (\cite{Le2014}), learn embedding vectors as text representations at different text structure levels, such as words, sentences, paragraphs and documents.
They have been introduced into several text matching models in which typical similarity metrics are employed to compute the matching scores of two text vectors (WMD \cite{Kusner2015}). Besides, some latent variable models have been introduced into text matching tasks, too. They extract hidden topics from texts, and then the texts can be compared based on their hidden topic representations (\cite{Gong2018}). Recently, deep neural networks have become the most popular models for better text representations in NLP tasks, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Long Short-Term Memory architectures (LSTMs). Accordingly, many text matching applications take these models as text encoders in their matching processes: \cite{Severyn2015} ranks short text pairs using a CNN which preserves local information in the text representation, \cite{Mueller2016} treats texts as sequences of words and takes RNNs as text encoders for text matching on sentences and long-form texts, \cite{tai2015} shows superiority for representing sentence meaning over a sequential LSTM, and \cite{tan2016} introduces LSTMs to construct better answer representations in question-answer matching. Nowadays, the state-of-the-art representation methods focus on contextual token representation - training an encoder to represent words in their specific context, such as BERT and XLNet. In text matching tasks, a comparably long text may lose its local information after being encoded as a fixed-sized representation. Some previous studies (\cite{tan2016}) exploit attention mechanisms to distill important words from sentences, but valuable information can still be diluted within the large number of sentences in long-form texts. On the other hand, the representation of a short text suffers from sparsity and may lose the global information of word co-occurrence.
For this, some previous studies, such as \cite{yang2019simple} typically, employ an alignment architecture to enrich the mutual information between the sequence pair in matching and introduce augmented residual connections so that the encoder retains as much information of the inputs as possible in its outputs. \cite{Liu2018ImprovedTM} focuses on matching questions and answers (QA) and adopts the generative adversarial network (GAN) to enhance mutual information by rewriting questions in QA tasks. Notably, MI can quantify the dependence of two random variables and measure non-linear statistical dependencies between variables. \cite{Belghazi2018} implements MI estimation in high-dimensional and continuous scenarios and effectively computes MI between high dimensional input/output pairs of deep neural networks (MINE). \cite{hjelm2018} formalizes Deep InfoMax (DIM), which makes it possible to prioritize global or local information and to tune the suitability of learned representations for classification or reconstruction-style tasks. Inspired by DIM, we introduce deep mutual information estimation and maximization into our deep neural model for more general text matching tasks. \section{Methodology} We adopt the neural architecture for text matching introduced in RE2 \cite{yang2019simple} and apply the MI estimation and maximization method to the representation part of the base text matching architecture. We intend to maximize the mutual information of texts in the matching process, but if text matching encoders pass information from only some parts of the input, this does not increase the MI with any other parts. Based on this, our model introduces DIM to leverage local regions of the input for better text representation, for the same representation is encouraged to have high MI with all patches, and this mechanism influences all input data shared across patches. Besides, DIM has the representational capacity of deep neural networks.
Therefore, it is very suitable for mutual information estimation of high dimensional data, including text data. For the text matching task, our model employs the local DIM framework to estimate and maximize MI. The overall framework of our proposed architecture is presented in Figure \ref{fig-TIM-overview}. In the DIM network on the left-hand side of Figure \ref{fig-TIM-overview}, multiple feature maps, treated as \textit{local features}, are extracted from one input text by our \textbf{Feature Extraction Method} (section \ref{sec:feature}). The local features reflect some structural aspects of the text data, e.g. spatial locality. For the \textit{global feature}, as shown on the right-hand side of Figure \ref{fig-TIM-overview}, we take the whole text matching neural network as the \textbf{DIM Encoder} (section \ref{sec:dim-encoder}) of our model, and we take the high-level output representation from its pooling layer as the global feature vector for DIM. Here the DIM network shares the high-level representation with the text matching network output. This is because the base text matching network and the MI estimator optimize their losses for the same purpose and require similar computations. To apply the DIM model to the base text matching neural network, in the following subsections we first propose our feature extraction method for text data. Then we describe the base text matching neural network as our DIM encoder. At last, we propose our \textbf{DIM Estimator and Discriminator} (section \ref{sec:dim}) for MI maximization in text matching. \subsection{Feature Extraction for Varying-Length Text} \label{sec:feature} First we generate feature maps, $C(X) := {\{C^{(i)}\}}_{i=1}^{1\times{M}}$, for input $X$. In this step, we convert a text to multiple tensors of the same shape, $1\times{M}$, and generate fixed-sized feature maps for use with the DIM method.
What we need to consider is how to maintain as much useful information of the source text as possible in these feature maps. Therefore, according to the typical text lengths in the dataset, we propose two generation modes for feature maps, for short text data and long text data separately, named word mode (TIM-W, Figure \ref{fig-TIM-W}) and segment mode (TIM-S, Figure \ref{fig-TIM-S}). TIM-W is mainly used for short texts to generate feature maps. We observe that in some universally-used short text datasets, including SNLI, SciTail, Quora and WikiQA, texts are mostly tens of words long. For these cases, we propose TIM-W to extract feature maps based on words and their embeddings, to retain more semantic relevance information in a short text. We convert the short text into a word vector list, denoted as $T$ = ($v_0$, $v_1$,\dots,$v_{n-1}$), in which $v_i$ is a high-dimensional (e.g. 300 dimensions) vector obtained from Word2Vec embeddings. The shape of the feature map ($1\times{M}$) is fixed when the DIM network is initialized in advance, where $M<n$. Then we group the $n$ word vectors into feature maps. We pad the last feature map with zero vectors if the last group of vectors does not fill it. The TIM-W mode is shown in Figure \ref{fig-TIM-W}. For a long text dataset, using a higher-dimensional word embedding to encode a long text would cause high space/time complexity, while long texts already carry much richer information than short texts. So we propose TIM-S to generate fixed-size feature maps for a long text in our text matching model. First, we represent each word of a long text with a word index number defined in a relevant vocabulary: $T$ = ($w_0$, $w_1$,\dots, $w_{n-1}$).
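The word-mode grouping just described can be sketched as follows (a minimal illustration, ours; the grouping and zero-padding rule follow the description above, with a toy embedding dimension in place of the 300-dimensional Word2Vec vectors):

```python
import numpy as np

def word_mode_feature_maps(word_vectors, M):
    """TIM-W sketch: group n word vectors into feature maps holding M
    vectors each, zero-padding the last map if the final group is short."""
    vecs = np.asarray(word_vectors, dtype=float)
    n, dim = vecs.shape
    n_maps = -(-n // M)                    # ceil(n / M)
    padded = np.zeros((n_maps * M, dim))
    padded[:n] = vecs
    return padded.reshape(n_maps, M, dim)

# Toy example: 7 "word vectors" of dimension 4, grouped with M = 3
maps = word_mode_feature_maps(np.ones((7, 4)), M=3)
print(maps.shape)   # (3, 3, 4): the last map carries two zero rows
```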
Then we divide $T$ into segments of the same fixed length according to the preset segment size ($D$), $S$ = ($s_0$, $s_1$ \dots), where each $s_i$ contains $D$ word indexes and the last segment is padded with zeros at its end. Then we group the segments into $M$ feature maps, whose shapes are of fixed size $1\times{M}$. The segment size $D$ and feature shape $M$ are set when initializing the DIM network in advance. If the last group of segments is not large enough to fill the last feature map, we pad zero-element segments into it. Finally, the long input text is represented as multiple fixed-size local feature maps. The process is shown in Figure \ref{fig-TIM-S}. \begin{figure}[H] \centering \includegraphics[width=85mm]{figures/Figure-TIM-W.pdf} \caption{\textbf{Local Feature Extraction of Word Mode (TIM-W).} This mode is for maximizing MI of short texts.} \label{fig-TIM-W} \end{figure} \begin{figure}[H] \centering \includegraphics[width=85mm]{figures/Figure-TIM-S.pdf} \caption{\textbf{Local Feature Extraction of Segment Mode (TIM-S).} This mode is for maximizing MI of long texts.} \label{fig-TIM-S} \end{figure} \subsection{Text Matching Neural Layers} \label{sec:dim-encoder} For the global feature, we take the whole text matching neural network as the DIM encoder and use its output as the high-level representation. We adopt RE2 as the base text matching network, which achieved the state-of-the-art on four well-studied datasets across three different text matching tasks. RE2 leverages previous aligned features (residual vectors), point-wise features (embedding vectors), and contextual features (encoded vectors) to maintain useful information as it passes through the network in a text matching task. The detailed architecture of RE2 is illustrated in Figure \ref{fig-TIM-RE2}. An embedding layer first embeds discrete words.
Three layers following the embedding layer are the encoding (CNN), alignment and fusion layers, which process the sequences consecutively. These three layers are treated as one block in RE2. $N$ blocks are connected by an augmented version of residual connections. In the end, a pooling layer aggregates sequential representations into final vectors. More details can be found in the original paper. As the high-level global feature output of the DIM encoder, the final vectors are passed into the DIM discriminator network and trained. Simultaneously, the final vectors are also passed to and processed by a prediction layer to give the final text matching prediction. We keep RE2's original network architecture (the state of the art) and add the DIM network on top of the base text matching network to help maximize useful information in the output representations used in the final matching prediction, which improves performance on the text matching tasks and ensures that the comparison experiments are fair. \begin{figure}[H] \centering \includegraphics[height=88mm]{figures/Figure-RE2.pdf} \caption{\textbf{Mutual Information Encoder.} The baseline text matching neural architecture (RE2) is adopted as the DIM encoder in our model. Following the settings in the original paper, the kernel size of the CNN layer is set to 3, and the number of CNN layers is tuned from 1 to 3. The block number $N$ of its augmented residual connections is tuned from 1 to 3. For the experiments on SciTail, WikiQA, Quora and Harvard news, the high-level output is a 200 dimensional vector.
For the experiment on SNLI, the output is a 150 dimensional vector.} \label{fig-TIM-RE2} \end{figure} \subsection{MI Maximization for Text Matching} \label{sec:dim} In our model, we define an MI estimator and employ a discriminator to optimize the output representation ($E_\psi(X)$) of the input text data ($X$) by simultaneously estimating and maximizing the MI, $\mathcal{I}(X; E_\psi(X))$, on both sides of the comparison. \textbf{DIM Estimator.} To estimate MI, an appropriate lower-bound for the KL-divergence is necessary. Before DIM, MINE proposed a lower-bound to the MI based on the Donsker-Varadhan representation (DV, Donsker \& Varadhan, 1983) of the KL-divergence, shown in the following form: \begin{align} \mathcal{I}(X; Y) & := \mathcal{D}_{KL}(\mathbb{J} || \mathbb{M}) \nonumber\\ & \geq \widehat{\mathcal{I}}^{(DV)}_{\omega}(X; Y) := \mathbb{E}_\mathbb{J}[T_{\omega}(x,y)] - \log \mathbb{E}_\mathbb{M}[e^{T_{\omega}(x,y)}], \label{eq:mine} \end{align} where $T_{\omega}: X\times{Y} \rightarrow \mathbb{R}$ is a discriminator function modeled by a neural network with parameters ${\omega}$. Based on the MINE estimator and the DIM local framework, we present our DIM estimator, maximizing the average estimated MI for text data and optimizing the local objective described as follows: \begin{align} (\hat{\omega}, \hat{\psi})_L &= {arg\,max}_{\omega, \psi} \frac{1}{M} \sum_{i=1}^{M} \widehat{\mathcal{I}}_{\omega, \psi}(C^{(i)}(X); E_\psi(X)), \label{eq:dim-estimator} \end{align} where $C(X)$ denotes the local features converted from input texts by feature extraction. $E_{\psi}(X)$ is the learned high-level representation output by the pooling layer of the base text matching neural network RE2 with parameters $\psi$. $\omega$ denotes the parameters of a DIM discriminator function modeled by a neural network. The subscript $L$ denotes ``local'' for the DIM local framework. With our estimator defined, we next describe its DIM discriminator for MI maximization.
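The DV bound in Eq.~(\ref{eq:mine}) can be illustrated numerically. The sketch below (ours) uses a fixed toy critic $T(x,y)=\tanh(xy)$ rather than a trained network; `marginal' samples are obtained by shuffling one side of the pairs, mirroring how the fake pairs are built for the discriminator:

```python
import numpy as np

def dv_lower_bound(t_joint, t_marginal):
    """Donsker-Varadhan bound: E_J[T] - log E_M[exp(T)]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)   # dependent pairs: the "joint"
y_shuf = rng.permutation(y)           # shuffled pairs mimic the marginal

critic = lambda a, b: np.tanh(a * b)  # fixed toy critic T(x, y)
print(dv_lower_bound(critic(x, y), critic(x, y_shuf)))       # > 0: dependence detected
print(dv_lower_bound(critic(x, y_shuf), critic(x, y_shuf)))  # <= 0 by Jensen's inequality
```

In the full model, the discriminator scores play the role of $T_\omega$, and maximizing the bound trains both $\omega$ and the encoder parameters $\psi$.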
\textbf{DIM Discriminator.} With the high-level output from the text matching network and the feature maps extracted from the same input text, we concatenate this global feature vector with its lower-level feature maps at every location, $\{[C^{(i)}_{\psi}(x), E_{\psi}(x)]\}_{i=1}^{1{\times}M}$, with $C(X)$ flattened in advance. Our discriminator is then formulated as: \begin{equation}\label{equal_local_discriminator} T^{(i)}_{\psi, \omega}(x, E_{\psi}(x)) = D_{\omega}([C^{(i)}(x), E_{\psi}(x)]), \end{equation} while fake feature maps are generated by combining global feature vectors with local feature maps coming from different texts, $x'$: \begin{align} T^{(i)}_{\psi, \omega}(x', E_{\psi}(x)) = D_{\omega}([C^{(i)}(x'), E_{\psi}(x)]). \end{align} With the `real' and the `fake' feature maps, we introduce the local DIM concat-and-convolve network architecture ($D_{\omega}$), a $1\times1$ convnet with two 512-unit hidden layers, as the DIM discriminator for our text matching model. The process is shown in Figure \ref{fig-TIM-feature-map}. The `real' and `fake' feature maps pass through the discriminator to obtain the $1{\times}M$ scores. The MI loss for the input source text $t_s$ and target text $t_t$ of a text matching task is calculated by: $\mathcal{L_M}$ = $\mathcal{L}_{t_s}$ + $\mathcal{L}_{t_t}$. The overall loss function is defined as: $\mathcal{L}_{all}$ = $\mathcal{L_M}$ + $\mathcal{L_T}$, where $\mathcal{L_T}$ is the loss calculated by the base text matching neural network. \begin{figure}[H] \centering \includegraphics[width=85mm]{figures/Figure-Feature-Map.pdf} \caption{\textbf{Mutual Information Discriminator.} The global feature vector is concatenated with the lower-level feature map at every location.
A $1\times{1}$ convolutional discriminator is used to score the `real' feature map vector pair, while the `fake' pair is produced by pairing the feature vector with a feature map from another text.} \label{fig-TIM-feature-map} \end{figure} \section{Experiments} \subsection{Experimental Setup} \subsubsection{Benchmarks and Metrics} We evaluated our proposed TIM-W and TIM-S models on four well-studied NLP tasks and a news dataset, as follows:\\ \textbf{Natural Language Inference.} Stanford Natural Language Inference\footnote{\url{https://nlp.stanford.edu/projects/snli}} (SNLI) is a benchmark dataset for natural language inference. In this task, the two input sentences are asymmetrical, one as ``premise'' and the other as ``hypothesis''. We follow the setup of SNLI's original introduction in training and testing. Accuracy is used as the evaluation metric for this dataset.\\ \textbf{Science Entailment.} SciTail\footnote{\url{http://data.allenai.org/scitail}} is an entailment classification dataset constructed from science questions and answers. This dataset contains only two types of labels, entailment and neutral. We use the original dataset partition. It contains 27k examples in total: 10k examples with entailment labels and the remaining 17k labeled as neutral. Accuracy is used as the evaluation metric for this dataset.\\ \textbf{Paraphrase Identification.} This task is to decide whether one question is a paraphrase of the other in a pair of texts. We use the Quora dataset with 400k question pairs collected from the Quora website.
The partition of the dataset is the same as the one in \cite{wang2017bilateral}, and accuracy is used as the evaluation metric.\\ \textbf{Question Answering.} For this task, we employ the WikiQA dataset\footnote{\url{https://www.microsoft.com/en-us/research/publication/wikiqa-a-challenge-dataset-for-open-domain-question-answering}}, which is a retrieval-based question answering dataset based on Wikipedia. It contains questions and their candidate answers, with binary labels indicating whether a candidate sentence is a correct answer to the question it belongs to. Mean average precision (MAP) and mean reciprocal rank (MRR) are used as the evaluation metrics for this task.\\ \textbf{News Articles Title Content Match.} We employ the Harvard news dataset, News Articles\footnote{\url{https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/GMFCTR}}, for the matching task. It contains news articles, and we separate the title of each article from its content and perform data augmentation by randomly combining pairs of titles and contents of articles. Most contents of the news articles have 1000 to 5000 words. We report matching accuracy. \subsubsection{Baselines and Implementations} We implement our model based on \cite{yang2019simple} and train it on Nvidia 1080ti GPUs. Sentences in the datasets are all tokenized and converted to lower case. We also filter out meaningless symbols and emojis before embedding. The maximum sequence length is not limited. Word embeddings are initialized with 840B-300d GloVe word vectors (\cite{pennington2014glove}) and fixed during the training process. \subsection{Experimental Results} The experimental results are described below: \begin{itemize} \item \textbf{Natural Language Inference.} Results on SNLI are shown in the first column of Table \ref{tab:experiment-result}. The performances of previous methods are quite close, and we slightly outperform the state-of-the-art.
Our method performs well on the language inference task without any task-specific modifications. \item \textbf{Science Entailment.} Results on the SciTail dataset are shown in the second column of Table \ref{tab:experiment-result}. Our method improves the baseline model by 0.8\% and surpasses the state-of-the-art by 0.1\%, which indicates that our method is highly effective on this task. \item \textbf{Paraphrase Identification.} Results on Quora are shown in the third column of Table \ref{tab:experiment-result}. Our method also lifts the accuracy of the baseline model by 0.4\% and achieves higher results than all previous methods. \item \textbf{Question Answering.} Results on WikiQA are shown in the last column of Table \ref{tab:experiment-result}. Our method yields small improvements on this IR task, which indicates that it also fits IR tasks well. \item \textbf{News Article Title Content Match.} Results on the Harvard news dataset are shown in Table \ref{tab:news-result}.
\end{itemize} \begin{table*} \begin{tabular}{ |p{2.66cm}|p{0.52cm}||p{2.66cm}|p{0.52cm}||p{2.76cm}|p{0.52cm}||p{2.73cm}|p{0.74cm}|p{0.74cm}| } \hline \multicolumn{2}{|c||}{\textbf{SNLI}} & \multicolumn{2}{c||}{\textbf{SciTail}} & \multicolumn{2}{c||}{\textbf{Quora}} & \multicolumn{3}{c|}{\textbf{WikiQA}} \\ \hline \textbf{Model} & \textbf{Acc.} & \textbf{Model} & \textbf{Acc.} & \textbf{Model} & \textbf{Acc.} & \textbf{Model} & \textbf{MAP} & \textbf{MRR} \\ \hline BiMPM \newline \cite{wang2017bilateral} & 86.9 & ESIM \newline \cite{chen2017enhanced} & 70.6 & BiMPM \newline \cite{wang2017bilateral} & 88.2 & ABCNN \newline \cite{yin2016abcnn} & 0.6921 & 0.7108 \\ \hline ESIM \newline \cite{chen2017enhanced} & 88.0 & DecAtt \cite{parikh2016decomposable} & 72.3 & pt-DecAttn-word \newline \cite{tomar2017neural} & 87.5 & KVMN \newline \cite{miller2016key} & 0.7069 & 0.7265 \\ \hline MwAN \newline \cite{tan2018multiway} & 88.3 & DGEM \newline \cite{Khot2018SciTaiLAT} & 77.3 & pt-DecAttn-char \newline \cite{tomar2017neural} & 88.4 & BiMPM \newline \cite{wang2017bilateral} & 0.718 & 0.731 \\ \hline CAFE \newline \cite{tay2018compare} & 88.5 & HCRN \newline \cite{tay2018hermitian} & 80.0 & MwAN \newline \cite{tan2018multiway} & {89.1} & IWAN \newline \cite{shen2017inter} & 0.733 & 0.750 \\ \hline SAN \newline \cite{liu2018stochastic} & 88.6 & CAFE \newline \cite{tay2018compare} & 83.3 & CSRAN \newline \cite{tay2018co} & {\bf 89.2} & CA \cite{wang2017compare} & {0.7433} & {0.7545}\\ \hline CSRAN \newline \cite{tay2018co} & 88.7 & CSRAN \newline \cite{tay2018co} & {\bf 86.7} & SAN \newline \cite{liu2018stochastic} & {\bf 89.4} & HCRN \newline \cite{tay2018hermitian} & {0.743} & {0.756} \\ \hline RE2 \newline \cite{yang2019simple} & {\bf 88.9} & RE2 \newline \cite{yang2019simple} & {\bf 86.0} & RE2 \newline \cite{yang2019simple} & {\bf 89.2} & RE2 \newline \cite{yang2019simple} & {\bf 0.7452} & {\bf 0.7618} \\\hline \hline TIM-W (ours) & {\bf 88.9} & TIM-W (ours) & {\bf 86.8} 
& TIM-W (ours) & {\bf 89.6} & TIM-W (ours) & {\bf 0.7516} & {\bf 0.7685}\\ & {\bf$\pm$0.1} & & {\bf$\pm$0.1} & & {\bf$\pm$0.3} & & {$\pm$0.02} & {$\pm$0.02}\\ TIM-S (ours) & 88.3 & TIM-S (ours) & 86.2 & TIM-S (ours) & 87.8 & TIM-S (ours) & {0.7181} & {0.7387}\\ & $\pm$0.1 & & $\pm$0.1 & & $\pm$0.5 & & {$\pm$0.02} & {$\pm$0.02} \\ \hline \end{tabular} \caption{Experimental results on four datasets: SNLI, SciTail, Quora and WikiQA.} \label{tab:experiment-result} \end{table*} \begin{table} \centering \small \begin{tabular}{|l|l|} \hline {\bf Model} & {\bf Acc(\%)}\\ \hline RE2 \cite{yang2019simple} & {93.18} \\ \hline TIM-S (ours): D=12, M=10 & {\bf 96.59}\\ TIM-S (ours): D=20, M=10 & {95.83}\\ TIM-S (ours): D=20, M=20 & {95.45}\\ TIM-S (ours): D=6, M=10 & {95.11}\\ TIM-S (ours): D=6, M=5 & {94.70}\\ TIM-W (ours) & {94.14}\\ \hline \end{tabular} \caption{Experimental results on the Harvard news dataset, with the influence of $M$ and $D$ in TIM-S mode.} \label{tab:news-result} \end{table} Overall, our proposed method performs on par with or better than the state-of-the-art on four well-studied datasets across three different tasks.\\ \textbf{Analysis of Results. } TIM-W mode achieves better accuracy on SNLI, Quora, SciTail and WikiQA, where the short texts favor feature extraction at the word level. For segment-level feature extraction on longer texts, TIM-S mode is better suited, as the experiments on the News Articles dataset show. Moreover, since it does not introduce high-dimensional pretrained word embeddings, TIM-S is significantly faster than TIM-W on long texts. \\ \textbf{Influence of $M$ and $D$. } $M$ is the shape size of the local feature maps in both TIM-W and TIM-S, and $D$ is the segment size, which only needs to be set in TIM-S. First, for TIM-W as used on SNLI, Quora, SciTail and WikiQA, we tune $M$ from 1 to 3. Texts in these four datasets are relatively short, and $M$ should not exceed the number of words in a short text.
Otherwise a short text would be converted to just one feature map, causing a loss of structural information in the text. Second, in the experiments under TIM-S mode for the content field of the News dataset, we tune the segment size (in words, $D$) and the shape ($1{\times}M$) of the fixed-size feature maps. We set $D=12$ and $M=10$, which means each local feature map contains $10$ segments and each segment has $12$ word indexes. When both $D$ and $M$ are enlarged, each feature map block contains more zero padding, so it becomes more difficult to extract the useful local information from sparse feature maps. Conversely, when both $D$ and $M$ are set to be small, TIM-S mode effectively degenerates into TIM-W mode, which is not suitable for long texts: when the shape of the feature map in TIM-S becomes smaller, more local structure information is lost in the MI maximization process. The influence of $M$ and $D$ is illustrated in Table \ref{tab:news-result}.\\ \textbf{Case Study. } Aligning tokens between two texts is a key stage of the baseline model and achieves remarkable improvements on text matching. However, attending to incorrect text positions during the finite number of alignment operations (three) may result in failed predictions. For example, in a pair from WikiQA, ``who is basketball star \textit{antoine walker}'' and ``\textit{Antoine Devon Walker} (born August 12, 1976) is an American former professional basketball player'', there is a middle name in the player's name. And in another pair, ``what day is \textit{st. patricks} day'' and ``\textit{Saint Patrick}'s Day or the Feast of \textit{Saint Patrick} (the Day of the Festival of \textit{Patrick}) is a cultural and religious holiday celebrated on 17 March'', the person's name appears at multiple positions in one text.
Compared to the baseline, MI maximization with powerful neural networks helps model local semantics and improves text matching predictions more efficiently; our model obtains better predictions on these cases. Since richer features can bring better MI estimation results, we will investigate better feature extraction methods with neural MI estimation for NLP tasks in future work. \section{Conclusions} In this paper, we propose a new neural architecture with deep mutual information estimation to learn more effective and higher-quality text representations in text matching tasks. By maximizing the mutual information between each input and output pair, our method retains more useful information in the learned high-level representations. Moreover, we split each text into segments and treat these segments as local features, which helps preserve complex structural information and addresses the structural difficulty of matching texts of varying lengths. We then leverage a local mutual information maximization method to mitigate the information loss caused by complex text structures in text matching frameworks. The experimental results on various text matching tasks also demonstrate the effectiveness of our model. \newpage \bibliographystyle{named}
\section{Introduction} \label{sec:introduction} In model-based reinforcement learning problems~\cite{sutton1999policy,bertsekas1996neuro}, an agent interacts sequentially with a dynamic environment by taking actions in order to maximize its long-term performance. This paper, as most related work in this field, focuses on systems and control objectives that are modeled as finite time horizon \emph{Markov decision processes} (MDPs). At each time $t = 1, \ldots, T$, the agent observes the environment state $S_t$ and takes an action $A_t$ following a decision policy $\varphi_t$. Independently of the action, the environment produces a random outcome $Y_t$. The reward is obtained as a deterministic function of the system's outcome and the chosen action, $R_t = r(Y_t,A_t)$. The data is collected in a history $H^{t+1} = (S_1,A_1,R_1, \ldots , S_{t},A_{t},R_{t})$ and the system evolves to a state $S_{t+1}$. The procedure then repeats until the end of the time horizon, $t=T$. In the Bayesian setting, the MDP model $\Phi$ is treated as a random element of some parametric model family, which is drawn according to a prior distribution of the environment parameters $\Theta$. The goal of the agent is to identify a policy that yields the highest expected cumulative reward $\ensuremath{\mathbb{E}}[\sum_{t=1}^Tr(Y_t,A_t)]$ under the uncertainty of these parameters. \textcolor{black}{ The decision-making process in Bayesian reinforcement learning is typically more computationally demanding than the frequentist approach; however, this setting presents various advantages, as it facilitates regularization, handles parameter uncertainty naturally, and provides ways to solve the exploration-exploitation trade-off~\cite{ghavamzadeh2015bayesian}.
} Following the work of Xu and Raginsky on Bayesian supervised learning~\cite{xu2020minimum}, \textcolor{black}{ we put aside the computational aspect to study the best achievable performance for model-based Bayesian reinforcement learning.} We define the \textit{minimum Bayesian regret} as the difference between the \textit{Bayesian cumulative reward} $R_\Phi(\kappa_H)$, defined as the maximum expected cumulative reward attainable by learning from the sequentially collected data, and $R_\Phi(\kappa_\Theta)$, the maximum expected cumulative reward that could be reached if the environment parameters were known. We develop information-theoretic upper bounds on the minimum Bayesian regret under various assumptions on the reward function, using the relative entropy and the Wasserstein distance. \paragraph*{Structure of the paper} Section~\ref{sec:contributions} summarizes the contributions of this paper. The notations are introduced in Section~\ref{sec:notations}. Section~\ref{sec:model_definitions} presents the different models of decision processes studied and gives the definition of the Bayesian cumulative reward and the minimum Bayesian regret. Sections~\ref{sec:upper_bounds} and~\ref{sec:upper_bounds_MAB} are devoted to information-theoretic upper bounds on the MBR. Finally, conclusions are presented in Section~\ref{sec:conclusion}. \section{Contributions} \label{sec:contributions} In this work, inspired by Xu and Raginsky's~\cite{xu2020minimum} framework on the study of the best achievable performance of supervised learning problems, we propose an analogous framework for the study of model-based reinforcement learning problems.
\textcolor{black}{Our contributions in this regard can be summarized as:} \begin{color}{black} \begin{enumerate} \item \textcolor{black}{Developing a theoretical framework of model-based Bayesian MDPs suited for information-theoretic studies.} \item Proposing a definition of the minimum Bayesian regret (MBR) for reinforcement learning problems modeled as Markov decision processes. \item Presenting a data processing inequality for the Bayesian cumulative reward in Lemma~\ref{lem:data_process}. \item Formulating upper bounds on the $\textnormal{MBR}$ for general MDPs based on the relative entropy (Proposition~\ref{prop:kl_div_subgaussian}) and the Wasserstein distance (Proposition~\ref{prop:wasserstein}). We present particular cases of these bounds for bounded reward functions in Corollaries~\ref{cor:kl_div_bounded_reward} and~\ref{cor:wasserstein}, and the tightness of these results is compared in Remark~\ref{rem:wasserstein_is_tighter}. \item Deriving MBR bounds for the multi-armed bandit and for the online optimization with partial feedback problems. In this last setting, we show how our bound recovers \emph{from below} results from Russo and Van Roy~\cite{russo2016information}. \end{enumerate} \end{color} \section{Notations and preliminaries} \label{sec:notations} Throughout the paper, random variables $X$ are written in capital letters, their realizations $x$ in lower-case letters, and their sets of outcomes $\ensuremath{\mathcal{X}}$ in calligraphic letters. The probability distribution of a random variable $X$ is denoted by $\ensuremath{\mathbb{P}}_X$. When more than one random variable is considered, e.g., $X$ and $Y$, we use $\ensuremath{\mathbb{P}}_{X,Y}$ to denote their joint distribution and $\ensuremath{\mathbb{P}}_X \ensuremath{\mathbb{P}}_Y$ for their product distribution\footnote{Note that this slight abuse of notation does not mean that the product distribution is the product of the distributions.}.
We write the conditional probability distribution of $Y$ given $X$ as $\ensuremath{\mathbb{P}}_{Y|X}$, defining a probability distribution $\ensuremath{\mathbb{P}}_{Y|X=x}$ over $\ensuremath{\mathcal{Y}}$ for each element $x\in \ensuremath{\mathcal{X}}$. We use the subscript notation $X_t$ to represent a random variable at time $t=1,\ldots, T$ and the superscript notation $X^t$ to denote a sequence of random variables $X^t \equiv (X_1,\ldots,X_t)$ for $t=2,\ldots,T$. For consistency we let $X^1 \equiv X_1$. The relative entropy between two probability distributions $\ensuremath{\mathbb{P}}$ and $\ensuremath{\mathbb{Q}}$ is defined as $\KL{\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{Q}}} := \int \log \big( \frac{d\ensuremath{\mathbb{P}}}{d\ensuremath{\mathbb{Q}}} \big) d\ensuremath{\mathbb{P}}$ if $\ensuremath{\mathbb{P}}$ is absolutely continuous with respect to $\ensuremath{\mathbb{Q}}$, and $\KL{\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{Q}}} := \infty$ otherwise. The notation $d\ensuremath{\mathbb{P}}/d\ensuremath{\mathbb{Q}}$ is the Radon-Nikodym derivative. Similarly, the mutual information between $X$ and $Y$ is defined as $\textup{I}(X;Y) := \KL{\ensuremath{\mathbb{P}}_{X,Y}}{\ensuremath{\mathbb{P}}_{X} \ensuremath{\mathbb{P}}_{Y}}$, and the conditional mutual information between $X$ and $Y$, given $Z$, as $\textup{I}(X;Y|Z) := \ensuremath{\mathbb{E}}[\textup{I}(X;Y|Z=z)]$, where $\textup{I}(X;Y|Z=z) := \KL{\ensuremath{\mathbb{P}}_{X,Y|Z=z}}{\ensuremath{\mathbb{P}}_{X|Z=z} \ensuremath{\mathbb{P}}_{Y|Z=z}}$.
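For discrete distributions, these definitions can be checked numerically. The sketch below (plain NumPy; the helper names and the joint distribution are illustrative, not from the paper) computes the relative entropy and the mutual information $\textup{I}(X;Y) = \KL{\ensuremath{\mathbb{P}}_{X,Y}}{\ensuremath{\mathbb{P}}_X \ensuremath{\mathbb{P}}_Y}$ for finite alphabets:

```python
import numpy as np

def relative_entropy(p, q):
    """D(P || Q) = sum_x p(x) log(p(x)/q(x)); infinite if P is not
    absolutely continuous with respect to Q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.any((q == 0) & (p > 0)):
        return np.inf
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mutual_information(p_xy):
    """I(X;Y) = D(P_{X,Y} || P_X P_Y) for a joint pmf given as a 2-D array."""
    p_xy = np.asarray(p_xy, float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    return relative_entropy(p_xy.ravel(), (p_x * p_y).ravel())

# Illustrative joint pmf of two dependent binary variables.
p_dep = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
p_ind = np.full((2, 2), 0.25)               # independent uniform pair

print(mutual_information(p_dep))            # strictly positive
print(mutual_information(p_ind))            # 0.0
```

As expected, the mutual information vanishes exactly when the joint distribution factorizes into its marginals.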
Finally, if two probability distributions $\ensuremath{\mathbb{P}}$ and $\ensuremath{\mathbb{Q}}$ are defined on a Polish space $\ensuremath{\mathcal{X}}$ with respect to a metric $\rho$, then their Wasserstein distance of order $p \geq 1$ is $\ensuremath{\mathbb{W}}_p(\ensuremath{\mathbb{P}},\ensuremath{\mathbb{Q}}) := \big(\inf_{\ensuremath{\mathbb{D}} \in \Pi(\ensuremath{\mathbb{P}},\ensuremath{\mathbb{Q}})} \int \rho^p d\ensuremath{\mathbb{D}} \big)^{1/p}$, where $\Pi(\ensuremath{\mathbb{P}},\ensuremath{\mathbb{Q}})$ is the set of all couplings of $\ensuremath{\mathbb{P}}$ and $\ensuremath{\mathbb{Q}}$; i.e., all joint distributions on $\ensuremath{\mathcal{X}} \times \ensuremath{\mathcal{X}}$ with marginals $\ensuremath{\mathbb{P}}$ and $\ensuremath{\mathbb{Q}}$. As this work is focused on upper bounds and since by H\"older's inequality $\ensuremath{\mathbb{W}}_p \leq \ensuremath{\mathbb{W}}_q$ for all $p \leq q$~\cite[Remark 6.6]{villani2009optimal}, in what follows, we will only be using the Wasserstein distance of order $1$, $\ensuremath{\mathbb{W}} := \ensuremath{\mathbb{W}}_1$. \textcolor{black}{For a discrete random variable $X$, the Shannon entropy is defined as $\textup{H}(X) \coloneqq \ensuremath{\mathbb{E}}[-\log(\ensuremath{\mathbb{P}}_X(X))]$.} \section{Model and Definitions} \label{sec:model_definitions} In this section, we first formally introduce Markov decision processes. We then present the multi-armed bandit and the online optimization with partial feedback problems, two special cases of MDPs. After that, we describe the Bayesian cumulative reward and prove that it respects a data-processing inequality. Finally, we define the minimum Bayesian regret.
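Before turning to the decision-process models, the order-$1$ Wasserstein distance introduced in the preliminaries can be illustrated numerically: on the real line with $\rho(x,y)=|x-y|$, it admits the closed form $\ensuremath{\mathbb{W}}_1(\ensuremath{\mathbb{P}},\ensuremath{\mathbb{Q}}) = \int |F_{\ensuremath{\mathbb{P}}}(x) - F_{\ensuremath{\mathbb{Q}}}(x)|\, dx$ in terms of the cumulative distribution functions. The sketch below (a hypothetical helper for discrete distributions on a common grid) uses this identity; note that, unlike the relative entropy, $\ensuremath{\mathbb{W}}_1$ stays finite for distributions with disjoint supports:

```python
import numpy as np

def wasserstein_1d(xs, p, q):
    """W_1 between two pmfs p and q supported on the sorted grid xs,
    via the 1-D closed form: integral of |F_P(x) - F_Q(x)| dx."""
    xs, p, q = (np.asarray(v, float) for v in (xs, p, q))
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]  # CDF gap on each interval
    return float(np.sum(cdf_gap * np.diff(xs)))

xs = [0.0, 1.0, 2.0]
p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]               # q is p shifted one unit to the right
print(wasserstein_1d(xs, p, q))   # 1.0: all mass travels distance 1
```

Shifting a distribution by a fixed amount moves its $\ensuremath{\mathbb{W}}_1$ distance by exactly that amount, which matches the transport interpretation of the infimum over couplings.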
\subsection{Markov Decision Process} \label{subsec:markov_decision_process} In a \emph{Markov decision process} (MDP), at each time step $t = 1, \ldots, T$, an agent interacts with the environment by observing the system's state $S_t \in \ensuremath{\mathcal{S}}$ and selecting accordingly an action $A_t \in \ensuremath{\mathcal{A}}$. The system then produces an outcome $Y_t \in \ensuremath{\mathcal{Y}}$ which the agent associates with a scalar reward $R_t \in \ensuremath{\mathbb{R}}$. In Bayesian reinforcement learning, the environment is completely characterized by a random variable $\Theta \in \ensuremath{\mathcal{O}}$ with probability distribution $\ensuremath{\mathbb{P}}_\Theta$. Therefore, an MDP $\Phi$ is defined by a transition kernel $\kappa_{\textnormal{trans}} : \ensuremath{\mathscr{S}} \times (\ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{A}} \times \ensuremath{\mathcal{O}}) \to [0,1]$ such that $\ensuremath{\mathbb{P}}_{S_{t+1}|S_t, A_t, \Theta} = \kappa_{\textnormal{trans}}(\cdot,(S_t, A_t, \Theta))$, an outcome kernel $\kappa_{\textnormal{out}} : \ensuremath{\mathscr{Y}} \times (\ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{O}}) \to [0,1]$ such that $\ensuremath{\mathbb{P}}_{Y_{t}|S_t, \Theta} = \kappa_{\textnormal{out}}(\cdot,(S_t, \Theta))$, an initial state prior distribution $\ensuremath{\mathbb{P}}_{S|\Theta}$ such that $S_1 \sim \ensuremath{\mathbb{P}}_{S|\Theta}$, and a reward function $r: \ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to \ensuremath{\mathbb{R}}$. The reward is a deterministic function of the system's outcome and the chosen action, hence there is a reward kernel $\kappa_{\textnormal{reward}} : \ensuremath{\mathcal{B}}(\ensuremath{\mathbb{R}}) \times (\ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{A}} \times \ensuremath{\mathcal{O}}) \to [0,1]$ such that $\ensuremath{\mathbb{P}}_{R_t|S_t, A_t, \Theta} = \kappa_{\textnormal{reward}}(\cdot,(S_t, A_t, \Theta))$.
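As a concrete illustration of these components, the sketch below builds a toy two-state, two-action MDP whose transition and outcome kernels are parameterized by a sampled $\Theta$, and estimates the expected cumulative reward of a fixed policy by Monte Carlo. The specific prior, kernels, reward, and horizon are hypothetical choices for illustration only, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP with states {0, 1} and actions {0, 1}; theta ~ P_Theta
# parameterizes both the transition kernel and the outcome kernel.
def sample_theta():                        # draw from the prior P_Theta
    return rng.uniform(0.1, 0.9)

def trans_kernel(s, a, theta):             # sample from P_{S_{t+1} | S_t, A_t, Theta}
    stay_prob = theta if a == 0 else 1.0 - theta
    return s if rng.random() < stay_prob else 1 - s

def outcome_kernel(s, theta):              # sample from P_{Y_t | S_t, Theta}
    return int(rng.random() < (theta if s == 0 else 1.0 - theta))

def reward(y, a):                          # deterministic r(Y_t, A_t)
    return float(y) - 0.1 * a

def rollout(policy, T=20):
    """Sample Theta from the prior, then run the MDP dynamics for T steps."""
    theta = sample_theta()
    s, total = 0, 0.0
    for _ in range(T):
        a = policy(s)
        y = outcome_kernel(s, theta)
        total += reward(y, a)
        s = trans_kernel(s, a, theta)
    return total

# Monte Carlo estimate of the expected cumulative reward of a fixed policy.
estimate = np.mean([rollout(lambda s: s) for _ in range(2000)])
```

With state and action spaces this small, the expectation could also be computed exactly by dynamic programming; the sampled version simply mirrors the kernel-based description above.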
The task in Bayesian reinforcement learning is to learn a policy $\varphi = \lbrace \varphi_t: \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{H}}^t \to \ensuremath{\mathcal{A}} \rbrace_{t=1}^T$, taking an action $A_t$ based on the current observation $S_t$ and the past collected data $H^t$, where $H_{t+1} = (S_{t}, A_{t}, R_{t})$, that maximizes the \emph{cumulative expected reward} $r_\Phi(\varphi) \coloneqq \ensuremath{\mathbb{E}} \big[ \sum_{t=1}^T r\big(Y_t, \varphi_t(S_t, H^t) \big) \big]$. \subsection{Static state MDP and multi-armed bandit problem} A \emph{static state} MDP is an MDP whose transition kernel $\kappa_\textnormal{trans}$ is such that the system's state remains constant, i.e., $ S_t = S$ for all $t$. We will use the notation $\Pi$ to refer to such an MDP. A \emph{multi-armed bandit} (MAB) problem can be formalized as a static state MDP whose environment parameters $\Theta$ and outcomes $Y_t$ are independent of $S$. Similarly to an MDP, the task in a MAB problem is to learn a policy $\varphi = \lbrace \varphi_t: \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{H}}^t \to \ensuremath{\mathcal{A}} \rbrace_{t=1}^T$, taking an action $A_t$ based on the past collected data $H^t$, where $H_{t+1} = (A_{t}, R_{t})$, that maximizes the cumulative expected reward $r_\Pi(\varphi) \coloneqq \ensuremath{\mathbb{E}} \big[ \sum_{t=1}^T r\big(Y_t, \varphi_t(S,H^t) \big) \big]$. A variant of this problem is the \emph{online optimization problem with partial feedback} studied by Russo and Van Roy~\cite{russo2016information}. This problem can also be modeled as a static state MDP $\Pi$ with a finite action space $\ensuremath{\mathcal{A}}$ where at each time $t=1,\ldots,T$, the agent selects an action $A_t$ and observes a ``per-action outcome'' $Y_{t,A_t} \in \ensuremath{\mathcal{Y}}'$, giving rise to past collected data $H^t$ with $H_{t+1} = (A_{t}, Y_{t,A_t})$.
The agent associates the ``per-action outcome'' with a reward $R_t = r'(Y_{t,A_t})$ through a preference function $r': \ensuremath{\mathcal{Y}}' \to \ensuremath{\mathbb{R}}$. In this setting, the random outcome $Y_t \in \ensuremath{\mathcal{Y}}$ is the vector formed with all the possible outcomes, $Y_t \equiv \lbrace Y_{t,a}\rbrace_{a\in \ensuremath{\mathcal{A}}}$, and the reward function $r:\ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to \ensuremath{\mathbb{R}}$ is a function such that for all $Y_t \in \ensuremath{\mathcal{Y}}$ and $A_t\in \ensuremath{\mathcal{A}}$, we have $r(Y_t,A_t) = r'(Y_{t,A_t})$. In this problem as well, the environment parameters $\Theta$ and outcomes $Y_t$ are independent of $S$. \subsection{The Bayesian Cumulative Reward} \label{subsec:BCR} A decision policy that maximizes the expected cumulative reward among all policies is called a \emph{Bayesian decision policy}. The corresponding maximum expected cumulative reward is defined as the \emph{Bayesian cumulative reward}. \begin{definition} \label{def:bcr} The \emph{Bayesian cumulative reward} (BCR) of a Markov decision process $\Phi$ is defined as $R_\Phi \coloneqq \sup_{\varphi} r_\Phi(\varphi)$, where the supremum is taken over the collection $\varphi$ of all decision rules $\varphi_t: \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{H}}^t \to \ensuremath{\mathcal{A}}$ such that the expectation is defined. \end{definition} The notion of Bayesian cumulative reward can be generalized to allow the agent to select an action using some knowledge $X^t$ such that each $X_{t+1}$ is obtained from $(S_{t}, A_{t}, Y_{t}, \Theta)$.
In this generalized model, the knowledge $X_t$ is obtained through a knowledge kernel $\kappa_\textnormal{know}: \ensuremath{\mathscr{X}} \times (\ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{A}} \times \ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{O}}) \to [0,1]$ such that $\ensuremath{\mathbb{P}}_{X_{t+1} | S_{t}, A_{t}, Y_{t}, \Theta } = \kappa_{\textnormal{know}}(\cdot, (S_{t}, A_{t}, Y_{t}, \Theta))$. Now, let $\varphi = \lbrace \varphi_t: \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{X}}^t \to \ensuremath{\mathcal{A}} \rbrace_{t=1}^T$ be a policy in this relaxed setting. Then, the \emph{generalized Bayesian cumulative reward} (also written as BCR when no confusion is possible) of an MDP $\Phi$ with knowledge kernel $\kappa_\textnormal{know}$ is $R_\Phi(\kappa_\textnormal{know}) \coloneqq \sup_\varphi r_\Phi(\kappa_\textnormal{know},\varphi)$, where \begin{align*} r_\Phi(\kappa_\textnormal{know},\varphi) \coloneqq \ensuremath{\mathbb{E}} \bigg[ \sum_{t=1}^T r \big( Y_t, \varphi_t(S_t, X^t) \big) \bigg] \end{align*} and again the supremum is taken over the collection $\varphi$ of all decision rules $\varphi_t : \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{X}}^t \to \ensuremath{\mathcal{A}}$ such that the expectation above is defined. \begin{remark} Given an MDP $\Phi$, let $\ensuremath{\mathcal{X}} = \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{A}} \times \ensuremath{\mathbb{R}}$ and let $\kappa_\textnormal{know}$ be a kernel such that $X_{t+1} = (S_t, A_t, R_t)$; denote this kernel by $\kappa_\textnormal{H}$. Note that $X_t = H_t$ and $R_\Phi (\kappa_\textnormal{H} ) = R_\Phi$. \end{remark} After defining the generalized Bayesian cumulative reward, one can study the case where the agent has access to some processed information $Z_t$ obtained from the knowledge $X^t$.
Let $\kappa_\textnormal{process}$ denote a collection of processing kernels $\lbrace \kappa_\textnormal{process,t}: \ensuremath{\mathscr{Z}} \times (\ensuremath{\mathcal{X}}^t) \to [0,1] \rbrace_{t=1}^T$ such that $\ensuremath{\mathbb{P}}_{Z_{t} |X^{t}} = \kappa_\textnormal{process,t}\big(\cdot, (X^{t})\big)$ for each $t=1,\ldots,T$. Then the \emph{processed Bayesian cumulative reward} with knowledge kernel $\kappa_\textnormal{know}$ and process kernels $\kappa_\textnormal{process}$ is $R_\Phi(\kappa_\textnormal{know},\kappa_\textnormal{process} ) \coloneqq \sup_{\psi} r_\Phi(\kappa_\textnormal{know},\kappa_\textnormal{process},\psi)$, where \begin{align*} r_\Phi(\kappa_\textnormal{know},\kappa_\textnormal{process},\psi) \coloneqq \ensuremath{\mathbb{E}} \bigg[ \sum_{t=1}^T r \big( Y_t, \psi_t(S_t, Z_t) \big) \bigg] \end{align*} and the supremum is taken over the collection $\psi$ of all decision rules $\psi = \lbrace \psi_t: \ensuremath{\mathcal{S}} \times \ensuremath{\mathcal{Z}} \to \ensuremath{\mathcal{A}} \rbrace_{t=1}^T$ such that the expectation above is defined. \subsection{Data processing inequality for the BCR} \label{subsec:dataprocess} An important property of the Bayesian cumulative reward is the data processing inequality (DPI), stating that no amount of processing of the knowledge random variables can increase the cumulative reward. This is formalized in the following lemma. \begin{restatable}{lemma}{dataprocess} \label{lem:data_process} Let $\kappa_\textnormal{U}$ be a knowledge kernel associated with an MDP $\Phi$ and $\kappa_{\textnormal{V}|\textnormal{U}}$ a collection of processing kernels. Then, the cumulative Bayesian reward using the knowledge from $U$ is at least as large as the processed Bayesian cumulative reward using the processed knowledge from $V$. 
More precisely, \begin{equation*} R_\Phi (\kappa_\textnormal{U} ) \geq R_\Phi (\kappa_\textnormal{U},\kappa_{\textnormal{V}|\textnormal{U}} ). \end{equation*} \end{restatable} \begin{proof}[Intuition of the proof] The proof follows by iteratively employing~\cite[Lemma~3.22]{kallenberg2005probabilistic} in a similar fashion to~\cite[Lemma~1]{xu2020minimum} and taking care that the random objects in the definitions of $R_\Phi(\kappa_U)$ and $R_\Phi(\kappa_U,\kappa_{V|U})$ follow the distributions described by the dynamics of the MDP $\Phi$ and their respective actions $\varphi_t$ and $\psi_t$. The complete proof is in Appendix~\ref{sec:proofs_lemas}. \end{proof} \subsection{The Minimum Bayesian Regret (MBR)} \label{subsec:minimum_bayesian_regret} We define the \emph{fundamental limit of the Bayesian cumulative reward} as the Bayesian cumulative reward for a knowledge kernel such that $X_t = \Theta$, that is, when the environment parameters are known to the agent. We denote such a kernel as $\kappa_{\Theta}$. \begin{definition} The \emph{fundamental limit of the Bayesian cumulative reward} of a Markov decision process $\Phi$ is defined as \begin{align*} R_\Phi(\kappa_\Theta) \coloneqq \sup_{ \lbrace \psi_t \rbrace_{t=1}^T} \ensuremath{\mathbb{E}} \bigg[ \sum_{t=1}^T r \big( Y_t, \psi_t(S_t, \Theta) \big) \bigg], \end{align*} where the kernel $\kappa_\Theta$ is such that $X_t = \Theta$ for all $t=1,\ldots,T$. \label{def:fl_bcr} \end{definition} \begin{assumption} For the rest of the paper, we will assume that the supremum from~\Cref{def:fl_bcr} exists and we will denote by $\psi^\star = \lbrace \psi^\star_t \rbrace_{t=1}^T$ a policy that achieves it. \end{assumption} We define the gap between this limit and the Bayesian cumulative reward as the \emph{minimum Bayesian regret}.
\begin{definition} \label{def:mbr} The \emph{minimum Bayesian regret (MBR)} of a Markov decision process $\Phi$ is defined as \begin{equation*} \textnormal{MBR}_\Phi \coloneqq R_\Phi (\kappa_{\Theta} ) - R_\Phi (\kappa_{\textnormal{H}} ). \end{equation*} \end{definition} \textcolor{black}{ The MBR characterizes the regret of the optimal decision policy that has access to the collected data, but not to the environment parameters, and is therefore an algorithm-independent quantity. It can be interpreted as the inherent difficulty of the reinforcement learning problem resulting from the lack of knowledge about the environment parameters $\Theta$. } \section{Upper bounds on the Minimum Bayesian Regret} \label{sec:upper_bounds} In this section, we start by giving an upper bound on the minimum Bayesian regret in terms of the difference between the fundamental limit of the BCR, $R_\Phi(\kappa_\Theta)$, and the processed BCR with the optimal Bayes parameters' estimator $\ensuremath{\mathbb{P}}_{\Theta|H^t}$ as the processing kernel and the optimal policy of $R_\Phi(\kappa_\Theta)$. That is, the difference between the best obtainable reward knowing the environment parameters $\Theta$, and the best obtainable reward inferring the parameters with an optimal estimator. This bound, in turn, can be developed into a bound that compares the sum of the individual terms in the optimal trajectory of $R_\Phi(\kappa_\Theta)$ and those obtained with the processing kernels $\ensuremath{\mathbb{P}}_{\Theta|H^t}$. This way, we can employ techniques similar to those in the literature (e.g., \cite{xu2017information, rodriguez2021tighter}) and bound the MBR in terms of a sum of terms depending on the statistical difference between the distributions of those two trajectories. \subsection{The Thompson sampling regret} Consider the fundamental limit of the BCR, $R_\Phi(\kappa_\Theta)$, and its optimal trajectory $\psi^\star$.
A natural algorithm to try to solve an MDP $\Phi$ when the environment parameters $\Theta$ are unknown is to estimate these parameters with some processing kernel of the history $\kappa_{\Theta|\textnormal{H}}$ and select an optimal action based on such processing. An elegant scenario would be to have the additional information of knowing the optimal trajectory $\psi^\star$ and to be able to calculate the Bayes optimal estimator $\ensuremath{\mathbb{P}}_{\Theta|H^t}$ to process the history. In fact, for a static MDP $\Pi$, this algorithm is studied in the literature and is known as Thompson sampling \textcolor{black}{~\cite{thompson1933likelihood,scott2010modern,chapelle2011empirical,may2012optimistic,osband2013more,russo2016information}}. The next lemma shows that the MBR is bounded from above by the difference of $R_\Phi(\kappa_\Theta)$ and the BCR of such an algorithm, $r_\Phi(\kappa_\textnormal{H}, \kappa_{\Theta|\textnormal{H}}, \psi^\star)$. \begin{lemma} \label{lemma:thompson_sampling} For any MDP $\Phi$, the MBR can be upper bounded as follows, \begin{equation*} \textnormal{MBR}_\Phi \leq R_\Phi(\kappa_\Theta) - r_\Phi(\kappa_\textnormal{H},\kappa_{\Theta|\textnormal{H}},\psi^\star). \end{equation*} \end{lemma} \begin{proof} The proof starts by using Lemma~\ref{lem:data_process} to lower bound $R_\Phi (\kappa_{\textnormal{H}} )$ with $R_\Phi (\kappa_{\textnormal{H}},\kappa_{\Theta|\textnormal{H}} ) $. The last inequality follows from the definition of $R_\Phi (\kappa_{\textnormal{H}},\kappa_{\Theta|\textnormal{H}} ) $ being the supremum over $\psi$ of $r_\Phi(\kappa_\textnormal{H},\kappa_{\Theta|\textnormal{H}},\psi)$.
More precisely, \begin{align*} \textnormal{MBR}_\Phi &= R_\Phi (\kappa_{\Theta} ) - R_\Phi (\kappa_{\textnormal{H}} )\\ &\leq R_\Phi (\kappa_{\Theta} )- R_\Phi (\kappa_{\textnormal{H}},\kappa_{\Theta|\textnormal{H}} )\\ &\leq R_\Phi(\kappa_\Theta) - r_\Phi(\kappa_\textnormal{H},\kappa_{\Theta|\textnormal{H}},\psi^\star). \end{align*} \end{proof} In what follows, we will use the notations $Y^\star_t$ and $S^\star_t$ for the outcomes and states obtained from the actions derived from $\psi^\star$, the kernels that describe the MDP $\Phi$, and the knowledge kernel $\kappa_\Theta$. Similarly, we will let $\hat{Y}_t$, $\hat{S}_t$ and $\hat{H}_t$ be the outcomes, states, and histories obtained from the actions derived from $\psi^\star$, the kernels that describe the MDP $\Phi$ with knowledge kernel $\kappa_\textnormal{H}$, and processing kernels $\kappa_{\Theta|\textnormal{H}}$. Building on Lemma~\ref{lemma:thompson_sampling}, the MBR can be upper bounded by the sum of the individual differences of the expected rewards obtained following the optimal trajectory $(Y^\star_t, S^\star_t)_{t=1}^T$ and the trajectory of the aforementioned algorithm $(\hat{Y}_t, \hat{S}_t)_{t=1}^T$ given the history $\hat{H}^t$. \begin{comment} \begin{lemma} \label{lemma:mdp_diff_expectations_thompson} For any MDP $\Phi$, the MBR can be upper bounded as follows, \begin{align*} \textnormal{M}&\textnormal{BR}_\Phi \leq \\ &\sum_{t=1}^T \ensuremath{\mathbb{E}} \bigg[ \ensuremath{\mathbb{E}} \Big[ r\big(Y^\star_t, \psi^\star_t (S^\star_t, \Theta)\big) - r\big(\hat{Y}_t, \psi^\star_t (\hat{S}_t, \hat{\Theta}) \big)\Big| \Theta, \hat{\Theta}, H^t \Big] \bigg].
\end{align*} \end{lemma} \end{comment} Unrolling $R_\Phi(\kappa_\Theta)$ and $r_\Phi(\kappa_\textnormal{H}, \kappa_{\Theta|\textnormal{H}}, \psi^\star)$ and using the linearity of the expectation and the law of total expectation reveals that the right-hand side term from Lemma~\ref{lemma:thompson_sampling} can be written as \begin{equation} \sum_{t=1}^T \ensuremath{\mathbb{E}} \bigg[ \ensuremath{\mathbb{E}} \Big[ r\big(Y^\star_t, \psi^\star_t (S^\star_t, \Theta)\big) - r\big(\hat{Y}_t, \psi^\star_t (\hat{S}_t, \hat{\Theta}_t) \big)\Big| \Theta, \hat{\Theta}_t, \hat{H}^t \Big] \bigg]. \label{eq:mdp_diff_expectations_thompson} \end{equation} \begin{remark} \label{rem:diff_dists} The importance of this re-formulation lies in the fact that the first term inside the conditional expectation is distributed according to $\ensuremath{\mathbb{P}}_{Y^\star_t,S^\star_t|\Theta}$ since $(Y^\star_t,S^\star_t)$ are independent of the history $\hat{H}^t$ when the environment parameters $\Theta$ are known. Similarly, the second term is distributed according to $\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}$ since $(\hat{Y}_t,\hat{S}_t)$ are independent of the sampled parameters $\hat{\Theta}_t$ when the history is known. Both facts follow from the Markov chain $(Y^\star_t,S^\star_t) - \Theta - (\hat{Y}_t, \hat{S}_t) - \hat{H}^t - \hat{\Theta}_t$. Therefore, conditioned on the history $\hat{H}^t$ and the environment parameters $\Theta, \hat{\Theta}_t$, the terms in the sum of~\eqref{eq:mdp_diff_expectations_thompson} are a difference of expectations of random objects whose randomness comes from distributions on the same space, which permits us to employ known decoupling techniques to bound these differences in terms of such distributions.
\end{remark} In the sequel, we use this fact to bound the MBR by the sum of terms depending on the statistical difference between the distributions of the elements from the optimal trajectory $(Y^\star_t, S^\star_t)|\Theta$ and the trajectory described by the algorithm with the Bayes optimal parameters' estimator $(\hat{Y}_t, \hat{S}_t)|\hat{H}^t$. More precisely, we use the techniques from e.g.~\cite{russo2016information,xu2017information} when the reward is sub-Gaussian, from e.g.~\cite{rodriguez2021tighter,wang2019information} when it is Lipschitz, and from~\cite{rodriguez2021tighter} to connect both settings when the reward is bounded. \subsection{Sub-Gaussian reward functions} \label{subsec:subgaussian} We consider arbitrary reward functions $r:\ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to \ensuremath{\mathbb{R}}$ mapping an outcome and an action to a scalar reward. Under the assumption that the random reward $r(\hat{Y}_t,\psi_t^\star(\hat{S}_t,\theta))$ is $\sigma_t^2$-sub-Gaussian under $\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t = \hat{h}^t}$ for all $\theta \in \ensuremath{\mathcal{O}}$ and all $\hat{h}^t \in \ensuremath{\mathcal{H}}^t$, the $\textnormal{MBR}_\Phi$ is bounded by a sum of terms related to the relative entropy between the distributions of the elements at each step of the optimal trajectory, i.e., $Y^\star_t, S^\star_t$, and of the Thompson-sampled trajectory, i.e., $\hat{Y}_t, \hat{S}_t$. This is formalized in the following proposition.
\begin{restatable}{proposition}{kldivboundsubgaussian} \label{prop:kl_div_subgaussian} If for all $t = 1,\ldots,T$, the random reward $r(\hat{Y}_t,\psi_t^\star(\hat{S}_t,\theta))$ is $\sigma_t^2$-sub-Gaussian under $\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t = \hat{h}^t}$ for all $\theta \in \ensuremath{\mathcal{O}}$ and all $\hat{h}^t \in \ensuremath{\mathcal{H}}^t$, then, \begin{align*} \textnormal{MBR}_{\Phi} \leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \Big[ \sqrt{2 \sigma_t^2 \KL{\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}}{\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}}} \Big]. \end{align*} \end{restatable} \begin{proof} \textcolor{black}{The proof follows from applying Donsker-Varadhan's inequality~\cite[Theorem~5.2.1]{gray2011entropy} to~\eqref{eq:mdp_diff_expectations_thompson} using~\Cref{rem:diff_dists} in a similar fashion to~\cite{russo2016information,xu2017information}.} \end{proof} \subsection{Lipschitz reward functions} \label{subsec:lipschitz} In this subsection, we suppose that the set of outcomes and actions $(\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{A}})$, together with the metric $\rho:(\ensuremath{\mathcal{Y}}\times \ensuremath{\mathcal{A}}) \times (\ensuremath{\mathcal{Y}}\times \ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\mathbb{R}}_+$, forms a Polish metric space. Assume that the reward function $r: \ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to \ensuremath{\mathbb{R}}$ is $L$-Lipschitz under the metric $\rho$, that is, $|r(y,a) - r(y',a')| \leq L \rho((y,a), (y',a'))$ for all $y,y' \in \ensuremath{\mathcal{Y}}$ and $a,a' \in \ensuremath{\mathcal{A}}$. Under this assumption, the Wasserstein distance can be used to upper bound the minimum Bayesian regret. \begin{restatable}{proposition}{wasserstein} \label{prop:wasserstein} Suppose that $(\ensuremath{\mathcal{Y}}\times\ensuremath{\mathcal{A}})$ is a metric space with metric $\rho$.
If the reward function $r: \ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to \ensuremath{\mathbb{R}}$ is $L$-Lipschitz under the metric $\rho$, then \begin{equation*} \textnormal{MBR}_{\Phi} \leq L \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}, \ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}) \big]. \end{equation*} \end{restatable} \begin{proof} \textcolor{black}{The proof follows from applying Kantorovich–Rubinstein duality~\cite[Remark~6.5]{villani2009optimal} to~\eqref{eq:mdp_diff_expectations_thompson} using~\Cref{rem:diff_dists} analogously to~\cite{rodriguez2021tighter,wang2019information}.} \end{proof} \subsection{Bounded reward functions} \label{subsec:bounded} We can obtain upper bounds on the minimum Bayesian regret for bounded reward functions as particular cases of both Proposition~\ref{prop:kl_div_subgaussian} and Proposition~\ref{prop:wasserstein}. Without loss of generality, we will consider reward functions bounded in $[0,1]$. First, from Hoeffding's lemma~\cite[Theorem~1]{hoeffding1994probability}, we have that if $r:\ensuremath{\mathcal{Y}} \times \ensuremath{\mathcal{A}} \to [0,1]$ then the reward is $1/4$-sub-Gaussian under any distribution of the arguments. This fact and Proposition~\ref{prop:kl_div_subgaussian} lead to Corollary~\ref{cor:kl_div_bounded_reward}. \begin{restatable}{corollary}{kldivboundboundedcor} \label{cor:kl_div_bounded_reward} If the reward function is bounded in $[0,1]$, then, for any MDP $\Phi$, \begin{align*} \textnormal{MBR}_{\Phi}&\leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \bigg[\sqrt{\frac{1}{2} \KL{\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}}{\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}}} \bigg].
\end{align*} \end{restatable} Second, we can note that a bounded $[0,1]$ function is $1$-Lipschitz under the discrete metric (or Hamming distortion) $\rho((y,a),(y',a')) \coloneqq \mathbbm{1}_{(y,a)\neq(y',a')}$, where $\mathbbm{1}$ is the indicator function. Using this fact, we can obtain Corollary~\ref{cor:wasserstein} from Proposition~\ref{prop:wasserstein}. \begin{restatable}{corollary}{wassersteincor} \label{cor:wasserstein} If the reward function is bounded in $[0,1]$, then, for any MDP $\Phi$, \begin{equation*} \textnormal{MBR}_{\Phi} \leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}, \ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}) \big]. \end{equation*} \end{restatable} \begin{remark} \label{rem:wasserstein_is_tighter} Corollary~\ref{cor:wasserstein} provides a tighter bound than Corollary~\ref{cor:kl_div_bounded_reward}. Indeed, if the geometry is ignored (i.e., the discrete metric is considered), then for all $t=1,\ldots,T$, \begin{align*} \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta},& \ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}) \big]\\ &= \ensuremath{\mathbb{E}} \big[\ensuremath{\textsc{\texttt{TV}}}(\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}, \ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}) \big]\\ &\leq \ensuremath{\mathbb{E}} \bigg[ \sqrt{\frac{1}{2}\KL{\ensuremath{\mathbb{P}}_{Y^\star_t, S^\star_t|\Theta}}{\ensuremath{\mathbb{P}}_{\hat{Y}_t,\hat{S}_t|\hat{H}^t}}} \bigg], \end{align*} where the equality follows from~\cite[Proof of Theorem~6.15]{villani2009optimal} and the inequality follows from Pinsker's inequality~\cite[Theorem~6.5]{polyanskiy2014lecture} and the Bretagnolle–Huber result~\cite[Proof of Lemma~2.1]{bretagnolle1978estimation}.
\end{remark} \section{Upper bounds for static MDPs} \label{sec:upper_bounds_MAB} In this section, we leverage the bound from Section~\ref{sec:upper_bounds} to obtain bounds on the minimum Bayesian regret for static Markov decision processes. We focus here on the case where the reward function is bounded in $[0,1]$ and leave the sub-Gaussian and Lipschitz cases to Appendix~\ref{sec:static_mdps_subg_lip}, since they are analogous to the previous section. We first present upper bounds on the MBR for the multi-armed bandit problem. We then produce upper bounds for the online optimization with partial feedback problem, and show how they can recover \emph{from below} the results from Russo and Van Roy~\cite{russo2016information}. Similarly to Section~\ref{sec:upper_bounds}, we can apply Lemma~\ref{lemma:thompson_sampling} to upper bound the MBR for static MDPs. In the case of a static MDP, the right-hand side of that bound can be written as \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}} \bigg[ \ensuremath{\mathbb{E}} \Big[ r\big(Y_t, \psi^\star_t (S, \Theta)\big) - r\big(Y_t, \psi^\star_t (S, \hat{\Theta}_t) \big)\Big| \Theta, \hat{\Theta}_t, \hat{H}^t \Big] \bigg]. \end{align*} This rewriting of the bound is obtained in the same way as~\eqref{eq:mdp_diff_expectations_thompson}: unrolling $R_\Pi(\kappa_\Theta)$ and $r_\Pi(\kappa_\textnormal{H}, \kappa_{\Theta|\textnormal{H}}, \psi^\star)$, and using the linearity of the expectation, the law of total expectation, and the fact that the state $S$ does not depend on the time $t=1,\ldots,T$.
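As an illustrative aside, the comparison between the total-variation and relative-entropy bounds invoked in Remark~\ref{rem:wasserstein_is_tighter} can be checked numerically. The sketch below is not part of the formal development; the distributions $p$ and $q$ are arbitrary placeholders rather than quantities from the model:

```python
import math

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """Relative entropy KL(p || q), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative placeholder distributions on a 3-letter alphabet.
p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

tv = tv_distance(p, q)
pinsker = math.sqrt(0.5 * kl_divergence(p, q))
assert tv <= pinsker  # Pinsker: TV <= sqrt(KL / 2)
```

Pinsker's inequality guarantees the final assertion for any such pair, which is why, under the discrete metric, the Wasserstein (total-variation) bound is never looser than the relative-entropy bound.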
In the case where the outcomes $\lbrace Y_t \rbrace_{t=1,\ldots,T}$ do not depend on the state $S$, as in a MAB problem, it is possible to rewrite the actions taken by the optimal policy $\psi^\star_t(S,\Theta)$ as $\gamma^\star(\Theta)$, where the function $\gamma^\star: \ensuremath{\mathcal{O}} \to \ensuremath{\mathcal{A}}$ is such that for all $S \in \ensuremath{\mathcal{S}}$ and all $\Theta \in \ensuremath{\mathcal{O}}$, it holds that $\psi^\star_t(S,\Theta) = \gamma^\star(\Theta)$. In that case, it follows that the right-hand side term from Lemma~\ref{lemma:thompson_sampling} can be written as \begin{equation} \sum_{t=1}^T \ensuremath{\mathbb{E}} \Big[ \ensuremath{\mathbb{E}} \big[ r(Y_t,A^\star) - r(Y_t,\hat{A}_t)\big|A^\star,\hat{A}_t,\hat{H}^t \big] \Big]. \label{eq:smdp_diff_expectations_thompson} \end{equation} \begin{remark} \label{rem:smdp_diff_dists} Under this reformulation, the outcome in the first term inside the conditional expectation is distributed according to $\ensuremath{\mathbb{P}}_{Y_t|A^\star,\hat{H}^t}$ and the second term is distributed according to $\ensuremath{\mathbb{P}}_{Y_t|\hat{H}^t}$. This happens since $Y_t$ is independent of the sampled environment parameters $\hat{\Theta}_t$, and therefore independent of the sampled action $\hat{A}_t$, when the history $\hat{H}^t$ is known. \end{remark} \subsection{Multi-armed bandit problem} In this subsection, we propose minimum Bayesian regret bounds for multi-armed bandit problems $\Pi$. The tightest bound we obtain relates the $\textnormal{MBR}_\Pi$ to the Wasserstein distance between the conditional probability of the outcome given the optimal action and the history collected following a Thompson sampling policy, and the conditional probability of the outcome given only the history.
\begin{restatable}{proposition}{MABwassersteinBounded} \label{prop:MABwassersteinBounded} If the reward function is bounded in $[0,1]$, then for any static MDP $\Pi$, \begin{equation*} \textnormal{MBR}_{\Pi} \leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y_t|A^\star,\hat{H}^t}, \ensuremath{\mathbb{P}}_{Y_t|\hat{H}^t}) \big]. \end{equation*} \end{restatable} \begin{proof} \textcolor{black}{The proof follows from applying Kantorovich–Rubinstein duality~\cite[Remark~6.5]{villani2009optimal} to~\eqref{eq:smdp_diff_expectations_thompson} using~\Cref{rem:smdp_diff_dists} in the same way as~\cite{rodriguez2021tighter,wang2019information}.} \end{proof} Using the same arguments as in Remark~\ref{rem:wasserstein_is_tighter}, together with Jensen's inequality, one can relax the bound from Proposition~\ref{prop:MABwassersteinBounded} and relate the $\textnormal{MBR}_\Pi$ to the conditional mutual information between the outcome $Y_t$ and the optimal action $A^\star$ given the history $\hat{H}^t$. This is formalized in the following corollary. \begin{restatable}{corollary}{MABbounded} \label{cor:MAB_bounded} If the reward function is bounded in $[0,1]$, then for any static MDP $\Pi$, \begin{align*} \textnormal{MBR}_{\Pi} &\leq \sum_{t=1}^T \sqrt{\frac{1}{2} \textup{I}(Y_t;A^\star|\hat{H}^t)}. \end{align*} \end{restatable} This conditional mutual information can be interpreted as the remaining ``amount of surprise about the outcome $Y_t$'' after observing the history $\hat{H}^t$ that is removed when the optimal action $A^\star$ is revealed.
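As a concrete illustration of the algorithm analyzed in this section, Thompson sampling for a Beta--Bernoulli bandit can be sketched as follows: at each round, parameters $\hat{\Theta}_t$ are drawn from the posterior given the history $\hat{H}^t$, and the action that is optimal for the sampled parameters, $\hat{A}_t = \gamma^\star(\hat{\Theta}_t)$, is played. The uniform priors, arm means, and horizon below are illustrative placeholders, not quantities from the analysis:

```python
import random

def thompson_bernoulli(true_means, T, seed=0):
    """Beta-Bernoulli Thompson sampling: sample parameters from the
    posterior given the history, then act greedily on the sample."""
    rng = random.Random(seed)
    K = len(true_means)
    alpha = [1.0] * K  # Beta(1, 1) uniform priors on each arm mean
    beta = [1.0] * K
    total_reward = 0.0
    for _ in range(T):
        # Draw hat{Theta}_t from the posterior P_{Theta | H^t}.
        samples = [rng.betavariate(alpha[k], beta[k]) for k in range(K)]
        # Play hat{A}_t = gamma*(hat{Theta}_t), optimal for the sample.
        a = max(range(K), key=lambda k: samples[k])
        r = 1.0 if rng.random() < true_means[a] else 0.0
        total_reward += r
        # Conjugate posterior update from the observed Bernoulli reward.
        alpha[a] += r
        beta[a] += 1.0 - r
    return total_reward
```

Running `thompson_bernoulli([0.2, 0.8], 2000)` quickly concentrates play on the better arm, consistent with the logarithmic-in-$T$ regret behavior that the bounds above quantify.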
\subsection{Online optimization with partial feedback problem} In the special case of online optimization with partial feedback, the right-hand-side term from Lemma~\ref{lemma:thompson_sampling}, rewritten in~\eqref{eq:smdp_diff_expectations_thompson}, can be formulated in a compact form using the preference function: \begin{equation} \label{eq:oopf_diff_expectations_thompson} \sum_{t=1}^T \ensuremath{\mathbb{E}} \Big[ \ensuremath{\mathbb{E}} \big[ r'(Y_{t,A^\star})\big|A^\star,\hat{A}_t,\hat{H}^t\big]-\ensuremath{\mathbb{E}} \big[ r'(Y_{t,\hat{A}_t})\big|A^\star,\hat{A}_t,\hat{H}^t\big] \Big]. \end{equation} \begin{remark} \label{rem:oopf_diff_dists} In this last rewriting, the outcome in the first term inside the conditional expectation is distributed according to $\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}$ and the second term is distributed according to $\ensuremath{\mathbb{P}}_{Y_{t,\hat{A}_t}|\hat{H}^t}$. This holds since $Y_t$ is independent of the sampled environment parameters $\hat{\Theta}_t$, and therefore independent of the sampled action $\hat{A}_t$, when the history $\hat{H}^t$ is known. \end{remark} As the terms in~\eqref{eq:oopf_diff_expectations_thompson} are a difference of expectations of random objects whose randomness comes from distributions on the same space, we can upper bound the minimum Bayesian regret using the Wasserstein distance in terms of such distributions following the techniques from~\cite{rodriguez2021tighter,wang2019information}. This is formalized in the following proposition. \begin{restatable}{proposition}{RUSSOwassersteinBounded} \label{prop:Russo_wasserstein_bounded} If the reward function is bounded in $[0,1]$, then for any \emph{online optimization problem with partial feedback} $\Pi$, \begin{equation*} \textnormal{MBR}_{\Pi} \leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}, \ensuremath{\mathbb{P}}_{Y_{t,A^\star}|\hat{H}^t})\big].
\end{equation*} \end{restatable} \begin{proof} \textcolor{black}{The proof follows from applying Kantorovich–Rubinstein duality~\cite[Remark~6.5]{villani2009optimal} to~\eqref{eq:oopf_diff_expectations_thompson} using~\Cref{rem:oopf_diff_dists}.} \end{proof} This bound can also be relaxed, following a similar procedure as in Remark~\ref{rem:wasserstein_is_tighter}, to relate the $\textnormal{MBR}_\Pi$ to the relative entropy between the distribution of the ``per-action outcome'' $Y_{t,A^\star}$ given the optimal action $A^\star$ and the history $\hat{H}^t$, and its distribution given the history only. \begin{restatable}{corollary}{RUSSObounded} \label{cor:Russo_bounded} If the reward function is bounded in $[0,1]$, then for any \emph{online optimization problem with partial feedback} $\Pi$, \begin{align*} \textnormal{MBR}_{\Pi} \leq \sum_{t=1}^T \ensuremath{\mathbb{E}} \bigg[\sqrt{\frac{1}{2} \KL{\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}}{\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|\hat{H}^t}}} \bigg]. \end{align*} \end{restatable} As Proposition~\ref{prop:Russo_wasserstein_bounded} above is derived using Lemma~\ref{lemma:thompson_sampling}, its bound naturally holds for the regret of the Thompson sampling algorithm, namely: \begin{align*} R_\Pi(\kappa_\Theta) - &r_\Pi(\kappa_\textnormal{H},\kappa_{\Theta|\textnormal{H}},\psi^\star)\leq \nonumber \\ &\sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}, \ensuremath{\mathbb{P}}_{Y_{t,A^\star}|\hat{H}^t})\big]. \end{align*} We can further relax this bound to recover results from Russo and Van Roy~\cite{russo2016information}. More specifically, we can recover the general bound combining~\cite[Propositions~1 and~3]{russo2016information}, and the specific bound combining~\cite[Propositions~1 and~4]{russo2016information}, for which it is assumed that the outcome $Y_t$ is perfectly revealed upon observing $Y_{t,a}$ for any $a\in\ensuremath{\mathcal{A}}$.
These claims are formalized in Corollary~\ref{cor:propositions_from_Russo}. \begin{restatable}{corollary}{RUSSOboundedCorollary} \label{cor:propositions_from_Russo} If the reward function is bounded in $[0,1]$, then for any \emph{online optimization problem with partial feedback} $\Pi$, we have the following inequality on the bound from Proposition~\ref{prop:Russo_wasserstein_bounded}: \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}, \ensuremath{\mathbb{P}}_{Y_{t,A^\star}|\hat{H}^t})\big] \leq\sqrt{\frac{1}{2} |\ensuremath{\mathcal{A}}| \textup{H}(A^\star) T }. \end{align*} Under the additional assumption that the outcome $Y_t$ is perfectly revealed upon observing $Y_{t,a}$ for any $a\in\ensuremath{\mathcal{A}}$, one can obtain a tighter result: \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}} \big[\ensuremath{\mathbb{W}}(\ensuremath{\mathbb{P}}_{Y_{t,A^\star}|A^\star,\hat{H}^t}, \ensuremath{\mathbb{P}}_{Y_{t,A^\star}|\hat{H}^t})\big] \leq\sqrt{\frac{1}{2} \textup{H}(A^\star) T }. \end{align*} \end{restatable} \begin{proof}[Intuition of the proof] For both results, the proof starts by using the same steps as in Remark~\ref{rem:wasserstein_is_tighter} to relax the bound from Proposition~\ref{prop:Russo_wasserstein_bounded}. Then, both proofs rely on the application of the Cauchy--Schwarz and Jensen inequalities to obtain a bound using the sum of the conditional mutual informations between the optimal action $A^\star$ and the ``per-action outcome'' $Y_{t,A_t}$ given the history $\hat{H}^t$. One can then show that the entropy of the optimal action $\textup{H}(A^\star)$ upper bounds this sum of conditional mutual informations $\textup{I}(A^\star; Y_{t,A_t}|\hat{H}^t)$, which yields the desired results.
The assumption that the outcome $Y_t$ is perfectly revealed upon observing $Y_{t,a}$ for any $a\in\ensuremath{\mathcal{A}}$ averts an extra use of the Cauchy--Schwarz inequality and thus allows us to avoid an explicit dependence on the number of actions through the multiplicative constant $\sqrt{|\ensuremath{\mathcal{A}}|}$. The full proof can be found in Appendix~\ref{sec:oopf_mdps_subg_lip}. \end{proof} \section{Conclusion} \label{sec:conclusion} In this paper, building on the results from~\cite{xu2020minimum}, we introduce a framework to study the Bayesian cumulative reward and the minimum Bayesian regret for reinforcement learning problems in the form of Markov decision processes. The latter is an algorithm-independent quantity and reflects the difficulty of the reinforcement learning problem. We prove a data processing inequality for the Bayesian cumulative reward and present upper bounds on the minimum Bayesian regret using the Wasserstein distance and the relative entropy. We apply these results to the particular cases of the multi-armed bandit and the online optimization with partial feedback problems. For this last problem, our bound can be relaxed to recover from below the results presented in~\cite{russo2016information}. \IEEEtriggeratref{12} \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Semi-analytical and numerical studies agree that the first generation of stars are likely to have formed at very high redshifts, $z \gsim 20$, at locations corresponding to rare peaks of the fluctuating primordial density field. Numerical simulations suggest that the first collapsed objects host high--mass ($\sim100M_\odot$), low--metallicity (so--called 'Population III', hereafter 'PopIII') stars \citep{ABN02, BCL02}. In the absence of any feedback processes, these stars, and their accreting remnant black holes (BHs), could significantly reionize the intergalactic medium (IGM). Recent evidence from the $z\sim6$ quasars discovered in the Sloan Digital Sky Survey (SDSS), whose spectra exhibit dark and sharp Gunn-Peterson troughs \citep{FCK06, MH04}, and from the cosmic microwave background polarization anisotropies measured by the {\it WMAP} satellite, which imply a conservative electron scattering optical depth $\tau_e = 0.09\pm0.03$ \citep{Page06, Spergel06}, suggest that cosmic reionization was delayed to a later stage of structure formation, $z\sim$ 6 -- 10. Such a late reionization would require a substantial suppression of very high redshift ionizing sources (e.g., \citealt{HB06}). The impact that the first generation of stars would have on their surroundings plays a crucial role in how reionization progresses at high redshifts \citep{HH03, Cen03_postWMAP, WL03_postWMAP}, but is poorly understood from first principles. Several feedback mechanisms are expected to be potentially important, including both direct chemical and energetic input from the first supernovae, and radiative processes due to UV and X--ray radiation from the first stars themselves. The enhanced metallicity due to the supernovae will result in an increase in the cooling rate, leading to more efficient star formation and lower stellar masses (e.g., \citealt{Omukai2000, Bromm2003}).
The signature of the metals that are produced may be seen in quasar absorption spectra, although semi-analytic work on the subject suggests that chemical feedback from pregalactic objects did not play a large role in setting the observed intergalactic metallicity distribution at $z\lsim 6$ \citep{SFM02, SSF03}. In this paper we focus on radiative feedback, and postpone the study of PopIII metal pollution and feedback to future work. Radiative feedback can be either positive or negative, in that it can enhance or suppress subsequent star formation. Positive feedback can result when the enhanced free-electron fraction from ionizing photons or from shocks catalyzes the formation of molecular hydrogen (H$_2$), which can provide the dominant cooling channel at high densities and low temperatures. The catalyst electrons can be produced by X-rays emitted as a result of gas accretion onto early BHs (e.g., \citealt{HRL96}), or by a previous epoch of photoionization inside ``fossil'' HII regions \citep{RGS02b, OH03, OShea05}, or by collisional ionization in protogalactic shocks \citep{SK87, Susa98, OH02}. Indeed, cosmological simulations have noted net positive feedback close to the edge of HII regions \citep{RGS02b, KM05}. Negative feedback can result from chemical or thermodynamical effects. UV photons in the Lyman-Werner (LW) band of ${\rm H_2}$ can dissociate these molecules, thereby reducing their effectiveness in cooling the gas (e.g., \citealt{HRL97, HAR00, CFA00, MBA01}). Active radiative heating can photo--evaporate gas in low--mass halos \citep{Efstathiou92, BL99, SIR04}. Additionally, a past episode of photoionization heating in inactive ``fossil'' HII regions can leave the gas with tenacious excess entropy, reducing the gas densities, hindering ${\rm H_2}$ formation, cooling and collapse \citep{OH03}.
The above feedback effects, their relative importance, and their net outcome on star formation within the population of early halos, are not well understood ab initio, and are poorly constrained by observations at high redshifts. Although invaluable in furthering our understanding of the main physical concepts, semi-analytic studies (e.g., \citealt{HRL97, OH03, MST05}) do not fully take into account the details of cosmological density structure and evolution, which can be very important. On the other hand, numerical studies are limited in scope due to computational restrictions associated with full radiative transfer on such large scales and large dynamical ranges (see, e.g., \citealt{Iliev06} for a recent review). In particular, it is difficult to couple radiative transfer and hydrodynamics --- most work to date has focused on either one or the other issue (with some exceptions, e.g., \citealt{GA01, RGS02a,SIR04}). Techniques approximating full radiative transfer can provide a crucial speed-up of computing time, but such simulations still do not provide a large statistical sample for a detailed study of radiative feedback, and/or can be limited by a small dynamical range, thereby missing the smallest halos which would be most susceptible to negative feedback (e.g., \citealt{RGS02a}). {\it The purpose of the present paper is to statistically investigate UV radiative feedback associated with the first generation of stars}. Prior numerical studies have either focused on radiation in a single band, such as LW photons \citep{MBA01} or X-rays \citep{MBA03, KM05}, or lacked quantitative statistics and have not included photo--heating and photo--evaporation of low--mass halos \citep{RGS02a, RGS02b, OShea05}. The recent work by \citet{ABS05} has studied the impact of photoionizing radiation in detail within a single HII region, but without self-consistently modeling the hydrodynamics.
In the present study, we quantify the combined effects of UV photo-ionization and LW radiation from nearby Pop III star formation. Rather than simulating the radiative transfer within an individual HII region, we take a statistical approach. In particular, we examine how halos which are in the process of collapsing are affected by spatially constant but potentially short-lived radiation backgrounds at various intensities. This allows us to calibrate the sign and amplitude of the resulting feedback, in order to include these effects in future semi-analytic studies. To this end, we carry out simulations in which a large region is photo-ionized for a short period of time, and neglect radiative transfer effects (we discuss the impact of this approximation in more detail in \S~\ref{sec:results} below). The rest of this paper is organized as follows. In \S~\ref{sec:sims}, we describe the simulations. In \S~\ref{sec:results}, we present the results of the simulations with photoionization heating, but without a LW background. In \S~\ref{sec:lw}, we discuss simulation runs that also include a LW background. Finally, in \S~\ref{sec:conc}, we summarize our conclusions and discuss the implications of this work. For completeness and for reference, in the Appendix, we present the dark matter halo mass functions found in our simulations. Throughout this paper, we adopt the background cosmological parameters ($\Omega_\Lambda$, $\Omega_{\rm M}$, $\Omega_b$, n, $\sigma_8$, $H_0$) = (0.7, 0.3, 0.047, 1, 0.92, 70 km s$^{-1}$ Mpc$^{-1}$), consistent with the measurements of the power spectrum of CMB temperature anisotropies by the first year of data from the {\it WMAP} satellite \citep{Spergel03}. The three--year data from {\it WMAP} favors decreased small--scale power (i.e. lower values for $\sigma_8$ and $n_s$; \citealt{Spergel05}), which would translate to a $\sim 15\%$ redshift delay, but would not change our conclusions. 
Unless stated otherwise, we quote all quantities in comoving units. \section{Simulations} \label{sec:sims} We use the Eulerian adaptive mesh refinement (AMR) code Enzo, which is described in greater detail elsewhere \citep{Bryan99, NB99}. Our simulation volume is 1 $(h^{-1} ~ {\rm Mpc})^3$, initialized at $z_{\rm init}=99$ with density perturbations drawn from the \citet{EH99} power spectrum. We first run a low resolution ($128^3$ particles), dark matter (DM) only run down to $z=15$, to find the highest density peak in the box. We then re-center the box around the spatial location of that peak, and rerun the simulations with the inclusion of gas and at a higher resolution inside a 0.25 $h^{-1} ~ {\rm Mpc}$ cube, centered in the 1 $h^{-1} ~ {\rm Mpc}$ box. This refined central region has an average physical overdensity of $\delta (z_{\rm init}) \equiv \rho/\bar{\rho} - 1 =0.1637$, corresponding to a 2.4 $ \sigma$ mass fluctuation of an equivalent spherical volume. We use such a biased region for our analysis since it hosts a large number of halos at high redshifts. This not only provides good number statistics, but also helps mimic a pristine, unpolluted region which is likely to host the first generation of stars. We postpone a lower redshift analysis to a future work. Our fiducial runs are shown in Table \ref{tbl:runs}. Our root grid is $128^3$. We have two additional static levels of refinement inside the central 0.25 $h^{-1} ~ {\rm Mpc}$ cube. Furthermore, grid cells inside the central region are allowed to dynamically refine so that the Jeans length is resolved by at least 4 grid zones and no grid cell contains more than 4 times the initial gas mass element. Each additional grid level refines the mesh length of the parent grid cell by a factor of 2. We allow for a maximum of 10 levels of refinement inside the refined central region, granting us a spatial resolution of 7.63 $h^{-1}~{\rm pc}$.
This comoving resolution translates to 0.36 $h^{-1}~{\rm proper~pc}$ at $z=20$. We find that this resolution is sufficient to adequately resolve the gross physical processes in the few $\times 10^5-10^7$ $M_\odot$ halos of interest in this work by comparing with higher mass and spatial resolution runs (not shown here). The dark matter particle mass is 747 $M_\odot$. We also include the non-equilibrium reaction network of nine chemical species (H, H$^+$, He, He$^+$, He$^{++}$, $e^-$, H$_2$, H$_2^+$, H$^-$) using the algorithm of \citet{Anninos97}, and initialized with post-recombination abundances from \citet{AN96}. Our analysis below is based on the central refined region; the low resolution dark matter outside the refined region serves to provide the necessary tidal forces to our refined region. Readers interested in further details concerning the simulation methodology are encouraged to consult, e.g., \citet{MBA01}. As shown in Table \ref{tbl:runs}, we have performed five different runs without a LW background, distinguished by the duration or amplitude of the assumed UV background radiation (hereafter UVB), and six additional runs that include an additional constant LW background. For the UV radiation we assume an isotropic background flux with a $T=2\times10^4$K blackbody spectral shape, normalized at the hydrogen ionization frequency, $h \nu_H$ = 13.6 eV. The values of $J_{\rm UV}$ are shown in Table \ref{tbl:runs} in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$. The NoUVB\ run contains no UV radiation, and serves mainly as a reference run. The Heat0.08\ and Heat0.8\ runs include a UVB with $J_{\rm UV} = 0.08, 0.8$, respectively. The value of $J_{\rm UV}=0.08$ was chosen to correspond to the mean UV flux expected inside a typical HII region surrounding a primordial star (e.g., \citealt{ABS05}). 
As we do not include dynamically expanding HII regions in our code, the Heat0.8\ and Flash\ runs can be viewed as extremes, corresponding to conditions close to the center and close to the edge of the HII region, respectively. More generally, studying a range of values of $J_{\rm UV}$ is useful, since the UV flux of a massive, Pop-III star is uncertain. In the latter two runs, the UVB is turned on at $z_{\rm UVB, on}=25$ and turned off at $z_{\rm UVB, off}=24.62$. This redshift range corresponds to a typical theoretical stellar lifetime, $\sim3$ Myr, of a $\sim 100 M_\odot$ primordial (Pop-III) star \citep{Schaerer02}. The flash ionization run, Flash, instantaneously sets the gas temperature to $T=15000$~K and the hydrogen neutral fraction to $x_{\rm HI} = 10^{-3}$ throughout the simulation volume, but involves no heating thereafter. This allows us to compare our results to those of \citet{OShea05}, and to identify the importance of including the dynamical effects of the photo--heating. We also include an early UVB run, EarlyHeat0.8, with $J_{\rm UV} = 0.8$, $z_{\rm UVB, on}=33$, and $z_{\rm UVB, off}=32.23$, in order to study how our results vary with redshift (or equivalently, with the ionizing efficiency of the first sources). Finally, the six runs in the bottom half of the Table repeat pairs of the NoUVB\ and Heat0.8\ runs with three different constant LW backgrounds ($J_{\rm LW}=$ 0.001, 0.01, and 0.1, normalized at 12.87eV in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$, and assumed to be frequency--independent within the narrow LW band). We stress that we do not attempt to model reionization in this work. Rather, we focus on the statistical analysis of the feedback associated with the UV and LW backgrounds. In particular, we want to simulate the effect of a short-lived UV and a persistent LW background on a range of halo masses, in order to calibrate the net impact of fossil HII regions on subsequent star formation within these regions.
In this we are aided by the large number of halos (few hundred) in our refined region, dozens of which manage to host cold, dense (CD) gas by the end of our simulation runs at $z=18$. \begin{table}[ht] \vspace{0.2cm} \caption{Summary of Simulation Runs} \vspace{-0.2cm} \label{tbl:runs} \begin{center} \begin{tabular}{ccccc} \hline Run Name & $J_{\rm UV}$ & $z_{\rm UVB, on}$ & $z_{\rm UVB, off}$ & $J_{\rm LW}$ \\ \hline \hline \multicolumn{5}{c}{Runs without a LW background}\\ \hline NoUVB & 0 & NA & NA & 0 \\ Flash & -- & 25 & 25 & 0 \\ Heat0.08 & 0.08 & 25 & 24.62 & 0 \\ Heat0.8 & 0.8 & 25 & 24.62 & 0 \\ EarlyHeat0.8 & 0.8 & 33 & 32.23 & 0 \\ \hline \multicolumn{5}{c}{Runs with a LW background}\\ \hline NoUVB & 0 & NA & NA & 0.001\\ Heat0.8 & 0.8 & 25 & 24.62 & 0.001\\ NoUVB & 0 & NA & NA & 0.01\\ Heat0.8 & 0.8 & 25 & 24.62 & 0.01\\ NoUVB & 0 & NA & NA & 0.1\\ Heat0.8 & 0.8 & 25 & 24.62 & 0.1\\ \hline \end{tabular}\\ \end{center} \end{table} We use the HOP algorithm \citep{EH98} on the DM particles to identify DM halos. We then convert the resulting DM halo mass, $M_{DM}$, to a total halo mass using the average conversion factor, $M_{\rm halo} = M_{DM} ~ \Omega_{\rm M}/(\Omega_{\rm M}-\Omega_b)$. We find that the halo masses defined in this manner agree to within a factor of two with masses obtained by integrating the densities over a sphere whose radius is the halo's virial radius. In the analysis below, it will be useful to define the fraction of total gas within the virial radius which is cold and dense (CD), $f_{\rm cd}$. By cold, we mean gas whose temperature is $<0.5 T_{\rm vir}$, where $T_{\rm vir}$ is the halo's virial temperature (for how virial temperatures are associated with halos in the simulation, see \citealt{MBA01}). 
By dense, we mean gas whose density is $> 10^{19}$ $M_\odot$ Mpc$^{-3}$ $\approx$ 330 cm$^{-3}$, roughly corresponding to the density at which the baryons become important to the gravitational potential at the core, taken to be an immediate precursor to primordial star formation \citep{ABN02}. Henceforth, we treat $f_{\rm cd}$ as a proxy for the fraction of the halo's gas which is available for star formation. Additionally, we discount halos which have been substantially contaminated by the large (low-resolution) DM particles outside of our refined region. Specifically, we remove from our analysis halos with an average DM particle mass greater than 115\% of the refined region's DM mass resolution, $747M_\odot$. Another possible source of contamination arises from closely separated halos. If some CD gas belonging to a halo is within another halo's virial radius (most likely in the process of merging), the other halo could erroneously be flagged as containing CD gas as well. To counteract this, we set $f_{\rm cd}=0$ for low--mass halos ($<2\times10^5M_\odot$) whose centers are less than $\sim$ 5 $h^{-1}$ kpc away from the center of a halo containing CD gas.
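The bookkeeping described above can be summarized in a short sketch. The function names are ours, and the cosmological density parameters below are assumed illustrative values, not necessarily those used in the simulations:

```python
# Illustrative sketch of the halo bookkeeping described in the text.
# OMEGA_M and OMEGA_B are assumed values for demonstration only.

OMEGA_M = 0.3    # total matter density parameter (assumed)
OMEGA_B = 0.045  # baryon density parameter (assumed)

def total_halo_mass(m_dm):
    """Convert a DM-only halo mass to a total halo mass,
    M_halo = M_DM * Omega_M / (Omega_M - Omega_b)."""
    return m_dm * OMEGA_M / (OMEGA_M - OMEGA_B)

def is_cold_dense(temperature, t_vir, density):
    """Cold: T < 0.5 T_vir; dense: rho > 1e19 Msun/Mpc^3 (~330 cm^-3)."""
    return temperature < 0.5 * t_vir and density > 1e19
```

With these assumed parameters, a DM halo at the particle-mass resolution of $747 M_\odot$ maps to a total mass of roughly $880 M_\odot$.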
\section{Results without a LW Background} \label{sec:results} \begin{figure*} \vspace{+0\baselineskip} { \includegraphics[width=0.245\textwidth]{f1a.eps} \includegraphics[width=0.245\textwidth]{f1b.eps} \includegraphics[width=0.245\textwidth]{f1c.eps} \includegraphics[width=0.245\textwidth]{f1d.eps} } { \includegraphics[width=0.245\textwidth]{f1e.eps} \includegraphics[width=0.245\textwidth]{f1f.eps} \includegraphics[width=0.245\textwidth]{f1g.eps} \includegraphics[width=0.245\textwidth]{f1h.eps} } { \includegraphics[width=0.245\textwidth]{f1i.eps} \includegraphics[width=0.245\textwidth]{f1j.eps} \includegraphics[width=0.245\textwidth]{f1k.eps} \includegraphics[width=0.245\textwidth]{f1l.eps} } { \includegraphics[width=0.245\textwidth]{f1m.eps} \includegraphics[width=0.245\textwidth]{f1n.eps} \includegraphics[width=0.245\textwidth]{f1o.eps} \includegraphics[width=0.245\textwidth]{f1p.eps} } \vspace{-1\baselineskip} \figcaption{Temperature projections of a 20 $h^{-1}$ kpc comoving region surrounding two halos nested in a filament. The rows, from top to bottom, correspond to the NoUVB, Flash, Heat0.08, and Heat0.8\ runs. The columns, from left to right, correspond to redshifts of $z=$ 24.62, 24, 22, and 20. The scale is logarithmic, with black corresponding to $T < 100$K and white corresponding to $T = 10^4$ K. \label{fig:pics} } \vspace{-1\baselineskip} \end{figure*} In Figure \ref{fig:pics}, we show grey scale temperature projections of a 20 $h^{-1}$ kpc comoving region surrounding two halos nested in a filament. The rows correspond to the NoUVB, Flash, Heat0.08, and Heat0.8 ~runs (from top to bottom). The columns correspond to redshifts $z=$ 24.62, 24, 22, and 20 (from left to right). The scale is logarithmic, with black corresponding to $T < 100$K and white corresponding to $T = 10^4$K. 
The halo in the lower (upper) part of each figure grows from $M=7.2\times10^5M_\odot$ ($M=6.9\times10^5M_\odot$) at $z=24.62$ to $M=1.4\times10^6M_\odot$ ($M=1.2\times10^6M_\odot$) at $z=20$, as measured in the NoUVB\ run. As one would expect, when the UVB is turned on, the gas is quickly ionized and heated. Gas which was previously at or close to hydrostatic equilibrium now has a greatly increased pressure gradient due to the increase in temperature. As a result, an outward--moving shock is formed, as clearly seen in Figure~\ref{fig:pics} for the last two rows (i.e. the runs which include a UVB with dynamical heating). Note that this shock is nearly absent in the Flash\ run. Subsequently, the gas in the dense filaments inside the shock is able to cool through Compton and H$_2$ cooling, and the shock stalls. The gas surrounding the halo starts infalling again. We explore these processes more quantitatively in \S~\ref{sec:profiles}. Note that the cores of the halos retain CD gas in all of the runs. In runs containing a UVB, the low density IGM outside of the filament still has not cooled below $T\sim10^3$K by $z=20$; however, the filament itself shows evidence of positive feedback in the Flash\ and Heat0.08\ runs, with lower temperatures at $z\lsim24$ than in the NoUVB\ run. Furthermore, it is evident that once the UVB is turned off, the dense filament is able to cool very rapidly, from $T\sim10^4$K to $T\sim10^3$K in $\Delta z \lsim 0.6$, due to the increased electron fraction, as we shall see below.
\subsection{Halo Profiles} \label{sec:profiles} \begin{figure*} \vspace{+0\baselineskip} { \includegraphics[width=0.33\textwidth]{f2a.eps} \includegraphics[width=0.33\textwidth]{f2b.eps} \includegraphics[width=0.33\textwidth]{f2c.eps} \vskip0.0pt } { \includegraphics[width=0.33\textwidth]{f2d.eps} \includegraphics[width=0.33\textwidth]{f2e.eps} \includegraphics[width=0.33\textwidth]{f2f.eps} } \vspace{-1\baselineskip} \figcaption{Spherically averaged radial profiles of the same halo in the NoUVB\ ({\it solid lines}) and Heat0.8\ ({\it dashed lines}) simulation runs. This halo was able to first form CD gas at $z=21$ ($z=18$) in the NoUVB\ (Heat0.8) run, respectively. The left pair of figures show a snapshot at $z = z_{\rm UVB, off} = 24.62$, the middle pair at $z=23$, and the right pair at $z=18$. The virial radius of the halo increases from $R_{\rm vir} \sim 100$ pc at $z=24.62$, to $R_{\rm vir} \sim 200$ pc at $z=18$. All quantities are shown in proper (not comoving) units. The {\it Upper panels} show the hydrogen density, mass-weighted gas temperature, gas cooling time, and radial velocity ({\it clockwise from upper left}). The {\it Bottom panels} show mass fractions of HI, HII, H$_2$, and the number fraction of $e^-$ ({\it clockwise from upper left}). \label{fig:profiles} } \vspace{-1\baselineskip} \end{figure*} To get a more quantitative idea of the feedback introduced by a UVB, in Figure \ref{fig:profiles} we plot spherically averaged radial profiles for the same individual halo at redshifts $z=$ 24.62, 23, and 18 ({\it left to right}), and in two different runs: NoUVB\ and Heat0.8\ ({\it solid} and {\it dashed} lines, respectively). Figures in the top row show hydrogen density, mass-weighted gas temperature, gas cooling time, and radial velocity ({\it clockwise from upper left}). 
Figures in the bottom row show mass fractions of HI, HII, H$_2$, and the number fraction of $e^-$ (more precisely, $f_e$ is defined as the mass fraction of $e^-$, normalized such that each $e^-$ is assumed to have the mass of hydrogen) ({\it clockwise from upper left}). The halo has a mass of M($z=24.62$) = $3.42\times10^5M_\odot$ and M($z=18$) = $2.37\times10^6M_\odot$ [taken from the NoUVB\ run; note that the mass in the Heat0.8\ run is somewhat smaller, e.g., M($z=18$) = $2.24\times10^6M_\odot$, due to photo-evaporation and a slight suppression of gas accretion as a result of the UVB]. From the profiles, one can see the impact of photo-evaporation in the Heat0.8\ simulation run: the radially--outward moving shock mentioned above, as well as an accompanying decrease in density. As soon as the ionizing radiation is turned off, the gas cools from a temperature of $\sim10^4$ K to $\sim10^3$ K quite rapidly, with the free electron number fraction (approximately corresponding to the bottom left panels of the bottom row of Fig. \ref{fig:profiles}) dropping by two orders of magnitude by $z=23$, i.e. $\Delta z = 1.62$ after the UVB was turned off. Also, the shock starts to dissipate by $z\sim23$, with most of the gas switching to the infall regime again. Despite the evident photo-evaporation, the presence of the UVB has caused over an order of magnitude increase in the H$_2$ fraction, due to the increased (out--of--equilibrium) number of free electrons soon after $z_{\rm UVB, off}$. We note that this halo managed to first form CD gas at $z=21$ in the NoUVB\ case, but the formation of CD gas was delayed in the Heat0.8\ case until $z=18$.
This delay can be crudely understood by looking at three fundamental time-scales: the H$_2$ cooling time, \begin{equation} \label{eq:t_H2_ion} t_{\rm H_2} = \frac{1.5 k_B T}{\Lambda_{\rm H_2}} \frac{n_g}{n_{\rm HI} n_ {\rm H_2}} ~ , \end{equation} \noindent the Compton cooling time, \begin{equation} \label{eq:t_C} t_{\rm C} \approx 14 \left( \frac{1+z}{20} \right)^{-4} x_e^{-1} ~ {\rm Myr} ~ , \end{equation} \noindent and the gas recombination time, \begin{equation} \label{eq:t_rec} t_{\rm rec} \approx 39 \left( \frac{1+z}{20} \right)^{-3} \delta^{-1} x_e^ {-1} ~ {\rm Myr} ~ . \end{equation} \noindent Here, $k_B$ is the Boltzmann constant, $T$ is the temperature, $\Lambda_{\rm H_2}$ is the H$_2$ cooling function, $x_e$ is the free electron number fraction, $n_g$, $n_{\rm HI}$, and $n_{\rm H_2}$ are the number densities of all baryons and electrons, neutral hydrogen, and H$_2$, respectively, and $\delta$ is the gas overdensity, $\delta \equiv n_g/\bar{n}_g - 1$. The first column of Figure \ref{fig:profiles} shows a snapshot of our halos immediately prior to turning off the UVB. Note that the cooling time in the Heat0.8\ run is several orders of magnitude lower than in the NoUVB\ run.\footnote{The sharp drops in the cooling time in the NoUVB\ run correspond to annuli that include cold, low--density gas below the CMB temperature ($\sim$10 K), which is heated, rather than cooled, by Compton scattering.} At large radii, this is due to Compton cooling, since the UVB dramatically increases $x_e$ and the Compton cooling time (eq. \ref{eq:t_C}) scales as $x_e^{-1}$. Additionally, the temperature increase to $\sim 10^4$ K allows for far more efficient line cooling from atomic hydrogen than in the NoUVB\ case. We also see that the structure of halos is important in accurately predicting feedback. 
Specifically, we note that the central region can behave quite differently from the outer regions of the halo.\footnote{We do not include radiative transfer in our analysis, and the high-density regions ($n_H \geq 1$cm$^{-3}$ ), such as the cores of halos, might be able to self-shield against UV radiation \citep{ABS05}, decreasing feedback effects somewhat; see discussion in \S~\ref{sec:nort}.} Initially, immediately after the radiation field is turned off, the gas cools due to atomic line cooling. However, this quickly becomes inefficient below a temperature of about 6000 K. While the radiation is on, the H$_2$ abundance is highly suppressed due to Lyman-Werner dissociation. However, as the temperature drops, the amount of H$_2$ increases rapidly because of the high electron abundance. In fact, the molecular hydrogen formation time is shorter than the recombination time and a large amount of molecular hydrogen is produced, $x_{\rm H_2} \sim 3 \times 10^{-3}$, irrespective of the density and temperature (see the discussion in \citealt{OH02} for an explanation of this freeze-out value). Due to the relatively high densities throughout the halo ($\delta \sim 10^4$ near the core), as well as the high initial $x_e$, the majority of the hydrogen recombines shortly after $z_{\rm UVB, off}$ in the Heat0.8\ case ($t_{\rm rec}(z_{\rm UVB, off}) \ll 1$Myr). After a few recombination times, $n_g \sim n_{\rm HI}$, and so eq. (\ref{eq:t_H2_ion}) can be simplified to \begin{eqnarray} \label{eq:t_H2} t_{\rm H_2} & \approx & \frac{1.5 k_B T}{\Lambda_{\rm H_2}} \frac{1}{n_g x_{\rm H_2}} \\ & \approx & 4 \left( \frac{T}{10^3 K} \right)^{-2.5} \left( \frac{x_{\rm H_2}}{3 \times 10^{-3}} \right) ^{-1} \left( \frac{n_g}{1 \rm cm^{-3}} \right)^{-1} {\rm Myr}~ , \nonumber \end{eqnarray} \noindent where $x_{\rm H_2}$ is the number fraction of H$_2$.
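For orientation, the approximate timescale scalings in equations (\ref{eq:t_C}), (\ref{eq:t_rec}), and (\ref{eq:t_H2}) are easily evaluated numerically. A minimal sketch (the function names are ours, for illustration only):

```python
# Approximate cooling/recombination timescales, following the scalings
# quoted in the text; all return values are in Myr.

def t_compton(z, x_e):
    """Compton cooling time: 14 ((1+z)/20)^-4 / x_e Myr."""
    return 14.0 * ((1.0 + z) / 20.0) ** -4 / x_e

def t_recomb(z, delta, x_e):
    """Recombination time: 39 ((1+z)/20)^-3 / (delta x_e) Myr."""
    return 39.0 * ((1.0 + z) / 20.0) ** -3 / (delta * x_e)

def t_h2(T, x_h2, n_g):
    """Post-recombination H2 cooling time:
    4 (T/10^3 K)^-2.5 (x_H2/3e-3)^-1 (n_g/1 cm^-3)^-1 Myr."""
    return 4.0 * (T / 1e3) ** -2.5 * (3e-3 / x_h2) * (1.0 / n_g)

# For nearly fully ionized gas (x_e ~ 1) just after z_UVB,off = 24.62,
# t_compton(24.62, 1.0) is about 5.2 Myr -- short compared to the
# Hubble time, consistent with the rapid cooling seen in the profiles.
```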
This highly efficient molecular cooling channel is largely responsible for quickly driving the temperature down to about 1000 K (although a persistent LW background can counter this ${\rm H_2}$--enhancement effect; see below). Hence, we note that the cooling times near the halo core are comparable for both runs at $z=23$, with the cooling time in the Heat0.8\ case being larger by a factor of a few. This change in the cooling time by a factor of only a few is due to the remarkable cancellation of two strong effects, and can be understood by noting that the UVB--induced photo-evaporation reduces $n_g$ by a factor of $\sim 40$ near the core at $z=23$. Meanwhile, the UVB boosts $x_{\rm H_2}$ near the core by a factor of $\sim 10$. Since Compton cooling is ineffective at this stage, the cooling time is dominated by the H$_2$ cooling channel, whose cooling time roughly scales as $t_{\rm H_2} \propto (n_g x_{\rm H_2})^{-1}$, given that the temperature is almost identical near the core at $z\sim23$, and that the cooling function is very weakly dependent on $n_{\rm H_2}$ in this regime \citep{GP98}. From this crude estimate, one obtains an effective ``delay'' in the formation of CD gas in the Heat0.8\ run with respect to the NoUVB\ run: \begin{equation} \label{eq:delay} f_{\rm delay} \sim \frac{t_{\rm H_2}^{\rm Heat0.8}}{t_{\rm H_2}^{\rm NoUVB}} \sim \left( \frac{n^{\rm NoUVB}_g}{n^{\rm Heat0.8}_g} \right) \left( \frac{x^{\rm Heat0.8}_{\rm H_2}}{x^{\rm NoUVB}_{\rm H_2}} \right)^{-1} \sim \frac{40}{10} \sim 4. \end{equation} \noindent This is in excellent agreement with the delay observed in the pair of simulation runs, where the halo obtains CD gas $\sim$ 20 (60) Myr after our data point at $z=23$ in the NoUVB\ (Heat0.8) case, yielding a delay of a factor of $f_{\rm delay} \sim 3$. \subsection{$H_2$ production} \label{sec:Htwo} \begin{figure} \vspace{+0\baselineskip} \myputfigure{f3.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{ Mass fractions of H$_2$ at $z=18$.
Results are shown from the NoUVB\ ({\it black crosses}), Flash\ ({\it blue dashes}), Heat0.08\ ({\it green triangles}), and Heat0.8\ ({\it red squares}) simulation runs. \label{fig:hii}} \vspace{-1\baselineskip} \end{figure} As mentioned above, molecular hydrogen can provide a dominant cooling mechanism, especially in high density regions. To study the impact of the UVB on the formation of H$_2$, we plot the H$_2$ mass fractions as functions of halo mass in Figure \ref{fig:hii}, at the lowest redshift of our simulations, $z=18$. Results are shown from the NoUVB\ ({\it black crosses}), Flash\ ({\it blue dashes}), Heat0.08\ ({\it green triangles}), and Heat0.8\ ({\it red squares}) simulation runs. We note that in all of our runs which include a UVB, the molecular hydrogen fraction converges to a value which is nearly independent of the strength and duration of the UVB. As seen in Figure \ref{fig:hii}, by the end of our simulations, most halos which have been exposed to a UVB in their past have settled on a value of a few $\times 10^{-3}$ for the mass fraction of H$_2$. This freeze--out value is due to the fact that the number density of H$_2$ follows its equilibrium value until about 3000 K, below which recombination proceeds more quickly than H$_2$ formation and the fraction freezes out at this point \citep{OH02}. This number is fairly independent of mass, though there is a weak trend towards higher mass fractions for higher mass halos, up to mass fractions of $\sim 10^{-2}$ for $M\sim4\times10^6M_\odot$. Note also that there is some evidence for a non-monotonic evolution of the H$_2$ fraction, with mass fractions falling back down to $\sim 4\times10^{-3}$ by $M\sim3\times10^7M_\odot$. In contrast, molecular hydrogen in halos which have not been exposed to a UVB ({\it black crosses} in Figure \ref{fig:hii}) is distinctly sparser (over two orders of magnitude for $M\lsim10^6M_\odot$) than in our other runs. 
Also, there is a stronger evolution with respect to mass, as well as more scatter (which makes sense, since the ${\rm H_2}$ abundance in this case is not a result of a freeze--out process, and depends strongly on local density and temperature). An interesting conclusion can be drawn from the similarity of H$_2$ fractions in our runs which include a UVB. Namely, if our analysis of the dominant cooling processes in \S~\ref{sec:profiles} is accurate, the differences in the CD gas fractions among our UVB runs are predominantly due to the disparate effectiveness of photo-evaporation. In other words, as the positive feedback (i.e. the increase of the $x_{\rm H_2}$ term in eq.~\ref{eq:t_H2}) is nearly independent of the strength and duration of the UVB in our runs, only the amount of negative feedback (decrease in $n_g$) causes variations in the delay in the formation of CD gas (see a more detailed discussion in \S~\ref{sec:lw} below). We have also verified for several halos that this similarity in the total H$_2$ fraction extends to the radial profiles of H$_2$. \subsection{Ensemble Evolution of the Cold, Dense Gas Fractions} \label{sec:fcd} \begin{figure*} \plottwo{f4a.eps}{f4b.eps} \plottwo{f4c.eps}{f4d.eps} \figcaption{ Total gas fractions ({\it top panels}) and cold, dense gas fractions ({\it bottom panels}) as a function of total halo mass at redshift $z=18$, the lowest redshift output of our simulations. The four panels correspond to the NoUVB\ ({\it top left}), Flash\ ({\it top right}), Heat0.08\ ({\it bottom left}), and Heat0.8\ ({\it bottom right}) simulation runs. \label{fig:gas_fract} } \vspace{-1\baselineskip} \end{figure*} As mentioned above, a quantity of particular importance in studying the capacity of a halo for hosting stars is its cold, dense (CD) gas fraction, $f_{\rm cd}$, defined above. Here we show the general trends for the evolution of this quantity for the population of halos in our simulations, as we vary the UVB.
In Figure \ref{fig:gas_fract}, we plot the total gas fractions ({\it upper panels}) and CD fractions ({\it lower panels}) as a function of total halo mass at redshift $z=18$, the lowest redshift output of our simulations. The figures correspond to the NoUVB\ ({\it top left}), Flash\ ({\it top right}), Heat0.08\ ({\it bottom left}), and Heat0.8\ ({\it bottom right}) simulation runs. Note that while the total gas fractions of small halos ($M\lsim$ few $\times 10^5 M_\odot$) that have been exposed to a UVB are suppressed with respect to the NoUVB\ case, there is little immediate visual evidence of either negative or positive feedback for halos large enough to host CD gas ($M\gsim$ few $\times 10^5 M_\odot$). The Flash\ CD gas fractions show evidence of positive feedback in the mass range $\sim ~ 2 \times 10^5$ -- $10^6$ $M_\odot$, while the Heat0.8\ run shows evidence of strong negative feedback in the same mass range, with no halos hosting CD gas at $M < 10^6 M_\odot$. These small halos are the ones which would be most affected by photo--evaporation. This lends further credibility to our assertion above that {\it positive} feedback in runs which include a UVB is fairly independent of the UVB strength, and hence the total feedback is set by photo--evaporation effects (i.e. {\it negative} feedback). The Heat0.08\ run has a near zero balance of positive and negative feedback. 
In order to better quantify the amount of suppression of CD gas in our models incorporating a UVB, as well as the evolution of such a suppression with redshift, we define the cumulative, fractional suppression of the halo number as \begin{equation} \label{eq:delN} \delta_{N, {\rm cd}}(z) \equiv \frac{N_{\rm cd}^{{\rm run}i}(z) - N_{\rm cd}^{{\rm run}i}(z_{\rm UVB, on})}{N_{\rm cd}^{\rm NoUVB}(z) - N_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})} - 1 ~ , \end{equation} where $N_{\rm cd}^{\rm NoUVB}(z)$ and $N_{\rm cd}^{{\rm run}i}(z)$ are the total number of halos with CD gas at redshift $z$ in the NoUVB\ run and some given run $i$, respectively. This expression is well--defined for $N_{\rm cd}^{\rm NoUVB}(z)$ $>$ $N_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})$; for $N_{\rm cd}^{\rm NoUVB}(z)$ = $N_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})$, we set $\delta_{N, {\rm cd}}(z) \equiv 0$. Note that by definition, $N_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})$ = $N_{\rm cd}^{{\rm run}i}(z_{\rm UVB, on})$. Similarly, we define the cumulative, fractional suppression of the CD gas mass as \begin{equation} \label{eq:delM} \delta_{M, {\rm cd}}(z) \equiv \frac{M_{\rm cd}^{{\rm run}i}(z) - M_{\rm cd}^{{\rm run}i}(z_{\rm UVB, on})}{M_{\rm cd}^{\rm NoUVB}(z) - M_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})} - 1 ~ , \end{equation} where $M_{\rm cd}^{\rm NoUVB}(z)$ and $M_{\rm cd}^{{\rm run}i}(z)$ are the total mass of CD gas at redshift $z$ in the NoUVB\ run and some given run $i$, respectively. The total CD gas mass is obtained by merely summing the CD gas masses for all of the halos in the simulation. As for equation~(\ref{eq:delN}), this expression is well--defined for $M_{\rm cd}^{\rm NoUVB}(z)$ $>$ $M_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})$, and for $M_{\rm cd}^{\rm NoUVB}(z)$ = $M_{\rm cd}^{\rm NoUVB}(z_{\rm UVB, on})$, we set $\delta_{M, {\rm cd}}(z) \equiv 0$. 
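Equations (\ref{eq:delN}) and (\ref{eq:delM}) share the same functional form, so a single helper suffices to compute either statistic. A minimal sketch (the function name and example counts are invented for illustration, not simulation data):

```python
# Cumulative fractional suppression of CD-gas halo counts (or CD gas
# mass), relative to the NoUVB reference run, following the text.

def cd_suppression(run_z, run_on, novb_z, novb_on):
    """(growth of run i since z_UVB,on) / (growth of NoUVB) - 1;
    defined as 0 when the NoUVB reference has not changed."""
    denom = novb_z - novb_on
    if denom == 0:
        return 0.0
    return (run_z - run_on) / denom - 1.0

# Example: if the NoUVB run gains 20 CD halos after z_UVB,on while a
# UVB run gains only 10, the suppression is 10/20 - 1 = -0.5,
# i.e. net negative feedback.
```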
\begin{figure} \myputfigure{f5.eps}{3.3}{0.5}{.}{0.} \figcaption{Values of $M_{\rm cd}^{{\rm run}i}(z)$ ({\it top panel}) and $N_{\rm cd}^{{\rm run}i}(z)$ ({\it bottom panel}) as defined in equations (\ref{eq:delM}) and (\ref{eq:delN}). The results are displayed for the NoUVB\ ({\it crosses}), Flash\ ({\it dashes}), Heat0.08\ ({\it triangles}), and Heat0.8\ ({\it squares}) simulation runs. \label{fig:NM}} \vspace{-1\baselineskip} \end{figure} Equations (\ref{eq:delN}) and (\ref{eq:delM}) provide an estimate of how the CD gas has been affected by the presence of a UVB, {\it following} the turn-on redshift of the UVB, $z_{\rm UVB, on}$ (the values at $z_{\rm UVB, on}$ are subtracted in order to provide a more sensitive measure of {\it relative} changes of CD gas). As defined above, $\delta_{N, {\rm cd}}(z) = 0$ and $\delta_{M, {\rm cd}}(z) = 0$ if the UVB has no effect. If the effect of a UVB is positive, resulting in positive feedback, $\delta_{N, {\rm cd}}(z)$ and $\delta_{M, {\rm cd}}(z)$ would be positive. If the effect of the UVB is negative, $\delta_{N, {\rm cd}}(z)$ and $\delta_{M, {\rm cd}}(z)$ would be negative. In Figure \ref{fig:NM}, we plot the values of $M_{\rm cd}^{{\rm run}i}(z)$ ({\it top panel}) and $N_{\rm cd}^{{\rm run}i}(z)$ ({\it bottom panel}) in our four main simulation runs: ${\rm run}i$ = NoUVB\ ({\it crosses}), Flash\ ({\it dashes}), Heat0.08\ ({\it triangles}), and Heat0.8\ ({\it squares}). The corresponding values of $\delta_{M, {\rm cd}}(z)$ and $\delta_{N, {\rm cd}}(z)$ are plotted in Figure \ref{fig:delta} in the top and bottom panels, respectively. The results are displayed for the Flash\ ({\it dashes}), Heat0.08\ ({\it triangles}), and Heat0.8\ ({\it squares}) simulation runs. Although some of the notable fractional changes shown in Figure \ref{fig:delta} might appear statistically insignificant due to the small number statistics inferred from Figure \ref{fig:NM}, it should be noted that these runs are not uncorrelated experiments. 
In other words, each of our runs in Table \ref{tbl:runs} is seeded with the same initial conditions, and so small relative changes compared to the NoUVB\ run are significant (i.e. the errors are not Poisson). One can infer from Figures \ref{fig:NM} and \ref{fig:delta} that the Heat0.8\ run shows evidence of strong negative feedback down to $z\sim 20$, with values approaching the NoUVB\ run by the end of our simulation ($z=18$). Conversely, the Flash\ run exhibits strong positive feedback down to $z\sim20$, and again approaches the NoUVB\ run by the end of our simulation. In the middle is the Heat0.08\ run, which shows very little difference compared to the NoUVB\ run (initially there is some evidence of mild positive feedback down to a redshift of $z=21$, but at redshifts below that, little evidence remains of a UVB ever being present). It is also interesting to note that while the halo number in the Heat0.8\ run shows negative feedback down to $z=18$ ({\it bottom panels of Figures \ref{fig:NM} and \ref{fig:delta}}), the total mass of CD gas ({\it top panels of Figures \ref{fig:NM} and \ref{fig:delta}}) shows no such feedback at $z\lsim19$. The explanation for this apparent contradiction is that the total mass of CD gas is dominated by the largest halos (both because these halos are more massive and because the fraction of CD gas increases with mass), and as Figure~\ref{fig:gas_fract} shows, these large halos are largely unaffected by the ionizing radiation. Conversely, the elimination of the CD gas from the lowest--mass halos even at $z=18$ is a genuine effect (as is clearly visible in the lower right panel in Fig~\ref{fig:gas_fract}), but these halos do not contribute significantly to the total CD mass summed over all halos. Figure \ref{fig:delta} agrees well with the qualitative inferences drawn above. 
Furthermore, it explicitly shows that the critical UVB flux cutoff in our simulation between inducing a net negative and net positive feedback is $J_{\rm UV}\sim 0.1$. Halos which have been exposed to a fainter UVB exhibit positive feedback, whereas halos which have been exposed to a brighter UVB exhibit negative feedback. However, it is also important to note that any such feedback is temporary, as all of our runs begin to converge by the end of our simulations at $z=18$. The exception is that the Heat0.8 run shows persistent suppression of the smallest halos (with $M<10^6 M_\odot$) all the way down to $z=18$. \begin{figure} \myputfigure{f6.eps}{3.3}{0.5}{.}{0.} \figcaption{Values of $\delta_{M, {\rm cd}}(z)$ ({\it top panel}) and $\delta_{N, {\rm cd}}(z)$ ({\it bottom panel}) as defined in equations (\ref{eq:delM}) and (\ref{eq:delN}). The results are derived from Figure~\ref{fig:NM} and displayed for the Flash\ ({\it dashes}), Heat0.08\ ({\it triangles}), and Heat0.8\ ({\it squares}) simulation runs. \label{fig:delta}} \vspace{-1\baselineskip} \end{figure} \subsection{Relating initial densities at $z_{\rm UVB, on}$ to subsequent suppression of cold, dense gas} \label{sec:densities} Here we attempt to generalize and physically motivate some of the results from the previous section. In particular, we have already seen that feedback depends on $J_{\rm UV}$ and $M_{\rm halo}$. Here we examine whether a halo's capacity for forming CD gas depends strongly on the properties of its progenitor region at the time of the UV--illumination ($z_{\rm UVB, on}$). Specifically, we expect those progenitor regions which are less dense at $z_{\rm UVB, on}$, and hence at an earlier evolutionary stage, to be more susceptible to negative photo--heating and photo--evaporation feedback than more dense regions. 
This is because the ${\rm H_2}$ photo--dissociation rate scales with the density, whereas ${\rm H_2}$--forming reaction rates scale with the square of the density; as a result, photo--dissociation becomes comparatively more important at low densities \citep{OH03}. Below we focus on the Heat0.8\ run, as it exhibits the strongest negative feedback. We divide the set of halos with CD gas at redshift $z$ in our NoUVB\ run into two groups: those that also {\it have} CD gas in the Heat0.8\ run (group 1), and those that {\it do not} also have CD gas in the Heat0.8\ run (group 2). From Figure \ref{fig:gas_fract}, one can note that at $z=18$ it is possible to define a rough mass scale that separates these two groups; namely halos with masses $\gsim 10^6 M_\odot$ {\it do not} have their CD gas suppressed (group 1), and halos with masses $\lsim 10^6 M_\odot$ {\it do} have their CD gas suppressed (group 2). As stated above, we hypothesize that the physical distinction between the two sets occurs due to their differences at the redshift they were exposed to the UVB, $z_{\rm UVB, on}$. Specifically, we compare the mass-weighted, average densities of progenitor regions at $z_{\rm UVB, on}$ = 25, which are to become our halos from groups 1 or 2 at some later $z$. We do this by tracing back all dark matter particles comprising each halo at $z$ to their positions at $z=z_{\rm UVB, on}$, and then obtaining the average gas density at that position. As there are too few halos to accurately construct the group 1 and group 2 mean density distribution functions (see bottom panel of Fig. \ref{fig:NM}), we present their properties via a density cutoff. We adopt a simple criterion to define a density cutoff, $\rho_{\rm cutoff}$, between the two groups. We chose $\rho_{\rm cutoff}$ so that the sum of the fraction of group (1) points below $\rho_{\rm cutoff}$ and the fraction of group (2) points above $\rho_{\rm cutoff}$ is minimized. 
Specifically, this fractional sum, used as a proxy for the disjointness of the two distributions, is defined as \begin{equation} \label{eq:ffit} f_{\rm fit} \equiv {\rm MIN} \left[ f_1(< \rho_{\rm cutoff}) + f_2(>\rho_{\rm cutoff}) \right] \end{equation} where $f_1(<\rho_{\rm cutoff})$ is the fraction of halos in group 1 which have mean densities {\it less} than the cutoff density, and $f_2(>\rho_{\rm cutoff})$ is the fraction of halos in group 2 which have mean densities {\it greater} than the cutoff density. In essence, each term is the fraction of ``misclassified'' halos, and so we want to select $\rho_{\rm cutoff}$ such that $f_{\rm fit}$ is minimized. The sum as defined in equation (\ref{eq:ffit}) ranges from 0 to 2, while our figure of merit ranges from $f_{\rm fit} = 0$ for two completely disjoint distributions to $f_{\rm fit} = 1$ for the case where group 1 and group 2 are drawn from the same underlying distribution (not taking into account Poisson errors). We plot values for the density cutoff (in units of the average comoving density, $\bar{\rho}$) as a function of redshift in the top panel of Figure \ref{fig:densities}. Our disjointness figure of merit is plotted in the bottom panel of Figure \ref{fig:densities}. Note that for most redshifts, $f_{\rm fit} \ll 1$, meaning that the group 1 and group 2 density distributions are quite disjoint, and that the density cutoff, $\rho_{\rm cutoff}$, has a well-defined value. One can get a sense of the Poissonian errors associated with $\rho_{\rm cutoff}$ by looking at the bottom panel of Figure \ref{fig:NM}, since ${\rm N_{group 1}} \sim N_{\rm cd}^{\rm Heat0.8}\sim 10-20$ and ${\rm N_{group 2}} \sim N_{\rm cd}^{\rm NoUVB} - N_{\rm cd}^{\rm Heat0.8}\sim 10$. One should also note that $f_{\rm fit}$ increases with decreasing $\rho_{\rm cutoff}$, which is to be expected as the density distributions cannot have negative values, so both distributions start being ``packed'' together as they approach zero.
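The cutoff--selection criterion of equation (\ref{eq:ffit}) amounts to a one--dimensional search over candidate cutoffs; a minimal sketch in Python (the function name and the toy input densities are our own illustration, not simulation data):

```python
def density_cutoff(rho_group1, rho_group2):
    """Choose rho_cutoff minimizing the misclassification sum of eq. (ffit):
    f_fit = f_1(< cut) + f_2(> cut), where group 1 halos (CD gas survives
    heating) should lie above the cutoff and group 2 halos (CD gas
    suppressed) should lie below it."""
    # The optimum is attained at one of the observed densities, so it
    # suffices to scan those as candidate cutoffs.
    candidates = sorted(set(rho_group1) | set(rho_group2))
    best_cut, best_f = None, float("inf")
    for cut in candidates:
        f1_below = sum(r < cut for r in rho_group1) / len(rho_group1)
        f2_above = sum(r > cut for r in rho_group2) / len(rho_group2)
        f_fit = f1_below + f2_above
        if f_fit < best_f:
            best_cut, best_f = cut, f_fit
    return best_cut, best_f

# Illustrative, well-separated toy distributions (in units of mean density):
rho_cut, f_fit = density_cutoff([30.0, 40.0, 55.0, 80.0], [2.0, 4.0, 6.0, 9.0])
```

For disjoint inputs like these, any cutoff between the two groups yields $f_{\rm fit}=0$; for strongly overlapping inputs the minimum approaches unity, mirroring the behavior of our figure of merit.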
In other words, there is an intrinsic ``noise'' consisting of small environmental fluctuations (halo location, peculiar velocity, etc.), and this noise becomes more noticeable as $\rho_{\rm cutoff} \rightarrow 0$. In practice, it is difficult to disentangle this effect from an actual merging of the two distributions. As expected, halos with less dense progenitor regions at $z_{\rm UVB, on}$ are more susceptible to negative feedback. It is quite interesting to note that our density cutoff decreases roughly exponentially with decreasing redshift, implying that an increasing fraction of the photo--heated mass will fall in the ``borderline'' region between negative and positive feedback. This provides further support for our earlier claim that the fossil HII region ``forgets'' the UVB as time passes. In other words, a strong UVB serves to merely delay the gas from cooling and collapsing; the gas eventually manages to cool, aided by an enhanced $\rm H_2$ fraction and enhanced infall (see Figure \ref{fig:profiles}, and associated discussion). The length of this delay is a strong function of the density of halo progenitor regions at $z_{\rm UVB, on}$, as one would expect from our analysis in \S~\ref{sec:profiles}. \begin{figure} \vspace{+0\baselineskip} \myputfigure{f7.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{The critical density $\rho_{\rm cutoff}$, {\it upper panel}, for the progenitor gas at $z_{\rm UVB, on}$ = 25 of halos that collapse at some later redshift $z$, roughly divides halos experiencing negative vs. positive feedback at redshift $z$ (i.e. halos that were more/less dense than $\rho_{\rm cutoff}$ at the time of illumination will experience positive/negative feedback). $\rho_{\rm cutoff}$ is a mass-weighted mean density of the progenitor pieces of the halo, shown in units of the average comoving density, $\bar{\rho}$, in our Heat0.8\ run.
The values of $f_{\rm fit}$, shown in the {\it Lower panel}, provide a rough figure of merit for how well the fixed density $\rho_{\rm cutoff}$ separates halos into two disjoint categories ($f_{\rm fit} \ll 1$ indicates a clear separation). See text and equation (\ref{eq:ffit}) for definitions and discussion. \label{fig:densities}} \vspace{-1\baselineskip} \end{figure} It is numerically impractical to run our simulations to redshifts much below $z\sim18$, due to the rapidly increasing collapsed fraction in our refined region (see Fig. \ref{fig:f_col}). On the other hand, it would be interesting to know what eventually happens to {\it most of the mass} of our refined region. A step towards answering this question is to find out whether the majority of the mass of our refined region at $z=z_{\rm UVB, on}$ is located in overdensities below or above the lowest-redshift $\rho_{\rm cutoff}$ value shown in Figure \ref{fig:densities}. With this motivation, we obtained a mass-weighted density distribution over randomly generated positions inside our refined region. We select a radius surrounding each position, such that a given number of DM particles lie within that radius (the number of particles is chosen to correspond to the halo masses capable of hosting CD gas). We then obtain a mass-weighted average density by averaging over the gas densities at the location of each DM particle inside our chosen radius. \begin{figure} \myputfigure{f8.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{Mass-weighted, cumulative density distributions for regions of $M\sim 8.9\times10^5M_\odot$ ({\it solid curves}) and $M\sim 8.9\times10^6M_\odot$ ({\it dashed curves}). Two redshift values are presented: $z_{\rm UVB, on}$ = 33 and 25, from left to right. Note that $\rho$ is the comoving gas density.
\label{fig:den_hist}} \vspace{-1\baselineskip} \end{figure} In Figure \ref{fig:den_hist}, we plot the cumulative density distributions (fraction of regions with mass-weighted density less than $\rho$) thus generated at $z$ = 33 and 25, from left to right, and for regions of mass scale $M\sim 8.9\times10^5M_\odot$ ($\sim10^3$ DM particles) and $M\sim 8.9\times10^6M_\odot$ ($\sim10^4$ DM particles) with solid and dashed lines, respectively. Understandably, the larger mass scales shift the mean density towards larger values, due to the increased likelihood of averaging over dense patches. Also, we see that for regions of equal mass scales, the higher redshift counterparts have a lower mean density, partly due to a smaller clumping factor and partly due to the fact that we plot comoving density, which increases with decreasing redshift. Figure \ref{fig:den_hist} shows that the majority of the mass ($\gsim 90\%$) of our refined region is located in regions with mean densities lower than $\sim10\bar{\rho}$, the lowest cutoff obtained by our analysis (see Figure \ref{fig:densities}). Hence, we cannot rule out the possibility of significant negative feedback at lower redshifts, not probed by our simulation. Nevertheless, we regard this as unlikely, for two reasons. First, halos will be centered around overdensities, not random points, and subsequent growth of the halo's mass need not be spherically symmetric; these effects will bias the relevant density distribution to higher values than shown in Figure \ref{fig:den_hist}. Second, it is likely that most halos massive enough to host CD gas, which form in our biased region at $z<18$, already had a dense progenitor core at $z=z_{\rm UVB, on}$. Indeed, we find that all halos hosting CD gas at $z=18$ in the NoUVB\ run had {\it some} dense progenitor gas ($\rho \gsim 100 \bar{\rho}$) at $z_{\rm UVB, on}=25$.
Subsequent growth could be dominated by gas accretion onto these dense regions, and merging with other halos (rather than forming fresh halos entirely from low--density gas). Nevertheless, we emphasize that we chose to simulate a biased volume (favoring early collapse), and the above arguments will be less valid in a more typical patch of the IGM (with lower densities). \subsection{Early UVB Run} \label{sec:early_heating} Ideally, one would want to explore all of parameter space by varying $J_{\rm UV}$, $z_{\rm UVB, on}$ and $z_{\rm UVB, off}$, with different realizations of the density field. Unfortunately, given computational limitations, this is impossible. However, in order to confirm the trends we present above, we run another simulation, EarlyHeat0.8, in which we turn on a UVB, with an amplitude of $J_{\rm UV}=0.8$, at $z_{\rm UVB, on}=33$, and turn it off at $z_{\rm UVB, off}=32.23$. We then repeat the analysis in \S~\ref{sec:densities}. The corresponding figures, Figures \ref{fig:early_NM} and \ref{fig:early_delta}, are presented below. Figure \ref{fig:early_delta} shows that we once again find strong negative feedback down to $z\sim23$. For $z<23$, we see virtually no evidence of any feedback, lending further credibility to our interpretation above, that our other runs ``forget'' the episode of UV heating, and start converging to the NoUVB\ run by the end of our simulations ($z=18$). \begin{figure} \vspace{+0\baselineskip} \myputfigure{f9.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip}\figcaption{ Values of $M_{\rm cd}^{{\rm run}i}(z)$ ({\it top panel}) and $N_{\rm cd}^{{\rm run}i}(z)$ ({\it bottom panel}) as used in equations (\ref{eq:delM}) and (\ref{eq:delN}). The results are displayed for the NoUVB\ ({\it crosses}) and EarlyHeat0.8\ ({\it squares}) simulation runs.
\label{fig:early_NM}} \vspace{-1\baselineskip} \end{figure} \begin{figure} \vspace{+0\baselineskip} \myputfigure{f10.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip}\figcaption{Values of $\delta_{M, {\rm cd}}(z)$ ({\it top panel}) and $\delta_{N, {\rm cd}}(z)$ ({\it bottom panel}) as defined in equations (\ref{eq:delM}) and (\ref{eq:delN}), shown here in the EarlyHeat0.8\ run. \label{fig:early_delta}} \vspace{-1\baselineskip} \end{figure} In Figure \ref{fig:t_both}, we plot the density cutoff, $\rho_{\rm cutoff}$, defined in \S~\ref{sec:densities}, for both the Heat0.8\ ({\it crosses}) and EarlyHeat0.8\ ({\it triangles}) runs. For the sake of a direct comparison, this time we use physical units both for $\rho_{\rm cutoff}$ (proper cm$^{-3}$), and for the time elapsed since the UVB turn-off (Myr). Unfortunately, the drawback to having a simulation run with such an early heating episode is that there are fewer halos to analyze at earlier epochs. Specifically, in the epoch with evident negative feedback ($z\gsim23$), as seen below, there are only three redshift outputs containing {\it both} halos exhibiting suppression {\it and} halos not exhibiting suppression of CD gas (groups 2 and 1, respectively, defined in \S~\ref{sec:densities}). While it is difficult to draw strong conclusions from Figure \ref{fig:t_both}, the density cutoff values do appear similar in the two runs. We examined the radial profiles of the same halo pictured in Figure \ref{fig:profiles}, to verify that we can apply the same cooling arguments as discussed in \S~\ref{sec:profiles}. We compare the NoUVB\ and EarlyHeat0.8\ runs at $z=30$, shortly after the UVB turn-off at $z_{\rm UVB, off}=32.23$. As in Figure~\ref{fig:profiles}, this redshift corresponds roughly to the regime where the induced shock begins to dissipate, and the gas starts falling back into the core. In all of the runs, the temperature drops to $T\sim T_{\rm vir} \sim 10^3$K near the core very soon after $z_{\rm UVB, off}$.
As in \S~\ref{sec:profiles}, we characterize the delay in formation of CD gas with (c.f. eq. (\ref{eq:delay})) \begin{equation} \label{eq:earlydelay} f_{\rm delay} \sim \left( \frac{n^{\rm NoUVB}_g}{n^{\rm EarlyHeat0.8}_g} \right) \left( \frac{x^{\rm EarlyHeat0.8}_{\rm H_2}}{x^{\rm NoUVB}_{\rm H_2}} \right)^{-1} \sim \frac{25}{20} \sim 1. \end{equation} \noindent From our simple cooling argument, we predict a nearly negligible delay for this halo. Indeed, the halo ends up forming CD gas at $z=20.5$ in the EarlyHeat0.8\ run, and at $z=21$ in the NoUVB run, showing a very small delay. Despite the halo's exposure to the UVB earlier in its evolution and subsequent lower gas density, the total negative feedback is reduced compared to the Heat0.8\ run. Compared to the Heat0.8\ run, the negative feedback, when expressed as the photo-evaporation term in the above equation $( n^{\rm NoUVB}_g /n^{\rm EarlyHeat0.8}_g)$, is smaller by a factor of $\sim 2$, and the positive feedback, when expressed as the H$_2$ fraction term in the above equation $( x^{\rm EarlyHeat0.8}_{\rm H_2}/x^{\rm NoUVB}_{\rm H_2})$, is larger by a factor of $\sim 2$. These changes are explained by more efficient Compton cooling (which more effectively eliminates the impact of the photo--heating), and the lower gas density (which leads to a lower value for the ${\rm H_2}$ fraction in the NoUVB\ run, and hence a larger relative enhancement in the EarlyHeat0.8\ run), respectively \citep{OH03}. \begin{figure} \vspace{+0\baselineskip} \myputfigure{f11.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{Values of $\rho_{\rm cutoff}$ (in proper cm$^{-3}$) as a function of time elapsed since $z_{\rm UVB, off}$. Crosses correspond to our Heat0.8\ run (i.e. $z_{\rm UVB, on}=25$); triangles correspond to our EarlyHeat0.8\ run (i.e. $z_{\rm UVB, on}=33$).
\label{fig:t_both}} \vspace{-1\baselineskip} \end{figure} \subsection{The impact of not including radiative transfer} \label{sec:nort} Our simulations treat photo-ionization in the optically thin limit and so do not include radiative transfer effects. This results in two differences compared to a self-consistent treatment. The most obvious effect is that all of our halos are ionized simultaneously, while in reality halos are ionized by very nearby stars with distances less than the few kpc radius of typical HII regions \citep{WAN04, KYSU04}. Nevertheless, we argue that the primary effect of this is to vary the flux felt by the halo, and we explore a range of reasonable fluxes in our simulations. The exception is if the halo is so close that it is enveloped within the shock generated by the gas expelled from the halo hosting the star that produces the ionizing radiation. However, typically this shocked region occupies a volume of less than 1\% of the ionized region (e.g., \citealt{WAN04}). The second, and more important, effect of radiative transfer will be to shield the high density cores of our minihalos. If the cores are not ionized, then neither the positive nor the negative feedback effects will occur in the neutral gas. \citet{ABS05} estimate that self-shielding will set in at densities around a few particles per cm$^{-3}$ (depending on the strength of the flux and the size of the halo). This value is approximately the density we find in the cores of our simulated halos (e.g., Figure 2), and so we conclude that radiative transfer effects may play an important role in the cores of our halos. We note that at these densities, we typically find very little negative feedback anyway because of the short cooling times in the ionized gas. Most of the negative feedback we observe arises due to the photo-heating of low-density gas which is then later accreted onto halos.
This means that we do not expect our results to be strongly affected by the missing radiative transfer effects. Where important, the effect will be to decrease the amount of feedback, making our results upper limits on the amplitude of the expected feedback. Finally, we note that, given time, the halos will be evaporated and eventually ionized despite the high densities in the core; however, this photo-evaporation time will typically be longer than 3 Myr \citep{HAM01, SIR04, ISR05}. \section{The Impact of a Lyman Werner Background} \label{sec:lw} While up to now we have ignored a possible Lyman--Werner background, such a background is likely to be present early on, and could have a strong impact on the ${\rm H_2}$ chemistry and gas cooling. In particular, the IGM is nearly optically thin, or quickly becomes so, at frequencies below 13.6eV \citep{HAR00}. For reference, we note that one photon per hydrogen atom (the minimum UV background required for reionizing the IGM) would translate to a background intensity of $J_{\rm LW}\sim 20 [(1+z)/21]^3$. Background levels 2--4 orders of magnitude below this value will be established well before reionization, and have the potential to already photodissociate ${\rm H_2}$ molecules at these early epochs \citep{HRL97, MBA01}. Furthermore, and of more direct interest in the present study, \citet{OH03} have argued that the presence of an entropy floor, generated by the UV heating, reduces gas densities, and makes the ${\rm H_2}$ molecules in collapsing halos more vulnerable to a LW background. Motivated by the above, in this section, we study the impact of a LW background on our results. We start with a brief discussion of our results without a LW background (\S~\ref{subsec:disc}), use these results to build up some expectations for the impact of the LW background (\S~\ref{subsec:LWtoy}), and then present the results of simulation runs with LW backgrounds (\S~\ref{subsec:LWsims}).
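For orientation, the reference intensity quoted above, corresponding to one photon per hydrogen atom, can be evaluated at any redshift; a minimal sketch (the function name is ours):

```python
def j_lw_one_photon_per_atom(z):
    """Reference LW background intensity, J_LW ~ 20 [(1+z)/21]^3,
    corresponding to one photon per hydrogen atom, in units of
    1e-21 erg s^-1 cm^-2 Hz^-1 sr^-1 (see text)."""
    return 20.0 * ((1.0 + z) / 21.0) ** 3

# By construction, the scaling gives J_LW = 20 at z = 20; plausible
# pre-reionization backgrounds lie 2--4 orders of magnitude below this.
print(j_lw_one_photon_per_atom(20.0))
```

At $z=25$ (our fiducial $z_{\rm UVB, on}$), this yields $J_{\rm LW}\approx 38$, so the $J_{\rm LW}=0.001$--$0.1$ range simulated below indeed corresponds to 2--4 orders of magnitude below the one-photon-per-atom level.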
\begin{figure} \vspace{+0\baselineskip} \myputfigure{f12.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{The figure illustrates the impact of the UV heating in the four different runs (Flash, Heat0.08, Heat0.8, EarlyHeat0.8, as labeled in each panel). For each halo, we show, at $z=23$ (or $z=30$, in the early heating run), the ratio of the ${\rm H_2}$ fractions (red filled squares), of the temperatures (blue empty triangles) and of the average gas density within the central 15pc (red filled circles) in the runs with UV heating, compared to the runs with no UVB. Note that the ${\rm H_2}$ fraction tends to increase by the same factor, regardless of the nature of the heating. As a result, the sign of the overall feedback (negative or positive) is determined primarily by the changes in the gas density, which does depend strongly on the type and amount of heating. \label{fig:fossilf4}} \vspace{-1\baselineskip} \end{figure} \subsection{Discussion of Results without the LW Background} \label{subsec:disc} As already discussed above, the UV heating produces two prominent effects: it boosts the ${\rm H_2}$ fraction and it decreases the gas density. In the case of the individual halo studied in Figure~\ref{fig:profiles} and described in \S~\ref{sec:profiles}, and also in the case of the analogous halo in the early heating run, described in \S~\ref{sec:early_heating} above, we have seen that the overall impact of the UV heating is a delay in the development of cold dense gas; this delay can be understood by the increase in the cooling time, given by the product of the two effects above. In order to understand the net effect of the UV heating on the overall halo population, in Figure~\ref{fig:fossilf4}, we show the ratio of the ${\rm H_2}$ fractions (red filled squares), of the temperatures (blue empty triangles) and of the mean gas density within the central 15pc (red filled circles) in all four of our runs with UV heating.
Each quantity is computed for every halo present at $z=23$ (or at $z=30$ in the early heating run), in the runs with UV heating, and the ratio refers to this value, divided by the same quantity in the run without a UV background. The figure clearly shows that the ${\rm H_2}$ fractions are enhanced in a similar fashion in all of the runs (by factors ranging from several hundred at low mass, to a few at high mass). This is indeed expected: while the H$_2$ abundance is nearly independent of halo mass in runs with a UVB, it is a strongly increasing function of mass in the NoUVB case (see Figure 3). Because the ``freeze--out'' value of the ${\rm H_2}$ abundance, $f_{\rm H_2}\approx 2\times 10^{-3}$ is insensitive to the background flux or duration, or to the gas density \citep{OH02}, the enhancement factor over the NoUVB run is always the same. Contrary to the ``universal'' effect on the ${\rm H_2}$ fraction, the impact of the UV background on the gas density depends strongly on the nature of the heating. Not surprisingly, the flash heating case shows the weakest gas dilution; heating the gas for an extended period, at increasing flux levels, causes larger dilutions. Note that the impact on the density tends to diminish for more massive halos. This is partly because a fixed amount of heating/energy input corresponds to a smaller fraction of the halo's total binding energy. In addition, the ${\rm H_2}$--cooling time is shorter than $10^7$ years in halos with $M\gsim 10^{5.8}~{\rm M_\odot}$ and the UV--heated gas is able to cool prior to $z=23$. This latter effect is also directly evident in the gas temperature ratios, which decrease towards larger halos (and decrease below unity). The above trends account for the basic results shown in Figure~\ref{fig:delta}. Note that this figure shows only those halos that develop cold dense gas; i.e. those with $M\gsim 10^{5.5}~{\rm M_\odot}$. 
In the Flash--ionization case, the ${\rm H_2}$ fraction enhancement in these halos dominates, and results in a positive overall feedback. In the Heat0.08 case, the effects on the ${\rm H_2}$ fraction and on the gas density nearly cancel each other and the net result is that the UV heating has almost no impact. In the Heat0.8 case, the dilution of the gas density dominates, and results in an increase in the cooling time, and a delay in the development of the cold dense gas, by a factor of $1-10$. \subsection{The Impact of a Lyman Werner Background - Expectations} \label{subsec:LWtoy} The above trends suggest that the UV heating can render the halos more susceptible to the negative effect of a LW background. As argued in \citet{OH03}, the ${\rm H_2}$ photodissociation rate depends linearly on the gas density, while the rate of ${\rm H_2}$--forming two--body collisions scales with the square of the density; hence density dilution makes ${\rm H_2}$ photodissociation comparatively more important. In order to investigate the impact of a LW background on the amount of cold dense gas, we performed a set of six additional simulation runs. Before describing these runs, however, we use the no--LW runs with UV heating (Heat0.8) and without heating to develop some expectations. These are shown in Figures~\ref{fig:fossilf1}--\ref{fig:fossilf2}, as follows. \begin{figure} \vspace{+0\baselineskip} \myputfigure{f13.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{{\it Upper panel:} The ratio of the ${\rm H_2}$ fractions, temperatures, and average gas density between the Heat0.8 and NoUVB runs, at $z=24$ (following the notation in Figure~\ref{fig:fossilf4}). {\it Lower panel:} The critical value of the background LW flux, $J_{\rm LW}$ (in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$) such that the gas temperature cools to $300$K by redshift $z=18$. Only those halos whose gas manages to cool by $z=18$ are shown.
Values for the NoUVB\ run are represented by filled black triangles; values for the Heat0.8\ run are represented by filled red circles. Empty symbols denote the critical value of $J_{\rm LW}$, such that the gas temperature decreases by {\it half} between redshifts $z=24$ and $z=18$ (see discussion in text). \label{fig:fossilf1}} \vspace{-1\baselineskip} \end{figure} \begin{figure} \vspace{+0\baselineskip} \myputfigure{f14.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{{\it Upper panel:} ${\rm H_2}$--cooling time for each halo in the Heat0.8 (red, filled circles) and NoUVB (black, empty triangles) runs, at redshift $z=24$. {\it Lower panel:} Compton--cooling timescale for the same halos. Note that the ${\rm H_2}$--cooling time is shorter than the Hubble time in halos with $M\gsim 10^{5.5}~{\rm M_\odot}$. \label{fig:fossilf3}} \vspace{-1\baselineskip} \end{figure} In Figure~\ref{fig:fossilf1}, in the upper panel, we show the ratio of the ${\rm H_2}$ fractions (red filled squares), of the temperatures (blue empty triangles) and of the mean gas density within the central 15pc (red filled circles). The ratios are computed in the Heat0.8 and NoUVB runs, as in Figure~\ref{fig:fossilf4}, but we here use $z=24$, rather than $z=23$. This choice is made to allow for some Compton cooling, but to minimize the ${\rm H_2}$--cooling that occurs after the heating is turned off (the latter may not occur if a LW background is always on). In Figure~\ref{fig:fossilf3}, we explicitly show the ${\rm H_2}$--cooling and Compton--cooling times for each halo in the Heat0.8 and NoUVB runs, at $z=24$. Note that the ${\rm H_2}$--cooling time is shorter than the Hubble time in halos with $M\gsim 10^{5.8}~{\rm M_\odot}$. 
In the bottom panel of Figure~\ref{fig:fossilf1}, we compute the coupled chemical and thermal evolution at fixed density, and compute, for each halo, the critical value of the background LW flux, $J_{\rm LW}$ (in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$) such that the gas temperature cools to $300$K by redshift $z=18$. This will represent a proxy for a critical value for the LW background, above which the halo is prevented from developing cold dense gas in the simulation prior to $z=18$. The choice of the temperature, 300 K, matters relatively little for the low--mass halos. On the other hand, we find that the critical $J_{\rm LW}$ we derive for the larger ($M\gsim 10^6~{\rm M_\odot}$) halos is more sensitive to this choice; in particular, these halos have high ($\gsim 1000$K) virial temperatures, and typically never cool down to 300K, even in the absence of a LW background. Hence, for these halos, we show (in empty symbols), the critical value of the background LW flux, $J_{\rm LW}$, such that the gas temperature decreases by {\it half} between redshifts $z=24$ and $z=18$. The bottom panel in Figure~\ref{fig:fossilf1} shows that the critical $J_{\rm LW}$ is between $10^{-4}$ and $10^0$, with a relatively large scatter at fixed halo mass. There is, nevertheless, a clear impact of the heating by the UV background, which reduces the critical LW flux by about an order of magnitude (shown as a vertical offset between circles and triangles). Note that the majority of halos (especially at low masses) never develop cold dense gas, and are not shown in the bottom panel. \begin{figure} \vspace{+0\baselineskip} \myputfigure{f15.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip} \figcaption{The figure shows the ratio of the amplitude of critical LW background (defined to prevent gas cooling between $z=24$ and $z=18$) between the Heat0.8 and NoUVB runs, as a function of the ratio of the ${\rm H_2}$--cooling times. 
The scaling is close to the $J_{\rm LW} \propto t_{\rm cool}^{-1}$ expected from equating the ${\rm H_2}$--cooling and ${\rm H_2}$--photodissociation timescales. \label{fig:fossilf2}} \vspace{-1\baselineskip} \end{figure} In Figure~\ref{fig:fossilf2}, we show the ratio of the critical LW background as a function of the ratio of the ${\rm H_2}$--cooling time (which scales approximately as $t_{\rm cool}\propto T/[n_g f_{\rm H2}]$) at $z=24$. Note that there were only 12 halos for which the critical LW background was finite in both the Heat0.8 and NoUVB runs (this excludes the majority of halos, which do not form cold dense gas even if $J_{\rm LW}=0$, and also the handful of halos that have already formed cold dense gas prior to $z=24$, in either run). As a result, the range shown by this plot is not necessarily representative. Nevertheless, the figure shows a clear trend: the critical LW background scales nearly as the inverse of the cooling time. This can be understood easily: in order to prevent the gas from cooling, the ${\rm H_2}$--dissociation time, $t_{\rm dissoc}\approx 2\times 10^7 (J_{\rm LW}/10^{-3})^{-1}$ yr, must be comparable to or shorter than the ${\rm H_2}$--cooling time. The critical LW flux falls somewhat below the value predicted by this scaling, because at higher LW fluxes, the ${\rm H_2}$ abundance starts saturating as it approaches its equilibrium value (rather than decreasing linearly with time under the influence of the background). \subsection{The Impact of a Lyman Werner Background - Simulation Results} \label{subsec:LWsims} To better quantify the feedback effects of UV heating combined with a persistent LW background, we performed six additional simulations, in which the LW background was left on after the UVB was turned off (at $z=24.62$). Our background flux is constant throughout the narrow LW frequency band (11.18--13.6eV).
We normalize the specific intensity at the mean photon energy of 12.87 eV, in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$. We include three values of the LW background: $J_{\rm LW}=$ 0.001, 0.01, and 0.1. Each of these three LW backgrounds is applied to both our NoUVB\ and Heat0.8\ runs at $z=24.62$, and is subsequently left on. In Figure \ref{fig:LWruns}, we show the resulting CD gas suppression, as defined in eq. (\ref{eq:delN}) and (\ref{eq:delM}). Empty symbols refer to runs with no heating (NoUVB), while filled symbols refer to runs with heating (Heat0.8). In both cases, squares, circles, and triangles denote simulation runs with increasing LW backgrounds ($J_{\rm LW}$ = 0.001, 0.01, 0.1, respectively). Note that we obtain values of $\delta_{N, {\rm cd}}(z) < -1$ in Figure \ref{fig:LWruns}; this is due to the fact that the CD gas disappears from some of the low mass halos in the presence of strong LW fluxes (cf.\ the normalization of eq. (\ref{eq:delN}), which tracks relative changes since $z=z_{\rm UVB, on}=25$). The results of the simulation runs in Figure \ref{fig:LWruns} agree fairly well with the semi-analytical arguments in \S~\ref{subsec:LWtoy} above. Namely, the value of the LW background at which significant suppression occurs by $z=18$ in the NoUVB\ runs is found to be approximately $J_{\rm LW} \sim 0.01$. In the Heat0.8 run, there is significant suppression at $z=18$ already for $J_{\rm LW} \sim 0.001$. While the bulk of this suppression is due to the UV heating alone (and not the LW background; cf. Figure \ref{fig:delta}), the LW background does prevent three additional halos from cooling their gas prior to $z=18$. This is consistent with the expectation that the UV heating lowers the value of $J_{\rm LW}$ required for appreciable negative feedback, by a factor of $\sim$ 10.
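These thresholds can be cross--checked against the timescale argument of \S~\ref{subsec:LWtoy}; a minimal numerical sketch, assuming the dissociation time scales inversely with the flux, $t_{\rm dissoc}\approx 2\times 10^7\,(J_{\rm LW}/10^{-3})^{-1}$ yr, and adopting a representative ${\rm H_2}$--cooling time of $10^7$ yr (our own illustrative choice, motivated by the halos that develop CD gas in the NoUVB run):

```python
def t_dissoc_yr(j_lw):
    """H2 photodissociation timescale: ~2e7 yr at J_LW = 1e-3, and
    inversely proportional to the LW intensity (J_LW in units of
    1e-21 erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    return 2.0e7 * 1.0e-3 / j_lw

def lw_suppresses_cooling(j_lw, t_cool_yr):
    """Negative LW feedback is expected when dissociation is at least
    as fast as H2 cooling (t_dissoc <= t_cool)."""
    return t_dissoc_yr(j_lw) <= t_cool_yr

# For the three simulated backgrounds, against a fiducial 1e7 yr cooling time:
for j in (0.001, 0.01, 0.1):
    print(j, t_dissoc_yr(j), lw_suppresses_cooling(j, 1.0e7))
```

With this fiducial cooling time, only $J_{\rm LW}\gsim 0.01$ dissociates ${\rm H_2}$ faster than the gas can cool, matching the simulated NoUVB threshold; UV heating, by lengthening $t_{\rm cool}$, lowers the threshold by the factor of $\sim 10$ quoted above.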
More generally, our results reveal that for $J_{\rm LW}\lsim 0.01$, negative feedback is dominated by UV heating, while for $J_{\rm LW}\gsim 0.01$, negative feedback is dominated by the LW background. Near the threshold value of $J_{\rm LW}\sim 0.01$, negative feedback transitions from being UV heating dominated ($\lsim 100$Myr after $z_{\rm UVB, off}$) to being LW background dominated ($\gsim 100$Myr after $z_{\rm UVB, off}$). This ``transition'' behavior can be understood as a combined result of two effects: the UV heating is turned off, and its impact is transient, as discussed above, while the critical LW background scales roughly with the inverse of the density \citep{HAR00, OH03} and hence a fixed LW background will have a larger impact at lower densities or decreasing redshifts. \begin{figure} \vspace{+0\baselineskip} \myputfigure{f16.eps}{3.3}{0.5}{.}{0.} \vspace{-1\baselineskip}\figcaption{This figure shows the suppression of cold dense gas in halos in simulation runs that include a persistent LW background. The LW background had specific intensities of $J_{\rm LW}=0.001, 0.01$, or $0.1$ (normalized at 12.87 eV, in units of $10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$). Each of these three LW backgrounds is applied to both our NoUVB\ and Heat0.8\ runs at $z=24.62$, and is subsequently left on. Values of $\delta_{M, {\rm cd}}(z)$ ({\it top panel}) and $\delta_{N, {\rm cd}}(z)$ ({\it bottom panel}) are shown, as defined in equations (\ref{eq:delM}) and (\ref{eq:delN}). \label{fig:LWruns}} \vspace{-1\baselineskip} \end{figure} \section{Conclusions} \label{sec:conc} We used three-dimensional hydrodynamic simulations to investigate the effects of a transient ultraviolet (UV) flux on the collapse and cooling of pregalactic clouds, with masses in the range $10^5$ -- $10^7~M_\odot$, at high redshifts ($z\gsim18$).
Although in the scenario we envision, the radiation is due to nearby PopIII star formation, in order to study its effect in a statistical way, we adopted a spatially constant but short-lived photo-ionizing background throughout the simulation box. This was done to mimic the effect of a $\sim 100$ solar mass star forming at $z = 25$ and shining for 3 Myr. Of course, in reality, the closest star can be located at a range of distances and so we effectively covered this range by varying the strength of the background. The effect of the ionizing background will be strongest on relatively low density gas which is in the process of assembling to form halos at later times. The sign of the effect has been uncertain with suggestions of positive feedback due to enhanced H$_2$ formation, and negative feedback due to the increased entropy of gas in the relic HII region. In addition, we studied the combined effects of this transient UV flux and a persistent Lyman--Werner (LW) background (at photon energies below 13.6eV) from distant sources. In the absence of a LW background, we find that a critical specific intensity of $J_{\rm UV} \sim 0.1 \times 10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$ demarcates the transition from net negative to positive feedback for the halo population. A weaker UV flux stimulates subsequent star formation inside the fossil HII regions, by enhancing the ${\rm H_2}$ molecule abundance. A stronger UV flux significantly delays star--formation by reducing the gas density, and increasing the cooling time at the centers of collapsing halos. At a fixed $J_{\rm UV}$, the sign of the feedback also depends strongly on the density of the gas at the time of UV illumination. In either case, we find that once the UV flux is turned off, its impact starts to diminish after $\sim30\%$ of the Hubble time. 
In the more realistic case when a LW background is present (in addition to the ionizing source), with $J_{\rm LW} \gsim 0.01 \times 10^{-21}{\rm ergs~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}$, strong suppression persists down to the lowest redshift ($z=18$) in our simulations. Finally, we find evidence that heating and photoevaporation by the transient UV flux renders the $\sim 10^6~{\rm M_\odot}$ halos inside fossil HII regions more vulnerable to subsequent ${\rm H_2}$ photo--dissociation by a LW background. The results of this study show that the combined negative feedback of a transient UV and a persistent LW background is effective at high redshift in suppressing star--formation in the low--mass halos; this suppression will help in delaying the reionization epoch to $z=6-10$ as inferred from SDSS quasar spectra and from CMB polarization anisotropy measurements in the 3--yr {\it WMAP} data. \acknowledgments{We thank Peng Oh for many stimulating and helpful discussions. AM acknowledges support by NASA through the GSRP grant NNG05GO97H. GB acknowledges support through NSF grants AST-0507161 and AST-0547823. ZH acknowledges partial support by NASA through grants NNG04GI88G and NNG05GF14G, by the NSF through grants AST-0307291 and AST-0307200, and by the Hungarian Ministry of Education through a Gy\"orgy B\'ek\'esy Fellowship. This work was also supported in part by the National Center for Supercomputing Applications under grant MCA04N012P.} \bibliographystyle{apj}
\section{Introduction} Ballistically-controlled reactions provide simple examples of non-equilibrium systems with complex kinetics and have recently attracted a lot of interest~\cite{EF,KS,BRL,R,jarek_uno,jarek_due,jarek_tre,jarek_four,jarek_five}. They consist of an assembly of particles moving freely between collisions with given velocities. When two particles meet, they instantaneously annihilate each other and disappear from the system. Depending on the initial velocity distribution, two classes of asymptotic states have been observed in one dimensional systems. In general, for continuous initial velocity distributions~\cite{BRL,univer}, as well as for some special cases of discrete velocity distributions (symmetric two-velocity distribution~\cite{EF,KS,jarek_uno}, or symmetric trimodal velocity distribution with a sufficiently small fraction of immobile particles~\cite{jarek_due,jarek_tre}), the steady-state turns out to be empty and it is approached algebraically in time. The dynamical exponent characterizing the time decay depends on the initial velocity distribution and it is still not completely clear how to characterize the universality classes for this problem~\cite{univer}. On the contrary, for some discrete velocity distributions, the stationary state may not be empty, but may contain particles all moving with the same velocity (for example, a non-symmetric bimodal velocity distribution~\cite{EF,jarek_uno} or a trimodal velocity distribution with more than 25\% of particles initially at rest~\cite{jarek_due,jarek_tre}). This non-interacting state is generally approached with an exponentially fast decay. A richer behavior can be expected in a system with, in contrast to the ballistic annihilation case, an interacting steady-state. This can be achieved by constantly bringing new particles into the system by some suitable mechanism.
A possibility is to allow branching processes: ballistically moving particles can spontaneously generate, with a given branching rate, some offsprings. Accordingly, one speaks of ballistic branching-annihilation. The problem of branching-annihilation has recently been studied in the framework of a diffusive dynamics~\cite{cardy_uno,cardy_due}. The simplest example of such a system would be one with a single species of particle $A$, undergoing diffusive behavior, single--particle annihilation $A \to \emptyset$, and branching $A \to 2 A$. There is always a trivial absorbing state, with no particles. For a sufficiently low branching rate, this is the only stationary state, but for larger values of this rate, another non-trivial `active' stationary state appears. This stationary-state phase transition belongs to the directed percolation universality class~\cite{dp}. A slightly more complicated class of models are reaction-diffusion systems with the underlying reaction processes $2 A \to \emptyset$ and $A \to (m+1) A$, with $m$ even. It turns out that for these models the critical exponents are not the ones of directed percolation but belong to a new universality class~\cite{cardy_uno,cardy_due} characterized by branching and annihilating walks with an even number of offsprings. The constraint of local `parity' conservation is the reason for the existence of this new universality class. Our aim here is to study the problem of ballistic branching-annihilation (BBA) in one dimension, for which interesting new properties can be foreseen. The paper is organized as follows. In section \ref{sec:model}, the BBA model is defined. The exact dynamical equations of motion are derived for the one dimensional case. In section \ref{sec:mf}, the dynamics of the model is studied within a mean-field like approximation. In particular, the phase diagram of the steady-state is established in terms of the different parameters of our model.
In this approximation, the steady-state is always approached exponentially fast. Section \ref{sec:num} is devoted to numerical simulations of the one dimensional model. It is shown that fluctuations play a crucial role. Indeed, as in the mean-field approximation, a phase transition occurs at $q=1/2$, where $q$ is the probability that the offspring velocity is opposite to that of its mother; however, for $q<1/2$ the dynamics is governed by the coarsening of clusters of particles having the same velocity, and the system approaches a completely filled stationary state with a power law decay. For $q>1/2$, there is no coarsening and the system relaxes rapidly towards a non-filled stationary state. Finally, the results are discussed in section \ref{sec:conc}. \section{The model} \label{sec:model} We shall first define precisely the BBA model studied and then derive the corresponding equations of motion. \subsection{Definition of the model} We consider a one-dimensional system composed of particles of size $\sigma$, initially uniformly distributed at random in space. Moreover, at $t=0$, the velocities of the particles are random independent variables distributed with the symmetric bimodal distribution: \begin{equation} P(v)=\frac{1}{2}\big[\delta(v-c) + \delta(v+c) \big] \end{equation} The dynamics consists of two mechanisms: \begin{itemize} \item{The ballistic annihilation:} Two particles making contact (with opposite velocities) disappear instantaneously. \item{The branching:} During the time interval $[t, t+dt]$, the following branching processes take place: \begin{enumerate} \item A particle with coordinates (position and velocity) ($x,~c$) produces with probability $p(1-q)dt$ a pair of particles with coordinates ($x-\sigma,~c$) and ($x,~c$). \item A particle with coordinates ($x,~c$) produces with probability $pqdt$ a pair of particles with coordinates ($x-\sigma,~-c$) and ($x,~c$).
\item A particle with coordinates ($x,~-c$) produces with probability $p(1-q)dt$ a pair of particles with coordinates ($x,~-c$) and ($x+\sigma,~-c$). \item A particle with coordinates ($x,~-c$) produces with probability $pqdt$ a pair of particles with coordinates ($x,~-c$) and ($x+\sigma,~c$). \end{enumerate} \end{itemize} (The particular choice of the position of the newly created particle ensures that, independently of its velocity, a child cannot collide with its mother at birth.) Thus the parameter $0 \le p < \infty$ characterizes the overall branching rate, while the parameter $0 \le q \le 1$ characterizes the probability that the offspring has a velocity opposite to that of its mother. The particular case $p=0$ corresponds to the pure ballistic annihilation problem previously studied~\cite{EF,KS,jarek_uno,jarek_due}. \subsection{Exact equations of motion} We can now derive the equations of motion describing the dynamics of the system. In the particular case $p=0$, a kinetic equation for the two-particle conditional distribution of nearest neighbors was derived as a rigorous consequence of the dynamics of ballistic annihilation~\cite{jarek_uno,jarek_due}. This equation completely describes the evolution of the system provided that, initially, higher-order conditional distributions factorize into products of two-particle ones. It was then possible to extract exactly and analytically the long time behavior of the particle density for several velocity distributions. Unfortunately, this property is no longer valid in the case with branching. Since we have not been able to find an observable for which this exact closure can be reproduced, one has to face the usual problem of dealing with a complete hierarchy of coupled equations~\cite{bbgky}. It thus seems hopeless to find an exact analytical solution to these equations. Accordingly, we shall only write the equation for the one-particle density distribution $\rho_1(x,v;t)$.
In section \ref{sec:mf}, this equation will be solved using a mean-field approximation. \end{multicols} \vspace{-4.8mm} \noindent\rule{20.5pc}{0.25pt}\rule{0.25pt}{5pt} A careful bookkeeping of the possible dynamical processes leads to the following equations: \begin{eqnarray} (\partial_t + c \partial_x)\rho_1(x,c;t) &=& -2c\rho_2(x,c;x+\sigma,-c;t) \nonumber \\ &+& pq\Big[\rho_1(x-\sigma,-c;t) - \sum_{v=\pm c}\int_0^{\sigma}dy\, \rho_2(x-\sigma,-c;x+y,v;t) \Big] \nonumber \\ &+& p(1-q)\Big[\rho_1(x+\sigma,c;t) - \sum_{v=\pm c}\int_0^{\sigma}dy\, \rho_2(x-y,v;x+\sigma,c;t) \Big] , \label{r1} \end{eqnarray} and \begin{eqnarray} (\partial_t - c \partial_x)\rho_1(x,-c;t) &=& -2c\rho_2(x-\sigma,c;x,-c;t) \nonumber \\ &+& pq\Big[\rho_1(x+\sigma,c;t) - \sum_{v=\pm c}\int_0^{\sigma}dy\, \rho_2(x-y,v;x+\sigma,c;t) \Big] \nonumber \\ &+& p(1-q)\Big[\rho_1(x-\sigma,-c;t) - \sum_{v=\pm c}\int_0^{\sigma}dy\, \rho_2(x-\sigma,-c;x+y,v;t) \Big] \label{r2} \end{eqnarray} \hspace*{22pc}\rule[-4.5pt]{0.25pt}{5pt}\rule{20.5pc}{0.25pt}% \vspace*{-8pt} \begin{multicols}{2} where $\rho_2(x_1,v_1;x_2,v_2;t)$ is the joint two-particle density to find a particle in the state $(x_1,v_1)$ simultaneously with another in the state $(x_2,v_2)$ at time $t$. The right-hand side of equation (\ref{r1}) can be interpreted in the following way: the first term describes the annihilation of a particle $(x,c)$ with a particle of opposite velocity. It is given by the product of the density of a collision configuration [$\sigma\rho_2(x,c;x+\sigma,-c;t)$] with the frequency of such an encounter ($2c/\sigma$). The second term describes the branching of a particle of velocity $-c$, at position $x-\sigma$, giving birth to a particle of velocity $+c$ at position $x$. This is only possible if no other particle is present in the interval $[x,x+\sigma]$ (otherwise there would be an overlap between two particles), and it happens with the rate $pq$.
Finally, the third term describes the creation with rate $p(1-q)$ of a particle whose mother has the same velocity. The same restriction as in the previous case applies. One can in principle write the equation of motion for $\rho_2$ along the same lines. However, we shall not give this cumbersome equation here, as we are not going to use it. For simplicity, we shall only consider a spatially homogeneous system. We can thus write $\rho_1(x,v;t)=\rho_1(v,t)$ and $\rho_2(x_1,v_1;x_2,v_2;t) =\rho_2(x_1-x_2,v_1;0,v_2;t)$. Introducing then the observable $\Psi(t) \equiv \rho_1(c,t) - \rho_1(-c,t)$, one easily shows that it is an exactly conserved quantity when $q=1/2$. This feature reflects the particular choice of rules, which are symmetric when $q=1/2$. As a consequence, one expects our model to exhibit a particular behavior at this point. \section{Mean-field analysis} \label{sec:mf} A first attempt to obtain information about our model is to apply a mean-field approximation to equations~(\ref{r1}) and~(\ref{r2}). One then assumes the following factorization: \begin{eqnarray} \rho_2(x_1,v_1;x_2,v_2;t) &=& \rho_1(x_1,v_1;t)\rho_1(x_2,v_2;t) \nonumber \\ &=& \rho_1(v_1,t)\rho_1(v_2,t), \label{mf} \end{eqnarray} (the last equality holds for a spatially homogeneous system). It is then convenient to introduce, in addition to $\Psi$, the variable: \begin{equation} \Phi(t) \equiv \rho_1(c,t) + \rho_1(-c,t). \label{sum} \end{equation} Equations~(\ref{r1}) and~(\ref{r2}) lead to \begin{equation} \frac{d\Phi}{dt} = p(1-\sigma \Phi)\Phi - c(\Phi^2-\Psi^2),\label{eqm1} \end{equation} and \begin{equation} \frac{d\Psi}{dt} = p(1-2q)(1-\sigma \Phi) \Psi. \label{eqm2} \end{equation} The formal solution of this last equation is \begin{equation} \Psi(t) = \Psi(0)\exp \Bigl[p(1-2q)\Bigl(t-\sigma \int_0^t d\tau\,\Phi(\tau)\Bigr)\Bigr]. \label{phi} \end{equation} As before, one sees that the value $q=1/2$ plays a special role.
Indeed, two regimes have to be distinguished: \begin{enumerate} \item{For $0 \le q <1/2$:} the exponential term diverges unless $(1-\sigma \Phi) \to 0$ as $t \to \infty$. Thus a possible stationary solution is \begin{equation} \Phi_s = \frac{1}{\sigma}, \quad {\Psi_s}^2 = \frac{1}{\sigma^2}. \label{statio1} \end{equation} In the particular case $q=0$, the time dependent solution can be obtained explicitly, as shown in the Appendix. For $t \to \infty$, one recovers the above stationary solution. \item{For $1/2 < q \le 1$:} in this case, a possible stationary solution is \begin{equation} \Psi_s = 0, \quad \Phi_s= \frac{1}{\sigma}{(1+c/p\sigma)}^{-1}.\label{statio2} \end{equation} \end{enumerate} It is straightforward to verify that the above stationary solutions are stable and are approached exponentially in time. Moreover, when $q=1/2$ the complete time dependent solution can be obtained. From equation~(\ref{eqm2}), one indeed finds $\Psi(t)={\rm const}=\Psi_0$, and thus equation~(\ref{eqm1}) becomes \begin{equation} \frac{d\Phi}{dt} = p \Phi - (c+p \sigma)\Phi^2+c \Psi_0^2, \label{eqpart} \end{equation} whose solution reads \begin{equation} \Phi(t) = \frac{p}{2(c+p\sigma)} + \frac{\gamma A \cosh(At)+\bigl[A^2/(c+p\sigma)\bigr]\sinh(At)} {A\cosh(At)+\gamma(c+p\sigma)\sinh(At)},\label{timedep} \end{equation} with $A=\sqrt{p^2/4+c(c+p\sigma)\Psi_0^2}$ and $\gamma=\Phi(0)-p/[2(c+p\sigma)]$. The stationary state is then given by $\Psi_s=\Psi_0$ and \begin{equation} \Phi_s(q=1/2) = \Bigl(p+\sqrt{p^2+4c(c+p\sigma)\Psi_0^2}\Bigr)\Big/\bigl[2(c+p\sigma)\bigr]. \label{stat22} \end{equation} Here again, one sees from equation~(\ref{timedep}) that the steady state is approached exponentially. As already noted, $\Psi(t)$ is an exactly conserved quantity for $q=1/2$. The mean-field stationary phase diagram is shown in Fig.\ \ref{mfphase}. The stationary value $\Phi_s$ is plotted against $q$ for a fixed value of $p \not= 0$.
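As a quick numerical check, equations (\ref{eqm1}) and (\ref{eqm2}) can be integrated directly; the sketch below uses a simple forward-Euler scheme with illustrative parameter values $p=c=\sigma=1$ (our choice, made here only for illustration):

```python
def integrate_mean_field(p, q, c, sigma, phi0, psi0, dt=1e-3, t_max=200.0):
    """Forward-Euler integration of the mean-field equations for
    Phi = rho1(c) + rho1(-c) and Psi = rho1(c) - rho1(-c)."""
    phi, psi = phi0, psi0
    for _ in range(int(t_max / dt)):
        dphi = p * (1.0 - sigma * phi) * phi - c * (phi * phi - psi * psi)
        dpsi = p * (1.0 - 2.0 * q) * (1.0 - sigma * phi) * psi
        phi += dt * dphi
        psi += dt * dpsi
    return phi, psi

# q > 1/2: Psi_s = 0 and Phi_s = (1/sigma)/(1 + c/(p*sigma)) = 1/2 here
phi_s, psi_s = integrate_mean_field(p=1.0, q=0.9, c=1.0, sigma=1.0,
                                    phi0=0.5, psi0=0.1)
# q < 1/2: completely filled state, Phi_s = 1/sigma and Psi_s**2 = 1/sigma**2
phi_f, psi_f = integrate_mean_field(p=1.0, q=0.1, c=1.0, sigma=1.0,
                                    phi0=0.5, psi0=0.1)
```

Both runs relax exponentially to the stationary solutions (\ref{statio1}) and (\ref{statio2}), in agreement with the analysis above.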
The interesting feature is the presence of a gap $\Delta(p)$ for $q=1/2$ given by \begin{equation} \Delta(p) = \frac{1}{\sigma}\Big(1-\frac{1}{1+c/p\sigma}\Big). \end{equation} $\Delta(p)$ decreases as $p$ increases. When $q < 1/2$, $\Phi_s=1/\sigma$ for all values of $p$ (completely filled state), while for $q > 1/2$, $\Phi_s$ increases monotonically with $p$. The dependence is linear for small $p$, but $\Phi_s \to 1/\sigma$ when $p \to \infty$. \begin{figure} {\samepage\columnwidth20.5pc \centerline{\input mf.eepicemu} \caption{Mean-field phase diagram: the stationary value of the averaged density $\Phi_s$ is plotted against $q$ for a fixed value of $p$.\label{mfphase}} } \end{figure} \section{Numerical simulations} \label{sec:num} In view of the situation when $p=0$~\cite{EF,KS,jarek_uno}, one can anticipate that the fluctuations will also play an important role in the case with branching. One way to deal with the complete problem, including fluctuations, is to perform numerical simulations. The simulations were performed for a one-dimensional periodic lattice with typically $2^{17}$ sites. The velocity of each particle was drawn from a symmetric bimodal distribution. However, on computational grounds, the particle velocities were chosen to be $(0,c')$ (with $c'>0$). The results for our model defined in section \ref{sec:model} can be recovered by performing a simple Galilean transformation and putting $c=c'/2$. The particle size $\sigma$ is the lattice spacing, and the discretized time step is given by $\tau=\sigma/c'$. The algorithm used to simulate the dynamics is the following. During one time step $\tau$, the three following processes occur sequentially: \begin{enumerate} \item{Ballistic motion:} independently of the occupation state of the sites, the particles with velocity $c'$ move one site to the right. \item{Annihilation:} two particles located on the same site disappear. 
\item{Branching:} for each remaining particle, one draws two random numbers, $r_p$ and $r_q$, uniformly distributed in the interval $[0,1]$. One offspring particle is added to the left (right) nearest neighbor of a particle with velocity $c'$ ($0$) if the site is empty and if $r_p$ is less than a given value $\tilde p$. Hence, $\tilde p$ is the probability of branching. This new particle takes the velocity of its mother with probability $1-q$, i.e.\ if $r_q>q$ (and the other velocity otherwise). If two particles are created on the same site (thus born from two different mothers), they annihilate instantaneously. \end{enumerate} For each of the above different steps, the sites were updated simultaneously. The simulations were run on a Connection Machine CM-200 and the data averaged over $10$ independent realizations. The mean initial density for all the simulations was $0.5$, with, on average, the same densities of both kinds of particles. We have also checked that the results obtained for lattices of $2^{17}$ sites were free of finite size effects. Note that when a particle branches, it can create at most one particle during a time step $\tau$. As a consequence, this limits the value of $p$ that can be explored through the simulations. Indeed, the branching rate $p$ is related to $\tilde p$ via \begin{equation} \tilde p = p \tau. \end{equation} Thus using the definition of $\tau$ and $c'$, one finds \begin{equation} \frac{p\sigma}{c} = 2\tilde p. \end{equation} $\tilde p$ being a probability, the dimensionless branching rate $p\sigma/c$ can only take values between $0$ and $2$. We can now discuss the numerical data obtained using the above algorithm.
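The update cycle above can be summarized by the following sketch (a plain, serial Python transcription; the site encoding and function name are our own conventions, and the actual runs used simultaneous lattice updates on the Connection Machine):

```python
import random

EMPTY, RIGHT, REST = 0, 1, 2   # site states: vacuum, velocity c', velocity 0

def step(lattice, p_tilde, q, rng):
    """One time step tau: ballistic motion, annihilation, then branching."""
    n = len(lattice)
    new = [EMPTY] * n
    # 1) ballistic motion and 2) annihilation, taken together: resting
    #    particles stay put; each right mover hops one site to the right
    #    and, if it lands on a resting particle, both disappear.
    for i in range(n):
        if lattice[i] == REST:
            new[i] = REST
    for i in range(n):
        if lattice[i] == RIGHT:
            j = (i + 1) % n
            new[j] = EMPTY if new[j] == REST else RIGHT
    # 3) branching: each surviving mother tries, with probability p_tilde,
    #    to place one offspring on its left (right movers) or right
    #    (resters) neighbor, only if that site is empty in the current
    #    configuration (simultaneous update); two offsprings born on the
    #    same site annihilate instantaneously.
    births = {}
    for i in range(n):
        if new[i] == EMPTY or rng.random() >= p_tilde:
            continue
        j = (i - 1) % n if new[i] == RIGHT else (i + 1) % n
        if new[j] != EMPTY:
            continue
        same = rng.random() >= q   # offspring keeps the mother's velocity?
        v = new[i] if same else (REST if new[i] == RIGHT else RIGHT)
        births.setdefault(j, []).append(v)
    for j, velocities in births.items():
        if len(velocities) == 1:
            new[j] = velocities[0]
    return new
```

Iterating `step` from a random half-filled initial configuration gives the density histories analysed below.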
Two kinds of quantities have been investigated: first, the time dependent density, with particular emphasis on the stationary states and the way these stationary states are approached; second, a more microscopic quantity, namely the time dependent cluster size distribution $P(\ell,t)$ in the system and some of its moments. These quantities are well suited to describe the coarsening process present in the system. As in the mean-field approach, and as expected from the last remark of section \ref{sec:model}, the value $q=1/2$ turns out to play a particular role and three regimes have to be distinguished. \begin{enumerate} \item{For $0 \le q <1/2$:} The time evolution of the particle density $\Phi(t)$ is shown in Fig.\ \ref{philess} for several values of $\tilde p$. Clearly, the system reaches a stationary state $\Phi_s=1/\sigma$ in agreement with the mean-field prediction. However, as shown in Fig.\ \ref{appless}, the stationary state is approached as $\Phi_s-\Phi(t) \sim t^{-1/2}$. This power law sets in after a crossover time roughly proportional to $1/p$. \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{philess.ps}} \caption{Time evolution of the particle density $\sigma\Phi(t)$ as a function of time $t$ for $q=0.1$ and several values of $\tilde p$.\label{philess}} } \end{figure} \item{For $1/2 < q \le 1$:} The time evolution of the particle density $\Phi(t)$ is shown in Fig.\ \ref{phimore} for several values of $\tilde p$. As depicted in Fig.\ \ref{statmore}, the stationary value of the density depends both on $\tilde p$ and $q$. For $\tilde p < 0.1$, it is well fitted by \begin{equation} \Phi_s(\tilde p,q) \approx {\tilde p} \exp(0.55/q). \label{smallp} \end{equation} Moreover, for large enough $\tilde p$, $\Phi_s$ does not increase monotonically with $\tilde p$: it exhibits a maximum and then slightly decreases as $\tilde p$ increases.
As shown in Fig.\ \ref{appmore}, the stationary state is approached in an exponential way according to $\Phi_s-\Phi(t) \sim \exp(-A\tilde p t)$, where $A$ may depend on $q$. \item{} The limiting case $q=1/2$ is more difficult to investigate due to the slow decay towards the stationary state. In fact, for $\tilde p > 0.3$, there is evidence that the stationary state is completely filled, i.e.\ $\Phi_s=1/\sigma$. For smaller values of $\tilde p$ the simulations do not allow us to draw any conclusions, as shown in Fig.\ \ref{phiequal}. Nevertheless, for $q <1/2$, one has $\Phi_s=1/\sigma$ for all values of $\tilde p$, while for $q>1/2$, equation~(\ref{smallp}) shows that, at least for small $\tilde p$, $\Phi_s \not= 1/\sigma$. Thus for small $\tilde p$, $\Phi_s$ has a jump at $q=1/2$, and we believe that such a jump will be present for all finite values of $p$. \end{enumerate} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{appless.ps}} \caption{$\sigma\Phi_s-\sigma\Phi(t)$ versus $t$ in a double logarithmic scale, for $q=0.1$ and several values of $\tilde p$. For comparison, the full line represents $t^{-1/2}$. This decay sets in after a crossover time which behaves as $\tau/\tilde p$.\label{appless}} } \end{figure} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{phimore.ps}} \caption{Time evolution of the particle density $\sigma\Phi(t)$ as a function of time $t$ for $q=0.9$ and several values of $\tilde p$.
The stationary state is reached after a time of order $10\tau/\tilde p$.\label{phimore}} } \end{figure} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{statmore.ps}} \caption{The stationary values of the averaged density $\sigma\Phi_s$ are plotted against $\tilde p$, for several values of $q>1/2$, obtained by numerical simulations.\label{statmore}} } \end{figure} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{appmore.ps}} \caption{Semi-logarithmic plot of $\sigma\Phi(t)-\sigma\Phi_s$ versus $t$ for $q=0.9$ and $\tilde p=0.01$. The exponential approach towards the steady state sets in at $t/\tau \simeq 250$.\label{appmore}} } \end{figure} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{phiequal.ps}} \caption{Time evolution of the particle density $\sigma\Phi(t)$ as a function of time $t$ for $q=0.5$ and several values of $\tilde p$. For small values of $\tilde p$ (less than 0.3), we are unable to extract the steady-state density due to CPU limitations.\label{phiequal}} } \end{figure} We can now consider the properties of the clusters present in the system at a given time. The qualitative situation is well illustrated by the two snapshots in Fig.\ \ref{snapshots}. They represent the time evolution of a $512$-site system during 1024 iterations. Moreover, a change of reference frame has been performed such that the particle velocities appear to be $\pm c$. Depending on $q$, one observes totally different pictures. In the case $\tilde p=0.7, q=0.1$ (Fig.\ \ref{snapshots}a), large clusters (of similar particles) are present. They are separated by two types of interfaces: vertical ones (which are stable) and rough ones. The dynamics of the system is totally governed by the random walks of the rough interfaces.
During the time evolution, one rough interface may collide with a stable interface, leading to the coalescence of two clusters into a larger one. In the case $\tilde p=0.7, q=0.9$ (Fig.\ \ref{snapshots}b), the sizes of the clusters are rather small and there are no stable interfaces. The dynamics is of a different type. A more quantitative description is given by the investigation of the time dependent cluster size distribution $P(\ell,t)$. In the domain $0 \le q <1/2$, where coarsening is observed, one expects~\cite{frach} that $P(\ell,t)$ will obey a scaling form: \begin{equation} P(\ell,t) \sim t^{-\alpha} \Pi(\ell t^{-\beta}). \label{scaling} \end{equation} \end{multicols} \begin{figure} \epsfxsize=7.5cm \centerline{\epsfbox{snap_a2.ps}\hspace{1cm}% \epsfxsize=7.5cm\epsfbox{snap_b1.ps}} \vspace{3mm} \caption{Time evolution (vertical axes) of the configurations for a chain of 512 sites (the initial density is approximately one half) and for 1024 time iterations. The white pixels indicate sites without particle, the grey ones, sites with a particle moving towards the right, and the black ones, sites with a particle moving towards the left. Fig.\ a is for $\tilde p=0.7$, $q=0.1$ while Fig.\ b is for $\tilde p=0.7$, $q=0.9$. \label{snapshots}} \end{figure} \begin{multicols}{2} In Fig.\ \ref{collapse}, we plot the scaling function obtained by the collapse of the data for $\tilde p=0.7, q=0.1$, with $\alpha=1$ and $\beta=0.5$. Although the plot is very noisy, one still notes that the scaling function $\Pi(z)$ has a very particular shape, with a sharp maximum at $z=z_{max}$. The value of $z_{max}$ increases slowly with $q$, going from $0.4$ for $q=0.1$ to $1.2$ for $q=0.4$.
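For reference, the cluster sizes entering $P(\ell,t)$ can be extracted from a configuration snapshot as maximal runs of adjacent particles sharing the same velocity, with periodic boundary conditions. A minimal sketch (the list encoding, with `None` marking an empty site, is our own convention):

```python
def cluster_sizes(config):
    """Sizes of the clusters (maximal runs of adjacent particles with the
    same velocity) in a ring configuration; `None` marks an empty site."""
    n = len(config)
    if all(s is None for s in config):
        return []
    if all(s is not None and s == config[0] for s in config):
        return [n]              # a single cluster wrapping the whole ring
    # rotate so the scan starts right after a cluster boundary
    start = next(i for i in range(n)
                 if config[i] is not None
                 and (config[i - 1] is None or config[i - 1] != config[i]))
    sizes, run, prev = [], 0, None
    for k in range(n):
        s = config[(start + k) % n]
        if s is not None and s == prev:
            run += 1            # same cluster continues
        else:
            if run:
                sizes.append(run)
            run = 1 if s is not None else 0
        prev = s
    if run:
        sizes.append(run)
    return sizes
```

Histogramming the returned sizes over many realizations and times yields $P(\ell,t)$.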
A better way to extract the exponents $\alpha$ and $\beta$ is to consider the $n$-th order moments of the distribution, defined as: \begin{equation} \langle\ell^n\rangle = \frac{\int_{\sigma}^{\infty} d\ell\,\ell^n P(\ell,t)} {\int_{\sigma}^{\infty} d\ell\,P(\ell,t)} \end{equation} which, according to the scaling form given by equation~(\ref{scaling}), should behave as: \begin{equation} \langle\ell^n\rangle \sim t^{\alpha_n} = t^{n \beta}, \label{scalexp} \end{equation} while \begin{equation} \int_{\sigma}^{\infty} d\ell\,P(\ell,t)\sim t^{-\alpha + \beta} \end{equation} Thus, the two above relations allow us to determine the exponents $\alpha$ and $\beta$. The values of $\alpha_n$ for $n=1,\dots,6$ are shown in Fig.\ \ref{regress} for $\tilde p=0.7$, $q=0.1$. A good fit is obtained for $\beta=0.48\pm 0.02$ and $\alpha=0.96\pm 0.04$, in very good agreement with our collapsed plot. Repeating our analysis for other values of $q$ (namely, 0.2, 0.3 and 0.4), we find that the same values of the exponents fit the data reasonably well. For $q=1/2$, the exponents of the different moments of the cluster size distribution are $\alpha_1=0.33$, $\alpha_2=0.96$, $\alpha_3=1.60$, $\alpha_4=2.22$, $\alpha_5=2.82$ and $\alpha_6=3.40$. These exponents are of the form $\alpha_n=-0.26 +0.61n$, which is not compatible with the relation~(\ref{scalexp}). This probably shows that the simulations have not yet reached the true asymptotic regime. Moreover, as shown in Fig.\ \ref{colequal}, $P(\ell,t)$ is of the form: \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{collapse.ps}} \caption{Scaling form of the cluster size distribution for $\tilde p=0.7, q=0.1$.
$P(\ell,t)t^{\alpha}$ is plotted versus $\ell t^{-\beta}$ for $\alpha=1$ and $\beta=0.5$.\label{collapse}} } \end{figure} \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{regress.ps}} \caption{Exponent $\alpha_n$ (open circles) of the $n$-th moment of the cluster distribution function for $n=0,\ldots,6$ and $\tilde p=0.7, q=0.1$. The line is the fit $\alpha_n=0.01+0.48 n$.\label{regress}} } \end{figure} \begin{equation} P(\ell,t) \sim t^{-1/3}\ell^{-4/3}, \label{pow} \end{equation} over two decades in the variable $\ell$. Note that equation~(\ref{pow}) cannot be valid for arbitrarily large $\ell$, because the moments of $P(\ell,t)$ would diverge with the upper limit of integration. Finally, in the domain $1/2 < q \le 1$, where no coarsening is observed, the system approaches its stationary state very rapidly and no dynamical scaling has been found for the cluster distribution. However, in the stationary state, the cluster distribution takes the form: \begin{equation} P(\ell) =C_1 \exp(-C_2 \ell) \end{equation} where $C_1$ and $C_2$ are two constants. \section{Interpretation of the results and conclusions} \label{sec:conc} The first interesting point is the particular role played by the value $q=1/2$. As already mentioned in section \ref{sec:model}, for $q=1/2$ one notes the presence of an extra conservation law in the system: the difference between the average local densities of particles with positive and negative velocities is strictly conserved. It is well known that conservation laws have a great influence on the dynamics of non-equilibrium statistical systems. Accordingly, one may expect the dynamics at $q=1/2$ to be particular. \begin{figure} \epsfxsize=7.5cm {\samepage\columnwidth20.5pc \centerline{\epsfbox[40 60 550 720]{colequal.ps}} \caption{Scaling form of the cluster size distribution for $\tilde p=0.7, q=0.5$. $P(\ell,t) t^{1/3}$ is plotted versus $\ell$ in a double logarithmic scale.
The full line represents $\ell^{-4/3}$.\label{colequal}} } \end{figure} In view of the scaling properties of the problem, it may be useful to think about it in terms of the dynamical renormalization group. Based on the results of both the mean-field approximation and the numerical simulations, one is led to conjecture the presence of three fixed points in this system: an unstable ``critical'' fixed point at $q=1/2$ and two attractive fixed points at $q=0$ and $q=1$. When $q<1/2$, the branching process favors the appearance of pairs of consecutive particles with the same velocities, and the dynamics is governed by the attractive fixed point at $q=0$. Large particle clusters with opposite velocities are formed during the time evolution and two kinds of interfaces are present in the system (see Fig.\ \ref{snapshots}a). First, let us consider the interface between two clusters of colliding particles and call this type of interface $I_1$. Such an interface has a very long lifetime. Indeed, the probability that a vacancy present at one of the extremities of a cluster of size $L$ traverses the cluster and perturbs the interface is of the order of $(1-p)^L$. Thus, an interface $I_1$ is very stable in the long time limit, where the system is made up of large clusters. The second type of interface, called $I_2$, separates non-colliding clusters. Thus it does not necessarily have a one-site extension, but can be wider. Hence, its behavior is more subtle. Three different regimes may be considered. The simplest case to discuss is when $p\sigma/c>1$. In this case, the interface $I_2$ is typically formed by only one empty lattice site, whose dynamics is diffusive. Indeed, one can show that both boundaries of an interface $I_2$ perform a Brownian motion. Moreover, when $p\sigma/c>1$, this random walk is biased, so that both boundaries tend to come closer together.
For sufficiently long times, the initial gap separating two non-colliding clusters will shrink to a single site, which will perform a random walk. Eventually, this hole will encounter an $I_1$ interface, permitting the coalescence of two clusters into a larger one. The random-walk aspect of this dynamics is responsible for the slow approach towards the stationary state (in $t^{-1/2}$) observed in the simulations: a hole must diffuse over a distance of the order of the typical cluster size before two clusters can merge. When $p\sigma/c=1$, the boundaries of an interface $I_2$ both perform an unbiased random walk. Accordingly, the initial gap between two non-colliding clusters will not, on average, vary. However, because of the BBA dynamics, this gap will eventually shrink to a single site, either through the creation of a cluster inside the gap when $q\neq0$, or through the coalescence of two interfaces. Thus the previous argument holds. Finally, when $p\sigma/c<1$, the situation is similar: although the boundaries of $I_2$ perform a biased random walk which tends to increase the separation between the two non-colliding clusters, the coalescence of two interfaces or the creation of a new cluster inside the gap (if $q\neq0$) will fill up this space more efficiently. Eventually, the stationary state is completely filled, only one cluster remains, and the annihilation process no longer acts. For values of $q$ not too far from $1/2$, this asymptotic behavior will show up only at very long times. Accordingly, the results of the (finite-time) numerical simulations may still be affected by the properties of the critical fixed point at $q=1/2$, and the dynamics will exhibit some crossover behavior. When $q>1/2$, a majority of pairs of particles with opposite velocities are created during branching. Due to the annihilation processes, those particles prevent the formation of large clusters. One may anticipate that the long-time dynamics is governed by the other attractive fixed point, corresponding to $q=1$.
The dynamics is no longer governed by a coarsening mechanism but only by the dynamics of small clusters; hence the fast (exponential-like) relaxation occurs. Depending upon the value of $p$, there is a more or less important fraction of empty sites (or holes) in the system. The presence of these two different dynamical regimes explains the jump observed in the stationary density at $q=1/2$. This paper shows once again that mean-field results generally do not hold for low-dimensional systems. Whereas the mean-field approximation predicts the exact critical value of $q$ (because the mean-field equation for $\Psi$ is exact when $q=1/2$) and the right stationary value of the density when $q<1/2$, it is unable to give satisfactory results for the stationary value of the density when $q>1/2$ (see Figs \ref{mfphase} and \ref{statmore}). Unsurprisingly, the mean-field approximation is also unable to predict the power-law approach to the stationary state when $q<1/2$, which is obviously governed by fluctuations. More surprisingly, its prediction of an exponentially fast approach towards the steady state when $q>1/2$ is (qualitatively) well verified. However, to better understand this problem, it would be useful to find an exact analytical solution, at least for the three fixed-point cases ($q=0, 1/2$ and $1$ for arbitrary values of $p$), as a support to the above qualitative picture. Unfortunately, we have not yet been able to find such exact solutions. In conclusion, one sees that this simple BBA problem with one offspring already exhibits a very rich behavior. The case with two or more offspring is a completely open question. \section*{Acknowledgments} Work partially supported by the Swiss National Science Foundation (M.D.). Two of us (P.-A.R. and J.P.) acknowledge the hospitality of the Department of Theoretical Physics of the University of Geneva, where part of this work was done. P.-A.R. is supported by the Swiss National Science Foundation and J.P.
acknowledges financial support from KBN (Committee for Scientific Research, Poland) grant 2 P03 B 035 12.
\section{Introduction} Over the next five years, our view of the Milky Way galaxy will be revolutionised by the European Space Agency's cornerstone \emph{Gaia} mission \citep{GC+16}, which aims to provide positions and velocities for billions of stars in the Galaxy -- a 10000-fold increase in sample size and 100-fold increase in precision over its predecessor, \emph{Hipparcos} \citep{VLF07}. The second \emph{Gaia} data release (DR2) will already provide astrometric and photometric data in three bands for $\sim 1.4$ billion sources over the entire sky. A fraction of this dataset will also contain measurements of radial velocities, extinction and effective temperatures. With subsequent \emph{Gaia} data releases, in combination with several major current and future spectroscopic surveys, such as SDSS/APOGEE \citep{MSF17}, DESI, Gaia-ESO \citep{GRA12}, LAMOST, GALAH, WEAVE and 4MOST, and asteroseismic surveys, such as K2, TESS and PLATO, additional data for tens of millions of stars will become available that include chemical abundances, radial velocities, and stellar ages. In principle, this huge amount of high-dimensional empirical information about the stellar component of our Galaxy holds the key to unveiling its current state through precise identification of disc, bulge and halo substructure, and its formation history \citep[see][for a recent overview]{RB13R}. Given that the Milky Way is thought to be fairly typical for its mass \citep[although see][]{Bell2017,Cautun2018} within the standard model of cosmology -- the Lambda Cold Dark Matter ($\Lambda$CDM) paradigm -- this multi-dimensional star-by-star information provides a unique window into the formation of $L_*$ galaxies in general, as well as a test of the predictions of $\Lambda$CDM. This new wealth of observational data is only a partial snapshot of the current distribution of stars in our quadrant of the Milky Way, however, and its interpretation requires some form of modelling.
Widely employed modelling techniques include dynamical models such as (quasi-) distribution functions \citep{Bi10,BR13,TBR16}; Torus mapping \citep{BM16}; Made-to-Measure (M2M) models \citep{ST96,Hun13} that aim to characterise the current structure of the major Galactic components; and self-consistent $N$-body models that provide testable predictions for the effects of various evolutionary processes \citep[e.g.][]{GKC12,KGG17,FDH17}. A crucial aspect in the quest to draw reliable conclusions from any of these techniques is to understand the limitations, biases and quality of the observational data. Specifically, the effects of survey selection functions, sample size, survey volume, accuracy of phase space and spectroscopic measurements, dust obscuration and image crowding influence inferences as to the true phase-space distribution of stars. A pragmatic solution to these problems is to generate and analyse synthetic Milky Way catalogues cast in the observational frame of the survey \citep{BS80,RC86,BRC87}. ``Mock catalogues'' of this general type were first used in cosmology in the 2000s \citep[e.g.][]{Cole2005} and have now become an essential tool for the design and analysis of large galaxy and quasar surveys. Realistic mock catalogues provide assessments of an instrument's capabilities and biases, tests of statistical modelling techniques applied to realistic representations of observational data, and detailed comparisons between theoretical predictions and observations. Perhaps one of the best known recent attempts is the Besan\c{c}on model \citep{RRD03}, which provides a disc (or set of discs) with a set of coeval and isothermal (single velocity dispersion) stellar populations assumed to be in equilibrium, with analytically specified distributions of density, metallicity and age. However, these models are not dynamically consistent and oversimplify the structure of the Galaxy, particularly the stellar halo which is modelled as a smooth component. 
An important advance was made by \citet{SBJ11}, who developed the \textlcsc{Galaxia} code for creating mock stellar catalogues either analytically or from phase space sampling of hybrid semi-analytic-$N$-body simulations to represent stellar haloes in a cosmological context \citep{BJ05,CCF10}. In the context of the stellar disc, \citet{HKG15} introduced the \textlcsc{SNAPDRAGONS} code that generates a mock catalogue taking into account \emph{Gaia} errors and extinction, and demonstrated the resulting observable kinematics of stars around a spiral arm in a smoothed particle hydrodynamic simulation set up in isolation. A related technique was developed by \citet{LWC15}, building on that of \citet{SBJ11}, to distribute synthetic stars sampled from a cosmological $N$-body simulation in such a way as to preserve the phase-space properties of their parent stellar populations. One of the goals of modern Galactic astronomy is to compare predictions of \emph{ab initio} cosmological formation models with the high-dimensional observational data provided by Galactic surveys in order to elucidate the evolutionary history of the Galaxy. Mock stellar catalogues based on full hydrodynamical cosmological simulations are an appealing prospect to fulfil this aim. This would provide us with a window into how different types of stars that originate from cosmological initial conditions are distributed in phase space. Given that the details of these distributions will depend on the formation history of the Milky Way, multiple mock catalogues derived from simulations that span a range of formation histories will be desirable for many aspects of disc and halo formation. Until recently, the availability of realistic cosmological simulations of Milky Way analogues has been limited due to a combination of numerical hindrances and insufficiently realistic astrophysical modelling of important physical effects, such as feedback processes \citep{KG91,NS00,GWB10,SWS11}. 
This situation has improved and cosmological zoom simulations have now become sophisticated enough to produce sets of high-resolution Milky Way analogues in statistically meaningful numbers \citep[e.g.][]{MPS14,WDS15,FNS16}. In particular, the \textlcsc{AURIGA} simulation suite \citep{GGM17} consists of 40 Milky Way mass haloes simulated at resolutions comparable to the most modern idealised simulations ($6\times10^3$ to $5\times10^4$ $\rm M_{\odot}$ per baryonic element) with a comprehensive galaxy formation model, including physical processes such as magnetic fields \citep{PMS14} and feedback from Active Galactic Nuclei and stars. These simulations have been shown to produce disc-dominated, star-forming late-type spiral galaxies that are broadly consistent with a plethora of observational data such as star formation histories, abundance matching predictions, gas fractions, sizes, and rotation curves of $L_{*}$ galaxies \citep{GGM17}. Furthermore, they are sufficiently detailed to address questions related to chemodynamic properties of the Milky Way, such as the origin of the chemical thin-thick disc dichotomy \citep{GBG18}, the formation of bars, spiral arms and warps \citep{GWG16}, and the properties of the stellar halo \citep[][Monachesi et al. in prep]{MGG16} and satellite galaxies \citep{SGG17}. The confluence of these advanced simulation techniques with the new \emph{Gaia} and ground-based data will transform, at a fundamental level, the understanding of our Galaxy in its cosmological context. The aim of this paper is to present two sets of mock \emph{Gaia} DR2 stellar catalogues generated from the \textlcsc{AURIGA} cosmological simulations: one set generated with a parallel version of \textlcsc{SNAPDRAGONS} \citep{HKG15} denoted \textsc{hits-mocks}, and another with the code presented in \citet{LWC15} denoted \textsc{icc-mocks}. 
These catalogues contain the true and observed phase space coordinates of stars, their \emph{Gaia} DR2 errors, magnitudes in several passbands, metallicities, ages, masses, and stellar parameters. We show that a powerful use of the mock catalogues is to compare them with the intrinsic simulation data from which they were generated in order to acquire predictions of how accurately physical properties are reproduced, and to determine which kinds of data should be studied from the \emph{Gaia} survey to target specific questions. We focus on two practical applications: the structure of the young stellar disc and the kinematics of the stellar halo. In particular, we show that, in contrast to typical disc setups in many idealised $N$-body simulations, the \textlcsc{AURIGA} simulations predict that young stars ($\sim$few hundred Myr old) make up flared distributions (increasing scale height with increasing radius), which are well traced by B- and A-dwarf stars. We also show that the systemic rotation of the stellar halo can be accurately inferred from \emph{Gaia} data. Finally, we discuss the limitations of our methods and provide information on how the community can access the mock data. \section{Magneto-hydrodynamical Simulations} \label{sec2} \begin{table*} \centering \caption{Table of properties of each simulation. The columns are: 1) halo number; 2) virial mass; 3) virial radius; 4) stellar mass within the virial radius; 5) stellar disc mass, calculated as $2\pi \Sigma _0 R_d^2$, where $\Sigma _0$ and $R_d$ are the parameters retrieved from a bulge-disc surface density decomposition performed in the same way as in \citet{GGM17} for the mass within 1 kpc of the disc midplane; 6) stellar disc scale length; 7) circular rotation velocity at a radius of 8 kpc, calculated as $V_c = \sqrt{GM(<R=8\, {\rm kpc}) / 8\,{\rm kpc}}$; 8) azimuthally averaged stellar surface density within 1 kpc of the midplane at $R=8$ kpc.
The last row provides current estimates of all of these quantities for the Milky Way. All values are taken directly from \citet{BHG16}. $^a$ is the mean of the values for $R_{200}$ provided in Table 8 of that paper, the standard deviation of which is $28.6$.}
\begin{tabular}{c c c c c c c c}
\hline
Run & $M_{\rm vir} \, [10^{12} \rm M_{\odot}]$ & $R_{\rm vir} \, [\rm kpc]$ & $M_{*}\, [10^{10} \rm M_{\odot}]$ & $M_{*,\rm d} \, [10^{10}\rm M_{\odot}]$ & $R_{\rm d} \, [\rm kpc]$ & $V_{\rm c} (R=8 \, \rm kpc) \, [\rm km\, s^{-1}]$ & $\Sigma (R=8 \rm kpc) \, [\rm M_{\odot} \, pc^{-2}]$ \\
\hline
Au 6 & 1.01 & 211.8 & 6.1 & 2.6 & 3.3 & 224.7 & 33.2 \\
Au 16 & 1.50 & 241.5 & 7.9 & 3.7 & 6.0 & 217.5 & 44.5 \\
Au 21 & 1.42 & 236.7 & 8.2 & 3.8 & 3.3 & 231.7 & 51.8 \\
Au 23 & 1.50 & 241.5 & 8.3 & 4.0 & 5.3 & 240.0 & 52.5 \\
Au 24 & 1.47 & 239.6 & 7.8 & 2.8 & 6.1 & 219.2 & 31.5 \\
Au 27 & 1.70 & 251.4 & 9.5 & 5.0 & 3.2 & 254.5 & 71.1 \\
\hline
MW & $1.3 \pm 0.3$ & $^a220.7$ & $6\pm 1$ & $4\pm 1$ & $2.6\pm 0.5$ & $238 \pm 15$ & $33.3 \pm 3$ \\
\hline
\end{tabular}
\label{table1}
\end{table*}
\begin{figure*}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_6_snap0063_000070-min}\hspace{-0.1cm}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_16_snap0063_000070-min}\hspace{-0.1cm}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_21_snap0063_000070-min}\hspace{-0.1cm}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_23_snap0063_000070-min}\hspace{-0.1cm}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_24_snap0063_000070-min}\hspace{-0.1cm}
\includegraphics[scale=0.3,trim={0 0 0 0},clip]{figures/stars_halo_27_snap0063_000070-min}
\caption{Face-on and edge-on projected stellar densities at $z=0$ for the six high-resolution simulations from which we construct mock catalogues.
The images are a projection of the $K$-, $B$- and $U$-band luminosity of stars, shown by the red, green and blue colour channels, in logarithmic intervals, respectively. Younger (older) star particles are therefore represented by bluer (redder) colours. The box side-length is 70 kpc in each panel. Movies and images are available at \href{http://auriga.h-its.org}{\url{http://auriga.h-its.org}}.} \label{figau} \end{figure*} The \textlcsc{AURIGA} simulations \citep{GGM17} are a suite of cosmological zoom simulations of haloes in the virial mass\footnote{Defined to be the mass inside a sphere in which the mean matter density is 200 times the critical density, $\rho _{\rm crit} = 3H^2(z)/(8 \pi G)$.} range $10^{12}$ - $2\times10^{12}$ $\rm M_{\odot}$. The haloes were identified as isolated haloes\footnote{The centre of a target halo must be located outside of $9$ times the $R_{200}$ of any other halo that has a mass greater than $3\%$ of the target halo mass.} from the redshift $z=0$ snapshot of a parent dark matter only simulation with a comoving side length of 100 cMpc from the EAGLE project (L100N1504) introduced in \citet{SCB15}. Initial conditions for the zoom re-simulations of the selected haloes were created at $z=127$, using the procedure outlined in \citet{J10} and assuming the \citet{PC13} cosmological parameters: $\Omega _m = 0.307$, $\Omega _b = 0.048$, $\Omega _{\Lambda} = 0.693$ and a Hubble constant of $H_0 = 100 h$ km s$^{-1}$ Mpc$^{-1}$, where $h = 0.6777$. The halos are then re-simulated with full baryonic physics with higher resolution around the main halo. 
The simulations were performed with the magneto-hydrodynamic code \textlcsc{AREPO} \citep{Sp10}, and a comprehensive galaxy formation model \citep[see][for more details]{VGS13,MPS14,GGM17} that includes: primordial and metal line cooling, a prescription for a uniform background UV field for reionization (completed at $z=6$), a subgrid model for star formation, stellar feedback, magnetic fields \citep{PMS14,PGG17}, and black hole seeding, accretion and feedback. Formed star particles are assumed to be simple stellar populations (SSPs), and are assigned broad band luminosities based on the catalogues of \citet{BC03}. Stellar evolutionary processes such as mass loss and metal enrichment from SNII, SNIa, and AGB stars are modelled by calculating at each time step the mass moving off the main sequence for each star particle according to a Chabrier Initial Mass Function. Lower and upper mass limits of 0.1 and 100 $\rm{M}_\odot$, respectively, are set for the integration limits. The mass and metals are then distributed among nearby gas cells with a top-hat kernel. We track a total of 9 elements: H, He, C, O, N, Ne, Mg, Si and Fe. In this paper, we focus on the highest resolution simulations of the \textlcsc{AURIGA} suite, which correspond to the ``level 3'' resolution described in \citet{GGM17}. The typical dark matter particle mass is $\sim 4 \times 10^{4}$ $\rm M_{\odot}$, and the baryonic mass resolution is $\sim 5 \times 10^{3}$ $\rm M_{\odot}$. The physical softening of collisionless particles increases with time up to a maximum physical softening length of 185 pc, which is reached at redshift 1. The physical softening value for the gas cells is scaled by the gas cell radius (assuming a spherical cell shape given the volume), with a minimum softening set to that of the collisionless particles. Final face-on and edge-on stellar luminosity images for these systems are shown in Fig.~\ref{figau}. We list some relevant properties of the simulations in Table~\ref{table1}. 
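As an illustrative cross-check of the virial convention defined above (a mean enclosed matter density of $200\rho_{\rm crit}$), the virial radius implied by a given virial mass can be computed directly. The sketch below is not part of the simulation pipeline; it assumes the $z=0$ critical density for $h=0.6777$ and standard values of the physical constants.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
KPC = 3.0857e19      # kiloparsec [m]
MSUN = 1.989e30      # solar mass [kg]

def r_vir_kpc(m_vir_msun, h=0.6777):
    """Radius enclosing a mean density of 200 * rho_crit at z = 0.

    Solves (4/3) * pi * R^3 * 200 * rho_crit = M_vir for R.
    """
    H0 = 100.0 * h * 1.0e3 / (KPC * 1.0e3)          # H0 = 100 h km/s/Mpc in s^-1
    rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)  # critical density [kg m^-3]
    r_cubed = 3.0 * m_vir_msun * MSUN / (4.0 * math.pi * 200.0 * rho_crit)
    return r_cubed ** (1.0 / 3.0) / KPC

# Au 6 (M_vir = 1.01e12 Msun) should come out close to the tabulated 211.8 kpc
print(round(r_vir_kpc(1.01e12), 1))
```

The small residual difference with respect to Table~\ref{table1} reflects rounding of the constants used here.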
We remark that each of these simulations produces a radially extended, thin stellar disc: the thickness of the young stellar disc is typically of order $\sim 100 - 400$~pc \citep{GGM17} at a radius of $R\sim 8$ kpc, which is similar to that of the Milky Way's thin disc \citep{JIB08}. The disc scale lengths, derived from fits to the surface density distribution of stars within 1 kpc of the midplane, range from 3.2 kpc to 6.1 kpc, and the implied stellar disc masses from 2.6 $\times 10^{10}\rm M_{\odot}$ to 5 $\times 10^{10}\rm M_{\odot}$, which are similar to current estimates for the Milky Way \citep{BHG16}. The ability of these simulations to produce coherent discs with barred and spiral structure, together with stellar haloes, from a self-consistent cosmological galaxy formation model starting from $\Lambda$CDM initial conditions makes them powerful predictors for the formation of galaxies like the Milky Way. In the next section, we describe how we generate the mock \emph{Gaia} catalogues from the simulations. \section{Mock stellar catalogues} \label{sec3} The first step in creating a mock stellar catalogue is to choose the position and velocity of the Sun. For each simulation, we define four choices for the solar position: all adopt a radius and height above the midplane (defined at redshift 0) of $(R_{\odot},Z_{\odot}) = (8,0.02)$ kpc, and are spread at equidistant azimuthal angles relative to our default reference angle, which is chosen to be 30 degrees behind\footnote{\emph{Behind} means an angle measured from the bar major axis in the direction opposite to that of the rotation of the Galactic disc.} the major axis of the bar \citep{BHG16}. The bar major axis is calculated from the $m=2$ Fourier mode of the central 5 kpc stellar distribution \citep[see][for details on how to extract angles from modes]{GKC13}.
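For illustration, the phase of the $m=2$ mode from which the bar angle follows can be measured as in the sketch below; this toy example (with an artificial ``bar'' of particles) is a stand-in for the actual procedure of \citet{GKC13}, not code from our pipeline.

```python
import numpy as np

def bar_angle(x, y, mass):
    """Bar major-axis angle from the m=2 Fourier mode.

    The mode amplitude is A_2 = sum_j m_j exp(2 i phi_j); the bar major
    axis lies at half the phase of A_2 (defined modulo pi).
    """
    phi = np.arctan2(y, x)
    a2 = np.sum(mass * np.exp(2j * phi))
    return 0.5 * np.angle(a2)

# Toy bar: particles scattered about a line inclined at 30 degrees
rng = np.random.default_rng(0)
n = 4000
r = rng.uniform(0.1, 5.0, n)
ang = np.radians(30.0) + rng.normal(0.0, 0.1, n) + np.pi * rng.integers(0, 2, n)
x, y = r * np.cos(ang), r * np.sin(ang)
theta_bar = np.degrees(bar_angle(x, y, np.ones(n)))
```

In practice the sum would run over the star particles within the central 5 kpc, weighted by their masses.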
We then rotate the disc such that the solar position is placed at the Galactocentric Cartesian coordinate $(X,Y,Z)=(-R_{\odot}, 0, Z_{\odot})$. We set the local standard of rest equal to the spherically averaged circular velocity at the solar radius, and set the solar motion velocity to $(U_{\odot}, V_{\odot}, W_{\odot}) = (11.1, 12.24, 7.25)$ $\rm km \, s^{-1}$ \citep{SBD10} relative to the local standard of rest. After setting the solar position and velocities, we transform our coordinate system to heliocentric equatorial coordinates following the matrix transformation described in Section 3 of \cite{HK14}, and we retain this coordinate system in the mock catalogue output. For each of the four solar positions, we generate two sets of mock catalogues: one set is generated by a parallelised version of \textlcsc{SNAPDRAGONS}\footnote{Serial version available at \href{https://github.com/JASHunt/Snapdragons}{\url{https://github.com/JASHunt/Snapdragons}}} \citep{HKG15} (\textsc{hits-mocks}{}); the other set is generated using the method presented in \citet{LWC15} (\textsc{icc-mocks}{}), who produced SDSS mocks based on the \citet{CCF10} particle tagging technique applied to the \textsc{aquarius} simulations \citep{SWV08}. \citet{Mateu:2017aa} added \emph{Gaia}{} observables to the \citeauthor{LWC15} mocks to make predictions for the detection of tidal streams in \emph{Gaia}{} data using great-circle methods. Both methods assume that each simulation star particle is a Simple Stellar Population (SSP) that can be transformed into individual stars by sampling from a theoretical isochrone matching the particle's age and metallicity. They compute observable properties of stars and their associated errors in the same way, and apply identical selection functions. The methods differ in how the stars are distributed in phase space, their choice of stellar evolution models, and their treatment of dust extinction.
The step-by-step procedure for generating each set of catalogues is as follows: \paragraph*{\textsc{hits-mocks}} \begin{enumerate} \item apply a stellar population synthesis model to each star particle; \smallskip \item add dust extinction;\smallskip \item apply the observational selection based on a magnitude cut;\smallskip \item convolve observable properties with \emph{Gaia} DR2 errors and displace stellar coordinates. \end{enumerate} \paragraph*{\textsc{icc-mocks}} \begin{enumerate} \item apply a stellar population synthesis model to each star particle; \smallskip \item distribute individual stars over the approximate phase space volume of the parent star particle;\smallskip \item apply the observational selection based on a magnitude cut;\smallskip \item convolve observable properties with \emph{Gaia} DR2 errors and displace stellar coordinates. \end{enumerate} We note that the \textsc{hits-mocks}{} displace stars from their parent particles (true coordinates) to their observed coordinates by randomly sampling the DR2 error distributions for the astrometry and radial velocity of each mock star. However, the \textsc{icc-mocks}{} distribute stars over a 6D kernel approximating the phase-space volume of their parent particle; the resulting coordinates become the true coordinates, which are afterwards displaced to the observed coordinates by error sampling in the same way as in the \textsc{hits-mocks}. Another important difference is that the \textsc{hits-mocks}{} include a model for dust extinction, whereas the \textsc{icc-mocks}{} do not. We discuss the advantages and disadvantages of this choice below, where we describe each stage in detail.
\subsection{Stellar Population Synthesis} \label{sec:popsynth} The basic premise of the population synthesis calculation in both the \textsc{hits-mocks}{} and \textsc{icc-mocks}{} is that each simulation star particle corresponds to an SSP with an evolutionary state defined by a single metallicity and age, and a total number of stars proportional to its mass. The present-day mass distribution of individual stars in the SSP is determined by the convolution of an assumed IMF with a model of stellar evolution (encapsulated in a set of pre-computed isochrones), which takes into account processes such as the death of massive stars and mass loss from those that survive. For the \textsc{hits-mocks}, although the simulations use a Chabrier IMF, \textlcsc{SNAPDRAGONS} only contains implementations of the Salpeter \citep{S55} and Kroupa \citep{K01} IMFs. Thus, we use a Kroupa IMF to sample the distribution of present-day stellar masses for each SSP, as it is the closer approximation to the Chabrier IMF used in the \textlcsc{AURIGA} simulations. We set the minimum allowed initial stellar mass to be 0.1 $\rm M_{\odot}$ (as for the \textlcsc{AURIGA} simulations). For a given SSP, we set the lower mass limit to be the lowest present-day stellar mass that would be visible at our limiting magnitude (see below), and the upper stellar mass limit to be the maximum stellar mass which would still be present at the age of our model particle. We then integrate the IMF over the desired mass range to determine the number of stars which would be visible within this mass range, $N_{\mathrm{s}}$, and randomly sample the IMF $N_{\mathrm{s}}$ times. Note that while we do not generate any stars below the visible limit, we do account for their mass. The process is discussed in more detail in \citet{HKG15}. The procedure described above is similar for the \textsc{icc-mocks}, which use a Chabrier IMF.
To sample the SSP, we choose small intervals of initial mass in the range\footnote{We note that the lower mass limit of $0.08$ is lower than the limit of $0.1$ adopted by the \textlcsc{AURIGA} simulations; however, $\rm M7V$-$\rm M8V$ stars of this mass have an absolute $V$-band magnitude of $\sim 18$ (fainter than our $V<16$ all-sky sample) and an apparent magnitude fainter than $V=20$ at distances farther than 25 pc from the Sun (with no extinction). These extremely faint stars will therefore not be observed for the vast majority of applications. The upper mass limit of $120~\rm{M}_\odot$ is higher than the $100~\rm{M}_\odot$ assumed in \textlcsc{AURIGA}; however, such massive stars are extremely rare, so we do not expect them to bias any results.} $0.08$ to $120~\rm{M}_\odot$. Given the total initial mass of the SSP, we calculate the expected number of stars in each interval. Finally, the actual number of stars in each mass interval is randomly generated from a Poisson distribution with the corresponding expectation value. Once we have sampled the stellar mass distribution for a given star particle, we are in a position to assign stellar parameters such as temperature, magnitudes, and colours to each synthetic star. For the \textsc{icc-mocks}, we use the \textsc{parsec} isochrones \citep{Bressan2012,Tang2014,Chen2014,Chen2015}. These represent up-to-date stellar models that span a wide range of metallicities and ages, and have magnitudes in multiple bands, including the \emph{Gaia}{} bands. We downloaded isochrone tables from the \texttt{CMD v3.0} web interface\footnote{\href{http://stev.oapd.inaf.it/cgi-bin/cmd_3.0}{\url{http://stev.oapd.inaf.it/cgi-bin/cmd_3.0}}} using the default options. We sample a grid of isochrones spanning the age range $6.63 \leq \log(t/\rm{yr}) \leq 10.13$, with a step size $\Delta \log (t/\rm{yr}) = 0.0125$, and the metallicity range $0.0001 \leq Z \leq 0.06$.
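The per-interval Poisson realisation described above can be sketched as follows. For brevity, the sketch uses a Kroupa-like broken power law in place of the Chabrier form, and the binning resolution is arbitrary; both are assumptions for illustration only.

```python
import numpy as np

def kroupa_xi(m):
    """Unnormalised Kroupa-like IMF, dN/dm (continuous at m = 0.5 Msun)."""
    return np.where(m < 0.5, m ** -1.3, 0.5 * m ** -2.3)

def sample_ssp(m_ssp, rng, m_lo=0.08, m_hi=120.0, n_bins=200):
    """Draw individual stellar masses for an SSP of total initial mass m_ssp.

    Expected star counts per log-spaced initial-mass bin follow the IMF;
    the realised count in each bin is a Poisson draw with that mean.
    """
    edges = np.logspace(np.log10(m_lo), np.log10(m_hi), n_bins + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    dm = np.diff(edges)
    weights = kroupa_xi(mid) * dm
    mean_mass = np.sum(mid * weights) / np.sum(weights)  # mean stellar mass
    n_expected = (m_ssp / mean_mass) * weights / np.sum(weights)
    counts = rng.poisson(n_expected)
    return np.repeat(mid, counts)  # one mass entry per synthetic star

rng = np.random.default_rng(42)
stars = sample_ssp(5.0e3, rng)  # a ~5e3 Msun (Auriga-resolution) particle
```

By construction, the total sampled mass fluctuates around the SSP mass, with the scatter dominated by the Poisson noise of rare massive stars.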
Because interpolating between precomputed isochrones is nontrivial, we identify the isochrone with the closest values of age and metallicity for each star particle. Any particles that lie outside the range of the age/metallicity grid are also matched to the nearest isochrone. For the \textsc{hits-mocks}, we use the same procedure as described above, but use an earlier version of the Padova isochrones \citep{MGBGSG08}, which are currently used in the \textlcsc{SNAPDRAGONS} code. This set of isochrones uses a slightly different range of ages and metallicities for the grid: $6.6 \leq \log(t/\rm{yr}) \leq 10.22$, with a step size $\Delta \log (t/\rm{yr}) = 0.02$, and $0.0001 \leq Z \leq 0.03$. We do not expect that the properties of most stellar populations in our catalogues will be significantly affected by the differences between these two sets of isochrones. \subsection{Dust Extinction} \label{sec:dext} {\it This step is applied only to the \textsc{hits-mocks}.} Dust extinction can be problematic for Galactic optical surveys, such as \emph{Gaia}, mainly because of the poorly understood three-dimensional distribution of dust in the Milky Way. As an approximation, the \textsc{hits-mocks}{} adopt the extinction maps used in \textlcsc{Galaxia} \citep{SBJ11}, which follow the method of \citet{BKF10}: a 3D polar logarithmic grid of dust extinction is derived from the 2D dust maps of \citet{SFD98} under the assumption of a uniform distribution of dust along each line of sight. From these maps, we calculate the extinction in each band and, given the distance modulus of the original star particle, we determine the apparent magnitude in each band.
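Schematically, this final step combines the distance modulus with the per-band extinction; the helper below is an illustrative formula, not code from either pipeline.

```python
import math

def apparent_mag(abs_mag, d_pc, a_band=0.0):
    """Apparent magnitude from absolute magnitude, distance and extinction:
    m = M + 5 log10(d / 10 pc) + A_band."""
    return abs_mag + 5.0 * math.log10(d_pc / 10.0) + a_band

# A Sun-like star (M_V ~ 4.83) at 1 kpc with 0.5 mag of V-band extinction:
# distance modulus 5 log10(100) = 10, so m_V = 4.83 + 10 + 0.5 = 15.33
print(apparent_mag(4.83, 1000.0, 0.5))
```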
We note that the alternative philosophy of modelling dust directly from the gas and dust distribution in the simulations will make the dust map more consistent with large-scale features of the \textlcsc{AURIGA} galaxies (such as spiral arms), although this is far from straightforward \citep[e.g.][]{Trayford2017}. On the other hand, the use of a dust map based on the Milky Way results in one fewer discrepancy between the mock catalogues and observations that use the same dust maps, which may facilitate their inter-comparison. The \textsc{icc-mocks}{} do not include dust extinction, and hence the user is free to adjust magnitudes for extinction themselves, if required. We note also that dust extinction is less important for stellar halo studies, which typically exclude high extinction regions in the Galactic mid-plane. \subsection{Phase space sampling} \label{sec:phase_space_sampling} {\it This step is applied only to the \textsc{icc-mocks}.} Once we have generated a catalogue of stars, the \textsc{icc-mocks}{} method assigns distinct positions in configuration and velocity space to each of them. The intention of this step, which can be thought of as a form of smoothing, is to avoid discrete `clumps' of stars at the coordinates of the parent particles. We follow the implementation of \citet{LWC15}, which is similar to that introduced by the \textlcsc{Galaxia} code \citep{SBJ11}. For every simulation particle we construct a six-dimensional hyper-ellipsoidal `smoothing kernel' that approximates the volume of phase space the particle represents. We distribute the stars associated with particles into these 6D kernels as described below. In this way, we approximately preserve coherent phase space structures in the original simulation, such as tidal streams (e.g. in configuration space, this approach ensures stars are displaced more along such streams than they are perpendicular to them). 
It is important to note that, although the resulting distribution of stars represents a denser sampling of phase space, it is essentially an interpolation (and extrapolation, around the edges of the phase space of the simulation). It does not add any (physical) dynamical information or increase the resolution beyond that of the parent simulation. The phase-space volume associated with each star particle is estimated using the \textsc{enbid}{} code of \citet{Sharma2006}. This code numerically estimates the 6D phase space density around each particle by using an entropy based criterion to partition the set of particles into a binary tree, without the need to specify a metric relating configuration and velocity space. The resulting estimate of the phase-space volume of each leaf node can be noisy due to Poisson sampling, so we further apply an anisotropic smoothing kernel. We use the nearest 64 neighbours to locally determine the principal directions and to locally rescale the phase space. In this rotated and rescaled phase space, we define the phase space volume, $V_{6D}$, of each star particle as $1/40$ of the hypersphere which encloses the nearest 40 neighbours. The actual phase-space sampling kernel is a 6D isotropic Gaussian with zero mean and dispersion, $\sigma^2=\gamma R_{6D}^{2}$, where $\gamma=1/48$ and $R_{6D}$ is the radius of the hypersphere with volume, $V_{6D}$. To avoid extreme outliers in the Gaussian tails of these kernels, we truncate the kernels at $5\sigma$. We draw coordinates randomly from the kernel defined by each parent star particle for each star it generates. Each randomly generated point is then transformed back from this rotated and rescaled phase space into the Cartesian configuration and velocity space of the original simulation. We call these new coordinates the ``true'' coordinates. This definition differs from that in the \textsc{hits-mocks}, in which the ``true'' positions correspond to those of the parent star particle. 
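For illustration, the sampling kernel described above can be sketched as follows. This is not the pipeline code: we draw offsets from a 6D isotropic Gaussian with $\sigma^2 = \gamma R_{6D}^2$ and reject the (rare) draws beyond the $5\sigma$ truncation; whether the truncation is applied in 6D radius, as here, or per coordinate is our assumption.

```python
import numpy as np

def sample_kernel(n_stars, r6d, gamma=1.0 / 48.0, seed=42):
    """Draw n_stars offsets (in the rotated, rescaled phase space) from a
    6D isotropic Gaussian kernel with zero mean and sigma^2 = gamma * R_6D^2,
    truncated at 5 sigma in 6D radius (rejected draws are redrawn)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(gamma) * r6d
    out = np.empty((0, 6))
    while len(out) < n_stars:
        x = rng.normal(0.0, sigma, size=(n_stars, 6))
        x = x[np.linalg.norm(x, axis=1) < 5.0 * sigma]  # truncate at 5 sigma
        out = np.vstack([out, x])
    return out[:n_stars]

offsets = sample_kernel(1000, r6d=2.0)
```

Each offset would then be transformed back into the Cartesian configuration and velocity space of the simulation, as described above.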
See \citet{LWC15} for a more detailed description and several tests of the phase space sampling method. To avoid unnecessary over-smoothing due to `cross-talk' between different phase-space structures, we partition the stellar particles into sets according to their progenitor galaxy, and calculate the scale of the phase space kernels for a given particle using only neighbours from the same set. For this purpose we use the \textlcsc{AURIGA} merger trees built from \textsc{subfind} groups \citep{GGM17}. We trace back each stellar particle to the first snapshot in which it belonged to the same FOF halo as the main progenitor of the Milky Way halo analogue. Particles which did not form `in situ' in the central galaxy are grouped according to their \textsc{subfind} group membership at the snapshot immediately prior to this (i.e. just before their first infall into the main progenitor halo). We assign all particles which did form in the central galaxy to a single group (we discuss a potential limitation of this implementation in Sec.~\ref{sec:limitations}). Again, further details are given in \citet{LWC15}. \begin{figure*} \includegraphics[scale=0.75,trim={0 1.5cm 0 1.cm},clip]{figures/allskymontage5.png} \caption{Montage of three-colour all-sky maps of one of the \textsc{hits-mocks}{} \emph{Gaia} stellar catalogues as viewed from a solar-like position in equatorial coordinates. These maps are constructed by mapping the $R$-, $G$- and $U$-band apparent magnitudes to the red, green and blue colour channels of the composite image. The $x$ and $y$ axes represent right ascension (RA) and declination (DEC), respectively. Each image shows the stellar light distribution for different heliocentric shells, which become progressively larger from the lower-left to the upper-right. The contour maps in the top-left and lower-right show the projected face-on and edge-on stellar mass surface density, respectively, with annotations for the three smallest volumes shown in the all-sky maps.
} \label{fig1} \end{figure*} \subsection{Mock survey selection function} In order to limit the size of our mock catalogues to $\sim10^8$ stars instead of $\gtrsim10^9$ stars, we provide a full sky catalogue only for stars with $V<16$. Most stellar halo stars are fainter than this, so to have a large sample of stars for stellar halo science we supplement this bright star catalogue by including stars with $16 < V < 20$ for Galactic latitudes $|b|>20$ degrees. These selection cuts are applied to both the \textsc{hits-mocks}{} and the \textsc{icc-mocks}. We note that in the \textsc{hits-mocks}, faint stars are randomly sampled at a rate of $20\%$ in order to reduce the output size. However, this does not bias trends in the data, aside from the number of stars available in the magnitude range $16 < V < 20$. \subsection{\emph{Gaia} DR2 errors} In this subsection, we describe how we add \emph{Gaia} DR2 errors to the catalogues; the procedure is the same for both the \textsc{hits-mocks}{} and \textsc{icc-mocks}. We convolve the parameters of the selected stars with \emph{Gaia}-like errors as a function of magnitude and colour in the Johnson-Cousins $V$ and $I_{c}$ bands following \citet{JGC10}, \begin{equation} \begin{aligned} G = V - 0.0257 - 0.0924(V-I_c) - 0.1623(V-I_c)^2 \\+ 0.009(V-I_c)^3. \end{aligned} \end{equation} We use post-launch error estimates, approximated from the pre-launch estimates provided through the Gaia Challenge collaboration \citep{R-G+15}, which include all known instrumental effects such as stray light levels and residual calibration errors.
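The colour transformation above is a direct polynomial evaluation; a minimal sketch:

```python
def johnson_to_gaia_g(v, v_i):
    """Gaia G magnitude from Johnson-Cousins V and V - I_c, using the
    polynomial of Jordi et al. (2010) quoted above."""
    return (v - 0.0257 - 0.0924 * v_i
            - 0.1623 * v_i ** 2 + 0.009 * v_i ** 3)

# Example: a V = 15.0 star with V - I_c = 0.5.
g = johnson_to_gaia_g(15.0, 0.5)  # -> G = 14.88865
```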
A simple performance model that takes into account the wavelength dependence of the point spread function and reproduces the end-of-mission parallax standard error estimates is \begin{equation} \begin{aligned} \sigma _{\pi _{\mathrm{final}}} [\mu \rm as] = (-1.631 + 680.766 z + 32.732 z^2)^{0.5} \\ \times [0.986 + (1 - 0.986) (V-I_c)], \end{aligned} \end{equation} where \begin{equation} z = {\rm max}\left(10^{0.4 (12.09 - 15)}, 10^{0.4 (G - 15)}\right), \end{equation} and $6\le G\le 20$ denotes the range in broad-band, white-light, \emph{Gaia} magnitudes. This relation reflects the magnitude-dependent errors for stars observed by \emph{Gaia}. Stars in the range $6\le G\le 12$ will have shorter integration times in order to avoid CCD saturation, and are assigned a constant $\sigma _{\pi} = 7$ $\mu$as error by the above relation. The basic mission results improve with increasing mission time, $t$, as $t^{-0.5}$ for the positions, parallaxes, photometry and radial velocities, and $t^{-1.5}$ for the proper motions\footnote{\href{http://www.astro.lu.se/gaia2017/slides/Brown.pdf}{\url{http://www.astro.lu.se/gaia2017/slides/Brown.pdf}}}. Given that these errors are end-of-mission estimates, we adopt the following simple scaling to provide the expected parallax standard error for DR2: \begin{equation} \sigma _{\pi} = L \sigma _{\pi _{\rm final}}, \end{equation} where $L=(60/22)^{1/2}$ is the square root of the ratio of the total 5 year (60 month) mission time to the 22 month DR2 mission time. The right ascension, declination and proper motions are all scaled with this factor as well.
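Putting the parallax error model and the DR2 scaling together gives a short numerical sketch:

```python
import numpy as np

L = np.sqrt(60.0 / 22.0)  # DR2 mission-time scaling factor from the text

def sigma_parallax_dr2(g_mag, v_i):
    """Expected DR2 parallax standard error in microarcseconds: the
    end-of-mission model above, scaled by L = (60/22)^(1/2)."""
    # Bright stars (G < 12.09) saturate z at a constant floor.
    z = max(10 ** (0.4 * (12.09 - 15.0)), 10 ** (0.4 * (g_mag - 15.0)))
    sig_final = (np.sqrt(-1.631 + 680.766 * z + 32.732 * z ** 2)
                 * (0.986 + (1.0 - 0.986) * v_i))
    return L * sig_final

sig15 = sigma_parallax_dr2(15.0, 0.0)  # ~43 microarcsec at G = 15
sig10 = sigma_parallax_dr2(10.0, 0.0)  # bright-end floor (~7 muas * L)
```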
The errors in position on the sky ($\alpha$, $\delta$) and proper motions ($\mu _{\alpha}$, $\mu _{\delta}$) scale with the ecliptic-longitude-averaged errors of the sky-varying factors derived from scanning law simulations, the values of which are listed on the \emph{Gaia} performance website\footnote{\href{https://www.cosmos.esa.int/web/gaia/science-performance}{\url{https://www.cosmos.esa.int/web/gaia/science-performance}}}. DR2 will provide radial velocities for only a very small subset of stars near the Sun with spectral type later than $\rm F$. However, the selection function and error function are non-trivial, involving, for example, the number of visits, binarity and temperature. Thus, we provide estimates of the radial velocity error for all generated stars, using the end-of-mission \emph{Gaia} error, which adopts the simple performance model, \begin{equation} \sigma _{v_r} = 1 + b e^{a(V-12.7)}, \end{equation} where $a$ and $b$ are constants that depend on the spectral type of the star. We caution the reader that the radial velocities in our catalogues are therefore both more plentiful and more accurate than the expected DR2 radial velocities.
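The radial-velocity error model is a one-line evaluation once $a$ and $b$ are fixed for a spectral type. A sketch, with purely illustrative coefficients (the actual values, tabulated per spectral type on the \emph{Gaia} performance pages, are not reproduced here):

```python
import numpy as np

def sigma_vrad(v_mag, a, b):
    """End-of-mission radial-velocity error (km/s) from the model above,
    sigma_vr = 1 + b * exp(a * (V - 12.7)). The coefficients a and b
    depend on spectral type; the values passed below are illustrative
    placeholders, not taken from the paper."""
    return 1.0 + b * np.exp(a * (v_mag - 12.7))

sig = sigma_vrad(15.0, a=0.9, b=0.1)  # illustrative coefficients only
```

As expected from the exponential form, the error grows rapidly for stars fainter than $V \approx 12.7$.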
In addition to astrometric errors, we calculate the red and blue broadband \emph{Gaia} magnitudes, $G_{\rm RP}$ and $G_{\rm BP}$, and errors for all \emph{Gaia} photometric bands, according to the single-field-of-view-transit standard error on the \emph{Gaia} science performance website, modified to include the DR2 mission time scaling and $20\%$ calibration errors: \begin{align} \sigma _G = & \,\,5\frac{1.2\times10^{-3} L}{\sqrt{70}} \nonumber \\ & \left(0.04895 z^2 + 1.8633 z + 0.0001985\right)^{1/2}, \label{Gsig} \end{align} and \begin{align} \sigma _{G_{\rm RP/BP}} = & \,\,5\frac{1\times 10^{-3} L}{\sqrt{70}} \nonumber \\ & \left(10^{a_{\rm BP/RP}} z^2 + 10^{b_{\rm BP/RP}} z + 10^{c_{\rm BP/RP}}\right)^{1/2}, \label{GRPsig} \end{align} where $a_{\rm BP/RP}$, $b_{\rm BP/RP}$ and $c_{\rm BP/RP}$ are listed on the \emph{Gaia} science performance website. We note that the factor of 5 in the pre-factor of equations~(\ref{Gsig}) and (\ref{GRPsig}) is required to scale the photometric errors to match the $\sim$ millimag accuracy at the bright end ($G < 13$ mag) and the 20 millimag and 200 millimag accuracy at the faint end for $G$ and $G_{\rm RP/BP}$, respectively, that are quoted on the \emph{Gaia} DR2 website. We provide error estimates for atmospheric parameters based on the results of \citet{LBJ12}, who inferred the expected performance of stellar parametrisation from various fitting methods applied to synthetic spectra. Specifically, a second order polynomial in $G$ has been fitted to the mean averaged residual of effective temperature and surface gravity inferred from the Bayesian method Aeneas \citep{BJ11}. For both the \textsc{hits-mocks}{} and the \textsc{icc-mocks}, we randomly sample these standard errors for each generated mock star (that satisfies our magnitude cut) to displace the measured parallax, proper motions and radial velocity of each synthetic star from that of its parent particle. 
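As a numerical sketch of the last two steps (the $G$-band photometric error and the random displacement of true values), the following assumes $z$ takes the same form as in the astrometric error model; the bright-end limit used on the performance pages for photometry may differ slightly:

```python
import numpy as np

L = np.sqrt(60.0 / 22.0)          # DR2 mission-time scaling from the text
rng = np.random.default_rng(11)

def sigma_g(g_mag):
    """Single-band G photometric standard error (mag) following the
    sigma_G equation above, including the factor of 5 and the DR2
    mission-time scaling L. The definition of z here is assumed to
    match that of the astrometric model."""
    z = max(10 ** (0.4 * (12.09 - 15.0)), 10 ** (0.4 * (g_mag - 15.0)))
    return (5.0 * 1.2e-3 * L / np.sqrt(70.0)
            * np.sqrt(0.04895 * z ** 2 + 1.8633 * z + 0.0001985))

# 'Observed' value = true value displaced by a Gaussian draw with the
# corresponding standard error, as described for all error-convolved
# quantities in the text.
g_true = 15.0
g_obs = g_true + rng.normal(0.0, sigma_g(g_true))
```

In the catalogues, this displacement is applied in the same way to the parallaxes, proper motions, radial velocities and magnitudes.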
This ensures that, for the reasons discussed in Sec.~\ref{sec:phase_space_sampling}, the position and velocity coordinates of each star are distinct from those of their parent star particle in the case of the \textsc{hits-mocks}. The standard errors for the \emph{Gaia} photometric bands (equations~\ref{Gsig} and \ref{GRPsig}) and effective temperatures are randomly sampled and added to the true values to produce observed values for these quantities. \begin{figure} \centering \includegraphics[scale=0.4,trim={0 0 0 1.cm},clip]{figures/gmagdistributionfunction030_kroupaIMF} \includegraphics[scale=0.4,trim={0 0 0 1.cm},clip]{figures/gmagdistributionfunction030_chabrierIMF} \caption{The distribution of stars as a function of $G$-magnitude in the \textsc{hits-mocks}{} (top panel) and the \textsc{icc-mocks}{} (bottom panel) for the default solar position of each simulation. The step at $V\sim 16$ reflects our choice to select stars with $16 < V < 20$ at latitudes $|b|>20$ degrees, whereas the stars brighter than $V=16$ are sampled with full sky coverage. The bin size is $0.1$ magnitudes.} \label{gdist} \end{figure} \subsection{Access to Mock Catalogues} The \textsc{hits-mocks}{} and \textsc{icc-mocks}{} presented in this paper will be made available to the community upon submission of this article. They will be available to download from the \textlcsc{AURIGA} website\footnote{\href{http://auriga.h-its.org}{\url{http://auriga.h-its.org}}} as well as the Virgo Millennium database in Durham\footnote{See \href{http://icc.dur.ac.uk/data}{\url{http://icc.dur.ac.uk/data}}}, which also allows subsets of data to be retrieved using SQL queries. In addition, snapshot particle data and gravitational potential grids will be made available at these locations. A description of the data fields and their units is given in Table~\ref{table2}.
\section{The Mock Catalogues and Example Applications} \label{results} Fig.~\ref{fig1} shows all-sky maps of the observed mock stellar distributions in heliocentric equatorial coordinates (right ascension and declination) in several shells in heliocentric distance for one of the \textsc{hits-mocks}. These maps are constructed by mapping the $R$-, $G$- and $U$-band apparent magnitudes of stars to the red, green and blue colour channels of the composite image. Immediately obvious is the dust obscuration in the disc mid-plane, which is evident in all volumes, but more pronounced for volumes that extend farther from the Sun. For the two smallest volumes, the main observable feature is the stellar disc, which is noticeably more yellow (particularly in the direction of the Galactic centre) in the volume $3<d<5$~kpc compared to the smallest volume ($d < 3$~kpc). For the second largest volume of $5<d<20$~kpc, the stellar light is dominated by the brightest and bluest stars away from the disc plane and in the outer disc. These stars are more numerous in this volume because of the larger distances probed. In the direction of the Galactic centre, the yellow light of the older stars inhabiting the stellar bulge contributes significantly to the all-sky map. The largest volume shows stars in the first three volumes plus stars out to a heliocentric distance of 50~kpc. For clarity, Fig.~\ref{fig1} also shows the surface mass density of the mock stellar distribution in Cartesian coordinates (face-on: top-left panel; edge-on: bottom-right panel), together with annotations of the Galactic centre and the three smallest volumes shown in the all-sky maps. We note that the observed distribution of stars is more extended than the true distribution because the \emph{Gaia} DR2 errors can become large at large distances, which for the \textsc{hits-mocks}{} translates to large displacements of stars in phase space and thus to an inevitable increase in the observed phase-space domain.
\begin{figure} \includegraphics[scale=0.48,trim={0 0 1.cm 1.cm},clip]{figures/CMD_mockimagehalohalo_24_MW_030_kroupaIMF_all-min} \includegraphics[scale=0.48,trim={1.25cm 0 1.cm 1.cm},clip]{figures/CMD_mockimagehalohalo_24_MW_030_chabrierIMF_mock_noex_-min} \caption{A colour-magnitude diagram (CMD) for all synthetic stars in one of the \textsc{hits-mocks}{} (left panel) and \textsc{icc-mocks}{} (right panel). This is constructed by sampling the stellar particles taking into account the mass, age and metallicity of each particle according to the Kroupa and Chabrier IMF, respectively. } \label{fig2} \end{figure} Fig.~\ref{gdist} shows the apparent $G$-magnitude distribution of stars in each of the \textsc{hits-mocks}{} (top panel) and \textsc{icc-mocks}{} (bottom panel) generated from the default solar position (30 degrees behind the bar major axis). We reiterate that catalogues cover the full sky for stars with magnitudes $V<16$, whereas fainter stars with $16<V<20$ are only provided at latitudes $|b|> 20$~degrees. The lower number of stars fainter than $V=16$ reflects the $20\%$ sampling rate of these stars in the \textsc{hits-mocks}. We note that these distributions do not vary significantly between the mock catalogues. Fig.~\ref{fig2} shows the colour-magnitude diagram (CMD) for the same mock catalogue shown in Fig.~\ref{fig1}. This distribution is the result of sampling the full range of Padova isochrones as described in section~\ref{sec:popsynth} for all observable star particles in one of the \textsc{hits-mocks}{} (left panel) and one of the \textsc{icc-mocks}{} (right panel). Fig.~\ref{fig2} shows that these catalogues include the full spectral range of stars and prominent evolutionary stages such as the main sequence turn-off and the red giant branch. In the remainder of this section, we present applications of the mock data to the stellar disc and halo. We restrict ourselves to two applications, the flaring (young) stellar disc and the stellar halo spin. 
\subsection{Flaring disc(s)} \label{results:disc} In recent years, both simulations and observations have increasingly focussed on the chemical and age structure of the stellar disc \citep[e.g.][]{ScB09,BRH12,RCK13,MCM14,HBH15,MBS17,SM17}. An interesting result of these analyses is that the outer disc of the Milky Way is composed of sub-populations in age (and metallicity), each of which flares\footnote{The term \emph{flare} refers to an increase in scale height with increasing radius.}. This sort of flaring distribution is often seen in numerical simulations that include orbiting satellites and mergers \citep[e.g.][]{QHF93,MCM14b,MMF14} that act to dynamically heat the outer disc more than the inner disc. However, an alternative, internal mechanism that may give rise to disc flaring is the radial migration of stars from the inner disc to the outer disc: \citet{BRS15} has shown that the degree of flaring found in the APOGEE Red Clump data is consistent with theoretical predictions of the radial migration of stars under conservation of vertical action arguments \citep{Min12,SoS12,Rok12}. This finding suggests a secular dynamical origin for the flared distributions; however, the origin remains to be conclusively determined and is still debated.
\begin{figure} \centering \includegraphics[scale=0.38,trim={0 0 0 0},clip] {figures/xyz_mockimagehalo_24_MW_030_B3V_mock_.png} \includegraphics[scale=0.38,trim={1.cm 0 0 0},clip]{figures/xyz_mockimagehalo_24_MW_030_A0V_mock_.png}\\ \includegraphics[scale=0.38,trim={0 0 0 0},clip] {figures/xyz_mockimagehalo_24_MW_030_B3V_mock_noex_.png} \includegraphics[scale=0.38,trim={1.cm 0 0 0},clip]{figures/xyz_mockimagehalo_24_MW_030_A0V_mock_noex_.png}\\ \caption{The face-on distribution of $\rm B3V$ stars (left panels) and $\rm A0V$ stars (right panels), selected to be within the longitude range $126\degr < l < 234\degr$, in the fiducial \textlcsc{HITS-MOCK} (top panels) and \textlcsc{ICC-MOCK} (bottom panels) of Au 24. The Galactic centre is located at $(X,Y)=(-8,0.02)$ $\rm kpc$. Note that the brighter $\rm B3V$ stars are spread over a larger portion of the disc than the $\rm A0V$ stars, the latter of which are more affected by mid-plane extinction in the \textlcsc{HITS-MOCK} (evident in the top-right panel).} \label{fig3} \end{figure} \begin{figure*} \centering \includegraphics[scale=1.,trim={0 1.7cm 0 1.5cm},clip] {figures/hz_legend_wide_top}\\ \includegraphics[scale=1.2,trim={0 0.5cm 0 0},clip]{figures/hz_radialprofhalo_6_MW_030_kroupaIMF} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/hz_radialprofhalo_16_MW_030_kroupaIMF} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/hz_radialprofhalo_23_MW_030_kroupaIMF} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/hz_radialprofhalo_24_MW_030_kroupaIMF} \vskip -.3cm \caption{The root mean square vertical height as a function of radius for $\rm B3V$ dwarf stars (\emph{top panels}) and $\rm A0V$ dwarf stars (\emph{bottom panels}) from mock catalogues generated from four different simulations. The stars are selected in the outer disc ($126\degr < l < 234\degr$) and around narrow $M_V$ and $V-I_c$ ranges according to the values listed in \citet{PM13}.
An additional cut on relative parallax error $0 < \sigma _{\pi}/\pi < 0.5$ is made. This typically results in several tens of thousands of stars that cover a large portion of the Galactic disc (see Fig.~\ref{fig3}). In each case, we show the root mean square of: the raw simulation data for star particles of the corresponding age (black squares); the true positions of the \textsc{hits-mocks}{} and \textsc{icc-mocks}{} synthetic stars before error displacement (red triangles and dark blue right-pointing triangles); the observed positions of the \textsc{hits-mocks}{} and \textsc{icc-mocks}{} synthetic stars after error displacement (red circles and blue diamonds). \newline{} } \label{fig4} \end{figure*} \begin{figure} \centering \includegraphics[scale=1.3,trim={0.75cm 0.1cm 0 0.2cm},clip]{figures/errorhalo_24_MW_030} \vskip -.2cm \caption{The observed heliocentric distance (after displacement from the parent star particle) as a function of real heliocentric distance (before displacement, i.e., the parent star particle distance) for $\rm B3V$ and $\rm A0V$ stars in one of the \textsc{hits-mocks}. 
The one-to-one relation is shown by the dashed red line.} \label{figerr} \end{figure} \begin{figure*} \includegraphics[scale=1.,trim={0 1.7cm 0 1.5cm},clip] {figures/hz_legend_wide_top}\\ \includegraphics[scale=1.2,trim={0 0.5cm 0 0},clip]{figures/sigmz_radialprofhalo_6_MW_030_kroupaIMF_mock_} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/sigmz_radialprofhalo_16_MW_030_kroupaIMF_mock_} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/sigmz_radialprofhalo_23_MW_030_kroupaIMF_mock_} \includegraphics[scale=1.2,trim={0.9cm 0.5cm 0 0},clip]{figures/sigmz_radialprofhalo_24_MW_030_kroupaIMF_mock_} \vskip -.3cm \caption{As Fig.~\ref{fig4}, but for the vertical velocity dispersion.} \label{fig5} \end{figure*} Although much attention has been paid to dynamical origins, there is growing evidence that the flared distributions may be formed \emph{in situ} from flaring star-forming regions. \citet{GSG16} showed that a significant amount of the vertical velocity dispersion is set at birth from star-forming gas that becomes progressively thinner with time and that, at a given look back time, the radial profile of the vertical velocity dispersion of young stars ($< 1$ Gyr old) is flat, corresponding to a flaring scale height. \citet{NYL17} showed from the Apostle simulations \citep{SFF15,FNS16} that stars are born in flared distributions. Moreover, these distributions do not change significantly thereafter; they are not strongly affected by subsequent dynamical processes. The idea that the star-forming gas disc intrinsically flares is also supported by the simple analytical arguments put forward by \citet{BLNF18}, who demonstrated that polytropic, centrifugally supported gas discs with flat rotation curves embedded in CDM haloes naturally flare.
Moreover, the recent controlled numerical study of \citet{KGG17} suggests that flaring star-forming regions are required in order to preserve a negative vertical metallicity gradient that would otherwise become positive owing to the outward radial migration of metal-rich stars. Flaring star-forming regions have therefore become a new and attractive way to help explain the flaring stellar disc. A strong signature of an \emph{in situ} flaring disc is a flaring distribution of very young stars ($\lesssim 300$ Myr), because radial migration requires several dynamical times to become effective. We therefore select young A and B dwarf stars from the mock stellar catalogues according to the absolute $V$-band magnitude, $V-I_c$ colour and tentative ages given by \citet{PM13}, that is: ($M_V$, $V-I_c$) $\sim$ ($-1.1$, $-0.192$) for $\rm B3V$ stars; and ($M_V$, $V-I_c$) $\sim$ ($1.11$, $0.004$) for $\rm A0V$ stars. These stars are typically $\sim 0.1$ Gyr and $\sim 0.3$ Gyr old, respectively. We select stars in the outer disc region ($126\degr < l < 234\degr$) in order to minimise heavy mid-plane extinction. The distribution of $\rm B3V$ and $\rm A0V$ stars is shown in Fig.~\ref{fig3}, and demonstrates that these stars cover a significant portion of the outer disc, particularly in the absence of extinction. The ``fingers of God'' feature in the distributions shown in the top panels of Fig.~\ref{fig3} is caused by fluctuations in dust attenuation along different lines of sight. In Fig.~\ref{fig4}, we examine the observed root mean square height (or scale height hereafter) as a function of observed Galactocentric radius for our samples of $\rm B3V$ and $\rm A0V$ stars, selected from mock catalogues generated for four different simulations. In each case, we assume our default solar position of 30 degrees behind the major axis of the bar.
In addition, we compare the scale height profiles of the B and A stars selected from each mock catalogue with those of raw simulation star particles of equivalent age. For both the \textsc{hits-mocks}{} and the \textsc{icc-mocks}, we show the profile given by the ``true'' positions of the synthetic stars (before stars are displaced in phase space by errors), and the profile given by the ``observed'' positions (after the stars have been displaced). Because both the true and observed positions in the \textsc{hits-mocks}{} include extinction, the comparison of the true and observed profiles with the raw simulation data indicates the effects of extinction and errors separately, in addition to their overall effect. The \textsc{icc-mocks}{} do not include extinction, but do reflect the magnitude cut selection effects in addition to conserving the phase-space structure of the parent particles, which provides an additional perspective. The raw simulation data and the true and observed mock data all exhibit flared vertical scale height profiles, and are in good agreement across the radial range 8--16~kpc for the $\rm B3V$ stars in all simulations. The true profile is in marginally better agreement than the observed profile, particularly at heliocentric distances greater than $\sim 5$ kpc, which indicates that errors are more important than extinction for $\rm B$-type dwarfs at these distances. This is also confirmed by the distance error distributions shown in Fig.~\ref{figerr}. The agreement is worse for the $\rm A0V$ stars compared to $\rm B3V$ stars at heliocentric distances larger than $\sim 4$ kpc. Extinction (visible in the top-right panel of Fig.~\ref{fig3}) seems to be mainly responsible for the deviations away from the raw simulation data in these cases. This is reinforced by the \textsc{icc-mocks}{} profiles, which do not model extinction and generally reproduce the raw simulation data well even at Galactocentric radii $\gtrsim 13$ kpc for both types of stars.
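For concreteness, the scale height profile plotted in Fig.~\ref{fig4} amounts to binning stars in Galactocentric radius and taking the root mean square of their vertical heights in each bin. A minimal sketch, applied here to a toy flaring disc (all numbers are illustrative, not catalogue data):

```python
import numpy as np

def rms_height_profile(R, z, r_edges):
    """Root-mean-square vertical height in Galactocentric radial bins."""
    idx = np.digitize(R, r_edges) - 1
    prof = np.full(len(r_edges) - 1, np.nan)
    for i in range(len(r_edges) - 1):
        sel = idx == i
        if sel.any():
            prof[i] = np.sqrt(np.mean(z[sel] ** 2))
    return prof

# Toy flaring disc: vertical dispersion grows linearly with radius.
rng = np.random.default_rng(0)
R = rng.uniform(8.0, 16.0, 20000)       # kpc
z = rng.normal(0.0, 0.05 * R)           # kpc, sigma_z = 0.05 R
prof = rms_height_profile(R, z, np.linspace(8.0, 16.0, 5))
```

By construction, the recovered profile rises monotonically with radius, i.e.\ the input flare is recovered.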
In Fig.~\ref{fig5}, we examine the radial profiles of the vertical velocity dispersion for the same stars as in Fig.~\ref{fig4}. As expected from their flaring spatial distributions, the vertical velocity dispersion is nearly constant with radius in all cases, and is, in general, well-reproduced by the \textsc{hits-mocks}. Again, this is particularly true for $\rm B3V$ stars, which show minimal deviations, similar to those of their corresponding vertical scale height profiles. For $\rm A0V$ stars, the profiles are well-reproduced up to heliocentric distances of $\sim 5$ kpc, beyond which they begin to deviate noticeably. Apart from the increasing uncertainties in parallax and proper motion at these distances, an additional inaccuracy that contributes to the observed deviations is the lack of a radial velocity component in \emph{Gaia} DR2 for these stars, although it is likely a minimal contribution for this application because radial velocities are almost perpendicular to the vertical velocity field at these low latitudes. The \textsc{icc-mocks}{} are able to reproduce the vertical velocity dispersion for both $\rm B3V$ and $\rm A0V$ stars very well, and provide a more accurate representation of the dispersion at larger radii where extinction begins to affect the \textsc{hits-mocks}{} measurement of the $\rm A0V$ stars. The results presented in Figs.~\ref{fig4} and \ref{fig5} demonstrate that, for \emph{Gaia} DR2, B- and A-type dwarf stars are reliable tracers for the very young stellar disc and, by extension, the distribution of star-forming regions; the intrinsic flaring of the star-forming gas disc is captured by these dwarf stars in both position and velocity space. It is worth noting that the reliability of these tracers will improve with subsequent data releases: the ability to trace the young disc will extend to the outer reaches of the disc and the warp beyond.
\begin{figure} \centering \includegraphics[scale=0.6,trim={0 0 0 0},clip]{figures/HaloSpin_1} \caption{Estimated mean $V_{\phi}$, based on the method of \citet{deason17} from 5D data of a random sample of $70,000$ HB halo stars in \textsc{hits-mocks}{} (red) and \textsc{icc-mocks}{} (blue), versus the true mean $V_{\phi}$ calculated from the 6D phase-space information of the same samples. The different symbol types represent the six \textlcsc{AURIGA} simulations for which the mock catalogues are created as indicated in the legend.} \label{spin1} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6,trim={0 0 0 0},clip]{figures/HaloSpin_2} \caption{Estimated $V_{\phi}$ from 5D data of HB halo stars at different Galactocentric radial bins for \textlcsc{AURIGA} halo 6 (top panel) and 16 (bottom panel), shown as circles, compared to the true $V_{\phi}$ of the same samples, illustrated as dashed lines. Red and blue colours represent \textsc{hits-mocks}{} and \textsc{icc-mocks}{}, respectively. The horizontal error bars illustrate the size of the radial bins, while the vertical error bars show the error in the mean values.} \label{spin2} \end{figure} \subsection{Stellar halo rotation} \label{results:halo} The spin of the Milky Way stellar halo is directly related to its merger history. To first order, the stellar halo rotation represents the net angular momentum of all of the Galaxy's past accretion events. Moreover, the presence of \textit{in situ} halo stars, which are formed in the Galactic disc and later ``kicked out'' into the halo due to merger events, can lead to disc-like kinematics in the stellar halo (i.e. net prograde rotation in the same sense as the disc, see e.g. \citealt{mccarthy12,cooper15a,pillepich15}). Thus, by measuring the net spin of the stellar halo we are probing the global accretion history of the Galaxy.
In addition, we can gain further insight by measuring the halo rotation as a function of metallicity, Galactocentric radius, and position on the sky (see e.g. \citealt{carollo07, carollo10, deason11, kafle13, hattori13}). Previous works attempting to measure the net spin of the halo have aimed to tease out the rotation signal using line-of-sight velocities from large spectroscopic samples of halo tracers (e.g. \citealt{sirko04, deason11}). This limitation to a single velocity component is particularly troublesome for measuring rotation; at large distances the line-of-sight velocity is essentially the radial velocity component, and there is little, or no, constraint on the tangential velocity of halo stars. Prior to the \textit{Gaia} era, reliable proper motion measurements of distant halo stars were scarce, with ground-based samples subject to large systematic uncertainties (e.g. \citealt{gould04}), and space-based samples limited to very small areas of the sky (\citealt{deason13, cunningham16}). Now, in the era of \textit{Gaia} DR2, we have access to \textit{all sky} proper motion measurements, with well-defined systematic and statistical error distributions. A prelude to the astrometric breakthrough from DR2 was presented in \cite{deason17}, who used a proper motion catalogue constructed from SDSS images and \textit{Gaia} DR1 to measure the net spin of the halo. The main drawback of the SDSS-\textit{Gaia} proper motion catalogue is its restriction to the SDSS sky coverage, and the limited number of known halo tracers that could be used. In this Section, we use the mock catalogues to illustrate how \textit{Gaia} DR2 astrometry can be used to measure the net spin of the stellar halo out to 100 kpc. The \textit{Gaia} spacecraft is expected to observe $N \sim 70,000$ Galactic halo RR Lyrae stars out to $\sim 100$ kpc (\citealt{clem16}).
These old, metal-poor stars are approximate standard candles, and their distances can typically be measured with accuracies of less than 5 percent (see e.g. \citealt{iorio18}). Here, we randomly sample $N \sim 70,000$ ``old'' (age $> 9$ Gyr) horizontal branch (HB) stars in the \textlcsc{AURIGA} haloes with $0<B-V<0.7$ and $0.2<M_V<1.2$. This selection was chosen to approximately mimic the all-sky RRL catalogues that will be released with \textit{Gaia} DR2. To select halo stars, we include stars between 5 and 100~kpc from the Galactic centre, at Galactic latitudes $|b| > 20$~deg, and at heights $|z|>4$~kpc above/below the disc plane. We do not include distance uncertainties in the analysis (but note that including $\sim 5\%$ distance errors makes little difference to our results), and assume that, while proper motion measurements are available from \textit{Gaia} DR2, there are no line-of-sight velocities. In order to measure the halo rotation, we employ the same method introduced by \cite{deason17} to measure rotation with 5D data. In brief, we adopt a 3D velocity ellipsoid aligned in spherical coordinates, which assumes Gaussian velocity distributions and allows for net streaming in the $v_\phi$ component. A likelihood analysis is used to determine the best-fit $\langle v_\phi \rangle$ value. For more details on this method we refer the reader to \cite{deason17}. Fig.~\ref{spin1} shows the resulting mean rotation of stars in the radial range $r = 5$--$50$~kpc for six \textlcsc{AURIGA} haloes in \textsc{hits-mocks}{} and \textsc{icc-mocks}. The estimated $\langle v_\phi \rangle$ using the method of Deason et al., $ v_{\phi, \rm est}$, is in very good agreement with the true value for the same sample of stars ($ v_{\phi, \rm true}$). The errors on the mean values are smaller than the size of the symbols and therefore are omitted.
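To give a flavour of the likelihood step, the sketch below uses a drastically simplified 1D stand-in: if $v_\phi$ were measured directly with known per-star Gaussian errors on top of an intrinsic dispersion, the maximum-likelihood $\langle v_\phi \rangle$ reduces to an inverse-variance weighted mean. The full 5D method of \cite{deason17} instead marginalises over the missing line-of-sight component; all numbers below are purely illustrative.

```python
import numpy as np

def mean_vphi_ml(vphi_obs, sigma_obs, sigma_int):
    """Maximum-likelihood mean rotation for Gaussian measurements with
    per-star errors sigma_obs on top of an intrinsic dispersion
    sigma_int: the inverse-variance weighted mean. A simplified, 1D
    stand-in for the 5D likelihood of Deason et al. (2017)."""
    w = 1.0 / (sigma_obs ** 2 + sigma_int ** 2)
    return np.sum(w * vphi_obs) / np.sum(w)

# Toy halo sample: 70,000 stars, true mean rotation 20 km/s, intrinsic
# dispersion 80 km/s, heteroscedastic measurement errors of 10-60 km/s.
rng = np.random.default_rng(3)
true_mean, sig_int = 20.0, 80.0
sig_obs = rng.uniform(10.0, 60.0, 70000)
vphi = rng.normal(true_mean, np.sqrt(sig_int ** 2 + sig_obs ** 2))
est = mean_vphi_ml(vphi, sig_obs, sig_int)
```

With a sample of this size the weighted mean recovers the input rotation to well under 1 km/s, consistent with the small statistical errors quoted above.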
The $v_{\phi, \rm true}$ values differ between the two mocks because different isochrones and IMFs are used, and thus our criteria for selecting old HB stars yield different subsets of stars. This point is important and illustrates that different subsets of old stars can have different rotation signals. We plan to investigate this further in a follow-up paper. In Fig.~\ref{spin2} we show the estimated and true $v_{\phi}$ of our sample of old halo stars at different radii for two examples, Auriga 6 (top panel) and Auriga 16 (bottom panel), which have the smallest and largest overall rotation, respectively (see Fig.~\ref{spin1}). The method of \cite{deason17} works very well at all radii to recover the actual spin of our samples of stars. It is remarkable that even at distances as large as 100 kpc, where \emph{Gaia}\ proper motion errors are large and the number of stars is relatively small, one can recover the spin of the halo stars. \section{Discussion and Conclusions} \label{sec4} We have presented several mock Milky Way stellar catalogues designed to match the selection criteria, volume and observable properties (including uncertainties) of stars with $V<16$ mag and $V<20$ mag at $|b|>20$ degrees that will be provided by the \emph{Gaia} data release 2. We employed two methods to calculate two sets of mock catalogues at four solar-like positions (equidistant in Galactic azimuth) from several high-resolution cosmological-zoom simulations: the \textsc{hits-mocks}{} \citep[generated with a parallelised version of \textlcsc{SNAPDRAGONS},][]{HKG15} which includes dust extinction; and the \textsc{icc-mocks}{} using the \citet{LWC15} method, which distributes stars in phase space by conserving the phase-space volume associated with each simulation stellar particle. 
Both sets of mock catalogues provide \emph{Gaia} DR2 data products: six-dimensional phase space information, magnitudes in the \emph{Gaia} $G$-, $G_{RP}$- and $G_{BP}$-photometric bands, effective temperature and dust extinction values, and include uncertainty estimates for the \emph{Gaia} DR2 astrometric, photometric and spectroscopic quantities. In addition, the catalogues provide the age, metallicity, mass, stellar surface gravity, gravitational potential and photometry for non-\emph{Gaia} bands for each of the generated stars. The catalogues are available online at both the \textlcsc{AURIGA} website and at the Durham database centre, the latter of which provides a query-based system to retrieve subsets of data. Gravitational potential grids and raw snapshot data for a subset of simulations are available for download at the \textlcsc{AURIGA} website. \subsection{Limitations} \label{sec:limitations} While the mock catalogues presented in this paper have great potential for helping to understand the formation of structure in our Milky Way in tandem with \emph{Gaia} data, there of course exist limitations to each of the methods used to generate the catalogues. \paragraph*{Limitations to both methods:} Neither method guarantees that the positions and velocities of mock stars are consistent with bound orbits in the simulation potential. Caution, and careful sample selection that filters out stars with large errors, should be exercised in any applications that require precise correspondence between the motions of stars and their local gravitational potential, or that are sensitive to a small number of stars with very high velocities. \paragraph*{\textsc{icc-mocks}{} limitations:} \citet{LWC15} describe how the parameters entering the phase-space sampling step in the construction of the \textsc{icc-mocks}{} were tuned to the values given in section~\ref{sec:phase_space_sampling}. 
This tuning sought to balance a sufficiently significant degree of expansion of stars away from their parent simulation particles against the preservation of coherent phase space structures, such as tidal streams, and the suppression of bias in the bulk kinematics of the stellar halo. \citet{LWC15} studied collisionless $N$-body simulations, so the same approach and parameters are not guaranteed to be optimal for the massive, coherent baryonic discs in hydrodynamical simulations like Auriga. In particular, when we compute scale lengths for a star particle formed in situ in the main galaxy, we treat \textit{all} the other in situ stars as its potential phase space neighbours. This may be a substantial approximation, because the set of all in situ particles comprises many different stellar populations that originate in different regions of phase space at different times. Treating all these as potential neighbours of one another can lead to `cross-talk' between distinct dynamical structures, a form of over-smoothing (which is mitigated in the case of accreted halo stars by only considering particles from the same progenitor satellite as potential neighbours). For example, the scale height and vertical velocity dispersion of young, kinematically cold stars in the disc may (in principle) be inflated if neighbours from a kinematically hotter bulk population dominate the kernels associated with their parent particles. However, in practice, we see no evidence of any significant bias in the analyses of young disc stars we present here. The possibility of artefacts arising from the phase space sampling procedure should be kept in mind nevertheless, especially in applications that probe phase space structure on very small scales. \paragraph*{\textsc{hits-mocks}{} limitations:} The \textsc{hits-mocks}{} do not include a phase space sampling step, i.e. 
the generated stars are not interpolated in phase space before adding \emph{Gaia} DR2 errors to the particle phase space coordinates. This may create artefacts for structures that are ``long'' and ``thin'', such as the great circle stream, that arise from the displacement of stars along the line-of-sight with very similar celestial coordinates. Furthermore, the observed positions generated by displacing the coordinates of the parent star particle can be spread over large ranges for particles beyond $\sim 10$ kpc heliocentric distances, where the errors become large. This means that directly using parallax distances for some halo stars can become unreliable, and more sophisticated approaches, such as the one used in this paper, are required.\newline We conclude that the \textsc{icc-mocks}{} are better suited than the \textsc{hits-mocks}{} for studying streams, other inhomogeneities and debris in the stellar halo. On the other hand, the \textsc{hits-mocks}{} include a model for dust extinction that allows the user to make quick assessments of how dust affects {\it Gaia} observables, which is particularly important for the stellar disc. Conversely, the \textsc{icc-mocks}{} provide the user freedom to add any dust model to the data. The two sets of catalogues are therefore complementary and provide a wide scope for assessing the biases and capabilities of the {\it Gaia} DR2. We note that the codes used to generate these mock catalogues may be improved in the future, in which case the mock catalogues on our public database will be updated accordingly. We urge users to refer back to the database whenever a new application is considered. \subsection{Applications} As a first science application of the mocks, we analysed the vertical structure of the young stellar disc and found that all simulations showed a flaring vertical scale height profile with a consistently flat vertical velocity dispersion profile. 
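The flaring measurement just described can be illustrated with a toy calculation: bin tracer stars in radius and estimate, per bin, the vertical scale height (for an assumed exponential vertical profile, the maximum-likelihood scale height is simply the mean of $|z|$) and the vertical velocity dispersion. The sketch below uses synthetic data with assumed, illustrative numbers; it is not the procedure applied to the Auriga mocks.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy disc sample: scale height flares outward, sigma_z stays flat
# (all numbers here are illustrative assumptions, not the Auriga values)
N = 50_000
R = rng.uniform(4.0, 16.0, N)                  # galactocentric radius [kpc]
h_true = 0.3 * np.exp((R - 4.0) / 8.0)         # flaring scale height [kpc]
z = rng.laplace(0.0, h_true)                   # exponential vertical profile
vz = rng.normal(0.0, 12.0, N)                  # flat vertical dispersion [km/s]

# measure both profiles in radial bins
edges = np.arange(4.0, 16.1, 2.0)
h_prof, s_prof = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (R >= lo) & (R < hi)
    h_prof.append(np.mean(np.abs(z[m])))       # MLE scale height: <|z|>
    s_prof.append(np.std(vz[m]))               # vertical velocity dispersion
    print(f"R = {lo:4.1f}-{hi:4.1f} kpc:  h = {h_prof[-1]:.2f} kpc,"
          f"  sigma_z = {s_prof[-1]:.1f} km/s")
```

Run on such a sample, the scale height grows steadily with radius while the measured $\sigma_z$ stays constant within the noise, which is the qualitative signature discussed above.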
We verified that $\rm B3V$ and $\rm A0V$ stars in the outer disc selected from the mock catalogues reproduce these trends; young B and A dwarf star data in DR2 should be reliable tracers of the young stellar disc. If in the \emph{Gaia} DR2 data these tracers exhibit flaring profiles, this will constitute evidence for flaring star-forming regions, and perhaps indicate that radial migration and dynamical heating from satellite perturbations are not the principal drivers of the flaring mono-abundance populations found in other Galactic surveys \citep{BRS15,MBS17}. We also applied the method of \citet{deason17} to samples of old horizontal branch halo stars in the mock catalogues to estimate the mean rotation of \textlcsc{AURIGA} stellar haloes based on 5D phase-space information. We find excellent agreement between the estimated mean rotation velocity and the true values, even at galactocentric distances as large as $100$~kpc. The results show that accurate distance measurements, combined with proper motions from \emph{Gaia}, can reliably recover the mean rotation of halo stars. The prospects for obtaining an accurate estimate of the spin of the distant MW stellar halo using the tens of thousands of RR Lyrae stars that \emph{Gaia}\ will provide are therefore extremely promising. The mock catalogues presented in this paper are the first such catalogues generated from \emph{ab initio} high-resolution $\Lambda$CDM galaxy formation simulations; they offer a novel perspective of the Milky Way and may be used for a variety of applications. In particular, they provide a testbed for the design and evaluation of Galaxy modelling methods in a realistic cosmological setting, and a means to gauge the limitations and biases of \emph{Gaia} DR2 and to link observations to the theoretical predictions encapsulated in the simulations, enabling robust inferences to be made about the multitude of galaxy formation processes that shaped the Milky Way. 
\section*{Acknowledgements} RG would like to thank Daisuke Kawata for many useful discussions. RG and VS acknowledge support by the DFG Research Centre SFB-881 `The Milky Way System' through project A1. JASH is supported by a Dunlap Fellowship at the Dunlap Institute for Astronomy \& Astrophysics, funded through an endowment established by the Dunlap family and the University of Toronto. AD is supported by a Royal Society University Research Fellowship. AF is supported by a European Union COFUND/Durham Junior Research Fellowship (under EU grant agreement no. 609412). MC was supported by the Science and Technology Facilities Council (STFC) [ST/P000541/1]. This work has also been supported by the European Research Council under ERC-StG grant EXAGAL-308037 and the Klaus Tschira Foundation. Part of the simulations of this paper used the SuperMUC system at the Leibniz Computing Centre, Garching, under the project PR85JE of the Gauss Centre for Supercomputing. This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (\url{www.dirac.ac.uk}). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, STFC DiRAC Operations grant ST/K003267/1, and Durham University. DiRAC is part of the National E-Infrastructure. \bibliographystyle{mnras}
\section{Introduction} Conceptually, run time packers encode a binary with obfuscation techniques such as compression and encryption to harden analysis of their code. This hardening significantly increases the effort needed to reverse engineer a given sample, whether manually or automatically, because it requires inverting the anti-analysis techniques used by the packer to understand the full capabilities of the malware. Run time packing is a highly effective anti-analysis technique, and estimates show more than 80\% of malware samples come packed \cite{10.1007/978-3-540-87403-4_6}. The fact that most malware samples come packed, combined with the need to unpack samples before proper analysis is feasible, makes it desirable to develop approaches that automatically unpack malware. Techniques and tools for automatically unpacking malware have received a lot of attention in the literature \cite{Dinaburg:2008:EMA:1455770.1455779, Hu:2013:MSM:2535461.2535485, SebastienJosseMalwareDynamicRecompilation, Kang:2007:RHC:1314389.1314399, DBLP:conf/malware/Korczynski16, 4413009, Ugarte-pedrero_sok:deep, DKOR}. Despite this large amount of research, the vast majority of this work relies on the same core principle, the ``write-then-execute'' heuristic. This heuristic exploits the key observation that in order to execute the encrypted code, it first must be decrypted and, therefore, be dynamically generated. The most common approach in previous work is, therefore, to execute a given sample, monitor all memory writes made by the malware, and whenever dynamically written memory executes, the unpacker identifies this memory as decrypted. The tools then dump this specific memory to enable follow-up inspection. There are two main limitations to the approach of existing work. First, the ``write-then-execute'' heuristic is not well-suited for packers that perform system-wide unpacking. 
This is because the heuristic only captures code that is dynamically generated \textit{explicitly} by the malware and not malicious code that is dynamically generated \textit{via} benign code which, unfortunately, is frequently the case in multi-process unpacking. Consequently, existing unpackers are mainly suitable for single-process malware and new approaches to capture system-wide malware unpacking are needed. Second, the primary output of existing work is memory dumps or naively constructed PE files of the dynamically generated code. The output lacks structure, is often an unreasonable over- or under-approximation of the actual malware code, and many obfuscation techniques from the packing process, e.g. obfuscation of external dependencies, remain in the output. As such, the analysis that follows must overcome these obfuscation techniques to enable meaningful analysis of the code. This is a problem because the purpose of unpacking is to facilitate follow-up analysis and not to give any conclusive answer about the malware itself. The limitations described above recur in existing work, and we argue that an essential reason for this is that existing work widely uses the same set of benchmark applications to validate their solutions. These benchmark applications consist of packers that are outdated, having been built more than a decade ago. Consequently, the empirical assessment of novel tools occurs against old, and often similar, techniques that do not accurately reflect the challenges posed by modern-day malware packers. To ensure that our novel tools are relevant, we need new benchmark applications that can be used for profiling novel unpackers. These benchmarks must explore corner-cases of modern packing techniques and be easily accessible to anti-malware researchers. The goal of this paper is to develop techniques that overcome the limitations of existing work highlighted above. 
We present a unified approach to precisely unpack malware samples with system-wide execution, dynamically generated code, custom IAT loading and API call obfuscations. The aim is to provide unpacked code that is well-suited for follow-up analysis via manual reverse engineering or off-the-shelf static analysis tools. To this end, Minerva deploys a \textit{combination} of dynamic and static analysis to amplify the effectiveness of automatic unpacking. The novel techniques presented in Minerva rely on information flow, which makes it highly precise and capable of unpacking malware samples in a system-wide context. Minerva models execution waves on a per-process basis, and each process with malware execution operates within the context of a single execution wave at any given moment. This provides for a clear wave model and implementation but may result in duplicate content amongst waves, for example, when execution waves use code from an earlier execution wave. Minerva takes as input a 32-bit Windows binary and outputs at least one Portable Executable (PE) file per execution wave. This has the benefit of mostly independent PE files but also means that content duplicated across waves will exist in multiple PE files. In order to produce output that is useful for follow-up analysis, Minerva captures how the malware uses external dependencies throughout the entire execution and maps this to each execution wave, resulting in PE files with valid import address tables and patched API calls. Finally, Minerva also performs static analysis to identify relevant malware code within each execution wave. In addition to our unpacker, we also propose a new benchmark suite with applications that combine code-injection techniques, dynamically generated code and obfuscation of external dependencies to overcome the limitations of empirical evaluation in existing work. We demonstrate our unpacker empirically against synthetic and real-world malware samples. 
The main contributions of this paper are as follows. \begin{itemize} \item We present a novel approach that combines dynamic and static analysis techniques to automatically unpack malware that executes across the entire system. The approach focuses on precise analysis and outputs unpacked samples that are well-suited for follow-up static analysis. \item We present a new benchmark suite with samples exploring modern-day packing behaviours. To the best of our knowledge, this is the first benchmark suite that comprises synthetic applications aimed at evaluating unpackers. \item We implement the techniques in Minerva and present an extensive empirical evaluation based on synthetic applications and real-world malware samples. \end{itemize} \section{Background, motivation and overview} \label{sec:Chapter4BackgroundAndMotivation} Packing is an umbrella term that refers to a set of various concrete obfuscation techniques, and there is no clear definition of the specific obfuscation techniques it encapsulates. This section clarifies the obfuscation techniques we treat in this paper and the limitations of existing work that motivate us. In total, we have compiled six core limitations across two general obfuscation techniques. \subsection{Dynamically generated code} The obfuscation technique that is most commonly associated with packing is dynamically generated code. In its simplest terms, dynamically generated code is when an application writes memory at run time and then proceeds to execute this memory. Most often, malware does this by containing encrypted code inside its binary image and decrypting this at run time in order to execute it. Existing automated unpackers identify dynamically generated code with the write-then-execute heuristic. This heuristic partitions the malware execution into a set of layers $\mathcal{L}_0$, $\mathcal{L}_1$, \dots $\mathcal{L}_n$ such that each layer constitutes dynamically generated code. 
Layer $\mathcal{L}_0$ represents the instructions of the binary malware image when first loaded into memory and $\mathcal{L}_{i+1}$ represents the instructions executed on memory written by the instructions in layer $\mathcal{L}_i$. \\ \textbf{Limitation 1.1: the write-then-execute heuristic is unable to capture dynamically generated malicious code \textit{via} benign code.} The strict relationship that instructions of one layer must be dynamically generated explicitly by instructions from a previous layer severely limits the generality of existing work. Malware that uses benign code to dynamically generate its malicious code goes unnoticed by this model. The implications of this limitation are substantial for capturing dynamically generated code across multiple processes by way of code-reuse attacks or OS-provided APIs, since it is not the instructions of the malicious code that do the writing of memory. Rather, it is benign code that is manipulated by the malware into writing dynamically generated malicious code. \\ \textbf{Limitation 1.2: existing work unreasonably approximates relevant dynamically generated memory.} Whenever an unpacker observes dynamically generated code it outputs the code for follow-up analysis. To do this, the unpacker must have a definition of what parts of dynamically generated memory are relevant to the unpacked code. This is because not all memory that is dynamically generated, e.g. the stack, is relevant for the unpacked output. However, this step of identifying relevant memory is largely overlooked by previous work. For example, neither Renovo \cite{Kang:2007:RHC:1314389.1314399} nor EtherUnpack \cite{Dinaburg:2008:EMA:1455770.1455779} clearly describe the specific memory they extract during unpacking, and Mutant-X \cite{Hu:2013:MSM:2535461.2535485} dumps the entire memory image of a process when observing dynamically generated code. 
These approaches are unreasonably imprecise and leave follow-up analysis with the task of identifying a needle in a haystack. \\ \textbf{Limitation 1.3: existing work outputs raw memory scattered across many memory dumps.} The majority of existing unpackers \cite{Bonfante:2015:CMS:2810103.2813627, Dinaburg:2008:EMA:1455770.1455779, Kang:2007:RHC:1314389.1314399, Ugarte-pedrero_sok:deep} make little effort to output the unpacked code in a coherent data structure but rather output the unpacked malware in the shape of raw memory dumps. The problem is that when malware dynamically generates code, this may be scattered across several regions, and some of these may also be data-only sections. A precise unpacker should not output incoherent raw memory regions, but rather a suitable data structure that combines these memory regions in an appropriate manner, e.g. re-basing where needed, that enables meaningful follow-up analysis.\\ \subsection{Obfuscating external dependencies} \label{sec:obfuscatingExternalDependencies} The way malware interacts with its environment is significant to understanding its malicious activities. We capture this understanding by analysing how the malware uses the OS via API calls and system calls, and these are, therefore, natural obfuscation targets for malware. \\ \textbf{Limitation 2.1: existing unpackers fail to accurately correlate API calls to malicious code.} There is often a large portion of dynamically generated code that must be covered when analysing packed malware. To quickly navigate towards relevant parts, we rely on the malware's use of APIs. However, existing unpackers fail to accurately correlate API calls within a process to the packed code, or even, more generally, attribute whether a given API call was performed by malicious or benign code. Consequently, they report unreasonable estimates of API-usage by the malware. 
In order to circumvent this limitation, the unpacker needs to maintain knowledge of which code belongs to the malware and also be able to identify the instruction responsible for a given API call. From an engineering point of view, most unpackers will be able to augment their systems with solutions to these problems with a modest implementation effort. However, fundamentally, this limitation is guarded by the ability to correctly identify what code belongs to the packed malware, which is closely related to Limitation 1.1 and Limitation 1.2. We identify it here because it is an essential feature in terms of understanding how the unpacked malware code uses external dependencies and something that current unpackers do not support. For example, when the unpacker from Ugarte et al. is matched with a sample\footnote{sha256 078a122a9401dd47a61369ac769d9e707d9e86bdf7ad91708510b9a4584e8d4} from the Tinba malware family that creates one layer of dynamically generated code before injecting code into the Windows process \texttt{winver.exe}, the unpacker reports that the dynamically generated code performs 1666 API calls from more than 350 different API functions. This result far exceeds the correct count, which is fourteen API calls from ten different API functions. \\ \textbf{Limitation 2.2: output from existing unpackers does not show API-usage when faced with custom API-call resolution.} There is an intricate relationship between dynamically generated code and API call obfuscation. In regular PE binaries, the import address table (IAT) specifies the external modules the given application uses. At run time, the operating system linker uses this IAT to load these modules and resolve the addresses of the specific functions the binary imports. However, packed code minimises the IAT to hide how it uses external dependencies, and, instead of using the regular OS linker, the packer deploys a custom linker to resolve its imports. 
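One widespread form of such a custom linker resolves imports by hashing export names and comparing them against a stored hash, so that no API name strings appear in the packed binary. The sketch below is purely illustrative of this general technique; the hash function, export names and addresses are made-up values, and this is not the routine of any specific packer.

```python
# Minimal sketch of hash-based API resolution, a common custom-linker trick:
# instead of importing "VirtualAlloc" by name, the packer embeds only a hash
# of the name and walks the export list at run time to find a match.

def ror32(x: int, n: int) -> int:
    # rotate a 32-bit value right by n bits
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def api_hash(name: str) -> int:
    # rotate-and-add name hash (a scheme in the style used by shellcode)
    h = 0
    for ch in name:
        h = ror32(h, 13)
        h = (h + ord(ch)) & 0xFFFFFFFF
    return h

# stand-in for a module's export table: name -> address (made-up addresses)
exports = {"CreateFileA": 0x7C810800,
           "VirtualAlloc": 0x7C809AF1,
           "WriteFile": 0x7C810E27}

def resolve(target_hash: int) -> int:
    # the packer's custom linker: match hashes instead of names
    for name, address in exports.items():
        if api_hash(name) == target_hash:
            return address
    raise KeyError("export not found")

# the packed binary embeds only the hash, never the string "VirtualAlloc"
addr = resolve(api_hash("VirtualAlloc"))
print(hex(addr))
```

Because only hashes are stored, neither a static string scan nor the IAT reveals which APIs the sample uses, which is why an unpacker must observe the resolution at run time to reconstruct a meaningful import table.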
Custom API resolution can happen at any moment (or moments) during execution, and memory dumps taken by existing unpackers are therefore liable to be captured before the malware has resolved its external dependencies. Unfortunately, it is rare that API resolution has occurred the moment dynamically generated code is observed, which is precisely when existing work dumps the memory \cite{Bonfante:2015:CMS:2810103.2813627, Dinaburg:2008:EMA:1455770.1455779, Kang:2007:RHC:1314389.1314399}. Figure \ref{fig:SoKvsMinervaUnpackedIATResolution} shows the difference between matching a traditional unpacker (Figure \ref{fig:SoKVSAPIResolution}) with a sample that has custom API resolution and matching the same sample with an unpacker that accurately captures API-usage in unpacked code (Figure \ref{fig:MinervaVsAPIResolution}), in this case, a result of Minerva. It is clear that without knowledge of the API calls it is hopeless to determine the activities of the code, whereas the activities are clear from the Minerva-generated code. 
\\ \begin{figure} \centering \begin{subfigure}[b]{.35\textwidth} \includegraphics[width=0.95\linewidth]{images/SokVsTinba4.png} \caption{Traditional unpacker, from \cite{Ugarte-pedrero_sok:deep}} \label{fig:SoKVSAPIResolution} \end{subfigure}% \vspace{\floatsep} \begin{subfigure}[b]{.35\textwidth} \includegraphics[width=0.95\linewidth]{images/MinervaVsTinba3.png} \caption{Minerva} \label{fig:MinervaVsAPIResolution} \end{subfigure} \caption{The output of unpackers when matched with API calls that are obfuscated with custom API resolution and that branch via a temporary register value.} \label{fig:SoKvsMinervaUnpackedIATResolution} \end{figure} \begin{figure} \centering \begin{subfigure}{.35\textwidth} \includegraphics[width=0.95\linewidth]{images/PEtiteOriginal.png} \caption{Traditional unpacker} \label{fig:PETiteTraditionalUnpacker} \end{subfigure}% \vspace{\floatsep} \begin{subfigure}{.35\textwidth} \includegraphics[width=0.95\linewidth]{images/PETiteMinerva.png} \caption{Minerva} \label{fig:PETiteMinervaUnpacker} \end{subfigure} \caption{The output of unpackers when matched with an API obfuscation from the PEtite packer.} \label{fig:PETiteExample} \end{figure} \textbf{Limitation 2.3: existing unpackers are unable to identify obfuscated API calls.} Orthogonal to the resolution of API calls, some packed malware samples will go a step further and directly obfuscate the way they call external APIs. In general, there are many ways for malware to obfuscate API calls. In Figure \ref{fig:SoKVSAPIResolution}, we see that the raw calls from Tinba depend on the value of \texttt{EBX}, which in this case contains the base-offset of a custom IAT built by the malware. 
Furthermore, Figure \ref{fig:PETiteExample} shows an example from an application packed with the PEtite\footnote{\url{https://www.un4seen.com/petite/}} packer where the code calls a Windows API function by pushing a value on top of the stack, rotating that value and then transferring execution via a \texttt{ret} instruction to the rotated value on top of the stack. The output of existing unpackers is not capable of resolving obfuscated API calls in the unpacked code. This is a problem because it is much harder, and sometimes impossible, to determine the destination of the branch instructions in follow-up analysis than it is for the unpacker. For example, without knowledge about the contents of \texttt{EBX}, the data at the address being read and the process layout, it is impossible to determine the destination of the given branch instructions and if they are API calls. \begin{figure} \centering \begin{tikzpicture} \node(node1) [plain] {Taint analysis}; \path(node1.east)+(0.2, 0.3) node (invisblenode1) [] {}; \path(node1.south)+(-0.0, -0.3) node (node2)[plain]{Malware Tracing}; \path(node2.south)+(-0.0, -0.3) node (node3)[plain]{Wave collector}; \path(node3.south)+(-0.0, -0.3) node (node4)[plain]{API hooks}; \path(node4.east)+(0.2, -0.3) node (invisblenode2) [] {}; \begin{pgfonlayer}{background} \path(node1.west |- node2.north)+(-0.3,0.7) node (a) {}; \path(node1.east |- node2.east)+(+0.3,-1.7) node (c) {}; \path[fill=gray!05, rounded corners, draw=black] (a) rectangle (c); \end{pgfonlayer} \path(node4.south)+(0.0, -0.8) node (node5)[plain]{Disassembler}; \path(node5.east)+(0.2, 0.3) node (invisblenode3) [] {}; \path(node5.south)+(0.0, -0.3) node (node6)[plain]{IAT builder}; \path(node6.south)+(0.0, -0.3) node (node7)[plain]{PE builder}; \path(node7.south)+(0.0, -0.3) node (node8)[plain]{Binary patcher}; \path(node8.east)+(0.2, -0.3) node (invisblenode4) [] {}; \begin{pgfonlayer}{background} \path(node5.west |- node6.north)+(-0.3,0.7) node (a) {}; \path(node6.east |- 
node6.east)+(+0.3,-1.7) node (c) {}; \path[fill=gray!05, rounded corners, draw=black] (a) rectangle (c); \end{pgfonlayer} \path(node1.east)+(2.5, 1.8) node (JDnode)[malplain]{Malware sample}; \path(JDnode.south)+(0.0, -0.7) node (node9)[thickplain]{Full system recording}; \path(node9.south)+(0.0, -0.7) node (node10)[thickplain]{Replay analysis}; \path(node10.south)+(0.0, -2.2) node (node11)[thickplain]{Static analysis}; \path[draw, ->](JDnode.south) -- node[]{} (node9.north); \path[draw, ->](node9.south) -- node[]{} (node10.north); \path[draw, ->](node10.south) -- node[]{} (node11.north); \path[draw, dashed](node10.north)+(-0.9, 0.0) -- node[]{} (invisblenode1.east); \path[draw, dashed](node10.south)+(-0.9, 0.0) -- node[]{} (invisblenode2.east); \path[draw, dashed](node11.north)+(-0.9, 0.0) -- node[]{} (invisblenode3.east); \path[draw, dashed](node11.south)+(-0.9, 0.0) -- node[]{} (invisblenode4.east); \path(node11.south)+(0.0, -1.3) node (PEfilesNode)[docs]{PE files}; \path[draw, ->](node11.south)+(0.0, 0.0) -- node[]{} (PEfilesNode.north); \end{tikzpicture} \caption{Architecture of Minerva's automatic unpacker.} \label{fig:MinervaArchitecture} \end{figure} \subsection{Solution overview} The goal of this paper is to develop system-wide, precise and general unpacking techniques. Specifically, our goal is to input a malware binary into our Minerva tool and output PE files that precisely capture the malware code post-decryption and decompression, and also capture how the malware uses external dependencies. The aim is to output PE files that are well-suited for follow-up analysis by off-the-shelf static analysis tools and manual investigation. To achieve our goal, we must overcome the limitations highlighted above. 
First, to overcome the limitations when dealing with dynamically generated code, we need a solution that can (\textit{limitation 1.1}) identify dynamically generated memory across the system; (\textit{limitation 1.2}) extract \textit{precisely} the memory that is relevant to the malware; and (\textit{limitation 1.3}) combine the relevant dynamically generated code into meaningful and related structures. Second, to overcome the limitations against malware that obfuscates external dependencies, the solution must also (\textit{limitation 2.1}) precisely capture the use of API calls \textit{within} the malware code; (\textit{limitations 2.2 and 2.3}) do this in the context of custom API resolution and obfuscated API calls; and, finally, map these observations to the output, so they are readily available for follow-up analysis. The solution we develop, and implement in Minerva, follows a two-step approach with the architecture shown in Figure \ref{fig:MinervaArchitecture}. First, we use dynamic analysis in Minerva to precisely extract the packed code and the API calls of the malware, and then we use static analysis to construct PE files based on the unpacked code. Specifically, the first step is to capture the malware execution trace using dynamic taint analysis in a similar fashion to Tartarus, presented by Korzynski and Yin \cite{DKOR}. Then, we abstract the malware execution trace into execution waves based on information flow analysis, such that an execution wave is a process-level construct that represents dynamically generated code in the malware. During the run time analysis, Minerva also ensures precise identification of API calls by the instructions in the malware execution trace. From the first step, we get a set of execution waves consisting of memory dumps, the malware execution trace of each wave, and more. The second step Minerva performs is to group related memory within each execution wave using disassembly techniques. 
Minerva then converts each group of related dumps into a new PE file with a new import address table (IAT), and patches API calls based on static analysis and the API calls observed during dynamic analysis. In the following sections, we detail these steps. \section{System-wide malware tracing} A key component of our system is the ability to trace the malware throughout the entire operating system using dynamic taint analysis. We implement the techniques in Tartarus \cite{DKOR} to do this. In order to make this paper self-contained, we briefly summarise the idea in this section; for a complete description of the approach, we refer to \cite{DKOR}. \subsection{Abstract model of execution environment} \label{sec:Chapter3AbstractExecutionEnvironment} We define a formal environment in which we can reason about executions in a sandbox. The model we present is an extension of work from Dinaburg et al. \cite{Dinaburg:2008:EMA:1455770.1455779}. We consider execution at the machine instruction level, and since an instruction can access memory and CPU registers directly, we consider a system state as the combination of memory contents and CPU registers. Let $M$ be the set of all memory states and $C$ be the set of all possible CPU register states. We denote all possible instructions as $I$, where each instruction can be considered a machine-recognisable combination of opcode and operands stored at a particular place in memory. A program $P$ is modelled as a tuple ($M_P$, $\epsilon_P$) where $M_P$ is the memory associated with the program and $\epsilon_P$ is an instruction in $M_P$ which defines the entry point of the program. There are often many programs executing on a system, and each of these may communicate with the others through the underlying OS. As such, we model the execution environment $E$ as the underlying OS and the other programs running on the system.
We define a transition function $\delta_E : I \times M \times C \rightarrow I \times M \times C$ to represent the execution of an instruction in the environment $E$. It defines how execution of an instruction updates the execution state and determines the next instruction to be executed. The trace of instructions obtained by executing program $P$ in execution environment $E$ is then defined to be the ordered set $T(P,E) = (i_0, \dots, i_l)$ where $i_0 = \epsilon_P$ and $\delta_E(i_k, M_k, C_k) = (i_{k+1}, M_{k+1}, C_{k+1})$ for $0 \leq k < l$. We note here that the execution trace does not explicitly capture which instructions are part of the program, with the exception of $i_0$, but rather all the instructions executed on the system, including instructions in other processes and the kernel. For any two elements in the execution trace $i_j \in T(P,E)$ and $i_k \in T(P,E)$ we write $i_j < i_k$ if $j < k$, $i_j > i_k$ if $j > k$, and $i_j = i_k$ otherwise. We use this to define ordering between the instructions of the sequence. \subsection{Malware execution trace} \label{sec:MalwareExecutionTracing} We now introduce the concept of a malware execution trace. Suppose $P$ is a malware program and $P_A$ is some malware tracer that aims to collect $P$'s execution trace. The malware program $P$ is interested in evading analysis and gaining elevated privileges by using code-reuse attacks and code injection. As such, the execution trace of the malware may contain instructions that are not members of program $P$'s memory $M_P$. To monitor the malware across the environment, the malware tracer $P_A$ maintains a shadow memory that allows it to label the memory and the CPU registers. This shadow memory is updated for each instruction in the execution trace. Let $S \subseteq M \times C$ be the set of all possible shadow memories. We then define the propagation function $\delta_A : S \times I \rightarrow S$ to be the function that updates the shadow memory when an instruction executes.
The list of shadow memories collected by the malware tracer is now defined as the ordered set: $ST_{A}(T(P,E)) = (s_0, \dots, s_l)$ where $\delta_A(s_k, i_k) = s_{k+1}$ for $0 \leq k < l$. The job of the malware tracer is to determine for each instruction in the execution trace whether the instruction belongs to the malware or not. To do this, the analyser uses the predicate $\Lambda_A : S \times I \rightarrow \{true, false\}$. The malware execution trace is now given as the sequence of instructions for which $\Lambda_A$ is true and we call $\Lambda_A$ the inclusion predicate. We define the malware execution trace formally as follows: \begin{mydef} Let $T(P,E)$ be an execution trace and $P_A$ a malware tracer. The malware execution trace is the ordered set $\Pi_A = (m_0, \dots, m_d)$ where: \begin{itemize} \item $\Pi_A$ is a subsequence of $T(P,E)$; \item $\exists v \, | \, m_j = i_v \wedge \Lambda_A(s_v, i_v)$ \text{ for } $0 \leq j \leq d$. \end{itemize} \label{def:real_malware_execution_trace} \end{mydef} The above definition says that the malware execution trace is a subsequence (ordering is preserved) of the entire whole-system trace and for each instruction in the malware execution trace there is a corresponding instruction in the whole-system trace for which the inclusion predicate is true. The malware execution trace gives us a definition we can use to reason about the properties of malware tracers. In particular, for a given malware tracer it highlights the propagation function, $\delta_A$, and the inclusion predicate, $\Lambda_A$, to be the defining parts. Having constructed our model of malware tracers and identified the key aspects that determine how they collect the execution trace, we now move on to present how Minerva precisely captures system-wide propagation. 
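To make the model concrete, the trace-filtering role of a malware tracer can be sketched as follows. This is an illustrative Python sketch, not Minerva's implementation; all names are hypothetical, and the taint-style propagation function and inclusion predicate below are simplified stand-ins for $\delta_A$ and $\Lambda_A$.

```python
# Illustrative sketch of the tracer model above; names and data
# structures are hypothetical, not Minerva's implementation.

def collect_malware_trace(trace, s0, propagate, include):
    """Filter a whole-system trace down to the malware execution trace.

    trace     -- ordered list of executed instructions (whole-system trace)
    s0        -- initial shadow memory s_0
    propagate -- delta_A: (shadow, instr) -> shadow
    include   -- Lambda_A: (shadow, instr) -> bool
    """
    shadow = s0
    malware_trace = []
    for instr in trace:
        if include(shadow, instr):         # Lambda_A(s_k, i_k)
            malware_trace.append(instr)
        shadow = propagate(shadow, instr)  # s_{k+1} = delta_A(s_k, i_k)
    return malware_trace

# A minimal taint-style instantiation: the shadow memory is a set of
# tainted addresses; an instruction is included if its own address is
# tainted, and its writes become tainted when it is.
def taint_propagate(shadow, instr):
    if instr["addr"] in shadow:
        return shadow | set(instr["writes"])
    return shadow

def taint_include(shadow, instr):
    return instr["addr"] in shadow
```

Running `collect_malware_trace` over a whole-system trace with these two functions yields the subsequence of instructions for which the inclusion predicate holds, matching Definition \ref{def:real_malware_execution_trace}.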
\subsection{Tracing the malware execution} \label{sec:Chapter3OverviewOfMalwareExecutionTracing} The goal is to capture malware execution throughout the whole system in a precise and general manner. The overall idea is to use dynamic taint analysis to mark the malware under analysis as tainted and then capture its system-wide execution by following how the taint propagates through the system. Algorithm \ref{Chapter4:WaveCollection} gives an overview of our approach to capturing the malware execution trace. Assuming the first instruction executed on the system is the entry point of the malware, the first step \textbf{(line 1)} is to taint the memory making up the malware. In particular, we taint the entire malware module, including data and code sections. Next, execution continues until there is no more taint or a user-defined timeout occurs, and for each instruction executed we check if the memory making up the instruction is tainted \textbf{(line 7)}. We include the instruction in the malware execution trace if the instruction is tainted \textbf{(line 8)}. For each instruction in the malware execution trace we taint all output of the instruction, so as to follow memory that the malware generates independently of the initial state of its memory, as shown by the Update algorithm in Algorithm \ref{alg:UpdateTaint} \textbf{(lines 3-5)}. \section{Information flow execution waves} \label{sec:DynamicallyGeneratedCode} Given the malware execution trace, the next step is to partition it into execution waves. The goal of execution waves is to capture dynamically generated malicious code independently of who wrote the code, on the basis that the generated code must originate from the malware. However, we consider execution waves to be more than just a sequence of instructions.
The set of execution waves gives an explicit representation of an entire application, including dynamically generated malicious code, and each execution wave may, therefore, include both executable and non-executable data. In this section we give a semantics for execution waves (Section \ref{sec:chapter4ExecutionSemantics}) and describe how we collect the waves in practice (Section \ref{sec:chapter4CollectingExecutionWaves}). \subsection{Execution wave semantics} \label{sec:chapter4ExecutionSemantics} The goal of our execution wave semantics is to clearly define the conversion of a malware sample's execution into waves of dynamically generated malicious code. As such, we describe the waves in relation to an execution trace $T(P,E)$ as described in Section \ref{sec:Chapter3AbstractExecutionEnvironment}. We partition the malware execution into waves on a process-level basis. We map every instruction in the malware execution trace $i \in \Pi_A$ to a process $P_y$ and a wave within this process $W_x$. We denote $P_yW_x$ to mean wave $x$ within process $y$, and every process with malicious code execution contains a sequence of waves $P_y.\Omega = P_yW_0, \dots, P_yW_n$ with $|P_y.\Omega| \geq 1$. We denote the initial wave in which malware execution begins as $P_{\epsilon}W_{\epsilon}$ and the set $\Phi_{\Pi}$ contains all execution waves for a given malware execution trace $\Pi_A$. For each instruction in the malware execution trace, we first identify the process in which it executes and then the wave it belongs to within that process. Formally, we define an execution wave as follows. \begin{mydef} An execution wave is a tuple composed of: \begin{itemize} \item A sequence of instructions $\mathcal{I} = i_0, \dots, i_n$ executed in the given wave.
We take $i_0$ to be the entry point of the wave; \item a shadow memory $\mathcal{S}$, which is a set of ordered pairs $(m_{addr}, m_{byte})$ that contains the tainted memory making up the wave, including both code and data memory; \item the tainted writes $\mathcal{T}$, which is a set of ordered pairs $(t_{addr}, t_{byte})$ that holds the tainted memory written by instructions in $P$ since $i_0$, where $P$ is the process of the execution wave. \end{itemize} \label{def:execution_wave} \end{mydef} Next, we present a definition that formalises our requirements for partitioning a complete execution trace into a set of execution waves. The purpose of this definition is to capture every layer of dynamically generated malicious code without forbidding overlap between the contents of execution waves. In the following, for two instructions $i, j$ we write $i < j$ if $i$ comes before $j$ in the malware execution trace, and vice versa. \begin{mydef} \label{executionWaveDefinition} Let T(P,E) be an instruction execution trace and $\Pi_A$ the corresponding malware execution trace. The set of execution waves is then given as $\Phi_{\Pi} = \{P_0, \dots, P_n\}$ where: \begin{itemize} \item $\forall i \in \Pi_{A} \exists P_x \in \Phi_{\Pi} | i \in P_x.\mathcal{I}.$ \item For any $P_yW_x$ and $P_yW_z$ in $\Phi_{\Pi}$ where $x < z$ we have that $\forall i_x \in P_yW_x.\mathcal{I}, \forall i_z \in P_yW_z.\mathcal{I} | i_x < i_z$. This says that there is a strict ordering in the malware execution trace between the instructions of any two waves in a given process $P_y.\Omega$. \item $\forall (m_{addr}, m_{byte}) \in P_wW_{w'}.\mathcal{S} \exists (t_{addr}, t_{byte}) \in P_tW_{t'}.\mathcal{T} \\ | (m_{addr}, m_{byte}) \in P_{\epsilon}W_{\epsilon}.\mathcal{S} \vee (m_{addr}, m_{byte}) = (t_{addr}, t_{byte})$ where $P_wW_{w'} \neq P_tW_{t'}$ and $\forall i_w \in P_wW_{w'}.\mathcal{I} \exists i_t \in P_tW_{t'} | i_t < i_w$.
This says that the shadow memory for all execution waves must either exist in the shadow memory of the initial wave or be composed of tainted memory written by a wave that started earlier. \item For any wave $P_yW_x \in \Phi_{\Pi}$ we have that $\forall i \in P_yW_x.\mathcal{I} \\ | \exists (m_{addr}, m_{byte}) \in P_yW_x.\mathcal{S} | i[A] = m_{addr}$. This says the memory of any instruction in each execution wave must be present in the shadow memory of the given wave. \end{itemize} \end{mydef} An important aspect of Definition \ref{executionWaveDefinition} is that the second bullet enforces a strict ordering between instructions in the set of execution waves for each process. The effect of this is that we preclude instructions from any given execution wave from being used in any other execution wave. The reason we do this is that it creates a clear history of execution wave progress within each process, and it becomes easier to implement since it is only necessary to maintain one execution wave per process. The drawback is that when malware transfers execution to code from an earlier execution wave, we include some content of the earlier execution wave in the current execution wave. In this way, we may end up with waves that overlap in their shadow memory, but, naturally, this can be stripped during post-processing. In practice, we have found this to be no major issue, and the trade-off works well. That said, we leave the door open and encourage future work on other, more refined models. \subsection{Collecting the execution waves} \label{sec:chapter4CollectingExecutionWaves} In practice, we only associate one wave with a given process at any given moment. Therefore, to collect the execution waves, it is sufficient to keep track of the current wave in each process. We initially have only one wave, which is the wave inside the process executing the malicious application.
The shadow memory $\mathcal{S}$ of this wave is the malware module when loaded into memory, and the set of tainted writes is initially the empty set, $\mathcal{T} = \emptyset$. We then update the set of tainted writes whenever an instruction writes tainted memory following our Update function shown in Algorithm \ref{alg:UpdateTaint}. \SetAlgoNoEnd \begin{algorithm} \KwData{(input) Malware sample $B$} \KwResult{Logged malware execution waves and malware execution trace $\Pi$.} $\mathcal{P} \leftarrow$ init\_taint$(B)$\; $\mathcal{T}, \mathcal{S} \leftarrow$init\_waves$(B)$ \tcp*{initialise the shadow memories and tainted writes.} // Full system instrumentation\; $\mathit{i} \leftarrow first\_instr()$\; \While{$\mathcal{P} \neq \emptyset$} { // is the instruction tainted?\; \If{$\mathit{i[A]} \in \mathcal{P}$} { $\Pi \leftarrow \Pi ^{\wedge} \langle \mathit{i} \rangle$\; \eIf{$i[A] \not \in S_{pid}$} { \eIf{$i[A] \not \in \mathcal{T}_{pid}$} { $S_{pid} \leftarrow S_{pid} \cup (i[A], i[mem])$ \; $W_{pid} \leftarrow W_{pid} \cup \{i\}$ } { $\mathcal{S}_{pid}$, $\mathcal{T}_{pid}$, $W_{pid} \leftarrow$ dump\_wave() \; } } { \eIf{$i[A] \in \mathcal{T}_{pid} \wedge S_{pid}[i[A]] \neq i[mem]$} { $\mathcal{S}_{pid}$, $\mathcal{T}_{pid}$, $W_{pid} \leftarrow$ dump\_wave() \; } { $W_{pid} \leftarrow W_{pid} \cup \{i\}$ } } } $i, \mathcal{P}, \mathcal{T} = update(i, \mathcal{P}, \mathcal{T})\;$ } \Return $(\Pi)$ \caption{Wave collection} \label{Chapter4:WaveCollection} \end{algorithm} To capture execution waves, we monitor for each process the relationship between the currently executing instruction, the shadow memory and the set of tainted writes following Algorithm \ref{Chapter4:WaveCollection}. 
Specifically, for every tainted instruction in the malware execution trace, there are four possible cases: \begin{enumerate} \item The address of the instruction is not in the shadow memory and not in the tainted writes \textcolor{blue}{(line 10 Algorithm \ref{Chapter4:WaveCollection})}; \item The address of the instruction is not in the shadow memory but in the tainted writes \textcolor{blue}{(line 13 Algorithm \ref{Chapter4:WaveCollection})}; \item The address of the instruction is in the shadow memory and in the tainted writes, but the content of the shadow memory does not match the memory of the current instruction \textcolor{blue}{(line 16 Algorithm \ref{Chapter4:WaveCollection})}; \item The address of the instruction is in the shadow memory and in the tainted writes, and the content of the shadow memory is equivalent to the memory of the current instruction \textcolor{blue}{(line 18 Algorithm \ref{Chapter4:WaveCollection})}. \end{enumerate} Case (1) happens in two scenarios. The first scenario is when tainted memory is transferred across processes via shared memory. For example, if tainted memory is written to memory shared by processes $P_1$ and $P_2$ and the instruction performing the write executes in $P_1$, then the tainted writes will not be in $P_2.\mathcal{T}$ or $P_2.\mathcal{S}$, because we only populate $P_2.\mathcal{T}$ when instructions from $P_2$ write to the address space of $P_2$. The second scenario is when code from the current wave transfers execution to code that is part of an earlier wave. This is because the shadow memory of each wave does not propagate to the succeeding wave, but the memory remains tainted nonetheless. Whenever we observe case (1), we add the memory of the executing instruction to the shadow memory of the current wave and append the instruction to the wave's instruction sequence.
In case (4) the current instruction is simply part of the current execution wave, and this is by far the most common case. In this case, we append the instruction to the instruction sequence of the current wave. In cases (2) and (3) we consider the current instruction to be the entry point of a new execution wave. Specifically, in case (2) the instruction is dynamically generated in a new memory region, and in case (3) the instruction is dynamically generated on top of already existing malware code. In the event of a new wave, we log information about the current wave following Algorithm \ref{alg:DumpWave}. First, we log the instructions executed in the current wave, the tainted writes and the shadow memory, dumping every page that contains a tainted write as well as the pages of the shadow memory. Then we set the shadow memory of the next wave to be the tainted writes of the current wave and set the tainted writes to be the empty set. \begin{algorithm} \KwData{(input) Instruction $i$, memory propagation set $P$, Tainted writes $T$.} \KwResult{Next instruction $i_{next}$, memory propagation set $P$, Tainted writes $T$} $\mathcal{P} \leftarrow propagate\_taint(i, \mathcal{P})$\; $i_{next} \leftarrow exec\_instr(i)$\; \If{$\mathit{i}[A] \in \mathcal{P}$} { \For{$o \in \mathit{i}[O]$} { $\mathcal{P} \leftarrow \mathcal{P} \cup \{o\}$\; } } \For {$w \in \mathit{i}[W]$ } { \lIf{$w \in \mathcal{P}$} { $\mathcal{T}[i.pid] \leftarrow \mathcal{T}[i.pid] \cup \{w\}$ } } \Return $i_{next}, \mathcal{P}, \mathcal{T}$ \caption{Update} \label{alg:UpdateTaint} \end{algorithm} \begin{algorithm} \KwData{(input) Current wave $\mathcal{W}$, shadow memories $\mathcal{S}$, Tainted writes $\mathcal{T}$.} \KwResult{Updated $\mathcal{S}$, $\mathcal{T}$, $\mathcal{W}$} LogInstrs($W_{pid}$)\; LogTaint($\mathcal{T}$)\; LogShadowMem($\mathcal{S}$)\; $\mathcal{S}_{pid} \leftarrow \mathcal{T}_{pid}$\; $\mathcal{T}_{pid} \leftarrow \emptyset$\; $W_{pid} \leftarrow \{i\}$\; \Return
$\mathcal{S}_{pid}, \mathcal{T}_{pid}, W_{pid}$\; \caption{dump\_wave} \label{alg:DumpWave} \end{algorithm} The execution waves capture dynamically generated code independently of who wrote it, including \textit{malicious} code dynamically generated via benign code. We achieve this generality because the shadow memory is composed of tainted memory, and tainted memory propagates through both benign and malicious instructions. Since the tainted code originates from the malware itself, it is dynamically generated \textit{malicious} code. This property distinguishes our technique from previous work and allows it to be more general without losing precision. The output from collecting the execution waves is the sequence of waves executed during the malware execution. For each execution wave, we have memory dumps of the tainted memory during its execution and the list of instructions that belong to the wave. As such, we have an explicit representation of each instruction in the malware execution in the form of its raw bytes, and we also have memory dumps of any non-executed malicious (tainted) memory. All of this information is then used to reconstruct PE files that are effective for follow-up static analysis. \section{Precise dependency capture} \label{sec:ObfuscatedLibraryCalls} In order for the PE files to be useful for follow-up static analysis, they must show how the malware uses external dependencies. As described in Section \ref{sec:obfuscatingExternalDependencies}, we must consider custom API call resolution and obfuscated API calls. To this end, we capture the destination of every branch instruction in the malware execution trace and check if it corresponds to the beginning of a function in an external module. To collect the addresses of functions in each process with malware execution, we iterate the export table of every module in the given process and capture the address of every function it exports.
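The export-table walk can be sketched as follows. This is an illustrative sketch with hypothetical data shapes, not Minerva's implementation; in practice, the export entries would be read from each loaded module with a PE parser (e.g. the pefile library).

```python
# Hedged sketch of the per-process export map described above.
# Module layouts are illustrative; in practice, the export tables would
# be read from each loaded module with a PE parser.

def build_export_map(modules):
    """Map absolute function addresses to (module, function) pairs.

    modules -- list of (module_name, base_address, {func_name: rva}) tuples
    """
    export_map = {}
    for name, base, exports in modules:
        for func, rva in exports.items():
            # absolute address = module base + export RVA
            export_map[base + rva] = (name, func)
    return export_map

def classify_branch(dest, export_map):
    """Return the (module, function) a branch destination targets, or None."""
    return export_map.get(dest)
```

Checking every branch destination in the malware execution trace against such a map identifies the API calls the malware performs, regardless of how the destination address was resolved.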
We put these functions in a per-process map that pairs function addresses with their respective function names. Minerva can also speed up this process using pre-calculated function offsets for a given DLL. With pre-calculated offsets, we only need to know the base address of a given imported module inside the malware process to compute the absolute addresses of its exported functions. To capture the API functions that the malware calls, we obtain the destination of every branching instruction in the malware execution trace. If the branch destination is in the set of functions exported by any of the dynamically loaded modules within the execution trace, the malware performs an API call. We log every API call and, for some functions, the parameters as well. For many functions in the Windows API, the return value is also essential to understand the semantics of the call. To capture the return value and output parameters, we note the return address of the API call on the stack and read the output of the function whenever execution reaches the return address. We also monitor functions like \texttt{LoadLibrary} to update our export table when processes load new modules. Our approach to monitoring API calls precisely captures the API calls performed by instructions in the malware execution trace and does not capture API calls performed by benign code inside a process in which the malware executes. Furthermore, because we know the specific malicious instructions for each execution wave, it is trivial to map API calls to execution waves. This precise mapping greatly improves the precision of the analysis in comparison to sandboxes that capture API calls globally within a process, since many of these calls are irrelevant to the malware (this is particularly true in code-injected processes). Minerva currently makes no attempt to handle malware that hides API usage by way of stolen bytes or by copying Windows code.
Furthermore, if malware deploys inlined library code or statically linked libraries, then Minerva will not consider these as external dependencies. This is a limitation we discuss further in Section \ref{Chapter4:limitations}. \section{Static reconstruction of execution waves} After collecting the execution waves and external dependencies, we need to combine these into PE files. For each execution wave, we construct a set of PE files based on the content of its shadow memory, and for each PE file we need three ingredients: (1) the specific memory pages of the execution wave that make up the PE file; (2) the PE's import address table (IAT); (3) the entry point of the file. The static analysis component of Minerva performs three main steps. First, it groups related memory dumps of each execution wave, then identifies external dependencies in each of these memory groups and, finally, builds new PE files based on the results of the two previous steps. \subsection{Merging over-approximated shadow memories} The output from collecting the execution waves includes, for each execution wave, page-level memory dumps of the shadow memory and tainted writes. Intuitively, it may seem appropriate to convert all of these memory dumps into one large PE file and use this for static analysis. However, we have found this to be imprecise in practice because it is rarely the case that all of the memory dumps are relevant to the malware. On the one hand, the shadow memory is a conservative approximation, as we capture some memory that is not executable code, and some memory is a result of over-propagated taint. On the other hand, we do not want to reconstruct PE files purely based on executed memory, since this would miss non-executed but still malicious executable code, as well as relevant data sections.
To avoid this imprecision, we divide the page-level memory dumps from the dynamic analysis into smaller groups, such that the pages of each group are related and no page in a given partition relates to any other partition. The goal is to capture the parts of the malware that are self-contained and represent the application independently of a specific point in execution. To this end, we create a PE file with multiple sections for each partition. Figure \ref{fig:SelectingTaintedPagesForPE} shows an example of how we select the specific tainted pages that are relevant for the unpacked malware from a set of tainted pages output by Minerva's dynamic analysis component. The first step is to identify the tainted pages with malicious code execution. To do this, we first iterate the sequence of instructions executed in a given execution wave and collect all pages that hold instructions from this sequence. Following this, we iteratively collect neighbouring pages until no more remain; the result is a set of page-level intervals in which some pages hold executed code and the other pages are adjacent to them. This corresponds to the first two steps in Figure \ref{fig:SelectingTaintedPagesForPE}. Following this, we identify pages in the shadow memory that relate to each interval. To construct self-contained PE files, we capture data-dependencies and control-dependencies to other pages in the shadow memory for each interval. We do this by performing speculative disassembly on each memory dump to capture cross-references to other memory dumps. This step gives us cross-references for each interval, and we then iteratively merge related intervals until no interval has cross-references to intervals in other groups. Following this approach, we end up with a set of groups of memory dumps, and we create a PE file for each of these groups. In the example in Figure \ref{fig:SelectingTaintedPagesForPE} we end up with one group consisting of two intervals and will, therefore, create one PE file.
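The interval-merging step can be sketched as follows. The representation of intervals and cross-references is illustrative; in practice, the cross-references would come from speculative disassembly of the dumps.

```python
# Hedged sketch of merging page intervals by cross-references, as
# described above; data shapes are illustrative, not Minerva's own.

def merge_intervals(intervals, xrefs):
    """Group page-level intervals connected by cross-references.

    intervals -- list of (start, end) address ranges (end exclusive)
    xrefs     -- list of (addr_from, addr_to) cross-references
    Returns a list of groups, each a list of intervals.
    """
    parent = list(range(len(intervals)))  # union-find over intervals

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    def owner(addr):
        # index of the interval containing addr, if any
        for i, (start, end) in enumerate(intervals):
            if start <= addr < end:
                return i
        return None

    for src, dst in xrefs:
        a, b = owner(src), owner(dst)
        if a is not None and b is not None:
            union(a, b)

    groups = {}
    for i in range(len(intervals)):
        groups.setdefault(find(i), []).append(intervals[i])
    return list(groups.values())
```

With the intervals of Figure \ref{fig:SelectingTaintedPagesForPE} and a single cross-reference from the \texttt{0x5300000} interval into the \texttt{0x6200000} interval, the two intervals merge into one group while unrelated intervals remain separate.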
\begin{figure*}[!h] \centering \begin{tikzpicture}[scale=0.7, every node/.style={transform shape}] \node(node1) [addressbox] {\texttt{0x7768000}}; \path(node1.north)+(0.0, 0.5) node (column0text)[textplain]{Tainted pages from dynamic analysis}; \path(node1.south)+(-0.0, -0.3) node (node2)[addressbox]{\texttt{0x7751000}}; \path(node2.south)+(-0.0, -0.3) node (node3)[addressbox]{\texttt{0x7731000}}; \path(node3.south)+(-0.0, -0.3) node (node4)[addressbox]{\texttt{0x7557000}}; \path(node4.south)+(-0.0, -0.3) node (node5)[addressbox]{\texttt{0x7551000}}; \path(node5.south)+(-0.0, -0.3) node (node6)[addressbox]{\texttt{0x6201000}}; \path(node6.south)+(-0.0, -0.3) node (node7)[addressbox]{\texttt{0x6200000}}; \path(node7.south)+(-0.0, -0.3) node (node8)[addressbox]{\texttt{0x5901000}}; \path(node8.south)+(-0.0, -0.3) node (node82)[addressbox]{\texttt{0x5303000}}; \path(node82.south)+(-0.0, -0.3) node (node9)[addressbox]{\texttt{0x5302000}}; \path(node9.south)+(-0.0, -0.3) node (node10)[addressbox]{\texttt{0x5301000}}; \path(node10.south)+(-0.0, -0.3) node (node11)[addressbox]{\texttt{0x5300000}}; \path(node6.east)+(3.8, -0.3) node (InvisColumn2){}; \path[draw, ->](node6.east)+(0.3,-0.3) -- node[below, text width=5em]{Collect pages with malicious execution.} (InvisColumn2.west){}; \path(node6.east)+(5.0, -0.0) node (Column2Center)[addressbox]{\texttt{0x5301000}}; \path(Column2Center.south)+(0.0, -0.3) node (column2node2)[addressbox]{\texttt{0x5300000}}; \path(Column2Center.east)+(3.8, -0.3) node (InvisColumn3){}; \path[draw, ->](Column2Center.east)+(0.3, -0.3) -- node[below, text width=6em]{Gather neighbouring pages.} (InvisColumn3.west){}; \path(Column2Center.east)+(5.0, -0.6) node (Column3Center)[addressbox]{\texttt{0x5301000}}; \path(Column3Center.south)+(0.0, -0.3) node (column3node2)[addressbox]{\texttt{0x5300000}}; \path(Column3Center.north)+(0.0, +0.3) node (column3node3)[addressbox]{\texttt{0x5302000}}; \path(column3node3.north)+(0.0, +0.3) node 
(column3node4)[addressbox]{\texttt{0x5303000}}; \path(Column3Center.east)+(3.8, +0.3) node (InvisColumn4){}; \path[draw, ->](Column3Center.east)+(0.3, +0.3) -- node[below, text width=7em]{Speculative disassembly to collect pages by cross-referencing.} (InvisColumn4.west){}; \path(Column3Center.east)+(5.0, +0.6) node (Column4Center)[addressbox]{\texttt{0x5303000}}; \path(Column4Center.south)+(0.0, -0.3) node (Column41Center)[addressbox]{\texttt{0x5302000}}; \path(Column41Center.south)+(0.0, -0.3) node (column4node2)[addressbox]{\texttt{0x5301000}}; \path(column4node2.south)+(0.0, -0.3) node (column4node3)[addressbox]{\texttt{0x5300000}}; \path(Column4Center.north)+(0.0, +0.3) node (column4node4)[addressbox]{\texttt{0x6200000}}; \path(column4node4.north)+(0.0, +0.3) node (column4node5)[addressbox]{\texttt{0x6201000}}; \end{tikzpicture} \caption{The process of identifying which tainted pages from the dynamic analysis are relevant when reconstructing unpacked PE files. In the example, the reconstructed PE file has two sections (\texttt{0x5300000}-\texttt{0x5303000} and \texttt{0x6200000}-\texttt{0x6201000}).} \label{fig:SelectingTaintedPagesForPE} \end{figure*} \subsection{Dependency reconstruction} To reconstruct external dependencies in our PE files, we need to rebuild the IAT of the binary and patch instructions to rely on this new IAT. To construct the IAT, we first identify API calls made by instructions belonging to the pages of each memory group. We identify these by matching the API hooks collected during dynamic analysis to instructions of the respective code wave and the pages of the given memory group. We include each unique API function in the IAT of the reconstructed PE file. Although we know which instructions branch to external APIs from the malware execution trace, the branch destinations may not be visible from the memory dumps themselves.
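As an illustrative sketch (with hypothetical data shapes, not Minerva's implementation), collecting the unique APIs called from within a memory group's pages could look as follows:

```python
# Hedged sketch of assembling a group's import list from the API calls
# logged during dynamic analysis; data shapes are illustrative.

def build_iat_entries(api_calls, group_pages, page_size=0x1000):
    """Collect the unique APIs called from instructions inside group_pages.

    api_calls   -- list of (caller_addr, dll, func) from dynamic analysis
    group_pages -- set of page base addresses in this memory group
    """
    entries = []
    seen = set()
    for caller, dll, func in api_calls:
        page = caller & ~(page_size - 1)  # page base of the call site
        if page in group_pages and (dll, func) not in seen:
            seen.add((dll, func))
            entries.append((dll, func))
    return entries
```

Each resulting (DLL, function) pair becomes one entry in the reconstructed PE file's IAT; calls originating outside the group's pages are ignored.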
The final step in constructing PE files is, therefore, to map the instructions that perform API calls to our newly generated IAT by patching them on the binary level. Unfortunately, binary patching is not an easy task, since patching some instructions would require us to rearrange the instructions in the binary, and this may subsequently break it. In practice, we patch branch instructions that are 6 bytes long, e.g. \texttt{call [0xdeadbeef]}, because we can do this without rearranging instructions. We do not patch instructions that are shorter than 6 bytes, e.g. \texttt{call eax}. However, we still keep the cross-references so they can be used in more abstract representations in a follow-up analysis. \subsection{Final PE construction} In order to construct the final PE file, we need to know the entry point and the PE sections to put in the file. To identify the entry point, we go through the instruction sequence of the given wave and identify the first instruction in the range of each memory dump group. In order to construct the PE sections, we rely on the memory intervals that we end up with in each memory group. For example, in Figure \ref{fig:SelectingTaintedPagesForPE} we end up with one group and two intervals ([\texttt{0x5300000}-\texttt{0x5303000}], [\texttt{0x6200000}-\texttt{0x6201000}]). We make each of these intervals into an individual section of the PE file and place the newly generated IAT in between the PE header and these sections. The reason we make each of them into an individual section is to avoid rebasing each interval. The pages we dump from virtual memory are placed at various locations, and each of them must keep this virtual address in the PE file.
As such, the \texttt{PointerToRawData} and \texttt{VirtualAddress} values in each section header will be significantly different, and the \texttt{VirtualAddress} points to the base address of each interval as it was when dumped from virtual memory (\texttt{0x5300000} and \texttt{0x6200000} in the example in Figure \ref{fig:SelectingTaintedPagesForPE}). \section{Evaluation} \label{sec:Chapter4Evaluation} Having presented the core techniques of Minerva, we now evaluate Minerva on multiple benchmarks with respect to the following research questions: \begin{enumerate} \item Does Minerva precisely capture dynamically generated code and the malware's API calls? \item Does Minerva improve results over previous work? \item Is Minerva relevant for common malware analysis tasks? \end{enumerate} To answer the research questions above, we gather four sets of benchmark applications comprising synthetic applications as well as real-world malware applications: \begin{enumerate} \item \textbf{Benchmark \#1 : Ground truth data set.} We develop a new benchmark suite that combines the use of dynamically generated code, code injection and obfuscation of external dependencies. In total, we have developed nine different applications, and they are all described in Table \ref{tab:Chapter4DataSetADynamic}. The applications in the benchmark suite represent many of the challenges posed by real-world packers. To the best of our knowledge, this is the first dedicated benchmark suite for challenging the attributes of unpackers in which none of the samples relies on packers developed by third-party teams. The benefit of this benchmark suite is that each sample poses specific, clearly defined challenges, the applications are easy to understand, and we have the complete source code of each example. As such, it becomes much easier to determine whether an unpacker is successful, because there is no need to reverse engineer large amounts of binary code.
\item \textbf{Benchmark \#2 : Selected malware samples.} The second data set corresponds to several malware samples from the families CryptoWall, Tinba, Gapz and Ramnit. These samples perform many of the obfuscation techniques that Minerva aims to overcome, such as code injection combined with dynamically generated code and custom API resolution. \item \textbf{Benchmark \#3 : Packed synthetic samples.} We have taken a set of synthetic samples and packed them with well-known packers. In these applications, we know the applications' behaviours before packing because we designed the applications; however, we do not know the exact changes the packers make to the code and, therefore, do not have ground truth about the packed applications. \item \textbf{Benchmark \#4 : Real-world malware samples.} This set comprises 119 malware samples from the real-world malware families listed in Table \ref{tab:Bench4MalwareFamilies}. We collected seven samples from each family to maintain a balanced data set, and the samples were collected from VirusTotal. In order to ensure the samples are indeed malicious, we required each sample to be detected by at least 15 anti-malware vendors. Furthermore, in order to ensure that the samples belong to their respective families, we required at least two vendors to place them in the same family. On average, each sample had 52 anti-malware vendors report it as malicious, with a median of 54. We recorded each of these samples for 25 seconds and set a max replay time of 120 minutes. \end{enumerate} \begin{table}{} \centering \begin{tabular}{l|c|c|c} Artemis & CTBLocker & Cerber & CoinMiner \\ \hline CosmicDuke & Emotet & Kovter & Madangel \\ \hline Mira & Natas & Nymaim & Pony \\ \hline Shifu & Simda & TinyBanker & Urausy \\ \hline Zbot & && \\ \end{tabular} \caption{The malware families in Benchmark set \#4.
We collected a total of seven samples from each family.} \label{tab:Bench4MalwareFamilies} \end{table} In order to assess the techniques of Minerva, we must make a fair and meaningful comparison to existing work. One approach is to compare Minerva to recently proposed unpackers like Codisasm \cite{Bonfante:2015:CMS:2810103.2813627} or Aranchino \cite{10.1007/978-3-319-60876-1_4}. However, we already showed in \cite{DKOR} that Codisasm is very limited due to its implementation in PIN, and Aranchino is also developed on top of PIN with no additional effort for analysing system-wide malware. Instead, we compare Minerva to the unpacker by Ugarte et al. \cite{Ugarte-pedrero_sok:deep}. Ugarte et al. \cite{Ugarte-pedrero_sok:deep} propose a malware unpacker that is capable of analysing multi-process malware and is implemented on top of QEMU. The unpacker supports multi-process unpacking by monitoring various system calls and also includes techniques for capturing memory mappings. The tool they present is only available as a web service\footnote{www.packerinspector.com}, which forces us to treat their system as a black box. Furthermore, they do not mention which OS they support in their work; however, from experimenting with the service, we conclude that the analysis environment is Windows XP. We determined this because the web service responds with \textit{``Error - The sample did not start executing.''} when faced with applications compiled for Windows 7 and later, but runs normally with Windows XP applications. As such, we wrote the samples in our data set to make sure they all execute correctly on both Windows XP and Windows 7. We will refer to the unpacker by Ugarte et al. \cite{Ugarte-pedrero_sok:deep} as PackerInspector. \begin{table}{} \footnotesize \begin{tabular}{| l | L{7.5CM} |} \hline \textbf{ID} & \textbf{Description}
\\ \hline D1 & Dynamically generates code and uses custom IAT resolution to resolve \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{ExitProcess} and exits. \\ \hline D2 & Dynamically generates code and uses custom IAT resolution to resolve \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{MessageBoxA} and then displays a message box.\\ \hline D3 & Dynamically generates code that further dynamically generates code and then uses custom IAT resolution to resolve \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{MessageBoxA} and then displays a message box.\\ \hline D4 & Dynamically generates code that further dynamically generates code and then uses custom IAT resolution to resolve \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{ExitProcess} and then exits.\\ \hline C1 & Opens the Windows process \texttt{explorer.exe} using \texttt{OpenProcess, WriteProcessMemory} and \texttt{CreateRemoteThread}, then inside the target process dynamically resolves the address of \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{ExitProcess}, and calls each of them to exit.\\ \hline C2 & Opens the Windows process \texttt{explorer.exe} using \texttt{OpenProcess, WriteProcessMemory} and \texttt{QueueUserAPC}, then inside the target process dynamically resolves the address of \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{ExitProcess}, and calls each of them to exit.\\ \hline C4 & Uses the PowerLoaderEx injection, which relies on a global memory buffer and code-reuse attacks to hijack execution of \texttt{explorer.exe}. Inside \texttt{explorer.exe}, code-reuse attacks transfer execution to shellcode that calls \texttt{LoadLibraryA}.\\ \hline C5 & Uses the Atombombing injection technique, which relies on the global atom tables to execute within \texttt{explorer.exe}.
Inside \texttt{explorer.exe}, it uses a code-reuse attack to execute a piece of shellcode that launches \texttt{calc.exe}.\\ \hline M1 & Injects code into \texttt{explorer.exe} similarly to C1, then inside the target process dynamically generates code that then dynamically resolves the address of \texttt{GetModuleHandle}, \texttt{GetProcAddress} and \texttt{ExitProcess}, and calls each of them to exit.\\ \hline \end{tabular} \caption{Description of the samples in data set \#1 and how they perform code injection.} \label{tab:Chapter4DataSetADynamic} \end{table} \subsection{Implementation} Minerva is built on top of PANDA \cite{Dolan-Gavitt:2015:RRE:2843859.2843867}, a dynamic analysis framework based on full system emulation that utilises a record-and-replay infrastructure. All of the code on top of PANDA is written in C/C++, and the majority of our tools that process the output of the sandbox are in Python. Most of the code in Minerva's dynamic analysis is on top of PANDA; however, we have had to modify the main taint analysis plugin that comes with PANDA to be less resource intensive. Specifically, PANDA's \texttt{taint2} plugin can quickly use 40+ GB of memory, and to limit this, we removed support for taint labels and made some data structures simpler. \subsection{Experimental set up} \label{sec:Chapter5ExperimentalSetup} We conduct all of our Minerva experiments on a 4-core Intel i7 CPU at 4.2 GHz with a Windows 7, 32-bit guest architecture. The guest is in a closed network and connected to another virtual machine that performs network simulation using INetSim \cite{InetSim}. As such, malware samples that connect back to some C\&C server will be able to resolve DNS names, connect to any IP and also receive content. However, the content itself is the default data provided by INetSim. We executed the applications on the guest machine with a local admin account and User Account Control (UAC) enabled.
We perform no user stimulation during the analysis, and no applications apart from the generic Windows processes were running in the guest machine itself. \subsection{Empirical evaluation of correctness} \label{sec:Chapter5EmpiricalEvaluationOfCorrectness} In our first experiment, we match Minerva and PackerInspector with the ground-truth samples in benchmark set \#1. For each of the samples, we capture the number of execution waves, the number of processes involved in the execution, the number of API calls observed from the last wave of each sample and the number of functions in the IAT of the unpacker's output. We match the results from the output of Minerva and PackerInspector with our ground truth data, and Table \ref{tab:ground_truth_evaluation} shows our results. For the samples that execute in a single process, both Minerva and PackerInspector capture the number of processes and waves accurately. In two of these four samples, Minerva captures the five expected API calls accurately, and in the other two Minerva captures slightly more than the expected number. PackerInspector, however, attributes about 200x more API calls than the expected number to the final wave of the execution. Furthermore, Minerva builds the IAT accurately for two samples and builds a slightly larger IAT for the other two. PackerInspector is unable to produce any output with an IAT, and there is no sign of API usage in its output. The reason Minerva captures slightly more API calls than expected is that the compiler, naturally, adds various function calls around the source code. In summary, PackerInspector identifies the correct number of execution waves but fails to attribute API calls accurately to unpacked code and fails to produce any output with an IAT. Minerva, however, succeeds at both.
For the samples that perform multi-process execution, we observe that Minerva captures all processes, execution waves, API calls, and rebuilds PE files with the expected IAT. The reason Minerva does not capture slightly more API calls than the expected amount in these samples is that the final wave occurs within an injected process and does not contain the added functions from the compiler. Surprisingly, PackerInspector fails to detect multi-process execution in any of the samples, and we suspect this is because PackerInspector only monitors for multi-process execution via memory mapped files which none of the samples uses. From our multi-process samples, we observe the limitations of the original write-then-execute heuristic, in that it is unable to handle system-wide unpacking in a general and precise manner. However, the novel techniques introduced by Minerva are successful at this. When matched with our ground-truth samples it is clear that PackerInspector over-approximates the API usage of the applications, is unable to output unpacked code that shows API usage when faced with obfuscations of external dependencies and under-approximates the system-wide malware execution. These observations verify our hypothesis that state-of-the-art unpackers are unable to deal with many challenges faced by system-wide packing and that the techniques in Minerva overcome these limitations. 
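For reference, the original write-then-execute heuristic discussed above amounts to flagging a new wave whenever control transfers to freshly written memory. A minimal single-process sketch follows; this is a generic illustration of the baseline heuristic, not Minerva's system-wide implementation:

```python
# Generic sketch of the classic write-then-execute unpacking heuristic:
# a new execution wave begins whenever control transfers to an address
# that was written since the current wave started. System-wide unpackers
# must extend this across process boundaries.

def count_waves(trace):
    """trace: sequence of ('w', addr) writes and ('x', addr) executions."""
    waves = 1
    dirty = set()                 # addresses written since the wave began
    for kind, addr in trace:
        if kind == "w":
            dirty.add(addr)
        elif addr in dirty:       # executing freshly written memory
            waves += 1
            dirty.clear()         # the new wave starts with a clean slate
    return waves

trace = [("x", 0x1000),                          # wave 1: packer stub runs
         ("w", 0x5000), ("w", 0x5004),           # stub writes payload
         ("x", 0x5000),                          # wave 2: runs written code
         ("w", 0x6000), ("x", 0x6000)]           # wave 3
assert count_waves(trace) == 3
```

As the multi-process samples show, counting waves inside a single address space is insufficient once the written-then-executed memory lives in another process.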
\begin{table}{} \centering \footnotesize \begin{tabular}{l|c|c|c|c} & \multicolumn{4}{c}{\textbf{Precision}} \\ & \multicolumn{4}{c}{(Ground Truth, Minerva, PackerInspector)} \\ Sample & \#Procs & \#Waves & \#API calls in final wave & \#IAT size\\ \hline \textbf{(1)} D1 & 1,1,1 & 2,2,2 & 5,8,1007 & 3,5,0 \\ \textbf{(1)} D2 & 1,1,1 & 2,2,2 & 5,5,1007 & 3,3,0 \\ \textbf{(1)} D3 & 1,1,1 & 3,3,3 & 5,5,1006 & 3,3,0\\ \textbf{(1)} D4 & 1,1,1 & 3,3,3 & 5,10,1007 & 3,6,0 \\ \hline \textbf{(1)} C1 & 2,2,1 & 2,2,1 & 6,6,$\dagger$ & 3,3,$\dagger$ \\ \textbf{(1)} C2 & 2,2,1 & 2,2,1 & 6,6,$\dagger$ & 3,3,$\dagger$ \\ \textbf{(1)} C4 & 2,2,1 & 2,2,1 & 1,1,$\dagger$ & 1,1,$\dagger$\\ \textbf{(1)} C5 & 2,2,1 & 2,2,1 & 5,5,$\dagger$ & 3,3,$\dagger$ \\ \hline \textbf{(1)} M1 & 2,2,1 & 3,3,1 & 6,6,$\dagger$ & 3,3,$\dagger$ \\ \end{tabular} \centering \caption{The evaluation results from matching Minerva and PackerInspector with the ground-truth samples of data set \#1. $\dagger$ means not available because PackerInspector failed to reach the last wave.} \label{tab:ground_truth_evaluation} \end{table} \subsection{Empirical evaluation against selected malware} \label{sec:Chapter5EmpirivalEvaluationAgainstSelectedMalware} In our second experiment, we match Minerva and PackerInspector with the malware samples in data set \#2. The goal of this experiment is twofold. First, we aim to measure how each unpacker captures system-wide unpacking based on the number of processes and waves they identify. Second, we aim to measure the difference in the total number and the number of unique API calls observed in each malware execution. We only have access to PackerInspector via their web interface, and in this experiment, it is important to highlight the limitations of this. The samples we analyse with Minerva are executed in Windows 7, and the samples we analyse with PackerInspector presumably execute in Windows XP.
In addition to this, we do not know the state of the execution environment that PackerInspector uses to execute the malware samples, such as the processes executing on the system, the network connection, the privilege level of the malware, the security settings in the guest system, and so on. This adds a level of uncertainty to the results we present in this section, since we are not conducting an isolated comparison of techniques: the execution environments of the malware are likely significantly different. This is particularly relevant when dealing with the samples from data set \#2, because malware samples are complex applications that are sensitive to their execution environment, and small changes in the environment can have a substantial impact on the malware execution. However, we still feel it is appropriate to report the results, as they give certain insights into the differences in our approaches. The results of our experiment are shown in Table \ref{tab:malware_evaluation}. In terms of multi-process monitoring, Minerva captures more injections than PackerInspector in eight samples and fewer injections in three samples. PackerInspector misses multi-process executions in all Tinba and Gapz samples. For the Tinba samples, PackerInspector only catches the first multi-process execution, which occurs in the benign Windows process \texttt{winver.exe}, a process that is also started by each sample. PackerInspector misses all multi-process executions within the Gapz malware sample, and we believe there are two possible explanations for this: first, because PackerInspector exits prematurely, and second, because Gapz uses the PowerLoader injection, which does not rely on any of the API hooks used by PackerInspector to catch multi-process unpacking. In one CryptoWall sample, Minerva finds one more multi-process propagation than PackerInspector, and in another sample, Minerva finds one less than PackerInspector.
In both samples, both unpackers find two injections into \texttt{svchost.exe}, and in both cases PackerInspector also finds injections into \texttt{vssadmin.exe}. Minerva also finds an injection into \texttt{vssadmin.exe} in the sample with five process executions. However, we have found that these \texttt{vssadmin.exe} propagations correspond to false positives. In particular, we found no injections into \texttt{vssadmin.exe}, but rather that the samples execute the command \texttt{``vssadmin.exe Delete Shadow /All /Quiet''} using the \texttt{WinExec} call. We believe this call results in tainted memory flowing into \texttt{vssadmin.exe} and is, effectively, the reason the unpackers identify execution in \texttt{vssadmin.exe}. In two Ramnit samples, Minerva finds fewer injections than PackerInspector. In one of these samples, PackerInspector reports an additional injection into \texttt{IEXPLORE.exe} that is not identified by Minerva. We analysed the sample ourselves and found that the sample only injects into two \texttt{IEXPLORE.exe} processes if it fails to inject into two \texttt{svchost.exe} processes, which we observed with both Minerva and PackerInspector. However, researchers from Symantec \cite{SymantecRamnit} report that Ramnit also drops a file that will be loaded by each new instance of \texttt{IEXPLORE.exe}, which may explain why PackerInspector observes such an injection. However, we think it is most likely a result of over-approximation in PackerInspector, as it would require the \texttt{IEXPLORE.exe} process to be launched on the system. In the other sample, PackerInspector finds an additional injection into a file with a random name which Minerva does not. Minerva, however, observes the creation of this file but does not see it execute.
In the remaining Ramnit sample, Minerva captures seven more injections than PackerInspector, and both Minerva and PackerInspector identify four injections into \texttt{IEXPLORE.exe} in this sample. Based on follow-up analysis, we determine that six of the additional injections Minerva finds are true positives and one is a false positive. We conclude this because we found API signatures that show injections into these processes, but not for the remaining one. \begin{table}{} \hspace*{-0.7cm} \centering \footnotesize \begin{tabular}{l|c|c|c|c} & \multicolumn{4}{c}{\textbf{Precision}} \\ & \multicolumn{4}{c}{(Minerva, PackerInspector)} \\ Sample & Procs & Waves & API calls & Unique APIs\\ \hline CryptoWall\footnote{md5sum e73806e3f41f61e7c7a364625cd58f65} & 5($\dagger$2), 4($\dagger$1) & 8, 4 & 7371, 21050 & 148, 354 \\ CryptoWall \footnote{md5sum 5384f752e3a2b59fad9d0f143ce0215a} & 3, 4($\dagger$1) & 6, 4 & 9945, 23580 & 135, 388\\ \hline Tinba \footnote{md5sum c141be7ef8a49c2e8bda5e4a856386ac} & 3, 2 & 4, 3 & 557, 34076 & 54, 477 \\ Tinba \footnote{md5sum 08ab7f68c6b3a4a2a745cc244d41d213} & 3, 2 & 4, 3 & 667, 49260 & 55, 549 \\ Tinba \footnote{md5sum 6244604b4fe75b652c05a217ac90eeac} & 3, 2 & 4, 3 & 704, 49262 & 55, 550\\ \hline Gapz \footnote{md5sum 089c5446291c9145ad8ac6c1cdfe4928} & 2, 1 & 7, 5 & 36509156, 15850000 & 140, 336 \\ Gapz \footnote{md5sum 0ed4a5e1b9b3e374f1f343250f527167} & 2, 1 & 4, 3 & 36504908, 15845670 & 125, 226 \\ Gapz \footnote{md5sum e5b9295e0b147501f47e2fcba93deb6c} & 3, 1 & 5, 2 & 36506063, 15844113 & 186, 251 \\ \hline Ramnit \footnote{md5sum 448ce1c565c4378b310fa25b4ae3b17f} & 3, 4($\dagger$1) & 8, 5 & 6908, 56720 & 116, 479\\ Ramnit \footnote{md5sum 33cd65ebd943a41a3b65fa1ccfce067c} & 12($\dagger$1), 5 & 30, 6 & 16185, 209828 & 153, 489 \\ Ramnit \footnote{md5sum 3bb86e6920614ed9ac5d8fbf480eb437} & 3, 5($\dagger$1) & 8, 8 & 3189, 115943 & 115, 621\\ \end{tabular} \centering \caption{The evaluation results from matching Minerva and PackerInspector with the malware samples
of data set \#2. $\dagger$ indicates the number of processes we determined to be false positives.} \label{tab:malware_evaluation} \end{table} In terms of precision for tracking API calls, there is a similar relationship between Minerva and PackerInspector as when matched with our ground-truth samples. In the samples from CryptoWall, Tinba and Ramnit, Minerva reports roughly twelve times fewer API calls within the malware execution than PackerInspector. We manually investigated several of the Tinba samples to confirm these numbers and found that Minerva captures API calls accurately within the malware execution. As such, we consider the API calls reported by PackerInspector to be a significant over-approximation. In addition to the total number of API calls, PackerInspector captures about three times more unique API calls in each malware execution, even in cases where Minerva finds more total API calls. The only instances where Minerva finds more total API calls are in the Gapz samples. The reason that there is a significant number of API calls in these cases is that the Gapz malware scans a remote process for gadgets, and this results in an enormous number of calls to \texttt{ReadProcessMemory} (about 99.985\% of calls in the Minerva analyses). We believe that the reason Minerva reports more API calls than PackerInspector only in the Gapz samples is that PackerInspector exits prematurely. When matched with these malware samples, it is clear that PackerInspector over-approximates the API usage of the applications, both in terms of total API calls and unique APIs used. We also find that in the majority of cases, PackerInspector under-approximates the system-wide malware propagation and Minerva finds more system-wide unpacking.
\begin{table}{} \centering \begin{tabular}{|l|c|c|c|c|} \hline 1 & 2 & 3 & 4-5 & 6-11 \\ \hline 66\% & 15\% & 9\% & 5\% & 5\%\\ \hline \end{tabular} \caption{Number of process executions per malware sample.} \label{tab:proc_count} \begin{tabular}{|l|c|c|c|c|c|} \hline 1 & 2 & 3 & 4 & 5 & 5 < \\ \hline 52\% & 18\% & 7\% & 9\% & 2\% & 12\%\\ \hline \end{tabular} \caption{Number of waves per malware sample.} \label{tab:wave_count} \begin{tabular}{|l|c|c|c|c|c|} \hline 1 & 2 & 3 & 4 & 5 & 5 < \\ \hline 51\% & 17\% & 8\% & 3\% & 8\% & 13\% \\ \hline \end{tabular} \caption{Number of PE files constructed per malware sample.} \label{tab:PE_count} \begin{tabular}{|l|c|c|c|} \hline 1 & 2 & 3-5 & 5 < \\ \hline 56\% & 15\% & 18\% & 11\% \\ \hline \end{tabular} \caption{Number of sections reconstructed per PE file.} \label{tab:section_count} \begin{tabular}{|l|c|c|c|c|c|} \hline 0 & 1-10 & 11-15 & 16-20 & 21-25 & 26 < \\ \hline 18\% & 28\% & 7\% & 5\% & 2\% & 40\% \\ \hline \end{tabular} \caption{Number of imports per PE file.} \label{tab:import_count} \end{table} \subsection{Relevance on malware} In this experiment, we match Minerva with benchmark set \#4. In total, we run 119 samples through Minerva and collect (1) the number of process executions; (2) the number of waves; (3) the number of generated PE files; (4) the number of imports in the IAT of each PE file and (5) the number of sections in each PE file. Table \ref{tab:proc_count} shows the number of processes and Table \ref{tab:wave_count} the number of waves in our data set. We find that a third of the samples perform multi-process execution and that roughly half have multiple execution waves, which means that a large share of the samples with single-process execution have multi-wave execution. Table \ref{tab:PE_count} shows the distribution of reconstructed PE files. We construct more PE files than the number of captured waves, which shows that some waves contain several unrelated regions.
Finally, Table \ref{tab:section_count} shows the number of sections reconstructed in each PE file and Table \ref{tab:import_count} shows the number of reconstructed imports. For roughly 20\% of the PE files, we do not observe any API calls in the code; this is due to some PE files being the result of small amounts of taint in minor code regions. \subsection{Relevance on packers} In this experiment, we show that Minerva is relevant against publicly available packers from benchmark set \#3. This experiment is common practice for unpacking engines \cite{Bonfante:2015:CMS:2810103.2813627, 4413009, Royal:2006:PAH:1191820.1191885, Ugarte-pedrero_sok:deep} and is, therefore, natural for us to perform. We construct a simple application that retrieves the name of the current user and reports back to us so that we can verify the behaviour occurred correctly. We pack this application with 13 publicly known packers and analyse the samples in Minerva. We show the results of our experiment in Table \ref{tab:packer_relevance}. The table shows the number of processes, waves and PE files, whether we found the original code or a derivative thereof, and whether we observed the original behaviour. Minerva produced PE files for most of the packers that are very similar to the original code, including correct API calls. In general, these packers are rather simple in comparison to some of the techniques we observe in malware from the wild. For example, all but one of the packers are single-process packers. This makes sense, since the packers are not necessarily meant to be used by malicious software, but may be used by benign applications, which are not meant to inject into other applications. Furthermore, many of these packers rely on similar approaches for compression, e.g. the Lempel–Ziv–Markov chain algorithm, and the majority of the packers used in these experiments are rather old.
\begin{table}{} \centering \begin{tabular}{l|c|c|c|c|c} \hline Packer & \#proc & \#wave & \#PE & U & OB \\ \hline BoxedApp & 1 & 1 & 1 & Y & Y \\ Enigma & 2 & 11 & 1 & Y & Y \\ FSG\_packed & 1 & 2 & 2 & Y & Y \\ mew11 & 1 & 3 & 4 & Y & Y \\ MoleBox & 1 & 4 & 9 & Y & Y \\ mpress & 1 & 2 & 2 & Y & Y \\ PackMan & 1 & 2 & 2 & Y & Y \\ PECompact & 1 & 4 & 4 & Y & Y \\ PEtite & 1 & 4 & 4 & Y & Y \\ tElock & 0 & 0 & 0 & N & N \\ UPX & 1 & 2 & 2 & Y & Y \\ WinUpack & 1 & 2 & w & Y & Y \\ XComp & 1 & 2 & 2 & Y & Y \\ \end{tabular} \caption{The results from matching Minerva with known packers. \textbf{OB} indicates if we observed the original behaviour of the packed application. \textbf{U} indicates if we found the original code in Minerva's output.} \label{tab:packer_relevance} \end{table} \subsection{Tinba case study} We now investigate in depth a case study of a real-world malware sample from the Tinba malware family\footnote{md5sum 08ab7f68c6b3a4a2a745cc244d41d213}. Minerva outputs four PE files with sizes 12KB, 12KB, 16KB and 24KB, respectively. We manually reverse engineered the sample to fully understand the system-wide propagation and where the sample exposes its unpacked code. The malware first decrypts memory from its data section and then transfers execution to this code. The decrypted code injects code into the Windows process \texttt{Winver.exe} and from \texttt{Winver.exe} it further injects into \texttt{explorer.exe}. To inject code into \texttt{Winver.exe} Tinba launches a new instance of \texttt{Winver.exe} in a suspended state. Then, Tinba allocates memory on the heap of the newly started \texttt{Winver.exe} and copies some malicious code into this specific memory. Tinba then overwrites six bytes of the \texttt{Start} function in \texttt{Winver.exe} with the instructions \texttt{push ADDR; ret}, where \texttt{ADDR} is some address inside the dynamically generated malicious code. 
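The six-byte hook described here is easy to reproduce in a few lines; the target address below is illustrative, not the one used by the actual sample:

```python
# The 6-byte entry-point hook Tinba writes over Winver.exe's Start function:
# `push ADDR` (opcode 0x68 + imm32, little-endian) followed by `ret` (0xC3),
# which redirects execution to ADDR. The address here is illustrative only.

def push_ret_stub(addr: int) -> bytes:
    return b"\x68" + addr.to_bytes(4, "little") + b"\xc3"

stub = push_ret_stub(0x00350000)    # hypothetical heap address of the payload
assert len(stub) == 6               # exactly the six overwritten bytes
assert stub.hex() == "6800003500c3"
```

Because `ret` pops the pushed address into \texttt{EIP}, this stub transfers control without clobbering any general-purpose register, which is presumably why this pattern is favoured over a plain relative jump.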
Effectively, Tinba ensures execution of its malicious code in \texttt{Winver.exe} by overwriting an initial function in \texttt{Winver.exe} to hijack execution. Minerva captures one execution wave and outputs two unpacked PE files for the code in \texttt{Winver\-.exe}, one PE file based on a single execution wave in \texttt{explorer.exe} and also one PE file from the unpacked code in the initial process. The PE files produced by Minerva have 0, 11, 12 and 26 imports reconstructed. These results capture the execution perfectly, because the PE file with 0 imports contains purely the \texttt{push ADDR; ret} instructions of the malware execution trace in \texttt{Winver\-.exe}, and the rest of the PE files contain various other stages with more payload content. Minerva correctly identifies the malicious code, both the patched code of \texttt{Winver\-.exe} and also the code on the heap that contains the core of the malware code. More importantly, the PE files precisely capture the malware execution, and from the execution trace output by Minerva, we can see the exact instructions \texttt{push ADDR; ret}. Minerva also catches the exact malware code inside the \texttt{explorer.exe} process. The PE file captured from the second execution wave in the original malware process contains 11 imports in its reconstructed IAT, five of which are \texttt{ResumeThread, CreateProcessA, WriteProcessMemory, VirtualAllocEx} and \texttt{VirtualProtectEx}. A novice analyst can quickly determine that the execution performs a code injection based on these API calls. \subsection{Performance evaluation} \label{chapter4:PerformanceEvaluation} In the final part of our evaluation, we monitor the performance of Minerva. The authors of PANDA report that recording gives a 1.85x slowdown in comparison to QEMU alone and replaying incurs a 3.57x slowdown \cite{Dolan-Gavitt:2015:RRE:2843859.2843867}.
This is expensive in comparison to systems that rely on hypervisor-based virtualisation for recording, e.g. AfterSight \cite{Chow:2008:DDP:1404014.1404015}. However, we consider PANDA's performance good enough for malware analysis in particular because the plugins that we deploy will have far more impact on the total analysis time. Naturally, the performance overhead in the recording stage can be used by the malware to evade analysis, and we discuss this further in Section \ref{Chapter4:limitations}. In this performance evaluation, we focus on the overhead of Minerva's analysis when replaying the recorded execution, and the numbers we report in this section are based on analysis of the 25 malware samples from the Ramnit, Gapz, CryptoWall and Tinba families. \begin{figure}[H] \centering \label{ScoreVersusSize} \begin{tikzpicture} \begin{axis}[ scale=0.55, width=0.8\textwidth, height=0.4\textwidth, xmin=0, xmax=1725000000, ymin=0, ymax=7000, xlabel=Instructions replayed, ylabel=Seconds] \addplot+[error bars/.cd, y dir=both, y explicit] coordinates { (75000000,283) +- (42, 42) (150000000,512) +- (65, 65) (225000000,704) +- (96, 96) (300000000,850) +- (114, 114) (375000000,995) +- (146, 146) (450000000,1121) +- (147, 147) (525000000,1274) +- (157, 157) (600000000,1475) +- (174, 174) (675000000,1700) +- (197, 197) (750000000,1893) +- (218, 218) (825000000,2128) +- (253, 253) (900000000,2324) +- (239, 239) (975000000,2616) +- (243, 243) (1050000000,2805) +- (266, 266) (1125000000,3013) +- (320, 320) (1200000000,3326) +- (393, 393) (1275000000,3359) +- (457, 457) (1350000000,3583) +- (482, 482) (1425000000,3661) +- (556, 556) (1500000000,3902) +- (543, 543) (1575000000,4124) +- (610, 610) (1650000000,4207) +- (706, 706) (1725000000,3973) +- (680, 680) }; \addlegendentry{LLVM + taint + Minerva} \addplot[error bars/.cd, y dir=both, y explicit] coordinates { (75000000,213) +- (54, 54) (150000000,265) +- (42, 42) (225000000,350) +- (58, 58) (300000000,406) +- (65, 65) 
(375000000,441) +- (66, 66) (450000000,489) +- (57, 57) (525000000,537) +- (56, 56) (600000000,606) +- (57, 57) (675000000,679) +- (75, 75) (750000000,714) +- (78, 78) (825000000,761) +- (83, 83) (900000000,854) +- (92, 92) (975000000,964) +- (117, 117) (1050000000,1030) +- (137, 137) (1125000000,1089) +- (155, 155) (1200000000,1158) +- (155, 155) (1275000000,1043) +- (111, 111) (1350000000,1031) +- (82, 82) (1425000000,1005) +- (87, 87) (1500000000,1099) +- (111, 111) (1575000000,1163) +- (118, 118) (1650000000,1110) +- (103, 103) (1725000000,1106) +- (96, 96) }; \addlegendentry{LLVM + taint} \end{axis} \end{tikzpicture} \caption{The average number and standard deviation of instructions replayed relative to time, for instruction counts where we have more than 3 samples executing the given number of instructions. } \label{fig:InstructionsRelativeToTime} \end{figure} The blue curve of Figure \ref{fig:InstructionsRelativeToTime} shows the number of instructions replayed relative to the time taken in each of the analyses. On average the replay time is 3166 seconds, resulting in a 126x slowdown of the recording time, and we analysed on average 432365 instructions per second. In comparison, the developers of PANDA report a 24.7x slowdown when tainting data sent over the network and a 67.7x slowdown for tainting a 1KB file and encrypting it with AES-CBC-128 \cite{Dolan-Gavitt:2015:RRE:2843859.2843867}. Additionally, Figure \ref{chapter5:figInstructionSampleCount} shows the number of instructions that it took to replay the samples in our data set, and we observe that for about 90\% of the samples this required less than 2 billion instructions. Another interesting metric is the specific overhead incurred by Minerva-only code. Specifically, there is some share of the overhead that is due to the translation of QEMU TCG instructions to LLVM instructions and also overhead that is specific to the taint implementation of PANDA. 
Neither of these dependencies is essential to Minerva: we do not rely on LLVM specifically, and PANDA's taint analysis does not focus on performance. Several systems focus on fast taint analysis \cite{argos:eurosys06, Bosman:2011:MWF:2186328.2186330, Henderson:2017:DPW:3057931.3057958} and, conceptually, the techniques of Minerva can be implemented on top of these taint libraries as well. To understand the overhead of Minerva's code, we ran the samples through a replay with LLVM translation and taint analysis enabled, but no Minerva-specific analysis code. This gives us a reasonable estimate of how much of the analysis time was spent in the code specific to Minerva. The black curve in Figure \ref{fig:InstructionsRelativeToTime} shows these numbers. On average, each execution took 1275 seconds with the overhead of LLVM translation and PANDA's taint library. This corresponds to an average 51x slowdown, meaning that Minerva's code takes up a bit more than half of the total 126x slowdown. \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ scale=0.55, width=0.8\textwidth, height=0.4\textwidth, xmin=0, xmax=4982041422, ymin=0, ymax=145, xlabel=Instructions to complete replay, ylabel=\% of samples] \addplot+[mark = none, red] coordinates { (568380645, 4) (605545483, 8) (633190990, 13) (649319669, 17) (661589381, 21) (739277161, 26) (752206384, 30) (782672024, 34) (808381032, 39) (870838726, 43) (958901927, 47) (1029782762, 52) (1102900266, 56) (1124087343, 60) (1204063468, 65) (1244763771, 69) (1299911044, 73) (1381355627, 78) (1640070664, 82) (1775595713, 86) (2112022315, 91) (4557122506, 95) (4982041422, 100) }; \addlegendentry{LLVM + taint + Minerva} \addplot+[mark = none, blue] coordinates { (0, 90) (4982041422, 90) }; \end{axis} \end{tikzpicture} \caption{The number of instructions needed to replay the samples in our data set. The horizontal blue line shows the 90\% mark.
} \label{chapter5:figInstructionSampleCount} \end{figure} During the replay of a malware sample, Minerva does not check whether the analysis is progressing or stuck, and the numbers above report the total time of each analysis replay. In addition to the total replay time, an interesting metric is the time it took to reveal the instructions executed in each malware sample. In Figure \ref{fig:CFGInstrsCoverage} we show the time it took to uncover 95\%, 99\% and 100\% of the unique instructions executed by the malware samples, respectively. These numbers are significantly lower than the total replay times: on average it took 1614, 1964 and 2543 seconds to uncover 95\%, 99\% and 100\% of the unique instructions executed, respectively. As such, it took roughly half of the total replay time to reveal 95\% of the instructions in each sample. \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ scale=0.55, width=0.8\textwidth, height=0.4\textwidth, xlabel={\% samples}, ylabel={Seconds}, scaled ticks=false, ylabel shift = 1 pt, xmin=0, xmax=100, ymin=0, ymax=8200, legend pos=north west,] \addplot[ color=red,] coordinates { (4.000000, 84.760000) (8.000000, 242.000000) (12.000000, 467.950000) (16.000000, 476.270000) (20.000000, 504.800000) (24.000000, 517.380000) (28.000000, 524.590000) (32.000000, 525.260000) (36.000000, 546.180000) (40.000000, 551.330000) (44.000000, 577.070000) (48.000000, 584.700000) (52.000000, 590.310000) (56.000000, 624.040000) (60.000000, 675.750000) (64.000000, 812.970000) (68.000000, 1338.220000) (72.000000, 1379.300000) (76.000000, 1738.510000) (80.000000, 1765.380000) (84.000000, 3045.460000) (88.000000, 3991.120000) (92.000000, 4321.260000) (96.000000, 5716.470000) (100.000000, 8751.300000) }; \addplot[ color=green] coordinates { (4.000000, 84.760000) (8.000000, 467.950000) (12.000000, 584.700000) (16.000000, 597.780000) (20.000000, 624.040000) (24.000000, 628.710000) (28.000000, 646.370000) (32.000000, 675.750000) (36.000000,
753.580000) (40.000000, 818.460000) (44.000000, 903.130000) (48.000000, 1118.000000) (52.000000, 1189.790000) (56.000000, 1265.180000) (60.000000, 1338.220000) (64.000000, 1379.300000) (68.000000, 1857.970000) (72.000000, 1922.340000) (76.000000, 2934.180000) (80.000000, 3045.460000) (84.000000, 3405.290000) (88.000000, 3991.120000) (92.000000, 4321.260000) (96.000000, 5716.470000) (100.000000, 8846.860000)}; \addplot[ color=blue] coordinates { (4.000000, 467.950000) (8.000000, 597.780000) (12.000000, 628.710000) (16.000000, 646.370000) (20.000000, 675.750000) (24.000000, 753.580000) (28.000000, 818.460000) (32.000000, 903.130000) (36.000000, 1201.710000) (40.000000, 1269.260000) (44.000000, 1338.220000) (48.000000, 1450.370000) (52.000000, 1458.120000) (56.000000, 1857.970000) (60.000000, 1922.340000) (64.000000, 3045.460000) (68.000000, 3264.350000) (72.000000, 3336.690000) (76.000000, 3405.290000) (80.000000, 3451.720000) (84.000000, 3860.760000) (88.000000, 4607.140000) (92.000000, 6077.810000) (96.000000, 7711.970000) (100.000000, 8846.860000) }; \legend{95\% instruction coverage, 99\% instruction coverage, 100\% instruction coverage} \end{axis} \end{tikzpicture} \caption{The time taken to explore the unique instructions in each malware sample.} \label{fig:CFGInstrsCoverage} \end{figure} The biggest performance bottleneck we found in Minerva is when malware makes the code execute longer via stalling loops. An example of a stalling loop from a Kovter sample\footnote{md5 of sample 147330a7ec2e27e2ed0fe0e921d45087} is shown in Figure \ref{fig:StallingLoopKovterSample5cc6f7}. In total, the loop does 20 million iterations with sixteen calls to functions from the Windows API in each iteration. The loop has no real effect and is purely garbage code. In total, our 25-second recording of this sample reaches 17 million iterations before the recording is over and incurs a replay time of 3300 seconds. 
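One conceivable heuristic for detecting such stalling loops during replay (not something Minerva currently implements) is to count how often each backward branch fires; a minimal Python sketch, with a hypothetical trace format and threshold:

```python
from collections import Counter

# Hypothetical stalling-loop detector: count executions of backward
# branches (target address below the branch address) and flag any loop
# edge whose count exceeds a threshold as a candidate stalling loop.
STALL_THRESHOLD = 1_000_000

def find_stalling_loops(branch_trace, threshold=STALL_THRESHOLD):
    """branch_trace: iterable of (branch_addr, target_addr) pairs."""
    counts = Counter()
    for branch_addr, target_addr in branch_trace:
        if target_addr < branch_addr:  # backward branch => loop edge
            counts[(branch_addr, target_addr)] += 1
    return [edge for edge, n in counts.items() if n >= threshold]
```

Both the Kovter loop (20 million iterations) and the Nymaim loop discussed below would exceed any such threshold by orders of magnitude.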
The sample we observed with the longest replay time is from the Nymaim family; it has a stalling loop with 1.4 billion iterations, and after the stalling loop, it calls the \texttt{Sleep} function from the Windows API to further stall the execution. In total, our 25-second recording of this sample took 170,000 seconds to replay. \begin{figure}[H] \centering \includegraphics[scale=0.45]{images/StallingLoopKovter5cc6f7.png} \caption{Stalling loop in Kovter malware.} \label{fig:StallingLoopKovterSample5cc6f7} \end{figure} \section{Limitations} \label{Chapter4:limitations} \textbf{Stolen bytes and copying Windows API.} Minerva's precise capturing of API calls depends on monitoring whether the target address of a branch instruction is the start of some Windows API function. Some malware uses an anti-analysis technique called \textit{stolen bytes}, which copies a portion of an API function to another location in memory, executes the copied code, and then branches into the middle of the original API function. In this way, the malware avoids calling the beginning of the function, and our technique will not capture the call. One solution is to identify the function boundaries of each API function and then monitor address ranges rather than only function start addresses. In a more general setting, malware can copy entire functions or modules from the Windows API and then rely on the copied code rather than calling the original Windows code. In this case, Minerva will still capture whenever system calls happen, but additional measures are needed to identify the copying of Windows code. One approach is to mark library code with a specific taint label and then monitor whether library code is propagated. Naturally, this solution is subject to the limitations of taint analysis.
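The range-based monitoring suggested above, comparing branch targets against whole function extents rather than entry points only, could be sketched as follows (the class name, data layout and addresses are hypothetical):

```python
import bisect

# Hypothetical check: instead of comparing a branch target only against
# API function entry points, compare it against [start, end) ranges so
# that branches into the middle of a function ("stolen bytes") are caught.
class ApiRangeMap:
    def __init__(self, functions):
        # functions: list of (start, end, name) tuples, non-overlapping
        self._funcs = sorted(functions)
        self._starts = [f[0] for f in self._funcs]

    def resolve(self, target):
        """Return (name, is_mid_function) if target lies in an API function."""
        i = bisect.bisect_right(self._starts, target) - 1
        if i < 0:
            return None
        start, end, name = self._funcs[i]
        if start <= target < end:
            return name, target != start
        return None
```

A branch resolving with `is_mid_function == True` would indicate a stolen-bytes-style entry that the entry-point check alone would miss.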
Another approach is to incorporate forensic techniques that determine the similarity between the code in a given process and a set of external libraries, which we discuss further in the following paragraph.\\ \textbf{Inlining and statically linked binaries.} Minerva's identification of external dependencies is limited when malware deploys inlined or statically linked code. The difference between this and the copying of external libraries described above is that inlining and static linking occur at compile time, whereas copying occurs at run time. Minerva is not capable of identifying inlined or statically linked external dependencies, and we consider this to be a slightly different problem, namely similarity analysis of the malware code against library implementations. Inlining and static linking can, of course, be combined with obfuscation techniques, in which case the problem becomes determining program equivalence in the general case, a well-known undecidable problem. Nonetheless, efforts can still yield positive and practical results, as shown by previous work in areas such as library fingerprinting \cite{Emmerik1994SignaturesFL, Jacobson:2011:LLF:2024569.2024571}, structural comparison of binary code \cite{articleDullienAndRolf, DBLP:conf/dimva/Flake04, Kruegel:2005:PWD:2146257.2146273} and, most recently, similarity detection via machine learning \cite{li2019graph, DBLP:conf/ccs/SongYLS18}. \\ \textbf{Performance limitations.} There are currently two main performance limitations in Minerva. First, malware can detect the presence of the recording component due to the 3.56x slowdown, and second, the replaying component limits the throughput of Minerva due to its performance cost. Stalling loops pose a core limitation in this context. There is, however, previous work on how to deal with stalling loops in the context of full system emulation. Kolbitsch et al.
implemented several features into the Anubis analysis system \cite{Kolbitsch:2011:PPD:2046707.2046740}. Their approach is to implement heuristics that detect when stalling loops occur and then either disable heavy instrumentation until the stalling loop exits or force execution out of the loop. The first approach is certainly possible to implement in Minerva, but it may run into issues if the stalling loop is also responsible for propagating executable malicious code, since the taint analysis would likely be disabled. The second approach is more challenging to implement because replaying cannot change the execution path in the guest system, as the execution is fixed to the replay log. We consider five main avenues for improving performance. First, we can use hardware-assisted virtualisation during recording and only full system emulation during replay, as suggested in AfterSight \cite{Chow:2008:DDP:1404014.1404015}. Second, we can implement various on-and-off analyses during the replay, similar to Kolbitsch et al. Third, we can add light anti-analysis monitoring during the recording, for example, to limit the effectiveness of calls to functions like \texttt{Sleep}. In this case, however, the implementation must use some form of approximation to determine the malware execution trace, since taint analysis will not be available. Fourth, we can improve the speed of various parts of PANDA, such as the taint analysis plugin. Instead of converting instructions to LLVM and performing taint analysis on the LLVM code, we can adopt the taint system of DECAF, which operates directly on the QEMU TCG instructions \cite{Henderson:2017:DPW:3057931.3057958}. Finally, an interesting avenue is implementing a feedback loop between recording and replaying, in which the analysis performed during replay informs the recording stage about where delays in execution occur and how to handle them.
In this way, it is possible to incrementally build up a complete execution trace of the malware without anti-analysis tricks. \section{Related work} \label{sec:Chapter4RelatedWork} \textbf{Automatic unpacking.} There is a large body of work on automatic unpacking of malware, and we have already discussed several of these systems throughout the paper \cite{Dinaburg:2008:EMA:1455770.1455779, Hu:2013:MSM:2535461.2535485, Josse2007, Kang:2007:RHC:1314389.1314399, 4413009, 10.1007/978-3-540-88313-5_31, Ugarte-pedrero_sok:deep}. Some of this work considers the concept of IAT destruction \cite{Josse2007, 10.1007/978-3-540-88313-5_31, DBLP:conf/malware/Korczynski16}, and IAT reconstruction has also been considered on a more general basis \cite{DBLP:journals/jip/KawakoyaIM18}. The work by Ugarte et al. \cite{Ugarte-pedrero_sok:deep} highlights several gaps in existing unpackers and proposes a system-wide approach to unpacking. However, as we observed in this paper, their approach is severely limited. In some aspects, Ugarte et al. provide a more refined model for dynamically generated code in that they assign various labels to the memory written by the malware based on whether it is executed and the like. These labels can easily be integrated into Minerva. In addition, they also highlight that several limitations in existing unpackers exist due to missing reference data sets, which indeed also motivated the construction of our synthetic benchmark set \#1. The work closest to ours is Tartarus \cite{DKOR}, and the ideas of this paper are heavily inspired by their work. We deploy a similar approach to tracing the malware throughout the whole system; however, we deploy a different model of dynamically generated malicious code and also propose novel algorithms for making the output suitable for follow-up analysis.
In particular, the post-processing we describe in this paper is novel, and our model of dynamically generated code is explicitly connected to previous waves, whereas Tartarus simply dumps the whole of tainted memory whenever a new wave executes. As such, our model is more precise and also formally defined. \\ \textbf{System-wide malware execution.} Several works have closely considered the concept of malware executing throughout the whole system. In particular, Panorama \cite{Yin:2007:PCS:1315245.1315261}, DiskDuster~\cite{10.1007/978-3-642-37300-8_9}, Tartarus \cite{DKOR} and API Chaser \cite{YuheiKawakoya2019} use dynamic taint analysis to capture this. Barabosch et al. have also investigated the problem of code injection by analysing memory dumps \cite{10.1007/978-3-319-60876-1_10} and also at run time \cite{10.1007/978-3-319-08509-8_13}. Minerva relies on the same techniques as Tartarus to trace malware execution through the system. An interesting approach at the other end of the spectrum is explored by Ispoglou and Payer in malWASH \cite{198415}, where they propose to write complex malware using exactly the paradigm of system-wide execution. \\ \textbf{Malware disassembly.} The work in this paper is closely related to techniques that focus on disassembling malicious software. An accurate description of our work within this domain, rather than unpacking, is a system-wide malware disassembler. We gather a precise instruction-level execution trace of the malware and then collect more content to include in the reconstructed PE file with speculative disassembly. Traditionally, disassembly techniques are split between linear sweep, as used in GNU's Objdump, and recursive traversal \cite{Cifuentes:1995:DBP:213593.213604, Sites:1993:BT:151220.151227} algorithms. However, there are several pieces of previous work on disassembly that specifically target malware, and these move beyond the traditional approaches. Kruegel et al.
present an approach that combines a variety of techniques from control-flow analysis and statistical methods in order to statically disassemble obfuscated binaries \cite{Kruegel:2004:SDO:1251375.1251393}. Kinder and Veith present an approach based on abstract interpretation that statically disassembles binaries and also resolves indirect branch instructions \cite{Kinder:2008:JSA:1427782.1427835}. Rosenblum et al. present a classification approach to identify function entry points \cite{DBLP:conf/aaai/RosenblumZMH08}, and Bao et al. \cite{DBLP:conf/uss/BaoBWTB14} follow the same path and use machine learning and static analysis to identify functions within binaries. They train a weighted-prefix tree that recognises function starting points in a binary file and then use value-set analysis \cite{Balakrishnan2008} with an incremental control-flow recovery algorithm to identify function boundaries. \section{Conclusions} In this paper, we proposed a system called Minerva that focuses on generic and precise malware unpacking. From a technical point of view, Minerva deploys a concatic approach with both dynamic and static analysis and partitions the malware execution trace into execution waves based on information flow analysis. Minerva precisely monitors the API calls of the malware code and accurately correlates these to the unpacked code. Based on the output of the dynamic analysis, Minerva performs static analysis on the execution waves to output a set of reconstructed PE files with valid import address tables and patched API calls. From a theoretical point of view, Minerva deploys a precise model of execution waves based on an information flow model that captures dynamically generated malicious code \textit{independently} of who wrote the code. We developed several novel algorithms that combine these execution waves with other artefacts collected from the dynamic analysis to carefully produce PE files that are well-suited for follow-up static analysis.
Finally, we proposed a new set of benchmark applications that exhibit unpacking behaviours with various forms of dynamically generated code, system-wide execution and import-address-table destruction, addressing a gap in ground-truth samples for testing unpackers. This benchmark suite is the first of its kind in that previous benchmark data sets for testing automatic unpackers rely on third-party applications to perform the packing. We evaluated Minerva against our synthetic applications and real-world malware samples, and also performed a comparative evaluation. Our results show that Minerva is significantly more precise than previous work and outputs unpacked code that exposes external dependencies, which previous work does not. Our results also show that Minerva captures system-wide unpacking in many cases where previous work fails. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \IEEEPARstart{R}{eal-world} image super-resolution (RealSR) aims at restoring in-the-wild images collected from poor-quality sensors with unknown degradation kernels. Since RealSR realizes image restoration under real-world scenarios, it plays a remarkable role in many human-centric applications, such as mobile photo enhancement, autonomous driving, etc. Traditional single image super-resolution (SISR)~\cite{yan2021fine,jiang2020dual,tian2020coarse,li2020learning,shi2017structure,qin2019difficulty} obtains high-resolution images from low-quality ones with known and fixed degradation models (e.g., Gaussian blur followed by bicubic downsampling). Since in-the-wild corrupted images have complicated degradation kernels, traditional SISR models exhibit limited capacity when applied to in-the-wild applications. To address this issue, RealSR constructs real-world image pairs by using poor- and high-quality sensors to capture low- and high-quality images, respectively. Compared with a manually designed downsampling kernel, the degradation kernel in RealSR is inherently more complicated and closer to in-the-wild degradation. As the mobile platform has a limited optical sensor size and a large number of users, mobile photography enhancement is one of the most challenging applications for RealSR. Thus, RealSR primarily targets the human visual system. Typical evaluation protocols in traditional SR (e.g., PSNR and SSIM~\cite{ssim}) focus on pixel-level similarity and fail to reflect human perception well. Recently, various kinds of efforts~\cite{fid,lpips,lin2018hallucinated,chen2020knowledge,jiang2021degrade} have been proposed to reflect the human visual system in image quality assessment. LPIPS~\cite{lpips} argues that the widely used pixel-to-pixel measuring methods (e.g., L2/PSNR, SSIM, and FSIM) are contrary to human perception when estimating the perceptual similarity of images.
Recently, RealSR methods~\cite{AIM19,Lugmayr2020ntire,shi2020ddet} usually adopt LPIPS and PSNR as default evaluation metrics and achieve superior scores either on PSNR or on LPIPS. As depicted in Fig.~\ref{fig:psnr_lpips}, samples with high PSNR increments exhibit simple backgrounds, smooth structures, and relatively weak LPIPS increments. Conversely, samples with high LPIPS gains contain complicated texture and show less PSNR gain. On the one hand, images with complex textures more easily obtain high LPIPS scores on account of psychophysical similarity measurements~\cite{lpips}, and adversarial training is adept at generating artificial texture. On the other hand, Euclidean-based measurements inherently prefer L1 minimization. Although adversarial and L1-norm-based losses are able to generate results with favorable LPIPS and PSNR values, respectively, prior methods still combine them with a simple weighted ratio. Furthermore, handling multiple datasets is another challenge for real-world image restoration frameworks. In the RealSR task, many datasets~\cite{Lugmayr2020ntire,CameraSR,AIM19,realsr,chen2021cross,wei2020component} have been proposed to address real-world degradation and pixel displacement. Extensive training on multiple RealSR datasets is beneficial for generalization enhancement as well as performance improvement~\cite{wei2020component}. However, typical backbones~\cite{edsr,zhang2018image,zhang2020residual,jiang2020dual} require several days to converge on one standard dataset~\cite{timofte2017ntire} with a single GPU. The increasing number of RealSR datasets raises further efficiency requirements. Specifically, training efficiency suffers a heavier burden when auxiliary datasets are incorporated for pre-training. This motivates us to investigate a dataset distillation strategy for multiple-dataset collaborative training.
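For reference, the PSNR values discussed above follow the standard definition over the mean squared error; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a restored image and its reference."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Because PSNR is a monotone function of the pixel-wise MSE, minimizing an L1/L2 pixel loss directly favors this metric, while LPIPS compares deep-feature activations instead.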
Previously, research communities have paid considerable attention to model distillation, as large-scale datasets play an important role in training time consumption. For instance, DIV2K has 1,000 images with 2K resolution, and the NTIRE2020 and AIM2020 challenges each provide 3450 images with 2K resolution. Though large-scale training, especially with data augmentation, can significantly improve performance, the corresponding training time cost increases markedly. \begin{figure*}[t] \centering \vspace{-0.35cm} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{PSNR_gain-eps-converted-to.pdf} \caption{High PSNR Profits} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{lpips_gain-eps-converted-to.pdf} \caption{High LPIPS Profits} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{patch.pdf} \caption{Visualization of high-profit results with PSNR and LPIPS index.} \end{subfigure} \caption{We obtain high-profit results of DASR~\cite{wei2020unsupervised} from the perspective of the PSNR and LPIPS indices. It is discernible that the high PSNR gain examples on the left ($>>1.5$ dB) exhibit clear backgrounds, smooth structures and inconspicuous LPIPS improvement. However, the high LPIPS profit samples ($>>0.25$) contain complicated texture and a few artifacts, and also obtain less PSNR gain. Though PSNR and LPIPS are both used for evaluating image restoration, they fail to reach an agreement.} \label{fig:psnr_lpips} \vspace{-3mm} \end{figure*} To tackle the above challenge, we observe that a full image contains many image types, which leads to a large variance in the structure signal. Inspired by this, we introduce a Dual-Learning strategy based on an exclusionary attention mechanism to encourage different types of images to exhibit diverse representations under multi-task learning paradigms.
To address the training time consumption issue, we also explore a noise extraction and blending mechanism across multiple datasets and propose an efficient data collection strategy. With the proposed data collection strategy, we alleviate the training time growth problem when incorporating an auxiliary large-scale dataset for mix-training. Our contributions can be summarized as: \begin{itemize} \item We propose an efficient training paradigm (i.e., the NGDC algorithm) in which the extra time consumption is reduced when incorporating auxiliary datasets for training. \item We propose an exclusionary mask mechanism, namely RWSR-EDL, that brings significant improvement under multi-loss training conditions in RealSR. \item We provide a comprehensive comparison on four challenging real-world SR benchmarks (e.g., AIM2019~\cite{AIM19}, {NTIRE2020}~\cite{Lugmayr2020ntire}, CameraSR~\cite{CameraSR}, RealSR~\cite{realsr}) to demonstrate that RWSR-EDL achieves a clear performance improvement on both the PSNR and LPIPS indices and presents high-quality image restoration. \end{itemize} \section{Related Work} \textbf{Single Image Super Resolution.} Deep learning-based SR methods~\cite{jiang2019atmfn,jiang2021rain} have achieved significant improvements over conventional SR methods in restoration quality. Among these methods, Dong et al. proposed the first CNN-based SR method, called SRCNN~\cite{dong2014learning}, which utilizes a three-layer CNN to learn a nonlinear mapping between LR and HR image pairs. Lim et al. proposed EDSR~\cite{edsr}, using simplified residual blocks for training the SR model, which achieves a great improvement in restoration quality. Luo et al.~\cite{luo2021ebsr} proposed a Feature Enhanced Pyramid Cascading and Deformable convolution (FEPCD) module to align multiple low-resolution burst images at the feature level. Zhang et al.
proposed a residual in residual (RIR) structure~\cite{zhang2018image} to address the difficulty of training a deep SR network, together with a channel attention mechanism to improve the representation ability of the SR network. Wang et al. proposed ESRGAN~\cite{wang2018esrgan}, which introduced a Residual-in-Residual Dense Block (RRDB) into the generative network. Furthermore, FASRGAN~\cite{yan2021fine} explored a novel pixel-level discriminator for fine-grained generative adversarial learning and obtains interesting results. Hu et al.~\cite{hu2020meta} address the simplicity of the downscaling kernel in image SR by adopting multiple degradation metrics, achieving promising results on complex corrupted images. Recently, many dual-way neural networks~\cite{jiang2020dual} have been applied to image restoration; however, they only address the traditional image SR problem and neglect the complexity of real-world cases. Mei et al.~\cite{csnln} introduced the first Cross-Scale Non-Local (CS-NL) attention module and a powerful Self-Exemplar Mining (SEM) cell into deep neural networks. Jiang et al.~\cite{jiang2020hierarchical} proposed a hierarchical dense connection network (HDN) for SR, achieving improvements in both reconstruction performance and efficiency. Although the above deep learning-based SR methods bring significant improvements to SISR, they cannot be generalized well to real-world images, as their assumption of a known and fixed degradation process does not hold for real-world images. \textbf{Real-World Super Resolution.} Recently, real-world super-resolution has attracted considerable attention due to its distinguished practicality. Different from traditional image SR, which generally focuses on simple and uniform synthesized degradation, RealSR needs to handle complicated and rough degradation in real-world cases. To address the above challenges, Shocher et al.
proposed zero-shot super-resolution (ZSSR)~\cite{zeroshot}, which realizes an unsupervised CNN-based SR method by training an image-specific SR network with internal data rather than employing external data. Bell-Kligler et al. proposed KernelGAN~\cite{kernelgan} to generate the down-sampling kernel from label images by adopting a kernel estimation GAN, which is used in ZSSR for degradation kernel estimation. Fritsche et al. proposed the DSGAN model~\cite{fritsche2019frequency} to generate LR-HR pairs and then apply ESRGAN-FS on the corresponding generated images. Pang et al. proposed FAN~\cite{fan} to extract the different frequency components of the LR images, which can be used to recover the HR images by preserving more high-frequency details. Wei et al. proposed the DASR framework~\cite{wei2020unsupervised}, which calculates the domain distance between LR images and real images with domain-gap aware training and domain-distance weighted supervision. Ji et al.~\cite{Ji_2020_CVPR_Workshops} proposed an RWSR model based on ESRGAN using kernel estimation and noise injection. However, the above methods only employ a single-structure network to obtain the enhanced image, ignoring that a multi-branch network can learn more diverse image features with various prior knowledge. In contrast, we use a dual-learning refinement module and an exclusionary mask generator to further extract diverse representations. With our exclusionary dual-learning strategy, we obtain the final enhanced image with rich high-frequency details, which indeed addresses the complicated degradation in real-world super-resolution. Since the ground-truth label may be absent in some image enhancement applications, several no-reference image quality assessment (IQA) metrics~\cite{liu2014no,mittal2012making,blau20182018} have been proposed. Spatial–Spectral Entropy-based Quality (SSEQ)~\cite{liu2014no} is capable of assessing the quality of a distorted image across multiple distortion categories.
Naturalness Image Quality Evaluator (NIQE)~\cite{mittal2012making} uses a multivariate Gaussian (MVG) model to fit the quality-aware features extracted from images. The Perception Index (PI)~\cite{blau20182018} combines the no-reference image quality measures of Ma et al.~\cite{ma2017learning} and NIQE~\cite{mittal2012making} to compute the score. \textbf{Noise Modeling Based Denoising.} Reducing the effect of noise is a critical issue in real-world image restoration. Recently, some noise modeling-based approaches have been proposed to address the real-world noise estimation and reduction problem. Lebrun et al. proposed Noise Clinic (NC)~\cite{Noiseclinic}, a method that estimates a signal- and frequency-dependent noise model and then applies non-local Bayes (NLB)~\cite{NLB}. Zhang et al. proposed FFDNet~\cite{FFDNet}, a non-blind Gaussian denoising network that can obtain promising results on real-world noise cases; nevertheless, it requires manual intervention to select the noise level. However, real-world image noise is indeed distinct from Additive White Gaussian Noise (AWGN), and a network trained on such handcrafted degradation performs poorly when applied to complex real-world noise. Guo et al. proposed CBDNet~\cite{CBDNet}, which is composed of two sub-networks: a noise estimation net and a non-blind denoising net. They use a noise modeling method that is able to generate realistic noise and also adopt real-world clean images for pair-wise training. Chen et al. proposed GCBD~\cite{GCBD}, which extracts smaller image patches with clear backgrounds from noisy images and then incorporates a GAN to generate more synthetic noise samples for training the denoising CNN. Anwar et al. proposed RIDNet~\cite{RIDNet}, a single-stage denoising network with feature attention for real-world noise reduction.
Different from the above approaches, our RWSR-EDL is free from noise modeling; instead, we make use of an inherent noise sampling strategy on real-world images to construct paired training data, which eliminates the biases introduced by noise modeling. We also incorporate the NGDC strategy, which effectively distills a large-scale auxiliary dataset to obtain the target real-world noise type. With the NGDC strategy, performance is clearly improved without a training time increment. \begin{figure*}[t] \begin{center} \includegraphics[width=0.98\linewidth]{network.pdf} \end{center} \caption{Network architecture of RWSR-EDL, which consists of two components: 1) Noise-Guidance Data Collection for efficient training on multiple large-scale RealSR datasets; 2) Exclusionary Mask Generator for relaxing multi-loss optimization in RealSR.} \label{fig:network} \vspace{-3mm} \end{figure*} \section{Methodology} \subsection{Overview} Traditional single image super-resolution (SISR) aims at restoring a high-quality image $I_{HR}$ from a low-quality image $I_{LR}$. In traditional SISR, $I_{LR}$ is synthesized from $I_{HR}$ with a down-scaling operation: \begin{equation} I_{LR} = (I_{HR} \otimes K_{Gauss}) \Downarrow_{\Bbbk} , \end{equation} where $K_{Gauss}$ is a Gaussian blur kernel and $\Downarrow_{\Bbbk}$ denotes the image degradation procedure with downscale factor $\Bbbk$. To obtain a low-quality $I_{LR}$, $\Downarrow_{\Bbbk}$ typically adopts a bicubic-based downsampling algorithm. In contrast to traditional SISR, real-world super-resolution addresses real-world image degradation by capturing $I_{HR}$ and $I_{LR}$ with optical sensors of different quality and resolution settings, where the degradation procedure (e.g., $\Downarrow_{\Bbbk}$) and the blur kernel are unknown. Therefore, $\left \{ I_{LR},I_{HR} \right \}$ pairs inherently have different resolution properties in the real-world data collection procedure.
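The synthetic degradation above (Gaussian blur followed by downsampling) can be sketched in pure NumPy; here a simple stride-based sampler stands in for the bicubic operator, and the kernel size is an assumption:

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Normalized 2-D Gaussian kernel K_Gauss (size and sigma are assumed)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(hr: np.ndarray, scale: int = 4, sigma: float = 1.0) -> np.ndarray:
    """I_LR = (I_HR conv K_Gauss) downsampled by `scale`; nearest-sample
    striding stands in for the bicubic operator of traditional SISR."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr, dtype=np.float64)
    h, w = hr.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1] * k)
    return blurred[::scale, ::scale]
```

RealSR replaces exactly this hand-crafted pipeline with pairs captured by sensors of different quality, where neither the blur kernel nor the downsampling operator is known.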
To improve the quality of real-world images and suppress the noise and artifacts that exist in them, we propose a novel single image super-resolution framework that takes full advantage of learned image features. As shown in Figure~\ref{fig:network}, there are two key components in our RWSR-EDL: 1) noise-guidance data collection for efficient training on multiple large-scale RealSR datasets; 2) an exclusionary mask generator for relaxing multi-loss optimization in RealSR. In our method, we first enforce the LR images and ground-truth images to yield similar noise distributions by embedding random noise into the LR images. The intermediate HR image generated by the main feature extractor is then fed into the exclusionary dual-network. Finally, our framework adaptively learns diverse intermediate representations to pursue high PSNR and low LPIPS scores by incorporating the exclusionary mask and multi-loss optimization. \subsection{Main Generator} Since the goal of our work is to improve the perceptual quality of SR images under the real-world setting, we adopt an effective image enhancement backbone for feature extraction, which consists of 23 residual-in-residual dense blocks (RRDBs~\cite{wang2018esrgan}), and incorporate pair-wise training. Given $I_{LR}$, we generate the intermediate HR image $I_G$ with the main generator as follows: \begin{equation} I_G = Generator(I_{LR}). \end{equation} \vspace{-4mm} \subsection{Exclusionary Mask Generation} To obtain a better super-resolved image by further improving the intermediate HR image $I_G$, we employ a deep feature extraction module, which consists of three branches with the same network design but different initializations to fully explore diverse feature representations. All three branches contain ResBlocks~\cite{residual_net}, each of which has two 3$\times$3 convolutional layers.
The first convolutional layer with 3 input channels and 64 output channels is followed by a ReLU activation function, while the second convolutional layer has 64 input channels and 3 output channels. To this end, each ResBlock in the feature extraction phase can be formulated as follows: \begin{equation} I_{R}^x = I_{G} + f_{RB}^{x}(I_{G}), \label{euq:I_G} \end{equation} where the superscript $x$ indexes the branch, $f_{RB}$ denotes the ResBlock, and $I_{G}$ and $I_{R}^x$ denote the input and output of the ResBlock, respectively. Similar to Eq.~\ref{euq:I_G}, we obtain $[I_{R}^x, I_{R}^y, I_{R}^m]$ w.r.t.\ the three parallel branches. Once the refined images $I_{R}^x$ and $I_{R}^y$ are obtained, we deploy a soft-mask generator to realize the exclusionary dual-learning. More specifically, $I_{R}^x$ and $I_{R}^y$ are optimized with different loss functions to pursue both PSNR and LPIPS improvements by applying exclusionary masks. Let $I_{R}^m$ represent the output of the mask branch; we apply a softmax operator on $I_{R}^m$ to normalize all feature values into $[0,1]$ as an adaptive mask. Specifically, a soft-mask over the channel index can be generated as follows: \begin{equation} M_{\alpha} =\operatorname{softmax}\left(I_{R}^m\right)=\frac{\exp \left(z_{n}\right)}{{\sum_{n=1}^{N}} \exp \left(z_{n}\right)}, \label{eqa:alpha} \end{equation} where $z_{n}$ is the value of $I_{R}^m$ at channel index $n$ and $N$ is the number of channels. To this end, we obtain the $M_{\alpha}$ for exclusionary dual-learning. \subsection{Loss Function} Since the exclusionary dual-learning is proposed to relax the conflict between perceptual- and L1-based optimizations, we first briefly introduce the losses used in this paper, including the Pixel Loss, Perceptual Loss, and Adversarial Loss. \textbf{Pixel Loss.} We use the L1 loss, a widely used loss function for general image restoration, to train our generator to recover as many effective pixels as possible.
The L1 loss is defined by the Manhattan distance between the reconstructed image $I_{SR}$ and the ground-truth image $I_{HR}$ as follows: \begin{equation} \mathcal{L}_{pix}(I_{SR},I_{HR})=\frac{1}{N} \sum_{i=1}^{N}\left\|I^{i}_{HR}-I^{i}_{SR}\right\|_{1}, \end{equation} where $N$ is the number of training samples. \begin{figure*}[t] \centering \includegraphics[width=0.49\textwidth]{PSNR-eps-converted-to.pdf} \includegraphics[width=0.49\textwidth]{LPIPS-eps-converted-to.pdf} \caption{PSNR/LPIPS curves of different settings on RealSR~\cite{realsr} over training iterations. With the NGDC strategy, although we incorporate an auxiliary large-scale dataset (i.e., NTIRE2020~\cite{Lugmayr2020ntire}) for training, \textbf{the training time does not increase while the performance is clearly improved.} } \label{fig:curves} \vspace{-3mm} \end{figure*} \textbf{Perceptual Loss.} To further enhance the high-frequency features (such as edges) in the SR image, we deploy a perceptual loss in feature space. Specifically, we extract the features of $I_{SR}$ and $I_{HR}$ with a pre-trained VGG-19 and compute their loss as follows: \begin{equation} \mathcal{L}_{per}(I_{SR},I_{HR})=\frac{1}{N} \sum_{i=1}^{N}\left\|VGG_{19}(I^{i}_{HR})-VGG_{19}(I^{i}_{SR})\right\|_{1}, \end{equation} where $N$ is the number of training samples and $VGG_{19}$ denotes a pre-trained VGG-19 model~\cite{vgg}. \textbf{Adversarial Loss.} We also deploy an adversarial loss to enhance the SR image's texture and make it more realistic. The adversarial loss is defined as follows: \begin{align*} \mathcal{L}_{adv}(&I_{SR},I_{HR}) = \frac{1}{N}\sum_{i=1}^{N}\{-E[\log (1-\sigma(D(I^{i}_{HR})- \\ &E(D(I^{i}_{SR}))))] - E[\log (\sigma(D(I^{i}_{SR})-E(D(I^{i}_{HR}))))]\}, \end{align*} where $N$ is the number of training samples, $\sigma$ represents the sigmoid function, and $D(\cdot)$ is the discriminator. In adversarial learning, the patch discriminator offers advantages over the commonly used VGG-128 discriminator in two aspects.
First, the patch discriminator is a fully convolutional network, which is free from image size restrictions because it has no fully-connected layer. Second, local feature representations are exploited through its limited receptive field. We therefore apply the patch discriminator instead of VGG-128 as $D(\cdot)$. \textbf{Exclusionary Dual-Learning.} As illustrated in Fig.~\ref{fig:psnr_lpips}, a region with a high LPIPS gain often bears a low PSNR increment. However, a typical utilization of $[\mathcal{L}_{adv}, \mathcal{L}_{per},\mathcal{L}_{pix}]$ in image SR~\cite{srgan} is to employ them jointly with a weighted average: \begin{equation} \begin{aligned} \widetilde{\mathcal{L}}_{all}=\alpha \mathcal{L}_{adv}(I_{SR},I_{HR})& + \beta \mathcal{L}_{per}(I_{SR} ,I_{HR}) \\ &+ \gamma \mathcal{L}_{pix}(I_{SR},I_{HR}), \end{aligned} \label{eqa:oriloss} \end{equation} where $[\alpha, \beta, \gamma]$ are empirical weight factors. $\widetilde{\mathcal{L}}_{all}$ incorporates all losses with fixed weight factors for optimization and ignores the diversity of image types in terms of perceptual- and L1-norm-based metrics. In contrast to Eq.~\ref{eqa:oriloss}, we adopt exclusionary masks in multi-loss training to avoid domain conflicts and obtain more accurate feature representations: \begin{equation} \begin{aligned} \mathcal{L}_{all}=\mathcal{L}_{adv}(M_{\alpha} \cdot I_R^x,I_{HR})& + \mathcal{L}_{per}(M_{\alpha} \cdot I_R^x + M_{\beta} \cdot I_R^y,I_{HR}) \\ &+ \mathcal{L}_{pix}(M_{\beta} \cdot I_R^y,I_{HR}), \end{aligned} \label{eqa:loss} \end{equation} where $\cdot$ is an element-wise product. Meanwhile, we obtain $M_{\alpha}$ with Eq.~\ref{eqa:alpha}, and $M_{\beta}$ is obtained from $M_{\alpha} + M_{\beta} = 1$ to guarantee that the two masks are exclusionary. In Eq.~\ref{eqa:loss}, $M_{\alpha}$ enforces partial regions of $I_R^x$ to be optimized with $\mathcal{L}_{adv}$, avoiding the distraction of $\mathcal{L}_{pix}$.
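As an illustration, the exclusionary masking of Eq.~\ref{eqa:alpha} and the mask-weighted loss inputs of Eq.~\ref{eqa:loss} can be sketched in NumPy as below. The toy tensor shapes, the random stand-in branch outputs, and the plain L1 helper are our assumptions; a real implementation would operate on framework tensors with the VGG and discriminator terms attached:

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable channel-wise softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_l1(pred, target, mask):
    """L1 loss applied to a mask-weighted branch output, as in L_pix(M_beta * I_R^y, I_HR)."""
    return np.abs(target - mask * pred).mean()

C, H, W = 3, 8, 8
I_x = np.random.randn(C, H, W)   # branch steered towards the adversarial loss
I_y = np.random.randn(C, H, W)   # branch steered towards the pixel loss
I_m = np.random.randn(C, H, W)   # mask-branch output
I_HR = np.random.rand(C, H, W)   # ground truth

M_alpha = softmax(I_m, axis=0)   # channel-wise soft-mask
M_beta = 1.0 - M_alpha           # exclusionary complement: M_alpha + M_beta = 1
loss_pix = masked_l1(I_y, I_HR, M_beta)        # pixel term of the masked loss
I_per_in = M_alpha * I_x + M_beta * I_y        # combined input for the perceptual term
```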
Simultaneously, $M_{\beta}$ performs spatial attention on $I_R^y$ for $\mathcal{L}_{pix}$ optimization, free from $\mathcal{L}_{adv}$. The overall output $[I_R^x+I_R^y]$ is further smoothed by $\mathcal{L}_{per}$ in Eq.~\ref{eqa:loss}. With the proposed exclusionary dual-learning mechanism, we can perform fine-grained multi-loss optimization and achieve both high PSNR and favorable LPIPS scores. \emph{Difference to Spatial Attention Mechanism.} Compared with the spatial attention mechanism~\cite{zhou2016learning,woo2018cbam}, which simply highlights features according to high-entropy information, the proposed exclusionary dual-learning mechanism enforces the branches to capture diverse features with exclusionary masks. More specifically, the generated soft-mask lets one branch obtain high-value feature representations as usual, while the other branch is enforced to capture the complementary information. With this competitive mechanism, our model can learn complementary features at the same time and demonstrates promising results under complex real-world conditions. To this end, we argue that different image types exhibit different signal properties in the deeply learned representation and that the adaptive mask is able to reconstruct SR images of better visual quality. In the ablation study section, we will show that using the adaptive mask is superior to using a plain dual-way neural network. \subsection{Noise-Guidance Data Collection} \textbf{Data Preparation.} Since real-world images inherently contain a certain proportion of noise, which causes the restored images to contain spurious artifacts, we apply down-sampling to reduce this negative impact. Specifically, we apply a bicubic kernel $K_{bic}$ to the source image $I_{src}$ to remove noise and obtain $I_{HR}$: \begin{equation} \label{eqa:down1} I_{HR} = (I_{src} \otimes K_{bic})\Downarrow_{\Bbbk}.
\end{equation} Similarly, we can obtain the corrupted image $I_{LR}$ with: \begin{equation} I_{LR} = (I_{HR} \otimes K_{bic})\Downarrow_{\Bbbk}. \end{equation} \textbf{Noise Sampling.} The high-frequency signal in $I_{src}$ is partially lost because Eq.~\ref{eqa:down1} reduces pattern information; the noise distribution of $I_{LR}$ therefore changes significantly. To keep the noise distributions of $I_{LR}$ and $I_{src}$ similar, we directly sample the noise in $I_{src}$ with a grid strategy. Specifically, a sliding window of size $s \times s$ with stride $s$ is used to capture small patches. The mean and variance of each captured patch are computed for sample sifting. Suppose the variance and mean of a fetched patch $n_i$ are $\sigma_i$ and $m_i$. We obtain the CharAcTeristic Interval (CATI) (i.e., $[ \sigma_{\theta_1},\sigma_{\theta_2}]$ and $[m_{\theta_1},m_{\theta_2}]$) by acquiring the bottom $2\%$ of patches in terms of variance. When $\sigma_i$ and $m_i$ fall within the CATI, we regard $n_i$ as a noise patch and add it to the noise patch bank $N$; otherwise, we regard the patch as noiseless. As shown in Fig.~\ref{fig:noise_comparison}, the noiseless patches, which contain richer texture and detail, have larger $\sigma$. We regard these larger-variance patches as noiseless for two reasons: 1) complex structure and texture may cover up the noise; 2) noise signal collection across multiple datasets benefits from a clean background. To this end, the patches with smaller $\sigma$ and positive $m$, whose plain content helps us purely extract the inherent noise, are collected into the noise bank. Then, we randomly fetch $n_i$ from the constructed $N$ and infuse $n_i$ into $I_{LR}$ to generate the training input $I'_{LR}$ as: \begin{equation} \label{eqa:noise_infuse} I'_{LR} = I_{LR} + n_i.
\end{equation} As illustrated in Eq.~\ref{eqa:noise_infuse}, the training pairs become more complex and the noise resistance of our model is enhanced through discriminative learning. \begin{figure}[t] \begin{center} \includegraphics[width=0.89\linewidth]{noise_comparison.pdf} \end{center} \caption{Noise and noiseless patches.} \label{fig:noise_comparison} \end{figure} \textbf{Noise-Guidance Data Collection.} Inspired by the noise sampling phase, we extend it to cross-dataset noise collection. We assume that noise samples whose noise type is similar to the target domain exist in an additional dataset. We aim at distilling auxiliary datasets and indexing images with a similar noise domain. More specifically, we collect a noise bank $N_a$ from the target dataset $D_a$ by incorporating its CATI. Then, we apply the CATI, which was already obtained from $D_a$, to the auxiliary dataset $D_b$ as well. Notice here that $D_b$ is usually larger than $D_a$. For instance, if a source image $I_{src}^b$ in $D_b$ falls within the CATI, we generate the corresponding $\left \{ I_{HR}^{b},I_{LR}^{b},N^{b} \right \}$ and merge them into $\left \{ I_{HR}^{a},I_{LR}^a,N^a \right \}$. Finally, we obtain the distilled training pairs $\left \{ I_{LR}^{a+b}, I_{HR}^{a+b}, N^{a+b} \right \}$. We sketch the overall algorithm in Alg.~\ref{alg1}. With NGDC, we can significantly reduce the data volume of the auxiliary dataset as well as achieve better performance. {\emph{Difference to Impressionism.} As depicted in~\cite{Ji_2020_CVPR_Workshops}, Impressionism collects noise patches by the following rule: $\sigma(p)<v$, where $\sigma(\cdot)$ denotes the variance of an image patch $p$, and $v$ is a fixed threshold. However, $v$ is an empirical value and needs a manual search when a different training set is used.
To remedy this, as demonstrated in Algorithm~\ref{alg1}, we take the patches in the bottom 2\% by variance after calculating the variance of all patches, and this rule can be applied to any training set. As shown in Fig.~\ref{fig:curves}, NGDC realizes an efficient training paradigm that achieves faster convergence by collecting valuable noise patches across multiple training sets.} \begin{algorithm}[t] \caption{Algorithm of NGDC} \begin{algorithmic}[1] \label{alg1} \REQUIRE Target dataset $D_a$, auxiliary dataset $D_b$, noise banks $N_a=\varnothing$ and $N_b=\varnothing$ \FOR{$I_i^a$ in $D_a$} \STATE Compute $\sigma_i$ and $m_i$ for $I_i^a$ \STATE $I_{HR}^a = (I_{i}^a \otimes K_{bic})\Downarrow_{\Bbbk} $ \STATE $I_{LR}^a = (I_{HR}^a \otimes K_{bic})\Downarrow_{\Bbbk} $ \ENDFOR \STATE Construct $N_a$ from the bottom $2\%$ of images by $\sigma$ value and compute the corresponding CATI over $[\sigma,m]$ \FOR{$I_i^a$ in $D_a$} \FOR {patch $p_j^a$ in $I_i^a$} \STATE Compute $\sigma$ and $m$ for $p_j^a$ \IF { $ \left \{ (\sigma, m) \vert p_j^a \right \} \in CATI$ } \STATE ${N}_a = {N}_a + p_j^a$ \ENDIF \ENDFOR \ENDFOR \STATE Construct $\left \{ I_{LR}^{a}, I_{HR}^{a}, N^{a} \right \}$ \FOR{$I_i^b$ in $D_b$} \FOR {patch $p_j^b$ in $I_i^b$} \STATE Compute $\sigma$ and $m$ for $p_j^b$ \IF { $ \left \{ (\sigma, m) \vert p_j^b \right \} \in CATI$ } \STATE ${N}_b = {N}_b + p_j^b$ \STATE $I_{HR}^b = (I_{i}^b \otimes K_{bic})\Downarrow_{\Bbbk} $ \STATE $I_{LR}^b = (I_{HR}^b \otimes K_{bic})\Downarrow_{\Bbbk} $ \ENDIF \ENDFOR \ENDFOR \STATE Obtain $\left \{ I_{LR}^{b}, I_{HR}^{b}, N^{b} \right \}$ from ${D}_b$ in the same way as $\left \{ I_{LR}^{a}, I_{HR}^{a}, N^{a} \right \}$ \STATE Construct $\left \{ I_{LR}^{a+b}, I_{HR}^{a+b}, N^{a+b} \right \}$ \end{algorithmic} \end{algorithm} \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{RealSR.png} \caption{Super-resolution results on the RealSR~\cite{realsr} dataset.} \label{fig:realsr} \end{figure*} \begin{figure*}
\centering \includegraphics[width=0.93\textwidth]{CameraSR.png} \caption{Super-resolution results on the CameraSR~\cite{CameraSR} dataset.} \label{fig:camerasr} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{AIM19.png} \caption{Super-resolution results on the AIM19~\cite{AIM19} challenge data.} \label{fig:aim19} \end{figure*} \section{Experiments} \subsection{Dataset and Evaluation Protocols} We use the following four real-world SR datasets for comprehensive comparisons to validate our RWSR-EDL: \begin{itemize} \item \emph{RealSR}~\cite{realsr} consists of 595 LR-HR image pairs, which are collected under real-world scenarios with various optical resolutions. To stay in step with other baseline methods~\cite{wei2020unsupervised,fritsche2019frequency}, we use 200 RealSR image pairs and 800 clean images from DIV2K for training. Then, we use the specified 50 LR-HR pairs, collected by a Canon camera, for testing. Since the images in the RealSR dataset are captured under different optical settings and thus inherently have different resolutions, we adopt the $\times$4 scale to evaluate our RWSR-EDL. \item \emph{CameraSR}~\cite{CameraSR} is a real-world SR dataset that consists of 200 LR-HR pairs collected by an iPhone X and a Nikon camera, respectively. In our experiment, we use 80 real-world images collected by the iPhone X (i.e., No.~021-100) and 800 clean images from DIV2K for training. The remaining 20 LR-HR pairs (No.~001-020) are used for testing. \item \emph{NTIRE2020 Challenge}~\cite{Lugmayr2020ntire} contains 3550 images, which are downscaled and corrupted with unknown noise to simulate inherent optical sensor noise. In our experiment, we use 3450 images, consisting of 2650 images from Flickr2K and 800 images from DIV2K, for training. The testing data contain 100 images from the DIV2K validation set with the same degradation operation as the training images. We adopt the $\times$4 scale to evaluate our RWSR-EDL.
\item \emph{AIM2019 Challenge}~\cite{AIM19}. The training and testing images provided by AIM2019 are the same as those of NTIRE2020. Nevertheless, a different, undisclosed degradation was used to generate the corrupted inputs. We adopt the $\times$4 scale to evaluate our RWSR-EDL as well. \end{itemize} For evaluation protocols, we adopt Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Learned Perceptual Image Patch Similarity (LPIPS)~\cite{lpips}, Naturalness Image Quality Evaluator (NIQE)~\cite{mittal2012making}, and Mean Opinion Score (MOS) to verify RWSR-EDL. PSNR and SSIM are the most commonly used evaluation metrics in image restoration, as they focus on pixel fidelity rather than human perception. In contrast, LPIPS and NIQE pay more attention to human visual perception; lower LPIPS and NIQE scores indicate better perceptual quality of the SR results. We collect the MOS results by recruiting 20 volunteers for a subjective assessment on the NTIRE2020 challenge data. Specifically, 100 pairs of patches from the NTIRE2020 testset are randomly selected, and the volunteers were shown a side-by-side comparison of each method's result and the referenced HR image. They were then asked to evaluate the quality of the SR image w.r.t.\ the reference image using the 6-level scale defined as: 0 - `Perfect', 1 - `Almost Perfect', 2 - `Slightly Worse', 3 - `Worse', 4 - `Much Worse', 5 - `Terrible'. \subsection{Implementation Details and Competing Methods} In our experiments, we use flips and random rotations with angles of $90^{\circ}$, $180^{\circ}$ and $270^{\circ}$ for data augmentation. The images are cropped into $128 \times 128$ patches as input, and the batch size is 16. We use Adam~\cite{adam} as the optimizer, with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The initial learning rate is $1 \times 10^{-4}$, and our RWSR-EDL is trained on an NVIDIA Tesla V100 server.
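The flip and $90^{\circ}/180^{\circ}/270^{\circ}$ rotation augmentation above can be sketched as follows; this is an illustrative NumPy version, and the helper name \texttt{augment} is our own choice rather than part of the training code:

```python
import numpy as np

def augment(img, rot_k, flip):
    """Geometric augmentation: rotate by rot_k * 90 degrees, then optionally flip left-right."""
    out = np.rot90(img, k=rot_k, axes=(0, 1))
    return np.fliplr(out) if flip else out

rng = np.random.default_rng(0)
patch = rng.random((128, 128, 3))  # a 128x128 training crop
# draw a random variant out of the 8 flip/rotation combinations
aug = augment(patch, rot_k=int(rng.integers(4)), flip=bool(rng.integers(2)))
print(aug.shape)  # (128, 128, 3)
```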
Since the LR and HR images in the CameraSR dataset have the same resolution, we remove the corresponding downsampling and upsampling layers in the CameraSR experiment. For the NGDC strategy, we adopt the data of the NTIRE2020 challenge~\cite{Lugmayr2020ntire} as an auxiliary dataset to perform efficient mixed training in all experiments except the NTIRE2020 experiment itself. Also, we include ZSSR~\cite{zeroshot}, CinCGAN~\cite{CinCGAN}, ESRGAN~\cite{wang2018esrgan}, FSSR~\cite{fritsche2019frequency}, DASR~\cite{wei2020unsupervised}, Impressionism~\cite{Ji_2020_CVPR_Workshops}, FASRGAN~\cite{yan2021fine} and KernelGAN~\cite{kernelgan} on the corresponding RealSR benchmarks for comparison. For ZSSR~\cite{zeroshot}, DASR~\cite{wei2020unsupervised} and Impressionism~\cite{Ji_2020_CVPR_Workshops}, we adopt the official models, trained on the corresponding datasets, for evaluation. To stay consistent with other baselines, we adopt the pre-trained ESRGAN~\cite{wang2018esrgan} model for testing. For FSSR~\cite{fritsche2019frequency}, we fine-tune its model on the corresponding datasets and obtain the results.
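For concreteness, the noise-patch sifting at the heart of NGDC (taking the bottom $2\%$ of patch variances, then the noise infusion of Eq.~\ref{eqa:noise_infuse}) can be sketched as below. This simplified NumPy version keeps only the variance statistic (the mean interval of the CATI is omitted) and runs on synthetic data, so the shapes and names are illustrative assumptions rather than the released implementation:

```python
import numpy as np

def sift_noise_patches(img, s=32, frac=0.02):
    """Collect flat (low-variance) s x s grid patches as noise samples, per the bottom-frac rule."""
    patches = [img[i:i + s, j:j + s]
               for i in range(0, img.shape[0] - s + 1, s)
               for j in range(0, img.shape[1] - s + 1, s)]
    variances = np.array([p.var() for p in patches])
    k = max(1, int(len(patches) * frac))      # bottom 2% by variance defines the interval
    idx = np.argsort(variances)[:k]
    return [patches[i] for i in idx]

rng = np.random.default_rng(0)
img = rng.random((256, 256)) * 0.1            # a nearly flat, noisy source image
bank = sift_noise_patches(img, s=32)          # noise patch bank N
I_LR = rng.random((32, 32))
n_i = bank[0]                                 # fetch a noise patch from the bank
I_LR_prime = I_LR + n_i                       # noise infusion: I'_LR = I_LR + n_i
```

Extending this to an auxiliary dataset amounts to reusing the interval computed on the target dataset when sifting the auxiliary images, as Alg.~\ref{alg1} describes.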
\defTable{Table} \begin{table}[ht] \centering \begin{tabular}{l|ccc} \hline Methods & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline ZSSR~\cite{zeroshot} & 26.007 & 0.7482 & 0.386 \\ ESRGAN~\cite{wang2018esrgan} & 25.956 & 0.7468 & 0.415 \\ CinCGAN~\cite{CinCGAN} & 25.094 & 0.7459 & 0.405 \\ FSSR~\cite{fritsche2019frequency} & 25.992 & 0.7388 & 0.265 \\ Impressionism~\cite{Ji_2020_CVPR_Workshops} & 25.781 & 0.7508 & 0.258 \\ FASRGAN~\cite{yan2021fine} & 26.011 & 0.7504 & 0.307 \\ DASR~\cite{wei2020unsupervised} & \underline{26.229} & \underline{0.7660} & \underline{0.251} \\ \hline \hline RWSR-EDL & \textbf{27.803} & \textbf{0.8112} & \textbf{0.247} \\ \hline \end{tabular} \caption{Quantitative results on the RealSR dataset.} \label{tab:realsr} \end{table} \begin{table}[ht] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccccc} \hline Methods &ZSSR &ESRGAN &CinCGAN &FSSR &DASR & RWSR-EDL \\ \hline \hline NIQE & 4.971 & 6.327 &4.218 & 3.428 & 2.971 &2.419 \\ \hline \end{tabular}} \caption{Evaluation on the RealSR~\cite{realsr} testset with the NIQE index~\cite{mittal2012making}.} \label{tab:niqe} \end{table} \begin{table}[ht] \centering \begin{tabular}{l|ccc} \hline Methods & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline FSSR~\cite{fritsche2019frequency} & 23.781 & 0.7566 & 0.180 \\ Impressionism~\cite{Ji_2020_CVPR_Workshops} & 25.142 & \underline{0.8097} & \underline{0.139} \\ DASR~\cite{wei2020unsupervised} & \underline{25.235} & 0.8065 & 0.141 \\ \hline \hline RWSR-EDL & \textbf{26.284} & \textbf{0.8226} & \textbf{0.133} \\ \hline \end{tabular} \caption{Quantitative results on the CameraSR dataset.} \label{tab:camerasr} \end{table} \begin{table}[ht] \centering \begin{tabular}{l|ccc} \hline Methods & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} &
\multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline ZSSR~\cite{zeroshot} & \underline{22.327} & 0.6022 & 0.630 \\ ESRGAN~\cite{wang2018esrgan} & 21.382 & 0.5478 & 0.543 \\ CinCGAN~\cite{CinCGAN} & 21.602 & \underline{0.6129} & 0.461 \\ FSSR~\cite{fritsche2019frequency} & 20.820 & 0.5103 & 0.390 \\ Impressionism~\cite{Ji_2020_CVPR_Workshops} & 21.021 & 0.5978 & 0.376 \\ DASR~\cite{wei2020unsupervised} & 21.780 & 0.5725 & \underline{0.346} \\ \hline \hline RWSR-EDL & \textbf{22.335} & \textbf{0.6187} & \textbf{0.342} \\ \hline \end{tabular} \caption{Quantitative results of AIM2019 Challenge on Real-world image SR track. Note that FSSR is the champion method in AIM2019 Challenge.} \label{tab:aim19} \end{table} \begin{table}[ht] \centering \begin{tabular}{l|cccc} \hline Methods & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} & \multicolumn{1}{l}{MOS$\downarrow$} \\ \hline \hline EDSR~\cite{edsr} & \underline{25.31} & 0.6383 & 0.5784 &2.875 \\ ESRGAN~\cite{wang2018esrgan} & 19.06 & 0.2423 & 0.7552 &3.250 \\ ZSSR~\cite{zeroshot} & 25.13 & 0.6268 & 0.6160 &2.905 \\ KernelGAN~\cite{kernelgan} & 18.46 & 0.3826 & 0.7307 & 3.155 \\ FASRGAN~\cite{yan2021fine} & 21.86 & 0.6214 & 0.5499 &2.740 \\ Impressionism~\cite{Ji_2020_CVPR_Workshops} & 24.82 &\underline{0.6619} & \underline{0.2270} &2.430 \\\hline \hline RWSR-EDL & \textbf{25.40} & \textbf{0.6819} & \textbf{0.2222} &2.225 \\ \hline \end{tabular} \caption{Quantitative results for NTIRE2020 Challenge on Real-world image SR track. Note that Impressionism is the winning approach in the NTIRE2020 Challenge. 
} \label{tab:ntire20} \end{table} \begin{table}[ht] \footnotesize \begin{tabular}{l|cccc} \hline \multicolumn{1}{c|}{Method} & ESRGAN & FSSR & DASR & RWSR-EDL \\ \hline \hline \multicolumn{1}{c|}{Time (s/frame)} & 0.7971 & 0.7918 & \textbf{0.7465} & \underline{0.7632} \\ \hline \multicolumn{1}{c|}{Parameter} & 16,697,987 & 16,697,987 & 16,697,987 & 16,729,694 \\ \hline \end{tabular} \caption{Efficiency analysis on a 300$\times$200 image from the RealSR~\cite{realsr} testset with $4 \times$ factor. As FSSR and DASR adopt ESRGAN as the backbone, they have the same number of network parameters.} \label{tab:efficiency} \end{table} \begin{table}[ht] \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{Test Set} & \multicolumn{3}{c}{RealSR~\cite{realsr}} \\ \hline \multicolumn{1}{c|}{Metric} & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} & \multicolumn{1}{l}{Parameter} \\ \hline \hline Single Branch & \underline{27.691} & 0.255& 16,708,556 \\ Dual-Learning w/o Mask & 27.688 & \underline{0.254}& 16,719,125 \\ Dual-Learning w/ Mask & \textbf{27.775} & \textbf{0.250} & 16,729,694 \\ \hline \end{tabular} \caption{Ablation study on different branches.
With a similar number of parameters, `Dual-Learning w/ Mask' exhibits a significant improvement.} \label{Tab:branches} \end{table} \begin{table}[ht] \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{Test Set} & \multicolumn{3}{c}{RealSR~\cite{realsr}} \\ \hline \multicolumn{1}{c|}{Metric} & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline VGG-128 & 27.673 & 0.8058 & 0.254 \\ Patch-D &\textbf{27.775} & \textbf{0.8095} & \textbf{0.250} \\ \hline \end{tabular} \caption{Ablation study on the discriminator.} \label{Tab:discriminator} \end{table} \begin{table}[ht] \footnotesize \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{Test Set} & \multicolumn{3}{c}{RealSR~\cite{realsr}} \\ \hline \multicolumn{1}{c|}{Metric} & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline Ours(RealSR) w/o Noise Sampling & 27.715 & 0.8078 & 0.252 \\ Ours(RealSR) w/ Data Augmentation & \underline{27.775} & \underline{0.8095} & \underline{0.250} \\ Ours(RealSR + NTIRE2020) w/o NGDC & 27.557 & 0.8022 & 0.264 \\ Ours(RealSR + NTIRE2020) w/ NGDC & \textbf{27.802} & \textbf{0.8110} & \textbf{0.247} \\ \hline \end{tabular} \caption{Ablation study on the Noise-Guidance Data Collection (NGDC) strategy.
With an auxiliary dataset, NGDC consistently promotes performance on all evaluation metrics without any increase in training time.} \label{Tab:NGDC} \end{table} \begin{table}[ht] \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{Test Set} & \multicolumn{3}{c}{RealSR~\cite{realsr}} \\ \hline \multicolumn{1}{c|}{Metric} & \multicolumn{1}{l}{PSNR$\uparrow$} & \multicolumn{1}{l}{SSIM$\uparrow$} & \multicolumn{1}{l}{LPIPS$\downarrow$} \\ \hline \hline $\widetilde{\mathcal{L}}_{all}$ in Eq.~\ref{eqa:oriloss} & 26.127 &0.7551 &0.272 \\ $\mathcal{L}_{total}=\mathcal{L}_{pix}$ & 27.690 & 0.8098 & 0.256 \\ $\mathcal{L}_{total}=\mathcal{L}_{pix} + \mathcal{L}_{per}$ &\underline{27.730} &\underline{0.8101} & \underline{0.251} \\ $\mathcal{L}_{total}=\mathcal{L}_{pix} + \mathcal{L}_{adv}$ &27.504 &0.8006 & 0.254 \\ $\mathcal{L}_{total}=\mathcal{L}_{pix} + \mathcal{L}_{per} + \mathcal{L}_{adv}$ &\textbf{27.802} &\textbf{0.8110} & \textbf{0.247} \\ \hline \end{tabular} \caption{Ablation study on loss functions.} \label{Tab:loss} \end{table} \subsection{Quantitative and Qualitative Comparisons} \textbf{RealSR.} As depicted in Tab.~\ref{tab:realsr}, we compare the state-of-the-art methods on the RealSR dataset. Compared with DASR, our RWSR-EDL achieves a clear improvement on all three evaluation metrics. For instance, although DASR is a recently proposed method, RWSR-EDL surpasses it by 1.57 dB in PSNR and 0.045 in SSIM, which justifies the effectiveness of RWSR-EDL. Compared with the zero-shot learning method, RWSR-EDL still achieves a 1.80 dB improvement. Similarly, as shown in Tab.~\ref{tab:niqe}, the quantitative performance in terms of NIQE indicates a considerable improvement of RWSR-EDL over the other models. We also present the visual comparison in Fig.~\ref{fig:realsr}. It can be observed that the other single-branch methods obtain blurry results on the RealSR dataset.
By contrast, RWSR-EDL obtains clear structures and sharp details, which verifies the effectiveness of the exclusionary dual-learning mechanism and the NGDC strategy. \textbf{CameraSR.} In Tab.~\ref{tab:camerasr}, we present the quantitative comparison on the CameraSR dataset. Compared with DASR, our model achieves a 1.05 dB improvement. Besides, RWSR-EDL obtains a 0.008 improvement on the LPIPS index, which shows that our soft-mask mechanism strikes a good balance between L1- and perceptual-based minimization. As shown in Fig.~\ref{fig:camerasr}, RWSR-EDL also presents high-quality restoration with more details. For example, compared with DASR, our method recovers clearer lines of the bridge in the upper row of Fig.~\ref{fig:camerasr}, which shows that the soft-mask strategy preserves sharp structures under adversarial optimization. \textbf{AIM2019 Challenge.} As depicted in Tab.~\ref{tab:aim19}, we compare RWSR-EDL with the state-of-the-art methods on the AIM2019 challenge data. Note that FSSR is the winning entry of the AIM2019 challenge. Compared with DASR, our RWSR-EDL achieves 0.55 dB and 0.046 gains in PSNR and SSIM. Moreover, RWSR-EDL consistently takes first place on all three evaluation metrics, which fully justifies that our exclusionary dual-learning mechanism helps RWSR-EDL realize effective spatial attention within the multi-task paradigm. \textbf{NTIRE2020 Challenge.} In this experiment, we only adopt the NTIRE2020 data and disable the NGDC strategy, to observe the effectiveness of our exclusionary dual-learning intuitively. As shown in Tab.~\ref{tab:ntire20}, RWSR-EDL achieves an obvious improvement on all three evaluation metrics. Although Impressionism is the champion method of the NTIRE2020 challenge with high-quality enhancement, our model still achieves superior results, which verifies the effectiveness of the proposed learning paradigm.
Although EDSR has many more parameters and adopts only the L1 loss, RWSR-EDL achieves a 0.09 dB gain on the PSNR index. Besides, EDSR exhibits weak performance on the perceptual-based evaluation protocols. Moreover, we confirm the superior perceptual performance of RWSR-EDL via MOS testing. The proposed model achieves the best result, with an 8.5\% better MOS than Impressionism. The qualitative results depicted in Fig.~\ref{fig:ntire20} verify that RWSR-EDL obtains a significant visual quality improvement over Impressionism, with realistic texture, clear structure, and fewer artifacts. \textbf{Efficiency Analysis.} Despite achieving superior results in the quantitative comparisons, RWSR-EDL still presents competitive running efficiency. The comparison in Tab.~\ref{tab:efficiency} reveals that RWSR-EDL achieves competitive efficiency. Compared with ESRGAN, RWSR-EDL runs faster and delivers a significant improvement in restoration quality. Compared with DASR, our model delivers an obvious quantitative improvement with similar time consumption. \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{AIM19_2.png} \caption{Super-resolution results on the AIM19~\cite{AIM19} challenge data.} \label{fig:aim19_2} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{NTIRE.png} \caption{Super-resolution results on the NTIRE2020~\cite{Lugmayr2020ntire} challenge data.} \label{fig:ntire20} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.93\textwidth]{NTIRE_2.png} \caption{Super-resolution results on the NTIRE2020~\cite{Lugmayr2020ntire} challenge data.} \label{fig:ntire20_2} \end{figure*} \subsection{Ablation Study} In this section, we conduct an ablation study on the proposed components: the exclusionary dual-learning, the patch discriminator, and the NGDC strategy.
\textbf{Exclusionary Dual-Learning.} We first study the exclusionary dual-learning mechanism. As depicted in Tab.~\ref{Tab:branches}, `Single Branch' performs similarly to `Dual-Learning w/o Mask', which shows that a plain dual-branch structure does not help RealSR yet brings a significant parameter increase. However, combining the exclusionary soft mask with the dual-branch structure yields a clear improvement on both the PSNR and LPIPS metrics. Without a notable parameter increase, `Dual-Learning w/ Mask' surpasses `Dual-Learning w/o Mask' by 0.1 dB in PSNR and 0.005 in LPIPS, which clearly demonstrates that the proposed exclusionary soft-mask mechanism is well suited to dual-branch learning. \textbf{NGDC Strategy.} To fully justify the effectiveness of the NGDC strategy, we plot the training curves of the PSNR and LPIPS indices in Fig.~\ref{fig:curves}; specifically, we evaluate on the RealSR test set during the training phase to obtain the curves. As depicted in Fig.~\ref{fig:curves}, `Ours(RealSR + NTIRE2020) w/ NGDC' has converged while `Ours(RealSR + NTIRE2020) w/o NGDC' is still climbing, which shows that the proposed NGDC strategy can extract useful noise from the auxiliary dataset and exploit noise stacking to deliver superior performance. In Tab.~\ref{Tab:NGDC}, we also present a quantitative study of the NGDC strategy. Although the noise sampling strategy gives competitive performance within a single dataset, it fails to deliver a similar improvement when an auxiliary large-scale dataset is incorporated, because the training data become too heterogeneous. For example, `Ours(RealSR + NTIRE2020) w/o NGDC' scores lower on all evaluation metrics than `Ours(RealSR) w/ Data Augmentation'.
For a fair comparison, `Ours(RealSR + NTIRE2020) w/o NGDC' and `Ours(RealSR + NTIRE2020) w/ NGDC' both adopt the same data augmentation strategy as `Ours(RealSR) w/ Data Augmentation'. Notably, our NGDC strategy significantly improves the results when the auxiliary large-scale dataset and the noise sampling strategy are incorporated, which verifies that the auxiliary dataset contains a certain portion of negative samples and that noise-guided data collection is necessary. \textbf{Patch Discriminator.} We experiment with the discriminator used in adversarial training. As shown in Tab.~\ref{Tab:discriminator}, `Patch-D' achieves 27.75 dB while `VGG-128' obtains 26.67 dB, which justifies the effectiveness of the patch discriminator in real-world image enhancement. Moreover, the patch discriminator also brings a significant improvement in SSIM and LPIPS. We therefore replace VGG-128 with the patch discriminator as the default discriminator in the adversarial optimization. \textbf{Loss Function.} We compare different loss settings on the RealSR dataset. As depicted in Tab.~\ref{Tab:loss}, our model achieves promising PSNR results by incorporating $\mathcal{L}_{per}$ and $\mathcal{L}_{pix}$. Compared with the typical weighted-sum loss $\widetilde{\mathcal{L}}_{all}$ in Equ.~\ref{eqa:oriloss}, our method achieves gains of 1.68 dB in PSNR and 0.025 in LPIPS. The PSNR score drops when the adversarial loss is incorporated, yet the full model achieves a further perceptual improvement, which indicates that $\mathcal{L}_{per}$ is important for the RealSR task. Also, the fact that our full model achieves the best score in Tab.~\ref{Tab:loss} verifies the effectiveness of the proposed exclusionary dual-learning in L1- and perceptual- based cooperative optimization.
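Conceptually, the exclusionary soft mask realizes a pixel-wise convex combination of the L1-oriented and the perceptual-oriented branch outputs. The following NumPy sketch illustrates such a fusion; the function name, shapes, and values are illustrative assumptions, not our actual implementation:

```python
import numpy as np

def fuse_branches(out_l1, out_per, mask):
    # mask in [0, 1]: 1 routes a pixel to the L1 branch, 0 to the perceptual branch
    return mask * out_l1 + (1.0 - mask) * out_per

# Toy single-channel predictions of the two branches
out_l1  = np.full((4, 4), 0.2)   # smooth, L1-optimized output
out_per = np.full((4, 4), 0.8)   # detail-rich, perceptually optimized output
mask    = np.zeros((4, 4))
mask[:2] = 1.0                   # top half trusts the L1 branch

fused = fuse_branches(out_l1, out_per, mask)
assert np.allclose(fused[:2], 0.2) and np.allclose(fused[2:], 0.8)
```

Because the mask is soft rather than binary, gradients from both the L1 and the perceptual objectives can flow to every pixel, weighted by the mask.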
\section{Conclusion and Future Work} In this work, we revive exclusionary dual-learning to facilitate deep representations that exhibit more diversity under L1- and perceptual- minimization. With the novel exclusionary dual-learning mechanism, our RWSR-EDL delivers compelling results on real-world image super-resolution. Moreover, we present a noise-guidance data collection strategy, which yields an efficient training paradigm for learning from multiple datasets. Experimental results show that RWSR-EDL surpasses state-of-the-art real-world image super-resolution methods by a clear margin, with a comparable parameter count, on both perceptual- and Euclidean- based evaluation protocols. We plan to extend this work in the following directions. First, we would like to apply the proposed NGDC strategy to real-world image denoising, aiming to collect more noise samples from a given image and enable efficient denoising training. Second, we consider incorporating explainable deep learning into our framework to further improve the exclusionary dual-learning mechanism. \bibliographystyle{IEEEtran}
\section{Introduction} Scattering of light by light is a prediction of quantum electrodynamics (QED) that was first calculated in 1935, in fact prior to the full development of QED, in the low-energy limit by Euler and Kockel \cite{Euler:1935zz,Euler:1936}, and in the ultrarelativistic limit shortly thereafter by Akhiezer, Landau, and Pomeranchuk \cite{Akiezer1936,LL4}. The former calculations were extended by Heisenberg and Euler \cite{Heisenberg:1935qt}, who obtained an effective low-energy Lagrangian which includes background electromagnetic fields to all orders in the field strength (for historical reviews and references see \cite{Dunne:2004nc,Dunne:2012vv,Dittrich:2014bxa,Scharnhorst:2017wzh}; a short list of further relevant references with regard to applications in light-by-light scattering is given by \cite{Itzykson:1980rh,Bern:2001dg,Liang:2011sj,dEnterria:2013zqi,Klusek-Gawenda:2016euz,Rebhan:2017zdx,Ellis:2017edi,Gies:2017ezf}). In high-energy ultraperipheral collisions of heavy ions (HIC), evidence of the quantum mechanical process of light-by-light scattering has been presented for the first time by the ATLAS collaboration at the LHC \cite{Aaboud:2017bwk}, and more recently also by the CMS collaboration \cite{dEnterria:2018uly}. Light-by-light scattering can be studied through the large (almost) real photon fluxes available in ultraperipheral hadron-hadron, best lead-lead, collisions at the LHC. In noncentral HICs very strong magnetic fields are created perpendicular to the heavy-ion reaction plane, which, however, decay rapidly, but are still strong at collision time $\tau \simeq 1 \,\mathrm{fm}$.
The field strength has been estimated to reach \cite{Kharzeev:2007jp,Bzdak:2011yy,Deng:2012pc,Itakura:2013cia} \begin{equation} B/B_c (\tau = 0\,\mathrm{fm}) \simeq O(10^5)~~ \text{and}~~ B/B_c (\tau = 0.6\,\mathrm{fm}) \simeq O(10^2 \text{--} 10^3)~, \end{equation} at RHIC for impact parameters $b \simeq 10 \,\mathrm{fm}$, with the critical magnetic field $B_c = \frac{m_e^2}{e}\approx 0.86 \,{\rm MeV}^2\approx 4.4\times 10^{13}\,\mathrm{G}$ in terms of the electron mass $m_e$. At the LHC the estimated initial value is about a factor of 10 higher (but decays faster). Motivated by this, the present paper considers $\gamma +\gamma \rightarrow \gamma+\gamma$ scattering in the presence of weak and strong (constant) magnetic fields {in the center-of-mass system of the colliding photons, from $B/B_c\ll 1$ to $B/B_c \gg 1$ (but, parametrically, $B/B_c \ll \alpha^{-1/2}$ so that higher-loop corrections as well as the effects from dispersion and refraction of light in the magnetic field \cite{Adler:1971wn} remain negligible).} In the following this process will be studied in detail in the low-energy approximation provided by the Euler-Heisenberg Lagrangian. In this regime, the cross section rises proportionally to $\omega^6/m^8$ with increasing photon energy $\omega$. At $\omega\sim m$ the cross section reaches its maximum value $\propto \alpha^4/m^2$ and afterwards decays rapidly like $1/\omega^2$ \cite{Akiezer1936,Karplus:1950zz,Bern:2001dg} until the next heavier charged particle starts to contribute according to the Euler-Heisenberg Lagrangian, but with a maximum value that is suppressed by the corresponding lower inverse mass squared. After electrons and muons, also scalar charged particles such as pions and kaons contribute, which are described by a variant of the Euler-Heisenberg Lagrangian first obtained by Weisskopf \cite{Weisskopf:1936}.
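The quoted values of the critical field can be checked directly from the fundamental constants (a small Python sketch; the constants are rounded CODATA values):

```python
import math

# SI constants (rounded)
m_e  = 9.109e-31      # electron mass [kg]
c    = 2.998e8        # speed of light [m/s]
e    = 1.602e-19      # elementary charge [C]
hbar = 1.055e-34      # reduced Planck constant [J s]

# Critical (Schwinger) magnetic field B_c = m_e^2 c^2 / (e hbar) in SI,
# converted to Gauss (1 T = 1e4 G)
B_c_tesla = m_e**2 * c**2 / (e * hbar)
B_c_gauss = B_c_tesla * 1e4
print(f"B_c = {B_c_gauss:.2e} G")     # ~ 4.4e13 G

# In natural units (hbar = c = 1): B_c = m_e^2/e with e = sqrt(4 pi alpha)
alpha = 1 / 137.036
B_c_mev2 = 0.511**2 / math.sqrt(4 * math.pi * alpha)
print(f"B_c = {B_c_mev2:.2f} MeV^2")  # ~ 0.86 MeV^2
```

Both evaluations reproduce the values $B_c \approx 0.86\,\mathrm{MeV}^2 \approx 4.4\times 10^{13}\,\mathrm{G}$ stated above.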
Also working out the effects of magnetic background fields on virtual scalars, we find that magnetic fields lead to a monotonic decrease of the light-by-light scattering cross section in scalar QED, whereas the lowest Landau level of the Dirac spinors contributes a counteracting effect that dominates at large magnetic fields where it leads to a growing cross section. A theoretically particularly interesting case is given by the Euler-Heisenberg Lagrangian for charged vector bosons \cite{Vanyashin:1965ple} for which we find a light-by-light scattering cross section growing with magnetic field strength and diverging at the critical magnetic field where it has been conjectured that a charged vector boson condensate may form \cite{Ambjorn:1988fx,Chernodub:2010qx,Hidaka:2012mz,Chernodub:2013uja}. As discussed further in the concluding section, relatively more significant effects from magnetic fields are to be expected for lighter particles as they have smaller critical $B_c = m^2/e$. At least sufficiently below the mass threshold, where the cross section steeply rises with energy, the Euler-Heisenberg Lagrangian permits reliable calculations of the effects of magnetic fields on light-by-light scattering. \section{Effective Lagrangian} The one-loop effective QED Lagrangian for a Dirac particle with charge $e$ and mass $m$ in the presence of electromagnetic background fields with negligible gradients as obtained first by Heisenberg and Euler reads \cite{Heisenberg:1935qt,Schwinger:1951nm,Dittrich:2000zu} \begin{eqnarray} \mathcal{L}^{(1)}_{\text{spinor}}&=&-\frac{1}{8\pi^2}\!\! \int\limits_0^{\infty} \!\!\frac{ds}{s^3}\mathrm{e}^{\!-m^2\!s}\!~\biggl[(es)^2|{\cal G}| \coth\Bigl(\!es\bigl(\! \sqrt{{\cal F}^2\!+\!{\cal G}^2}\!+{\cal F}\bigr)\!^{\frac{1}{2}}\!\Bigr) \nonumber\\ &&\qquad\qquad\times\cot\Bigl(\! 
es\bigl(\!\sqrt{{\cal F}^2\!+\!{\cal G}^2} \!-{\cal F}\bigr)\!^{\frac{1}{2}}\!\Bigr)\!-\frac{2}{3}(es)^2 {\cal F}-1\biggr], \label{1} \end{eqnarray} where ${\cal F}$ and ${\cal G}$ denote the Lorentz scalar and pseudoscalar \begin{eqnarray} {\cal F}&:=&\frac{1}{4}F_{\mu\nu}F^{\mu\nu} = \frac{1}{2}({\mathbf{B}}^2-{\mathbf{E}}^2) \,, \label{5a}\\ {\cal G}&:=&\frac{1}{4}F_{\mu\nu}\, ^\star\! F^{\mu\nu} = {\mathbf{E}}\cdot {\mathbf{B}}\, , \label{5b} \end{eqnarray} that can be built from the field-strength tensor and its dual, \begin{eqnarray} F^{\mu\nu}&=&\partial^\mu A^\nu-\partial^\nu A^\mu\, \label{4a}\\ ^\star\! F^{\mu\nu}&=& \frac{1}{2} \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}\, .\label{4b} \end{eqnarray} The Maxwell Lagrangian is given by ${\cal L}^{(0)}=-{\cal F}$. An equivalent version of $(\ref{1})$ is \begin{equation} \mathcal{L}^{(1)}_{\text{spinor}}= -\frac{1}{8\pi^2}\!\! \int\limits_0^{\infty} \!\!\frac{ds}{s^3}\mathrm{e}^{\!-m^2\!s}~\!\biggl[(es)^2 a b~ \coth\Bigl(\!es a\!\Bigr) ~\cot\Bigl(\! es b \!\Bigr)\! -\frac{1}{3}(es)^2 (a^2 - b^2) -1\biggr]\! , \label{2c} \end{equation} where new variables are introduced\footnote{Here we follow the conventions used in Ref.~\cite{Dittrich:2000zu,Schubert:2001he} which differ from the original work of Heisenberg and Euler \cite{Heisenberg:1935qt} as well as the review \cite{Dunne:2004nc} in the notational reversal $a\leftrightarrow b$.} \begin{eqnarray} a:=\bigl(\sqrt{ {\cal F}^2+{\cal G}^2} +{\cal F}\bigr)^{\frac{1}{2}}\, &,&\, b:=\bigl(\sqrt{ {\cal F}^2+{\cal G}^2} -{\cal F}\bigr)^{\frac{1}{2}}\, ,\label{38a} \\ \Longrightarrow \qquad |{\cal G}|=ab\quad &,&\, {\cal F}=\frac{1}{2}(a^2-b^2)\, .\label{2b} \end{eqnarray} In terms of the variables $a$ and $b$, the low-energy one-loop effective Lagrangian of QED with Dirac spinors replaced by charged scalars reads \cite{Weisskopf:1936} \begin{equation} \mathcal{L}^{(1)}_{\text{scalar}} = \frac{1}{16\pi^2}\!\! 
\int\limits_0^{\infty} \!\!\frac{ds}{s^3}\mathrm{e}^{\!-m^2\!s}~\!\biggl[\frac{(es)^2 a b}{\sinh\left(es a\right) \,\sin\left(es b\right)} +\frac{1}{6}(es)^2 (a^2 - b^2) -1\biggr]\! , \label{Lscalar} \end{equation} {where $m$ is now the mass of the charged scalar particle}. This is of potential interest for elastic light-by-light scattering when the photon energy approaches the mass scale of pions. The Euler-Heisenberg Lagrangian for massive charged vector fields has been obtained in Ref.~\cite{Vanyashin:1965ple} for the case of a gyromagnetic factor $g=2$, which is carried by the electroweak $W^\pm$ gauge bosons and (approximately) also by the $\rho$ meson \cite{Samsonov:2003hs,Djukanovic:2005ag}. It reads \begin{equation} \mathcal{L}^{(1)}_{\text{vector}} = 3 \mathcal{L}^{(1)}_{\text{scalar}} +\frac{e^2}{4\pi^2}\!\! \int\limits_0^{\infty} \!\!\frac{ds}{s} \left[ \mathrm{e}^{-im^2 s} a\left(b \frac{\sin\left(es a\right)}{\sinh\left(es b\right)}-a\right) -\mathrm{e}^{-m^2 s} b\left(a \frac{\sin\left(es b\right)}{\sinh\left(es a\right)}-b\right) \right],\quad \label{Lchargedvector} \end{equation} {where $m$ on the right hand side, including the term $3 \mathcal{L}^{(1)}_{\text{scalar}}$, is the mass of the charged vector particle}. For hadronic scalar and vector mesons, the effective Lagrangians (\ref{Lscalar}) and (\ref{Lchargedvector}) apply as long as they can be treated as pointlike particles, which should be the case at sufficiently large photon wavelength and sufficiently large Larmor radius $r_q\propto m_q/(eB)$ of the quark constituents, compared to the mesons' charge radii. In the limit of weak fields, the various Euler-Heisenberg Lagrangians have the form \begin{equation} \mathcal{L}^{(1)}=c_1 {\cal F}^2 + c_2\, {\cal G}^2+\ldots~~,\label{8a} \end{equation} with $c_{1,2}$ given in Table \ref{tabc12}. 
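As a cross-check on the spinor entries of Table \ref{tabc12}, the coefficients $c_{1,2}$ can be recovered by expanding the proper-time integrand of Eq.~(\ref{2c}) to quartic order in the fields and carrying out the $s$ integral; a SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

a, b, s, e, m, t = sp.symbols('a b s e m t', positive=True)

# Bracket of the spinor proper-time integrand, Eq. (2c),
# with the fields rescaled by t to organize the weak-field expansion
x, y = e*s*t*a, e*s*t*b
bracket = x*y*sp.coth(x)*sp.cot(y) \
          - sp.Rational(1, 3)*(e*s*t)**2*(a**2 - b**2) - 1

# Lowest surviving (quartic) term of the weak-field expansion
quartic = sp.expand(bracket.series(t, 0, 5).removeO()).coeff(t, 4)

# L^(1) = -(1/8 pi^2) \int_0^oo ds/s^3 e^{-m^2 s} [bracket]
L4 = -sp.Rational(1, 8)/sp.pi**2 * sp.integrate(
        quartic*sp.exp(-m**2*s)/s**3, (s, 0, sp.oo))
L4 = sp.expand(L4)

# Match L4 = c1*F^2 + c2*G^2 with F = (a^2 - b^2)/2, G = a*b:
# coeff(a^4) = c1/4 and coeff(a^2 b^2) = -c1/2 + c2
c1 = 4*L4.coeff(a, 4)
c2 = L4.coeff(a, 2).coeff(b, 2) + c1/2

C = (e**2/(4*sp.pi))**2 / m**4          # C = alpha^2/m^4
assert sp.simplify(c1/C - sp.Rational(8, 45)) == 0
assert sp.simplify(c2/C - sp.Rational(14, 45)) == 0
```

The expansion reproduces $c_1 = \frac{8}{45}\,\alpha^2/m^4$ and $c_2 = \frac{14}{45}\,\alpha^2/m^4$, i.e. the familiar weak-field form $\mathcal{L}^{(1)}_{\rm spinor} \to \frac{2\alpha^2}{45 m^4}\left(4{\cal F}^2 + 7{\cal G}^2\right)$.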
These lowest-order terms are sufficient to obtain the cross section for low-energy light-by-light scattering with zero background fields \cite{Euler:1935zz} (see Ref.~\cite{Rebhan:2017zdx} for detailed results including polarization effects); in the following the corresponding calculations will be generalized to a constant magnetic background field of arbitrary strength. \section{Geometry and kinematics} \begin{figure}[b] \includegraphics{figkinetic} \caption{Kinematics of photon-photon collisions in the center-of-mass system.\label{fig:kinematics}} \end{figure} The scattering amplitude $\cal M$ for $\gamma (k_1) + \gamma (k_2) \rightarrow \gamma (k_3) + \gamma (k_4)$ is evaluated in the center-of-mass system, \begin{equation} k_1 = (\omega, \omega {\hat k}) , ~ k_2 = (\omega, - \omega {\hat k}) , \nonumber \end{equation} \begin{equation} k_3 = (\omega, \omega {\hat k^\prime}) , ~ k_4 = (\omega, - \omega {\hat k^\prime}) . \label{veck} \end{equation} The scattering plane is defined by \begin{equation} {\hat k} = (1, 0, 0) , ~ {\hat k^\prime} = (\cos\theta, \sin\theta, 0) . \label{k2} \end{equation} For linear polarizations the unit vectors ${\hat \epsilon}_i$ and ${\hat \epsilon}_o$ denote the directions in and out of the plane of scattering, such that they form a right-handed orthogonal basis with the photon momenta ${\hat k}, {\hat k^\prime}$, respectively, \begin{eqnarray} {\hat \epsilon}_i^1 = (0, 1, 0)~, ~ {\hat \epsilon}_o^1 = ( 0, 0, 1) , \nonumber \\ {\hat \epsilon}_i^2 = (0, 1, 0)~, ~ {\hat \epsilon}_o^2 = ( 0, 0, -1) , \nonumber \\ {\hat \epsilon}_i^3 = (-\sin\theta, \cos\theta, 0)~, ~ {\hat \epsilon}_o^3 = ( 0, 0, 1) , \nonumber \\ {\hat \epsilon}_i^4 = (-\sin\theta, \cos\theta, 0)~, ~ {\hat \epsilon}_o^4 = ( 0, 0, -1) ~.
\label{pol} \end{eqnarray} The radiation field strength vectors \cite{Adler:1971wn} are given by \begin{eqnarray} \mathbf{f}^{1 \pm}_{i,o} = \omega ({\hat k} \wedge {\hat \epsilon}_{i,o}^1 ~ \pm ~i~ {\hat \epsilon}_{i,o}^1)~, \nonumber \\ \mathbf{f}^{2 \pm}_{i,o} = \omega (- {\hat k} \wedge {\hat \epsilon}_{i,o}^2 ~ \pm ~i~ {\hat \epsilon}_{i,o}^2)~, \nonumber \\ \mathbf{f}^{3 \pm}_{i,o} = \omega ({{\hat k}^\prime} \wedge {\hat \epsilon}_{i,o}^3 ~ \pm ~i~ {\hat \epsilon}_{i,o}^3)~, \nonumber \\ \mathbf{f}^{4 \pm}_{i,o} = \omega (-{{\hat k}^\prime} \wedge {\hat \epsilon}_{i,o}^4 ~ \pm ~i~ {\hat \epsilon}_{i,o}^4)~. \label{fs} \end{eqnarray} The external fields are denoted by \begin{equation} {\mathbf{F}}^\pm = \mathbf{B} \pm ~i~ \mathbf{E} \label{ext} \end{equation} with components ${F}^\pm_r,~ r=1,2,3$, as for the components $f^\pm_r$ of $\mathbf{f}^\pm$. \section{Light-by-light scattering amplitudes and cross sections} Following Adler's seminal work on photon splitting in a magnetic field \cite{Adler:1971wn} (as reviewed in Sect.~3.4 of Ref.~\cite{Dittrich:2000zu}), the matrix element for the scattering $\gamma (k_1) + \gamma (k_2) \rightarrow \gamma (k_3) + \gamma (k_4)$ in the presence of external electromagnetic fields is given by derivatives of the Euler-Heisenberg Lagrangian ({\ref{1}}) (or its analogue (\ref{Lscalar}) in scalar QED and (\ref{Lchargedvector}) for charged vector mesons), which are finally evaluated for finite $\mathbf{B}$ and vanishing $\mathbf{E} = 0$. 
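For the manipulations that follow, it is useful to note that ${\cal F}$ and ${\cal G}$ are simple quadratic forms in $\mathbf{F}^\pm$. A SymPy check of these identities, from which $\partial {\cal F}/\partial F^\pm_r = \frac{1}{2}F^\pm_r$ and $\partial {\cal G}/\partial F^\pm_r = \pm\frac{1}{2i}F^\pm_r$ follow when $\mathbf{F}^+$ and $\mathbf{F}^-$ are treated as independent variables:

```python
import sympy as sp

B1, B2, B3, E1, E2, E3 = sp.symbols('B1 B2 B3 E1 E2 E3', real=True)
B = (B1, B2, B3)
E = (E1, E2, E3)

def dot(u, v):  # plain (non-conjugating) 3-vector product
    return sum(ui*vi for ui, vi in zip(u, v))

Fp = tuple(b + sp.I*ee for b, ee in zip(B, E))   # F^+ = B + iE
Fm = tuple(b - sp.I*ee for b, ee in zip(B, E))   # F^- = B - iE

calF = sp.Rational(1, 2)*(dot(B, B) - dot(E, E))  # (1/2)(B^2 - E^2)
calG = dot(E, B)                                  # E.B

# F = [(F^+)^2 + (F^-)^2]/4  and  G = [(F^+)^2 - (F^-)^2]/(4i)
assert sp.simplify(calF - (dot(Fp, Fp) + dot(Fm, Fm))/4) == 0
assert sp.simplify(calG - (dot(Fp, Fp) - dot(Fm, Fm))/(4*sp.I)) == 0
```

Both identities hold componentwise for arbitrary real $\mathbf{E}$ and $\mathbf{B}$.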
The rather lengthy expression reads \begin{eqnarray} {\cal M} = \Bigl(~f^{1 +}_r \cdot \frac{\partial}{\partial F^+_r} + f^{1 -}_r \cdot \frac{\partial}{\partial F^-_r} \Bigr) \Bigl(~f^{2 +}_s \cdot \frac{\partial}{\partial F^+_s} + f^{2 -}_ s\cdot \frac{\partial}{\partial F^-_s} \Bigr) \nonumber \\ \times ~\Bigl(~f^{3 +}_t \cdot \frac{\partial}{\partial F^+_t} + f^{3 -}_t \cdot \frac{\partial}{\partial F^-_t} \Bigr) \Bigl(~f^{4 +}_u \cdot \frac{\partial}{\partial F^+_u} + f^{4 -}_u \cdot \frac{\partial}{\partial F^-_u} \Bigr) \times \mathcal{L}^{(1)}~, \label{mat1} \end{eqnarray} and explicitly, \begin{equation} {\cal M} = f^{1 +}_r f^{2 +}_s f^{3 +}_t f^{4 +}_u ~\frac{\partial^4 \mathcal{L}^{(1)}}{\partial{F^+_r}{\partial F^+_s}{\partial F^+_t}{\partial F^+_u}} \nonumber \end{equation} \begin{equation} ~+~\Bigl( f^{1 +}_r f^{2 +}_s f^{3 +}_t f^{4 -}_u +f^{1 +}_r f^{2 +}_s f^{4 +}_t f^{3 -}_u + f^{1 +}_r f^{3 +}_s f^{4 +}_t f^{2 -}_u + f^{2 +}_r f^{3 +}_s f^{4 +}_t f^{1 -}_u \Bigr) ~\frac{\partial^4 \mathcal{L}^{(1)}}{\partial{F^+_r}{\partial F^+_s}{\partial F^+_t}{\partial F^-_u}} \nonumber \end{equation} \begin{equation} ~+~\Bigl( f^{1 +}_r f^{2 +}_s f^{3 -}_t f^{4 -}_u +f^{1 +}_r f^{3 +}_s f^{2 -}_t f^{4-}_u + f^{1 +}_r f^{4 +}_s f^{2 -}_t f^{3 -}_u + f^{2 +}_r f^{3 +}_s f^{1 -}_t f^{4 -}_u \nonumber \end{equation} \begin{equation} ~+~ f^{3 +}_r f^{4 +}_s f^{1 -}_t f^{2 -}_u + f^{2 +}_r f^{4 +}_s f^{1 -}_t f^{3 -}_u \Bigr) ~ \frac{\partial^4 \mathcal{L}^{(1)}}{\partial{F^+_r}{\partial F^+_s}{\partial F^-_t}{\partial F^-_u}} \nonumber \end{equation} \begin{equation} ~+~\Bigl(f^{1 -}_r f^{2 -}_s f^{3 -}_t f^{4 +}_u +f^{1 -}_r f^{2 -}_s f^{4 -}_t f^{3 +}_u + f^{1 -}_r f^{3 -}_s f^{4 -}_t f^{2 +}_u + f^{2 -}_r f^{3 -}_s f^{4 -}_t f^{1 +}_u \Bigr) ~ \frac{\partial^4 \mathcal{L}^{(1)}}{\partial{F^-_r}{\partial F^-_s}{\partial F^-_t}{\partial F^+_u}} \nonumber \end{equation} \begin{equation} + f^{1 -}_r f^{2 -}_s f^{3 -}_t f^{4 -}_u ~\frac{\partial^4 
\mathcal{L}^{(1)}}{\partial{F^-_r}{\partial F^-_s}{\partial F^-_t}{\partial F^-_u}}~. \label{matrix} \end{equation} \noindent Next the derivatives with respect to $F^\pm_r$ are expressed in terms of derivatives $\frac{\partial}{\partial {\cal F}}$ and $\frac{\partial}{\partial {\cal G}}$, e.g. \begin{equation} \frac{\partial}{\partial F^\pm_r} = \frac{1}{2} F^\pm_r ~(\frac{\partial}{\partial {\cal F}} \mp i~ \frac{\partial}{\partial {\cal G}})~, \end{equation} using \begin{equation} \frac{\partial {\cal F}}{\partial F^\pm_r}= \frac{1}{2} F^\pm_r~,~~ \frac{\partial {\cal G}}{\partial F^\pm_r}= \pm \frac{1}{2 i} F^\pm_r~, \end{equation} and \begin{equation} \frac{\partial^2}{{\partial F^+_r}{ \partial F^+_s} } = \frac{1}{2} \delta_{rs} (\frac{\partial}{\partial {\cal F}} -~ i~ \frac{\partial}{\partial {\cal G}})~ +\frac{1}{4} F^+_r F^+_s~ (\frac{\partial^2}{\partial {\cal F}^2} -~ 2i~ \frac{\partial^2}{\partial {\cal F} \partial {\cal G} } - \frac{\partial^2}{\partial {\cal G}^2})~, \quad\text{etc.}
\end{equation} An important typical derivative is \begin{eqnarray} \frac{\partial ^4 \mathcal{L}^{(1)}}{\partial{F^+_r}{\partial F^+_s}{\partial F^+_t}{\partial F^+_u}} ~=~\frac{1}{4} (\delta_{rs} \delta_{tu} + \delta_{rt} \delta_{su} + \delta_{st} \delta_{ru}) (\frac{\partial^2 \mathcal{L}^{(1)}}{\partial {\cal F}^2} - \frac{\partial^2 \mathcal{L}^{(1)}}{\partial {\cal G}^2}) \nonumber \\ ~+~ \frac{1}{8} \Bigl(\delta_{rs} F^+_t F^+_s +\delta_{rt} F^+_s F^+_u + \delta_{st} F^+_r F^+_u + \delta_{ru} F^+_s F^+_t + \delta_{su} F^+_r F^+_t +\delta_{tu} F^+_r F^+_s \Bigr) \nonumber \\ \times ~(\frac{\partial^3 \mathcal{L}^{(1)}}{\partial {\cal F}^3} - 3 \frac{\partial^3 \mathcal{L}^{(1)}}{ {\partial {\cal F}} {\partial {\cal G}^2}}) \nonumber \\ + \frac{1}{16} F^+_r F^+_s F^+_t F^+_u ~ \Bigl(\frac{\partial^4 \mathcal{L}^{(1)}}{\partial {\cal F}^4} - 6 \frac{\partial^4 \mathcal{L}^{(1)}}{ {\partial {\cal F}^2} {\partial {\cal G}^2}} + \frac{\partial^4 \mathcal{L}^{(1)}}{ {\partial {\cal G}^4}} \Bigr)~, \nonumber \\ \label{examp} \end{eqnarray} noting that odd derivatives with respect to ${\cal G}$ vanish for $\mathbf{E} = 0$, i.e. at $F^\pm_r = B_r$. \subsection{Weak magnetic field} \label{sec:weakfield} In order to obtain the $O(\xi^2)$, $\xi = B/B_c$, correction to the leading-order matrix element ${\cal M}_{HE}$ of eq.(\ref{HE1}) the derivatives of Eq.~(\ref{der3}) enter, i.e. 
\begin{equation} \delta {\cal M} = \frac{1}{8} ~{\cal M}_a ~ (\frac{\partial^3 \mathcal{L}^{(1)}}{\partial {\cal F}^3} - 3 \frac{\partial^3 \mathcal{L}^{(1)}}{\partial {\cal F} \partial {\cal G}^2}) ~+~ \frac{1}{8} ~{\cal M}_b ~ (\frac{\partial^3 \mathcal{L}^{(1)}}{\partial {\cal F}^3} + \frac{\partial^3 \mathcal{L}^{(1)}}{\partial {\cal F} \partial {\cal G}^2})~, \label{oxhi} \end{equation} evaluated at ${\cal F}={\cal G}=0$, where \begin{equation} {\cal M}_a = \nonumber \end{equation} \begin{equation} (\mathbf{f}^{1+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{3+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{2+}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{3+}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{3+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ ( ~+~ \Longleftrightarrow ~-~ ) ~, \label{MA1} \end{equation} and \begin{equation} {\cal M}_b = \nonumber \end{equation} \begin{equation} (\mathbf{f}^{1+} \cdot \mathbf{f}^{2+}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) (\mathbf{f}^{4-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{3+}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) (\mathbf{f}^{2-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{3+}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{4-} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{2+}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) (\mathbf{f}^{3-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{3+}) 
(\mathbf{f}^{4+} \cdot {\mathbf{B}}) (\mathbf{f}^{1-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{3-} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{1+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) (\mathbf{f}^{2-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{3+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{4-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{3+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{1-} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) (\mathbf{f}^{1-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{3-} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{3+} \cdot \mathbf{f}^{4+}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{2-} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{3-} \cdot \mathbf{f}^{4-}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2-} \cdot \mathbf{f}^{4-}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{2-} \cdot \mathbf{f}^{3-}) (\mathbf{f}^{1+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ (\mathbf{f}^{1-} \cdot \mathbf{f}^{4-}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1-} \cdot \mathbf{f}^{2-}) (\mathbf{f}^{3+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) ~+~ (\mathbf{f}^{1-} \cdot \mathbf{f}^{3-}) (\mathbf{f}^{2+} \cdot {\mathbf{B}}) (\mathbf{f}^{4+} \cdot {\mathbf{B}}) \nonumber \end{equation} \begin{equation} ~+~ ( ~+~ \Longleftrightarrow ~-~ ) ~. 
\label{MB1} \end{equation} With \begin{equation} \mathcal{L}^{(1)} = c_1 {\cal F}^2 + c_2 {\cal G}^2 - {\hat c}_1 \frac{{\cal F}^3}{B_c^2} - {\hat c}_2 \frac{{\cal F} {\cal G}^2}{B_c^2} \pm ..., \end{equation} and the explicit values derived in Appendix \ref{sec:weakLc} and tabulated in Table~\ref{tabc12}, the amplitudes for the linear polarizations in and out of the collision plane read\footnote{For vanishing magnetic background fields, this agrees with the results given in Ref.~\cite{Rebhan:2017zdx} except that a factor $-i$ has been absorbed in the definition of $\mathcal{M}$ as done also in Ref.~\cite{Adler:1971wn}.} \begin{eqnarray} \frac{\mathcal M_{oooo}}{\omega^4}&=&4 c_1(3+\cos^2\theta) + \Bxyz{-30\hat{c}_1}{-30\hat{c}_1}{-18\hat{c}_1+16\hat{c}_2}\xi^2 +\Bxyz{6\hat{c}_1}{-42\hat{c}_1}{-6\hat{c}_1}\xi^2 \cos^2\theta,\\ \frac{\mathcal M_{iiii}}{\omega^4}&=&4 c_1(3+\cos^2\theta) + \Bxyz{-18\hat{c}_1+4\hat{c}_2}{-18\hat{c}_1+4\hat{c}_2}{-66\hat{c}_1}\xi^2 +\Bxyz{-6\hat{c}_1-4\hat{c}_2}{-6\hat{c}_1+12\hat{c}_2}{-6\hat{c}_1}\xi^2 \cos^2\theta,\\ \frac{\mathcal M_{ooii}}{\omega^4}&=& -8 c_1+4c_2 (1+ \cos^2 \theta) + \Bxyz{12\hat{c}_1-6\hat{c}_2}{24\hat{c}_1-2\hat{c}_2}{24\hat{c}_1-14\hat{c}_2}\xi^2 +\Bxyz{2\hat{c}_2}{-14\hat{c}_2}{-2\hat{c}_2}\xi^2 \cos^2\theta,\\ \frac{\mathcal M_{iioo}}{\omega^4}&=& -8 c_1+4c_2 (1+ \cos^2 \theta) + \Bxyz{24\hat{c}_1-2\hat{c}_2}{12\hat{c}_1-6\hat{c}_2}{24\hat{c}_1-14\hat{c}_2}\xi^2 +\Bxyz{-12\hat{c}_1-2\hat{c}_2}{12\hat{c}_1-10\hat{c}_2}{-2\hat{c}_2}\xi^2 \cos^2\theta,\qquad\\ \frac{\mathcal M_{oioi,ioio}}{\omega^4}&=&4~(c_1+c_2)(1+\cos\theta)+2(c_2-c_1 )(3+\cos^2 \theta) + \Bxyz{3\hat{c}_1-9\hat{c}_2}{3\hat{c}_1-9\hat{c}_2}{9\hat{c}_1-19\hat{c}_2}\xi^2\nonumber\\ && +\Bxyz{-6\hat{c}_1-2\hat{c}_2}{-12\hat{c}_1-4\hat{c}_2}{-12\hat{c}_1-4\hat{c}_2}\xi^2\cos \theta +\Bxyz{3(\hat{c}_1+\hat{c}_2)}{9\hat{c}_1-11\hat{c}_2}{3\hat{c}_1-\hat{c}_2}\xi^2 \cos^2\theta,\\ {\mathcal M_{oiio,iooi}}&=&{\mathcal M_{oioi,ioio}}\Big|_{\cos \theta 
\to - \cos \theta}, \end{eqnarray} where the three entries within the curly brackets refer to $\mathbf{B}$ pointing in $x$, $y$, and $z$ direction, respectively. For such $\mathbf{B}$, the remaining amplitudes with an odd number of $i$ or $o$ polarizations vanish identically. \begin{table}[t] \caption{\label{tabc12}Coefficients $c_{1,2}/C$ and $\hat{c}_{1,2}/C$ with $C=\alpha^2/m^4$.} \begin{ruledtabular} \begin{tabular}{@{}ccccc@{}} & $c_1/C$ & $c_2/C$ & $\hat{c}_1/C$ & $\hat{c}_2/C$ \\ \colrule spinor QED& 8/45 & 14/45 & 64/315 & 104/315\\ scalar QED& 7/90 & 1/90 & 31/315 & 11/315 \\ supersymmetric QED & 1/3 & 1/3 & 2/5 & 2/5\\ charged massive vector& 29/10 & 27/10 & $-137/105$ & $-157/105$\\ \end{tabular} \end{ruledtabular} \end{table} While we refrain from listing the unwieldy general case of oblique orientations of the magnetic field for all amplitudes, Appendix \ref{sec:sigmaunpol} gives the general weak-field result for the resulting unpolarized cross section. The resulting total unpolarized cross section reads \begin{eqnarray} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}&=&\frac12 \int d\Omega \frac{d\sigma^{\rm unpol}}{d\Omega}\nonumber\\&=&\frac{7(3c_1^2-2c_1c_2+3c_2^2)\omega^6}{20\pi}\nonumber\\ &&+\frac{\omega^6}{15\pi} \frac{B_\parallel^2}{B_c^2}(-57 c_1 \hat{c}_1 + 18 \hat{c}_1 c_2 + 10 c_1 \hat{c}_2 - 23 c_2 \hat{c}_2)\nonumber\\ &&+\frac{\omega^6}{120\pi} \frac{B_\perp^2}{B_c^2}(-717 c_1 \hat{c}_1 + 243 \hat{c}_1 c_2 + 233 c_1 \hat{c}_2 - 391 c_2 \hat{c}_2), \end{eqnarray} where $B_\parallel$ is the magnetic field component parallel to the collision axis of the photons and $B_\perp$ the part orthogonal to it. 
For spinor QED this yields \begin{equation} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}_{\rm spinor}=\frac{973 \,\alpha^4 \omega^6}{10125\, \pi m^8} \left[ 1-\frac{38224 \,B_\parallel^2 + 65602 \,B_\perp^2}{20433~B_c^2}+O(\xi^4) \right], \end{equation} and for QED with a charged scalar field instead of a Dirac spinor one has \begin{equation} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}_{\rm scalar}=\frac{119\,\alpha^4 \omega^6}{20250\, \pi m^8} \left[ 1-\frac{11294 \,B_\parallel^2 + 16802\,B_\perp^2}{2499~B_c^2}+O(\xi^4) \right]. \end{equation} Scalar QED is relevant for light-by-light scattering at energies below the peak in the cross section produced by muons, since there charged pions also start to contribute. It is moreover particularly interesting in that it highlights the effects of the magnetic moments in spinor QED: In scalar QED, the total cross section is only about 6\% of the result in spinor QED. (Even with two charged scalars so that scalar QED has the same number of degrees of freedom, the cross section is less than a quarter of that of spinor QED.) This is reflected by the relatively small coefficients $c_2$ and $\hat{c}_2$ associated with the terms involving the square of the pseudoscalar ${\cal G}=\frac{1}{4}F_{\mu\nu}\, ^\star\! F^{\mu\nu}$ (see Table~\ref{tabc12}). Moreover, turning on a (subcritical) magnetic field decreases the total cross section more than twice as strongly as is the case in spinor QED. In fact, as will be shown below, the limit of strong magnetic fields is dominated by the lowest Landau level of Dirac spinors which eventually leads to an increase of the cross section. 
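The coefficients in these two expressions follow from the general unpolarized cross-section formula above by inserting the Table \ref{tabc12} values; this can be verified in exact rational arithmetic (an illustrative Python sketch):

```python
from fractions import Fraction as Fr

def sigma_coeffs(c1, c2, h1, h2):
    """Leading coefficient and B-field correction ratios of the total
    unpolarized cross section, in units of alpha^4 w^6 / (pi m^8)."""
    lead = Fr(7, 20) * (3*c1*c1 - 2*c1*c2 + 3*c2*c2)
    par  = Fr(1, 15)  * (-57*c1*h1 + 18*h1*c2 + 10*c1*h2 - 23*c2*h2)
    perp = Fr(1, 120) * (-717*c1*h1 + 243*h1*c2 + 233*c1*h2 - 391*c2*h2)
    return lead, par/lead, perp/lead

# Spinor QED: 973/10125 * [1 - (38224 Bpar^2 + 65602 Bperp^2)/(20433 Bc^2)]
lead, rpar, rperp = sigma_coeffs(Fr(8, 45), Fr(14, 45), Fr(64, 315), Fr(104, 315))
assert (lead, rpar, rperp) == (Fr(973, 10125), Fr(-38224, 20433), Fr(-65602, 20433))

# Scalar QED: 119/20250 * [1 - (11294 Bpar^2 + 16802 Bperp^2)/(2499 Bc^2)]
lead_s, rpar_s, rperp_s = sigma_coeffs(Fr(7, 90), Fr(1, 90), Fr(31, 315), Fr(11, 315))
assert (lead_s, rpar_s, rperp_s) == (Fr(119, 20250), Fr(-11294, 2499), Fr(-16802, 2499))

# One charged scalar gives ~6% of the spinor result; two give less than 1/4
assert abs(lead_s/lead - Fr(6, 100)) < Fr(1, 100)
assert 4*lead_s/lead < Fr(1, 4)
```

The last two assertions confirm the quantitative comparison of scalar and spinor QED made in the text.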
As an aside we note that supersymmetric QED, which in Ref.~\cite{Rebhan:2017zdx} has been shown to have particularly simple polarization patterns, gives the slightly simpler result \begin{equation} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}_{\rm sQED}=\frac{7\,\alpha^4 \omega^6}{45\, \pi m^8} \left[ 1-\frac{104 \,B_\parallel^2 + 158 \,B_\perp^2}{35~B_c^2}+O(\xi^4) \right]. \end{equation} Of potential interest to light-by-light scattering are also charged vector bosons, in particular at photon energies between the pion and the $\rho$ meson mass scales. In hadronic contributions to light-by-light scattering, which is a critical ingredient in calculations of the anomalous magnetic moment of muons \cite{Jegerlehner:2009ry}, it is usually assumed that at the scale of the $\rho$ meson one can switch to quark degrees of freedom \cite{Bern:2001dg}. However, light-by-light scattering through virtual quarks differs quite strongly from the one through virtual vector bosons. In Table \ref{tabc12} we have also given the coefficients in the expansion of the Euler-Heisenberg Lagrangian resulting from vector mesons with gyromagnetic factor $g=2$ \cite{Vanyashin:1965ple,Skalozub:1975ab,Preucil:2017wen} corresponding to nonabelian vector bosons as well as to vector mesons \cite{Djukanovic:2005ag} (see also \cite{Samsonov:2003hs}). The interactions due to the magnetic moment of the vector mesons turn out to have the effect of enhancing the light-by-light cross section already in the weak-field limit: \begin{equation} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}_{\rm vector}=\frac{2751\,\alpha^4 \omega^6}{250 \, \pi m^8} \left[1 + \frac{211846 \,B_\parallel^2 + 318298 \,B_\perp^2}{173313~B_c^2} +O(\xi^4)\right], \end{equation} which is a stark difference to both scalar and spinor QED. 
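The supersymmetric and charged-vector results, as well as the ratio of the vector-boson cross section to that of three charged scalars of the same mass, can be checked with the same exact rational arithmetic (Python sketch):

```python
from fractions import Fraction as Fr

def sigma_coeffs(c1, c2, h1, h2):
    # leading coefficient and B-field correction ratios of the total
    # unpolarized cross section, in units of alpha^4 w^6 / (pi m^8)
    lead = Fr(7, 20) * (3*c1*c1 - 2*c1*c2 + 3*c2*c2)
    par  = Fr(1, 15)  * (-57*c1*h1 + 18*h1*c2 + 10*c1*h2 - 23*c2*h2)
    perp = Fr(1, 120) * (-717*c1*h1 + 243*h1*c2 + 233*c1*h2 - 391*c2*h2)
    return lead, par/lead, perp/lead

# Supersymmetric QED: 7/45 * [1 - (104 Bpar^2 + 158 Bperp^2)/(35 Bc^2)]
assert sigma_coeffs(Fr(1, 3), Fr(1, 3), Fr(2, 5), Fr(2, 5)) == \
       (Fr(7, 45), Fr(-104, 35), Fr(-158, 35))

# Charged vector (g = 2): growing cross section,
# 2751/250 * [1 + (211846 Bpar^2 + 318298 Bperp^2)/(173313 Bc^2)]
lead_v, rpar_v, rperp_v = sigma_coeffs(Fr(29, 10), Fr(27, 10),
                                       Fr(-137, 105), Fr(-157, 105))
assert (lead_v, rpar_v, rperp_v) == \
       (Fr(2751, 250), Fr(211846, 173313), Fr(318298, 173313))

# Ratio to three charged scalars of the same mass: 3537/17 ~ 208.06
lead_s, _, _ = sigma_coeffs(Fr(7, 90), Fr(1, 90), Fr(31, 315), Fr(11, 315))
assert lead_v / (9*lead_s) == Fr(3537, 17)
```

Note in particular the sign flip of the magnetic-field corrections for the charged vector boson relative to scalar and spinor QED.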
As we shall discuss presently, this difference becomes even more pronounced as $\xi$ approaches unity, where one enters a regime with possible vector boson condensation \cite{Chernodub:2010qx,Hidaka:2012mz,Chernodub:2013uja}. Furthermore, already at vanishing magnetic field, the total cross section for a charged vector boson is very much larger than that produced by three scalar degrees of freedom of the same mass, to wit, by a factor of $3537/17\approx 208.06$, underlining the importance of the magnetic moment of the virtual particles in light-by-light scattering. \begin{figure}[b] \includegraphics[width=0.6\textwidth]{stotBperp-red.pdf} \caption{\label{fig:stotBperp}Total cross section for unpolarized photons as a function of $\xi=B/B_c$ with magnetic field perpendicular to the collision axis for spinor QED (dark-red line) and for QED with two charged scalars (light-red line), both normalized to the total cross section of spinor QED at zero magnetic field. The weak-field result to order $\xi^2$ is given by the corresponding dashed lines. The strong-field result (\ref{stotlarge}) for spinor QED is given by the dotted black line.} \end{figure} \begin{figure}[t] \includegraphics[width=0.6\textwidth]{stotBlong-blue.pdf} \caption{\label{fig:stotBlong}Same as Fig.~\ref{fig:stotBperp} but with magnetic field parallel to the collision axis (now dark-blue and light-blue coloring for spinor and scalar QED, respectively).} \end{figure} \subsection{Intermediate field strength} For $\xi= {B}/{B_c} \gtrsim 0.5$, the weak-field expansion breaks down and one has to resort to numerical evaluations of the integral representations of the various derivatives of $\mathcal{L}_c$ appearing in (\ref{mat1}). Our numerical results are shown in Fig.~\ref{fig:stotBperp} and \ref{fig:stotBlong} for magnetic fields perpendicular and parallel to the collision axis, respectively, where the former case is the one of potential relevance to HIC. 
In these plots we compare the result for spinor QED and scalar QED, where in the latter case two charged scalar particles are assumed so that the difference between the two results is entirely due to the additional interactions of the magnetic moment carried by Dirac spinors. Also given are the weak-field limits up to order $\xi^2$ derived above, which are seen to become inaccurate around $\xi\simeq 0.5$. For larger $\xi$, the results for scalar QED are seen to tend to zero rapidly ($\sim \xi^{-4}$ for $\xi\gg 1$), whereas the spinor QED result for the case of perpendicular magnetic field has a minimum at $\xi\simeq 1.5$ after which it grows quadratically with $\xi$. Further details that show up in differential cross sections are displayed in Appendix \ref{sec:polardiagrams}. In the case of QED with charged vector bosons, for which the total cross section with magnetic field perpendicular or longitudinal to the collision axis is evaluated in Fig.~\ref{fig:stotvector}, we find an increase which is quadratic in $\xi$ for small $\xi$ and which dramatically accelerates for larger $\xi$ with a divergence at $\xi=1$. In fact, at $\xi>1$ the lowest Landau level of a charged vector with $g=2$ becomes tachyonic, corresponding to the conjectured condensation of the charged vector bosons to form a superconducting vacuum \cite{Chernodub:2010qx,Hidaka:2012mz,Chernodub:2013uja}. As explained in Appendix \ref{sec:strongfieldlimitd4Ldy4}, the calculation of the light-by-light scattering cross section through the Euler-Heisenberg Lagrangian is valid only for $\omega^2/m^2\ll 1-\xi$ so that the singularity is never reached. 
\begin{figure}[h] \includegraphics[width=0.6\textwidth]{suutot-vector.pdf}% \caption{\label{fig:stotvector}Total unpolarized light-by-light scattering cross section for virtual charged vector bosons with $g=2$ as a function of $\xi=B/B_c$ with the magnetic field perpendicular (red lines) and longitudinal (blue lines) to the collision axis (dashed lines give the corresponding weak-field results). In order to highlight the effects of the magnetic moment of the charged vector bosons, the normalization constant $N'$ is chosen as the $B=0$ result for three charged scalars of the same charge and mass, which is a factor $3537/17\approx 208.06$ smaller than for one massive charged vector boson.} \end{figure} \subsection{Strong magnetic field} In the limit $\xi = {B}/{B_c} \gg 1$ (but parametrically $\xi^2 \ll 1/\alpha$) the dominant contribution in spinor QED comes from the derivative ${\partial^4 \mathcal{L}^{(1)}}/{\partial {\cal G}^4}$ at ${\cal G}=0$, so that e.g. \begin{equation} \frac{\partial ^4 \mathcal{L}^{(1)}}{\partial F^+_r\,\partial F^+_s\,\partial F^+_t\,\partial F^+_u} \rightarrow \frac{1}{16} B_r B_s B_t B_u ~\frac{\partial^4\mathcal{L}^{(1)}}{\partial {\cal G}^4}\Big|_{{\cal G}=0}~.
\end{equation} Thus the matrix element in leading order of a strong magnetic field becomes \begin{eqnarray} &&{{\cal M}}\big/\bigl(\frac{1}{16} \frac{\partial^4\mathcal{L}^{(1)}}{\partial {\cal G}^4}\bigr) =~ \nonumber \\&&=(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) ~+~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) \nonumber \\&&-~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) ~-~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) \nonumber \\&&-~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) ~-~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) \nonumber \\&&+~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) ~+~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) \nonumber \\&&+~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) ~+~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) \nonumber \\&&+~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) ~+~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) \nonumber 
\\&&-~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4+} \cdot \mathbf{B}) ~-~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3+} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) \nonumber \\&&-~(\mathbf{f}^{1-} \cdot \mathbf{B})(\mathbf{f}^{2+} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B}) ~-~(\mathbf{f}^{1+} \cdot \mathbf{B})(\mathbf{f}^{2-} \cdot \mathbf{B}) (\mathbf{f}^{3-} \cdot \mathbf{B})(\mathbf{f}^{4-} \cdot \mathbf{B})~. \label{larg} \end{eqnarray} An amplitude with polarization vectors $\hat\epsilon^{1,2,3,4}$ (cf.\ Eq.~(\ref{pol})) is given by \begin{equation} {\cal M}=\omega^4 \frac{\partial^4 \mathcal{L}^{(1)}}{\partial {\cal G}^4}\prod_{I=1}^4 \hat\epsilon^I\cdot \mathbf{B} = \frac{32 \alpha^2}{15} \left(\frac{\omega}{m}\right)^4 ~\xi~ \prod_{I=1}^4 \hat\epsilon^I\cdot \hat{\mathbf{B}} + O(\xi^0), \label{MstrongB} \end{equation} where $\hat{\mathbf{B}}$ is the unit vector in the direction of $\mathbf{B}$. For example, when $\mathbf{B}$ points in the $z$-direction, i.e., orthogonal to the scattering plane, the only nonvanishing amplitude for linear polarizations is \begin{equation} {\cal M}_{oooo}|_{B_x=B_y=0} = \frac{32 \alpha^2}{15} \left(\frac{\omega}{m}\right)^4 ~\xi + O(\xi^0), \end{equation} which is $\theta$-independent; when $\mathbf{B}$ points in the $y$-direction, i.e., in the scattering plane and orthogonal to the incoming photons, the only nonvanishing amplitude is \begin{equation} {\cal M}_{iiii}|_{B_x=B_z=0} = \frac{32 \alpha^2}{15} \left(\frac{\omega}{m}\right)^4 ~\xi~\cos^2\theta + O(\xi^0), \end{equation} which vanishes for outgoing photon momenta in the direction of $\mathbf{B}$.
The low-energy unpolarized cross section averaged over initial and summed over final polarizations for $\xi \gg 1$ and arbitrary orientation of $\mathbf{B}$ reads \begin{equation} \frac{d \sigma^{\rm unpol}_{\rm spinor}}{d \Omega} = \frac{1}{(16 \pi)^2 \omega^2} \frac{1}{4} ~\vert {\cal M} \vert^2 ~=~ \frac{ \alpha^4 \omega^6}{225 \pi^2 m^8}~\xi^2~ \sin^4\beta ~\sin^4\beta', \label{cross2} \end{equation} where $\beta$ is the angle between $\mathbf{B}$ and the direction of the incoming photon $\hat k$, and $\beta'$ is the angle between $\mathbf{B}$ and the outgoing direction $\hat k'$. Notice that this differential cross section has the form of the square of a dipole radiation pattern, with emission maximal in the plane orthogonal to the magnetic field. The resulting unpolarized total cross section for $\xi \gg 1$, with the factor $\frac12$ accounting for the identical photons in the final state and using the angular integral $\int d\Omega'\,\sin^4\beta' = 2\pi\int_0^\pi \sin^5\beta'\, d\beta' = 32\pi/15$, is \begin{equation}\label{stotlarge} \sigma(\gamma\gamma\to\gamma\gamma)^{\rm unpol}_{\rm spinor}=\frac12 \int d\Omega \frac{d\sigma^{\rm unpol}}{d\Omega} =\frac{ 16 \alpha^4 \omega^6}{3375 \pi m^8}~\xi^2~ \sin^4\beta . \end{equation} As shown in Appendix \ref{sec:strongfieldlimitd4Ldy4}, the feature that for ultrastrong magnetic fields the Euler-Heisenberg photon scattering cross section grows quadratically is absent in scalar QED. It is entirely due to the magnetic moments of the virtual Dirac spinors, which in the lowest Landau level lead to a cancellation of the magnetic interaction energy. \section{Discussion} In this paper we have investigated the effect of sizable background magnetic fields on the light-by-light scattering cross section in QED with charged scalar, spinor, or massive vector fields. We have found that the one-loop contribution of charged scalars to the Euler-Heisenberg Lagrangian leads to a strong suppression of the light-by-light scattering cross section for $B\gtrsim 0.5 B_c$.
For spinor QED, the cross section initially also decreases with increasing magnetic field, but this trend is reversed at $B\simeq 1.5 B_c$ after which the cross section grows quadratically with $B$. Although at HIC the magnetic field reaches extremely large values with respect to the critical one in terms of the electron mass $m_e$, so that the light-by-light scattering cross section would become correspondingly large, this applies only at low photon energies $\omega \lesssim m_e$. In the recent ATLAS measurement \cite{Aaboud:2017bwk} of light-by-light scattering the characteristic energy of the scattered photons is in the range of several GeV, with peak values of the background magnetic field $B \sim 10^5\, \mathrm{MeV}^2$. Because the cross section decreases as $\alpha^4/\omega^2$ for $\omega\gg m$, only massive loops can contribute effects due to external magnetic fields. The critical magnetic field corresponding to the bottom and the charm quarks with mass $m_b\approx 4.2$~GeV and $m_c\approx 1.25$~GeV is $B_c(m_b)\sim 6\times 10^7\, \mathrm{MeV}^2$ and $B_c(m_c)\sim 5\times 10^6 \, \mathrm{MeV}^2$, respectively. Effects from external magnetic fields at $\omega\lesssim m_b$ are therefore completely negligible. For energies $\omega\lesssim m_c$, such effects would still be tiny; noticeable effects on light-by-light scattering would seem to require photon energies $\omega\lesssim 0.1\,\mathrm{GeV}$, at or below the maximal contribution to the cross section from virtual muons for which $B_c(m_\mu)\sim 4\times 10^4\, \mathrm{MeV}^2$. However, with respect to the corresponding time scale $\omega^{-1}$, the magnetic field in HIC is then probably decaying too fast to leave measurable effects. 
A case of particular theoretical interest is that of charged $\rho$ mesons which have an unstable lowest Landau level at $B\ge B_c(m_\rho)\sim 2\times 10^6\, \mathrm{MeV}^2$, where a superconducting vacuum state formed by a condensate of $\rho^\pm$ mesons has been conjectured to arise \cite{Chernodub:2010qx}.\footnote{Evidence in favor of this scenario from lattice gauge theory has been presented in \cite{Buividovich:2010tn,Braguta:2011hq}; see however \cite{Luschevskaya:2016epp,Bali:2017ian}.} In this paper we have also determined the contribution of charged vector mesons to light-by-light scattering for photon energies $\omega\lesssim m_\rho$ as determined by the corresponding Euler-Heisenberg Lagrangian derived in \cite{Vanyashin:1965ple}. This turns out to be enhanced by relatively large numerical prefactors compared to scalar and spinor loops. Moreover, the cross section grows as the magnetic field strength is increased from zero. Unfortunately, even the peak values of the magnetic field reached in HIC would give only effects below the percent level to light-by-light scattering cross sections from virtual $\rho$ mesons (if the latter are included at all despite the large width of the $\rho$ meson). \begin{acknowledgments} The authors would like to thank Maxim Chernodub, Dima Kharzeev, Massimiliano Procura, and Vladimir Skalozub for useful discussions of the case of charged vector mesons. \end{acknowledgments}
\section{Introduction} Since the coining of the term Deep Learning, following the first “big win” of neural approaches marked by AlexNet\cite{krizhevsky2012imagenet} and the subsequent wide adoption of GPU-based tensor optimization frameworks such as Tensorflow\cite{abadi2016tensorflow} and Pytorch\cite{paszke2019pytorch}, a new era of machine learning has started. Important industry vendors proposed generation after generation of model inference and prediction serving approaches, yet most focused on SaaS and PaaS offerings. Nevertheless, many individual aspects, such as data pipeline preparation and post-processing of model results, remained mostly unsolved due to the multitude of available options. Besides the cloud computing solutions proposed by the big players on the market, various tools have been developed and released both by the tensorial framework creators - such as Tensorflow-serving\cite{olston2017tensorflow} - as well as by other third parties. However, once again, the problems of data acquisition, preprocessing pipelines, post-inference processing business logic, and final insight delivery remained with the user. Thus, it is now clear that the machine learning development community must strive to find solutions that would help commercial and academic environments migrate from experiment-centric data science to fully functional end-to-end pipelines and applications. MLOps, an area dominated by startups at this particular time, is a fast-growing market, estimated at more than 20 billion USD in 2019 and projected, according to various studies, to reach five times this figure by 2025. \subsection*{Edge vs Cloud} One of the most common divisions related to the taxonomy of AI system implementation environments is that defined by the target deployment infrastructure. Thus, there are two main infrastructure options: (i) the general cloud-based approach and (ii) the edge deployment approach.
While in the first setting, the serving pipelines and the data pre-processing and business logic reside in Cloud APIs as pipelines of microservices, in the second case, we have most or even all of these components running on local devices - usually embedded devices. While our proposed SOLIS framework focuses on edge "box" devices, we will argue that entire deployments can be quickly and transparently migrated and switched between edge and Cloud-hosted VMs. \section{Background} When analyzing the DevOps requirements for production-grade scalable machine learning systems, one can choose various methods for operationalizing the end-to-end pipelines. Currently, we have several options on the market, mainly divided into two categories: domain agnostic technologies, such as model serving engines that are not biased towards a particular domain, and more complex applications that are usually geared towards specific applications - such as time-series prediction or computer vision inference - and usually hosted as Cloud services. \subsection{Domain agnostic techniques} One of the most important MLOps pipelines currently on the market, used both in academia as well as in actual commercial applications, is Kubeflow\cite{bisong2019kubeflow}, an open-source project dedicated to streamlining machine learning pipeline deployment on Kubernetes. It focuses on compatibility - running independent and configurable steps, portability - being platform agnostic, as well as on scalability - automatic scaling based on load. It offers deployment solutions for all the machine learning lifecycle steps: data preparation, model training, prediction serving, and service management/monitoring. Although a powerful tool, it requires data science and Kubernetes proficiency. It works by allowing data scientists to develop independent steps. Each step is deployed with its own Docker image in a Kubernetes pod - a single instance of a running process in the cluster.
These steps are then loosely coupled into a machine learning pipeline. Although each step is easily reusable in new pipelines, Kubeflow offers no inheritance-like mechanism for these steps and components. An approach much simpler in scope is that of TensorFlow Serving\cite{olston2017tensorflow}. TensorFlow Serving is a serving system for machine learning models. It handles the inference and lifetime management of trained models without providing any solution for training, data acquisition, or business logic scripting. Its main advantage is that it allows for a low-code solution for model deployment in production. It works by deploying one or more “servables” - Tensorflow models, embeddings, vocabularies, etc. - on a gRPC or HTTP endpoint. The serving process is optimized by grouping requests into batches for joint execution on the GPU. As an extra, Tensorflow Serving supports simultaneous serving of different model versions and deployment of new model versions without changing the client code. Nevertheless, this is usually limited to Tensorflow framework models. ONNX\cite{shridhar2020interoperating}\cite{bai2019onnx} is an open format designed to represent machine learning models, regardless of what technologies were used to build them. It currently supports conversion from most popular machine learning frameworks like Tensorflow, PyTorch, SciKit-Learn, etc. Its purpose is to provide an interface allowing machine learning applications to be framework agnostic. Current development focuses on model conversion and inference acceleration, but some work has also been done on model training and serving. As ONNX is a community-run project, a large part of the development is done by third-party companies, each building its own project upon the standard. \subsection{Domain oriented engines} One such example of domain-oriented engines is Microsoft Azure Cognitive Services\cite{del2018introducing} for Computer Vision.
This Cloud-based ecosystem provides a set of APIs enabling users to integrate specific functionalities in their systems or applications. The platform currently offers end-to-end functionality for training custom deep learning models, providing user-friendly mechanisms for uploading datasets, training models, and serving models through APIs. However, to the best of our knowledge, at this moment, the platform does not provide an out-of-the-box integrated solution capable of real-time data acquisition from multiple sources, inference, post-inference processing with complex business logic, and payload packing for IoT integration. In terms of tensorial frameworks, Microsoft Azure Cognitive Services allows trained models to be exported in various formats such as Tensorflow, TensorflowJS, ONNX, or CoreML. \section{Approach} As argued in the introduction of this paper, we propose an end-to-end methodology that aims at standardizing a simple yet efficient approach to most of the critical stages of production-grade machine learning pipelines, with a particular focus on “box” systems - i.e., systems that run either in an embedded device or as a Cloud-deployed box. The objective of this proposal is to ensure a maximum level of technological independence for the developers and total freedom to operate, while allowing multiple levels of abstraction and ease of deploying new custom features. Particular attention has been paid to aspects such as multi-modal data streams, multi-purpose neural models, multiprocessing and multithreading of various tasks and jobs, plug-and-play post-processing of inference and prediction results, and open-ended delivery of final results. A clear direction of proposed further research and development is that of OmniNet - multi-purpose, multi-model, multi-stage neural graphs - a direction briefly presented in the current paper.
Last but not least, while our proposed approach has already been implemented and deployed for the computer (deep) vision domain within embedded systems designed for safety and security scenarios, we argue that the presented methods and functionalities can be used cross-domain in other areas such as on-prem or cloud-based predictive analytics. \subsection{SOLIS Properties} Several intrinsic properties of our proposed MLOps framework can also be seen as actual objectives: (i) integrating potentially any data source with low-code or no-code while acquiring parallel streams of data; (ii) quick adaptation to any configuration and communication approach, from MQTT to HTTP, from flat files to GraphQL; (iii) machine learning package and tensor framework agnosticism; (iv) low-code to no-code fast creation of business rules for post-processing of inferences and predictions. \subsubsection{Seamlessly integrate any data source} Our pipeline stream inception is initiated within a dedicated module for data acquisition that can gather data in parallel from any kind of feed, protocol, or data format, including complex sensors as well as URI-defined devices that produce data either in real time or not. Data source connections can be established in parallel with self-explanatory and straightforward JSON-based configuration files. Support for various data formats can be added by employing low-code plugin development in Python. Moreover, multiple input streams, including multi-modal ones, can be configured as pre-aggregated streams and thus served downstream to the pipeline as single data packages.
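As an illustration, a stream configuration in this JSON-based style might look as follows. The key names and stream types below are our own assumptions for the sketch, not the actual SOLIS schema: two independent sources and a meta-stream that pre-aggregates them into a single downstream package.

```python
import json

# Hypothetical SOLIS-style stream configuration (illustrative schema only):
# two parallel data sources plus a meta-stream that aggregates them.
stream_config = {
    "STREAMS": [
        {"NAME": "gate_camera", "TYPE": "rtsp", "URL": "rtsp://10.0.0.5/live"},
        {"NAME": "door_sensor", "TYPE": "mqtt", "TOPIC": "sensors/door"},
        {
            "NAME": "entrance_meta",
            "TYPE": "meta",  # re-combines the two streams into one package
            "COLLECTED_STREAMS": ["gate_camera", "door_sensor"],
        },
    ]
}

# Such a file would be stored on disk and hot-reloaded by the
# configuration module while the pipeline runs.
serialized = json.dumps(stream_config, indent=2)
```

In this style, adding a new data source amounts to appending one entry to the list and, if the format is new, supplying a small Python acquisition plugin for it.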
\subsubsection{Versatile communication and configuration for third-parties} The \textit{SOLIS} pipeline uses a configurable communication module able to integrate multiple types of protocols such as AMQP, MQTT, HTTP, SQL, and GraphQL, with a low-code method for implementing new ones; thus \textit{SOLIS} can potentially send data to and receive data from any consumer, regardless of what technology it uses. To further simplify integration with external services, an input-output \textit{formatter} middleware-like module is provided to alter inbound and outbound payloads so that they comply with the formats required by each consumer use-case. Each pipeline stage is fully adaptable by using an internal configuration system to provide a customizable framework. This configuration system is split into two sections: the application configuration that handles the main aspects of the pipeline and the configuration of the streams (Figure \ref{fig:config}) that handles data acquisition and functionalities. Since the configuration module allows us to change the system behavior while it runs, specific functionalities can be stopped, started, or changed on the fly. \begin{figure}[H] \centering \fbox{\includegraphics[width=0.6\textwidth]{config_streams.png}} \caption{Configuration file} \label{fig:config} \end{figure} \subsubsection{Domain and framework agnosticism} The \textit{SOLIS} inference module supports model inference on multiple tensor frameworks such as Tensorflow and Pytorch, as well as on other machine learning libraries like sklearn\cite{pedregosa2011scikit}. All the deployment and initial configuration tasks are done based on a set of automatic procedures that handle environment creation, application download, and application configuration. The \textit{SOLIS} pipeline is designed to run on multiple operating systems - Windows, Linux, macOS, etc. - and multiple hardware platforms - x86, ARM, etc.
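A minimal sketch of such a \textit{formatter} plugin is shown below; the class and method names are hypothetical assumptions for illustration, not the actual SOLIS API. The formatter maps the internal payload onto the schema a specific third-party consumer expects, leaving the rest of the pipeline untouched.

```python
# Hypothetical sketch of a SOLIS-style output formatter plugin.
# All names below are illustrative assumptions, not the real SOLIS API.

class BaseFormatter:
    """Common interface every formatter plugin implements."""
    def format_output(self, payload: dict) -> dict:
        raise NotImplementedError

class AlarmSystemFormatter(BaseFormatter):
    """Adapts the internal payload to a consumer-defined schema."""
    def format_output(self, payload: dict) -> dict:
        return {
            "deviceId": payload["box_id"],
            "eventType": payload["alert_type"].upper(),
            "ts": payload["timestamp"],
        }
```

Swapping consumers then reduces to selecting a different formatter class in the configuration, with no change to the business logic that produced the payload.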
As a result of the fully customizable pipeline stages, \textit{SOLIS} is not constrained to any particular domain. Consequently, \textit{SOLIS} supports a wide range of end-user business functionalities for multiple areas like Computer Vision or Predictive Analytics. \subsubsection{Low-code to no-code for developing new business features} This module is responsible for running specific business rules using all the data provided by the previous pipeline steps, such as model predictions or raw input data. The entire business logic can be implemented in a single Python \textit{plugin}, without knowledge of any technical details regarding the internals of the rest of the pipeline. What is probably more important is that these business logic \textit{plugins} can be implemented by engineers with no data science background. Finally, functionalities can be implemented using a no-code approach, only by specifying rules to be executed by a special-purpose \textit{plugin}. \subsection{Pipeline outline} The main entry point in \textit{SOLIS} is the \textit{"main loop"} that orchestrates the execution of all the modules, and thus the execution of the entire pipeline. In this section, we will describe the main loop algorithm (Algorithm \ref{alg:mainloop}) with its multi-stage methodology. In the first stage of the pipeline, \textit{SOLIS} checks if any new updates have been received since the last check. The communication module is responsible for acquiring any new configuration from external applications, while the configuration module handles the incoming configuration messages and checks their validity. In the second stage, \textit{SOLIS} updates its internal state using the updates received from the first stage. Thus, \textit{SOLIS} can start, stop, or update data acquisition threads or business functionalities depending on the type of command received from the external application.
The third stage of the pipeline consists of collecting real-time data from all streams in an asynchronous and parallel fashion. Once the data is collected, it is packed and sent downstream. In the fourth and fifth stages of the pipeline, \textit{SOLIS} evaluates which parallel inference and prediction \textit{serving processes} are required, while fully managing the process and memory allocation and deallocation. A serving process is defined as an end-to-end inference or prediction pipeline able to run either as an external process or as an internal sequential execution thread. Further downstream, at the sixth stage, the pipeline handles the business features execution based on the upstream raw initial data and the serving process results provided by the fifth stage. In the last stage of the pipeline, \textit{SOLIS} prepares and sends all the payloads using the communication module. This allows \textit{SOLIS} to repeat the entire loop asynchronously and in parallel while larger payloads are still being sent over.
\begin{algorithm}[H] \caption{\textit{SOLIS} box "main loop"}\label{alg:mainloop} \textbf{Input:} $cfg\_streams$ \begin{algorithmic} \While{$True$} \State $updates \gets$ \textbf{call} receive updates from external application; \vspace{1mm} \State $data \gets$ \textbf{async threaded calls} collect data from all streams $(cfg\_streams)$; \vspace{1mm} \State $cfg\_streams,\: business\_feats \gets$ \textbf{call} update box internal state $(cfg\_streams,\: updates)$; \vspace{1mm} \State $models \gets$ \textbf{call} get business features models $(business\_feats)$; \vspace{1mm} \State $inferences \gets$ \textbf{parallel calls} inference $(models, \: data)$; \vspace{1mm} \State $payloads \gets$ \textbf{threaded calls} execute $(business\_feats,\: data,\: inferences)$; \vspace{1mm} \State \textbf{async threaded calls} send $(payloads)$; \vspace{1mm} \EndWhile \end{algorithmic} \end{algorithm} Our past experience, correlated with the work of other practitioners in the field, showed us that no machine learning model performs at its peak across every kind of hardware environment and software condition. That is why we decided to fine-tune our models based on each specific use case. To obtain this kind of data, a module was designed and implemented in our system with the primary purpose of collecting specific data at regular time intervals or when particular triggers are fired. The collected data is later sent to our model training and fine-tuning pipelines to improve the performance of our current machine learning models. \subsection{The plugin approach} One essential property of the SOLIS framework is the ability to quickly deploy low-code plugin features at any point in the end-to-end pipeline, whether adapting to a new custom data source or implementing complex business logic that must post-process the predictions.
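To make the plugin idea concrete, the following is a minimal sketch of what a business-feature plugin could look like. The base class and the \textit{process} signature are hypothetical assumptions for illustration, not the actual SOLIS interface; the point is that all domain logic fits in one small Python class with no knowledge of acquisition, inference, or communication internals.

```python
# Hypothetical sketch of a SOLIS-style business-feature plugin; class
# names and the process() signature are illustrative assumptions.

class BasePlugin:
    """All business logic lives in small classes like this one; the plugin
    never touches acquisition, inference, or communication internals."""
    def __init__(self, config: dict):
        self.config = config

    def process(self, data: dict, inferences: list):
        raise NotImplementedError

class IntrusionAlertPlugin(BasePlugin):
    """Emit an alert when a person is detected outside working hours."""
    def process(self, data: dict, inferences: list):
        hour = data.get("hour", 12)
        persons = [d for d in inferences if d.get("label") == "person"]
        in_hours = self.config["open_hour"] <= hour < self.config["close_hour"]
        if persons and not in_hours:
            return {"alert": "intrusion", "count": len(persons)}
        return None  # nothing to send downstream
```

In a no-code setting, the same behavior would instead be obtained by supplying only the rule parameters (here, the working hours) to a pre-built special-purpose plugin.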
Each module in the \textit{SOLIS} pipeline - i.e., data acquisition, communication, inference, business features, and post-processing - is designed to support low-code, dynamic, and collaborative work approaches, requiring no data science skills, within development teams that integrate our system in various applications. One of the most important things we aimed to achieve is \textbf{\textit{no data science or MLOps proficiency required}} in integrators' teams for further development. Based on the system configurations, each module manager initializes the required custom plugins, making \textit{SOLIS} a versatile engine that can be integrated into almost any larger application. Adding new features without compromising usability is a complex and often tedious task for any software development project, and even more so in machine-learning-related ones. The \textit{plugin} approach that we have developed enables almost effortless integration of \textit{SOLIS}, even when custom new requirements are yet to be satisfied. Each plugin template of the plugin ecosystem within each \textit{SOLIS} module is well documented and defines very clear methods that should be implemented. The \textit{data acquisition} step can be configured to use any live, non-live, sequential, non-sequential, structured, or unstructured data stream. Multi-modal streams, such as streams combining structured sensor data with an unstructured video feed, can be created, as well as meta-streams that re-combine multiple input streams into one flow. Regarding the \textit{serving processes} module, it is worth mentioning that it supports any machine learning / deep learning framework. Thus, any in-house or open-source model can be employed. Having a system that is capable of communicating with the external \textit{world} - i.e., its overall ecosystem - is a must, whether the system runs inside a VPN or has access over the internet.
The \textit{communication} between \textit{SOLIS} and any payload-consuming applications within the proposed ecosystem can be done via MQTT\cite{mqtt}, AMQP\cite{amqp}, HTTP\cite{RFC2616}, sockets, or any other protocol that enables message exchange and can be quickly wrapped in a low-code Python \textit{plugin}. By default, \textit{SOLIS} already provides MQTT and AMQP \textit{communication plugins}. To make the integration easier, \textit{SOLIS} does not impose a certain structure on the payloads that are sent by the communication module. Integrating with 3rd parties that already have strict communication protocols and payload definitions is possible through the use of a separate module that is in charge of \textit{post-processing} the system outputs so that they are formatted exactly as required by the external application. \textbf{\textit{Eyes on the end-client}} - Employing this low-code plugin-based method throughout the entire MLOps pipeline empowers development teams to address potentially any kind of end-to-end real-life use-case. \subsection{Inference} \subsubsection{OmniNet approach} The main objective of our proposed OmniNet DAG deployment architecture is to have multiple task-oriented neural models cooperate and integrate with each other in a manner similar to that of hydra networks\cite{mullapudi2018hydranets}. Nevertheless, while hydra networks usually rely on a single backbone feature extractor, in our approach we propose an arbitrary number of backbone graphs that can provide features for subsequent graphs. Complex architectures - such as an EfficientNet\cite{tan2019efficientnet} backbone running on video volumes (4D), followed by seq-2-seq custom analyzers parallelized with frame-by-frame second-stage classification directed acyclic graphs - can be configured both for end-to-end training and, more importantly, for efficient inference.
Other essential features of our proposed OmniNet approach are (i) multi-stage graphs that are fully trainable, while early-stage graphs can be used as ``frozen'' graphs in order to train later-stage ones, (ii) fully parallelizable operations that are directly optimized on GPU, while (iii) keeping a low memory footprint both in RAM and VRAM. \subsubsection{In-process vs parallel multi-process inference and memory management} A certain challenge when dealing with multiple concurrent directed acyclic graphs (DAG) on GPU is optimizing the inference speed for all the DAGs employed by the business features. Running all the DAGs in the same process imposes sequential execution, and thus the total inference time at one step is the sum of each individual DAG inference time, $T_I = T_{DAG_1} + T_{DAG_2} + ... + T_{DAG_N}$. Therefore, in order to optimize $T_I$, \textit{SOLIS} uses an internally developed parallel multi-process inference execution mechanism with GPU memory management that ensures $T_I = \max (T_{DAG_1}, T_{DAG_2}, ..., T_{DAG_N}) + \varepsilon$. Even though speed is a critical parameter in near-real-time machine learning inference-based systems, there is another critical challenge when it comes to concurrency on GPU, and that is \textit{error containment}. Two significant issues emerged from our studies with production-grade ML systems, more specifically how to ensure that all graphs except the one that generated an error at a certain point keep executing (i) in an out-of-memory (OOM) situation, when the GPU tries to accommodate one more DAG, or (ii) if the operations of one graph generate unpredictable errors when they are executed on GPU. Our proposed solution for these issues is isolating each graph execution in separate processes, making the internally developed parallel multi-process inference execution mechanism a reliable solution with a high degree of dependability and error containment for the encountered challenges.
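The per-worker containment logic and the timing argument above can be sketched in a few lines. This is a minimal illustration, not the SOLIS implementation: the function name `run_dag_safely`, the toy DAG functions, and the timing figures (including the epsilon value) are all hypothetical, and the containment wrapper is executed in-process here for brevity, whereas in SOLIS each call lives in its own worker process.

```python
from typing import Any, Callable, Dict


def run_dag_safely(name: str, dag_fn: Callable[[Any], Any], batch: Any) -> Dict[str, Any]:
    # An exception in one DAG (OOM, bad GPU op, anything) must never take
    # down the sibling DAGs: report the failure instead of propagating it.
    try:
        return {"dag": name, "ok": True, "output": dag_fn(batch)}
    except Exception as exc:
        return {"dag": name, "ok": False, "error": repr(exc)}


def detector(batch):
    return [x * 2 for x in batch]


def classifier(batch):
    raise RuntimeError("CUDA out of memory (simulated)")


results = [run_dag_safely(name, fn, [1, 2, 3])
           for name, fn in [("detector", detector), ("classifier", classifier)]]
print([r["ok"] for r in results])  # [True, False]

# Timing model from the text: sequential vs parallel execution of N DAGs.
t_dag = [0.040, 0.025, 0.030]  # hypothetical per-DAG inference times (seconds)
t_seq = sum(t_dag)             # same-process: T_I is the sum over all DAGs
t_par = max(t_dag) + 0.002     # multi-process: T_I = max(...) + epsilon
print(t_seq > t_par)  # True
```

The failing DAG is reported, while its siblings complete, which is the behaviour the multi-process isolation is meant to guarantee.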
While a multi-threading solution would be simpler to implement, eliminating the need for data transfer, the multi-process solution solves error containment. This approach opens wide horizons in terms of inference compute - distributed or not - and serving configurability. It is worth mentioning that (i) cross-domain tasks can be addressed on the same box, and (ii) the processes can be spawned not only on the box but also on other devices and/or clusters. While in most of the experimental as well as production environments we used Tensorflow\cite{abadi2016tensorflow} frozen DAGs or, similarly, a Pytorch\cite{paszke2019pytorch} approach based on TorchScript\cite{devito2019torchscript}, any other tensor framework can be used thanks to the multi-processing approach, with no real impediment to using ``competing'' tensor frameworks in parallel. Thus, for example, one can define a simple Gaussian model in Numpy for simple structured sensor data anomaly detection, use at the same time a complex set of TorchScript\cite{devito2019torchscript} models chained directly in GPU memory to minimize GPU transfer/offload in multi-stage inference, and finally a set of Tensorflow frozen graph models - all of these running on the same box. \section{Conclusions} In this whitepaper, we propose a fully configurable end-to-end methodology that enables easy deployment and configuration of entire application pipelines powered by machine learning, allowing non-data-scientists to participate in the development process and in the delivery of custom features to end consumers. The proposed architecture scales well with various computing capabilities and allows multiple tensor computation frameworks and execution engines for the deployment of neural models or non-DAG models. While our work is far from over, we are confident that this architecture will gradually grow into a potential machine learning operations standard.
The OmniNet neural model design principles and architectural details have not been presented in depth within this paper, as further research is still underway; they will be presented in a subsequent paper. \bibliographystyle{unsrt}
\section{Introduction} Machine learning (ML) \cite{bishop_pattern_2011,cover_elements_1991,hastie2009elements,hundred} refers to a broad field of study, with multifaceted applications of cross-disciplinary breadth. ML {is a subset of Artificial Intelligence (AI) which} ultimately aims at developing computer algorithms that improve automatically through experience. The core idea is that systems can learn from data, so as to identify distinctive patterns and consequently make decisions, with minimal human intervention. The range of applications of ML methodologies is extremely vast \cite{sutton2018reinforcement,graves2013speech,sebe2005machine,grigorescu2020survey}, and still growing at a steady pace due to the pressing need to cope with the efficient handling of big data \cite{chen2014big}. Biomimetic approaches to sub-symbolic AI \cite{rosenblatt1961principles} inspired the design of powerful algorithms. The latter sought to reproduce the unconscious processes underlying fast perception, the neurological paths for rapid decision making, as employed e.g. for the recognition of faces \cite{meyers2008using} or spoken words \cite{caponetti2011biologically}. An early example of a sub-symbolic brain-inspired AI was the perceptron \cite{rosenblatt1958perceptron}, the influential ancestor of deep neural networks (NN) \cite{bengio2007greedy,Goodfellow-et-al-2016}. The perceptron is indeed an algorithm for supervised learning of binary classifiers. It is a linear classifier, meaning that its forecasts are based on a linear prediction function which combines a set of weights with the feature vector. Analogous to neurons, the perceptron adds up its inputs: if the resulting sum is above a given threshold the perceptron fires (returns the value 1), otherwise it does not (and the output equals zero). Modern multilayer perceptrons account for multiple hidden layers with non-linear activation functions.
The learning is achieved via {minimizing the classification error.} {Single or multilayered perceptrons should be trained by examples \cite{bengio2007greedy,hinton2006fast,rumelhart1988learning}. Supervised learning indeed requires a large set of positive and negative examples, the training set, labelled with their reference category.} The perceptrons' acquired ability to perform classification is eventually stored in a finite collection of numbers, the weights and thresholds that were learned during the successive epochs of the supervised training. To date, it is not clear how such a huge collection of numbers (hundreds of millions of weights in state-of-the-art ML applications) is synergistically interlaced for the deep networks to execute the assigned tasks, with an exceptional degree of robustness and accuracy \cite{xie2020explainable,hinton2015distilling,erhan2010understanding}. Starting from these premises, the aims of this paper are manifold. On the one hand, we will develop a novel learning scheme which is anchored in reciprocal space. Instead of {iteratively} adjusting the weights of the edges that define the connections among nodes, we will modify {the spectra of a collection of suitably engineered matrices that bridge adjacent layers. To eventually recover a multilayered feedforward architecture in direct space, we postulate a nested indentation of the associated eigenvectors. The latter act as the effective gears of a processing device operated in reciprocal space. The directed indentation between stacks of adjacent eigenvectors yields a compression of the activation pattern, which is eventually delivered to the detection nodes.} {As a starting point, we assume the eigenvectors to be frozen to a reference setting which fulfills the prescribed conditions.
The learning is hence solely restricted to the eigenvalues, a choice which amounts to performing a {\it global} training, targeted at identifying key collective modes, the selected eigen-directions, for carrying out the assigned classification task. The idea of conducting a global training on a subset of parameters has also been proposed in other works \cite{frankle2020training, Gabri__2019}. This is at odds with the usual approach to machine learning, where {\it local} adjustments of pairwise weights are implemented in direct space. As we shall prove, tuning the eigenvalues while freezing the eigenvectors yields performances superior to those reached with usual (local) techniques bound to operate with an identical number of free parameters, within an equivalent network architecture. Eigenvalues are therefore identified as a key target of the learning process, proving more fundamental than any other set of identical cardinality allocated in direct space. Remarkably, the distribution of weights obtained when applying the spectral learning technique restricted to the eigenvalues is close to that recovered when training the neural network in direct space, with no restrictions on the parameters to be adjusted. In this respect, spectral learning bound to the eigenvalues could provide a viable strategy for the pre-training of deep neural networks. Further, the set of trainable eigenvalues can be expanded at will by inserting linear processing units between the adjacent layers of a non-linear multilayered perceptron. Added linear layers act as veritable booms of a {\it telescopic neural network}, which can be extracted during the learning phase and retracted in operational mode, yielding compact networks with improved classification skills. The effect of the linear expansion is instead negligible, if applied to neural learning of standard conception.
The entries of the indented eigenvectors can also be trained, resulting in enhanced performance as compared to the setting where the eigenvalues alone are modulated by the learning algorithm. To demonstrate the principles which underlie spectral training, we employ the MNIST database, a collection of handwritten digits to be classified. The examined problem is relatively simple: a modest number of tunable parameters is indeed necessary for achieving remarkable success rates. When allowing for the simultaneous training of the eigenvalues and (a limited fraction of) the eigenvectors, the neural network quickly saturates to accuracy scores which are indistinguishable from those obtained via conventional approaches to supervised learning. More challenging tasks should probably be faced to fully appreciate the role played by a progressive optimization of the eigenmodes, the collective directions in reciprocal space where information flows. As remarked above, the eigenvectors have been here constructed so as to yield a feedforward multi-layered architecture in direct space. Relaxing this assumption amounts to altering the network topology, and thus to exporting the spectral learning strategy to other frameworks, e.g. reservoir computing. In general terms, working in the spectral domain corresponds to optimizing a set of {\it non orthogonal} directions (in the high-dimensional space of the nodes) and associated weights (the eigenvalues), a global outlook which could contribute to shed novel light on the theoretical foundations of supervised learning.} {\section{Linear and non-linear spectral learning}} To introduce and test the proposed method we will consider a specific task, i.e. the recognition of handwritten digits. To this end, we will make use of the MNIST database \cite{lecun1998mnist}, which has a training set of 60,000 examples and a test set of 10,000 examples.
Each image is made of $N_1=28 \times 28$ pixels and each pixel bears an 8-bit numerical intensity value, see Fig. \ref{fig1}. A deep neural network can be trained using standard backpropagation \cite{bengio2007greedy} algorithms to assign the weights that link the nodes (or perceptrons) belonging to consecutive layers. The first layer has $N_1$ nodes and the input is set to the corresponding pixel's intensity. The highest error rate reported on the original website of the database \cite{lecun1998mnist} is 12 \%, which is achieved using a simple linear classifier, with no preprocessing. In early 2020, researchers announced a 0.16 \% error \cite{byerly2020branching} with a deep neural network made of branching and merging convolutional networks. Our goal here is to contribute to the analysis with a radically different approach to the learning, {rather than joining the efforts to break the current limits in terms of performance and classification accuracy. More specifically, and referring to the MNIST database as a benchmark application, we will assemble a network made of $N$ nodes, organized in $\ell$ successive layers, tying the training to reciprocal space.} {Directed connections between nodes belonging to consecutive layers are encoded in a set of $\ell-1$, $N \times N$ adjacency matrices. The eigenvectors of the latter matrices are engineered so as to favour the information transfer from the reading frame to the output layer, upon proper encoding. The associated eigenvalues represent the primary target of the novel learning scheme. In the following we will set up the method, both with reference to its linear and non-linear versions. Tests performed on the MNIST database are discussed in the next Section.} {\subsection{Linear spectral learning: Single-layer perceptron trained in reciprocal space}} Assume $N_i$ to label the nodes assigned to layer $i$, {and define} $N=\sum_{i=1}^{\ell} N_i$.
{For the specific case here inspected the} output layer is composed of ten nodes ($N_{\ell}=10$), where recognition eventually takes place. Select one image from the training set and let $n_1$ $(=0,1,2,...,9)$ be the generic number therein displayed. We then construct a column vector $\vec{n}_1$, of size $N$, whose first $N_1$ entries are the intensities displayed on the pixels of the selected image (from the top-left to the bottom-right, moving horizontally), as illustrated in Fig. \ref{fig1}. {All other entries are initially set to zero. As we shall explain in the following, our goal is to transform the input $\vec{n}_1$ into an output vector with the same dimensions. The last $N_{\ell}$ elements of this latter vector represent the output nodes where reading is eventually performed.} \begin{figure} \centering \includegraphics[scale=0.3]{Fig1.png} \caption{\it Each image of the training set is mapped into a column vector $\vec{n}_1$, of size $N$, whose first $N_1 = 28 \times 28$ entries are the intensities displayed on the pixels of the image.} \label{fig1} \end{figure} To set the stage, we begin by reporting on a simplified scenario that, as we shall prove in the following, yields a single-layer perceptron. The extension to multi-layered architectures will be discussed {right after}. {Consider the entry layer made of $N_1$ nodes and the outer one composed of $N_2$ elements. In this case $N=N_1+N_2$. The input vector $\vec{n}_1$ undergoes a linear transformation to yield $\vec{n}_2=\mathbf{A}_1 \vec{n}_1$, where $\mathbf{A}_1$ is a $N\times N$ matrix that we shall characterize in the following. Introduce the matrix $\Phi_1$: this is the identity matrix $\mathbb{1}_{N \times N}$ modified by the inclusion of a sub-diagonal block $N_{2} \times N_{1}$, e.g. filled with uniformly distributed random numbers, defined in a bounded interval, see Fig. \ref{fig2}.
The columns of $\Phi_1$, hereafter $\left(\vec{\phi}_1\right)_k$ with $k=1,...,N$, define a basis of the $N$-dimensional space to which $\vec{n}_1$ and $\vec{n}_2$ belong. Then, we introduce the diagonal matrix $\Lambda_1$. The entries of $\Lambda_1$ are set to random (uniform) numbers spanning a suitable interval. A straightforward calculation returns $\left(\Phi_1\right)^{-1}=2 \mathbb{1}_{N \times N}-\Phi_1$. We hence define $\mathbf{A}_1= \Phi_1 \Lambda_1 \left(2 \mathbb{1}_{N \times N}-\Phi_1\right)$ as the matrix that transforms $\vec{n}_1$ into $\vec{n}_2$. Because of the specific structure of the input vector, and owing to the nature of $\mathbf{A}_1$, the information stored in the first $N_1$ elements of $\vec{n}_1$ is passed to the $N_2$ successive entries of $\vec{n}_2$, in a compactified form which reflects both the imposed eigenvectors' indentation and the chosen non trivial eigenvalues. } {To see this more clearly, expand the $N$-dimensional input vector $\vec{n}_1$ on the basis made of $\left( \vec{\phi}_1 \right)_k$ to yield $\vec{n}_1=\sum_{k=1}^N c_k \left( \vec{\phi}_1 \right)_k$, where $c_k$ stands for the coefficients of the expansion. The first $N_1$ vectors are necessarily engaged to explain the non zero content of $\vec{n}_1$ and, because of the imposed indentation, rebound on the successive $N_2$ elements of the basis. The latter need to adjust their associated weights $c_k$ to compensate for the echoed perturbation. The action of matrix $\mathbf{A}_1$ on the input vector $\vec{n}_1$ can be exemplified as follows:} \begin{equation} \vec{n}_2 = \mathbf{A}_1 \vec{n}_1 = \mathbf{A}_1 \sum_{k=1}^N c_k \left( \vec{\phi}_1 \right)_k = \sum_{k=1}^{N_1+N_2} c_k \left( \Lambda_1 \right)_k \left( \vec{\phi}_1 \right)_k \end{equation} {where $\left( \Lambda_1 \right)_k$ are the elements of matrix $\Lambda_1$.
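The inverse identity $\left(\Phi_1\right)^{-1}=2 \mathbb{1}_{N \times N}-\Phi_1$ and the funnelling of the input onto the last $N_2$ entries of $\vec{n}_2$ can be checked numerically. A minimal NumPy sketch with illustrative (non-MNIST) sizes follows; the identity holds because the added sub-diagonal block is nilpotent.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 6, 3              # toy sizes; the paper uses N1 = 784, N2 = 10
N = N1 + N2

# Phi_1: identity plus a random N2 x N1 sub-diagonal block
# (the eigenvector indentation)
Phi = np.eye(N)
Phi[N1:, :N1] = rng.uniform(-0.5, 0.5, size=(N2, N1))

# The extra block B satisfies B @ B = 0, hence (I + B)^(-1) = I - B = 2I - Phi
assert np.allclose(Phi @ (2 * np.eye(N) - Phi), np.eye(N))

# A_1 = Phi_1 Lambda_1 (2I - Phi_1), with random eigenvalues on the diagonal
Lam = np.diag(rng.uniform(-0.01, 0.01, size=N))
A1 = Phi @ Lam @ (2 * np.eye(N) - Phi)

n1 = np.zeros(N)
n1[:N1] = rng.uniform(0, 1, size=N1)   # "pixel intensities" in the first N1 entries
n2 = A1 @ n1
print(np.any(n2[N1:] != 0))  # True: the last N2 entries carry the compressed signal
```

The check confirms that the last $N_2$ entries of the output are populated by the input intensities, folded through the indentation block and the eigenvalues.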
In short, the entries of $\vec{n}_2$ from position $N_1+1$ to position $N_1+N_2$ represent a compressed (if $N_2<N_1$) rendering of the supplied input signal, the key to decipher the folding of the message being stored in the $N_{2} \times N_{1}$ sub-diagonal block of $\Phi_1$ (i.e. the eigenvector indentation) and in the first set of $N=N_1+N_2$ eigenvalues $\left( \Lambda_1 \right)_k$. The key idea is to propagate this message passing scheme from the input to the output in a multi-layer setting, and adjust (a subset of) the spectral parameters involved so as to optimize the encoding of the information.} \begin{figure} \centering \includegraphics[scale=0.5]{LineareAdiacenza.pdf} \caption{\it {Panel (a): the structure of matrix $\Phi_k$ is schematically depicted. The diagonal entries of $\Phi_k$ are unities. The sub-diagonal block of size $N_{k+1} \times N_{k}$, for $k=1,...,\ell-1$, is filled with uniform random numbers in $[a,b]$, with $a,b \in \mathbb{R}$. These blocks yield an effective indentation between successive stacks of linearly independent eigenvectors. The diagonal matrix of the eigenvalues $\Lambda_k$ is also represented. The sub-portions of $\Phi_k$ and $\Lambda_k$ that get modified by the training performed in the spectral domain are highlighted (see legend). In the experiments reported in this paper the initial eigenvector entries are uniform random variables distributed in $[-0.5,0.5]$. The eigenvalues are uniform random numbers distributed in the interval $[-0.01,0.01]$. Optimizing the range to which the initial guesses belong (for both eigenvalues and eigenvectors) is an open problem that we have not tackled.
Panel (b): a $(N_1 + N_\ell) \times (N_1 + N_\ell)$ matrix $\mathcal{A}_c$ can be obtained from $\mathcal{A}=\left( \Pi_{k=1}^{\ell-1} \mathbf{A}_{k} \right)$, which provides the weights for a single-layer perceptron that maps the input into the output, in direct space.}} \label{fig2} \end{figure} { To this end, we introduce the $N \times N$ matrix operator $\Phi_k$, for $k=2,...,\ell-1$. In analogy with the above, $\Phi_k$ is the identity matrix $\mathbb{1}_{N \times N}$ modified with a sub-diagonal block $N_{k+1} \times N_{k}$, which extends from row $\sum_{i=1}^{k} N_i+1$ to row $\sum_{i=1}^{k+1} N_i$ and touches the diagonal tangentially, as schematically illustrated in Fig. \ref{fig2} (a). Similarly, we introduce $\Lambda_k$, for $k=2,...,\ell-1$, which is obtained from the identity matrix $\mathbb{1}_{N \times N}$ upon mutating to uniformly distributed random entries the diagonal elements that range from $\sum_{i=1}^k N_i$ (not included) to $\sum_{i=1}^{k+1} N_i$ (included). Finally, we define $\mathbf{A}_k= \Phi_k \Lambda_k \left(2 \mathbb{1}_{N \times N}-\Phi_k\right)$, as the matrix that transforms $\vec{n}_k$ into $\vec{n}_{k+1}$, with $k=2,...,\ell-1$. In principle, both the non trivial eigenvalues and the eigenvectors' entries can be self-consistently adjusted by the envisaged learning strategy. The input signal $\vec{n}_1$ is hence transformed into an output vector $\vec{n}_{\ell}$ following a cascade of linear transformations implemented via the matrices $\mathbf{A}_k$. In formulae:} \begin{equation} \vec{n}_{\ell} = \mathbf{A}_{\ell-1} ...\mathbf{A}_{1} \vec{n}_1 = \left( \Pi_{k=1}^{\ell-1} \Phi_k \Lambda_k \left(2 \mathbb{1}_{N \times N}-\Phi_k\right)\right) \vec{n}_1 \end{equation} {where in the last step we made use of the representation of $\mathbf{A}_k$ in dual space. The generic vector $\vec{n}_{k+1}$, for $k=1,..., \ell-1$, is obtained by applying matrix $\mathbf{A}_k$ to $\vec{n}_k$.
The first $N_1+N_2+...+N_k$ components of $\vec{n}_{k+1}$ coincide with the corresponding entries of $\vec{n}_k$, namely $\left[ \vec{n}_{k+1} \right]_m \equiv \left[ \vec{n}_{k} \right]_m$ for $m \le N_1+N_2+...+N_k$. Here, $\left[ \left(\vec{\cdot}\right) \right]_m$ identifies the $m$-th component of the vector $\left(\vec{\cdot}\right)$. Recall that, by construction, $\left[ \vec{n}_{k} \right]_m=0$ for $m> N_1+N_2+...+N_k$. On the contrary, the components $\left[ \vec{n}_{k+1} \right]_m$ with $N_1+N_2+...+N_k+1 \le m \le N_1+N_2+...+N_k+N_{k+1}$ are populated by non trivial values which reflect the eigenvector indentation, as well as the associated eigenvalues. This observation can be mathematically proven as follows. Write $\vec{n}_k$ on the basis formed by the eigenvectors $\left( \vec{\phi}_k \right)_l$ to eventually get:} \begin{equation} \vec{n}_k = \sum_{l=1}^{N_1+N_2+...+N_{k+1}} c_l \left( \vec{\phi}_k \right)_l \equiv \sum_{l=1}^{N_1+N_2+...+N_k} c_l \vec{e}_l \end{equation} {where $\left(\vec{e}_1, \vec{e}_2...\right)$ stand for the canonical basis and the last identity follows from the specific structure of the eigenvectors (remark that the leftmost sum in the above equation includes $N_{k+1}$ more elements than the second). By definition:} \begin{equation} \vec{n}_{k+1} = \mathbf{A}_k \vec{n}_k = \sum_{l=1}^{N_1+N_2..+N_{k+1}} c_l \left( \Lambda_k \right)_l \left( \vec{\phi}_k \right)_l \end{equation} {From the above relation, one gets for $m \le N_1+N_2+...+N_k$} \begin{equation} \left[ \vec{n}_{k+1} \right]_m = \sum_{l=1}^{N_1+N_2..+N_{k}} c_l \left[ \vec{e}_l \right]_m \equiv \left[ \vec{n}_{k} \right]_m \end{equation} {where the first equality sign follows from the observation that $\left( \vec{\phi}_k \right)_l$ coincides with $\vec{e}_l$ and $\left( \Lambda_k \right)_l=1$, over the explored range of $m$.
For $N_1+N_2+...+N_k+1 \le m \le N_1+N_2+...+N_k+N_{k+1}$, we obtain instead:} \begin{equation} \left[\vec{n}_{k+1}\right]_m = \sum_{l=N_1+N_2..+N_{k-1}}^{N_1+N_2..+N_{k+1}} c_l \left( \Lambda_k \right)_l \left[ \left( \vec{\phi}_k \right)_l \right]_m \end{equation} {Finally, it is immediate to show that $\left[\vec{n}_{k+1}\right]_m=0$ for $m> N_1+N_2+...+N_k+N_{k+1}$, because of the specific form of the employed eigenvectors. In short, the information contained in the last non trivial $N_k$ entries of $\vec{n}_k$ rebounds on the successive $N_{k+1}$ elements of $\vec{n}_{k+1}$, funnelling the information downstream from the input to the output. The successive information processing relies on the indented (non orthogonal) eigenvectors and the associated eigenvalues, which hence define the target of the training in reciprocal space. } { To carry out the learning procedure one needs to introduce a loss function $L(\vec{n}_1)$. For illustrative purposes this latter can be written as:} \begin{equation} \label{LF} L(\vec{n}_1) = \left\lVert l(\vec{n}_1) - \sigma\left[ \left( \Pi_{k=1}^{\ell-1} \Phi_k \Lambda_k \left(2 \mathbb{1}_{N \times N}-\Phi_k\right) \right) \vec{n}_1\right ] \right\rVert^2 \end{equation} { where $\sigma(\cdot)$ is the softmax operation applied to the last $N_{\ell}$ entries of the output vector. In the above expression, $l(\vec{n}_1)$ stands for the label attached to $\vec{n}_1$, depending on its category. More into details, the $k$-th entry of $l(\vec{n}_1)$ is equal to one (and the rest are identically equal to zero) if the number supplied as an input is identical to $k$, with $k=0,1,...,9$. The loss function can be minimized by acting on the free parameters of the learning scheme. Specifically, the learning can be restricted to the set of $N$ non trivial eigenvalues, split in $\ell-1$ distinct groups, each referred to one of the $\mathbf{A}_{k}$ matrices (i.e.
$N_1+N_2$ eigenvalues of $\mathbf{A}_{1}$, $N_3$ eigenvalues of $\mathbf{A}_{2}$, ..., $N_{\ell}$ eigenvalues of $\mathbf{A}_{\ell-1}$). In addition, the sub-diagonal block entries of $\Phi_k$, the elements of the basis which dictate the successive indentation between adjacent layers, can be adjusted by the training scheme as well. In the following section we will report on the performance of the method, implemented in its different modalities, against those obtained with a classical approach to the learning anchored in direct space. In the actual implementation we have chosen to deal with a categorical cross-entropy loss function.} {Before ending this section a few remarks are mandatory. Introduce $\mathcal{A} = \Pi_{k=1}^{\ell-1} \mathbf{A}_{k}$. The linear transformation that links the input vector $\vec{n}_1$ to the generated output $\vec{n}_{\ell}$ can be compactly expressed as $\vec{n}_{\ell} = \mathcal{A} \vec{n}_1$. Then, recall that the classification relies on examining the last $N_\ell$ entries of $\vec{n}_{\ell}$. Hence, for the specific setting here examined, where the mapping is obtained as a cascade of linear transformations, one can imagine recasting the whole procedure in a space of reduced dimensionality. Let $\vec{z}$ be a column vector made of $N_1 + N_\ell$ elements. The first $N_1$ entries of $\vec{z}$ are the intensities on the pixels of the selected image, as for the homologous $\vec{{n}}_1$ quantity. The other elements are set to zero. Then, consider the $(N_1 + N_\ell) \times (N_1 + N_\ell)$ matrix $\mathcal{A}_c$ (the label $c$ stands for {\it compact}), constructed from $\mathcal{A}$ by trimming out all the information that pertains to the intermediate layers, as introduced in the reciprocal space (see Fig. \ref{fig2}(b)).
Stated differently, matrix $\mathcal{A}_c$ provides the weighted links that feed from the input to the output layer in direct space, via the linear transformation $\mathcal{A}_c \vec{z}$: this is a single-layer perceptron, shown in Fig. \ref{fig2}(b), which was trained by endowing reciprocal space with an arbitrary number of additional dimensions, the intermediate stacks responsible for the sequential embedding of the information. Intermediate layers can be literally extracted during the training phase, and subsequently retracted in operational mode. The importance of allowing for additional layers, thus endowing the neural network with a telescopic attribute, will be assessed in the forthcoming sections.} {From the algorithmic point of view the process outlined above can be rephrased in simpler, although equivalent, terms. For all practical purposes, one could take the (column) input vector $\vec{{n}}_1$ to have $N_1+N_2$ elements. Following the scheme depicted above, the first $N_1$ entries are the intensities on the pixels of the selected image, while the remaining $N_2$ elements are set to zero. We now introduce a $(N_1+N_2) \times (N_1+N_2)$ matrix $\mathbf{A}_{1}$. This is obtained from the identity matrix $\mathbb{1}_{(N_1+N_2) \times (N_1+N_2)}$ through the inclusion of a sub-diagonal block $N_{2} \times N_{1}$, which handles the information processing that will populate the second $N_2$ elements of the output vector $\vec{{n}}_2= \mathbf{A}_{1} \vec{{n}_1}$. Then, we formally replace the $(N_1+N_2)$ column vector $\vec{{n}}_2$ with a column vector made of $(N_2+N_3)$ elements, termed $\vec{{n}}_{2t}$, whose first $N_2$ elements are the final entries of $\vec{{n}}_2$. The remaining $N_3$ elements of $\vec{{n}}_{2t}$ are set to zero. Now, rename $\vec{{n}}_{2t}$ as $\vec{{n}}_2$ and present it as the input of a $(N_2+N_3) \times (N_2+N_3)$ matrix $\mathbf{A}_{2}$, with a non trivial sub-diagonal $N_{3} \times N_{2}$ block.
This latter maps the first $N_2$ elements of the input vector into the successive $N_3$ of the output one, completing the second step of an algorithmic scheme which can be iteratively repeated. In analogy with the above, each $(N_k+N_{k+1}) \times (N_k+N_{k+1})$ matrix $\mathbf{A}_{k}$ can be written as $\mathbf{A}_k= \Phi_k \Lambda_k \left(2 \mathbb{1}_{(N_k+N_{k+1}) \times (N_k+N_{k+1})}-\Phi_k\right)$, where now the column vectors of $\Phi_k$ are the eigenvectors of $\mathbf{A}_k$ and form a non-orthogonal basis of the $(N_k+N_{k+1})$-dimensional space where input and output vectors belong. $\Lambda_k$ is a diagonal matrix of the eigenvalues: the first $N_k$ are set to one, while the other $N_{k+1}$ are non trivial entries to be adjusted self-consistently via the learning scheme. Framing the process in the augmented space of $N$ dimensions, as done earlier, allows us to avoid adapting the dimensions of the involved vectors at each iteration. On the contrary, this is a convenient procedure to be followed when aiming at a numerical implementation of the envisaged scheme. Notice that, to discuss the algorithmic variant of the method, we made use of the same symbols employed earlier. The notation clash is however solely confined to this paragraph.} {In the following, we will discuss how these ideas extend to the more general setting of non-linear multi-layered neural networks.} {\subsection{Training non-linear multi-layered neural networks in the spectral domain}} { In analogy with the above, the image to be processed is again organized in a $N\times1$ column vector $\vec{n}_1$. This latter is transformed into $\vec{n}_2=\mathbf{A}_1 \vec{n}_1$, where the ${N \times N}$ matrix $\mathbf{A}_1$ is recovered from its spectral properties, respectively encoded in $\Phi_1$ and $\Lambda_1$. The output vector $\vec{n}_2$ is now filtered via a suitable {\it non-linear} function $f(\cdot)$.
This step marks a distinction between, respectively, the linear and non-linear versions of the learning schemes. For the applications here reported we have chosen to work with a rectified linear unit (ReLU) $f(\cdot)= \max(0, \cdot)$. Another possibility is to set $f(\cdot, \beta_1)=\tanh[\beta_1(\cdot)]$, where $\beta_1$ is a control parameter which could in principle be self-consistently adjusted all along the learning procedure. We are now in a position to iterate the same reasoning carried out in the preceding section, adapted to the case at hand. More specifically, we introduce the generic $N\times N$ matrix $\mathbf{A}_k= \Phi_k \Lambda_k \left(2 \mathbb{1}_{N \times N}-\Phi_k\right)$ which transforms $\vec{n}_k$ into $\vec{n}_{k+1}$, with $k=2,...,\ell-1$. The outcome of this linear transformation goes through the non-linear filter. The loss function $L(\vec{n}_1)$ generalizes to:} \begin{equation} \label{LF2} L(\vec{n}_1) = \left\lVert l(\vec{n}_1) - \sigma\left( f\left(\mathbf{A}_{\ell-1}.... f\left (\mathbf{A}_2 f \left (\mathbf{A}_1 \vec{n}_1,\beta_1 \right), \beta_2 \right),\beta_{\ell-1} \right) \right) \right\rVert^2 \end{equation} {with an obvious meaning of the involved symbols. In the set of experiments reported below we assume, in analogy with the above, a categorical cross-entropy loss function. The loss function is minimized upon adjusting the free parameters of the learning scheme: the $\ell-1$ blocks of tunable eigenvalues, the elements that define the successive indentation of the nested basis which commands the transfer of the information (and e.g. the quantities $\beta_k$, if the hyperbolic tangent is chosen as a non-linear filter). This eventually yields a fully trained network, in direct space, which can be unfolded into a layered architecture to perform pattern recognition (see Fig. \ref{fig3}). Remarkably, self-loop links are also present.
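The non-linear forward pass described above can be sketched in a few lines. The sizes below are illustrative, and for simplicity only the eigenvalues attached to each destination layer are drawn at random (in the paper $\Lambda_1$ carries $N_1+N_2$ non-trivial eigenvalues); this is a minimal NumPy sketch, not the TensorFlow implementation used for the experiments.

```python
import numpy as np


def relu(x):
    return np.maximum(0.0, x)


def spectral_matrix(sizes, k, rng):
    """A_k = Phi_k Lam_k (2I - Phi_k) in the full N x N setting, with the
    N_{k+1} x N_k indentation block of Phi_k and the eigenvalues of layer
    k+1 drawn at random (all other eigenvalues stay at one)."""
    N = sum(sizes)
    r0, c0 = sum(sizes[:k]), sum(sizes[:k - 1])
    Phi = np.eye(N)
    Phi[r0:r0 + sizes[k], c0:c0 + sizes[k - 1]] = rng.uniform(
        -0.5, 0.5, (sizes[k], sizes[k - 1]))
    lam = np.ones(N)
    lam[r0:r0 + sizes[k]] = rng.uniform(-0.01, 0.01, sizes[k])
    return Phi @ np.diag(lam) @ (2 * np.eye(N) - Phi)


rng = np.random.default_rng(1)
sizes = [8, 5, 4]                 # toy N_1, N_2, N_ell
n = np.zeros(sum(sizes))
n[:sizes[0]] = rng.uniform(0, 1, sizes[0])   # "pixel" intensities

for k in (1, 2):                  # n -> f(A_1 n) -> f(A_2 f(A_1 n))
    n = relu(spectral_matrix(sizes, k, rng) @ n)

out = n[-sizes[-1]:]              # read-out on the last N_ell entries
probs = np.exp(out) / np.exp(out).sum()      # softmax
print(probs.shape, round(probs.sum(), 6))
```

The read-out is a valid probability vector over the $N_\ell$ output nodes, as required by the softmax in the loss function.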
The limit of a linear single-layer perceptron is recovered when silencing the non-linearities: a $(N_1 + N_\ell) \times (N_1 + N_\ell)$ matrix $\mathcal{A}_c$ can be generated from the $N \times N$ matrices $\mathbf{A}_{k}$, following the same strategy outlined above. A sequence of linear layers can also be interposed between two consecutive non-linear stacks. The interposed layers allow one to enlarge the space of parameters employed in the learning scheme, and can be retracted when operating the deep neural network after completion of the learning stage. Their role is {\it de facto} encapsulated in the entries of the linear operator that bridges the gap between the adjacent non-linear stacks, as explained above when referring to the telescopic operational modality.} \begin{figure} \centering \includegraphics[scale=0.5]{EmbeddingProgressivo.pdf} \\ \caption{\it {The non-linear version of the training scheme returns a multi-layered architecture with self-loop links in direct space. Linear and non-linear transformations can be combined at will, with matrices $\mathbf{A}_k$ providing the connection between successive layers. Linear layers can be retracted in operational mode, following a straightforward variant of the compactification procedure described in the main body of the paper. }} \label{fig3} \end{figure} {\section{Results}} To build and train the aforementioned models we used TensorFlow and created a custom spectral layer matrix that could be integrated in virtually every TensorFlow or Keras model. That allowed us to leverage the automatic differentiation capabilities and the built-in optimizers of TensorFlow. Recall that we aim at training {just a portion of the diagonal of $\Lambda_k$ and a block of $\Phi_k$}. To reach this goal we generated two fully trainable matrices, for each layer in the spectral domain, and applied a suitably designed mask to filter out the sub-parts of the matrices to be excluded from the training.
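The masking idea can be sketched without TensorFlow (the actual implementation lives in the public repository \cite{gitrepo}; the numpy version below is only an illustration, with hypothetical sizes): two fully trainable arrays are generated, and boolean masks filter out the entries excluded from the training, so that only the sub-diagonal block of $\Phi$ and the last bunch of eigenvalues remain free parameters.

```python
import numpy as np

N1, N2 = 4, 3                      # illustrative layer sizes (hypothetical)
N = N1 + N2

# Fully trainable matrices, as generated by the optimizer.
W_phi = np.random.default_rng(1).standard_normal((N, N))
w_lam = np.random.default_rng(2).standard_normal(N)

# Masks selecting the sub-parts that actually take part in the training.
mask_phi = np.zeros((N, N), dtype=bool)
mask_phi[N1:, :N1] = True          # only the sub-diagonal block is trained
mask_lam = np.zeros(N, dtype=bool)
mask_lam[N1:] = True               # only the last N2 eigenvalues are trained

# Masked entries are pinned to their frozen values (identity / unit eigenvalues).
Phi = np.eye(N) + np.where(mask_phi, W_phi, 0.0)
Lam = np.diag(np.where(mask_lam, w_lam, 1.0))
A = Phi @ Lam @ (2 * np.eye(N) - Phi)

# Number of free parameters actually handled by the optimizer.
n_trainable = int(mask_phi.sum() + mask_lam.sum())
assert n_trainable == N1 * N2 + N2
```

In a TensorFlow layer one would multiply the trainable tensors by such constant masks inside the forward pass, so that automatic differentiation only updates the surviving entries.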
This is easy to implement and, although improvable from the point of view of computational efficiency, it works perfectly, given the size of the problem to be handled. We then trained all our models with the AdaMax optimizer \cite{kingma2014adam} by using a learning rate of $0.03$ for the linear case and $0.01$ for the non-linear one. The training proceeded for {about $20$ epochs} and during each epoch the network was fed with batches of images of {different size, ranging from $300$ to $800$}. These hyperparameters have been chosen so as to improve GPU efficiency, accuracy and stability. However, we did not perform a systematic study to look for the optimal setting. All our models have been trained on a virtual machine hosted by Google Colaboratory. Standard neural networks have been trained on the same machine using identical software and hyperparameters, for a fair comparison. Further details about the implementation, as well as a notebook to reproduce our results, can be found in the public repository of this project \cite{gitrepo}. {We shall start by reporting on the performance of the linear scheme. The simplest setting is that of a perceptron made of two layers: the input layer with $N_1=28 \times 28 = 784$ nodes and the output one made of $N_2=10$ elements. The perceptron can be trained in the spectral domain by e.g. tuning the $N=N_1+N_2=794$ eigenvalues of $\mathbf{A}_{1}$, the matrix that links the input ($\vec{n}_1$) and output ($\vec{n}_2$) vectors. The learning restricted to the eigenvalues returns a perceptron which performs the sought classification task with an accuracy (the fraction of correctly recognized images in the test-set) of $(82 \pm 2) \%$ (averaging over $5$ independent runs). This figure is to be confronted with the accuracy of a perceptron trained with standard techniques in direct space. For a fair comparison, the number of adjustable weights should be limited to $N$.
To this aim, we randomly select a subset of weights to be trained and carry out the optimization on these latter. The process is repeated a few ($5$ in this case) times and, for each realization, the associated accuracy is computed. Combining the results yields an average performance of $(79 \pm 3) \%$, i.e. a slightly smaller score (although compatible within error precision) than that achieved when the learning takes place in the spectral domain. When the training extends to all the $N_1 \times N_2$ weights (plus $N_1+N_2$ bias), conventional learning yields a final accuracy of $(92.7 \pm 0.1) \%$. This is practically identical to the score obtained in the spectral domain, specifically $(92.5 \pm 0.2) \%$, when the sub-diagonal entries of the eigenvectors matrix are also optimized (for a total of $N_1+N_2+N_1 \times N_2$ free parameters). The remarkable observation is however that the distribution of the weights as obtained when the learning is restricted to the eigenvalues (i.e. using about $10 \%$ of the parameters employed for a full training in direct space) matches quite closely that retrieved by means of conventional learning schemes, see Fig. \ref{fig4}. This is not the case when the learning in direct space acts on a subset of $N$, randomly selected, weights (data not shown). Based on the above, it can therefore be surmised that optimizing the eigenvalues constitutes a rather effective pre-training strategy, which engages a modest computational load.} \begin{figure} \centering \includegraphics[scale=0.7]{Percettrone.pdf} \\ \caption{\it {Distribution of the weights of a perceptron. The red line follows the spectral training limited to the $N_1+N_2$ eigenvalues. The black line follows the training in direct space. Here, $N_1 \times N_2$ parameters are adjusted in the space of the nodes. The distributions are very similar, but the spectral learning employs about $10 \%$ of the parameters used in direct space.
The distributions obtained when forcing the training in direct space to operate on a subset of $N_1+N_2$ weights are very different from the one displayed (for every choice of the randomly selected family of weights to be trained).}} \label{fig4} \end{figure} {To further elaborate on the potentiality of the proposed technique, we modify the simple two-layers perceptron, with the inclusion of supplementary computing layers. As explained above, the newly added layers play an active role during the learning stage, but can be retracted in operating mode so as to return a two-layers perceptron. The weights of this latter bear however an imprint of the training carried out for the linear network in the expanded configuration. Two alternative strategies will in particular be contemplated. On the one hand, we will consider a sole additional layer, endowed with $N_2$ nodes, interposed between the input and output layers made of, respectively, $N_1=784$ and $N_{\ell} \equiv N_3=10$ nodes. We will refer to this as the {\it wide linear} configuration. The performance of the method can be tested by letting $N_2$ progressively grow. On the other hand, the {\it deep linear} configuration is obtained when interposing a sequence of successive (linear) stacks between the input ($N_1=784$) and the output ($N_{\ell} =10$) layers.} { In Fig. \ref{fig5}, we report on the performance of the wide learning scheme as a function of $N_2+N_3$. As we shall clarify, this latter stands for the number of trained parameters for (i) the spectral learning acted on a subset of the tunable eigenvalues and for (ii) the conventional learning in direct space restricted to operate on a limited portion of the weights. The red line in the main panel of Fig. \ref{fig5} refers to the simplified scheme where only a subset of the eigenvalues is tuned (while leaving the eigenvectors fixed at the random realization set by the initial condition).
We have in particular chosen to train the second bunch of $N_2$ eigenvalues of the transfer matrix $\mathbf{A}_{1}$ and the $N_3=10$ non-trivial eigenvalues of matrix $\mathbf{A}_{2}$, in line with the prescriptions reported in the preceding Section. The blue line reports on the accuracy of the neural network trained in direct space: the target of the optimization is a subset of cardinality $N_2+N_3$ of the $N_1 N_2+N_2 N_3$ weights which could in principle be adjusted in the space of the nodes. The performance of the spectral method proves clearly superior, as can be readily appreciated by visual inspection of Fig. \ref{fig5}. The black line displays the accuracy of the linear neural network when the optimization acts on the full set of $N_1 N_2+N_2 N_3$ trainable parameters. No improvement is detectable when increasing the size of the intermediate layer: the displayed accuracy is substantially identical to that obtained for the basic perceptron trained with $N_1 N_2=7840$ parameters. The spectral learning allows one to reach comparable performance already at $N_2=1000$ ($13 \%$ of the parameters used for the standard two-layers perceptron with $N_1 \times N_2$ parameters, as discussed above). In the inset of Fig. \ref{fig5}, the distribution of the entries of matrix $\mathcal{A}_c$, the equivalent perceptron, is depicted in red for the setting highlighted in the zoom. The black line refers to the two-layers equivalent of the neural network trained in direct space, employing the full set of trainable parameters (black dot enclosed in the top-left dashed rectangle drawn in the main panel of Fig. \ref{fig5}). The two distributions look remarkably close, despite the considerable reduction in terms of training parameters, as implemented in the spectral domain (for the case highlighted, $0.13 \%$ of the parameters employed under the standard training).
Similarly to the above, the distributions obtained when forcing the training in direct space to act on a subset of $N_1+N_2$ weights are just a modest modulation of the initially assigned profile, owing to the {\it local} nature of the learning in the space of the nodes.} {In Fig. \ref{fig6}, we report the results of the tests performed when operating under the deep linear configuration. Symbols are analogous to those employed in Fig. \ref{fig5}. In all inspected cases, the entry layer is made of $N_1=784$ elements and the output one has $N_{\ell}=10$ nodes. The first five points, from left to right, refer to a three-layers (linear) neural network. Hence, $\ell=3$ and the size of the intermediate layer is progressively increased, $N_2=20,80,100,500,800$. The total number of trained eigenvalues is $N_2+N_3$, and therefore gets larger as the size of the intermediate layer grows. The successive four points of the collections are obtained by setting $\ell=4$. Here, $N_2=800$ while $N_3$ is varied ($=100,200,400,600$). The training impacts on $N_2+N_3+N_4$ parameters. Finally, the last point in each displayed curve is obtained by working with a five-layers deep neural network, $\ell=5$. In particular $N_2=800$, $N_3=600$ and $N_4=500$, for a total of $N_2+N_3+N_4+N_5$ tunable parameters. Also in this case, the spectral algorithm performs better than conventional learning schemes constrained to operate with an identical number of free parameters. Similarly, the distribution of the weights of an equivalent perceptron trained in reciprocal space matches that obtained when operating in the space of the nodes and resting on a considerably larger number of training parameters. To sum up, eigenvalues are parameters of key importance for neural network training, far more strategic than any other set of equivalent cardinality in the space of the nodes. As such, they allow for a global approach to the learning, with significant implications of both fundamental and applied interest.
In all cases here considered, the learning can extend to the eigenvectors: an optimized indentation of the eigen-directions contributes to enhancing the overall performance of the trained device.} \begin{figure} \centering \includegraphics[scale=0.5]{WideLinearBox.pdf} \\ \caption{\it {A three-layers neural network is considered. The accuracy of the neural network is plotted as a function of the number of parameters that we chose to train with the spectral algorithm, $N_2+N_3$. The red line reports on the performance of the spectral training. The blue line refers to the neural network trained in direct space: the optimization runs on $N_2+N_3$ parameters, a subset of the total number of adjustable weights $N_1 N_2+N_2 N_3$. The black line stands for the accuracy of the linear neural network when training the full set of $N_1 N_2+N_2 N_3$ parameters. Notice that the reported accuracy is comparable to that obtained for a standard two-layers perceptron. Inset: the distributions of the entries of the equivalent perceptrons are plotted. The red curve refers to the spectral learning restricted to operate on the eigenvalues; the black profile to the neural network trained in direct space, employing the full set of adjustable parameters. In both cases, the weights refer to the two-layers configuration obtained by retracting the intermediate linear layer employed during the learning stage. }} \label{fig5} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{DeepLinearBox.pdf} \\ \caption{\it { The performance of the spectral algorithm is tested for a multi-layered linear configuration. Symbols are chosen in analogy to Fig. \ref{fig5}. In all cases, the input layer is made of $N_1=784$ elements and the output layer has $N_{\ell}=10$ nodes. The first five points, from left to right in each of the curves depicted in the main panel, refer to a three-layers (linear) neural network.
The size of the intermediate layer is progressively increased, as $N_2=20,80,100,500,800$. The total number of trained eigenvalues is $N_2+N_3$. The subsequent four points are obtained by considering a four-layers architecture. In particular, $N_2=800$ while $N_3$ takes values in the set ($100,200,400,600$). The training acts on $N_2+N_3+N_4$ eigenvalues. The final point in each curve is obtained with a five-layers deep neural network. Here, $N_2=800$, $N_3=600$ and $N_4=500$, for a total of $N_2+N_3+N_4+N_5$ tunable parameters in the spectral setting. Inset: the distributions of the entries of the equivalent perceptrons are displayed, with the same color code adopted in Fig. \ref{fig5}. Also in this case, the weights refer to the two-layers configuration obtained by retracting the intermediate linear layers employed in the learning stage. }} \label{fig6} \end{figure} {We now turn to considering a non-linear architecture. More specifically, we will assume a four-layers network with, respectively, $N_1=784, N_2,N_3=120, N_4=10$. The non-linear ReLU filter acts on the third layer of the collection, while the second is a linear processing unit. In the spirit of the wide network configuration evoked above, we set out to test the performance of the neural network for increasing $N_2$. For every choice of $N_2$, the linear layer can be retracted, yielding a three-layered effective non-linear configuration. We recall however that training the network in the enlarged space where the linear unit is present leaves a non-trivial imprint in the weights that set the strength of the links in direct space.} {In Fig. \ref{fig7}, we plot the computed accuracy as a function of $N_2$, the size of the linear layer. In analogy with the above analysis, the red curve refers to the training restricted to $N_2+N_3+N_4$ eigenvalues; the blue profile is obtained when the deep neural network is trained in direct space by adjusting an identical number of inter-node weights.
As in the case of a fully linear architecture, adjusting the eigenvalues yields better classification performance. The black line shows the accuracy of the neural network when the full set of $N_1 N_2+N_2 N_3+N_3 N_4$ weights is optimized in direct space. The green line refers instead to the spectral learning when the eigenvalues and eigenvectors are trained simultaneously. The accuracies estimated for these two latter settings agree within statistical error, even if the spectral scheme seems more robust to overfitting (the black circles decline slightly when increasing $N_2$, while the collection of green points appears rather stable).} \begin{figure} \centering \includegraphics[scale=0.5]{NonLinearAccuracy.pdf} \\ \caption{\it {The accuracy of the non-linear deep neural network is tested. We assume a four-layers network with, respectively, $N_1=784, N_2,N_3=120, N_4=10$; $N_2$ is changed so as to enlarge the set of parameters to be trained. The red line refers to the spectral training, with $N_2+N_3+N_4$ adjusted eigenvalues. The blue line stands for a neural network trained in direct space, the target of the optimization being a subset made of $N_2+N_3+N_4$ weights, randomly selected from the available pool of $N_1 N_2+N_2 N_3+N_3 N_4$ tunable parameters. The black line reports the accuracy of the neural network when training the full set of $N_1 N_2+N_2 N_3+N_3 N_4$ weights. The green line refers to the spectral learning when eigenvalues and eigenvectors are simultaneously trained.}} \label{fig7} \end{figure} \section{Conclusions} Summing up, we have here proposed a novel approach to the training of deep neural networks which is bound to the spectral, hence reciprocal, domain.
The {eigenvalues and eigenvectors} of the adjacency matrices that connect consecutive layers via directed feed-forward links are trained, instead of adjusting the weights that bridge each pair of nodes of the collection, as is customarily done in the framework of conventional ML approaches. {The first conclusion of our analysis is that optimizing the eigenvalues, when freezing the eigenvectors, yields performances which are superior to those attained with conventional methods {\it restricted} to operate with an identical number of free parameters. It is therefore surmised that eigenvalues are key target parameters for neural network training, in that they allow for a {\it global} handling of the learning. This is at variance with conventional approaches, which seek to modulate the weights of the links among mutually connected nodes. Secondly, the spectral learning restricted to the eigenvalues yields a distribution of the weights which resembles quite closely that obtained with conventional algorithms bound to operate in direct space. For this reason, the proposed method could be used in combination with existing ML algorithms for an effective (and computationally advantageous) pre-training of deep neural networks. We have also shown that linear processing units inserted in between consecutive, non-linearly activated layers produce an enlargement of the learning parameter space, with beneficial effects in terms of performance of the trained device. Extending the learning so as to optimize the eigenvectors enhances the ability of the network to operate the sought classification. In the proposed implementation, and to recover a feed-forward architecture in direct space, we have assumed a nested indentation of the eigenvectors. Entangling the eigenvectors referred to successive stacks is the key for a recursive processing of the data, from the input to the output layer.
Employing other non-orthogonal bases could eventually allow one to realise different topologies in direct space and shed novel light on the surprising ability of deep networks to cope with the assigned tasks.} {In future perspective, it would be interesting to characterize the solutions attained with the spectral method, following the strategy outlined in \cite{feizi2017porcupine}. Further, it could be interesting to combine the spectral approach with other existing schemes which have been devised to improve the computational performance of deep neural networks, without significant loss in final recognition accuracy \cite{6638949, frankle2020training}.} \bibliographystyle{unsrt}
\section{Introduction} The famous domination chain $$\textrm{ir}(G)\leq \gamma(G)\leq i(G)\leq \alpha(G)\leq \Gamma(G)\leq \textrm{IR}(G) $$ links parameters related to the fundamental notions of independence, domination and irredundance in graphs; see \cite{HHS98}. Out of the six parameters, the least studied appears to be the upper domination parameter $\Gamma(G)$. This paper is devoted to changing this impression. \LV{\subsection{Basic notions}}\SV{\paragraph{Basic notions.}} Throughout this paper, we only deal with undirected simple graphs $G=(V,E)$. In the following, we explain the main graph theory notions that we use in this paper and refer the reader to Appendix~\ref{basic_notions} and any graph theory textbook for other standard concepts and notations. \LV{ The number of vertices $|V|$ is also known as the order of $G$. As usual, $N(v)$ denotes the open neighbourhood of $v$ in a graph $G$, and $N[v]$ is the closed neighbourhood of $v$ in $G$, i.e., $N[v]=N(v)\cup\{v\}$. These notions can be easily extended to vertex sets $X$, e.g., $N(X)=\bigcup_{x\in X}N(x)$. The cardinality of $N(v)$ is also known as the degree of $v$, denoted as $deg(v)$. The maximum degree in a graph is usually written as $\Delta$. A graph of maximum degree three is called subcubic, and if actually all degrees equal three, it is called a cubic graph.} Given a graph $G=(V,E)$, a subset $S$ of $V$ is a \emph{dominating set} if every vertex $v\in V\setminus S$ has at least one neighbour in $S$, i.e., if $N[S]=V$. A dominating set is minimal if no proper subset of it is a dominating set. \LV{Likewise, a vertex set $I$ is \emph{independent} if $N(I)\cap I=\emptyset$. 
An independent set is maximal if no proper superset is independent.} In the following we use classical notations: $\gamma(G)$ and $\Gamma(G)$ are the minimum and maximum cardinalities over all minimal dominating sets in $G$, $\alpha(G)$ is the maximum cardinality of an independent set, $i(G)$ is the minimum cardinality of a maximal independent set, and $\tau(G)$ is the size of a minimum vertex cover, which equals $|V|-\alpha(G)$ by Gallai's identity. A minimal dominating set $D$ of $G$ with $|D|=\Gamma(G)$ is also known as an \emph{upper dominating set} of $G$. For any subset $S\subseteq V$ and $v\in S$ we define the private neighbourhood of $v$ with respect to $S$ as $pn(v,S):=N[v]-N[S-\{v\}]$. Any $w\in pn(v,S)$ is called a \emph{private neighbour of $v$ with respect to $S$}. If the set $S$ is clear from the context, we will omit the ``with respect to $S$" part. $S$ is called \emph{irredundant} if every vertex in $S$ has at least one private neighbour, i.e., if $|pn(v,S)|>0$ for every $v\in S$. $\textrm{IR}(G)$ denotes the cardinality of the largest irredundant set in $G$, while $\textrm{ir}(G)$ is the cardinality of the smallest maximal irredundant set in $G$. The domination chain is largely due to the following two combinatorial properties: (1) Every maximal independent set is a minimal dominating set. (2) A dominating set $S\subseteq V$ is minimal if and only if $|pn(v,S)|>0$ for every $v\in S$. Observe that $v$ can be a private neighbour of itself, i.e., a dominating set is minimal if and only if it is also an irredundant set. Actually, every minimal dominating set is also a maximal irredundant set. \LV{\subsection{Our combinatorial problems}}\SV{\paragraph{Our combinatorial problems.}} We will mostly deal with the following two combinatorial problems, investigating algorithmic and complexity aspects from different angles, for instance, approximation and parameterised complexity. 
\smallskip \noindent \fbox{\begin{minipage}{.47\textwidth} \textsc{Upper Domination}\xspace\nopagebreak\\\nopagebreak {\bf Input:} A graph $G=(V,E)$, a non-negative integer $k$.\\\nopagebreak {\bf Question:} Is $\Gamma(G) \geq k$? \end{minipage}}\hfill \fbox{\begin{minipage}{.47\textwidth} \noindent\textsc{Min Complement Upper Domination}\xspace\\\nopagebreak {\bf Input:} A graph $G=(V,E)$, a non-negative integer $\ell$. \\\nopagebreak {\bf Question:} Is $\Gamma(G) \geq |V|-\ell$? \end{minipage}} \smallskip From the perspective of classical complexity theory, both problems are trivially equivalent and known to be NP-complete for quite some time~\cite{CheFHJ90}. Slightly abusing notation, we will also consider them from the perspective of parameterised complexity. In that case, $k$ and $\ell$ turn out to be the natural parameters of these problems, which turn both problems into dual problems in the parameterised complexity sense of this word. Actually, as we mostly consider this natural parameterisation, no further mentioning of the choice of the parameter is necessary when putting down the results. Finally, we also consider these problems from the perspective of optimisation, again slightly abusing notations. \textsc{Upper Domination}\xspace is then a maximisation problem, while \textsc{Min Complement Upper Domination}\xspace is a minimisation problem. \LV{\subsection{On some complexity notions}} \paragraph{Parameterised Complexity.} We mainly refer to the newest textbook \cite{DowFel2013} in the area. Important notions that we will make use of include the parameterised complexity classes FPT, W[1] and W[2], parameterised reductions and kernelisation. In this area, it has also become customary not only to suppress constants (as in the well-known $O$ notation), but also even polynomial-factors, leading to the so-called $O^*$-notation. \paragraph{Approximation.} We mostly refer to the textbook~\cite{Ausetal99}. 
Given an optimisation problem and an instance $I$ of this problem, we use $|I|$ to denote the size of $I$, $\operatorname{opt}(I)$ to denote the optimum value of $I$, and $val(I,S)$ to denote the value of a feasible solution $S$ of instance $I$. The {\em performance ratio\/} of $S$ (or {\it approximation factor}) is $r(I,S)=\max\left\{\frac{val(I,S)}{\operatorname{opt}(I)}, \frac{\operatorname{opt}(I)}{val(I,S)}\right\}.$ The {\em error} of $S$, $\varepsilon(I,S)$, is defined by $\varepsilon(I,S)= r(I,S)-1.$ For a function $f$, an algorithm is an {\it $f(|I|)$-approximation\/}, if for every instance $I$ of the problem, it returns a solution $S$ such that $r(I,S) \leq f(|I|).$ When $f$ is $1+\varepsilon$ for any $\varepsilon >0$ and the algorithm runs in time polynomial in $|I|$ (though possibly exponential in $\frac 1\varepsilon$), the problem admits a polynomial time approximation scheme (PTAS for short). APX is the class of optimisation problems for which there exists a polynomial time $c$-approximation algorithm for some constant $c > 1$. For providing hardness proofs in the area of approximation algorithms, $L$-reductions have become a kind of standard. An optimisation problem which is APX-hard under $L$-reductions has no polynomial-time approximation scheme if P $\neq$ NP. We will deviate a bit from the $L$-reduction methodology, using $E$-reductions instead in one place. \LV{The notion of an $E$-reduction ({\it error-preserving} reduction) was introduced by Khanna et al. \cite{Khaetal98}. A problem $A$ is called {\it $E$-reducible} to a problem $B$, if there exist polynomial time computable functions $f$, $g$ and a constant $\beta$ such that \begin{itemize} \item $f$ maps an instance $I$ of $A$ to an instance $I'$ of $B$ such that $\operatorname{opt}(I)$ and $\operatorname{opt}(I')$ are related by a polynomial factor, i.e.
there exists a polynomial $p$ such that $\operatorname{opt}(I')\leq p(|I|) \operatorname{opt}(I)$, \item $g$ maps any solution $S'$ of $I'$ to one solution $S$ of $I$ such that $\varepsilon(I,S)\leq \beta \varepsilon(I',S')$. \end{itemize} An important property of an $E$-reduction is that it can be applied uniformly to all levels of approximability; that is, if $A$ is $E$-reducible to $B$ and $B$ belongs to $\cal{C}$ then $A$ belongs to $\cal{C}$ as well, where $\cal{C}$ is a class of optimisation problems with any kind of approximation guarantee (see also \cite{Khaetal98}).} \LV{\subsection{Our main results}}\SV{\paragraph{Main results.}} (1) We link minimal dominating sets to a decomposition of the vertex set that turns out to be a crucial tool for deriving our combinatorial and computational results. (2) We bound the upper domination number by the maximum independence number, the order and also by the maximum degree of a graph. (3) We explain the particular hardness of dealing with this graph parameter by showing NP-hardness of an extension problem variant that rules out certain natural greedy strategies for filling up partial solutions. (4) We show that \textsc{Upper Domination}\xspace is W[1]-hard, so very likely not to belong to FPT. (5) Conversely, \textsc{Min Complement Upper Domination}\xspace is in FPT, which we prove by providing both a kernelisation and a branching algorithm. (6) Likewise, \textsc{Min Complement Upper Domination}\xspace is in APX, while \textsc{Upper Domination}\xspace is not $n^{1-\varepsilon}$-approximable for any $\varepsilon>0$, unless P=NP. (7) \textsc{Upper Domination}\xspace is NP-hard even on cubic graphs. (8) Both \textsc{Upper Domination}\xspace and \textsc{Min Complement Upper Domination}\xspace are APX-complete on bounded degree graphs. (9) \textsc{Upper Domination}\xspace is in FPT for graphs of bounded degree. 
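Before moving on to the structural results, the private-neighbour characterisation of minimality recalled above, namely property (2), can be made concrete by brute force. The sketch below is illustrative only; the toy graph and function names are ours, not part of the paper:

```python
def closed_nbhd(G, v):
    # closed neighbourhood N[v] in a graph given as a dict of neighbour sets
    return {v} | G[v]

def pn(G, v, S):
    # private neighbourhood pn(v, S) = N[v] - N[S - {v}]
    dominated_by_others = set().union(*(closed_nbhd(G, u) for u in S - {v}))
    return closed_nbhd(G, v) - dominated_by_others

def is_dominating(G, S):
    return set().union(*(closed_nbhd(G, v) for v in S)) == set(G)

def is_minimal_dominating(G, S):
    # property (2): S is minimal iff it dominates and |pn(v, S)| > 0 for all v in S
    return is_dominating(G, S) and all(pn(G, v, S) for v in S)

# toy example: the path 0-1-2-3
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert is_minimal_dominating(P4, {1, 3})         # both vertices keep private neighbours
assert not is_minimal_dominating(P4, {0, 1, 3})  # pn(0, S) is empty
```

Note that a vertex can serve as its own private neighbour, which is exactly how the set $I$ of the decomposition in the next section arises.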
\SV{For reasons of space, proofs and other details were moved into an appendix to this extended abstract.} \section{\LV{Notes on the combinatorial}\SV{On the} structure of minimal dominating sets}\label{sec-FIPO} Any minimal dominating set $D$ for a graph $G=(V,E)$ can be associated with a partition of the set of vertices $V$ into four sets $F,I,P,O$ given by: $I:=\{v\in D\colon v\in pn(v,D)\}$, $F:=D-I$, $P\in\{B\subseteq N(F)\cap(V-D)\colon |pn(v,D)\cap B|=1 $ for all $ v\in F\}$ with $|F|=|P|$, $O=V-(D\cup P)$. This representation is not necessarily unique since there might be different choices for the sets $P$ and $O$, but for every partition of this kind, the following properties hold: \begin{enumerate} \item Every vertex $v\in F$ has at least one neighbour in $F$, called a \textbf{f}riend. \item The set $I$ is an \textbf{i}ndependent set in $G$. \item The subgraph induced by the vertices $F\cup P$ has an edge cut set separating $F$ and $P$ that is, at the same time, a perfect matching; hence, $P$ can serve as the set of \textbf{p}rivate neighbours for~$F$. \item The neighbourhood of a vertex in $I$ is always a subset of $O$, which are otherwise the \textbf{o}utsiders. \end{enumerate} This partition is also related to a different characterisation of $\Gamma(G)$ in terms of so-called upper perfect neighbourhoods~\cite{HHS98}. \begin{lemma} \label{|I|<=alpha-2} For any connected graph $G$ with $n>0$ vertices and an upper dominating set $D$ with an associated partition $(F,I,P,O)$ as defined above, if $|D| = \Gamma(G) > \alpha(G)$ then $|I| \leq \alpha(G) -2$. \end{lemma} \newcommand{\proofofLemmaalphatwo}{Let $G$ be a connected graph with $n>0$ vertices and let $D$ be an upper dominating set with an associated partition $(F,I,P,O)$ as defined above. We first show that if $\Gamma(G) > \alpha(G)$ then $|F| \geq 2$ (in fact, one can show that then $|F| \geq 3$, but that is not necessary for our proof). Indeed, if $|F|=0$, then the upper dominating set is also an independent set, and thus $\Gamma(G) = \alpha(G)$; moreover, according to our definition of the partition $(F,I,P,O)$, we have $|F| \ne 1$ (see Property 1 of this partition). Now, if $|F| \geq 2$ then the subgraph of $G$ induced by $F \cup P$ contains an independent set of size $2$ consisting of a vertex in $F$, say $v$, and a vertex in $P$, say $u$, such that $v$ and $u$ are not adjacent. Since in the original graph $G$, there are no edges between the vertices in $I$ and the vertices in $F \cup P$ (Property 4), $I \cup \{u,v\}$ forms an independent set of size $|I| +2$. This sets a lower bound on the independence number and we have $\alpha(G) \geq |I| +2$, that is, $|I| \leq \alpha(G) -2$.
From the above, it follows that if $\Gamma(G) > \alpha(G)$ then $|I| \leq \alpha(G) -2$.} \begin{pf}\proofofLemmaalphatwo \end{pf} \begin{lemma} \label{bounds_on_Gama} For any connected graph $G$ with $n>0$ vertices we have: \begin{equation}\alpha(G)\ \leq \ \Gamma(G)\ \leq \ \ \max \ \left\{\alpha(G), \frac{n}{2} + \frac{\alpha(G)}{2}-1\right\} \label{upperdom_is} \end{equation} \end{lemma} \newcommand{\proofofLemmaboundsonGama}{We consider a graph $G$ with $n>0$ vertices and an upper dominating set $D$ with an associated partition $(F,I,P,O)$ as defined above. The left inequality comes from the fact that any maximal independent set is a minimal dominating set. For the right inequality, we examine separately the following two cases. \begin{enumerate} \item $\Gamma(G) = \alpha(G)$. Then we trivially have $\Gamma(G) \leq \alpha(G)$. \item $\Gamma(G) > \alpha(G)$. From the fact that $|F|=|P|$ (from Property 3) we have $|F| = \frac {n-|I|-|O|}{2} \leq \left\lfloor \frac {n-|I|}{2} \right\rfloor $ and thus $$ \Gamma(G) = |F| + |I| \leq \left\lfloor \frac {n+|I|}{2} \right\rfloor $$ From the above and Lemma \ref{|I|<=alpha-2} we have $$ \Gamma(G) \ \leq \ \left\lfloor \frac {n+|I|}{2} \right\rfloor\ \leq \ \left\lfloor \frac {n+\alpha(G) -2}{2}\right \rfloor \ \leq \ \frac{n}{2} + \frac{\alpha(G)}{2}-1 $$ \end{enumerate} This concludes the proof of the claim.} \begin{pf}\proofofLemmaboundsonGama \end{pf} The following lemma generalises the earlier upper bound on $\textrm{IR}(G)$ (and hence on $\Gamma(G)$) for $\Delta$-regular graphs $G$, namely $\textrm{IR}(G)\leq n/2$; see~\cite[Proposition~12]{HenSla96}. Notice that our result indeed generalises this bound of Henning and Slater, as minimum and maximum degrees coincide for regular graphs.
\begin{lemma}\label{bounds_on_Gama_with_Delta} For any connected graph $G$ with $n>0$ vertices, minimum degree $\delta$ and maximum degree $\Delta$, we have: \begin{equation}\alpha(G) \ \leq \ \Gamma(G)\ \leq \ \max \ \left\{\alpha(G), \ \frac{n}{2} + \frac{\alpha(G)(\Delta-\delta)}{2\Delta}-\frac {\Delta-\delta}{\Delta}\right\} \label{upperdomdelta_is} \end{equation} \end{lemma} \newcommand{\proofofLemmaboundsonGamawithDelta}{Let $G$ be a connected graph with $n>0$ vertices, minimum degree $\delta$, maximum degree $\Delta$ and an upper dominating set $D$ with an associated partition $(F,I,P,O)$ as defined above. Our argument is similar to the one in Lemma \ref{bounds_on_Gama}: The left inequality comes from the fact that any maximal independent set is a minimal dominating set. For the right inequality, we examine separately the following two cases. \begin{enumerate} \item $\Gamma(G) = \alpha(G)$. Then we trivially have $\Gamma(G) \leq \alpha(G)$. \item $\Gamma(G) > \alpha(G)$. Again, we obtain: $$ \Gamma(G) = |F| + |I| = \frac {n+|I|-|O|}{2} $$ We next derive an improved lower bound on $|O|$. Let $e$ be the number of edges incident to vertices in $I$. As $G$ is of minimum degree $\delta$, we have $e \geq \delta|I|$. As the vertices in $I$ are only adjacent to vertices in $O$, there are at least $e$ edges that have exactly one end vertex in $O$. Since $G$ has maximum degree $\Delta$, we have that $|O| \geq \left\lceil \frac{e}{\Delta} \right\rceil \geq \left\lceil \frac{\delta|I|}{\Delta} \right\rceil$.
From the above and Lemma \ref{|I|<=alpha-2} we have \begin{eqnarray*}\Gamma(G) \ \leq & \left\lfloor \frac {n+|I|- \left\lceil \frac {\delta|I|}{\Delta} \right\rceil }{2} \right\rfloor & \leq \ \frac {n+|I|- \frac {\delta|I|}{\Delta} }{2} = \frac {n+ \frac {(\Delta-\delta)|I|}{\Delta} }{2}\\ \leq & \frac {n+ \frac {(\Delta-\delta)}{\Delta}(\alpha(G)-2) }{2} & = \ \frac{n}{2} + \frac{\Delta-\delta}{2\Delta}\alpha(G)-\frac{\Delta-\delta}{\Delta} \end{eqnarray*} \end{enumerate} To combine these two upper bounds, we take their maximum, which concludes the proof.} \begin{pf}\proofofLemmaboundsonGamawithDelta \end{pf} Given a graph on $n$ vertices, we subtract the inequalities (\ref{upperdom_is}) from $n$\SV{; thus:}\LV{ and thus $$ \frac{n}{2} - \frac{\alpha(G)}{2}+1\leq n- \Gamma(G)\leq n-\alpha(G) $$ so that we obtain the following relationship for the vertex cover number $\tau(G)$.} \begin{lemma}\label{lem-complupperdom} Let $G$ be a connected graph of order $n$. Then, \begin{equation} \frac{\tau(G)}{2}+1 \leq n- \Gamma(G)\leq \tau(G) \label{complupperdom} \end{equation} \end{lemma} \LV{Observe that all our bounds derived in this section are also valid for the upper irredundance number $\textrm{IR}(G)$ (instead of $\Gamma(G)$). We would also like to point the reader to \cite{Zve2003}, where another interesting combinatorial bound was shown, namely $$\textrm{IR}(G)-\alpha(G)\leq \left\lceil\frac{\Delta-2}{2\Delta}n \right\rceil\,. $$ This bounds the difference between the two quantities that are maximised in the previous lemma (recall that the same bounds hold for upper irredundance and for upper domination).
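These combinatorial bounds are easy to check exhaustively on small graphs. The following brute-force sketch (plain Python; the function names are ours and purely illustrative, not part of the formal development) computes $\alpha(G)$ and $\Gamma(G)$ by enumerating vertex subsets and verifies Lemma~\ref{bounds_on_Gama} for the graph consisting of two triangles joined by a perfect matching, where $\alpha(G)=2$ and $\Gamma(G)=3$.

```python
from itertools import combinations

def is_dominating(adj, D):
    """D dominates G iff every vertex lies in D or is adjacent to D."""
    closed = set(D)
    for v in D:
        closed |= adj[v]
    return len(closed) == len(adj)

def is_minimal_dominating(adj, D):
    """Minimal: dominating, and removing any single vertex destroys domination."""
    return is_dominating(adj, D) and \
        all(not is_dominating(adj, D - {v}) for v in D)

def upper_domination(adj):
    """Gamma(G): largest cardinality of a minimal dominating set (brute force)."""
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        if any(is_minimal_dominating(adj, set(D)) for D in combinations(vs, k)):
            return k
    return 0

def independence(adj):
    """alpha(G): largest cardinality of an independent set (brute force)."""
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        for S in combinations(vs, k):
            if all(u not in adj[v] for u, v in combinations(S, 2)):
                return k
    return 0

# Two triangles v0v1v2 and v3v4v5, joined by the perfect matching i -- i+3.
G = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5},
     3: {4, 5, 0}, 4: {3, 5, 1}, 5: {3, 4, 2}}
a, g, n = independence(G), upper_domination(G), len(G)
assert a <= g <= max(a, n / 2 + a / 2 - 1)  # alpha <= Gamma <= max{alpha, n/2 + alpha/2 - 1}
```

Here the bound of Lemma~\ref{bounds_on_Gama} is tight: $\alpha=2$, $\Gamma=3$ and $\max\{2, 6/2+2/2-1\}=3$.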
As our hardness results will also prove hardness for cubic triangle-free graphs, the following bound \cite[Theorem 5]{Zve2003} could also be interesting.} \section{What makes this problem that hard?} \label{sec-extension} Algorithms working on combinatorial graph problems often try to look at local parts of the graph and then extend some part of the (final) solution that was found and fixed so far. This type of strategy is at least difficult to implement for \textsc{Upper Domination}\xspace, as the following example shows. First, consider a graph $G_n$ that consists of two cliques with vertices $V_n=\{v_1,\dots,v_n\}$ and $W_n=\{w_1,\dots,w_n\}$, where the only edges connecting both cliques are $v_iw_i$ for $1\leq i\leq n$. Observe that $G_n$ has as minimal dominating sets $V_n$, $W_n$, and $\{v_i,w_j\}$ for all $1\leq i,j\leq n$. For $n\geq 3$, the first two are upper dominating sets, while the last $n^2$ many are minimum dominating sets. If we now add a vertex $v_0$ to $G_n$, arriving at graph $G_n'$, and make $v_0$ adjacent to all vertices in $V_n$, then $V_n$ is still a minimal dominating set, but $W_n$ is no longer a dominating set. Now, we have $\{v_i,w_j\}$ for all $0\leq i\leq n$ and all $1\leq j\leq n$ as minimum dominating sets. But, if we add one further vertex $w_0$ to $G_n'$ to obtain $G_n''$ and make $w_0$ adjacent to all vertices in $W_n$, then all upper dominating sets are also minimum dominating sets and vice versa. This shows that we cannot consider vertices one by one, but must rather look at the whole graph. For many maximisation problems, like \textsc{Upper Irredundance}\xspace or \textsc{Maximum Independent Set}\xspace, it is trivial to obtain a feasible solution that extends a given vertex set, namely by some greedy strategy, or to know that no such extension exists. The situation is different for \textsc{Upper Domination}\xspace, as we see next.
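For $n=3$, the behaviour of the family $G_n,G_n',G_n''$ described above can be confirmed by exhaustive search; the following sketch (plain Python, purely illustrative and not part of the formal development) enumerates all minimal dominating sets of $G_3''$ and checks that each of them has cardinality two, so that upper and minimum dominating sets indeed coincide.

```python
from itertools import combinations

def is_dominating(adj, D):
    """D dominates G iff every vertex lies in D or is adjacent to D."""
    closed = set(D)
    for v in D:
        closed |= adj[v]
    return len(closed) == len(adj)

def is_minimal_dominating(adj, D):
    """Minimal: dominating, and removing any single vertex destroys domination."""
    return is_dominating(adj, D) and \
        all(not is_dominating(adj, D - {v}) for v in D)

# G_3'': cliques V_3 = {0,1,2} and W_3 = {3,4,5}, matching i -- i+3,
# plus v_0 = 6 adjacent to all of V_3 and w_0 = 7 adjacent to all of W_3.
adj = {0: {1, 2, 3, 6}, 1: {0, 2, 4, 6}, 2: {0, 1, 5, 6},
       3: {4, 5, 0, 7}, 4: {3, 5, 1, 7}, 5: {3, 4, 2, 7},
       6: {0, 1, 2}, 7: {3, 4, 5}}

minimal_ds = [set(D) for k in range(1, len(adj) + 1)
              for D in combinations(adj, k)
              if is_minimal_dominating(adj, set(D))]
assert {len(D) for D in minimal_ds} == {2}  # every minimal dominating set is minimum
```

In particular, both $\{v_0,w_0\}$ and each pair $\{v_i,w_j\}$ appear in the enumeration, and no minimal dominating set of cardinality three or more exists.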
To be able to reason about this problem, let us first define it formally. \begin{center} \fbox{\begin{minipage}{.95\textwidth} \noindent{\sc Minimal Dominating Set Extension}\\\nopagebreak {\bf Input:} A graph $G=(V,E)$, a set $S \subseteq V$. \\\nopagebreak {\bf Question:} Does $G$ have a minimal dominating set $S'$ with $S'\supseteq S$? \end{minipage}} \end{center} Notice that this problem is trivial on any input with $S=\emptyset$\SV{ by using a greedy approach}. \LV{Namely, in that case, we can start a greedy algorithm from $D:=V$, gradually deleting vertices from $D$ until the domination property would be destroyed. So, we end up with a set $D$ from which we cannot remove any vertex while keeping the domination property of $D$.} However, this strategy might fail if the given set $S$ is non-empty, as we must also maintain the property of being a superset of $S$. This difficulty is very hard to overcome, as the next result shows. Its proof is based on a reduction from {\sc 4-Bounded Planar 3-Connected SAT} (4P3C3SAT) \cite{Kra94}. \begin{theorem}\label{thm-MDSE-hardness} {\sc Minimal Dominating Set Extension} is NP-complete, even when restricted to planar cubic graphs. \end{theorem} \newcommand{\proofofTheoremMDSEhardness}{Membership in NP is obvious. NP-hardness can be shown by reduction from {\sc 4-Bounded Planar 3-Connected SAT} (4P3C3SAT) \cite{Kra94}: Consider an instance of 4P3C3SAT with clauses $c_1,\dots, c_m$ and variables $v_1,\dots, v_n$. By definition, the graph $G=(V,E)$ with $V=\{c_1,\dots,c_m\}\cup\{v_1,\dots,v_n\}$ and $E=\{(c_j,v_i)\colon v_i$ or $ \bar v_i$ is a literal of $c_j\}$ is planar. Replace every vertex $v_i$ by six new vertices $f_i^1,x_i^1,t_i^1,t_i^2,x_i^2,f^2_i$ with edges $(f_i^j,x^j_i),(t_i^j,x_i^j)$ for $j=1,2$. If $v_i$ (positive) is a literal in more than two clauses, add the edge $(f^1_i,f^2_i)$, else add the edge $(t^1_i,t^2_i)$.
By definition of the problem 4P3C3SAT, each variable appears in at most four clauses and this procedure of replacing the variable-vertices in $G$ by a $P_6$ preserves planarity. To see this, consider any fixed planar embedding of $G$ and any variable $v_i$ which appears in clauses $c_1,c_2,c_3,c_4$, in the embedding placed like in the picture below: \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (-4.2,1.2) {{\small $v_i$}}; \node (vi) at (-4,1) {}; \node[draw=none, fill=none] (n) at (-4,2.3) {{\small $c_1$}}; \node (l1) at (-4,2) {}; \node[draw=none, fill=none] (n) at (-4,-0.3) {{\small $c_2$}}; \node (l2) at (-4,0) {}; \node[draw=none, fill=none] (n) at (-5,1.3) {{\small $c_3$}}; \node (l3) at (-5,1) {}; \node[draw=none, fill=none] (n) at (-3,1.3) {{\small $c_4$}}; \node (l4) at (-3,1) {}; \foreach \from/\to in {l1/vi,l2/vi,l3/vi,l4/vi} \draw (\from) -- (\to); \end{tikzpicture} \end{center} Depending on whether $v_i$ appears negated or non-negated in these clauses, we differentiate between the following cases; in the following pictures, vertices plotted in black are the ones to be put into the vertex set $S$ predetermined to be in the minimal dominating set.\\ If $v_i$ is literal in $c_1,c_2,c_3$ and $\bar v_i$ literal in $c_4$: \medskip \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (1.3,0) {{\small $t_i^1$}}; \node (t1) at (1,0) {}; \node[draw=none, fill=none] (n) at (1.3,-0.5) {{\small $c_2$}}; \node (c2) at (1,-0.50) {}; \node[draw=none, fill=none] (n) at (-0.5,1.3) {{\small $c_3$}}; \node (c3) at (-0.5,1) {}; \node[draw=none, fill=none] (n) at (1,1.3) {{\small $f_i^1$}}; \node (f1) at (1,1) {}; \node[draw=none, fill=none] (n) at (2.5,1.25) {{\small $c_4$}}; \node (c4) at (2.5,1) {}; \node[draw=none, fill=none] (n) at (2,1.3) {{\small $f_i^2$}}; \node (f2) at (2,1) {}; \node[draw=none, fill=none] (n) at 
(0.7,2.5) {{\small $c_1$}}; \node (c1) at (1,2.5) {}; \node[draw=none, fill=none] (n) at (0.7,2) {{\small $t_i^2$}}; \node (t2) at (1,2) {}; \node[draw=none, fill=none] (n) at (1.3,0.5) {{\small $ x_i^1$}}; \node[fill=black] (x1) at (1,0.5) {}; \node[draw=none, fill=none] (n) at (1.7,1.7) {{\small $ x_i^2$}}; \node[fill=black] (x2) at (1.5,1.5) {}; \foreach \from/\to in {f1/x1,t1/x1,f2/x2,t2/x2, f1/f2,c3/t1,c2/t1,f2/c4,t2/c1} \draw (\from) -- (\to); \end{tikzpicture} \end{center} \medskip If $v_i$ is literal in $c_2,c_4$ and $\bar v_i$ literal in $c_1,c_3$: \medskip \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (1.3,0) {{\small $t_i^1$}}; \node (t1) at (1,0) {}; \node[draw=none, fill=none] (n) at (1.3,-0.5) {{\small $c_2$}}; \node (c2) at (1,-0.50) {}; \node[draw=none, fill=none] (n) at (-0.5,1.3) {{\small $c_3$}}; \node (c3) at (-0.5,1) {}; \node[draw=none, fill=none] (n) at (0,1.3) {{\small $f_i^1$}}; \node (f1) at (0,1) {}; \node[draw=none, fill=none] (n) at (2.5,1.25) {{\small $c_4$}}; \node (c4) at (2.5,1) {}; \node[draw=none, fill=none] (n) at (2,1.3) {{\small $t_i^2$}}; \node (t2) at (2,1) {}; \node[draw=none, fill=none] (n) at (0.7,2.5) {{\small $c_1$}}; \node (c1) at (1,2.5) {}; \node[draw=none, fill=none] (n) at (0.7,2) {{\small $f_i^2$}}; \node (f2) at (1,2) {}; \node[draw=none, fill=none] (n) at (0.25,0.25) {{\small $ x_i^1$}}; \node[fill=black] (x1) at (0.5,0.5) {}; \node[draw=none, fill=none] (n) at (1.7,1.7) {{\small $ x_i^2$}}; \node[fill=black] (x2) at (1.5,1.5) {}; \foreach \from/\to in {f1/x1,t1/x1,f2/x2,t2/x2, t1/t2,c3/f1,c2/t1,t2/c4,f2/c1} \draw (\from) -- (\to); \end{tikzpicture} \end{center} \medskip If $v_i$ is literal in $c_1,c_2$ and $\bar v_i$ literal in $c_3,c_4$: \medskip \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (1.3,0) {{\small $t_i^1$}}; \node (t1) at (1,0) {}; \node[draw=none, 
fill=none] (n) at (1.3,-0.5) {{\small $c_2$}}; \node (c2) at (1,-0.50) {}; \node[draw=none, fill=none] (n) at (-0.5,1.3) {{\small $c_3$}}; \node (c3) at (-0.5,1) {}; \node[draw=none, fill=none] (n) at (0,1.3) {{\small $f_i^1$}}; \node (f1) at (0,1) {}; \node[draw=none, fill=none] (n) at (2.5,1.25) {{\small $c_4$}}; \node (c4) at (2.5,1) {}; \node[draw=none, fill=none] (n) at (2,1.3) {{\small $f_i^2$}}; \node (f2) at (2,1) {}; \node[draw=none, fill=none] (n) at (0.7,2.5) {{\small $c_1$}}; \node (c1) at (1,2.5) {}; \node[draw=none, fill=none] (n) at (0.7,2) {{\small $t_i^2$}}; \node (t2) at (1,2) {}; \node[draw=none, fill=none] (n) at (0.25,0.25) {{\small $ x_i^1$}}; \node[fill=black] (x1) at (0.5,0.5) {}; \node[draw=none, fill=none] (n) at (1.7,1.7) {{\small $ x_i^2$}}; \node[fill=black] (x2) at (1.5,1.5) {}; \foreach \from/\to in {f1/x1,t1/x1,f2/x2,t2/x2, t1/t2,c3/f1,c2/t1,f2/c4,t2/c1} \draw (\from) -- (\to); \end{tikzpicture} \end{center} All other cases are rotations of the above three cases and/or invert the roles of $v_i$ and $\bar v_i$. Also, if a variable only appears positively (or negatively), it can be deleted along with the clauses which contain it. The maximum degree of the vertices which replace $v_i$ is three. 
Replace each clause-vertex $c_j$ by the following subgraph: \medskip \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (1,-0.3) {{\small $c_j^1$}}; \node (cj1) at (1,0) {}; \node[draw=none, fill=none] (n) at (1,2.3) {{\small $c_j^2$}}; \node (cj2) at (1,2) {}; \node[draw=none, fill=none] (n) at (0,0.2) {{\small $z_j^1$}}; \node (zj1) at (0,0.5) {}; \node[draw=none, fill=none] (n) at (0,1.8) {{\small $z_j^2$}}; \node (zj2) at (0,1.5) {}; \node[draw=none, fill=none] (n) at (-1,1.25) {{\small $z_j$}}; \node[fill=black] (zj) at (-1,1) {}; \node[draw=none, fill=none] (n) at (-1.5,1.25) {{\small $s_j$}}; \node[fill=black] (s) at (-1.5,1) {}; \node[draw=none, fill=none] (n) at (-2,1.25) {{\small $p_j$}}; \node (p) at (-2,1) {}; \foreach \from/\to in {cj1/zj1,cj2/zj2,zj1/zj,zj2/zj,zj/s,s/p} \draw (\from) -- (\to); \end{tikzpicture} \end{center} The vertices $c_j^1,c_j^2$ somehow take the role of the old vertex $c_j$ regarding its neighbours: $c_j^1$ is adjacent to two of the literals of $c_j$ and $c_j^2$ is adjacent to the remaining literal. This way, all vertices have degree at most three and the choices of literals to connect to $c_j^1$ and $c_j^2$ can be made such that planarity is preserved. Let $G'$ be the graph obtained from $G$ by replacing all vertices according to the above rules. The input $G'$ and $S:=\{x_i^1,x_i^2\colon i=1,\dots, n\}\cup\{s_j,z_j\colon j=1,\dots,m\}$ is a ``yes"-instance for {\sc Minimal Dominating Set Extension} if and only if the formula associated to $G$ is a ``yes"-instance for 4P3C3SAT. Let $G$ be the graph associated to a satisfiable 4P3C3SAT-formula $c_1\wedge c_2\wedge \dots \wedge c_m$. Consider a satisfying assignment $\phi$ for $c_1\wedge c_2\wedge\dots\wedge c_m$ and the corresponding vertex-set $W:=\{t_i^1,t_i^2\colon \phi(v_i)=1\}\cup\{f_i^1,f_i^2\colon \phi(v_i)=0\}$. 
Let $W'$ be an arbitrary inclusion-minimal subset of $W$ such that $\{c^1_j,c^2_j\}\cap N_{G'}(W')\not=\emptyset$ for all $j\in \{1,\dots,m\}$; $W$ itself has this domination-property since $\phi$ satisfies the formula $c_1\wedge c_2\wedge\dots\wedge c_m$. By the inclusion-minimality of $W'$, the set $S\cup W'$ is irredundant: Each vertex in $W'$ has at least one of the $c_j^k$ as private neighbour, the vertices $x_i^k$ have either $t^k_i$ or $f^k_i$ as a private neighbour, $pn(s_j,S\cup W')=\{p_j\}$ and $pn(z_j,S\cup W')=\{z_j^1,z_j^2\}$. The set $S\cup W'$ might however not dominate all vertices $c_j^k$. Adding the set $Y:=\{z_j^k\colon c_j^k\notin N_{G'}(W')\}$ to $S\cup W'$ creates a dominating set. Since for each clause $c_j$ either $c_j^1\in N_{G'}(W')$ or $c_j^2\in N_{G'}(W')$, either $z_j^1$ or $z_j^2$ remains in the private neighbourhood of $z_j$. Other private neighbourhoods are not affected by $Y$. Lastly, each vertex $z_j^k\in Y$ has the clause-vertex $c_j^k$ as private neighbour, by the definition of $Y$, so overall $S\cup W'\cup Y$ is a minimal dominating set. If the input $(G',S)$ is a ``yes"-instance for {\sc Minimal Dominating Set Extension}, the set $S$ can be extended to a set $S'$ which in particular dominates all vertices $c_j^k$ and has at least one private neighbour for each $z_j$. The latter condition implies that $S'\cap \{z_j^k,c_j^k\}=\emptyset$ for $k=1$ or $k=2$ for each $j\in \{1,\dots,m\}$. A vertex $c_j^k$ for which $S'\cap \{z_j^k,c_j^k\}=\emptyset$ has to be dominated by a variable-vertex, which means that $t_i^k\in S'$ ($f_i^k\in S'$) for some variable $v_i$ which appears positively (negatively) in $c_j$. Minimality of $S'$ requires at least one private neighbour for each $x_i^k$ which, by construction of the variable-gadgets, means that either $\{f_i^1,f_i^2\}\cap S'=\emptyset$ or $\{t_i^1,t_i^2\}\cap S'=\emptyset$, so each variable can only be represented either positively or negatively in $S'$.
Overall, the assignment $\phi$ with $\phi(v_i)=1$ if $\{t_i^1,t_i^2\}\cap S'\not=\emptyset$ and $\phi(v_i)=0$ otherwise satisfies $c_1\wedge c_2\wedge\dots\wedge c_m$. Finally, $G'$ can be transformed into a cubic planar graph by adding the following subgraph once to every vertex $v$ of degree two, and twice for each degree-one vertex: \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[inner sep=0.5mm,draw,circle] \node[draw=none, fill=none] (n) at (0.2,1.5) {{\small $v$}}; \node (v) at (0,1.5) {}; \node (y) at (0,0.5) {}; \node (w2) at (0,1) {}; \node[fill=black] (w5) at (0.55,0.5) {}; \node[fill=black] (w3) at (-0.5,0.5) {}; \node(w4) at (0,0) {}; \foreach \from/\to in {w2/w3,w2/w5,w5/w4,w4/w3,w5/y,w3/y,y/w4,w2/v} \draw (\from) -- (\to); \end{tikzpicture} \end{center} Add the new black vertices to the set $S$. Then all new vertices are dominated, and adding another one of them to the dominating set violates irredundance. The original vertex is not dominated; adding it to the dominating set does not violate irredundance within the new vertices, and the new vertices can never be private neighbours of any original vertex, so the structure of $G'$ in the above argument does not change.} \begin{pf}\proofofTheoremMDSEhardness \end{pf} \section{General graphs} It was already shown decades ago that \textsc{Upper Domination}\xspace is NP-complete on general graphs. Also, polynomial-time solvable graph classes are known, mainly due to the fact that on certain graph classes like bipartite graphs, $\alpha(G)=\Gamma(G)$ holds, and it is known that on such classes, the independence number can be computed in polynomial time. We refer to the textbook on domination~\cite{HHS98} for further details. In this section, we complement these results by corresponding results on the parameterised complexity and approximation complexity of \textsc{Upper Domination}\xspace and the complement problem.
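As an illustration of the equality $\alpha(G)=\Gamma(G)$ on bipartite graphs mentioned above, the following brute-force sketch (plain Python; purely illustrative, all names are ours) checks it for the complete bipartite graph $K_{2,3}$ and the path $P_6$.

```python
from itertools import combinations

def is_dominating(adj, D):
    """D dominates G iff every vertex lies in D or is adjacent to D."""
    closed = set(D)
    for v in D:
        closed |= adj[v]
    return len(closed) == len(adj)

def is_minimal_dominating(adj, D):
    """Minimal: dominating, and removing any single vertex destroys domination."""
    return is_dominating(adj, D) and \
        all(not is_dominating(adj, D - {v}) for v in D)

def upper_domination(adj):
    """Gamma(G): largest cardinality of a minimal dominating set (brute force)."""
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        if any(is_minimal_dominating(adj, set(D)) for D in combinations(vs, k)):
            return k
    return 0

def independence(adj):
    """alpha(G): largest cardinality of an independent set (brute force)."""
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        for S in combinations(vs, k):
            if all(u not in adj[v] for u, v in combinations(S, 2)):
                return k
    return 0

# K_{2,3} with parts {0,1} and {2,3,4}, and the path P_6 on vertices 0,...,5.
K23 = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1}, 3: {0, 1}, 4: {0, 1}}
P6 = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
assert independence(K23) == upper_domination(K23) == 3
assert independence(P6) == upper_domination(P6) == 3
```

Of course, such exhaustive enumeration is exponential and only serves to illustrate the combinatorial statements on tiny instances.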
\subsection{Parameterised complexity} \begin{theorem}\label{w_hardness} \textsc{Upper Domination}\xspace is W[1]-hard. \end{theorem} Our proof is a reduction from \textsc{Multicoloured Clique}\xspace, a problem introduced in~\cite{FelHRV2009} to facilitate W[1]-hardness proofs. We leave it open whether our problem belongs to W[1] or whether it can be shown to be W[1]-hard on very restricted graph classes, similar to the results obtained in~\cite{FelHRV2009} for \textsc{Minimum Domination}. \newcommand{\proofofTheoremWhardness}{Let $G=(V,E)$ be a graph with $k$ different colour-classes given by $V=V_1\cup V_2\cup \dots\cup V_k$. \textsc{Multicoloured Clique}\xspace asks whether there exists a clique $C\subseteq V$ in $G$ such that $|V_i\cap C|= 1$ for all $i=1,\dots,k$. For this problem, one can assume that each set $V_i$ is an independent set in $G$, since edges between vertices of the same colour-class have no impact on the existence of a solution. \textsc{Multicoloured Clique}\xspace is known to be W[1]-complete, parameterised by $k$. We construct a graph $G'$ such that $G'$ has an upper dominating set of cardinality (at least) $k+\frac 12(k^2-k)$ if and only if $G$ is a ``yes"-instance for \textsc{Multicoloured Clique}\xspace, which proves W[1]-hardness for \textsc{Upper Domination}\xspace, parameterised by $\Gamma(G')$. Consider $G'=(V',E')$ given by: $V':=V\cup \{v_e\colon e\in E\}$ and \begin{eqnarray*} E'&:=&\bigcup_{i=1}^kV_i\times V_i\\ &\cup&\bigcup_{i=1}^k\bigcup_{j=1}^k\left\{(v_{(u,w)},x)\colon (u,w)\in (V_i\times V_j)\cap E, x\in \left((V_i\cup V_j)-\{u,w\}\right)\right\}\\ &\cup&\bigcup_{i=1}^k\bigcup_{j=1}^k\left\{(v_{e},v_{e'})\colon e,e'\in (V_i\times V_j)\cap E\right\}\,.
\end{eqnarray*} If $C\subset V$ is a (multi-coloured) clique of cardinality $k$ in $G$, the set $S':= C\cup \{v_{(u,v)}\colon u,v\in C\}$ is an upper dominating set for $G'$ of cardinality $k+\frac 12(k^2-k)$: First of all, $\{v_{(u,v)}\colon u,v\in C\}\subset V'$ since $(u,v)\in E$ for all $u,v\in C$. Further, by definition of the edges $E',$ $u,v\notin N_{G'}(v_{(u,v)})$ and $u\notin N_{G'}(v)$ for $u$ and $v$ from different colour classes, so $S'$ is an independent set in $G'$ and hence a minimal dominating set. It can easily be verified that $S'$ is also dominating for $G'$ -- observe that it contains exactly one vertex from each of the cliques $V_i$ and $\{v_e\colon e\in (V_i\times V_j)\cap E\}$ that together cover $G'$. Suppose $S$ is a minimal dominating set for $G'$. Consider the partition $S=\left(\bigcup _{i=1}^k S_i\right) \cup \left( \bigcup_{1\leq i<j\leq k} S_{\{i,j\}} \right)$ defined by: $S_i:=S\cap V_i$ for $i=1,\dots,k$ and $S_{\{i,j\}}:= S\cap \{v_e\colon e\in V_i\times V_j\}$ for all $1\leq i<j\leq k$. The minimality of $S$ gives the following properties for these subsets of $S$: \begin{enumerate} \item If $|S_i|>1$ for some index $i\in \{1,\dots,k\}$, minimality implies $|S_i|=2$ and for all $j\not=i$ either $S_{\{i,j\}}=\emptyset $ or $S_j=\emptyset$: \begin{center} \includegraphics[scale=0.5]{w1_1} \end{center} For every $u\in V_i$, by construction $V_i\subset N[u]$; hence, if there is more than one vertex in $S_i$, then their private neighbours have to be in $\{v_e\colon e\in E\}$. A vertex $v_e$ with $e\in V_i\times V_j$ is not adjacent to a vertex $u\in V_i$ if and only if $e=(u,w)$ for some $w\in V_j$. For two different vertices $u,v\in V_i$, consequently, all $v_e$ with $e\in V_i\times V_j$ are adjacent to either $u$ or $v$; a third vertex $w\in V_i$ consequently cannot have any private neighbour.
This also means that any vertex $v_e\in S_{\{i,j\}}$ has to have a private neighbour in $V_j$, so if $S_{\{i,j\}}\not=\emptyset$, the set $S_j$ has to be empty, because one vertex from $S_j$ dominates all vertices in $V_j$. These observations hold for all $j\not= i$. \item If $|S_{\{i,j\}}|>1$ for some indices $i,j\in \{1,\dots,k\}$ we find that $|S_{\{i,j\}}|=2$, $|S_i|,|S_j|\leq 1$ and that $S_i\not=\emptyset$ implies $S_j=S_{\{j,l\}}=\emptyset$ for all $l\in \{1,\dots,k\}-\{i,j\}$ (and equivalently $S_j\not=\emptyset$ implies $S_i=S_{\{i,l\}}=\emptyset$ for all $l\in \{1,\dots,k\}-\{i,j\}$): \begin{center} \includegraphics[scale=0.4]{w1_2} \end{center} Since for any two vertices $u,v$ from $S_{\{i,j\}}$ we have $\{v_e\colon e\in (V_i\times V_j)\cap E\}\cup V_i\cup V_j \subset N(u)\cup N(v)$, the cardinality of $S_{\{i,j\}}$ can be at most two. If there is a vertex $y$ in $S_i$, it already dominates all of $V_i$, so private neighbours for $u,v\in S_{\{i,j\}}$ have to be in $V_j$. For any two vertices $w,w'\in V_j$, any $v_e\in V'\cap\{v_e\colon e\in V_j\times V_l, \ l=1,\dots,k\}$ is adjacent to at least one of $w$ and $w'$, so, especially for the private neighbours of $u$ and $v$, every $x\in S_{\{j,l\}}$ would be adjacent to one of them and consequently cannot be in a minimal dominating set, so $S_j=S_{\{j,l\}}=\emptyset$. Dominating the vertices in $\{v_e\colon e\in (V_j\times V_l)\cap E\}$ for $l\not=i$ then requires $|S_l|=2$ for all $l\not=i$, which leaves no possible private neighbours outside $V_i$ for vertices in $V_i$, so $|S_i|=1$. \item If $|S_i|=2$ there exists an index $j\not=i$ such that $S_{\{i,j\}}=\emptyset$ and $|S_j|\leq 1$:\\ Let $u,v\in S_i$. By the structure of $G'$, $u$ and $v$ share all neighbours in $V_i$ and all $v_e$ such that $e=(x,y)\in V_i\times V_l$ with $x\not\in \{u,v\}$ for all $l\not=i$, so especially the private neighbourhood of $u$ is restricted to $pn(u,S)\subseteq\{v_e\colon e=(v,y)\in E\}$.
Let $j$ be an index such that there is a vertex $z\in V_j$ with $v_{(u,z)}\in pn(v,S)$ (there is at least one such index). No neighbours of $v_{(u,z)}$ besides $v$ can be in $S$, which means that $S_{\{i,j\}}=\emptyset$ and $S_j\subseteq \{z\}$. \item $|S_{\{i,l\}}|= 2$ implies $|S_{\{j,l\}}|\leq 1$ for all $j\not=i$.\\ Suppose $|S_{\{i,l\}}|,|S_{\{j,l\}}|\geq 2$ for some indices $i,j,l\in \{1,\dots,k\}$. By property 2, both sets $S_{\{i,l\}},S_{\{j,l\}}$ have cardinality two, so let $u_i,w_i\in S_{\{i,l\}} $ and $u_j,w_j\in S_{\{j,l\}}$. Since each set $\{v_e\colon e\in E\cap (V_s\times V_t)\}$ is a clique, the private neighbours for these vertices have to be in $V_i,V_j,V_l$. Suppose $v\in pn(u_i,S)\cap V_l$, which means that $w_i,u_j,w_j$ are not adjacent to $v$. This is only possible if $w_i$ represents some edge $(v,x)\in E\cap (V_l\times V_i)$ and $u_j,w_j$ represent some edges $(v,y),(v,y')\in E\cap (V_l\times V_j)$. By definition of $E'$, $w_i,u_j,w_j$ then share their neighbourhood in $V_l$ (namely $V_l-\{v\}$), which means that $pn(w_i,S)\subset V_i$ and $pn(u_j,S)\cup pn(w_j,S)\subset V_j$, which implies $S_i=S_j=\emptyset$. So in any case, even if there is no $v\in pn(u_i,S)\cap V_l$, at least one of the sets $V_i$ or $V_j$ contains two vertices which are private neighbours for the vertices of $S_{\{i,l\}}$ or $S_{\{j,l\}}$, respectively, and $S_i=S_j=\emptyset$. Suppose $V_j$ contains two private neighbours $y\not=y'$ for $u_j$ and $w_j$, respectively. For any two arbitrary vertices $n_1,n_2\in V_j$, any vertex $x\in\{v_e\colon e\in E\cap (V_i\times V_j)\}$ is adjacent to at least one of them, which means that any $x\in S_{\{i,j\}}$ would steal at least $y\in pn(u_j,S)$ or $y'\in pn(w_j,S)$ as private neighbour. Minimality of $S$ hence demands $S_i=S_j=S_{\{i,j\}}=\emptyset$. A set with this property however does not dominate any of the vertices $v_e$ with $e\in E\cap (V_i\times V_j)$.
(The set $E\cap (V_i\times V_j)$ is not empty unless the graph $G$ is a trivial ``no"-instance for \textsc{Multicoloured Clique}\xspace.) \end{enumerate} According to these properties, the indices of these subsets of $S$ can be divided into the following six sets: $C_i:=\{j\colon |S_j|=i\}$ and $D_i:=\{(j,l)\colon |S_{\{j,l\}}|=i\}$ for $i=0,1,2$, which then give $|S|=2(|C_2|+|D_2|)+|C_1|+|D_1|$. If $|C_2|+|D_2|\not=0$ and $k>3$, we can construct an injective mapping $f\colon C_2\cup D_2 \cup \{a\}\rightarrow C_0\cup D_0$ with some $a\notin V'$ in the following way: \begin{itemize} \item For every $i\in C_2$ choose some $j\not=i$ with $(i,j)\in D_0$ and $j\notin C_2$, which exists according to property 3, and set $f(i)=(i,j)$. Since $j\notin C_2$, this setting is injective. If $D_2=\emptyset$ and $C_2=\{i\}$, choose some $l\not=i$ and map $a$ via $f$ either to $l$ or to $(i,l)$, since, by property 1, one of them is in $C_0$ or $D_0$, respectively. If $D_2=\emptyset$ and $|C_2|>1$, choose some $i,l\in C_2$ and set $f(a)=(i,l)$, since $S_{\{i,l\}}=\emptyset$ by property 1 and neither $i$ nor $l$ is mapped to $(i,l)$. \item For $(i,j)\in D_2$, property 2 implies that at least one of $i$ and $j$ lies in $C_0$. By property 4, we can choose one of them arbitrarily without violating injectivity. If both are in $C_0$, we can use one of them to map $a$. If for all $(i,j)\in D_2$ only one of the indices $i,j$ is in $C_0$, we still have to map $a$, unless $f(a)$ has already been defined. Assume for $(i,j)\in D_2$ that $i\notin C_0$. By property 2, $\{(j,l)\colon l\notin\{i,j\}\}\subset D_0$. If we cannot choose one of these index-pairs as injective image for $a$, they have all been used to map $C_2$, which means $\{1,\dots,k\}-\{i,j\}\subseteq C_2$ and hence, by property 1, all index-pairs $(l,h)$ with $l,h\in\{1,\dots,k\}-\{i,j\}$ are in $D_0$ and so far not in the image of $f$, so we are free to choose one of them as image of $a$, unless $f(a)$ has already been defined.
\end{itemize} This injection proves that $|C_2|+|D_2|>0$ implies that $|C_2|+|D_2|<|C_0|+|D_0|$. This means that, regardless of the structure of the original graph $G$, the subsets $S_i$ and $S_{\{i,j\}}$ of $S$ either all contain exactly one vertex or $k+\frac 12 (k^2-k)=|C_1|+|D_1|+|C_0|+|D_0|+|C_2|+|D_2|>|C_1|+|D_1|+2(|C_2|+|D_2|)=|S|$. So if $|S|= k+\frac 12(k^2-k)$, the above partition into the sets $S_i,S_{\{i,j\}}$ satisfies $|S_i|=|S_{\{i,j\}}|=1$ for all $i,j$. A set with this property is always dominating for $G'$ but only minimal if each vertex has a private neighbour. For some $v_e\in S_{\{i,j\}}$ this implies that there is some private neighbour $v_{e'}\in V'$ with $e'=(u,v)\in V_i\times V_j$ that is not dominated by the (existing) vertex $u'$ in $S_i$ or the vertex $v'$ in $S_j$ (all vertices in $V_i$ and $V_j$ are already dominated by $\{u',v'\}\subset S$ and cannot be private neighbours for $v_e$). By construction of $E'$, this is only possible if $(u,v)=(u',v')\in E$. Since this is true for all index-pairs $(i,j)$, the vertices $\{v\colon v\in S_i, i=1,\dots,k\}$ form a clique in the original graph $G$.} \begin{pf}\proofofTheoremWhardness \end{pf} We do not know if \textsc{Upper Domination}\xspace belongs to W[1], but we can at least place it in W[2], the next level of the W hierarchy.\SV{ We obtain this result by describing a suitable multi-tape Turing machine that solves this problem.} \begin{proposition}\label{prop-Wtwo} \textsc{Upper Domination}\xspace belongs to W[2]. \end{proposition} \newcommand{\proofofPropWtwo}{(Sketch) Recall how \textsc{Minimum Domination}\xspace can be seen to belong to W[2] by providing an appropriate multi-tape Turing machine\LV{~\cite{Ces2003}}\SV{\footnote{Confer M.~Cesati. The {T}uring way to parameterised complexity. {\em Journal of Computer and System Sciences}, 67:654--685, 2003.}\ }.
First, the $k$ vertices that should belong to the dominating set are guessed, and then this guess is verified in $k$ further (deterministic) steps, using $n$ further tapes in parallel, where $n$ is the order of the input graph. We only need to make sure that the guessed set of vertices is minimal. To this end, we copy the guessed vertices $k$ times, leaving one out each time, and we also guess, for each of the $(k-1)$-element sets, one vertex that is not dominated by this set. Such a guess can be tested in the same way as sketched before using parallel access to the $n+1$ tapes. The whole computation takes $O(k^2)$ parallel steps of the Turing machine, which shows membership in W[2].} \begin{pf}\proofofPropWtwo \end{pf} Another interesting question is to consider the dual parameter $\ell$, that is, to decide the existence of an upper dominating set of size at least $n-\ell$. This is in fact the natural parameterisation for \textsc{Min Complement Upper Domination}\xspace. \SV{By some combinatorial arguments exploiting an $(F,I,P,O)$ decomposition implied by an upper dominating set, we can prove:} \begin{theorem}\label{cud_kernel} \textsc{Min Complement Upper Domination}\xspace is in FPT. More precisely, it admits a kernel of at most $\ell^2+\ell$ many vertices and at most $\ell^2$ many edges. \end{theorem} \newcommand{\proofofTheoremcudkernel}{Let $G=(V,E)$ be an arbitrary input graph with $|V|=n$. First consider a vertex $v\in V$ with $deg(v)>\ell$ and any minimal dominating set $D$ with some associated partition $(F,I,P,O)$: \begin{itemize} \item If $v\in I$, all neighbours of $v$ have to be in $O$, which means $|O|\geq|N(v)|>\ell$. \item If $v\in F$, exactly one neighbour $p$ of $v$ is in $P$ and $N[v]-\{p\}\subseteq F\cup O$, which gives $|O|+|P|=|O|+|F|\geq |N[v]-\{p\}|>\ell$. \item If $v\in P$, exactly one neighbour $p$ of $v$ is in $F$ and $N[v]-\{p\}\subseteq P\cup O$, so $|O|+|P|>\ell$.
\end{itemize} We always have either $v\in O$ or $|O|+|P|>\ell$, which means a ``no"-instance for \textsc{Min Complement Upper Domination}\xspace. Consider the graph $G'$ built from $G$ by deleting the vertex $v$ and all its edges. For any minimal dominating set $D$ for $G$ with partition $(F,I,P,O)$ such that $v\in O$, $D$ is also minimal for $G'$, since $pn(w,D)\supseteq\{w\}$ for all $w\in I$ and $|pn(u,D)\cap P|=1$ for all $u\in F$. Also, any set $D'\subset V-\{v\}$ which does not dominate $v$ has a cardinality of at most $|V-N[v]|<n-\ell$, so if $G'$ has a dominating set $D'$ of cardinality at least $n-\ell$, $N(v)\cap D'\not= \emptyset$, so $D'$ is also dominating for $G$. These observations allow us to successively reduce $(G,\ell)$ to $(G',\ell')$ with $\ell'=\ell -1$ as long as there are vertices $v$ with $deg(v)>\ell$, similar to Buss's rule for parameterised \textsc{Minimum Vertex Cover}\xspace. Any vertex that is isolated in the resulting graph $G'$ originally had neighbours only among the deleted vertices, which lie in $O$; it hence belongs to $I$ in any dominating set $D$ with partition $(F,I,P,O)$ and can be deleted from $G'$ without affecting the existence of an upper dominating set with $|P|+|O|\leq \ell'$. Let $(G',\ell')$ be the instance obtained after the reduction for all vertices of degree larger than $\ell$ and after deleting isolated vertices, with $G'=(V',E')$, and let $n'=|V'|$. If there is an upper dominating set $D$ for $G'$ with $|D|\geq n'-\ell'$, any associated partition $(F,I,P,O)$ for $D$ satisfies $|P|+|O|\leq \ell'$. Since $G'$ does not contain isolated vertices, every vertex in $I$ has at least one neighbour in $O$. Also, any vertex in $V'$, and hence especially any vertex in $O$, has degree at most $\ell'$, which means that $|I|\leq |N(O)|\leq \ell'|O|$.
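The two reduction rules just described can be sketched in executable form as follows (an illustrative Python sketch of ours, not part of the formal argument; graphs are adjacency dictionaries, and `None` signals a safe ``no"-instance):

```python
# Hypothetical sketch of the Buss-style reduction above: repeatedly delete
# a vertex of degree > ell (it must lie in O) and decrement ell, then
# delete isolated vertices (they lie in I).  Returns the kernel as a pair
# (adjacency dict, ell') or None for a safe "no"-instance.
def reduce_instance(adj, ell):
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed and ell >= 0:
        changed = False
        for v in list(adj):
            if len(adj[v]) > ell:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                ell -= 1
                changed = True
                break
    if ell < 0:
        return None
    for v in [w for w in adj if not adj[w]]:  # isolated vertices
        del adj[v]
    if len(adj) > ell * (ell + 1):            # kernel size bound from the proof
        return None
    return adj, ell
```

On a star $K_{1,5}$ with $\ell=2$, for instance, the centre is deleted first and the isolated leaves afterwards, leaving an empty kernel with $\ell'=1$.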
Overall: \begin{equation}\label{eq-cud_kernel} |V'|\leq |I|+|F|+|P|+|O|\leq (\ell'+1)|O|+2|P|\leq \max_{j=0}^{\ell'}\{j(\ell'+1)+2(\ell'-j)\}\,, \end{equation} and hence $|V'|\leq \ell'(\ell'+1)$, or $(G',\ell')$ and consequently $(G,\ell)$ is a ``no"-instance. Concerning the number of edges, we can derive a similar estimate. There are at most $\ell$ edges incident with each vertex in $O$. In addition, there is exactly one edge incident with each vertex in $P$ that has not yet been accounted for, and there could be $\ell-1$ further edges incident with each vertex in $F$ that have not yet been counted. This shows the claim. } \begin{comment} Any complement of an upper dominating set $D$ can be characterized by $\overline{D}=P\cup O$ for the associated partition $(F,I,P,O)$ of $D$. If $|P\cup O|=|\overline{D}|\leq \ell$, the properties of this partition imply $|F|\leq \ell$. So unless $|I|\geq n-2\ell$, $G$ is a trivial ``no"-instance. If $G$ contains a set of $r>\ell+2$ independent vertices which have the same neighbourhood (false twins), we can reduce the instance by deleting $r-\ell-2$ of them: In a ``yes"-instance with associated partition $(F,I,P,O)$, at least two of these vertices have to be in $D$. Because of the shared neighbourhood they cannot be in $F$, which places them in $I$ and their neighbourhood consequently in $O$. Dominating all of the twins is hence only possible, if all of them are in $I$. This setting and the resulting number of vertices placed in $O$ does not depend on $r$ which means that the set $P\cup O$ remains unchanged for the graph $G'$ constructed from $G$ by deleting $r-\ell-2$ of the $r$ twins in $G$. After this reduction, every neighbourhood of vertices in $O$ has at most $\ell+2$ neighbours in $I$ which means that either $|I|\leq 2^{|O|}(\ell+2)$ and consequently \begin{equation}\label{eq-cud_kernel} |V|\leq \max\{2(\ell-j)+2^j(\ell+2)\colon 0\leq j\leq\ell \}=2^\ell(\ell+2)\,, \end{equation} or $G$ is a ``no"-instance.
\end{comment} \begin{pf} \proofofTheoremcudkernel \end{pf} \smallskip \LV{We just derived a kernel result for \textsc{Min Complement Upper Domination}\xspace, in fact a kernel of quadratic size in terms of the number of vertices and edges.} This \SV{quadratic-size kernel} poses the natural question of whether we can do better. Next, the question is whether the brute-force search we could perform on the quadratic kernel is the best we can do to solve \textsc{Min Complement Upper Domination}\xspace in FPT time. Fortunately, this is not the case, as the following result shows. \SV{\begin{proposition}\label{prop-CoUDbranching} \textsc{Co-Upper Domination} can be solved in time $O^*(4.3077^\ell)$. \end{proposition} \begin{prf} (Sketch) This result can be shown by designing a branching algorithm that takes a graph $G=(V,E)$ and a parameter $\ell$ as input. As in Section~\ref{sec-FIPO}, to each graph $G=(V,E)$ and (partial) dominating set, we associate a partition ($F,I,P,O$). We consider $\kappa = \ell - (\frac{|F|}{2} + \frac{|P|}{2} + |O|)$ as a measure of the partition. Note that $\kappa \leq \ell$. At each branching step, our algorithm picks some vertices from $R$ (the set of yet undecided vertices). They are either added to the current dominating set $D := F \cup I$ or to $\overline{D} := P \cup O$. Each time a vertex is added to $P$ (resp. to $O$) the value of $\kappa$ decreases by $\frac{1}{2}$ (resp. by $1$). Also, whenever a vertex $x$ is added to $F$, the value of $\kappa$ decreases by $\frac{1}{2}$. Let us describe the two halting rules. First, whenever $\kappa$ becomes negative, we face a ``no''-instance. Then, if the set $R$ of undecided vertices is empty, we check whether the current dominating set $D$ is minimal and of size at least $n-\ell$, and if so, the instance is a ``yes''-instance. Then, we have a simple reduction rule: whenever the neighbourhood of an undecided vertex $v \in R$ is included in $\overline{D}$, we can safely add $v$ to $I$.
Finally, vertices are placed into $F$, $I$ or $\overline{D}$ according to three branching rules. The first one considers undecided vertices with a neighbour already in $F$ (in such a case, $v$ cannot belong to $I$). The second one considers undecided vertices with only one undecided neighbour (in such a case, several cases may be discarded as, e.g., they cannot be both in $I$ or both in $\overline{D}$). The third branching rule considers all the possibilities for an undecided vertex and, due to the previous branching rules, it can be assumed that each undecided vertex has at least two undecided neighbours (which is convenient since such vertices have to belong to $\overline{D}$ whenever an undecided neighbour is added to $I$). \end{prf}} \newcommand{\CoUDbranching}{ \begin{proposition} Given a graph $G=(V,E)$ and a parameter $\ell$, a call of Algorithm \texttt{ComputeCoUD} with parameters ($G$, $\ell$, $\emptyset$, $\emptyset$, $\emptyset$, $\ell$) solves \textsc{Co-Upper Domination} in time $O^*(4.3077^\ell)$. \end{proposition} \begin{algorithm}[h] \caption{\label{algo-StableMax}\texttt{ComputeCoUD($G$, $\ell$, $F$, $I$, $\overline{D}$, $\kappa$)}} \DontPrintSemicolon \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\footnotesize \Input{\textit{a graph $G=(V,E)$, parameter $\ell \in \mathbb{N}$, three disjoint sets $F,I,\overline{D} \subseteq V$ and $\kappa \leq \ell$.}} \Output{\textit{``yes'' if $\Gamma(G) \geq |V| - \ell$; ``no'' otherwise.}} } \smallskip Let $R \gets V \setminus (F \cup I \cup \overline{D})$\; \lIf(\hfill {(H1)}){$\kappa < 0$} { \Return ``no'' } \If(\hfill {(H2)}){$R$ is empty} { \If{$F \cup I$ is a minimal dominating set of $G$ and $|F \cup I| \geq n-\ell$} { \Return ``yes''\; } \lElse { \Return ``no''\; } } \If(\hfill {(R1)}){there is a vertex $v \in R$ s.t. $N(v) \subseteq \overline{D}$} { \Return \texttt{ComputeCoUD($G$, $\ell$, $F$, $I \cup \{v\}$, $\overline{D}$, $\kappa$)}\; } \If(\hfill {(B1)}){there is a vertex $v \in R$ s.t.
$|N(v) \cap F| \geq 1$} { \Return \texttt{ComputeCoUD($G$, $\ell$, $F \cup \{v\}$, $I$, $\overline{D}$, $\kappa-\frac{1}{2}$)} $\lor$\linebreak[2] \texttt{ComputeCoUD($G$, $\ell$, $F$, $I$, $\overline{D} \cup \{v\}$, $\kappa-\frac{1}{2}$)}\; } \If(\hfill {(B2)}){there is a vertex $v \in R$ s.t. $|N(v) \cap R| = 1$} { Let $u$ be the unique neighbour of $v$ in $R$\; \Return \texttt{ComputeCoUD($G$, $\ell$, $F \cup \{u,v\}$, $I$, $\overline{D}$, $\kappa-1$)} $\lor$ \texttt{ComputeCoUD($G$, $\ell$, $F \cup \{u\}$, $I$, $\overline{D} \cup \{v\}$, $\kappa-1$)} $\lor$\linebreak[2] \texttt{ComputeCoUD($G$, $\ell$, $F$, $I \cup \{v\}$, $\overline{D} \cup \{u\}$, $\kappa-1$)}\; } \Else(\hfill {(B3)}) { Let $v$ be a vertex of $R$\; \Return \texttt{ComputeCoUD($G$, $\ell$, $F$, $I \cup \{v\}$, $\overline{D} \cup N(v)$, $\kappa-2$)} $\lor$\linebreak[2] \texttt{ComputeCoUD($G$, $\ell$, $F \cup \{v\}$, $I$, $\overline{D}$, $\kappa-\frac{1}{2}$)} $\lor$\linebreak[2] \texttt{ComputeCoUD($G$, $\ell$, $F$, $I$, $\overline{D} \cup \{v\}$, $\kappa-\frac{1}{2}$)}\; } \end{algorithm} \begin{prf} Algorithm \texttt{ComputeCoUD} is a branching algorithm, with halting rules (H1) and (H2), reduction rule (R1), and three branching rules (B1)-(B3). We denote by $G=(V,E)$ the input graph and by $\ell$ the parameter. At each call, the set of vertices $V$ is partitioned into four sets: $F$, $I$, $\overline{D}$ and $R$. The set of ``remaining'' vertices $R$ is equal to $V \setminus (F \cup I \cup \overline{D})$, and thus can be obtained from $G$ and the three former sets. At each recursive call, the algorithm picks some vertices from $R$. They are either added to the current dominating set $D := F \cup I$, or to the set $\overline{D}$ to indicate that they do not belong to any extension of the current dominating set. The sets $F$ and $I$ are as previously described (\emph{i.e.}\, if we denote by $D$ the dominating set we are looking for, $I:=\{v\in D\colon v\in pn(v,D)\}$ and $F:=D-I$). 
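For concreteness, the recursion can be transcribed almost literally into executable form (a hypothetical sketch of ours, not part of the paper; graphs are adjacency dictionaries mapping each vertex to its set of neighbours, and `is_minimal_dominating` is a brute-force helper we introduce only to check rule (H2)):

```python
# Hypothetical transcription of ComputeCoUD, for illustration only.
def is_minimal_dominating(adj, D):
    V = set(adj)
    def dominates(S):
        return V <= (S | {u for v in S for u in adj[v]})
    return dominates(D) and all(not dominates(D - {v}) for v in D)

def compute_co_ud(adj, ell, F=frozenset(), I=frozenset(),
                  Dbar=frozenset(), kappa=None):
    kappa = ell if kappa is None else kappa
    if kappa < 0:                                            # (H1)
        return False
    R = sorted(set(adj) - F - I - Dbar)
    if not R:                                                # (H2)
        D = F | I
        return is_minimal_dominating(adj, D) and len(D) >= len(adj) - ell
    for v in R:                                              # (R1)
        if adj[v] <= Dbar:
            return compute_co_ud(adj, ell, F, I | {v}, Dbar, kappa)
    for v in R:                                              # (B1)
        if adj[v] & F:
            return (compute_co_ud(adj, ell, F | {v}, I, Dbar, kappa - 0.5) or
                    compute_co_ud(adj, ell, F, I, Dbar | {v}, kappa - 0.5))
    for v in R:                                              # (B2)
        Rn = adj[v] & set(R)
        if len(Rn) == 1:
            (u,) = Rn
            return (compute_co_ud(adj, ell, F | {u, v}, I, Dbar, kappa - 1) or
                    compute_co_ud(adj, ell, F | {u}, I, Dbar | {v}, kappa - 1) or
                    compute_co_ud(adj, ell, F, I | {v}, Dbar | {u}, kappa - 1))
    v = R[0]                                                 # (B3)
    return (compute_co_ud(adj, ell, F, I | {v}, Dbar | adj[v], kappa - 2) or
            compute_co_ud(adj, ell, F | {v}, I, Dbar, kappa - 0.5) or
            compute_co_ud(adj, ell, F, I, Dbar | {v}, kappa - 0.5))
```

On the path $P_4$, for instance, `compute_co_ud(adj, 2)` answers whether $\Gamma(P_4)\geq 2$.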
\medskip Note that parameter $\kappa$ corresponds to our ``budget'', which is initially set to $\kappa := \ell$. Recall that any minimal dominating set of a graph $G=(V,E)$ can be associated with a partition ($F,I,P,O$) (see Section~\ref{sec-FIPO} for the definitions of the sets and for some properties). If we denote by $D$ a minimal dominating set of $G$ and by $\overline{D}$ the set $V \setminus D$, then by definition, $F, I$ is a partition of $D$ and $P,O$ is a partition of $\overline{D}$. Also, by definition of $F$ and $P$, it holds that $|F|=|P|$ and there is a perfect matching between vertices of $F$ and $P$. Since each vertex of $F$ will (finally) be matched with its private neighbour from $P$, we define our budget as $\kappa = \ell - (\frac{|F|}{2} + \frac{|P|}{2} + |O|)$. One can observe that if $D$ is a minimal dominating set of size at least $n-\ell$ then $\kappa \geq 0$. Conversely, if $\kappa < 0$ then any dominating set $D$ such that $F \cup I \subseteq D$ is of size smaller than $n - \ell$. This shows the correctness of \textbf{(H1)}. We now consider the remaining rules of the algorithm. Note that by the choice of $\kappa$, each time a vertex $x$ is added to $\overline{D}$, the value of $\kappa$ decreases by $\frac{1}{2}$ (or by $1$ if we can argue that $x$ is not matched with a vertex of $F$ and thus belongs to $O$). Also, whenever a vertex $x$ is added to $F$, the value of $\kappa$ decreases by $\frac{1}{2}$. \begin{description} \item[(H2)] If $R$ is empty, then all vertices have been decided: they are either in $D:=F\cup I$ or in $\overline{D}$. It remains to check whether $D$ is a minimal dominating set of size at least $n-\ell$. \item[(R1)] All neighbours (if any) of $v$ are in $\overline{D}$ and thus $v$ has to be in $I \cup F$. As $v$ will also belong to $pn(v,D)$, we can safely add $v$ to $I$. Observe also that this reduction rule does not change our budget.
\item[(B1)] Observe that if $v$ has a neighbour in $F$, then $v$ cannot belong to $I$. When a vertex $v$ is added to $F$, the budget is reduced by at least $\frac{1}{2}$; when $v$ is added to $\overline{D}$, the budget is reduced by $\frac{1}{2}$ as well. So (B1) gives a branching vector of $(\frac{1}{2}, \frac{1}{2})$. \item[(B2)] If (R1) and (B1) do not apply and $N(v)\cap R=\{u\}$, then the vertex $v$ has to either dominate itself or be dominated by $u$. Every vertex in $F$ has a neighbour in $F$, which in this case means that $v\in F$ implies $u\in F$ (first branch). Moreover, the budget is then reduced by at least $2 \cdot \frac{1}{2}$. If $v$ is put in $I$, $u$ has to go to $\bar D$ (third branch). Thus $u$ cannot be a private neighbour of some $F$-vertex, and the budget decreases by at least $1$ ($u \in O$). If $v$ does not dominate itself, $u$ has to be in $F\cup I$. In this last case it suffices to consider the less restrictive case $u\in F$, as $v$ can be chosen as the private neighbour for $u$ (second branch). If $u$ is indeed in $I$ for a minimal dominating set which extends the current $I\cup F$, there is a branch which puts all the remaining neighbours of $u$ in $\bar D$. Observe that we only dismiss branches with halting rule (H2) where we check if $F\cup I$ is a minimal dominating set; we do not require the chosen partition to be correct. As for the counting in halting rule (H1): whether we count $u\in F$ and $v\in P$ (recall that $P \subseteq \overline{D}$) each with $\frac{1}{2}$ or count $v\in O$ (recall that $O \subseteq \overline{D}$) with $1$ does not make a difference for $\kappa$. So the budget decreases by at least $1$. Altogether (B2) gives a branching vector of $(1, 1, 1)$. \item[(B3)] The correctness of (B3) is easy as all possibilities are explored for vertex $v$. Observe that by (R1) and (B2), vertex $v$ has at least two neighbours in $R$.
When $v$ is added to $I$, these two neighbours are added to $\overline{D}$ (and cannot be the private neighbours of some $F$-vertices). Thus we reduce the budget by at least $2$. When $v$ is added to $F$, the budget decreases by at least $\frac{1}{2}$. When $v$ is added to $\overline{D}$, we reduce the budget by at least $\frac{1}{2}$. Thus (B3) gives a branching vector of $(2, \frac{1}{2}, \frac{1}{2})$. However, we can observe that the second branching rule (\emph{i.e.,} when $v$ is added to $F$) implies a subsequent application of (B1) (or rule (H1) would stop the recursion). Thus the branching vector can be refined to $(2, 1, 1, \frac{1}{2})$. \end{description} Taking the worst case over all branching vectors establishes the claimed running time. \end{prf} } \LV{\CoUDbranching{}} Of course, the question remains to what extent the previously presented parameterised algorithm can be improved upon. \subsection{Approximation} Also in this case, the two problems \textsc{Upper Domination}\xspace and \textsc{Min Complement Upper Domination}\xspace behave quite differently: the first one is hard, while the second one is relatively easy to approximate. Let us first show that \textsc{Upper Domination}\xspace is hard to approximate. We will establish this in two steps: first, we show that a related natural problem, \textsc{Maximum Minimal Hitting Set}\xspace, is hard to approximate, and then we show that this problem is essentially equivalent to \textsc{Upper Domination}\xspace. The \textsc{Maximum Minimal Hitting Set}\xspace problem is the following one: we are given a hypergraph, that is, a base set $V$ and a collection $F$ of subsets of $V$.
We wish to find a set $H\subseteq V$ such that: \begin{enumerate} \item For all $e\in F$ we have $e\cap H\neq \emptyset$ (i.e., $H$ is a hitting set). \item For all $v\in H$ there exists $e\in F$ such that $e\cap H=\{v\}$ (i.e., $H$ is minimal). \item $H$ is as large as possible. \end{enumerate} It is not hard to see that this problem generalises \textsc{Upper Domination}\xspace: given a graph $G=(V,E)$, we can produce a hypergraph by keeping the same set of vertices and creating a hyperedge for each closed neighbourhood $N[v]$ of $G$. An upper dominating set of the original graph is now exactly a minimal hitting set of the constructed hypergraph. We will also show that \textsc{Maximum Minimal Hitting Set}\xspace can be reduced to \textsc{Upper Domination}\xspace. Let us note that \textsc{Maximum Minimal Hitting Set}\xspace, as defined here, also generalises {\sc Maximum Minimal Vertex Cover}\SV{, which corresponds to instances where the input hypergraph is actually a graph}. We recall that for this problem there exists an $n^{1/2}$-approximation algorithm, while it is known to be $n^{1/2-\varepsilon}$-inapproximable \cite{BorCroPas2013}. Here, we generalise this result to arbitrary hypergraphs, taking into account the sizes of the hyperedges allowed. \SV{The proof of the next theorem is a reduction from \textsc{Maximum Independent Set}\xspace, compare~\cite{Has99}.} \begin{theorem}\label{mmhs} For all $\varepsilon>0, d\ge 2$, if there exists a polynomial-time approximation algorithm for \textsc{Maximum Minimal Hitting Set}\xspace which on hypergraphs $G=(V,F)$ where hyperedges have size at most $d$ has approximation ratio $n^{\frac{d-1}{d}-\varepsilon}$, where $|V|=n$, then P=NP. This \SV{is still true for}\LV{statement still holds if we restrict the problem to} hypergraphs where $|F|=O(|V|)$. \end{theorem} \newcommand{\proofofTheoremmmhs}{Fix some constant hyperedge size $d$.
We will present a reduction from \textsc{Maximum Independent Set}\xspace, which is known to be, in a sense, completely inapproximable~\cite{Has99}. In particular, it is known that, for all $\varepsilon>0$, it is NP-hard to distinguish for an $n$-vertex graph $G$ whether $\alpha(G)>n^{1-\varepsilon}$ or $\alpha(G)<n^{\varepsilon}$. Take an instance $G=(V,E)$ of \textsc{Maximum Independent Set}\xspace. We begin by considering this graph as a hypergraph (with size-two hyperedges), to which we will add some more vertices and hyperedges. For every set $S\subseteq V$ such that $|S|=d-1$ which is an independent set in $G$, we add to our instance $n$ new vertices, call them $u_{S,i}$, $1\le i\le n$. Also, for each such vertex $u_{S,i}$ we add to our instance the hyperedges $S\cup\{u_{S,i}\}$, $1\le i\le n$. This completes the construction. It is not hard to see that the constructed hypergraph has hyperedges of size at most $d$, and its vertex and hyperedge sets are both of size $O(n^d)$. Let us analyse the approximability gap of this reduction. First, suppose that in the original graph we have $\alpha(G) > n^{1-\varepsilon}$. Then, there exists a minimal hitting set of the new instance with size at least $n^{d-O(\varepsilon)}$. To see this, consider a maximum independent set $I$ of $G$. We set $H$ to be $(V\setminus I) \cup \{ u_{S,i}\ |\ S\subseteq I, 1\le i\le n \}$. In words, we add to $H$ a minimum vertex cover of $G$, as well as the $u_{S,i}$ vertices whose neighbourhoods are contained in $I$. It is not hard to see that $H$ is a hitting set, because each of the size-$d$ hyperedges not hit by $V\setminus I$ is hit by a unique selected vertex $u_{S,i}$. Because of this, and since $V\setminus I$ is a minimal vertex cover of $G$, $H$ is also minimal. Finally, the size of $H$ follows from the fact that there exist ${\alpha(G) \choose d-1}$ sets $S$ for which $H$ contains all the vertices $u_{S,i}$.
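The construction of the enlarged hypergraph can be made concrete as follows (an illustrative sketch of ours; all names are our own, and the fresh vertices $u_{S,i}$ are encoded as tuples):

```python
from itertools import combinations

# Illustrative sketch of the hardness construction above: start from G
# viewed as a hypergraph with size-two hyperedges, and for every
# independent (d-1)-set S add n fresh vertices u_{S,i} together with the
# hyperedges S | {u_{S,i}}.
def mmhs_hardness_instance(vertices, edges, d):
    n = len(vertices)
    E = {frozenset(e) for e in edges}
    hyperedges = [frozenset(e) for e in E]
    new_vertices = []
    for S in combinations(vertices, d - 1):
        if any(frozenset(p) in E for p in combinations(S, 2)):
            continue  # S is not independent in G
        for i in range(n):
            u = ('u', S, i)
            new_vertices.append(u)
            hyperedges.append(frozenset(S) | {u})
    return list(vertices) + new_vertices, hyperedges
```

On a triangle with $d=3$, for example, no independent $2$-set exists, so nothing is added; with $d=2$ every vertex is an independent singleton and $n$ fresh vertices are attached to each.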
For the converse direction, we want to show that if $\alpha(G)<n^\varepsilon$ then any minimal hitting set of the new instance has size at most $n^{1+O(\varepsilon)}$. Consider a hitting set $H$ of the new instance. It must be the case that $H\cap V$ is a vertex cover of $G$, and therefore $V\setminus H$ is an independent set of $G$. Let $S\subset V$ be a set of vertices such that $S\cap H\neq \emptyset$. Then $u_{S,i}\not\in H$ for all $i$, because the (unique) hyperedge that contains $u_{S,i}$ also contains some other vertex of $H$, contradicting minimality. It follows that $H$ can only contain $u_{S,i}$ if $S\subseteq V\setminus H$. Because $V\setminus H$ is an independent set, it has size at most $n^\varepsilon$, meaning there are at most $n^{\varepsilon}\choose d-1$ sets $S$ such that $H$ may contain vertices $u_{S,i}$. Thus, the total size of $H$ cannot be more than $n^{1+O(\varepsilon)}$.} \begin{pf}\proofofTheoremmmhs \end{pf} \begin{corollary}\label{mmhs_apx} For any $\varepsilon>0$ \textsc{Maximum Minimal Hitting Set}\xspace is not $n^{1-\varepsilon}$-approximable, where $n$ is the number of vertices of the input hypergraph, unless P=NP. This \SV{is still true for}\LV{statement still holds if we restrict the problem to} hypergraphs where $|F|=O(|V|)$. \end{corollary} \newcommand{\proofofCormmhsapx}{Assume there were, for some $\varepsilon>0$, a factor-$n^{1-\varepsilon}$ approximation algorithm $A$ for \textsc{Maximum Minimal Hitting Set}\xspace. Then, choose $d$ such that $1/d\leq \varepsilon/2$ and hence $(d-1)/d\geq 1-\varepsilon/2$. Then, $A$ would be a factor-$n^{(d-1)/d-\varepsilon/2}$ approximation algorithm when restricted to hypergraphs with hyperedges of size at most $d$, contradicting Theorem~\ref{mmhs}.} \begin{pf}\proofofCormmhsapx \end{pf} \begin{theorem}\label{ud_apx} For any $\varepsilon>0$ \textsc{Upper Domination}\xspace is not $n^{1-\varepsilon}$-approximable, where $n$ is the number of vertices of the input graph, unless P=NP. 
\end{theorem} \SV{\begin{prf} (Sketch) We construct an $E$-reduction from \textsc{Maximum Minimal Hitting Set}\xspace. Given a hypergraph $G=(V,F)$ as an instance of \textsc{Maximum Minimal Hitting Set}\xspace, we define a graph $G'=(V',E')$ as an instance of \textsc{Upper Domination}\xspace as follows: $V'$ contains a vertex $v_i$ associated to any vertex $i$ from $V$, a vertex $u_{e}$ for any edge $e\in F$ and a new vertex $v$. $E'$ contains edges such that the sets $\{v_i\colon i\in V\}$ and $\{u_e\colon e\in F\}$ each induce a clique in $G'$. Moreover, $v$ is adjacent to every vertex $v_i \in V$, and $(v_i,u_{e})\in E'$ if and only if $i\in e$ in $G$. \end{prf}} \newcommand{\proofofTheoremudapx}{We construct an $E$-reduction from \textsc{Maximum Minimal Hitting Set}\xspace. Given a hypergraph $G=(V,F)$ as an instance of \textsc{Maximum Minimal Hitting Set}\xspace, we define a graph $G'=(V',E')$ as an instance of \textsc{Upper Domination}\xspace as follows: $V'$ contains a vertex $v_i$ associated to any vertex $i$ from $V$, a vertex $u_{e}$ for any edge $e\in F$ and a new vertex $v$. $E'$ contains edges such that the sets $\{v_i\colon i\in V\}$ and $\{u_e\colon e\in F\}$ each induce a clique in $G'$. Moreover, $v$ is adjacent to every vertex $v_i \in V$, and $(v_i,u_{e})\in E'$ if and only if $i\in e$ in $G$. First we show that given a solution $S$ that is a minimal hitting set in $G$, $S$ is also a minimal dominating set in $G'$. Indeed, if $S$ is a hitting set in $G$ then $S$ is a dominating set in $G'$. If $S$ is minimal, that is, any proper subset $S'\subset S$ is no longer a hitting set, then it is also the case that $S'$ is no longer a dominating set in $G'$. That implies that $\operatorname{opt}(G') \geq \operatorname{opt}(G)$. Consider now an upper dominating set $S$ for $G'$. To dominate the vertex $v$, $S$ has to contain at least one vertex $w\in V\cup \{v\}$. If $S$ contains some edge-vertex $u_{e}$, then the set $\{w,u_{e}\}$ is already dominating. If we want a solution of cardinality more than two, then $S\subseteq V$.
If $S\subseteq V$ is a minimal dominating set in $G'$, $S$ is also a minimal hitting set in $G$ since $S$ covers all hyperedges in $G$ if and only if it dominates all edge-vertices in $G'$. So starting with any minimal dominating set $S$ of $G'$ of cardinality larger than two, $S$ is also a minimal hitting set of $G$. The result now follows from Corollary \ref{mmhs_apx}. } \begin{pf}\proofofTheoremudapx \end{pf} \LV{Let us n}\SV{N}ote that, interestingly, the inapproximability bound given in Theorem \ref{mmhs} is tight, for every fixed $d$. To see this, consider the algorithm of the following theorem, which also generalises results on {\sc Maximum Minimal Vertex Cover}~\cite{BorCroPas2013}. \LV{To simplify presentation, we assume that we are given an $n$-vertex hypergraph where every vertex appears in at least one hyperedge.} \begin{theorem}\label{mmhs_alg} For all $d\ge 1$, there exists a polynomial-time algorithm which, given a hypergraph $G=(V,F)$ such that all hyperedges have size at most $d$, produces a minimal hitting set $H$ of $G$ with size $\Omega(n^{1/d})$. This shows an $O(n^{\frac{d-1}{d}})$-approximation for \textsc{Maximum Minimal Hitting Set}\xspace on such hypergraphs. \end{theorem} \newcommand{\proofofTheoremmmhsalg}{ The proof is by induction on $d$. For $d=1$, if every vertex appears in a hyperedge, any hitting set must contain all vertices, so we are done. For $d>1$, we do the following: first, greedily construct a maximal set $M\subseteq F$ of pairwise disjoint hyperedges.
If $|M|\geq n^{1/d}$ then we know that any hitting set of $G$ must contain at least $n^{1/d}$ vertices. So, we simply produce an arbitrary feasible solution by starting with $V$ and deleting redundant vertices until our hitting set becomes minimal. Suppose then that $|M|<n^{1/d}$. Let $H$ be the set of all vertices contained in $M$, so $|H|<d|M| = O(n^{1/d})$. Clearly, $H$ is a hitting set of $G$ (otherwise $M$ is not maximal), but it is not necessarily minimal. Let us now consider all sets $S\subseteq H$ with the following two properties: $|S|\le d-1$ and all edges $e\in F$ have an element in $V\setminus S$ (in other words, $V\setminus S$ is a hitting set of $G$). For such a set $S$ and a vertex $u\in V\setminus H$ we will say that $u$ is seen by $S$, and write $u\in B(S)$, if there exists $e\in F$ such that $e\cap H=S$ and $u\in e$. Intuitively, what we are trying to do here is find a set $S$ that we will \emph{not} place in our hitting set. Vertices seen by $S$ are then vertices which are more likely to be placeable in a minimal hitting set. Let $B_i$ be the union of all $B(S)$ for sets $S$ with $|S|=i$. Since every vertex appears in a hyperedge, all vertices of $V\setminus H$ are seen by a set $S$, and therefore belong in some $B_i$. Therefore, the union of all $B_i$ has size at least $|V\setminus H| \ge n-n^{1/d} = \Omega(n)$. The largest of these sets, then, has size at least $\frac{n-n^{1/d}}{d} = \Omega(n)$. Consider then the largest such set, which corresponds to sets $S$ with size $s$. There are at most ${|H|\choose s} = O(n^{s/d})$ such sets $S$. Since all together they see $\Omega(n)$ vertices of $V\setminus H$, one of them must see at least $\Omega(n^{1-\frac{s}{d}})$ vertices. Call this set $S_m$. Consider now the hypergraph induced by $S_m\cup B(S_m)$. If we delete the vertices of $S_m$ from this hypergraph, we get a hypergraph where every hyperedge has at most $d-s$ vertices. 
By induction, we can in polynomial time find a minimal hitting set of this hypergraph with at least $\Omega((n^{1-\frac{s}{d}})^{\frac{1}{d-s}})=\Omega(n^{1/d})$ vertices. Call this set $H'$. We are now ready to build our solution. Start with the set $V\setminus (S_m\cup B(S_m))$ and add to it the vertices of $H'$. First, this is a hitting set, because any hyperedge not hit by $V\setminus (S_m\cup B(S_m))$ is induced by $S_m \cup B(S_m)$, and $H'$ hits all such hyperedges. We now proceed to make this set minimal by arbitrarily deleting redundant vertices. The crucial point here is that no vertex of $H'$ is deleted, since this would contradict the minimality of $H'$ as a hitting set of the hypergraph induced by $S_m\cup B(S_m)$. Thus, the resulting solution has size $\Omega(n^{1/d})$.} \begin{pf}\proofofTheoremmmhsalg \end{pf} \begin{theorem}\label{cud_apx} \textsc{Min Complement Upper Domination}\xspace is 4-approximable in polynomial time\SV{, 3-approximable with a running time in $O^*(1.0883^{\tau(G)})$} and 2-approximable in time $O^*(1.2738^{\tau(G)})$ or $O^*(1.2132^n)$. \end{theorem} \SV{\begin{prf} First, find a vertex cover $V'$ in $G$ using any 2-approximation algorithm, and define $S'=V\setminus V'$. Let $S$ be a maximal independent set containing $S'$. $V\setminus S$ is a vertex cover of size $|V\!\setminus\! S| \leq |V'|\leq 2 \tau(G)\leq 4 (n-\Gamma(G))$, by eq.~\ref{complupperdom}. Moreover, $S$ is a maximal independent set and hence a minimal dominating set, which makes $V\!\setminus\! S$ a feasible solution for \textsc{Min Complement Upper Domination}\xspace with $|V\!\setminus\! S|\leq 4(n-\Gamma(G))$. The claimed running time for the factor-2 approximation stems from the best parameterised and exact algorithms for \textsc{Minimum Vertex Cover}\xspace by \cite{CheKanXia2010} and \cite{KneLanRos2009}, the factor-3 approximation from the parameterised approximation in \cite{BraFer2013}.
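This approximation is simple enough to be stated in executable form (an illustrative sketch of ours, with a greedy maximal matching standing in for the 2-approximate vertex cover; not the implementation of the paper):

```python
# Hypothetical sketch of the factor-4 approximation: a greedy maximal
# matching yields a 2-approximate vertex cover; its complement is extended
# to a maximal independent set S, and V \ S is returned.
def co_ud_4approx(adj):
    V = set(adj)
    seen = set()
    for u in sorted(adj):                      # greedy maximal matching
        if u in seen:
            continue
        for w in sorted(adj[u]):
            if w not in seen:
                seen |= {u, w}                 # matched pair -> vertex cover
                break
    S = V - seen                               # independent (complement of a cover)
    for u in sorted(V):                        # extend S to a maximal independent set
        if u not in S and not (adj[u] & S):
            S.add(u)
    return V - S                               # complement of a minimal dominating set
```

The returned set is the complement of a maximal independent set, hence the complement of a minimal dominating set, of size at most $4(n-\Gamma(G))$.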
\end{prf}} \begin{pf}Given a graph $G$ on $n$ vertices, we first find a vertex cover $V'$ in $G$ using any 2-approximation algorithm, and define $S'=V\setminus V'$. The set $S'$ is an independent set; let $S$ be a maximal independent set containing $S'$. The set $V\setminus S$ is a vertex cover of size $|V\setminus S| \leq |V'|\leq 2 \tau(G)\leq 4 (n-\Gamma(G))$, by eq.~\ref{complupperdom}. Moreover, $V\setminus S$ is the complement of a maximal independent set which also makes it the complement of a minimal dominating set, so overall a feasible solution for \textsc{Min Complement Upper Domination}\xspace with $|V\setminus S|\leq 4(n-\Gamma(G))$. The claimed running time for the factor-2 approximation stems from the best parameterised and exact algorithms for \textsc{Minimum Vertex Cover}\xspace by \cite{CheKanXia2010} and \cite{KneLanRos2009}. \end{pf} \LV{ We could also use some results from parameterised approximation. For instance, by \cite{BraFer2013}, we can conclude: \begin{corollary}\textsc{Min Complement Upper Domination}\xspace is 3-approximable with a running time of $O^*(1.0883^{\tau(G)})$. \end{corollary} } With the results shown in the next section, we can conclude that \textsc{Min Complement Upper Domination}\xspace is also APX-complete. \section{Graphs of bounded degree} For these classes of graphs, we also have some new results on classical complexity. \subsection{Classical complexity and exact algorithms} \textsc{Upper Domination}\xspace has been shown to be NP-hard on planar graphs of maximum degree six in \cite{AboHLMR2014}. Here, we are going to strengthen this result in two directions: for cubic graphs and for planar subcubic graphs. \begin{theorem} \label{UDcubic} \textsc{Upper Domination}\xspace is NP-hard on cubic graphs. \end{theorem} \SV{\begin{proof} (Sketch) We present a reduction from \textsc{Maximum Independent Set}\xspace on cubic graphs.
Let $G=(V,E)$ be the cubic input graph.\\ \begin{minipage}{.6\textwidth} Build $G'$ from $G$ by replacing every $(u,v)\in E$ by a construction introducing six new vertices, as shown on the right. Then $G$ contains an independent set of cardinality $k$ if and only if $G'$ contains an upper dominating set of cardinality $k+3|E|$. \end{minipage} \begin{minipage}{.4\textwidth} \begin{center} \includegraphics[scale=0.5]{cubic_plain2} \end{center}\hfill\framebox{} \end{minipage} \end{proof}} \newcommand{\proofofTheoremUDcubic}{We present a reduction from \textsc{Maximum Independent Set}\xspace on cubic graphs. Let $G=(V,E)$ be the cubic input graph for \textsc{Maximum Independent Set}\xspace. Construct a cubic graph $G'$ from $G$ by replacing every edge $(u,v)\in E$ by the following construction introducing six new vertices for each edge: \begin{center} \includegraphics[scale=0.4]{cubic_names} \end{center} If $IS$ is an independent set for $G$, the corresponding vertex-set in $G'$ can be extended to an upper dominating set $S$ of cardinality at least $|IS|+3|E|$ in the following way: For every edge $(u,v)$ with $v\notin IS$ add $\{v_u,u_v^1,u_v^2\}$ to $S$: \begin{center} \includegraphics[scale=0.4]{cubic_3b} \end{center} Since $IS$ is independent, this procedure chooses three vertices for each edge-gadget in $G'$ and creates an independent set $S$ of cardinality $|IS|+3|E|$. If some vertex were not dominated by $S$, we could add it to $S$ without violating independence, which only increases the cardinality of $S$. Finally, we arrive at a maximal independent set, and consequently a minimal dominating set, of cardinality at least $|IS|+3|E|$. Let $S$ be an upper dominating set for $G'$. Minimality yields that for every edge $(u,v)\in E$ at most three of the vertices added for this edge can be in $S$. Consider an original edge $(u,v)\in E$ such that $u,v\in S$.
From the vertices added for this edge, only two can be in $S$: Observe that any two out of $\{u_v^1,u_v^2,v_u^1,v_u^2\}$ already dominate the whole subgraph. The set $\{u_v,v_u\}$ is also already dominating, so there is no minimal dominating set of cardinality three in this case. Consider the set $S'=S\cap V$ as a candidate independent set for the original graph $G$. If there are two vertices $u,v\in S'$ such that $(u,v)\in E$, the corresponding edge-gadget only adds two vertices to $S$. By successively deleting vertices from $S'$ as long as there is a conflict with respect to independence, we arrive at an independent set of cardinality at least $|S|-3|E|$.} \begin{pf}\proofofTheoremUDcubic \end{pf} We complement this result by some results on exact algorithms. Let us recall one important result on the pathwidth of subcubic graphs from~\cite{FomHoi2006}. \begin{theorem}Let $\epsilon>0$ be given. For any subcubic graph $G$ of order $n>n_\epsilon$, a path decomposition proving \LV{that }$pw(G)\leq n/6+\epsilon$ is computable in polynomial time. \end{theorem} \LV{This result immediately gives an $O^*(1.2010^n)$-algorithm for solving \textsc{Minimum Domination}\xspace on subcubic graphs. We will take a similar route}\SV{We will use this result} to prove moderately exponential-time algorithms for \textsc{Upper Domination}\xspace on subcubic graphs. \begin{proposition}\label{prop-UDpw} \textsc{Upper Domination}\xspace on graphs of pathwidth $p$ can be solved in time $O^*(7^p)$, given a corresponding path decomposition.
\end{proposition} \newcommand{\proofofPropUDpw}{We are considering all partitions of each bag of the path decomposition into 6 sets: $F$, $F^*$, $I$, $P$, $O$, $O^*$, where \begin{itemize} \item $F$ is the set of vertices that belong to the upper dominating set and have already been matched to a private neighbour; \item $F^*$ is the set of vertices that belong to the upper dominating set and still need to be matched to a private neighbour; \item $I$ is the set of vertices that belong to the upper dominating set and are independent in the graph induced by the upper dominating set; \item $P$ is the set of private neighbours that are already matched to vertices in the upper dominating set; \item $O$ is the set of vertices that belong neither to the upper dominating set nor to the set of private neighbours but are already dominated; \item $O^*$ is the set of vertices not belonging to the upper dominating set that have not been dominated yet. \end{itemize} (Sets within the partition can also be empty.) For each such partition, we determine the largest minimal dominating set in the situation described by the partition, assuming optimal settings in the part of the graph already forgotten. We can assume that we are given a nice path decomposition. So, we only have to describe the table initialisation (the situation in a bag containing only one vertex) and the table updates necessary when we introduce a new vertex into a bag and when we finally forget a vertex. \begin{description} \item[initialisation] We have six cases to consider: \begin{itemize} \item $T[\{v\},\emptyset,\emptyset,\emptyset,\emptyset,\emptyset]\gets -1$, \item $T[\emptyset,\{v\},\emptyset,\emptyset,\emptyset,\emptyset]\gets 1$, \item $T[\emptyset,\emptyset,\{v\},\emptyset,\emptyset,\emptyset]\gets 1$, \item $T[\emptyset,\emptyset,\emptyset,\{v\},\emptyset,\emptyset]\gets -1$, \item $T[\emptyset,\emptyset,\emptyset,\emptyset,\{v\},\emptyset]\gets -1$,
\item $T[\emptyset,\emptyset,\emptyset,\emptyset,\emptyset,\{v\}]\gets 1$. \end{itemize} Here, $-1$ signals the error cases when we try to introduce already dominated vertices. \item[forget] Assume that we want to update table $T$ to table $T'$ for the partition $F$, $F^*$, $I$, $P$, $O$, $O^*$, eliminating vertex $v$: \begin{itemize} \item $T'[F\setminus\{v\},F^*,I,P,O,O^*]\gets T[F,F^*,I,P,O,O^*]$, \item $T'[F,F^*\setminus\{v\},I,P,O,O^*]\gets -1$, \item $T'[F,F^*,I\setminus\{v\},P,O,O^*]\gets T[F,F^*,I,P,O,O^*]$, \item $T'[F,F^*,I,P\setminus\{v\},O,O^*]\gets T[F,F^*,I,P,O,O^*]$, \item $T'[F,F^*,I,P,O\setminus\{v\},O^*]\gets T[F,F^*,I,P,O,O^*]$, \item $T'[F,F^*,I,P,O,O^*\setminus\{v\}]\gets -1$. \end{itemize} Clearly, it is not feasible to eliminate vertices whose promises have not yet been fulfilled. \item[introduce] We are now introducing a new vertex $v$ into the bag. The neighbourhood $N$ refers to the situation in the new bag, i.e., to the corresponding induced graph. $T'$ is the new table and $T$ the old one. 
\begin{itemize} \item $T'[F\cup\{v\},F^*,I,P,O,O^*]\gets -1$ if $N(v)\cap (I\cup O^*)\neq \emptyset$ or $|N(v)\cap P|\not=1$;\\ $T'[F\cup\{v\},F^*,I,P,O,O^*]\gets \max\{ T[F,F^*,I,P\setminus\{x\},O\setminus X,O^*\cup X\cup\{x\}]: x\in N(v), X\subseteq (N(v)\setminus\{x\})\cap O\}+1$ otherwise;\\ this means that exactly one neighbour $x$ of $v$ that was previously labelled to be dominated in the future is selected as a private neighbour of $v$; all other neighbours of $v$ are labelled dominated; \item $T'[F,F^*\cup\{v\},I,P,O,O^*]\gets -1$ if $N(v)\cap (I\cup P \cup O^*)\neq \emptyset$;\\ $T'[F,F^*\cup\{v\},I,P,O,O^*]\gets\max\{ T[F,F^*,I,P,O\setminus X,O^*\cup X]: X\subseteq N(v)\cap O\}+1$ otherwise;\\ in contrast to the previous situation, no private neighbour has been selected; \item $T'[F,F^*,I\cup\{v\},P,O,O^*]\gets -1$ if $N(v)\cap (I\cup F\cup F^*\cup P\cup O^*)\neq \emptyset$;\\ $T'[F,F^*,I\cup\{v\},P,O,O^*]\gets\max\{ T[F,F^*,I,P,O\setminus X,O^*\cup X]: X\subseteq N(v)\cap O\}+1$ otherwise; \item $T'[F,F^*,I,P\cup\{v\},O,O^*]\gets -1$ if $N(v)\cap I\neq \emptyset$ or $|N(v)\cap F|\neq 1$;\\ $T'[F,F^*,I,P\cup\{v\},O,O^*]\gets T[F\setminus N(v),F^*\cup (N(v)\cap F),I,P,O,O^*]$ otherwise;\\ this means that exactly one neighbour $x$ of $v$ that was previously labelled as dominating but looking for a private neighbour in the future is selected as pairing up with $v$; all other neighbours of $v$ are not in the dominating set; \item $T'[F,F^*,I,P,O\cup\{v\},O^*]\gets T[F,F^*,I,P,O,O^*]$ if $N(v)\cap (F\cup F^*\cup I)\neq\emptyset $ and $T'[F,F^*,I,P,O\cup\{v\},O^*]\gets -1$ otherwise; \item $T'[F,F^*,I,P,O,O^*\cup\{v\}]\gets T[F,F^*,I,P,O,O^*]$ unless $N(v)\cap (F\cup F^*\cup I)\neq\emptyset $; in that case, $T'[F,F^*,I,P,O,O^*\cup\{v\}]\gets -1$. \end{itemize} \end{description} The formal induction proof showing the correctness of the algorithm is an easy standard exercise. 
As to the running time, observe that we cycle only in one case potentially through all subsets of $O$, so that the running time follows by applying the binomial formula: $$\sum_{i=0}^{p}\binom{p}{i}5^i 2^{p-i} = 7^p\,.$$} \begin{pf}\proofofPropUDpw \end{pf} \LV{Observe that t}\SV{T}he upper bound on the running time can be improved for graphs of a certain maximum degree to $O^*(6^p)$, so that we can conclude: \begin{corollary}\label{ud_ex} \textsc{Upper Domination}\xspace on subcubic graphs of order $n$ can be solved in time $O^*(1.3481^n)$, using the same amount of space. \end{corollary} \subsection{Parameterised complexity} In contrast to the case of general graphs, \textsc{Upper Domination}\xspace turns out to be easy (in the sense of parameterised complexity) for graphs of bounded degree. \begin{proposition}\label{ud_fpt} Fix $\Delta>2$. \textsc{Upper Domination}\xspace\SV{,}\LV{ is in FPT when} restricted to graphs of maximum degree $\Delta$\SV{,}\LV{. More precisely, the problem} can be solved in time $O^*((\Delta+1)^{2k})$. \end{proposition} \LV{The statement of the proposition is of course also true for $\Delta\in\{0,1,2\}$, but then the problem is (trivially) solvable in polynomial time. In the following, we give an argument based on branching.} \begin{prf} Consider the simple branching algorithm that branches on all at most $\Delta+1$ possibilities to dominate a yet undominated vertex. Once we have fixed a new vertex in the dominating set, we branch again (on at most $\Delta+1$ possibilities) to determine the private neighbour of the new vertex in the dominating set. Assuming that we are only looking for sets of size $k$, we can find a yes instance in each branch where we needed to put $k$ vertices in the dominating set (so far); if that set is not yet dominating, we can turn it into a minimal dominating set by a greedy approach, respecting previous choices.
\LV{The overall running time of the branching algorithm is hence $O^*((\Delta+1)^{2k})$.} \end{prf} The astute reader might wonder why we have to do this unusual 2-stage branching, but recall Theorem~\ref{thm-MDSE-hardness}, which shows that a given set of at most $k$ vertices cannot easily be extended to a minimal dominating set containing it. \LV{\todo[inline]{HF: It might be interesting to study MDSE, parameterised by the size of the input vertex subset $S$, even on degree-bounded graphs. A nice solution would immediately improve on the branching that we presented.}} Brooks' Theorem yields the following result. \begin{proposition}\label{prop-UDkernel} Fix $\Delta>2$. \textsc{Upper Domination}\xspace has a problem kernel with at most $\Delta k$ many vertices. \end{proposition} \newcommand{\proofofPropUDkernel}{First, we can assume that the input graph $G$ is connected, as otherwise we can apply the following argument separately on each connected component. Assume $G$ is a cycle or a clique. Then, the problem \textsc{Upper Domination}\xspace can be optimally solved in polynomial time, i.e., we can produce a kernel as small as we want. Otherwise, Brooks' Theorem yields a polynomial-time algorithm that produces a proper colouring of $G$ with (at most) $\Delta$ many colours. Extend the biggest colour class to a maximal independent set $I$ of $G$. As $I$ is maximal, it is also a minimal dominating set. So, there is a minimal dominating set $I$ of size at least $n/\Delta$, where $n$ is the order of $G$, and hence $\Gamma(G)\geq n/\Delta$. If $k<n/\Delta$, we can therefore immediately answer YES. In the other case, $n\leq \Delta k$ as claimed.} \begin{pf}\proofofPropUDkernel \end{pf} With some more combinatorial effort, we obtain: \begin{proposition}\label{prop-CUDkernel} Fix $\Delta>2$. \textsc{Min Complement Upper Domination}\xspace has a problem kernel with at most $(\Delta+0.5)\ell$ many vertices.
\end{proposition} \newcommand{\proofofPropCUDkernel}{Consider any graph $G=(V,E)$. For any partition $(F,I,P,O)$ corresponding to an upper dominating set $D=I\cup F$ for $G$, isolated vertices in $G$ always belong to $I$ and can hence be deleted in any instance of \textsc{Min Complement Upper Domination}\xspace without changing $\ell$. For any graph $G$ without isolated vertices, the set $P\cup O$ is a dominating set for $G$, since $\emptyset \not=N(v)\subset O$ for all $v\in I$ and $N(v)\cap P\not=\emptyset $ for all $v\in F$. Maximum degree $\Delta $ hence immediately implies $n=|N[P\cup O]|\leq (\Delta+1)\ell$. Since any connected component can be solved separately, we can assume that $G$ is connected. For any $v\in P$, the structure of the partition $(F,I,P,O)$ yields $|N[v]\cap D|=1$, so either $|N[v]|=1<\Delta$ or there is at least one $w\in P\cup O$ such that $N[v]\cap N[w]\not=\emptyset$. For any $v\in O$, if $N[v]\cap F\not=\emptyset$, the $F$-vertex in this intersection has a neighbour $w\in P$, which means $N[w]\cap N[v]\not=\emptyset$. If $N[v]\subset I$ and $N[v]\not=V$, at least one of the $I$-vertices in $N[v]$ has to have another neighbour to connect to the rest of the graph. Since $N[I]\subset O$, this also implies the existence of a vertex $w\in O$, $w\not=v$ with $N[w]\cap N[v]\not=\emptyset$. Finally, if $N[v]\not\subset I\cup F$, there is obviously a $w\in P\cup O$, $w\not=v$ with $N[w]\cap N[v]\not=\emptyset$. Assume that there is an upper dominating set with partition $(F,I,P,O)$ such that $|P\cup O|=l\leq \ell$ and let $v_1,\dots,v_l$ be the $l>1$ vertices in $P\cup O$. 
By the domination property of $P\cup O$ argued above, we have: $$ n= |\bigcup _{i=1}^l N[v_i]|=\tfrac12\sum_{i=1}^l |N[v_i]\setminus\bigcup_{j=1}^{i-1}N[v_j]|+ \tfrac12\sum_{i=1}^l |N[v_i]\setminus\!\!\bigcup_{j=i+1}^{l}\!\!N[v_j]|$$ Further, by the above argument about neighbourhoods of vertices in $P\cup O$, maximum degree $\Delta$ yields for every $i\in \{1,\dots,l\}$ either $|N[v_i]\setminus\bigcup_{j=1}^{i-1}N[v_j]|\leq \Delta$ or $|N[v_i]\setminus\bigcup_{j=i+1}^lN[v_j]|\leq \Delta$, which gives: $$n=\tfrac12 \sum_{i=1}^l \left( |N[v_i]\setminus\bigcup_{j=1}^{i-1}N[v_j]| + |N[v_i]\setminus\bigcup_{j=i+1}^lN[v_j]|\right)\leq \tfrac12 l(2\Delta +1)\leq(\Delta+0.5)\ell.$$ Any graph with more than $(\Delta+0.5)\ell$ vertices is consequently a NO-instance, which yields the stated kernelisation, as the excluded case $|P\cup O|=1$ (or in other words $N[v]=V$ for some $v\in O$) can be solved trivially.} \begin{pf}\proofofPropCUDkernel \end{pf} This implies that we have a $3k$-size vertex kernel for \textsc{Upper Domination}\xspace, restricted to subcubic graphs, and a $3.5\ell$-size vertex kernel for \textsc{Min Complement Upper Domination}\xspace, again restricted to subcubic graphs. With \cite[Theorem 3.1]{Cheetal2007}, we can conclude the following consequence: \begin{corollary} Unless $P$ equals $NP$, for any $\varepsilon>0$, \textsc{Upper Domination}\xspace, restricted to subcubic graphs, does not admit a kernel with less than $(1.4-\varepsilon)k$ vertices; neither does \textsc{Min Complement Upper Domination}\xspace, restricted to subcubic graphs, admit a kernel with less than $(1.5-\varepsilon) \ell$ vertices. \end{corollary} \subsection{Approximation} Using that \textsc{Maximum Independent Set}\xspace on cubic graphs is APX-hard \cite{AliKan2000}, one can however obtain the following result from the proof of Theorem~\ref{UDcubic}. \begin{theorem}\label{ud_ptas} \textsc{Upper Domination}\xspace is APX-hard even for cubic graphs.
\end{theorem} \begin{pf} The reduction from \textsc{Maximum Independent Set}\xspace on cubic graphs in the proof of Theorem~\ref{UDcubic} is an $L$-reduction. Given a graph $G$ on $n$ vertices and $m$ edges, as an instance of \textsc{Maximum Independent Set}\xspace, we construct a graph $G'$, an instance of \textsc{Upper Domination}\xspace, with the following properties: $\operatorname{opt}(G')=\operatorname{opt}(G)+3m$ and given any solution of size $val'$ in $G'$, we can construct a solution of size $val=val'-3m$. Thus $\operatorname{opt}(G') \leq 19 \operatorname{opt}(G)$ since $\operatorname{opt}(G) \geq n/4$. Moreover, $\operatorname{opt}(G)-val=\operatorname{opt}(G')-val'$. \end{pf} We can obtain membership in APX, so altogether APX-completeness, \LV{by re-considering the simple greedy algorithm that was the basis of the kernel argument given above.}\SV{by the simple fact that any dominating set has cardinality at least $\frac {n}{\Delta+1}$ for graphs of maximum degree $\Delta$.} However, we can do much better, as we now exhibit\LV{ in the following theorem}. \begin{theorem}\label{ud_delta_approx} Consider some graph-class~$\mathcal{G}(p,{\rho})$ with the following properties:\begin{itemize} \item One can colour every $G \in \mathcal{G}(p,{\rho})$ with~$p$ colours in polynomial time. \item For any $G \in \mathcal{G}(p,{\rho})$, \textsc{Maximum Independent Set}\xspace is $\rho$-approximable in polynomial time. \end{itemize} Then, for every $G \in \mathcal{G}(p,{\rho})$, \textsc{Upper Domination}\xspace is approximable in polynomial time within ratio at most: \begin{equation}\label{ratio} \max\left\{\rho, \frac{\Delta\rho p + \Delta - 1}{2\rho\Delta}\right\} \end{equation} \end{theorem} The basic proof idea is based upon equation (\ref{upperdomdelta_is}) and the fact that any maximal independent set is a feasible \textsc{Upper Domination}\xspace-solution.
Then, we distinguish two cases, and we run two \textsc{Maximum Independent Set}\xspace-algorithms efficiently handling them. We finally output the best among the computed solutions. Any graph of maximum degree~$\Delta$ can be coloured with at most~$\Delta$ colours~\cite{Lov75}; furthermore, \textsc{Maximum Independent Set}\xspace is approximable within ratio $(\Delta+3)/5$ in graphs of maximum degree~$\Delta$~\cite{BerFuj95}. So, the class~$\mathcal{G}(\Delta,(\Delta+3)/5)$ contains all graphs of maximum degree $\Delta$. Furthermore,~$\mathcal{G}(4,1+\epsilon)$ contains the planar graphs, and~$\mathcal{G}(3,1+\epsilon)$ the triangle-free planar graphs, since \textsc{Maximum Independent Set}\xspace admits a PTAS for such graphs~\cite{Bak94}. Hence, from Theorem~\ref{ud_delta_approx}, the following corollary can be derived. \begin{corollary}\label{ud_delta_all} \textsc{Upper Domination}\xspace is approximable in polynomial time within ratios $(6\Delta^2+2\Delta-3)/(10\Delta)$ in general graphs, $5/2 + \epsilon$ in planar graphs and $2 + \epsilon$ in triangle-free planar graphs, for any $\epsilon > 0$. \end{corollary} Let us note that \LV{ the result of} Theorem~\ref{ud_delta_approx} can be easily improved in the case of regular graphs. Indeed, in this case $\Gamma(G) \leqslant \frac{n}{2}$ according to~\cite{HenSla96}. Then\LV{, from the proof of Theorem~\ref{ud_delta_approx}} one can conclude\SV{:} \LV{ the following corollary.} \begin{corollary}\label{ud_delta_approx_cor} \textsc{Upper Domination}\xspace in regular graphs is approximable in polynomial time within ratio~$\Delta/2$. \end{corollary} We are now turning to the complementary problem, i.e., to \textsc{Min Complement Upper Domination}\xspace. Notice that this problem lies in APX for general graphs; more precisely, it is 4-approximable in polynomial time.
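For illustration only, the vertex-cover-based 4-approximation for \textsc{Min Complement Upper Domination}\xspace described earlier (2-approximate vertex cover via a maximal matching, extension of its complement to a maximal independent set, output of that set's complement) can be sketched in plain Python; the graph representation and all function names here are ours, not from any library:

```python
# Hedged sketch of the factor-4 approximation for Min Complement Upper
# Domination: both endpoints of a maximal matching form a 2-approximate
# vertex cover V'; S' = V \ V' is independent and is greedily extended to a
# maximal independent set S; V \ S is then the complement of a minimal
# dominating set. Graphs are adjacency dicts {vertex: set of neighbours}.

def matching_vertex_cover(adj):
    """2-approximate vertex cover: both endpoints of a maximal matching."""
    cover, matched = set(), set()
    for u in adj:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched:
                cover |= {u, v}
                matched |= {u, v}
                break
    return cover

def complement_upper_domination(adj):
    cover = matching_vertex_cover(adj)
    s = set(adj) - cover              # independent set S'
    for v in sorted(adj):             # greedily extend to a maximal ind. set
        if v not in s and not (adj[v] & s):
            s.add(v)
    return set(adj) - s               # complement of a minimal dominating set

# A 4-cycle: {0, 2} is a maximal independent set, so {1, 3} is feasible.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(complement_upper_domination(c4))
```

This is a minimal sketch of the argument from the approximation proof above, not an optimised implementation; the running-time improvements via parameterised vertex-cover algorithms are not reflected here.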
Since \textsc{Minimum Vertex Cover}\xspace is 7/6-approximable in polynomial time~\cite{BerFuj95} for cubic graphs, using the same argument as in the proof of Theorem~\ref{cud_apx}, we obtain a polynomial-time 7/3-approximation for \textsc{Min Complement Upper Domination}\xspace on cubic graphs.\\ \begin{theorem}\label{cud_ptas} \textsc{Min Complement Upper Domination}\xspace is APX-complete even for cubic graphs. \end{theorem}\SV{This follows from the proof of Theorem~\ref{UDcubic} which can be seen as an L-reduction from \textsc{Maximum Independent Set}\xspace to \textsc{Min Complement Upper Domination}\xspace.} \newcommand{\proofofcudptas}{We re-consider the same reduction from \textsc{Maximum Independent Set}\xspace on cubic graphs as in the proof of Theorem~\ref{UDcubic} and prove that it is indeed an $L$-reduction also for this case. Given a graph $G$ on $n$ vertices and $m$ edges, as an instance of \textsc{Maximum Independent Set}\xspace, we construct a graph $G'$, an instance of \textsc{Min Complement Upper Domination}\xspace on $n'$ vertices, with the following properties: $\operatorname{opt}(G')=n'-\operatorname{opt}(G)-3m$ and given any solution of size $val'$ in $G'$, we can construct a solution of size $val=n'-3m-val'$. Since $G$ is cubic, $m=3n/2$ and $n'=n+6m=10n$, so $\operatorname{opt}(G')=11n/2-\operatorname{opt}(G)\leq 21n/4\leq 21\operatorname{opt}(G)$, since $\operatorname{opt}(G) \geq n/4$. Moreover, $\operatorname{opt}(G)-val=val'-\operatorname{opt}(G')$. } \LV{\begin{pf} \proofofcudptas \end{pf} } \LV{\section{Planar graphs} Notice that the edge gadget that we used in the proof of Theorem~\ref{UDcubic} destroys planarity, so this does not show that \textsc{Upper Domination}\xspace is NP-hard on planar cubic graphs, as it is known for \textsc{Maximum Independent Set}\xspace.
However, since the two crossing edges of the gadget can be omitted (as the reader can verify), we immediately obtain the following result: \begin{corollary}\label{UDcubicplanar} \textsc{Upper Domination}\xspace is NP-hard on subcubic planar graphs. \end{corollary} \todo[inline]{To the approx. group: Are there things in your NP-hardness proof of this result that are worth maintaining?! Then, please insert this here. I know that the one-paragraph note of ours is pretty short.} \todo[inline]{Mathieu, can you see into technical details if / how Corollary~\ref{ud_ex} extends to tree decompositions, as this is important to know. Of course we believe that such a thing exists, but does it also have 6 in the basis of the running time?} Similar constructions should be possible for graphs of any bounded genus. Let us finally mention that some further tricks should be possible to improve on the running times or other practical aspects both of subexponential-type algorithms and of PTAS for planar graphs. We only refer to the discussions in \cite{MarGuJia2009,MarGu2013} and the literature quoted therein with respect to practical aspects of the computation of $\gamma(G)$ for planar graphs.} \section{Discussions and open problems} The motivation to study \textsc{Upper Domination}\xspace (at least for some in the group of authors) was based on the following observation: \begin{proposition} \textsc{Upper Domination}\xspace can be solved in time $O^*(1.7159^n)$ on general graphs of order $n$. \end{proposition} \begin{pf} The suggested algorithm simply lists all minimal dominating sets and then picks the biggest one. It has been shown in \cite{Fometal2008a} that this enumeration problem can be performed in the claimed running time. \end{pf} It is of course a bit nagging that there seems to be no better algorithm (analysis) than this enumeration algorithm for \textsc{Upper Domination}\xspace.
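For intuition, the enumeration idea can be made concrete by brute force over all vertex subsets; this naive sketch (far slower than the $O^*(1.7159^n)$ enumeration of minimal dominating sets cited above, and with all names our own) computes $\Gamma(G)$ for tiny graphs:

```python
# Brute-force illustration of Gamma(G): the largest minimal dominating set,
# found by testing every vertex subset of a small graph, largest size first.
from itertools import combinations

def is_dominating(adj, d):
    return all(v in d or adj[v] & d for v in adj)

def is_minimal_dominating(adj, d):
    return is_dominating(adj, d) and \
        all(not is_dominating(adj, d - {v}) for v in d)

def upper_domination_number(adj):
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        for comb in combinations(vs, k):
            if is_minimal_dominating(adj, set(comb)):
                return k
    return 0

# P4 (path on four vertices): {0, 3} is minimal dominating, Gamma(P4) = 2.
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(upper_domination_number(p4))
```

The gap between this $O^*(2^n)$ brute force and the enumeration bound is exactly what the open problems below ask to close further, e.g. by branching.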
Recall that the minimisation counterpart can be solved in better than $O^*(1.5^n)$ time \cite{Iwa1112,RooBod2011}. Intuitively, the reason that nothing better than enumeration has been found has to do with the hardness of the extension problem considered in Section~\ref{sec-extension}, as it is hard to fill up partial solutions determined by branching. As this appears to be quite a tough problem, it makes a lot of sense to study it on restricted graph classes. This is what we did above for subcubic graphs, see Corollary~\ref{ud_ex}. We are currently working on \textsc{Upper Domination}\xspace on planar graphs, which turns out to be harder than for other graph problems, again due to the hardness of the extension problem. We summarise some open problems. \begin{itemize} \item Is \textsc{Upper Domination}\xspace in W[1]? \item Can we improve on the \LV{current} 4-approximation of \textsc{Min Complement Upper Domination}\xspace? \LV{\todo[inline]{To the approx. group: Can you solve this?}} \item Can we find smaller kernels for \textsc{Upper Domination}\xspace or \textsc{Min Complement Upper Domination}\xspace on degree-bounded graphs? \item Can we find exact (e.g., branching) algorithms that beat enumeration or pathwidth-based algorithms for \textsc{Upper Domination}\xspace, at least on cubic graphs? \end{itemize} \paragraph{Acknowledgements.} We would like to thank our colleagues David Manlove and Daniel Meister for some discussions on upper domination. Part of this research was supported by the Deutsche Forschungsgemeinschaft, grant FE 560/6-1. \bibliographystyle{plain}
\section{Introduction} The low-luminosity active galactic nucleus (LLAGN) NGC~4258\ is an essential object in our quest to understand the astrophysics of extragalactic supermassive black holes. Very Large Array (VLA) and Very Long Baseline Interferometry (VLBI) observations have found a set of water masers that trace a nearly edge-on, geometrically-thin gas disk $\sim 0.2-0.3{\rm\thinspace pc}$ from the central black hole (Miyoshi et al. 1995; Herrnstein et al. 1999). The near-perfect Keplerian velocity curve of these water masers provides one of the strongest and most robust pieces of evidence for the existence of extragalactic supermassive black holes (Miyoshi et al. 1995). Furthermore, the dynamics of this disk allows precise measurements of the central black hole mass, (outer) accretion disk inclination and warping, \emph{and} its distance ($M = 3.9 \pm 0.3 \times 10^{7}\,\hbox{$\rm\thinspace M_{\odot}$}$, $D = 7.2 \pm 0.5$\,Mpc, yielding $r_g \equiv GM/c^2 = 5.8\times 10^{12}\,{\rm cm}$, 1\,pc $=5.4 \times 10^5~r_g$, and 1'' = 35\,pc; Miyoshi et al. 1995; Herrnstein et al. 1999). This is the only AGN for which the black hole mass, distance, and (outer) accretion disk geometry are so accurately known. Due to the precision with which the black hole mass and distance are known, NGC~4258\ can be used as a basic test-bed for our models of black hole accretion. The overall luminosity of the AGN is small compared with the Eddington luminosity of the black hole, $L\sim 10^{-4}\,L_{\rm Edd}$, the cause of which has remained controversial. Is the small luminosity simply due to a very small mass accretion rate through a radiatively-efficient disk, as suggested by modeling the physics of the maser production (Neufeld \& Maloney 1995)? Or does the disk make a transition to a radiatively-inefficient accretion flow (RIAF) at some radius as suggested by the modeling of Lasota et al. (1996) and Gammie, Narayan \& Blandford (1999)?
What is the role of the jet, and how much of the radiative luminosity of the AGN is actually due to the jet (Yuan et al. 2002)? In the bigger picture, what is the fundamental difference among the accretion flows in the most underluminous galactic nuclei (e.g., Sgr A$^*$), LLAGN and powerful AGN? Sensitive X-ray observations provide a powerful means of probing both large scale and small scale structures within NGC~4258 and hence addressing these questions. Soft X-ray thermal emission from hot gas associated with the well known helically twisted jets (the anomalous arms) has been known since the {\it Einstein} days (Fabbiano et al. 1992; also see {\it ROSAT} work of Pietsch et al. 1994; Cecil, Wilson \& De Pree 1995; Vogler \& Pietsch 1999). However, power-law X-ray emission from the (absorbed) central engine of the AGN itself was not seen until the advent of {\it ASCA} (Makishima et al. 1994; Reynolds, Nowak \& Maloney 2000). {\it ASCA} clearly revealed variability of both the absorbing column and hard X-ray flux on the timescale of years (Reynolds et al. 2000; hereafter R00), a result supported by short observations of NGC~4258 by {\it XMM-Newton} and {\it Chandra} (Pietsch \& Read 2002; Young et al. 2004; Fruscione et al. 2005). The most sensitive hard X-ray ($>10{\rm\thinspace keV}$) study of NGC~4258 was conducted by \textsl{BeppoSAX}\ (200\,ksec; Fiore et al. 2001) which detected the AGN emission out to beyond 50\,keV (Fiore et al. 2001). \textsl{BeppoSAX}\ also revealed day-timescale variability of the hard X-ray power-law, setting a firm upper limit of $250\,r_g$ ($5 \times 10^{-4}$\,pc) to the X-ray emission region size. In this {\it Paper}, we present results from new {\it Suzaku} and {\it XMM-Newton} observations of NGC~4258 which, when supplemented with survey data from the {\it Swift} Burst Alert Telescope (BAT), gives us an unprecedented view of this AGN from 0.3\,keV up to 140\,keV.
These data suggest a circumnuclear environment that is remarkably ``clean'' compared with other Seyfert 2 nuclei. {\it Suzaku} also reveals rapid variability of the AGN emission, allowing us to set new constraints on the size/compactness of the X-ray source. The plan of this paper is as follows. Section~2 briefly discusses the data that we utilize and the basic reduction steps. We present our analysis of the spectrum, as well as spectral variability, in Section~3. Section~4 discusses the implications of these results for our understanding of the structure of this AGN and the origin of the X-ray emission. In particular, we argue that the iron line has all of the properties expected if it were to originate from the surface layers of the (warped) accretion disk. Throughout this paper we quote error bars at the 90\% confidence level for one interesting parameter. All error bars on figures are displayed at the $1\sigma$ level. \section{Observations and data reduction} In this section, we discuss the new observations presented in this paper and the subsequent data reduction. {\it Suzaku} observed NGC~4258 for a total of 186\,ksec starting 10-Jun-2006 as part of the Cycle-1 Guest Observer Program (PI: Itoh, US Co-PI: Reynolds). All four X-ray Imaging Spectrometers (XIS~0--3) as well as the Hard X-ray Detector (HXD) were operational and collecting data, and NGC~4258 was placed at the ``nominal HXD'' aimpoint. Reduction started from the cleaned Version-2 data products, and data were further reduced using FTOOLS version 6.4 according to the standard procedure outlined in the ``Suzaku Data Reduction (ABC) Guide''. The standard filtering resulted in 88.5\,ksec of ``good'' XIS data. Spectra and lightcurves were extracted from all XISs using a circular region of radius 3.25\,arcmin centered on NGC~4258. Background spectra were obtained from rectangular source free regions (avoiding the calibration sources) around the chip edges. 
Response matrices and effective area curves were generated using the {\tt xisrmfgen} and {\tt xissimarfgen} tools, respectively, using the recommended 400,000 photons per energy bin during the construction of the effective area files. We also utilize HXD data in this paper. Standard filtering resulted in 97.5\,ksec of ``good'' {\it Suzaku}-HXD/PIN data from which a spectrum was constructed. A PIN background spectrum was produced that included the Cosmic X-ray Background (CXB) plus the latest model of the detector background. We do not consider HXD/GSO data in this paper due to the fact that the high background of this detector makes a detection of NGC~4258 impossible. The {\it XMM-Newton} data presented in this paper result from a continuous 65\,ksec exposure started on 17-Nov-2006, 160 days after the {\it Suzaku} observation. All of the European Photon Imaging Cameras (EPIC) were operated in {\tt PrimeFullWindow} mode, and the data were cleaned using the Science Analysis System (SAS) version 7.1 following the standard procedure outlined in the ``User's Guide to XMM-Newton SAS''. We reject data during three background flares (PN count rate $>$50\,ct\,s$^{-1}$). The final ``good'' exposure time for the EPIC detectors is 55\,ksec. EPIC spectra were extracted using a circular extraction region of radius 30\,arcsec centered on the bright nucleus of NGC~4258, and background spectra were extracted using a nearby source free circular region of radius 1.5\,arcmin. The {\it Swift/BAT} is a wide field (2 steradians) coded aperture hard X-ray instrument which, during normal operations, surveys 60\% of the sky each day at $<$20\,milliCrab sensitivity. The BAT spectrum used here was prepared as part of the 22-month extension to the BAT-survey. The BAT survey spectra are derived from an independent all sky mosaic map in each of nine energy bins averaged over 22 months of data beginning on Dec 5 2004. The energy bin edges are 14, 20, 24, 35, 50, 75, 100, 150, 195 keV. 
The nature of the coded-mask analysis naturally results in a background-subtracted source spectrum. As discussed in Tueller et al. (2008), fitting of the BAT data was performed using a diagonal response matrix which correctly accounts for instrumental systematics in sources with spectral indices similar to the Crab (photon index $\Gamma\sim 2$). See Tueller et al. (2008) for more details. All spectra were binned to a minimum of 20 counts per bin to facilitate $\chi^2$ fitting. All spectral analysis presented here is performed using XSPECv11.3.2. \section{Results} In this section, we describe the spectral and temporal properties of the X-ray emission from NGC~4258. Given its superior signal-to-noise and energy band, our discussion focuses on the {\it Suzaku} dataset, although we compare and contrast with the new {\it XMM-Newton} dataset wherever appropriate. In order to extend the energy reach of our time-averaged spectral study (\S\ref{sec:broadspec}), we supplement the {\it Suzaku} data with {\it Swift}/BAT data. Finally, in order to study long-term variations in the all-important fluorescent iron line, we include data from our new {\it XMM-Newton} observation as well as archival {\it XMM-Newton} and {\it ASCA} data. \subsection{The 0.3--140\,keV spectrum} \label{sec:broadspec} \begin{figure} \centerline{ \psfig{figure=f1.eps,width=0.48\textwidth} } \caption{The 0.3--140\,keV spectrum of NGC~4258. Shown here are data from {\it Suzaku}/XIS1 (0.3--10\,keV; black), {\it Suzaku}/PIN (14--30\,keV; orange), and the {\it Swift}/BAT survey (15--140\,keV; gold).
The spectral model consists of two optically-thin thermal plasma components (magenta; unabsorbed apart from the effects of the cold Galactic absorption column of $N_{\rm Gal}=1.45\times 10^{20}\,{\rm cm}^{-2}$), an absorbed power-law (with intrinsic absorption $N_H=9.2\times 10^{22}\,{\rm cm}^{-2}$ and photon index $\Gamma=1.77$), and a narrow 6.4\,keV iron fluorescence line.} \label{fig:spectrum} \vspace{0.3cm} \end{figure} The combination of the {\it Suzaku}/XISs, the {\it Suzaku}/PIN and the {\it Swift}/BAT allows us to form the spectrum of NGC~4258\ over the energy range 0.3--140\,keV (see Fig.~\ref{fig:spectrum} which, for clarity, only shows one of the four XISs). This spectrum extends to significantly higher energy than any previous study of NGC~4258. It must be cautioned, however, that the {\it Swift}/BAT data are collected over a 22-month period compared with the {\it Suzaku} ``snapshot''. In this study, we make the assumption that the form/shape of the high-energy ($>10{\rm\thinspace keV}$) spectrum does not change with time even if its normalization does change. This allows us to perform joint spectral fits of the {\it Suzaku} and {\it Swift}/BAT data in order to study the nature of the X-ray source and the circumnuclear environment. Guided by previous studies of this source, we model this spectrum as the superposition of optically-thin thermal plasma emission with temperature $T$ (described by the XSPEC model {\tt vmekal}; Mewe et al. 1985; Kaastra 1992; Liedahl et al. 1995), an absorbed power-law (with photon index $\Gamma$ absorbed by a cold column $N_{\rm H}$), and an additional continuum component required to correctly describe the inflection point in the spectrum around 2\,keV where the thermal plasma emission and the absorbed power-law swap dominance.
There are several possible identifications of this additional continuum component, including (1) AGN emission that has scattered around the absorbing matter, (2) AGN emission that has leaked through a patchy absorber, (3) hard X-ray emission associated with X-ray binaries in the galaxy, or (4) thermal emission from very hot gas associated with star formation or the interaction of the AGN jet with the galactic disk. Both the intrinsic absorption and the Galactic absorption (from a column $N_{\rm H}=1.45\times 10^{20}\,{\rm cm}^{-2}$; Dickey \& Lockman 1990) were described using the {\tt phabs} model. To obtain a good description of the soft X-ray data, we require that the elemental abundances are allowed to vary relative to solar values (as defined by Anders \& Grevesse 1989) in two groups, Group A (with abundance $Z_A$) consisting of \{C,N,O,Ne,Na,Mg,Al,Si,S,Ar,Ca\}, and Group B (with abundance $Z_B$) containing \{Fe,Ni\}. In our Canonical Spectral Model, the additional continuum component is modeled by a power-law component with photon index equal to that of the main (absorbed) AGN power-law ($\Gamma_2=\Gamma$) that is unaffected by the intrinsic absorption. This is an appropriate spectral model to describe the scattering or leaky-absorber scenario. This model provides a decent description of the spectrum ($\chi^2/{\rm dof}=4506/4003$) with best-fitting parameters $\Gamma=1.75^{+0.05}_{-0.04}$, $N_{\rm H}=(9.2^{+0.4}_{-0.3})\times 10^{22}\,{\rm cm}^{-2}$, $kT=0.54\pm 0.01{\rm\thinspace keV}$, $Z_A=0.49^{+0.10}_{-0.08}Z_\odot$, $Z_B=0.27^{+0.05}_{-0.04}Z_\odot$. The normalization of the additional continuum component relative to the main AGN power-law is $f=6.0\pm 0.4\%$; this can be interpreted as the scattering or leakage fraction. If we allow the photon index of the additional continuum component to be free, the fit does not improve and we find that $\Gamma_2$ is very poorly constrained.
Using this spectral model, we deduce an observed 0.5--10\,keV flux $F^{\rm obs}_{0.5-10}=9.7\times 10^{-12}\hbox{$\erg\cm^{-2}\s^{-1}\,$}$ and observed 0.5--10\,keV luminosity $L^{\rm obs}_{0.5-10}=6.0\times 10^{40}\hbox{$\erg\s^{-1}\,$}$. Removing the effects of absorption within the model implies an intrinsic 0.5--10\,keV luminosity of $L^X_{0.5-10}=1.4\times 10^{41}\hbox{$\erg\s^{-1}\,$}$. The best-fitting normalization of the BAT model is 48\% that of the {\it Suzaku} model. We interpret this as true variability, i.e., the {\it Suzaku} observation caught the source at a time when the high-energy emission had twice the normalization of the 22-month average spectrum. The second continuum component can also be described by an additional thermal plasma component; however, while a good fit can be obtained ($\chi^2/{\rm dof}=4501/4004$), a rather hot ($kT>5.5{\rm\thinspace keV}$) and low-metallicity ($Z<0.26Z_\odot$) plasma is required. Thus, since it is not clear from where such a plasma component would originate, we prefer the power-law description of this additional continuum component. Our Canonical Spectral Model fit leaves a line-like residue in the 0.5--0.6\,keV range that likely signals soft X-ray complexity beyond the simple one-temperature thermal plasma model. Adding a second plasma component with an identical abundance pattern but different temperature ($kT=0.22\pm 0.02{\rm\thinspace keV}$) leads to a significant improvement in the goodness of fit ($\chi^2/{\rm dof}=4360/4001$). However, a robust exploration of these multi-temperature solutions is not possible at XIS resolutions (e.g., the abundances are poorly constrained) and we defer further discussion of this to a future publication in which high-resolution spectral data from the {\it XMM-Newton}/RGS and a long {\it Chandra}/HETG observation will be discussed.
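As a rough consistency check, the observed flux and luminosity quoted above can be tied together through the standard relation $L=4\pi d^2 F$. The sketch below (Python) assumes a distance of 7.2\,Mpc (the commonly adopted maser distance, which is not quoted in this section):

```python
import math

MPC_CM = 3.086e24  # centimetres per megaparsec

def flux_to_luminosity(flux_cgs, distance_mpc):
    """Convert an observed flux (erg cm^-2 s^-1) into a luminosity (erg s^-1)
    for an assumed source distance."""
    d = distance_mpc * MPC_CM
    return 4.0 * math.pi * d**2 * flux_cgs

# Observed 0.5-10 keV flux from the text; 7.2 Mpc is the assumed maser distance.
L = flux_to_luminosity(9.7e-12, 7.2)
print(f"L(0.5-10 keV) ~ {L:.1e} erg/s")  # ~6.0e40 erg/s, matching the quoted value
```

The quoted observed luminosity of $6.0\times 10^{40}\hbox{$\erg\s^{-1}\,$}$ is recovered to within a few per cent under this assumed distance.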
The combination of the XIS, PIN and BAT data allows us to examine the hard X-ray spectrum of this source in more detail than previously possible. For the hard-band study of this paragraph, the data below 3\,keV were formally excluded from the analysis to prevent their high statistical weight from biasing the high-energy fit. No significant deviations from a pure absorbed power-law are detected above 3\,keV. Of course, in order for the observed power-law (with $\Gamma<2$) not to possess a divergent energy flux, it must cut off or roll over at some high energy. If the power-law has an exponential cutoff, the characteristic e-folding energy is constrained by our high-energy data to be $E_{\rm fold}>124{\rm\thinspace keV}$. If we instead assume a pure power-law with cold X-ray reflection (modeled using the {\tt pexrav} code; Magdziarz \& Zdziarski 1995), the constraints on the ``reflection fraction'' ${\cal R}$ are very dependent upon the assumed inclination of the reflector. If the reflector has a slab geometry with a very high inclination ($i>80^\circ$), as expected if we identify it with the inner accretion disk, then these data provide no meaningful constraints on the reflection fraction. For more face-on reflection (e.g., the surfaces of discrete cold clouds or the inner wall of a cold torus on the far side of the X-ray source), we find ${\cal R}<0.43$. If we allow both reflection and an exponential cutoff simultaneously, the limits on the e-folding energy become weaker ($E_{\rm fold}>67{\rm\thinspace keV}$) but the constraints on reflection are essentially unaffected. The implications of this result for the circumnuclear environment of this Seyfert nucleus are discussed in \S4. An identical analysis of the 0.7--10\,keV {\it XMM-Newton}/EPIC (PN and MOS) data (supplemented with the {\it Swift}/BAT spectrum) gives a similar picture, although there are some quantitative differences.
Firstly, the flux of the thermal plasma emission is lower by a factor of two, a simple consequence of the fact that much of this emission lies in the anomalous arms and is outside of our EPIC extraction region. The temperature and iron abundance of this emission ($kT=0.58\pm 0.01{\rm\thinspace keV}$, $Z_B=0.17^{+0.08}_{-0.05}Z_\odot$) are very similar to those derived from {\it Suzaku}, although the light metal abundance is slightly lower ($Z_A=0.20^{+0.11}_{-0.06}Z_\odot$). The most robust differences are with the parameters describing the absorbed power-law; the power-law is flatter ($\Gamma=1.65\pm 0.07$) and less absorbed [$N_H=(7.7\pm 0.5)\times 10^{22}\hbox{$\cm^{-2}\,$}$] in the new {\it XMM-Newton} data as compared with the earlier {\it Suzaku} observation. Accompanying these changes is a decrease in intrinsic (unabsorbed) 0.5--10\,keV luminosity from $L^X_{0.5-10}=1.4\times 10^{41}\hbox{$\erg\s^{-1}\,$}$ to $L^X_{0.5-10}=6\times 10^{40}\hbox{$\erg\s^{-1}\,$}$. We will see below that these long-term changes are in the same sense as the short-term variability seen within the {\it Suzaku} observation and may be revealing aspects of the spatial structure of the X-ray source and absorber. \subsection{The Suzaku detection of a weak iron line} \begin{figure} \centerline{ \psfig{figure=f2.ps,width=0.48\textwidth,angle=270} } \caption{XIS residues from the best-fitting continuum model showing the presence of a 6.4\,keV fluorescent line of cold iron. Data from all XIS are shown (XIS0=black, XIS1=red, XIS2=green, XIS3=blue).} \label{fig:ironline} \vspace{0.3cm} \end{figure} The width, strength, and variability of the 6.4\,keV K-shell fluorescent line of cold iron are among the most powerful probes of cold gas in the circumnuclear environment of an X-ray luminous AGN. The {\it Suzaku} spectrum of NGC~4258\ shows the most robust evidence to date for this iron line in this source (Fig.~\ref{fig:ironline}).
Adding a Gaussian line to the Canonical Spectral Model leads to a very significant improvement in the goodness-of-fit ($\Delta\chi^2=-40$ for three additional model parameters) and gives a line energy, width, flux and equivalent width of $E=6.42\pm 0.03{\rm\thinspace keV}$, $\sigma<0.07{\rm\thinspace keV}$ (corresponding to a full width half maximum, FWHM$<1.1\times 10^4\hbox{$\km\s^{-1}\,$}$), $F_{\rm K\alpha}=(6.0^{+1.9}_{-1.6})\times 10^{-6}\,{\rm ph}\,{\rm cm}^{-2}\,{\rm s}^{-1}$ and $W_{K\alpha}=45\pm 17{\rm\thinspace eV}$, respectively. Assuming Keplerian orbits in an edge-on accretion disk, the limit on the FWHM corresponds to $r>3\times 10^3\,r_g$ ($6\times 10^{-3}{\rm\thinspace pc}$). We note that the equivalent width of the iron line is entirely consistent with the limits on reflection reported in Section~\ref{sec:broadspec}. If, for now, we assume that the iron line is produced by isotropic illumination of a planar optically-thick structure, we can use the relations reported in Matt, Fabian \& Reynolds (1997) to infer that the solid angle of the reflector as seen by the X-ray source satisfies \begin{equation} \frac{\Omega}{2\pi}\approx 0.25\frac{\ln 2}{\cos\theta\,\ln(1+1/\cos\theta)}, \end{equation} where $\theta$ is the inclination of the slab and we have assumed Anders \& Grevesse (1989) abundances. Note that the quantity $\Omega/2\pi$ can be compared directly with the reflection fraction quoted in Section~\ref{sec:broadspec}. A more sophisticated treatment of the iron line strength expected via reflection from the surface of the warped accretion disk in NGC~4258 (including all of the relevant geometric effects) will be deferred to Section~4.1. Once the narrow iron line component has been modeled, there is no evidence in the XIS spectra for an additional broad/relativistic iron line from the inner accretion disk. However, due to the high inclination angle of the inner disk, the limits are not strong.
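Two of the numerical steps above (the Keplerian radius limit implied by the FWHM bound, and the obliquity factor in the solid-angle relation) are easy to reproduce. A minimal sketch in Python, where the edge-on Keplerian velocity law $v/c=\sqrt{r_g/r}$ (with the FWHM taken as twice the projected orbital speed) is our assumption:

```python
import math

C_KMS = 2.998e5  # speed of light in km/s

def keplerian_radius_rg(fwhm_kms):
    """Minimum radius (in units of r_g) for an edge-on Keplerian disk whose
    full line width stays below the FWHM limit; v/c = sqrt(r_g/r)."""
    v = fwhm_kms / 2.0           # half-width ~ projected orbital speed
    return (C_KMS / v) ** 2      # invert v/c = sqrt(r_g/r)

def reflector_solid_angle(theta_deg):
    """Omega/2pi for isotropic illumination of a planar reflector,
    using the relation quoted from Matt, Fabian & Reynolds (1997)."""
    mu = math.cos(math.radians(theta_deg))
    return 0.25 * math.log(2.0) / (mu * math.log(1.0 + 1.0 / mu))

print(keplerian_radius_rg(1.1e4))   # ~3e3 r_g, matching the quoted limit
print(reflector_solid_angle(0.0))   # exactly 0.25 for a face-on slab
```

For a face-on slab ($\theta=0$) the relation reduces to $\Omega/2\pi=0.25$, and the obliquity factor grows as the slab is tilted toward edge-on.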
Including a Schwarzschild disk-line (modeled using the fully relativistic code of Brenneman \& Reynolds [2006]) with a rest-frame energy of $E=6.4{\rm\thinspace keV}$ and an emissivity profile $\epsilon\sim r^{-3}$ between $r_{\rm in}=6r_g$ and $r_{\rm out}=1000r_g$, we derive an upper limit on the equivalent width of any broad iron line of $W_{\rm broad}<180{\rm\thinspace eV}$ assuming an inner disk inclination of $i=80^\circ$. By contrast, even if the inner disk is in an optically-thick fluorescing state and irradiated by an isotropic X-ray source, limb-darkening effects would likely reduce the equivalent width of the broad iron line to below 100\,eV (Matt et al. 1997; R00). Thus, we cannot yet rule out the possibility of an optically-thick X-ray irradiated inner accretion disk. Despite the lower signal-to-noise, the iron line is also robustly detected in the new {\it XMM-Newton} EPIC data. Adding a Gaussian line leads to an improvement in the goodness-of-fit of $\Delta\chi^2=-26$ for three additional model parameters, and gives a line energy, width, flux and equivalent width of $E=6.41\pm 0.03{\rm\thinspace keV}$, $\sigma<0.07{\rm\thinspace keV}$, $F_{\rm K\alpha}=(3.3^{+1.2}_{-1.0})\times 10^{-6}\,{\rm ph}\,{\rm cm}^{-2}\,{\rm s}^{-1}$ and $W_{K\alpha}=53\pm 19{\rm\thinspace eV}$, respectively. Comparing the {\it XMM-Newton} and {\it Suzaku} iron line fits, we see evidence for a decrease in the flux of the iron line. Motivated by the desire to use line variability to locate the fluorescing matter, this result prompts us to conduct a more systematic analysis of the long term variability of the iron line. \subsection{Long term variability of the iron line flux} Given the weak nature of the iron line in NGC~4258 (which is itself a comparatively X-ray faint AGN), there are relatively few datasets capable of providing good constraints on the line flux. 
In addition to the deep {\it Suzaku} and {\it XMM-Newton} observations presented here, we examine five additional datasets from the HEASARC archives; a deep {\it ASCA} observation (15--20 May 1999; 169\,ks of good data), and four shorter {\it XMM-Newton} observations (8-Dec-2000 [21.6\,ks]; 6-May-2001 [12.9\,ks]; 17-June-2001 [13.6\,ks]; 22-May-2001 [16.5\,ks]). We note that there is also a 15.2\,ks {\it XMM-Newton} observation on 17-Dec-2001 that we ignore because it is severely affected by flares in the instrumental background. The {\it XMM-Newton} data were processed according to the description given in \S2. Processing of the {\it ASCA} data follows R00 except for the use of the latest version of the FTOOLS/HEADAS package (v6.4) and calibration files. \begin{figure} \vspace{0.5cm} \centerline{ \psfig{figure=f3.ps,width=0.48\textwidth,angle=270} } \caption{Historical variability of the iron line flux (units $10^{-6}\,{\rm ph}\,{\rm cm}^{-2}\,{\rm s}^{-1}$), equivalent width (units ${\rm\thinspace eV}$), and underlying 6--7\,keV continuum (units $10^{-13}\hbox{$\erg\cm^{-2}\s^{-1}\,$}$). A 10\% systematic error (calibration uncertainty) has been assumed for all continuum flux measurements.} \label{fig:ironline_vary} \vspace{0.3cm} \end{figure} Figure~\ref{fig:ironline_vary} shows the iron line flux, equivalent width, and the 6--7\,keV continuum flux for all of the data under consideration. The superior statistics of the new datasets are evident. The hypothesis of a constant line flux can be formally rejected at the 95\% level ($\chi^2=12.1$ for 6 degrees of freedom), whereas the data are consistent with a constant equivalent width (a model with $W_{\rm K\alpha}=49{\rm\thinspace eV}$ gives $\chi^2=7.7$ for 6 degrees of freedom). The evidence for flux variability comes primarily from the new {\it XMM-Newton} dataset (MJD~54056), which coincides with a historical minimum in the 6--7\,keV continuum flux.
Variations of the line flux by almost a factor of two over the course of 160 days (i.e., between the new {\it Suzaku} and {\it XMM-Newton} observations) allow us to place interesting constraints on the location of the fluorescing matter; the light crossing time of the full line emitting region must be no greater than 160 days (and likely significantly smaller). Thus, we conclude that the line emitting material is within a radius of $0.07{\rm\thinspace pc}$ ($4\times 10^4r_g$) from the central X-ray source. This strongly suggests that there is cold material (producing fluorescent iron emission) {\it within} the masing portion of the disk (the masers appear to have an inner truncation radius of approximately 0.13\,pc). \subsection{The Suzaku flare} \begin{figure} \vspace{0.3cm} \centerline{ \psfig{figure=f4.eps,width=0.48\textwidth} } \caption{Sum of the 0.3--10\,keV light curves from the Front Illuminated XISs (XIS0,2,3). Also shown are the results of a ``Bayesian block'' analysis with $p=3\times 10^{-5}$ (orange) and $p=0.085$ (red).} \vspace*{0.5cm} \label{fig:lightcurve} \end{figure} Significant time variability on a range of characteristic timescales is seen during the {\it Suzaku} observation (Fig.~\ref{fig:lightcurve}). A broad 100\,ksec hump is centered within the (almost) 200\,ksec observation window. However, much more rapid variability is apparent. Applying a Bayesian Blocks analysis\footnote{http://space.mit.edu/CXC/analysis/SITAR/bb\_experiment.html} (based upon earlier work by Scargle 1998), which searches for flare-like structure in lightcurves dominated by Poisson statistics, the overall lightcurve is found to be significantly variable, with robust evidence for 10\% variations on 16\,ksec timescales (false alarm probability $p=3\times 10^{-5}$), and good evidence for the presence of a 5\,ksec flare ($p=8.5\times 10^{-2}$).
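The 0.07\,pc radius quoted earlier in this subsection follows from simple light-travel arithmetic: if the light-crossing time of the full (diameter) line-emitting region is at most 160 days, the radius is at most half that light-travel distance. The sketch below assumes a maser-derived black hole mass of $\sim 3.9\times 10^7\,M_\odot$ (an assumption of this sketch, not a value quoted here) for the conversion to gravitational radii:

```python
C_CMS = 2.998e10            # speed of light, cm/s
G = 6.674e-8                # gravitational constant, cgs
MSUN = 1.989e33             # solar mass, g
PC_CM = 3.086e18            # cm per parsec
M_BH = 3.9e7 * MSUN         # assumed maser-derived black hole mass
R_G = G * M_BH / C_CMS**2   # gravitational radius, cm

def region_radius_pc(crossing_days):
    """Radius limit when the light-crossing time of the full region
    (i.e. its diameter) is at most `crossing_days`."""
    diameter = C_CMS * crossing_days * 86400.0
    return (diameter / 2.0) / PC_CM

r_pc = region_radius_pc(160.0)
print(f"r < {r_pc:.2f} pc = {r_pc * PC_CM / R_G:.1e} r_g")
```

This reproduces the quoted limits of $\sim 0.07{\rm\thinspace pc}$ and $\sim 4\times 10^4r_g$.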
Direct inspection of XIS images confirms that the variable source is located at the nucleus of NGC~4258 to within $20^{\prime\prime}$. This confirms and strengthens the case for rapid variability seen by {\it BeppoSAX} (Fiore et al. 2001), and alleviates the background and confusion concerns expressed by Fruscione et al. (2005) about the {\it BeppoSAX} variability. In the absence of relativistic beaming effects, light-crossing time arguments allow one to estimate an approximate upper limit to the size of the emitting regions; the large amplitude 50\,ksec variability should originate from a region smaller than about $250r_g$, whereas the corresponding limit for the rapid 5\,ksec flare is just $25r_g$. If this variability is associated with accretion disk processes, more stringent limits can be set by equating these timescales with the dynamical time of a Keplerian disk, $\Omega^{-1}=\sqrt{r^3/GM}$. Thus, under this disk-origin scenario, the 50\,ksec variability should originate from within $r\approx 40r_g$ whereas the 5\,ksec flare should occur in the innermost disk, $r\approx 10r_g$. We search for any spectral changes during the flare by splitting the {\it Suzaku} observation into a ``low-state'' and a ``high-state''. Only the XIS data have sufficient S/N to permit this exercise. We define the threshold distinguishing high-state from low-state to be a total XIS count rate of $1$\,cps. Using the Canonical Spectral Model, the low-state XIS0--3 spectra are fit jointly with the high-state XIS0--3 spectra. The parameters describing the soft thermal plasma emission are fixed between the low- and high-state spectral models. Initially, we assume that the photon index of the AGN emission and the intrinsic absorption column are also the same for the two states, with only the normalizations of the AGN power-laws being allowed to float between the low- and high-state spectral models.
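The light-crossing and Keplerian dynamical-time size limits quoted earlier in this subsection can be reproduced under the same assumed black hole mass of $\sim 3.9\times 10^7\,M_\odot$ (again an assumption of this sketch, not stated in the text):

```python
C = 2.998e10                   # speed of light, cm/s
G = 6.674e-8                   # gravitational constant, cgs
M = 3.9e7 * 1.989e33           # assumed black hole mass, g
RG = G * M / C**2              # gravitational radius, cm

def light_crossing_rg(t_sec):
    """Upper limit on the source size (in r_g) from the light-crossing time."""
    return C * t_sec / RG

def dynamical_radius_rg(t_sec):
    """Radius (in r_g) where the Keplerian dynamical time
    Omega^-1 = sqrt(r^3 / GM) equals the variability timescale."""
    return (G * M * t_sec**2) ** (1.0 / 3.0) / RG

print(light_crossing_rg(5e3))     # of order the ~25 r_g quoted for the 5 ksec flare
print(dynamical_radius_rg(5e4))   # ~40 r_g for the 50 ksec variability
print(dynamical_radius_rg(5e3))   # ~10 r_g for the 5 ksec flare
```

The dynamical-time limits are tighter than the light-crossing limits because the Keplerian orbital timescale at a given radius is much longer than the light-crossing time of that radius.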
The resulting best-fit parameters are identical to those found for the full time-averaged spectrum. We then allow the photon index and the absorbing column density to be different between the low- and high-state. While there was a hint that both the photon index and the absorbing column density increased in the high-state, the formal improvement in the goodness of fit is {\it not} statistically significant ($\Delta\chi^2=-5$ for the addition of two new model parameters). Thus, direct spectral modeling of the low- and high-state spectra does not reveal robust evidence of spectral variability. An alternative methodology is to construct and then model the high$-$low difference spectrum; this is a formally correct procedure (and yields meaningful results) if the physical difference between the high- and low-state is the addition of a new emission component. We find that the difference spectrum has the form of a pure absorbed power-law; this explicitly demonstrates that, as expected, the soft X-ray thermal plasma component has not changed between the high- and low-state spectra. Interestingly, both the absorption column and photon index characterizing this variable emission differ from those found in the time-averaged spectrum, $N_{\rm H}=(1.58^{+0.37}_{-0.31})\times 10^{23}\,{\rm cm}^{-2}$ and $\Gamma=2.4\pm 0.5$ [compared with $N_{\rm H}=(9.2^{+0.4}_{-0.3})\times 10^{22}\,{\rm cm}^{-2}$ and $\Gamma=1.75^{+0.05}_{-0.04}$ for the time-averaged spectrum]. While these results must be taken as tentative, it appears that the difference spectra are revealing spectral changes that are too subtle to be revealed by direct modeling of the low- and high-state spectra alone. \section{Discussion} \subsection{The origin of the fluorescent iron line} Our study has found the first significant evidence of iron line flux variability in this source.
Combined with the limits on the velocity width of the iron line, the variability allows us to constrain the line emitting region to the range $3\times 10^3r_g<r<4\times 10^4r_g$ ($6\times 10^{-3}{\rm\thinspace pc}<r<7\times 10^{-2}{\rm\thinspace pc}$). The most obvious candidate for the fluorescing matter on these spatial scales is the accretion disk itself. \begin{figure} \centerline{ \psfig{figure=f5.ps,width=0.48\textwidth,angle=270} } \caption{Radial distribution of iron line emission in the overall best-fitting warped disk model of M08 (dashed blue line) and the best-fitting gravitationally-stable model of M08 (solid red line). The quantity shown here, $f_L(r)$, is the total observed line flux (in arbitrary units) from a given radius in the accretion disk.} \label{fig:warped_results} \end{figure} Assuming an isotropic X-ray source at the center of the accretion disk, the expected iron line from the disk can be estimated once we have a model for the 3-dimensional geometry of the warped disk relative to the observer. Martin (2008; M08) has recently described the maser data for the NGC~4258 accretion disk using a model of a disk warped by the General Relativistic frame-dragging effects of a central rotating black hole (Bardeen \& Petterson 1975). As described in the Appendix, we have calculated the expected iron line equivalent width and the radial distribution of iron line emission for both the overall best-fitting warped disk model of M08, and the best-fitting warped disk model of M08 that is stable to self-gravity. These two warped disk models have a total warp angle between the inner and outer disk of $\eta=45.8^\circ$ and $85.9^\circ$, respectively. Assuming solar abundances (Anders \& Grevesse 1989), the $\eta=45.8^\circ$ model gives an iron line that is both too weak ($W_{\rm K\alpha}=25{\rm\thinspace eV}$) and originates from too far out in the disk to be compatible with the line variability ($5\times 10^4-1\times 10^5r_g$; see Fig.~\ref{fig:warped_results}).
On the other hand, the gravitationally stable warped disk of M08 predicts an iron line of the correct strength ($W_{\rm K\alpha}=51{\rm\thinspace eV}$), originating from a region in the disk that is compatible with both the line width and line variability constraints ($(1-4)\times 10^4r_g$; Fig.~\ref{fig:warped_results}). We note that, in addition to the high viewing inclination, the geometry of the warped disk implies that most of the observed iron line emission is driven by highly oblique irradiation of the disk surface. Thus, it is important to account for the dependence of the iron line photon production on the angle of incidence of the irradiating X-ray continuum (see Appendix). The high viewing inclination and the oblique irradiation mean that simple estimates of the line strength based on covering fraction arguments are inappropriate. \begin{figure*} \hbox{ \psfig{figure=f6a.ps,width=0.45\textwidth,angle=270} \hspace{0.5cm} \psfig{figure=f6b.ps,width=0.45\textwidth,angle=270} } \caption{Predicted iron line profiles for the best-fitting warped disk model of M08 (dashed blue line) as well as the best-fitting gravitationally stable warped disk model (solid red line). The {\it left panel} shows the line profile for a single $\delta$-function emission line at 6.4\,keV; the symmetry of the redshifts and blueshifts is apparent. The {\it right panel} shows the expected profile of the Fe-K$\alpha$ doublet (with components at 6.392\,keV and 6.405\,keV and a 1:2 branching ratio).} \label{fig:warped_profile} \vspace*{0.5cm} \end{figure*} Thus, we have shown that one can quantitatively understand the properties of the iron line in NGC~4258 if it originates from the surface layers of a warped disk irradiated by a central X-ray source. We do require, however, a severe warp inside of the masing region of the disk of the kind possessed by the M08 best-fitting gravitationally stable model.
Interestingly, as discussed in M08, this model implies a significant mis-alignment between the inner accretion disk and the jet. The warped disk hypothesis for the origin of the iron line in NGC~4258 can be tested by future X-ray spectroscopy by searching for the predicted asymmetric double-peaked line profile. Fig.~\ref{fig:warped_profile} shows the predicted line profiles from the two warped disk models of M08, both for a perfect $\delta$-function emission line at 6.4\,keV (left panel) and the real Fe~K$\alpha$ doublet with component energies of 6.392\,keV and 6.405\,keV and a 1:2 branching ratio (right panel). Future high-resolution spectrographs (such as the micro-calorimeters on {\it Astro-H} or the {\it International X-ray Observatory}) will be able to easily resolve the predicted line profiles. \subsection{The circumnuclear environment of NGC~4258} As discussed above, most if not all of the iron line emission in NGC~4258 can be understood as originating from the warped accretion disk. This leaves very little room for any iron fluorescence from other (non-disk) cold matter in the system, such as the putative molecular torus of unified Seyfert schemes. Indeed, the circumnuclear environment of this black hole does appear to be significantly ``cleaner'' than the vast majority of Seyfert-2 galaxies. Dadina (2008) examines a sample of 62 Seyfert 2 galaxies observed with {\it BeppoSAX} and finds a mean reflection fraction of ${\cal R}=0.87\pm 0.14$ (assuming face-on reflection) and a mean iron line equivalent width of $W_{K\alpha}=693\pm 195{\rm\thinspace eV}$. The strong iron line and Compton reflection signatures in most Seyfert 2 nuclei are readily understood as originating from the obscuring molecular torus (Krolik, Madau \& Zycki 1994). For NGC~4258, the fact that our almost edge-on view of the central engine suffers absorption at ``only'' the $\sim 10^{23}\hbox{$\cm^{-2}\,$}$ level already rules out a Compton-thick torus that is aligned with the accretion disk.
Furthermore, the very weak X-ray reprocessing features (${\cal R}<0.43$ and $W_{K\alpha}=45\pm 17{\rm\thinspace eV}$ from our time-averaged {\it Suzaku} data) rule out the presence of even a misaligned geometrically-thick Compton-thick torus. \subsection{A comparison with M81* and other AGN} It is interesting to compare NGC 4258 with the LLAGN M81*, since both seem to have very clean nuclear environments. M81 is classified as a Seyfert 1.8 / Low-Ionization Emission Line Region (LINER) galaxy, with a central black hole mass of $M = 7 \times 10^7 M_\odot$ (Devereux et al. 2003) accreting at $L \simeq 10^{-5} L_{\rm Edd}$. M81* shows evidence of a one-sided radio jet emanating from the nucleus (Bietenholz, Bartel \& Rupen 2000). High-resolution X-ray spectroscopy of M81* with \emph{Chandra} (Young et al. 2007) reveals a number of emission lines, including Fe K$\alpha$, and velocity broadened Si K$\alpha$ and other thermal lines. The Fe K$\alpha$ line in M81* has an EW of $47^{+25}_{-24}$ eV, almost identical to that of NGC 4258, and there is no evidence of a broad iron line. Furthermore, the broadened Si K$\alpha$ fluorescence line in M81* is consistent with originating in a disk at $r \sim 10^4 r_g$, assuming the inclination angle is the same as the disk observed with HST, $i = 14^\circ$ (Devereux et al. 2003). The broadened thermal lines in M81* are consistent with originating at $r < 10^{4-5} r_g$, suggesting that there is hot thermal gas at small radii, possibly in the form of a radiatively inefficient accretion flow or the base of a jet (Markoff et al. 2008).
The inclination angles of the outer accretion disk in NGC 4258 ($\sim 80^\circ$) and in M81 ($14^\circ$) are significantly different, and this may account for the fact that NGC 4258 has a much larger column density ($\sim 10^{23}$ cm$^{-2}$) than M81 ($\sim 10^{21}$ cm$^{-2}$), for the absence of maser emission in M81, and for the difference in the obscuration of the broad line region (i.e., the Seyfert types). In M81*, we do not know the geometry of the disk well enough to calculate the strength of the Fe K$\alpha$ line that it would produce. Apart from their different inclination angles, the accretion flows in NGC 4258 and M81* seem to be remarkably similar. Although we have noted the similarities between the circumnuclear environments of NGC~4258 and M81*, it is interesting that these results appear to run counter to some general behaviour of the AGN population. In particular, it has been noted that the strength of the iron emission line across the AGN population is anticorrelated with both the X-ray luminosity (Iwasawa \& Taniguchi 1993) and the Eddington ratio (Winter et al. 2008). Using the Swift-BAT AGN survey, Winter et al. (2008) show that LLAGN with X-ray Eddington ratios comparable to NGC~4258 ($L_X/L_{\rm Edd}\sim 10^{-5}$) typically possess iron line equivalent widths in excess of 500\,eV. Thus, on the basis of the iron line equivalent width, the circumnuclear environments of NGC~4258 and M81* appear to have significantly less cold gas than the average LLAGN. \section{Conclusions} Using {\it Suzaku}, {\it XMM-Newton} and {\it Swift}, we have obtained an unprecedented view of the active nucleus in NGC~4258. Our principal results are: \begin{enumerate} \item Comparing the {\it Suzaku} data with {\it XMM-Newton} data taken 160 days later, we detect robust flux variability of the 6.4\,keV iron line for the first time.
Together with constraints on the velocity width of the line, and assuming Keplerian motion about the central black hole, we can constrain the iron line emitting region to lie between $3\times 10^3r_g$ and $4\times 10^4r_g$. \item We show that the strength, velocity width and time variability of the iron line can be explained by a model in which the line originates from the surface of a warped accretion disk. In particular, we present explicit calculations of the expected iron line from a disk warped by Lens-Thirring precession from a severely misaligned central black hole. \item During our {\it Suzaku} observation, we detect high amplitude intraday variability, with fluctuations on timescales as short as 5\,ks. Corresponding light travel time arguments suggest that the emission region is smaller than $25r_g$. If we make the stronger assertion that this timescale be longer than the dynamical timescale of the accretion disk at the location where it is produced, the upper limit on the radius of the emission is $10r_g$. \item In stark contrast with the vast majority of other Seyfert 2 galaxies, there are no indications of a Compton-thick obscuring torus; the weak iron line and the lack of reflection both point to a circumnuclear environment that is remarkably clean of cold gas. As pointed out by Herrnstein et al. (2005), the intrinsic absorption that we do see in the X-ray spectrum may well arise in the outer layers of the warped geometrically-thin accretion disk, further reducing the need for any cold structure other than the accretion disk itself. \item We highlight the similarities in the circumnuclear environments of NGC~4258 and another LLAGN, M81*. However, we also note that the remarkably clean circumnuclear environments found in these two LLAGN stand in contrast to those of the vast majority of LLAGN. \end{enumerate} We thank Richard Mushotzky for stimulating conversations throughout this work.
CSR thanks the NASA {\it Suzaku} and {\it XMM-Newton} Guest Observer Programs for support under grants NNX06A135G and NNX07AE97G. \section*{References} {\small \noindent Anders E., Grevesse N., 1989, Geochimica et Cosmochimica Acta, 53, 197 \noindent Bardeen J.M., Petterson J.A., 1975, ApJL, 195, L65 \noindent Brenneman L.W., Reynolds C.S., 2006, ApJ, 652, 1028 \noindent Cecil G., Wilson A.S., De Pree C., 1995, ApJ, 440, 181 \noindent Chiang J. et al., 2000, ApJ, 528, 292 \noindent Dadina M., 2007, A\&A, 461, 1209 \noindent Devereux N., Ford H., Tsvetanov Z., Jacoby G., 2003, AJ, 125, 1226 \noindent Dickey J.M., Lockman F.J., 1990, ARA\&A, 28, 215 \noindent Fabbiano G., Kim D., Trinchieri G., 1992, ApJS, 80, 531 \noindent Fiore F. et al., 2001, ApJ, 556, 150 \noindent Fruscione A., Greenhill L.J., Filippenko A.V., Moran J.M., Herrnstein J.R., Galle E., 2005, ApJ, 624, 103 \noindent Gammie C.F., Narayan R., Blandford R.D., 1999, ApJ, 516, 177 \noindent George I.M., Fabian A.C., 1991, MNRAS, 249, 352 \noindent Herrnstein J.R., et al., 1999, Nature, 400, 539 \noindent Herrnstein J.R., Moran J.M., Greenhill L.J., Trotter A.S., 2005, ApJ, 629, 719 \noindent Iwasawa K., Taniguchi Y., 1993, ApJ, 413, L15 \noindent Kaastra J.S., 1992, An X-ray Spectral Code for Optically Thin Plasmas (Internal SRON-Leiden Report, updated version 2.0) \noindent Krolik J.H., Madau P., Zycki P.T., 1994, ApJL, 420, L57 \noindent Lasota J.P., Abramowicz M.A., Chen X., Krolik J., Narayan R., Yi I., 1996, ApJ, 462, 142 \noindent Lee J.C., Fabian A.C., Reynolds C.S., Brandt W.N., Iwasawa K., 2000, MNRAS, 318, 857 \noindent Liedahl D.A., Osterheld A.L., Goldstein W.H., 1995, ApJ, 438, L115 \noindent Makishima K. et al., 1994, PASJ, 46, L77 \noindent Magdziarz P., Zdziarski A.A., 1995, MNRAS, 273, 837 \noindent Markoff S.
et al., 2008, ApJ, in press \noindent Martin R.G., 2008, MNRAS, in press (arXiv:0804.1013) [M08] \noindent Martin R.G., Pringle J.E., Tout C.A., 2007, MNRAS, 381, 1617 \noindent Matt G., Fabian A.C., Reynolds C.S., 1997, MNRAS, 289, 175 \noindent Mewe R., Gronenschild E.H.B.M., van den Oord G.H.J., 1985, A\&AS, 62, 197 \noindent Miyoshi M., Moran J., Herrnstein J., Greenhill L., Nakai N., Diamond P., Inoue M., 1995, Nature, 373, 127 \noindent Neufeld D., Maloney P.R., 1995, ApJ, 447, L17 \noindent Pietsch W., Vogler A., Kahabka P., Klein U., 1994, A\&A, 284, 386 \noindent Pietsch W., Read A.M., 2002, A\&A, 384, 793 \noindent Reynolds C.S., Nowak M.A., Maloney P.R., 2000, ApJ, 540, 143 [R00] \noindent Reynolds C.S., Nowak M.A., 2003, Phys. Rep., 377, 389 \noindent Scargle J.D., 1998, ApJ, 504, 405 \noindent Scheuer P.A.G., Feiler R., 1996, MNRAS, 282, 291 \noindent Tueller J., et al., ApJ, in press (arXiv:0711.4130) \noindent Vogler A., Pietsch W., 1999, A\&A, 352, 64 \noindent Wilson A.S., Yang Y., Cecil G., 2001, ApJ, 560, 689 \noindent Winter L.M., Mushotzky R.F., Reynolds C.S., Tueller J., 2008, ApJ, in press \noindent Young A.J., Wilson A.S., 2004, ApJ, 601, 133 \noindent Young A.J., Nowak M.A., Markoff S., Marshall H.L., Canizares C.R., 2007, ApJ, 669, 830 \noindent Yuan F., Markoff S., Falcke H., Biermann P.L., 2002, A\&A, 391, 139 }
\section{Introduction} \label{intro} The arithmetic of quadratic forms has long held a special place in number theory. In this paper we focus our efforts on algebraic varieties $X\subset \mathbb{P}^{m-1}$ which arise as the common zero locus of two quadratic forms $q_1,q_2\in \mathbb{Z}[x_1,\dots,x_m]$. We will always assume that $X$ is a geometrically integral complete intersection which is not a cone. Under suitable further hypotheses on $q_1$ and $q_2$, we will be concerned with estimating the number of $\mathbb{Q}$-rational points on $X$ of bounded height. Where successful this will be seen to yield a proof of the Hasse principle for the varieties under consideration. The work of Colliot-Th\'el\`ene, Sansuc and Swinnerton-Dyer \cite{CT} provides a comprehensive description of the qualitative arithmetic associated to the set $X(\mathbb{Q})$ of $\mathbb{Q}$-rational points on $X$ for large enough values of $m$. In fact it is known that the Hasse principle holds for any smooth model of $X$ if $m\geq 9$. This can be reduced to $m\geq 5$ provided that $X$ contains a pair of conjugate singular points and does not belong to a certain explicit class of varieties for which the Hasse principle is known to fail. In this paper the quadratic forms $q_1$ and $q_2$ will have special structures. Let $Q_1$ and $Q_2$ be integral quadratic forms in $n$ variables $\mathbf x=(x_1,\dots,x_n)$, with underlying symmetric matrices $\mathbf{M}_1$ and $\mathbf{M}_2$, so that $Q_i(\mathbf{x})=\mathbf{x}^T\mathbf{M}_i\mathbf{x}$ for $i=1,2$. Then we set \begin{align*} q_1(x_1,\dots, x_{n+2})&=Q_1(x_1,\dots,x_n)-x_{n+1}^2-x_{n+2}^2,\\ q_2(x_1,\dots, x_{n+2})&=Q_2(x_1,\dots,x_n). \end{align*} We will henceforth assume that $Q_{2}$ is non-singular and that as a variety $V$ in $\mathbb{P}^{n-1}$, the intersection of quadrics $Q_{1}(\mathbf{x})=Q_{2}(\mathbf{x})=0$ is also non-singular. 
It then follows that $X$ has a singular locus containing precisely two singular points which are conjugate over $\mathbb{Q}(i)$. The question of whether the Hasse principle holds for such varieties is therefore answered in the affirmative by \cite{CT} when $n\geq 3$. Furthermore, when $X(\mathbb{Q})$ is non-empty, it is well-known (see \cite[Proposition 2.3]{CT}, for example) that $X$ is $\mathbb{Q}$-unirational. In particular $X(\mathbb{Q})$ is Zariski dense in $X$ as soon as it is non-empty. Let $r(M)$ be the function that counts the number of representations of an integer $M$ as a sum of two squares and let $W: \mathbb{R}^{n}\rightarrow \mathbb{R}_{\geq 0}$ be an infinitely differentiable bounded function of compact support. Our analysis of the density of $\mathbb{Q}$-rational points on $X$ will be activated via the weighted sum \begin{equation}\label{eq:main-sum} S(B)=\sum_{\substack{\mathbf x \in \mathbb Z^n\\ 2\nmid Q_{1}(\mathbf{x})\\ Q_2(\mathbf{x})=0}}r(Q_1(\mathbf x)) W\left(\frac{\mathbf x}{B}\right), \end{equation} for $B\rightarrow \infty$. The requirement that $Q_{1}(\mathbf{x})$ be odd is not strictly necessary but makes our argument technically simpler. Simple heuristics lead one to expect that $S(B)$ has order of magnitude $B^{n-2}$, provided that there are points in $X(\mathbb{R})$ and $X(\mathbb{Q}_{p})$ for every prime $p$. Confirmation of this fact is provided by work of Birch \cite{birch} when $n\geq 12$. Alternatively, when $Q_{1}$ and $Q_{2}$ are both diagonal and the form $b_{1}q_{1}+b_{2}q_{2}$ is indefinite and has rank at least $5$ for every non-zero pair $(b_{1},b_{2})\in \mathbb{R}^2$, then Cook \cite{C} shows that $n\geq 7$ is permissible. The following result offers an improvement over both of these results. \begin{theorem} \label{th1} Let $n\geq 7$ and assume that $V$ is non-singular with $Q_{2}$ also non-singular. 
Assume that $Q_1(\mathbf{x})\gg 1$ and $\nabla Q_1(\mathbf{x})\gg 1$, for some absolute implied constant, for every $\mathbf{x} \in \supp(W)$. Suppose that $X(\mathbb{R})$ and $X(\mathbb{Q}_{p})$ are non-empty for each prime $p$. Then there exist constants $c>0$ and $\delta>0$ such that $$ S(B)=cB^{n-2}+O(B^{n-2-\delta}). $$ The implied constant is allowed to depend on $Q_1, Q_{2}$ and $W$. \end{theorem} In \S \ref{s:conclusion} an explicit value of $\delta$ will be given and it will be explained that the leading constant is an absolutely convergent product of local densities $ c=\sigma_\infty \prod_p \sigma_p, $ whose positivity is equivalent to the hypothesis that $X(\mathbb{R})$ and $X(\mathbb{Q}_{p})$ are non-empty for each prime $p$. In particular Theorem \ref{th1} provides a new proof of the Hasse principle for the varieties $X$ under consideration. Our proof of Theorem \ref{th1} uses the circle method. An inherent technical difficulty in applying the circle method to systems of more than one equation lies in the lack of a suitable analogue of the Farey dissection of the unit interval, as required for the so-called ``Kloosterman refinement''. In the present case this difficulty is circumvented by the specific shape of the quadratic forms $q_{1},q_{2}$. Thus it is possible to trade the equality $Q_{1}(\mathbf{x})=x_{n+1}^{2}+x_{n+2}^{2}$ for a family of congruences using the familiar identity $$ r(M)=4\sum_{d\mid M}\chi(d), $$ where $\chi$ is the real non-principal character modulo $4$. In this fashion the sum $S(B)$ can be thought of as counting suitably weighted solutions $\mathbf{x}\in \mathbb{Z}^{n}$ of the quadratic equation $Q_{2}(\mathbf{x})=0$, for which $Q_{1}(\mathbf{x})\equiv 0 \bmod{d}$, for varying $d$. We will apply the circle method to detect the single equation $Q_{2}(\mathbf{x})=0$, in the form developed by Heath-Brown \cite{H}, thereby setting the scene for a double Kloosterman refinement by way of Poisson summation. 
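As a brief aside, the classical identity $r(M)=4\sum_{d\mid M}\chi(d)$ invoked above can be checked numerically; the following Python sketch (purely illustrative, and not part of the argument) compares a brute-force count of lattice points on the circle $a^2+b^2=M$ with the divisor sum.

```python
# Numerical sanity check (not part of the argument) of the identity
#   r(M) = 4 * sum_{d | M} chi(d),
# where r(M) counts ordered, signed representations M = a^2 + b^2 and
# chi is the real non-principal character modulo 4.
import math


def chi(d):
    """Real non-principal character modulo 4."""
    if d % 2 == 0:
        return 0
    return 1 if d % 4 == 1 else -1


def r_bruteforce(M):
    """Count pairs (a, b) in Z^2 with a^2 + b^2 = M."""
    count = 0
    for a in range(-math.isqrt(M), math.isqrt(M) + 1):
        b2 = M - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            count += 1 if b == 0 else 2  # b and -b
    return count


def r_divisor_sum(M):
    return 4 * sum(chi(d) for d in range(1, M + 1) if M % d == 0)


# The two expressions agree for every positive integer M (the identity
# holds for all M, although the paper only applies it with M odd).
assert all(r_bruteforce(M) == r_divisor_sum(M) for M in range(1, 201))
```

For instance $r(5)=8$, matching $4(\chi(1)+\chi(5))=8$, while $r(21)=0$ since $\chi(3)=\chi(7)=-1$.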
This approach ought to be compared with joint work of the second author with Iwaniec \cite{IM2}, wherein an upper bound is achieved for the number of integer solutions in a box to the pair of quadratic equations $Q_1(\mathbf x)=\Box$ and $Q_2(\mathbf x)=0$, when $n=4$. In this case a simple upper bound sieve is used to detect the square, which thereby allows the first equation to be exchanged for a suitable family of congruences. Finally we remark that with additional work it would be possible to work with more general quadrics, in which the term $x_{n+1}^2+x_{n+2}^2$ is replaced by an arbitrary positive definite binary quadratic form. The exponential sums that feature in our work take the shape \begin{align}\label{eq:S'} S_{d,q}(\mathbf{m})=\sideset{}{^{*}}\sum_{a\bmod{q}} \sum_{\substack{\mathbf k \bmod{dq}\\ Q_1(\k)\equiv 0\bmod{d}\\Q_2(\k)\equiv 0\bmod{d}}} e_{dq}\left(aQ_2(\k)+\mathbf{m}.\k\right), \end{align} for positive integers $d$ and $q$ and varying $\mathbf{m}\in \mathbb{Z}^{n}$. The notation $\sum^*$ means that the sum is taken over elements coprime to the modulus. We will extend it to summations over vectors in the obvious way. There is a basic multiplicativity relation at work which renders it profitable to consider the cases $d=1$ and $q=1$ separately. In the former case we will need to gain sufficient cancellation in the sums that emerge by investigating the analytic properties of the associated Dirichlet series $$ \xi(s;\mathbf{m})=\sum_{q=1}^{\infty} \frac{S_{1,q}(\mathbf{m})}{q^s}, $$ for $s\in \mathbb{C}$. This is facilitated by the fact that $S_{1,q}(\mathbf{m})$ can be evaluated explicitly using the formulae for quadratic Gauss sums. We will see in \S \ref{sec:qsum} that $\xi(s;\mathbf{m})$ is absolutely convergent for $\Re(s)>\frac{n}{2}+2$. In order to prove Theorem \ref{th1} it is important to establish an analytic continuation of $\xi(s;\mathbf{m})$ to the left of this line. 
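To make the sums $S_{1,q}(\mathbf{m})$ concrete, they can be evaluated by brute force for a toy choice of $Q_2$; the diagonal form of four squares used below is a hypothetical example for illustration, not the general setting of the paper. When $d=1$ the congruence conditions modulo $d$ in \eqref{eq:S'} are vacuous, and for this toy form the computed values match what the quadratic Gauss sum formulae predict, exhibiting the square-root cancellation in $\mathbf{k}$ that underlies the stated region of absolute convergence.

```python
# Brute-force evaluation of S_{1,q}(m).  For d = 1 the congruences mod d
# in the definition of S_{d,q} are vacuous, leaving
#   S_{1,q}(m) = sum_{a mod q, (a,q)=1} sum_{k mod q} e_q(a*Q2(k) + m.k).
# The diagonal form Q2 = x1^2 + ... + x4^2 below is a toy choice, not the
# paper's general non-singular Q2.
import cmath
import itertools
import math


def S1q(q, m, coeffs):
    """S_{1,q}(m) for the diagonal form Q2(x) = sum_i coeffs[i] * x_i^2."""
    total = 0j
    for a in range(1, q + 1):
        if math.gcd(a, q) != 1:
            continue
        for k in itertools.product(range(q), repeat=len(coeffs)):
            phase = a * sum(c * x * x for c, x in zip(coeffs, k))
            phase += sum(mi * ki for mi, ki in zip(m, k))
            total += cmath.exp(2j * cmath.pi * phase / q)
    return total


# Quadratic Gauss sums predict S_{1,3}(0) = 18 and S_{1,5}(0) = 100 for
# this form; the trivial bounds are 2*81 = 162 and 4*625 = 2500, so there
# is square-root cancellation in the k-sum.
print(S1q(3, (0, 0, 0, 0), (1, 1, 1, 1)))  # ~ 18 + 0j
print(S1q(5, (0, 0, 0, 0), (1, 1, 1, 1)))  # ~ 100 + 0j
```

The sums are always real: conjugation corresponds to the substitution $(a,\mathbf{k})\mapsto(-a,-\mathbf{k})$, which permutes the summation ranges.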
This eventually allows us to establish an asymptotic formula for $S(B)$ provided that $n>6$. The situation for $n=6$ is more delicate and we are no longer able to win sufficient cancellation through an analysis of $\xi(s;\mathbf{m})$ alone. In fact it appears desirable to exploit cancellation due to sign changes in the exponential sum $S_{d,1}(\mathbf{m})$. The latter is associated to a pair of quadratic forms, rather than a single form, and this raises significant technical obstacles. We intend to return to this topic in a future publication. With a view to subsequent refinements, much of our argument works in considerably greater generality than is required for the quadratic forms considered in Theorem \ref{th1}. In line with this, unless otherwise indicated, any estimate concerning quadratic forms $Q_1,Q_2\in \mathbb{Z}[x_1,\ldots,x_n]$ is valid for arbitrary forms such that $Q_2$ is non-singular, $n\geq 4$ and the variety $V\subset \mathbb{P}^{n-1}$ defined by $Q_1(\mathbf{x})=Q_2(\mathbf{x})=0$ is a geometrically integral complete intersection. We let $$ \rho(d)=S_{d,1}(\mathbf{0}), $$ in the notation of \eqref{eq:S'}. The Lang--Weil estimate yields $\rho(p)=O(p^{n-2})$ when $d=p$ is a prime, since the affine cone over $V$ has dimension $n-2$. We will need upper bounds for $\rho(d)$ of comparable strength for any $d$. It will be convenient to make the following hypothesis. \begin{hyp} Let $d\in \mathbb{N}$ and $\varepsilon>0$. Then we have $\rho(d)=O(d^{n-2+\varepsilon})$. \end{hyp} Here, as throughout our work, the implied constant is allowed to depend upon the coefficients of the quadratic forms $Q_{1},Q_{2}$ under consideration and the parameter $\varepsilon$. We will further allow all our implied constants to depend on the weight function $W$ in \eqref{eq:main-sum}, with any further dependence being explicitly indicated by appropriate subscripts. We will establish Hypothesis-$\rho$ in Lemma \ref{rho(d)} when $V$ is non-singular, as required for Theorem \ref{th1}.
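Since $\rho(d)$ is an explicit counting function, it can be computed directly for small $d$; the sketch below uses hypothetical toy forms (not those of Theorem \ref{th1}) merely to illustrate the definition and the multiplicativity of $d\mapsto\rho(d)$ guaranteed by the Chinese remainder theorem.

```python
# Direct computation of rho(d) = #{x mod d : Q1(x) = Q2(x) = 0 (mod d)},
# which equals S_{d,1}(0).  The two forms below are hypothetical toy
# choices for illustration only; Hypothesis-rho, i.e.
# rho(d) = O(d^{n-2+eps}), concerns general non-singular V.
import itertools


def Q1(x):
    return x[0] * x[0] + x[1] * x[1] - x[2] * x[2] - 2 * x[3] * x[3]


def Q2(x):
    return x[0] * x[1] - x[2] * x[3]


def rho(d):
    return sum(
        1
        for x in itertools.product(range(d), repeat=4)
        if Q1(x) % d == 0 and Q2(x) % d == 0
    )


# By the Chinese remainder theorem rho is multiplicative in d, which is
# why the analysis can be reduced to prime powers d = p^r.
print(rho(2), rho(3), rho(6), rho(2) * rho(3))
```

This multiplicativity is what reduces the verification of Hypothesis-$\rho$ to the case of prime powers in Lemma \ref{rho(d)}.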
\begin{notat} Throughout our work $\mathbb{N}$ will denote the set of positive integers. The parameter $\varepsilon$ will always denote a small positive real number, which is allowed to take different values at different parts of the argument. We shall use $|\mathbf{x}|$ to denote the norm $\max |x_i|$ of a vector $\mathbf{x}=(x_1,\dots,x_n)\in \mathbb{R}^{n}$. Next, given integers $m$ and $M$, by writing $m\mid M^\infty$ we will mean that any prime divisor of $m$ is also a prime divisor of $M$. Likewise $(m,M^\infty)$ is taken to mean the largest positive divisor $h$ of $m$ for which $h\mid M^\infty$. It will be convenient to record the bound \begin{equation}\label{eq:scat} \#\{m\leq x: m\mid M^\infty\} \leq \sum_{p\mid m \Rightarrow p\mid M} \left(\frac{x}{m}\right)^\varepsilon = x^\varepsilon \prod_{p\mid M}\left(1-p^{-\varepsilon}\right)^{-1} \ll (x|M|)^\varepsilon, \end{equation} for any $x\geq 1$, a fact that we shall make frequent use of in our work. Finally we will write $e(x)=\exp(2\pi ix)$ and $e_q(x)=\exp(\frac{2\pi ix}{q})$. \end{notat} \begin{ack} Some of this work was done while the authors were both visiting the {\em Institute for Advanced Study} in Princeton, the hospitality and financial support of which is gratefully acknowledged. While working on this paper the first author was supported by EPSRC grant number \texttt{EP/E053262/1}. The authors are very grateful to the anonymous referee for numerous helpful comments and for drawing our attention to an error in the original treatment of Lemma \ref{lem:technical}. \end{ack} \section{Auxiliary estimates} \subsection{Linear congruences} \label{s:congruences} Let $q\in \mathbb{N}$. For $n\times n$ matrices $\mathbf{M}$, with coefficients in $\mathbb{Z}$, and a vector $\a\in\mathbb{Z}^n$ we will often be led to consider the cardinality \begin{equation}\label{eq:2.1} K_{q}(\mathbf{M};\a)=\#\{\mathbf{x}\bmod{q}: \mathbf{M}\mathbf{x}\equiv \mathbf{a} \bmod{q}\}. 
\end{equation} The Chinese remainder theorem implies that $K_q(\mathbf{M};\a)$ is a multiplicative function of $q$, rendering it sufficient to conduct our analysis at prime powers $q=p^r$. We will need the following basic upper bound. \begin{lemma}\label{lem:smith} Assume that $\mathbf{M}$ has rank $\rho$ and let $\delta_p$ be the minimum of the $p$-adic orders of the $\rho \times \rho$ non-singular submatrices of $\mathbf{M}$. Then we have $$ K_{p^r}(\mathbf{M};\a)\leq \min\{ p^{nr}, p^{(n-\rho)r+\delta_p}\}. $$ In particular $K_{p^r}(\mathbf{M};\a)=O_{\mathbf{M}}(1)$ if $\rho=n$. \end{lemma} This is established by Loxton \cite[Proposition 7]{loxton}, but is also a trivial consequence of earlier work of Smith \cite{smith}, which provides a precise equality for $K_{p^r}(\mathbf{M};\a)$. We present a proof of Lemma \ref{lem:smith}, for completeness, the upper bound $K_{p^r}(\mathbf{M};\a)\leq p^{nr}$ being trivial. Given $\mathbf{M}$ as in the statement of the lemma, it follows from the theory of the Smith normal form that there exist unimodular integer matrices $\mathbf{A},\mathbf{B}$ such that $$ \mathbf{A}\mathbf{M}\mathbf{B}=\diag(M_1,\ldots,M_n), $$ with $M_1,\ldots,M_n \in \mathbb{Z}$ satisfying $M_i\mid M_{i+1}$, for $1\leq i< n$. In particular, since $\mathbf{M}$ has rank $\rho$, it follows that $M_i=0$ for $i>\rho$. Hence \begin{align*} K_{p^r}(\mathbf{M};\a) &\leq \#\{\mathbf{x}\bmod{p^r}: M_ix_i\equiv (\mathbf{A}\mathbf{a})_i \bmod{p^r}, ~(1\leq i \leq \rho)\}\\ &\leq p^{(n-\rho)r+v_p(M_1)+\cdots +v_p(M_\rho)}. \end{align*} This completes the proof of Lemma \ref{lem:smith}, since $\delta_p= v_p(M_1)+\cdots +v_p(M_\rho)$. We end this section by drawing a conclusion about the special case that $\mathbf{M}$ is non-singular, with $\rho=n$. Suppose that there exists a vector $\mathbf{x}$ counted by $K_{p^r}(\mathbf{M};\mathbf{0})$, but satisfying $p\nmid \mathbf{x}$.
Then it follows from our passage to the Smith normal form that in fact $r\leq v_p(\det \mathbf{M})$. \subsection{Geometry of $V$} \label{geometry} In this section we consider the geometry of the varieties $V\subset \mathbb{P}^{n-1} $ defined by the common zero locus of two quadratic forms $Q_1,Q_2\in \mathbb{Z}[x_1,\dots,x_n]$, specifically in the case that $V$ is non-singular. Suppose that $Q_i$ has underlying symmetric matrix $\mathbf{M}_i$, with $\mathbf{M}_{2}$ non-singular. Let $D=D(Q_1,Q_2)$ be the discriminant of the pair $\{Q_1,Q_2\}$, which is a non-zero integer by assumption. According to Gelfand, Kapranov and Zelevinsky \cite[\S 13]{GKZ}, $D$ has total degree $(n+2)2^{n+1}$ in the coefficients of $Q_1,Q_2$ and is equal to the discriminant of the bihomogeneous polynomial $$ F(\b,\mathbf{x})=b_1Q_1(\mathbf{x})+b_2Q_2(\mathbf{x}). $$ We write \begin{equation} \label{eq:Mc} \mathbf{M}(\b)=b_1\mathbf{M}_1+b_2\mathbf{M}_2, \end{equation} for the underlying symmetric matrix. It follows from \cite[Lemma 1.13]{CT} that \begin{equation} \label{eq:rank} \rank \mathbf{M}(\b) \geq n-1 \end{equation} for any $[\b]\in \mathbb{P}^1$. Furthermore, Reid's thesis \cite{reid} shows that the binary form $P(\b)=\det \mathbf{M}(\b)$ has non-zero discriminant. An important r\^ole in our work will be played by the dual variety $V^*\subset {\mathbb{P}^{n-1}}^*\cong \mathbb{P}^{n-1}$ of $V$. Consider the incidence relation $$ I=\{(x,H)\in V\times {\mathbb{P}^{n-1}}^*: H \supseteq \mathbb{T}_x(V)\}, $$ where $\mathbb{T}_x(V)$ denotes the tangent hyperplane to $V$ at $x$. The projection $\pi_1: I \rightarrow V$ makes $I$ into a bundle over $V$ whose fibres are subspaces of dimension $n - \dim V - 2=1$. In particular $I$ is an irreducible variety of dimension $n - 2$. Since $V^*$ is defined to be the image of the projection $\pi_2:I \rightarrow {\mathbb{P}^{n-1}}^*$, it therefore follows that the dual variety $V^*$ is irreducible. 
Furthermore, since $I$ has dimension $n-2$ one might expect that $V^*$ is a hypersurface in ${\mathbb{P}^{n-1}}^*$. This fact, which is valid for any irreducible non-linear complete intersection, is established by Ein \cite[Proposition~3.1]{ein}. Elimination theory shows that the defining homogeneous polynomial may be taken to have coefficients in $\mathbb{Z}$. Finally, by work of Aznar \cite[Theorem~3]{aznar}, the degree of $V^*$ is $4(n-2)$. Hence $V^*$ is defined by an equation $G=0$, where $ G\in \mathbb{Z}[x_1,\dots,x_n] $ is an absolutely irreducible form of degree $4(n-2)$. Given a prime $p$, which is sufficiently large in terms of the coefficients of $V$, the reduction of $V$ modulo $p$ will inherit many of the basic properties enjoyed by $V$ as a variety over $\mathbb{Q}$. In particular it will continue to be a non-singular complete intersection of codimension $2$, satisfying the property that \eqref{eq:rank} holds for any $[\b]\in \mathbb{P}^1$, where now $\mathbf{M}_i$ is taken to be the matrix obtained after reduction modulo $p$ of the entries. Furthermore we may assume that $p \nmid 2\det \mathbf{M}_2$ and that the discriminant of the polynomial $P(\b)$ does not vanish modulo $p$. We will henceforth set $$ \Delta_V =O(1) $$ to be the product of all primes for which any one of these properties fails at that prime. \subsection{The function $\rho(d)$} In this section we establish Hypothesis-$\rho$ when $V$ is non-singular, where $\rho(d)=S_{d,1}(\mathbf{0})$, in the notation of \eqref{eq:S'}. Note that $\rho^*(d)\leq \rho(d)$, where $$ \rho^*(d)=\#\{\mathbf{x}\bmod{d}: (d,\mathbf{x})=1, ~Q_1(\mathbf{x})\equiv Q_2(\mathbf{x})\equiv 0 \bmod{d}\}. $$ We proceed to establish the following result. \begin{lemma} \label{rho(d)} Hypothesis-$\rho$ holds if $V$ is non-singular. \end{lemma} \begin{proof} We adapt an argument of Hooley \cite[\S 10]{nonary} used to handle the analogous situation for cubic hypersurfaces. 
By multiplicativity it suffices to examine the case $d=p^r$ for a prime $p$ and $r\in \mathbb{N}$. Extracting common factors between $\mathbf{x}$ and $p^r$, we see that \begin{equation}\label{m:1} \rho(p^r)= \sum_{0\leq k <\frac{r}{2}} p^{kn} \rho^*(p^{r-2k}) + p^{ (r-\lceil \frac{r}{2}\rceil)n }. \end{equation} Using additive characters to detect the congruences gives \begin{align*} \rho^*(p^s) &=\frac{1}{p^{2s}} \sum_{\b \bmod{p^s}} ~~ \sideset{}{^{*}} \sum_{\substack{\mathbf x \bmod{p^s}}} e_{p^s}\left(b_1Q_1(\mathbf{x})+b_2Q_2(\mathbf{x})\right), \end{align*} where we recall that the notation $\sum^*$ means only $\mathbf{x}$ for which $p\nmid \mathbf{x}$ are of interest. Extracting common factors between $p^s$ and $\b$ yields \begin{align*} \rho^*(p^s) &=\frac{1}{p^{2s}} \sum_{0\leq i<s} p^{in} S(s-i) +p^{(n-2)s}\left(1-\frac{1}{p^n}\right), \end{align*} with $$ S(k)= \sideset{}{^{*}} \sum_{\b \bmod{p^k}} ~~ \sideset{}{^{*}} \sum_{\substack{\mathbf x \bmod{p^k}}} e_{p^k}\left(F(\b,\mathbf{x})\right), $$ where $F(\b,\mathbf{x})=b_1Q_1(\mathbf{x})+b_2Q_2(\mathbf{x})$. We claim that $S(k)=O(1)$, for any $k\in \mathbb{N}$. Once achieved, this implies that $\rho^*(p^s)=O(p^{(n-2)s})$. Inserting this into \eqref{m:1} gives $\rho(p^r)=O(p^{(n-2)r})$, which suffices for the lemma. To analyse $S(k)$ we introduce a dummy sum over $a\in (\mathbb{Z}/p^k\mathbb{Z})^*$ and replace $\b$ by $a\b$ to get \begin{align*} \phi(p^k)S(k)&= \sideset{}{^{*}} \sum_{a \bmod{p^k}} ~~ \sideset{}{^{*}} \sum_{\b \bmod{p^k}} ~~ \sideset{}{^{*}} \sum_{\substack{\mathbf x \bmod{p^k}}} e_{p^k}\left(aF(\b,\mathbf{x})\right). \end{align*} Evaluating the resulting Ramanujan sum yields \begin{equation}\label{m:3} S(k)=\left(1-\frac{1}{p}\right)^{-1} \left\{ N(p^k)-p^{n+1}N(p^{k-1}) \right\}, \end{equation} where $N(p^k)$ is the number of $(\b,\mathbf{x})\bmod{p^k}$, with $p\nmid \b$ and $p\nmid \mathbf{x}$, for which $p^k\mid F(\b,\mathbf{x})$.
We are therefore led to compare $N(p^k)$ with $N(p^{k-1})$, using an approach based on Hensel's lemma. Let $\nabla F(\b,\mathbf{x})=(Q_1(\mathbf{x}),Q_2(\mathbf{x}),b_1\nabla_\mathbf{x} Q_1(\mathbf{x})+b_2\nabla_\mathbf{x} Q_2(\mathbf{x}) )$, where $\nabla_\mathbf{x}$ means that the partial derivatives are taken with respect to the $\mathbf{x}$ variables. Using our alternative definition of the discriminant $D$ as the discriminant of $F$, we may view $D$ as the resultant of the $n+2$ quadratic forms appearing in $\nabla F(\b,\mathbf{x})$. Writing $\mathbf{y}=(\b,\mathbf{x})$, elimination theory therefore produces $n+2$ identities of the form $$ Dy_i^N = \sum_{1\leq j\leq n+2} G_{ij}(\mathbf{y}) \frac{\partial F}{\partial y_j}, \quad (1\leq i\leq n+2), $$ where $G_{ij}$ are polynomials with coefficients in $\mathbb{Z}$. In particular, if $(\b,\mathbf{x})\in\mathbb{Z}^{n+2}$ satisfies $p^m \mid \nabla F(\b,\mathbf{x})$, but $p\nmid \b$ and $p\nmid \mathbf{x}$, it follows that $m\leq v_p(D)$. Let us put $\delta=v_p(D)$. If $k\leq 2\delta+1$ then it trivially follows from \eqref{m:3} that $S(k)=O(1)$. If $k\geq 2\delta +2$, which we assume for the remainder of the argument, we will show that $S(k)=0$. Our work so far has shown that $$ N(p^k)=\sum_{0\leq m\leq \delta} \#C_m(p^k), $$ where $C_m(p^k)$ denotes the set of $\mathbf{y}=(\b,\mathbf{x})\bmod{p^k}$, with $p\nmid \b$ and $p\nmid \mathbf{x}$, for which $p^k\mid F(\mathbf{y})$ and $p^m \| \nabla F(\mathbf{y})$. Given any $\mathbf{y}\in C_m(p^k)$ it is easy to see that \begin{align*} F(\mathbf{y}+p^{k-m}\mathbf{y}') &\equiv F(\mathbf{y})+p^{k-m} \mathbf{y}'. \nabla F(\mathbf{y}) \bmod{p^k}\\ &\equiv 0 \bmod{p^k}, \end{align*} for any $\mathbf{y}'\in\mathbb{Z}^{n+2}$, and that \begin{align*} \nabla F(\mathbf{y}+p^{k-m}\mathbf{y}') -\nabla F(\mathbf{y}) &\equiv 0 \bmod{p^{k-m}}\\ &\equiv 0 \bmod{p^{m+1}}. \end{align*} Thus $C_m(p^k)$ consists of cosets modulo $p^{k-m}$.
Moreover, $\mathbf{y}+p^{k-m}\mathbf{y}'\in C_m(p^{k+1})$ if and only if $$ p^{-k}F(\mathbf{y})+p^{-m}\mathbf{y}'. \nabla F(\mathbf{y})\equiv 0 \bmod{p}, $$ for which there are precisely $p^{n+1}$ incongruent solutions modulo $p$. Hence $\#C_m(p^{k+1})=p^{n+1} \#C_m(p^k)$, which therefore shows that $S(k)=0$ in \eqref{m:3}. This completes the proof of the lemma. \end{proof} \subsection{Treatment of bad $d$} Returning briefly to $S(B)$ in \eqref{eq:main-sum}, we will need a separate argument to deal with the contribution from $\mathbf{x}$ for which $Q_2(\mathbf{x})=0$ and $Q_1(\mathbf{x})$ is divisible by large values of $d$ which share a common prime factor with $\Delta_V$. To begin with we call upon joint work of the first author with Heath-Brown and Salberger \cite{bhbs}, which is concerned with uniform upper bounds for counting functions of the shape $$ M(f;B)=\#\{\t\in\mathbb{Z}^\nu: |\t|\leq B, ~ f(\t)=0\}, $$ for polynomials $f\in\mathbb{Z}[t_1,\ldots,t_\nu]$ of degree $\delta\geq 2$. Although the paper focuses on the situation for $\delta\geq 3$, the methods developed also permit a useful estimate in the case $\delta=2$. Suppose that $\nu=3$ and that the quadratic homogeneous part $f_0$ of $f$ is absolutely irreducible. Using \cite[Lemmas 6 and 7]{bhbs} we can find a linear form $L\in \mathbb{Z}[t_1,t_2,t_3]$ of height $O(1)$ such that the intersection of the projective plane curves $f_0=0$ and $L=0$ consists of two distinct points. After eliminating one of the variables, we are then free to apply \cite[Lemma 13]{bhbs} to all the affine curves defined by $f=0$ and $L=c$, for each integer $c\ll B$. This gives the upper bound $M(f;B)\ll B^{1+\varepsilon}$ when $\nu=3$. According to \cite[Lemma 8]{bhbs}, we have therefore established the following result, which may be of independent interest. 
\begin{lemma} \label{m:5} Let $\varepsilon>0$, let $\nu\geq 3$ and let $f\in\mathbb{Z}[t_1,\ldots,t_\nu]$ be a quadratic polynomial with absolutely irreducible quadratic homogeneous part. Then we have $$M(f;B)\ll B^{\nu-2+\varepsilon}. $$ The implied constant in this estimate depends at most on $\nu$ and the choice of $\varepsilon$. \end{lemma} We shall also require some facts about lattices and their successive minima, as established by Davenport \cite[Lemma 5]{Dav}. Suppose that $\Lambda\subset \mathbb{Z}^n$ is a lattice of rank $r$ and determinant $\det(\Lambda)$. Then there exists a minimal basis $\mathbf{m}_1,\ldots,\mathbf{m}_r$ of $\Lambda$ such that $|\mathbf{m}_i|$ is equal to the $i$th successive minimum $s_i$, for $1 \leq i \leq r$, with the property that whenever one writes $\mathbf{y}\in\Lambda$ as $$ \mathbf{y}=\sum_{i=1}^{r}\lambda_{i}\mathbf{m}_i, $$ then $\lambda_{i}\ll s_i^{-1}|\mathbf{y}|$, for $1 \leq i \leq r.$ Furthermore, $$ \prod_{i=1}^r s_i \ll \det \Lambda \le \prod_{i=1}^r s_i, $$ and $1\leq s_1\leq \cdots \leq s_r$. We now come to the key technical estimate in this section. Given any $d\in \mathbb{N}$ and $B\geq 1$, we will need an auxiliary upper bound for the quantity \begin{equation}\label{eq:def-Ne} N_d(B)=\#\{ \mathbf{x}\in\mathbb{Z}^n: |\mathbf{x}|\leq B, ~d\mid Q_1(\mathbf{x}), ~ Q_2(\mathbf{x})=0 \}. \end{equation} Simple heuristics suggest that $N_d(B)$ should have order $d^{-1}B^{n-2}$. For our purposes we require an upper bound in which any power of $d$ is saved. \begin{lemma} \label{lem:m:4} Let $\varepsilon>0, d\in \mathbb{N}$ and $n\geq 5$. Assume $B\geq d$ and Hypothesis-$\rho$. Then we have $$ N_d(B)\ll \frac{B^{n-2+\varepsilon}}{d^{\frac{1}{n}}} +dB^{n-3+\varepsilon}. $$ \end{lemma} Note that this estimate is valid for any quadratic forms $Q_1,Q_2$ for which $Q_2$ is non-singular and the expected bound for $\rho(d)$ holds. For our purposes the desired bound follows from Lemma \ref{rho(d)} when $V$ is non-singular.
\begin{proof}[Proof of Lemma \ref{lem:m:4}] On extracting common factors between $\mathbf{x}$ and $d$ in $N_d(B)$, one quickly verifies that it suffices to prove the upper bound in the lemma for the quantity $N_d^*(B)$, in which the additional constraint $(d,\mathbf{x})=1$ is added. Breaking into residue classes modulo $d$, we see that \begin{equation}\label{m:6} N_d^*(B)= \sideset{}{^{*}} \sum_{\substack{\boldsymbol{\xi}\bmod{d}\\ Q_1(\boldsymbol{\xi})\equiv 0 \bmod{d} \\ Q_2(\boldsymbol{\xi})\equiv 0 \bmod{d} }} \#\{ \mathbf{x}\in\mathbb{Z}^n: |\mathbf{x}|\leq B, ~\mathbf{x}\equiv \boldsymbol{\xi} \bmod{d}, ~ Q_2(\mathbf{x})=0 \}. \end{equation} Let us denote the set whose cardinality appears in the inner sum by $S_d(B;\boldsymbol{\xi})$. If $S_d(B;\boldsymbol{\xi})=\emptyset$ then there is nothing to prove. Alternatively, suppose we are given $\mathbf{x}_0\in S_d(B;\boldsymbol{\xi})$. Then any other vector in the set must be congruent to $\mathbf{x}_0$ modulo $d$. Making the change of variables $\mathbf{x}=\mathbf{x}_0+d\mathbf{y}$ in $S_d(B;\boldsymbol{\xi})$, we note that $|\mathbf{y}|<Y$, with $Y=2d^{-1}B$. Furthermore, Taylor's formula yields \begin{equation}\label{m:4} \mathbf{y}.\nabla Q_2(\mathbf{x}_0) +dQ_2(\mathbf{y})=0, \end{equation} since $Q_2(\mathbf{x}_0+d\mathbf{y})=0$ and $Q_2(\mathbf{x}_0)=0$. This equation implies that the $\mathbf{y}$ under consideration are forced to satisfy the congruence $\mathbf{y}.\nabla Q_2(\boldsymbol{\xi})\equiv 0\bmod{d}$, since $\mathbf{x}_0\equiv \boldsymbol{\xi} \bmod{d}$. Let us write $\a=\nabla Q_2(\boldsymbol{\xi})$. Then it follows that $$ \#S_d(B;\boldsymbol{\xi})\leq 1+ \#\{ \mathbf{y}\in\Lambda_\a : |\mathbf{y}|<Y , ~\mbox{\eqref{m:4} holds}\}, $$ where $\Lambda_\a=\{\mathbf{y}\in \mathbb{Z}^n: \a.\mathbf{y}\equiv 0 \bmod{d}\}$. This set defines an integer lattice of full rank and determinant $$ \det \Lambda_\a = \frac{d}{(d,\a)}.
$$ The conditions of summation in \eqref{m:6} demand that $(d,\boldsymbol{\xi})=1$. It therefore follows from the remark at the end of \S \ref{s:congruences} that $p^j\ll 1$, whenever $j\in \mathbb{N}$ and $p$ is a prime for which $p^j\mid (d,\nabla Q_2(\boldsymbol{\xi}))$. Thus $(d,\a)\ll 1$ and it follows that $\det \Lambda_\a \gg d. $ Let $\mathbf{M}$ denote the non-singular matrix formed from taking a minimal basis $\mathbf{m}_1,\ldots,\mathbf{m}_n$ for $\Lambda_\a$. Making the change of variables $\mathbf{y}=\mathbf{M}\boldsymbol{\lambda}$, and recalling the properties of the minimal basis recorded above, we see that $$ \#S_d(B;\boldsymbol{\xi})\leq 1+ \#\{ \boldsymbol{\lambda}\in\mathbb{Z}^n : \mbox{$\lambda_i\ll s_i^{-1}Y$ for $1\leq i\leq n$} ,~ q(\boldsymbol{\lambda})=0\}, $$ where $s_1,\ldots,s_n$ are the successive minima of $\Lambda_\a$ and $q(\boldsymbol{\lambda})$ is obtained from \eqref{m:4} via substitution. In particular, it is clear that the quadratic homogeneous part $q_0$ of $q$ has underlying matrix $\mathbf{M}^T \mathbf{M}_2 \mathbf{M}$, which is non-singular. We are therefore left with the task of counting integer solutions to a quadratic equation, which are constrained to lie in a lop-sided region. Furthermore, since we require complete uniformity in $d$, we want an upper bound in which the implied constant does not depend on the coefficients of $q$. It being difficult to handle a genuinely lopsided region, we will simply fix the smallest variable and then allow the remaining vectors $\boldsymbol{\lambda}'=(\lambda_1,\ldots,\lambda_{n-1})$ to run over the full hypercube with side lengths $O(Y)$. In this way we find that $$ \#S_d(B;\boldsymbol{\xi})\leq 1+ \sum_{ \substack{ t\ll s_n^{-1}Y } } \#\{ \boldsymbol{\lambda}'\in\mathbb{Z}^{n-1} : |\boldsymbol{\lambda}' |\ll Y, ~ q(\boldsymbol{\lambda}',t)=0\}. 
$$ Viewed as a polynomial in $ \boldsymbol{\lambda}'$, the quadratic homogeneous part of $q(\boldsymbol{\lambda}',t)$ is equal to $q_0(\boldsymbol{\lambda}',0)$. This must have rank at least $n-2\geq 3$, since $q_0$ is non-singular and its rank cannot decrease by more than $2$ on any hyperplane. In particular, $q_0(\boldsymbol{\lambda}',0)$ is absolutely irreducible. We apply Lemma \ref{m:5} with $\nu=n-1$ and $f=q(\boldsymbol{\lambda}',t)$ to get $$ \#S_d(B;\boldsymbol{\xi})\ll Y^{n-3+\varepsilon}\left(1+ \frac{Y}{s_n}\right). $$ Now it follows from the general properties of the successive minima recorded above that $s_n\geq (\det \Lambda_\a)^{\frac{1}{n}}\gg d^{\frac{1}{n}}$. Recalling that $Y=2d^{-1}B$ and inserting this into \eqref{m:6}, we conclude that $$ N_d^*(B)\ll \rho(d)\left(\frac{B}{d}\right)^{n-3+\varepsilon} \left(1+ \frac{B}{d^{1+\frac{1}{n}}}\right). $$ The conclusion of the lemma therefore follows from Hypothesis-$\rho$. \end{proof} \section{Preliminary transformation of $S(B)$} \label{prelim} In this section we initiate our analysis of $S(B)$ in \eqref{eq:main-sum}. For any odd integer $M$ it is clear that $r(M)=0$ unless $M\equiv 1 \bmod{4}$. Hence our sum can be written $$ S(B)= \sum_{\substack{\mathbf x \in \mathbb Z^n\\ Q_1(\mathbf{x})\equiv 1 \bmod{4} \\ Q_2(\mathbf{x})=0}}r(Q_1(\mathbf x)) W\left(\frac{\mathbf x}{B}\right). $$ We proceed to open up the $r$-function in the summand. Let $\{V_T(t)\}_{T}$ be a collection of smooth functions, with $V_T$ supported in the dyadic block $[T,2T]$, such that $\sum_TV_T(t)=1$ for $t\in[1,CB^2]$. The constant $C$ will be large enough depending on $Q_1$ and $W$, so that $|Q_1(\mathbf x)|\leq C$ whenever $\mathbf x\in \supp(W)$. We will neither specify the function $V_T$ nor the indexing set for $T$. However we will simply note that $T$ can be restricted to lie in the interval $[\frac{1}{2},2CB^2]$, and that there are $O(\log B)$ many functions in the collection. 
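Such a collection $\{V_T\}$ can be constructed explicitly by telescoping a smooth cutoff. The following numerical sketch is purely illustrative and plays no role in the argument: for simplicity it takes supports in the slightly wider dyadic blocks $[T,4T]$, which affects nothing but implied constants.

```python
import math

def f(x):
    # standard smooth bump ingredient: exp(-1/x) for x > 0, and 0 otherwise
    return math.exp(-1.0 / x) if x > 0 else 0.0

def smooth_step(x):
    # smooth transition: 0 for x <= 0, 1 for x >= 1
    return f(x) / (f(x) + f(1.0 - x))

def phi(t):
    # smooth cutoff: 1 on t <= 1, 0 on t >= 2
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    return smooth_step(2.0 - t)

def V(T, t):
    # V_T supported in the dyadic block [T, 4T], with T = 2^j
    return phi(t / (2.0 * T)) - phi(t / T)

def partition_sum(t, jmax=40):
    # telescoping sum over the dyadic scales T = 2^j, j >= -1
    return sum(V(2.0 ** j, t) for j in range(-1, jmax + 1))
```

The telescoping structure $V_{2^j}(t)=\phi(t/2^{j+1})-\phi(t/2^j)$ makes the identity $\sum_T V_T(t)=1$ immediate on the stated range.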
Moreover we will stipulate that $$ t^jV^{(j)}_T(t)\ll_j 1, $$ for each integer $j\geq 0$. For a positive integer $M\leq CB^2$ we may write \begin{align*} r(M)=4\sum_T\sum_{d\mid M}\chi(d)V_T(d). \end{align*} It follows that $$ S(B)=4\sum_T\sum_{d}\chi(d)V_T(d) \sum_{\substack{\mathbf x \in \mathbb Z^n\\ Q_1(\mathbf{x})\equiv 1\bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{d}\\Q_2(\mathbf{x})=0}} W\left(\frac{\mathbf x}{B}\right)=4\sum_T S_T(B), $$ say. Let $\a\in \left(\mathbb Z/4\mathbb Z\right)^n$ be such that $Q_1(\a)\equiv 1 \bmod{4}$, and let $S_{T,\a}(B)$ be the part of $S_T(B)$ which comes from $\mathbf{x}\equiv \a \bmod{4}$. In the analysis of $S_{T,\a}(B)$ we want to arrange things so that only values of $d$ satisfying $d\ll B$ occur. When $T\leq B$ this is guaranteed by the presence of the factor $V_T(d)$. When $T>B$ we can use Dirichlet's hyperbola trick, since $\chi(Q_1(\mathbf{x}))=\chi(Q_1(\a))=1$, to get \begin{align*} S_{T,\a}(B)=\sum_{d}\chi(d) \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{d}\\Q_2(\mathbf{x})=0}} W\left(\frac{\mathbf x}{B}\right)V_T\left(\frac{Q_1(\mathbf{x})}{d}\right). \end{align*} In this case too we therefore have $d\ll B$. For notational simplicity we write \begin{align} \label{eq:W_d} W_d\left(\mathbf y\right)= \begin{cases} W\left(\mathbf y\right)V_T(d), &\mbox{if $T\leq B$,}\\ W\left(\mathbf y\right)V_T\left(\frac{B^{2}Q_1(\mathbf{y})}{d}\right), &\mbox{otherwise}. \end{cases} \end{align} Here $W: \mathbb{R}^{n}\rightarrow \mathbb{R}_{\geq 0}$ is an infinitely differentiable bounded function of compact support such that $Q_1(\mathbf{x})\gg 1$ and $\nabla Q_1(\mathbf{x})\gg 1$, for some absolute implied constant, for every $\mathbf{x} \in \supp(W)$. As already indicated, the exponential sums \eqref{eq:S'} will be prominent in our work. 
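The identity $r(M)=4\sum_{d\mid M}\chi(d)$ opened up above is Jacobi's classical formula. As a numerical sanity check, entirely outside the argument, one can compare it with a direct count of representations; here it is assumed, as is implicit in our setting, that $r$ denotes the number of representations as a sum of two squares and $\chi$ the non-principal character modulo $4$.

```python
def chi(d):
    # non-principal Dirichlet character modulo 4
    return {1: 1, 3: -1}.get(d % 4, 0)

def r_two_squares(M):
    # number of representations M = a^2 + b^2 with (a, b) in Z^2
    count = 0
    a = 0
    while a * a <= M:
        b2 = M - a * a
        b = int(b2 ** 0.5 + 0.5)
        if b * b == b2:
            # account for the signs of a and b
            count += (1 if a == 0 else 2) * (1 if b == 0 else 2)
        a += 1
    return count

def r_via_divisors(M):
    # Jacobi's formula r(M) = 4 * sum_{d | M} chi(d)
    return 4 * sum(chi(d) for d in range(1, M + 1) if M % d == 0)
```

For odd $M$ the divisor sum vanishes unless $M\equiv 1\bmod 4$, in accordance with the observation made at the start of this section.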
We will face significant technical issues in dealing with large values of $d$ in $S_{T,\a}(B)$ which share prime factors with the constant $\Delta_V$ that was introduced at the close of \S \ref{geometry}. The following expression for $S(B)$ is now available. \begin{lemma} \label{S(B)} Let $\Xi$ be a parameter satisfying $1\leq \Xi\leq B$. Then we have $$ S(B)= 4\sum_T \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \left(S_{T,\a}^\flat(B)+ S_{T,\a}^\sharp(B)\right), $$ with \begin{align*} S_{T,\a}^\flat(B) &=\sum_{\substack{d=1\\ (d,\Delta_V^\infty)>\Xi}}^\infty \chi(d) \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{d}\\Q_2(\mathbf{x})=0}} W_d\left(\frac{\mathbf x}{B}\right),\\ S_{T,\a}^\sharp(B) &= \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \chi(d) \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{d}\\Q_2(\mathbf{x})=0}} W_d\left(\frac{\mathbf x}{B}\right). \end{align*} \end{lemma} We will provide an upper bound for $S_{T,\a}^\flat(B)$ and an asymptotic formula for $S_{T,\a}^\sharp(B)$, always assuming that $\Xi$ satisfies $1\leq \Xi\leq B$. The following result deals with the first task. \begin{lemma}\label{lem:flat} Let $\varepsilon>0$ and assume Hypothesis-$\rho$. Then we have $$S_{T,\a}^\flat(B)\ll \Xi^{-\frac{1}{n}}B^{n-2+\varepsilon}+ \Xi B^{n-3+\varepsilon}. $$ \end{lemma} \begin{proof} Write $e=(d,\Delta_V^\infty)$. Then \begin{align*} |S_{T,\a}^\flat(B)| &\leq \sum_{\substack{e\mid \Delta_V^\infty\\ e>\Xi}} \sum_{\substack{d=1}}^\infty \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{de}\\Q_2(\mathbf{x})=0}} W_{de}\left(\frac{\mathbf x}{B}\right). \end{align*} By the properties of \eqref{eq:W_d}, only $d,e$ satisfying $de\ll B$ feature here. 
Interchanging the sums over $d$ and $\mathbf{x}$, we obtain \begin{align*} S_{T,\a}^\flat(B) &\ll \sum_{\substack{e\mid \Delta_V^\infty\\ \Xi<e\ll B}} \sum_{\substack{ |\mathbf{x}|\ll B\\ \mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0 \bmod{e}\\Q_2(\mathbf{x})=0}} \tau\left(\frac{Q_1(\mathbf{x})}{e}\right), \end{align*} where $\tau$ is the divisor function. Note that $Q_1(\mathbf{x})\neq 0$, since $Q_1(\mathbf{x})\equiv Q_1(\a)\equiv 1 \bmod{4}$, so that the inner summand is $O(B^\varepsilon)$ by the trivial estimate for $\tau$. Hence we have \begin{equation}\label{eq:train} S_{T,\a}^\flat(B) \ll B^\varepsilon \sum_{\substack{e\mid \Delta_V^\infty\\ \Xi<e\leq cB}} N_e(cB), \end{equation} for an absolute constant $c>0$, in the notation of \eqref{eq:def-Ne}. We will make crucial use of the monotonicity property $ N_e(cB)\leq N_d(cB) $ for $d\mid e$. Suppose that we have a factorisation $\Delta_V=\prod_{i=1}^{t} p_i$. For $\mathbf{n}\in \mathbb{Z}_{\geq 0}^t$, let $\mathbf{p}^{\mathbf{n}}=\prod_{i=1}^t p_i^{n_i}$. Consider a collection of integers $\mathcal B=\{\mathbf{p}^{\mathbf{n}}: \mathbf{n}\in \mathbb{Z}_{\geq 0}^t\}$ and set $ \mathcal B(A_1,A_2)=\mathcal B\cap(A_1,A_2]. $ It follows from \eqref{eq:scat} that $\mathcal{B}$ contains $O(B^\varepsilon)$ elements of size at most $B$. In this new notation the sum in \eqref{eq:train} is over $e\in \mathcal B(\Xi,cB)$. We claim that \begin{align*} S_{T,\a}^\flat(B) &\ll B^\varepsilon \sum_{e\in \mathcal B( \Xi,\Delta_V \Xi)} N_e(cB). \end{align*} Once achieved, the statement of the lemma will then follow from Lemma \ref{lem:m:4}. By the monotonicity property, in order to establish the claim it will suffice to show that every $e \in \mathcal B(\Xi, cB)$ has a divisor $e'\mid e$, with $e' \in \mathcal B(\Xi, \Delta_V \Xi)$. To see this we suppose that $e=\mathbf{p}^{\mathbf{n}}$ and consider the decreasing sequence of divisors of $e$ obtained by successively removing one prime factor at a time.
This sequence ends at $1$, and the ratio between any two consecutive members is bounded by $\Delta_V$ . Thus one of the divisors must lie in the range $(\Xi, \Delta_V \Xi]$, as required. This completes the proof of the lemma. \end{proof} Turning to $S_{T,\a}^\sharp(B)$, we now need a means of detecting the equation $Q_2(\mathbf{x})=0$. For any integer $M$ let $$ \delta(M)=\begin{cases} 1,& \mbox{if $M=0$,}\\ 0,&\mbox{otherwise}. \end{cases} $$ Our primary tool in this endeavour will be a version of the circle method developed by Heath-Brown \cite{H}, based on work of Duke, Friedlander and Iwaniec \cite{DFI}. The starting point for this is the following smooth approximation of $\delta$. \begin{lemma} \label{dfi} For any $Q>1$ there is a positive constant $c_Q$, and a smooth function $h(x,y)$ defined on $(0,\infty)\times\mathbb R$, such that \begin{align*} \delta(M)=\frac{c_Q}{Q^2}\sum_{q=1}^{\infty}\;\sideset{}{^{*}}\sum_{a \bmod{q}}e_q(aM)h\left(\frac{q}{Q},\frac{M} {Q^2}\right). \end{align*} The constant $c_Q$ satisfies $c_Q=1+O_N(Q^{-N})$ for any $N>0$. Moreover $h(x,y)\ll x^{-1}$ for all $y$, and $h(x,y)$ is non-zero only for $x\leq\max\{1,2|y|\}$. \end{lemma} In practice, to detect the equation $M=0$ for a sequence of integers in the range $|M|<N/2$, it is logical to choose $Q=N^{\frac{1}{2}}$. We will use the above lemma to detect the equality $Q_2(\mathbf{x})=0$ in $S_{T,\a}^\sharp(B)$. Since we already have the modulus $d$ in the sum over $\mathbf{x}$ it is reasonable to use this modulus to reduce the size of the parameter $Q$. Thus we replace the equality $Q_2(\mathbf{x})=0$ by the congruence $Q_2(\mathbf{x})\equiv 0 \bmod{d}$ and the equality $Q_2(\mathbf{x})/d=0$. 
Then we have \begin{align*} S_{T,\a}^\sharp(B)&= \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \chi(d) \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0\bmod{d}\\Q_2(\mathbf{x})\equiv 0\bmod{d}}} \delta\left(\frac{Q_2(\mathbf{x})}{d}\right)W_d\left(\frac{\mathbf{x}}{B}\right)\\ &= \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \frac{\chi(d)c_Q}{Q^2} \sum_{q=1}^{\infty}\;\sideset{}{^{*}}\sum_{a\bmod{q}} \sum_{\substack{\mathbf{x} \equiv \a \bmod{4}\\ Q_1(\mathbf{x})\equiv 0\bmod{d}\\Q_2(\mathbf{x})\equiv 0\bmod{d}}} \hspace{-0.4cm} e_q\left(\frac{aQ_2(\mathbf{x})}{d}\right)h\left(\frac{q}{Q},\frac{Q_2(\mathbf{x})}{dQ^2}\right)W_d\left(\frac{\mathbf{x}}{B}\right). \end{align*} We shall make the choice $$ Q= \frac{B}{\sqrt{d}}. $$ Since $d\ll B$, it follows that $Q\gg \sqrt{B}$. With our choice of $Q$ made, we remark that the size of the full modulus $qd$ is typically of order $B^{\frac{3}{2}}$. Since this is much smaller than the square of the length of each $x_i$ summation, it will be profitable to use the Poisson summation formula on the sum over $\mathbf{x}$. \begin{lemma} \label{psum} For any $N>0$ we have $$ S_{T,\a}^\sharp(B)=\left(1 +O_N(B^{-N})\right) \frac{B^{n-2}}{4^n}\sum_{\mathbf{m}\in \mathbb Z^n} \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \hspace{-0.2cm} \frac{\chi(d)}{d^{n-1}} \sum_{q=1}^\infty\frac{1}{q^n} T_{d,q}(\mathbf{m})I_{d,q}(\mathbf{m}), $$ where $$ T_{d,q}(\mathbf{m})=\sideset{}{^{*}}\sum_{a\bmod{q}} \sum_{\substack{\mathbf k \bmod{4dq}\\ \k \equiv \a \bmod{4}\\ Q_1(\k)\equiv 0\bmod{d}\\Q_2(\k)\equiv 0\bmod{d}}} e\left(\frac{4aQ_2(\k)+\mathbf{m}.\k}{4dq}\right) $$ and $$ I_{d,q}(\mathbf{m})=\int_{\mathbb R^n} h\left(\frac{q}{Q},\frac{B^2Q_2(\mathbf{y})}{dQ^2}\right)W_d(\mathbf{y}) e_{4dq}(-B\mathbf{m}.\mathbf{y})\d \mathbf y.
$$ \end{lemma} \begin{proof} Splitting the sum over $\mathbf x$ into residue classes modulo $4dq$, we get that the inner sum over $\mathbf{x}$ in our expression for $S_{T,\mathbf{a}}^\sharp(B)$ is given by \begin{align*} \sum_{\substack{\mathbf k \bmod{4dq}\\ \k \equiv \a \bmod{4}\\ Q_1(\k)\equiv 0\bmod{d}\\Q_2(\k)\equiv 0\bmod{d}}} e\left(\frac{aQ_2(\k)}{qd}\right) \sum_{\mathbf{x}\in \mathbb Z^n}f(\mathbf{x}), \end{align*} where $$ f(\mathbf{x})=h\left(\frac{q}{Q},\frac{Q_2(\k+ 4dq\mathbf{x})}{dQ^2}\right)W_d\left(\frac{\k+4dq\mathbf{x}}{B}\right). $$ The Poisson summation formula yields \begin{align*} \sum_{\mathbf{x}\in \mathbb Z^n}f(\mathbf{x})=\sum_{\mathbf{m}\in \mathbb Z^n}\hat f(\mathbf{m}), \end{align*} where \begin{align*} \hat f(\mathbf{m})&=\int_{\mathbb R^n}f(\mathbf y)e(-\mathbf{m}.\mathbf y)\d\mathbf y\\ &=\left(\frac{B}{4dq}\right)^ne_{4dq}(\mathbf{m}.\k)\int_{\mathbb R^n} h\left(\frac{q}{Q},\frac{B^2Q_2(\mathbf{y})}{dQ^2}\right)W_d\left(\mathbf{y}\right) e_{4dq}(-B\mathbf{m}.\mathbf{y})\d\mathbf y. \end{align*} The lemma follows on rearranging and noting that $c_Q= 1 +O_N(B^{-N})$ and $Q^2=B^2/d$. \end{proof} In this and the next few sections, we will analyse in detail the exponential sum $T_{d,q}(\mathbf{m})$ which appears in Lemma \ref{psum}. We start with a multiplicativity relation which reduces the problem to analysing the sum for a prime power modulus. Observe that $d$ is necessarily odd, but $q$ can be of either parity. For any $d,q\in \mathbb{N}$ we recall the definition \eqref{eq:S'} of $S_{d,q}(\mathbf{m})$, and for any non-negative integer $\ell$ define \begin{align} \label{eq:Sell} S^{\pm}_{1,2^{\ell}} (\mathbf{m})= \sideset{}{^{*}}\sum_{a\bmod{2^{\ell}}} \sum_{\substack{\mathbf k \bmod{2^{2+\ell}}\\ \k\equiv \pm \mathbf{a}\bmod{4}}} e_{2^{2+\ell}}\left(4aQ_2(\k)+\mathbf{m}.\k \right). \end{align} We note that if $h\in \mathbb{N}$ is coprime to $d$ and $q$ then $S_{d,q}(h\mathbf{m})=S_{d,q}(\mathbf{m})$. 
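The proof just given rests on the Poisson summation formula alone. As a quick numerical illustration of the principle in its simplest one-dimensional setting (not the setting of the lemma, where the weights are the functions $W_d$), one can check the classical theta transformation $\sum_{n\in\mathbb{Z}}e^{-\pi tn^2}=t^{-\frac{1}{2}}\sum_{m\in\mathbb{Z}}e^{-\pi m^2/t}$, which is Poisson summation applied to a Gaussian.

```python
import math

def theta(t, N=50):
    # truncated theta function: sum over |n| <= N of exp(-pi * t * n^2);
    # for t bounded away from 0 the tail beyond N = 50 is negligible
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

# Poisson summation predicts theta(t) = t^{-1/2} * theta(1/t)
```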
The following result is now available. \begin{lemma}\label{lem:mult1} For $q=2^{\ell}q'$, with $q'$ odd, we have \begin{align*} T_{d,q}(\mathbf{m})=S_{d,q'}(\mathbf{m})S^{\chi(dq')}_{1,2^{\ell}}(\mathbf{m}). \end{align*} \end{lemma} \begin{proof} Set \begin{align*} \k = \k' 2^{\ell+2}\overline{2^{\ell+2}}+ \k'' dq'\overline{dq'}, \quad a= a'2^{\ell}\overline{2^{\ell}} + a''q'\overline{q'}, \end{align*} where $\k'\bmod{dq'}$, $\k'' \bmod{2^{\ell+2}}$, $a'\bmod{q'}$, and $a''\bmod{2^{\ell}}$. The conditions on $\k$ then translate into $ \k''\equiv \a \bmod{4}$, $Q_1(\k')\equiv 0\bmod{d}$ and $Q_2(\k')\equiv 0\bmod{d}.$ Furthermore, we have $$ e\left(\frac{4aQ_2(\k)+\mathbf{m}.\k}{4dq}\right)= e\left(\frac{(4a'Q_2(\k')+\mathbf{m}.\k')\overline{2^{\ell+2}}}{dq'}\right) e\left(\frac{(4a''Q_2(\k'')+\mathbf{m}.\k'')\overline{dq'}}{2^{\ell+2}}\right). $$ The sum over $a'$ and $\k'$ gives $S_{d,q'}(\mathbf{m})$ after a change of variables. A similar change of variables in $a''$ and $\k''$ gives $S^{\pm}_{1,2^{\ell}}(\mathbf{m})$, where the sign is given by $\chi(dq')$. \end{proof} In a similar spirit we can prove the following multiplicativity property for the sum \eqref{eq:S'}. \begin{lemma}\label{lem:mult2} For $d=d_1d_2$ and $q=q_1q_2$, with $(d_1q_1,d_2q_2)=1$, we have \begin{align*} S_{d,q}(\mathbf{m})=S_{d_1,q_1}(\mathbf{m})S_{d_2,q_2}(\mathbf{m}). \end{align*} \end{lemma} This result reduces the problem of estimating $S_{d,q}(\mathbf{m})$ to three distinct cases. Accordingly, for $d,q\in \mathbb{N}$ we define the sums $$ \mathcal Q_{q}(\mathbf{m})=S_{1,q}(\mathbf{m}),\quad \mathcal D_{d}(\mathbf{m})=S_{d,1}(\mathbf{m}),\quad \mathcal M_{d,q}(\mathbf{m})=S_{d,q}(\mathbf{m}), $$ the latter sum only being of interest when $d$ and $q$ exceed $1$ and are constructed from the same set of primes. The analysis of these sums will be the focus of \S \ref{sec:qsum}, \S \ref{sec:dsum1} and \S \ref{mcs}, respectively.
For the moment we content ourselves with recording the crude upper bound \begin{equation} \label{eq:Sell-upper} S^{\pm}_{1,2^{\ell}} (\mathbf{m})\ll 2^{\ell(\frac{n}{2}+1)}, \end{equation} for \eqref{eq:Sell}, whose truth will be established in the following section. \medskip We close this section by presenting some facts concerning the exponential integral $I_{d,q}(\mathbf{m})$ which appears in Lemma \ref{psum}, recalling the definition \eqref{eq:W_d} of $W_d\left(\mathbf y\right)$. The properties of $h$ recorded in Lemma \ref{dfi} ensure that $q\ll Q$ when $I_{d,q}(\mathbf{m})$ is non-zero. Likewise the properties of $W_{d}$ imply that $d\ll B$ under the same hypothesis. The underlying weight function $W$ has bounded derivatives $$ \frac{\partial^{i_1+\cdots+i_n}}{\partial y_1^{i_1}\cdots\partial y_n^{i_n}}W(\mathbf{y})\ll_{i_1,\dots,i_n}1, $$ and the function $V_T$ satisfies $t^jV_T^{(j)}(t)\ll_j 1$. It therefore follows that $$ \frac{\partial^{i_1+\cdots+i_n}}{\partial y_1^{i_1}\cdots\partial y_n^{i_n}}W_d(\mathbf{y})\ll_{i_1,\dots,i_n}1, $$ since $Q_{1}(\mathbf{y})$ has order of magnitude $1$ for every $\mathbf{y}\in \supp(W)$. In the notation of \cite[\S 7]{H} we have \begin{equation}\label{eq:I*} I_{d,q}(\mathbf{m})=I_r^{*}(\mathbf v)=\int_{\mathbb R^n} h\left(r,G(\mathbf{y})\right)\omega\left(\mathbf{y}\right) e_{r}(-\mathbf v.\mathbf{y})\d\mathbf{y}, \end{equation} where $$ r=\frac{q}{Q}, \quad \mathbf v= \frac{B\mathbf{m}}{4dQ}, \quad G(\mathbf{y})=\frac{B^2Q_2(\mathbf{y})}{dQ^2}= Q_2(\mathbf{y}), \quad \omega (\mathbf{y})= W_d(\mathbf{y}). $$ We have $$ \frac{\partial^{i_1+\cdots+i_n}}{\partial y_1^{i_1}\cdots\partial y_n^{i_n}}G(\mathbf{y})\ll_{i_1,\dots,i_n}1, \quad \frac{\partial^{i_1+\cdots+i_n}}{\partial y_1^{i_1}\cdots\partial y_n^{i_n}}\omega(\mathbf{y})\ll_{i_1,\dots,i_n}1. $$ Using these bounds and integration by parts, as in \cite[\S 7]{H}, we obtain the following bound. 
\begin{lemma} \label{ubI_q} For $\mathbf{m} \neq \mathbf 0$ and any $N\geq 0$, we have $$ I_{d,q}(\mathbf{m}) \ll_{N} \frac{Q}{q}\left(\frac{dQ}{B|\mathbf{m}|}\right)^N. $$ \end{lemma} As a consequence we get that $\mathbf{m}$ with $|\mathbf{m}|>dQB^{-1+\varepsilon}$ will make a negligible contribution in our analysis of $S_{T,\mathbf{a}}^\sharp(B)$. For $\mathbf{m}$ with $0<|\mathbf{m}|\leq dQB^{-1+\varepsilon}$ we need a more refined bound. \begin{lemma} \label{ubI_q2} For $0<|\mathbf{m}| \leq dQB^{-1+\varepsilon}=\sqrt{d}B^{\varepsilon}$ and $q\ll Q=B/\sqrt{d}$, we have \begin{align*} \frac{\partial^{i+j}}{\partial d^i\partial q^j}I_{d,q}(\mathbf{m}) &\ll d^{-i}q^{-j}\left|\frac{B\mathbf{m}}{dq}\right|^{1-\frac{n}{2}} B^{\varepsilon}, \end{align*} for any $i,j\in\{0,1\}$. \end{lemma} \begin{proof} When $i=0$ this result follows from a closer study of the behaviour of the function $h(x,y)$, and is due to Heath-Brown \cite[\S\S 4--8]{H}. Let us suppose that $i=1$. After a change of variables we have $$ I_{d,q}(\mathbf{m})=d^n\int_{\mathbb R^n} h\left(\frac{q\sqrt{d}}{B},d^2Q_2(\mathbf{y})\right)W_d\left(d\mathbf{y}\right) e_{4q}(-B\mathbf{m}.\mathbf{y})\d\mathbf y. $$ We proceed to take the derivative with respect to $d$. The right hand side is seen to be \begin{align*} \frac{n}{d}I_{d,q}(\mathbf{m})+d^n\int_{\mathbb R^n} g_d(\mathbf{y}) e_{4q}(-B\mathbf{m}.\mathbf{y})\d\mathbf y, \end{align*} where if $h^{(1)}(x,y)=\frac{\partial}{\partial x}h(x,y)$ and $h^{(2)}(x,y)=\frac{\partial}{\partial y}h(x,y)$, then \begin{align*} g_d(\mathbf{y})=~& \frac{q}{2B\sqrt{d}}h^{(1)}\left(\frac{q\sqrt{d}}{B},d^2Q_2(\mathbf{y})\right)W_d\left(d\mathbf{y}\right)\\ & +2dQ_2(\mathbf{y})h^{(2)}\left(\frac{q\sqrt{d}}{B},d^2Q_2(\mathbf{y})\right)W_d\left(d\mathbf{y}\right) +h\left(\frac{q\sqrt{d}}{B},d^2Q_2(\mathbf{y})\right)\frac{\partial}{\partial d} W_d\left(d\mathbf{y}\right). \end{align*} Let $W^{(1)}(\mathbf{y})=\mathbf{y}.\nabla W(\mathbf{y})$. 
One finds that $$ \frac{\partial}{\partial d} W_d\left(d\mathbf{y}\right)= \frac{1}{d}W^{(1)}\left(d\mathbf y\right)V_T(d)+W\left(d\mathbf y\right)V_T'(d), $$ if $T\leq B$, and $$ \frac{\partial}{\partial d} W_d\left(d\mathbf{y}\right) =\frac{1}{d}W^{(1)}\left(d\mathbf y\right)V_T\left(B^2dQ_1(\mathbf{y})\right)+W\left(d\mathbf y\right)V_T'\left(B^2dQ_1(\mathbf{y})\right)B^2Q_1(\mathbf{y}), $$ otherwise. Hence \begin{align*} \frac{\partial}{\partial d} W_d\left(d\mathbf{y}\right)=\frac{1}{d}W_{1,d}\left(d\mathbf{y}\right), \end{align*} where the new function $W_{1,d}$ has the same analytic behaviour as $W_d$. Another change of variables now yields \begin{align*} \frac{\partial}{\partial d}I_{d,q}(\mathbf{m}) =~&\frac{n}{d}I_{d,q}(\mathbf{m})+\frac{1}{2d}\int_{\mathbb R^n}\frac{q\sqrt{d}}{B}h^{(1)}\left(\frac{q\sqrt{d}}{B},Q_2(\mathbf{y})\right)W_d\left(\mathbf{y}\right)e_{4dq}(-B\mathbf{m}.\mathbf{y})\d\mathbf y\\ &+\frac{2}{d}\int_{\mathbb R^n}h^{(2)}\left(\frac{q\sqrt{d}}{B},Q_2(\mathbf{y})\right)W_{2,d}\left(\mathbf{y}\right)e_{4dq}(-B\mathbf{m}.\mathbf{y})\d\mathbf y\\ &+\frac{1}{d}\int_{\mathbb R^n}h\left(\frac{q\sqrt{d}}{B},Q_2(\mathbf{y})\right)W_{1,d}\left(\mathbf{y}\right)e_{4dq}(-B\mathbf{m}.\mathbf{y})\d\mathbf y, \end{align*} where $W_{2,d}(\mathbf{y})=W_d(\mathbf{y})Q_2(\mathbf{y})$. The last three integrals can be compared with $I_{d,q}(\mathbf{m})$, and the lemma now follows using the bounds in the statement of the lemma for $i=0$. \end{proof} \section{Analysis of $\mathcal{Q}_q(\mathbf{m})$}\label{sec:qsum} The aim of this section is to collect together everything we need to know about the sums $$ \mathcal{Q}_q(\mathbf{m})=\sideset{}{^{*}}\sum_{\substack{a\bmod{q}}} \sum_{\k\bmod{q}} e_q(aQ_2(\k)+\mathbf{m}.\k), $$ for given $\mathbf{m} \in \mathbb{Z}^n$. This sum appears very naturally when the circle method is employed to analyse quadratic forms. 
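For small moduli the sums $\mathcal{Q}_q(\mathbf{m})$ can be computed by brute force, which gives a concrete check on the multiplicativity of Lemma \ref{lem:mult2} in the case $d=1$. The following sketch is purely illustrative: it uses the toy form $Q(\mathbf{k})=k_1^2+k_2^2$ in $n=2$ variables, below the range relevant to this paper.

```python
import cmath
from itertools import product
from math import gcd

def Q_sum(q, m):
    # brute-force Q_q(m) = sum*_{a mod q} sum_{k mod q} e_q(a*Q(k) + m.k)
    # for the toy diagonal form Q(k) = k_1^2 + ... + k_n^2, n = len(m)
    n = len(m)
    total = 0j
    for a in range(q):
        if gcd(a, q) != 1:
            continue  # a runs over units modulo q only
        for k in product(range(q), repeat=n):
            Qk = sum(v * v for v in k)
            mk = sum(mi * ki for mi, ki in zip(m, k))
            total += cmath.exp(2j * cmath.pi * (a * Qk + mk) / q)
    return total
```

For instance, with this form one finds $\mathcal{Q}_5((1,0))=-5$ and $\mathcal{Q}_{15}(\mathbf{m})=\mathcal{Q}_3(\mathbf{m})\mathcal{Q}_5(\mathbf{m})$, in agreement with the multiplicativity recorded above.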
Let $\mathbf{M}$ be the underlying symmetric $n\times n$ integer matrix for a quadratic form $Q$, so that $Q(\k)=\k^T\mathbf{M}\k$. We begin with an easy upper bound for the inner sum in $\mathcal{Q}_{q}(\mathbf{m})$ when $q$ is a prime power. \begin{lemma} \label{lem:gauss-sum-bound} For any quadratic form $Q(\mathbf{x})=\mathbf{x}^T\mathbf{M}\mathbf{x}$, we have $$ \left|\sum_{\mathbf k \bmod{p^r}}e_{p^r}\left(Q(\k)+\mathbf{m}.\k\right)\right|\leq p^{\frac{nr}{2}} \sqrt{K_{p^r}(2\mathbf{M};\mathbf{0})}, $$ in the notation of \eqref{eq:2.1}. \end{lemma} \begin{proof} Cauchy's inequality implies that the square of the left hand side is not greater than $$ \sum_{\mathbf{x},\mathbf{y}\bmod{p^r}} e_{p^r}\big((Q(\mathbf{x})-Q(\mathbf{y}))+\mathbf{m}.(\mathbf{x}-\mathbf{y})\big). $$ Substituting $\mathbf{x}=\mathbf{y}+\mathbf{z}$ we see that the summand is equal to $ e_{p^r}(\mathbf{m}.\mathbf{z})e_{p^r}(Q(\mathbf{z})+2\mathbf{y}^T \mathbf{M}\mathbf{z}). $ The sum over $\mathbf{y}$ vanishes unless $p^{r}\mid 2\mathbf{M}\mathbf{z}$, in which case it equals $p^{nr}$. The result now follows by executing the sum over $\mathbf{z}$ trivially. \end{proof} We apply Lemma \ref{lem:gauss-sum-bound} to estimate $\mathcal{Q}_{q}(\mathbf{m})$. Since $Q_2$ is non-singular it follows from Lemma~\ref{lem:smith} that there is an absolute constant $c\geq 1$ such that $ K_{p^r}(2\mathbf{M}_2;\mathbf{0})\leq c, $ for any prime power $p^r$. Moreover one can take $c=1$ when $p\nmid 2\det \mathbf{M}_2$. On summing trivially over $a$ one deduces that $ |\mathcal Q_{p^r}(\mathbf{m})|\leq \sqrt{c} p^{\left(\frac{n}{2}+1\right)r}, $ for any prime power $p^r$. Applying Lemma~\ref{lem:mult2} therefore yields \begin{equation} \label{cor:Q_p^r} \mathcal Q_{q}(\mathbf{m})\ll q^{\frac{n}{2}+1}. \end{equation} Likewise \eqref{eq:Sell-upper} is an easy consequence of Lemma \ref{lem:smith} and Lemma \ref{lem:gauss-sum-bound} when $p=2$.
Using quadratic Gauss sums, it is possible to prove explicit formulae for $\mathcal{Q}_{p^{r}}(\mathbf{m})$ when the prime $p$ is large enough. The oscillation in the sign of these sums will give cancellation in the sum over $q$ in Lemma \ref{psum} which will be crucial for handling $n= 7$. Let $Q(\mathbf{x})$ be a quadratic form with associated matrix $\mathbf{M}$. We write $Q^*(\mathbf{x})$ for the adjoint quadratic form with underlying matrix $(\det \mathbf{M})\mathbf{M}^{-1}$. For any odd prime $p$ let $$ \varepsilon(p)= \begin{cases} 1, &\mbox{if $p\equiv 1 \bmod{4}$,}\\ i, &\mbox{if $p\equiv 3 \bmod{4}$,} \end{cases} $$ and let $\chi_{p}(\cdot)$ denote the Legendre symbol $(\frac{\cdot}{p})$. We may now record the following formula. \begin{lemma} \label{gs} Let $p$ be a prime with $p\nmid 2\det \mathbf{M}$. Then we have \begin{align*} \sum_{\k \bmod{p^r}}e_{p^r}(Q(\k)+\mathbf{m}. \k)= \begin{cases} p^{\frac{nr}{2}}e_{p^r}(-\overline{4\det \mathbf{M}}Q^*(\mathbf{m})), & \mbox{if $r$ is even},\\ p^{\frac{nr}{2}}\chi_p(\det \mathbf{M})\varepsilon(p)^ne_{p^r}(-\overline{4\det \mathbf{M}}Q^*(\mathbf{m})), &\mbox{if $r$ is odd}. \end{cases} \end{align*} \end{lemma} \begin{proof} Since $p$ is odd there exists an $n\times n$ matrix $\mathbf U$ with integer entries and $p\nmid \det \mathbf U$ such that $\mathbf{U}^{T}\mathbf{M}\mathbf{U}$ is diagonal modulo $p^{r}$. Hence in proving the lemma we may restrict ourselves to diagonal forms $ Q(\mathbf{x})=\alpha_1x_1^2+\cdots+\alpha_nx_n^2, $ with $\mathbf{M}=\diag(\alpha_1,\dots,\alpha_n)$. In this case we have $$ Q^*(\mathbf{x})=\det \mathbf{M}\left(\frac{x_1^2}{\alpha_1}+\cdots+\frac{x_n^2}{\alpha_n}\right), $$ where $\det \mathbf{M}=\alpha_1\cdots \alpha_n$. Let $S$ denote the sum appearing on the left hand side in the statement of the lemma. Then \begin{align*} S=\prod_{i=1}^n\Bigl\{\sum_{k \bmod{p^r}}e_{p^r}(\alpha_i k^2+m_ik)\Bigr\}. \end{align*} Since $p\nmid 2\alpha_i$, we can complete the square.
This yields \begin{align*} \sum_{k \bmod{p^r}}e_{p^r}(\alpha_i k^2+m_ik)=e_{p^r}(-\overline{4\alpha_i}m_i^2) \sum_{k \bmod{p^r}}e_{p^r}(\alpha_i k^2). \end{align*} The last sum is the quadratic Gauss sum, which satisfies \begin{align*} \sum_{k \bmod{p^r}}e_{p^r}(\alpha_i k^2)= \begin{cases}p^{\frac{r}{2}}, &\mbox{if $r$ is even,}\\ \chi_p(\alpha_i)\varepsilon(p)p^{\frac{r}{2}}, &\mbox{if $r$ is odd.} \end{cases} \end{align*} The lemma follows on substituting this into the above expression for $S$.\end{proof} Lemma \ref{gs} directly yields an explicit evaluation of the sum $\mathcal Q_{p^r}(\mathbf{m})$ when the prime $p$ is sufficiently large. To state the outcome of this let $$ c_{p^r}(a)=\sideset{}{^{*}}\sum_{x\bmod{p^r}}e_{p^r}\left(ax\right) =\sum_{d\mid (p^r,a)} d\mu\left(\frac{p^r}{d}\right) $$ be the Ramanujan sum and let $$ g_{p^r}(a)=\sum_{x\bmod{p^r}}\chi_p(x)e_{p^r}\left(ax\right) $$ be the Gauss sum. For the former we will make frequent use of the fact that $c_{p^{r}}(ab)=c_{p^{r}}(a)$ for any $b$ coprime to $p$, and $c_{p^{r}}(a_{1})=c_{p^{r}}(a_{2})$ whenever $a_{1}\equiv a_{2}\bmod{p^{r}}$. Moreover, we have the obvious inequality $|c_{p^r}(a)|\leq (p^r,a)$. It follows from Lemma \ref{gs} that $$ \mathcal Q_{p^r}(\mathbf{m})=p^{\frac{nr}{2}}\sideset{}{^{*}}\sum_{a\bmod{p^r}}\begin{cases} e_{p^r}(-\overline{4a\det \mathbf{M}_2} Q^*_2(\mathbf{m})),&\mbox{if $r$ is even,}\\ \chi_p(\det \mathbf{M}_2)\chi_p(a)^n\varepsilon(p)^ne_{p^r}(-\overline{4a\det \mathbf{M}_2}Q^*_2(\mathbf{m})), &\mbox{if $r$ is odd}, \end{cases} $$ if $p\nmid 2\det \mathbf{M}_2$. The following lemma now follows from executing the sum over $a$. \begin{lemma} \label{expQ} Let $p$ be a prime with $p\nmid 2\det \mathbf{M}_{2}$. Then for even $n$ we have $$ \mathcal Q_{p^r}(\mathbf{m})=\varepsilon(p)^{nr}\chi_p(\det \mathbf{M}_2)^rp^{\frac{nr}{2}}c_{p^r}\left(Q^*_2(\mathbf{m})\right).
$$ For odd $n$ we have $$ \mathcal Q_{p^r}(\mathbf{m})=\begin{cases} p^{\frac{nr}{2}}c_{p^r}\left(Q^*_2(\mathbf{m})\right), & \mbox{if $r$ is even},\\ \varepsilon(p)^n \chi_p(-1)p^{\frac{nr}{2}}g_{p^r}(Q^*_2(\mathbf{m})), & \mbox{if $r$ is odd}. \end{cases} $$ \end{lemma} Let \begin{equation}\label{eq:NM} N=\begin{cases} 2\det \mathbf{M}_2 Q^*_2(\mathbf{m}), & \mbox{if $Q^*_2(\mathbf{m})\neq 0$,}\\ 2\det \mathbf{M}_2 , &\mbox{otherwise}. \end{cases} \end{equation} We now turn to the average order of $\mathcal{Q}_q(\mathbf{m})$, as one sums over $q$ coprime to $M$ for some fixed $M\in \mathbb{N}$ divisible by $N$. For this we will use Perron's formula unless $n$ is even and $Q_{2}^{*}(\mathbf{m})\neq 0$, a case that can be handled trivially as follows. \begin{lemma}\label{lem:q-triv} Let $M \in \mathbb{N}$ with $N\mid M$ and let $\varepsilon>0$. Assume that $n$ is even and $Q_{2}^{*}(\mathbf{m})\neq 0$. Then we have $$ \sum_{\substack{q\leq x\\ (q,M)=1}}|\mathcal{Q}_{q}(\mathbf{m})| \ll x^{\frac{n}{2}+1+\varepsilon}M^{\varepsilon}. $$ \end{lemma} \begin{proof} Combining Lemma \ref{expQ} with the multiplicativity relation Lemma \ref{lem:mult2} we obtain $$ \sum_{\substack{q\leq x\\ (q,M)=1}}|\mathcal{Q}_{q}(\mathbf{m})| \leq x^{\frac{n}{2}} \sum_{\substack{q\leq x\\ (q,M)=1}}|c_{q}(Q_{2}^{*}(\mathbf{m}))|. $$ The lemma is therefore an easy consequence of the inequality $|c_q(a)|\leq (q,a)$ satisfied by the Ramanujan sum. \end{proof} Let $\chi$ be a non-principal Dirichlet character with conductor $c_\chi$. It will be convenient to recall some preliminary facts concerning the size of Dirichlet $L$-functions $L(s,\chi)$ in the critical strip. We begin by recalling the convexity bound \begin{equation}\label{eq:convex} L(\sigma+it,\chi) \ll (c_\chi |t|)^{\frac{1-\sigma}{2}+\varepsilon}, \end{equation} for any $\sigma\in [0,1]$ and $|t|\geq 1$. 
Next we claim that \begin{align} \label{eq:l-series-bound} \int_{\frac{1}{2}-iT}^{\frac{1}{2}+iT}|L(s,\chi)|^2\frac{\d s}{|s|}\ll c_{\chi}^{\frac{7}{16}+\varepsilon}T^{\varepsilon}. \end{align} In order to show this we break the integral into dyadic blocks, deducing that it is dominated by $$ \sum_{\substack{ Y\;\text{dyadic}\\ \frac{1}{2}<Y\leq T }}\frac{1}{1+Y}\int_{Y}^{2Y}\left|L\left(\frac{1}{2}+it,\chi\right)\right|^2\d t. $$ For small values of $Y$ we use Heath-Brown's \cite{hb-hybrid} hybrid bound $L(\frac{1}{2}+it,\chi)\ll (c_\chi |t|)^{\frac{3}{16}+\varepsilon}$, for $|t|\geq 1$, to get $$ \frac{1}{1+Y}\int_{Y}^{2Y}\left|L\left(\frac{1}{2}+it,\chi\right)\right|^2 \d t\ll c_{\chi}^{\frac{3}{8}+\varepsilon}\sqrt{Y}. $$ For larger values of $Y$ we use the approximate functional equation to replace the $L$-value by a series of length $\sqrt{c_{\chi}Y}$, and then use the mean value theorem for Dirichlet polynomials (see Iwaniec and Kowalski \cite[Theorem 9.1]{HIEK}, for example). This gives $$ \frac{1}{1+Y}\int_{Y}^{2Y}\left|\sum_{n\leq \sqrt{c_{\chi}Y}T^{\varepsilon}}\frac{\chi(n)}{\sqrt{n}}n^{-it}\right|^2\d t\ll \left(1+\sqrt{\frac{c_{\chi}}{Y}}\right)T^{\varepsilon}. $$ Summing over all dyadic blocks, we easily arrive at the claimed bound \eqref{eq:l-series-bound}. For $s\in \mathbb{C}$ let $\sigma=\Re(s)$. Returning now to the application of Perron's formula, we set $$ \xi_M(s;\mathbf{m})=\sum_{(q,M)=1}\frac{\mathcal{Q}_{q}(\mathbf{m})}{q^s}. $$ By \eqref{cor:Q_p^r} this series is absolutely convergent for $\sigma>\frac{n}{2}+2$. When $n$ is even and $Q_2^*(\mathbf{m})\neq 0$ it is absolutely convergent for $\sigma>\frac{n}{2}+1$, by Lemma \ref{lem:q-triv}. For any $x-\frac{1}{2}\in \mathbb{Z}$ and $T>0$ we obtain \begin{align} \label{perron} \sum_{\substack{q\leq x\\ (q,M)=1}}\mathcal{Q}_{q}(\mathbf{m})=\frac{1}{2\pi i}\int_{c-iT}^{c+iT}\xi_M(s;\mathbf{m}) x^s\frac{\d s}{s}+O\left( \frac{x^c}{T}\right), \end{align} where $c>\frac{n}{2}+2$. 
We will take $T$ large enough in terms of $x$ and $|\mathbf{m}|$ so that the error term in the formula is negligible. The analytic nature of the $L$-series can be revealed using the explicit formulae that we enunciated in Lemma \ref{expQ} and depends on the parity of $n$. For even $n$ we get \begin{equation} \label{eq:even-n} \xi_M(s;\mathbf{m})=\prod_{p\nmid M}\left\{\sum_{r=0}^{\infty} \frac{\chi_p(\det \mathbf{M}_2)^r\varepsilon(p)^{nr}c_{p^r}\left(Q^*_2(\mathbf{m})\right)}{p^{\left(s-\frac{n}{2}\right)r}}\right\}. \end{equation} For odd $n$ we get \begin{equation} \label{eq:odd-n} \xi_M(s;\mathbf{m})=\prod_{p\nmid M}\left\{\sum_{r\;\text{even}} \frac{c_{p^r}\left(Q^*_2(\mathbf{m})\right)}{p^{\left(s-\frac{n}{2}\right)r}}+ \chi_p(-1)\varepsilon(p)^n\sum_{r\;\text{odd}} \frac{g_{p^r}\left(Q^*_2(\mathbf{m})\right)}{p^{\left(s-\frac{n}{2}\right)r}}\right\}. \end{equation} The following result handles the case in which $Q_{2}^{*}(\mathbf{m})=0$. \begin{lemma}\label{lem:0} Let $M \in \mathbb{N}$ with $N\mid M$ and let $\varepsilon>0$. Assume that $Q^*_2(\mathbf{m})=0$. Then we have $$ \sum_{\substack{q\leq x\\ (q,M)=1}}\mathcal{Q}_{q}(\mathbf{m})\ll \begin{cases} x^{\frac{n+3}{2}+\varepsilon}M^{\varepsilon}, & \mbox{if $(-1)^{\frac{n}{2}}\det \mathbf{M}_2\neq \square$},\\ x^{\frac{n}{2}+2}, & \mbox{if $(-1)^{\frac{n}{2}}\det \mathbf{M}_2=\square$.} \end{cases} $$ \end{lemma} Here, and after, for any complex number $z$ we write $z=\square$ if and only if there exists an integer $j$ such that $z=j^{2}.$ Thus the sum in question is bounded by $O(x^{\frac{n+3}{2}+\varepsilon}M^{\varepsilon})$ when $n$ is odd since it is then impossible for $(-1)^{\frac{n}{2}}\det \mathbf{M}_2$ to be the square of an integer. \begin{proof}[Proof of Lemma \ref{lem:0}] The second part of the lemma is a trivial consequence of \eqref{cor:Q_p^r} and the triangle inequality. Turning to the first part we begin by supposing that $n$ is even and $(-1)^{\frac{n}{2}}\det \mathbf{M}_2\neq \square$. 
If $Q^*_2(\mathbf{m})=0$ then $c_{p^r}\left(Q^*_2(\mathbf{m})\right)=\phi(p^r)$. It follows from \eqref{eq:even-n} that $$ \xi_M(s;\mathbf{m})=L\left(s-1-\frac{n}{2},\psi\right)E_M(s), $$ where $L(s,\psi)$ is the Dirichlet $L$-function associated to the Jacobi symbol $$ \psi(\cdot )=\left(\frac{(-1)^{\frac{n}{2}}\det \mathbf{M}_2}{\cdot} \right), $$ with conductor $c_\psi=O(1)$, and where $E_M(s)$ is an Euler product which converges absolutely in the half plane $\sigma>\frac{n}{2}+1$ and satisfies the bound $E_M(s)\ll M^\varepsilon$ there. This gives the analytic continuation of $\xi_M(s;\mathbf{m})$ up to $\sigma>\frac{n}{2}+1$. Moving the contour of integration in \eqref{perron} to $c_0=\frac{n+3}{2}$ and invoking the convexity estimate \eqref{eq:convex} to deal with the horizontal contours, we obtain $$ \sum_{\substack{q\leq x \\(q,M)=1}}\mathcal{Q}_{q}(\mathbf{m})=\frac{1}{2\pi i}\int_{c_0-iT}^{c_0+iT}\xi_M(s;\mathbf{m})x^s\frac{\d s}{s}+ O\left(\frac{x^c}{T}+\frac{x^{c_0}M^\varepsilon T^\varepsilon}{T^{\frac{3}{4}}} \right). $$ Here we note that $(-1)^{\frac{n}{2}}\det \mathbf{M}_2$ is not a square and so the $L$-series does not have a pole in the region $\sigma>c_0-\frac{1}{2}$. Taking $T=x^{n+4}$ the error term is seen to be $O( x^{-\frac{n}{4}-\frac{3}{2}+\varepsilon}M^\varepsilon)$. The remaining integral is estimated via \eqref{eq:l-series-bound}, which thereby leads to the first part of Lemma \ref{lem:0} when $n$ is even. If $n$ is odd and $Q^*_2(\mathbf{m})=0$, then $c_{p^r}\left(Q^*_2(\mathbf{m})\right)=\phi(p^r)$ and $g_{p^r}\left(Q^*_2(\mathbf{m})\right)=0$. Hence $\xi_M(s;\mathbf{m})$ is absolutely convergent and bounded by $O(M^\varepsilon)$ in the half-plane $\sigma>\frac{n+3}{2}$. This implies that we can shift the contour in \eqref{perron} to $c_0=\frac{n+3}{2}+\varepsilon$, without encountering any poles, leading to a similar but simpler situation to that considered for even $n$. This completes the proof of Lemma \ref{lem:0}. 
\end{proof} Let us turn to the size of the exponential sums $\mathcal{Q}_q(\mathbf{m})$ for generic $\mathbf{m}$, for which sharper bounds are required. Tracing through the proof one sees that if $n$ is even and $Q_{2}^{*}(\mathbf{m})\neq 0$ then one is instead led to compare $\xi_M(s;\mathbf{m})$ in \eqref{eq:even-n} with $L\left(s-\frac{n}{2},\psi\right)^{-1}$. To improve on Lemma \ref{lem:q-triv} one therefore requires a good zero-free region for $L\left(s-\frac{n}{2},\psi\right)$ to the left of the line $\sigma=\frac{n}{2}+1$, for which the unconditional picture is somewhat lacking. However, even if one is able to save a power of $x$ in Lemma \ref{lem:q-triv}, this still does not seem to be enough to handle $n=6$ in Theorem \ref{th1}. The following result deals with the case of odd $n$ when $Q_2^*(\mathbf{m})\neq 0$. \begin{lemma}\label{lem:2} Let $M \in \mathbb{N}$ with $N\mid M$ and let $\varepsilon>0$. Assume that $n$ is odd and $Q_{2}^{*}(\mathbf{m})\neq 0$. Then we have $$ \sum_{\substack{q\leq x\\ (q,M)=1}}\mathcal{Q}_{q}(\mathbf{m})\ll \begin{cases} |\mathbf{m}|^{\frac{7}{16}+\varepsilon}x^{\frac{n}{2}+1+\varepsilon}M^{\varepsilon}, & \mbox{if $(-1)^{\frac{n-1}{2}}Q_{2}^{*}(\mathbf{m})\neq \square$,}\\ x^{\frac{n+3}{2}+\varepsilon}M^{\varepsilon}, & \mbox{if $(-1)^{\frac{n-1}{2}}Q_{2}^{*}(\mathbf{m})= \square$.} \end{cases} $$ \end{lemma} \begin{proof} Recalling \eqref{eq:odd-n} we note that $ g_{p}(a)=\chi_{p}(a)\varepsilon(p)p^{\frac{1}{2}}, $ for any non-zero integer $a$ that is coprime to $p$. Hence we deduce in this case that $$ \xi_M(s;\mathbf{m})=L\left(s-\frac{n+1}{2},\psi_{\mathbf{m}}\right)E_M(s), $$ where $\psi_{\mathbf{m}}$ is the Jacobi symbol $$ \psi_{\mathbf{m}}(\cdot) = \left(\frac{(-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})}{\cdot}\right), $$ with conductor $4|Q_2^*(\mathbf{m})| =O( |\mathbf{m}|^2)$. 
Also $E_M(s)$ is an Euler product which now converges absolutely in the half plane $\sigma>\frac{n}{2} +1$ and satisfies the bound $E_M(s)\ll M^\varepsilon$ there. Under the assumption that $(-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})\neq \square$, the $L$-series $\xi_M(s;\mathbf{m})$ does not have a pole in the region $\sigma>\frac{n}{2}+1$. Moving the contour of integration in \eqref{perron} to $c_0=\frac{n}{2}+1+\varepsilon$, and using the convexity estimate \eqref{eq:convex}, we therefore get $$ \sum_{\substack{q\leq x\\ (q,M)=1}}\mathcal{Q}_{q}(\mathbf{m})=\frac{1}{2\pi i}\int_{c_0-iT}^{c_0+iT}\xi_M(s;\mathbf{m})x^s\frac{\d s}{s}+ O\left(\frac{x^c}{T} +\frac{|\mathbf{m}|^{\frac{1}{2}+\varepsilon}x^{c_0}M^\varepsilon}{T^{\frac{3}{4}}}\right), $$ in this case. Estimating the remaining integral using \eqref{eq:l-series-bound}, as before, we conclude the proof of the lemma when $(-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})\neq \square$ by taking $T$ sufficiently large. Finally, if $(-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})=\square$, then $\xi_M(s;\mathbf{m})$ is regularised by $\zeta(s-\frac{n+1}{2})$ and has a pole at $s=\frac{n+3}{2}$. In this case we move the line of integration back to $c_0=\frac{n+3}{2}+\varepsilon$, which easily leads to the statement of the lemma. \end{proof} \section{Analysis of $\mathcal{D}_d(\mathbf{m})$}\label{sec:dsum1} The aim of this section is to collect together everything we need to know about the sums $$ \mathcal{D}_d(\mathbf{m})=\sum_{\k \in \hat V(\mathbb{Z}/d\mathbb{Z})} e_d(\mathbf{m}.\k), $$ for given $\mathbf{m} \in \mathbb{Z}^n$ and $d\in \mathbb{N}$. Here we write $\hat W$ to denote the affine cone above a projective variety $W$. The estimates in this section pertain to the quadratic forms considered in Theorem~\ref{th1}, so that $V$ is non-singular and we may make use of the geometric facts recorded in \S \ref{geometry}. 
Our starting point is Lemma \ref{lem:mult2}, which yields $ \mathcal{D}_{d_{1}d_{2}}(\mathbf{m})=\mathcal{D}_{d_{1}}(\mathbf{m})\mathcal{D}_{d_{2}}(\mathbf{m}) $ if $(d_{1},d_{2})=1$, rendering it sufficient to understand the behaviour of the sum at prime powers. For any $\mathbf{m}\in \mathbb{Z}^n$ we begin by examining the case in which $d=p$, a prime. Introducing a free sum over elements of $\mathbb{F}_p^*$, we find that \begin{align*} (p-1)\mathcal{D}_p(\mathbf{m}) &=\sum_{a=1}^{p-1}\sum_{\substack{\mathbf{x}\in \hat V(\mathbb{F}_p)}}e_p(\mathbf{m}.\mathbf{x})\\ &=\sum_{\substack{\mathbf{x}\in \hat V(\mathbb{F}_p)}}\sum_{a=1}^{p-1}e_p(a\mathbf{m}.\mathbf{x})\\ &=p\#\hat V_\mathbf{m} (\mathbb{F}_p)-\#\hat V (\mathbb{F}_p), \end{align*} where $V_\mathbf{m}$ is the variety obtained by intersecting $V$ with the hyperplane $\mathbf{m}.\mathbf{x}=0$, and $\hat V_\mathbf{m}$ is the corresponding affine variety lying above it. Rearranging, we obtain \begin{equation}\label{eq:goat} \mathcal{D}_p(\mathbf{m})=\Big(1-\frac{1}{p}\Big)^{-1}\left( \#\hat V_\mathbf{m}(\mathbb{F}_p)-p^{-1}\#\hat V(\mathbb{F}_p) \right). \end{equation} Now for any complete intersection $W\subset \mathbb{P}^m$, which is non-singular modulo $p$ and has dimension $e\geq 1$, it follows from Deligne's resolution of the Weil conjectures \cite{deligne} that $$ |\#W(\mathbb{F}_p)-(p^{e}+p^{e-1}+\cdots +1)|= O_{d,m}(p^{\frac{e}{2}}), $$ where $d$ is the degree of $W$. In particular, since $$ \#W(\mathbb{F}_p)=\frac{\#\hat W(\mathbb{F}_p)-1}{p-1}, $$ we deduce that \begin{equation}\label{eq:deligne} \#\hat W (\mathbb{F}_p)=p^{e+1} + O_{d,m}(p^{\frac{e+2}{2}}). \end{equation} In our setting we have $e=n-3$ for $V$ and $e=n-4$ for $V_\mathbf{m}$ if $p\nmid \mathbf{m}$. We may now record the following inequalities. 
\begin{lemma}\label{lem:r=1} We have $$ \mathcal{D}_p(\mathbf{m})\ll \begin{cases} p^{\frac{n-2}{2}}, & \mbox{if $p\nmid G(\mathbf{m})$, }\\ p^{\frac{n-1}{2}}, & \mbox{if $p\mid G(\mathbf{m})$ and $p\nmid \mathbf{m}$, }\\ p^{n-2}, & \mbox{if $p\mid \mathbf{m}$.} \end{cases} $$ \end{lemma} \begin{proof} Without loss of generality we may assume that $p\nmid \Delta_{V}$, since otherwise the result is trivial. Our starting point is \eqref{eq:goat}. If $p\mid \mathbf{m}$ then $\mathcal{D}_{p}(\mathbf{m})=\#\hat V(\mathbb{F}_{p})$ and the claim follows from \eqref{eq:deligne}. If $p\nmid G(\mathbf{m})$, so that $V_{\mathbf{m}}$ is non-singular modulo $p$, then an application of \eqref{eq:deligne} yields \begin{align*} \mathcal{D}_p(\mathbf{m}) &=\Big(1-\frac{1}{p}\Big)^{-1}\left( p^{n-3}+O(p^{\frac{n-2}{2}}) -p^{-1}(p^{n-2}+O(p^{\frac{n-1}{2}}))\right)= O(p^{\frac{n-2}{2}}), \end{align*} if $n\geq 5$. When $n=4$ this is trivial since then $\#V_\mathbf{m}(\mathbb{F}_p)=O(1)$. This establishes the claim. Finally, if $p\mid G(\mathbf{m})$ and $p\nmid \mathbf{m}$, then $V_\mathbf{m}$ is singular and of codimension $1$ in $V$ modulo $p$. By a result of Zak (see Theorem 2 in \cite[Appendix]{hooley}), the singular locus of $V_\mathbf{m}$ has projective dimension $0$. Hence the work of Hooley \cite{hooley} yields $\# \hat V_\mathbf{m}(\mathbb{F}_{p})=p^{n-3}+O(p^{\frac{n-1}{2}})$, which once inserted into \eqref{eq:goat} yields the desired inequality. \end{proof} We now turn our attention to higher prime powers. Let $d=p^r$ for $r\geq 2$ and suppose that $G(\mathbf{m})\neq 0$. We assume that $p\nmid \Delta_V$ and $p\nmid \mathbf{m}$. Then it is easy to see that $$ \mathcal{D}_{p^r}(\mathbf{m})=\sum_{\substack{\mathbf{x} \in \hat V\left(\mathbb{Z}\slash p^r\mathbb{Z} \right)\\p\nmid \mathbf{x}}}e_{p^r}\left(\mathbf{m}.\mathbf{x}\right). 
$$ Mimicking the argument leading to \eqref{eq:goat}, a line of attack that we already met in the proof of Lemma \ref{rho(d)}, we deduce from the explicit formula for the Ramanujan sum that $$ \phi(p^r)\mathcal{D}_{p^r}(\mathbf{m})=\sideset{}{^*}\sum_{a\bmod{p^r}}\sum_{\substack{\mathbf{x} \in \hat V\left(\mathbb{Z}\slash p^r\mathbb{Z} \right)\\p\nmid \mathbf{x}}}e_{p^r}\left(a\mathbf{m}.\mathbf{x}\right)=p^r\sum_{\substack{\mathbf{x} \in \hat V\left(\mathbb{Z}\slash p^r\mathbb{Z} \right)\\p^r|\mathbf{m}.\mathbf{x}\\p\nmid \mathbf{x}}}1-p^{r-1}\sum_{\substack{\mathbf{x} \in \hat V\left(\mathbb{Z}\slash p^r\mathbb{Z} \right)\\p^{r-1}|\mathbf{m}.\mathbf{x}\\p\nmid \mathbf{x}}}1. $$ In the second sum we write $\mathbf{x}=\mathbf{y}+p^{r-1}\mathbf{z}$ with $\mathbf{y} \bmod{p^{r-1}}$ and $\mathbf{z} \bmod{p}$, to get $$ \sum_{\substack{\mathbf{x} \in \hat V\left(\mathbb{Z}\slash p^r\mathbb{Z} \right)\\p^{r-1}|\mathbf{m}.\mathbf{x}\\p\nmid \mathbf{x}}}1=\sum_{\substack{\mathbf{y} \in \hat V\left(\mathbb{Z}\slash p^{r-1}\mathbb{Z} \right)\\p^{r-1}|\mathbf{m}.\mathbf{y}\\p\nmid \mathbf{y}}}\#\{\mathbf{z}: \;Q_i(\mathbf{y}+p^{r-1}\mathbf{z})\equiv 0 \bmod{p^r},\;\;\text{for}\;\;i=1,2\}. $$ Since $p\nmid \Delta_V$, the number of admissible $\mathbf{z} \bmod{p}$ is exactly $p^{n-2}$. Setting $$ N(p^j,\mathbf{m})=\#\{\mathbf{x}\in \hat V\left(\mathbb{Z}\slash p^j\mathbb{Z} \right):\;p\nmid \mathbf{x},\;\;\mathbf{m}.\mathbf{x}\equiv 0 \bmod{p^j}\}, $$ we get $$ \mathcal D_{p^r}(\mathbf{m})=\frac{p^r}{\phi(p^r)}\left\{N(p^r,\mathbf{m})-p^{n-3}N(p^{r-1},\mathbf{m})\right\}. $$ In particular, an application of Hensel's lemma yields the following conclusion. \begin{lemma}\label{lem:r>1} Let $r\geq 2$. Then we have $\mathcal{D}_{p^{r}}(\mathbf{m})=0$ unless $p\mid \Delta_V G(\mathbf{m})$. \end{lemma} We also require a general bound for $\mathcal{D}_{d}(\mathbf{m})$. 
By the orthogonality of characters we may write $$ \mathcal D_{d}(\mathbf{m})=\frac{1}{d^{2}}\sum_{\b\bmod{d}} \mathcal D_{d}(\mathbf{m};\b), $$ where $$ \mathcal D_{d}(\mathbf{m};\b)= \sum_{\k \bmod{d}} e_{d}\left(b_{1}Q_{1}(\k)+b_{2}Q_{2}(\k)+\mathbf{m}.\k\right). $$ We proceed to extract the greatest common divisor $h$ of $\b$ with $d$, writing $d=hd'$ and $\b=h\b'$, with $(d',\b')=1$. Breaking the sum into congruence classes modulo $d'$ we then see that $$ \mathcal D_{d}(\mathbf{m};\b)=\sum_{\k'\bmod{d'}}\sum_{\k'' \bmod{h}} e_{d'}\left(b_{1}'Q_{1}(\k')+b_{2}'Q_{2}(\k')+h^{-1}\mathbf{m}.\k'\right) e_{h}\left(\mathbf{m}.\k''\right). $$ In particular $h$ must be a divisor of $\mathbf{m}$ and, furthermore, if we write $\mathbf{m}=h\mathbf{m}'$ then we have $ \mathcal D_{d}(\mathbf{m};\b)=h^n \mathcal D_{d'}(\mathbf{m}';\b'). $ Applying Lemma \ref{lem:gauss-sum-bound}, we conclude that \begin{equation}\label{eq:bh} |\mathcal{D}_d(\mathbf{m})|\leq \frac{1}{d^2} \sum_{h\mid (d,\mathbf{m})} h^n {d'}^\frac{n}{2} \sideset{}{^{*}}\sum_{\b' \bmod{d'}} \sqrt{K_{d'}(2\mathbf{M}(\mathbf{b}');\mathbf{0})}, \end{equation} in the notation of \eqref{eq:2.1} and \eqref{eq:Mc}. The following result provides a good upper bound for the inner sum, provided that $d'$ does not share a common prime factor with $\Delta_V$. \begin{lemma} \label{lem:technical} For any $\varepsilon>0$ and $e\in \mathbb{N}$ with $(e,\Delta_V)=1$, we have $$ \sideset{}{^{*}}\sum_{\b \bmod{e}} K_{e}(2\mathbf{M}(\mathbf{b});\mathbf{0})\ll e^{2+\varepsilon}. $$ \end{lemma} \begin{proof} Let $g(e)$ denote the sum that is to be estimated and put $U_{e}(\mathbf{b})=K_{e}(2\mathbf{M}(\mathbf{b});\mathbf{0})$. One notes via the Chinese remainder theorem that $g$ is a multiplicative arithmetic function which it will therefore suffice to understand at prime powers $e=p^{r}$, with $p\nmid \Delta_{V}$. We have $$ g(p^{r})= \sum_{\substack{ 0\leq b_{1},b_{2}<p^{r}\\ p\nmid \b}} U_{p^{r}}(\mathbf{b}). 
$$ Viewing $\mathbf{M}(\mathbf{b})$ as a matrix with coefficients in $\mathbb{Z}$, it follows from \eqref{eq:rank} that it has rank $n$ or $n-1$ and, furthermore, that $P(\b)=\det \mathbf{M}(\mathbf{b})$ has non-zero discriminant, as a polynomial in $\mathbf{b}$. For $i=0,1$ we write $\mathcal{B}_i$ for the set of $\mathbf{b}\in \mathbb{Z}^2$ with $0\leq b_1, b_2< p^{r}$ and $p\nmid \b$, for which $\mathbf{M}(\b)$ has rank $n-i$ over $\mathbb{Z}$. We will provide two upper bounds for $U_{p^r}(\mathbf{b})$. We begin with Lemma \ref{lem:smith}, which gives \begin{equation} \label{eq:smith} U_{p^r}(\mathbf{b})\leq p^{r(n-\rho)+\delta_p}, \end{equation} where $\rho$ is the rank of $2\mathbf{M}(\mathbf{b})$ over $\mathbb{Z}$ and $\delta_p$ is the minimum of the $p$-adic orders of the $\rho \times \rho$ non-singular submatrices of $2\mathbf{M}(\mathbf{b})$. Our second estimate for $U_{p^r}(\mathbf{b})$ is based on an analysis of the case $r=1$. Since $p\nmid \Delta_{V}$ it follows that $2\mathbf{M}(\mathbf{b})$ has rank $n$ or $n-1$ modulo $p$. In the former case one obtains $U_{p}(\mathbf{b})=1$ and in the latter case $U_{p}(\mathbf{b})=p$. An application of Hensel's lemma therefore yields \begin{equation}\label{eq:Upr} U_{p^r}(\b) \leq \begin{cases} 1, & \mbox{if $p\nmid \Delta_{V}\det \mathbf{M}(\mathbf{b})$,}\\ p^r, & \mbox{if $p\nmid \Delta_{V}$ and $p\mid \det \mathbf{M}(\mathbf{b})$.} \end{cases} \end{equation} Combining \eqref{eq:smith} and \eqref{eq:Upr} we deduce that $$ U_{p^r}(\b) \leq \begin{cases} p^{\min\{r,v_{p}(P(\b))\}}, & \mbox{if $\b\in \mathcal{B}_{0}$,}\\ p^r, & \mbox{if $\b\in \mathcal{B}_{1}$.} \end{cases} $$ It therefore follows that $$ g(p^{r}) \leq \sum_{\b\in\mathcal{B}_0} p^{\min\{r, v_p(P(\mathbf{b}))\}}+ p^{r}\#\mathcal{B}_1. $$ Now it is clear that there are only $O(1)$ primitive integer solutions of the equation $P(\b)=0$, whence $\#\mathcal{B}_1=O(p^{r})$. 
Moreover, we have $ v_p(P(\mathbf{b}))\leq \Delta$ with $\Delta=rn+O(1)$, for any $\b\in \mathcal{B}_0$. Our investigation so far has shown that for $p\nmid \Delta_{V}$ we have $$ g(p^{r})\ll p^{2r}+ \sum_{\ell= 0}^{\Delta} p^{\min\{\ell, r\}} \#\mathcal{B}_0(\ell), $$ where $\mathcal{B}_0(\ell)$ is the set of $\b\in \mathcal{B}_0$ for which $p^\ell \mid P(\b)$. If $\ell\leq r$ then $$ \#\mathcal{B}_0(\ell)\ll p^{2(r-\ell)} \#\{\b\bmod{p^\ell}: p\nmid \b, ~ P(\b)\equiv 0 \bmod{p^\ell}\}\ll p^{2r-\ell}, $$ since $p$ does not divide the discriminant of $P$. Alternatively, if $\ell>r$ then it follows that $$ \#\mathcal{B}_0(\ell)\ll p^{r}. $$ Putting this all together, we conclude that $$ g(p^{r}) \ll p^{2r}+ \sum_{0\leq \ell\leq r} p^{2r} + \sum_{r<\ell\leq \Delta} p^{2r} \ll r p^{2r}, $$ for $p\nmid \Delta_{V}$. This suffices for the statement of the lemma. \end{proof} Applying Lemma \ref{lem:technical} in \eqref{eq:bh}, we conclude that \begin{align*} \mathcal{D}_{d}(\mathbf{m}) &\ll d^{\frac{n}{2}+\varepsilon} (d,\mathbf{m})^{\frac{n}{2}-2}, \end{align*} if $(d,\Delta_V)=1$. If $d\mid \Delta_V^\infty$, we will merely take the trivial bound $$ |\mathcal{D}_d(\mathbf{m})|\leq \rho(d) \ll d^{n-2+\varepsilon}, $$ which follows from Lemma \ref{rho(d)}. Combining these therefore leads to the following result. \begin{lemma} \label{lem:r rough} For any $\varepsilon>0$ we have $\mathcal D_{d}(\mathbf{m})\ll (d,\Delta_V^\infty)^{\frac{n}{2}-2} d^{\frac{n}{2}+\varepsilon} (d,\mathbf{m})^{\frac{n}{2}-2}$. \end{lemma} We are now ready to record some estimates for the average order of $|\mathcal{D}_{d}(\mathbf{m})|$, as we range over appropriate sets of moduli $d$. Combining Lemma \ref{lem:r=1} with Lemma \ref{lem:r>1} and the multiplicativity property in Lemma \ref{lem:mult2}, we are immediately led to the following conclusion. 
\begin{lemma}\label{lem:dave} For any $\varepsilon>0$ we have $$ \sum_{\substack{d\leq x\\ (d,\Delta_V G(\mathbf{m}))=1}} |\mathcal{D}_d(\mathbf{m})| \ll x^{\frac{n}{2}+\varepsilon}. $$ \end{lemma} Here Lemma \ref{lem:r>1} ensures that only square-free values of $d$ are counted in this sum. Furthermore this result is trivial if $G(\mathbf{m})=0$, in which case we will need an allied estimate. This is provided by the following result. \begin{lemma}\label{lem:dave'} Assume that $G(\mathbf{m})=0$. For any $\varepsilon>0$ we have $$ \sum_{\substack{d\leq x\\ (d,\Delta_V\mathbf{m} )=1}} |\mathcal{D}_d(\mathbf{m})| \ll x^{\frac{n+1}{2}+\varepsilon}. $$ \end{lemma} \begin{proof} We make the factorisation $d=uv$, where $u$ is the square-free part of $d$ and $v$ is the square-full part. In particular both $u$ and $v$ are assumed to be coprime to $\Delta_{V}$ and $\mathbf{m}$. Then Lemma \ref{lem:r=1} yields $ \mathcal{D}_u(\mathbf{m}) \ll u^{\frac{n-1}{2}+\varepsilon}, $ and it follows from Lemma \ref{lem:r rough} that $ \mathcal{D}_v(\mathbf{m}) \ll v^{\frac{n}{2}+\varepsilon}. $ Hence \begin{align*} \sum_{\substack{d\leq x\\ (d,\Delta_V\mathbf{m} )=1}} |\mathcal{D}_d(\mathbf{m})| &\ll \sum_{\substack{uv\leq x}} u^{\frac{n-1}{2}+\varepsilon} v^{\frac{n}{2}+\varepsilon} \\ &\ll x^{\frac{n+1}{2}+\varepsilon} \sum_{\substack{v\leq x}} \frac{1}{v^{\frac{1}{2}}}. \end{align*} On noting that the number of square-full integers $v\leq V$ is $O(V^{\frac{1}{2}})$, this therefore concludes the proof of the lemma. \end{proof} \section{Analysis of $\mathcal{M}_{d,q}(\mathbf{m})$} \label{mcs} It remains to estimate the mixed character sums $\mathcal{M}_{d,q}(\mathbf{m})$, which it will suffice to analyse at prime powers. Our goal in this section will be a proof of the following result. \begin{lemma} \label{lem:mixed-strong'} Assume that $q\mid d^{\infty}$ and $d\mid q^{\infty}$. Let $\varepsilon>0$ and assume Hypothesis-$\rho$. 
Then we have $$ \mathcal{M}_{d,q}(\mathbf{m})\ll (d,(2\det \mathbf{M}_2)^\infty)^{\frac{n}{2}-2} d^{\frac{n}{2}+\varepsilon}q^{\frac{n}{2}+1}. $$ \end{lemma} Our proof of this result is based on an analysis of the sum \begin{align*} \mathcal M_{p^r,p^{\ell}}(\mathbf{m})=\sideset{}{^{*}}\sum_{a \bmod{p^{\ell}}} \sum_{\substack{\mathbf k \bmod{p^{r+\ell}}\\ Q_1(\k)\equiv 0\bmod{p^r}\\Q_2(\k)\equiv 0\bmod{p^r}}} e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right), \end{align*} for integers $r, \ell\geq 1$. We first split the inner sum by replacing $\k$ by $\k+p^r\mathbf{x}$, where $\k$ runs modulo $p^r$ and $\mathbf{x}$ runs modulo $p^{\ell}$. This yields \begin{align} \label{eq:mixed-step1} \mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})= \sum_{\substack{\mathbf k \bmod{p^{r}}\\ Q_1(\k)\equiv 0\bmod{p^r}\\Q_2(\k)\equiv 0\bmod{p^r}}} S(\k), \end{align} where $$ S(\k)= \sideset{}{^{*}}\sum_{a \bmod{p^{\ell}}} e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right) \sum_{\mathbf{x} \bmod{p^{\ell}}} e_{p^{\ell}}\left(aQ_2(\mathbf{x})p^r+a\nabla Q_2(\k).\mathbf{x}+\mathbf{m}.\mathbf{x}\right). $$ We will argue differently according to which of $r$ or $\ell$ is largest. Recall that $Q^*_2$ is the dual of $Q_2$, with matrix $\mathbf{M}_2^*=(\det \mathbf{M}_2)\mathbf{M}_2^{-1}$. Lemma~\ref{lem:mixed-strong'} is a straightforward consequence of the following pair of results and the multiplicativity property in Lemma \ref{lem:mult2}. \begin{lemma} \label{lem:mixed-strong-1} Suppose that $\ell>r$. Then $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})=0$ unless $p^{r}\mid Q^*_2(\mathbf{m})$ or $p\mid 2\det \mathbf{M}_2$, in which case $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})\ll p^{\ell+\frac{n}{2}(\ell+r)}$. \end{lemma} \begin{proof} In the inner sum of $S(\k)$ we take $\mathbf{x}=\mathbf{y}+p^{\ell-r}\mathbf{z}$, where $\mathbf{y}$ runs modulo $p^{\ell-r}$ and $\mathbf{z}$ runs modulo $p^r$. 
This gives $$ \sum_{\mathbf{y} \bmod{p^{\ell-r}}} e_{p^{\ell}}\left(aQ_2(\mathbf{y})p^r+a\nabla Q_2(\k).\mathbf{y}+ \mathbf{m}.\mathbf{y}\right)\sum_{\mathbf{z} \bmod{p^r}} e_{p^{r}}\left(a\nabla Q_2(\k).\mathbf{z}+\mathbf{m}.\mathbf{z}\right), $$ for the sum over $\mathbf{x} \bmod{p^{\ell}}$. The sum over $\mathbf{z}$ vanishes unless \begin{equation}\label{m:7} a\nabla Q_2(\k)+\mathbf{m} \equiv \mathbf{0} \bmod{p^r}. \end{equation} Recall from the conditions of summation in \eqref{eq:mixed-step1} that $p^r\mid Q_2(\k)$. In particular, if $p\nmid 2\det \mathbf{M}_2$, then it follows that $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})=0$ unless $p^{r}\mid Q^*_2(\mathbf{m})$, as required for the first part of the lemma. For the second part, we let $\v\in \mathbb{Z}^n$ be such that $a\nabla Q_2(\k)+\mathbf{m} =p^r\v$. Then we have $$ S(\k)=p^{nr} \sum_{a \in A(\k)} e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right) \sum_{\mathbf{y} \bmod{p^{\ell-r}}} e_{p^{\ell-r}}\left(aQ_2(\mathbf{y})+\v.\mathbf{y}\right), $$ where $A(\k)$ denotes the set of $a\in(\mathbb{Z}/p^\ell\mathbb{Z})^*$ such that \eqref{m:7} holds. Applying Lemma \ref{lem:gauss-sum-bound} and then Lemma \ref{lem:smith} we conclude that $$ |S(\k)|\leq\sum_{a \in A(\k)} p^{nr+\frac{n}{2}(\ell-r)} \sqrt{K_{p^{\ell-r}}(2\mathbf{M}_2;\mathbf{0})}\ll \sum_{a \in A(\k)} p^{nr+\frac{n}{2}(\ell-r)}. $$ Inserting this into \eqref{eq:mixed-step1} therefore gives $$ \mathcal{M}_{p^r,p^{\ell}}(\mathbf{m}) \ll p^{\frac{n}{2}(\ell+r)}\sideset{}{^{*}}\sum_{a \bmod{p^{\ell}}} K_{p^r}(2a\mathbf{M}_2; -\mathbf{m}). $$ A further application of Lemma \ref{lem:smith} therefore gives the bound in the lemma. \end{proof} \begin{lemma} \label{lem:mixed-strong-2} Suppose that $\ell\leq r$ and assume Hypothesis-$\rho$. 
Then $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})=0$ unless $p^{\ell}\mid Q^*_2(\mathbf{m})$ or $p\mid 2\det \mathbf{M}_2$, in which case $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})\ll p^{\ell+\frac{n}{2}(\ell+r)} (p,2\det \mathbf{M}_2)^{\frac{nr }{2}-2+\varepsilon}$. \end{lemma} \begin{proof} The expression in \eqref{eq:mixed-step1} now features $$ S(\k)= \sideset{}{^{*}}\sum_{a\bmod{p^{\ell}}}e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right)\sum_{\mathbf{x} \bmod{p^{\ell}}} e_{p^{\ell}}\left(a\nabla Q_2(\k).\mathbf{x}+\mathbf{m}.\mathbf{x}\right). $$ The sum over $\mathbf{x}$ vanishes unless \begin{equation}\label{m:7'} a\nabla Q_2(\k)+\mathbf{m} \equiv \mathbf{0} \bmod{p^\ell}. \end{equation} Recall that $p^r\mid Q_2(\k)$ in \eqref{eq:mixed-step1}, which implies that $p^\ell\mid Q_2(\k)$ since $r\geq \ell$. If $p\nmid 2\det \mathbf{M}_2$, it follows from \eqref{m:7'} that $$ a\k\equiv -\overline{2\det \mathbf{M}_2}\mathbf{M}_2^{*}\mathbf{m} \bmod{p^\ell}, $$ whence $p^{\ell}\mid Q_{2}^{*}(\mathbf{m})$, as required for the first part of the lemma. For the second part we deduce that $$ S(\k)=p^{n\ell} \sideset{}{^{*}}\sum_{\substack{a\bmod{p^{\ell}}\\ \scriptsize{\mbox{\eqref{m:7'} holds}}}} e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right). $$ Re-introducing the sum over $\k$ and using exponential sums to detect the divisibility constraints $p^{r-\ell}\mid p^{-\ell}Q_i(a\k)$, which are clearly equivalent to $p^{r-\ell}\mid p^{-\ell}Q_i(\k)$ when $a$ is coprime to $p$, we deduce that \begin{equation}\label{m:8} \mathcal{M}_{p^r,p^\ell}(\mathbf{m})= \frac{p^{n\ell}}{p^{2(r-\ell)}} \sum_{\b \bmod{p^{r-\ell}}} T(\b), \end{equation} where $$ T(\b)= \sideset{}{^{*}}\sum_{\substack{a\bmod{p^{\ell}}}} \sum_{\substack{ \k \in K}} e_{p^{r+\ell}}\left(aQ_2(\k)+\mathbf{m}.\k\right) e_{p^{r}}\left(b_1Q_1(a\k)+b_2Q_2(a\k)\right), $$ and $K$ denotes the set of $\k \bmod{p^r}$ for which \eqref{m:7'} holds and $Q_i(\k)\equiv 0 \bmod{p^{\ell}}$, for $i=1,2$. 
We proceed by writing $a\k=\mathbf{x}+p^\ell \mathbf{y}$, for $\mathbf{y}$ modulo $p^{r-\ell}$. Let $\bar{a}$ denote the multiplicative inverse of $a$ modulo $p^\ell$, which lifts to a unique point modulo $p^{r+\ell}$. This leads to the expression $$ T(\b)= \sideset{}{^{*}}\sum_{\substack{a\bmod{p^{\ell}}}} \sum_{\substack{ \mathbf{x} \bmod{p^\ell} \\ \bar{a}\nabla Q_2(\mathbf{x})+\mathbf{m} \equiv \mathbf{0} \bmod{p^\ell}\\ Q_i(\mathbf{x})\equiv 0\bmod{p^\ell} }} \sum_{\substack{ \mathbf{y} \bmod{p^{r-\ell}}}} f(\mathbf{x},\mathbf{y}), $$ for $i=1,2$, with \begin{align*} f(\mathbf{x},\mathbf{y}) &= e_{p^{r+\ell}}\left(\bar{a}Q_2(\mathbf{x}+p^\ell \mathbf{y})+\mathbf{m}.(\mathbf{x}+p^\ell \mathbf{y})\right) e_{p^{r}}\left(b_1Q_1(\mathbf{x}+p^\ell \mathbf{y})+b_2Q_2(\mathbf{x}+p^\ell \mathbf{y})\right). \end{align*} Recall the notation $\mathbf{M}(\b)$ introduced in \eqref{eq:Mc}. One concludes that $$ \left| \sum_{\substack{ \mathbf{y} \bmod{p^{r-\ell}}}} f(\mathbf{x},\mathbf{y})\right|\leq \left|\sum_{\substack{ \mathbf{y} \bmod{p^{r-\ell}}}} e_{p^{r-\ell}}\left(Q(\mathbf{y})+\mathbf{n}.\mathbf{y}\right) \right|, $$ with $\mathbf{n}=p^{-\ell}(\bar{a}\nabla Q_2(\mathbf{x})+\mathbf{m})+2\mathbf{M}(\b)\mathbf{x}$ and $$ Q(\mathbf{y})=\bar{a}Q_2(\mathbf{y})+p^\ell \left(b_1Q_1(\mathbf{y})+b_2Q_2(\mathbf{y})\right). $$ This quadratic form has underlying matrix $\mathbf{M}(p^\ell b_1,p^\ell b_2+\bar{a})$. The number of $\mathbf{x} \bmod{p^\ell}$ appearing in our expression for $T(\b)$ is $O(1)$ by Lemma \ref{lem:smith}. Applying Lemma \ref{lem:gauss-sum-bound}, we deduce that $$ T(\b)\ll p^{\frac{(r-\ell)n}{2}} \sideset{}{^{*}}\sum_{\substack{a\bmod{p^{\ell}}}} \sqrt{K_{p^{r-\ell}}(2\mathbf{M} (p^\ell b_1,p^\ell b_2+\bar{a}); \mathbf{0})}. $$ As $b_2$ runs modulo $p^{r-\ell}$ and $a$ runs over elements modulo $p^\ell$ which are coprime to $p$, so $c_2=p^\ell b_2+\bar{a}$ runs over a complete set of residue classes modulo $p^r$. 
Replacing $b_1$ by $b_1c_2$, and recalling \eqref{m:8}, we obtain \begin{align*} \mathcal{M}_{p^r,p^\ell}(\mathbf{m}) &\ll \frac{p^{\frac{n}{2}(\ell+r)}}{p^{2(r-\ell)}} \sum_{b_1 \bmod{p^{r-\ell}}} ~\sideset{}{^{*}}\sum_{\substack{c_2\bmod{p^{r}}}} \sqrt{K_{p^{r-\ell}}(2\mathbf{M} (p^\ell b_1c_2,c_2); \mathbf{0})}\\ &\ll \frac{p^{\ell+\frac{n}{2}(\ell+r)}}{p^{r-\ell}} \sum_{b_1 \bmod{p^{r-\ell}}} \sqrt{K_{p^{r-\ell}}(2\mathbf{M} (p^\ell b_1,1); \mathbf{0})}. \end{align*} It will be convenient to put $\delta=v_p(2^n \det \mathbf{M}_2)$. We may assume that $\ell>\delta$. Indeed, if $\ell\leq \delta$ then we may take the trivial bound $S(\k)=O(1)$ in \eqref{eq:mixed-step1}. Applying Hypothesis-$\rho$ we go on to deduce that $\mathcal{M}_{p^r,p^{\ell}}(\mathbf{m})=O(p^{r(n-2)+\varepsilon})$, which is satisfactory. Using Taylor's formula we may write \begin{align*} \det 2\mathbf{M} (p^\ell b_1,1) &=p^\ell f(b_1)+\det 2\mathbf{M}(0,1)\\ &= p^\ell f(b_1)+2^n\det \mathbf{M}_2, \end{align*} for an appropriate polynomial $f(b_1)$ with integer coefficients. Viewing $b_1$ as an element of $\mathbb{Z}$, it follows that $p^\ell f(b_1)+2^n\det \mathbf{M}_2\neq 0$, since $\ell>\delta$. Hence $$ v_p\left(\det 2\mathbf{M}(p^\ell b_1,1)\right)= \delta $$ and Lemma \ref{lem:smith} yields $K_{p^{r-\ell}}(2\mathbf{M} (p^\ell b_1,1); \mathbf{0})\ll 1$. The overall contribution to $\mathcal{M}_{p^r,p^\ell}(\mathbf{m})$ from this case is therefore $O(p^{\ell+\frac{n}{2}(\ell+r)})$, which is satisfactory. \end{proof} \section{Proof of Theorem \ref{th1}: initial steps} \label{pt1} We henceforth assume that $n\geq 5$. From Lemma~\ref{psum} we have $$ S_{T,\a}^\sharp(B)=\left(1 +O_N(B^{-N})\right) \frac{B^{n-2}}{4^n}\sum_{\mathbf{m}\in \mathbb Z^n} \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \frac{\chi(d)}{d^{n-1}} \sum_{q=1}^\infty\frac{1}{q^n} T_{d,q}(\mathbf{m})I_{d,q}(\mathbf{m}), $$ for any $N>0$. 
We expect that the main term of the sum comes from the zero frequency $\mathbf{m}=\textbf 0$. This we will compute explicitly in \S \ref{s:conclusion} and it will turn out to have size $B^{n-2}$, as expected. Our immediate task, however, is to produce a satisfactory upper bound for the contribution from the non-zero frequencies. In view of the properties of $I_{d,q}(\mathbf{m})$ recorded in \S \ref{prelim} the sums over $d$ and $q$ are effectively restricted to $d\ll B$ and $q\ll Q$, respectively. Moreover, Lemma \ref{ubI_q} implies that the contribution of the tail $|\mathbf{m}|>dQB^{-1+\varepsilon}$ is arbitrarily small. Finally, Lemma \ref{rho(d)} confirms Hypothesis-$\rho$ for the quadratic forms considered here. As reflected in the various estimates collected together in \S\S \ref{sec:qsum}--\ref{mcs}, the behaviour of the exponential sum $T_{d,q}(\mathbf{m})$ will depend intimately on $\mathbf{m}$. We must therefore give some thought to the question of controlling the number of $\mathbf{m}\in \mathbb{Z}^n$ which are constrained in appropriate ways. The constraints that feature in our work are of three basic sorts: either $Q_2^*(\mathbf{m})=0$ or $G(\mathbf{m})=0$ or $(-1)^{\frac{n-1}{2}}Q_2^*(\mathbf{m})=\square$, the latter case only being distinct from the first case when $n$ is odd. The first two cases correspond to averaging $\mathbf{m}$ over rational points $[\mathbf{m}]$ belonging to a projective variety $W\subset \mathbb{P}^{n-1}$, with $W$ equal to the quadric $Q_2^*=0$ or the dual hypersurface $V^*$, respectively. For such $W$ we claim that \begin{equation} \label{eq:count} \#\left\{ \mathbf{m} \in \mathbb{Z}^n:~ [\mathbf{m}]\in W(\mathbb{Q}), ~|\mathbf{m}|\leq M\right\} \ll M^{n-2+\varepsilon}, \end{equation} for any $M\geq 1$ and $\varepsilon>0$. When $W$ is the quadric, in which case we recall that $Q_2^*$ is non-singular, this follows from Lemma \ref{m:5}. 
When $W=V^*$ then our discussion in \S \ref{geometry} shows that $W$ is an irreducible hypersurface of degree $4(n-2)\geq 12$. Hence the desired bound follows directly from joint work of the first author with Heath-Brown and Salberger \cite[Corollary 2]{bhbs}. Finally, we note that \begin{equation} \label{eq:count'} \#\left\{ \mathbf{m} \in \mathbb{Z}^n:~ (-1)^{\frac{n-1}{2}}Q_2^*(\mathbf{m})=\square, ~|\mathbf{m}|\leq M\right\} \ll M^{n-1+\varepsilon}, \end{equation} for any $M\geq 1$ and $\varepsilon>0$. Indeed, the contribution from $\mathbf{m}$ for which $Q_2^*(\mathbf{m})=0$ is satisfactory by \eqref{eq:count} and the remaining contribution leads us to count points of height $O(M)$ on a non-singular quadric in $n+1$ variables, for which we may appeal to Lemma \ref{m:5}. We may now return to the task of estimating the contribution to $ S_{T,\a}^\sharp(B)$ from $\mathbf{m}$ for which $0<|\mathbf{m}|\leq dQB^{-1+\varepsilon}=\sqrt{d}B^\varepsilon$. In this endeavour it will suffice to study the expression \begin{equation} \label{eq:AAsum} U_{T,\a}(B,D)=B^{n-2}\sum_{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}} \sideset{}{'}\sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \frac{1}{d^{n-1}}\left|\sum_{q}\frac{1}{q^n} T_{d,q}(\mathbf{m})I_{d,q}(\mathbf{m})\right|, \end{equation} for $D\geq 1$, where $\sum'$ indicates that the sum should be taken over odd integers only and the notation $d\sim D$ means $D/2<d\leq D$. In our analysis of this sum we will clearly only be interested in values of $D\ll B$. However, for the time being we allow $D\geq 1$ to be an arbitrary parameter. Recall the definition \eqref{eq:NM} of the non-zero integer $N$. We split $q$ as $\delta q$ with $(q,dN)=1$ and $\delta\mid (dN)^{\infty}$. Since $q$ is restricted to have size $O(Q)$ in \eqref{eq:AAsum}, by the properties of $I_{d,q}(\mathbf{m})$ recorded in \S \ref{prelim}, we may assume that $\delta\ll B$. 
We deduce from the multiplicativity relations Lemma \ref{lem:mult1} and Lemma \ref{lem:mult2} that $$ U_{T,\a}(B,D) \leq B^{n-2} \hspace{-0.4cm} \sum_{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}} \sideset{}{'}\sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \frac{1}{d^{n-1}}\sum_{\substack{\delta\mid (dN)^{\infty} \\\delta\ll B}} \frac{|T_{d,\delta}(\mathbf{m})|}{\delta^n}\left| \sum_{\substack{q \\ (q,dN)=1}}\frac{1}{q^n} \mathcal{Q}_{q}(\mathbf{m})I_{d,\delta q}(\mathbf{m})\right|. $$ To estimate the inner sum over $q$ we see via partial summation that it is \begin{align*} -\int_{1}^{\infty}\left(\sum_{\substack{q\leq y\\ (q,dN)=1}} \mathcal{Q}_{q}(\mathbf{m})\right) \frac{\partial}{\partial y}\left(\frac{I_{d,\delta y}(\mathbf{m})}{y^n}\right)\d y. \end{align*} The integral is over $y \leq c Q/\delta$, for some absolute constant $c>0$. Define the quantities $$ \theta_{1}(n;\mathbf{m})=\begin{cases} \frac{7}{16}, & \mbox{if $2\nmid n$ and $(-1)^{\frac{n-1}{2}}Q_{2}^{*}(\mathbf{m})\neq \square$,}\\ 0, &\mbox{otherwise,} \end{cases} $$ and $$ \theta_2(n;\mathbf{m})=\begin{cases} 1, &\mbox{ if $Q_{2}^{*}(\mathbf{m})=0$ and $(-1)^{\frac{n}{2}}\det\mathbf{M}_{2}=\square$,}\\ \frac{1}{2}, &\mbox{ if $Q_{2}^{*}(\mathbf{m})=0$ and $(-1)^{\frac{n}{2}}\det \mathbf{M}_{2}\neq\square$,}\\ \frac{1}{2}, &\mbox{ if $Q_{2}^{*}(\mathbf{m})\neq 0$ and $(-1)^{\frac{n-1}{2}}Q_{2}^{*}(\mathbf{m})=\square$,}\\ 0, & \mbox{otherwise.} \end{cases} $$ According to our conventions we note that the first case in the definition of $\theta_{2}(n;\mathbf{m})$ only arises for even $n$ and likewise the third case only arises for odd $n$. 
Drawing together Lemmas~\ref{lem:q-triv}, \ref{lem:0} and \ref{lem:2}, and using Lemma \ref{ubI_q2}, we therefore obtain the estimate \begin{align*} &\ll |\mathbf{m}|^{\theta_{1}(n;\mathbf{m})} (dN)^\varepsilon \int_{1}^{cQ/\delta} y^{\frac{n}{2}+1+\theta_{2}(n;\mathbf{m})+\varepsilon} \left| \frac{\partial}{\partial y} \left(\frac{I_{d,\delta y}(\mathbf{m})}{y^n}\right)\right|\d y\\ &\ll \left(\frac{d\delta}{B|\mathbf{m}|}\right)^{\frac{n}{2}-1} |\mathbf{m}|^{\theta_{1}(n;\mathbf{m})} (dNB)^\varepsilon \int_{1}^{cQ/\delta} y^{\frac{n}{2}+1+\theta_{2}(n;\mathbf{m})} \cdot y^{-\frac{n}{2}-2} \d y, \end{align*} for the above integral. Let \begin{equation} \label{eq:def-theta} \theta_{1}(n)=\begin{cases} 0, &\mbox{if $n$ is even},\\ \frac{7}{16}, & \mbox{if $n$ is odd}, \end{cases}\quad \theta_{2}(n)=\begin{cases} \frac{1}{2}, & \mbox{if $2\mid n$ and $(-1)^{\frac{n}{2}}\det\mathbf{M}_{2}=\square$,}\\ 0, &\mbox{otherwise.} \end{cases} \end{equation} Returning to our initial estimate for $U_{T,\a}(B,D) $ and recalling the definition \eqref{eq:S'} of $S_{d,q}(\mathbf{m})$, we now have everything in place to establish the following result. 
\begin{lemma}\label{lem:Asum} We have $$ U_{T,\a}(B,D) \ll \frac{B^{\frac{n}{2}-1+\varepsilon}}{D^{\frac{n}{2}}} \left( U^{(1)}+U^{(2)}\right), $$ where $$ U^{(1)} = \sum_{\substack{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}\\ (-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})=\square}} \frac{(B/\sqrt{D})^{\frac{1}{2}+\theta_{2}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}} \sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \sum_{\substack{\delta\mid d^{\infty} \\\delta\ll B}} \frac{|S_{d,\delta}(\mathbf{m})|}{\delta^{\frac{n}{2}+1}} $$ and $$ U^{(2)} = \sum_{\substack{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}\\ (-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})\neq \square }} \frac{|\mathbf{m}|^{\theta_{1}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}} \sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \sum_{\substack{\delta\mid d^{\infty} \\\delta\ll B}} \frac{|S_{d,\delta}(\mathbf{m})|}{\delta^{\frac{n}{2}+1}}. $$ \end{lemma} \begin{proof} Our work so far shows that $U_{T,\a}(B,D)\ll C^{(1)}+C^{(2)}$, with \begin{align*} C^{(1)} &= \frac{ B^{n-2+\varepsilon}}{B^{\frac{n}{2}-1}} \sum_{\substack{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}\\ (-1)^{\frac{n-1}{2}}Q_{2}^{*}(\mathbf{m})=\square }} \frac{(B/\sqrt{D})^{\theta_{2}(n;\mathbf{m})}}{|\mathbf{m}|^{\frac{n}{2}-1}} \sideset{}{'} \sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \frac{1}{d^{\frac{n}{2}}} \sum_{\substack{\delta\mid (dN)^{\infty} \\\delta\ll B}} \frac{|T_{d,\delta}(\mathbf{m})|}{\delta^{\frac{n}{2}+1+\theta_{2}(n;\mathbf{m})}} \end{align*} and $$ C^{(2)} = \frac{B^{n-2+\varepsilon} }{B^{\frac{n}{2}-1}} \sum_{\substack{0<|\mathbf{m}|\leq\sqrt{D}B^{\varepsilon}\\ (-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})\neq \square}} \frac{|\mathbf{m}|^{\theta_{1}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}} \sideset{}{'} \sum_{\substack{d\sim D\\ (d,\Delta_V^\infty)\leq \Xi}} \frac{1}{d^{\frac{n}{2}}} \sum_{\substack{\delta\mid (dN)^{\infty} \\\delta\ll B}} \frac{|T_{d,\delta}(\mathbf{m})|}{\delta^{\frac{n}{2}+1}}. 
$$ We note that $\theta_{2}(n;\mathbf{m})=\frac{1}{2}+\theta_{2}(n)$ in $C^{(1)}$; moreover, since $\delta\geq 1$, we are free to take $\frac{n}{2}+1$ for the exponent of $\delta$ there. Drawing together Lemma~\ref{lem:mult1}, \eqref{eq:Sell-upper} and \eqref{cor:Q_p^r}, it follows that $$ \sum_{\substack{\delta\mid N^{\infty}\\\delta\ll B}} \frac{|T_{1,\delta}(\mathbf{m})|}{\delta^{\frac{n}{2}+1}}\ll \sum_{\substack{\delta\mid N^{\infty}\\\delta\ll B}}1\ll (N B)^{\varepsilon}, $$ where the final inequality follows from \eqref{eq:scat}. Thus we can restrict $\delta$ to be a divisor of $d^\infty$ in $C^{(1)}$ and $C^{(2)}$ at the cost of enlarging the bound by $B^{\varepsilon}$. In particular, since $d$ is odd, it follows that $\delta$ is odd and so Lemma \ref{lem:mult1} implies that $T_{d,\delta}(\mathbf{m})=S_{d,\delta}(\mathbf{m})$. Finally, on taking $d>D/2$ in the denominator of both expressions, we arrive at the statement of the lemma. \end{proof} We are now ready to commence our detailed estimation of $U_{T,\mathbf{a}}(B,D)$, based on Lemma~\ref{lem:Asum}. We begin by directing our attention to the estimation of $U^{(2)}$.
Pulling out the greatest common divisor $h$ of $\mathbf{m}$, and then splitting $d=d_1d_2$ and $\delta=\delta_1\delta_2$, with $\delta_1\mid d_1^\infty$, $d_1\mid h^{\infty}$, $\delta_2\mid d_2^\infty$ and $(d_2,h)=1$, it follows that \begin{equation} \label{eq:A2} U^{(2)} = \sum_{0<h\leq\sqrt{D}B^{\varepsilon}}\frac{h^{\theta_{1}(n)}}{h^{\frac{n}{2}-1}} \sum_{\substack{0<|\mathbf{m}|\leq\frac{\sqrt{D}B^{\varepsilon}}{h}\\ (-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})\neq \square\\ \gcd(\mathbf{m})=1}} \frac{|\mathbf{m}|^{\theta_{1}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}} \sum_{\substack{d_1\leq D\\d_1\mid h^{\infty}\\ (d_1,\Delta_V^\infty)\leq \Xi }}\sum_{\substack{\delta_1\mid d_1^{\infty}\\\delta_1\ll B}} \frac{|S_{d_1,\delta_1}(h\mathbf{m})|}{\delta_1^{\frac{n}{2}+1}} \Sigma_1, \end{equation} where if $\Xi_{d_1}=\Xi/(d_1,\Delta_V^\infty)$, then $$ \Sigma_1=\sum_{\substack{d_2\sim \frac{D}{d_1}\\(d_2,h)=1\\ (d_2,\Delta_V^\infty)\leq \Xi_{d_1} }}\sum_{\substack{\delta_2\mid d_2^{\infty}\\\delta_2\ll B}} \frac{|S_{d_2,\delta_2}(h\mathbf{m})|}{\delta_2^{\frac{n}{2}+1}}. $$ Here we recall from \S \ref{prelim} that $S_{d_2,\delta_2}(h\mathbf{m})=S_{d_2,\delta_2}(\mathbf{m})$ since $(\delta_2d_2,h)=1$. Now set $$ H(\mathbf{m})= \begin{cases} \Delta_V \det \mathbf{M}_2 G(\mathbf{m})Q^*_2(\mathbf{m}), & \mbox{if $G(\mathbf{m})\neq 0$,}\\ \Delta_V \det \mathbf{M}_2 Q^*_2(\mathbf{m}), & \mbox{if $G(\mathbf{m})=0$,} \end{cases} $$ where $G$ is the dual form introduced in \S \ref{geometry}. Note that $Q_{2}^{*}(\mathbf{m})\neq 0$ in this definition, so that $H(\mathbf{m})$ is a non-zero integer. We further split $d_2=d_{21}d_{22}$ and $\delta_2=\delta_{21}\delta_{22}$ with $\delta_{21}\mid d_{21}^\infty$, $d_{21}\mid H(\mathbf{m})^{\infty}$, $\delta_{22}\mid d_{22}^\infty$ and $(d_{22},H(\mathbf{m}))=1$. 
It follows that $$ \Sigma_1 \leq \sum_{\substack{d_{21} \leq \frac{D}{d_1}\\d_{21}\mid H(\mathbf{m})^{\infty}\\ (d_{21},h)=1\\ (d_{21},\Delta_V^\infty)\leq \Xi_{d_1} }}\sum_{\substack{\delta_{21}\mid d_{21}^{\infty}\\\delta_{21}\ll B}}\sum_{\substack{d_{22}\sim \frac{D}{d_1d_{21}}\\(d_{22},hH(\mathbf{m}))=1}} \sum_{\substack{\delta_{22}\mid d_{22}^{\infty}\\\delta_{22}\ll B}} \frac{|S_{d_{21},\delta_{21}}(\mathbf{m})||S_{d_{22},\delta_{22}}(\mathbf{m})|}{(\delta_{21}\delta_{22})^{\frac{n}{2}+1}}. $$ In view of the fact that $(d_{22},2\det \mathbf{M}_2 Q^*_2(\mathbf{m}))=1$, it follows from Lemmas \ref{lem:mixed-strong-1} and \ref{lem:mixed-strong-2} that $S_{d_{22},\delta_{22}}(\mathbf{m})$ vanishes unless $\delta_{22}=1$. Hence we may conclude that the sum over $d_{22}$ and $\delta_{22}$ is $$ \sum_{\substack{d_{22}\sim \frac{D}{d_1d_{21}}\\ (d_{22},hH(\mathbf{m}))=1}}|\mathcal{D}_{d_{22}}(\mathbf{m})| \ll \left(\frac{D}{d_1d_{21}}\right)^{\frac{n}{2}+\psi_1(\mathbf{m})+\varepsilon}, $$ by Lemmas \ref{lem:dave} and \ref{lem:dave'}, where $$ \psi_{1}(\mathbf{m})= \begin{cases} \frac{1}{2}, &\mbox{if $G(\mathbf{m})=0$,}\\ 0, &\mbox{otherwise.} \end{cases} $$ It follows that $$ \Sigma_1 \ll \left(\frac{D}{d_1}\right)^{\frac{n}{2}+\psi_{1}(\mathbf{m})+\varepsilon} \sum_{\substack{d_{21} \leq \frac{D}{d_1}\\d_{21}\mid H(\mathbf{m})^{\infty}\\ (d_{21},h)=1\\ (d_{21},\Delta_V^\infty)\leq \Xi_{d_1} }} \sum_{\substack{\delta_{21}\mid d_{21}^{\infty}\\\delta_{21}\ll B}} \frac{|S_{d_{21},\delta_{21}}(\mathbf{m})|}{d_{21}^{\frac{n}{2}+ \psi_{1}(\mathbf{m})}\delta_{21}^{\frac{n}{2}+1}}. $$ Now there is a factorisation $d_{21}=d_{21}'d_{21}''$ such that $S_{d_{21},\delta_{21}}(\mathbf{m})=\mathcal{M}_{d_{21}',\delta_{21}}(\mathbf{m})\mathcal{D}_{d_{21}''}(\mathbf{m})$, where $\delta_{21}\mid d_{21}'^{\infty}$. 
It therefore follows from Lemma \ref{lem:r rough} and Lemma \ref{lem:mixed-strong'} that $$ S_{d_{21},\delta_{21}}(\mathbf{m})\ll (d_{21},\Delta_V^\infty)^{\frac{n}{2}-2} d_{21}^{\frac{n}{2}+\varepsilon}\delta_{21}^{\frac{n}{2}+1}, $$ since $\mathbf{m}$ is primitive. Hence \begin{align*} \Sigma_1 &\ll \Xi_{d_1}^{\frac{n}{2}-2} \left(\frac{D}{d_1}\right)^{\frac{n}{2}+\psi_{1}(\mathbf{m})+\varepsilon} B^{\varepsilon}. \end{align*} Substituting this into \eqref{eq:A2} we now examine \begin{align*} \Sigma_{2} &= \sum_{\substack{d_1\leq D\\d_1\mid h^{\infty}\\ (d_{1},\Delta_V^\infty)\leq \Xi }}\sum_{\substack{\delta_1\mid d_1^{\infty}\\\delta_1\ll B}} \frac{|S_{d_1,\delta_1}(h\mathbf{m})|}{\delta_1^{\frac{n}{2}+1}} \Sigma_1\\ &\ll D^{\frac{n}{2}+\psi_{1}(\mathbf{m})+\varepsilon} B^{\varepsilon} \sum_{\substack{d_1\leq D\\d_1\mid h^{\infty}\\ (d_{1},\Delta_V^\infty)\leq \Xi }} \frac{\Xi^{\frac{n}{2}-2} }{(d_{1},\Delta_V^\infty)^{\frac{n}{2}-2} } \sum_{\substack{\delta_1\mid d_1^{\infty}\\\delta_1\ll B}} \frac{|S_{d_1,\delta_1}(h\mathbf{m})|}{d_{1}^{\frac{n}{2}+\psi_{1}(\mathbf{m})} \delta_1^{\frac{n}{2}+1}}. \end{align*} We repeat the process that we undertook above to estimate $ S_{d_1,\delta_1}(h\mathbf{m})$, using Lemma \ref{lem:mixed-strong'} and Lemma \ref{lem:r rough}. This gives $$ \frac{|S_{d_1,\delta_1}(h\mathbf{m})|}{d_{1}^{\frac{n}{2}+\psi_{1}(\mathbf{m})} \delta_1^{\frac{n}{2}+1}} \ll (d_{1},\Delta_V^\infty)^{\frac{n}{2}-2} d_1^\varepsilon h^{\frac{n}{2}-2-\psi_{1}(\mathbf{m})}. $$ By \eqref{eq:scat} there are only $O(B^{\varepsilon}D^{\varepsilon})$ values of $\delta_{1}$ that feature in this analysis. In this way we arrive at the estimate \begin{equation}\label{eq:sigma2} \Sigma_{2} \ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\psi_{1}(\mathbf{m})+\varepsilon} B^{\varepsilon} h^{\frac{n}{2}-2-\psi_{1}(\mathbf{m})}. \end{equation} It is time to distinguish between whether $G(\mathbf{m})=0$ or $G(\mathbf{m})\neq 0$ in our analysis of $U^{(2)}$. 
Accordingly, let us write $U^{(2)}=U^{(21)}+U^{(22)}$ for the corresponding decomposition. We begin with a discussion of $U^{(22)}$, for which $\psi_{1}(\mathbf{m})=0$ in \eqref{eq:sigma2}. We deduce from \eqref{eq:A2} that \begin{align*} U^{(22)} &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\varepsilon} B^{\varepsilon} \sum_{\substack{0<h\leq \sqrt{D}B^{\varepsilon}}} h^{\theta_{1}(n)-1} \sum_{\substack{0<|\mathbf{m}|\leq\frac{\sqrt{D}B^{\varepsilon}}{h}}} \frac{|\mathbf{m}|^{\theta_{1}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}}\\ &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\varepsilon} B^{\varepsilon} \sum_{\substack{0<h\leq \sqrt{D}B^{\varepsilon}}} h^{\theta_{1}(n)-1} \left(\frac{\sqrt{D}B^{\varepsilon}}{h}\right)^{\frac{n}{2}+1+\theta_1(n)}, \end{align*} on breaking the sum over $\mathbf{m}$ into dyadic intervals for $|\mathbf{m}|$. The sum over $h$ is therefore convergent and we conclude that \begin{equation}\label{eq:A22} \begin{split} U^{(22)} &\ll \Xi^{\frac{n}{2}-2}D^{\frac{n}{2}+\varepsilon} B^{\varepsilon} \left(\sqrt{D}\right)^{\frac{n}{2}+1+\theta_1(n)}\\ &= \Xi^{\frac{n}{2}-2} D^{\frac{3n}{4}+\frac{1+\theta_1(n)}{2}+\varepsilon} B^{\varepsilon}. \end{split} \end{equation} We now turn to a corresponding analysis of $U^{(21)}$, for which $\psi_{1}(\mathbf{m})=\frac{1}{2}$ in \eqref{eq:sigma2}. It follows from \eqref{eq:A2} that \begin{align*} U^{(21)} &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n+1}{2}+\varepsilon}B^{\varepsilon} \sum_{\substack{0<h\leq \sqrt{D}B^{\varepsilon}}} h^{\theta_{1}(n)-\frac{3}{2}} \sum_{\substack{0<|\mathbf{m}|\leq\frac{\sqrt{D}B^{\varepsilon}}{h}\\ G(\mathbf{m})=0}} \frac{|\mathbf{m}|^{\theta_{1}(n)}}{|\mathbf{m}|^{\frac{n}{2}-1}}\\ &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n+1}{2}+\varepsilon}B^{\varepsilon} \max_{\frac{1}{2}<M\leq \sqrt{D}B^{\varepsilon}} M^{\theta_{1}(n)+1-\frac{n}{2}} \sum_{\substack{|\mathbf{m}|\leq M \\ G(\mathbf{m})=0}}1.
\end{align*} Appealing to \eqref{eq:count}, we therefore deduce that \begin{equation}\label{eq:A21} \begin{split} U^{(21)} &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n+1}{2}+\varepsilon} B^{\varepsilon} \left(\sqrt{D}\right)^{\frac{n}{2}-1+\theta_1(n)}\\ &= \Xi^{\frac{n}{2}-2} D^{\frac{3n}{4}+\frac{\theta_1(n)}{2}+\varepsilon} B^{\varepsilon}. \end{split} \end{equation} Our final task in this section is to estimate $U^{(1)}$ in Lemma \ref{lem:Asum}, for which we will be able to recycle most of the treatment of $U^{(2)}$. Following the steps up to \eqref{eq:sigma2} we find that $$ U^{(1)} \ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\varepsilon} \left(\frac{B}{\sqrt{D}}\right)^{\frac{1}{2}+\theta_{2}(n)+\varepsilon} \sum_{\substack{0<|\mathbf{m}|\leq \sqrt{D}B^{\varepsilon}\\ (-1)^{\frac{n-1}{2}}Q^*_2(\mathbf{m})=\square}} D^{\psi_{1}(\mathbf{m})} |\mathbf{m}|^{1-\frac{n}{2}}. $$ One notes that in the absence of the function $\theta_{1}(n)$, the exponent of $h$ is at most $-1$, so that the summation over $h$ can be carried out immediately. As previously it will be necessary to write $U^{(1)}=U^{(11)}+U^{(12)}$, where $U^{(11)}$ denotes the contribution from the case $G(\mathbf{m})= 0$ and $U^{(12)}$ is the remaining contribution. Beginning with the latter, in which case $\psi_{1}(\mathbf{m})=0$, we deduce that \begin{align*} U^{(12)} \ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\varepsilon} \left(\frac{B}{\sqrt{D}}\right)^{\frac{1}{2}+\theta_{2}(n)+\varepsilon} \max_{\frac{1}{2}<M\leq \sqrt{D}B^{\varepsilon}} M^{1-\frac{n}{2}} \sum_{\substack{|\mathbf{m}|\leq M \\ (-1)^{\frac{n-1}{2}}Q_2^*(\mathbf{m})=\square}}1. 
\end{align*} Applying \eqref{eq:count'} we therefore obtain \begin{equation}\label{eq:A12} \begin{split} U^{(12)} &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n}{2}+\varepsilon} \left(\frac{B}{\sqrt{D}}\right)^{\frac{1}{2}+\theta_{2}(n)} B^\varepsilon \left(\sqrt{D}\right)^{\frac{n}{2}} \\&=\Xi^{\frac{n}{2}-2} D^{\frac{3n}{4}-\frac{1}{4}-\frac{\theta_2(n)}{2}+\varepsilon} B^{\frac{1}{2}+\theta_2(n)+\varepsilon}. \end{split} \end{equation} For the remaining contribution, with $\psi_{1}(\mathbf{m})=\frac{1}{2}$, we will drop the fact that $(-1)^{\frac{n-1}{2}}Q_2^*(\mathbf{m})$ should be a square from the sum over $\mathbf{m}$ since there is already sufficient gain from the fact that $G(\mathbf{m})$ vanishes. Arguing as above, but this time with recourse to \eqref{eq:count}, we conclude that \begin{equation}\label{eq:A11} \begin{split} U^{(11)} &\ll \Xi^{\frac{n}{2}-2} D^{\frac{n+1}{2}+\varepsilon} \left(\frac{B}{\sqrt{D}}\right)^{\frac{1}{2}+\theta_{2}(n)} B^\varepsilon \left(\sqrt{D}\right)^{\frac{n}{2}-1}\\ &= \Xi^{\frac{n}{2}-2} D^{\frac{3n}{4}-\frac{1}{4}-\frac{\theta_2(n)}{2}+\varepsilon} B^{\frac{1}{2}+\theta_2(n)+\varepsilon}. \end{split} \end{equation} Recall the definitions \eqref{eq:def-theta} of $\theta_{1}$ and $\theta_{2}$. Combining \eqref{eq:A22}--\eqref{eq:A11} in Lemma \ref{lem:Asum}, we may now record our final bound for $U_{T,\a}(B,D)$. \begin{lemma}\label{lem:Asum'} Let $n\geq 5$ and $ D\geq 1$. Then we have \begin{align*} U_{T,\a}(B,D) \ll \Xi^{\frac{n}{2}-2} B^{\frac{n}{2}-1+\varepsilon} \left( D^{\frac{n}{4}+\frac{1+\theta_1(n)}{2}+\varepsilon} + D^{\frac{n}{4}-\frac{1}{4}-\frac{\theta_2(n)}{2}+\varepsilon} B^{\frac{1}{2}+\theta_2(n)}\right). \end{align*} \end{lemma} \section{Proof of Theorem \ref{th1}: conclusion} \label{s:conclusion} Recall the expression for $S_{T,\a}^\sharp(B)$ recorded at the start of \S \ref{pt1}. We now have everything in place to estimate the overall contribution to this sum from the non-zero $\mathbf{m}$. 
An upper bound for this contribution is obtained by taking $D\ll B$ in Lemma \ref{lem:Asum'}'s estimate for the quantity introduced in \eqref{eq:AAsum}. This gives the overall contribution \begin{equation*} \begin{split} &\ll \Xi^{\frac{n}{2}-2} B^{\frac{n}{2}-1+\varepsilon} \left( B^{\frac{n}{4}+\frac{1+\theta_{1}(n)}{2}} + B^{\frac{n}{4}+\frac{1}{4}+\frac{\theta_{2}(n)}{2}} \right)\\ &\ll \Xi^{\frac{n}{2}-2} B^{\frac{3n}{4}-\frac{1}{2}+\frac{\theta_{1}(n)}{2} +\varepsilon}. \end{split} \end{equation*} Combining this with Lemma \ref{S(B)}, Lemma \ref{lem:flat} and Lemma \ref{psum}, our work so far has shown that \begin{equation}\label{eq:nose} S(B)= M^\sharp(B)+ O(\Xi^{-\frac{1}{n}}B^{n-2+\varepsilon}+\Xi B^{n-3+\varepsilon} + \Xi^{\frac{n}{2}-2} B^{\frac{3n}{4}-\frac{1}{2}+\frac{\theta_{1}(n)}{2} +\varepsilon}), \end{equation} where \begin{equation}\label{eq:main} M^\sharp (B) =\frac{B^{n-2}}{4^{n-1}} \sum_T \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{\substack{d=1\\ (d,\Delta_V^\infty)\leq \Xi}}^\infty \frac{\chi(d)}{d^{n-1}} \sum_{q=1}^\infty\frac{1}{q^n} T_{d,q}(\mathbf{0})I_{d,q}(\mathbf{0}). \end{equation} We begin with a few words about the integral $$ I_{d,q}(\mathbf{0})= \int_{\mathbb R^n} h\left(\frac{q\sqrt{d}}{B}, Q_2(\mathbf{y})\right)W_d(\mathbf{y}) \d \mathbf y, $$ where $W_d$ is given by \eqref{eq:W_d} and we have made the substitution $Q=B/\sqrt{d}$. Recall the correspondence \eqref{eq:I*} between $I_{d,q}(\mathbf{0})$ and $I_r^*(\mathbf{0})$. Recall additionally the properties of $h(x,y)$ and the weight function $W_d$ that were recorded in \S \ref{prelim}. In particular $\nabla Q_2(\mathbf{y})\gg 1$ on $\supp(W_d)$ and we have $d\ll B$ and $q\sqrt{d}\ll B$ if $I_{d,q}(\mathbf{0})$ is non-zero. Combining \cite[Lemma 14]{H} and \cite[Lemma~15]{H} it follows that \begin{equation}\label{eq:I-trivial} I_{d,q}(\mathbf{0})\ll 1. 
\end{equation} Furthermore, according to \cite[Lemma 13]{H}, we have \begin{equation}\label{eq:home} I_{d,q}(\mathbf{0})= \tau_{\infty}(Q_2,W_d)+O_N\left\{\left( \frac{q\sqrt{d}}{B}\right)^N\right\}, \end{equation} for any $N>0$, where for any infinitely differentiable bounded function $\omega: \mathbb{R}^n\rightarrow \mathbb{R}$ of compact support we set \begin{equation}\label{eq:tau-inf} \tau_{\infty}(Q_2,\omega)= \lim_{\varepsilon\rightarrow 0}(2\varepsilon)^{-1}\int_{|Q_2(\mathbf{y})|\leq \varepsilon}\omega(\mathbf{y})\d\mathbf{y}. \end{equation} In fact $\tau_{\infty}(Q_2,\omega)$ is the real density of points on the affine cone over the hypersurface $Q_2=0$, weighted by $\omega$. We will use these facts to extract the dependence on $I_{d,q}(\mathbf{0})$ from \eqref{eq:main}. Returning to \eqref{eq:main}, our main goal in this section will be a proof of the following asymptotic formula. \begin{lemma}\label{lem:main} Let $n\geq 5$, let $\varepsilon>0$ and assume Hypothesis-$\rho$. Then we have $$ M^\sharp(B)= B^{n-2} \sigma_\infty \prod_p \sigma_p +O(\Xi^{-1}B^{n-2}+ B^{n-\frac{5}{2}+\varepsilon}+ B^{\frac{3n}{4}-1+\varepsilon}), $$ where $\sigma_\infty$ and $\sigma_p$ are the expected local densities of points on $X(\mathbb{R})$ and $X(\mathbb{Q}_p)$, respectively. In particular $ \sigma_\infty \prod_p \sigma_p >0 $ if $X(\mathbb{R})$ and $X(\mathbb{Q}_p)$ are non-empty for each prime $p$. \end{lemma} In the context of Theorem \ref{th1}, for which $n\geq 7$, we note that Hypothesis-$\rho$ follows from Lemma \ref{rho(d)}. We now wish to apply Lemma \ref{lem:main} in \eqref{eq:nose} to complete the proof of Theorem~\ref{th1}. Our estimates will be optimised by the choice $\Xi=B^{\xi(n)}$, with $$ \xi(n)=\left(\frac{n}{4}-\frac{3+\theta_1(n)}{2}\right)\left(\frac{2n}{n^2-4n+2}\right), $$ which comes from balancing the first and third error terms in \eqref{eq:nose}. We make the observation that $ \xi(n)<\xi(n)(1+\frac{1}{n})<1, $ for $n\geq 7$. 
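The choice of $\xi(n)$, and the observation just made, can be confirmed in exact rational arithmetic; the following sketch is an illustration of ours and forms no part of the proof. It checks that $\Xi=B^{\xi(n)}$ balances the exponents of $B$ in the first and third error terms of \eqref{eq:nose}:

```python
from fractions import Fraction

def theta1(n):
    # theta_1(n) from (eq:def-theta): 0 for even n, 7/16 for odd n
    return Fraction(7, 16) if n % 2 else Fraction(0)

def xi(n):
    # xi(n) = (n/4 - (3 + theta_1(n))/2) * 2n/(n^2 - 4n + 2)
    return (Fraction(n, 4) - (3 + theta1(n)) / 2) * Fraction(2 * n, n * n - 4 * n + 2)

for n in range(7, 20):
    # With Xi = B^{xi(n)}, the exponents of B in the first and third
    # error terms of (eq:nose) agree exactly:
    first = n - 2 - xi(n) / n
    third = (Fraction(n, 2) - 2) * xi(n) + Fraction(3 * n, 4) - Fraction(1, 2) + theta1(n) / 2
    assert first == third
    # and xi(n) < xi(n)(1 + 1/n) < 1, as observed above
    assert 0 < xi(n) < xi(n) * (1 + Fraction(1, n)) < 1
```

In particular $\xi(7)=\frac{7}{368}$, which is the value relevant to the smallest admissible $n$ in Theorem \ref{th1}.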
Hence we obtain the overall error term $O(B^{n-2-\eta(n)+\varepsilon})$, with $$ \eta(n) =\min\left\{ \frac{\xi(n)}{n}, ~ 1-\xi(n), ~\frac{1}{2}, ~\frac{n}{4}-1 \right\} = \frac{\xi(n)}{n}. $$ Observe that $\eta(n)>0$ if $n\geq 7$. At this point we stress that if we had exponent $\frac{1}{2}$ instead of $\frac{7}{16}$ in \eqref{eq:l-series-bound}, which corresponds to the convexity bound, we would have $\theta_{1}(n)=\frac{1}{2}$ for odd $n$ and hence our result would only hold for $n\geq 8$. This completes the proof of Theorem \ref{th1}, subject to Lemma \ref{lem:main}. \medskip The remainder of this section will be devoted to the proof of Lemma \ref{lem:main}. Combining \eqref{eq:Sell-upper}, \eqref{cor:Q_p^r} and Lemma \ref{lem:mixed-strong'} it follows from Hypothesis-$\rho$ that \begin{equation}\label{eq:hyp-est} T_{d,q}(\mathbf{0})\ll d^{n-2+\varepsilon}q^{\frac{n}{2}+1}, \end{equation} for any $d,q\in \mathbb{N}$. We will also make use of the bound \eqref{eq:I-trivial} and the fact that $d\ll B$ whenever $I_{d,q}(\mathbf{0})$ is non-zero. Let $M(B)$ be defined as in \eqref{eq:main}, but in which the sum over $d$ runs over all positive integers. It follows from \eqref{eq:hyp-est} that $M^\sharp(B)=M(B)+O(\Xi^{-1}B^{n-2+\varepsilon})$. Write $$ M(B) =\frac{B^{n-2}}{4^{n-1}}\sum_T M_T(B), $$ say. For given $\theta>0$, let us consider the contribution to $M_T(B)$ from $q>B^{\frac{1}{2}-\theta}$. Invoking \eqref{eq:hyp-est}, this contribution is seen to be \begin{align*} &\ll \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{d\ll B}\frac{1}{d^{n-1}} \sum_{q>B^{\frac{1}{2}-\theta}}\frac{|T_{d,q}(\mathbf{0})|}{q^n} \\ &\ll \sum_{d\ll B}d^{-1+\varepsilon} \sum_{q>B^{\frac{1}{2}-\theta}} q^{-\frac{n}{2}+1+\varepsilon}\\ &\ll B^{(\frac{1}{2}-\theta)(-\frac{n}{2}+2)+\varepsilon}, \end{align*} since $n\geq 5$. 
Turning to the contribution from $q\leq B^{\frac{1}{2}-\theta}$ we see that the error term in \eqref{eq:home} is $O_N(B^{-N})$ for arbitrary $N>0$, since $d\ll B$. Hence such $q$ make the overall contribution $$ \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{d=1}^\infty\frac{\chi(d)\tau_\infty(Q_2,W_d)}{d^{n-1}} \sum_{q\leq B^{\frac{1}{2}-\theta}}\frac{T_{d,q}(\mathbf{0}) }{q^n} +O_N(B^{-N}), $$ to $M_T(B)$. The previous paragraph shows that the summation over $q$ can be extended to infinity with error $O(B^{(\frac{1}{2}-\theta)(-\frac{n}{2}+2)+\varepsilon})$. Taking $\theta$ to be a suitably small positive multiple of $\varepsilon$, we may therefore conclude that $$ M_T(B)= \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{d=1}^\infty\frac{\chi(d)\tau_\infty(Q_2,W_d)}{d^{n-1}} \sum_{q=1}^\infty \frac{T_{d,q}(\mathbf{0}) }{q^n} +O( B^{-\frac{n}{4}+1+\varepsilon}). $$ Let us denote by $L_T(B;W_d)$ the main term in this expression. We proceed to introduce the summation over $T$ via the following result, in which $\rho(d)=\mathcal{D}_d(\mathbf{0})$. \begin{lemma}\label{lem:d-m0} Let $\varepsilon>0$ and $M\in \mathbb{N}$. Assume Hypothesis-$\rho$. Then for any $1\leq y<x$ we have $$ \sum_{\substack{y<d\leq x\\ (d,M)=1}} \frac{\chi(d) \rho(d)}{d^{n-1}} \ll \frac{M^\varepsilon}{\sqrt{y}}. $$ \end{lemma} \begin{proof} Let $s=\sigma+it\in \mathbb{C}$. In the usual way we consider the Dirichlet series $$ \eta_M(s)=\sum_{(d,M)=1} \frac{\chi(d) \rho(d)}{d^{s}} =\prod_{p\nmid M} \left(1+\frac{\chi(p)\rho(p)}{p^s} +O(p^{2n-4-2\sigma+\varepsilon}) \right), $$ where the error term comes from Hypothesis-$\rho$. Since $\rho(p)=p^{n-2}+O(p^{n-\frac{5}{2}})$, by the Lang--Weil estimate, we conclude that $$ \eta_M(s)=L\left(s-(n-2),\chi\right) E_M(s), $$ where $E_M(s)$ is absolutely convergent and bounded by $O(M^\varepsilon)$ for $\sigma>n-\frac{3}{2}$. 
The conclusion of the lemma is now available through a straightforward application of Perron's formula in the form \eqref{perron}. \end{proof} We deduce from Lemma \ref{lem:d-m0} that $$ \sum_{\substack{(d,q)=1}} \frac{\chi(d) \rho(d) V_T(d)}{d^{n-1}} \ll \frac{ q^\varepsilon }{\sqrt{T}} $$ and $$ \sum_{\substack{(d,q)=1}} \frac{\chi(d) \rho(d) V_T(B^2Q_1(\mathbf{y})/d)}{d^{n-1}} \ll \frac{ q^\varepsilon \sqrt{T}}{B\sqrt{Q_1(\mathbf{y})}} \ll \frac{ q^\varepsilon \sqrt{T}}{B}, $$ for any $\mathbf{y}\in \supp(W)$. Here we recall that $Q_1(\mathbf{y})$ is positive and has order of magnitude $1$ on $\supp(W)$. We now claim that $$ \sum_{T}L_T(B;W_d)=2C +O(B^{-\frac{1}{2}+\varepsilon}), $$ with \begin{equation}\label{eq:CC} C=\tau_\infty(Q_2,W) \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{d=1}^\infty\frac{\chi(d)}{d^{n-1}} \sum_{q=1}^\infty \frac{T_{d,q}(\mathbf{0}) }{q^n}. \end{equation} Now the weight function $W_d$ differs according to whether $T\leq B$ or $T>B$. It will be convenient to set $W^{(1)}(\mathbf{y})=W(\mathbf{y})V_T(d)$ and $W^{(2)}(\mathbf{y})=W(\mathbf{y})V_T(B^2Q_1(\mathbf{y})/d)$. In either case we wish to extend the sum over $T$ to the full range, since $\sum_T V_T(t)=1$ for $1\leq t\ll B^2$. We have $$ \sum_{T\leq B} L_T(B;W_d) = C -\sum_{T>B}L_T(B;W^{(1)}), $$ and $$ \sum_{T> B} L_T(B;W_d) = C -\sum_{T\leq B}L_T(B;W^{(2)}). $$ To estimate the tails we employ the factorisation properties of $T_{d,q}(\mathbf{0})$, finding that $$ L_T(B;W^{(i)})= \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{q=1}^\infty \frac{1}{q^n} \sum_{\substack{\delta \mid q\\ \delta \ll B}} \frac{T_{\delta,q}(\mathbf{0})}{\delta^{n-1}} \sum_{(d,q)=1}\frac{\chi(d)\rho(d)\tau_\infty(Q_2,W^{(i)})}{d^{n-1}}, $$ for $i=1,2$. The claim is now an easy consequence of our hypothesised bound \eqref{eq:hyp-est} and Lemma \ref{lem:d-m0}. 
Bringing everything together, we have therefore shown that \begin{equation}\label{eq:t3} M(B) = \frac{2B^{n-2}}{4^{n-1}} C +O(B^{n-\frac{5}{2}+\varepsilon}+ B^{\frac{3n}{4}-1+\varepsilon}), \end{equation} with $C$ given by \eqref{eq:CC}. We wish to show that the leading constant admits an interpretation in terms of local densities for the intersection of quadrics $X$ considered in Theorem \ref{th1}. For a prime $p$ the relevant $p$-adic density is equal to $$ \sigma_p=\lim_{k\rightarrow \infty} p^{-kn} N(p^k), $$ where $$ N(p^k)= \#\left\{ (\mathbf{x},u,v)\in (\mathbb{Z}/p^k\mathbb{Z})^{n+2}: \begin{array}{l} Q_1(\mathbf{x})\equiv u^2+v^2 \bmod{p^k},\\ Q_2(\mathbf{x})\equiv 0 \bmod{p^k} \end{array} \right\}, $$ if $p>2$, and $$ N(2^k)= \#\left\{ (\mathbf{x},u,v)\in (\mathbb{Z}/2^k\mathbb{Z})^{n+2}: \begin{array}{l} Q_1(\mathbf{x})\equiv u^2+v^2 \bmod{2^k},\\ Q_2(\mathbf{x})\equiv 0 \bmod{2^k}, ~2\nmid Q_1(\mathbf{x}) \end{array} \right\}. $$ The restriction to odd values of $Q_1(\mathbf{x})$ in $N(2^k)$ comes from the definition of the counting function $S(B)$. In order to relate these densities to the local factors that arise in our analysis, we set $$ S(A;p^k)=\#\{(u, v) \in(\mathbb{Z}/p^k\mathbb{Z})^2: u^2+v^2 \equiv A \bmod{p^k}\}, $$ for any $A\in\mathbb{Z}$ and any prime power $p^k$. According to Heath-Brown \cite[\S 8]{h-b03} we have $$ S(A;p^k)=\begin{cases} p^k+kp^k(1-1/p), &\mbox{if $v_p(A)\geq k$},\\ (1+v_p(A))p^{k}(1-1/p), & \mbox{if $v_p(A)<k$}, \end{cases} $$ when $p\equiv 1\bmod{4}$. When $p\equiv 3 \bmod{4}$, we have $$ S(A;p^k)=\begin{cases} p^{2[\frac{k}{2}]}, &\mbox{if $v_p(A)\geq k$},\\ p^{k}(1+1/p), & \mbox{if $v_p(A)<k$ and $2\mid v_p(A)$},\\ 0, & \mbox{if $v_p(A)<k$ and $2\nmid v_p(A)$}. \end{cases} $$ Finally, for odd $A$, when $p=2$ and $k\geq 2$ we have $$ S(A;2^k)=\begin{cases} 2^{k+1}, & \mbox{if $A\equiv 1\bmod{4}$,}\\ 0, & \mbox{otherwise.} \end{cases}$$ We now have everything in place to reinterpret the densities $\sigma_p$. 
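Before doing so, we remark that the closed formulae for $S(A;p^k)$ recorded above can be checked against a direct count for small moduli. The following brute-force sketch is an illustration of ours and is not needed for the argument:

```python
def S_count(A, p, k):
    """Brute-force count of (u, v) mod p^k with u^2 + v^2 = A mod p^k."""
    q = p ** k
    return sum(1 for u in range(q) for v in range(q) if (u * u + v * v - A) % q == 0)

def S_formula(A, p, k):
    """The closed formulae for S(A; p^k) quoted above (A odd when p = 2, k >= 2)."""
    if A == 0:
        v = k
    else:
        v, a = 0, A
        while v < k and a % p == 0:  # v_p(A), truncated at k
            a //= p
            v += 1
    if p % 4 == 1:
        if v >= k:
            return p ** k + k * p ** (k - 1) * (p - 1)
        return (1 + v) * p ** (k - 1) * (p - 1)
    if p % 4 == 3:
        if v >= k:
            return p ** (2 * (k // 2))
        return p ** (k - 1) * (p + 1) if v % 2 == 0 else 0
    return 2 ** (k + 1) if A % 4 == 1 else 0  # p = 2, k >= 2, odd A

for p, k in [(5, 1), (5, 2), (13, 1), (3, 1), (3, 2), (7, 2)]:
    for A in range(p ** k):
        assert S_count(A, p, k) == S_formula(A, p, k)
for A in (1, 3, 5, 7):
    assert S_count(A, 2, 3) == S_formula(A, 2, 3)
```

The loop exhausts all residues $A$ modulo $p^k$ for several prime powers in each of the three cases $p\equiv 1\bmod 4$, $p\equiv 3\bmod 4$ and $p=2$.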
We begin by analysing the case $p=2$, obtaining \begin{equation}\label{eq:sig2} \sigma_2= \lim_{k\rightarrow \infty} 2^{1-k(n-1)}\#\left\{ \mathbf{x}\in (\mathbb{Z}/2^k\mathbb{Z})^{n}: \begin{array}{l} Q_1(\mathbf{x})\equiv 1\bmod{4},\\ Q_2(\mathbf{x})\equiv 0 \bmod{2^k} \end{array} \right\}. \end{equation} Alternatively, when $p>2$, it is straightforward to deduce that \begin{equation}\label{eq:sigp} \sigma_p= \left(1-\frac{\chi(p)}{p}\right) \lim_{k\rightarrow \infty} p^{-k(n-1)} \sum_{0\leq e\leq k} \chi(p^e) \widetilde{N}_k(e), \end{equation} where $$ \widetilde{N}_k(e)= \#\left\{ \mathbf{x}\in (\mathbb{Z}/p^k\mathbb{Z})^{n}: \begin{array}{l} Q_1(\mathbf{x})\equiv 0\bmod{p^e},\\ Q_2(\mathbf{x})\equiv 0 \bmod{p^k} \end{array} \right\}. $$ Finally, for the real density $\sigma_\infty$ of points, we claim that \begin{equation}\label{eq:siginf} \sigma_\infty =\pi \tau_\infty(Q_2,W), \end{equation} in the notation of \eqref{eq:tau-inf}. Supposing that the equations for $X$ are taken to be $Q_1(\mathbf{x})=u^2+v^2$ and $Q_2(\mathbf{x})=0$, the real density is equal to $$ \sigma_\infty =\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{(\mathbf{x},u,v)\in \mathbb{R}^{n+2}} \hspace{-0.7cm} W(\mathbf{x}) e\left( \alpha \{Q_1(\mathbf{x})-u^2-v^2\} +\beta Q_2(\mathbf{x})\right) \d\mathbf{x} \d u \d v\d \alpha \d \beta. $$ We restrict $u,v$ to be non-negative and substitute $t=Q_1(\mathbf{x})-u^2-v^2$ for $v$. Writing $$ F(t)= \frac{1}{2} \int_{-\infty}^\infty \int_{\mathbf{x},u} \frac{W(\mathbf{x}) e\left(\beta Q_2(\mathbf{x})\right) }{\sqrt{Q_1(\mathbf{x})-u^2-t}} \d \mathbf{x} \d u \d \beta, $$ where the integral is over $(\mathbf{x},u)\in \mathbb{R}^{n+1}$ such that $u\geq 0$ and $Q_1(\mathbf{x})-u^2-t\geq 0$, we therefore obtain $$ \sigma_\infty =4 \int_{-\infty}^\infty \int_t F(t)e(\alpha t) \d t\d \alpha. $$ By the Fourier inversion theorem this reduces to $4F(0)$.
Noting that $$ \int_0^{\sqrt{A}} \frac{\d u}{\sqrt{A-u^2}}=\frac{\pi}{2}, $$ for any $A>0$, we arrive at the expression $$ \sigma_\infty=4\times \frac{1}{2}\times \frac{\pi}{2} \int_{-\infty}^\infty \int_{\mathbf{x}\in \mathbb{R}^n} W(\mathbf{x}) e\left(\beta Q_2(\mathbf{x})\right) \d \mathbf{x} \d \beta. $$ But the remaining integral is just the real density $\tau_\infty(Q_2,W)$, by \cite[Theorem 3]{H}. This concludes the proof of \eqref{eq:siginf}. It is now time to interpret the constant $C$ in \eqref{eq:CC} in terms of the local densities $\sigma_p$ and $\sigma_\infty$. Invoking Lemma \ref{lem:mult1} we may write $$ C= \tau_{\infty}(Q_2,W) \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{d=1}^\infty\frac{\chi(d)}{d^{n-1}} \sum_{\ell=0}^\infty \sum_{\substack{q'=1\\ 2\nmid q'}}^\infty \frac{1}{(2^\ell q')^n} S_{d,q'}(\mathbf{0}) S_{1,2^\ell}^{\chi(dq')}(\mathbf{0}). $$ Recall \eqref{eq:Sell}. We therefore see that for fixed $d$ and $q'$ the sum over $\a$ and $\ell$ is \begin{align*} \sum_{\substack{ \a\in (\mathbb Z/4\mathbb Z)^n\\ Q_1(\a)\equiv 1 \bmod{4}}} \sum_{\ell=0}^\infty \frac{1}{2^{\ell n}} S_{1,2^\ell}^{\chi(dq')}(\mathbf{0}) &= \sum_{\ell=0}^\infty \frac{1}{2^{\ell n}} \sideset{}{^{*}}\sum_{a\bmod{2^\ell}} \sum_{ \substack{ \mathbf{k}\bmod{2^{2+\ell}}\\ Q_1(\mathbf{k})\equiv 1 \bmod{4}}} e_{2^{\ell}} \left(a Q_2(\mathbf{k})\right)\\ &= \lim_{\ell \rightarrow\infty} 2^{-\ell(n-1)} \#\left\{ \k\in (\mathbb{Z}/2^{2+\ell}\mathbb{Z})^{n}: \begin{array}{l} Q_1(\k)\equiv 1\bmod{4},\\ Q_2(\k)\equiv 0 \bmod{2^\ell} \end{array} \right\}\\ &=4^{n}\times \frac{\sigma_2}{2}, \end{align*} on carrying out the sum over $a$ and comparing with \eqref{eq:sig2}. Hence it follows that $$ C= 4^{n}\times \frac{\sigma_2}{2}\times \tau_{\infty}(Q_2,W) \sum_{d=1}^\infty\frac{\chi(d)}{d^{n-1}} \sum_{\substack{q'=1\\ 2\nmid q'}}^\infty \frac{1}{q'^n} S_{d,q'}(\mathbf{0}). 
$$ Expressing the sum over $d$ and $q'$ as an Euler product one finds that \begin{align*} \sum_{d=1}^\infty\frac{\chi(d)}{d^{n-1}} \sum_{\substack{q'=1\\ 2\nmid q'}}^\infty \frac{1}{q'^n} S_{d,q'}(\mathbf{0}) &= \prod_{p>2} \sum_{r,\ell\geq 0} \frac{p^r\chi(p^r)}{p^{(r+\ell)n}} S_{p^r,p^\ell}(\mathbf{0}). \end{align*} Here $S_{p^r,1}(\mathbf{0})=\widetilde{N}_r(r)$ and $ S_{p^r,p^\ell}(\mathbf{0})=p^\ell \widetilde{N}_{r+\ell}(r) - p^{\ell-1+n} \widetilde{N}_{r+\ell-1}(r), $ when $\ell\geq 1$, in the notation of \eqref{eq:sigp}. It easily follows that $$ \sum_{d=1}^\infty\frac{\chi(d)}{d^{n-1}} \sum_{\substack{q'=1\\ 2\nmid q'}}^\infty \frac{1}{q'^n} S_{d,q'}(\mathbf{0}) =\prod_{p>2} \tau_p, $$ with \begin{align*} \tau_p&= \lim_{k\rightarrow \infty} p^{-k(n-1)}\sum_{0\leq r\leq k} \chi(p^r)\widetilde{N}_k(r) =\left(1-\frac{\chi(p)}{p}\right)^{-1} \sigma_p. \end{align*} Finally, on appealing to the identity \eqref{eq:siginf} and noting that $L(1,\chi)=\pi/4$, we deduce that \begin{align*} C &= 4^{n}\times \frac{\sigma_2}{2} \times \tau_{\infty}(Q_2,W) L(1,\chi)\prod_{p>2} \sigma_p\\ &= \frac{4^{n-1}}{2}\times \sigma_{\infty} \prod_{p} \sigma_p. \end{align*} Once inserted into \eqref{eq:t3} we therefore arrive at the statement of Lemma \ref{lem:main}.
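Although the forms treated here are fixed by the problem, the shape of these local computations lends itself to a direct numerical sanity check: the counting in \eqref{eq:sig2} can be brute-forced for small $k$, and the value $L(1,\chi)=\pi/4$ recovered from partial sums. The Python sketch below uses illustrative stand-in forms $Q_1,Q_2$, not those of the paper.

```python
# Hedged numerical sketch: brute-force the counting in eq. (sig2) for small k,
# and check L(1, chi) = pi/4 for the non-trivial character chi mod 4.
# The forms Q1, Q2 below are illustrative placeholders, not those of the paper.
import math
from itertools import product

def count_solutions(Q1, Q2, n, k):
    """#{x in (Z/2^k Z)^n : Q1(x) = 1 mod 4, Q2(x) = 0 mod 2^k}."""
    mod = 1 << k
    return sum(1 for x in product(range(mod), repeat=n)
               if Q1(x) % 4 == 1 and Q2(x) % mod == 0)

def sigma2_approx(Q1, Q2, n, k):
    """Finite-k approximation 2^{1-k(n-1)} N_k to the limit sigma_2."""
    return 2.0 ** (1 - k * (n - 1)) * count_solutions(Q1, Q2, n, k)

def chi(d):
    """Non-trivial Dirichlet character mod 4."""
    return 0 if d % 2 == 0 else (1 if d % 4 == 1 else -1)

# Partial sums of sum_d chi(d)/d converge (slowly) to L(1, chi) = pi/4.
L1 = sum(chi(d) / d for d in range(1, 200001))
print(abs(L1 - math.pi / 4))

# An illustrative pair of forms in n = 3 variables.
Q1 = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
Q2 = lambda x: x[0] * x[1] - x[2] ** 2
for k in (2, 3, 4):
    print(k, sigma2_approx(Q1, Q2, 3, k))
```

The finite-$k$ counts stabilize quickly for nondegenerate pairs of forms, mirroring the existence of the limits in \eqref{eq:sig2} and \eqref{eq:sigp}.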
\section{Discussion} Antineutrinos from the decay chains of $^{238}$U and $^{232}$Th present in the Earth's interior (the so-called geo-neutrinos) have recently been detected both by the KamLAND \cite{kamland} and by the Borexino \cite{borexino} experiments. Future experiments for geo-neutrino detection have been proposed (or are starting) at several locations around the world (e.g. SNO+ in Canada \cite{SNO+}, the LENA project in Europe \cite{lena} and the Hawaii Anti-Neutrino Observatory \cite{hawaii}). The main source of background for such experiments is given by antineutrinos produced by nuclear power plants. These particles account for a signal that is almost always larger than the geo-neutrino one; see Table \ref{table}. A detailed calculation of the reactor antineutrino flux is therefore mandatory for an accurate measurement of geo-neutrinos. With this aim, we performed a calculation of the reactor antineutrino flux all over the world. Previous analyses have been presented, for instance, in refs. \cite{report} and \cite{geoscience2010}. Here we show an updated estimate of the reactor antineutrino signal, with particular attention to the sites proposed for the new geo-neutrino experiments. In our calculation we take into account the most up-to-date data on the thermal power of each nuclear plant, on the reactor antineutrino spectra and on the three-neutrino oscillation mechanism. The expected reactor antineutrino signal has been calculated as follows: \begin{equation} N_{ev}= \epsilon N_p \tau \sum_{r=1}^{N_{react}} \frac {P_{r}}{4 \pi L_{r}^{2}} \langle LF_r \rangle \int dE_{\bar{\nu}_e} \sum_{i=1}^4 \frac {f_{i}}{E_{i}} \phi_{i}(E_{\bar{\nu}_e}) \sigma(E_{\bar{\nu}_e}) P_{ee}(E_{\bar{\nu}_e};\hat\theta, L_r) \label{Eq:ReactorFlux} \end{equation} where $\epsilon$ is the detector efficiency, $N_p$ is the number of target protons, $\tau$ is the period of data taking, the index $r$ runs over the $N_{react}$ reactors considered, and $L_{r}$, $P_{r}$ and $\langle LF_r \rangle$ are the distance, the nominal thermal power and the average load factor of reactor $r$, respectively.
The index $i$ stands for the $i$-th spectral component in the set ($^{235}$U, $^{238}$U, $^{239}$Pu, and $^{241}$Pu), $f_{i}$ is the power fraction of component $i$, as reported in \cite{borexino}, $E_i$ is the average antineutrino energy per fission of component $i$ \cite{huber}, $\phi_{i}(E_{\bar{\nu}_e})$ is the antineutrino flux per fission of the $i$-th component, as recently calculated in ref. \cite{mueller}, $\sigma(E_{\bar{\nu}_e})$ is the inverse beta decay cross section \cite{vissani} and $P_{ee}$ is the survival probability of reactor antineutrinos of energy $E_{\bar{\nu}_e}$ traveling the baseline $L_r$, depending on the mixing parameters $\hat\theta$. In Eq. (\ref{Eq:ReactorFlux}) we assume a 100\% detection efficiency, for a detector containing $10^{32}$ target protons and operating continuously for 1 year. In particular, we consider all the nuclear cores in the world that were operating in the year 2012. Information on the nominal thermal power and the monthly load factor of each nuclear core originates from the International Atomic Energy Agency (IAEA)~\cite{iaea}. Concerning the survival probability, we assumed a three-flavour vacuum oscillation mechanism with $P_{ee}$ as in ref. \cite{fiorentini2012}, and mixing parameters from ref. \cite{fogli2012}. The results of our calculation are reported in Table \ref{table} and in Fig. \ref{mappa}. We also performed an analysis of the sources of uncertainty in the reactor signal prediction; see \cite{geoscience2010} for details. The total uncertainty is of the order of 5\%, with the main contributions (i.e. those greater than 2\%) arising from the $\theta_{12}$ mixing angle, the antineutrino spectrum, the fuel composition and the thermal power. One can see that, due to the reactor shutdowns that occurred in 2012, Kamioka has become a suitable site for detecting geo-neutrinos, comparable to LNGS. A new European geo-neutrino detector located at the Frejus Laboratory would require a detailed knowledge of the nearby reactors; the choice of Pyh\"asalmi looks better in this respect.
Of course, Hawaii and Curacao are wonderful places for geo-neutrino studies, due to their position far away from any of the world's nuclear plants. The same holds for Homestake. In the near future, the SNO+ experiment, with a quite reasonable ratio R$_G$/G, will provide more information about the Earth's interior. \begin{figure} \center \includegraphics[width=.9\textwidth]{mappa.pdf} \caption{A worldwide map of the reactor antineutrino signal. 1 TNU = 1 event/yr/10$^{32}$ target protons.} \label{mappa} \end{figure}
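The structure of Eq.~(\ref{Eq:ReactorFlux}) can be illustrated with a minimal numerical sketch. The snippet below uses a two-flavour survival probability, the leading-order inverse beta decay cross section, and a toy exponential spectrum; the reactor parameters and the spectral shape are illustrative placeholders, so the output is an order-of-magnitude toy number, not one of the signals quoted in Table \ref{table}.

```python
# Hedged sketch of the signal estimate in Eq. (ReactorFlux): one hypothetical
# reactor, a two-flavour survival probability, and a toy emitted spectrum.
import math

def survival_probability(E_MeV, L_km, sin2_2theta=0.85, dm2_eV2=7.5e-5):
    """Two-flavour vacuum-oscillation approximation of P_ee."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km * 1e3 / E_MeV) ** 2

def ibd_cross_section(E_MeV):
    """Leading-order inverse beta decay cross section (cm^2)."""
    E_e = E_MeV - 1.293                 # positron energy (MeV)
    if E_e <= 0.511:                    # below threshold
        return 0.0
    p_e = math.sqrt(E_e ** 2 - 0.511 ** 2)
    return 9.52e-44 * E_e * p_e

def expected_events(P_GW, L_km, load_factor=0.8, N_p=1e32, T_yr=1.0):
    """Crude event-rate estimate for a single reactor at distance L_km."""
    # ~200 MeV released and ~6 antineutrinos emitted per fission (rough averages)
    fissions_per_s = P_GW * 1e9 / (200.0 * 1.602e-13)
    seconds = T_yr * 3.15e7
    flux_norm = load_factor * fissions_per_s / (4 * math.pi * (L_km * 1e5) ** 2)
    total, dE, E = 0.0, 0.05, 1.8
    while E < 10.0:
        phi = math.exp(-E)              # toy spectral shape, not ref. [mueller]
        total += phi * ibd_cross_section(E) * survival_probability(E, L_km) * dE
        E += dE
    return N_p * seconds * flux_norm * total * 6.0

print(expected_events(P_GW=3.0, L_km=200.0))
```

A realistic computation would replace the toy spectrum by the fuel-weighted spectra of the four fissile components, sum over all cores with their monthly load factors, and use the full three-flavour $P_{ee}$.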
\section{Introduction} The study of vortex rings (VRs) has a long and distinguished history in the context of fluids and superfluids~\cite{saffman,Pismen1999}, a prominent chapter of which involves, e.g., associated observations in helium; see, for instance~\cite{donnelly,Rayfield64,Gamota73}. However, the experimental realization and subsequent developments in the theme of Bose-Einstein condensates (BECs)~\cite{book1,book2,dumitr1,dumitr2} have produced a playground especially suitable for the study of such coherent structures. This is also evidenced by the plethora of reviews and book chapters on the topic~\cite{emergent,komineas_rev,barenghi_rev,book_new}. In the early stages of examination of VRs in the context of BECs, the emphasis was on finding mechanisms for their creation. These ranged from the (transverse) instability of planar dark solitons in 3D BECs~\cite{Anderson01}, to the generation via density engineering in suitable geometries, as in the work of~\cite{Shomroni09}, or even their spontaneous emergence as byproducts of collisions between symmetric defects in~\cite{Ginsberg05}. They were also detected in collisions between dark solitons imprinted in non-one-dimensional geometries, where they were seen to give rise to unusual dynamical features such as slingshot motion of one of the waves~\cite{sengstock}. More recently, the significant improvements in visualization techniques, not only in the context of BECs~\cite{bis1,bis2}, but also in the case of superfluid helium~\cite{lathrop1,lathrop2}, have enabled a much more detailed exploration of the dynamics of vortex rings. In light of that, one of the recent focal points has been the exploration of the interaction between rings both theoretically and computationally~\cite{wacks,wang2}, as well as experimentally~\cite{bis3}.
This aspect is especially important not only in its own right, but also due to its critical role in the theory of quantum turbulence, another topic of intense recent study~\cite{Tsatsos2016,Navon2016}. It should also be mentioned in passing that, in addition to these significant developments on the dynamics of vortex rings in bosonic systems, similar (planar) solitonic and vortical (line and ring) structures have recently been identified in superfluid Fermi gases~\cite{zwierlein}. Our emphasis in the present work is different, going partially back to the context of the single vortex ring. There exists an analytical prediction for its spectrum, stemming from the early work of~\cite{horng}. The relevant prediction relates the stability of the VR to the prolateness or oblateness of the full condensate. In particular, defining $\lambda= \omega_z/\omega_r$, i.e., the ratio of confining frequencies along the $z$- vs. along the radial-direction, the analysis of~\cite{horng} yields that the VR is stable when $1 \leq \lambda \leq 2$. To the best of our knowledge, such a theoretical prediction has not been systematically tested. Here, we test this prediction, finding that it is {\it only asymptotically valid} in the regime of $\mu \rightarrow \infty$. We showcase the intriguing way that the prediction deviates from its asymptotic form, identifying stable rings for $1 \leq \lambda \leq \lambda_c(\mu)$, where $\lambda_c$ is an upper critical bound of stability with $\lim_{\mu \rightarrow \infty} \lambda_c(\mu)=2$, and with $\lambda_c$ decreasing monotonically as the chemical potential $\mu$ decreases through finite values. We also go a step further: we explore the fully nonlinear dynamical equation for the ring as a vortical filament, whose linearization results in the above prediction. This is the full nonlinear equation encompassing the combined dynamics of the ring and its ``Kelvin modes'', and we compare it with the full GPE dynamics.
In our view, this is not only an important step towards a quantitative understanding of VR stability, but also a stepping stone towards formulating a multi-ring Lagrangian and developing the associated nonlinear dynamics for settings where this may be relevant (such as, e.g.,~\cite{wacks}). Our presentation is structured as follows. In Sec.~\ref{theory}, we summarize our theoretical analysis, connecting wherever appropriate with past theoretical analyses on the subject. In Sec.~\ref{computations}, we discuss our numerical methods. Next we explore extensively the comparison of the theoretical findings with the spectral and dynamical results of the full 3D problem in Sec.~\ref{results}. We compare the spectra in a wide interval of the anisotropy parameter $\lambda$ and the chemical potential $\mu$, and we establish the validity of the vortex ring filament equation of motion. Additionally, we visualize the identified instabilities for different values of $\lambda$ and for different case examples of excited modes. This provides us with a reasonably complete sense of the (single) ring's dynamical properties. This is not only of value in its own right, but also a necessary preamble towards a more systematic theoretical understanding of multi-ring configurations. This and related topics for future study are commented upon, along with a summary of the present findings, in Sec.~\ref{cc}. \section{Theoretical Analysis} \label{theory} The dynamics of vortical filaments is a topic that was studied rather extensively shortly after vortical patterns could be produced in BECs. We single out here two ground-breaking works along this direction, namely~\cite{fetter1} and~\cite{ruban1}.
Following the latter (although equivalent results for our purposes are obtained in the former), we can write down the Hamiltonian and the Lagrangian of vortical ring filaments of radius $R$ and vertical position $Z$ as: \begin{eqnarray} H &=& \int_0^{2 \pi} \rho \sqrt{R_\phi^2+ R^2 + Z_\phi^2} d \phi, \label{vr_eq1} \\ L &=& \int_0^{2 \pi} \left( F Z_t - \rho \sqrt{R_\phi^2+ R^2 + Z_\phi^2} \right) d \phi. \label{vr_eq2} \end{eqnarray} Here, $F(R,Z)$ is such that $F_R=\rho(R,Z) R$, the subscripts denote partial derivatives with respect to the argument, while $A=\sqrt{R_\phi^2+ R^2 + Z_\phi^2}$ denotes the arclength quantity in cylindrical coordinates that are most natural to use in the present setting. The density $\rho$ is assumed to be well approximated by the Thomas-Fermi limit (for the large chemical potential values of interest) in the form $\rho(R,Z)=\max(\mu-V(R,Z),0)$. It should be noted that this expression is part of a more general formulation for arbitrary filaments~\cite{fetter1,ruban1}, yet herein we restrict considerations to a ring geometry and the corresponding natural cylindrical coordinate choice. The resulting equations of motion (see also the recent exposition of~\cite{ruban2}) then read: \begin{eqnarray} \rho R R_t &=& -\rho_Z A + \frac{\partial}{\partial \phi} \left(\frac{\rho Z_{\phi}}{A}\right), \label{vr_eq3} \\ \rho R Z_t &=& \rho_R A + \frac{\rho}{A} R - \frac{\partial}{\partial \phi} \left(\frac{\rho R_{\phi}}{A}\right). \label{vr_eq4} \end{eqnarray} To the best of our understanding, these partial differential equations (PDEs), the canonical description of a vortex ring as a filamentary structure, have never been explored (even numerically) in their fully nonlinear form for $R=R(\phi,t)$ and $Z=Z(\phi,t)$. Instead, only the corresponding ordinary differential equations (ODEs) have been derived for homogeneous, $\phi$-independent, steady states, i.e., \begin{eqnarray} R_t=\omega_z^2 \frac{Z}{\rho}, \quad Z_t=-\omega_r^2 \frac{R}{\rho} + \frac{1}{R}.
\label{vr_eq5} \end{eqnarray} Here, the parabolic potential $V(r,z)=(\omega_r^2 r^2 + \omega_z^2 z^2)/2$ has been assumed. It is important to note that these ODEs are the same as the ones used by~\cite{horng} to derive the VR spectrum, up to a logarithmic correction factor~\cite{fetter1} of: \begin{eqnarray} \Lambda(r)=-\frac{1}{2} \log \left[\left( \sqrt{\frac{\omega_z^2}{2 \mu}+ \frac{\kappa^2}{8}} \right) \frac{1}{\sqrt{2 \mu}} \right]. \label{vr_eq6} \end{eqnarray} Here $\kappa$ denotes the curvature of the filament, equal to $1/R$ for the case of the VR. Importantly, Eqs.~(\ref{vr_eq5}) lead to an equilibrium ring with $Z_0=0$ and radius $R_0^2=2\mu/(3 \omega_r^2)=R_{\perp}^2/3$. Linearizing around this equilibrium, at the level of Eqs.~(\ref{vr_eq3})-(\ref{vr_eq4}), by using the ansatz \begin{eqnarray} Z= \epsilon \sum_m Z_m \cos(m \phi), \quad R=R_0 + \epsilon \sum_m R_m \sin(m \phi), \label{vr_eq8} \end{eqnarray} leads to an effective eigenvalue problem through $R_m=e^{i \omega t} R_m^0$ and $Z_m=e^{i \omega t} Z_m^0$, which can be solved for each $m$. The resulting expression yields~\cite{horng} for each subspace represented by its own (integer to ensure periodicity) value of $m$: \begin{eqnarray} \omega = \frac{3\Lambda\omega_r^2}{2\mu} \left[ \left(m^2-\lambda^2\right) \left(m^2-3\right) \right]^{1/2}. \label{spectrum} \end{eqnarray} On the basis of the expression of Eq.~(\ref{spectrum}), it can be seen that if $\lambda < 1$ (prolate BECs), the mode with $m=1$ will always be unstable, producing a negative sign inside the bracket of Eq.~(\ref{spectrum}). In the case where $1 \leq \lambda \leq 2$, i.e., for spherical to slightly oblate BECs, all the modes of Eq.~(\ref{spectrum}) possess real frequencies and therefore the VR is expected to be stable. Finally, for $\lambda > 2$, the modes with $m=1$ and $m \geq \lambda$ will be stable, while those ``in between'', i.e., $m=2, \dots$ will necessarily be unstable, yielding at least one unstable mode.
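The equilibrium radius $R_0$ and the stability classification encoded in Eqs.~(\ref{vr_eq5}) and (\ref{spectrum}) can be verified with a few lines of Python; in the sketch below, $\Lambda$ is treated as an illustrative constant, ignoring its weak logarithmic dependence on $R$.

```python
# Numerical sanity checks on the reduced ring ODEs and the spectrum formula.
# Lambda is taken constant here (an illustrative simplification).
import math

def ring_rhs(R, Z, mu, omega_r, omega_z):
    """Right-hand side of the reduced ring ODEs with TF density rho = mu - V."""
    rho = mu - 0.5 * (omega_r ** 2 * R ** 2 + omega_z ** 2 * Z ** 2)
    return omega_z ** 2 * Z / rho, -omega_r ** 2 * R / rho + 1.0 / R

def omega_sq(m, lam, mu, omega_r, Lam=1.0):
    """Squared mode frequency from the spectrum formula; negative => unstable."""
    return (3 * Lam * omega_r ** 2 / (2 * mu)) ** 2 * (m ** 2 - lam ** 2) * (m ** 2 - 3)

def growth_sq(m, lam):
    """Squared (unnormalized) growth rate of an unstable mode."""
    return max(0.0, (lam ** 2 - m ** 2) * (m ** 2 - 3))

mu, omega_r = 40.0, 1.0
R0 = math.sqrt(2 * mu / (3 * omega_r ** 2))        # predicted equilibrium radius
dR, dZ = ring_rhs(R0, 0.0, mu, omega_r, omega_z=1.5)
print(dR, dZ)                                      # both vanish at equilibrium

# Stability windows: all modes stable at lambda = 1.5; m = 2 unstable at 3.
print([omega_sq(m, 1.5, mu, omega_r) >= 0 for m in range(6)])
print(omega_sq(2, 3.0, mu, omega_r) < 0)

# Growth rates of m = 2 and m = 3 cross at lambda = sqrt(10) ~ 3.162.
print(growth_sq(2, 3.1) > growth_sq(3, 3.1))
print(growth_sq(3, 3.2) > growth_sq(2, 3.2))
```

The last two lines anticipate the mode-dominance exchanges discussed next, which follow from comparing $(\lambda^2-m^2)(m^2-3)$ for different $m$.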
For $2< \lambda \leq 3$, there will only be one such mode; for $3 < \lambda \leq 4$, there will be two, then three, and so on. Moreover, which one of these modes dominates, in terms of producing the maximal growth rate, depends on $\lambda$, based on the above theory. For instance, for $3 \leq \lambda < \sqrt{10}$, the mode with $m=2$ dominates the instability growth rate, while for larger $\lambda$, the $m=3$ acquires a larger growth rate than the $m=2$ one. Similarly, at $\lambda=\sqrt{22}$, $m=4$ becomes more unstable than $m=3$, and more generally two modes $m$ and $\tilde{m}$ exchange their respective dominance at $\lambda=\sqrt{m^2 + \tilde{m}^2 -3}$. It is interesting to note that, to the best of our knowledge, these predictions have not been systematically tested, partly also due to the intensive nature of the relevant computations. Such an examination, as a function of $\lambda$ but also varying $\mu$, will be the main theme of the spectral computations of our numerical section below. Then, upon appreciating the asymptotic nature of the relevant comparison, we will also attempt to explore the unstable dynamics of the VR for $\lambda < 1$, as well as for $\lambda > 2$, to observe how the corresponding instabilities of the different modes manifest themselves and how the predictions of Eqs.~(\ref{vr_eq3})-(\ref{vr_eq4}) compare with the original GPE in the three-dimensional parabolic trap. \section{Computational setup} \label{computations} Our numerical simulations include finding stationary states of the vortex ring and computing the corresponding linear stability spectra, as well as performing the dynamical integration of both the filament PDE (i.e., the effective description of Eqs.~(\ref{vr_eq3})-(\ref{vr_eq4})) and the GPE. We use Newton's iteration method to identify the numerically exact VR stationary solutions of the GPE: \begin{eqnarray} i \hbar u_t =-\frac{\hbar^2}{2m} \Delta u + V(r,z) u + g |u|^2 u - \mu u.
\label{vr_eq9} \end{eqnarray} For such stationary solutions, the left-hand side of the equation is set to $0$. Here, $V(r,z)=(\omega_r^2 r^2 + \omega_z^2 z^2)m/2$ is the parabolic confinement of atoms with mass $m$, while $\Delta$ stands for the 3D Laplacian, and $g$ is $4\pi \hbar^2 a_s/m$ with $a_s$ being the s-wave scattering length. We rescale Eq. (\ref{vr_eq9}) in terms of the energy $\hbar \omega_r$. This naturally leads to lengths being measured in oscillator units, $\sqrt{\hbar/m\omega_r}$, and time in units of the inverse trapping frequency, $1/\omega_r$. Subsequently, we consider the excitation spectrum around a solution $u_0(x,y,z)$ of the steady state of Eq.~(\ref{vr_eq9}). This is done by using the ansatz: \begin{eqnarray} u=u_0(x,y,z)+ a(x,y,z) e^{i \omega t} + b^*(x,y,z) e^{-i \omega^* t}. \label{vr_eq10} \end{eqnarray} The resulting, so-called Bogolyubov-de Gennes (BdG) equations can be used for computing the normal-mode eigenfrequencies $\omega$ and eigenvectors $(a,b)^T$. Because a steady ring has rotational symmetry, we compute both the stationary state and the spectrum using a cross section in the $r$-$z$ plane. The spectrum is computed using basis expansions through the so-called partial-wave method. A recent summary of the technique can be found, e.g., in \cite{wenlongdss} for one-component solitons with rotational symmetry up to a topological charge, and we refer the interested reader to this paper for more details. Nonetheless, we briefly mention here that the method computes eigenvalues for each Fourier $m$-mode separately (eigenvalues of modes $m$ and $-m$ are complex conjugates) and the full 3D spectrum is the union of all the individual 2D spectra. In our work, we have collected the spectrum using $m=0, 1, 2, 3, 4$ and $5$ \cite{wenlongdss}. The stationary ring states in the $(\lambda,\mu)$ plane are explored using parametric continuations.
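As a side note on the real-time evolution used below: the operator structure of Eq.~(\ref{vr_eq9}) lends itself to split-step Fourier integration. The following is a minimal 1D sketch with second-order Strang splitting, a simplified stand-in for the higher-order 3D scheme; the grid, trap and coupling are illustrative.

```python
# Minimal 1D split-step (Strang) integrator for a rescaled GPE
# i u_t = -u_xx/2 + V u + g |u|^2 u. Illustrative parameters throughout.
import numpy as np

def strang_step(u, dt, k2, V, g):
    """One Strang-splitting step: half kinetic, full potential, half kinetic."""
    half_kin = np.exp(-0.25j * dt * k2)                  # exp(-i (k^2/2) dt/2)
    u = np.fft.ifft(half_kin * np.fft.fft(u))            # half kinetic step
    u = u * np.exp(-1j * dt * (V + g * np.abs(u) ** 2))  # potential + nonlinearity
    return np.fft.ifft(half_kin * np.fft.fft(u))         # half kinetic step

N, Lbox = 256, 20.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k2 = (2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)) ** 2
V = 0.5 * x ** 2                                 # harmonic trap (illustrative)
u = np.exp(-x ** 2 / 2).astype(complex)          # Gaussian initial condition
norm0 = np.sum(np.abs(u) ** 2)
for _ in range(200):
    u = strang_step(u, 1e-3, k2, V, g=1.0)
print(np.sum(np.abs(u) ** 2) / norm0)            # scheme is norm-preserving
```

Each substep is an exact phase multiplication in the appropriate representation, so the scheme conserves the norm to machine precision, a useful diagnostic in long-time vortex-ring runs.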
We take advantage of the linear limit of the ring at $\lambda=2$, where the ring is a superposition of a dark soliton state and a dark soliton ring state with a phase difference of $\pi/2$~\cite{tickold}. The states at larger values of $\mu$ are computed via a parametric continuation in $\mu$. The states at a fixed value of $\mu$ but different $\lambda$ values are then computed by a second parametric continuation in $\lambda$. Our fields are all discretized using standard finite-element methods, for the dynamics as well. We have used two different methods for the dynamics, in order to compare the different PDE models. The simpler, effectively 1+1D ($\phi$- and $t$-dependent) dynamics of the ring filament PDE of (\ref{vr_eq3})-(\ref{vr_eq4}) is simulated using the standard fourth-order Runge-Kutta method, while the more demanding 3+1D GPE is simulated using a third-order operator-splitting Fourier spectral method. Finally, we present some details of the initial states used in the dynamical evolution simulations. Here, to emulate more closely the experimental protocols customarily used, we imprint (within the BEC) a VR with its center position given by: \begin{eqnarray} R_{\rm{ring}}(\phi)=R_0+\sum_m R_m \sin(m \phi). \label{vr_eq11} \end{eqnarray} Here, each $m$ represents the excitation of a Kelvin-wave mode of index $m$. Then, in each $r$-$z$ plane, the phase of the VR is given by $\varphi=\arctan[(z-Z_{\rm{ring}})/(r-R_{\rm{ring}})]$. Typically, a single $R_m$ is set to $0.1$ while all the others are set to zero, and we have picked $Z_{\rm{ring}}$ to be $10^{-6}$. \section{Results} \label{results} \subsection{Spectra} We start by presenting three typical stationary ring states for three different trapping frequencies, shown in Fig.~\ref{states}. They are, from left to right, for $\lambda=0.5$ (prolate), $\lambda=1$ (spherical) and $\lambda=3$ (oblate) at $\mu=40$ in the Thomas-Fermi regime.
Notice how the aspect ratios of the profiles change as we vary $\lambda$ by changing $\omega_z$. Incidentally, since we vary $\omega_z$, the planar ($x$-$y$) profile of the VR remains that of a dark ring~\cite{tickold}; hence it is not shown here. Next, we present four typical results for the spectrum of the stationary ring as a function of $\lambda$ for different values of the chemical potential, and compare with the analytical results of Eq.~(\ref{spectrum}), as shown in Fig.~\ref{spectra}. Notice the good agreement, especially for the lower mode $m=1$, and the capturing of the corresponding instability for $\lambda < 1$. Importantly, note also that this instability is {\it indifferent} to the variation of the chemical potential and presumably has to do with a change of the symmetry of the trap from prolate (for $\lambda < 1$) to oblate (for $\lambda >1$). However, the instability for $m=2$ and higher does depend nontrivially on the chemical potential $\mu$. Indeed, as $\mu$ grows, we approach the large-$\mu$ limit result of $\lambda \rightarrow 2$ for the instability threshold. However, as we depart from that limit, the critical point for the instability, $\lambda_c$, progressively decreases, lying further away from $\lambda=2$. \begin{figure} \includegraphics[width=\columnwidth]{States.jpg} \put (-225,183) {(a) $\lambda=0.5$} \put (-145,183) {(b) $\lambda=1$} \put (-70,183) {(c) $\lambda=3$} \caption{Three typical states of the VR in various trap geometries in the TF limit at $\mu=40$. The figures are for condensates with: (a) prolate, $\lambda=0.5$; (b) spherical, $\lambda=1$; and (c) oblate, $\lambda=3$ geometries. The top panels show $|u|$ and the bottom panels illustrate the phase of the field. The states have rotational symmetry with respect to the $z$-axis and therefore only a cross section in the $y$-$z$ plane is shown.
} \label{states} \end{figure} Importantly, this trend of capturing the spectrum better as $\mu \rightarrow \infty$ is not only evident in the mode with $m=2$, but in fact continues for higher values of $m$. While the $m=0$ and $m=1$ modes (associated with ring oscillations inside the trap) are captured accurately, more or less, for all values of $\lambda$, the higher modes are progressively less accurate when $\mu$ is smaller and become more accurate as $\mu$ increases. This trend has also been observed in a variety of other problems associated with solitonic and vortical filaments~\cite{ai1}. In this light, just as the instability of the $m=2$ mode arises for $\lambda_c<2$, the instability of the $m=3$ mode occurs (for finite $\mu$) for $\lambda_c < 3$, and so on. Nevertheless, we can see that Eq.~(\ref{spectrum}) provides an excellent qualitative handle on the nature of the arising instabilities and the parametric dependence (over $\lambda$) of the different eigenfrequencies. \begin{figure*} \mbox{ \hspace{0cm} \subfigure[][]{ \includegraphics[width=0.95\columnwidth]{S12.jpg} } } \mbox{ \hspace{0cm} \subfigure[][]{ \includegraphics[width=0.95\columnwidth]{S20.jpg} } } \mbox{ \hspace{0cm} \subfigure[][]{ \includegraphics[width=0.95\columnwidth]{S40.jpg} } } \mbox{ \hspace{0cm} \subfigure[][]{ \includegraphics[width=0.95\columnwidth]{S80.jpg} } } \caption{ Four typical spectra of the stationary ring as a function of $\lambda$ for different values of the chemical potential $\mu$. The blue (stable mode) and red (unstable mode) lines are the full 3D numerical spectra of the GPE, while the gold (stable mode) and black (unstable mode) dashed lines are from the analytical formula of Eq.~(\ref{spectrum}). Taking the analytical lines as an example: the one with no instability is the $m=0$ mode, the one that is unstable below $\lambda=1$ is the $m=1$ mode, the one that becomes unstable near $\lambda=2$ is the $m=2$ mode, etc.
Note that the two results compare well and become closer as the chemical potential $\mu$ increases. The finite-$\mu$ effects are further discussed in the text. } \label{spectra} \end{figure*} \subsection{Dynamics} Following our spectra of the last section, we now turn to the dynamics. We first illustrate the dynamical effect of a few key unstable modes predicted by the spectra. These results are from full simulations of the GPE. Then, we compare the GPE and the filament PDE of Eqs.~(\ref{vr_eq3})--(\ref{vr_eq4}) for both stable and unstable modes predicted by the spectra. Our dynamical simulations also cover a wide range of trapping frequencies and chemical potentials. In the figures showing GPE simulation results, we have plotted the VR as red points in 3D. The vortical patterns are identified through the methods of Refs.~\cite{foster,bisset}. Additionally, we project the VR position onto the three back planes, along with the density of the BEC, with contours at $0.2$, $0.4$, $0.6$, and $0.8$ of the maximum density. The first pair of dynamical examples in Figs.~\ref{dyn1} and~\ref{dyn2} concerns the instability of the mode with $m=1$ for $\lambda=0.95$. We can see that, as expected, this mode does not excite undulations on top of the VR. Instead, it results in a dynamical rotation of the ring within the (slightly prolate) condensate that is combined with a growth of the radius and a partial deformation from the perfect circular form. Nevertheless, as can be seen in the figures, despite this instability the ring remains robust and eventually its radius shrinks again, exhibiting a form of (rotational) oscillatory dynamics. For the same value of $\lambda=0.95$, but for larger values of $\mu$, the ring is even more robust (in its nonlinear dynamics) in the case examples we considered, and takes a longer time before manifesting the flipping motion that is observed as a result of the instability.
\begin{figure} \includegraphics[width=0.45\columnwidth]{cor12_95_m1_a.png} \includegraphics[width=0.45\columnwidth]{cor12_95_m1_b.png} \includegraphics[width=0.45\columnwidth]{cor12_95_m1_c.png} \includegraphics[width=0.45\columnwidth]{cor12_95_m1_d.png} \caption{Snapshots for $\lambda=0.95$, $\mu=12$, and $m=1$ shown at times $0$, $20$, $50$, and $57.5$ in units of $1/\omega_r$. Notice that the ring does not break for this mode; rather, it flips over. The vortex ring leaves the BEC shortly after the last snapshot, presumably because the size of the condensate is small; see also Fig.~\ref{dyn2} at a larger chemical potential. } \label{dyn1} \end{figure} \begin{figure} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_a.png} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_b.png} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_c.png} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_d.png} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_e.png} \includegraphics[width=0.45\columnwidth]{cor20_95_m1_f.png} \caption{Snapshots for $\lambda=0.95$, $\mu=20$, and $m=1$ shown at times: $130$, $135$, $140$, $145$, $150$, and $155$ in units of $1/\omega_r$. The ring first flips over and then extends one side to the bottom of the BEC. However, the ring does not break; rather, it re-enters the BEC.} \label{dyn2} \end{figure} We now turn to the unstable dynamics of the case with $\lambda=3$. Here, in principle, given the finite-$\mu$ nature of the computation, {\it both} the $m=2$ and the $m=3$ modes have manifested their dynamical instability. The former is explored in the case of Fig.~\ref{dyn3}, while the latter is shown in Fig.~\ref{dyn4} [recall that the latter has a weaker instability growth rate and in fact would be completely stabilized for this $\lambda$ in the $\mu \rightarrow \infty$ limit].
In the case of Fig.~\ref{dyn3}, the quadrupolar nature of the $m=2$ destabilizing mode is clearly evident both in the ring dynamics and in its corresponding planar projections shown in the figures. Similarly, for the $m=3$ case, the hexapolar structure can definitively be discerned in the top panels of Fig.~\ref{dyn4}, prior to the eventual breakup of the VR. \begin{figure} \includegraphics[width=0.45\columnwidth]{cor12_3_m2_a.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m2_b.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m2_c.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m2_d.png} \caption{Snapshots of the VR for $\lambda=3$, $\mu=12$, and an $m=2$ mode perturbation shown at times: $1.0$, $3.0$, $5.0$, and $7.2$ in units of $1/\omega_r$. This shows the quick death of the vortex ring by the $m=2$ mode as the ring just stretches horizontally out of the BEC. } \label{dyn3} \end{figure} \begin{figure} \includegraphics[width=0.45\columnwidth]{cor12_3_m3_a.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m3_b.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m3_c.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m3_d.png} \caption{Snapshots for $\lambda=3$, $\mu=12$, and an $m=3$ instability shown at times: $1.0$, $3.0$, $4.0$, and $4.8$ in units of $1/\omega_r$. This shows the quick death of the vortex ring by the $m=3$ mode as the ring breaks vertically out of the BEC. } \label{dyn4} \end{figure} An example of an excited stable $m=5$ mode for the case of $\lambda=3$ is shown in Fig.~\ref{dyn5}. The relevant mode is indeed excited and persists for a long series of oscillations. However, eventually, the small projection to other modes of the (imperfect) initial data ``takes over'', manifesting the instability of the modes with $m=2$ and $m=3$.
This serves as a warning that while these higher $m$ modes may be stable, the ``contamination'' of the initial data in practical situations with a small component in the unstable modes will eventually lead to an instability, even though this manifestation takes longer in this setting. \begin{figure} \includegraphics[width=0.45\columnwidth]{cor12_3_m5_a.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m5_b.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m5_c.png} \includegraphics[width=0.45\columnwidth]{cor12_3_m5_d.png} \caption{Snapshots for $\lambda=3$, $\mu=12$, and $m=5$ shown at times: $1.6$, $3.8$, $7.0$, and $11.6$ in units of $1/\omega_r$. The mode is not unstable, yet it slowly evolves until it leaves the BEC, making for a long death of a vortex ring.} \label{dyn5} \end{figure} Lastly, as $\lambda$ becomes larger, as in Fig.~\ref{dyn6}, even the stable modes, such as the $m=5$, end up leading to a relatively quick ``death'' of the VR. This is because the unstable modes to which the initial data has some nontrivial projection now have substantial growth rates that quickly take over the relevant dynamical evolution, eventually leading to this outcome. Fig.~\ref{dyn6} reports such an example for the case of $\lambda=3.1$. Contrasting Figs.~\ref{dyn5} and \ref{dyn6} shows how only a seemingly slight change in the geometry, for the same value of $\mu=12$, alters the dynamics appreciably. While Fig.~\ref{dyn6} shows a quick death for a vortex ring, Fig.~\ref{dyn5} shows a more protracted non-perturbative dance towards the vortex ring's demise. \begin{figure} \includegraphics[width=0.45\columnwidth]{cor12_31_m5_a.png} \includegraphics[width=0.45\columnwidth]{cor12_31_m5_b.png} \includegraphics[width=0.45\columnwidth]{cor12_31_m5_c.png} \includegraphics[width=0.45\columnwidth]{cor12_31_m5_d.png} \caption{Snapshots for $\lambda=3.1$, $\mu=12$, and $m=5$ shown at times: $2.5$, $5.0$, $6.8$, and $7.1$ in units of $1/\omega_r$.
Here the ring quickly stretches and breaks vertically. } \label{dyn6} \end{figure} Finally, we directly compare the filament PDE and the full GPE dynamics of the evolving vortex ring, in order to get a sense not only of the spectral but also of the dynamical value of Eqs.~(\ref{vr_eq3})-(\ref{vr_eq4}). We first benchmark each method against its respective spectrum for a stable oscillatory mode and then compare them for two selected unstable dynamical scenarios in the Thomas-Fermi limit. Recall from the spectra that the agreement of the two methods is expected to become progressively better for all modes in the large chemical potential limit. As a basis for comparison, we have evolved both systems for the stable case of $\lambda=1.5$ and $\mu=20$ with a small vertical displacement of $0.1$, which corresponds to the $m=0$ mode. The motion of the vortex ring is an orbit around its (stable) equilibrium radius. We have measured the frequencies of this motion from the two methods and they are in very good agreement with their respective spectra. Since this scenario effectively involves a simple harmonic motion around the respective equilibria, we shall not discuss this further here. In what follows, we compare the more interesting cases of the two dynamics directly at a large chemical potential $\mu=40$. We have again selected the $m=2$ and $m=3$ unstable modes at $\lambda=3$, so that the ring can deform rather than keep the circular shape as in the $m=0$ mode case. The results for $m=2$ and $m=3$ are shown in Figs.~\ref{m2} and \ref{m3}, respectively. There are two important observations. One is that the filament PDEs indeed allow the ring to deform and break when an unstable mode is perturbed or excited, as is the case in both of these scenarios. In contrast, the ODEs of Eq.~(\ref{vr_eq5}) do not allow the ring to depart from the perfect circular shape.
Secondly, the filament PDEs are able to follow closely the dynamics of the GPE, in both the deformation patterns and the time scales before the ring breaks. While this may be natural to expect for short instability times from our computed spectra, it is certainly far less obvious for the full nonlinear evolution of Figs.~\ref{m2} and \ref{m3}. \begin{figure*} \includegraphics[width=0.45\columnwidth]{cor40_3_m2_a.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m2_b.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m2_c.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m2_d.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m2_a.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m2_b.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m2_c.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m2_d.png} \caption{ Comparison of the filament PDEs (bottom panels) and the 3D GPE (top panels) at $\mu=40$ in the Thomas-Fermi limit. In this case, the unstable mode $m=2$ at $\lambda=3$ is excited. The comparison is presented at times 0, 12, 20, and 24 in units of $1/\omega_r$. } \label{m2} \end{figure*} \begin{figure*} \includegraphics[width=0.45\columnwidth]{cor40_3_m3_a.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m3_b.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m3_c.png} \includegraphics[width=0.45\columnwidth]{cor40_3_m3_d.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m3_a.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m3_b.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m3_c.png} \includegraphics[width=0.45\columnwidth]{pic40_3_m3_d.png} \caption{ Same as the previous figure, but for a case example where the unstable mode $m=3$ at $\lambda=3$ is excited. The comparison is presented at times 0, 4, 10, and 18 in units of $1/\omega_r$.
} \label{m3} \end{figure*} \section{Conclusions \& Future Challenges} \label{cc} In the present work we have revisited the context of a vortical ring filament, utilizing the Lagrangian and Hamiltonian formulation thereof in order to go beyond the linearized (or simply time-dependent ODE) formulation for the evolution of the ring. More concretely, we have explored the full PDE evolution of the radius $R$ and vertical position $Z$ of the vortex ring as a function of the azimuthal coordinate $\phi$ and time. This has allowed us to retrieve the linearization results from Ref.~\cite{horng}, which importantly have now been tested as a function of both the nonlinearity strength (parametrized by the chemical potential $\mu$) and of the trap geometry (parametrized by the ratio $\lambda$ of longitudinal over transverse confinement). Indeed, the qualitative picture emerging from this analysis was in excellent agreement with the numerical results and the quantitative tendency of the latter toward the former as $\mu \rightarrow \infty$ was detailed. In prolate condensates, the rings are unstable because they can start rotating (an instability associated with the $m=1$ mode), employing the freedom allowed in the vertical direction in such a case. On the other hand, as the BEC transitions from a prolate to a weakly oblate geometry, the ring becomes stabilized (the spherical case is a special, marginal example of this stabilization). However, if the geometry becomes too oblate (i.e., a ratio of 2:1 of the radial to transverse confinement radius), then modes, starting with the quadrupolar one (and progressively higher ones as the anisotropy increases), become unstable. As a result of the corresponding dynamical evolution, the ring will break into an array of filaments: $2$ when the quadrupolar mode is unstable, $3$ for the hexapolar one, $5$ for $m=5$, and so on.
From the quantitative perspective, we find that the analytically available spectrum very accurately reflects the instability transition occurring at $\lambda=1$ and the associated $m=1$ mode. However, it only captures higher modes (such as the $m=2$ going unstable at $\lambda=2$, the $m=3$ going unstable at $\lambda=3$, etc.) progressively more accurately in the limit of $\mu \rightarrow \infty$. In these cases, the smaller the value of the chemical potential, the narrower the interval of ring stability becomes. Naturally, these results pave the way for further developments regarding vortical filaments, which have recently become more and more accessible both experimentally and in terms of measurement. In particular, the formulation of~\cite{ruban1,fetter1} can be extended to vortex lines and their near-equilibrium linear, as well as highly nonlinear, dynamics. It would be particularly interesting to explore that case and compare it systematically with the case of the ring, bearing in mind also the destabilization of the ring into filamentary structures in highly oblate condensates. This would create a fairly complete picture of single filaments. Going beyond the single filament, as is especially relevant in some experimental settings both in superfluid helium~\cite{wacks} and in BECs, this formulation, along with that of Biot-Savart-based vortex interactions, can lend itself to the formulation of Hamiltonians and Lagrangians for the case of multiple vortical filaments. This would allow us to capture not only intriguing orbits involving multiple rings, such as leapfrogging ones~\cite{caplan} (and their variants arising in a trap~\cite{wang2}), but also the role that additional rings may play in affecting the stability of a single ring, which was systematically portrayed herein. Such studies are currently in progress and will be reported in future publications. \acknowledgments The authors are grateful to Prof. R. Carretero for constructive comments on the manuscript.
W.W.~acknowledges support from the Swedish Research Council Grant No.~642-2013-7837 and from the Goran Gustafsson Foundation for Research in Natural Sciences and Medicine. P.G.K.~gratefully acknowledges support from NSF-PHY-1602994, as well as from the Greek Diaspora Fellowship Program. C.T.~acknowledges support from Advanced Simulation and Computing and from LANL, which is operated by LANS, LLC, for the NNSA of the U.S. DOE under Contract No. DE-AC52-06NA25396.
\section{Introduction} Shock waves are an important and ubiquitous phenomenon in astrophysics. They can accelerate electrons \citep{Mann1995, Miteva2007, Schwartz2011} and ions \citep{Thomsen1985, Sckopke1995, Giacalone2005} efficiently. In the solar atmosphere, there exist various kinds of shocks, which usually manifest themselves as wave-like structures in images or are implied in the radio dynamic spectra. In particular, type II radio bursts, usually following the eruption of coronal mass ejections (CMEs), are a kind of plasma emission with a slow frequency drift in the radio dynamic spectrum, and they are generally thought to be a signature of coronal shocks \citep{Wild1950, Zheleznyakov1970}. It is believed that the majority of interplanetary shocks (within 1 AU) are CME-driven \citep{Cane1987, Reiner2001, Reiner2001stat}. However, the origin of the metric type II radio bursts is still an open question; they can be generated either by CME-driven shocks \citep{Cliver2004, Liu2009, Chen2011, Cho2013} or by flare-caused blast waves \citep{Leblanc2001, Magdalenic2008, Nindos2011}. This is an important topic but hard to explore, since the radio spectrum usually has low or even no spatial resolution. The situation has been greatly improved since the launch of recent space solar instruments. In particular, the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} on board the Solar Dynamics Observatory \citep[SDO;][]{Pesnell2012} can observe the low corona below 0.4R$_{\odot}$, where the type II radio bursts are initially produced with start frequencies of hundreds of MHz. Recently, \citet{Bemporad2010} derived physical parameters of the pre- and post-shock regions using multi-wavelength observations. \citet{Ma2011} discovered a clear case of a CME-driven shock in the low corona in extreme-ultraviolet (EUV) images. For more quantitative studies of coronal shocks, some new methods have been developed.
\citet{Kouloumvakos2013} applied the differential emission measure (DEM) method to calculate the compression ratio of a coronal shock, and found that the shock can be driven by both the nose and the flanks of the expanding bubble. \citet{Zucca2014} used multi-wavelength observations to obtain two-dimensional density, magnetic field, and Alfv\'en speed maps, which help reveal the properties of shocks more accurately. Among these recent works, the type II radio bursts were all associated with CMEs. It is now widely accepted that CMEs are the source of the metric type II radio bursts. However, there still exist a few type II radio bursts without CMEs. \citet{Magdalenic2012} reported a CME-less type II radio burst, and suggested that the burst was likely to be generated by the flare. \citet{Gopalswamy2001} mentioned that reconnection jets could also be one of the candidate structures to generate a type II radio burst. In this paper, we report a type II radio burst without an associated CME. We propose a new generation mechanism for the coronal shocks. The paper is organized as follows. Observations are described in Section \ref{sect:data}, and the data analysis and results are presented in Section \ref{sect:Analysis and Result}. Some discussions are made in Section \ref{sect:discussion}, followed by a conclusion in Section \ref{sect:conclusion}. \section{Observations} \label{sect:data} A type II radio burst was detected on 2011 February 28, following the occurrence of a C2.4-class flare which was located at N24E45 in the NOAA active region 11164. The radio burst was observed by a number of radio telescopes, such as those at the Learmonth and San Vito Solar Observatories. A coronal wave-like structure, which was coincident with the type II burst, was observed in detail by the AIA in EUV passbands.
The Large Angle Spectrometric Coronagraph \citep[LASCO;][]{Brueckner1995} on board the Solar and Heliospheric Observatory (SOHO) and the inner coronagraph \citep[COR1;][]{Howard2008} on board the Solar Terrestrial Relations Observatory \citep[STEREO;][]{Kaiser2008} provide the white light images, which are used to inspect whether the radio burst is associated with a CME. In this work, we use the radio data recorded by the Learmonth Observatory, because that station was in daytime during the burst. Compared to the data from San Vito, the dynamic spectrum of Learmonth is clearer. The temporal resolution of the dynamic spectrum is 3 s. As shown in Figure 1, the type II radio burst started at 07:40 UT in the fundamental band with a start frequency of 109 MHz, and ended at 07:44 UT with an end frequency of about 52 MHz. The linear frequency drift rate is 0.2375 MHz s$^{-1}$ for the fundamental band. The burst is obvious in both the fundamental and harmonic bands. However, we cannot identify the start frequency of the harmonic band because it is out of the observing range. There is a weak indication of the band-splitting phenomenon in this burst, presumably implying that the shock was not very strong. The C2.4-class flare was observed by {\it GOES} a few minutes before the type II radio burst. The soft X-ray flux, which is shown as a red curve in Figure 1, began to rise at 07:34:34 UT, and reached its peak at 07:38:42 UT, 90 s before the type II radio burst. Then the flux declined to the pre-flare level in a few minutes. The rise and decay of the soft X-ray flux are roughly symmetrical. The duration of the flare is less than 10 minutes. Since type II radio bursts are usually accompanied by CMEs, we check the coronal images observed by {\it SOHO}/LASCO and {\it STEREO}/COR1 to see whether this is the case in the present event.
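As a quick arithmetic cross-check (not part of the original analysis), the quoted linear drift rate follows directly from the start and end frequencies and the four-minute duration of the fundamental band:

```python
# Cross-check of the fundamental-band drift rate quoted in the text:
# the burst drifts from 109 MHz (07:40 UT) down to 52 MHz (07:44 UT).
f_start_MHz = 109.0
f_end_MHz = 52.0
duration_s = 4 * 60  # 07:40 UT to 07:44 UT

drift_rate_MHz_per_s = (f_start_MHz - f_end_MHz) / duration_s
print(f"drift rate = {drift_rate_MHz_per_s:.4f} MHz/s")  # 0.2375 MHz/s, as quoted
```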
The fields of view (FOV) of LASCO and COR1 are located hundreds of Mm above the solar limb; therefore, it takes tens of minutes for a CME to propagate to the visible region. We then check the images of LASCO and COR1 in a period extending from about 20 to 50 minutes after the peak time of the flare (08:00--08:25 UT), as shown in Figure 2. It is clear that there is no indication of any CME in this time period. Since the active region is close to the solar limb, a CME should be detectable if it exists. For this reason, we can conclude that this flare and type II radio burst are not accompanied by a CME. We also check the EUV images of the low corona observed by {\it SDO}/AIA, whose time interval is 12 s. We find that there is a wave-like structure above the active region in the 171 {\AA}, 193 {\AA} and 211 {\AA} running difference images, as indicated by the horizontal arrows in Figure 3 (see online movie for details). The wave-like structure appeared at 07:38 UT and was too weak to be detected after 07:42 UT; it propagated quickly during this period with a velocity of $\sim$ 600 km s$^{-1}$. At almost the same time, a jet (indicated by the vertical arrows in Figure 3) was ejected. Moreover, in the running difference images whose time interval is 120 s, as shown in Figure 4, we find a group of loops that expand during the propagation of the wave-like structure, and continue to expand after the disappearance of the wave-like structure. The expansion of the loops pushes the wave-like structure to propagate outward. Note that the time interval and contrast ratio of the running difference images are adjusted differently in Figure 3 and Figure 4, so that they reveal more clearly the wave-like structure and the loops, respectively.
\section{Data Analysis and Result} \label{sect:Analysis and Result} From Figures 1 and 3, we can see that both the burst and the wave-like structure occurred at about 07:40 UT: the type II radio burst appeared about 2 minutes after the wave-like structure was first identified in the AIA images, and it disappeared about 2 minutes after the wave-like structure became invisible. This implies a temporal relationship between the wave-like structure and the burst. In order to confirm this point, we do further analysis and in particular derive the propagation speed of the wave-like structure and that of the type II radio burst. \subsection{Wave Speed Measured from AIA Images} First, we measure the propagation speed of the wave-like structure from the EUV observations of {\it SDO}/AIA at the 171 {\AA}, 193 {\AA} and 211 {\AA} wavelength channels. To do so, we put a slice (shown as the pink dashed line in Figure 3) along the propagation direction of the wave in the running-difference images at these channels. The time-slice diagrams are then plotted in Figure 5, whose time range is from 07:30 to 07:50 UT, covering the whole process of the formation and propagation of the wave-like structure. The wave-like structure can be identified clearly in the time-slice images as a bright narrow stripe. The front of the wave-like structure is marked by green asterisks in the images. The time interval of these asterisks is fixed to be 24 s. Because the wave front is identified visually, to reduce the error, we repeat the identification of the front points 10 times in total and then calculate the mean value and the standard deviation, which are shown in Figure 5. The speed of the wave-like structure is shown in the bottom row of Figure 5. It can be seen that the three channels yield similar results. The initial speed of the wave was about 1000 km s$^{-1}$, and then decreased to about 600 km s$^{-1}$ in 1 minute. After that, the wave propagated at a nearly constant speed until its disappearance.
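The speed estimate described above amounts to a least-squares linear fit of the visually identified front positions against time. A minimal sketch of the procedure follows; the front positions below are illustrative placeholders (only the 24 s cadence follows the text), not the measured values.

```python
import numpy as np

# Illustrative front positions along the slice (in Mm); NOT the measured data.
# The 24 s cadence of the identified front points follows the text.
t = np.arange(10) * 24.0  # s
rng = np.random.default_rng(0)
pos = 50.0 + 0.60 * t + rng.normal(0.0, 2.0, t.size)  # Mm; true slope 0.60 Mm/s

# Least-squares linear fit: the slope gives the mean propagation speed.
slope, intercept = np.polyfit(t, pos, 1)  # slope in Mm/s
speed_km_s = slope * 1.0e3                # 1 Mm = 1000 km
print(f"fitted speed ~ {speed_km_s:.0f} km/s")
```

Repeating the identification several times and averaging, as done in the text, simply reduces the scatter of the fitted slope.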
The linear speed of the wave is calculated to be 539 km s$^{-1}$, 654 km s$^{-1}$ and 659 km s$^{-1}$ in the 171 {\AA}, 193 {\AA} and 211 {\AA} channels, respectively. Such a speed is only marginally sufficient to generate a shock, since the Alfv\'en speed in the corona is comparable. Note that in the top row of Figure 5, some bright structures below the wave front correspond to the expansion of the loops. The loop expansion in the initiation phase of the wave-like structure is very hard to identify, as it may be submerged in the complicated background emission of the active region. \subsection{Shock Speed Derived from the Radio Spectrum} It is well known that the frequency drift in the dynamic spectrum of a type II radio burst is produced by a shock propagating outward in the corona. In order to get the shock speed from the dynamic spectrum, we need to know the density distribution of the corona. However, most of the available density models may deviate significantly from the real case and affect the inversion accuracy of the shock speed. In previous studies, the most commonly used density models are based on those of \citet{Newkirk1961} and \citet{Saito1977}. Usually, these models are multiplied by a coefficient \citep[e.g.,][]{Bemporad2010,Feng2012,Magdalenic2012}. This coefficient is not an invariant, but depends sensitively on time (solar maximum and minimum years) and on location (active region or quiet region). Therefore, it remains a problem how to adopt a coefficient that is accurate enough for a specific event. Since the start frequency of the type II radio burst is 109 MHz, the start height of the shock is estimated to be less than 0.1R$_{\odot}$ under the \citet{Saito1977} model, which should be within the FOV of AIA. Indeed, the wave-like structure did appear in the FOV of the AIA, as stated above.
Thanks to the multi-wavelength EUV emissions observed by AIA, it is possible to derive the density and temperature structure of the coronal part above the active region using the DEM method \citep{Cheng2012}. Then, we can get a more accurate density model, so as to derive a reliable shock speed from the type II burst, which can then be compared with the wave speed measured in the AIA images. \subsubsection{Constraining the Density Model} The DEM method is applied to calculate the density maps of the corona above the active region. The procedure is as follows. We use the intensities in six wavelength channels (94 {\AA}, 131 {\AA}, 171 {\AA}, 193 {\AA}, 211 {\AA}, and 335 {\AA}) recorded by {\it SDO}/AIA to derive the emission measure (EM) distribution. We choose the AIA images at 07:20 UT, 20 minutes before the start time of the type II radio burst, to get the EM map, which is shown in the left panel of Figure 6. The density of the emitting plasma can be calculated from the EM as \citep{Cheng2012}, \begin{equation} n_{e} = \sqrt{\frac{EM}{s}}, \end{equation} where $s$ is the line-of-sight length and can be estimated as \citep{Zucca2014}, \begin{equation} s \sim \sqrt{H \pi r}, \end{equation} where $H$ is the pressure scale height and $r$ is the heliocentric distance of the active region. The density distribution derived from the DEM method is then used to construct a specific density model for this event. We first calculate the density distribution for the coronal part between the wave and the active region below. This part is shown as the region enclosed by a white box in Figure 6, which has an angular range of $24\degree$ and a radial range of 1.02--1.06 R$_{\odot}$. With the density distribution for this region, we then calculate the mean density at a specific height by averaging over the whole angular range. In this way, we get a radial distribution of the density, which is shown as the black solid line in the right panel of Figure 6.
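Equations (1) and (2) translate into a one-line numerical estimate. The emission measure, scale height, and heliocentric distance below are assumed order-of-magnitude values for an active-region line of sight, not the measured values of Figure 6:

```python
import math

R_SUN_CM = 6.96e10  # solar radius in cm

# Assumed illustrative inputs (NOT the measured values from the EM map):
EM = 1.0e27          # emission measure [cm^-5]
H = 5.0e9            # pressure scale height [cm], roughly 0.07 R_sun
r = 1.04 * R_SUN_CM  # heliocentric distance of the region [cm]

# Eq. (2): effective line-of-sight depth
s = math.sqrt(H * math.pi * r)   # cm
# Eq. (1): electron density
n_e = math.sqrt(EM / s)          # cm^-3

print(f"s ~ {s:.2e} cm, n_e ~ {n_e:.2e} cm^-3")
```

With these inputs the estimate lands around $10^{8}$ cm$^{-3}$, a typical active-region coronal density at such heights.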
This result is then used to constrain the density model of \citet{Saito1977}. For simplicity, we multiply the density model by a coefficient in order to match the derived density distribution. The optimal coefficient is found to be 0.8. The revised density model is shown as the black dashed line in the right panel of Figure 6. Note that the revised density model can match the derived density distribution almost perfectly with a correlation coefficient of 0.98 between them. \subsubsection{Shock Speed Derived from the Frequency Drift Rate} With the appropriate density model, we can deduce the shock speed from the radio dynamic spectrum. First, we can get the frequency as a function of time ($t$) for the type II radio burst from the frequency spectrum. The frequency of type II radio bursts is considered to be the plasma frequency of the local electrons around the shock. Moreover, the plasma frequency is related to the electron density as \citep[e.g.,][]{Gopalswamy2009,Ma2011}, \begin{equation} f_{p} = 8.98 \times 10^{3} \sqrt{n_{e}}.\\ \end{equation} Then, for a specific frequency of the burst, the density at the shock producing the burst is calculated and the corresponding height ($h$) can be found from the revised density distribution. Finally, we can get the speed of the shock from the frequency drift rate. In practice, we select 15 points from the beginning to the end of the radio burst in the fundamental frequency belt, as marked with red asterisks in the left panel of Figure 7. It is seen that the fundamental belt becomes weak at the later period of the burst, while the harmonic belt is still fairly strong though dispersive. Both of the belts ended at about 07:44 UT. As already mentioned, the start time of the burst is about 120 s later than the appearance of the wave in the AIA images, while the end time of the burst is about 120 s later than the disappearance of the wave in the AIA images. 
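The inversion chain just described, frequency to density via Eq. (3), density to height via the density model, and height versus time to speed, can be sketched as follows. The exponential density profile used here is a hypothetical stand-in for the revised \citet{Saito1977} model, and the two frequency-time samples are illustrative rather than the 15 measured points:

```python
import math

R_SUN_KM = 6.96e5  # solar radius in km

def n_from_f(f_hz):
    """Invert Eq. (3): electron density [cm^-3] from plasma frequency [Hz]."""
    return (f_hz / 8.98e3) ** 2

def height_from_n(n_e, n0=2.0e8, scale=0.1):
    """Hypothetical exponential model n(h) = n0 * exp(-h / scale), h in R_sun.
    This stands in for the revised Saito-type model constrained in the text."""
    return scale * math.log(n0 / n_e)

# Two illustrative points on the fundamental band (time [s], frequency [Hz]):
t1, f1 = 0.0, 109e6
t2, f2 = 60.0, 90e6

h1 = height_from_n(n_from_f(f1))  # height above the base of the model, R_sun
h2 = height_from_n(n_from_f(f2))
speed_km_s = (h2 - h1) * R_SUN_KM / (t2 - t1)
print(f"h1 = {h1:.3f} R_sun, h2 = {h2:.3f} R_sun, speed ~ {speed_km_s:.0f} km/s")
```

The recovered speed falls in the few-hundred km s$^{-1}$ range quoted in the text, but the exact value depends entirely on the assumed density profile.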
The red solid curves in the left panel are the fits to the fundamental and harmonic frequency belts. With the frequency-time curve, we can obtain the height-time relation and hence the speed of the shock. The result is shown in the right panels of Figure 7. It is shown that the shock speed is about 700 km s$^{-1}$ at the beginning and then decreases to 450 km s$^{-1}$ later. The average speed is 542 km s$^{-1}$. The speed of the shock is close to the speed of the wave which has been detected in the AIA images. Considering the similarity in speeds and the close relationship in time between the wave-like structure and the type II radio burst, we believe that the former is indeed a shock that is responsible for the type II radio burst in the dynamic spectrum. \section{Discussion} \label{sect:discussion} According to previous studies, a common view is that the coronal shock responsible for the type II radio burst is generated either by CMEs or by flares, with the CME scenario being more widely favored. For this event, we checked the coronal images of LASCO/C2 and {\it STEREO}/COR1 (A and B) in white light and {\it SDO}/AIA in EUV, and did not find any associated CME. If a CME had erupted, it should have been detectable, since the source region is close to the solar limb. Therefore, we can conclude that the shock is not likely driven by a CME as in the usual scenario. On the other hand, it has sometimes been argued that the pressure pulse in a solar flare might generate a blast shock wave, which can emit type II radio bursts \citep[e.g.,][]{Uchida1974}. Similarly to the discussion in \citet{Vrsnak2008}, we tend to exclude this possibility because the radio burst event in this paper is associated with a C2.4 flare, whereas quite a few X-class flares, which are $\sim$100 times stronger than C-class flares, are not associated with EUV waves or type II radio bursts, e.g., the 2005 January 14 flare studied by \citet{Chen2006}.
Considering the above facts, here we propose a new mechanism for the generation of the shock wave in the case of a CME-less flare. By inspecting the AIA images carefully, we find that some coronal loops under the shock wave front expanded noticeably, starting from the appearance of the shock wave and ending a few minutes after the disappearance of the wave. We think that such a coincidence between the loop expansion and the coronal shock is not incidental, but implies a causal relationship. To check this point, we further analyze the HMI magnetograms of the active region and find that there was continual magnetic flux emergence and cancellation near the eastern footpoints of the loops (see Figure 8). It has been well established that magnetic reconnection between an emerging magnetic flux and the pre-existing coronal loop would lead to an impulsive flare \citep{Heyvaerts1977}. At the same time, such a reconnection would also lead to the expansion of the pre-existing coronal loop after it is re-rooted to another site due to the interchange reconnection, as simulated by \citet{Chen2000}. The post-reconnection coronal loop, as indicated by the dashed line in the left panel of Figure 9, is kinked around the reconnection site. Such a kink results in a strong magnetic tension force along the direction of the reconnection jet, which drives the coronal loop to expand. Such an expanding loop would lead to a fast-mode magnetoacoustic wave propagating outward, which further steepens to form a shock wave due to the nonlinear effect, since the background fast-mode wave speed decreases with height. The shock is then manifested as the type II radio burst. Since there is no continual eruption as in CMEs, the shock exists for a short time and then decays to an ordinary wave, which can explain why the radio burst lasted for a short period, as indicated by Figure 1.
However, it should be mentioned that this type of interchange reconnection happens frequently whenever newly emerging magnetic flux meets the pre-existing coronal loop, whereas shock waves responsible for type II radio bursts are rare in these CME-less flare events. This implies that a special condition is required. Here, we conjecture two different scenarios: (1) if the pre-existing magnetic loop is strongly inclined near the emerging flux, as depicted in the left panel of Figure 9, a shock wave would be generated, since the magnetic loop after reconnection (dashed line) is strongly bent. The magnetic tension force is very strong in this case. With the strong magnetic tension force, a faster jet would be ejected, which takes the threading magnetic field lines away quickly, and a fast-mode shock wave is expected to be produced. As the strongly bent field lines stretch, they become less bent, and no continual push is provided to the shock wave. As a consequence, the shock wave decays rapidly as it propagates outward, which naturally explains the short lifetime of the type II radio burst, as indicated by Figure 1; (2) if the pre-existing magnetic loop is only slightly inclined near the emerging flux, as depicted in the right panel of Figure 9, no shock wave would be generated, and only a fast-mode MHD wave is produced, since the magnetic loop after reconnection (dashed line) is weakly bent. The magnetic tension force is weak in this case. In our event, the magnetic field lines, as traced from the EUV loops indicated by the blue arrow in Figure 4, are strongly bent toward the solar surface. This fits the first scenario and explains why a type II radio burst is observed. In order to further confirm such a conjecture, we perform two-dimensional magnetohydrodynamic numerical simulations of the interchange reconnection between emerging flux and pre-existing magnetic field in two cases.
In case A, the magnetic field lines are strongly inclined near the flux emerging site, as illustrated in the top-left panel of Figure 10, whereas in case B, the pre-existing magnetic field lines are slightly inclined near the flux emerging site, as illustrated in the top-right panel of Figure 10. Note that the magnetic field strength near the reconnection is chosen to be the same. We adopt the same numerical code as in \citet{Chen2000}, which employs a multi-step implicit scheme \citep{Hu1989, Chen2000Ch}. The numerical results reveal that as magnetic flux emerges from the bottom of the simulation box, i.e., the base of the corona, interchange reconnection occurs between the emerging magnetic flux and the pre-existing field lines. In both cases, the bent field lines after reconnection drive a jet and a wave front propagating in the upper-right direction. The wave fronts are indicated in the two panels of the bottom row of Figure 10 for both cases. However, it is noted that in case A, the density enhancement of the wave front is stronger than that in case B, which is manifested by different colors in the density map. Such a result strongly implies that only with strongly bent post-reconnection field lines can a shock wave be produced when emerging flux reconnects with the pre-existing coronal field in CME-less flares. \section{Conclusions} \label{sect:conclusion} In this paper, we study a type II radio burst without an accompanying CME. The burst is related to a coronal shock that can be clearly identified in AIA images. However, we find that the shock is neither produced by a CME nor by a flare. We propose a new mechanism for this event: the shock is generated by the expansion of the magnetic loops after reconnecting with emerging flux. The main results of the paper are summarized as follows: 1. 
We apply the DEM method to the multi-wavelength EUV data to obtain the density distribution in the radial direction above the active region, which is then used to constrain the density model of \citet{Saito1977}. Using the resulting density model and the dynamic frequency spectrum of the type II radio burst, we further derive the shock speed. 2. We identify the wave-like structure in the AIA images. We find a close relationship in the occurrence time and in the propagation speeds between the wave-like structure and the shock inverted from the type II radio burst. The wave-like structure in AIA images is thus most likely a coronal shock responsible for the type II burst. 3. We suggest that the shock is neither generated by a CME as a piston-driven wave nor by a flare as a blast wave. We propose a new mechanism for the generation of the shock. The strongly-inclined magnetic loops, after reconnecting with emerging flux, can expand quickly to generate a shock front ahead. This scenario of loop-driven shock is somewhat different from the usual mechanism of CME-driven shock. In the former, the expansion of loops serves as a piston driving the shock wave in a short time while the loops do not erupt as a CME with their feet still anchored to the photosphere. \acknowledgements The authors are grateful to the referee for valuable comments that helped improve the manuscript. SDO is a mission of NASA's Living With a Star Program, STEREO is the third mission in NASA's Solar Terrestrial Probes program, and SOHO is a mission of international cooperation between ESA and NASA. This work was supported by the NSFC (grants 11303016, 11373023, 11203014, and 11025314) and NKBRSF (grants 2011CB811402 and 2014CB744203). \bibliographystyle{apj}
\section{Introduction} Understanding how an isolated quantum system prepared out of equilibrium can exhibit thermal properties at late times, i.e.\ how it thermalizes, has challenged quantum physicists for almost a century. The eigenstate thermalization hypothesis (ETH) \cite{srednickiChaosQuantumThermalization1994,deutschEigenstateThermalizationHypothesis2018} offers a generic mechanism to explain this phenomenon but makes strong assumptions on the structure of energy eigenstates in terms of the matrix elements of local operators. Nonetheless, it has been shown numerically that a large class of quantum systems complies with ETH and thermalizes \cite{gogolinEquilibrationThermalisationEmergence2016,dalessioQuantumChaosEigenstate2016}. A notable exception is the class of strongly disordered systems in which transport is absent and the system retains memory of the initial state at arbitrary times \cite{nandkishoreManyBodyLocalizationThermalization2015,abaninRecentProgressManybody2017a,nandkishoreManyBodyLocalization2017,abaninManybodyLocalizationThermalization2019}.
This phenomenon, called many-body localization (MBL), has been verified for small systems including, but not limited to, spin systems with random potentials \cite{znidaricManyBodyLocalization2008,luitzManybodyLocalizationEdge2015,sierantPolynomiallyFilteredExact2020}, random nearest \cite{vasseurParticleholeSymmetryManybody2016,protopopovNonAbelianSymmetriesDisorder2020,chandaManybodyLocalizationTransition2020} and next-to-nearest neighbour interactions \cite{kjallManyBodyLocalizationDisordered2014,bahovadinovManybodyLocalizationTransition2022}, and power-law interactions \cite{burinManybodyDelocalizationStrongly2015,schifferManybodyLocalizationSpin2019,roySelfconsistentTheoryManybody2019,safavi-nainiQuantumDynamicsDisordered2019,mohdebExcitedEigenstateEntanglementProperties2022} using a combination of exact numerical approaches and heuristic arguments like the strong disorder renormalization group (SDRG) \cite{pekkerHilbertGlassTransitionNew2014,potterUniversalPropertiesManybody2015,voskTheoryManyBodyLocalization2015,monthusStrongDisorderRenormalization2018} to generalize to large systems. Recently, claims have been made that this localization phenomenology may not be stable in the thermodynamic limit due to thermal inclusions \cite{deroeckStabilityInstabilityDelocalization2017,luitzHowSmallQuantum2017,morningstarAvalanchesManybodyResonances2022,selsDynamicalObstructionLocalization2021,selsThermalizationDiluteImpurities2022,selsMarkovianBathsQuantum2021,thieryManyBodyDelocalizationQuantum2018,ponteThermalInclusionsHow2017,pandeyAdiabaticEigenstateDeformations2020}. These are small, more ordered subregions thought to thermalize with their surroundings and thus slowly push the system towards thermalization. Unfortunately, these regions are very rare and thus only start appearing in large systems far beyond the reach of numerical methods.
This raises the question of whether this instability is relevant for quantum simulation experiments, being finite in size and limited by coherence time. In this paper, we only focus on the phenomenology of localization in finite systems and subsequently use the term ``localized regime'' instead of a ``phase'', following the terminology of \cite{morningstarAvalanchesManybodyResonances2022}. Complementary to numerical works, there are a number of experimental results falling into roughly two classes: experiments with single-particle resolution, including optical lattices \cite{kondovDisorderInducedLocalizationStrongly2015,schreiberObservationManybodyLocalization2015,bordiaCouplingIdenticalOnedimensional2016,lukinProbingEntanglementManybody2019} and trapped ions \cite{smithManybodyLocalizationQuantum2016}, and experiments based on macroscopic samples, like NV centers in diamond \cite{kucskoCriticalThermalizationDisordered2018} or NMR systems \cite{weiExploringLocalizationNuclear2018}. The former offer precise control, but are rather limited in size, while the latter can realize much larger systems at the expense of flexibility, in particular a lack of programmable disorder. Cold gases of Rydberg atoms implement dipolar dynamics with random couplings (similar to NMR systems or NV centers) and allow for control of the disorder strength and even the power-law of the interaction at rather large particle numbers \cite{signolesGlassyDynamicsDisordered2021}, which makes them a powerful platform for studying localization phenomena. Motivated by recent progress on quantum simulations with Rydberg atoms \cite{orioliRelaxationIsolatedDipolarInteracting2018,signolesGlassyDynamicsDisordered2021,geierFloquetHamiltonianEngineering2021,franzAbsenceThermalizationInteracting2022}, we consider a power-law interacting spin system where the disorder is due to randomly positioned spins respecting a blockade condition, which induces disordered couplings.
In this setup, the strength of the disorder can be tuned by changing the density of particles or, equivalently, the minimal distance between them. Starting out in an ordered system, where the blockade radius is of the order of the mean inter-particle distance, we show numerically that this system exhibits a crossover to a localized regime at small blockade radius and apply an SDRG approach to derive a simple model based on strongly interacting pairs, which captures the properties of the eigenstates in the localized regime well. Our study thus adds to the body of numerical works on MBL, focusing on dipolar systems with tunable positional disorder, and is highly relevant to experimental efforts, as a wide range of quantum simulation platforms feature dipolar interactions. \section{Localization in a Rydberg gas} \subsection{System} We consider the Heisenberg XXZ spin model described by the Hamiltonian ($\hbar = 1$) \begin{equation} \hat{H} = \frac{1}{2}\sum_{i\neq j} J_{ij} \underbrace{ \left( \hat{S}_x^{(i)}\hat{S}_x^{(j)} + \hat{S}_y^{(i)} \hat{S}_y^{(j)} + \Delta \hat{S}_z^{(i)} \hat{S}_z^{(j)} \right) }_{\equiv H\ind{pair}^{(i)(j)}} \, \label{eq:H_XXZ} \end{equation} where $\hat{S}_{\alpha}^{(k)}$ (with $\alpha \in \{x,y,z\}$) denote the spin-$\frac{1}{2}$ operators acting on the $k$-th spin. The coupling $J_{ij}$ between spins $i$ and $j$ at positions $x_i$ and $x_j$ is given by $J_{ij}=\frac{C_\alpha}{|x_i-x_j|^\alpha}$, where $C_\alpha$ is an interaction coefficient which we set to $C_\alpha=1$. In experimental realizations of this model with Rydberg atoms, the values of the anisotropy parameter $\Delta$ and interaction exponent $\alpha$ are controllable via the choice of the Rydberg states encoding the two spin states. The cases $\alpha=3$, $\Delta=0$ (dipolar exchange) and $\alpha=6$, $\Delta\approx -0.7$ (van der Waals) have been realized experimentally \cite{signolesGlassyDynamicsDisordered2021,geierFloquetHamiltonianEngineering2021}.
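For concreteness, the Hamiltonian of Eq.~\eqref{eq:H_XXZ} can be assembled as a dense matrix for a handful of spins directly from the positions. The following is a minimal sketch, not the authors' published code; the helper names, the dense-matrix approach and the example parameter defaults are ours:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
Sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
Sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    """Lift a single-site operator to the n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xxz_hamiltonian(positions, alpha=6, delta=-0.73):
    """Dense XXZ Hamiltonian with power-law couplings
    J_ij = C_alpha / |x_i - x_j|**alpha, with C_alpha = 1 as in the text."""
    n = len(positions)
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        for j in range(i + 1, n):  # (1/2) * sum_{i != j} == sum_{i < j}
            J = 1.0 / abs(positions[i] - positions[j]) ** alpha
            H += J * (embed(Sx, i, n) @ embed(Sx, j, n)
                      + embed(Sy, i, n) @ embed(Sy, j, n)
                      + delta * embed(Sz, i, n) @ embed(Sz, j, n))
    return H
```

For larger $N$ one would switch to sparse matrices and exploit the conserved total magnetization, as is done in the paper's numerics.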
For typical cloud temperatures and time scales of the spin dynamics the atom positions can be regarded as fixed (frozen gas approximation). During the initial Rydberg excitation, the spins are subjected to the Rydberg blockade \cite{lukinDipoleBlockadeQuantum2001} which means no two spins can be closer than some distance $r_b$, called the blockade radius. This feature allows one to tune the strength of disorder via the sample's density: In a very dilute sample, the mean inter-spin distance is much larger than the blockade radius $r_b$ and thus positions are essentially uncorrelated. In the other extreme, the spins are tightly packed and exhibit strong spatial correlation. We quantify the strength of disorder by the ratio $W$ of the system's total volume $V$ over total blocked volume $V\ind{block}$ or, equivalently, by the ratio of Wigner-Seitz radius $a_0$, which is half of the mean inter-spin distance, to the blockade radius $r_b$ to the power of the dimension $d$, \begin{equation} W = \frac{V}{V\ind{block}} = \left( \frac{a_0}{r_b} \right)^d \, . \end{equation} For $d=1$, the minimal value of $W_{min}=\frac{1}{2}$ is attained for a translationally invariant chain with spacing $2a_0 = r_b$, as illustrated in Fig.~\ref{fig:1}(a). \subsection{Effective pair description} \begin{figure} \centering \def0.45\textwidth{0.45\textwidth} \input{gfx/figure1.pdf_tex} \caption{\textbf{Pair description.} The blockade constraint (blue shadings) enables tuning of disorder in the couplings (green lines) from fully ordered (a) to disordered (b). In the latter case a perturbative treatment to first order yields a description in terms of strongly correlated pairs (c) subject to an Ising-like interaction (not depicted). These pairs constitute local integrals of motion (LIOM).} \label{fig:1} \end{figure} This model differs from the random field Heisenberg model, which has been studied extensively in the MBL literature, as no disordered potentials are considered. 
Thus it may not be immediately apparent why this system features localization and what constitutes the local conserved quantities akin to the $l$-bits \cite{husePhenomenologyFullyManybodylocalized2014} in the standard scenario. Here we provide a phenomenological picture in the spirit of the SDRG suggesting that localization should appear due to strongly interacting pairs. Consider a strongly disordered cloud of $N$ spins described by Eq.~\eqref{eq:H_XXZ} like the example depicted in Fig.~\ref{fig:1}(b). Due to the power-law interactions, coupling strengths vary strongly between different pairs of atoms, symbolized by the width and brightness of the green lines. This motivates us to employ a perturbative treatment, in which we single out the strongest pair coupling and consider all other couplings as a perturbation. In the example shown in Fig.~\ref{fig:1}(b), the two rightmost spins share the strongest coupling and we can see that it is much stronger than the other couplings of either one of the spins to the rest of the system. Using perturbation theory to first order, we find that the pair of spins almost decouples from the rest of the system, leaving only an effective Ising-like interaction, which is unimportant for the further procedure and thus not shown in the figure. For details on the calculations involved, see appendix \ref{appendix:pair_model}. We may now repeat this procedure of eliminating couplings between pairs and the rest of the system by identifying the next strongest interaction among the remaining spins, which, in this example, is the coupling between the second and third spin. Eliminating the respective couplings as well leaves us with the effective pairs shown in Fig.~\ref{fig:1}(c). Note that in an ordered system, as shown in Fig.~\ref{fig:1}(a), this perturbative treatment is not applicable as not all neglected couplings can be considered small.
We also note that the order of eliminations is not important as long as each time the inner-pair coupling is much larger than the couplings between pair and rest. Concretely, for the given example, choosing the coupling between spins 2 and 3 in Fig.~\ref{fig:1}(b) first in the pair elimination process does not change the result. The great advantage of this ansatz is that we can now give a simple description of the whole many-body spectrum. Diagonalizing $H\ind{pair}$ (see Eq.~\ref{eq:H_XXZ}), we find two maximally entangled eigenstates $\ket{\pm} = 1/\sqrt{2} (\ket{\uparrow\downarrow} \pm \ket{\downarrow\uparrow})$ at energies $E_\pm = \pm 2 - \Delta$ and two degenerate states $\ket{\uparrow\uparrow}$, $\ket{\downarrow\downarrow}$ at energy $E_d = \Delta$, which we will refer to as $\ket{\updownarrow\updownarrow}$. The Ising-like interaction between pairs does not act on the entangled states $\ket{\pm}$ and is diagonal w.r.t.\ $\ket{\updownarrow\updownarrow}$. Thus, in the pair picture, the eigenstates of the full system are now given by tensor products of these four pair eigenstates. We refer to this basis as the ``pair basis''. In the many-body spectrum, the degeneracy between the pair states $\ket{\uparrow\uparrow}$ and $\ket{\downarrow\downarrow}$ is lifted due to the emerging Ising-like interaction. However, we note that this splitting is small compared to the splitting between the other pair eigenstates as it emerges from first order perturbation theory. The pair picture is analogous to the $l$-bit picture often used in MBL, where strong local disorder potentials lead to the emergence of quasi-local conserved quantities $\hat{\tau}^{(i)}\sim \hat{\sigma}_z^{(i)}$ \cite{husePhenomenologyFullyManybodylocalized2014,serbynLocalConservationLaws2013}. Here, we see that each projector on a pair's eigenstate constitutes an approximately conserved quantity and hence is a local integral of motion (LIOM).
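The pair spectrum quoted above is easy to verify numerically. The stated values $E_\pm = \pm 2 - \Delta$ and $E_d = \Delta$ correspond, as we read them, to writing $H\ind{pair}$ with Pauli operators ($\sigma = 2S$, i.e., four times the spin-$\frac{1}{2}$ normalization of Eq.~\eqref{eq:H_XXZ} at unit coupling); a short sketch under that convention, with all names ours:

```python
import numpy as np

# Pauli matrices; sigma = 2 S
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pair_hamiltonian(delta):
    """Two-spin XXZ pair Hamiltonian at unit coupling in Pauli normalization
    (i.e., 4 * H_pair in the spin-1/2 convention of the main text)."""
    return np.kron(sx, sx) + np.kron(sy, sy) + delta * np.kron(sz, sz)

delta = -0.73
evals, evecs = np.linalg.eigh(pair_hamiltonian(delta))
# Expected spectrum: E_+ = 2 - delta, E_- = -2 - delta, and E_d = delta (twice);
# the top eigenvector is the maximally entangled |+> = (|ud> + |du>)/sqrt(2).
```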
Thus, we established a description akin to the $l$-bit picture of MBL for this disordered Heisenberg model, where the role of LIOMs is taken by strongly interacting pairs. While this ansatz is heuristic and neglects all higher resonances, which may play a crucial role in delocalizing the system, it will nonetheless turn out to be useful for interpreting and understanding the spectral and eigenstate properties reported in the following. \section{Numerical Results} To minimize boundary effects, we consider a one\nobreakdash-dimensional system with periodic boundary conditions \footnote{Only the closest copy of each spin is considered for the interaction.} of up to $N=16$ spins governed by Eq.~\eqref{eq:H_XXZ} and perform exact diagonalisation on the sector of smallest positive magnetisation. We fix the interaction exponent to $\alpha = 6$, corresponding to van der Waals interactions, and set $\Delta=-0.73$ (cf.\ \cite{signolesGlassyDynamicsDisordered2021}). We do not expect a strong dependence of our results on the precise value of $\Delta$ as long as one steers clear of regions around points where additional symmetries emerge. For each disorder strength $W$, we generate 2000 configurations, perform a full diagonalisation and compute several well established indicators for the localization transition from the spectrum. We always average over all eigenstates/-values as restricting to the bulk of the spectrum does not lead to qualitative changes in the observed behavior. For a description of the algorithm for choosing the configurations, we refer to appendix~\ref{appendix:drawing_positions}. All code used for this paper can be found online \footnote{\url{https://github.com/abraemer/Pair-localization-paper}}. The following sections discuss different indicators of localization with the aim of establishing the localization crossover in this model and employ the pair model for interpretation and predictions.
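The paper's configuration-sampling algorithm is described in its appendix; as a simple stand-in, blockaded positions on a ring can be drawn by naive rejection sampling using the relation $r_b = a_0 / W$ for $d = 1$. The function names and the choice $a_0 = 1$ below are ours:

```python
import numpy as np

def min_circular_gap(pos, length):
    """Smallest distance between neighbouring points on a ring of given length."""
    p = np.sort(pos)
    gaps = np.diff(np.append(p, p[0] + length))
    return gaps.min()

def sample_blockaded_chain(n, W, rng, max_tries=100_000):
    """Draw n positions on a ring of circumference 2*n (so the Wigner-Seitz
    radius is a0 = 1) such that no two spins are closer than r_b = a0 / W
    (d = 1). Naive rejection sampling, inefficient near the ordered limit."""
    length = 2.0 * n
    r_b = 1.0 / W
    for _ in range(max_tries):
        pos = rng.uniform(0.0, length, size=n)
        if min_circular_gap(pos, length) >= r_b:
            return np.sort(pos)
    raise RuntimeError("could not satisfy the blockade constraint")
```

Near $W_{min}$ the acceptance rate of this naive approach vanishes, which is why a dedicated algorithm (appendix of the paper) is used in practice.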
The last section directly compares the pair basis to the eigenstates, thus demonstrating its validity. \subsection{Level spacing ratio} The spectral average of the level spacing ratio (LSR), defined as \cite{oganesyanLocalizationInteractingFermions2007} \begin{equation} \langle r \rangle = \frac{1}{|\mathcal{H}|}\sum_n \min\left(\frac{E_{n+2} - E_{n+1}}{E_{n+1} - E_{n}}, \frac{E_{n+1} - E_{n}}{E_{n+2} - E_{n+1}}\right), \end{equation} is a simple way of characterizing the distribution of differences between adjacent energy levels. For thermalizing (ergodic) systems, the Hamiltonian is expected to show a mean LSR resembling that of a random matrix from the Gaussian orthogonal ensemble (GOE), because its eigenvectors essentially look like random vectors. Thus one can use random matrix theory to obtain $\langle r \rangle\ind{thermal} = 4 - 2\sqrt{3} \approx 0.536 $ \cite{atasDistributionRatioConsecutive2013}. On the other hand, in localized systems the eigenvalues follow a Poisson distribution, since they are essentially sums of randomly distributed energies from the $l$-bits the system consists of. Computing the mean LSR in this case yields $\langle r \rangle\ind{MBL} = 2 \ln 2 - 1 \approx 0.386 $ \cite{atasDistributionRatioConsecutive2013}. \begin{figure} \includegraphics[width=0.45\textwidth]{lsr_W} \caption{\textbf{Level-spacing ratio.} With increasing disorder the LSR shows a crossover from an ergodic value to its Poissonian value and below. We identify four major regions where the physics is governed by (I) translational symmetry breaking, (II) thermal behavior, (III) the localization crossover and (IV) localization. The horizontal lines show random-matrix theory predictions.} \label{fig:lsr} \end{figure} Comparing with the numerical results in Fig.~\ref{fig:lsr} and focusing on the central parts first, we find that the mean LSR reaches its thermal value for large enough systems and weak disorder (II), dropping towards the Poissonian value for stronger disorder (III).
With growing system size, the thermal plateau (II) broadens, marking a parameter region where the system appears ergodic. But while the plateau broadens, the drop-off (III) for increasing disorder strength becomes steeper, meaning the crossover becomes sharper as the system gets larger. Considering very strong disorder (IV), the mean LSR drops even below the Poissonian value, which indicates level attraction. This effect can be explained by the pair model: As stated earlier, the $\ket{\updownarrow\updownarrow}$ states' degeneracy is lifted by the effective Ising-like terms from first order perturbation theory, which means the splitting is of smaller magnitude compared to the intra-pair interactions. For small systems with comparatively low spectral density, this means that the small lifting likely fails to mix the formerly degenerate states into their surrounding spectrum. Thus the LSR still reflects the near degeneracy within the pairs, leading to level attraction. A similar argument can be made at very weak disorder (I): Here the source of the degeneracy is the proximity to the perfectly ordered case at $W=0.5$, which has an additional translation invariance. Weak disorder breaks that symmetry but couples the symmetry sectors only weakly, leading again to a very small energetic splitting of degenerate states. We want to emphasize that the reason for level attraction is very different in nature in (I) and (IV): Whereas in (I) the system is close to a system with obvious conserved quantities due to symmetries, in (IV) there is the emergent integrability of the MBL regime \cite{abaninManybodyLocalizationThermalization2019}. We conclude that, in analogy to standard MBL, we find a crossover in the level spacing distribution from a regime with level repulsion to Poissonian gaps indicating a localization crossover. At very strong disorder, we even find a region with level attraction, the source of which can be explained by the effective pair model.
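In practice the spectrally averaged LSR reduces to a few lines of code. The sketch below is our own helper, not the paper's implementation; it reproduces the Poissonian reference value $\langle r \rangle \approx 0.386$ for uncorrelated levels:

```python
import numpy as np

def mean_lsr(energies):
    """Spectrally averaged level-spacing ratio <r> of a spectrum."""
    E = np.sort(np.asarray(energies, dtype=float))
    gaps = np.diff(E)
    ratios = np.minimum(gaps[1:] / gaps[:-1], gaps[:-1] / gaps[1:])
    return float(ratios.mean())

# Uncorrelated levels (Poissonian statistics): i.i.d. uniform energies
rng = np.random.default_rng(0)
r_poisson = np.mean([mean_lsr(rng.uniform(0, 1, 2000)) for _ in range(100)])
```

Energies drawn i.i.d.\ uniformly have locally Poissonian gap statistics, so the sample mean lands near $2 \ln 2 - 1$.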
\subsection{Thouless parameter} Complementary to eigenvalue statistics, we also probe eigenstate properties by computing the Thouless parameter \begin{equation} \mathcal{G}_n = \ln \frac{|\mel{n}{\hat{V}}{n+1}|}{E^\prime_{n+1} - E^\prime_n} \end{equation} introduced by \citeauthor{serbynCriterionManybodyLocalizationdelocalization2015}\cite{serbynCriterionManybodyLocalizationdelocalization2015}. This quantity is akin to the Thouless conductance in single particle systems and quantifies how well two states $\ket{n}$, $\ket{n+1}$ with perturbed energies $E^\prime_n = E_n + \expval{\hat{V}}{n}$ are coupled by a local perturbation $\hat{V}$. In the thermal phase, states of similar energy will have similar spatial structures, whereas in the localized phase, eigenstates are products of LIOM eigenstates and thus typically vary drastically from one to the next. One can derive the scaling of the average $\mathcal{G}$ in the thermal regime to be $\mathcal{G}\propto \log|\mathcal{H}|$ and in the localized regime to be $\mathcal{G}\propto -\log|\mathcal{H}|$, leading to the natural definition of the crossover's location as the point where $\mathcal{G}$ is independent of system size \cite{serbynCriterionManybodyLocalizationdelocalization2015}. \begin{figure} \includegraphics[width=0.45\textwidth]{thouless_W} \caption{\textbf{Thouless parameter.} Spectrally averaged $\mathcal{G}$ vs.\ disorder strength $W$. Data shown uses the local operator $\hat{V}_1=\hat{S}_z^{(1)}$.} \label{fig:thouless_parameter} \end{figure} Figure~\ref{fig:thouless_parameter} shows results using the local operator $\hat{V}_1=\hat{S}_z^{(1)}$. Data for the local operators $\hat{V}_2 = \hat{S}_z^{(1)}\hat{S}_z^{(2)}$ and $\hat{V}_3 = \hat{S}_+^{(1)}\hat{S}_-^{(2)} + \mathrm{h.c.}$ is visually identical. There is a very clear point where all curves intersect each other, indicating the crossover's location.
To the right of the crossing point in the localized regime, the curves are roughly evenly spaced, reflecting the expectation of $\mathcal{G}\propto -\log|\mathcal{H}|$ and clearly signaling the localized regime. \subsection{Half-chain entropy} Having shown that there is indeed a localization crossover, we now demonstrate that our effective pair model is a good approximation. We start by probing the half-chain entropy, $S=-\Tr \rho_A \log_2\rho_A$, with $\rho_A=\Tr_B(\rho)$, i.e.\ the entanglement entropy between two halves of the chain. For that we select $\left\lfloor\frac{N}{2}\right\rfloor$ consecutive spins and trace out the rest, resulting in two cuts due to the periodic boundary conditions, and average over all $N$ possible choices of connected subsystems and all eigenstates. In an ergodic system, all bulk states should exhibit volume-law entanglement, meaning $S\propto N$. In contrast, in a localized setting all states show area-law entanglement, which for $d=1$ means $S = \mathrm{const}$ \cite{eisertAreaLawsEntanglement2010,gogolinEquilibrationThermalisationEmergence2016}. To compute the half-chain entropy predicted by the pair model, we need to determine how many pairs are divided by each cut and how often these pairs are found in one of the entangled states $\ket{\pm} = 1/\sqrt{2} (\ket{\uparrow\downarrow} \pm \ket{\downarrow\uparrow})$. Not all pairs consist of adjacent spins (see Fig.~\ref{fig:1}c), so a cut can separate more than one pair. The number of cut bonds is easily determined from the position data alone, by adding up the distances between paired spins. Respecting the periodic boundary conditions of the system yields an additional factor of two, since there are two cuts needed to divide the chain.
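Numerically, the half-chain entropy of a pure state follows from a singular value decomposition across the cut; a minimal sketch (our own helper, which assumes subsystem $A$'s sites come first, so on a ring the basis would first be permuted accordingly):

```python
import numpy as np

def entanglement_entropy(psi, n_a, n_b):
    """Von Neumann entropy (in bits) of the first n_a spins of a pure state
    psi on n_a + n_b qubits: reshape to a (2**n_a, 2**n_b) matrix, take the
    SVD, and sum -p log2 p over the squared singular values."""
    m = np.asarray(psi).reshape(2 ** n_a, 2 ** n_b)
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]  # drop numerical zeros
    return float(-(p * np.log2(p)).sum())
```

For instance, a pair in the maximally entangled state $\ket{+}$ that is cut in half contributes exactly one ebit.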
Considering the entropy contribution of a single bond, if we were to average over all possible configurations of pair states, each cut bond would contribute half an ebit on average, as half of the pair states are maximally entangled and the other half not entangled at all. However, here we consider the sector of smallest positive magnetization, which yields a slightly larger entropy, because it favors the entangled states $\ket{\pm}$ (which have zero net magnetization) over the fully polarized ones. This modification can be computed exactly (see appendix \ref{appendix:pair_entropy} for details). Taking into account both the effects of non-local pairs and of the fixed total magnetization, we can compute a prediction for the entanglement entropy directly from the interaction matrix $J_{ij}$. Figure~\ref{fig:hce} shows both the numerically computed values for different system sizes (solid) and the pair-model prediction (dashed). \begin{figure} \includegraphics[width=0.45\textwidth]{hce_W} \caption{\textbf{Half-chain entropy.} Average of half-chain entropy for different system sizes across disorder and prediction by pair description (black dashed line). Inset: Linear fits at fixed disorder strengths indicated by the vertical dashed lines in the main panel.} \label{fig:hce} \end{figure} We clearly see the change between the ergodic and localized regimes in the numerically computed data. For strong disorder all lines collapse, confirming on the one hand the area-law entanglement expected in the localized regime and, on the other hand, validating the pair model, as it predicts the strong-disorder limit with striking accuracy. Another piece of information that we can access easily via the half-chain entropy is the location of the crossover. To determine it, we calculate the variance of the half-chain entropy over different disorder realizations and extract the maximum for each chain length $N$ via a quadratic fit \cite{kjallManyBodyLocalizationDisordered2014}.
Figure~\ref{fig:hce_variance} shows no strong dependence of the crossover point on $N$ in the range of accessible system sizes. \begin{figure} \includegraphics[width=0.45\textwidth]{entropy_variance_W} \caption{\textbf{Standard deviation of half-chain entropy.} The main plot shows the standard deviation of the half-chain entropy across disorder realizations exhibiting a clear maximum around which a quadratic polynomial is fitted. Inset: Position of the maximum as extracted by the fits. Errors shown are statistical errors extracted from the fits.} \label{fig:hce_variance} \end{figure} Interestingly, the crossover location is very close to the density given by Rényi's parking constant, or jamming limit, which is the maximal density attainable by randomly placing non-overlapping unit intervals on the number line \cite{renyialfredOneDimensionalProblemConcerning1958}. As atom positions in experiments with Rydberg spins result from such a random process, this could imply that these experiments might not be able to reach the densities required for observing the fully ergodic regime. However, it is unclear how the crossover location generalizes to higher dimensions and larger systems. \subsection{Participation ratio} Now that we have seen that the pair model captures the spatial entanglement structure of the exact eigenstates, we compare the predicted eigenstates directly to the exact ones by computing the participation ratio (PR). Intuitively, it measures how many states of a reference basis $\mathcal{B}=\{\ket{b}\}$ contribute to a given eigenstate $\ket{\phi_n}$ \begin{align} \mathrm{PR}_\mathcal{B}(\ket{\phi_n}) = \left(\sum_{b\in\mathcal{B}} |\braket{b}{\phi_n}|^4 \right)^{-1} \quad. \end{align} Usually in the MBL context, one chooses a product basis as reference, because a low PR relative to a product basis means the eigenstates are close to product states.
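Evaluating the PR is straightforward once a state's amplitudes in the reference basis are known; a minimal sketch (our own helper, not the paper's code):

```python
import numpy as np

def participation_ratio(phi, basis=None):
    """Participation ratio of `phi` w.r.t. an orthonormal reference basis
    given as the columns of `basis`; defaults to the computational
    (z-) basis."""
    phi = np.asarray(phi, dtype=complex)
    amps = phi if basis is None else basis.conj().T @ phi
    return float(1.0 / np.sum(np.abs(amps) ** 4))
```

A reference basis state has $\mathrm{PR}=1$, while a uniform superposition over $d$ basis states has $\mathrm{PR}=d$.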
"Low" in this context means a sublinear scaling of PR with the dimension of the Hilbert space $\mathcal{H}$: $\mathrm{PR} \propto |\mathcal{H}|^{\tau}$ where $\tau < 1$. In contrast, a thermalizing system always has $\mathrm{PR} \propto |\mathcal{H}|$ with respect to any product basis \cite{serbynThoulessEnergyMultifractality2017,maceMultifractalScalingsManyBody2019,luitzMultifractalityItsRole2020}. Here we compare two different reference bases, the $z$-basis $\mathcal{Z}=\{\ket{\uparrow},\ket{\downarrow}\}^{\otimes N}$ and the pair basis $\mathcal{P}=\{\ket{\pm},\ket{\updownarrow\updownarrow}\}^{\otimes N/2}$, introduced above, to determine how well the pair model describes the eigenstates. If the pair basis $\mathcal{P}$ was exactly equal to the eigenbasis, its PR would be exactly 1. In this case the expected PR with respect to the z-basis, averaged over the Hilbert space, $\mathcal{Z}$ will be $1.5^{N/2}$, because a single pair has an average PR of $1.5$. However, we only consider the sector of smallest positive magnetization, which increases the expected PR by a similar line of reasoning as for the entropy in the previous section. \begin{figure} \includegraphics[width=0.45\textwidth]{pr_W} \caption{\textbf{Participation ratio.} (a) PR relative to Hilbert space dimension $|\mathcal{H}|$ for different reference bases: $z$-basis in blue, pair basis in red. The inset shows a magnification of the region towards perfectly ordered systems. (b) shows the growth in absolute PR with increasing system size in the localized regime. The used value of $W$ is indicated by the dash-dotted line in (a).} \label{fig:pr} \end{figure} Figure~\ref{fig:pr}(a) shows the PR relative to the two reference bases as a fraction of the Hilbert space dimension $|\mathcal{H}|$. We see that the weakly disordered regime indeed has ergodic eigenstates as the curves collapse onto each other. 
The small offset between the two reference bases is plausible: since a thermal system's eigenstates exhibit volume-law entanglement, the overlap with a product basis like $\mathcal{Z}$ is minimal. The states of the pair basis contain pairwise entanglement and are thus a bit closer, which manifests as a slightly lower PR. Around $W=0.6$ the scaling with $|\mathcal{H}|$ starts to change to a sublinear relation as we cross over to the localized regime. Checking the PR deep in the localized regime (at around $W=1.67$) in Fig.~\ref{fig:pr}(b), we can see that the PR relative to the $z$-basis (blue line) is slightly, but systematically, larger than the pair model's prediction (dashed green line). Consistent with this observation, we see that the PR relative to the pair basis (red line), while being much smaller, is still not constant across system sizes. We conclude that the pair states offer a good first order approximation of the true eigenstates, but there are higher order resonances that lead to further hybridization for some states. The exponent of the remaining dependence on system size is close to $N/4$, which hints at effects stemming from interactions between pairs. \section{Conclusions} We analyzed a disordered Heisenberg XXZ spin model with power-law interactions and positional disorder, which is naturally realized by many quantum simulation platforms. Among these, cold Rydberg gases allow for easy tuning of the disorder via the sample's density due to the Rydberg blockade. By using standard MBL indicators, we showed numerically that this system undergoes a localization crossover, which we interpreted in terms of a simple physical model derived using an SDRG ansatz. This model, consisting of an effective Ising model of strongly interacting pairs of spins, was verified by considering the participation ratio of the eigenstates with respect to the conjectured basis, which is drastically reduced compared to the participation ratio relative to the $z$-basis.
Still, there was a weak dependence on system size left, which means there are higher order corrections to our model. Nonetheless, we also showed that this simple model can already predict the entanglement entropy of the system nearly perfectly. With this model at hand, we can now make predictions for large systems which may be tested in quantum simulation experiments. Of course, one of the most interesting questions will be whether the location of the crossover shifts towards stronger disorder for large systems, indicating a transition at infinite disorder strength in the thermodynamic limit. For this purpose the easy tunability of the disorder is a great advantage, as both sides of the crossover can be probed on the same platform by changing the system parameters. Note that the pair model cannot be used to predict the crossover itself as it essentially requires the assumption that one can find strongly interacting pairs, which is only justified in the strongly disordered regime. Recent arguments for the absence of localization postulate the existence of rare thermal subregions within the system \cite{deroeckStabilityInstabilityDelocalization2017,luitzHowSmallQuantum2017,morningstarAvalanchesManybodyResonances2022,selsDynamicalObstructionLocalization2021,selsThermalizationDiluteImpurities2022,selsMarkovianBathsQuantum2021,thieryManyBodyDelocalizationQuantum2018,ponteThermalInclusionsHow2017,pandeyAdiabaticEigenstateDeformations2020}. This would of course break the base assumption of the pair model. A possible direction for future research would be to extend the model to include not only pairs but also larger clusters, which would require one to track all the kinds of interactions between clusters of different sizes. Interestingly, the dimensionality of the system does not directly influence the pair model.
As long as the couplings are sufficiently disordered, such that pairs can be defined, it will be a good approximation. Thus it suffices to study how the distribution of couplings changes with respect to the dimensionality $d$ of the space and the coupling power $\alpha$. Similar to resonance counting arguments \cite{yaoManyBodyLocalizationDipolar2014}, we conjecture the requirement $d<\alpha$ for the pair model to be applicable. Hence, we expect our results, while acquired in $d=1$, to generalize well to $d>1$. \section{Acknowledgements} We thank Dima Abanin for his stimulating input. For numerical simulations we used the Julia programming language \cite{bezansonJuliaFreshApproach2017}. The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no.\ INST 40/575-1 FUGG (JUSTUS 2 cluster). This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster) and under SFB 1225 ISOQUANT - 27381111.
\section{Introduction} \label{Introduction} Kidney paired donation programs (KPDPs) across the globe have facilitated living donor kidney transplants for patients experiencing renal failure who have a willing yet incompatible or sub-optimal donor. A patient in need of a kidney transplant registers along with their incompatible donor (paired donor) in a KPDP\ as a pair, and that patient receives a compatible kidney either from the paired donor in another pair (whose patient in turn receives a kidney from another donor) or from a singleton donor who does not expect a kidney in return (a.k.a.\ non-directed donor). The transplants are then made possible through the exchange of paired donors between patient-donor pairs. Since first discussed by \cite{Rapaport1986}, and put in practice for the first time in South Korea \citep{Park1999}, kidney exchanges performed through KPDPs\ have been introduced in several countries around the world, e.g., the United States \citep{Saidman2006}, the United Kingdom \citep{Manlove2015}, Canada \citep{CaKEPFoundations} and Australia \citep{AustraliaKPD}, and the underlying matching of patients to donors has been the subject of study in multiple disciplines (see, e.g., \citet{Roth2005,Dickerson2016, Dickerson2019,Carvalho2020, Riascos2020}). Despite this attention, kidney exchange still faces challenges from both a practical and a theoretical point of view \citep{Ashlagi2021}. In this work, motivated by the high rate of exchanges that do not proceed to transplant \citep{Bray2015, Dickerson2016, CBS2019}, we provide a robust optimization methodological framework that proposes a set of exchanges for transplant, observes failures in the transplant plan, and then repairs the affected exchanges provided that a recovering rule (i.e., \textit{recourse policy}) is given by KPDP\ operators. 
Kidney exchange, or the kidney exchange problem as it is known in the literature, can be modeled on a compatibility graph, i.e., a digraph whose vertices represent either a pair or a singleton donor and whose arcs indicate that the donor in the starting vertex is blood type and tissue type compatible with the patient in the ending vertex. Arcs can have an associated weight that represents the importance/priority of that transplant. Exchanges then take the form of simple cycles and simple paths (a.k.a.\ chains). Cycles consist of patient-donor pair exchanges only, whereas in a chain, the first donation occurs from a singleton donor to a patient-donor pair and is then followed by a sequence of donations from a paired donor to the patient in the next patient-donor pair. It is standard practice to perform cyclic transplants simultaneously and limit their length due to the logistical implications of arranging operating rooms and surgical teams. Although the size of chains can be unbounded and theoretically infinite \citep{Ashlagi2012, Anderson2015,Ding2018,Dickerson2019} by allowing the donor in the last pair of a chain to become a \textit{bridge donor}, i.e., a paired donor that acts as a singleton donor in a future algorithmic matching, it is also common practice to limit the size of chains \citep{Biro2009, CBS2019, Carvalho2020}, in light of potential fallbacks and the greater opportunity for late-detected incompatibilities \citep{Biro2009, Carvalho2020}. In that case, the donor in the last pair donates to a patient on the deceased-donor waiting list, ``closing up'' a chain. Operational and technological limitations bring uncertainty to the existence of vertices and arcs in the compatibility graph when exchanges go through a final checking process before nephrectomies take place. Permanent or temporary withdrawals can also happen at any stage of the kidney exchange process. 
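To make the cycle and chain structure concrete, the following is a small illustrative sketch (our own scaffolding, not code from any KPDP) that enumerates the feasible exchanges of a toy compatibility digraph by depth-first search, with `K` and `L` playing the roles of the cycle and chain caps discussed above.

```python
def enumerate_exchanges(arcs, pairs, ndds, K, L):
    """Enumerate feasible cycles (at most K arcs, pairs only, stored as
    vertex tuples) and chains (at most L arcs, started by a non-directed
    donor) of the compatibility digraph given by `arcs`."""
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)

    cycles, chains = [], []

    def extend_cycle(path):
        u = path[-1]
        for v in succ.get(u, []):
            if v == path[0] and len(path) <= K:
                cycles.append(tuple(path))        # cycle closed with <= K arcs
            elif v in pairs and v > path[0] and v not in path and len(path) < K:
                extend_cycle(path + [v])          # v > path[0] avoids re-counting rotations

    def extend_chain(path):
        if len(path) >= 2:
            chains.append(tuple(path))            # every prefix is itself a feasible chain
        if len(path) - 1 == L:                    # chain cap reached
            return
        for v in succ.get(path[-1], []):
            if v in pairs and v not in path:
                extend_chain(path + [v])

    for p in sorted(pairs):                       # each cycle starts at its smallest vertex
        extend_cycle([p])
    for n in ndds:                                # chains start at singleton donors
        extend_chain([n])
    return cycles, chains
```

For instance, with arcs $(1,2),(2,1),(2,3),(3,1)$ among pairs and an arc $(0,1)$ from a singleton donor $0$, this sketch finds the cycles $(1,2)$ and $(1,2,3)$ and the chains $(0,1)$ and $(0,1,2)$ for $K=3$ and $L=2$.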
The availability of a patient or donor, just like the actual compatibility of an exchange, is confirmed only when a set of transplants has already been selected. This phenomenon is explained by changes in, or a lack of accuracy of, the ``believed'' information KPDP\ operators rely on when building the compatibility graph. There are multiple reasons a pair/singleton donor selected for transplant may not be available, e.g., match offer rejection, already transplanted out of the KPDP, illness, pregnancy, reneging, etc. Thus, even if some of this information is captured prior to the matching, there is still a chance of a subsequent fallback. Additionally, tissue type compatibility is not known with certainty when the compatibility graph is built, unlike blood type compatibility. Tissue type compatibility is based on the result of a \textit{virtual crossmatch test}, which typically has lower accuracy compared to a \textit{physical crossmatch test}. Both tests try to determine if there are antibodies of importance in a patient that can lead to the rejection of a potential donor's kidney. Physical crossmatch tests are challenging to perform, making them impractical and unlikely to be performed in real life between all patients and donors \citep{Carvalho2020}. Thus, in a first stage, a ``believed'' compatibility graph is built according to the results of the virtual crossmatch test and, once a set of transplants has been proposed, those exchanges undergo a physical crossmatch test to confirm or rule out the viability of the transplants. After confirming infeasible transplants and depending on KPDPs' regulations, KPDP\ operators may attempt to repair the originally planned cycles and chains impacted by the non-existence of a vertex or an arc. {We refer to these impacted cycles and chains as \textit{failed} cycles and chains. 
While a cycle fails completely upon the failure of any of its elements (vertices or arcs), a chain is cut short at the pair preceding the first failed transplant.} Failures in the graph, caused by the disappearance of a vertex or arc, have a significant impact on the number of exchanges that actually proceed to transplant \citep{Dickerson2019,CBS2019} and can even drive KPDP\ regulations \citep{Carvalho2020}. For instance, \cite{Dickerson2019} reported that for selected transplants from the UNOS program between 2010--2012, 93\% did not proceed to transplant. Of those non-successful transplants, 44\% had a failure reason (e.g., failed physical crossmatch result or patient/donor drop-out), which in turn caused the cancellation of the other 49\% of non-successful transplants. In Canada, between 2009--2018, 62\% of cycles and chains with six transplants failed, among which only 10\% could be repaired. Half the cycles and chains with three or fewer transplants were successful, and approximately 30\% of the total could not proceed to transplant \citep{CBS2019}. The set of transplants that is repaired by KPDPs\ is planned without accounting for subsequent failures and thus a sub-optimal outcome in the number of successful transplants is obtained. A large body of work has been concerned with maximizing the expectation of the proposed transplants \citep{Awasti2009, Dickerson2014, Dickerson2019, Klimentova2016, Smeulders2022}. From a practical perspective, maximizing expectation could increase the number of planned exchanges that become actual transplants. However, an underlying concern is that such an objective could benefit patients who are likely to be compatible with multiple donors at the risk of disadvantaging highly-sensitized patients, i.e., patients for whom few compatible donors are available and whose associated exchanges tend to fail at a higher rate compared to non-sensitized patients. 
Furthermore, real data is limited and there is not yet enough understanding of how to characterize the dynamics between patients and donors to derive a probability distribution of failures that could be generalized to most KPDPs. With these considerations in mind, we model failure through an uncertainty set that relies on neither patient sensitization levels nor probabilistic knowledge, and aim to find a set of transplants that allows the largest recovery under the worst-case failure scenario in the uncertainty set. We therefore develop a two-stage robust optimization (RO) approach to the kidney exchange problem wherein (1) the first stage determines a kidney matching solution according to the original compatibility graph, and then (2) the second stage repairs the solution after observing transplant cancellations. We extend the current state-of-the-art RO methodologies \citep{Carvalho2020} by considering that the failure rates of vertices and arcs are non-homogeneous---since failure reasons, such as a late-detected incompatibility, seem to be independent of patient/donor fallback and vice versa---and also by considering the impact of scarce match possibilities for highly-sensitized patients. The contributions of this work are as follows: \begin{enumerate} \item We study a two-stage RO framework with non-homogeneous failure between vertices and arcs. \item We present a novel general solution framework for any recourse policy whose recourse solution set is finite upon selection of a first-stage set of transplants. \item We introduce for the first time two feasibility-seeking reformulations of the second stage, as opposed to optimality-based formulations \citep{Carvalho2020, Blom2021}. The number of decision variables grows linearly with the number of vertices and arcs. This greater scalability, however, comes at the price of the lack of a lower bound. 
We derive dominating scenarios and explore several second-stage solution algorithms to overcome the drawbacks and exploit the advantages. \item We compare our framework to state-of-the-art algorithms and find significant computational and solution quality improvements. \end{enumerate} The remainder of the paper is organized as follows. Section \ref{LitReview} presents a collection of related works. Section \ref{Preliminaries} establishes the problem we address. Sections \ref{RobustModels} and \ref{SecondStage} present the first- and second-stage formulations, respectively. Section \ref{sec:HSAs} presents the full algorithmic framework. Section \ref{Experiments} shows computational results. Lastly, Section \ref{sec:Conclusion} draws some conclusions and states a path for future work. \section{Related Work} \label{LitReview} \cite{Abraham2007} and \cite{Roth2007} introduced the first KPDP\ mixed-integer programming (MIP) formulations to maximize the number/weighted sum of exchanges, namely, the well-known \textit{edge formulation} and \textit{cycle formulation}. The edge formulation uses arcs in the input graph to index decision variables, whereas the cycle formulation, which was initially proposed for cycles only, has a decision variable for every feasible cycle and chain in the input graph. Although both formulations are of exponential size, either in the number of constraints (edge formulation) or in the number of decision variables (cycle formulation), the cycle formulation, along with a subsequent formulation proposed by \cite{Dickerson2016}, is the MIP formulation with the strongest linear relaxation. Due to its strength and natural adaptability, multiple works have designed branch-and-price algorithms employing the cycle formulation. The branch-and-price algorithm proposed in \citep{Abraham2007} was effective for cycles of size up to three. \cite{Lam2020} solved the problem for long cycles. 
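For reference, the cycle formulation just described can be stated compactly as follows, where $\mathcal{C}$ denotes the set of feasible cycles and chains, $w_c$ a generic weight, and $V(c)$ the vertices of exchange $c$; this is the standard form of the model, restated in generic notation rather than quoted verbatim from the cited papers:

```latex
\begin{align*}
\max\quad & \sum_{c \in \mathcal{C}} w_{c}\, x_{c} \\
\text{s.t.}\quad & \sum_{c \in \mathcal{C} \,:\, u \in V(c)} x_{c} \le 1 && \forall\, u \in V, \\
& x_{c} \in \{0,1\} && \forall\, c \in \mathcal{C},
\end{align*}
```

so that each vertex participates in at most one selected exchange; the exponentially many variables $x_c$ are what branch-and-price is designed to handle.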
\cite{Riascos2020} used decision diagrams to solve the pricing problems; for the first time, their branch-and-price addresses both long cycles and long chains, and it is able to scale successfully even for the largest instances in the PrefLib library \citep{Mattei2013}. More recently, \cite{Omer2022} built on the work in \citep{Riascos2020} by implementing a branch-and-price algorithm able to solve remarkably large instances (10000 pairs and 1000 altruists). Another trend has focused on new arc-based formulations (e.g., \cite{Constantino2013,Dickerson2016}) and arc-and-cycle-based formulations (e.g., \cite{Anderson2015, Dickerson2016}). Among these two approaches, arc-and-cycle-based formulations seem to outperform arc-based formulations \citep{Dickerson2016}, especially when allowing up to three exchanges in a cycle. The previously discussed studies do not consider uncertainty in the proposed exchanges. However, the percentage of planned transplants that end up cancelled suggests a need to plan for uncertainty. There are two sources of uncertainty that have been studied in the literature: weight accuracy (e.g., \cite{Duncan2019}) and vertex/arc existence (e.g., \cite{Dickerson2016, Klimentova2016, Duncan2019, Carvalho2020, Smeulders2022}). Weight accuracy uncertainty considers that the social benefit (weight) associated with an exchange can vary, e.g., due to changes in a patient's health condition or from the result of multiple opinions of policy makers on the priority that should be given to a patient with respect to others \citep{Duncan2019}. Uncertainty in the existence of a vertex/arc, e.g., whether or not a patient or donor leaves the KPDP\ or compatibility between a patient and donor changes, has received greater attention. 
There are three main approaches in the literature when considering vertex or arc existence as the source of uncertainty: (1) a maximum expected value approach; (2) an identification of exchanges for which a physical crossmatch test should be performed to maximize the expected number of realized transplants; and (3) a maximization of the number of transplants under the worst-case disruption of vertices and arcs. These are explained in detail in what follows. The maximum expected value approach is the approach most investigated in the literature. It is concerned with finding the set of transplants with maximum expected value, i.e., a set of transplants that is most likely to yield the maximum number or maximum weighted sum of exchanges given some vertex/arc failure probabilities. This approach has mostly been modeled as a deterministic KEP, where the objective function approximates the expected value of a matching using the given probabilities as objective coefficient multipliers of deterministic decisions. \cite{Awasti2009} considered the failure of vertices in an online setting of the cycle-only version for cycles with at most three exchanges. The authors generate sample trajectories on the arrival of patients/donors and patient survival, then use a REGRETS algorithm as a general framework to approximate the collection of cycles with maximum expectation. \cite{Dickerson2012} proposed a heuristic method to learn the ``potential'' of structural elements (e.g., a vertex), which quantifies the future expected usefulness of that element in a changing graph with new patient/donor arrivals and departures. \cite{Dickerson2013} considered arc failure probabilities and found a matching with maximum expected value, but solution repairs are not considered in case of failure. Following the same motivation, an extension of this work is found in \citep{Dickerson2019}. 
\cite{Klimentova2016} studied the problem of computing the expected number of transplants for the cycle-only version while considering \textit{internal} recourse and \textit{subset} recourse to recover a solution in case of vertex or arc failure. Internal recourse, also known as \textit{back-arcs recourse} (e.g., \cite{Carvalho2020}), allows surviving pairs to match among themselves, whereas subset recourse allows a wider subset of vertices to participate in the repaired solution. To compute the expectation, an enumeration tree is used for all possible failure patterns in a cycle and its extended subset, consisting of the additional vertices (for the subset recourse only) with which the pairs in the original cycle can form feasible cycles. To reasonably limit the size of the tree, the subset recourse is limited to a small subset of extra vertices, and the internal recourse seemed to scale for short cycles only. \cite{Alvelos2019} proposed to compute the expected value for the cycle-only version while considering internal recourse through a branch-and-price algorithm, finding that the overall run time grew rapidly with the size of the cycles. To identify exchanges where a physical crossmatch test should be performed, \cite{Blum2013} modeled the KEP on an undirected graph representing pairwise exchanges only. They proposed to perform two physical crossmatch tests per patient-donor pair---one for every arc in a cycle of size two---before exchanges are selected, with the goal of maximizing the expected number of transplants. They showed that their algorithm yields near-optimal solutions in polynomial time. Subsequent works \citep{Assadi2019,Blum2020} evaluated adaptive and non-adaptive policies to query edges in the graph. 
In the same spirit, but now for the general kidney exchange problem (with directed cycles and chains), \cite{Smeulders2022} formulated the maximization of the expected number of transplants as a two-stage stochastic integer programming problem considering a limited budget on the number of arcs that can be tested in the first stage. Despite the different algorithmic approaches that were proposed, scalability is still challenging. In addressing worst-case vertex/arc disruption, \cite{Duncan2019} found robust solutions with no recourse, considering a failure budget on the number of arcs that can fail in the graph. \cite{Carvalho2020} proposed a two-stage robust optimization model that allowed recovery of failed solutions through the back-arcs recourse and the full-recourse policies. The latter can be seen as a subset recourse policy (e.g., \cite{Klimentova2016}), in which all vertices that were not selected in the matching can be included in the repaired solution. Unlike our work, vertex and arc failures are there treated as homogeneous, i.e., both elements can fail with the same probability. Since under homogeneous failure there is a worst-case scenario in which all failures are vertex failures, the recourse policies are evaluated under vertex failure only. The back-arcs recourse policy only scales for instances with 20 vertices, whereas the full-recourse policy scales for instances with up to 50 vertices. \citet{Blom2021} examined the general robust model for the full-recourse policy studied in \citet{Carvalho2020} and showed its structure to be a defender-attacker-defender model. Two Benders-type approaches are proposed and tested using the same instances as \citet{Carvalho2020}. The Benders-type approaches showed improved performance over the branch-and-bound proposed by \citet{Carvalho2020}. This approach, however, is limited to homogeneous failure for the full-recourse policy. 
In this work, we allow for different failure rates between vertices and arcs, and present a solution scheme that can address recourse policies whose recourse solution set corresponds to a subset of the feasible cycles and chains in the compatibility graph, e.g., the full recourse, the back-arcs recourse, and a new recourse policy introduced later on. Our solution method does require a robust MIP formulation adapted to a specific policy that can be solved iteratively as new failure scenarios are added. The second-stage problem is decomposed into a master problem and a subproblem. The master problem is formulated as the same feasibility problem regardless of the policy, the policy being implicit in the constraint set, whereas the subproblem (i.e., recourse problem) corresponds to a deterministic KEP where only non-failed cycles and chains contribute to the robust objective. \section{Preliminaries} \label{Preliminaries} In this section, we describe the two-stage robust model we study. We start by formally describing the first-stage problem. In particular, we define the compatibility graph and the feasible set for the first-stage decisions. We proceed in a similar way for the second-stage problem and then introduce the two-stage robust problem addressed in this paper. Lastly, we define the uncertainty set and the recourse policies studied in this work. \paragraph{First-stage compatibility graph.} The KEP can be defined on a directed graph $\graph = (\vertexSet, \arcSet)$, whose vertex set $V := P \cup N$ represents the set of patient-donor pairs, $P$, and the set of singleton donors, $N$. From this point onward, we will refer to patient-donor pairs simply as \emph{pairs}. The arc set $A \subseteq V \times P$ contains arc $(\vertexa,\vertexb)$ if and only if the donor in vertex $u \in V$ is compatible with the patient in vertex $v \in P$. A matching of donors and patients in the KEP can take the form of simple cycles and simple chains (i.e., simple paths in the digraph). 
A cycle is feasible if it has no more than $K$ arcs, whereas a chain is feasible if it has no more than $L$ arcs ($L + 1$ vertices) and starts with a singleton donor. The set of feasible cycles and chains is denoted by $\mathcal{C}_{\cycleCap}$ and $\mathcal{C}_{\chainCap}$, respectively. Furthermore, let $V(\cdot)$ and $A(\cdot)$ be the set of vertices and arcs in $(\cdot)$. \paragraph{Feasible set of first-stage decisions.} A feasible solution to the KEP corresponds to a collection of vertex-disjoint feasible cycles and chains, referred to as a \emph{matching}\footnote{Note that since each vertex in the KEP compatibility graph corresponds to a patient-donor pair or a singleton donor, a KEP matching---which is referred to as a matching in the literature and in this paper---is not actually a matching of the digraph, rather a collection of simple cycles and paths of the digraph which leads to an actual underlying 1-1 matching of selected donors and patients.}, i.e., $M \subseteq \mathcal{C}_{\cycleCap} \cup \mathcal{C}_{\chainCap}$ such that $V(c) \cap V(c^{\prime}) = \emptyset,$ for all $c, c^{\prime} \in M$ with $c \ne c^{\prime}$. We let $\matchSet_D$ denote the set of all KEP matchings in graph $D$. Also, we define $$ \mathcal{X} := \{\bm{x_{\matchSet}}: M \in \matchSet_D \}, $$ as the set of all binary vectors representing the selection of a feasible set of transplants, where $\bm{x_{\matchSet}}$ is the characteristic vector of matching $M$ in terms of the cycles/chains sets $\mathcal{C}_{\cycleCap} \cup \mathcal{C}_{\chainCap}$. That is, $\bm{x_{\matchSet}} \in \{0,1\}^{\mid \mathcal{C}_{\cycleCap} \cup \mathcal{C}_{\chainCap} \mid}$ with $x_{M,c} = 1$ if and only if $c \in M$, meaning that a patient in a pair obtains a transplant if it is the start or terminal vertex of an arc $a \in A(c)$ in some cycle/chain $c$ selected in matching $M$. 
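The vertex-disjointness condition defining $\matchSet_D$ and the characteristic vector $\bm{x_{\matchSet}}$ can be illustrated with a small sketch (our own toy representation: exchanges as tuples of vertices):

```python
def is_matching(selected, all_exchanges):
    """Check that `selected` is a KEP matching: every selected exchange is
    a feasible cycle/chain and no vertex appears in two of them."""
    seen = set()
    for c in selected:
        if c not in all_exchanges:
            return False                  # not a feasible cycle/chain
        if seen & set(c):
            return False                  # vertex reused: not vertex-disjoint
        seen |= set(c)
    return True

def characteristic_vector(selected, all_exchanges):
    """Binary vector indexed by the ordered list of feasible cycles/chains,
    with a 1 exactly for the exchanges picked by the matching."""
    chosen = set(selected)
    return [1 if c in chosen else 0 for c in all_exchanges]
```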
\paragraph{Second-stage compatibility graph.} Once a failure scenario $\bm{\oneCeS} \in \Gamma$ affecting solution $\varFirstPlain \in \mathcal{X}$ is observed, a digraph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ for the second-stage problem is such that its vertices and arcs do not fail under scenario $\bm{\oneCeS}$ and whose set of feasible cycles $\cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \subseteq \mathcal{C}_{\cycleCap}$ and set of feasible chains $\chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \subseteq \mathcal{C}_{\chainCap}$ are (i) \textit{allowed} under recourse policy $\pi \in \Pi$ and (ii) have \textit{at least one pair} in $\varFirstPlain$. We give details on the uncertainty set $\Gamma$ and the types of recourse policies $\Pi$ in Sections \ref{subsec:uncertaintyset} and \ref{sec:RecoursePolicies}, respectively. \paragraph{Feasible set of second-stage decisions.} A solution to the second stage is referred to as a \textit{recourse solution} in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ under some scenario $\bm{\oneCeS} \in \Gamma$, leading to an alternative matching where pairs from the first-stage solution $\varFirstPlain \in \mathcal{X}$ are re-arranged into non-failed cycles and chains, among those allowed under policy $\pi \in \Pi$. We can now define $\matchSet^{\policy}(\varFirst, \oneCe):= \{M \subseteq \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \mid V(c) \cap V(c^{\prime}) = \emptyset \text{ for all } c, c^{\prime} \in M \text{; } c \neq c^{\prime}\}$ as the set of allowed recovering matchings under policy $\pi$ such that every cycle/chain in $\matchSet^{\policy}(\varFirst, \oneCe)$ contains at least one pair in $\varFirstPlain$. 
Thus, let $$ \setRecoSym^{\policy}(\varFirst, \oneCe):= \{\bm{y_{\matchSet}}:M \in \matchSet^{\policy}(\varFirst, \oneCe) \} $$ be the set of all binary vectors representing the selection of a feasible set of transplants with non-failed elements (vertices/arcs), under scenario $\bm{\oneCeS} \in \Gamma$ and policy $\pi \in \Pi$, that contain at least one pair in $\varFirstPlain$. Likewise, $\bm{y_{\matchSet}}$ is the second-stage counterpart of the characteristic vector $\bm{x_{\matchSet}}$ in terms of the cycle and chain sets $\cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$ and $\chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$, respectively. \paragraph{Two-stage RO problem.} A general two-stage RO problem for the KEP can then be defined as follows: \begin{align} \label{ROmodel0} \max_{\varFirstPlain \in \mathcal{X}} \min_{\bm{\oneCeS} \in \Gamma} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} f(\varFirstPlain,\mathbf{y})& \end{align} \noindent i.e., a set of transplants given by solution $\varFirstPlain \in \mathcal{X}$ is selected in the first stage. Then, the uncertainty vector $\bm{\oneCeS} \in \Gamma$ is observed and a recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$ is found to repair $\varFirstPlain$ according to recourse policy $\pi \in \Pi$. The second stage, established by the min-max problem, finds a recourse solution by solving the recourse problem (the third optimization problem), whose objective value maximizes $f(\varFirstPlain,\mathbf{y})$ under failure scenario $\bm{\oneCeS} \in \Gamma$, the scenario being chosen so that this value is the lowest among all failure scenarios. The recourse objective function, $f(\varFirstPlain,\mathbf{y})$, assigns weights to the cycles and chains of a recovered matching, associated with a recourse solution $\mathbf{y}$, based on their relation to $\varFirstPlain$. 
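The recourse weight defined next simply counts first-stage pairs that reappear in a recovered exchange; as a minimal illustration (our own toy representation: matchings as lists of vertex tuples, `pairs` standing for the set $P$):

```python
def recourse_weight(exchange, first_stage_matching, pairs):
    """w_c(x): number of pairs matched by the first-stage solution that
    reappear in the recovered cycle/chain `exchange`; singleton donors
    are excluded by intersecting with the pair set P."""
    matched = set()
    for c in first_stage_matching:
        matched |= set(c)             # vertices covered by the first stage
    return len(set(exchange) & matched & set(pairs))
```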
Thus, we define $f(\varFirstPlain,\mathbf{y})$ as $\mathbf{w}(\varFirst)^{\top}\mathbf{y} = \sum_{c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})} \mathbf{w}_{c}(\varFirstPlain) \mathbf{y}_{c}$ in Model \ref{ROmodel0}, where $\mathbf{w}_{c}(\varFirstPlain) = \lvert V(c) \cap V(\varFirstPlain) \cap P \rvert$ is the weight of a cycle/chain $c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$, corresponding to the \textit{number of pairs} that, having been matched in the first stage by solution $\varFirstPlain \in \mathcal{X}$, can also be matched in the second stage by recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$ after failures are observed. As a result, the weight of every cycle and chain selected by $\mathbf{y}$ corresponds to the number of pairs from the first-stage solution $\varFirstPlain$ that are also present in that cycle/chain. An optimal KEP robust solution is then a feasible set of transplants that, among all first-stage solutions $\varFirstPlain \in \mathcal{X}$, maximizes the number of first-stage pairs its best recovery plan can match in the second stage under the worst-case scenario. Thus, the two-stage RO problem we study for the KEP can be defined as follows: \begin{align} \label{ROmodel} \max_{\varFirstPlain \in \mathcal{X}} \min_{\bm{\oneCeS} \in \Gamma} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\varReco& \end{align} \subsection{Uncertainty set} \label{subsec:uncertaintyset} Failures in a planned matching are observed after a final checking process. Failure reasons can include illness, late-detected incompatibilities, match offer rejection by a recipient, patient/donor dropout, etc. These failure reasons can then lead to the removal of the affected vertices and arcs from the first-stage compatibility graph. 
The failure of a vertex/arc causes the failure of the entire cycle that element belongs to and the shortening of a chain right at the last vertex before the first failure. In the literature, the uncertainty set has been defined as a polyhedron where vertices and arcs in the compatibility graph can fail at the same rate (homogeneously), and the total failures for both vertices and arcs are bounded by a predefined integer value \citep{Blom2021, Carvalho2020}. Under \textit{homogeneous failure}, there is no need to consider arc failures since there exists a worst-case scenario where all failures are vertex failures \citep{Carvalho2020}. We consider \textit{non-homogeneous failure} by allowing vertices and arcs to fail at different rates; as such, we define two failure budgets, one for vertices and another for arcs. In other words, we assume that there exist two unknown probability distributions causing vertices and arcs to fail independently from one another, rather than assuming that both vertices and arcs follow the same failure probability distribution. This approach, however, can still model homogeneous failure, since it suffices to consider only vertex failures. After having selected a set of transplants but before the uncertainty is revealed, there exists a transitory compatibility graph $D^{\pi}(\varFirstPlain) = (V^{\pi}_{\varFirstPlain}, A^{\pi}_{\varFirstPlain})$ with the same properties as the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS}) = (V^{\pi}_{\varFirstPlain, \bm{\oneCeS}}, A^{\pi}_{\varFirstPlain, \bm{\oneCeS}})$, except that no failures are yet observed and thus $\lvert V^{\pi}_{\varFirstPlain} \rvert \ge \lvert V^{\pi}_{\varFirstPlain, \bm{\oneCeS}} \rvert$ and $\lvert A^{\pi}_{\varFirstPlain} \rvert \ge \lvert A^{\pi}_{\varFirstPlain, \bm{\oneCeS}} \rvert$. 
Thus, we define an uncertainty set $\bm{\Gamma}$ in terms of all the uncertainty sets $\bm{\Gamma}(\varFirstPlain)$ leading to a second-stage compatibility graph as follows: \begin{subequations} \label{UDef} \begin{align} \bm{\Gamma}(\varFirstPlain) &:= \left(\bm{\bigsCe^{\text{v}}}(\varFirstPlain), \bm{\bigsCe^{\text{a}}}(\varFirstPlain)\right) \text{ where,}\\ \bm{\bigsCe^{\text{v}}}(\varFirstPlain) &:= \{\bm{\gamma^{\text{v}}} \in \{0,1\}^{\mid V \mid} \mid \lvert V^{\pi}_{\varFirstPlain} \rvert - \lvert V^{\pi}_{\varFirstPlain, \bm{\oneCeS}} \rvert \le \sum_{u \in V} \gamma^{\text{v}}_{u} \le r^{\text{v}}\}\\ \bm{\bigsCe^{\text{a}}}(\varFirstPlain) &:= \{\bm{\gamma^{\text{a}}} \in \{0,1\}^{\mid A \mid} \mid \lvert A^{\pi}_{\varFirstPlain} \rvert - \lvert A^{\pi}_{\varFirstPlain, \bm{\oneCeS}} \rvert \le \sum_{(\vertexa,\vertexb) \in A} \gamma^{\text{a}}_{\vertexa\vertexb} \le r^{\text{a}} \}\\ \bm{\Gamma} &:= \bigcup\limits_{\varFirstPlain \in \mathcal{X}} \bm{\Gamma}(\varFirstPlain) \end{align} \end{subequations} A failure scenario $\bm{\oneCeS} = (\gamma^{\text{v}}, \gamma^{\text{a}})$ is represented by the binary vectors $\gamma^{\text{v}}$ and $\gamma^{\text{a}}$. A vertex $u \in V$ and an arc $(\vertexa,\vertexb) \in A$ from the first-stage compatibility graph fail under a realized scenario if $\gamma^{\text{v}}_{u} = 1$ and $\gamma^{\text{a}}_{\vertexa\vertexb} = 1$, respectively. The total numbers of vertex and arc failures in the first-stage compatibility graph are controlled by the parameters $r^{\text{v}}$ and $r^{\text{a}}$, respectively. Therefore, the number of vertex failures in the transitory compatibility graph $D^{\pi}(\varFirstPlain)$ leading to a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ cannot exceed $r^{\text{v}}$. Likewise, the number of arc failures in $D^{\pi}(\varFirstPlain)$ cannot exceed $r^{\text{a}}$. 
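The effect of a realized scenario on the graph, together with the cycle/chain survival rules stated earlier, can be sketched as follows (our own toy representation: $\gamma^{\text{v}}$ and $\gamma^{\text{a}}$ as dictionaries of failure indicators):

```python
def scenario_within_budget(gamma_v, gamma_a, r_v, r_a):
    """Budget constraints of the uncertainty set: at most r_v vertex
    failures and r_a arc failures."""
    return sum(gamma_v.values()) <= r_v and sum(gamma_a.values()) <= r_a

def apply_scenario(vertices, arcs, gamma_v, gamma_a):
    """Remove failed vertices and arcs from the compatibility graph;
    arcs incident to a failed vertex disappear as well."""
    surviving_v = {u for u in vertices if not gamma_v.get(u, 0)}
    surviving_a = [(u, v) for (u, v) in arcs
                   if u in surviving_v and v in surviving_v
                   and not gamma_a.get((u, v), 0)]
    return surviving_v, surviving_a

def surviving_chain_prefix(chain, surviving_v, surviving_a):
    """A chain is cut right before its first failed vertex or arc; a
    cycle, in contrast, fails entirely if any of its elements fails."""
    if chain[0] not in surviving_v:
        return ()
    prefix = [chain[0]]
    for u, v in zip(chain, chain[1:]):
        if v not in surviving_v or (u, v) not in surviving_a:
            break
        prefix.append(v)
    return tuple(prefix)
```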
Thus, the uncertainty set $\Gamma$ is the union over all failure scenarios leading to a second-stage compatibility graph once a set of transplants has been proposed in the first stage. Note that this uncertainty set definition only distinguishes failures by element type (vertex or arc), which gives equal chances of failure, and thus of success, to both non-sensitized and sensitized patients. \subsection{Recourse policies} \label{sec:RecoursePolicies} An important consideration made by KPDPs is the guideline according to which a selected matching is allowed to be repaired. Although re-optimizing the deterministic KEP with non-failed vertices/arcs is an alternative \citep{Dutch2005}, some KPDPs opt for recovery strategies when failures in the matching given by the deterministic model are observed \citep{CBS2019, Manlove2015}. Thus, it is reasonable to use those strategies as recourse policies when uncertainty is considered. We consider the \textit{full-recourse policy} studied in \citep{Blom2021, Carvalho2020} and introduce a natural extension of this policy, which we refer to as the \textit{first-stage-only recourse policy}. \subsubsection{Full recourse} Under the full-recourse policy, pairs that were selected in the first stage but belong to failed components are allowed to be re-arranged in the second stage within non-failed cycles and chains that may involve any other vertex (pair or singleton donor), regardless of whether that vertex was selected in the first stage or not. Figure \ref{fig:FullRecourse} shows an example of the full-recourse policy. The first-stage solution, depicted with bold arcs, has a total weight of eight, since there are eight exchanges. Suppose there is a scenario in which $\gamma^{\text{v}}_{2} = 1$ and $\gamma^{\text{a}}_{56} = 1$. 
Assuming that $K = L = 4$, the best recovery plan under this scenario, depicted by the recourse solution with shaded arcs, is to re-arrange vertices 3, 4 and 6 into a new cycle and to include vertices 1 and 5 in a chain started by singleton donor 8, which was not selected in the first stage. Alternatively, vertices 1 and 5 could have been selected along with vertex 7 to form a cycle. In both cases, the recourse solution involves only five pairs from the first stage. \input{Figures/RO_FullRecourse} \subsubsection{First-stage-only recourse} We refer to the first-stage-only recourse as a policy in which only vertices selected in the first stage can be used to repair a first-stage solution, i.e., the new non-failed cycles and chains selected in the second stage must include vertices from that first-stage solution only. It is easy to see that the recourse solution set of the first-stage-only policy is a subset of that of the full recourse. Although more conservative, the first-stage-only policy can be preferred by KPDPs: under full recourse, the second stage can select vertices that were not selected in the first stage, and thus have never been checked by KPDP operators, adding uncertainty about the actual availability of those vertices in a recourse solution. The back-arcs recourse policy studied in \citep{Carvalho2020} also repairs a first-stage solution using only pairs selected in that solution, but it allows a cycle (chain) to be recovered only if other cycles (chains) are nested within it, making the back-arcs recourse more conservative than the first-stage-only recourse. In Figure \ref{fig:FullRecourse}, the recourse solution under the first-stage-only recourse would involve vertices 3, 4, and 6 in a cycle just as in the full-recourse policy, but chains started by vertex 8 and the cycle that vertex 7 belongs to would not be within the feasible recourse set.
Thus, the recourse objective value corresponds to only three pairs from the first stage. \section{Robust model and first stage} \label{RobustModels} Observing that for a finite (yet possibly large) uncertainty set the inner optimization problem (the min problem) can be removed as an optimization level and replaced by a set of scenario constraints, as in \citep{Carvalho2020}, Model \eqref{ROmodel} can be expressed as the following robust MIP formulation: \vspace{-0.2cm} \begin{subequations} \label{SingleROmodel0} \begin{align} \bm{P}(\policy, \bigsCe): \max_{\varFirstPlain, Z_{P}} \quad& Z_{P}\\ & Z_{P} \le \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\varReco & \bm{\oneCeS} \in \Gamma \label{eq:scenarios}\\ &\varFirstPlain \in \mathcal{X} \end{align} \end{subequations} Observe that the optimization problem in the constraint set of Model \eqref{SingleROmodel0} is the recourse problem, which, given a first-stage solution $\varFirstPlain \in \mathcal{X}$ and policy $\pi \in \Pi$, finds a recourse solution in the set of binary vectors $\setRecoSym^{\policy}(\varFirst, \oneCe)$ whose cycles and chains under scenario $\bm{\oneCeS} \in \Gamma$ have the largest number of re-arranged pairs from the first stage. Having the recourse problem in the constraint set is challenging, since it must be solved for a fixed decision $\varFirstPlain$ once for every failure scenario $\bm{\oneCeS} \in \Gamma$, and the number of scenarios can be prohibitively large. To find a more tractable form, observe that the outer and inner objectives in Model \eqref{SingleROmodel0} are both maximizations. We can then define a binary decision variable $\mathbf{y}^{\bm{\oneCeS}}$ for every scenario $\bm{\oneCeS} \in \Gamma$ whose feasible space corresponds to a recourse solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)$.
Thus, in an equivalent model the inner optimization problem is removed and the set $\setRecoSym^{\policy}(\varFirst, \oneCe)$ is included in the constraint set for every failure scenario, as follows: \vspace{-0.2cm} \begin{subequations} \label{SingleROmodel} \begin{align} \bm{P}(\policy, \bigsCe)= \max_{\varFirstPlain, Z_{P}} \quad& Z_{P}\\ & Z_{P} \le \weightRO\varReco^{\oneCe} & \bm{\oneCeS} \in \Gamma \label{eq:scenariosBound}\\ &\mathbf{y}^{\bm{\oneCeS}} \in \setRecoSym^{\policy}(\varFirst, \oneCe) & \bm{\oneCeS} \in \Gamma \label{eq:scenariosReco}\\ &\varFirstPlain \in \mathcal{X} \end{align} \end{subequations} \noindent i.e., the first-stage solution and its corresponding recourse solution could be found with a single-level MIP formulation if the set of scenarios $\Gamma$ could be enumerated. Observe that once a scenario $\bm{\oneCeS} \in \Gamma$ is fixed, $\setRecoSym^{\policy}(\varFirst, \oneCe)$ can be expressed by a set of linear constraints. The position-indexed formulation for the robust KEP under full recourse proposed in \citep{Carvalho2020} follows the structure of Model \eqref{SingleROmodel}. We present that formulation and a variation of it for the first-stage-only recourse in the Online Supplement (see \ref{FullFirstSFormulation} and \ref{FirstStageOnlyFormulation}). To overcome the large size of $\Gamma$, we solve the robust KEP in Model \eqref{SingleROmodel} iteratively: at each iteration we search for a failure scenario, if any, whose associated optimal recourse solution $\mathbf{y}^{\bm{\oneCeS}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$ has an objective value (number of re-arranged pairs from the first stage) lower than the current objective value of Model \eqref{SingleROmodel}.
If such a scenario is found, a new set of recourse decision variables along with Constraints \eqref{eq:scenariosBound} and \eqref{eq:scenariosReco} are added to Model \eqref{SingleROmodel}. Algorithm \ref{Alg:ScenarioGeneration} shows the iterative approach used to solve Model \eqref{SingleROmodel}. It starts with an empty restricted set of scenarios $\tilde{\Gamma}$. At Step 1, $\bm{P}(\policy, \tilde{\bigsCe})$ is solved and its incumbent solution $(\tilde{Z}_{P}, \varFirstPlain)$ is retrieved. Then, at Step 2, the second-stage problem is solved to optimality for the incumbent solution $\varFirstPlain$ under policy $\pi$, i.e., a failure scenario $\oneCe^{\star} \in \Gamma$ for which the optimal recourse objective value is lowest among all possible failure scenarios is found. We refer to that scenario as the worst-case scenario for a first-stage solution $\varFirstPlain \in \mathcal{X}$. If that recourse objective value is lower than $\tilde{Z}_{P}$, then $\oneCe^{\star}$ is added to the restricted set of scenarios $\tilde{\Gamma}$ and the corresponding Constraints \eqref{eq:scenariosBound} and \eqref{eq:scenariosReco} are added to Model \eqref{SingleROmodel}, at which point Model \eqref{SingleROmodel} is re-optimized. We refer to Model \eqref{SingleROmodel} with a restricted set of scenarios $\tilde{\Gamma}$ as the first-stage problem or first-stage formulation, to indicate that the search for the optimal robust solution corresponding to some $\varFirstPlain \in \mathcal{X}$ continues. In principle, any failure scenario $\bm{\oneCeS} \in \Gamma$ leading to an optimal recourse solution with an objective value lower than $\tilde{Z}_{P}$ could be added to Model \eqref{SingleROmodel}. However, for benchmark purposes, at Step 2 the worst-case scenario is found for a given first-stage solution $\varFirstPlain$ under policy $\pi$.
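The scenario-generation loop just described can be sketched as follows, treating the first-stage and second-stage solvers as black-box callables; all function names are placeholders standing in for MIP solvers, not part of our implementation:

```python
# Hedged sketch of the scenario-generation loop (Algorithm 1).
# solve_first_stage(scenarios) returns (Z_P~, x) for the restricted
# scenario set; solve_second_stage(x) returns the worst-case recourse
# value and the worst-case scenario for x. Names are illustrative.

def scenario_generation(solve_first_stage, solve_second_stage):
    """Iterate until no scenario beats the incumbent first-stage value."""
    scenarios = []                                # restricted set Gamma~
    while True:
        z_p, x = solve_first_stage(scenarios)     # Step 1
        z_q, worst = solve_second_stage(x)        # Step 2: worst case for x
        if z_q >= z_p:                            # no violating scenario left
            return z_p, x                         # Step 3: optimal robust sol.
        scenarios.append(worst)                   # add scenario constraints

# Toy instance: the first-stage value drops as scenarios accumulate and
# the worst case for any x is 5, so the loop stops when the bounds meet.
first = lambda s: (8 - min(len(s), 3), "x")
second = lambda x: (5, "gamma")
print(scenario_generation(first, second))  # (5, 'x')
```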
If the optimal objective of the second-stage problem equals $\tilde{Z}_{P}$, then an optimal robust solution $(Z_{P}^{\star}, \varFirstPlain^{\star})$ is returned at Step 3. \begin{algorithm}[tbp] \algorithmicrequire { A policy $\pi \in \Pi$ and restricted set of scenarios $\tilde{\Gamma}$, $\tilde{\Gamma} := \emptyset$}\\ \algorithmicensure { Optimal robust solution $(Z_{P}^{\star}, \varFirstPlain^{\star})$}\\ \textbf{Step 1:} Solve $\bm{P}(\policy, \tilde{\bigsCe})$ and obtain optimal solution $(\tilde{Z}_{P}, \varFirstPlain)$ \\ \textbf{Step 2: } If $ \bm{\min}_{\bm{\oneCeS} \in \Gamma}\bm{\max}_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \mathbf{w}(\varFirst)^{\top}\mathbf{y} < \tilde{Z}_{P}$, then \\ \textbf{\textcolor{white}{Step 1: If }} $\tilde{\Gamma} \gets \tilde{\Gamma} \cup \{\bm{\oneCeS}\}$, create recourse decision variable $\mathbf{y}^{\bm{\oneCeS}}$, add Constraints \eqref{eq:scenariosBound}, \eqref{eq:scenariosReco} and go to Step 1\\ \textbf{Step 3: } $Z_{P}^{\star} \gets \tilde{Z}_{P}$; $\varFirstPlain^{\star} \gets \varFirstPlain$; Return $(Z_{P}^{\star}, \varFirstPlain^{\star})$ \caption{Solving the robust KEP in Model \eqref{SingleROmodel}} \label{Alg:ScenarioGeneration} \end{algorithm} Algorithm \ref{Alg:ScenarioGeneration} converges in a finite number of iterations due to the finiteness of $\mathcal{X}$ and $\Gamma$. Due to the large set of scenarios, Step 2 in Algorithm \ref{Alg:ScenarioGeneration} is critical to solving the robust problem efficiently. In subsequent sections, we decompose the second-stage problem into a master problem yielding a failure scenario and a sub-problem (the recourse problem) finding an alternative matching with the maximum number of re-arranged pairs from the first stage. \section{New second-stage decompositions} \label{SecondStage} In this section, we present two decompositions of the second-stage problem.
Both consist of a feasibility-seeking master problem that finds a failure scenario and a sub-problem that finds an alternative matching under that scenario. Each decomposition solves a recourse problem with one of the two objective functions using cycles/chains as decision variables, as proposed in \citep{Blom2021}. Although the optimal solutions provided by both formulations are identical, one recourse solution is found in the second-stage compatibility graph while the other is found in the transitory graph. We use the structure of each solution set to formulate the master problem of each decomposition accordingly. \subsection{A basic feasibility-seeking formulation} \label{sec:basicfeas} In this section we introduce the first of the two new decompositions for the second-stage problem. \subsubsection{The recourse problem} Given a first-stage solution $\varFirstPlain \in \mathcal{X}$, let $\bigsCe(\varFirst) \subseteq \Gamma$ be the set of failure scenarios inducing digraph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ with $\bm{\oneCeS} \in \bigsCe(\varFirst)$, as defined in Section \ref{Preliminaries}. The recourse problem for the basic decomposition consists of finding a matching of maximum weight in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$, i.e., a matching with the highest number of pairs selected in the first stage. We refer to $R^{\policy}(\varFirst, \oneCe)$ as the MIP formulation (Appendix \ref{sec:RecourseFormls}) solving the recourse problem in graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$. Lastly, we refer to $\hat{\mathbf{y}}$ as an optimal recourse solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)$ with objective value ${Z}^{\pi,\star}_{R}(\varFirst, \oneCe)$.
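Conceptually, the graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ on which this recourse problem is solved keeps only the cycles/chains untouched by the failure scenario. A minimal sketch, under an assumed data layout (cycles/chains as vertex tuples paired with their arc tuples; all names are illustrative):

```python
# Hedged sketch: keep only the cycles/chains that survive a failure
# scenario, i.e. none of their vertices or arcs has failed.

def surviving(components, failed_v, failed_a):
    """Filter cycles/chains whose vertices and arcs are all operational."""
    alive = []
    for verts, arcs in components:
        if any(u in failed_v for u in verts):
            continue  # a vertex of this cycle/chain failed
        if any(a in failed_a for a in arcs):
            continue  # an arc of this cycle/chain failed
        alive.append((verts, arcs))
    return alive

# Cycle (2,9,10) loses vertex 2; chain (8,1,5) is untouched.
comps = [((2, 9, 10), ((2, 9), (9, 10), (10, 2))),
         ((8, 1, 5), ((8, 1), (1, 5)))]
print(surviving(comps, failed_v={2}, failed_a={(5, 6)}))  # keeps only (8,1,5)
```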
\subsubsection{The master problem: MasterBasic} We continue to use $\gamma^{\text{v}}$ and $\gamma^{\text{a}}$ as binary vectors representing vertex failures ($\gamma^{\text{v}}_{u} = 1$ for $u \in V$) and arc failures ($\gamma^{\text{a}}_{\vertexa\vertexb} = 1$ for $(\vertexa,\vertexb) \in A$), respectively. We let $\hat{\bigsCe}(\varFirst) \subseteq \bigsCe(\varFirst)$ be the subset of scenarios $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$ under which the recourse problem has already been solved. Thus, the master problem for the second-stage problem can be formulated as follows:\begin{subequations} \label{mo:basicfeas} \begin{align} \tag{MB} \text{MasterBasic}(\varFirstPlain) \text{: } &&& \qquad \qquad \text{Find } \bm{\oneCeS} \label{MasterBasic}\\ && \sum_{c \in \mathcal{C}_{\cycleCap} \cup \mathcal{C}_{\chainCap}: \varFirstPlain_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \ge 1 && \varFirstPlain \in \mathcal{X} \label{eq:AtLeastOneX} \\ && \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \hat{\oneCe}): \hat{\mathbf{y}}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \ge 1 && \hat{\mathbf{y}} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe}); \hat{\oneCe} \in \hat{\bigsCe}(\varFirst) \label{eq:AtLeastOne}\\ &&& \qquad \qquad \bm{\oneCeS} \in \Gamma&& \label{basicGamma} \end{align} \end{subequations} where $\mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \hat{\oneCe}) = \cycleSet^{\pi}(\varFirstPlain, \hat{\oneCe}) \cup \chainSet^{\pi}(\varFirstPlain, \hat{\oneCe})$. Observe that Constraints \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne} correspond to covering constraints in a set cover problem, one of Karp's 21 NP-complete problems \citep{Karp1972}.
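Since \ref{MasterBasic} is a pure feasibility problem, it asks for a scenario that hits every covering row within the failure budgets. The following toy sketch makes this hitting-set view concrete by brute force; in practice a MIP solver handles the search, and all names here are illustrative assumptions:

```python
# Hedged sketch: each processed recourse solution contributes one
# covering row, the set of its vertex/arc indicators, at least one of
# which must fail. A scenario is feasible for the master problem iff it
# hits every row within a total failure budget (toy, brute-force only).

from itertools import combinations

def feasible_scenario(rows, elements, r_total):
    """Smallest hitting set of size <= r_total, or None if infeasible."""
    for k in range(1, r_total + 1):
        for cand in combinations(elements, k):
            chosen = set(cand)
            if all(chosen & row for row in rows):  # hits every covering row
                return sorted(chosen)
    return None  # infeasible: the worst case has already been identified

rows = [{"v2", "a56"}, {"v3", "v4", "a34"}]
print(feasible_scenario(rows, sorted(rows[0] | rows[1]), r_total=2))
```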
That is, at least one vertex or arc in \textit{every} cycle/chain selected in recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ \textit{must} fail before the optimal worst-case scenario $\bm{\gamma}^{\star} \in \Gamma$ for a first-stage solution $\varFirstPlain \in \mathcal{X}$ is found. Algorithm \ref{Alg:BasicCovering} generates new failure scenarios $\hat{\oneCe} \in \bigsCe(\varFirst)$ until master problem \ref{MasterBasic} becomes infeasible, at which point the worst-case scenario, $\bm{\gamma}^{\star}$, and its associated optimal recourse solution value, i.e., the optimal objective value of the second-stage problem, $Z_{Q}^{\pi,\star}(\varFirst)$, have already been found. Algorithm \ref{Alg:BasicCovering} starts with an empty subset of scenarios $\hat{\bigsCe}(\varFirst)$ and with an upper bound on the objective value of the second stage, ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$, equal to the objective value of the robust KEP in Model \eqref{SingleROmodel}, i.e., $\tilde{Z}_{P}$. \ref{MasterBasic} is solved for the first time to obtain a failure scenario $\hat{\oneCe} \in \bigsCe(\varFirst)$, which is added to $\hat{\bigsCe}(\varFirst)$. At Step 1, the recourse formulation $R^{\policy}(\varFirst, \cdteOne)$ is solved and an optimal recourse solution $\mathbf{y}$ is obtained with objective value ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$. The recourse solution is then added to $\setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ and a Constraint \eqref{eq:AtLeastOne} is created in \ref{MasterBasic}. If the objective value of the recourse solution, ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$, is lower than the current upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$, then the upper bound is updated, along with the incumbent failure scenario $\oneCe^{\prime}$. At Step 2, a new iteration starts and an attempt is made to find a feasible failure scenario in \ref{MasterBasic}.
If such a scenario exists, Step 1 is repeated; otherwise the algorithm goes to Step 3 and an optimal solution ($\oneCe^{\star}, Z_{Q}^{\pi,\star}(\varFirst)$) to the second stage is returned. To prove the validity of Algorithm \ref{Alg:BasicCovering}, we state the following: \begin{algorithm}[tbp] \algorithmicrequire { A recourse policy $\pi \in \Pi$ and feasible solution $\varFirstPlain \in \mathcal{X}$}\\ \algorithmicensure { Optimal recovery plan value $Z_{Q}^{\pi,\star}(\varFirst)$ and worst-case scenario $\oneCe^{\star} \in \bigsCe(\varFirst)$}\\ \textbf{Step 0: } $i = 1 $; $\hat{\bigsCe}(\varFirst) = \emptyset$; ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) \gets \tilde{Z}_{P}$; Solve \ref{MasterBasic} with Constraint \eqref{eq:AtLeastOneX} to obtain scenario $\hat{\oneCe}$; $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \{\hat{\oneCe}\}$ \\ \textbf{Step 1: } Solve $R^{\policy}(\varFirst, \cdteOne)$ to obtain objective value ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ and recourse solution $\hat{\varReco}$; create Constraint \eqref{eq:AtLeastOne}\\ \textbf{\textcolor{white}{Step 1: }} If ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ then ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) = {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ and $\oneCe^{\prime} \gets \hat{\oneCe}$\\ \textbf{Step 2: } $i \gets i + 1 $; Attempt to solve MasterBasic$(\varFirstPlain)$ to get a new candidate scenario $\hat{\oneCe}$;\\ \textbf{\textcolor{white}{Step 2:} } If MasterBasic$(\varFirstPlain)$ is feasible go to Step 1;\\ \textbf{Step 3: } $\oneCe^{\star} \gets \oneCe^{\prime}$; $Z_{Q}^{\pi,\star}(\varFirst) \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$; Return $\oneCe^{\star}$ and $Z_{Q}^{\pi,\star}(\varFirst)$ \caption{A basic feasibility-seeking algorithm for the second-stage master problem} \label{Alg:BasicCovering} \end{algorithm} \proposition Algorithm \ref{Alg:BasicCovering} returns the optimal objective value of the second stage
$Z_{Q}^{\pi,\star}(\varFirst)$ and worst-case scenario $\bm{\gamma}^{\star} \in \bigsCe(\varFirst)$ for a first-stage decision $\varFirstPlain \in \mathcal{X}$. \proof In the first part of the proof, we show that any optimal objective value of the recourse problem ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$, regardless of the scenario $\hat{\oneCe} \in \bigsCe(\varFirst)$, is an upper bound on the objective value of the second stage. In the second part, we show that the second-stage problem can be decomposed into an optimality-seeking master problem and a subproblem, the latter again corresponding to the recourse problem. We then show that constraints in the optimality-seeking master problem have a one-to-one correspondence with those in \ref{MasterBasic}. Part I. Observe that given a candidate solution $\varFirstPlain \in \mathcal{X}$ and policy $\pi \in \Pi$ \begin{align*} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})} \weightRO\varReco &\ge \min_{\bm{\oneCeS} \in \bigsCe(\varFirst)} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\varReco & \hat{\oneCe} &\in \bigsCe(\varFirst) \\ \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \hat{\oneCe}): \hat{\varReco}_{c} = 1} \mathbf{w_{\match}}(\varFirst) &\ge Z_{Q}^{\pi,\star}(\varFirst) & \hat{\varReco} &\in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe}); \hat{\oneCe} \in \bigsCe(\varFirst)\\ {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) &\ge Z_{Q}^{\pi,\star}(\varFirst) & \hat{\oneCe} &\in \bigsCe(\varFirst) \end{align*} That is, the optimal objective value of a recourse solution $\hat{\mathbf{y}} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$, regardless of the scenario $\hat{\oneCe} \in \bigsCe(\varFirst)$, is at least as large as the smallest value that $Z_{Q}^{\pi,\star}(\varFirst)$ can reach. Part II.
In what follows, $\mathbbm{1}_{c,\bm{\oneCeS}}$ is an indicator variable that takes on value one if cycle/chain $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe)$ fails under scenario $\bm{\oneCeS} \in \Gamma$, and zero otherwise. Let $\cycleSet^{\pi}(\varFirstPlain)$ and $\chainSet^{\pi}(\varFirstPlain)$ be the sets of cycles and chains of the transitory graph $D^{\pi}(\varFirstPlain)$, which exists before a failure scenario is observed but after a set of transplants has already been proposed. Then, we define $\alpha_{c}$ as a binary decision variable for a cycle/chain $c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ that takes on value one if that cycle/chain fails, and zero otherwise. In Appendix \ref{Alg:CyChrecourse} we present a procedure to find $\cycleSet^{\pi}(\varFirstPlain)$ and $\chainSet^{\pi}(\varFirstPlain)$ given a first-stage solution $\varFirstPlain \in \mathcal{X}$. Note that since all feasible chains of length 1 to $L$ are found, the shortening of a chain when a failure occurs is represented by some $\alpha_{c}$ taking on value zero and another one taking on value one.
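The enumeration of all chains of length $1$ to $L$ referred to above can be sketched as a depth-first search from each singleton donor; the graph encoding and all names below are illustrative assumptions, not the appendix procedure itself:

```python
# Hedged sketch: enumerate all chains with 1..L arcs starting from a
# singleton (non-directed) donor. Recording every prefix is what lets a
# shortened chain appear as its own decision variable alpha_c.

def chains_from(ndd, succ, L):
    """All simple paths (chains) from ndd with 1..L arcs, as vertex tuples."""
    found = []

    def extend(path):
        if 1 <= len(path) - 1 <= L:
            found.append(tuple(path))   # record this prefix as a chain
        if len(path) - 1 == L:
            return                      # arc budget exhausted
        for v in succ.get(path[-1], []):
            if v not in path:           # chains are simple paths
                extend(path + [v])

    extend([ndd])
    return found

succ = {8: [1], 1: [5], 5: [6]}         # toy successor lists
print(chains_from(8, succ, L=2))        # [(8, 1), (8, 1, 5)]
```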
Then, the second-stage problem can be reformulated as follows: \begin{subequations} \begin{align} \label{eq:ReformScndS} &\min_{\bm{\oneCeS} \in \bigsCe(\varFirst)} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe)} \mathbf{w_{\match}}(\varFirst)\mathbf{y}_{c} \\ =\min_{\bm{\oneCeS} \in \bigsCe(\varFirst)} \quad &Z^{\pi}_{Q}(\varFirst)\\ & Z^{\pi}_{Q}(\varFirst) \ge \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe)} \left(\mathbf{w_{\match}}(\varFirst) - \mathbf{w_{\match}}(\varFirst) \mathbbm{1}_{c,\bm{\oneCeS}} \right) \mathbf{y}_{c} \\ =\min_{\bm{\oneCeS} \in \bigsCe(\varFirst)} \quad &Z^{\pi}_{Q}(\varFirst)\\ & Z^{\pi}_{Q}(\varFirst) \ge \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe)} \left(\mathbf{w_{\match}}(\varFirst) - \mathbf{w_{\match}}(\varFirst) \mathbbm{1}_{c,\bm{\oneCeS}} \right) \mathbf{y}_{c} & \mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)\\ =\min \quad &Z^{\pi}_{Q}(\varFirst) \tag{Q} \label{objMPOpt}\\ & Z^{\pi}_{Q}(\varFirst) \ge {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) - \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \hat{\oneCe}): \hat{\mathbf{y}}_{c} = 1} \mathbf{w_{\match}}(\varFirst)\hat{\mathbf{y}}_{c} \alpha_{c} & \hat{\varReco} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe}); \hat{\oneCe} \in \hat{\bigsCe}(\varFirst) \label{eq:solsTwo}\\ & \alpha_{c} \le \sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} &c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain) \label{eq:Inter}\\ &\bm{\oneCeS} \in \bigsCe(\varFirst) \label{eq:budget}\\ &\alpha_c \in \{0,1\} & c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain) \label{eq:Bound} \end{align} \end{subequations} That is, for $Z^{\pi}_{Q}(\varFirst)$ to be as small as possible at least one cycle or chain, and thus, 
at least one vertex or arc, must fail in every matching associated with an optimal recourse solution $\hat{\mathbf{y}} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ (Constraints \eqref{eq:solsTwo}). If it were not for the failure budget limiting the maximum number of failed vertices and arcs (Constraint \eqref{eq:budget}), $Z^{\pi}_{Q}(\varFirst)$ could reach zero. Note that if \ref{objMPOpt} is unable to cause the failure of a new optimal recourse solution $\hat{\varReco}$, it is because doing so would violate Constraint \eqref{eq:budget}, and from Part I we know that the optimal objective value of that solution is at least $Z_{Q}^{\pi,\star}(\varFirst)$. Thus, the worst-case scenario $\oneCe^{\star}$ is found when there exists a Constraint \eqref{eq:solsTwo} associated with a recourse solution $\hat{\varReco}$ whose cycles/chains do not fail. Observe that there is a one-to-one correspondence between Constraints \eqref{eq:solsTwo} in \ref{objMPOpt} and Constraints \eqref{eq:AtLeastOne} in \ref{MasterBasic}. At the start of Algorithm \ref{Alg:BasicCovering}, when no scenario is known, Constraint \eqref{eq:solsTwo} has Constraint \eqref{eq:AtLeastOneX} as its counterpart in \ref{MasterBasic}. Therefore, the smallest value of ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ among all scenarios $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$ before \ref{MasterBasic} becomes infeasible is the optimal value of the second stage. \hfill$\square$ \subsubsection{Lifting constraints} Unlike valid inequalities, \textit{non-valid} inequalities cut off feasible solutions \citep{Atamturk2000, Hooker1994}, and are therefore invalid in the standard sense. Although non-valid inequalities remove some integer solutions, all optimal solutions are preserved. Next, we derive the first family of non-valid inequalities to narrow down the search for the worst-case scenario in \ref{MasterBasic}.
Our goal is to strengthen the right-hand side of Constraints \eqref{eq:AtLeastOne} that have already been generated up to some iteration $i$ in Algorithm \ref{Alg:BasicCovering}, by updating the minimum number of vertices/arcs that must fail in each of those constraints whenever a smaller value of ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ is found. For a recourse solution $\hat{\mathbf{y}} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$, and assuming that the failure of a vertex/arc completely causes the failure of its cycle or chain, we sort the cycle and chain weights $\mathbf{w_{\match}}(\varFirst)$ for all $\mathbf{y}_c = 1$ with $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe)$ in non-increasing order, so that $\mathbf{w_{\match}}(\varFirst)_1 \ge \mathbf{w_{\match}}(\varFirst)_2 \ge \dots \ge \mathbf{w_{\match}}(\varFirst)_{\lvert \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \oneCe) \rvert}$, and let $H(\hat{\oneCe})$ be a parameter indicating the right-hand side value of the corresponding Constraint \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne}. We can now state the following: \begin{proposition} \label{prop:RHSi} $H(\hat{\oneCe}) = t$, where $t$ is the smallest index for which the following condition holds: ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) - \sum_{j = 1}^{t} \mathbf{w_{\match}}(\varFirst)_j < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$. \end{proposition} \proof Observe that the second-stage objective improves on the incumbent bound only if $Z^{\pi}_{Q}(\varFirst) < {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ for all $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$, i.e., only if $Z^{\pi}_{Q}(\varFirst) < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$. To satisfy this condition, a minimum number of cycles/chains must fail in every Constraint \eqref{eq:solsTwo}. Also, observe that Constraints \eqref{eq:Inter} imply that whenever at least one vertex/arc fails, so does its associated cycle/chain.
Therefore, finding the minimum number of cycles/chains that must fail in $\ref{objMPOpt}$ to satisfy $Z^{\pi}_{Q}(\varFirst) < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ implies finding the minimum number of vertices/arcs that must fail in Constraints \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne}. It is easy to see that sorting the cycle/chain weights in non-increasing order, and then subtracting them in that order from ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ until the result is strictly smaller than ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$, yields a valid lower bound on the number of cycles/chains that must fail. \hfill$\square$ \subsection{An expanded feasibility-seeking decomposition} \label{sec:ExtFeasForm} Next, we introduce the second new decomposition for the second-stage problem. \subsubsection{The recourse problem} So far, we have solved the recourse problem on a realization of the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$. However, an optimal solution to the recourse problem $R^{\policy}(\varFirst, \oneCe)$ can also be found in the transitory graph $D^{\pi}(\varFirstPlain)$, by allowing failed cycles/chains into the optimal solution given by $R^{\policy}(\varFirst, \oneCe)$. Although this solution does not increase the recourse objective value, as we will show, it can prevent some dominated scenarios from being explored, and therefore reduce the number of times the recourse problem is re-solved in Algorithm \ref{Alg:BasicCovering}. We refer to $\Rexp(\varFirst, \oneCe)$ (Appendix \ref{sec:RecourseFormls}) as the recourse problem solved in the transitory graph $D^{\pi}(\varFirstPlain)$, whose optimal recourse solutions are also optimal to $R^{\policy}(\varFirst, \oneCe)$. We refer the reader to Appendix \ref{sec:RecourseFormls} for a proof.
The objective function in $\Rexp(\varFirst, \oneCe)$ has no meaning beyond being useful for finding optimal solutions to the recourse problem $R^{\policy}(\varFirst, \oneCe)$ that include failed cycles/chains. Thus, we let $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \oneCe)$ be an optimal recourse solution in the transitory graph $D^{\pi}(\varFirstPlain)$ that is also optimal to $R^{\policy}(\varFirst, \oneCe)$ under scenario $\bm{\oneCeS} \in \bigsCe(\varFirst)$. The set of solutions in the transitory graph, $\setRecoSym_{\text{exp}}^{\policy}(\varFirst, \oneCe)$, is defined in the same way as $\setRecoSym^{\policy}(\varFirst, \oneCe)$, except that $\cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$ and $\chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$ are replaced by $\cycleSet^{\pi}(\varFirstPlain)$ and $\chainSet^{\pi}(\varFirstPlain)$, respectively. \subsubsection{The master problem: MasterExp} In the basic reformulation, we assumed that recourse solutions correspond to matchings with non-failed components only. In our expanded formulation we assume that recourse solutions can be expanded to fit some failed cycles/chains by solving the recourse problem in the transitory graph $D^{\pi}(\varFirstPlain)$. We let $\VarRecoOpExp(\hat{\oneCe}) \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ be the subset of the optimal recourse solution in the transitory graph that has no failed cycles/chains and thus corresponds to a feasible solution in $\setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$.
Thus, the expanded feasibility-seeking reformulation is expressed as follows: \begin{subequations} \label{mo:extendedfeas} \begin{align} \tag{ME} \text{MasterExp}(\varFirstPlain) \text{: } &&& \qquad \qquad \text{Find } \bm{\oneCeS} \label{MasterExp}\\ && \sum_{c \in \mathcal{C}_{\cycleCap} \cup \mathcal{C}_{\chainCap}: \varFirstPlain_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \ge 1 && \varFirstPlain \in \mathcal{X} \label{eq:AtLeastOneXExp} \\ && \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlain, \hat{\oneCe}): \hat{\varReco}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \ge 1 && \VarRecoOpExp(\hat{\oneCe}) \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe}); \hat{\oneCe} \in \hat{\bigsCe}(\varFirst) \label{eq:AtLeastOneExp}\\ && \sum_{c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain): \hat{\varReco}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u}+ \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \ge H_{\text{e}}(\hat{\oneCe}) && \ \bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \hat{\oneCe}); \hat{\oneCe} \in \hat{\bigsCe}(\varFirst) \label{eq:AtLeastOneRHSExp}\\ &&& \qquad \qquad \bm{\oneCeS} \in \Gamma&& \label{GammaExp} \end{align} \end{subequations} Constraints \eqref{eq:AtLeastOneXExp} and \eqref{eq:AtLeastOneExp} are equivalent to Constraints \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne}. Constraints \eqref{eq:AtLeastOneRHSExp} require that at least $H_{\text{e}}(\hat{\oneCe})$ vertices and arcs fail in an expanded recourse solution $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \hat{\oneCe})$ which may include cycles and chains that fail under scenario $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$. 
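The right-hand sides $H_{\text{e}}(\hat{\oneCe})$ (and likewise $H(\hat{\oneCe})$ in Proposition \ref{prop:RHSi}) amount to a greedy prefix scan over the sorted cycle/chain weights; a minimal sketch with illustrative names:

```python
# Hedged sketch: smallest number t of cycle/chain failures needed to
# drive the recourse value strictly below the incumbent upper bound,
# assuming any vertex/arc failure destroys its whole cycle/chain.

def rhs_threshold(z_recourse, weights, upper_bound):
    """Smallest t with z_recourse - (sum of t largest weights) < upper_bound."""
    total = 0
    for t, w in enumerate(sorted(weights, reverse=True), start=1):
        total += w                       # subtract heaviest components first
        if z_recourse - total < upper_bound:
            return t
    return len(weights)                  # all components may need to fail

# Recourse value 5, cycle/chain weights (3, 2), incumbent bound 5:
print(rhs_threshold(5, [3, 2], 5))  # 1, since 5 - 3 = 2 < 5
```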
Proposition \ref{prop:RHSExtdi} defines the value of $H_{\text{e}}(\hat{\oneCe})$. The goal of \ref{MasterExp} is to enforce the failure of two different recourse solutions with identical objective value under scenario $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$, i.e., a recourse solution found in the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \hat{\oneCe})$ and a solution found in the transitory graph $D^{\pi}(\varFirstPlain)$. For a recourse solution $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \hat{\oneCe})$, and again assuming that the failure of a vertex/arc completely causes the failure of its cycle or chain, we sort the cycle and chain weights $\mathbf{w_{\match}}(\varFirst)$ for all $\mathbf{y}_c = 1$ with $c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ in non-increasing order, so that $\mathbf{w_{\match}}(\varFirst)_1 \ge \mathbf{w_{\match}}(\varFirst)_2 \ge \dots \ge \mathbf{w_{\match}}(\varFirst)_{\lvert \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain) \rvert}$. We can now state: \begin{proposition} \label{prop:RHSExtdi} $H_{\text{e}}(\hat{\oneCe}) = t$, where $t$ is the smallest index for which the following condition holds: ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) - \sum_{j = 1}^{t} \mathbf{w_{\match}}(\varFirst)_j < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$. \end{proposition} \proof It follows by the same arguments given for Proposition \ref{prop:RHSi}. \hfill$\square$ \noindent In the following example we see that \ref{MasterExp} can yield failure scenarios that dominate those in \ref{MasterBasic}. \paragraph{Example} Let us consider Figure \ref{fig:FullRecourse} again under full recourse. Given a first-stage solution $\varFirstPlain \in \mathcal{X}$ involving eight pairs in Figure \ref{fig:CompGraph}, the believed compatibility graph also corresponds to the transitory graph $D^{\pi}(\varFirstPlain)$.
Once we observe failure scenario $\hat{\gamma}^{\text{v}}_{2} = 1$ and $\hat{\gamma}^{\text{a}}_{5,6} = 1$, the realization of the second-stage compatibility graph, $D^{\pi}(\varFirstPlain, \hat{\oneCe})$, is such that vertices 2, 9 and 10 do not belong to it. The optimal objective value of the recourse problem in $D^{\pi}(\varFirstPlain, \hat{\oneCe})$, i.e., $R^{\policy}(\varFirst, \cdteOne)$, is ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) = 5$ with an optimal solution $\hat{\varReco}$ involving cycle (3,4,6) and chain (8,1,5). When we solve the recourse problem in the transitory graph $D^{\pi}(\varFirstPlain)$, the optimal objective value is also 5, but its optimal recourse solution $\bar{\varReco} = \hat{\varReco} \cup \{(2,9,10)\}$ involves, in addition to cycle (3,4,6) and chain (8,1,5), the failed cycle (2,9,10). A feasible scenario is highlighted in red for every iteration. Assuming $\bm{\oneCeS} \in \Gamma$, the first two iterations for \ref{MasterBasic} and \ref{MasterExp} are shown below: \vspace{-1.5cm} \begin{multicols}{2} \begin{align*} i = 1, \ {\bar{\varCov}}^{\pi}_{Q}(\varFirst) = 8\\ \text{MasterBasic}(\varFirstPlain) \text{: }\\ {\color{red} \gamma^{\text{v}}_{2}} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + \gamma^{\text{v}}_{3} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4}\\ + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + {\color{red} \gamma^{\text{a}}_{5,6}} \ge 1 \\ i = 2, \ {\bar{\varCov}}^{\pi}_{Q}(\varFirst) = 5\\ \gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + {\color{red}\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4}\\ + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + {\color{red} \gamma^{\text{a}}_{5,6}} \ge 1 \rightarrow {\color{red} 2} \text{ by } \ref{prop:RHSi} \\ {\color{red}\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3}\\ + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{8} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{5,8} \ge 1 \end{align*} \columnbreak \begin{align*} i = 1, \ {\bar{\varCov}}^{\pi}_{Q}(\varFirst) = 8\\ \text{MasterExp}(\varFirstPlain) \text{: }\\ {\color{red} \gamma^{\text{v}}_{2}} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + \gamma^{\text{v}}_{3} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4}\\ + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + {\color{red} \gamma^{\text{a}}_{5,6}} \ge 1\\ i = 2, \ {\bar{\varCov}}^{\pi}_{Q}(\varFirst) = 5\\ \gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + {\color{red}\gamma^{\text{a}}_{9,10}} + \gamma^{\text{a}}_{10,2} + {\color{red}\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4}\\ + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{5,6} \ge 1 \rightarrow {\color{red} 2} \text{ by } \ref{prop:RHSi}\\ {\color{red}\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3}\\ + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{8} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{5,8} \ge 1\\ {\color{red}\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5}\\ + \gamma^{\text{v}}_{8} +
\gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{5,8} + \gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + {\color{red}\gamma^{\text{a}}_{9,10}}\\ + \gamma^{\text{a}}_{10,2} \ge 1 \rightarrow {\color{red} 2} \text{ by } \ref{prop:RHSExtdi} \end{align*} \end{multicols} Observe that in the second iteration, the failure scenario that is feasible to \ref{MasterBasic} is not feasible to \ref{MasterExp}. The new failure scenario for \ref{MasterBasic} would lead to an optimal recourse solution with again ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) = 5$, whereas the optimal recourse solution for \ref{MasterExp} would lead to ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) = 3$. That is, in the third iteration the right-hand side of the constraints in \ref{MasterExp} will be updated with a new upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) = 3$, leading \ref{MasterExp} to infeasibility sooner. On the other hand, \ref{MasterBasic} in the third iteration will need another attempt to ``discover'' a failure scenario that will bring down the ceiling of ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$. \subsubsection{Dominating scenarios} \label{DominatingScenarios} A failure scenario $\tilde{\oneCe}^{\prime} \in \bigsCe(\varFirst)$ dominates another failure scenario $\tilde{\oneCe} \in \bigsCe(\varFirst)$ if the former implies the latter, i.e., $\varCov^{\star}_{R}(\varFirst,\oneCePrime) \le \varCov^{\star}_{R}(\varFirst, \oneCeNoPrime)$. Let $I(\tilde{\oneCe}^{\prime})$ and $I(\tilde{\oneCe})$ be the numbers of failed vertices and arcs under the corresponding scenarios.
Moreover, let $C^{\policy}(\varFirst, \oneCePrime) \subseteq \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ and $C^{\policy}(\varFirst, \oneCeNoPrime) \subseteq \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ be the sets of feasible cycles and chains including vertices and arcs that fail in the transitory graph $D^{\pi}(\varFirstPlain)$ under scenarios $\tilde{\oneCe}^{\prime}$ and $\tilde{\oneCe}$, respectively. We then state, \begin{proposition} \label{prop:dominance} If $C^{\policy}(\varFirst, \oneCeNoPrime) \subseteq C^{\policy}(\varFirst, \oneCePrime)$, then $\tilde{\oneCe}^{\prime}$ dominates $\tilde{\oneCe}$, and the following dominance inequality is valid for \ref{MasterBasic} and \ref{MasterExp}. \end{proposition} \vspace*{-9mm} \begin{align} \label{eq:dominance} \sum_{u: \tilde{\oneCe}_{u} = 1} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb): \tilde{\oneCe}_{\vertexa\vertexb} = 1} \gamma^{\text{a}}_{\vertexa\vertexb} \le I(\tilde{\oneCe}) \left(I(\tilde{\oneCe}^{\prime}) - \sum_{u: \tilde{\oneCe}^{\prime}_{u} = 1} \gamma^{\text{v}}_{u} - \sum_{(\vertexa,\vertexb): \tilde{\oneCe}^{\prime}_{\vertexa\vertexb} = 1} \gamma^{\text{a}}_{\vertexa\vertexb} \right) \end{align} \proof Since $C^{\policy}(\varFirst, \oneCeNoPrime) \subseteq C^{\policy}(\varFirst, \oneCePrime)$, all cycles and chains that fail under $\tilde{\oneCe}$ also fail under $\tilde{\oneCe}^{\prime}$, which means that $\varCov^{\star}_{R}(\varFirst,\oneCePrime) \le \varCov^{\star}_{R}(\varFirst, \oneCeNoPrime)$. Thus, if $\tilde{\oneCe}^{\prime}$ occurs, another failure scenario should be proposed instead of $\tilde{\oneCe}$ to bring the value of ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ down.
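The subset test behind Proposition \ref{prop:dominance} can be illustrated with a small Python sketch; the data structures below are hypothetical (a scenario is a set of failed vertex/arc labels, and each cycle or chain is the set of elements it uses):

```python
def failed_components(scenario, components):
    """Cycles/chains hit by a scenario: a component fails as soon as
    at least one of its vertices or arcs belongs to the scenario."""
    return {name for name, elems in components.items() if elems & scenario}

def dominates(scen_prime, scen, components):
    """scen_prime dominates scen if every cycle/chain failing under
    scen also fails under scen_prime (the subset test)."""
    return failed_components(scen, components) <= failed_components(scen_prime, components)

# Toy instance with two cycles and one chain; labels are illustrative.
comps = {
    "c1": {"v2", "v9", "v10", "a2,9", "a9,10", "a10,2"},
    "c2": {"v3", "v4", "a3,4", "a4,3"},
    "ch": {"v1", "v5", "v6", "a1,5", "a5,6"},
}
```

For instance, a scenario failing vertex 2 and arc (5,6) hits every component that a scenario failing only arc (9,10) hits, so the former dominates the latter on this toy instance.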
A natural observation is that finding all $\tilde{\oneCe}, \tilde{\oneCe}^{\prime} \in \bigsCe(\varFirst)$ satisfying Proposition \ref{prop:dominance} is not straightforward, and there could be too many constraints of type \eqref{eq:dominance} to feed into our master problem formulations. We explore two alternatives to separate these dominating-scenario cuts: the first one consists of identifying a priori a subset of scenarios that satisfy constraint \eqref{eq:dominance}. The second one consists of attempting to solve \ref{MasterBasic} and \ref{MasterExp} \textit{exactly} via a heuristic and then ``discovering'' dominating scenarios on the fly. Next, we present a subset of dominated scenarios following either of the two strategies. \paragraph{Adjacent-failure separation} When a vertex failure occurs in the transitory graph $D^{\pi}(\varFirstPlain)$, the number of failed cycles and chains is not affected if an arc adjacent to that vertex also fails. Thus, for every vertex $u \in V$, we can build a dominating scenario in which the only non-zero value in vector $\tilde{\oneCe}^{\prime\text{v}}$ is $\tilde{\oneCe}^{\prime\text{v}}_{u} = 1$, and dominated scenarios for all arcs either leaving $u$, i.e., $\tilde{\oneCe}^{\text{a}}_{\vertexa\vertexb} = 1$, or pointing towards it, i.e., $\tilde{\oneCe}^{\text{a}}_{\vertexb\vertexa} = 1$. Note that the following constraints satisfy Proposition \ref{prop:dominance}: \begin{align} \label{eq:adjacent} \sum_{(\vertexa,\vertexb) \in A} \gamma^{\text{a}}_{\vertexa\vertexb} + \sum_{(\vertexb,\vertexa) \in A} \gamma^{\text{a}}_{\vertexb\vertexa} \le r^{\text{a}} \left(1 - \gamma^{\text{v}}_{u} \right)&& u \in V \end{align} Constraints \eqref{eq:adjacent} can be added to \ref{MasterBasic} and \ref{MasterExp} before the start of Algorithm \ref{Alg:BasicCovering}.
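Since Constraints \eqref{eq:adjacent} are generated once up front, their construction is mechanical. The following Python sketch (hypothetical data structures, not our implementation) collects, for each vertex, the arcs incident to it, which is all that is needed to state one cut per vertex:

```python
def adjacent_failure_cuts(vertices, arcs, r_a):
    """For each vertex u, gather the arcs entering or leaving u.
    Each triple (u, incident_arcs, r_a) encodes one cut of the form
    sum of arc-failure variables over incident_arcs <= r_a * (1 - gamma_v[u])."""
    cuts = []
    for u in vertices:
        incident = [(i, j) for (i, j) in arcs if i == u or j == u]
        cuts.append((u, incident, r_a))
    return cuts
```

A modeling layer would then translate each triple into the corresponding linear constraint; the triple representation here is only for illustration.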
\paragraph{Single-vertex-arc separation} \label{vxtarcsep} Suppose we have found a set of proposed failed arcs $A(\tilde{\oneCe}^{\text{a}})$ such that $\tilde{\oneCe}^{\text{a}}_{\vertexa\vertexb} = 1 \ \forall {(\vertexa,\vertexb)} \in A(\tilde{\oneCe}^{\text{a}})$, and a set of proposed failed vertices, $V(\tilde{\oneCe}^{\text{v}})$, such that $\tilde{\oneCe}^{\text{v}}_{u} = 1 \ \forall u \in V(\tilde{\oneCe}^{\text{v}})$. A realization of the second-stage compatibility graph corresponds to $D^{\pi}(\varFirstPlain, \tilde{\oneCe}) = (V^{\pi}_{\varFirstPlain} \setminus V(\tilde{\oneCe}^{\text{v}}), A^{\pi}_{\varFirstPlain} \setminus A(\tilde{\oneCe}^{\text{a}}))$. Then, the following two cases also satisfy Proposition \ref{prop:dominance}: \begin{enumerate} \item \label{CaseOne} Suppose there is a candidate pair $\bar{v} \in V^{\pi}_{\varFirstPlain} \setminus V(\tilde{\oneCe}^{\text{v}})$ and let $C^{\policy, \bar{\vertexb}}(\varFirst)$ be the set of feasible cycles and chains in $D^{\pi}(\varFirstPlain, \tilde{\oneCe})$ that include vertex $\bar{v}$. Then, if $C^{\policy, \bar{\vertexb}}(\varFirst) = \emptyset$, scenario $\tilde{\oneCe}$ dominates scenario $\tilde{\oneCe} \cup \{\bar{v}\}$, and thus for ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ to decrease, another vertex $\bar{v}$ should be proposed to fail. \item \label{CaseTwo} Suppose there is a candidate arc $\bar{a} \in A^{\pi}_{\varFirstPlain} \setminus A(\tilde{\oneCe}^{\text{a}})$ and let $C^{\policy,\bar{a}}(\varFirst)$ be the set of recourse cycles and chains in $D^{\pi}(\varFirstPlain, \tilde{\oneCe})$ that include arc $\bar{a}$. Then, if $C^{\policy,\bar{a}}(\varFirst) = \emptyset$, scenario $\tilde{\oneCe}$ dominates scenario $\tilde{\oneCe} \cup \{\bar{a}\}$, and thus another arc $\bar{a}$ should be proposed to fail if ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ is to be lowered.
\end{enumerate} Instead of removing dominated scenarios from \ref{MasterBasic} and \ref{MasterExp} through cutting planes, we incorporate the previous two cases in a heuristic that attempts to solve such master problems exactly, as shown in the next section. \section{Solution Algorithms for the Second Stage} \label{sec:HSAs} This section presents solution algorithms for the second stage. We refer to \textit{hybrid} solution algorithms as the combination of a linear optimization solver and a heuristic aiming to solve the second-stage problem \textit{exactly}. Our goal is to specialize the steps of Algorithm \ref{Alg:BasicCovering} to improve the efficiency of our final solution approaches. \subsection{A feasibility-based solution algorithm for MasterBasic} The first of our two feasibility-based solution algorithms has \ref{MasterBasic} as master problem, and it is referred to as FBSA\_MB. FBSA\_MB requires a policy $\pi \in \Pi$ and a first-stage solution $\varFirstPlain$ found at Step 0 of Algorithm \ref{Alg:ScenarioGeneration}. We refer to $\textbf{ToMng}(\cdot)$ as a function that ``extracts'' the matching from a recourse solution, which is added to a set $\mathcal{M}$ containing all matchings associated with the recourse solutions found up to some iteration $i$ of the algorithm. At Step 0, $\mathcal{M}$ takes as input the matching associated with the first-stage solution $\varFirstPlain \in \mathcal{X}$. The procedure \textbf{Heuristics}($\mathcal{M}$), whose algorithmic details we present shortly, attempts to find a failure scenario satisfying the constraints in \ref{MasterBasic}. The heuristic returns a tuple $(\incumCe, \cover)$ with two outputs: the first one is a \textit{candidate} failure scenario $\oneCe^{\prime}$; the second one is $\textit{cover}$, a boolean variable that indicates whether $\oneCe^{\prime}$ satisfies Constraints \eqref{eq:AtLeastOne}.
At Step 0, because there is only one matching in $\mathcal{M}$, it is trivial to heuristically choose one element (vertex or arc) that satisfies Constraint \eqref{eq:AtLeastOneX}, and thus the boolean variable $\textit{cover}$ is \textbf{true}. This failure scenario is then added to $\hat{\bigsCe}(\varFirst)$. At Step 1, we attempt to find the optimal recourse solution through a column generation (CG) algorithm, $\textbf{ColGen}(R^{\policy}(\varFirst, \cdteOne))$, for which we generate the sets $\cycleSet^{\pi}(\varFirstPlain)$ and $\chainSet^{\pi}(\varFirstPlain)$ every time a new first-stage decision is made. For very large instances, large-scale decomposition algorithms such as the one proposed in \citep{Riascos2020} can be used to find positive-price columns (cycles or chains) as opposed to searching through all cycles and chains. As usual in a CG algorithm, only positive-price columns are added to its master problem iteratively. Once the optimality of the CG master problem is proven, an upper bound $Z^{\text{UB}}_{\text{cg}}$ on the optimal value of the recourse problem is returned. Then, we take the decision variables from the master problem basis and turn them into binary ones to obtain a feasible recourse solution $\tilde{\mathbf{y}}$ with objective value $Z^{\star}_{\text{cg}}$. If $Z^{\star}_{\text{cg}}$ equals the upper bound $Z^{\text{UB}}_{\text{cg}}$, then $\tilde{\mathbf{y}}$ is optimal to formulation $R^{\policy}(\varFirst, \cdteOne)$; in that case, at Step 1, $\tilde{\mathbf{y}}$ becomes the optimal recourse solution $\hat{\varReco}$ and both ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ and $\mathcal{M}$ are updated. Otherwise, we incur the cost of solving $R^{\policy}(\varFirst, \cdteOne)$ from scratch as a MIP instance, and update ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ and $\mathcal{M}$ accordingly.
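The bound test at Step 1 (round the CG basis, compare against the CG upper bound, and fall back to a MIP only when they disagree) can be sketched as follows. The callables `col_gen` and `solve_mip` are hypothetical stand-ins for $\textbf{ColGen}(\cdot)$ and the exact MIP solve:

```python
def recourse_value(col_gen, solve_mip):
    """Return the optimal recourse value and solution, solving a MIP
    from scratch only if rounding the CG basis does not close the gap."""
    z_cg, z_ub, y_tilde = col_gen()  # rounded value, CG upper bound, solution
    if z_cg == z_ub:                 # rounding already proves optimality
        return z_cg, y_tilde
    return solve_mip()               # fall back to the exact MIP

# Toy stand-ins: the rounded value matches the bound, so no MIP call.
calls = []
z, y = recourse_value(lambda: (5, 5, "y~"),
                      lambda: calls.append("mip") or (5, "y*"))
```

The point of the design is that the (cheap) rounding step frequently closes the gap, so the (expensive) MIP solve is paid only when necessary.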
If the new recourse value ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ is smaller than the upper bound on the objective value of the second stage, ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$, then we update both ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ and the incumbent failure scenario $\oneCe^{\star}$ that led to that value. Since a tighter upper bound has been found, the right-hand side of Constraints (\ref{eq:AtLeastOneX}-\ref{eq:AtLeastOne}) is updated following Proposition \ref{prop:RHSi}. At Step 2, a new iteration starts. We try to find a feasible failure scenario in \ref{MasterBasic}+Consts.\eqref{eq:adjacent} either through \textbf{Heuristics}($\mathcal{M}$) or by solving \ref{MasterBasic}+Consts.\eqref{eq:adjacent} as a MIP instance. If a feasible failure scenario is found, then the recourse problem is solved anew at Step 1. Otherwise, at Step 3, the algorithm ends and returns the optimal recovery plan ($Z_{Q}^{\pi,\star}(\varFirst)$, $\oneCe^{\star}$). \begin{algorithm}[tbp] \algorithmicrequire { A recourse policy $\pi \in \Pi$ and first-stage solution $\varFirstPlain \in \mathcal{X}$}\\ \algorithmicensure { Optimal recovery plan value $Z_{Q}^{\pi,\star}(\varFirst)$ and worst-case scenario $\oneCe^{\star} \in \bigsCe(\varFirst)$} \begin{algorithmic}[1] \item[] \textbf{Step 0: }\\ \STATE $i = 1 $; $I \gets I \cup \{i\}$; ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) \gets \tilde{Z}_{P}$; $\mathcal{M} \gets \textbf{ToMng}(\varFirstPlain)$ \STATE $(\incumCe, \text{\textbf{true}}) \gets$ \text{\textbf{Heuristics}}($\mathcal{M}$) \\ \STATE $\hat{\oneCe} \gets \oneCe^{\prime}$; $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \hat{\oneCe}$ \item[] \textbf{Step 1: }\\ \STATE $(\ZRecoCol, \ZRecoColUB, \tilde{\varReco}) \gets \textbf{ColGen}\left(R^{\policy}(\varFirst, \cdteOne) \right)$ \IF{$Z^{\star}_{\text{cg}} = Z^{\text{UB}}_{\text{cg}}$} \STATE ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) \gets Z^{\star}_{\text{cg}}$\\ $\hat{\varReco} \gets
\tilde{\mathbf{y}}$; $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\hat{\varReco})$ \ELSE \STATE Solve $R^{\policy}(\varFirst, \cdteOne)$ to obtain solution $\hat{\varReco} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ with objective value ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$; $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\hat{\varReco})$ \ENDIF \IF{${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$} \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) \gets {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$; $\oneCe^{\star} \gets \hat{\oneCe}$ \STATE Update $H(\hat{\oneCe})$ in \ref{MasterBasic}, \ $\forall \hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$ following Proposition \ref{prop:RHSi} \ENDIF \item[] \textbf{Step 2:} \\ \STATE $i \gets i + 1 $; \STATE $(\incumCe, \cover) \gets \text{\textbf{Heuristics}}(\mathcal{M})$ \IF{$\textit{cover} = \text{\textbf{true}}$} \STATE $\hat{\oneCe} \gets \oneCe^{\prime}$; $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \hat{\oneCe}$ \STATE Go to \textbf{Step 1} \ELSE \STATE Create \eqref{eq:AtLeastOne} for $\hat{\varReco} \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$. Attempt to solve \ref{MasterBasic}+Consts.\eqref{eq:adjacent} to get new candidate scenario $\hat{\oneCe}$ \IF{\ref{MasterBasic} is feasible} \STATE $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \hat{\oneCe}$ \STATE Go to \textbf{Step 1} \ENDIF \ENDIF \STATE \textbf{Step 3: } $Z_{Q}^{\pi,\star}(\varFirst) \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$; Return $\oneCe^{\star}$ and $Z_{Q}^{\pi,\star}(\varFirst)$ \caption{FBSA\_MB: A Feasibility-based Solution Algorithm for \ref{MasterBasic}} \label{Alg:BasicHybrid} \end{algorithmic} \end{algorithm} \subsection{A feasibility-based solution algorithm for MasterExp} Our second feasibility-based solution algorithm has \ref{MasterExp} as the master problem of the second stage, and we refer to it as FBSA\_ME. 
FBSA\_ME requires a recourse policy $\pi \in \Pi$ and a first-stage solution $\varFirstPlain \in \mathcal{X}$. In addition to the notation defined for FBSA\_MB, we introduce some new notation. We refer to $\mathcal{M}_{\text{exp}}$ as the set of matchings associated with the optimal recourse solutions obtained when solving the recourse problem in the transitory graph $D^{\pi}(\varFirstPlain)$. The CG algorithm, $\textbf{XColGen}\left(\Rexp(\varFirst, \hat{\oneCe}) \right)$, returns a tuple with three values: $Z^{\text{UB}}_{\text{Xcg}}$, $Z^{\star}_{\text{Xcg}}$ and $\tilde{\mathbf{y}}$. $Z^{\text{UB}}_{\text{Xcg}}$ corresponds to the optimal objective value returned by the master problem of our CG algorithm; $Z^{\star}_{\text{Xcg}}$ is the objective value of the feasible solution to the KEP obtained after turning the decision variables in the optimal basis of the CG master problem into binary ones. Lastly, $\tilde{\mathbf{y}}$ is the feasible recourse solution found by the CG algorithm with objective value $Z^{\star}_{\text{Xcg}}$. At the start, FBSA\_ME takes as input the matching associated with the first-stage solution $\varFirstPlain \in \mathcal{X}$ and \textbf{Heuristics}($\mathcal{M}$) returns the first failure scenario. At Step 1, we solve the linear relaxation of the recourse problem $\Rexp(\varFirst, \oneCe)$ through $\textbf{XColGen}\left(\Rexp(\varFirst, \hat{\oneCe}) \right)$. If $Z^{\star}_{\text{Xcg}} = Z^{\text{UB}}_{\text{Xcg}}$, then a function $\textbf{TrueVal}(Z^{\star}_{\text{Xcg}})$ returns the weighted sum of the cycles and chains in $\tilde{\mathbf{y}}$ that did not fail under the current failure scenario $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$. In case the feasible solution value $Z^{\star}_{\text{Xcg}}$ is lower than $Z^{\text{UB}}_{\text{Xcg}}$, the original recourse problem $R^{\policy}(\varFirst, \cdteOne)$ is solved as a MIP instance.
The reason for this decision is that $\Rexp(\varFirst, \oneCe)$ includes a larger set of cycle-and-chain decision variables, possibly leading to scalability issues when the recourse problem is solved as a MIP instance. For iterations where this occurs, i.e., $Z^{\star}_{\text{Xcg}} < Z^{\text{UB}}_{\text{Xcg}}$, both $\mathcal{M}$ and $\mathcal{M}_{\text{exp}}$ take as input the matching associated with $\hat{\varReco}$. If the upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ is updated, so is the right-hand side of the constraints in \ref{MasterExp}. At Step 2, a new iteration starts. A new attempt to find a feasible failure scenario is made, first by \text{\textbf{Heuristics}}($\mathcal{M} \cup \mathcal{M}_{\text{exp}}$) and then by solving \ref{MasterExp}+Consts.\eqref{eq:adjacent} as a MIP instance. If a feasible scenario is found, then a new recourse solution is computed at Step 1. Otherwise, FBSA\_ME returns an optimal solution to the second stage at Step 3. \begin{algorithm}[tbp] \algorithmicrequire { A recourse policy $\pi \in \Pi$, a first-stage solution $\varFirstPlain \in \mathcal{X}$}\\ \algorithmicensure { Optimal recovery plan value $Z_{Q}^{\pi,\star}(\varFirst)$ and worst-case scenario $\bm{\oneCeS}^{\star} \in \bigsCe(\varFirst)$} \begin{algorithmic}[1] \item[] \textbf{Step 0: }\\ \STATE $i = 1 $; ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) \gets \tilde{Z}_{P}$; $\mathcal{M} \gets \textbf{ToMng}(\varFirstPlain)$\\ \STATE $(\incumCe, \text{\textbf{true}}) \gets$ \text{\textbf{Heuristics}}($\mathcal{M}$) \\ \STATE $\hat{\oneCe} \gets \oneCe^{\prime}$; $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \hat{\oneCe}$ \item[] \textbf{Step 1: }\\ \STATE $(\ZRecoColexp, \ZRecoColUBexp, \tilde{\varReco}) \gets \textbf{XColGen}\left(\Rexp(\varFirst, \hat{\oneCe}) \right)$ \IF{$Z^{\star}_{\text{Xcg}} = Z^{\text{UB}}_{\text{Xcg}}$} \STATE $\bar{\varReco} \gets \tilde{\mathbf{y}}$ \STATE ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) \gets
\textbf{TrueVal}(Z^{\star}_{\text{Xcg}})$ \STATE $\mathcal{M}_{\text{exp}} \gets \mathcal{M}_{\text{exp}} \cup \textbf{ToMng}(\bar{\varReco})$; $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\VarRecoOpExp(\hat{\oneCe}))$ \ELSE \STATE Solve $R^{\policy}(\varFirst, \cdteOne)$ to obtain recourse objective value ${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$ and recourse solution $\hat{\varReco}$;\\ $\mathcal{M}_{\text{exp}} \gets \mathcal{M}_{\text{exp}} \cup \textbf{ToMng}(\hat{\varReco})$; $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\hat{\varReco})$ \ENDIF \IF{${Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$} \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirst) \gets {Z}^{\pi,\star}_{R}(\varFirst, \hat{\oneCe})$; $\oneCe^{\star} \gets \hat{\oneCe}$ \STATE Update $H(\hat{\oneCe})$ and $H_{\text{e}}(\hat{\oneCe})$ in \ref{MasterExp}, \ $\forall \hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$ following Propositions \ref{prop:RHSi} and \ref{prop:RHSExtdi}, respectively. \ENDIF \item[] \textbf{Step 2:} \\ \STATE $i \gets i + 1 $ \STATE $(\incumCe, \cover) \gets \text{\textbf{Heuristics}}(\mathcal{M} \cup \mathcal{M}_{\text{exp}})$ \IF{$\textit{cover} = \text{\textbf{true}}$} \STATE $\hat{\oneCe} \gets \oneCe^{\prime}$ \STATE Go to \textbf{Step 1} \ELSE \STATE Create Const.\eqref{eq:AtLeastOneExp} for $\VarRecoOpExp(\hat{\oneCe}) \in \setRecoSym^{\policy}(\varFirst, \hat{\oneCe})$ and Const.\eqref{eq:AtLeastOneRHSExp} for $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \hat{\oneCe})$. 
Attempt to solve \ref{MasterExp}+Consts.\eqref{eq:adjacent} to get new candidate scenario $\hat{\oneCe}$; \IF{\ref{MasterExp} is feasible} \STATE $\hat{\bigsCe}(\varFirst) \gets \hat{\bigsCe}(\varFirst) \cup \hat{\oneCe}$ \STATE Go to \textbf{Step 1} \ENDIF \ENDIF \STATE \textbf{Step 3: } $Z_{Q}^{\pi,\star}(\varFirst) \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$; Return $\oneCe^{\star}$ and $Z_{Q}^{\pi,\star}(\varFirst)$ \caption{FBSA\_ME: A Feasibility-based Solution Algorithm for \ref{MasterExp}} \label{Alg:EnhancedHybrid} \end{algorithmic} \end{algorithm} \subsection{Hybrid solution algorithms} The main change in the algorithms presented next, with respect to the feasibility-based algorithms, is the possibility of transitioning from the feasibility-seeking master problems to \ref{objMPOpt} after TR iterations. The goal is to obtain a lower bound on the objective value of the second stage, ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirst)$, that can be compared to ${\bar{\varCov}}^{\pi}_{Q}(\varFirst)$ to prove optimality. We refer to these hybrid algorithms as HSA\_MB and HSA\_ME. The former has \ref{MasterBasic} as master problem, whereas the latter has \ref{MasterExp}. For both hybrid algorithms we solve the recourse problem $\Rexp(\varFirst, \hat{\oneCe})$ in the transitory graph, but in the case of HSA\_MB, whenever an attempt to find a failure scenario is made, it is done with respect to \ref{MasterBasic}+Consts.\eqref{eq:adjacent}. The reason for this approach is to accumulate the recourse solutions given by $\Rexp(\varFirst, \hat{\oneCe})$ and use them to solve \ref{objMPOpt} when the number of iterations exceeds TR. Algorithm \ref{Alg:Hybrid} presents the changes required in Algorithms \ref{Alg:BasicHybrid} and \ref{Alg:EnhancedHybrid} to obtain HSA\_MB and HSA\_ME. In particular, Algorithm \ref{Alg:Hybrid} should be included in lines 12 and 14 of Algorithm \ref{Alg:BasicHybrid} and Algorithm \ref{Alg:EnhancedHybrid}, respectively.
\begin{algorithm}[tbp] \algorithmicrequire{ Parameter TR indicating the iteration threshold as of which \ref{objMPOpt} is solved} \begin{algorithmic}[1] \IF{$i \ge$ TR} \STATE Create one Const.\eqref{eq:solsTwo} per matching in $\mathcal{M}_{\text{exp}}$ and solve $\ref{objMPOpt}$ to obtain ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirst)$ and failure scenario $\hat{\oneCe}$ \IF{${\bunderline{\varCov}}^{\pi}_{Q}(\varFirst) = {\bar{\varCov}}^{\pi}_{Q}(\varFirst)$} \STATE $\oneCe^{\star} \gets \hat{\oneCe}$, Go to \textbf{Step 3} in Algorithm \ref{Alg:BasicHybrid} or Algorithm \ref{Alg:EnhancedHybrid} \ELSE \STATE Go to \textbf{Step 1} in Algorithm \ref{Alg:BasicHybrid} or Algorithm \ref{Alg:EnhancedHybrid} \ENDIF \ENDIF \caption{Additional steps for hybrid solution algorithms, HSA\_MB and HSA\_ME} \label{Alg:Hybrid} \end{algorithmic} \end{algorithm} \subsection{Heuristics} We now discuss the heuristic (Algorithm \ref{Alg:Heuristics}) used in the feasibility-based and hybrid algorithms. Algorithm \ref{Alg:Heuristics} requires as input a set of matchings $W$. If a failure scenario $\oneCe^{\prime}$ is returned, it satisfies either \ref{MasterBasic}+Consts.\eqref{eq:adjacent} or \ref{MasterExp}+Consts.\eqref{eq:adjacent}, accordingly. The heuristic performs the adjacent-failure separation and the two single-vertex-arc separation cases presented in Section \ref{DominatingScenarios}. A function \textbf{UniqueElms}($W$) returns the set of unique vertices and arcs among all matchings in $W$, which are then kept in $E$. With some abuse of notation, we say that each vertex/arc $\text{e}_{n} \in E$ with $n = 1,\ldots,\lvert E \rvert$ has an associated boolean variable $s_{n}$ and a calculated weight $\text{w}_{n}$. The boolean variable $s_{n}$ becomes \textbf{true} if element $\text{e}_{n}$ has been checked, i.e., either it has been proposed for failure in $\oneCe^{\prime}$ or it has been proven that $\text{e}_{n}$ is dominated according to Section \ref{DominatingScenarios}.
Function $\text{\textbf{Weight}}(W)$ returns the weight $\text{w}_{n}$ corresponding to the number of times element $\text{e}_{n}$ is repeated in the set of matchings $W$. Moreover, a function $\text{\textbf{IsNDD}}(\Elem^\star_{n})$ determines whether element $\Elem^\star_{n}$ is a singleton donor. Within the main loop, a list of unique and non-checked elements $\mathcal{E}$ is created, as long as the vertex and arc budgets, $r^{\text{v}}$ and $r^{\text{a}}$, are not exceeded, respectively. If $\mathcal{E}$ turns out to have no elements, the algorithm ends at line 10. Otherwise, the elements in $\mathcal{E}$ are sorted in non-increasing order of their weights, and among the ones with the highest weight, an element $\Elem^\star_{n}$ is selected randomly. In line 14, the single-vertex-arc separation described in Section \ref{vxtarcsep} is performed as long as $\Elem^\star_{n}$ is a pair and at least one arc or at least two vertices have already been proposed for failure. The reason we check that $\Elem^\star_{n}$ is not a singleton donor is that, by definition, a singleton donor does not belong to a cycle, and its removal as a vertex from the transitory or second-stage compatibility graph causes the failure of all chains triggered by it. Between lines 15 and 24 we perform the two single-vertex-arc separation procedures using the same notation as in Section \ref{vxtarcsep}. A boolean variable \textit{ans} remains \textbf{true} if $\Elem^\star_{n}$ is a singleton donor or a non-proven dominated vertex/arc. Element $\Elem^\star_{n}$ is labeled as checked by setting $s_{n} = \text{\textbf{true}}$. Then, if \textit{ans} = \textbf{true}, it is checked whether $\Elem^\star_{n}$ is a vertex or an arc, and the proposed failure scenario $\bar{\bm{\gamma}}$ is updated accordingly.
If element $\Elem^\star_{n}$ is a vertex, in line 28 the arcs adjacent to $\Elem^\star_{n}$ are labeled as checked, so that $\bar{\bm{\gamma}}$ satisfies Constraints \eqref{eq:adjacent}. We say that scenario $\bar{\bm{\gamma}}$ covers all matchings in $W$ if, for every matching in that set, at least $H(\hat{\oneCe})$ or $H_{\text{e}}(\hat{\oneCe})$, $\hat{\oneCe} \in \hat{\bigsCe}(\varFirst)$, of its vertices/arcs have been proposed for failure in $\bar{\bm{\gamma}}$. Recall that every matching in $W$ is associated with one constraint in either (\ref{eq:AtLeastOneX}-\ref{eq:AtLeastOne}) or (\ref{eq:AtLeastOneXExp}-\ref{eq:AtLeastOneRHSExp}). If $\bar{\bm{\gamma}}$ covers all matchings in $W$, then \textit{cover} = \textbf{true}. The heuristic is greedy in the sense that, even when \textit{cover} = \textbf{true}, it will attempt to use up the vertex and arc budgets, as checked in line 34. If another vertex/arc can still be proposed for failure, then a new iteration of the While loop starts in line 3. Thus, the heuristic ends at either line 10 or line 35.
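The core greedy step of Algorithm \ref{Alg:Heuristics} (weight each unique vertex/arc by the number of matchings it appears in, then repeatedly pick a heaviest unchecked element) can be sketched as follows. The budget split between vertices and arcs, the random tie-breaking, and the dominance checks of Section \ref{vxtarcsep} are omitted for brevity, and all names are hypothetical:

```python
from collections import Counter

def greedy_scenario(matchings, budget):
    """Greedily propose up to `budget` elements to fail, preferring
    vertices/arcs appearing in the most matchings (UniqueElms + Weight).
    `checked` mirrors the s_n flags of the algorithm."""
    weights = Counter(e for m in matchings for e in m)  # element frequencies
    scenario, checked = set(), set()
    while len(scenario) < budget:
        candidates = [e for e in weights if e not in checked]
        if not candidates:
            break                      # no unchecked element left
        best = max(candidates, key=lambda e: weights[e])
        checked.add(best)
        scenario.add(best)
    return scenario

# 'v5' appears in all three matchings, so it is proposed first.
mats = [{"v5", "a1,2"}, {"v5", "v7"}, {"v5", "a3,4"}]
```

In the full algorithm, an element would only enter `scenario` after passing the single-vertex-arc separation test; here every heaviest element is accepted directly.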
\begin{algorithm}[tbp] \algorithmicrequire{ A set of matchings $W$}\\ \algorithmicensure{ Failure scenario $\bar{\bm{\gamma}}$ and a boolean variable \textit{cover}\ indicating whether $\bar{\bm{\gamma}}$ covers matchings in $W$} \begin{algorithmic}[1] \item[] \textbf{Step 0: } \STATE \textit{cover} $=$ \textbf{false} ; $E \gets \textbf{UniqueElms}(W)$ \item[] \textbf{Step 1: } \WHILE{\textit{true}} \STATE $\mathcal{E} = \emptyset$ \FOR{$n = 1,..., \lvert E \rvert$} \IF{$\text{e}_{n} \in A$ and $s_{n} =$ \textbf{false} and $\sum_{(\vertexa,\vertexb) \in A} \bar{\bm{\gamma}}_{\vertexa\vertexb}^{\text{a}} < r^{\text{a}}$} \STATE $\text{w}_{n} = \text{\textbf{Weight}}(W)$ ; $\mathcal{E} \gets \mathcal{E} \cup \{\text{e}_{n}\}$ \ELSIF{$\text{e}_{n} \in V$ and $s_{n} =$ \textbf{false} and $\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} < r^{\text{v}}$} \STATE $\text{w}_{n} = \text{\textbf{Weight}}(W)$ ; $\mathcal{E} \gets \mathcal{E} \cup \{\text{e}_{n}\}$ \ENDIF \ENDFOR \IF{$\mathcal{E} = \emptyset$} \STATE Go to \textbf{Step 2} \ENDIF \STATE Sort $\mathcal{E}$ in non-increasing order of values s.t.
$\text{w}_{1} \ge \text{w}_{2} \ge \dots \ge \text{w}_{\lvert \mathcal{E} \rvert}$ \STATE Select $\Elem^\star_{n} \in \mathcal{E}$ randomly among elements whose weight equals $\text{w}_{1}$ \STATE \textit{ans} = \textbf{true} \item[] \textbf{---Start single-vertex-arc separation \ref{vxtarcsep}---} \IF{$(\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} + \sum_{(\vertexa,\vertexb) \in A} \bar{\bm{\gamma}}_{\vertexa\vertexb}^{\text{a}}) \ge 1$ and $\text{\textbf{IsNDD}}(\Elem^\star_{n}) = \textbf{false}$ and $(\sum_{(\vertexa,\vertexb) \in A} \bar{\bm{\gamma}}_{\vertexa\vertexb}^{\text{a}} \ge 1 \text{ or } \sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} \ge 2)$} \STATE $V(\tilde{\oneCe}^{\text{v}}) \gets V(\bar{\bm{\gamma}}^{\text{v}})$; $A(\tilde{\oneCe}^{\text{a}}) \gets A(\bar{\bm{\gamma}}^{\text{a}})$ \STATE \textit{ans} = \textbf{false} \IF{$\Elem^\star_{n} \in V$} \STATE $\bar{v} \gets \Elem^\star_{n}$; $C^{\policy, \bar{\vertexb}}(\varFirst) \gets$ Check Case \ref{CaseOne} \IF{$C^{\policy, \bar{\vertexb}}(\varFirst) \neq \emptyset$} \STATE \textit{ans} = \textbf{true} \ENDIF \ELSE \STATE $\bar{a} \gets \Elem^\star_{n}$; $C^{\policy,\bar{a}}(\varFirst) \gets $ Check Case \ref{CaseTwo} \IF{$C^{\policy,\bar{a}}(\varFirst) \neq \emptyset$} \STATE \textit{ans} = \textbf{true} \ENDIF \ENDIF \ENDIF \item[] \textbf{---End separation---} \STATE $s_{n} = \text{\textbf{true}}$ \IF{\textit{ans} = \textbf{true}} \IF{$\Elem^\star_{n} \in V $} \STATE $\bar{\bm{\gamma}}^{\text{v}}_{\Elem^\star_{n}} = 1$; $s_{n} =$ \textbf{true} $\forall \ \text{e}_{n} \in \delta^{+}(\Elemstar_{n}) \cup \delta^{-}(\Elemstar_{n})$ \ELSE \STATE $\bar{\bm{\gamma}}^{\text{a}}_{\Elem^\star_{n}} = 1$ \ENDIF \ENDIF \IF{$\bar{\bm{\gamma}} \text{ covers all matchings in } W$} \STATE \textit{cover} = \textbf{true} \ENDIF \IF{\textit{cover} = \textbf{true}} \IF{$\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} = r^{\text{v}}$ and $\sum_{(\vertexa,\vertexb) \in A} \bar{\bm{\gamma}}_{\vertexa\vertexb}^{\text{a}} = 
r^{\text{a}}$} \STATE Go to \textbf{Step 2} \ENDIF \ENDIF \ENDWHILE \STATE \textbf{Step 2: } Return $\bar{\bm{\gamma}}$ and \textit{cover} \caption{Heuristics} \label{Alg:Heuristics} \end{algorithmic} \end{algorithm} \section{Computational Experiments} \label{Experiments} In this section we present the results of our computational experiments. We use the same instances tested in \citep{Blom2021, Carvalho2020}. Although there are three sets, with 20, 50 and 100 vertices each, our analysis focuses on the 100-vertex set for the performance comparison under homogeneous failure and on the 50- and 100-vertex sets for the policy-based analysis under the non-homogeneous case. Each instance set contains 30 instances. In the first part of this section, we compare the efficiency of our solution approaches with that of the state-of-the-art algorithm addressing the full-recourse policy under homogeneous failure, Benders-PICEF, proposed in \citep{Blom2021}. In the second part, we analyze the performance and practical impact of our approaches under non-homogeneous failure for the two policies presented in Section \ref{sec:RecoursePolicies}. All our implementations, including the state-of-the-art algorithm we compare against, are coded in C++ and run on a machine with Debian GNU/Linux as operating system and a 3.60GHz Intel(R) Core(TM) processor. We use CPLEX 12.10 as the LP/MIP solver across all algorithms. A time limit of one hour is given to every run. The TR value for the HSA\_MB and HSA\_ME algorithms was set to 150 iterations across all runs. \subsection{Benchmark under homogeneous failure} Current state-of-the-art algorithms only address homogeneous failure \citep{Carvalho2020, Blom2021}, where a worst-case scenario can be found by considering vertices only. In this section, we compare our solution algorithms, which are designed for non-homogeneous failure, with the Benders-type decomposition proposed by \citet{Blom2021} for the full-recourse policy, referred to as Benders-PICEF. 
Benders-PICEF is a decomposition whose master problem and sub-problem (recourse problem) are hybrid formulations, adapting aspects of the position-indexed formulation proposed by \citet{Dickerson2016} to model chains and aspects of the cycle formulation to model cycles. Unlike the implementation in \citep{Blom2021}, our implementation of Benders-PICEF generates and stores in memory the cycles of the transitory graph given a first-stage solution, which we use to build their position-indexed recourse formulation. Figure \ref{fig:Performance} shows the performance profile of five algorithms, one of them being Benders-PICEF. The other four correspond to the solution algorithms proposed in this work: FBSA\_MB, FBSA\_ME, HSA\_MB and HSA\_ME. Recall that FBSA\_MB and FBSA\_ME do not transition to their optimization counterpart, \ref{objMPOpt}. Among the two, Figure \ref{fig:Performance} shows that solving the recourse problem in the transitory graph pays off for FBSA\_ME. Although Benders-PICEF solves more instances overall than FBSA\_MB and FBSA\_ME, FBSA\_ME noticeably outperforms Benders-PICEF when the maximum length of cycles is four, that of chains is three, and up to three vertices are allowed to fail. In the same settings, FBSA\_MB is comparable to Benders-PICEF. However, as soon as the feasibility-seeking master problems transition to their optimization counterpart, HSA\_MB and HSA\_ME are consistently ahead of all other algorithms across all settings. As we show shortly, in most cases HSA\_MB and HSA\_ME need only a small percentage of iterations in the optimization version of the second-stage master problem to converge. Benders-PICEF is substantially fast when cycles and chains of size up to three and four, respectively, are considered. Increases in the cycle length and in the failure budget, however, make it challenging for Benders-PICEF to keep up. 
\begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{Figures/performance_profile.pdf} \caption{Performance profile for multiple $r^{\text{v}}$ values and $r^{\text{a}} = 0$ under full recourse} \label{fig:Performance} \end{figure} Table \ref{tab:performance} summarizes the behaviour of FBSA\_ME, HSA\_ME and Benders-PICEF. We start by explaining the lower-level columns in the table that have not yet been defined. Starting on the left-hand side of the table, column total corresponds to the average total time spent optimizing an instance. Columns Alg\ref{Alg:Heuristics}, \ref{MasterExp}, \ref{RecoP:Objexp}, CG and \ref{objMPOpt} correspond to the average percentage of time that a run spent on the following items in the second stage, in that order: time spent by the heuristic in Algorithm \ref{Alg:Heuristics} while attempting to find a feasible scenario for \ref{MasterExp}; time solving \ref{MasterExp} as a MIP when the heuristic failed to find a feasible scenario; time solving the recourse problem \ref{RecoP:Objexp} as a MIP when the CG algorithm failed to find an optimal recourse solution; time spent by the CG algorithm attempting to solve the recourse problem \ref{RecoP:Objexp} to optimality; and lastly, time spent solving \ref{objMPOpt} as a MIP when more than TR = 150 iterations passed without \ref{MasterExp} becoming infeasible. The remaining percentage of time, completing 100\% of the total, corresponds to solving the first-stage formulation and finding cycles/chains for the transitory graph. Continuing from left to right, there are columns 1stS and 2ndS. 1stS indicates the average total number of first-stage decisions that were required before finding the robust solution. 2ndS is the average total number of iterations spent solving the second-stage problem over all first-stage decisions. 
That is, column 2ndS divided by column 1stS indicates the average number of iterations needed per first-stage iteration to solve the second-stage problem. Then, columns Alg\ref{Alg:Heuristics}-true, \ref{MasterExp}, \ref{RecoP:Objexp}, CG-true and \ref{objMPOpt} indicate the average total number of iterations spent on each of those processes. In particular, column Alg\ref{Alg:Heuristics}-true indicates the average number of iterations in which the heuristic successfully found a feasible failure scenario. Its corresponding column on the average percentage of total time does include all iterations spent by the heuristic (those in column 2ndS), successful or not. Similarly, column CG-true indicates the average total number of iterations in which the CG algorithm found an optimal recourse solution. Also, note that column Time-CG includes the total time taken by the CG algorithm regardless of its result. To obtain the average number of iterations per first-stage iteration, the previous columns should be divided by column 1stS. The first observation is that the CG algorithm successfully finds optimal recourse solutions in most cases, and it is responsible for most of the average total time of FBSA\_ME and HSA\_ME. On the other hand, the average total number of iterations needed by FBSA\_ME to converge was significantly higher than that needed by HSA\_ME, and yet the average time per second-stage iteration with respect to the total time (2ndS/total) is efficient. FBSA\_ME clearly generates scenarios that \ref{objMPOpt} would not explore, due to the expressiveness of the cycle-and-chain decision variables and the minimization objective of the latter, both properties that FBSA\_MB and FBSA\_ME lack. 
However, also for the full-recourse policy, \citet{Blom2021} tested a master problem analogous to \ref{objMPOpt} and showed that, due to the large number of cycles and chains in the formulation, its scalability is limited as the size of cycles and chains grows. In fact, when looking at the average number of iterations spent on \ref{objMPOpt} and its corresponding percentage of the total time, and then comparing those two columns for Alg\ref{Alg:Heuristics} and \ref{MasterExp}, we see that (i) the heuristic, even at close to 1700 iterations, accounts for only about 26\% of the time, and (ii) even when the heuristic fails and \ref{MasterExp} is solved as a MIP, around 1000 iterations are needed to account for about 39\% of the total time, while \ref{objMPOpt} needs fewer than 200 iterations to account for about the same percentage. Note that a lower bound need not be obtained through \ref{objMPOpt}. For instance, we could have obtained it through the master problem of Benders-PICEF (MPBP), whose iterations are cheaper in time than those of \ref{objMPOpt}. Observe that in some cases the average total number of second-stage iterations spent by HSA\_ME is a third of those spent by FBSA\_ME, indicating that, when transitioning to \ref{objMPOpt} after a small number of iterations, the hybrid algorithms are able to converge quickly in all the tested settings. This short transition is seen when dividing the average total number of iterations spent on the second stage (2ndS) by the ones spent on the first stage (1stS). Recall that iterations above TR = 150 are the ones spent solving \ref{objMPOpt} for a given first-stage decision. Thus, the feasibility-seeking master problems are able to find near-optimal failure scenarios within the first hundreds of iterations but, in the absence of a lower bound, may require thousands to reach infeasibility and thus prove optimality. 
\begin{table}[tbp] \caption{Performance comparison of FBSA\_ME, HSA\_ME and Benders-PICEF under homogeneous failure and full recourse for $\lvert P \rvert = 100$. MPBP and RPBP are the master problem and recourse problem presented in \citep{Blom2021}, respectively. A column is left empty if it does not apply. Data includes all runs.} \label{tab:performance} \begin{adjustbox}{width=1\textwidth} \centering \input{99_table_performance} \end{adjustbox} \end{table} \subsection{Analysis under non-homogeneous failure} In this part of the analysis, we focus on understanding both the scalability of the solution algorithms under non-homogeneous failure and the impact of the two studied recourse policies on the total number of pairs that can be re-arranged into new cycles and chains, as well as the percentage of those pairs that correspond to highly-sensitized patients. Pairs in the instances published by \cite{Carvalho2020} have an associated panel reactive antibody (PRA) value, which determines how likely it is that a patient will reject a potential donor. The higher this value, the more sensitized the patient and the less likely the patient is to find a compatible donor. Typically, a patient with a PRA greater than or equal to 90\% is considered highly sensitized. Table \ref{tab:policiesk3} summarizes the results when the solution algorithm used is HSA\_MB and the longest cycle has three exchanges. A similar table for cycle length four is presented in Appendix \ref{CycleLengthFourNonHomoFail}. In addition to the columns described for Table \ref{tab:performance}, there are three new ones: $r^{\text{a}}$, HSP(\%) and DomS. $r^{\text{a}}$ is the average number of arcs that fail in an instance set, corresponding to either 5\% or 10\% of the arcs in the deterministic solution of an instance; thus, this column takes two distinct values. Column $r^{\text{v}}$ corresponds, as before, to the maximum number of failed vertices. 
HSP(\%) refers to the average percentage of highly-sensitized patients (PRA $>$ 90\%) that can be re-arranged into new cycles and chains in the second stage, with respect to the total number of highly-sensitized patients in the instance. DomS is the average total number of dominated scenarios found by the single-vertex-arc separation procedure described in Section \ref{vxtarcsep}. The average total number of dominated scenarios for the instances in Table \ref{tab:performance} was negligible and is therefore not presented. The data in this table includes all runs, optimal or not, except for column HSP(\%), where only instances solved to optimality are included. In terms of performance, the non-homogeneous case is more difficult to solve, judging by the average total time (total), the average total number of first-stage iterations (1stS), and the average percentage of the total time spent solving the optimality problem \ref{objMPOpt} after 150 iterations (\ref{objMPOpt}(\%)). The average number of dominated scenarios is particularly high for the first-stage-only recourse. This behaviour may be explained by the fact that, when a failure occurs, the transitory graph under this policy is smaller and thus more susceptible to having vertices and arcs that can no longer be part of cycles and chains. In terms of the robust objective, all instances that could be solved optimally under both policies attained the same objective value. This interesting fact indicates that, for the tested instances, there exists a set of transplants that allows pairs to be recovered among themselves, and these pairs are as many as those that can be recovered by the full-recourse policy. The percentage of highly-sensitized patients that can be recovered varies, with an average of approximately 30\%. It seems to increase for larger instances and when the recourse policy considered is the first-stage-only recourse. 
The length of chains does seem to have a positive effect in some settings, but the effect is not general. Naturally, the fact that there are runs for which the optimal robust value is unknown indicates that the analysis is not yet conclusive. \begin{table}[tbp] \caption{Policies comparison for $K = 3$ under HSA\_MB and non-homogeneous failure.} \label{tab:policiesk3} \begin{adjustbox}{width=\textwidth} \centering \input{policies_table_K3_aBudget_5_10} \end{adjustbox} \end{table} \section{Conclusion} \label{sec:Conclusion} We present a general solution framework for KPDPs to find robust matchings that, after failure, can be repaired through a predefined recourse policy. We achieve this goal by solving a two-stage RO problem. We presented two decompositions whose master problems are feasibility-seeking problems, requiring only a polynomial number of decision variables, a desirable property for scalability purposes. This advantage comes at the price of not being able to compute a lower bound easily. To overcome this drawback, we proposed separating a family of dominated scenarios using a heuristic that also attempts to find a feasible failure scenario for the master problems. Our final solution algorithms solve the two-stage robust problem exactly. We compared our solution methods with the state of the art under homogeneous failure, since our work is the first attempt to solve the two-stage robust problem in kidney exchange under non-homogeneous failure. Some of our algorithms are capable of outperforming the state of the art in some settings when no lower bound is known. When allowing our feasibility-based solution algorithms to transition to their optimization counterpart, they consistently outperform the state-of-the-art algorithm. On the other hand, under non-homogeneous failure, we compared the two policies studied in this paper in terms of efficiency, and also in terms of their ability to re-assign highly-sensitized patients to new cycles and chains. 
We found that although the percentage of highly-sensitized patients that can be re-arranged varies across runs, in most cases at least 30\% of highly-sensitized patients are recovered in the second stage. \section{Robust formulation for full recourse} \label{sec:fullROFormulation} Our solution approach supports the use of any MIP formulation that has the structure shown in Section \ref{RobustModels}. For the results presented in this paper, we adapt a variant, presented by \cite{Carvalho2020}, of the position-indexed cycle edge formulation (PICEF) proposed by \cite{Dickerson2016}. Cycles are modeled through cycle variables $z_c \ \forall c \in \mathcal{C}_{\cycleCap}$ for first-stage decisions and through $\varRecoZ_\match^{\oneCe} \ \forall c \in \mathcal{C}_{\cycleCap}$ for recourse cycles under scenario $\bm{\oneCeS} \in \Gamma$. Chains, on the other hand, are modeled through first-stage decision variables $\delta_{\vertexa\vertexb\ell}$, indexed by arc $(\vertexa,\vertexb) \in A$ and the feasible position $\ell \in \mathcal{L}(\vertexa, \vertexb)$ of that arc within a chain. The set $\mathcal{L}(\vertexa, \vertexb) \subseteq \mathcal{L} = \{1,...,L\}$ is the set of positions at which that arc can be reached from some singleton donor through a simple path with $\ell \le L$ arcs. For vertices $u \in N$, the set of possible arc positions becomes $\mathcal{L}(\vertexa, \vertexb) = \{1\}$, since singleton donors always start a chain. To identify $\mathcal{L}(\vertexa, \vertexb)$ for the remaining arcs, a shortest-path-based search can be performed \citep{Dickerson2016}. Likewise, recourse chain decision variables for every scenario $\bm{\oneCeS} \in \Gamma$ are denoted by $\delta^{\oneCe}_{\vertexa\vertexb\ell}$. 
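The shortest-path-based search for the sets $\mathcal{L}(\vertexa, \vertexb)$ can be sketched as follows. This is a Python illustration of the construction in \citep{Dickerson2016}; the adjacency-dictionary encoding and all names are our own assumptions:

```python
from collections import deque

def arc_positions(V, A, ndds, L):
    """Compute a position set for every arc, PICEF-style.

    An arc (u, v) leaving a singleton donor u only gets position 1;
    any other arc may occupy positions d(u)+1, ..., L, where d(u) is
    the length of a shortest path from any singleton donor to u.
    """
    succ = {u: [] for u in V}
    for u, v in A:
        succ[u].append(v)
    # multi-source BFS from all singleton donors at once
    dist = {n: 0 for n in ndds}
    queue = deque(ndds)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    positions = {}
    for u, v in A:
        if u in ndds:
            positions[(u, v)] = [1]            # donor arcs start chains
        elif u in dist:                        # u reachable from a donor
            positions[(u, v)] = list(range(dist[u] + 1, L + 1))
    return positions
```

Arcs whose tail is unreachable from every singleton donor receive no positions and hence never appear in a chain.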
A binary decision variable $t^{\oneCe}_{\vertexb}$ is also defined for every pair $v \in P$ and scenario $\bm{\oneCeS} \in \Gamma$ to identify the pairs that are selected both in the first stage and in the second stage under scenario $\bm{\oneCeS} \in \Gamma$. Moreover, we denote by $\cycleSet^{\vertexb}$ and $\cycleSet^{\vertexa\vertexb}$ the set of feasible cycles including vertex $v \in P$ and the set of feasible cycles including arc $(\vertexa,\vertexb) \in A$, respectively. \label{FullFirstSFormulation} \begin{subequations} \begin{align} \max \qquad Z \label{objmaxRO}\\ Z - \sum_{v \in P} t^{\oneCe}_{\vertexb} \le 0 && \bm{\oneCeS} \in \Gamma \label{selidxF}\\ t^{\oneCe}_{\vertexb} - \sum_{c \in \cycleSet^{\vertexb}} z_c - \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta_{\vertexa\vertexb\ell} \le 0 && \bm{\oneCeS} \in \Gamma, v \in P \label{FirstSSol}\\ t^{\oneCe}_{\vertexb} - \sum_{c \in \cycleSet^{\vertexb}} \varRecoZ_\match^{\oneCe} - \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta^{\oneCe}_{\vertexa\vertexb\ell} \le 0 && \bm{\oneCeS} \in \Gamma, v \in P \label{SecondSSol}\\ \sum_{u: (\vertexb,\vertexa) \in A}\delta^{\oneCe}_{\vertexb\vertexa 1} \le 1 - \gamma^{\text{v}}_{v} && \bm{\oneCeS} \in \Gamma, v \in N \label{FailedNDD}\\ \sum_{c \in \cycleSet^{\vertexb}} \varRecoZ_\match^{\oneCe} + \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta^{\oneCe}_{\vertexa\vertexb\ell} \le 1 - \gamma^{\text{v}}_{v} && \bm{\oneCeS} \in \Gamma, v \in P \label{FailedPair}\\ \sum_{c \in \cycleSet^{\vertexa\vertexb}} \varRecoZ_\match^{\oneCe} + \sum_{\ell \in \mathcal{L}(\vertexa, \vertexb)} \delta^{\oneCe}_{\vertexa\vertexb\ell} \le 1 - \gamma^{\text{a}}_{\vertexa\vertexb} && \bm{\oneCeS} \in \Gamma, (\vertexa,\vertexb) \in A \label{FailedArc}\\ \sum_{u:(\vertexb,\vertexa) \in \arcSet_{\ell+1}} \delta^{\oneCe}_{\vertexb\vertexa(\ell + 1)} - \sum_{u:(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta^{\oneCe}_{\vertexa\vertexb\ell} \le 0 && \bm{\oneCeS} \in \Gamma, v \in P, \ell \in \mathcal{L} \setminus \{L\} \label{ChainPos}\\ \sum_{u: (\vertexb,\vertexa) \in A}\delta_{\vertexb\vertexa 1} \le 1 && v \in N \label{FailedNDDf}\\ \sum_{c \in \cycleSet^{\vertexb}} z_c + \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta_{\vertexa\vertexb\ell} \le 1 && v \in P\\ \sum_{u:(\vertexb,\vertexa) \in \arcSet_{\ell+1}} \delta_{\vertexb\vertexa(\ell + 1)} - \sum_{u:(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta_{\vertexa\vertexb\ell} \le 0 && v \in P, \ell \in \mathcal{L} \setminus \{L\} \label{ChainPosf}\\ t^{\oneCe}_{\vertexb} \ge 0 && \bm{\oneCeS} \in \Gamma, v \in P\\ z_c, \varRecoZ_\match^{\oneCe} \in \{0,1\} && \bm{\oneCeS} \in \Gamma, c \in \mathcal{C}_{\cycleCap}\\ \delta_{\vertexa\vertexb\ell}, \delta^{\oneCe}_{\vertexa\vertexb\ell} \in \{0,1\} && \bm{\oneCeS} \in \Gamma, (\vertexa,\vertexb) \in A, \ell \in \mathcal{L}(\vertexa, \vertexb) \label{varNature} \end{align} \label{for:PICEF_RO} \end{subequations} Constraints \eqref{selidxF} determine the scenario that binds the number of patients receiving a transplant in both stages; that scenario is the worst-case scenario. The objective \eqref{objmaxRO} is then equivalent to the maximum number of patients from the first stage that can be recovered in the second stage under the worst-case scenario. Constraints \eqref{FirstSSol} and \eqref{SecondSSol} ensure that a pair $v$ is counted as recovered in the objective only if it is selected in the first-stage solution (Constraints \eqref{FirstSSol}) and also selected in the second stage under scenario $\bm{\oneCeS} \in \Gamma$ (Constraints \eqref{SecondSSol}). Constraints \eqref{FailedNDD} to \eqref{FailedArc} guarantee that the solution obtained for every scenario $\bm{\oneCeS} \in \Gamma$ is vertex disjoint and only uses non-failed vertices/arcs. 
Particularly, Constraints \eqref{FailedNDD} ensure that if a singleton donor fails under some scenario $\bm{\oneCeS} \in \Gamma$, i.e., $\gamma^{\text{v}}_{v} = 1$, then its outgoing arcs at position one cannot be used to trigger a chain, whereas if the vertex associated with the singleton donor does not fail, it can donate to at most one patient in a pair. Similarly, Constraints \eqref{FailedPair} guarantee that if a pair fails, then it cannot be present in either a cycle or a chain. Constraints \eqref{FailedArc} ensure that when an arc $(\vertexa,\vertexb) \in A$ fails under some scenario $\bm{\oneCeS} \in \Gamma$, it is not involved in either a cycle or a chain. Constraints \eqref{ChainPos} ensure the continuity of a chain by selecting arcs in consecutive positions. Constraints \eqref{FailedNDDf} to \eqref{ChainPosf} select a solution corresponding to a matching in the first stage. The remaining constraints correspond to the nature of the decision variables. \section{Robust formulation for first-stage-only recourse} \label{sec:fisrtOnlyROFormulation} In addition to the constraints defining formulation \eqref{for:PICEF_RO}, a new constraint is introduced to limit the recourse solutions under every scenario to include only vertices that were selected in the first stage. 
\begin{subequations} \label{FirstStageOnlyFormulation} \begin{align} \max \qquad &Z \\ \ref{selidxF} &-\ref{varNature}&\\ \sum_{c \in \cycleSet^{\vertexb}} \varRecoZ_\match^{\oneCe} + \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta^{\oneCe}_{\vertexa\vertexb\ell} &\le \sum_{c \in \cycleSet^{\vertexb}} z_c + \sum_{\ell \in \mathcal{L}} \sum_{(\vertexa,\vertexb) \in \arcSet_{\ell}} \delta_{\vertexa\vertexb\ell} & \bm{\oneCeS} \in \Gamma, v \in P \end{align} \end{subequations} \section{The recourse problem} \label{Alg:CyChrecourse} We present in this section two algorithms to enumerate the feasible cycles and chains in $\mathcal{C}_{\cycleCap}^{\pi}(\varFirstPlain)$ and $\mathcal{C}_{\chainCap}^{\pi}(\varFirstPlain)$, respectively, which lead to a transitory graph $D^{\pi}(\varFirstPlain)$ and a realization of the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ under scenario $\bm{\oneCeS} \in \bigsCe(\varFirst)$. \subsection{Cycles and chains for the full-recourse policy} We start by defining some notation. Let $\pairSet(\varFirstPlain) = V(\varFirstPlain) \cap P$ be an auxiliary set corresponding to the pairs selected in the first-stage solution. Moreover, we denote by $\mathcal{C}_{K}^{u}$ the set of feasible \textit{cycles} that include vertex $u \in \pairSet(\varFirstPlain)$. Lastly, consider $R$ as an auxiliary vertex set. Algorithm \ref{Alg:FullRecoCycleSet} iteratively builds $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$. At the start, $R$ and $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$ are empty. Within the While loop, a pair $u$ from the first-stage solution is selected. Then, in line 4, a depth-first search procedure starting from vertex $u$ is used to find $\mathcal{C}_{K}^{u}$ in graph $\tilde{D} = (V \setminus R, A)$. 
Note that if $R \neq \emptyset$, the new cycles are found in a graph from which the previously selected vertices (the ones in set $R$) are removed, since otherwise cycles already in $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$ could be found again. The new cycles are then added to $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$ and vertex $u$ is removed from $\pairSet(\varFirstPlain)$. When no more vertices are left in that set, the algorithm ends. \begin{algorithm}[tbp] \algorithmicrequire { Set with pairs from the first-stage solution, $\pairSet(\varFirstPlain)$}\\ \algorithmicensure { Set $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$} \begin{algorithmic}[1] \item[] \textbf{Step 0: } \STATE $R = \emptyset$; $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain) = \emptyset$ \\ \item[] \textbf{Step 1: } \WHILE{$\pairSet(\varFirstPlain) \neq \emptyset$} \STATE \text{Select vertex} $u \in \pairSet(\varFirstPlain) $ \STATE \text{Find} $\mathcal{C}_{K}^{u}$ \text{in graph } $\tilde{D} = (V \setminus R, A)$ \text{from vertex } $u$ \STATE $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain) \gets \mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain) \cup \mathcal{C}_{K}^{u}$ \STATE $\pairSet(\varFirstPlain) \gets \pairSet(\varFirstPlain) \setminus \{u\}$ \STATE $R \gets R \cup \{u\}$ \ENDWHILE \item[] \textbf{Step 2: } \STATE Return $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$ \caption{Obtaining $\mathcal{C}_{\cycleCap}^{\pi}(\varFirstPlain)$ with $\pi = \text{Full}$} \label{Alg:FullRecoCycleSet} \end{algorithmic} \end{algorithm} The correctness of Algorithm \ref{Alg:FullRecoCycleSet} follows from the fact that only cycles/chains with at least one vertex from the first stage contribute to the weight of a recourse solution. Thus, it suffices to find such cycles for every vertex in $\pairSet(\varFirstPlain)$. A similar reasoning is used when finding chains. To this end, we introduce additional notation. 
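Before moving to chains, the enumeration loop of Algorithm \ref{Alg:FullRecoCycleSet} can be sketched in a few lines of Python (an illustration only; the depth-first search routine and all names are ours):

```python
def cycles_through(u, succ, removed, K):
    """Depth-first search for all simple cycles with at most K vertices
    through u, in the graph with the vertices in `removed` deleted
    (the role of set R in the algorithm). `succ` maps each vertex to
    its list of out-neighbours."""
    cycles, stack = [], [u]

    def dfs(v):
        for w in succ.get(v, []):
            if w in removed:
                continue
            if w == u:
                cycles.append(tuple(stack))      # closing arc back to u
            elif w not in stack and len(stack) < K:
                stack.append(w)
                dfs(w)
                stack.pop()

    dfs(u)
    return cycles

def full_recourse_cycles(first_stage_pairs, succ, K):
    """For every pair selected in the first stage, enumerate the cycles
    through it, then remove the pair so no cycle is generated twice."""
    removed, all_cycles = set(), []
    for u in first_stage_pairs:
        all_cycles += cycles_through(u, succ, removed, K)
        removed.add(u)
    return all_cycles
```

Processing each first-stage pair once and then removing it mirrors the role of the set $R$: a cycle containing several first-stage pairs is generated only from the first of them.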
Let $\tilde{N} = N$ be an auxiliary set that corresponds to the set of singleton donors. Moreover, let $\mathcal{C}_{\ell}^{u}$ be the set of chains that include at least one pair in $\pairSet(\varFirstPlain)$ and are triggered by vertex $u \in N$ with exactly $1 \le \ell \le L$ arcs. Algorithm \ref{Alg:FullRecoChainSet} iteratively builds the set of chains that include at least one pair that is selected in the first stage. \begin{algorithm}[tbp] \algorithmicrequire { A first-stage solution $\varFirstPlain \in \mathcal{X}$}\\ \algorithmicensure { Set $\mathcal{C}_{\chainCap}^{\text{Full}}(\varFirstPlain)$} \begin{algorithmic}[1] \item[] \textbf{Step 0: } \STATE $R = \emptyset$; $\mathcal{C}_{\chainCap}^{\text{Full}}(\varFirstPlain) = \emptyset$ \\ \item[] \textbf{Step 1: } \WHILE{$\tilde{N} \neq \emptyset$} \STATE \text{Select vertex} $u \in \tilde{N}$ \FORALL{$ 1 \le \ell \le L$} \STATE \text{Find} $\mathcal{C}_{\ell}^{u}$ \text{in graph } $\graph = (\vertexSet, \arcSet)$ \text{from vertex } $u$ \STATE $\mathcal{C}_{\chainCap}^{\text{Full}}(\varFirstPlain) \gets \mathcal{C}_{\chainCap}^{\text{Full}}(\varFirstPlain) \cup \mathcal{C}_{\ell}^{u}$ \ENDFOR \STATE $\tilde{N} \gets \tilde{N} \setminus \{u\}$ \ENDWHILE \item[] \textbf{Step 2: } \STATE Return $\mathcal{C}_{\chainCap}^{\text{Full}}(\varFirstPlain)$ \caption{Obtaining $\mathcal{C}_{\chainCap}^{\pi}(\varFirstPlain)$ with $\pi = \text{Full}$} \label{Alg:FullRecoChainSet} \end{algorithmic} \end{algorithm} \subsection{Cycles and chains for the first-stage-only recourse policy} Algorithm \ref{Alg:FullRecoCycleSet} can be modified to accommodate the first-stage-only recourse for cycles. Particularly, we can replace $\mathcal{C}_{\cycleCap}^{\text{Full}}(\varFirstPlain)$ by $\mathcal{C}_{\cycleCap}^{\text{1stSO}}(\varFirstPlain)$, where the latter is the set of simple cycles with at least one vertex in $\pairSet(\varFirstPlain)$ satisfying the first-stage-only recourse policy. 
In line 4, the vertex set $V$ in graph $\tilde{D}$ is replaced by $\pairSet(\varFirstPlain)$, so that only pairs from the first stage can be part of the allowed cycles. The arc set of $\tilde{D}$ can then be defined such that every arc in it has both its starting and its terminal vertex in $\pairSet(\varFirstPlain)$. Likewise, Algorithm \ref{Alg:FullRecoChainSet} can also support chains for the first-stage-only recourse. The only change in Algorithm \ref{Alg:FullRecoChainSet} is to replace graph $\graph = (\vertexSet, \arcSet)$ by graph $\tilde{D} = (V(\varFirstPlain), \tilde{A})$, where every arc in the arc set $\tilde{A}$ has both endpoints in $V(\varFirstPlain)$. The algorithms just described are used to obtain the cycles and chains that can participate in a recourse solution when the recourse problem is solved as a sub-problem in the robust decomposition presented in the next section. \subsection{Formulations for the recourse problem} \label{sec:RecourseFormls} In this section we present the cycle-and-chain MIP formulations presented in \citep{Blom2021}, adapted to the problem we study. We note, however, that a MIP formulation for the recourse problem does not require the explicit enumeration of cycles and chains. The advantage of enumeration is that different policies can easily be addressed, in particular the full-recourse and first-stage-only recourse policies. Recall that $\cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \subseteq \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ is the set of non-failed cycles/chains under scenario $\bm{\oneCeS} \in \Gamma$ and policy $\pi \in \Pi$, i.e., $\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(\vertexa,\vertexb) \in A(c)} \gamma^{\text{a}}_{\vertexa\vertexb} = 0 \ \forall c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$. 
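For illustration, the non-failure condition just recalled translates directly into code; the pair-of-tuples encoding of a cycle/chain below is our own assumption:

```python
def surviving(cycles_chains, failed_vertices, failed_arcs):
    """Keep exactly the cycles/chains whose failure indicators sum to zero.

    Each cycle/chain is encoded as (vertices, arcs); it survives a
    scenario precisely when none of its vertices or arcs fail, i.e.
    the sum of gamma^v over V(c) plus gamma^a over A(c) equals 0.
    """
    kept = []
    for verts, arcs in cycles_chains:
        failures = sum(v in failed_vertices for v in verts) \
                 + sum(a in failed_arcs for a in arcs)
        if failures == 0:
            kept.append((verts, arcs))
    return kept
```

Applying this filter to the enumerated sets yields the scenario-dependent sets used by the recourse formulation below.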
We present the recourse problem based on the so-called cycle formulation \citep{Abraham2007}, as follows: \begin{subequations} \label{RecoPModel} \begin{align} R^{\policy}(\varFirst, \oneCe): \max_{y} \quad& \sum_{c \in \cyclechainSet^{\policy}{(\varFirst, \oneCe)}}\mathbf{w_{\match}}(\varFirst) y_{c} \label{RecoP:Obj} \tag{R}\\ & \sum_{c: u \in V(c)} y_{c} \le 1 & u \in V \label{eq:uniqueC}\\ &y_{c} \in \{0,1\} & c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \end{align} \end{subequations} \noindent Here, $y_{c}$ is a decision variable for every cycle/chain $c \in \cyclechainSet^{\policy}{(\varFirst, \oneCe)}$, taking on value one if $c$ is selected, and zero otherwise. Constraints \eqref{eq:uniqueC} ensure that a vertex belongs to at most one cycle/chain. An optimal solution to formulation $R^{\policy}(\varFirst, \oneCe)$ finds a matching with the greatest number of matched pairs from the first stage after observing failure scenario $\bm{\oneCeS}$. Thus, by solving formulation \eqref{RecoPModel}, a new Constraint \eqref{eq:solsTwo} can be created. \cite{Blom2021} proposed to expand recourse solutions by including, in addition to the optimal non-failed cycles/chains found by formulation \eqref{RecoPModel}, failed cycles and chains, while guaranteeing that the solution is still a matching. Although an expanded solution does not contribute more recourse value under the failure scenario in consideration, it may imply other violated constraints, as follows: consider two recourse solutions $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \oneCe)$ and $\hat{\varReco} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$, in the transitory compatibility graph and in the second-stage compatibility graph, respectively, and assume that $\hat{\varReco} \subseteq \bar{\varReco}$. 
Then, constraint \eqref{eq:solsTwo} associated with $\hat{\varReco}$ is directly implied by that of $\bar{\varReco}$, i.e., if the constraint corresponding to $\hat{\varReco}$ is violated, so is the constraint corresponding to $\bar{\varReco}$. We find expanded recourse solutions by solving a deterministic KEP in the transitory graph $D^{\pi}(\varFirstPlain)$ instead of the second-stage graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ in formulation \eqref{RecoPModel}, and by assigning new weights to all cycles/chains $c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$ as in \citep{Blom2021}. The new weights, assigned to each cycle/chain $c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$, are given by \begin{align} \label{expRecoObFcn} \mathbf{\hat{w}_{\match}}(\varFirst,\oneCe) = \left\{ \begin{array}{ll} \mathbf{w_{\match}}(\varFirst) \lvert V \rvert + 1 & \mbox{if } V(c) \subseteq V_{\bm{\oneCeS}}\\ 1 & \mbox{otherwise } \end{array} \right. \end{align} We denote by $\Rexp(\varFirst, \oneCe)$ the resulting recourse problem with expanded solutions, \begin{subequations} \label{RecoPModelexp} \begin{align} \Rexp(\varFirst, \oneCe): \max_{y} \quad& \sum_{c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)}\mathbf{\hat{w}_{\match}}(\varFirst,\oneCe) y_{c} \label{RecoP:Objexp} \tag{RE}\\ & \sum_{c: u \in V(c)} y_{c} \le 1 & u \in V \label{eq:uniqueCexp}\\ &y_{c} \in \{0,1\} & c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain) \end{align} \end{subequations} The following lemma, based on \citep{Blom2021}, states that the set of cycles/chains that do not fail in a recourse solution optimal to $\Rexp(\varFirst, \oneCe)$ is an optimal recourse solution to the original recourse problem $R^{\policy}(\varFirst, \oneCe)$.
\begin{lemma} \label{lemma:exp} For a recourse solution $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \oneCe)$ that is optimal to $\Rexp(\varFirst, \oneCe)$, its set of non-failed cycles/chains, $\VarRecoOpExp(\oneCe) \subseteq \bar{\varReco}$, under scenario $\bm{\oneCeS} \in \bigsCe(\varFirst)$ is an optimal recourse solution to $R^{\policy}(\varFirst, \oneCe)$. \end{lemma} \proof Let ${Z}^{\pi,\star}_{RE}(\varFirst, \oneCe)$ be the optimal objective value of $\Rexp(\varFirst, \oneCe)$, attained by solution $\bar{\varReco} \in \setRecoSym_{\text{exp}}^{\policy}(\varFirst, \oneCe)$, and let $z^{\star}$ denote the optimal objective value of $R^{\policy}(\varFirst, \oneCe)$. Recall that $\lvert V \rvert/2$ is the maximum number of cycles/chains that a feasible matching can contain, i.e., the maximum number of decision variables for which $\bar{\varReco}_{c} = 1$, $c \in \cycleSet^{\pi}(\varFirstPlain) \cup \chainSet^{\pi}(\varFirstPlain)$. Moreover, let $\mathbbm{1}_{c}$ be an indicator variable that takes on value one if $\bar{\varReco}_{c} = 1$ for $c \notin \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS})$ and zero otherwise.
Then, from \eqref{expRecoObFcn} we know that \begin{subequations} \begin{align} {Z}^{\pi,\star}_{RE}(\varFirst, \oneCe) = & \sum_{c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}): \bar{\varReco}_{c} = 1} \left( \mathbf{w_{\match}}(\varFirst) \lvert V \rvert + 1 \right) + \sum_{c \notin \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}): \bar{\varReco}_{c} = 1} \mathbbm{1}_{c} \\ \le&\lvert V \rvert \sum_{c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}): \bar{\varReco}_{c} = 1} \mathbf{w_{\match}}(\varFirst) + \sum_{i = 1}^{\lvert V \rvert/2} 1 + \sum_{i = 1}^{\lvert V \rvert/2} 1\\ \le&z^{\star}\lvert V \rvert + \lvert V \rvert \label{result} \end{align} \end{subequations} Now, suppose $\VarRecoOpExp(\oneCe)$ is not optimal to $R^{\policy}(\varFirst, \oneCe)$; then $\sum_{c \in \cycleSet^{\pi}(\varFirstPlain, \bm{\oneCeS}) \cup \chainSet^{\pi}(\varFirstPlain, \bm{\oneCeS}): \bar{\varReco}_{c} = 1} \mathbf{w_{\match}}(\varFirst)$ can be at most $z^{\star} - 1$. If that were true, then, replacing $z^{\star}$ in \eqref{result} by $z^{\star} - 1$, we would obtain ${Z}^{\pi,\star}_{RE}(\varFirst, \oneCe) \le (z^{\star} - 1)\lvert V \rvert + \lvert V \rvert = z^{\star}\lvert V \rvert$. However, any optimal solution to $R^{\policy}(\varFirst, \oneCe)$ is feasible for $\Rexp(\varFirst, \oneCe)$ and has objective value at least $z^{\star}\lvert V \rvert + 1$, so ${Z}^{\pi,\star}_{RE}(\varFirst, \oneCe) > z^{\star}\lvert V \rvert$, which is a contradiction. Thus, it follows that $\VarRecoOpExp(\oneCe)$ must be optimal to $R^{\policy}(\varFirst, \oneCe)$. \hfill$\square$ It is worth noting that we formulated the recourse problem by means of cycle-and-chain decision variables, but since it reduces to a deterministic KEP, the recourse problem can be solved via multiple formulations/algorithms from the literature, e.g., \citep{Omer2022, Blom2021, Riascos2020}.
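To make the role of the expanded weights \eqref{expRecoObFcn} concrete, the following Python sketch brute-forces both formulations on a toy instance. Everything in it (the function names, the candidate cycles, the failure scenario, and the unit weights) is our own illustration, not taken from the paper: it enumerates vertex-disjoint selections of candidate cycles/chains and checks that the non-failed part of an optimal expanded solution attains the optimum of the plain recourse problem, as Lemma \ref{lemma:exp} states.

```python
from itertools import combinations

def packings(cycles):
    """Yield all vertex-disjoint subsets of the candidate cycles/chains."""
    for r in range(len(cycles) + 1):
        for sub in combinations(range(len(cycles)), r):
            used = set()
            feasible = True
            for i in sub:
                if used & cycles[i]:
                    feasible = False
                    break
                used |= cycles[i]
            if feasible:
                yield sub

def best_packing(cycles, weight):
    """Max-weight selection of vertex-disjoint cycles/chains (brute force)."""
    return max(packings(cycles), key=lambda sub: sum(weight[i] for i in sub))

# Toy transitory graph: candidate cycles/chains given as vertex sets, unit weights.
cycles = [{0, 1}, {2, 3}, {1, 2}, {0, 4, 5}]
V = {0, 1, 2, 3, 4, 5}
failed = {3}                                   # scenario: vertex 3 fails
survives = [not (c & failed) for c in cycles]

# Plain recourse (R): only non-failed cycles/chains, weight 1 each.
alive = [c for c, s in zip(cycles, survives) if s]
z_star = len(best_packing(alive, [1] * len(alive)))

# Expanded recourse (RE): all cycles/chains, weight |V|+1 if non-failed, else 1.
w_exp = [len(V) + 1 if s else 1 for s in survives]
chosen = best_packing(cycles, w_exp)
z_exp = sum(1 for i in chosen if survives[i])  # value of the non-failed part
assert z_exp == z_star                         # the two optima coincide
```

On this instance both formulations recover two non-failed cycles/chains; the large weight on surviving cycles makes any expanded optimum dominate in its non-failed part, mirroring the proof above.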
\section{Additional results for non-homogeneous failure} \label{CycleLengthFourNonHomoFail} The data in Table \ref{tab:policiesk4} includes all runs, optimal or not, except for column HSP(\%), where only instances that were solved to optimality are included. The interpretation of the displayed columns corresponds to that in Table \ref{tab:policiesk3}. \begin{table}[htpb] \caption{Policies comparison for $K = 4$ under HSA\_MB and non-homogeneous failure.} \label{tab:policiesk4} \begin{adjustbox}{width=\textwidth} \centering \input{policies_table_K4_aBudget_5_10} \end{adjustbox} \end{table}
\section{Introduction} What is awareness or consciousness? Think of a well-trained and experienced car driver who automatically identifies and follows the traffic protocols in different surrounding environments (e.g. street, highway, and city centre) by interpreting the visual scenes directly (such as buildings, schools etc.). Similarly, imagine a car with defective parking sensors which sometimes miscalculate the distance to nearby objects. This means that the audio input is ambiguous and the driver can't fully rely on the parking sensors for precise manoeuvring decisions, e.g. while reversing the car. In this situation, we observe that the driver automatically starts utilizing visual cues to leverage the complementary strengths of both the ambiguous sound (defective reversing beeps) and the visuals. This is one example of consciousness or awareness, where the surrounding environment or situation (UCF) helps establish the anticipated behaviour to comply with, defines the optimal roles of the incoming multisensory information, and eventually controls human actions. Similarly, in contextual audio-visual (AV) speech processing, we observe that in a very noisy environment, our brain naturally utilizes other modalities (such as lips, body language, facial expressions) to perceive speech or the conveyed message (i.e. speech perception in noise) \cite{sumby1954visual}\cite{mcgurk1976hearing}\cite{summerfield1979use}\cite{patterson2003two}. However, this raises crucial questions: How does it happen in the brain? How do the incoming sensory signals (such as vision and sound) integrate with respect to the situation? How are the roles of the incoming signals (selective amplification/suppression) defined? How does a neuron originate a precise control command that controls human actions based on incoming multisensory information and its precise integration, complying with the anticipated behavioural constraint of the environment?
Certainly, defining the context and its relevant features, and knowing when a change in context has taken place, are challenging problems in modelling human behaviour \cite{gonzalez2008formalizing}. It is also claimed in the literature that context could be of infinite dimensions, but humans have a unique capability of correlating the significant context and setting its boundaries intuitively with respect to the situation. However, once the context is identified, it is relatively easy to utilize it and set its bounds to define more precisely the search space for the selection of the best possible decision \cite{gonzalez2008formalizing}. A simple example of contextual modulation is shown in Figure 1 \cite{kay2018contrasting}. It illustrates the role of localized contextual information (i.e. LCF) that comes from a nearby location in space. It can be seen that the ambiguous RF input (in the top row) is interpreted as B or 13 depending on the local contextual information coming from the nearby location in space. Similarly, consider the perception of any ambiguous letter or speech sound. At times, if available, the surrounding environment and its understanding significantly help to disambiguate the ambiguous input. The contextual modulation can in principle come from anywhere in space/time and modulate the transmission of information about other driving signals \cite{kay2018contrasting}. However, the selective amplification and attenuation of incoming multisensory information with respect to the outside world at the neural level is still very little understood. In addition, to the best of our knowledge, not much progress has been made to fully interpret the role of consciousness and objectively define its contribution to multisensory integration. The complexity of the problem is widely appreciated by scientists, with a consensus that it is not easy to use awareness and contextual modulation to show enhanced processing, learning, and reasoning.
In this research article, we propose a novel conscious neural structure and objectively define awareness in terms of the newly proposed UCF. The proposed spiking conscious neuron exhibits a switch-like behaviour that defines the roles of the incoming multisensory signals with respect to the outside environment and the anticipated behaviour. It is believed that the conscious neuron inherently contains enough knowledge, acquired through past learning and reasoning, about the situation in which the problem is to be solved; this knowledge helps define the precise roles of the incoming multimodal signals so as to originate a precise control command. The conscious neuron exploits four types of contexts: modulatory (LCF), temporal, spatial, and awareness (UCF). The preliminary behavioural modelling analysis and simulation results demonstrate the enhanced learning and reasoning capability of the proposed SCNN as compared to state-of-the-art unimodal and multimodal models. The rest of the paper is organized as follows: Section 2 discusses the conceptual foundation and motivation which led to the development of the conscious neural structure and SCNN. Section 3 presents the conscious neural structure and SCNN. In Section 4, the conscious neural structure and SCNN are utilized for behavioural modelling, including AV speech processing and driving behaviour. Finally, Section 5 discusses the research impact, applications, and future research directions. \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.16\textwidth]{conference/B} \caption{Ambiguous decision-making and local contextual modulation} \label{fig:Picture2} \end{figure} \section{Motivation and Contribution} The simplified state-of-the-art integrate-and-fire neural structure is doing wonders today; imagine the potential of a neuron representing a closer form of biophysical reality.
There exists ample evidence that divisive and multiplicative gain modulations are widespread in the mammalian neocortex, with an indication of amplification or attenuation via contextual modulation \cite{kay2018contrasting}. Evidence gathered in the literature suggests that multisensory interactions emerge at the primary cortical level \cite{stein2008multisensory}\cite{stein2009neural}. Scientists have presented several theories and empirical results on the role of contextual modulation in disambiguating ambiguous input or improving feature detection with weak or noisy inputs \cite{kay1998contextually}. Recently, the authors in \cite{kay2018contrasting} used modern developments in the foundations of information theory to study the properties of local processors (neurons or microcircuits) embedded within the neural system that use contextual input to amplify or attenuate the transmission of information about their driving inputs. Specifically, the authors used advances in information decomposition to show that the information transmitted by a local processor with two distinct inputs (driving and contextual information) can be decomposed into components unique to each input together with three-way mutual/shared information. In \cite{kay1998contextually}, the authors used an edge detection problem as a benchmark to demonstrate the effectiveness of contextual modulation in recognizing specific patterns with noisy RF input. It was shown how surrounding regions in different parallel streams helped detect the edge within any particular region and played a significant role in combating noisy input. Recently, researchers have also proposed several deep recurrent neural network architectures that exploit contextual modulation by leveraging the complementary strengths of multimodal cues. For example, researchers in \cite{ephrat2018looking} presented a joint AV model for isolating a single speech signal from a mixture of sounds.
The authors used the complementary strengths of both audio and visual cues, using deep learning to focus the audio on the desired speaker in a scene. Similarly, the authors in \cite{adeel2018contextual} and \cite{gogate2018dnn} developed AV switching components for speech enhancement and mask estimation to effectively account for different noisy conditions contextually. The contextual AV switching components were developed by integrating a convolutional neural network (CNN) and a long short-term memory (LSTM) network. However, these end-to-end multimodal learning models operate at the network level and can't be used for deep analysis and information decomposition to understand the neural circuitry and the underlying information processing mechanisms at the neural level, with respect to the outside world and the anticipated behaviour. In addition, these methods only exploit the localized context without considering the overall knowledge of the problem (awareness). Thus, the limited contextual exploitation leads to an imprecise behavioural representation. This work proposes a new conscious neural structure and SCNN that objectively define awareness at the neural level. In contrast to the work presented in \cite{kay1998contextually}, the proposed SCNN is evaluated on a noisy speech filtering problem. Specifically, we used two distinctive multimodal streams (lip movements as LCF and noisy speech as RF) and studied how the LCF helped to improve noisy speech filtering in different noisy conditions (ranging from a very noisy to an almost zero-noise environment). Later, going beyond the theory of local contextual modulation, we added the UCF as another input variable (a fourth virtual dimension) to define the descriptive and control context (environment and anticipated behaviour) in the SCNN. \section{Spiking Conscious Neural Network} The proposed conscious neural structure is presented in Figure 2.
The output of the neuron depends on three functionally distinctive integrated input variables: driving (RF), modulatory (LCF), and awareness (virtual UCF). The RF defines the ambiguous sensory signal, the LCF defines the modulatory sensory signal coming from other parts of the brain, and the UCF defines the outside world and the anticipated behaviour. The interaction among RF, LCF, and UCF in an SCNN is shown in Figure 3. The output is denoted by the random variable Y, whereas X, Z, and U represent RF, LCF, and UCF respectively. It is believed that the proposed neural structure, when implemented within a multi-layered multiunit network of similar neurons, produces a widely distributed activity pattern with respect to the current circumstances (i.e. a combination of RF, LCF, and UCF at the neuronal level). This activity helps the neural network to explore and exploit the associative relations between the features extracted within different streams \cite{kay1998contextually}\cite{kay2018contrasting}. In the implementation, a neuron in one stream is connected to all other neurons in the neighbouring stream of the same layer. This is achieved through shared connections among the neurons that guide learning and processing with respect to the local and universal contexts. \begin{figure} [htb!] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.80\textwidth]{conference/SingleNeuron} \caption{Proposed conscious neural structure with a switch-like behaviour: The output depends on three functionally distinctive integrated input variables: driving (RF), modulatory (LCF), and awareness (virtual UCF). The RF defines the ambiguous sensory signal, the LCF defines the modulatory sensory signal coming from other parts of the brain, and the UCF defines the outside world and the anticipated behaviour.
The conscious neuron, with respect to the outside environment (UCF), decides whether the role of the LCF is modulatory or null.} \label{fig:Picture2} \end{figure} \subsection{Mathematical Modelling} The conscious neurons (\textit{y}) in the proposed SCNN interact by exchanging excitatory and inhibitory spikes probabilistically (in the form of bipolar signal trains), as shown in Figure 3 and Figure 4. In steady state, the stochastic spiking behaviour of the network has a ``product form'' property (product of firing rates and transition probabilities) which defines the state probability distribution with easily solvable non-linear network equations. The firing from neuron \textit{y} to the succeeding neuron \textit{w} in the network is according to a Poisson process, represented by the synaptic weights $w_{yw}^+$ = $r_y[P_{yx}^+ + P_{yz}^+ + P_{yu}^{+}]$ and $w_{yw}^-$ = $r_y[P_{yx}^- + P_{yz}^- + P_{yu}^-]$, where $P_{yx}^+, P_{yz}^+, P_{yu}^+$ and $P_{yx}^-, P_{yz}^-, P_{yu}^-$ represent the probabilities of excitatory and inhibitory RF, LCF, and UCF signals, respectively. The term $r_y$ represents the firing rate of the conscious neuron. The terms $w_{yx}^+$, $w_{yz}^+$, $w_{yu}^+$ and $w_{yx}^-$, $w_{yz}^-$, $w_{yu}^-$ represent the RF, LCF, and UCF synaptic weights (i.e. the rates of positive and negative signal transmission) that the network learns through the process of training. In the network, the conscious neuron \textit{y} can receive exogenous positive/negative signals from the inside (within the network) or from the outside world according to Poisson arrival streams of rates $\Lambda_x$ and $\lambda_x$, respectively. The potential (\textit{Y}) of the conscious neuron represents its state, which increases/decreases with respect to an incoming signal from the inside or outside world. The proposed neural structure is implemented using G-networks, which possess a product-form asymptotic solution \cite{gelenbe1993g}.
The neuron \textit{y} in firing state transmits an impulse to neuron \textit{w} with a Poisson rate ($r_y$) and probability $P^+ (y, w)$ or $P^- (y, w)$, depending on whether the transmitted signal is excitatory or inhibitory. The transmitted signal can also leave the network and go to the outside world with probability $d(y)$, such that: \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.75\textwidth]{conference/UNN1} \caption{The interaction among RF, LCF, and UCF in an SCNN. Please note that the switch-like behaviour or filtering rules are enforced by the positive and negative synaptic weights associated with each input field.} \label{fig:Picture2} \end{figure} \begin{figure} [!htb] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.65\textwidth]{MSMU1n.png} \caption{Proposed SCNN: Multi-layered multiunit network of many similar conscious neurons, where each unit in one stream is connected to all other units in the neighbouring streams of the same layer.} \label{fig:Picture2} \end{figure} \begin{multline} d(y) + \sum_{x=1}^{N} [P^+ (y, x) + P^- (y, x)] + \sum_{z=1}^{N} [P^+ (y, z) + P^- (y, z)] + \sum_{u=1}^{N} [P^+ (y, u) + P^- (y, u)] = 1 \end{multline} where \begin{multline} w^+ (y,w) = r_y[P^+ (y,x) + P^+ (y,z)+P^+ (y,u)]\geq 0, w^- (y,w) = r_y[P^- (y,x) + P^- (y,z)+P^- (y,u)]\geq 0 \end{multline} The firing rate of the conscious neuron can be written as: \begin{multline} r(y) = (1-d(y))^{-1} (\sum_{x=1}^{N} [w^+ (y,x) + w^- (y,x)] + \sum_{z=1}^{N} [w^+ (y,z) + w^- (y,z)] + \sum_{u=1}^{N} [w^+ (y,u) + w^- (y,u)]) \end{multline} If $Y(t)$ is the potential of the conscious neuron \textit{y}, then for a network of \textit{n} neurons, the vector $\overline{Y(t)}$ = $(Y_1(t), Y_2(t), ...,Y_n(t))$ can be modelled as a continuous-time Markov process.
The stationary joint probability distribution of the network is given as: \begin{equation} \lim\limits_{t\to\infty} P\big(\overline{Y(t)} = (n_1, n_2, \ldots, n_n)\big) = \prod_{y=1}^{n}(1-q_y)\,q_y^{n_y}, \quad q_y = \frac{Q_Y^+}{r_y + Q_Y^-} \end{equation} where $Q_Y^+$ and $Q_Y^-$ are the average rates of positive and negative signals at neuron \emph{y}, given as: \begin{equation} Q_Y^+ = \sum_{x=1}^{N}q_x w^+(y, x) + \sum_{z=1}^{N}q_z w^+(y, z) + \sum_{u=1}^{N}q_u w^+(y, u) \end{equation} \begin{equation} Q_Y^- = \sum_{x=1}^{N}q_x w^-(y, x) + \sum_{z=1}^{N}q_z w^-(y, z) + \sum_{u=1}^{N}q_u w^-(y, u) \end{equation} The probability that the conscious neuron (\textit{y}) is excited can be written as: \begin{equation} q_y = \frac{\sum_{x=1}^{N}q_x w^+ (y,x) + \sum_{z=1}^{N}q_z w^+ (y,z) + \sum_{u=1}^{N}q_u w^+ (y,u)}{[W_W^+ + W_W^-] + \sum_{x=1}^{N}q_x w^- (y,x) + \sum_{z=1}^{N}q_z w^- (y,z) + \sum_{u=1}^{N}q_u w^- (y,u)} \end{equation} where $w^+ (y,x)$, $w^- (y,x)$, $w^+ (y,z)$, $w^- (y,z)$, $w^+ (y,u)$, $w^- (y,u)$ are the positive and negative RF, LCF, and UCF weights, and $W^+ _W$ and $W^- _W$ are the positive and negative weights between the conscious neuron \textit{y} and the succeeding neuron \textit{w}. For training and weight updates, the standard gradient descent algorithm is used. The RF input ($q_x$) is given as: \begin{equation} q_x = \frac{Q_x^+}{[w(x,y)^+ + w(x,y)^-] + Q_x^-} \end{equation} \begin{equation} Q_x^+ = \Lambda_x + \sum_{v=1}^{N}q_v w^+(x, v) \end{equation} \begin{equation} Q_x^- = \lambda_x + \sum_{v=1}^{N}q_v w^- (x, v) \end{equation} where $q_v$ is the potential of the preceding neuron \textit{v} coming from the outside world, and $q_u$ and $q_z$ in (7) are the potentials of the incoming LCF and UCF. It is to be noted that $w(x,y)^+$ and $w(y,x)^+$ are different.
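As a numerical illustration, the excitation probabilities $q$ defined by the equations above can be obtained by simple fixed-point iteration. The sketch below is our own (the `gnetwork_q` helper, the 3-neuron topology, and all rate values are illustrative assumptions, not parameters from the paper): neuron 0 plays the role of an RF-like input, neuron 1 an LCF-like input, and neuron 2 the output.

```python
import numpy as np

def gnetwork_q(W_plus, W_minus, r, Lam, lam, iters=1000, tol=1e-12):
    """Fixed-point iteration for G-network excitation probabilities.

    W_plus[i, j] / W_minus[i, j] : excitatory / inhibitory rate from neuron i to j
    r   : firing rates r_i (outgoing weights plus departures to the outside world)
    Lam : exogenous excitatory Poisson arrival rates
    lam : exogenous inhibitory Poisson arrival rates
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        Q_plus = Lam + q @ W_plus           # average rate of positive signals
        Q_minus = lam + q @ W_minus         # average rate of negative signals
        q_new = np.minimum(Q_plus / (r + Q_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Neuron 0 (RF-like) and neuron 1 (LCF-like) both feed the output neuron 2.
W_plus = np.array([[0.0, 0.0, 2.0],
                   [0.0, 0.0, 1.0],
                   [0.0, 0.0, 0.0]])
W_minus = np.array([[0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.0]])
r = np.array([2.5, 1.5, 3.0])      # neuron 2 fires to the outside world
Lam = np.array([1.0, 0.8, 0.0])
lam = np.array([0.2, 0.2, 0.0])

q = gnetwork_q(W_plus, W_minus, r, Lam, lam)
# q[2] is the probability that the output neuron is excited; raising the
# inhibitory weight of either input stream (UCF-like gating) lowers it.
```

Since the two input neurons receive no network feedback here, their $q$ values follow directly from $\Lambda_x/(r + \lambda_x)$, and the iteration converges in a few steps.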
\subsection{Information Decomposition} A Venn diagram of the information-theoretic measures for the distinctive integrated input variables is depicted in Figure 5, where RF, LCF, and UCF are represented by the green, orange, and grayish-pink ellipses respectively. The output (Y) is represented by the blue ellipse. In the information processing equations, the output is denoted by the random variable Y, whereas RF, LCF, and UCF are represented by X, Z, and U respectively. \\ \begin{figure} [htb] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.25\textwidth]{conference/vendn} \caption{Venn diagram of the information-theoretic measures for the distinctive integrated input variables RF, LCF, and UCF, represented by the green ellipse, orange ellipse, and grayish-pink ellipse respectively. The output (Y) is represented by the blue ellipse. The UCF (U) and the associated $H(U|X,Y,Z)$ is interpreted as the information contained in U but not in X, Y, and Z. The output (Y) and the associated $H(Y|X,Z,U)$ is interpreted as the information contained in Y but not in X, Z, and U. The LCF (Z) and the associated $H(Z|X,Y,U)$ is interpreted as the information contained in Z but not in X, Y, and U. The RF (X) and the associated $H(X|Y,Z,U)$ is interpreted as the information contained in X but not in Y, Z, and U.} \label{fig:Picture2} \end{figure} The mutual information shared between the random variables X (RF) and Y (output) can be written as \cite{kay2011coherent}: \begin{equation} I(X;Y) = H(X) - H(X|Y) \end{equation} where H(X) is the Shannon entropy associated with the distribution of X, and $H(X|Y)$ is the Shannon entropy associated with the conditional distribution of X given Y; the latter is interpreted as the information contained in X but not in Y \cite{kay2011coherent}. The mutual information is always non-negative, and it is zero when the random variables are stochastically independent \cite{kay2011coherent}.
Since we are dealing with four random variables, the conditional mutual information can be written as: \begin{equation} I(X; Y|Z, U) = H(Y|Z, U) - H(Y|X, Z, U) \end{equation} This is the conditional mutual information shared between X and Y, having observed Z and U. It is defined as the information shared between X and Y but not shared with Z and U.\\ The four-way mutual information shared among the four random variables X, Y, Z, and U can be defined as: \begin{multline} I(X; Y; Z; U) = I(X; Y) - I(X; Y|Z, U) = I(X; Z) - I(X; Z|Y, U) = \\ I(X; U) - I(X; U|Y, Z) = I(Y; Z) - I(Y; Z|X, U) = I(Y; U) - I(Y; U|X, Z) \end{multline} If the four-way mutual information is positive, the Shannon entropy associated with the distribution of Y can be decomposed as \cite{kay2011coherent}: \begin{multline} H(Y) = I(Y; X; Z; U) + I(Y; X|Z, U) + I(Y; Z|X, U) + I(Y; U|X, Z) + H(Y|X, Z, U) \end{multline} For continuous random variables, these entropies are written as integrals (when the variables are discrete, the integrals are replaced by summations and the densities by probability mass functions) \cite{kay2011coherent}: \begin{equation} H(Y) = -\int p(y)\log{p(y)}dy \end{equation} \begin{equation} H(Y|X) = -\int \int p(y|x)\log{p(y|x)}p(x)dydx \end{equation} \begin{equation} H(Y|X, Z) = -\int \int \int p(y|x, z)\log{p(y|x,z)}p(x,z)dydxdz \end{equation} \begin{equation} H(Y|X, Z, U) = -\int \int \int \int p(y|x, z, u)\log{p(y|x,z,u)}p(x,z,u)dydxdzdu \end{equation} The objective function to be maximized can be defined as: \begin{multline} F = \phi_0 I(Y; X; Z; U) + \phi_1 I(Y; X|Z, U) + \phi_2 I(Y; Z|X, U) + \phi_3 I(Y; U|X, Z) + \phi_4 H(Y|X, Z, U) \end{multline} $I(Y; X|Z, U)$ is the information that the output shares with the RF (X) and that is not contained in the LCF and UCF units. $I(Y; Z|X, U)$ is the information that the output shares with the LCF and that is not contained in the RF and UCF units.
$I(Y; U|X, Z)$ is the information that the output shares with the UCF and that is not contained in the RF and LCF units.\\ The values of the $\phi$'s are tunable within the range [-1, 1]. Different $\phi$ values allow investigating specific mutual/shared information, such that: \[ F= \begin{cases} I(Y; X),& \text{if } \phi_0=\phi_1=1, \phi_2= \phi_3=\phi_4=0\\ I(Y; Z),& \text{if } \phi_0=\phi_2=1, \phi_1= \phi_3=\phi_4=0\\ I(Y; U),& \text{if } \phi_0=\phi_3=1, \phi_1= \phi_2=\phi_4=0\\ I(Y; X; Z; U), & \text{if } \phi_0=1, \phi_1=\phi_2=\phi_3=\phi_4=0 \end{cases} \] \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.22\textwidth]{conference/DifferentBehaviours2n1} \caption{General human behavioural modelling in any environment \textit{N} using the conscious neural structure.} \label{fig:Picture2} \end{figure} \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.72\textwidth]{conference/10} \caption{Human AV speech processing model in three different environments using the conscious neural structure. Please note that in the first two environments, the LCF (visual information from lip movements) has a modulatory role, but in the third environment it has a null role. } \label{fig:Picture2} \end{figure} \section{Human behavioural modelling} Contextual identification and transition are two difficult problems. Given any desired human behaviour to be modelled, a set of appropriate contexts can be identified and grouped together to develop a computationally efficient model (given a broader understanding of the task at hand). According to the theory proposed in this paper, a combination of RF, LCF, and UCF can help model different human behaviours more precisely. A general human behavioural model is depicted in Figure 6. The two distinctive input variables RF and LCF define the incoming sensory inputs (e.g., vision and sound), whereas the UCF input defines the specific situation (outside world) and the associated anticipated behaviour.
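The mutual-information terms above are straightforward to evaluate for small discrete distributions. The following sketch is our own illustration (the random joint pmf and the helper `H` are arbitrary, not from the paper): it computes $I(X;Y)$, $I(X;Y|Z,U)$, and the four-way term as defined in the text, and checks the standard chain rule $H(Y) = I(Y;X) + I(Y;Z|X) + I(Y;U|X,Z) + H(Y|X,Z,U)$.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2, 2))   # joint pmf over (X, Z, U, Y), axes in that order
p /= p.sum()

def H(keep):
    """Shannon entropy (bits) of the marginal over the axes listed in `keep`."""
    drop = tuple(i for i in range(p.ndim) if i not in keep)
    m = p.sum(axis=drop) if drop else p
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

X, Z, U, Y = 0, 1, 2, 3
I_XY = H({X}) + H({Y}) - H({X, Y})                                   # I(X;Y)
I_XY_ZU = H({X, Z, U}) + H({Y, Z, U}) - H({X, Y, Z, U}) - H({Z, U})  # I(X;Y|Z,U)
I_4way = I_XY - I_XY_ZU          # four-way shared term, as defined in the text

# Chain rule: H(Y) = I(Y;X) + I(Y;Z|X) + I(Y;U|X,Z) + H(Y|X,Z,U)
I_YZ_X = H({Y, X}) + H({Z, X}) - H({X, Y, Z}) - H({X})
I_YU_XZ = H({Y, X, Z}) + H({U, X, Z}) - H({X, Y, Z, U}) - H({X, Z})
H_Y_XZU = H({X, Y, Z, U}) - H({X, Z, U})
assert abs(H({Y}) - (I_XY + I_YZ_X + I_YU_XZ + H_Y_XZU)) < 1e-10
```

Unlike the always-valid chain rule, the four-way term $I(X;Y) - I(X;Y|Z,U)$ can be negative for some distributions, which is why the decomposition of $H(Y)$ in the text is stated under the condition that it is positive.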
In the proposed neural structure, the roles of RF and LCF change with respect to the outside environment (UCF). To further illustrate the proposed theory, we present the following two case studies. \subsection{Case Study 1: Human AV speech processing} Human performance in speech recognition in a noisy environment is known to be dependent upon both aural and visual cues, which are combined by sophisticated multi-level integration strategies to improve intelligibility \cite{adeel2018Lip}. The multimodal nature of speech is well established in the literature, and it is well understood how speech is produced by the vibration of the vocal folds and the configuration of the articulatory organs. The correlation between the visible properties of the articulatory organs (e.g., lips, teeth, tongue) and speech reception has been previously shown in numerous behavioural studies \cite{sumby1954visual}\cite{summerfield1979use}\cite{mcgurk1976hearing}\cite{patterson2003two}. Therefore, clear visibility of some articulatory organs can be effectively utilized to extract a clean speech signal out of a noisy audio background. Figure 7 depicts audio-visual speech processing in three different surrounding environments: Restaurant, Cafe, and Home. In any of these environments, multisensory information (audio and visual cues) is available all the time, but its optimal utilization depends on the outside environment. For example, in busy cafe and restaurant environments (multi-talker speech perception), if there is high background noise, our brain automatically utilizes other modalities (such as lips, body language, and facial expressions) to perceive speech or the conveyed message. Therefore, based on the information provided by the UCF (i.e. the outside environment), the roles of LCF and RF are defined. For example, both RF and LCF are active in the first two scenarios (i.e.
LCF modulates RF), whereas in the Home scenario (with little or zero noise), lip-reading is less effective for speech enhancement and indeed of no importance (null role). This phenomenon is shown in our previous work at the network level \cite{adeel2018contextual}, where we showed that lip-reading-driven speech enhancement significantly outperforms benchmark audio-only (A-only) speech enhancement approaches (such as spectral subtraction and log-minimum mean square error) at low signal-to-noise ratios (SNRs). However, at low levels of background noise, visual cues become less effective. \begin{figure} [htb!] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=1\textwidth]{conference/1n} \caption{Audio-visual speech processing: Shallow SCNN - a three-stream, three-layered multisensory multiunit network of several similar conscious neurons (where each unit in one stream is connected to all other units in the neighbouring streams). Different surrounding environments define the respective anticipated behaviour and establish the roles of the incoming multisensory signals.} \label{fig:Picture2} \end{figure} \begin{figure} [htb!] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.85\textwidth]{conference/TrainingComparison} \caption{Neural-level shallow SCNN performance: Results for A-only (RF), AV (RF+LCF), and AV with UCF, considering 3 prior AV coefficients. It is to be noted that the RF+LCF+UCF model outperforms both the RF-only and RF+LCF-only models. The data samples include 2D-logFB (speech) and 2D-DCT (lip movements) coefficients for 1000 utterances from the Grid and ChiME3 corpora (Speaker 1 of the Grid). The number of clean logFB audio features is 22$\times$205,712. The combined noisy logFB audio features for -12dB, -9dB, -6dB, -3dB, 0dB, 3dB, 6dB, 9dB, and 12dB SNRs are 22$\times$205,712. Similarly, the DCT visual features are 25$\times$205,712 in total.
For UCF modelling, five dynamic real-world commercially-motivated scenarios are considered: cafe, restaurant, public transport, pedestrian area, and home. Please note that a particular SNR range defines a particular environment (UCF), represented by a unique pattern using one-hot encoding.} \label{fig:Picture2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.80\textwidth]{conference/LSTM_all} \caption{Network-level lip-reading-driven deep learning performance: Stacked LSTM validation results for different numbers of prior visual frames. The figure on the left presents the overall behaviour of an LSTM model when different numbers of previous visual frames are added. The figure on the right presents the estimated clean logFB audio features using 14 prior visual frames (i.e. the actual normalized energy in each of the 23 frequency bands (target, estimated)). The LSTM network had a total of 550 LSTM cells in the hidden layer with 23 output neurons \cite{adeel2018Lip}.} \label{fig:Picture2} \end{figure} To test our proposed theory and SCNN, three distinctive multimodal streams (lip movements as LCF, noisy speech as RF, and the outside environment as UCF) are used. We studied how the LCF and UCF help to improve noisy speech filtering in different noisy conditions (ranging from a very noisy (-12dB SNR) to a relatively clean environment (12dB SNR)). The implemented three-stream, three-layered SCNN is shown in Figure 8. To train the shallow SCNN, the deep problem was transformed into a shallow problem. Specifically, in our previous AV deep learning implementation \cite{adeel2018Lip}, the output layer of the LSTM network had 23 log filter-bank (FB) coefficients (i.e. frame-by-frame prediction). In contrast, the evaluated shallow SCNN model predicted one coefficient at a time (i.e. coefficient-by-coefficient prediction).
In the experiments, neurons interact by exchanging excitatory and inhibitory spiking signals probabilistically and fire when excited, as explained in Section 3. For training, the benchmark AV ChiME3 corpus is used, which was developed by mixing the clean Grid videos \cite{cooke2006audio} with the ChiME3 noises \cite{barker2015third} for SNRs ranging from -12dB to 12dB \cite{adeel2018contextual}. The preprocessing includes sentence alignment and the incorporation of prior audio and visual frames. Multiple prior visual frames are used to incorporate temporal information. The audio and visual features were extracted using log-FB and 2-dimensional discrete cosine transform (2D-DCT) methods. Further corpus-related and preprocessing details are presented in \cite{adeel2018contextual}\cite{adeel2018Lip}. Figure 9 depicts the prediction of clean logFB coefficients, where it can be seen that the multimodal RF+LCF+UCF model outperformed the multimodal RF+LCF (audio-visual) and unimodal RF (audio-only) models, achieving MSEs of 0.051, 0.064, and 0.072, respectively. The performance of a network level lip-reading driven deep learning approach for speech enhancement is presented in Figure 10 \cite{adeel2018Lip}. It is to be noted that the shallow SCNN with only 29 spiking conscious neurons performed comparably to the deep LSTM network, which had 550 hidden cells. The ongoing work includes exploiting optimized deep learning driven AV features from \cite{adeel2018Lip}\cite{adeel2018contextual} to train the SCNN. It is believed that the enhanced learning in an SCNN is due to the shared connections and shared local and universal contextual information. The SCNN discovered and exploited the associative relations between the features extracted within each of the RF, LCF, and UCF streams. 
We believe that the UCF helped establish the outside environment and anticipated behaviour and defined the roles of incoming multisensory information with respect to different situations, as shown in Figure 7 (e.g. the use of audio-visual cues in extremely noisy conditions and audio-only cues in relatively clean conditions). However, to further strengthen our proposed theory, we intend to study the properties of the conscious neuron(s) using advances in information decomposition methods. Specifically, we intend to quantify the suppression and attenuation using partial information decomposition methods and explore the properties of the conscious neuron and its functioning in terms of four basic arithmetic operators and their various forms \cite{kay2018contrasting}. Furthermore, we aim to critically analyze how the information is decomposed into components unique to each other having multiway mutual/shared information in a recurrent SCNN. \begin{figure} [!htb] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.65\textwidth]{conference/car} \caption{Driver behavioural model in two different environments using the conscious neural structure. Please note that the audio cues (defective audible reversing warning beeps), represented by an ambiguous RF, come from defective (partially working) parking sensors, which sometimes miscalculate the distance to nearby objects. Different surrounding environments (UCF) define the respective anticipated behaviours and establish the roles of incoming multisensory signals. For example, in a car-reversing situation with no blind spot, the driver can't rely only on parking sensors for precise maneuvering decisions, as the audio input is ambiguous due to the defective parking sensors. Therefore, in this situation, the conscious neuron defines the role of visual cues (LCF) as modulatory. 
In contrast, when there is a blind spot, the driver may have to rely on other modulatory signals to make an optimal decision along with the ambiguous RF.} \label{fig:Picture2} \end{figure} \begin{figure} [htb!] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=1\textwidth]{conference/ShallowSCNN} \caption{Driver behavioural modelling with a Shallow SCNN.} \label{fig:Picture2} \end{figure} \subsection{Case Study 2: Driver behavioural model} The gap between humans and machines is shrinking, and scientists are trying to develop more human-like computing devices. It is becoming increasingly important to develop computer systems that incorporate or enhance situation awareness. However, methods to reduce margins of uncertainty and minimize miscommunication need further exploration. The proposed conscious neural structure and its property of originating a controlled neural command, based on a precise multisensory signal integration with respect to the external environment, can help address these modelling challenges. For example, the proposed SCNN can help model more precise driving behaviour. Figure 11 and Figure 12 depict the driver behavioural model at the neural and network levels in two different surrounding environments: car reverse with no blind spot and car reverse with a blind spot. It is assumed that the parking sensors are not fully functional and at times miscalculate the distance to nearby objects. Therefore, the audio input is ambiguous, and the driver can't rely only on parking sensors for precise maneuvering decisions. In the first situation, where there is no blind spot, the driver leverages the complementary strengths of both AV cues. The visual cues modulate the ambiguous audio signal (RF). In contrast, when there is a blind spot, the driver may have to rely on other modulatory signals to make an optimal decision along with the ambiguous RF. 
In a nutshell, in any of the surrounding environments (UCF), multisensory information is available, but depending on the situation, the roles of incoming multisensory signals are defined to originate a precise control command (the driver's maneuvering decision) complying with the anticipated behaviour. \section{Discussion, Research Impact, and Future Directions} In this research, we introduced a novel theory on the role of awareness and universal context in an SCNN. The proposed theory sheds light on the selective amplification and attenuation of incoming multisensory information in the brain with respect to the external environment and anticipated behaviour. Specifically, it defines a guidance framework to study and model human behaviours and their underlying neural functioning in different conditions. The proposed SCNN is used to model human AV speech processing and driving behaviour. For AV speech modelling, the SCNN outperformed state-of-the-art multimodal (RF+LCF) and unimodal (RF-only) processing models. Similarly, in driver behavioural modelling, it is shown that the conscious neuron allows the modelling of more precise human driving behaviour. We hypothesize that the integration of RF, LCF, and UCF helped the SCNN discover and exploit the associative relations between the features extracted within each of the RF, LCF, and UCF streams. This integration and the shared local and universal contextual information enabled enhanced learning. We believe that the inherent SCNN properties ideally place it as a powerful tool for precise behavioural modelling. However, an in-depth analysis is required to further study the properties of the conscious neuron(s). In the future, we intend to use advances in information decomposition to quantify the suppression and attenuation using partial information decomposition methods. 
Ongoing work also includes the development of a hierarchical deep SCNN (HD-SCNN) by integrating multiple SCNNs, each responsible for a specific human behaviour such as audio processing, visual processing, etc. For the training of the HD-SCNN, we are using a theory of hypnosis for the selective training of a subnetwork (a single SCNN) without affecting other already well-trained models. The testing of the proposed theory using biomedical and clinical experimental methods is also part of our ongoing work. In subsequent sections, we present the application of the SCNN in developing more human-like computing devices, low-power neuromorphic chips, and models of sentiment and financial behaviours. \subsection{Research Impact} \subsubsection{Understanding Neurodegenerative Processes using Biomedical and Clinical Methods} Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Neurodegenerative processes in Alzheimer's disease (AD) and Parkinson's disease (PD) affect the structure and functioning of neurons, resulting in altered neuronal activity \cite{liebscher2016selective}. However, the cellular and neuronal circuit mechanisms underlying this disruption are elusive. Patients with AD suffer from sensory impairment and lack the ability to channelise awareness. Therefore, it is important to understand how the multisensory integration process changes in AD and why AD patients fail to guide their actions. Our ongoing work includes designing an appropriate subjective testing protocol using biomedical and clinical methods to observe the role of RF, LCF, and UCF in processing and learning. For example, studying AD and normal mice to observe differences in their multisensory integration processes with respect to different environmental conditions (e.g. circadian rhythms). The circadian context could be used as a UCF, e.g., to define day and night along with the associated expected behaviours. 
The incoming multisensory information, such as information from retinal ganglion cells (RGCs), could be used as RF/LCF in the proposed computational model. We believe that an integration of the proposed computational model with biomedical and clinical experiments can help in understanding the underlying disrupted neural processing in different medical conditions. Specifically, it can help explain the precise and imprecise neural firing in normal patients and neurodegenerative disorder patients, respectively, and how different medical conditions affect the functioning of neurons. The experimental observations, in the light of the proposed theory, can be quantified to develop improved normal/AD/PD models. \subsection{Low-power Neuromorphic Chips and Internet of Things (IoT) Sensors} The controlled firing property of the proposed conscious neural structure can help in developing highly energy-efficient (low-power) neuromorphic chips and IoT sensors. The proposed SCNN inherently leverages the complementary strengths of incoming multisensory signals with respect to the outside environment and anticipated behaviour. For example, as explained in Case Study 1, in high background noise the conscious neuron leverages the complementary strengths of both visual and audio cues to perceive ambiguous speech. In contrast, in low background noise, the audio cues are good enough to solve the problem. Consequently, in the case of low background noise, the synaptic weights associated with the input audio cues possess high synaptic strength, which leads to the firing of relevant neurons and the deactivation of neurons associated with visual cues. This precise neural firing behaviour stops unnecessary successive neural processing and power consumption, which could be very useful in developing low-power wireless sensors. Ongoing work also includes the development of low-power neuromorphic chips and IoT sensors based on our proposed theory. 
\subsection{Other Real-World Applications} The proposed theory can also be applied to address problems such as developing accurate financial market models, sentiment analysis models, etc. The authors in \cite{kraus2017decision} proposed decision support from financial disclosures using deep neural networks and transfer learning. Specifically, the authors used a deep recurrent neural network (LSTM) to automatically extract features from ordered sequences of words in order to capture highly non-linear relationships and context-dependent meanings. The authors demonstrated a higher directional accuracy, as compared to traditional machine learning methods, when predicting stock price movements in response to financial disclosures. Similar network level multimodal integration has been widely used for applications such as sentiment analysis, emotion recognition, and deception detection \cite{zou2018microblog} \cite{gogate2017novel}\cite{gogate2017deep}. For example, the authors in \cite{zou2018microblog} addressed the problem of ambiguous and context-aware tweets utilizing connected adjacent information with word-level and tweet-level context attention. However, such implementations exploit the temporal contextual information or LCF at the network level with no integration of the overall knowledge of the problem (awareness) at the neural level, restricting accurate modelling or precise behavioural representation. \section*{Acknowledgment} This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Grant No. EP/M026981/1 and deepCI.org. The author would like to greatly acknowledge Prof. Amir Hussain and Mandar Gogate from the University of Stirling for their contributions in implementing the lip-reading driven deep learning approach and contextual AV switching for speech enhancement, which are published previously and cited here for reference. The author would also like to acknowledge Prof. Bruce Graham, Prof. Leslie Smith, Prof. 
Peter Hancock, and Prof. Bill Phillips from the University of Stirling, Areej Riaz from the London Business School, and Dr Mino Belle and Prof. Andrew Randall from the University of Exeter for their help and support in several different ways including appreciation, motivation, and encouragement. \bibliographystyle{IEEEtran}
\section{Introduction} We stumbled upon a recent paper by John Conway and Alex Ryba \cite{ConwayRyba2016}, \textit{The extra Fibonacci series and the Empire State Building}. The title intrigued us, and our initial goal was to find the Empire State Building related to the Tribonacci sequences. The Conway-Ryba paper \cite{ConwayRyba2016} studies the Wythoff array. The array itself initially appeared in relation to the Wythoff game introduced by Wythoff \cite{Wythoff1907}. It is a two-player impartial combinatorial game with deep connections to the Fibonacci numbers. The Wythoff array is an infinite table that contains every positive integer exactly once. The integers increase along each row and column. The integers in the array can be naturally divided into pairs, and each pair is a P-position in the Wythoff game. In this paper, we do not try to generalize the Wythoff game. We use an independent definition of the Wythoff array using Fibonacci numbers. Every integer has a unique Zeckendorf representation, which can be considered as a Fibonacci-base representation. Namely, every integer can be represented as a sum of distinct Fibonacci numbers so that no two of them are consecutive. This representation can be encoded as a string of ones and zeros without two consecutive ones. The $n$th column of the Wythoff array consists of the numbers with $n-1$ zeros at the end of their Zeckendorf representation. For example, the first column consists of the numbers whose Zeckendorf representation ends in 1. A number $m$ not in the first column is the Fibonacci successor of the number $n$ located to the left of $m$ in the array. Conway and Ryba \cite{ConwayRyba2016} use the notation $m = \out(n)$. Conway and Ryba \cite{ConwayRyba2016} extended the array to the left and found some natural border lines that symmetrically surround the central column of the array. The lines form a shape resembling the Empire State Building. 
The Conway-Ryba paper \cite{ConwayRyba2016} contains a lot of statements that are called facts. We generalized half of these statements to the Tribonacci numbers. We also went on different tangents and added more statements unrelated to the paper. The Wythoff array's analog is known and called the Tribonacci array \cite{DucheneRigo2008,Keller1972}. In this paper, we call it the Trithoff array to emphasize the connection to the Wythoff array. We did not actually find an analog of the Empire State Building in the Trithoff array. This is because, when extending the Trithoff array to the left, we lose the symmetry properties of the array. But we have found many other things. Here we describe what is done in the paper, together with a road map. Section~\ref{sec:Preliminaries} covers the list of facts from the Conway-Ryba paper \cite{ConwayRyba2016} that we generalize. We state well-known facts about the Tribonacci numbers $T_n$, the Fibonacci word, and the Tribonacci word. We introduce the Tribonacci successor of an integer $n$, which we denote as $\out(n)$, by analogy with Conway and Ryba \cite{ConwayRyba2016}. It is known that the Tribonacci sequence grows approximately as a geometric series $\alpha^n$, where $\alpha$ is the Tribonacci constant. In Section~\ref{sec:bounds}, we estimate the value of $\out(n)$ as $\alpha n-0.85 < \out(n)<\alpha n+ 0.85$. We also study the difference $T_{n+1} - \alpha T_n$. We show that the sequence of such differences cannot have the same sign for any three consecutive terms and describe positive and negative records for such differences. In Section~\ref{sec:trithoffarray}, we discuss the Tribonacci array, which we call the Trithoff array to emphasize the connections to the Wythoff array. In Section~\ref{subsec:rowdiffsequences}, we discuss difference sequences for every row. We discuss how, for a given row, to find the row that represents its difference sequence. 
We prove that, given a row $r$, we can find another row such that $r$ is its difference sequence if and only if row $r$ is not the row in which even, even, odd, odd numbers repeat periodically. In Section~\ref{subsec:columndiffsequences}, we describe the difference sequence of the first column and show that it consists of twos and threes. We also show that a number in the Trithoff array in column $c$ and row $r$ can be approximated as $\frac{r\alpha^c}{\alpha-1}$. In Section~\ref{sec:precolumns}, we extend the Trithoff array to the left. We study the columns $-2$, $-1$, and 0, which we call, by analogy with Conway-Ryba \cite{ConwayRyba2016}, the pre-seed, seed, and wall. We describe these columns in terms of the first column. For example, we prove that the wall term $w$ is followed by $\out(w) - 1$ in the first column. The pre-seed, seed, and wall form new sequences, which we describe in full detail. In Section~\ref{sec:multiples}, we prove that any positive Tribonacci-like sequence has its tail appearing in the Trithoff array. We also study how multiples of Tribonacci-like sequences appear in the array. We prove that such multiples appear in order. In addition, we show that the $n$th multiple of a Tribonacci-like sequence has a row number congruent to 1 modulo $n$. We explain that when extending a positive Tribonacci-like sequence to the left, before some index $i$, the sequence cannot have three numbers with the same sign. Moreover, if we take the absolute values of the numbers before the index $i$ and reverse the sequence, it cannot be a Tribonacci-like sequence. This is a stark difference from the Fibonacci case. In Section~\ref{sec:tribinarrynumbers}, we describe Fibbinary, Tribbinary, Fibternary, and Tribternary numbers and their properties. We mention 23 existing sequences from the OEIS. 
We study and describe 13 new sequences: four of them are particular columns of the Trithoff array, three sequences are related to Tribonacci numbers, and six sequences are related to the Trithoff array but are not rows or columns. \begin{comment} New sequences A351631, A351685, A351689, A352719, A352748, A353083, A353084, A353086, A353090, A353178, A353193, A354215, A356823 A353083 The second column of the Trithoff (tribonacci) array. A353084 Column 0 of the extended Trithoff (tribonacci) array. A353086 Column -1 of the extended Trithoff (Tribonacci) array. A353090 Column -2 of the extended Trithoff (tribonacci) array. A352719 Indices k of tribonacci numbers T(k) such that T(k+1) - (tribonacci constant)*T(k) is nonnegative. A352748 Indices k of tribonacci numbers T(k) such that T(k+1) - (tribonacci constant)*T(k) is negative. A356823 Tribternary numbers. A351631 The numbers that are not doubled in column -1 of the extended Trithoff (tribonacci) array A351685 a(n) is the row of the Trithoff (tribonacci) array that contains the tails of the sequence which is n times the tribonacci numbers. A351689 a(n) is the number in the first column of the Trithoff (tribonacci) array that starts off the row containing the tail of n times the tribonacci sequence. A353178 The row numbers of the Trithoff (tribonacci) array that correspond to difference sequences of other rows of the Trithoff array. A353193 The row numbers of the Trithoff (tribonacci) array that don't correspond to difference sequences of other rows of the Trithoff array. A354215 a(n) is the row number of the Trithoff (tribonacci) array where we can find the tail of the following sequence: apply the difference operator n times to the tribonacci sequence. 
\end{comment} \section{Preliminaries}\label{sec:Preliminaries} \subsection{The extra Fibonacci series and the Empire State Building}\label{sec:EmpireState} We summarize the results relevant to us from the paper ``The extra Fibonacci series and the Empire State Building'' \cite{ConwayRyba2016} by John Conway and Alex Ryba. The paper contains many statements in the form of facts. We present here some facts from the paper that we plan to generalize. In the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, $\ldots$ (A000045), each term is the sum of the previous two. We say that this sequence follows the \textit{Fibonacci rule}. Integer sequences that follow the Fibonacci rule and end in positive integers are called \textit{extraFib series} or \textit{extraFibs}, see \cite{ConwayRyba2016}. Conway and Ryba use the word series to emphasize that the sequences can be extended in both directions. The \textit{Zeckendorf representation} of an integer $k$ is its expression as a sum of positive Fibonacci numbers, where each Fibonacci number can be used only once, and there can be no two consecutive Fibonacci numbers in the sum. \begin{fact}{2} The Zeckendorf expansion of $n$ is unique. \end{fact} Suppose integer $n$ has the Zeckendorf representation \[n = F_{i_1} + \cdots +F_{i_k}.\] We denote \textit{the Fibonacci successor} of $n$ by $\out(n)$, defined as \[\out(n) = F_{i_1+1} + \cdots +F_{i_k+1}.\] \begin{fact}{1} The function $\out(n)$ is well-defined. \end{fact} For the next fact, we denote the golden ratio by $\phi$. \begin{fact}{3} The unique integer in the open unit interval $(\phi n - \phi^{-2}, \phi n + 1 - \phi^{-2})$ is $\out(n)$. 
\end{fact} The Wythoff array is an infinite table $W_{m,n}$, with $m,n > 0$, where $W_{m,n}$ denotes the entry in row $m$ and column $n$ of the array, and where \begin{itemize} \item $W_{m,1}=\left\lfloor \lfloor m\phi \rfloor \phi \right\rfloor$, \item $W_{m,2}=\left\lfloor \lfloor m\phi \rfloor \phi ^{2}\right\rfloor$, \item $W_{m,n} = W_{m,n-2} + W_{m,n-1}$ for $n > 2$. \end{itemize} We see that each row of the array is an extraFib, and thus, we can continue each sequence to the left. Table~\ref{table:theGardenCR} shows the corner of the Wythoff array with two more columns on the left. \begin{table}[ht!] \begin{center} \begin{tabular}{c|c!{\vrule width 1pt}ccccccccc} 0&1&1&2&3&5&8&13&21&34& \dots\\ 1&3&4&7&11&18&29&47&76&123& \dots\\ 2&4&6&10&16&26&42&68&110&178& \dots\\ 3&6&9&15&24&39&63&102&165&267& \dots\\ 4&8&12&20&32&52&84&136&220&356& \dots\\ 5&9&14&23&37&60&97&157&254&411& \dots\\ 6&11&17&28&45&73&118&191&309&500& \dots\\ 7&12&19&31&50&81&131&212&343&555& \dots\\ 8&14&22&36&58&94&152&246&398&644& \dots\\ 9&16&25&41&66&107&173&280&453&733& \dots\\ 10&17&27&44&71&115&186&301&487&788& \dots\\ \end{tabular} \end{center} \caption{The Garden State} \label{table:theGardenCR} \end{table} Conway and Ryba \cite{ConwayRyba2016} call the table the Garden State. The numbers in the first column are called the \textit{seed} terms; the numbers in the second column, the \textit{wall} terms; and the other numbers are called \textit{the Garden}. The Garden grows out in the sense that for any term $n$ in the Garden, the next term to the right is $\out(n)$. The first row consists of the Fibonacci numbers, while the second row consists of the Lucas numbers. The next three rows are 2, 3, and 4 times the Fibonacci numbers. The sixth row is called the Pibonacci numbers (A104449). When extended to the left, the first few terms of the sequence look like the digits of the number Pi: 3, 1, 4, 5, 9. 
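The closed-form entries above can be checked directly. A minimal sketch in Python, assuming floating-point precision for $\phi$ (accurate enough for small $m$):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def wythoff_row(m, length):
    """Row m (1-indexed) of the Wythoff array: the first two entries come
    from the closed forms W_{m,1} and W_{m,2}, the rest from the Fibonacci rule."""
    row = [math.floor(math.floor(m * PHI) * PHI),
           math.floor(math.floor(m * PHI) * PHI ** 2)]
    while len(row) < length:
        row.append(row[-1] + row[-2])
    return row
```

For example, row 1 reproduces the Fibonacci numbers 1, 2, 3, 5, 8, ... and row 2 the Lucas numbers 4, 7, 11, 18, ..., matching Table~\ref{table:theGardenCR}.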
We can see that the first column of the Wythoff array consists of integers whose Zeckendorf representation ends in 1. It is easy to prove the following structure of Table~\ref{table:theGardenCR}. \begin{fact}{5} (a) A garden term $n$ is followed by $\out(n)$. (b) A seed term $s$ is followed by $\out(s) + 1$. (c) A wall term $w$ is followed by $\out(w) - 1$. \end{fact} We also have the following amazing fact about how natural numbers appear in the Garden State. \begin{fact}{6} Every positive integer appears exactly once in the Garden and once as a seed, and zero also appears just once as a seed. \end{fact} The next fact explains how extraFibs appear in the array. \begin{fact}{7} Every series that satisfies Fibonacci's rule and ends with positive integers is represented in the Garden State. \end{fact} By the words ``is represented'' they mean that given an extraFib (which may be extended in both directions), we can find a number $k$, such that all the terms of the extraFib starting from index $k$ form a row in the Garden State. \begin{fact}{8} If $X_n$ is any extraFib series, so too is any positive multiple $mX_n$. \end{fact} \begin{fact}{9} The multiples of any extraFib series appear in order in the Garden State. \end{fact} The parts above related to the Garden, also known as the Wythoff array, are widely known. The paper \cite{ConwayRyba2016} also describes how to find each particular extraFib in the Garden State. Given the extraFib $S_n$, we expand it to the left and right and find the out values of every term. The wall term $S_w$ corresponds to the largest $w$ such that $S_{w+1} = \out(S_w) - 1$. It is well-known that if we continue extraFibs to the left, negative integers will alternate with positive integers. Thus, Conway and Ryba defined a \textit{reversal} of an extraFib, which is obtained by reversing the order and changing negative signs to positive ones. One can check the following fact. 
\begin{fact}{10} The reversal of an extraFib series is also an extraFib series. \end{fact} In our paper, we prove analogs of the facts described above for the Tribonacci case. We denote our analogs of these facts with a letter T after the number. But first, we need some more information about Fibonacci numbers. \subsection{The Fibonacci word} The \textit{Fibonacci word} is an infinite word over a two-letter alphabet formed by the following infinite process. Let $S_{0} = a$ and $S_{1} = ab$. Now $S_{n}=S_{n-1}S_{n-2}$, the concatenation of the previous word and the one before that. The infinite Fibonacci word is the limit $S_{\infty}$, that is, the unique infinite sequence that contains each $S_{n}$, for finite $n$, as a prefix. The first few terms of the Fibonacci word are described in sequence A003849: \[abaababaabaab\ldots.\] The positions of $a$'s form the lower Wythoff sequence A000201. These are the numbers from the odd-numbered columns in the Wythoff array. Moreover, these are the wall numbers. The positions of $b$'s form the complementary sequence: the upper Wythoff sequence A001950. Now we describe the background for the Tribonacci numbers. \subsection{Tribonacci numbers}\label{sec:tribonacci} The Tribonacci numbers $T_n$ start with $T_0 = 0$, $T_1 = 0$, $T_2 = 1$, and each Tribonacci number thereafter is calculated by summing up the previous three numbers: \[T_{n} = T_{n-1} + T_{n-2} + T_{n-3}\] for $n > 2$. The Tribonacci sequence, A000073, is thus \[0,\ 0,\ 1,\ 1,\ 2,\ 4,\ 7,\ 13,\ 24,\ 44,\ 81,\ 149,\ 274,\ 504,\ 927,\ 1705,\ \ldots.\] The \textit{Tribonacci representation} of an integer $k$ is its expression as a sum of positive Tribonacci numbers, where each Tribonacci number can be used only once, and there can be no three consecutive Tribonacci numbers in the sum. Note that the number 1 can be used only once, even though it appears twice in the sequence. The Tribonacci representation can be viewed as numbers written in the Tribonacci base. 
We denote the Tribonacci representation of an integer $N$ as $(N)_T$ and the evaluation of a string $w$ written in the Tribonacci base as $[w]_T$. For example, 9 can be expressed as $7+2$, making 1010 the Tribonacci representation of 9, where the rightmost digit 0 represents that there is no 1 in the sum, the 1 to the left of that represents that there is a 2, the 0 to the left of that represents that there are no 4s, and finally the leftmost 1 represents that there is one 7. Thus, we write $(9)_T = 1010$ and $[1010]_T = 9$. One can find the unique Tribonacci representation by using a greedy algorithm. Start by finding the largest Tribonacci number not exceeding our number and subtract it. Then, repeat the process. This method ensures that no three consecutive Tribonacci numbers are used. This is similar to Fact 2 in the Conway and Ryba paper \cite{ConwayRyba2016}, which states that the Zeckendorf representation is unique. The following theorem is proved in \cite{Keller1972}. \begin{fact}{2T} Every natural number has a unique Tribonacci representation. \end{fact} Consider the characteristic equation for the Tribonacci sequence: \[x^3-x^2-x-1=0.\] We denote its real root, the Tribonacci constant, by $\alpha$, and its pair of complex-conjugate roots by $\beta$ and $\gamma$. We can define the Tribonacci successor similarly to how the Fibonacci successor is defined. Suppose integer $n$ has the Tribonacci representation \[n = T_{i_1} + \cdots +T_{i_k}.\] We define the \textit{Tribonacci successor} of $n$ as $\out(n)$, where \[\out(n) = T_{i_1+1} + \cdots +T_{i_k+1}.\] The following analog of Fact 1 follows from the uniqueness of the Tribonacci representation. \begin{fact}{1T} The Tribonacci successor is well-defined. \end{fact} The Tribonacci numbers grow approximately as a geometric series with the ratio equal to the Tribonacci constant. 
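The greedy representation and the successor map can be sketched as follows. We resolve the ambiguity $T_2 = T_3 = 1$ by always using $T_3$ for the units digit, so that the digit shift $(1)_T \mapsto (10)_T$ gives $\out(1) = T_4 = 2$; the index cutoff of 50 is an arbitrary working bound.

```python
def tribonacci(n_terms):
    """T_0, T_1, T_2, ... = 0, 0, 1, 1, 2, 4, 7, 13, 24, 44, ..."""
    t = [0, 0, 1]
    while len(t) < n_terms:
        t.append(t[-1] + t[-2] + t[-3])
    return t[:n_terms]

T = tribonacci(50)

def trib_indices(n):
    """Greedy Tribonacci representation: indices i >= 3 with n = sum of T_i."""
    idx = []
    while n > 0:
        i = max(j for j in range(3, len(T)) if T[j] <= n)
        idx.append(i)
        n -= T[i]
    return idx

def out(n):
    """Tribonacci successor: shift every index in the representation by one."""
    return sum(T[i + 1] for i in trib_indices(n))
```

For example, $9 = T_6 + T_4$, so $\out(9) = T_7 + T_5 = 13 + 4 = 17$, matching the digit shift $(1010)_T \mapsto (10100)_T$.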
Thus, \[\out(n) \approx \alpha n.\] \subsection{The Tribonacci word}\label{sec:tribonacciword} The \textit{Tribonacci word} is the limit of the sequence of words $W(n)$, where $W(n)$ is a string of the letters $a$, $b$, and $c$ formed in the following manner: $W(0) = a$, $W(1) = ab$, and $W(2) = abac$. Then $W(n) = W(n-1)W(n-2)W(n-3)$ is the concatenation of the previous Tribonacci word, the one before it, and the one before that: \[abacabaabacababac.\] The following theorem \cite{DucheneRigo2008} by Duch\^{e}ne and Rigo connects the positions of different letters in the Tribonacci word with the Tribonacci representations of said positions. \begin{theorem}[\cite{DucheneRigo2008}] \label{thm:abcCorrespondsTo0-01-11} The $n$th symbol of the Tribonacci word is $a$, $b$, or $c$ if the Tribonacci representation of $n-1$ ends in 0, 01, or 11, respectively. \end{theorem} This is equivalent to the following statement: the $n$th symbol of the Tribonacci word is $a$, $b$, or $c$ if the number of trailing zeros in the Tribonacci representation of $n$ is congruent to 0, 1, or 2 modulo 3, respectively. Equivalently, the $n$th symbol of the Tribonacci word is $a$, $b$, or $c$ if $n$ appears in a column of the Trithoff array whose number has remainder 1, 2, or 0 modulo 3, respectively. The following three complementary sequences correspondingly describe the positions of the letters $a$, $b$, and $c$ in the Tribonacci word. Sequence A003144 describes the positions of the letter $a$ in the Tribonacci word. \[1,\ 3,\ 5,\ 7,\ 8,\ 10,\ 12,\ 14,\ 16,\ 18,\ 20,\ 21,\ 23,\ 25,\ 27,\ 29,\ 31,\ \ldots.\] It is known \cite{DSS2019} that A003144($n$) is always either $\lfloor \alpha n\rfloor$ or $\lfloor \alpha n + 1 \rfloor$ for all $n$, where $\alpha$ is the Tribonacci constant. In other words, $\alpha n - 1 < A003144(n) < \alpha n + 1$. Sequence A003145 describes the positions of the letter $b$ in the Tribonacci word. 
\[2,\ 6,\ 9,\ 13,\ 15,\ 19,\ 22,\ 26,\ 30,\ 33,\ 37,\ 39,\ 43,\ 46,\ 50,\ 53,\ 57,\ \ldots.\] Sequence A003146 describes the positions of the letter $c$ in the Tribonacci word. \[4,\ 11,\ 17,\ 24,\ 28,\ 35,\ 41,\ 48,\ 55,\ 61,\ 68,\ 72,\ 79,\ 85,\ 92,\ 98,\ 105, \ldots.\] \section{Bounds on the successor}\label{sec:bounds} The Tribonacci sequence grows approximately as a geometric progression with the ratio $\alpha$. It is well-known that for $n > 2$, we have $T_n < \alpha^{n-2} < T_{n+1}$, see \cite{BravoLuca2013}. We are interested in a Tribonacci analog of Fact 3, which provides bounds on the Fibonacci successor. We start by estimating the growth of the Tribonacci sequence in terms of the roots of the characteristic equation defined in the previous section. \begin{lemma} \label{lemma:alphadiff} We have \[|T_{n+1}-\alpha T_n | <\frac{2\alpha^{-\frac{n}{2}}}{|\beta-\gamma|}.\] \end{lemma} \begin{proof} The explicit formula for $T_n$ in terms of roots of the characteristic equation is well-known \cite{Spickerman1980}: \[T_n=\frac{\alpha^n}{(\alpha-\beta)(\alpha-\gamma)}+\frac{\beta^n}{(\beta-\alpha)(\beta-\gamma)}+\frac{\gamma^n}{(\gamma-\alpha)(\gamma-\beta)}.\] We compute \begin{multline*} T_{n+1}-\alpha T_n=\frac{\alpha^{n+1}}{(\alpha-\beta)(\alpha-\gamma)}+\frac{\beta^{n+1}}{(\beta-\alpha)(\beta-\gamma)}+\frac{\gamma^{n+1}}{(\gamma-\alpha)(\gamma-\beta)} - \\ \frac{\alpha\alpha^n}{(\alpha-\beta)(\alpha-\gamma)}-\frac{\alpha\beta^n}{(\beta-\alpha)(\beta-\gamma)}-\frac{\alpha\gamma^n}{(\gamma-\alpha)(\gamma-\beta)} = \frac{\beta^n}{\beta-\gamma}+\frac{\gamma^n}{\gamma-\beta}. \end{multline*} Taking the absolute value of both sides and using the triangle inequality, we get $|T_{n+1}-\alpha T_n| \leq |\frac{\beta ^n}{\beta - \gamma}| + |\frac{\gamma ^n}{\gamma - \beta}|$. Since $|\beta|=|\gamma|$, we get $|T_{n+1}-\alpha T_n| \leq 2\frac{|\beta| ^n}{|\beta - \gamma|}$. Since $|\beta|=|\gamma|$ and $\alpha\beta\gamma=1$, we know that $|\beta|^n=\alpha^{-\frac{n}{2}}$. The lemma follows.
\end{proof} \begin{fact}{3T} If $\out(n)$ is the Tribonacci successor of $n$, then $\alpha n-0.85 < \out(n)<\alpha n+ 0.85$. \end{fact} \begin{proof} Suppose the Tribonacci representation of $n$ is $T_a+T_b+\cdots$, where $3 \le a < b < \cdots$. Then the successor is $T_{a+1}+T_{b+1}+\cdots$. By Lemma~\ref{lemma:alphadiff}, we have \[| \out(n)-\alpha n|<\frac{2\alpha^{-\frac{a}{2}}}{|\beta-\gamma|}+\frac{2\alpha^{-\frac{b}{2}}}{|\beta-\gamma|}+\cdots.\] Given that the Tribonacci representation excludes three consecutive Tribonacci numbers, we can bound the expression as \begin{multline*} \frac{2}{|\beta-\gamma|}\left(\alpha^{-\frac{3}{2}} + \alpha^{-\frac{4}{2}} + \alpha^{-\frac{6}{2}} + \alpha^{-\frac{7}{2}} + \cdots\right) = \\ \frac{2}{|\beta-\gamma|}\left(\alpha^{-\frac{3}{2}} + \alpha^{-\frac{4}{2}}\right)\sum_{m=0}^\infty \alpha^{-\frac{3m}{2}} = \frac{2(\alpha^{-\frac{3}{2}} + \alpha^{-\frac{4}{2}})}{|\beta-\gamma|(1-\alpha^{-\frac{3}{2}} )} \approx 1.91746 < 2. \end{multline*} We can improve this bound further by adding computational results. We calculated the largest possible difference for numbers that have a Tribonacci representation with at most 35 Tribonacci digits. The largest possible difference was less than 0.849. By a similar calculation to the one above, the larger digits can contribute to the difference no more than \[\frac{2(\alpha^{-\frac{36}{2}} + \alpha^{-\frac{37}{2}})}{|\beta-\gamma|(1-\alpha^{-\frac{3}{2}} )} < 0.0001.\] Thus, the maximum difference is not more than 0.85. \end{proof} Consider the sequence of integers $n$ such that $T_{n+1} - \alpha T_n$ is positive (this is now sequence A352719): \[0,\ 1,\ 3,\ 4,\ 6,\ 7,\ 9,\ 10,\ 12,\ 15,\ 18,\ 21,\ 24,\ 26,\ 27,\ 29,\ 30,\ 32,\ 33,\ 35,\ 36,\ 38,\ 41,\ 44,\ \ldots.\] Let us denote this sequence as $P(k)$, where $P(0) = 0$ and $P(1) = 1$. We denote by $Q(n)$ the complementary sequence, with $Q(1) = 2$.
The sequence $Q(n)$ is then the sequence of integers $n$ such that $T_{n+1} - \alpha T_n$ is negative (this is now sequence A352748): \[2,\ 5,\ 8,\ 11,\ 13,\ 14,\ 16,\ 17,\ 19,\ 20,\ 22,\ 23,\ 25,\ 28,\ 31,\ 34,\ 37,\ 39,\ 40,\ 42,\ 43,\ 45,\ \ldots.\] \begin{proposition} The value $T_{n+1} - \alpha T_n$ cannot have the same sign for any three consecutive integers $n$. \end{proposition} \begin{proof} We know that $T_{i+1}-\alpha T_{i}=2\Re(\frac{\beta^i}{\beta-\gamma})$, where $\Re$ is the real part of a complex number. Now $2\Re(\frac{\beta^i}{\beta-\gamma})=\frac{2|\beta|^i}{|\beta-\gamma|}\sin(i\psi)$, where $\psi$ is the polar angle of $\beta$, and we take $\beta$ to be the root with positive imaginary part. We can calculate $\psi \approx 2.17623$ radians, so $\pi < 2\psi < 2\pi$. Hence, if $\sin(i\psi)$ and $\sin((i+2)\psi)$ had the same sign, the angles $i\psi$ and $(i+2)\psi$ would lie in the same half of the circle, and the intermediate angle $(i+1)\psi$ would then lie in the opposite half, making $\sin((i+1)\psi)$ of the opposite sign. This means there are not three consecutive integers where $T_{i+1}-\alpha T_i$ has the same sign. \end{proof} It follows that $P(n) - P(n-1) \le 3$, and the sequence $P(n)$ does not contain three consecutive numbers. An analogous statement is true for $Q(n)$. We now want to find numbers that create new records in the differences. \begin{proposition} Numbers that create new records in positive differences have the following properties. \begin{enumerate} \item If $k$ is a positive record, then all the indices in its Tribonacci representation belong to the sequence $P(n)$. \item For any $k$, the number $T_{P(2)} + T_{P(3)} + \cdots + T_{P(k-1)} + T_{P(k)}$ creates a new record for positive differences. \item For every $t > 1$, there exists $N_0$ such that the $N$th positive (negative) record for any $N > N_0$ contains $T_{P(t)}$ in its Tribonacci representation. \end{enumerate} Similar statements are true for negative differences. \end{proposition} \begin{proof} Consider the Tribonacci representation of $n$. Suppose $n$ achieves a new positive record, and suppose its Tribonacci representation contains $T_j$ such that $\out(T_j) - \alpha T_j < 0$; that is, $j\neq P(t)$ for any $t > 1$. Then consider the number $n - T_j$.
The corresponding difference $\out(n-T_j) - \alpha(n - T_j)$ equals $\out(n) - \out(T_j) - \alpha n + \alpha T_j > \out(n) - \alpha n$. Thus, the difference for a smaller number $n-T_j$ exceeds the difference for a larger number $n$, which means $n$ cannot be a record. Hence, the Tribonacci representations of records must only have terms of the form $T_{P(t)}$ for some $t > 1$. Moreover, suppose we have a number $n$ with Tribonacci representation $T_{P(2)} + T_{P(3)} + \cdots + T_{P(k-1)} + T_{P(k)}$ for some $k$. Consider a number $m < n$. Its Tribonacci representation can be built by removing some of the terms $T_{P(j)}$ for $j \leq k$ and adding terms of the form $T_{Q(i)}$. That is, we remove terms contributing a positive difference and add terms contributing a negative difference. Then $\out(n) - \alpha n > \out(m) - \alpha m$. So $n$ is a record. Finally, we show that for $t>1$, any term $T_{P(t)}$ is contained in the Tribonacci representation of every sufficiently large record. First, as we just showed, for every $M$, the number $N_0 =T_{P(2)} + T_{P(3)} + \cdots + T_{P(M-1)} + T_{P(M)}$ is a record. Suppose $N > N_0$ does not contain $T_{P(t)}$ in its Tribonacci representation. Then $(\out(N_0) - \alpha N_0) - (\out(N) - \alpha N) \ge T_{P(t)+1} - \alpha T_{P(t)} - (T_{P(a_0)+1} - \alpha T_{P(a_0)} + T_{P(a_1)+1} - \alpha T_{P(a_1)} + \cdots)$, where the $a_i$ are all greater than $M$. But from Lemma~\ref{lemma:alphadiff}, we know that $T_{P(a_i)+1} - \alpha T_{P(a_i)} \le 2\frac{|\beta|^{P (a_i)}}{|\beta-\gamma|}$. Thus, $T_{P(a_0)+1} - \alpha T_{P(a_0)} + T_{P(a_1)+1} - \alpha T_{P(a_1)} + \cdots$ is bounded by $\frac{2}{|\beta-\gamma|}(|\beta|^{M+1} + |\beta|^{M+2} + \cdots) = \frac{2|\beta|^{M+1}}{|\beta-\gamma|(1-|\beta|)}$, which goes to 0 as $M$ goes to infinity. 
We know that $T_{P(t)+1} -\alpha T_{P(t)}$ is positive, so picking a sufficiently large $M$, we can ensure that $(T_{P(a_0)+1} - \alpha T_{P(a_0)} + T_{P(a_1)+1} - \alpha T_{P(a_1)} + \cdots)$ is smaller than $T_{P(t)+1} - \alpha T_{P(t)}$. Thus, $(\out(N_0) - \alpha N_0) - (\out(N) - \alpha N)$ is positive, and $N$ is not a record. The argument for negative records is similar. \end{proof} Table~\ref{table:positivediff} displays the numbers $n$ that set a new record for the positive difference $\out(n)-\alpha n$. The first column shows $n$, the second column shows the approximate value of $\out(n)-\alpha n$, and the last column shows indices of Tribonacci numbers in the Tribonacci representation of $n$. The proposition above tells us that the numbers in the right column belong to sequence $P(k)$. Moreover, in the limit, the numbers approach the sequence $P(k)$ in reverse order. The data for negative records is in Table~\ref{table:negativediff}. \begin{table}[ht!] \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{|c|c|r|} 1 &0.1607132 &[3]\\ 2 &0.3214264 &[4]\\ 3 &0.4821397 &[4, 3]\\ 10 &0.6071324 &[6, 4, 3]\\ 23 &0.6964046 &[7, 6, 4, 3]\\ 67 &0.7677874 &[9, 7, 6, 4, 3]\\ 148 &0.7855602 &[10, 9, 7, 6, 4, 3]\\ 341 &0.8032164 &[12, 9, 7, 6, 4, 3]\\ 422 &0.8209892 &[12, 10, 9, 7, 6, 4, 3]\\ \end{tabular} \subcaption{Records for positive difference.} \label{table:positivediff} \end{subtable} \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{|c|c|r|} 4 &$-0.3571470$&[5]\\ 28 &$-0.5000291$&[8, 5]\\ 177 &$-0.5537556$&[11, 8, 5]\\ 681 &$-0.5542803$&[13, 11, 8, 5]\\ 1104 &$-0.5725777$&[14, 11, 8, 5]\\ 1608 &$-0.5731023$&[14, 13, 11, 8, 5]\\ 4240 &$-0.5758421$&[16, 14, 11, 8, 5]\\ 4744 &$-0.5763667$&[16, 14, 13, 11, 8, 5]\\ 6872 &$-0.5785818$&[17, 14, 11, 8, 5]\\ \end{tabular} \subcaption{Records for negative difference.} \label{table:negativediff} \end{subtable} \caption{Records for positive and negative difference.} \end{table} \section{Trithoff
array}\label{sec:trithoffarray} We want to build the analog of the Wythoff array for Tribonacci numbers. We call it the \textit{Trithoff} array. Its first column consists of the numbers, in increasing order, whose Tribonacci representation ends in 1. For a number $n$ in the array, the number to the right is $\out(n)$. Thus, the second column consists of the numbers whose Tribonacci representation ends in 10, and so on. Table~\ref{table:trithoffarray} shows the Trithoff array, where the last column gives the OEIS \cite{OEIS} sequence numbers of the corresponding rows. The array itself is in the OEIS, read by antidiagonals, as sequence A136175, where it is called the Tribonacci array. By analogy with Conway and Ryba \cite{ConwayRyba2016}, we will later extend the Trithoff array; we call the main part of the array the Garden State. \begin{table}[ht!] \begin{center} \begin{tabular}{ccccccc|r} 1 & 2 & 4 & 7 & 13 & 24 & \ldots & A000073\\ 3 & 6 & 11 & 20 & 37 & 68 & \ldots & A001590\\ 5 & 9 & 17 & 31 & 57 & 105 & \ldots & A000213\\ 8 & 15 & 28 & 51 & 94 & 173 & \ldots & A214899\\ 10 & 19 & 35 & 64 & 118 & 217 & \ldots & A020992\\ 12 & 22 & 41 & 75 & 138 & 254 & \ldots & A100683\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \end{tabular} \end{center} \caption{The Trithoff array.} \label{table:trithoffarray} \end{table} Here are some properties of the Trithoff array. In the Trithoff array, the numbers in columns with index equal to 1, 2, or 0 modulo 3, when sorted, form the positions of letters $a$, $b$, and $c$ in the Tribonacci word, respectively. We denote the element in row $i$ and column $j$ as $T_{i,j}$. In particular, the first row is a shifted Tribonacci sequence: $T_{1,k} = T_{k+2}$. Similar to extraFibs, we call integer sequences that follow the Tribonacci rule and are eventually positive \textit{extraTrib series} or \textit{extraTribs}. The sequences that just follow the Tribonacci rule we call \textit{Tribonacci-like} sequences.
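The constructions described so far — the greedy Tribonacci representation, the successor map, and the rows of the Trithoff array — can be sketched in Python. This is a minimal illustration; all function names are our own, not taken from any library.

```python
def trib_rep(n):
    """Greedy Tribonacci representation of n >= 1 as a list of 0/1
    digits, most significant first; e.g. trib_rep(9) == [1, 0, 1, 0]."""
    ts = [1, 2, 4]                      # Tribonacci values 1, 2, 4, 7, 13, ...
    while ts[-1] + ts[-2] + ts[-3] <= n:
        ts.append(ts[-1] + ts[-2] + ts[-3])
    digits = []
    for t in reversed([t for t in ts if t <= n]):
        if t <= n:                      # greedy: take the largest value that fits
            digits.append(1)
            n -= t
        else:
            digits.append(0)
    return digits

def trib_val(digits):
    """Evaluate a digit string in the Tribonacci base: [1, 0, 1, 0] -> 9."""
    ts = [1, 2, 4]
    while len(ts) < len(digits):
        ts.append(ts[-1] + ts[-2] + ts[-3])
    return sum(d * t for d, t in zip(reversed(digits), ts))

def successor(n):
    """The Tribonacci successor out(n): shift the representation left."""
    return trib_val(trib_rep(n) + [0])

def trithoff_row(r, ncols=6):
    """Row r (1-indexed) of the Trithoff array: the r-th positive integer
    whose Tribonacci representation ends in 1, followed by successors."""
    col1 = [n for n in range(1, 1000) if trib_rep(n)[-1] == 1]
    row = [col1[r - 1]]
    while len(row) < ncols:
        row.append(successor(row[-1]))
    return row
```

For instance, `trithoff_row(2)` returns `[3, 6, 11, 20, 37, 68]`, matching the second row of Table~\ref{table:trithoffarray}.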
It is well-known that any extraTrib can be expressed through Tribonacci numbers. For example, consider an extraTrib sequence $S_n$, where $S_0 = x$, $S_1 = y$, and $S_2 = z$. Then, \[S_n = xT_{n-1} + y (T_{n+1} - T_n) + z T_n.\] This allows one to derive many formulae connecting the values of different rows. For example, \begin{itemize} \item $T_{2,n} = T_{1,n}+T_{1,n+1} = T_{1,n+3} - T_{1,n+2} = T_{1,n+2} - T_{1,n-1}$, \item $T_{3,n} = T_{2,n-1}+T_{2,n} = T_{2,n+1} - T_{2,n-2} = T_{1,n} + T_{1,n+2} = T_{n+1} + 2T_{n+2} + T_{n+3}$. \end{itemize} Column 1 is sequence A003265: \[1,\ 3,\ 5,\ 8,\ 10,\ 12,\ 14,\ 16,\ 18,\ 21,\ 23,\ 25,\ 27,\ 29,\ 32,\ 34,\ 36,\ 38,\ 40,\ \ldots.\] We can estimate the growth rate of this sequence. Each next term is either 2 or 3 more than the previous term. \begin{proposition} The ratio of a number to its index in sequence A003265 above approaches \[\frac{\alpha^2+1}{2} = \frac{\alpha}{\alpha-1}\] as the index approaches infinity. \end{proposition} This ratio corresponds to the density of numbers whose Tribonacci representation ends in 1. \begin{proof} We want to see how many numbers that are less than $b$ end with a 1 in their Tribonacci representation. These numbers are either in the form of $w011$ or $w01$. Suppose $a$ is the number with Tribonacci representation $w$. Then $(\out(\out(\out(a))) + 3)_T = w011$ and $(\out(\out(a)) + 1)_T = w01$. Recall $\out(n) \approx \alpha n$. For the number with Tribonacci representation $w011$ to be less than $b$, we need $\alpha^3 a < b$ approximately. Thus the number of such $a$'s is about $\frac{b}{\alpha^3}$. Similarly, by the same logic for the second case, we need $\alpha^2 a < b$ approximately, and the number of such numbers is approximately $\frac{b}{\alpha^2}$. Thus, the number of numbers less than $b$ with Tribonacci representation ending in 1 is approximately $\frac{b}{\alpha^3} + \frac{b}{\alpha^2}$.
Suppose A003265$(r) = n$, or equivalently, the value in column 1 and row $r$ of the Trithoff array is $n$. This means that there are $r-1$ numbers less than $n$ with Tribonacci representations ending in 1. Using what we previously calculated, $r \approx \frac{n}{\alpha^3} + \frac{n}{\alpha^2}$. Isolating $n$, we get that $n \approx \frac{\alpha^3}{\alpha+1}r= \frac{\alpha}{\alpha-1}r$, which proves the proposition. \end{proof} \begin{corollary} \label{cor:Trithoffformula} A number in column $c$ and row $r$ of the Trithoff array can be estimated as \[\frac{r\alpha^c}{\alpha - 1}.\] \end{corollary} Column 2 is now sequence A353083: \[2,\ 6,\ 9,\ 15,\ 19,\ 22,\ 26,\ 30,\ 33,\ 39,\ 43,\ 46,\ 50,\ 53,\ 59,\ 63,\ 66,\ 70,\ \ldots.\] Given a sequence $S_i$, the \textit{difference sequence} $D_i$ is defined as $D_i = S_{i+1} - S_i$. We can see that the difference sequence of an extraTrib is an extraTrib. Suppose a row starts in $[a1]_T$. What is the row number? Let us first introduce a new base $U$, based on the second row of the array. Let us denote $U_i = T_{2,i-2} = T_{i+3} - T_{i+2}$. Suppose we consider a Tribonacci-like representation system based on the second row of the array. Then using this system, $[\ldots a_4a_3a_2a_1a_0]_U$ is equal to $\cdots + a_4U_4 + a_3U_3 + a_2U_2 + a_1U_1 + a_0U_0 = \cdots + 6a_4 + 3a_3 + 2a_2 + a_1 + 0a_0$. Such a representation is not unique, since $a_0$ does not affect the value. Also, $[1000]_U = [110]_U = 3$. However, the usefulness of this system is shown in the following lemma. \begin{lemma} \label{lemma:rownumber} If a row of the Trithoff array starts with $[a1]_T$, then the row number is $[a1]_T-[a]_T = 1 + [a1]_U$. \end{lemma} \begin{proof} The row number is the number of rows from the first to that row. Each row starts with a number whose Tribonacci representation ends in 1. There are $[a1]_T$ positive integers not exceeding $[a1]_T$.
Out of those, the numbers with Tribonacci representation ending in zero are of the form $[b0]_T$ with $[b]_T \le [a]_T$, and there are exactly $[a]_T$ of them. Therefore, the total number of numbers not exceeding $[a1]_T$ and with Tribonacci representation ending in 1 is $[a1]_T-[a]_T = 1 + [a0]_T-[a]_T$. Using the fact that $U_i = T_{i+3} - T_{i+2}$, we get $[a1]_T-[a]_T = 1 + [a0]_U = 1 + [a1]_U$. \end{proof} \subsection{Rows of the Trithoff array and their difference sequences} \label{subsec:rowdiffsequences} The difference sequence of a Fibonacci-like sequence is again the same sequence with the index shifted by 1. In the Tribonacci case, the situation is far more interesting. The difference sequence of the first row of the Trithoff array is the second row. The difference sequence of the second row is the third row. The difference sequence of the third row is the seventh row. When we continue, we get the following sequence $a(n)$, which is now sequence A354215. In this sequence, $a(n+1)$ is the row in the Trithoff array corresponding to the difference sequence of row $a(n)$: \[1,\ 2,\ 3,\ 7,\ 19,\ 29,\ 81,\ 125,\ 353,\ 161,\ 1545,\ 705,\ 2001,\ \ldots.\] The following proposition describes the difference sequence in terms of the Tribonacci representation of the given sequence. \begin{proposition} If a row contains a number with Tribonacci representation $w$, then its difference sequence contains the number $[11 \cdot w]_T$, where the multiplication is done in any integer base larger than 2, so that no carries occur. \end{proposition} \begin{proof} We consider the difference sequences in terms of Tribonacci representations. First consider two consecutive Tribonacci numbers: $(T_n)_T = 10^{n-3}$ and $(T_{n+1})_T = 10^{n-2}$. Then their difference is $T_{n+1} - T_n = T_{n-2} + T_{n-1}$, whose Tribonacci representation is $(T_{n-2} + T_{n-1})_T = 110^{n-5}$. The subsequent terms of the difference sequence are obtained by appending zeros, so the difference sequence also contains $[110^{n-3}]_T = [11 \cdot 10^{n-3}]_T$. By linearity, if we take the difference sequence of a row containing a number with representation $w$, the result contains $[11 \cdot w]_T$, where the multiplication is done in any integer base larger than 2.
\end{proof} For example, from row 1 containing 1, we get Tribonacci representation $11 \cdot 1 = 11$, which evaluates to 3 and is in row 2. Then multiplying 11 by 11, we get 121, which evaluates to 9, which corresponds to row 3. Row 3 starts with 5, with Tribonacci representation 101. Repeating again, we multiply 101 from row 3 by 11 to get 1111, which evaluates to 14, corresponding to row 7. Given a sequence $S_i$, the \textit{partial-sums sequence} $P_i$ is the sequence defined as $P_i = \sum_{k=0}^iS_k$. If a sequence $S_i$ has the difference sequence $D_i$, we call the sequence $S_i$ the \textit{difference-inverse} of $D_i$. It is well-known that any difference-inverse sequence equals a partial-sums sequence plus a constant. Before proving our result about difference-inverses of extraTribs, we want to describe possible parities of Tribonacci-like integer sequences. One can check that there are four possible cases: \begin{itemize} \item Type (EEEE): All terms are even (for example, row 7); \item Type (OOOO): All terms are odd (for example, row 3); \item Type (EOEO): The terms alternate (for example, row 2); \item Type (EEOO): The terms form a pattern of two even, then two odd (for example, row 1). \end{itemize} \begin{theorem} An extraTrib always has a unique Tribonacci-like difference-inverse. The inverse is an extraTrib if and only if the given sequence does not belong to Type EEOO. \end{theorem} \begin{proof} Take four adjacent terms $a$, $b$, $c$, $a+b+c$ of an extraTrib sequence $S$. Then the partial sums are $a$, $a+b$, $a+b+c$, and $2a+2b+2c$. Any difference-inverse sequence equals the partial sums sequence plus a constant. The sum of the first three is $3a+2b+c$, so we need to add $\frac{c-a}{2}$ to each term to make the fourth term the sum of the first three terms.
Consider a Tribonacci-like sequence $Q$ that starts with four consecutive entries \[\frac{a+c}{2}, \quad \frac{a+2b+c}{2}, \quad \frac{a+2b+3c}{2}, \quad \frac{3a+4b+5c}{2}.\] As $Q$ follows the Tribonacci rule, its difference sequence follows the Tribonacci rule, and since the difference sequence agrees with $S$ for three consecutive terms, it agrees with $S$ everywhere. But the new sequence $Q$, though it is Tribonacci-like, is not always an extraTrib, as it is not guaranteed to be an integer sequence. When we add $\frac{c-a}{2}$ to the terms, the new terms are integers if and only if $a$ and $c$ have the same parity. Thus Tribonacci-like sequences with all the terms of the same parity and the ones where parity alternates have integral difference-inverses, while the sequences of Type EEOO do not. \end{proof} We call an extraTrib $S$ \textit{invertible} if there exists an extraTrib with the difference sequence $S$. Equivalently, an extraTrib is invertible if it is of types EEEE, OOOO, or EOEO. Testing the rows in the Trithoff array, we get the sequence of invertible rows, which is now sequence A353178: \[2,\ 3,\ 4,\ 7,\ 11,\ 12,\ 16,\ 17,\ 19,\ 20,\ 21,\ 25,\ 26,\ 28,\ 29,\ 30,\ 33,\ 34,\ \ldots.\] Non-invertible rows are now sequence A353193: \[1,\ 5,\ 6,\ 8,\ 9,\ 10,\ 13,\ 14,\ 15,\ 18,\ 22,\ 23,\ 24,\ 27,\ 31,\ 32,\ 36,\ 37,\ 39,\ \ldots.\] Interestingly, nothing of this sort appears in the Fibonacci case. As the difference sequence of a Fibonacci-like sequence is the sequence itself (shifted), we get that all extraFibs are invertible, as they invert to themselves. \begin{theorem} If we start with an extraTrib and keep inverting it, we will get to a non-invertible sequence in a finite number of steps. \end{theorem} \begin{proof} We start by looking at how the difference operator changes types. Type EEOO changes to EOEO; the latter changes to OOOO, then to EEEE. The sequence of type EEEE stays EEEE. 
Define the valuation of an extraTrib to be the largest integer $n$ such that all numbers in the sequence are multiples of $2^n$. An extraTrib of valuation $n$, after dividing by $2^n$, is of the type OOOO, EOEO, or EEOO. As before, the difference sequence of $2^n$ times a sequence of type EEOO is $2^n$ times a sequence of type EOEO. The latter changes to $2^n$ times type OOOO, and the difference of that has valuation $m > n$. Thus, applying the difference operator at most three times strictly increases the valuation. Conversely, every three steps of taking the difference-inverse decrease the valuation. Since the valuation of an integer sequence is non-negative, after finitely many inversions we must reach a non-invertible sequence, and the result follows. \end{proof} \subsection{The difference sequence for the first column} \label{subsec:columndiffsequences} Consider the first column (sequence A003265): \[1,\ 3,\ 5,\ 8,\ 10,\ 12,\ 14,\ 16,\ 18,\ 21,\ 23,\ 25,\ 27,\ 29,\ 32,\ 34,\ 36,\ 38,\ 40,\ \ldots,\] and its difference sequence: \[2,\ 2,\ 3,\ 2,\ 2,\ 2,\ 2,\ 2,\ 3,\ 2,\ 2,\ 2,\ 2,\ 3,\ 2,\ 2,\ 2,\ 2,\ 2,\ \ldots.\] \begin{proposition} The difference sequence of the first column consists of twos and threes. The indices of the rows such that the next value in the first column is 3 greater form sequence A305373. \end{proposition} \begin{proof} Consider a number $a$ in the first column. Its Tribonacci representation must end in a 1, and the possible endings are covered by the cases $b001$, $b011$, $b0101$, and $b01101$, where $b$ is a binary word. If $(a)_T = b0101$, then the next term in column 1 will be $|b1001| = [b0101]_T + 3 = a+3$, so the difference is 3. For the other possibilities, $b001$, $b011$, and $b01101$, the next terms are $b011$, $|b101|$, and $|b10001|$ correspondingly. In all these cases, the next term is increased by 2. The indices of the rows such that the next value in the first column is 3 greater correspond to rows starting with numbers of the form $[b0101]_T$. The row number is $[b0101]_T-[b010]_T$ by Lemma~\ref{lemma:rownumber}. Since $T_{n+4} - T_{n+3} = T_{n+2} + T_{n+1}$, this equals $\out(b)+\out^2(b)+3$.
Sequence A305373 is defined as the sum of sequences A003144 and A003145. The $n$th term of A003144 is $[x0]_T+1$, and the $n$th term of A003145 is $[x01]_T+1$, where $x$ is the Tribonacci representation of $n-1$. Thus, their sum is $\out(x)+\out^2(x)+3$, and our sequence is A305373. \end{proof} \section{Extending the Trithoff array. Precolumns}\label{sec:precolumns} We extend the Trithoff array to the left by using the rule that in every row, each number is the sum of the three previous numbers. We assume that the columns of the Trithoff array start with index 1. Similar to Conway and Ryba \cite{ConwayRyba2016}, we call the 0th column \textit{the wall}, column $-1$, \textit{the seed}, and column $-2$, \textit{the pre-seed}. Table~\ref{table:Trithoffpre} shows the upper-left part of the Trithoff array with precolumns. \begin{table}[H] \centering \begin{tabular}{|l|l|l||r|r|r|r|} \hline $i=-2$ (pre-seed) & $i=-1$ (seed) & $i=0 $ (wall) & $i=1$ & $i=2$ & $i=3$ & $i=4$ \\ \hline 0 & 0 & 1 & 1 & 2 & 4 & 7 \\ \hline 0 & 1 & 2 & 3 & 6 & 11 & 20 \\ \hline 1 & 1 & 3 & 5 & 9 & 17 & 31 \\ \hline 1 & 2 & 5 & 8 & 15 & 28 & 51 \\ \hline 1 & 3 & 6 & 10 & 19 & 35 & 64 \\ \hline 2 & 3 & 7 & 12 & 22 & 41 & 75 \\ \hline \end{tabular} \caption{Trithoff array with precolumns.} \label{table:Trithoffpre} \end{table} Before describing precolumns, we need to do some work with Tribonacci representations. We call a Tribonacci representation \textit{non-canonical} if it consists of zeros and ones but may contain three consecutive ones, as opposed to the Tribonacci representation proper, which we call \textit{canonical} when we want to emphasize this. For example, suppose $n=13$. Then its canonical representation is 10000, while the non-canonical word 1110 also evaluates to 13. Similar to Conway and Ryba \cite{ConwayRyba2016}, we denote by $|v|$ the canonization of a non-canonical representation $v$. Suppose we have a non-canonical Tribonacci representation of the number $n$. We can view this representation as a sum of distinct Tribonacci numbers.
We call replacing $T_{n-2}+T_{n-1}+T_n$ in this sum with $T_{n+1}$ \textit{carrying}. \textbf{Canonization of a non-canonical representation.} Suppose we have a non-canonical representation $v$ of number $n$. We use the leftmost possible carry. In other words, we are replacing the leftmost $0111$ with $1000$ (prepending a leading zero to the word if necessary). This way, the new representation contains only digits zero and one and evaluates to the same number $n$, while the sum of digits decreases. That means the procedure terminates in a canonical representation of $n$. \begin{lemma}\label{lemma:outnoncanonical} If a binary word $w$ is a non-canonical representation of integer $n$, then $w0$ is a non-canonical representation of $\out(n)$. \end{lemma} \begin{proof} Consider the canonization procedure described above for binary words $w$ and $w0$. Each step of this procedure is the same, except we have 0 at the end of the second word. Thus $|w0| = |w|0 = (n)_T0 = (\out(n))_T$. \end{proof} Now we are ready to describe the Tribonacci representation of precolumns, given the Tribonacci representation of the first column. \begin{lemma} \label{lemma:precolumns} Suppose number $n$ in column 1 has the Tribonacci representation $abc1$, where $b$ and $c$ are digits zero or one, and $a$ is a binary word. Then the wall $w$ in the same row equals $[abc]_T+1$, the seed in the same row equals $[ab]_T+c$, and the pre-seed in the same row equals $[a]_T+b$. \end{lemma} \begin{proof} We have $(\out(n))_T = abc10$ and $(\out(\out(n)))_T = abc100$. By definition, the corresponding wall element $w$ is $\out(\out(n)) - \out(n)-n$. We can split $n$ as $[abc0]_T + 1$, $\out(n)$ as $[abc00]_T + 2$ and $\out(\out(n))$ as $[abc000]_T + 4$. Then $w = ([abc000]_T + 4) - ([abc00]_T + 2) - ([abc0]_T + 1) = ([abc000]_T - [abc00]_T - [abc0]_T) + 1 = [abc]_T + 1$.
The seed $s$ equals $\out(n) - n - w = [abc10]_T - [abc1]_T - [abc]_T - 1 = ([ab000]_T + 4c + 2) - ([ab00]_T + 2c + 1) - ([ab0]_T + c) - 1 = ([ab000]_T - [ab00]_T - [ab0]_T) + c = [ab]_T + c$. The pre-seed $p$ equals $n - w - s = [abc1]_T - ([abc]_T + 1) - ([ab]_T + c)= ([a000]_T + 4b + 2c + 1) - ([a00]_T + 2b + c + 1) - ([a0]_T + b + c) = ([a000]_T - [a00]_T - [a0]_T) + b= [a]_T + b$. \end{proof} The following analogs of Fact 5 describe the structure of the Tribonacci Garden State: how the next term to the right depends on the previous term. \begin{fact}{5Ta} A garden term $n$ is followed by $\out(n)$. \end{fact} This is true by definition. \begin{fact}{5Tc} A wall term $w$ is followed by $\out(w) - 1$. \end{fact} \begin{proof} Suppose the wall term $w$ is followed by the garden term $n$. Suppose the Tribonacci representation of $n$ is $abc1$. Then from Lemma~\ref{lemma:precolumns} we have $w = [abc]_T + 1$. Thus, $n = \out(w-1) + 1$. We consider cases; note that $b$ and $c$ cannot both be 1, as the representation $abc1$ would then end in three consecutive ones. \begin{itemize} \item If $bc$ is 00, then $\out(w) - 1 = \out([abc]_T+1)-1 = \out([a00]_T+1)-1= \out([a01]_T)-1= [a010]_T-1= [a001]_T= [abc1]_T$. \item If $bc$ is 01, then $\out(w) - 1 = \out([abc]_T+1)-1 = \out([a01]_T+1)-1 = \out([a10]_T)-1$. The representation $[a10]_T$ might be non-canonical, so by Lemma~\ref{lemma:outnoncanonical} $\out(w) - 1 = [a100]_T-1=[a011]_T=[abc1]_T$. \item If $bc$ is 10, then $\out(w) - 1 = \out([abc]_T+1)-1=\out([a10]_T+1)-1=\out([a11]_T)-1$. Again $[a11]_T$ might be a non-canonical representation, and by Lemma~\ref{lemma:outnoncanonical} we have $\out([a11]_T)-1 = [a110]_T-1 = [a101]_T = [abc1]_T$. \end{itemize} So $w$ is always followed by $\out(w)-1$. \end{proof} The analog of Fact 5b describes how to calculate the seed from the pre-seed and the wall from the seed in the same row. The formulae depend on the last digits of the Tribonacci representation of the first garden term in the same row.
Consider row $x$, where we denote the pre-seed by $p(x)$, the seed by $s(x)$, the wall term as $w(x)$, and the first garden term as $g(x)$. \begin{fact}{5Tb} If $(g(x))_T$ ends with 11, then $w(x) = \out(s(x))$ and $s(x) = \out(p(x))+1$. If $(g(x))_T$ ends in $001$, then $w(x) = \out(s(x))+1$ and $s(x) = \out(p(x))$. If $(g(x))_T$ ends in $101$, then $w(x) = \out(s(x))+1$ and $s(x) = \out(p(x))-1$. \end{fact} \begin{proof} We use Lemma~\ref{lemma:precolumns} that states that if $(g(x))_T = abc1$, then $w(x) = [abc]_T+1$, $s(x) = [ab]_T+c$, and $p(x) = [a]_T+b$. If $(g(x))_T$ ends with 11, then $b=0$ and $c = 1$. We have $w(x) = [a01]_T + 1 = [a10]_T$ and $s(x) = [a0]_T + 1 = [a1]_T$. The representation $[a1]_T$ might be non-canonical, but it still respects the out function. Thus, in this case, $w(x) = \out(s(x))$. We also have $p(x) = [a]_T + 0 = [a]_T$, so $s(x) = \out(p(x)) + 1$. If $(g(x))_T$ ends in 01, then $c = 0$. Thus, $w(x) = [ab0]_T+1$ and $s(x) = [ab]_T + 0 = [ab]_T$, and therefore $w(x) = \out(s(x))+1$. If $(g(x))_T$ ends in 001, then $b = c = 0$. Thus, $s(x) = [a0]_T + 0 = [a0]_T$ and $p(x) = [a]_T + 0 = [a]_T$, and therefore $s(x) = \out(p(x))$. If $(g(x))_T$ ends in 101, then $b = 1$ and $c = 0$, so $s(x) = [a1]_T + 0 = [a0]_T + 1$ and $p(x) = [a]_T + 1$. To prove that $[a0]_T + 1 = \out([a]_T + 1) - 1$, we must consider two cases. If $a$ ends in 0, then adding 1 might result in a non-canonical representation, but we still can take the successor. In this case, $\out(p(x)) = \out([a]_T + 1) = [a0]_T + 2 = s(x) + 1$, so the fact holds. If $a$ ends in 1, it must end in 01; otherwise, the representation $(g(x))_T$ would not be canonical. Let $a = d01$ for some binary string $d$. Then, $[a]_T + 1 = [d01]_T + 1 = [d10]_T$. Again, $d10$ might not be canonical. In any case, $\out(p(x)) = \out([a]_T + 1) = \out([d10]_T) = [d100]_T = \out([d01]_T) + 2 = \out([a]_T) + 2 = s(x) + 1$, and the fact still holds.
\end{proof} \subsection{Precolumns} The following theorem describes precolumns in terms of the Tribonacci word. \begin{theorem} \begin{itemize} \item The wall is the increasing sequence of all positions of the letters $a$ and $b$ in the Tribonacci word. \item The seed is a non-decreasing sequence consisting of 0 followed by all positive integers in order, where the positions of the letter $a$ in the Tribonacci word appear twice, and all other integers appear once. \item The pre-seed is a non-decreasing sequence consisting of two zeros followed by all positive integers in order, where the positions of the letters $a$ and $b$ in the Tribonacci word appear three times, while the positions of the letter $c$ appear twice. \end{itemize} \end{theorem} \begin{proof} Consider a term $x$ in the first column of the Trithoff array in the form $[abc1]_T$. By Lemma~\ref{lemma:precolumns}, the wall $w$ in the same row equals $[abc]_T+1$, the seed in the same row equals $[ab]_T+c$, and the pre-seed in the same row equals $[a]_T+b$. \textbf{The wall.} The word $abc$ goes, in order, through all Tribonacci representations ending in 0 or 01. Thus, the wall term goes in order through numbers that are one greater than numbers with Tribonacci representations ending in 0 or 01. So by Theorem~\ref{thm:abcCorrespondsTo0-01-11} the wall term will go through all positions of the letters $a$ and $b$ in the Tribonacci word. \textbf{The seed.} Consider how the seed changes when moving from the previous row. Suppose $b = 0$ and $c = 1$. Then the first column term $x$ for that row is $[a011]_T$, and the corresponding seed is $[a0]_T + 1$, matching a position of letter $a$ in the Tribonacci word. The previous row has the garden term $[a001]_T$, and the corresponding seed equals $[a0]_T$. Thus the seed increases by 1 from the previous row. Suppose $b = 1$ and $c = 0$. Then $x = [a101]_T$, and the corresponding seed is $[a1]_T$, matching a position of letter $a$ in the Tribonacci word.
The previous row has garden term $[a011]_T$, and the corresponding seed is $[a0]_T + 1 = [a1]_T$. Thus this row's seed equals the previous seed. Suppose $b = c = 0$. Then $x = [a001]_T$, and the corresponding seed is $[a0]_T$, which matches a position of letter $b$ or $c$ in the Tribonacci word. Suppose $a'$ is the Tribonacci representation of $[a]_T -1$. The previous row has garden term $[a'101]_T$ or $[a'011]_T$ with the same corresponding seed $[a'1]_T$ or $[a'0]_T + 1$, which is less than the current seed $[a0]_T$ by 1. To summarize, a row's seed equals the previous seed when $b = 1$ and $c = 0$; otherwise, the row's seed is one greater than the previous seed. Thus, only the positions of the letter $a$ in the Tribonacci word are doubled. \textbf{The pre-seed.} The pre-seeds are non-decreasing because as $[abc1]_T$ increases, $[a]_T + b$ either remains the same or increases by 1. The number $[a]_T$ appears at least twice as a pre-seed: in rows with the garden value equal to $[a001]_T$ or $[a011]_T$. The other possible values for the first column are $[a101]_T$, where $a$ does not end in two 1s, and the corresponding pre-seed is $[a]_T+1$. If $a=x0$ for some prefix $x$, then such a pre-seed has the possibly non-canonical representation $x1$, so its canonical representation has a number of trailing zeroes equal to 0 mod 3. If $a=x01$, then such a pre-seed has the possibly non-canonical representation $x10$, whose canonical representation has a number of trailing zeroes equal to 1 mod 3. These extra values are indices of $a$ and $b$ in the Tribonacci word. 
\end{proof} Column $0$, the wall, is now sequence A353084: \[1,\ 2,\ 3,\ 5,\ 6,\ 7,\ 8,\ 9,\ 10,\ 12,\ 13,\ 14,\ 15,\ 16,\ 18,\ 19,\ 20,\ \ldots.\] Column $-1$, the seed, is now sequence A353086: \[0,\ 1,\ 1,\ 2,\ 3,\ 3,\ 4,\ 5,\ 5,\ 6,\ 7,\ 7,\ 8,\ 8,\ 9,\ 10,\ 10,\ \ldots,\] The numbers that are not doubled in the seed are now sequence A351631: \[0,\ 2,\ 4,\ 6,\ 9,\ 11,\ 13,\ 15,\ 17,\ 19,\ 22,\ 24,\ 26,\ 28,\ 30,\ 33,\ 35,\ \ldots.\] Column $-2$, the pre-seed, is now sequence A353090: \[0,\ 0,\ 1,\ 1,\ 1,\ 2,\ 2,\ 2,\ 3,\ 3,\ 3,\ 4,\ 4,\ 5,\ 5,\ 5,\ 6,\ 6,\ 6,\ \ldots .\] \subsection{Fact 6} We can combine the results in this section into the following analog of Fact 6. \begin{fact}{6T} Each positive integer appears once in the Trithoff array. In addition, positions of $a$ in the Tribonacci word appear 1 time in the wall, 2 times in the seed, and 3 times in the pre-seed. Positions of $b$ in the Tribonacci word appear 1 time in the wall, 1 time in the seed, and 3 times in the pre-seed. Positions of $c$ in the Tribonacci word appear 0 times in the wall, 1 time in the seed, and 2 times in the pre-seed. \end{fact} \section{ExtraTribs and their multiples}\label{sec:multiples} According to Fact 7 from Conway-Ryba paper \cite{ConwayRyba2016}, every series that satisfies Fibonacci's rule and is eventually positive is represented in the Garden State. Namely, every extraFib has a tail that is a row in the Wythoff array. We want to generalize this to extraTribs. As an example, consider the sequence that is twice the Tribonacci numbers: 0, 0, 2, 2, 4, 14, 26, 48, and so on. The first few terms can be found in the first row of the Trithoff array. The tail starting with 14, 26, and 48 is the seventh row of the Trithoff array. To help us deal with extraTribs, we need to deal with improper Tribonacci representations discussed below. 
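Before moving on to improper representations, the description of the precolumns and Fact 5Tb can be checked numerically. The sketch below is ours, not part of the paper: it assumes greedy canonical representations over the place values $1, 2, 4, 7, 13, \ldots$, takes $\out$ to append a zero to the canonical representation, generates each row from its first-column term $x$ as $x, \out(x), \out^2(x), \ldots$, and recovers the wall, seed, and pre-seed by running the Tribonacci rule backwards.

```python
# Numerical check of Fact 5Tb and of the wall/seed/pre-seed sequences.
# Assumptions (ours): place values 1, 2, 4, 7, 13, ...; out(n) appends a zero
# to the canonical representation; first-column terms are the numbers whose
# canonical representation ends in 1, taken in increasing order.

def places(k):
    """First k Tribonacci place values: 1, 2, 4, 7, 13, ..."""
    p = [1, 2, 4]
    while len(p) < k:
        p.append(p[-1] + p[-2] + p[-3])
    return p[:k]

def rep(n):
    """Greedy (canonical) Tribonacci representation, most significant digit first."""
    if n == 0:
        return "0"
    p = [1, 2, 4]
    while p[-1] <= n:
        p.append(p[-1] + p[-2] + p[-3])
    digits = ""
    for v in reversed(p):
        if v <= n:
            digits += "1"
            n -= v
        elif digits:
            digits += "0"
    return digits

def value(word):
    return sum(int(d) * v for d, v in zip(reversed(word), places(len(word))))

def out(n):
    return value(rep(n) + "0")

wall, seed, preseed = [], [], []
for x in range(1, 200):
    r = rep(x)
    if not r.endswith("1"):
        continue                       # not a first-column term
    g1, g2, g3 = x, out(x), out(out(x))
    w = g3 - g2 - g1                   # column 0: the Tribonacci rule run backwards
    s = g2 - g1 - w                    # column -1
    pre = g1 - w - s                   # column -2
    wall.append(w)
    seed.append(s)
    preseed.append(pre)
    t = r.zfill(3)
    if t.endswith("11"):               # the three cases of Fact 5Tb
        assert w == out(s) and s == out(pre) + 1
    elif t.endswith("001"):
        assert w == out(s) + 1 and s == out(pre)
    else:                              # ends in 101
        assert w == out(s) + 1 and s == out(pre) - 1

print(wall[:6])     # [1, 2, 3, 5, 6, 7]   -- start of A353084
print(seed[:6])     # [0, 1, 1, 2, 3, 3]   -- start of A353086
print(preseed[:6])  # [0, 0, 1, 1, 1, 2]   -- start of A353090
```

All three computed columns agree with the sequence values listed above.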
\subsection{Improper Tribonacci representation and its canonization} We call a Tribonacci representation \textit{improper} if it uses digits other than zero and one. To emphasize the difference, we call a canonical or a non-canonical representation (aka representations that use only ones and zeros) \textit{proper}. Suppose a word $v$ is an improper Tribonacci representation of integer $n$. As before, we denote by $|v|$ the canonization of the word $v$, aka the canonical Tribonacci representation of $n$. In other words, $|v| = (n)_T$. An example of an improper representation of 13 is 1030, and its canonization is 10000. Suppose we have an improper Tribonacci representation of number $n$, which is a linear combination of Tribonacci numbers. Recall that we call replacing $T_{n-2}+T_{n-1}+T_n$ with $T_{n+1}$ carrying. We call replacing $T_{n+1}$ with $T_{n-2}+T_{n-1}+T_n$ \textit{reverse carrying}. In terms of a Tribonacci representation of $n$, the carrying replaces $dabc$ with $(d+1)(a-1)(b-1)(c-1)$ for $a,b,c > 0$, while reverse carrying replaces $dabc$ with with $(d-1)(a+1)(b+1)(c+1)$ for $d > 0$. Our goal in this section is to introduce the canonization procedure that, given enough zeros at the end of a Tribonacci representation, converts an improper representation of some number to a canonical representation of the same number in a finite number of steps. We call the position (index) of the leftmost digit that is greater than 1 in an improper representation $w$ the \textit{improper boundary index} and the value at this position the \textit{improper boundary value}. Next, we define the \textbf{weight} of $w$ to be the sum of all digits in $w$ that are to the right from the last 0 preceding the improper boundary. We look at the word $w$ from left to right, where we can assume that $w$ is padded with zeros on the left if needed. 
\begin{lemma} \label{lemma:leftomostboundary} The leftmost carrying does not move the improper boundary index to the left and does not increase the weight. \end{lemma} \begin{proof} The leftmost carrying replaces $0abc$ with $1(a-1)(b-1)(c-1)$, where $a,b,c > 0$. Thus the boundary does not move to the left. If the leftmost carrying acts on digits to the left of the 0 preceding the boundary, then it does not change the weight, if not, it decreases it. \end{proof} Consider the following procedure acting on an improper representation of a number. \textbf{Canonization procedure:} \begin{itemize} \item Step 1. Use leftmost carrying when possible. This procedure ensures that the longest prefix of the word $w$ that contains only 0's and 1's is in the canonical form. For the following steps, we can always assume that no three consecutive digits are greater than 0. \item Step 2. When step 1 is not available, work on the leftmost improper boundary $a$. There are two cases of what we do depending on what is before $a$: 0, or 01. \begin{itemize} \item Step 2a. Replace $0abcd$ with $1(a-2)bc(d+1)$. This is a combination of reverse carrying (replacing $0abcd$ with $0(a-1)(b+1)(c+1)(d+1)$) and carrying (replacing $0(a-1)(b+1)(c+1)(d+1)$ with $1(a-2)bc(d+1)$). \item Step 2b. Replace $01a0cd$ with $10(a-2)0(c+1)(d+1)$. This is a combination of reverse carrying (replacing $01a0cd$ with $01(a-1)1(c+1)(d+1)$) and carrying (replacing $01(a-1)1(c+1)(d+1)$ with $10(a-2)0(c+1)(d+1)$). \end{itemize} \end{itemize} These operations are not defined when $a$ is one of the last three digits of a number. If the procedure on the word $w'$ ends with a canonical word $w$ we call the $w$ the canonization of $w'$: $w = |w'|$. If $d$ is an integer, we write the string consisting of $m$ copies of $d$ as $d^m$. Note that when our presentation is non-canonical but proper, we only need Step 1. 
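The procedure can be transcribed almost literally into code. The following sketch is ours: a word is a list of non-negative digits, most significant first, and we assume the caller has already supplied enough trailing zeros (the theorem below shows that $3m$ of them always suffice, where $m$ is the weight).

```python
# A sketch of the canonization procedure (ours, not the paper's code).
# A word is a list of non-negative digits, most significant first; the caller
# must supply enough trailing zeros, otherwise an index error signals that the
# steps are undefined (the boundary is among the last three digits).

def canonize(word):
    d = [0, 0, 0] + list(word)                     # room on the left for carries
    while True:
        # Step 1: leftmost carrying, dabc -> (d+1)(a-1)(b-1)(c-1), a, b, c > 0.
        for i in range(len(d) - 3):
            if d[i + 1] > 0 and d[i + 2] > 0 and d[i + 3] > 0:
                d[i] += 1
                d[i + 1] -= 1
                d[i + 2] -= 1
                d[i + 3] -= 1
                break
        else:
            # Step 2: work on the improper boundary, the leftmost digit >= 2.
            b = next((k for k, x in enumerate(d) if x >= 2), None)
            if b is None:
                break                              # proper and canonical: done
            if d[b - 1] == 0:
                # Step 2a: 0abcd -> 1(a-2)bc(d+1)
                d[b - 1] += 1
                d[b] -= 2
                d[b + 3] += 1
            else:
                # Here d[b-1] == 1, d[b-2] == 0, and d[b+1] == 0 (otherwise
                # Step 1 would apply).  Step 2b: 01a0cd -> 10(a-2)0(c+1)(d+1)
                d[b - 2] += 1
                d[b - 1] -= 1
                d[b] -= 2
                d[b + 2] += 1
                d[b + 3] += 1
    while d and d[0] == 0:                         # strip leading zeros
        d.pop(0)
    return d or [0]

# The example word 1030 (an improper representation of 13), shifted twice so
# that the procedure has room to work:
print(canonize([1, 0, 3, 0, 0, 0]))   # [1, 0, 0, 0, 0, 0, 1]
```

With only Step 1 the same sketch also canonizes proper but non-canonical words; for example, `canonize([0, 1, 1, 1])` returns `[1, 0, 0, 0]`.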
\begin{theorem} \label{thm:canon} Given an improper representation $w$ of an integer $n$ with weight $m$ ending in at least $3m$ zeros, the canonization procedure applied to $w$ terminates in the finite number of steps. \end{theorem} \begin{proof} Each of the steps in canonization does not change the value of the number while making it lexicographically larger. In addition, Step 1 decreases the sum of digits of the Tribonacci representation, while both Steps 2a and 2b do not change the sum of the digits. Now we look at the weight. Step 1 does not increase the weight. Step 2b decreases the weight. Now we look at Step 2a. If $a>3$, then Step 2a does not change the weight, but the next operation has to be either Step 1 or Step 2b, both of which decrease the weight. If $a = 3$, then Step 2a replaces $03bcd$ with $11bc(d+1)$. If $b=0$, the boundary moves, and the weight decreases; if $b > 0$, we perform Step 1 that decreases the weight. If $a=2$, then Step 2a replaces $02bcd$ with $10bc(d+1)$, and the weight decreases. Step 1 does not change the number of trailing zeros while increasing the lexicographical order. Moreover, the digits of the improper representation of a number $x$ cannot exceed $x$. It follows that we can only make a finite number of such steps in a row. The total number of times Step 2 runs is also finite as Steps 2b decrease the weight, and each Step 2a is followed by steps decreasing the weight. Now we estimate the number of trailing zeros that we need. Notice that though Step 2a might not decrease the weight, as soon as the boundary moves, the weight is decreased. For every operation in Step 2, we need three digits after the boundary to be available. It follows that it is enough to have $3m$ trailing zeros. \end{proof} The canonization procedure for the word $w$ uses a fixed number of zeros. If we add more zeros to the end of $w$, the procedure is still the same. 
\begin{corollary} \label{cor:canon} If the canonization procedure above for an improper representation $w$ of number $n$ terminates in the canonical Tribonacci representation $|w|$, then the same procedure for $w0^k$ terminates in $|w|0^k$. \end{corollary} \subsection{ExtraTribs} Consider an extraTrib $S_n$ that starts with non-negative numbers $a$, $b$, and $c$, that is $S_0 = a$, $S_1 = b$, and $S_2 = c$. Then $S_n$ is a linear combination of three sequences: \begin{itemize} \item A sequence that starts as 1, 0, 0. This sequence continues as 1, 1, 2 and can be described as shifted Tribonacci sequence $T_{n-1}$. \item A sequence that starts as 0, 1, 0. This sequence continues as 1, 2, 3, 6, and so on. It is a second row of the Trithoff array and can be represented as the sequence $T_n+T_{n+1}$. \item A sequence that starts as 0, 0, 1, which is the Tribonacci sequence $T_n$. \end{itemize} Thus, \[S_n = aT_{n-1}+b(T_n+T_{n+1})+cT_n= aT_{n-1}+(b+c)T_n+bT_{n+1}.\] \begin{fact}{7T} Any extraTrib sequence has its tail appearing in the array. \end{fact} \begin{proof} Suppose we are given an extraTrib sequence $S_n$. Its terms can be expressed as a positive integer linear combination of shifted Tribonacci sequences: $S_n = aT_{n-1}+(b+c)T_n+bT_{n+1}$. Thus, for $n > 3$, the term $S_n$ has an improper Tribonacci representation $b(b+c)a0^{n-4}$. By Theorem~\ref{thm:canon}, there exists $N_0$, such that the canonization procedure for the word $b(b+c)a0^{N_0-4}$ terminates in a word $w$. By Corollary~\ref{cor:canon} for any number $N \geq N_0$, we have $(S_n)_T = w0^{N-N_0}$. Thus, all these numbers are in the same row of the Trithoff array. \end{proof} \subsection{Multiples of Tribonacci sequences in the array} According to Fact 8, any positive multiple of an extraFibs is an extraFib. It immediately generalizes to Fact 8T. \begin{fact}{8T} Any positive multiple of an extraTribs is an extraTribs. \end{fact} Thus, any multiple of an extraTribs appears in the Trithoff array. 
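The tail phenomenon of Fact 7T can be seen directly for the doubled Tribonacci sequence from the example above. In the sketch below (ours; greedy representations over place values $1, 2, 4, 7, 13, \ldots$ are assumed), the canonical representations of the doubled terms stabilize to the pattern $10001$ followed by zeros, so the tail $14, 26, 48, \ldots$ is the row of the Trithoff array whose first-column entry is $14$, the seventh row.

```python
# Canonical representations of twice the Tribonacci numbers (sketch, ours).

def rep(n):
    """Greedy canonical Tribonacci representation of n > 0."""
    p = [1, 2, 4]
    while p[-1] <= n:
        p.append(p[-1] + p[-2] + p[-3])
    digits = ""
    for v in reversed(p):
        if v <= n:
            digits += "1"
            n -= v
        elif digits:
            digits += "0"
    return digits

T = [1, 2, 4]
while len(T) < 9:
    T.append(T[-1] + T[-2] + T[-3])

for t in T:
    print(2 * t, rep(2 * t))
```

From $14$ on, each representation is the previous one with a zero appended, i.e., the printed numbers are successive terms of one row. The earlier doubled terms $2$, $4$, $8$ have representations $10$, $100$, $1001$ that are not shifts of $10001$, which is why only the tail appears as a row.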
We wrote a program to calculate multiples of the Tribonacci numbers and find them in the Trithoff array. This data is summarized in Table~\ref{table:TribonacciMultiples}. The first row is the multiple coefficient. The next row of the table is the row of the array where the tail of the $n$th multiple appears. The third row of the table is the value of the first column of that row in the array. The last row of the table is the third row divided by the first row. \begin{table}[ht!] \begin{center} \begin{tabular}{c|ccccccccccccc} multiple &1&2&3&4&5&6&7&8&9& 10 & 11 & 12 & \dots\\ row \# &1&7&10&81&101&121&141&161&1126& 1251 & 1376 & 1501 & \dots\\ first column &1&14&21&176&220&264&308&352&2466& 2740 & 3014 & 3288 & \dots\\ Trib \# &1&7&7&44&44&44&44&44&274& 274 & 274 & 274 & \dots\\ \end{tabular} \end{center} \caption{Tribonacci multiples} \label{table:TribonacciMultiples} \end{table} The sequence of row numbers is now sequence A351685: \[1,\ 7,\ 10,\ 81,\ 101,\ 121,\ 141,\ 161,\ 1126,\ 1251,\ 1376,\ 1501,\ 1626,\ 1751,\ \ldots.\] The numbers that start off the rows that are multiples of the Tribonacci sequence are now sequence A351689: \[1,\ 14,\ 21,\ 176,\ 220,\ 264,\ 308,\ 352,\ 2466,\ 2740,\ 3014,\ 3288,\ 3562,\ \ldots.\] Notice that this sequence contains runs of arithmetic progressions. For example, numbers 176, 220, 264, 308, 352 form an arithmetic progression with difference 44. Correspondingly, number 44 appears 5 times in row 4 of Table~\ref{table:TribonacciMultiples}. The next 13 numbers form an arithmetic progression with difference 274. The next 27 numbers form an arithmetic progression with difference 1705. \subsection{Order of multiples} Fact 9, states that multiples of any extraFib series appear in order in the Wythoff array. \begin{fact}{9T} Multiples of extraTribs appear in order in the Trithoff array. \end{fact} \begin{proof} Let the sequence in the array be $S_n$. Suppose we found $kS_n$ in the array. 
If it requires $m$ trailing zeroes to canonize, then the first term in the Trithoff array corresponding to $kS_n$ starts with the canonization of $kS_{m+1}$, as $S_{m+1}$ is the smallest term in the array $S_n$ which has $m$ zeros. Now we want to canonize $(k+1)S_n=kS_n+S_n$. First, canonizing $kS_n$ requires $m$ trailing zeroes, then adding $S_n$ requires a total of $m^\prime\ge m$ zeroes because the process never moves the rightmost digit leftwards. So the sequence $(k+1)S_n$ starts at $(k+1)S_{m^\prime}\ge (k+1)S_m>kS_m$ and thus, it appears later. \end{proof} The exact same argument proves that the sequence $A+B$ appears after sequences $A$ and $B$. By the way, the sequence $a(n)$, where $a(n)$ is the row number in the Wythoff array that is $n$ times the Fibonacci sequence is A269725. The similar sequence for Lucas numbers is A269726. \subsection{Row numbers for multiples} \begin{theorem} When all the numbers in an extraTrib are divisible by $n$, the row number is 1 modulo $n$. \end{theorem} \begin{proof} Consider an extraTrib $S$ with all terms divisible by $n$. After dividing by $n$, we get another extraTrib that is a row in the Trithoff array. Suppose its element in the first column is $p$. Then our sequence $S$ contains an element $np$. Consider the Tribonacci representation $v = (p)_T$ and an improper word $w$, where we replace every digit one in the word $v$ with $n$. Suppose an improper word $w$ requires adding exactly $z$ trailing zeroes to canonize. Thus, the row for sequence $S$ starts with $[|w0^z|]_T$. The canonization procedure depends on the rule for the Tribonacci-like sequences but not on the sequences themselves. Thus, the canonization steps are identical for bases $T$ and $U$. By Lemma~\ref{lemma:rownumber}, the row number is $1 + [|w0^z|]_U = 1 + n[v0^z]_U$. This is 1 plus a multiple of $n$. \end{proof} For example, consider the Tribonacci sequence and its multiples. 
Suppose that some range of values of $n$ needs the same number of zeros $z$ to get canonized. In other words, for this range, the canonization of $n0^z$ ends in 1. That means the first column of the row that is the $n$th multiple of the Tribonacci sequence equals $nT_{n+2+z}$. Thus, for this range the elements in the first column form an arithmetic progression with difference $T_{n+2+z}$, and row numbers form an arithmetic progression with difference $T_{n+2+z} - T_{n+1+z}$. \subsection{How to find the extraTribs in the garden} Given an extraTrib, how can we locate it in the array? We can do this by computing the outs of each term. From Fact 5T, when $n$ is the wall, $\out(n)-1$ is the term after $n$, and it is the last term that does so. Moreover, suppose we have three consecutive terms of an extraTrib that are $m$, $\out(m)$, and $\out^2(m)$. Then $n+\out(n)+\out^2(n)=\out^3(n)$, it follows that the next terms is $\out^3(n)$ and so on. Thus to find the wall term in an extraTrib, it is enough to locate a term $n$, such that the next term is $m = \out(n)-1$, and the next two terms are $\out(m)$ and $\out^2(m)$. \subsection{Extending to the left} Let us extend the Fibonacci sequence to the left: \[\ldots,\ -8,\ 5,\ ,-3,\ 2,\ -1,\ 1,\ 0,\ 1,\ ,1\ 2.\] We see that the signs on the left alternate. Note that any extraFib series extended to the left has a similar pattern. The signs on the left alternate, and the absolute values moving to the left form an extraFib, see \cite{ConwayRyba2016}. Going backwards through the Tribonacci sequence gives 1, 0, 0, 1, $-1$, 0, 2, $-3$, 1, 4, $-8$, 5, 7, $-20$, 18, 9, $-47$, 56, 0, $-103$, 159, $-56$, $-206$, 421, $-271$, etc. We see that the signs do not form a nice pattern, and the absolute values do not form an extraTrib. \begin{lemma} \label{lemma:negativeextraTribs} When extending an extraTrib to the left, we have to reach a negative number. After that, no three consecutive numbers to the left of it cannot have the same sign. 
\end{lemma} \begin{proof} First, we prove that when extending an extraTrib sequence to the left, we always reach a negative number. Assume this is false, and there exists such an extraTrib whose elements are all non-negative. It follows that the sequence is non-decreasing as for any $k$, we have $T_k = T_{k-1} + T_{k-2} + T_{k-3} \geq T_{k-1}$. It follows that the sequence does not contain zeros and is actually monotonically increasing. There is a finite number of non-negative integers less than any given non-negative integer, but there are no bounds on the index moving to the left, so as we move to the left in an extraTrib sequence, we must at some point encounter a negative element. Suppose $T_k < 0$. Let $T_n$, $T_{n-1}$, and $T_{n-2}$ for $n < k$ be all positive. This means that $T_{n+1} = T_n+T_{n-1}+T_{n-2} > 0$. By continuing, we see that for all $j > n$, we have $T_j > 0$, contradicting that $T_k < 0$. Let $T_n$, $T_{n-1}$, and $T_{n-2}$ for $n < k$ be all negative. This means that $T_{n+1} = T_n+T_{n-1}+T_{n-2} < 0$. By continuing, we see that for all $j > n$, we have $T_j < 0$, contradicting the fact that this is an extraTrib. \end{proof} \subsection{Reversal} Similar to Conway and Ryba \cite{ConwayRyba2016}, we can define the reversal of an extraTrib series, the series where we change the index $n$ to $-n$ and replace the numbers with their absolute values. The following proposition is a negation of Fact 10. \begin{theorem} The reversal of the extraTrib is not an extraTrib. \end{theorem} \begin{proof} We use the fact from Lemma~\ref{lemma:negativeextraTribs} that no three consecutive terms of an extraTrib can be all positive or all negative once the term index is below some constant $i_0$. Moreover, for the reversal of the extraTrib series to be an extraTrib, we can assume that the terms to the left of some index constant $i_0$ increase in absolute value. 
Let the absolute values of some four consecutive terms $A$, $B$, $C$, $D$ to the left of $i_0$ be $a$, $b$, $c$, and $d$. By our assumptions, $$a > b > c > d.$$ Now consider the signs of $A$, $B$, $C$, and $D$. Without loss of generality, we can assume that the sign for $A$ is positive. As no three consecutive terms have the same sign, there are 5 cases for the distribution of signs: $+ + -+$, $+ + --$, $+ - ++$, $+ - +-$, and $+ - -+$. We have that $A+B+C=D$. Thus, we get the following equations: $a + b - c = d$, $a + b - c = - d$, $a - b + c = d$, $a - b + c = - d$, and $a - b - c = d$. Or equivalently, $a + b = c + d$, $a + b + d = c$, $a + c = b + d$, $ a + c + d = b$, and $a = b + c + d$. Given that $a > b > c > d$, we can exclude the first four cases. We are left with the case of four consecutive numbers $a$, $-b$, $-c$, and $d$. Consider the number to the left of $a$. On one hand, it has to equal $-c - (-b) - a = b - c - a$. On the other hand, its absolute value has to be $a+b+c$. We get a contradiction. \end{proof} \section{Fib/Trib binary/ternary numbers}\label{sec:tribinarrynumbers} The sequence of \textit{Fibbinary} numbers (A003714) is defined as numbers whose binary representation contains no two adjacent ones. In other words, the Fibbinary numbers can be formed by writing the Zeckendorf representations of natural numbers and then evaluating the result in binary: \[0,\ 1,\ 2,\ 4,\ 5,\ 8,\ 9,\ 10,\ 16,\ 17,\ 18,\ 20,\ 21,\ 32,\ 33,\ 34,\ 36,\ 37,\ 40,\ \ldots. \] Analogously, we define the \textit{Tribbinary} numbers as those numbers whose binary representation has no three consecutive ones. The sequence of Tribbinary numbers can be constructed by writing out the Tribonacci representations of non-negative integers and then evaluating the result in binary. This is sequence A003726: \[0,\ 1,\ 2,\ 3,\ 4,\ 5,\ 6,\ 8,\ 9,\ 10,\ 11,\ 12,\ 13,\ 16,\ 17,\ \ldots\] Now we would like to introduce two more sequences related to base 3, rather than base~2. 
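Before moving to base 3, note that the two binary conditions above have one-line bitwise membership tests; the following sketch (ours) reproduces the two lists:

```python
# Membership tests for the two binary sequences.

def is_fibbinary(n):
    return (n & (n >> 1)) == 0              # no two adjacent ones in binary

def is_tribbinary(n):
    return (n & (n >> 1) & (n >> 2)) == 0   # no three adjacent ones in binary

print([n for n in range(41) if is_fibbinary(n)])   # A003714 up to 40
print([n for n in range(18) if is_tribbinary(n)])  # A003726 up to 17
```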
We define \textit{Fibternary} numbers as numbers whose ternary representations consist only of zeros and ones and do hot have two consecutive ones. The sequence of Fibternary numbers can be constructed by writing out the Zeckendorf representations of non-negative integers and then evaluating the result in ternary. This is sequence A060140: \[0,\ 1,\ 3,\ 9,\ 10,\ 27,\ 28,\ 30,\ 81,\ 82,\ 84,\ 90,\ 91,\ 243,\ 244,\ \ldots.\] These are Fibbinary numbers written in base 2, then evaluated in base 3. We define \textit{Tribternary} numbers as numbers whose ternary representations consist only of zeros and ones and do hot have three consecutive ones. The sequence of Tribternary numbers can be constructed by writing out the Tribonacci representations of non-negative integers and then evaluating the result in ternary. This is now sequence A356823: \[0,\ 1,\ 3,\ 4,\ 9,\ 10,\ 12,\ 27,\ 28,\ 30,\ 31,\ 36,\ 37,\ 81,\ 82,\ 84,\ 85,\ 90,\ 91,\ \ldots.\] These are Tribbinary numbers written in base 2, then evaluated in base 3. A lot is known about Fibbinary numbers and can be easily generalized to the other three sequences. \textbf{Powers of 2 and 3.} The number of Fibbinary numbers less than any power of two is a Fibonacci number. It is easy to prove that the number of Tribbinary numbers less than any power of two is a Tribonacci number. Similarly, the number of Fibternary(Tribternary) numbers less than any power of three is a Fibonacci(Tribonacci) number. \textbf{Recursive generation.} We can generate all the four sequences we discuss here recursively. Start by adding 0 to the sequence. 
Then, if $x$ is a number in the sequence, add the following numbers to the sequence (ignoring repeated zeros): \begin{itemize} \item $2x$ and $4x+1$, for Fibbinary; \item $2x$, $4x+1$, and $8x+3$, for Tribbinary; \item $3x$ and $9x+1$, for Fibternary; \item $3x$, $9x+1$, and $27x+4$, for Tribternary; \end{itemize} \textbf{Fibonacci(Tribonacci) word.} The Fibbinary numbers have the property that the $n$th Fibbinary number is even if the $n$th term of the Fibonacci word is $a$. Respectively, the $n$th Fibbinary number is odd (of the form $4x+1$) if the $n$th term of the Fibonacci word is $b$. Similarly, the $n$th Fibternary number is of the form $3x$ (correspondingly $9x+1$) if $n$th term of the Fibonacci word is $a$ (correspondingly $b$) (see comment in the OEIS for A060140). Similarly, the $n$th Tribbinary number is even if the $n$th term of the Tribonacci word is $a$. Respectively, the $n$th Tribbinary number is of the form $4x+1$ if the $n$th term of the Tribonacci word is $b$, and the $n$th Tribbinary number is of the form $8x+3$ if the $n$th term of the Tribonacci word is $c$. This follows from Theorem~\ref{thm:abcCorrespondsTo0-01-11}, see \cite{DucheneRigo2008}. Similarly, the $n$th Tribternary number is divisible by 3 if the $n$th term of the Tribonacci word is $a$. Respectively, the $n$th Tribbinary number is of the form $9x+1$ if $n$th term of the Tribonacci word is $b$, and the $n$th Tribbinary number is of the form $27x+4$ if $n$th term of the Tribonacci word is $c$. \textbf{Sums.} It is known and can be easily checked, that every non-negative integer can be written as the sum of two Fibbinary numbers. As Fibbinary numbers are a subset of Tribbinary numbers, we get that every non-negative integer can be written as the sum of two Tribbinary numbers. Here is the analog for Fibternary and Tribternary numbers. \begin{proposition} Every non-negative integer can be written as a sum of four Fibternary numbers or as a sum of three Tribternary numbers. 
\end{proposition} \begin{proof} We start with Fibternary numbers. Suppose base-3 representation of integer $n$ has $k$ digits. Consider two $k$-digit Fibternary numbers of the form $101010\ldots$ in base 3 and another two $k$-digit numbers of the form $010101\ldots$. We can add these four numbers to get the $k$-digit number $N$ written as $222222\ldots$ in base 3. We can get to our number $n$ by subtracting one or two in some digit placements. We can distribute these subtractions between our four numbers by replacing some ones with zeros in them. The four numbers will remain Fibternary and will sum up to $n$. We continue with Tribternary numbers. Suppose base-3 representation of integer $n$ has $k$ digits. Consider three special numbers in base 3, all of them consisting of zeros and ones. The first number has zeros in digit places divisible by 3, the second number in digit places that have remainder 1 when divided by three, and the third number in digit places that have remainder 2 when divided by 3. These three numbers sum up to a number with $k$ digits in base three, all equal to 2. Suppose our number $n$ in base 3 has the digit 1 in some place. Then we can remove a 1 from one of the two special numbers that have a 1 in the same place. Suppose our number $(n)_3$ has the digit 0 in some place. Then we can remove a 1 from both of the two special numbers with a 1 in the same place. When all digits are adjusted, we will have three numbers that sum to $n$, and all of them have every third digit as zero. Thus, all of them are Tribonacci representations of some numbers. \end{proof} \textbf{Multiples.} Every number has a Fibbinary multiple. The proof is available in the sequence A300867 entry in the OEIS \cite{OEIS}. Our generalization is in the next proposition. \begin{proposition} Every number has a Fibternary multiple. 
\end{proposition} \begin{proof} Let $a(k)=\frac{9^k-1}{8}$, and then for any $n$, the pigeonhole principle implies there are $i\neq j$ such that $a(i)\equiv a(j)\mod n$, making $a(i)-a(j)$ a multiple of $n$. In addition, $$9^k-1={\overbrace{888\ldots8}^{k\text{ 8's}}}_9,$$ so $$a(k)={\overbrace{111\ldots1}^{k\text{ 1's}}}_9.$$ Then $$a(i)-a(j)=\overbrace{111\ldots1}^{i-j\text{ 1's}}{\overbrace{000\ldots0}^{j\text{ 0's}}}_9=\overbrace{010101\ldots01}^{i-j\text{ 01's}}{\overbrace{000\ldots0}^{2j\text{ 0's}}}_3.$$ This means that $a(i)-a(j)$ is actually fibternary. \end{proof} As every Fibbinary number is also Tribbinary and every Fibternary number is also Tribternary, we have the following corollary. \begin{corollary} Every number has a Tribbinary and a Tribternary multiple. \end{corollary} \section{Acknowledgments} We are grateful to PRIMES STEP program for giving us the opportunity to conduct this research.
1,108,101,565,061
arxiv
\section{Introduction} Graphs, or networks, are a general and flexible data structure to encode complex relationships among objects. Examples of real-world graphs include social networks, airline networks, protein-protein interaction networks, and traffic networks. Recently, there has been increasing interest from both academic and industrial communities in analyzing graphical data. Examples span a variety of domains and applications such as node classification~\cite{cao2015grarep,tang2015line} and link prediction~\cite{gao2011temporal,wang2011community} in social networks, role prediction in protein-protein interaction networks \cite{krogan2006global}, and prediction of information diffusion in social and citation networks~\cite{newman2004finding}. One fundamental task of graph analysis is community detection, which aims to cluster nodes into multiple groups called communities. Each community is a set of nodes that are more closely connected to each other than to nodes in different communities. A community level description is able to capture important information about a graph's global structure. Such a description is useful in many real-world applications, such as identifying users with similar interests in social networks \cite{newman2004finding} or proteins with similar functionality in biochemical networks \cite{krogan2006global}. Community detection has been extensively studied in the literature, and a number of methods have been proposed, including algorithmic approaches \cite{ahn2010link,derenyi2005clique} and probabilistic models \cite{gopalan2013efficient,mcauley2014discovering,yang2013overlapping,yang2013community}. A classical approach to detect communities is spectral clustering~\cite{white2005spectral}, which assumes that neighboring nodes tend to belong to the same communities and detects communities by finding the eigenvectors of the graph Laplacian. 
Another important task of graph analysis is node representation learning, where nodes are described using low-dimensional features. Node representations effectively capture local graph structure and are often used as features for many prediction tasks. Modern methods for learning node embeddings \cite{grover2016node2vec,perozzi2014deepwalk,tang2015line} have proved effective on a variety of tasks such as node classification \cite{cao2015grarep,tang2015line}, link prediction \cite{gao2011temporal,wang2011community} and graph visualization \cite{tang2016node,wang2016structural}. Clustering, which captures the global structure of graphs, and learning node embeddings, which captures local structure, are typically studied separately. Clustering is often used for exploratory analysis, while generating node embeddings is often done for predictive analysis. However, these two tasks are very correlated and it may be beneficial to perform both tasks simultaneously. The intuition is that (1) node representations can be used as good features for community detection (e.g., through K-means)~\cite{cavallari2017learning,rozemberczki2018gemsec,tsitsulin2018verse}, and (2) the node community membership can provide good contexts for learning node representations~\cite{wang2017community}. However, how to leverage the relatedness of node clustering and node embedding in a unified framework for joint community detection and node representation learning is under-explored. In this paper, we propose a novel probabilistic generative model called vGraph{} for joint community detection and node representation learning. vGraph{} assumes that each node $v$ can be represented as a mixture of multiple communities and is described by a multinomial distribution over communities $z$, i.e., $p(z|v)$. Meanwhile, each community $z$ is modeled as a distribution over the nodes $v$, i.e., $p(v|z)$. vGraph{} models the process of generating the neighbors for each node. 
Given a node $u$, we first draw a community assignment $z$ from $p(z|u)$. This indicates which community the node is going to interact with. Given the community assignment $z$, we generate an edge $(u,v)$ by drawing another node $v$ according to the community distribution $p(v|z)$. Both the distributions $p(z|v)$ and $p(v|z)$ are parameterized by the low-dimensional representations of the nodes and communities. As a result, this approach allows the node representations and the communities to interact in a mutually beneficial way. We also design a very effective algorithm for inference with backpropagation. We use variational inference for maximizing the lower-bound of the data likelihood. The Gumbel-Softmax~\cite{jang2016categorical} trick is leveraged since the community membership variables are discrete. Inspired by existing spectral clustering methods~\cite{dong2012clustering}, we added a smoothness regularization term to the objective function of the variational inference routine to ensure that community membership of neighboring nodes is similar. The whole framework of vGraph{} is very flexible and general. We also show that it can be easily extended to detect hierarchical communities. In the experiment section, we show results on three tasks: overlapping community detection, non-overlapping community detection, and node classification-- all using various real-world datasets. Our results show that vGraph{} is very competitive with existing state-of-the-art approaches for these tasks. We also present results on hierarchical community detection. \section{Related Work} \textbf{Community Detection.} Many community detection methods are based on matrix factorization techniques. Typically, these methods try to recover the node-community affiliation matrix by performing a low-rank decomposition of the graph adjacency matrix or other related matrices \cite{kuang2012symmetric,li2018community,wang2011community,yang2013overlapping}. 
These methods are not scalable due to the complexity of matrix factorization, and their performance is restricted by the capacity of the bi-linear models. Many other studies develop generative models for community detection. Their basic idea is to characterize the generation process of graphs and cast community detection as an inference problem \cite{yang2013community,zhang2015incorporating,zhou2015infinite}. However, the computational complexity of these methods is also high due to complicated inference. Compared with these approaches, vGraph{} is more scalable and can be efficiently optimized with backpropagation and Gumbel-Softmax~\cite{jang2016categorical,maddison2016concrete}. Additionally, vGraph{} is able to learn and leverage the node representations for community detection. \textbf{Node Representation Learning.} The goal of node representation learning is to learn distributed representations of nodes in graphs so that nodes with similar local connectivity tend to have similar representations. Some representative methods include DeepWalk \cite{perozzi2014deepwalk}, LINE \cite{tang2015line}, node2vec \cite{grover2016node2vec} and GraRep \cite{cao2015grarep}. Typically, these methods explore the local connectivity of each node by conducting random walks with either breadth-first search~\cite{perozzi2014deepwalk} or depth-first search~\cite{tang2015line}. Despite their effectiveness in a variety of applications, these methods mainly focus on preserving the local structure of graphs, thereby ignoring global community information. In vGraph{}, we address this limitation by treating the community label as a latent variable. This way, the community label can provide additional contextual information which enables the learned node representations to capture the global community information.
\textbf{Framework for node representation learning and community detection.} There exists previous work~\cite{cavallari2017learning,jia2019communitygan,tsitsulin2018verse,tu2018unified,wang2017community} that attempts to solve community detection and node representation learning jointly. However, the optimization in these methods alternates between community assignment and node representation learning instead of simultaneously solving both tasks \cite{cavallari2017learning,tu2018unified}. Compared with these methods, vGraph{} is scalable and the optimization is done end-to-end. \textbf{Mixture Models.} Methodologically, our method is related to mixture models, particularly topic models (e.g., PLSA \cite{hofmann1999probabilistic} and LDA \cite{blei2003latent}). These methods simulate the generation of words in documents, in which topics are treated as latent variables, whereas we consider generating neighbors for each node in a graph, with the community acting as a latent variable. Compared with these methods, vGraph{} parameterizes the distributions with node and community embeddings, and all the parameters are trained with backpropagation. \section{Problem Definition} Graphs are ubiquitous in the real world. Two fundamental tasks on graphs are community detection and learning node embeddings, which focus on global and local graph structures respectively and hence are naturally complementary. In this paper, we study jointly solving these two tasks. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ represent a graph, where $\mathcal{V}=\{v_1,\ldots ,v_V\}$ is a set of vertices and $\mathcal{E}=\{e_{ij}\}$ is the set of edges. Traditional graph embedding aims to learn a node embedding $\boldsymbol{\phi}_i \in \mathbb{R}^d$ for each $v_i \in \mathcal{V}$, where $d$ is predetermined. Community detection aims to extract the community membership $\mathcal{F}$ for each node.
Suppose there are $K$ communities in the graph $\mathcal{G}$; we denote the community assignment of node $v_i$ as $\mathcal{F}(v_i) \subseteq \{1,...,K\}$. We aim to jointly learn the node embeddings $\boldsymbol{\phi}$ and the community affiliation of vertices $\mathcal{F}$. \section{Methodology} \begin{figure}% \centering \subfloat[vGraph{}]{{\includegraphics[width=6cm]{images/graphical-1.png} }}% \qquad \subfloat[Hierarchical vGraph{}]{{\includegraphics[width=6.3cm]{images/graphical-2.png} }}% \caption{The diagram on the left represents the graphical model of vGraph{} and the diagram on the right represents the graphical model of the hierarchical extension. $\boldsymbol{\phi}_n$ is the embedding of node $w_n$, $\boldsymbol{\psi}$ denotes the embedding of communities, and $\boldsymbol{\varphi}$ denotes the embeddings of nodes used in $p(c|z)$. Refer to Eq.~\ref{eq:softmax1} and Eq.~\ref{eq:softmax2}.} \label{fig:graphical_model} \end{figure} In this section, we introduce our generative approach vGraph{}, which aims at collaboratively learning node representations and detecting node communities. Our approach assumes that each node can belong to multiple communities representing different social contexts \cite{epasto2019single}. Each node should generate different neighbors under different social contexts. vGraph{} parameterizes the node-community distributions by introducing node and community embeddings. In this way, the node representations can benefit from the detection of node communities. Similarly, the detected community assignment can in turn improve the node representations. Inspired by existing spectral clustering methods \cite{dong2012clustering}, we add a smoothness regularization term that encourages linked nodes to be in the same communities. \subsection{vGraph{}} vGraph{} models the generation of node neighbors. It assumes that each node can belong to multiple communities.
For each node, different neighbors will be generated depending on the community context. Based on the above intuition, we introduce a prior distribution $p(z|w)$ for each node $w$ and a node distribution $p(c|z)$ for each community $z$. The generative process of each edge $(w,c)$ can be naturally characterized as follows: for node $w$, we first draw a community assignment $z \sim p(z|w)$, representing the social context of $w$ during the generation process. Then, the linked neighbor $c$ is generated based on the assignment $z$ through $c \sim p(c|z)$. Formally, this generation process can be formulated in a probabilistic way: \begin{equation} p(c|w) = \sum_z p(c|z)p(z|w). \label{eq:simple} \end{equation} vGraph{} parameterizes the distributions $p(z|w)$ and $p(c|z)$ by introducing a set of node embeddings and community embeddings. Note that different sets of node embeddings are used to parametrize the two distributions. Specifically, let $\boldsymbol{\phi}_i$ denote the embedding of node $i$ used in the distribution $p(z|w)$, $\boldsymbol{\varphi}_i$ denote the embedding of node $i$ used in $p(c|z)$, and $\boldsymbol{\psi}_j$ denote the embedding of the $j$-th community. The prior distribution $p_{\boldsymbol{\phi},\boldsymbol{\psi}}(z|w)$ and the node distribution conditioned on a community $p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|z)$ are parameterized by two softmax models: \begin{equation} \label{eq:softmax1} p_{\boldsymbol{\phi},\boldsymbol{\psi}}(z=j|w) =\frac{\mathrm{exp}(\boldsymbol{\phi}^{T}_w \boldsymbol{\psi}_j)}{\sum_{i=1}^K\mathrm{exp}(\boldsymbol{\phi}^{T}_w \boldsymbol{\psi}_i)}, \end{equation} \begin{equation} \label{eq:softmax2} p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|z=j) =\frac{\mathrm{exp}(\boldsymbol{\psi}_j^{T} \boldsymbol{\varphi}_c )}{\sum_{c^{'} \in \mathcal{V}}\mathrm{exp}(\boldsymbol{\psi}_j^{T} \boldsymbol{\varphi}_{c^{'}})}. 
\end{equation} Calculating Eq.~\ref{eq:softmax2} can be expensive as it requires a summation over all vertices. Thus, for large datasets we can employ negative sampling as done in LINE \cite{tang2015line}, using the following objective function: \begin{equation} \label{eq:neg} \log \sigma(\boldsymbol{\varphi}_c^T\cdot \boldsymbol{\psi}_j)+\sum_{i=1}^{M}E_{v\sim P_n(v)}[\log \sigma(-\boldsymbol{\varphi}_{v}^T\cdot \boldsymbol{\psi}_j)], \end{equation} where $\sigma(x)=1/(1+\exp(-x))$, $P_n(v)$ is a noise distribution, and $M$ is the number of negative samples. This, combined with stochastic optimization, makes our model scalable. To learn the parameters of vGraph{}, we maximize the log-likelihood of the observed edges, i.e., $\mathrm{log}~p_{\boldsymbol{\phi},\boldsymbol{\varphi},\boldsymbol{\psi}}(c|w)$. Since directly optimizing this objective is intractable for large graphs, we instead optimize the following evidence lower bound (ELBO)~\cite{kingma2013auto}: \begin{equation} \label{eq:lowerbound} \begin{split} \mathcal{L}= E_{z\sim q(z|c,w)}[\mathrm{log}~p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|z)]-\mathrm{KL}(q(z|c,w)||p_{\boldsymbol{\phi},\boldsymbol{\psi}}(z|w)), \end{split} \end{equation} where $q(z|c,w)$ is a variational distribution that approximates the true posterior distribution $p(z|c,w)$, and $\mathrm{KL}(\cdot||\cdot)$ represents the Kullback-Leibler divergence between two distributions. Specifically, we parameterize the variational distribution $q(z|c,w)$ with a neural network as follows: \begin{equation} \label{eq:softmax3} q_{\boldsymbol{\phi},\boldsymbol{\psi}}(z=j|w,c) =\frac{\mathrm{exp}((\boldsymbol{\phi}_w \odot \boldsymbol{\phi}_c)^{T} \boldsymbol{\psi}_j)}{\sum_{i=1}^K\mathrm{exp}((\boldsymbol{\phi}_w \odot \boldsymbol{\phi}_c)^{T} \boldsymbol{\psi}_i)}, \end{equation} where $\odot$ denotes element-wise multiplication.
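To make the parameterization concrete, the three softmax distributions in Eq.~\ref{eq:softmax1}, Eq.~\ref{eq:softmax2}, and Eq.~\ref{eq:softmax3} can be sketched in a few lines of NumPy. The toy sizes and random embeddings below are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, d = 6, 3, 8                 # toy sizes: 6 nodes, 3 communities, 8-dim embeddings
phi = rng.normal(size=(V, d))     # node embeddings used in p(z|w) and q(z|w,c)
varphi = rng.normal(size=(V, d))  # node embeddings used in p(c|z)
psi = rng.normal(size=(K, d))     # community embeddings

def softmax(x):
    e = np.exp(x - x.max())       # shift by the max for numerical stability
    return e / e.sum()

def p_z_given_w(w):               # Eq. (softmax1): prior over communities for node w
    return softmax(phi[w] @ psi.T)

def p_c_given_z(j):               # Eq. (softmax2): distribution over nodes for community j
    return softmax(varphi @ psi[j])

def q_z_given_wc(w, c):           # Eq. (softmax3): variational posterior for edge (w, c)
    return softmax((phi[w] * phi[c]) @ psi.T)
```

Each function returns a normalized distribution (length $K$ for the first and third, length $V$ for the second); in the actual model the embeddings are the trainable parameters.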
We chose element-wise multiplication because it is symmetric and it forces the representation of the edge to be dependent on both nodes. The variational distribution $q(z|c,w)$ represents the community membership of the edge $(w,c)$. Based on this, we can easily approximate the community membership distribution of each node $w$, i.e., $p(z|w)$, by aggregating over all its neighbors: \begin{equation} p(z|w)= \sum_c p(z,c|w) = \sum_c p(z|w,c)p(c|w) \approx \frac{1}{|N(w)|} \sum_{c \in N(w)} q(z|w,c), \end{equation} where $N(w)$ is the set of neighbors of node $w$. To infer non-overlapping communities, we can simply take the $\argmax$ of $p(z|w)$. To detect overlapping communities, instead of thresholding $p(z|w)$ as in \cite{jia2019communitygan}, we use \begin{equation} \mathcal{F}(w)= \{ \argmax_k q(z=k|w,c) \}_{c \in N(w)}. \end{equation} That is, we assign each edge to one community and then map the edge communities to node communities by gathering nodes incident to all edges within each edge community, as in \cite{ahn2010link}. \noindent \textbf{Complexity.} Here we analyze the complexity of vGraph{}. Sampling an edge takes constant time; thus, calculating Eq.~\eqref{eq:neg} takes $\mathcal{O}(d(M+1))$ time, where $M$ is the number of negative samples and $d$ is the dimension of embeddings (the node embeddings and community embeddings have the same dimension). Calculating Eq.~\eqref{eq:softmax3} takes $\mathcal{O}(dK)$ time, where $K$ is the number of communities. Thus, an iteration with one sample takes $\mathcal{O}(\max(dM, dK))$ time. In practice, the number of updates required is proportional to the number of edges $\mathcal{O}(|\mathcal{E}|)$, so the overall time complexity of vGraph{} is $\mathcal{O}(|\mathcal{E}|d\max(M, K))$. \subsection{Community-smoothness Regularized Optimization} We optimize the lower bound \eqref{eq:lowerbound} w.r.t. both the parameters of the variational distribution and the generative parameters.
If $z$ is continuous, the reparameterization trick~\cite{kingma2013auto} can be used. However, $z$ is discrete in our case. In principle, we can still estimate the gradient using a score function estimator \cite{glynn1990likelihood,williams1992simple}. However, the score function estimator suffers from high variance, even when used with a control variate. Thus, we use the Gumbel-Softmax reparameterization~\cite{jang2016categorical,maddison2016concrete} to obtain gradients for the evidence lower bound. More specifically, we use the straight-through Gumbel-Softmax estimator~\cite{jang2016categorical}. A community can be defined as a group of nodes that are more similar to each other than to those outside the group \cite{pei2015nonnegative}. For a non-attributed graph, two nodes are similar if they are connected and share similar neighbors. However, vGraph{} does not explicitly weight local connectivity in this way. To resolve this, inspired by existing spectral clustering studies~\cite{dong2012clustering}, we augment our training objective with a smoothness regularization term that encourages the learned community distributions of linked nodes to be similar. Formally, the regularization term is given below: \begin{equation} \mathcal{L}_{reg} = \lambda \sum_{(w,c) \in \mathcal{E}} \alpha_{w,c} \cdot d(p(z|c), p(z|w)), \end{equation} where $\lambda$ is a tunable hyperparameter, $\alpha_{w,c}$ is a regularization weight, and $d(\cdot,\cdot)$ is the distance between two distributions (squared difference in our experiments). Motivated by \cite{rozemberczki2018gemsec}, we set $\alpha_{w,c}$ to be the Jaccard coefficient of nodes $w$ and $c$, which is given by: \begin{equation} \alpha_{w,c} = \frac{|N(w) \cap N(c)|}{|N(w) \cup N(c)|}, \end{equation} where $N(w)$ denotes the set of neighbors of $w$. The intuition is that $\alpha_{w,c}$ measures how similar the neighborhoods of the two nodes are.
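The regularization weight $\alpha_{w,c}$ is simple to compute; a minimal Python sketch on a toy adjacency-list graph (the graph itself is an illustrative assumption) is:

```python
def jaccard_weight(adj, w, c):
    """Regularization weight alpha_{w,c}: the Jaccard coefficient of the
    neighbor sets N(w) and N(c)."""
    nw, nc = set(adj[w]), set(adj[c])
    return len(nw & nc) / len(nw | nc)

# toy undirected graph as adjacency lists (illustrative)
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
```

For example, nodes 0 and 2 share the neighbors $\{1,3\}$ out of the union $\{0,1,2,3\}$, giving $\alpha_{0,2}=0.5$, so the regularizer pulls their community distributions together with substantial weight.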
Thus, the higher the Jaccard coefficient, the more the two nodes are encouraged to have similar distributions over communities. By combining the evidence lower bound and the smoothness regularization term, the entire loss function we aim to minimize is given below: \begin{equation} \mathcal{L} = -E_{z\sim q_{\boldsymbol{\phi},\boldsymbol{\psi}}(z|c,w)}[\mathrm{log}~p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|z)] + \mathrm{KL}(q_{\boldsymbol{\phi},\boldsymbol{\psi}}(z|c,w)||p_{\boldsymbol{\phi},\boldsymbol{\psi}}(z|w)) + \mathcal{L}_{reg}. \end{equation} For large datasets, negative sampling can be used for the first term. \subsection{Hierarchical vGraph{}} \label{subsection:hier} One advantage of vGraph{}'s framework is that it is very general and can be naturally extended to detect hierarchical communities. In this case, suppose we are given a $d$-level tree in which each node is associated with a community; the community assignment can then be represented as a $d$-dimensional path vector $\vec{z} = (z^{(1)}, z^{(2)}, ..., z^{(d)})$, as shown in Fig.~\ref{fig:graphical_model}. Then, the generation process is formulated as follows: (1) a tree path $\vec{z}$ is sampled from a prior distribution $p_{\boldsymbol{\phi},\boldsymbol{\psi}}(\vec{z}|w)$. (2) The context $c$ is decoded from $\vec{z}$ with $p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|\vec{z})$. Under this model, the likelihood of the network is \begin{equation} p_{\boldsymbol{\phi},\boldsymbol{\varphi},\boldsymbol{\psi}}(c|w) = \sum_{\vec{z}} p_{\boldsymbol{\psi},\boldsymbol{\varphi}}(c|\vec{z})p_{\boldsymbol{\phi},\boldsymbol{\psi}}(\vec{z}|w). \end{equation} At every node of the tree, there is an embedding vector associated with the community. Such a method is similar to the hierarchical softmax parameterization used in language models \cite{morin2005hierarchical}.
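The straight-through Gumbel-Softmax step used in the optimization above can be sketched as follows; this is a generic NumPy illustration of the estimator from \cite{jang2016categorical}, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax_st(logits, tau=0.5):
    """Straight-through Gumbel-Softmax: returns a hard one-hot sample for the
    forward pass and the relaxed (soft) sample used for gradients."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = np.exp((logits + g) / tau)
    y = y / y.sum()                       # continuous relaxation of the sample
    z_hard = np.zeros_like(y)
    z_hard[np.argmax(y)] = 1.0            # one-hot sample used in the forward pass
    return z_hard, y
```

In an autodiff framework the hard sample would be wired as `z_hard = stop_gradient(z_hard - y) + y`, so that the forward pass sees a discrete community assignment while gradients flow through the relaxation.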
\section{Experiments} As vGraph{} can detect both overlapping and non-overlapping communities, we evaluate it on three tasks: overlapping community detection, non-overlapping community detection, and vertex classification. \subsection{Datasets} We evaluate vGraph{} on 20 standard graph datasets. For non-overlapping community detection and node classification, we use 6 datasets: Citeseer, Cora, Cornell, Texas, Washington, and Wisconsin. For overlapping community detection, we use 14 datasets, including Facebook, Youtube, Amazon, Dblp, and Coauthor-CS. For Youtube, Amazon, and Dblp, we consider subgraphs with the 5 largest ground-truth communities due to the runtime of the baseline methods. To demonstrate the scalability of our method, we additionally include visualization results on a large dataset, Dblp-full. Dataset statistics are provided in Table~\ref{tab:dataset}. More details about the datasets are provided in Appendix A. \subsection{Evaluation Metric} For overlapping community detection, we use \textit{F1-Score} and \textit{Jaccard Similarity} to measure the performance of the detected communities as in \cite{yang2013community,li2018community}: for detected communities $\mathcal{C}$ and ground-truth communities $\mathcal{C}^*$, the score is \begin{equation} \small \frac{1}{2}\Big(\frac{1}{|\mathcal{C}^*|}\sum_{C^*_i \in \mathcal{C}^*}\max_{C_j \in \mathcal{C}} \delta(C^*_i, C_j) + \frac{1}{|\mathcal{C}|}\sum_{C_i \in \mathcal{C}}\max_{C^*_j \in \mathcal{C}^*} \delta(C^*_j, C_i)\Big), \end{equation} where $\delta(C^*_i, C_j)$ is the harmonic mean (F1-score) of $C^*_i$ and $C_j$ for the F1 metric, and $\delta(C^*_i, C_j) = \frac{|C^*_i \cap C_j|}{|C^*_i \cup C_j|}$ for the Jaccard metric. For both metrics, larger values indicate better performance. For non-overlapping community detection, we use \textit{Normalized Mutual Information (NMI)} \cite{tian2014learning} and \textit{Modularity}. Note that Modularity does not utilize ground-truth data. For node classification, \textit{Micro-F1} and \textit{Macro-F1} are used. \subsection{Comparative Methods} For overlapping community detection, we choose four competitive baselines: \textbf{BigCLAM} \cite{yang2013overlapping}, a nonnegative matrix factorization approach based on the Bernoulli-Poisson link that only considers the graph structure; \textbf{CESNA} \cite{yang2013community}, an extension of BigCLAM that additionally models the generative process for node attributes; \textbf{Circles} \cite{mcauley2014discovering}, a generative model of edges w.r.t. attribute similarity to detect communities; and \textbf{SVI} \cite{gopalan2013efficient}, a Bayesian model for graphs with overlapping communities that uses a mixed-membership stochastic blockmodel. To evaluate node embedding and non-overlapping community detection, we compare our method with five baselines: \textbf{MF} \cite{wang2011community}, which represents each vertex with a low-dimensional vector obtained through factoring the adjacency matrix; \textbf{DeepWalk} \cite{perozzi2014deepwalk}, a method that adopts truncated random walks and Skip-Gram to learn vertex embeddings; \textbf{LINE} \cite{tang2015line}, which aims to preserve the first-order and second-order proximity among vertices in the graph; \textbf{Node2vec} \cite{grover2016node2vec}, which adopts biased random walks and Skip-Gram to learn vertex embeddings; and \textbf{ComE} \cite{cavallari2017learning}, which uses a Gaussian mixture model to learn an embedding and clustering jointly using random-walk features. \subsection{Experiment Configuration} For all baseline methods, we use the implementations provided by their authors with default parameters. For methods that only output representations of vertices, we apply K-means to the learned embeddings to obtain non-overlapping communities. Reported results are averaged over 5 runs.
\textbf{No node attributes} are used in any of our experiments. We generate node attributes from node degree features for those methods that require node attributes, such as CESNA \cite{yang2013community} and Circles \cite{mcauley2014discovering}. It is hard to compare the quality of community results when the numbers of communities differ across methods. Therefore, we set the number of communities to be detected, $K$, to the number of ground-truth communities for all methods, as in \cite{li2018community}. For vGraph{}, we use full-batch training when the dataset is small enough. Otherwise, we use stochastic training with a batch size of 5000 or 10000 edges. The initial learning rate is set to 0.05 and is decayed by 0.99 after every 100 iterations. We use the Adam optimizer and train for 5000 iterations. When smoothness regularization is used, $\lambda$ is set to $100$. For community detection, the model with the lowest loss is chosen. For node classification, we evaluate node embeddings after 1000 iterations of training. The dimension of node embeddings is set to 128 in all experiments for all methods. For the node classification task, we randomly select 70\% of the labels for training and use the rest for testing. \begin{table}[!t] \caption{Evaluation (in terms of F1-Score and Jaccard Similarity) on networks with overlapping ground-truth communities. NA means the task did not complete within 24 hours.
In order to evaluate the effectiveness of smoothness regularization, we show the result of our model with (vGraph{}+) and without the regularization.} \centering \scalebox{.68}{ \begin{tabular}{|c|c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multicolumn{7}{|c||}{F1-score} & \multicolumn{6}{c|}{Jaccard} \\ \hline Dataset & Bigclam & CESNA & Circles & SVI & vGraph & vGraph+ & Bigclam & CESNA & Circles & SVI & vGraph & vGraph+ \\ \hline facebook0 & \textbf{0.2948} & 0.2806 & 0.2860 & 0.2810 & 0.2440 & 0.2606 & \textbf{0.1846} & 0.1725 & 0.1862 & 0.1760 & 0.1458 & 0.1594 \\ facebook107 & \textbf{0.3928} & 0.3733 & 0.2467 & 0.2689 & 0.2817 & 0.3178 & \textbf{0.2752} & 0.2695 & 0.1547 & 0.1719 & 0.1827 & 0.2170 \\ facebook1684 & 0.5041 & \textbf{0.5121} & 0.2894 & 0.3591 & 0.4232 & 0.4379 & 0.3801 & \textbf{0.3871} & 0.1871 & 0.2467 & 0.2917 & 0.3272 \\ facebook1912 & 0.3493 & 0.3474 & 0.2617 & 0.2804 & 0.2579 & \textbf{0.3750} & 0.2412 & 0.2394 & 0.1672 & 0.2010 & 0.1855 & \textbf{0.2796} \\ facebook3437 & 0.1986 & 0.2009 & 0.1009 & 0.1544 & 0.2087 & \textbf{0.2267} & 0.1148 & 0.1165 & 0.0545 & 0.0902 & 0.1201 & \textbf{0.1328} \\ facebook348 & 0.4964 & 0.5375 & 0.5175 & 0.4607 & \textbf{0.5539} & 0.5314 & 0.3586 & 0.4001 & 0.3927 & 0.3360 & \textbf{0.4099} & 0.4050 \\ facebook3980 & 0.3274 & 0.3574 & 0.3203 & NA & \textbf{0.4450} & 0.4150 & 0.2426 & 0.2645 & 0.2097 & NA & \textbf{0.3376} & 0.2933 \\ facebook414 & 0.5886 & 0.6007 & 0.4843 & 0.3893 & 0.6471 & \textbf{0.6693} & 0.4713 & 0.4732 & 0.3418 & 0.2931 & 0.5184 & \textbf{0.5587} \\ facebook686 & 0.3825 & 0.3900 & 0.5036 & 0.4639 & 0.4775 & \textbf{0.5379} & 0.2504 & 0.2534 & 0.3615 & 0.3394 & 0.3272 & \textbf{0.3856} \\ facebook698 & 0.5423 & 0.5865 & 0.3515 & 0.4031 & 0.5396 & \textbf{0.5950} & 0.4192 & 0.4588 & 0.2255 & 0.3002 & 0.4356 & \textbf{0.4771} \\ Youtube & 0.4370 & 0.3840 & 0.3600 & 0.4140 & 0.5070 & \textbf{0.5220} & 0.2929 & 0.2416 & 0.2207 & 0.2867 & 0.3434 & \textbf{0.3480} \\ Amazon & 0.4640 & 0.4680 & 
\textbf{0.5330} & 0.4730 & \textbf{0.5330} & 0.5320 & 0.3505 & 0.3502 & 0.3671 & 0.3643 & 0.3689 & \textbf{0.3693} \\ Dblp & 0.2360 & 0.3590 & NA & NA & 0.3930 & \textbf{0.3990} & 0.1384 & 0.2226 & NA & NA & 0.2501 & \textbf{0.2505} \\ Coauthor-CS & 0.3830 & 0.4200 & NA & 0.4070 & 0.4980 & \textbf{0.5020} & 0.2409 & 0.2682 & NA & 0.2972 & \textbf{0.3517} & 0.3432 \\ \hline \end{tabular} } \label{tab:overlapping} \end{table} \subsection{Results} Table \ref{tab:overlapping} shows the results on overlapping community detection. Some of the methods are not very scalable and cannot obtain results within 24 hours on some of the larger datasets. vGraph{} outperforms all baseline methods on 11 out of 14 datasets in terms of F1-score or Jaccard Similarity, as it is able to leverage useful representations at the node level. Moreover, vGraph{} is also very efficient on these datasets, since we employ variational inference and parameterize the model with node and community embeddings. By adding the smoothness regularization term (vGraph{}+), we see a further increase in performance, which shows that our method can be combined with concepts from traditional community detection methods. The results for non-overlapping community detection are presented in Table~\ref{tab:nonoverlapping}. vGraph{} outperforms all conventional node embedding methods + K-means on 4 out of 6 datasets in terms of NMI and on all 6 in terms of Modularity. ComE, another framework that jointly solves node embedding and community detection, also generally performs better than the other node embedding methods + K-means. This supports our claim that learning these two tasks collaboratively instead of sequentially can further enhance performance. Compared to ComE, vGraph{} performs better on 4 out of 6 datasets in terms of NMI and on 5 out of 6 datasets in terms of Modularity.
This shows that vGraph{} can also outperform frameworks that learn node representations and communities together. Table~\ref{tab:node} shows the results for the node classification task. vGraph{} significantly outperforms all the baseline methods in 9 out of the 12 settings (6 datasets $\times$ 2 metrics). The reason is that most baseline methods only consider the local graph information without modeling the global semantics. vGraph{} solves this problem by representing node embeddings as a mixture of communities to incorporate global context. \begin{table}[] \caption{Evaluation (in terms of NMI and Modularity) on networks with non-overlapping ground-truth communities.} \centering \scalebox{.68}{ \begin{tabular}{|c|c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multicolumn{7}{|c||}{NMI} & \multicolumn{6}{c|}{Modularity} \\ \hline Dataset & MF & deepwalk & LINE & node2vec & ComE & vGraph{} & MF & deepwalk & LINE & node2vec & ComE & vGraph{} \\ \hline cornell & 0.0632 & 0.0789 & 0.0697 & 0.0712 & 0.0732 & \textbf{0.0803} & 0.4220 & 0.4055 & 0.2372 & 0.4573 & 0.5748 & \textbf{0.5792} \\ texas & 0.0562 & 0.0684 & \textbf{0.1289} & 0.0655 & 0.0772 & 0.0809 & 0.2835 & 0.3443 & 0.1921 & 0.3926 & \textbf{0.4856} & 0.4636 \\ washington & 0.0599 & 0.0752 & \textbf{0.0910} & 0.0538 & 0.0504 & 0.0649 & 0.3679 & 0.1841 & 0.1655 & 0.4311 & 0.4862 & \textbf{0.5169} \\ wisconsin & 0.0530 & 0.0759 & 0.0680 & 0.0749 & 0.0689 & \textbf{0.0852} & 0.3892 & 0.3384 & 0.1651 & 0.5338 & 0.5500 & \textbf{0.5706} \\ cora & 0.2673 & 0.3387 & 0.2202 & 0.3157 & \textbf{0.3660} & 0.3445 & 0.6711 & 0.6398 & 0.4832 & 0.5392 & 0.7010 & \textbf{0.7358} \\ citeseer & 0.0552 & 0.1190 & 0.0340 & 0.1592 & \textbf{0.2499} & 0.1030 & 0.6963 & 0.6819 & 0.4014 & 0.4657 & 0.7324 & \textbf{0.7711} \\ \hline \end{tabular} } \label{tab:nonoverlapping} \end{table} \begin{table} \caption{Results of node classification on 6 datasets.} \centering \scalebox{.68}{ \begin{tabular}{|c|c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multicolumn{7}{|c||}{Macro-F1} &
\multicolumn{6}{c|}{Micro-F1} \\ \hline Datasets & MF & DeepWalk & LINE & Node2Vec & ComE & vGraph & MF & DeepWalk & LINE & Node2Vec & ComE & vGraph \\ \hline Cornell & 13.05 & 22.69 & 21.78 & 20.70 & 19.86 & \textbf{29.76} & 15.25 & 33.05 & 23.73 & 24.58 & 25.42 & \textbf{37.29} \\ Texas & 8.74 & 21.32 & 16.33 & 14.95 & 15.46 & \textbf{26.00} & 14.03 & 40.35 & 27.19 & 25.44 & 33.33 & \textbf{47.37} \\ Washington & 15.88 & 18.45 & 13.99 & 21.23 & 15.80 & \textbf{30.36} & 15.94 & 34.06 & 25.36 & 28.99 & 33.33 & \textbf{34.78} \\ Wisconsin & 14.77 & 23.44 & 19.06 & 18.47 & 14.63 & \textbf{29.91} & 18.75 & \textbf{38.75} & 28.12 & 25.00 & 32.50 & 35.00 \\ Cora & 11.29 & 13.21 & 11.86 & 10.52 & 12.88 & \textbf{16.23} & 12.79 & 22.32 & 14.59 & 27.74 & \textbf{28.04} & 24.35 \\ Citeseer & 14.59 & 16.17 & 15.99 & 16.68 & 12.88 & \textbf{17.88} & 15.79 & 19.01 & 16.80 & \textbf{20.82} & 19.42 & 20.42 \\ \hline \end{tabular} } \label{tab:node} \end{table} \subsection{Visualization} In order to gain more insight, we present visualizations of the facebook107 dataset in Fig.~\ref{fig:vis}(a). To demonstrate that our model can be applied to large networks, we present results of vGraph{} on a co-authorship network with around 100,000 nodes and 330,000 edges in Fig.~\ref{fig:vis}(b). More visualizations are available in Appendix B. We can observe that the community structure, or ``social context'', is reflected in the corresponding node embeddings (node positions in both visualizations are determined by t-SNE of the node embeddings). To demonstrate the hierarchical extension of our model, we visualize a subset of the co-authorship dataset in Fig.~\ref{fig:hier-vis}. We visualize the first-tier and second-tier communities in panels (a) and (b), respectively.
We can observe that the second-tier communities grouped under the same first-tier community interact more among themselves than with other second-tier communities. \begin{figure}[] \centering \subfloat[]{{\includegraphics[width=5cm]{images/facebook107.png} }}% \qquad \subfloat[]{{\includegraphics[width=4cm]{images/dblp5000-new.png} }}% \caption{In panel (a) we visualize the result on the facebook107 dataset using vGraph{}. In panel (b) we visualize the result on the Dblp-full dataset using vGraph{}. The coordinates of the nodes are determined by t-SNE of the node embeddings.}% \label{fig:vis} \end{figure} \begin{figure}[] \centering \subfloat[]{{\includegraphics[width=3.7cm]{images/dblp-1-new.png} }}% \qquad \subfloat[]{{\includegraphics[width=3.5cm]{images/dblp-2-new.png} }}% \qquad \subfloat[]{{\includegraphics[width=3.5cm]{images/tree.png} }}% \caption{We visualize the result on a subset of the Dblp dataset using two-level hierarchical vGraph{}. The coordinates of the nodes are determined by t-SNE of the node embeddings. In panel (a) we visualize the first-tier communities. In panel (b), we visualize the second-tier communities. In panel (c) we show the corresponding hierarchical tree structure.}% \label{fig:hier-vis} \vspace{-4mm} \end{figure} \section{Conclusion} In this paper, we proposed vGraph{}, a method that performs overlapping (and non-overlapping) community detection and learns node and community embeddings at the same time. vGraph{} casts the generation of edges in a graph as an inference problem. To encourage collaboration between community detection and node representation learning, we assume that each node can be represented by a mixture of communities, and each community is defined as a multinomial distribution over nodes.
We also design a smoothness regularizer in the latent space to encourage neighboring nodes to be similar. Empirical evaluation on 20 different benchmark datasets demonstrates the effectiveness of the proposed method on both tasks compared to competitive baselines. Furthermore, our model is also readily extendable to detect hierarchical communities. \medskip \bibliographystyle{plain} \section{Datasets} Citeseer, Cora, Cornell, Texas, Washington, and Wisconsin are available online\footnote{https://linqs.soe.ucsc.edu}. For Youtube, Amazon, and Dblp, we consider subgraphs with the 5 largest ground-truth communities due to the runtime of the baseline methods. \textbf{Facebook}\footnote{https://snap.stanford.edu/data/ego-Facebook.html} is a set of Facebook ego-networks. It contains 10 different ego-networks with identified circles. Social circles formed by friends are regarded as ground-truth communities. \textbf{Youtube}\footnote{http://snap.stanford.edu/data/com-Youtube.html} is a network of social relationships of Youtube users. The vertices represent users; the edges indicate friendships among the users; the user-defined groups are considered as ground-truth communities. \textbf{Amazon}\footnote{http://snap.stanford.edu/data/com-Amazon.html} is collected by crawling the Amazon website. The vertices represent products and the edges indicate products frequently purchased together. The ground-truth communities are defined by the product categories on Amazon. \textbf{Dblp}\footnote{http://snap.stanford.edu/data/com-DBLP.html} is a co-authorship network from Dblp. The vertices represent researchers and the edges indicate co-author relationships. Authors who have published in the same journal or conference form a community. \textbf{Coauthor-CS}\footnote{https://aminer.org/aminernetwork} is a computer science co-authorship network.
We chose 21 conferences and grouped them into five categories: \textit{Machine Learning}, \textit{Computer Linguistics}, \textit{Programming Language}, \textit{Data Mining}, and \textit{Database}. \newpage \section{Visualization} \begin{figure}[!h] \includegraphics[width=.76\linewidth]{images/facebook1684.png} \caption{Visualization of the result of vGraph on the facebook1684 dataset. The coordinates of the nodes are determined by t-SNE of the node embeddings.} \end{figure} \begin{figure}[] \includegraphics[width=.76\linewidth]{images/facebook107.png} \caption{Visualization of the result of vGraph on the facebook107 dataset. The coordinates of the nodes are determined by t-SNE of the node embeddings.} \end{figure} \begin{figure}[] \includegraphics[width=.8\linewidth]{images/facebook414.png} \caption{Visualization of the result of vGraph on the facebook414 dataset. The coordinates of the nodes are determined by t-SNE of the node embeddings.} \end{figure} \begin{figure}[] \includegraphics[width=.8\linewidth]{images/youtube5.png} \caption{Visualization of the result of vGraph on the Youtube dataset. The coordinates of the nodes are determined by t-SNE of the node embeddings.} \end{figure}
\section*{Introduction} It has been known since the foundational work of Frobenius that partitions of $n$ naturally label the complex irreducible representations of the symmetric group $\mathfrak{S}_n$. If we take an irreducible representation labeled by a partition $\lambda$ and tensor it with the sign representation, we obtain an irreducible representation labeled by the transpose of $\lambda$. The story in positive characteristic is more subtle: the irreducible representations of $\mathfrak{S}_n$ over a field of characteristic $p>0$ are labeled by the $p$-regular partitions (partitions in which each non-zero part occurs at most $p-1$ times). Tensoring such a representation with the sign representation still yields an irreducible representation, but the resulting involution on $p$-regular partitions lacks such a simple description as taking the transpose. In 1979, Mullineux defined a combinatorial algorithm producing an involution on $p$-regular partitions (now called the Mullineux involution), and he conjectured that this involution describes the result of tensoring an irreducible representation with the sign representation in characteristic $p$ \cite{Mullineux1979}. In 1995, Kleshchev found a surprising algorithm to compute the Mullineux involution \cite{Kleshchev1995III}. In fact, whereas Mullineux's algorithm involved repeated operations with strips of boxes in the rim of the Young diagram, it was later understood that Kleshchev's algorithm can be interpreted in terms of the Kashiwara crystal of an irreducible highest weight module of level $1$ for the quantum group of affine type $A_{p-1}$ \cite{LLT1996}: the Mullineux involution is the automorphism of oriented $\mathbb{Z}/p\mathbb{Z}$-colored graphs which switches the sign of each arrow. This algorithm led to Ford and Kleshchev's proof of the Mullineux Conjecture \cite{FK}; a different proof was given later by Bessenrodt and Olsson \cite{BessenrodtOlsson1998}.
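As an illustration, Kleshchev's algorithm admits a direct implementation in the level-$1$ case: peel $\lambda$ down to $\emptyset$ along good removable boxes, recording the residues, then rebuild along good addable boxes with all residues negated modulo $e$. The Python sketch below is ours, not the authors'; it uses the common convention that the box in row $r$, column $c$ ($0$-indexed) has residue $(c-r) \bmod e$, with the $i$-signature read from the top row down (conventions in the literature vary), and the function names are illustrative.

```python
def i_word(la, i, e):
    """Addable ('A') / removable ('R') i-boxes of the partition la, read top to bottom."""
    word = []
    for r in range(len(la) + 1):
        cur = la[r] if r < len(la) else 0
        if (r == 0 or la[r - 1] > cur) and (cur - r) % e == i:
            word.append((r, 'A'))                      # box (r, cur) is addable
        if r < len(la) and cur > 0 and (r == len(la) - 1 or la[r + 1] < cur) \
                and (cur - 1 - r) % e == i:
            word.append((r, 'R'))                      # box (r, cur - 1) is removable
    return word

def reduce_word(word):
    """Cancel adjacent A-above-R pairs (Kashiwara signature rule)."""
    stack = []
    for node in word:
        if node[1] == 'R' and stack and stack[-1][1] == 'A':
            stack.pop()
        else:
            stack.append(node)
    return stack

def f_tilde(la, i, e):
    """Add the good addable i-box (topmost surviving 'A'), or return None."""
    for r, t in reduce_word(i_word(la, i, e)):
        if t == 'A':
            mu = list(la) + [0] * (r + 1 - len(la))
            mu[r] += 1
            return mu
    return None

def e_tilde(la, i, e):
    """Remove the good removable i-box (bottommost surviving 'R'), or return None."""
    for r, t in reversed(reduce_word(i_word(la, i, e))):
        if t == 'R':
            mu = list(la)
            mu[r] -= 1
            return [p for p in mu if p > 0]
    return None

def mullineux(la, e):
    """Mullineux image of an e-regular partition, via Kleshchev's crystal algorithm."""
    path, mu = [], list(la)
    while mu:                                          # peel along good boxes
        i = next((i for i in range(e) if e_tilde(mu, i, e) is not None), None)
        if i is None:
            raise ValueError("not an e-regular partition")
        path.append(i)
        mu = e_tilde(mu, i, e)
    out = []
    for i in reversed(path):                           # rebuild with residues negated mod e
        out = f_tilde(out, (-i) % e, e)
    return out
```

For instance, `mullineux` fixes every $2$-regular partition (tensoring with the sign is trivial in characteristic $2$), and for $e>|\lambda|$ it reduces to transposition, consistent with the semisimple situation.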
The Mullineux involution can be generalized to various extents. First, one can look at the Hecke algebra of $\mathfrak{S}_n$ (which can be seen as a deformation of the group algebra) with parameter specialized to a primitive $e$-th root of $1$, $e\in\mathbb{Z}_{\geq 2}$. An involution on the set of $e$-regular partitions (which parametrize the associated irreducible representations) can then be defined using crystals as above, see \cite[Section 7]{LLT1996}. Next, Fayers defined a Mullineux involution for the Hecke algebra of the complex reflection group $G(\ell,1,n)$ (the Ariki-Koike algebra) \cite{fayers}. Fayers' involution can also be computed using crystal graphs (now for irreducible highest weight modules of level $\ell$) or via a combinatorial algorithm generalizing Mullineux's original procedure \cite{JaconLecouvey2008}. The Ariki-Koike algebra has cell modules labeled by all $\ell$-partitions, but simples labeled only by Uglov $\ell$-partitions (which coincide with $e$-regular partitions for $\ell=1$). However, its module category is a quotient of a highest weight category $\mathcal{O}_{\kappa,\mathbf{s}}$ where every $\ell$-partition labels a simple module, raising the question whether the Mullineux involution admits a further meaningful extension to that bigger category. Namely, consider the category $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ of the Cherednik algebra of $G(\ell,1,n)$. This category depends on parameters $\kappa\in\mathbb{Q}^\times$ and $\mathbf{s}\in\mathbb{Q}^\ell$ \cite{GGOR2003}, \cite{Rouquier2008}, \cite{Losev2015a} and its Grothendieck group has a basis consisting of $\ell$-partitions of $n$. In order to relate categories depending on different parameters, Losev introduced derived equivalences called wall-crossing functors \cite{Losev2015}. Each wall-crossing can be thought of as a partial version of a duality functor called Ringel duality. 
The wall-crossing functors and Ringel duality are examples of a special kind of derived equivalence called a perverse equivalence \cite{ChuangRouquier2008},\cite{Losev2017}, and consequently they effect a permutation of the set of simple objects, that is, a permutation of $\ell$-partitions. It is natural to ask for an explicit formula for these combinatorial maps. We now summarize the main results of this paper. In Theorem \ref{genmul} we define a generalization of the Mullineux involution on all multipartitions. The proof uses the result of \cite{Gerber2016} that the $\widehat{\mathfrak{sl}_e}$-, $\mathfrak{sl}_\infty$-, and $\widehat{\mathfrak{sl}_\ell}$-crystals on the level $\ell$ Fock space all commute. Our involution $\Phi$ is compatible with both Fayers' and Losev's involutions, recovering Fayers' in the case of Uglov multipartitions. The next question is the representation-theoretic meaning of $\Phi$. In Section \ref{chered} we study the combinatorics of perverse equivalences on module categories of Cherednik algebras. Theorems \ref{wcmul1} and \ref{wcmul2} give some formulas for the $\kappa=0$ wall-crossing in terms of $\ell$ copies of the level $1$ Mullineux involution; we recover \cite[Corollary 5.7]{Losev2015} when $\ell=1$. Next, we look for a duality functor which produces the involution $\Phi$, and we find in Theorem \ref{ringelmul} that $\Phi$ arises from Ringel duality. Here the perspective of diagrammatic Cherednik algebras \cite{Webster2017} is crucial, especially \cite[Corollary 5.11]{Webster2017}. In Section \ref{perspectives}, we define a refinement of $\Phi$ with a speculative eye towards the Alvis-Curtis duality, a perverse equivalence for finite groups of Lie type which still lacks a combinatorial description outside type $A$. 
This generalizes Dudas and the second author's definition of a generalized Mullineux involution in the case $\ell=1$ \cite{DudasJacon2018} by refining the $\mathfrak{sl}_\infty$-crystal with respect to an integer parameter $d$. \medskip {\bf Acknowledgements.} The authors thank Olivier Dudas, Ivan Losev, Tomasz Przezdziecki, Catharina Stroppel, and Ben Webster for useful discussions, and the anonymous referee for helpful suggestions to improve the readability of the paper. The first author is supported by the Ambizione project of the Swiss National Science Foundation. The second author is supported by Agence Nationale de la Recherche GeRepMod ANR-16-CE40-0010-01. \section{The Mullineux involution for cyclotomic Hecke algebras}\label{mulAK} Here we give a quick review of the definition of the Mullineux involution for cyclotomic Hecke algebras and its crystal interpretation \cite{fayers}, \cite{JaconLecouvey2008}. This generalizes the usual notion of Mullineux involution. \subsection{Definition} Let $\ell\in\mathbb{Z}_{\geq1}$ and $n\in\mathbb{Z}_{\geq1}$. Denote by $W_{\ell,n}$ the complex reflection group $G(\ell,1,n)=\mathfrak{S}_n\ltimes(\mathbb{Z}/\ell\mathbb{Z})^n$. Let $R$ be a field of arbitrary characteristic, let $v\in R^{\times}$, and let $(s_{1},s_{2},\ldots,s_{\ell})$ be an $\ell$-tuple of integers. The cyclotomic Hecke algebra (also called Ariki-Koike algebra) $\mathcal{H}_{R,n}^\mathbf{s}=\mathcal{H}(v;s_1,\ldots,s_\ell)$ over $R$ is the unital associative $R$-algebra with a presentation by \begin{itemize} \item generators: $T_0$, $T_1$,\ldots, $T_{n-1}$, \item relations: \begin{align*} & T_0 T_1 T_0 T_1=T_1 T_0 T_1 T_0, \\ & T_iT_{i+1}T_i=T_{i+1}T_i T_{i+1}\ (i=1,\ldots,n-2), \\ & T_i T_j =T_j T_i\ (|j-i|>1), \\ &(T_0-v^{s_1})(T_0-v^{s_2})\ldots(T_0- v^{s_\ell}) = 0, \\ &(T_i-v)(T_i+1) = 0\ (i=1,\ldots,n-1). \end{align*} \end{itemize} It can be seen as a deformation of the group algebra of $W_{\ell,n}$.
In particular, if $\ell=1$, it is the usual Hecke algebra of type $A$, and if moreover $v=1$, we obtain the group algebra $R\mathfrak{S}_n$ of the symmetric group. We denote by: \begin{itemize} \item $\Pi^\ell$ the set of all $\ell$-partitions, that is, the set of all $\ell$-tuples $(\lambda^{1},\ldots,\lambda^\ell)$ of partitions. \item $\Pi=\Pi^1$ the set of all partitions. \end{itemize} The unique $\ell$-partition of size $0$ is denoted by $\boldsymbol{\emptyset}$. For any subset $\mathcal{E}$ of $\Pi^\ell$ and any $n\in\mathbb{Z}_{\geq0}$, we denote by $\mathcal{E}(n)$ the set of $\ell$-partitions in $\mathcal{E}$ of total size $|\lambda^1|+\ldots+|\lambda^\ell|=n$. Let $e$ be the multiplicative order of $v$ in $R$. We assume that $v\neq 1$ so that we have $e\in \{2,3,\ldots \}\sqcup \{ \infty\}$. We now recall several facts about the representation theory of cyclotomic Hecke algebras. We refer to \cite[Chapter 5]{GeckJacon2011} for details. For each $\boldsymbol{\la}\in \Pi^{\ell} (n)$, there is an $\mathcal{H}_{R,n}^\mathbf{s}$-module $S^{\boldsymbol{\la}}$ which is the Specht module associated to $\boldsymbol{\la}$. Each of these modules carries a natural $\mathcal{H}_{R,n}^\mathbf{s}$-invariant bilinear form, with an associated radical, such that the quotients $D^{\boldsymbol{\la}}:=S^{\boldsymbol{\la}}/\text{rad}(S^{\boldsymbol{\la}})$ are either $0$ or irreducible. The non-zero $D^{\boldsymbol{\la}}$ then give a complete set of non-isomorphic simple $\mathcal{H}_{R,n}^\mathbf{s}$-modules. The set $\left\{\boldsymbol{\la}\in\Pi^\ell(n) \mid D^{{\boldsymbol{\lambda}}}\neq 0\right\}$ depends only on $e$ and $\mathbf{s}$ and is known as the set of Kleshchev $\ell$-partitions, denoted ${\operatorname{Kl}}_{e,\mathbf{s}} (n)$. It was originally defined using the notion of crystal (see \cite[Section 6.2.10]{GeckJacon2011}), but there is another independent description, see \cite{Jkle}.
\begin{remark} Let $e\geq 2, \mathbf{s}=(s_1,\ldots,s_\ell)$ and $\mathbf{t}=(t_1,\ldots,t_\ell)$ such that $t_i=s_i\mod e$ for all $i=1,\ldots, \ell$. Observe that for any $n\in\mathbb{Z}_{\geq0}, \mathcal{H}_{R,n}^\mathbf{s} = \mathcal{H}_{R,n}^{\bf t}$ and the definition of Kleshchev $\ell$-partitions gives that ${\operatorname{Kl}}_{e,\mathbf{s}}(n)={\operatorname{Kl}}_{e,\mathbf{t}}(n)$. Now if there exists $\sigma\in \mathfrak{S}_\ell$ such that $t_i=s_{\sigma(i)}\mod e$ for all $i=1,\ldots, \ell$ then we still have that for any $n\in\mathbb{Z}_{\geq0}, \mathcal{H}_{R,n}^\mathbf{s} = \mathcal{H}_{R,n}^{\bf t}$ but ${\operatorname{Kl}}_{e,\mathbf{s}}(n)$ is different from ${\operatorname{Kl}}_{e,\mathbf{t}}(n)$ in general. \end{remark} Set $\widetilde{\mathcal{H}}_{R,n}^\mathbf{s}:=\mathcal{H}_{R,n}(v^{-1};s_{\ell},\ldots,s_{1})$ and denote by $\widetilde{T }_{0}$,\ldots,$\widetilde{T}_{\ell-1}$ the associated standard generators. For each $\boldsymbol{\la} \in \Pi^\ell (n)$, denote by $\widetilde{S}^{{\boldsymbol{\lambda}}}$ the associated Specht module of $\widetilde{\mathcal{H}}_{R,n}^\mathbf{s}$. By \cite{fayers}, the simple modules of $\widetilde{\mathcal{H}}_{R,n}^\mathbf{s}$ are labeled by the set ${\operatorname{Kl}}_{e,-\mathbf{s}_{\textrm{rev} }} (n)$ where $-\mathbf{s}_{\textrm{rev}}=(-s_\ell,\ldots,-s_1)\in (\mathbb{Z}/e\mathbb{Z})^\ell$. Thus, for each $ \boldsymbol{\la}\in {\operatorname{Kl}}_{e,-\mathbf{s}_{\textrm{rev} }} (n)$, we have an associated simple $\widetilde{\mathcal{H}}_{R,n}^\mathbf{s}$-module $\widetilde{D}^{{\boldsymbol{\lambda}} }$. We have an involutive isomorphism $\theta : \mathcal{H}_{R,n}^\mathbf{s}\rightarrow \widetilde{\mathcal{H}}_{R,n}^\mathbf{s}$ given by \begin{equation*} T_{0}\mapsto \widetilde{T}_{0}\qquad T_{i}\mapsto -v\widetilde{T}_{i}\ (i=1,\ldots,n-1). 
\end{equation*} Then, $\theta $ induces a functor $F$ from the category of $\widetilde{ \mathcal{H}}_{R,n}^\mathbf{s}$-modules to the category of ${\mathcal{H}}_{R,n}^\mathbf{s}$-modules. As a consequence, we obtain a bijective map \begin{equation*} \mathfrak{m}_{e,{\mathbf{s}}}:{\operatorname{Kl}}_{e,\mathbf{s}} (n) \rightarrow {\operatorname{Kl}}_{e,-{\mathbf{s}_{\textrm{rev}}}} (n), \end{equation*} satisfying \begin{equation*} F(\widetilde{D}^{\mathfrak{m}_{e,{\mathbf{s}}} ({\boldsymbol{\lambda}})})\simeq {D}^{ \boldsymbol{\lambda}}, \end{equation*} for all $\lambda \in {\operatorname{Kl}} _{e,\mathbf{s}}$. \begin{remark}\label{rem_mulFayers} \begin{enumerate} \item By definition of $\theta$, we have $\mathfrak{m}_{e,-\mathbf{s}_\mathrm{rev}}\circ\mathfrak{m}_{e,\mathbf{s}}=\text{\rm Id}_{{\operatorname{Kl}}_{e,\mathbf{s}}}$ and $\mathfrak{m}_{e,\mathbf{s}}\circ\mathfrak{m}_{e,-\mathbf{s}_\mathrm{rev}}=\text{\rm Id}_{{\operatorname{Kl}}_{e,-\mathbf{s}_\mathrm{rev}}}$. \item Assume that $\ell=1$. Then the map $\mathfrak{m}_{e,\mathbf{s}}$ is an involution and it does not depend on the choice of $\mathbf{s}$. In fact, we have $\mathfrak{m}_{e,\mathbf{s}}=m_e$, where $m_e$ is the usual Mullineux involution defined in the introduction. \end{enumerate} \end{remark} \subsection{The quantum algebra $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$} We denote by $\Lambda_0, \ldots, \Lambda_{e-1}$ (where the subscripts are understood modulo $e$) the fundamental weights attached to the Kac-Moody algebra $\widehat{\mathfrak{sl}_e}$. The simple roots are denoted by $\alpha_0,\ldots,\alpha_{e-1}$ and $\delta:=\alpha_0+\ldots +\alpha_{e-1}$ is the null root. The fundamental weights and the simple roots are related by the following formula: $$\alpha_i=2\Lambda_i-\Lambda_{i-1}-\Lambda_{i+1}+\delta_{i,0}\delta \quad \text{ for all } 0\leq i\leq e-1$$ (where $\delta_{ij}$ denotes the Kronecker symbol). 
We denote by $\mathcal{P}=\bigoplus_{0\leq i\leq e-1} \mathbb{Z}\Lambda_i \oplus \mathbb{Z} \delta$ the weight lattice and by $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$ the quantum algebra associated to $\widehat{\mathfrak{sl}_e}$, where $t$ is an indeterminate. This is an algebra over $\mathbb{C} (t)$ with generators $e_i$, $f_i$, $t_i^{\pm 1}$ ($0\leq i\leq e-1$) and $\partial^{\pm1}$ subject to standard relations which we do not recall. We refer to \cite[Chapter 6]{GeckJacon2011} for details on this algebra and its representation theory. \subsection{The level $\ell$ Fock space} \label{fock} Let us fix some notation. Fix $e,\ell\geq 2$ and $s\in\mathbb{Z}$. For $\mathbb{K}=\mathbb{Z}$ or $\mathbb{Q}$, we denote $$\mathbb{K}^\ell(s)=\left\{ (s_1,\ldots,s_\ell)\in\mathbb{K}^\ell \, \left| \, \sum_{i=1}^\ell s_i=s\right. \right\}.$$ For $\mathbf{s}\in\mathbb{Z}^\ell(s)$, we denote by $\Pi^\ell_\mathbf{s}$ the set of all symbols of the form $|\boldsymbol{\la},\mathbf{s}\rangle$ with $\boldsymbol{\la} \in \Pi^\ell$. Further, denote by $\Pi^\ell_s$ the set of all elements in $\Pi^\ell_\mathbf{s}$ where $\mathbf{s}\in \mathbb{Z}^\ell(s)$. Let $\mathcal{F}_{e,\mathbf{s}}$ be the $\mathbb{C} (t)$-vector space with standard basis $\Pi^\ell_\mathbf{s}$, i.e. $\mathcal{F}_{e,\mathbf{s}}=\bigoplus_{\boldsymbol{\la}\in\Pi^\ell}\mathbb{C} (t) |\boldsymbol{\la},\mathbf{s}\rangle$, called the Fock space of level $\ell$ and rank $e$ (associated to the charge $\mathbf{s}$). This space can be endowed with a structure of an integrable $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$-module, see \cite[Section 6.2]{GeckJacon2011}. One can decompose this module as a direct sum of distinguished subspaces.
Indeed, if $w:=\sum_{0\leq i\leq e-1} a_i \Lambda_i+d\delta \in \mathcal{P}$, define: $$\mathcal{F}_{e,\mathbf{s}}[w]:= \{m\in \mathcal{F}_{e,\mathbf{s}}\ |\ \partial m=t^dm,\ t_i m=t^{a_i} m\ \forall i\in [0,e-1]\}.$$ If this space is nonzero, we say that $w$ is a weight for $\mathcal{F}_{e,\mathbf{s}}$ and $\mathcal{F}_{e,\mathbf{s}}[w]$ is called the $w$-weight space. The elements of $\mathcal{F}_{e,\mathbf{s}}[w]$ are called weight vectors. Importantly, each element of the standard basis $|\boldsymbol{\la},{\bf s} \rangle$ is a weight vector and the associated weight may be easily computed (see for example \cite[Corollary 2.5]{Yvonne2007}). In particular, one can always write it as follows: $$d \delta +\sum_{1\leq i\leq \ell} \Lambda_{s_i} -\sum_{0\leq i\leq e-1} m_i \alpha_i$$ and the number $\sum_{0\leq i\leq e-1} m_i$ corresponds to the size of $\boldsymbol{\la}$. For instance, the weight of $|\bemptyset,\mathbf{s}\rangle$ is $\sum_{1\leq i\leq \ell} \Lambda_{s_i}$. Thus $\mathcal{F}_{e,\mathbf{s}}$ is the direct sum of its weight spaces. \subsection{The $\widehat{\mathfrak{sl}_e}$-crystal of the Fock space} \label{fock_crys} As mentioned in the introduction, an important part of the representation theory of cyclotomic Hecke algebras is controlled by the theory of crystals for Fock spaces. The $\widehat{\mathfrak{sl}_e}$-crystal of the Fock space $\mathcal{F}_{e,\mathbf{s}}$ is a combinatorial construction arising from the action of $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$ on the Fock space (see the general definition in \cite{Kashiwara1991}, \cite{HongKang2002}).
Concretely, the $\widehat{\mathfrak{sl}_e}$-crystal is a graph with \begin{itemize} \item {vertices}: the elements of $\Pi^\ell_\mathbf{s}$, \item {arrows}: $|\boldsymbol{\la},\mathbf{s}\rangle \overset{i}{\rightarrow } |\boldsymbol{\mu},\mathbf{s}\rangle$ for $\boldsymbol{\la},\boldsymbol{\mu}\in\Pi^\ell$, $i\in\{0,\ldots,e-1\}$ if and only if $|\boldsymbol{\mu},\mathbf{s}\rangle = \widetilde{f}_i |\boldsymbol{\la},\mathbf{s}\rangle$, where $\widetilde{f}_i$ is the $i$-th lowering Kashiwara operator of $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$. \end{itemize} An explicit recursive formula for computing the $\widehat{\mathfrak{sl}_e}$-crystal is given in \cite{JMMO1991} in terms of adding good boxes, see also \cite{FLOTW1999}. It has infinitely many connected components, each of which is parametrized by its unique source vertex, called a highest weight vertex. We denote by $\operatorname{Ug}_{e,\mathbf{s}}$ the $\ell$-partitions appearing in the connected component parametrized by the highest weight vertex $\bemptyset=(\emptyset,\ldots,\emptyset)$, and call them the Uglov $\ell$-partitions. When $\ell=1$ this set is nothing but the set of $e$-regular partitions. The following is an easy consequence of the definition of Kleshchev and Uglov $\ell$-partitions (see \cite[Ex. 6.2.16]{GeckJacon2011}). \begin{proposition} \label{ariki} Fix $n\in\mathbb{Z}_{\geq 0}$. Let $\mathbf{s}=(s_1,\ldots,s_\ell)\in \mathbb{Z}^\ell$ and ${\bf t}=(t_1,\ldots,t_\ell)\in \mathbb{Z}^\ell$ be such that $s_i=t_i\mod e$ for all $i=1,\ldots,\ell$, and $t_i-t_{i-1}>n-1$ for all $i=2,\ldots,\ell$. Then $$\operatorname{Ug}_{e,\mathbf{t}}(n)={\operatorname{Kl}}_{e,\mathbf{t}}(n)={\operatorname{Kl}}_{e,\mathbf{s}}(n).$$ \end{proposition} In other words, Kleshchev $\ell$-partitions are a particular case of Uglov $\ell$-partitions, i.e. we can index irreducible modules of cyclotomic Hecke algebras by certain vertices of Fock space crystals. 
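As a quick sanity check on the $\ell=1$ statement, recall that the $e$-regular partitions admit the direct description ``no part repeated $e$ or more times'', which is easy to enumerate. The small Python sketch below (ours, not part of the paper) lists them for a given $n$ and $e$.

```python
def partitions(n, maxpart=None):
    """All partitions of n, parts in decreasing order."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        return [[]]
    out = []
    for p in range(min(n, maxpart), 0, -1):
        out += [[p] + rest for rest in partitions(n - p, p)]
    return out

def is_regular(la, e):
    """e-regular: each part occurs at most e - 1 times."""
    return all(la.count(p) < e for p in set(la))

# 2-regular partitions of 4 are exactly those with distinct parts
regular = [la for la in partitions(4) if is_regular(la, 2)]
```

By a classical theorem of Glaisher, the number of $e$-regular partitions of $n$ equals the number of partitions of $n$ with no part divisible by $e$, which gives an independent check on such enumerations.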
The following result is due to Fayers \cite[Section 2]{fayers} in the case of Kleshchev multipartitions (that is, under the condition of \Cref{ariki}) and to \cite[Section 4]{JaconLecouvey2008} in general. \begin{theorem}\label{FJLmul} Let $n\in\mathbb{Z}_{\geq0}, \mathbf{s}\in\mathbb{Z}^\ell(s)$ and $e\geq 2$. There exists a unique bijection $$\begin{array}{rccc} {\tt\Phi}_{e,\mathbf{s}} :& \operatorname{Ug}_{e,\mathbf{s}}(n) &\longrightarrow& \operatorname{Ug}_{e,-\mathbf{s}_\mathrm{rev}}(n)\\ & \boldsymbol{\la} & \longmapsto & {\tt\Phi}_{e,\mathbf{s}}(\boldsymbol{\la}) \end{array} $$ such that \begin{itemize} \item ${\tt\Phi}_{e,\mathbf{s}}(\bemptyset)=\bemptyset$, \item for all $0\leq i\leq e-1$, we have ${{\tt\Phi}_{e,\mathbf{s}} }\circ \widetilde{f}_i =\widetilde{f}_{-i} \circ {{\tt\Phi}_{e,\mathbf{s}} }$. \end{itemize} \end{theorem} This means that for all paths \begin{equation*} |\boldsymbol{\emptyset},\mathbf{s}\rangle\overset{i_{1}}{\rightarrow }\cdot \overset{i_{2}}{ \rightarrow }\cdot \overset{i_{3}}{\rightarrow }\cdots \overset{i_{n}}{ \rightarrow }|{\boldsymbol{\lambda}},\mathbf{s}\rangle \end{equation*} in the $\widehat{\mathfrak{sl}_e}$-crystal on the Fock space $\mathcal{F}_{e,\mathbf{s}}$, there exists a corresponding path \begin{equation*} |\boldsymbol{\emptyset},-\mathbf{s}_\mathrm{rev}\rangle\overset{-i_{1}}{\rightarrow }\cdot \overset{-i_{2}}{ \rightarrow }\cdot \overset{-i_{3}}{\rightarrow }\cdots \overset{-i_{n}}{ \rightarrow }|{\boldsymbol{\mu}},-\mathbf{s}_\mathrm{rev}\rangle \end{equation*} in the $\widehat{\mathfrak{sl}_e}$-crystal on the Fock space $\mathcal{F}_{e,-\mathbf{s}_{\textrm{rev}}}$ from the empty $\ell$-partition to an $\ell$-partition ${\boldsymbol{\mu}}\in \operatorname{Ug}_{e,-\bf s_{\textrm{rev}}}$. Then ${\tt\Phi}_{e,\mathbf{s}}(\boldsymbol{\la})=\boldsymbol{\mu}$. 
In \cite{JaconLecouvey2008}, it is explained how the map ${\tt\Phi}_{e,\mathbf{s}}$ can be explicitly computed without constructing the $\widehat{\mathfrak{sl}_e}$-crystal. \begin{example} Take $s=4$, $e=4$, $\ell=3$, $\mathbf{s}=(5,-1,0)$ (so that $-\mathbf{s}_\mathrm{rev}=(0,1,-5)$) and $\boldsymbol{\la}=(1,3.2,\emptyset)$\footnote{In the examples, we use the multiplicative notation for partitions and we drop the brackets around the components of a multipartition.}. One can write for instance $\boldsymbol{\la} = {\tilde{f}}_1{\tilde{f}}_1{\tilde{f}}_3{\tilde{f}}_0{\tilde{f}}_2{\tilde{f}}_3 \, \bemptyset$, so that $\boldsymbol{\la}\in\operatorname{Ug}_{e,\mathbf{s}}(6)$. Therefore, in the crystal of the Fock space $\mathcal{F}_{e,-\mathbf{s}_\mathrm{rev}}$, we get \begin{align*} {\tt\Phi}_{e,\mathbf{s}}(\boldsymbol{\la}) &= {\tilde{f}}_{-1}{\tilde{f}}_{-1}{\tilde{f}}_{-3}{\tilde{f}}_0{\tilde{f}}_{-2}{\tilde{f}}_{-3} \, \bemptyset \\ & ={\tilde{f}}_3{\tilde{f}}_3{\tilde{f}}_1{\tilde{f}}_0{\tilde{f}}_2{\tilde{f}}_1 \, \bemptyset \\ & = (2.1,3,\emptyset). \end{align*} \end{example} The following result by Fayers \cite{fayers} gives the desired crystal interpretation of the Mullineux involution for cyclotomic Hecke algebras. \begin{theorem}[Fayers] Fix $n\in\mathbb{Z}_{\geq 0}, \mathbf{s}\in\mathbb{Z}^\ell$ and $e\geq 2$. For all $\boldsymbol{\la}\in{\operatorname{Kl}}_{e,\mathbf{s}}(n)$, we have \begin{equation*} \mathfrak{m}_{e,{\mathbf{s}}} ({\boldsymbol{\lambda}})={\tt\Phi}_{e,\mathbf{s}} (\boldsymbol{\la}). \end{equation*} \end{theorem} To summarize, starting with the usual Mullineux involution $m_e$ for the symmetric group, we obtain: \begin{itemize} \item a generalization of $m_e$: the involution $\mathfrak{m}_{e,{\mathbf{s}}}$ on the set of Kleshchev $\ell$-partitions which label the irreducible representations of cyclotomic Hecke algebras. If $\ell=1$, we have $\mathfrak{m}_{e,{\mathbf{s}}}=m_e$.
\item a generalization of $\mathfrak{m}_{e,{\mathbf{s}}}$: the involution ${\tt\Phi}_{e,\mathbf{s}}$ on the set of Uglov $\ell$-partitions. If $\mathbf{s}$ is such that $s_i-s_{i-1}>n-1$ for all $i=2,\ldots,\ell$, we have ${\tt\Phi}_{e,\mathbf{s}}=\mathfrak{m}_{e,{\mathbf{s}}}$. \end{itemize} \section{The generalized Mullineux involution}\label{section_genmul} Recall from Section \ref{fock} that we have fixed $e,\ell\geq 2$ and $s\in\mathbb{Z}$. Let us denote $\mathcal{F}_{e,s}=\bigoplus_{\mathbf{s}\in\mathbb{Z}^\ell(s)}\mathcal{F}_{e,\mathbf{s}}$ and $\mathcal{F}_e= \bigoplus_{s\in\mathbb{Z}}\mathcal{F}_{e,s}$. \subsection{Triple crystal structure} By Section \ref{fock}, the space $\mathcal{F}_{e,s}$ has a structure of integrable $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$-module of level $\ell$. This space can also be endowed with a structure of $\mathcal{U}_{-1/t} (\widehat{\mathfrak{sl}_\ell})$-module of level $e$. Denote by $\dot{\Lambda}_0$, \ldots, $\dot{\Lambda}_{\ell-1}$ the fundamental weights attached to the Kac-Moody algebra $\widehat{\mathfrak{sl}_\ell}$. The simple roots are denoted by $\dot{\alpha}_0, \ldots, \dot{\alpha}_{\ell-1}$ and $\dot{\delta}$ is the null root. We denote by $\dot{\mathcal{P}}=\bigoplus_{0\leq i\leq \ell-1} \mathbb{Z}\dot{\Lambda}_i \oplus \mathbb{Z} \dot{\delta}$ the corresponding weight lattice. Following \cite{Gerber2016}, there is a \textit{level-rank duality} between $\ell$-partitions and $e$-partitions. This is a map $$ \begin{array}{rrcl} \mathsf{k}_{s}^{\ell,e}: & \Pi^\ell_s & \longrightarrow & \Pi^e_{-s} \\ \end{array} $$ inducing a linear map between the Fock spaces $\mathcal{F}_{e,s}\longrightarrow\mathcal{F}_{\ell,-s}$. To avoid cumbersome notation, write $\mathsf{k}$ for $\mathsf{k}_{s}^{\ell,e}$ and $\dot{\mathsf{k}}$ for $\mathsf{k}_{-s}^{e,\ell}$.
From \cite[Formula (3.8)]{Gerber2016}, it is straightforward that $\dot{\mathsf{k}}\circ\mathsf{k}=\text{\rm Id}_{\Pi_s^\ell}$ and $\mathsf{k}\circ \dot{\mathsf{k}}=\text{\rm Id}_{\Pi_{-s}^e}$. We can extend $\mathsf{k}$ linearly to $\mathcal{F}_{e,s}$, which endows it with the structure of a $\mathcal{U}_{-1/t} (\widehat{\mathfrak{sl}_\ell})$-module, by considering the natural action on $\mathcal{F}_{\ell,-s}$ and composing with $\dot{\mathsf{k}}$. This yields an $\widehat{\mathfrak{sl}_\ell}$-crystal structure on $\mathcal{F}_{e,s}$. More precisely, if we denote by $\tilde{\dot{f}}_j$, $j=0,\ldots,\ell-1$ the lowering $\widehat{\mathfrak{sl}_\ell}$-crystal operators, the action of $\tilde{\dot{f}}_j$ on an $\ell$-partition is defined by $\dot{\mathsf{k}}\circ\tilde{\dot{f}}_{j}\circ\mathsf{k}$, as indicated by the following diagram: \begin{equation}\label{lr_diag} \begin{tikzcd} \Pi^\ell_s \arrow[dashrightarrow]{d}{} \arrow{r}{\mathsf{k}} & \Pi^e_{-s} \arrow{d}{\tilde{\dot{f}}_{j}} \\ \Pi^\ell_s & \Pi^e_{-s} \arrow{l}{\dot{\mathsf{k}}} \end{tikzcd} \end{equation} \begin{remark}\label{Ugyv} \begin{enumerate} \item As explained in \cite[Section 7.1]{Gerber2016}, the map $\mathsf{k}$ is, up to a twist by conjugation, categorified by \textit{Koszul duality} between the corresponding Cherednik categories $\mathcal{O}$. This justifies the notation. \item The level-rank duality $\mathsf{k}$ used in our paper is not the same as the ones used in the papers of Uglov and Yvonne. However, our map can be recovered from theirs by composing with the map $|\boldsymbol{\la},\mathbf{s}\rangle \mapsto |\boldsymbol{\la}^\mathrm{tr}_{\mathrm{rev}},-\mathbf{s}_{\mathrm{rev}}\rangle$, where for $\boldsymbol{\la}:=(\lambda^{(1)},\ldots,\lambda^{(\ell)})$ we have $\boldsymbol{\la}^\mathrm{tr}_{\mathrm{rev}}=((\lambda^{(\ell)})^\mathrm{tr},\ldots,(\lambda^{(1)})^\mathrm{tr})$ and $\lambda^\mathrm{tr}$ is the transpose of $\lambda$.
\end{enumerate} \end{remark} For $s\in \mathbb{Z}$, we denote $$A (s)=\left\{ (s_1,\ldots,s_\ell)\in \mathbb{Z}^\ell (s)\ |\ s_1\leq \ldots \leq s_\ell \leq s_1+e \right\}$$ and, in a dual fashion, $$\dot{A} (s)=\left\{ (t_1,\ldots,t_e)\in \mathbb{Z}^e (s)\ |\ t_1\leq \ldots \leq t_e \leq t_1+\ell \right\}.$$ Write $\dot{\bemptyset}=(\emptyset,\ldots,\emptyset)\in\Pi^e$. Note that for $\mathbf{s}\in A(s)$, the set $\operatorname{Ug}_{e,\mathbf{s}}$ has a convenient non-recursive definition, see \cite[Theorem 2.10]{FLOTW1999}. By \cite[Formula (3.8)]{Gerber2016}, if $\mathbf{s}\in A(s)$, then $\mathsf{k} |\bemptyset,\mathbf{s}\rangle = |\dot{\bemptyset},\dot{\bs}\rangle$ for some $\dot{\bs}\in\dot{A}(-s)$. Finally, there is an $\mathfrak{sl}_\infty$-crystal structure on $\Pi_s^\ell$ arising from the action of a Heisenberg algebra \cite{ShanVasserot}, \cite{Losev2015}, \cite{Gerber2016a}. Its connected components are all isomorphic to the branching graph of the symmetric group in characteristic $0$ and thus have vertices in bijection with $\Pi$. If $\boldsymbol{\la}_0$ is a highest weight vertex for the $\mathfrak{sl}_\infty$-crystal, then any $\ell$-partition in the same crystal component as $\boldsymbol{\la}_0$ is obtained as $\tilde{a}_\sigma(\boldsymbol{\la}_0)$ for a unique $\sigma\in\Pi$, where ${\tilde{a}}_\sigma$ denotes the Heisenberg crystal operator associated to $\sigma$, see \cite{Losev2015}, \cite{Gerber2016a}. We will make repeated use of the following important theorem, proved in \cite[Theorems 6.17 and 6.19]{Gerber2016}, and its corollary. \begin{theorem}\label{3crystals} \ \begin{enumerate} \item The three crystals pairwise commute.
\item Every $|\boldsymbol{\la},\mathbf{s}\rangle\in\Pi^\ell_\mathbf{s}$ decomposes as $$|\boldsymbol{\la},\mathbf{s}\rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle$$ for some $\mathbf{r} \in A(s)$, $\sigma\in \Pi$, $p,r\in \mathbb{Z}_{\geq 0}$ and for some $i_p,\ldots,i_1\in \{0,1,\ldots,e-1\}$ and $j_r,\ldots,j_1\in \{0,1,\ldots,\ell-1\}$. \end{enumerate} \end{theorem} \begin{corollary}\label{bijtriple} The elements $\mathbf{r},\sigma,p$ and $r$ of Theorem \ref{3crystals} are uniquely determined by $|\boldsymbol{\la},\mathbf{s}\rangle$. This yields a bijection $$ \begin{array}{rrcl} \beta : & \Pi^\ell_s & \longrightarrow &\displaystyle \bigsqcup_{\mathbf{r} \in A (s)} \operatorname{Ug}_{e,\mathbf{r}} \times \Pi \times \operatorname{Ug}_{\ell,\dot{\br}} \\ & |\boldsymbol{\la},\mathbf{s}\rangle &\longmapsto & ( {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle, \sigma, \tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1}|\dot{\bemptyset},\dot{\mathbf{r}}\rangle). \end{array} $$ \end{corollary} \begin{proof} Let $\boldsymbol{\la}\in\Pi^\ell$. 
By Theorem \ref{3crystals} (2), there exist $\mathbf{r} \in A(s)$, $\sigma\in \Pi$, $p,r\in \mathbb{Z}_{\geq 0}$ and elements $i_p,\ldots,i_1\in \{0,1,\ldots,e-1\}$ and $j_r,\ldots,j_1\in \{0,1,\ldots,\ell-1\}$ such that $$|\boldsymbol{\la},\mathbf{s}\rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle.$$ Assume that we have $$\tilde{\dot{f}}_{j'_{r'}}\ldots \tilde{\dot{f}}_{j'_1} {\tilde{a}}_{\sigma'} {\tilde{f}}_{i'_{p'}}\ldots {\tilde{f}}_{i'_1}|\bemptyset,\mathbf{r}'\rangle =\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle$$ for $\mathbf{r}' \in A(s)$, $\sigma'\in \Pi$, $p',r'\in \mathbb{Z}_{\geq 0}$ and indices $i'_{p'},\ldots,i'_1\in \{0,1,\ldots,e-1\}$ and $j'_{r'},\ldots,j'_1\in \{0,1,\ldots,\ell-1\}$. Then the elements $|\boldsymbol{\mu}',{\bf t} '\rangle:= \tilde{\dot{f}}_{j'_{r'}}\ldots \tilde{\dot{f}}_{j'_1} {\tilde{a}}_{\sigma'} |\bemptyset,\mathbf{r}' \rangle$ and $|\boldsymbol{\mu},{\bf t} \rangle:=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} |\bemptyset,\mathbf{r}\rangle$ are both highest weight vertices in the $\widehat{\mathfrak{sl}_e}$-crystal. As we have ${\tilde{f}}_{i'_{p'}}\ldots {\tilde{f}}_{i'_1} |\boldsymbol{\mu}',{\bf t} '\rangle ={\tilde{f}}_{i_{p}}\ldots {\tilde{f}}_{i_1} |\boldsymbol{\mu},{\bf t} \rangle$, these two elements are in the same connected component of the $\widehat{\mathfrak{sl}_e}$-crystal, so they must be equal. From this equality, we deduce in the same way that the two $\widehat{\mathfrak{sl}_e}$-highest weight vertices must be equal: ${\tilde{a}}_{\sigma'} |\bemptyset,\mathbf{r}' \rangle= {\tilde{a}}_{\sigma} |\bemptyset,\mathbf{r} \rangle$.
By the description of the $\mathfrak{sl}_\infty$-crystal operators \cite{Losev2015}, \cite{Gerber2016}, we obtain $\sigma=\sigma'$ and $\mathbf{r}'=\mathbf{r}$. We deduce that $ \tilde{\dot{f}}_{j'_{r'}}\ldots \tilde{\dot{f}}_{j'_1} |\dot{\bemptyset},\dot{\br} \rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} |\dot{\bemptyset},\dot{\br}\rangle$, where $|\dot{\bemptyset},\dot{\br}\rangle=\mathsf{k}|\bemptyset,\mathbf{r}\rangle$. In particular, we have $r'=r$. Using the same argument but exchanging the roles of $e$ and $\ell$, we also get ${\tilde{f}}_{i'_{p'}}\ldots {\tilde{f}}_{i'_1} |\bemptyset,\mathbf{r} \rangle ={\tilde{f}}_{i_{p}}\ldots {\tilde{f}}_{i_1} |\bemptyset,\mathbf{r} \rangle$ and $p'=p$. This proves uniqueness, and therefore $\beta$ is well-defined. For $\mathbf{r} \in A (s)$ and $(\boldsymbol{\nu}, \boldsymbol{\pi})\in \operatorname{Ug}_{e,\mathbf{r}} \times \operatorname{Ug}_{\ell,\dot{\br}} $, by \cite[Theorem 2.10]{FLOTW1999}, there exist indices $i_1,\ldots,i_p\in \{0,1,\ldots,e-1\}$ and $j_1,\ldots,j_r\in \{0,1,\ldots,\ell-1\}$ such that $${\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle=|\boldsymbol{\nu},\mathbf{r}\rangle \text{ and }\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1}|\dot{\bemptyset},\dot{\mathbf{r}}\rangle =|\boldsymbol{\pi},\dot{\br}\rangle.$$ Now, the map $\delta : \bigsqcup_{\mathbf{r} \in A (s)} \operatorname{Ug}_{e,\mathbf{r}} \times \Pi \times \operatorname{Ug}_{\ell,\dot{\br}} \to \Pi^\ell_s, (\boldsymbol{\nu},\sigma,\boldsymbol{\pi})\mapsto \tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_\sigma{\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle$ is well-defined, since it does not depend on the choice of the indices and the three crystals commute. It is straightforward to check that $\beta$ and $\delta$ are inverse to each other, which concludes the proof.
\end{proof} \begin{example}\label{example_triple} Take $s=1$, $e=3$, $\ell=4$, $\mathbf{s}=(-3,2,1,1)$ and $\boldsymbol{\la}=(\emptyset, 3.2^2,\emptyset,3)$. Then \begin{align*} \beta ( |\boldsymbol{\la},\mathbf{s}\rangle) & = ( \,\, |(\emptyset,\emptyset,\emptyset,3),(-1,0,0,2)\rangle \,\, , \,\, (2) \,\, , \,\, |(2^2,2.1,\emptyset),(-1,-1,1)\rangle \,\,) \\ & =( \,\, {\tilde{f}}_1{\tilde{f}}_0{\tilde{f}}_2 \, |(\emptyset,\emptyset,\emptyset,\emptyset),(-1,0,0,2)\rangle \,\, , \,\, (2) \,\, , \,\, \tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_3\tilde{\dot{f}}_3\tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_3\, |(\emptyset,\emptyset,\emptyset),(-1,-1,1)\rangle \,\, ). \end{align*} \end{example} \begin{remark}\label{rem_bij} Note that for $\ell=1$, Corollary \ref{bijtriple} reduces to a very simple bijection. Indeed, there is no $\widehat{\mathfrak{sl}_\ell}$-crystal (and no level-rank duality) in this case, and the bijection associates to any partition $\lambda$ a pair of partitions $(\rho, \sigma)$ determined by the ``euclidean division'' of $\lambda$ by $e$, as follows. Given two partitions $\mu$ and $\mu'$, let $\mu \sqcup \mu'$ be the partition obtained by concatenating the two partitions and then reordering the parts to obtain a partition (see for instance \cite[Section 3.1]{DudasJacon2018}). Then we can uniquely write $$\lambda =(\sigma)^e \sqcup \rho$$ where $\rho$ is an $e$-regular partition and $\sigma\in\Pi$. \end{remark} \begin{example} Choose $e=3$ and $\lambda=(4^4.3^2.2.1^8)$. Then $\lambda = (4.1^2)^3 \sqcup (4.3^2.2.1^2)$. \end{example} \subsection{The generalized Mullineux map} \label{subsec_genmul} In the following, we will need to go from one indexation by $\ell$-partitions to the other by $e$-partitions using the map $\mathsf{k}$. We will use the relationship between the weight spaces of $\mathcal{F}_{e,\mathbf{s}}$ for the action of $\widehat{\mathfrak{sl}_e}$ and for the action of $\widehat{\mathfrak{sl}_\ell}$.
We start by defining a map $\theta_{\ell,e,s}$ by setting $$\begin{array}{lccc} \theta_{\ell,e,s} :& \mathbb{Q}^{\ell} (s) &\longrightarrow &\mathbb{Q}^{\ell} (e)\\ & (s_1,\ldots, s_{\ell}) & \longmapsto & (e-s_1+s_{\ell},s_1-s_2,\ldots,s_{{\ell}-1}-s_{\ell}). \end{array} $$ This is a bijection with inverse map $$\begin{array}{lccc} \theta_{\ell,e,s}^{-1} :& \mathbb{Q}^{\ell} (e) &\longrightarrow &\mathbb{Q}^{\ell} (s)\\ & (a_1,\ldots, a_{\ell}) & \longmapsto & (s_1,\ldots,s_{\ell}), \end{array} $$ where we have for all $1\leq i\leq \ell$: $$s_i=\frac{1}{\ell} (s-\sum_{1\leq j\leq \ell-1} j a_{j+1} )+\sum_{i+1\leq j\leq \ell} a_j.$$ \begin{lemma}\label{combij} Keeping the above notation, assume that $\mathbf{s}=\theta_{\ell,e,s}^{-1} (a_1,\ldots,a_{\ell})$. Then we have $-\mathbf{s}_\mathrm{rev}=\theta_{\ell,e,-s}^{-1} (a_1,a_\ell,\ldots,a_{2})$. \end{lemma} \begin{proof} Write $\mathbf{s}=(s_1,\ldots,s_\ell)$ and ${\bf v}=\theta_{\ell,e,-s}^{-1} (a_1,a_\ell,\ldots,a_{2})$. On the one hand, we have for all $i=1,\ldots,\ell$: $$s_i=\frac{1}{\ell} (s-\sum_{1\leq j\leq \ell-1} j a_{j+1} )+\sum_{i+1\leq j\leq \ell} a_j,$$ and on the other hand: $$\begin{array}{rcl} v_{\ell-i+1}&=&\displaystyle{\frac{1}{\ell} (-s-\sum_{1\leq j\leq \ell-1} j a_{\ell-j+1} )+\sum_{\ell-i+2\leq j\leq \ell } a_{\ell-j+2}}\\ &=& \displaystyle{\frac{1}{\ell} (-s-\sum_{1\leq k\leq \ell-1} (\ell-k) a_{k+1} )+\sum_{2\leq k\leq i } a_{k}}, \end{array} $$ substituting $k=\ell-j$ in the first sum and $k=\ell-j+2$ in the second. We obtain $$s_i+v_{\ell-i+1}=0$$ for all $i$, and the result follows. \end{proof} We have the following result whose proof can be found in \cite[Proposition 2.12]{Yvonne2007}, taking into account Remark \ref{Ugyv} (2). \begin{proposition}\label{yv} Let $\dot{\bs}=(\dot{s}_1,\ldots,\dot{s}_e)\in \mathbb{Z}^e (s)$ and let $\dot{w}\in\dot{\mathcal{P}}$ be a weight for $\mathcal{F}_{\ell,\dot{\bs}}$.
Then there exists a unique $\mathbf{s}\in \mathbb{Z}^\ell (s)$ and a unique $w\in\mathcal{P}$ such that $\mathsf{k}\left(\mathcal{F}_{e,\mathbf{s}}[w]\right)=\mathcal{F}_{\ell,\dot{\bs}}[\dot{w}]$. If we write $\dot{w}=d\dot{\delta} +\sum_{1\leq i\leq \ell} a_i \dot{\Lambda}_{i-1}$ with $(a_1,\ldots, a_{\ell})\in \mathbb{Z}^\ell$, we have $$\mathbf{s}=\theta_{\ell,e,s}^{-1} (a_1,a_{\ell},\ldots,a_2).$$ Moreover, the associated weight is $$w=d\delta+\sum_{0\leq i\leq e-1} (\dot{s}_i-\dot{s}_{i+1}) \Lambda_i$$ where $\dot{s}_0=\ell+\dot{s}_e$. \end{proposition} We are now ready to prove the first main result of this paper. Recall the generalized Mullineux map ${\tt\Phi}_{e,\mathbf{s}}$ on Uglov $\ell$-partitions of \Cref{FJLmul}. By level-rank duality, we have a dual Mullineux map ${\tt\Phi}_{\ell,\mathbf{t}}$ for all $\mathbf{t}\in\mathbb{Z}^e(-s)$ which acts on $e$-partitions. \begin{theorem}\label{genmul} \ \begin{enumerate} \item There exists a unique bijection $$\begin{array}{rccc} \Phi:& \Pi^\ell_s &\longrightarrow& \Pi^\ell_{-s} \\ & |\boldsymbol{\la},\mathbf{s} \rangle & \longmapsto & |\boldsymbol{\mu},-{\mathbf{s}}_\mathrm{rev} \rangle \end{array} $$ such that for all $0\leq i\leq e-1$, $\sigma \in \Pi$ and $0\leq j\leq \ell-1$: \begin{enumerate} \item $\Phi \circ {\tilde{f}}_i = {\tilde{f}}_{-i} \circ \Phi$, \item $\Phi \circ {\tilde{a}}_{\sigma} = {\tilde{a}}_{\sigma^\mathrm{tr}} \circ \Phi $, \item $\Phi \circ \tilde{\dot{f}}_j = \tilde{\dot{f}}_{-j} \circ \Phi $, \item $\Phi (|\boldsymbol{\emptyset},\mathbf{s} \rangle) = |\boldsymbol{\emptyset},-{\mathbf{s}}_\mathrm{rev} \rangle$.
\end{enumerate} \item Using the notation of Corollary \ref{bijtriple}, we have $$\Phi = \beta^{-1} \circ ({\tt\Phi}_{e,\mathbf{r}}, (.)^\mathrm{tr}, {\tt\Phi}_{\ell,\dot{\br}}) \circ \beta.$$ In other words, writing $|\boldsymbol{\la},\mathbf{s}\rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset,\mathbf{r}\rangle$ with $\mathbf{r}\in A(s)$, we have $\Phi|\boldsymbol{\la},\mathbf{s}\rangle=\tilde{\dot{f}}_{-j_r}\ldots \tilde{\dot{f}}_{-j_1} {\tilde{a}}_{\sigma^\mathrm{tr}} {\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1}|\bemptyset,-\mathbf{r}_\mathrm{rev}\rangle$. \item We have $|\boldsymbol{\la}|=|\boldsymbol{\mu}|$ if $|\boldsymbol{\mu},-\mathbf{s}_\mathrm{rev}\rangle=\Phi(|\boldsymbol{\la},\mathbf{s}\rangle)$. \end{enumerate} \end{theorem} \begin{proof} Let $\boldsymbol{\la}\in\Pi^\ell$, $\mathbf{s}\in\mathbb{Z}^\ell(s)$ and write $|\boldsymbol{\la},\mathbf{s}\rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} {\tilde{a}}_{\sigma} {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1} |\bemptyset, {\mathbf{r}} \rangle$ with $\mathbf{r} \in A(s)$ as in Theorem \ref{3crystals}. If $\Phi$ satisfies the four assumptions of the theorem, we have that $$\Phi(|\boldsymbol{\la},\mathbf{s}\rangle)={\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1} {\tilde{a}}_{\sigma^\mathrm{tr}} \tilde{\dot{f}}_{-j_r}\ldots \tilde{\dot{f}}_{-j_1} |{\bemptyset},-{\mathbf{r}}_\mathrm{rev}\rangle,$$ and this shows uniqueness. Now, to prove $(1)$ and $(2)$, we need to show that there exists an $\ell$-partition $\boldsymbol{\mu}$ such that $$|\boldsymbol{\mu},-\mathbf{s}_\mathrm{rev}\rangle={\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1} {\tilde{a}}_{\sigma^\mathrm{tr}} \tilde{\dot{f}}_{-j_r}\ldots \tilde{\dot{f}}_{-j_1} |{\bemptyset},-{\mathbf{r}}_\mathrm{rev}\rangle.$$ First, note that $\mathsf{k}|\bemptyset,-\mathbf{r}_\mathrm{rev}\rangle= |\dot{\bemptyset},-\dot{\br}_\mathrm{rev}\rangle$ by \cite[Formula (3.8)]{Gerber2016}.
Consider the $e$-partition $\boldsymbol{\la}_2$ such that $$|\boldsymbol{\la}_2,\dot{\mathbf{r}}\rangle=\tilde{\dot{f}}_{j_r}\ldots \tilde{\dot{f}}_{j_1} |\dot{\bemptyset},\dot{\mathbf{r}}\rangle$$ and the $e$-partition ${\boldsymbol{\mu}_2}$ such that $$|{\boldsymbol{\mu}_2},-\dot{\mathbf{r}}_\mathrm{rev} \rangle=\tilde{\dot{f}}_{-j_r}\ldots \tilde{\dot{f}}_{-j_1} |\dot{\bemptyset},-\dot{\mathbf{r}}_\mathrm{rev}\rangle$$ defined thanks to \Cref{FJLmul}, i.e. $$|\boldsymbol{\mu}_2,-\dot{\mathbf{r}}_\mathrm{rev} \rangle={\tt\Phi}_{\ell,\dot{\br}}(|\boldsymbol{\la}_2,\dot{\mathbf{r}}\rangle).$$ Let $|\boldsymbol{\la}_1,\mathbf{s} \rangle=\dot{\mathsf{k}} (|\boldsymbol{\la}_2,\dot{\mathbf{r}}\rangle)$, so that $|\boldsymbol{\la},\mathbf{s}\rangle= {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1} {\tilde{a}}_\sigma| \boldsymbol{\la}_1, \mathbf{s} \rangle$. Let $\mathbf{v}\in\mathbb{Z}^\ell(-s)$ and $\boldsymbol{\mu}_1$ be the $\ell$-partition such that $|\boldsymbol{\mu}_1,\mathbf{v}\rangle=\dot{\mathsf{k}} (|\boldsymbol{\mu}_2,-\dot{\mathbf{r}}_\mathrm{rev}\rangle)$. Let us show that $\mathbf{v}=-\mathbf{s}_\mathrm{rev}$.
By hypothesis, the $\widehat{\mathfrak{sl}_\ell}$-weight $\dot{w}$ of $|\boldsymbol{\la}_2,\dot{\mathbf{r}}\rangle$ can be written $$\dot{w}=d\dot{\delta}+\sum_{1\leq i\leq \ell} \dot{\Lambda}_{r_i } -\sum_{1 \leq j\leq t} \dot{\alpha}_{i_j} $$ and there exists a sequence of non-negative integers $(a_1,\ldots,a_\ell)$ such that $$\dot{w}=d\dot{\delta}+\sum_{1\leq i\leq \ell} a_i \dot{\Lambda}_{i-1}.$$ By Proposition \ref{yv}, with these notations, we have $\mathbf{s}=\theta_{\ell,e,s}^{-1} (a_1,a_\ell,\ldots,a_2)$. By hypothesis, the $\widehat{\mathfrak{sl}_\ell}$-weight $\dot{w}'$ of $|\boldsymbol{\mu}_2,-\dot{\mathbf{r}}_\mathrm{rev}\rangle$ can be written $$\dot{w}'=d'\dot{\delta}+\sum_{1\leq i\leq \ell} \dot{\Lambda}_{-r_i } -\sum_{1 \leq j\leq t} \dot{\alpha}_{-i_j}$$ and thus we have $$\dot{w}'=d'\dot{\delta}+\sum_{1\leq i\leq \ell} a_i \dot{\Lambda}_{1-i}. $$ We thus have $\mathbf{v}=\theta_{\ell,e,-s}^{-1} (a_1,a_2,\ldots,a_\ell)$. We conclude that $\mathbf{v}=-\mathbf{s}_\mathrm{rev}$ using Lemma \ref{combij}. Therefore, we can set $\boldsymbol{\mu}$ to be the $\ell$-partition such that $|\boldsymbol{\mu},-\mathbf{s}_\mathrm{rev}\rangle = \beta^{-1} (\boldsymbol{\mu}_1,\sigma,\boldsymbol{\mu}_2)$, see \Cref{bijtriple}, and this concludes the proof of $(1)$ and $(2)$. It remains to prove $(3)$. To do this, let us study the $\widehat{\mathfrak{sl}_e}$-weights of $|\boldsymbol{\la}_1,\mathbf{s}\rangle$ and $|\boldsymbol{\mu}_1,-\mathbf{s}_\mathrm{rev} \rangle$. Again by Proposition \ref{yv}, the weight $w$ of $|\boldsymbol{\la}_1,\mathbf{s} \rangle$ is $$w=d\delta + (e-r_1+r_e) \Lambda_0+( r_1-r_2) \Lambda_1+\ldots + ( r_{e-1}-r_e) \Lambda_{e-1}, $$ which can be written as $$w=d\delta +\sum_{1\leq i\leq \ell} \Lambda_{t_i}-\sum_{0\leq i\leq e-1} m_i \alpha_i $$ for a sequence of non-negative integers $(m_i)_{i=0,\ldots,e-1}$. Note that, with this notation, the number $N:=\sum_{0\leq i\leq e-1} m_i$ corresponds to the size of $\boldsymbol{\la}_1$ (see Section \ref{fock}).
Now again, the weight $w'$ of $|\boldsymbol{\mu}_1,-\mathbf{s}_\mathrm{rev} \rangle$ is $$w'=d\delta + (e-r_1+r_e) \Lambda_0+( r_{e-1}-r_e) \Lambda_1+\ldots + ( r_{1}-r_2) \Lambda_{e-1}, $$ which can thus be written as $$w'=d\delta +\sum_{1\leq i\leq \ell} \Lambda_{-t_i}-\sum_{0\leq i\leq e-1} m_{-i} \alpha_i, $$ the indices of the $m_i$ being taken modulo $e$. We conclude that $|\boldsymbol{\mu}_1|=\sum_{0\leq i\leq e-1} m_{-i}=N=|\boldsymbol{\la}_1|$. It follows that $|\boldsymbol{\mu}|=|\boldsymbol{\la}|=N+|\sigma| e+r$. \end{proof} We may write $\Phi(\boldsymbol{\la})$ instead of $\Phi(|\boldsymbol{\la},\mathbf{s} \rangle)$ when the charge $\mathbf{s}$ is understood. \begin{example} Take the same values as in Example \ref{example_triple}. Denote $\mathbf{r}=(-1,0,0,2)$, so that $\dot{\br}=(-1,-1,1)$. Then we have \begin{align*} \Phi ( |\boldsymbol{\la},\mathbf{s}\rangle) & = \beta^{-1} \, ( \,\, {\tt\Phi}_{e,\mathbf{r}} ( {\tilde{f}}_1{\tilde{f}}_0{\tilde{f}}_2 \, |\bemptyset,\mathbf{r}\rangle) \,\, , \,\, (2)^\mathrm{tr} \,\, , \,\, {\tt\Phi}_{\ell,\dot{\br}} ( \tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_3\tilde{\dot{f}}_3\tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_3\, |\dot{\bemptyset},\dot{\br}\rangle ) \,\, ) \\ & = \beta^{-1} \, ( \,\, {\tilde{f}}_2{\tilde{f}}_0{\tilde{f}}_1 \, |(\emptyset,\emptyset,\emptyset,\emptyset),(-2,0,0,1)\rangle \,\, , \,\, (1^2) \,\, , \,\, \tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_1\tilde{\dot{f}}_1\tilde{\dot{f}}_0\tilde{\dot{f}}_2\tilde{\dot{f}}_1\, |(\emptyset,\emptyset,\emptyset),(-1,1,1)\rangle \,\, ) \\ & = \beta^{-1} \, ( \,\, |(\emptyset,1,\emptyset,2),(-2,0,0,1)\rangle \,\, , \,\, (1^2) \,\, , \,\, |(\emptyset,2^2,2.1),(-1,1,1)\rangle \,\, ) \\ &= |(1,2.1,\emptyset,2^3),(-1,-1,-2,3)\rangle. \end{align*} \end{example} The following corollary shows that $\Phi$ generalizes the map ${\tt\Phi}_{e,\mathbf{s}}$ of Section \ref{mulAK}. \begin{corollary}\label{mul_coincide_on_uglovs} Let $e\geq 2$, $\mathbf{s}\in\mathbb{Z}^\ell(s)$, and $n\geq0$.
For all $\boldsymbol{\la}\in\operatorname{Ug}_{e,\mathbf{s}}(n)$, we have $$\Phi(\boldsymbol{\la}) = {\tt\Phi}_{e,\mathbf{s}}(\boldsymbol{\la}).$$ \end{corollary} \begin{proof} Let $\mathbf{s}\in\mathbb{Z}^\ell(s)$. By Property $(1)$(d) of Theorem \ref{genmul}, we have $$\Phi(|\bemptyset, \mathbf{s}\rangle )= |\bemptyset, -{\mathbf{s}}_\mathrm{rev} \rangle.$$ Now if $\boldsymbol{\la}\in\operatorname{Ug}_{e,\mathbf{s}}(n)$, there exists a sequence of Kashiwara operators such that $${\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}|\bemptyset, \mathbf{s}\rangle=|\boldsymbol{\la}, \mathbf{s}\rangle.$$ So we can use Property $(1)$(a) of Theorem \ref{genmul} to see that $$\Phi({\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1} |\bemptyset, \mathbf{s}\rangle )={\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1} |\bemptyset, -{\mathbf{s}}_\mathrm{rev} \rangle.$$ By definition of ${\tt\Phi}_{e,\mathbf{s}}$ in \Cref{FJLmul}, we get the result. \end{proof} \subsection{More on crystal isomorphisms}\label{ciso} Let $P_\ell:=\mathbb{Z}^\ell$ be the $\mathbb{Z}$-module with standard basis $\{z_i\ |\ i=1,\ldots, \ell\}$. For $k=1,\ldots,\ell-1$, we denote by $\sigma_k$ the transposition $(k,k+1)$ of $\mathfrak{S}_\ell$. The extended affine symmetric group $\widehat{\mathfrak{S}}_\ell$ is the semidirect product $P_\ell \rtimes \mathfrak{S}_\ell$ with the relations given by $\sigma_i z_j=z_j \sigma_i$ for $j\neq i,i+1$ and $\sigma_i z_i \sigma_i=z_{i+1}$ for $i=1,\ldots,\ell-1$ and $j=1,\ldots,\ell$. It acts faithfully on $\mathbb{Z}^\ell$ as follows: for any ${{\mathbf{s}}}=(s_{1},\ldots ,s_{\ell})\in \mathbb{Z}^{\ell}$: $$\begin{array}{rcll} \sigma _{c}.{{\mathbf{s}}}&=&(s_{1},\ldots ,s_{c-1},s_{c+1},s_{c},s_{c+2},\ldots ,s_{\ell})&\text{for }c=1,\ldots,\ell-1 \text{ and }\\ z_i.{{\mathbf{s}}}&=&(s_{1},s_{2},\ldots,s_i+e,\ldots ,s_{\ell})&\text{for }i=1,\ldots,\ell.
\end{array}$$ If $\mathbf{s}$ and $\mathbf{s}'$ are in the same orbit modulo the action of $\widehat{\mathfrak{S}}_\ell$, then there is an $\widehat{\mathfrak{sl}_e}$-crystal isomorphism $\Psi_{\mathbf{s} \to \mathbf{s}'}$ between the Fock spaces $\mathcal{F}_{e,\mathbf{s}}$ and $\mathcal{F}_{e,\mathbf{s}'}$, that is, a map $$\Psi_{\mathbf{s} \to \mathbf{s}'}:\Pi^\ell_\mathbf{s} \to\Pi^\ell_{\mathbf{s}'} $$ such that: \begin{itemize} \item $|\boldsymbol{\la},\mathbf{s}\rangle$ is a highest weight vertex in $\mathcal{F}_{e,\mathbf{s}}$ if and only if $|\Psi_{\mathbf{s} \to \mathbf{s}'}(\boldsymbol{\la}),\mathbf{s} ' \rangle$ is a highest weight vertex in $\mathcal{F}_{e,\mathbf{s}'}$, \item for all $\boldsymbol{\la} \in \Pi^\ell$, we have $\Psi_{\mathbf{s} \to \mathbf{s}'} ({\tilde{f}}_i |\boldsymbol{\la},\mathbf{s}\rangle)={\tilde{f}}_i\Psi_{\mathbf{s} \to \mathbf{s}'} ( |\boldsymbol{\la},\mathbf{s}\rangle)$. \end{itemize} These crystal isomorphisms have been explicitly described in \cite{JaconLecouvey2008}. Let us now come back to our situation. Assume that $\mathbf{s}\in \mathbb{Z}^\ell$ and choose any ${\mathbf{s} '} \in \mathbb{Z}^\ell$ in the orbit of $-\mathbf{s}$ modulo the action of the extended affine symmetric group. This is in particular the case for $-{\mathbf{s}}_\mathrm{rev}$. There is a $\widehat{\mathfrak{sl}_e}$-crystal isomorphism between the Fock spaces $\mathcal{F}_{e,-\mathbf{s}_\mathrm{rev}}$ and $\mathcal{F}_{e,\mathbf{s}'}$. Composing this map with $\Phi$ thus gives an isomorphism between $\mathcal{F}_{e,\mathbf{s}}$ and $\mathcal{F}_{e,\mathbf{s}'}$. \section{Combinatorics of perverse equivalences for cyclotomic Cherednik category $\mathcal{O}$}\label{chered} In Section \ref{mulAK}, we studied the Mullineux involution in the context of representations of cyclotomic Hecke algebras.
In this section, we use the results of Section \ref{section_genmul} to study the Mullineux involution in the context of representations of cyclotomic rational Cherednik algebras. The goal is to realize the generalized Mullineux involution as the permutation of $\Pi^\ell$ induced by certain perverse equivalences. We follow Losev's approach \cite{Losev2015a}, \cite{Losev2015}. \subsection{Representations of cyclotomic rational Cherednik algebras}\label{reps of cyclo RCA} We can deform $\mathbb{C}[x_1,\ldots,x_n,y_1,\ldots,y_n]\rtimes\mathbb{C} W_{\ell,n}$ to obtain an algebra called the cyclotomic rational Cherednik algebra \cite{EtingofGinzburg2002}. This deformation depends on parameters $\kappa\in\mathbb{Q}^\times$ and $\mathbf{s}=(s_1,\ldots,s_\ell)\in\mathbb{Q}^\ell$ (the charge). We denote it $\mathsf{H}_{\kappa,\mathbf{s}}(n)$. The charge $\mathbf{s}$ is identified with $\mathbf{s}+\alpha(1,1,\ldots,1)$ for any scalar $\alpha$, thus the parameter space is $\ell$-dimensional. As a $\mathbb{C}$-vector space, $\mathsf{H}_{\kappa,\mathbf{s}}(n)=\mathbb{C}[y_1,\ldots,y_n]\otimes\mathbb{C}[W_{\ell,n}]\otimes\mathbb{C}[x_1,\ldots,x_n]$. This makes it possible to define a category $\mathcal{O}$ for $\mathsf{H}_{\kappa,\mathbf{s}}(n)$ as the full subcategory of $\mathsf{H}_{\kappa,\mathbf{s}}(n)$-mod consisting of finitely generated $\mathsf{H}_{\kappa,\mathbf{s}}(n)$-modules which are locally nilpotent for the action of $\mathbb{C}[y_1,\ldots,y_n]$ \cite{GGOR2003}. Let $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ denote the category $\mathcal{O}$ of $\mathsf{H}_{\kappa,\mathbf{s}}(n)$, and $\mathcal{O}_{\kappa,\mathbf{s}}=\bigoplus_{n\in\mathbb{Z}_{\geq 0}} \mathcal{O}_{\kappa,\mathbf{s}}(n)$. $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ is a highest weight category whose simple objects are indexed by $\text{\rm Irr}_\mathbb{C} W_{\ell,n}$, the irreducible representations of the underlying group algebra $\mathbb{C} W_{\ell,n}$ \cite{GGOR2003}, thus by $\Pi^\ell(n)$. 
Furthermore, the highest weight structure of $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ depends on a partial order on $\Pi^\ell(n)$ which is determined by the parameter $(\kappa,\mathbf{s})$. \subsection{Branching rules and crystals} In this subsection we restrict to the case of \textit{integral parameters}, that is $$ \kappa = \pm 1/e \text{ for some }e\in\mathbb{Z}_{\geq 2} \text{\quad and \quad } \mathbf{s}\in\mathbb{Z}^\ell.$$ The (complexified) Grothendieck group of $\mathcal{O}_{\kappa,\mathbf{s}}$ is a level $\ell$ Fock space. Recall from Section \ref{fock} that the parameters $e,\mathbf{s}$ for the Fock space come from its $\mathcal{U}_t (\widehat{\mathfrak{sl}_e})$-module structure. Shan \cite{Shan2011} has shown that if $\kappa=1/e$, there is a notion of branching rule arising from Bezrukavnikov and Etingof's parabolic induction functors \cite{BezrukavnikovEtingof}, which categorifies the $\widehat{\mathfrak{sl}_e}$-crystal of the Fock space $\mathcal{F}_{e,\mathbf{s}}$ \cite[Theorem 6.3]{Shan2011}. Moreover, there is a categorical Heisenberg action on $\mathcal{O}_{\kappa,\mathbf{s}}$ giving rise to the $\mathfrak{sl}_\infty$-crystal on $\mathcal{F}_{e,\mathbf{s}}$ \cite{ShanVasserot}, \cite{Losev2015}. We have: \begin{theorem}(Shan-Vasserot \cite[Proposition 5.18]{ShanVasserot})\label{crystalcusp} The following are equivalent: \begin{enumerate} \item $\boldsymbol{\la}$ is a highest weight vertex for both the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals on $\mathcal{F}_{e,\mathbf{s}}$, \item $L(\boldsymbol{\la})$ is killed by the categorical Heisenberg and $\widehat{\mathfrak{sl}_e}$ annihilation operators, \item $L(\boldsymbol{\la})$ is finite-dimensional. \end{enumerate} \end{theorem} Thus finite-dimensional simples are labeled by the source vertices of the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals. 
\subsection{Perverse equivalences}\label{subsection:perverse} Perverse equivalences are a special kind of derived equivalence introduced by Chuang-Rouquier \cite{ChuangRouquier2008}, \cite{ChuangRouquier2017} which is well-suited for combinatorial applications. Let $\mathcal{A},\mathcal{A}'$ be abelian categories with finitely many simple objects and in which every object has finite length. Let $S,S'$ be the sets of isomorphism classes of simple objects of $\mathcal{A},\mathcal{A}'$ respectively. Let $0\subset S_0\subset S_1\subset\ldots\subset S_r=S$ be a filtration of $S$ and $0\subset S_0'\subset S_1'\subset\ldots\subset S_r'=S'$ a filtration of $S'$. Let $\mathcal{A}_i\subset\mathcal{A},\mathcal{A}_i'\subset \mathcal{A}'$ be the Serre subcategories generated by $S_i,S_i'$ respectively and let $\pi:\{0,1,\ldots,r\}\rightarrow\mathbb{Z}$ be a function. \begin{definition}[Chuang-Rouquier]\label{def:perverse} A derived equivalence $F:D^b(\mathcal{A})\rightarrow D^b(\mathcal{A}')$ is \textit{perverse} if for all $i\geq 0$ and all $s\in S_i\setminus S_{i-1}$, the complex $F(s)$ satisfies: \begin{enumerate} \item for $j\neq \pi(i)$, all composition factors of $H^j(F(s))$ are in $S_{i-1}'$, \item all the composition factors of $H^{\pi(i)}(F(s))$ are in $S_{i-1}'$ except for a unique one in $S_i'\setminus S_{i-1}'$. \end{enumerate} \end{definition} A perverse equivalence $F:D^b(\mathcal{A})\rightarrow D^b(\mathcal{A}')$ therefore gives rise to a canonical bijection $\mathfrak{f}:S\rightarrow S'$ sending $s\in S_i\setminus S_{i-1}$ to $\mathfrak{f}(s):=H^{\pi(i)}(F(s))\mod \mathcal{A}'_{i-1}$. In Sections \ref{subsection:wcfunctors} and \ref{subsection:Ringel} below, we show how the generalized Mullineux involution arises from the perverse equivalences given by wall-crossing functors and Ringel duality. \subsection{Wall-crossing functors}\label{subsection:wcfunctors} Let us recall Losev's construction of wall-crossing functors in \cite{Losev2015}.
These are derived equivalences between the categories $\mathcal{O}_c(n)$ for parameters $c$ that differ by a perturbation of the partial order on $\Pi^\ell(n)$. Denote by $\mathfrak{c}_\mathbb{Z}\subset \mathbb{Q}^\times \times \mathbb{Q}^\ell$ the $\ell$-dimensional lattice in the parameter space $\mathbb{C}^\ell$ consisting of those parameters $c'=(\kappa',\mathbf{s}')$ that have \textit{integral difference} with a fixed parameter $c=(\kappa,\mathbf{s})$, i.e. such that $\kappa'-\kappa\in\mathbb{Z}$ and $\kappa' (s'_i-s'_j) -\kappa (s_i-s_j)\in\mathbb{Z}$ for all $1\leq i<j \leq \ell$. Note that replacing the $\ell$-tuple $\mathbf{s}=(s_1,\ldots,s_\ell)$ with $\mathbf{s}+(\alpha,\ldots,\alpha)$ for $\alpha\in\mathbb{Z}$ does not affect the definition of the corresponding rational Cherednik algebra, see the formulas in \cite[\S 3.3, Theorem 6.10]{ShanVasserot}. The lattice $\mathfrak{c}_\mathbb{Z}$ is the shift by $c$ of the dual lattice to the lattice spanned by certain hyperplane elements, see \cite[Section 2.1.4, Section 2.7, Section 4.1.3]{Losev2015}. It is isomorphic to $\mathbb{Z}^\ell$. There is a finite set of hyperplanes in $\mathbb{C}^\ell$ called walls dividing $\mathfrak{c}_\mathbb{Z}$ into open cones called chambers. These hyperplanes are defined as follows. For each $\boldsymbol{\la}=((\lambda_1^1,\lambda_{2}^1,\ldots),\ldots (\lambda_1^\ell,\lambda_{2}^\ell,\ldots))\in \Pi^{\ell} (n)$, let $$[\boldsymbol{\la}]:=\{ (a,b,j)\ |\ 1\leq a,\ 1\leq b\leq \lambda^j_a,\ 1\leq j \leq \ell\}$$ be the Young diagram of $\boldsymbol{\la}$. For each box $\gamma=(a,b,j)\in [\boldsymbol{\la}]$, we define $$ co_c (\gamma)=\kappa \ell(b-a)+\ell h_j,$$ where $h_j=\kappa s_j-\frac{j}{\ell}$, and we set $$c_{\boldsymbol{\la}}:=\sum_{\gamma \in [\boldsymbol{\la}]} co_c(\gamma), $$ called the $c$-function; the formula in our cyclotomic case was given in \cite{GordonLosev}.
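As an illustration of these formulas (the parameter values below are chosen arbitrarily, only for illustration), let us evaluate the $c$-function on a small bipartition.

\begin{example} Take $\ell=2$, $\kappa=\frac{1}{3}$, $\mathbf{s}=(0,1)$ and $\boldsymbol{\la}=((2),(1))$. Then $h_1=\frac{1}{3}\cdot 0-\frac{1}{2}=-\frac{1}{2}$ and $h_2=\frac{1}{3}\cdot 1-1=-\frac{2}{3}$, and $[\boldsymbol{\la}]$ consists of the three boxes $(1,1,1)$, $(1,2,1)$ and $(1,1,2)$, with $$co_c (1,1,1)=2\cdot\left(-\frac{1}{2}\right)=-1,\qquad co_c (1,2,1)=\frac{2}{3}-1=-\frac{1}{3},\qquad co_c (1,1,2)=2\cdot\left(-\frac{2}{3}\right)=-\frac{4}{3},$$ so that $c_{\boldsymbol{\la}}=-1-\frac{1}{3}-\frac{4}{3}=-\frac{8}{3}$. \end{example}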
By definition, the walls are the hyperplanes $\Pi_{\boldsymbol{\la},\boldsymbol{\la}'}$ given by $c_{\boldsymbol{\la}}=c_{\boldsymbol{\la}'}$. Among these walls, we will be interested in the so-called ``essential walls'', which are the following ones: \begin{itemize} \item[(a)] the wall $\kappa=0$ between chambers containing parameters $(\kappa,\mathbf{s})$ such that the denominator $e$ of $\kappa$ satisfies $2\leq e\leq n$. In terms of the above definition, it is of the form $\Pi_{\boldsymbol{\la},\boldsymbol{\la}'}$ where $[\boldsymbol{\la}]=[\boldsymbol{\mu}]\sqcup\{\gamma\}$ and $[\boldsymbol{\la}']=[\boldsymbol{\mu}]\sqcup\{\gamma'\}$ for some multipartition $\boldsymbol{\mu}$ and two addable boxes $\gamma=(a,b,j)$ and $\gamma'=(a',b',j')$ of $\boldsymbol{\mu}$ with $j=j'$. \item[(b)] the walls $h_i-h_j=\kappa m$ with $i\neq j$, $m\in\mathbb{Z}$ and $|m|<n$ between chambers containing parameters such that $s_i-s_j-m\in\kappa^{-1}\mathbb{Z}$. In terms of the above definition, they are of the form $\Pi_{\boldsymbol{\la},\boldsymbol{\la}'}$ where $[\boldsymbol{\la}]=[\boldsymbol{\mu}]\sqcup\{\gamma\}$ and $[\boldsymbol{\la}']=[\boldsymbol{\mu}]\sqcup\{\gamma'\}$ for a multipartition $\boldsymbol{\mu}$ and two boxes $\gamma=(a,b,i)$ and $\gamma'=(a',b',j)$. \end{itemize} Two categories whose parameters lie in the same chamber are equivalent as highest weight categories \cite[Proposition 2.8]{Losev2015}. On the other hand, the bounded derived categories of $\mathcal{O}_{c}(n)$ and $\mathcal{O}_{c'}(n)$ are equivalent when $c:=(\kappa,\mathbf{s})$ is obtained from $c':=(\kappa',\mathbf{s}')$ by crossing a wall to an adjacent chamber. Two chambers are separated by the wall $\Pi_{\boldsymbol{\la},\boldsymbol{\la}'}$ if and only if the sign of $c_{\boldsymbol{\la}}-c_{\boldsymbol{\la}'}$ in one chamber is opposite to its sign in the adjacent one.
The derived equivalences $\mathsf{WC}_{c\leftarrow c'}:D^b(\mathcal{O}_c)\rightarrow D^b(\mathcal{O}_{c'})$ are called wall-crossing functors. They are defined by taking the derived tensor product with a Harish-Chandra bimodule, see \cite[Section 2.8]{Losev2015} or \cite{Losev2017} for the construction of these functors. Losev proved that $\mathsf{WC}_{c\leftarrow c'}:D^b(\mathcal{O}_c)\rightarrow D^b(\mathcal{O}_{c'})$ is a perverse equivalence with respect to the filtration of simple modules by their supports (the function $\pi$ then picks out the dimension of the support) \cite[Proposition 2.12]{Losev2015}. Therefore $\mathsf{WC}_{c\leftarrow c'}:D^b(\mathcal{O}_c)\rightarrow D^b(\mathcal{O}_{c'})$ induces a canonical bijection $\mathsf{wc}_{c\leftarrow c'}$ on $\Pi^\ell$, called the combinatorial wall-crossing. For walls of type (b), the combinatorial wall-crossings have been studied in \cite{Losev2015}, \cite{JaconLecouvey2018}. In particular, they are given by the crystal isomorphisms of Section \ref{ciso} for the appropriate parameters \cite[Theorem 11]{JaconLecouvey2018}. We are here interested in the walls of type (a). First we need to see for which types of parameters they are defined. We will denote by $\mathsf{wc}_{-\leftarrow +}$ the combinatorial wall-crossing from $\kappa>0$ to $\kappa'<0$ corresponding to the wall of type (a). For the next proposition, we will use the following notation. For $c:=(\kappa,\mathbf{s})$ and $\pi\in \mathfrak{S}_\ell$, we say that $c$ is \textit{$\pi$-asymptotic} if we have for all $i=1,\ldots,\ell-1$: $$\left(s_{\pi (i)} - \frac{\pi (i)}{\kappa \ell}\right) -\left(s_{\pi (i+1)}- \frac{\pi (i+1)}{\kappa \ell}\right) >n-1.$$ Note that this is slightly more general than Losev's definition \cite{Losev2015}, and that both agree for integral parameters. We say that $c$ is \textit{asymptotic} if it is $\pi$-asymptotic for some $\pi$.
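The asymptoticity condition can be checked directly from the definition; the following small example (with arbitrarily chosen values) illustrates it.

\begin{example} Take $\ell=2$, $\kappa=\frac{1}{3}$, $\mathbf{s}=(5,0)$ and $\pi=\mathrm{id}$, so that $\kappa\ell=\frac{2}{3}$. Then $$\left(s_1-\frac{1}{\kappa\ell}\right)-\left(s_2-\frac{2}{\kappa\ell}\right)=\left(5-\frac{3}{2}\right)-\left(0-3\right)=\frac{13}{2},$$ so $c=(\kappa,\mathbf{s})$ is $\mathrm{id}$-asymptotic whenever $n-1<\frac{13}{2}$, that is, for all $n\leq 7$. \end{example}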
\begin{proposition}\label{prop_wc} \ \begin{enumerate} \item Let $c=(\kappa,\mathbf{s})$ and $c'=(\kappa',\mathbf{s}')$ in $\mathfrak{c}_{\mathbb{Z}}$ such that $\kappa$ and $\kappa'$ have the same sign and such that $c$ and $c'$ are both $\pi$-asymptotic for some $\pi\in \mathfrak{S}_\ell$. Then there are no essential walls between $c$ and $c'$. \item Assume that $c:=(\kappa,\mathbf{s})$ and $c':=(\kappa-1,\mathbf{s}')$ are two parameters with integral difference, with $\kappa>0$ and $\kappa':=\kappa-1<0$ lying in two different chambers separated by a unique wall of type (a) (and no wall of type (b)). Then there exists $\pi\in \mathfrak{S}_\ell$ such that $c$ is $\pi$-asymptotic and $c'$ is $\pi'$-asymptotic, where $\pi ' \in\mathfrak{S}_\ell$ is defined by $\pi'(i) = \pi(\ell-i+1)$ for all $i=1,\ldots,\ell$. Conversely, if $c$ and $c'$ are as above, they are separated by a unique wall of type (a) and no wall of type (b). \end{enumerate} \end{proposition} \begin{proof} Let $(i_1,i_2)\in \{1,\ldots,\ell\}^2$ be such that $i_1\neq i_2$ and assume that $$\left(s_{i_1} - \frac{i_1}{\kappa \ell}\right) -\left(s_{i_2}- \frac{i_2}{\kappa \ell}\right) >n-1.$$ Let $\boldsymbol{\la}$ be an arbitrary $\ell$-partition of $n$ and let $\gamma_1 =(a_1,b_1,i_1)$ and $\gamma_2 =(a_2,b_2,i_2)$ be two boxes of its Young diagram. By \cite[Example 5.5.18]{GeckJacon2011}, we have: $$b_1-a_1-(b_2-a_2)\geq 1-n.$$ Thus we deduce $$b_1-a_1+s_{i_1} - \frac{i_1}{\kappa \ell} >b_2-a_2+s_{i_2} - \frac{i_2}{\kappa \ell},$$ which implies that $co_c (\gamma_1)>co_c (\gamma_2)$. The same calculation holds for $c'$ and we get $co_{c'}(\gamma_1)>co_{c'}(\gamma_2)$. Thus the order on boxes induced by the $c$-function is the same whether we compute it with respect to $c$ or to $c'$, and therefore there is no wall of type (b) between $c$ and $c'$. Since $\kappa$ and $\kappa'$ have the same sign, the wall of type (a) does not separate $c$ and $c'$ either, so no essential wall $\Pi_{\boldsymbol{\la},\boldsymbol{\la}'}$ lies between the two parameters. This directly implies $(1)$. Now we prove $(2)$.
Assume thus that $c:=(\kappa,\mathbf{s})$ and $c':=(\kappa-1,\mathbf{s}')$ have integral difference, with $\kappa>0$ and $\kappa':=\kappa-1<0$ lying in two different chambers separated by a unique wall of type (a) (and no wall of type (b)). Let $(i_1,i_2)\in \{1,\ldots,\ell\}^2$ be such that $i_1\neq i_2$. Consider the $\ell$-partition $\boldsymbol{\la}$ of $n-1$ such that $\lambda^k=\emptyset$ for all $k\in \{1,\ldots,\ell\}\setminus \{i_1\}$ and $\lambda^{i_1}=(n-1)$. We then consider the $\ell$-partition $\boldsymbol{\mu}$ of $n$ obtained from $\boldsymbol{\la}$ by adding $\gamma_1:=(1,n,i_1)$ and the $\ell$-partition $\boldsymbol{\nu}$ of $n$ obtained from $\boldsymbol{\la}$ by adding $\gamma_2:=(1,1,i_2)$. If $c:=(\kappa,\mathbf{s})$ does not lie on an essential wall, we have two cases to consider: \begin{itemize} \item If $c_{{\boldsymbol{\mu}}}<c_{\boldsymbol{\nu}}$ then we have $co_c (\gamma_1)<co_c (\gamma_2)$. We thus obtain: $$\kappa \ell \left(n-1+s_{i_1}-\frac{i_1}{\kappa \ell}\right) < \kappa \ell \left(s_{i_2}-\frac{i_2}{\kappa \ell}\right)$$ which implies that: $$\left(s_{i_2} - \frac{i_2}{\kappa \ell}\right) -\left(s_{i_1}- \frac{i_1}{\kappa \ell}\right) >n-1.$$ Now consider the $\ell$-partition $\boldsymbol{\la}'$ of $n-1$ such that ${\lambda'}^k=\emptyset$ for all $k\in \{1,\ldots,\ell\}\setminus \{i_1\}$ and ${\lambda'}^{i_1}=(1^{n-1})$. We then consider the $\ell$-partition $\boldsymbol{\mu}'$ of $n$ obtained from $\boldsymbol{\la}'$ by adding $\gamma_1:=(n,1,i_1)$ and the $\ell$-partition $\boldsymbol{\nu}'$ of $n$ obtained from $\boldsymbol{\la}'$ by adding $\gamma_2:=(1,1,i_2)$.
We must have: $$\kappa \ell \left(1-n+s_{i_1}-\frac{i_1}{\kappa \ell}\right) < \kappa \ell \left(s_{i_2}-\frac{i_2}{\kappa \ell}\right)$$ but since no wall of type (b) is crossed on the way to $c'$, we must also have: $$\kappa' \ell \left(1-n+s_{i_1}'-\frac{i_1}{\kappa' \ell}\right) < \kappa' \ell \left(s_{i_2}'-\frac{i_2}{\kappa' \ell}\right).$$ This implies that $$\left(s_{i_1}' - \frac{i_1}{\kappa' \ell}\right) -\left(s_{i_2}'- \frac{i_2}{\kappa' \ell}\right) >n-1$$ because $\kappa'$ is negative. \item If $c_{\boldsymbol{\mu}}>c_{\boldsymbol{\nu}}$, we now have $co_c (\gamma_1)>co_c (\gamma_2)$ and $$\kappa \ell \left(n-1+s_{i_1}-\frac{i_1}{\kappa \ell}\right) > \kappa \ell \left(s_{i_2}-\frac{i_2}{\kappa \ell}\right).$$ Since no wall of type (b) is crossed, the same inequality holds with respect to $c'$, and dividing by $\kappa'\ell<0$ yields $$\left(s_{i_2}' - \frac{i_2}{\kappa' \ell}\right) -\left(s_{i_1}'- \frac{i_1}{\kappa' \ell}\right) >n-1,$$ and by considering the same $\boldsymbol{\la}'$ as above we now obtain $$\left(s_{i_1} - \frac{i_1}{\kappa \ell}\right) -\left(s_{i_2}- \frac{i_2}{\kappa \ell}\right) >n-1.$$ \end{itemize} We can thus conclude that $c$ is $\pi$-asymptotic and $c'$ is $\pi'$-asymptotic for a certain $\pi\in \mathfrak{S}_\ell$. Now let us take such a pair $(c,c')$ and let $(i_1,i_2)\in \{1,\ldots,\ell\}^2$ be such that $i_1\neq i_2$ and $$\left(s_{i_1} - \frac{i_1}{\kappa \ell}\right) -\left(s_{i_2}- \frac{i_2}{\kappa \ell}\right) >n-1,$$ so that $$\left(s_{i_2}' - \frac{i_2}{\kappa' \ell}\right) -\left(s_{i_1}'- \frac{i_1}{\kappa' \ell}\right) >n-1.$$ We have already seen that for any $\ell$-partition $\boldsymbol{\la}$, and $\gamma_1 =(a_1,b_1,i_1)$ and $\gamma_2 =(a_2,b_2,i_2)$ two boxes of its Young diagram, we have $co_c (\gamma_1)>co_c (\gamma_2)$. But the hypothesis also shows that $co_{c'} (\gamma_1)>co_{c'} (\gamma_2)$. This implies that no wall of type (b) is crossed between $c$ and $c'$.
If we now take an arbitrary $\ell$-partition of $n-1$ and consider two addable boxes $\gamma_1$ and $\gamma_2$ in the same component such that $co_c(\gamma_1)<co_c(\gamma_2)$, then we have $co_{c'}(\gamma_1)>co_{c'}(\gamma_2)$. This implies that a wall of type (a) is crossed. This concludes the proof. \end{proof} Assume that $c=(\kappa,{\bf s})$ is $\pi$-asymptotic for some $\pi\in\mathfrak{S}_\ell$ and assume moreover that we are in the integral parameter case. We denote by $\mathbf{s}_{\mathrm{opp}}:=\mathbf{s}'$ an associated multicharge such that $c'=(\kappa',\mathbf{s}')$ is $\pi'$-asymptotic, in the notation of the proposition. This is a slight abuse of notation because $\mathbf{s}'$ is of course not unique in general. However, two $\pi'$-asymptotic parameters lie in the same chamber. Thus, the associated categories are equivalent as highest weight categories, and we can identify these two parameters. Moreover, note that one can assume that $\mathbf{s}'$ and $\mathbf{s}$ lie in the same orbit modulo the action of the affine symmetric group. Indeed, writing ${\bf s}:=(s_1,\ldots,s_\ell)$, one can choose $(k_1,\ldots,k_\ell)\in \mathbb{Z}^\ell$ so that ${\bf s}'':=(s_1+k_1 e,\ldots, s_\ell+k_\ell e)$ is $\pi'$-asymptotic. In this case, $c''=(\kappa',\mathbf{s}'')$ and $c$ are in $\mathfrak{c}_{\mathbb{Z}}$ and again $c''$ and $c'$ lie in the same chamber. \begin{proposition}\cite[Proposition 5.6]{Losev2015}\label{wccomm1} \ \begin{enumerate} \item The $\widehat{\mathfrak{sl}_e}$-crystal commutes with $\mathsf{wc}_{-\leftarrow +}$. \item The $\mathfrak{sl}_\infty$-crystal commutes with $\mathsf{wc}_{-\leftarrow +}$ up to taking the transpose, i.e. $\mathsf{wc}_{-\leftarrow +} \circ {\tilde{a}}_\sigma = {\tilde{a}}_{\sigma^\mathrm{tr}} \circ \mathsf{wc}_{-\leftarrow +}$ for all $\sigma\in\Pi$.
\end{enumerate} \end{proposition} For the next definition, if $\boldsymbol{\la}$ is an arbitrary $\ell$-partition and ${\bf s}$ an arbitrary multicharge, we set $(|\boldsymbol{\la},\mathbf{s} \rangle)^\mathrm{tr}:=|\boldsymbol{\la}^{\mathrm{tr}},-\mathbf{s}\rangle$, where $\boldsymbol{\la}^{\mathrm{tr}}$ is the $\ell$-partition $((\lambda^1)^{\mathrm{tr}},\ldots,(\lambda^\ell)^\mathrm{tr})$ and $-\mathbf{s}:=(-s_1,\ldots,-s_\ell)$. Now it follows from the definition of the crystal operators and \cite[Section 4.1.4]{Losev2015} that when changing $\kappa$ to $-\kappa$, $()^\mathrm{tr}$ commutes with the $\widehat{\mathfrak{sl}_e}$-crystal up to changing $\tilde{f}_i$ to $\tilde{f}_{-i}$ (see \cite[Section 3.2.3]{JaconLecouvey2010}), and commutes with the $\mathfrak{sl}_\infty$-crystal up to taking the transpose (when $\kappa<0$, the $\mathfrak{sl}_\infty$-crystal adds horizontal $e$-strips instead of vertical $e$-strips, see \cite[Section 4.2.3]{Losev2015}). We deduce the following result. \begin{theorem}\label{wcmul1} Assume that we are in the integral case, and that moreover $(\kappa,\mathbf{s})$ is asymptotic. Assume that $\boldsymbol{\la}=(\lambda^1,\ldots,\lambda^\ell)$ is in the connected component of the empty multipartition for the $\widehat{\mathfrak{sl}_e}$-crystal and the $\mathfrak{sl}_\infty$-crystal.
Then we have: $$\Psi_{-{\mathbf{s}}_\mathrm{rev}\rightarrow -{\mathbf{s}}_{\mathrm{opp}}} \circ \Phi (\boldsymbol{\la})=\mathsf{wc}_{-\leftarrow +} (\boldsymbol{\la})^\mathrm{tr}.$$ \label{wcviapsi} \end{theorem} \begin{proof} By hypothesis, there exist $\sigma\in\Pi$ and $(i_1,\ldots,i_p)\in (\mathbb{Z}/e\mathbb{Z})^p$ such that $$ {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}{\tilde{a}}_{\sigma} |\bemptyset, \mathbf{s}\rangle=|\boldsymbol{\la}, \mathbf{s}\rangle.$$ From Theorem \ref{genmul} $(1)$ together with the definition of the crystal isomorphism in Section \ref{ciso}, we have that: $$\begin{array}{rcl} \Psi_{-{\mathbf{s}}_\mathrm{rev}\rightarrow -{\mathbf{s}}_{\mathrm{opp}}} \circ \Phi ({\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}{\tilde{a}}_{\sigma} |\bemptyset, \mathbf{s}\rangle)&=& {\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1} \Psi_{-{\mathbf{s}}_\mathrm{rev}\rightarrow -{\mathbf{s}}_{\mathrm{opp}}} \circ \Phi ({\tilde{a}}_{\sigma} |\bemptyset, \mathbf{s}\rangle)\\ &=&{\tilde{f}}_{-i_p}\ldots {\tilde{f}}_{-i_1} {\tilde{a}}_{\sigma^\mathrm{tr}} |\bemptyset, -\mathbf{s}_{\mathrm{opp}}\rangle \end{array}$$ using the explicit formulae of the crystal isomorphism together with the explicit formula of the action of ${\tilde{a}}_\sigma$ \cite[Section 5]{Gerber2016a}. On the other hand, by Proposition \ref{wccomm1}, we also get: $$ \mathsf{wc}_{-\leftarrow +} ({\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1}{\tilde{a}}_{\sigma} |\bemptyset, \mathbf{s}\rangle)= {\tilde{f}}_{i_p}\ldots {\tilde{f}}_{i_1} {\tilde{a}}_{\sigma^\mathrm{tr}} |\bemptyset, \mathbf{s}_{\mathrm{opp}}\rangle,$$ where the right-hand side is computed with respect to $\kappa'<0$. The result then follows by sending $\kappa'$ to $-\kappa'$ (the latter is positive) and applying our operator $()^\mathrm{tr}$. \end{proof} This theorem is in fact a generalization of a result by Losev in level 1 \cite[Corollary 5.7]{Losev2015}, which we recover below.
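To illustrate the operator $()^\mathrm{tr}$ appearing in the theorem, here is a small example; the values of $\boldsymbol{\la}$ and $\mathbf{s}$ are arbitrary. \begin{example} For $\ell=2$, $\boldsymbol{\la}=((3),(2,1))$ and $\mathbf{s}=(0,4)$, we have $$(|((3),(2,1)),(0,4)\rangle)^\mathrm{tr}=|((1,1,1),(2,1)),(0,-4)\rangle,$$ since $(3)^\mathrm{tr}=(1,1,1)$ and $(2,1)^\mathrm{tr}=(2,1)$. \end{example}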
Using the notation of Remark \ref{rem_bij}, denote $$ \begin{array}{cccc} M_e : & \Pi & \longrightarrow & \Pi \\ & \lambda=(\sigma^e) \sqcup \rho & \longmapsto & (\sigma^\mathrm{tr})^e \sqcup m_e(\rho). \end{array} $$ \begin{corollary}[Losev]\label{Lmul} Suppose $\ell=1$. Then for all $\lambda\in\Pi$, $$\mathsf{wc}_{-\leftarrow +}(\lambda) = (M_e(\lambda))^\mathrm{tr}.$$ \end{corollary} \begin{proof} In the case $\ell=1$, the charge is irrelevant (see also Remark \ref{rem_mulFayers}), thus so is $\Psi_{-\mathbf{s}_\mathrm{rev}\rightarrow -\mathbf{s}_{\mathrm{opp}}}$. By Remark \ref{rem_bij}, $\lambda=(\sigma^e) \sqcup \rho$ is the level $1$ analogue of the decomposition of Corollary \ref{bijtriple} used to define $\Phi$, and we can identify $M_e(\lambda)$ with $(m_e(\rho), (\sigma^\mathrm{tr})^e)$. Thus by Theorem \ref{genmul} (2), we have $\Phi(\lambda)=(m_e(\rho), (\sigma^\mathrm{tr})^e)=M_e(\lambda)$, and we conclude using Theorem \ref{wcmul1}. \end{proof} We are able to partially generalize the level $1$ statement to level $\ell$: \begin{theorem}\label{wcmul2} Assume that we are in the integral case, and that moreover $(\kappa,\mathbf{s})$ is asymptotic. Let $k\in \{1,\ldots,\ell\}$ be such that $$s_k=\max \{s_i \, ;\, i=1,\ldots,\ell\}.$$ Assume that $\boldsymbol{\la}=(\lambda^1,\ldots,\lambda^\ell)$ is such that $\lambda^i$ is $e$-regular for all $i\in \{1,\ldots,\ell\} \setminus \{k\}$ and $\lambda^k$ is arbitrary. The combinatorial wall-crossing $\mathsf{wc}_{-\leftarrow +}(\boldsymbol{\la})$ is then given by the formula: $$\mathsf{wc}_{-\leftarrow +}(\boldsymbol{\la})=(m_e (\lambda^1),\ldots,M_e(\lambda^k),\ldots,m_e (\lambda^\ell))^\mathrm{tr}.$$ \end{theorem} \begin{proof} Let us first assume that $\lambda^j$ is $e$-regular for all $j=1,\ldots,\ell$. By \cite[Proposition 3.1]{Losev2015}, we know that the wall-crossing is independent of the choice of a Weil generic parameter.
In our situation this means that we can assume that for each $\boldsymbol{\la} \in \Pi^\ell (n)$, if two boxes have the same residue then they are in the same component. We thus have an action of a Kac-Moody algebra as a tensor product of $\ell$ copies of $\widehat{\mathfrak{sl}_e}$, one for each component of the multipartition. By Proposition \ref{wccomm1}, the associated Kashiwara operators commute with $\mathsf{wc}_{-\leftarrow +}$. Moreover, we know that $\mathsf{wc}_{-\leftarrow +}$ sends the empty multipartition to the empty multipartition and that for each $e$-regular partition $\lambda$, there exists a sequence of Kashiwara operators sending $\emptyset$ to $\lambda$. The result follows. Next, assume that $\boldsymbol{\la}=(\lambda^1,\ldots,\lambda^\ell)$ is such that $\lambda^j$ is $e$-regular for all $j\in \{1,\ldots,\ell\} \setminus \{k\}$ and $\lambda^k$ is arbitrary. By our assumption on $k$ and the definition of the action of the Heisenberg crystal operators ${\tilde{a}}_\sigma$ in \cite[Proposition 5.3]{Losev2015}, we know that there exist $\sigma\in\Pi$ and $\boldsymbol{\mu}=(\mu^1 ,\ldots,\mu^\ell)$ such that $\mu^i=\lambda^i $ if $i\neq k$, $\mu^k $ is $e$-regular, and ${\tilde{a}}_{\sigma} \boldsymbol{\mu}=\boldsymbol{\la}$. The result then follows from Proposition \ref{wccomm1}. \end{proof} We obtain the following interesting corollary, which was not immediate from the crystal graph perspective. \begin{corollary} Assume that we are in the integral case and that $(\kappa,\mathbf{s})$ is an asymptotic parameter, and assume that $\boldsymbol{\la}=(\lambda^1,\ldots,\lambda^\ell)$ is a highest weight vertex such that each $\lambda^j$ is $e$-regular. Then $\mathsf{wc}_{-\leftarrow +}(\boldsymbol{\la})=(m_e (\lambda^1),\ldots,m_e (\lambda^\ell))^\mathrm{tr}$ is a highest weight vertex for the opposite asymptotic parameter.
\end{corollary} \begin{example} Take $\ell=2$ and $s\in \mathbb{Z}$ such that $s>n-1$, so that ${\bf s}=(0,s)$ is an asymptotic $2$-charge. The bipartitions $(\lambda^1,\lambda^2)$ which are highest weight vertices for both the $\mathfrak{sl}_\infty$-crystal and the $\widehat{\mathfrak{sl}_e}$-crystal are exactly the ones satisfying $\lambda^2=\emptyset$ and one of the following conditions: \begin{itemize} \item $\lambda^1=\mu^e$ for a partition $\mu$. \item $\lambda^1$ has exactly one good removable box of residue $s \mod e$. \end{itemize} Let us consider the second case and assume that $\lambda^1$ is $e$-regular. Then our theorem asserts that $\mathsf{wc}_{-\leftarrow +}(\boldsymbol{\la})=(m_e(\lambda^1)^\mathrm{tr},\emptyset)$. This is consistent with the fact that it must be a highest weight vertex for both the $\mathfrak{sl}_\infty$-crystal and the $\widehat{\mathfrak{sl}_e}$-crystal, because $m_e(\lambda^1)^\mathrm{tr}$ has exactly one removable box of residue $-s \mod e.$ \end{example} \subsection{Ringel duality for $\mathcal{O}_{\kappa,\mathbf{s}}$}\label{subsection:Ringel} Consider $\mathcal{O}_{\kappa,\mathbf{s}}(n)$, the cyclotomic Cherednik category $\mathcal{O}$ of the complex reflection group $G(\ell,1,n)$ for parameters $\kappa=r/e$ such that $e\geq 2$ and $\mathrm{gcd}(r,e)=1$, and $\mathbf{s}\in\mathbb{Q}^\ell$ satisfying $\kappa e s_i\in\mathbb{Z}$ for $i=1,\ldots,\ell$. The category $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ is a highest weight category with standard objects $\Delta(\boldsymbol{\la})$, costandard objects $\nabla(\boldsymbol{\la})$, simple objects $L(\boldsymbol{\la})$, etc., indexed by $\Pi^\ell(n)$ \cite{GGOR2003}.
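For concreteness, here is an instance of admissible parameters; the specific values are chosen only for illustration. \begin{example} Take $e=3$ and $\kappa=2/3$, so that $r=2$. The condition $\kappa e s_i=r s_i\in\mathbb{Z}$ allows, for instance, $\mathbf{s}=(0,\frac{1}{2})$ when $\ell=2$, whereas $\mathbf{s}=(0,\frac{1}{3})$ is not allowed since $2\cdot\frac{1}{3}\notin\mathbb{Z}$. \end{example}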
By general theory \cite[Appendix]{Donkin} there is a corresponding Ringel duality functor $$\mathsf{R}: D^b(\mathcal{O}_{\kappa,\mathbf{s}}(n))\stackrel{\sim}{\longrightarrow} D^b(^\vee\mathcal{O}_{\kappa,\mathbf{s}}(n))$$ induced by the derived Hom functor with respect to a full tilting module, and which was realized explicitly in \cite{GGOR2003}. The category $^\vee\mathcal{O}_{\kappa,\mathbf{s}}(n)$ is called the Ringel dual of $\mathcal{O}_{\kappa,\mathbf{s}}(n)$. It is shown in \cite{GGOR2003} that $^\vee\mathcal{O}_{\kappa,\mathbf{s}}(n)$ is again a cyclotomic Cherednik category $\mathcal{O}$. In particular, it holds that $^\vee\mathcal{O}_{\kappa,\mathbf{s}}(n)\ \simeq \mathcal{O}_{\kappa',\mathbf{s}'}(n)$ for some parameters $\kappa',\mathbf{s}'$ of the same rank and level, so that the Grothendieck group of $\bigoplus\limits_n\mathcal{O}_{\kappa',\mathbf{s}'}(n)$ is again a level $\ell$ and rank $e$ Fock space. Losev proved that Ringel duality is perverse with respect to the filtration by support of simple modules \cite[Lemma 2.5]{Losev2017}. As discussed in Section \ref{subsection:perverse}, this means that we can pick out a unique composition factor $L(\boldsymbol{\mu})$ in the homology of the complex $\mathsf{R}(L(\boldsymbol{\la}))$ such that $\boldsymbol{\mu}$ has maximal bidepth in the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals. Set $\mathsf{r}(\boldsymbol{\la})=\boldsymbol{\mu}$. For the rest of this section, we assume $n$ is fixed and write $\mathcal{O}_{\kappa,\mathbf{s}}$ as shorthand for $\mathcal{O}_{\kappa,\mathbf{s}}(n)$. Note that in Section \ref{subsection:wcfunctors}, we have defined wall-crossing functors as crossing a single essential wall.
However, there is a derived equivalence $\mathsf{WC}_{c'\leftarrow c}:D^b(\mathcal{O}_c)\stackrel{\sim}{\rightarrow}D^b(\mathcal{O}_{c'})$ for any $c'$ in the lattice $\mathfrak{c}_\mathbb{Z}$ of parameters having integral difference with $c$, defined by taking the derived tensor product with a Harish-Chandra bimodule \cite{Losev2017}. By abuse of language, let us refer to such an equivalence as a wall-crossing functor whenever $c$ and $c'$ do not lie in the same chamber. We are now going to recall Losev's comparison of Ringel duality with the wall-crossing functor to a maximally distant chamber. The first part of the following lemma appears in \cite{Losev2017} immediately after \cite[Theorem 6.1]{Losev2017}. We add some details to the proof by justifying the assumption of the existence of a parameter $c'$ in the same lattice as $c$ such that the order on category $\mathcal{O}_{c'}$ is opposite to that on $\mathcal{O}_c$. \begin{lemma}\label{lem_sleRingel} Assume the parameter $c=(\kappa,\mathbf{s})$ is as in the assumptions of \cite[Theorem 4.1]{Losev2017}. The following statements hold. \begin{enumerate} \item The Ringel duality $\mathsf{R}:D^b(\mathcal{O}_{\kappa,\mathbf{s}})\rightarrow D^b(^\vee\mathcal{O}_{\kappa,\mathbf{s}})$ can be realized by the (inverse) wall-crossing functor to the opposite chamber. \item The combinatorial bijection $\mathsf{r}$ induced by Ringel duality commutes with the $\widehat{\mathfrak{sl}_e}$-crystal. \end{enumerate} \end{lemma} \begin{proof} (1) This is a special case of \cite[Theorem 6.1]{Losev2017}, where it is observed that it follows from \cite[Lemma 2.5]{Losev2017} and \cite[Theorem 4.1]{Losev2017}. We justify the existence of an opposite chamber and its intersection with the $\mathbb{Z}$-lattice of parameters $\mathfrak{c}_\mathbb{Z}$ in the case $W=G(\ell,1,n)$.
The category $\mathcal{O}_{\kappa,\mathbf{s}}$ is a highest weight category with respect to the order on $\text{\rm Irr} \ G(\ell,1,n)=\Pi^\ell(n)$ given by the $c$-function $c_{\boldsymbol{\la}}$ (see Section \ref{subsection:wcfunctors}). It is a general fact about Ringel duality for highest weight categories that the Ringel dual category has the same poset with the opposite partial order \cite[Appendix]{Donkin}. Thus the $c$-order on $\Pi^\ell(n)$ with respect to the parameters $(\kappa',\mathbf{s}')$ is opposite to the $c$-order on $\Pi^\ell(n)$ with respect to the parameters $(\kappa,\mathbf{s})$. Recall from \cite{Losev2015} that the $c$-order is constant on the open chambers given by the complement of the hyperplane arrangement of essential walls. We claim that we can find an example of a parameter $c'=(\kappa',\mathbf{s}')$ with the property that $\mathcal{O}_{\kappa',\mathbf{s}'}$ realizes the Ringel dual of $\mathcal{O}_{\kappa,\mathbf{s}}$ and $(\kappa',\mathbf{s}')$ has integral difference with $(\kappa,\mathbf{s})$. Recall the definitions of walls and chambers from Section \ref{subsection:wcfunctors}. First, assume that $\kappa>0$, let $t\in \mathbb{Z}_{>0}$ be such that $t{\mathbf{s}}\in \mathbb{Z}^\ell$, and let $n\in \mathbb{Z}$ be such that $\kappa-n<0$, $t$ divides $n$ and $n/(\ell\kappa)\in \mathbb{Z}$. Set $c':=(\kappa',{\bf s}')\in \mathbb{Q}_{<0}^{\times} \times \mathbb{Q}^\ell$ such that $\kappa':=\kappa-n$ and for all $j=1,\ldots,\ell$, we have $s_j '=s_j +\frac{jn}{\ell\kappa \kappa'}$. Then for all $j=1,\ldots,\ell$, we have $$\begin{array}{rcl} \kappa' s_j'-\kappa s_j&=& (\kappa-n) \left(s_j+\displaystyle\frac{jn}{\ell\kappa (\kappa-n)}\right) -\kappa s_j \\ &=&-ns_j+\displaystyle\frac{jn}{\ell \kappa} \in \mathbb{Z}. \end{array}$$ Since we have in addition $\kappa-\kappa'\in \mathbb{Z}$, we deduce that $(\kappa,{\bf s})$ and $(\kappa',{\bf s}')$ are in the same lattice.
Now, let $\gamma=(a,b,j)\in \mathbb{Z}_{>0}\times \mathbb{Z}_{\geq 0} \times \{1,\ldots,\ell\}$ be a box of an $\ell$-partition. We have $$co_c (\gamma)=\kappa \ell (b-a+s_j-\frac{j}{\ell\kappa})\text{ and }co_{c'} (\gamma)=\kappa' \ell (b-a+s_j'-\frac{j}{\ell\kappa'})$$ and for $j=1,\ldots,\ell$: $$\begin{array}{rcl} s_j'-\displaystyle \frac{j}{\ell\kappa'}-(s_j-\frac{j}{\ell\kappa}) &=&\displaystyle\frac{jn}{\ell\kappa \kappa'} - \frac{j}{\ell\kappa'} + \frac{j}{\ell\kappa}\\ &=&\displaystyle\frac{jn-j\kappa+j(\kappa-n)}{\ell\kappa (\kappa-n)}\\ &=&0. \end{array}$$ As $\kappa$ and $\kappa'$ have opposite signs, we conclude that $c$ and $c'$ induce opposite orders with respect to the $c$-function.\\ (2) The proof follows by exactly the same argument as the proof of \cite[Proposition 5.6(1)]{Losev2015}. Indeed, the assumption of \cite[Proposition 5.6(1)]{Losev2015} that the wall-crossing is through a single essential wall is only used in reference to \cite[Proposition 3.8]{Losev2015}, which states that wall-crossing functors across essential walls intertwine the parabolic restriction functors. The analogous statement that (inverse) Ringel duality intertwines parabolic induction and restriction functors is proved in \cite[Lemma 4.7]{Losev2017}. \end{proof} The symmetrical statement to Lemma \ref{lem_sleRingel}(2) for the $\widehat{\mathfrak{sl}_\ell}$-crystal follows by level-rank duality, which switches the roles of $e$ and $\ell$ and commutes with Ringel duality. For this, we need to be in the integral parameter case again. \begin{lemma}\label{lem_sllRingel} The map $\mathsf{r}$ commutes with the $\widehat{\mathfrak{sl}_\ell}$-crystal. \end{lemma} \begin{proof} By \cite[Theorem 7.4]{RSVV} and \cite{Webster2017}, the category $\mathcal{O}_{\kappa,\mathbf{s}}$ is standard Koszul, which by \cite{Mazorchuk} implies that Ringel duality commutes with Koszul duality.
The Koszul duality $\mathsf{K}$ lifts the level-rank duality $\mathsf{k}$, by which the $\widehat{\mathfrak{sl}_\ell}$-crystal is defined, to the categorical level \cite[Theorem 7.4]{RSVV}. Since we need to compare Ringel duality in level $\ell$ and rank $e$ with Ringel duality in level $e$ and rank $\ell$, write $\mathsf{R}$ for the former and $\dot{\mathsf{R}}$ for the latter; likewise, write ${\mathsf{r}}$ for the former and $\dot{\mathsf{r}}$ for the latter on the level of Grothendieck groups. We have $\dot{\mathsf{R}}\mathsf{K}=\mathsf{K}\mathsf{R}$ and thus $\dot{\mathsf{r}}\mathsf{k}=\mathsf{k}\mathsf{r}$ and $\mathsf{r}\dot{\mathsf{k}}=\dot{\mathsf{k}}\dot{\mathsf{r}}$. Let $\tilde{\dot{f}}_j$ be an $\widehat{\mathfrak{sl}_\ell}$-crystal operator. Recall from Diagram \ref{lr_diag} that the action of $\tilde{\dot{f}}_j$ in level $\ell$ and rank $e$ is defined by $\dot{\mathsf{k}}\tilde{\dot{f}}_j\mathsf{k}$. By the preceding lemma, in level $e$ and rank $\ell$ we have $\tilde{\dot{f}}_j\dot{\mathsf{r}}=\dot{\mathsf{r}}\tilde{\dot{f}}_j$. Therefore, $\mathsf{r}(\dot{\mathsf{k}}\tilde{\dot{f}}_j\mathsf{k})= \dot{\mathsf{k}}\dot{\mathsf{r}}\tilde{\dot{f}}_j\mathsf{k}= \dot{\mathsf{k}}\tilde{\dot{f}}_j \dot{\mathsf{r}} \mathsf{k}= (\dot{\mathsf{k}}\tilde{\dot{f}}_j\mathsf{k})\mathsf{r}.$ \end{proof} \subsection{The combinatorial Ringel duality} For this section, we again take the parameters to be integral. The category $\mathcal{O}_{\kappa,\mathbf{s}}$ is equivalent as a highest weight category to the category $\mathcal{S}_{\kappa,\mathbf{s}}$ of finite-dimensional modules over the diagrammatic Cherednik algebra with Uglov weighting defined by Webster \cite[Theorem 4.8]{Webster2017}. Webster proved that for the category $\mathcal{S}_{\kappa,\mathbf{s}}$, taking the Ringel dual corresponds to sending $\kappa$ to $-\kappa$ and keeping the charge $\mathbf{s}$ fixed \cite[Corollary 5.11]{Webster2017}.
Moreover, there is a natural isomorphism of diagram algebras (which is given by replacing the label $i\in\mathbb{Z}/e\mathbb{Z}$ with $-i$ on black strands and the label $j\in\mathbb{Z}/\ell\mathbb{Z}$ with $-j$ on red strands, see \cite[Proposition 4.5]{Webster2017}). This isomorphism induces an equivalence $$\ast:\mathcal{S}_{\kappa,\mathbf{s}} \rightarrow\mathcal{S}_{-\kappa,-\mathbf{s}_\mathrm{rev}}$$ which evidently satisfies $\tilde{f}_{-i}\circ\ast=\ast\circ \tilde{f}_i$ and $\tilde{\dot{f}}_{-j}\circ\ast=\ast\circ\tilde{\dot{f}}_j$. Composing $*$ with Ringel duality for the diagrammatic Cherednik algebra then gives an equivalence (via identification of $\mathcal{O}_{\kappa,\mathbf{s}}$ with $\mathcal{S}_{\kappa,\mathbf{s}}$) $$\ast\circ \mathsf{R}:D^b(\mathcal{O}_{\kappa,\mathbf{s}})\simeq D^b(\mathcal{S}_{\kappa,\mathbf{s}})\stackrel{\sim}{\longrightarrow} D^b(\mathcal{S}_{\kappa,-\mathbf{s}_\mathrm{rev}})\simeq D^b(\mathcal{O}_{\kappa,-\mathbf{s}_\mathrm{rev}})$$ which is perverse, being the composition of abelian equivalences with a perverse equivalence \cite{ChuangRouquier2017}. Set $\mathsf{D}=\ast\circ \mathsf{R}:D^b(\mathcal{O}_{\kappa,\mathbf{s}})\stackrel{\sim}{\rightarrow}D^b(\mathcal{O}_{\kappa,-\mathbf{s}_\mathrm{rev}})$ (cf. \cite[Proposition 4.10]{GGOR2003}). On the level of Grothendieck groups, $\mathsf{D}$ yields an involutive isomorphism of Fock spaces $$\mathsf{d} : \mathcal{F}_{e,\mathbf{s}} \longrightarrow \mathcal{F}_{e,-\mathbf{s}_\mathrm{rev}}.$$ We call $\mathsf{d}$ the \textit{combinatorial Ringel duality}.
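To make the notation explicit, here is a small illustration; the charge is chosen arbitrarily. \begin{example} For $\ell=3$ and $\mathbf{s}=(s_1,s_2,s_3)$, we have $-\mathbf{s}_\mathrm{rev}=(-s_3,-s_2,-s_1)$. For instance, for $\mathbf{s}=(0,1,4)$, the combinatorial Ringel duality is an isomorphism $\mathsf{d}:\mathcal{F}_{e,(0,1,4)}\rightarrow\mathcal{F}_{e,(-4,-1,0)}$. \end{example}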
\begin{corollary}\label{twistedchapeauchapelle} For all $i=0,\ldots,e-1$ and all $j=0,\ldots,\ell-1$ we have $$\mathsf{d}\tilde{f}_i=\tilde{f}_{-i}\mathsf{d}\qquad\hbox{ and }\qquad \mathsf{d}\tilde{\dot{f}}_j=\tilde{\dot{f}}_{-j}\mathsf{d}.$$ \end{corollary} \begin{proof} This is straightforward from Lemmas \ref{lem_sleRingel} and \ref{lem_sllRingel}, because $\mathsf{d}=\ast\circ\mathsf{r}$ and $\ast$ satisfies $\tilde{f}_{-i}\circ\ast=\ast\circ \tilde{f}_i$ and $\tilde{\dot{f}}_{-j}\circ\ast=\ast\circ\tilde{\dot{f}}_j$, as explained above. \end{proof} \begin{lemma}\label{twistedslinf} For all $\sigma\in\Pi$, we have $\mathsf{d}\tilde{a}_\sigma=\tilde{a}_{\sigma^\mathrm{tr}}\mathsf{d}$. \end{lemma} \begin{proof} Since $\mathsf{D}$ is perverse with respect to supports of simple modules, i.e. cuspidal depth, $\mathsf{d}$ is an involutive isomorphism of $\mathfrak{sl}_\infty$-crystals $\mathcal{F}_{e,\mathbf{s}}\rightarrow\mathcal{F}_{e,-\mathbf{s}_\mathrm{rev}}$. Thus either $\mathsf{d}\circ\tilde{a}_\sigma=\tilde{a}_\sigma\circ\mathsf{d}$ or $\mathsf{d}\circ\tilde{a}_\sigma=\tilde{a}_{\sigma^\mathrm{tr}}\circ\mathsf{d}$. By Corollary \ref{twistedchapeauchapelle}, $\mathsf{d}$ commutes with the two kinds of Kashiwara crystal operators up to a change of sign in the indices. Therefore, by Theorem \ref{3crystals} (2), it is enough to prove that the two maps agree on multipartitions of the form $|\bemptyset,\mathbf{s}\rangle$. Further, by \cite{JaconLecouvey2018}, $|\bemptyset,\mathbf{s}\rangle$ can be obtained by applying a sequence of combinatorial wall-crossings of type (b) to $|\bemptyset,\mathbf{s}'\rangle$ for some asymptotic parameter $(\kappa,\mathbf{s}')$. By \cite[Proposition 5.6 (1),(2)]{Losev2015}, the combinatorial wall-crossings of type (b) commute with the three kinds of crystal operators. Therefore, we can reduce the proof to the case where $(\kappa,\mathbf{s})$ is asymptotic.
Now, we have already seen that in this case, $\tilde{a}_\sigma$ acts only on one component of the empty multipartition, so we can reduce the proof to the case $\ell=1$. For the rest of the proof, consider the functors $\mathsf{D}$ and $\ast\circ\mathsf{WC}_{-\leftarrow +}$ when $\ell=1$, so that we are considering functors on the category $\mathcal{O}_\kappa(\mathfrak{S}_n)$ (as $G(1,1,n)=\mathfrak{S}_n$). By \cite[Corollary 5.13]{Rouquier2008}, if $r>0$ is coprime to $e$ then $\mathcal{O}_{-r/e}(\mathfrak{S}_n)$ and $\mathcal{O}_{-1/e}(\mathfrak{S}_n)$ are equivalent as highest weight categories. Moreover, the equivalence sends $\Delta(\lambda)$ to $\Delta(\lambda)$, as follows from \cite[Corollary 6.10]{GGOR2003} together with the fact that the Hecke algebra at a primitive $e$-th root of unity depends only on $e$, not on the root of unity \cite{BrundanKleshchev}. Thus, with a slight abuse of notation, the composition $\ast\circ\mathsf{WC}_{-\leftarrow +}$ is well-defined. We claim that for $\ell=1$, the combinatorial Ringel duality $\mathsf{d}$ coincides with the transpose of the combinatorial wall-crossing. If $\kappa>0$ and $\kappa'<0$ and $\mathsf{WC}_{-\leftarrow +}:D^b(\mathcal{O}_\kappa(\mathfrak{S}_n))\stackrel{\sim}{\rightarrow} D^b(\mathcal{O}_{\kappa'}(\mathfrak{S}_n))$ is the wall-crossing functor across the (unique) essential wall defined by the equation $\kappa=0$, then the combinatorial wall-crossing $\mathsf{wc}_{-\leftarrow +}$ is given by $\mathsf{wc}_{-\leftarrow +}(\lambda)=(M_e(\lambda))^{\mathrm{tr}}$ for all $\lambda\in\Pi(n)$ \cite[Corollary 5.7]{Losev2015}. The functors $\mathsf{R}$ and $\mathsf{WC}_{-\leftarrow +}$ are both perverse equivalences from $\mathcal{O}_\kappa(\mathfrak{S}_n)$ with the same perversity function, namely the cuspidal depth of $L(\lambda)$.
Thus $\mathsf{D}=\ast\circ\mathsf{R}$ and $\ast\circ\mathsf{WC}_{-\leftarrow +}$ are perverse self-equivalences of $\mathcal{O}_\kappa(\mathfrak{S}_n)$ with the same perversity function \cite[Lemma 2.5, Theorem 6.1]{Losev2017}. By \cite[Proposition 4.17]{ChuangRouquier2017}, if $\mathsf{F}:D^b(\mathcal{C})\stackrel{\sim}{\rightarrow}D^b(\mathcal{D})$ and $\mathsf{G}: D^b(\mathcal{C})\stackrel{\sim}{\rightarrow}D^b(\mathcal{E})$ are two perverse equivalences with the same perversity function, then $\mathsf{G}\circ\mathsf{F}^{-1}:\mathcal{D}\stackrel{\sim}{\rightarrow} \mathcal{E}$ is an equivalence of abelian categories. In particular, this implies that $\mathsf{R}(\mathcal{O}_\kappa(\mathfrak{S}_n))\simeq \mathsf{WC}_{-\leftarrow +}(\mathcal{O}_\kappa(\mathfrak{S}_n))$. Set $\mathsf{A}:=\left(\ast\circ\mathsf{WC}_{-\leftarrow +} \right)\circ\mathsf{D}^{-1}:\mathcal{O}_{\kappa}(\mathfrak{S}_n)\stackrel{\sim}{\rightarrow}\mathcal{O}_{\kappa}(\mathfrak{S}_n)$. On $K_0$, it holds that $[\mathsf{D}(\Delta(\lambda))]=[\left(\ast\circ\mathsf{R}\right)(\Delta(\lambda))]=[\left(\ast\circ\mathsf{WC}_{-\leftarrow +}\right)(\Delta(\lambda))]=[\Delta(\lambda^{\mathrm{tr}})]$ for any $\lambda\in\Pi(n)$ by \cite[Proposition 3.3, Corollary 4.8, Proposition 4.10]{GGOR2003} and \cite[Proposition 3.2]{Losev2015}. Thus $[\mathsf{A}(\Delta(\lambda))]=[\Delta(\lambda)]$ for all $\lambda\in\Pi(n)$. Moreover, it follows from the definitions of $\mathsf{R}$ and $\mathsf{WC}_{-\leftarrow +}$ that $\mathsf{A}$ sends standardly filtered objects to standardly filtered objects. Therefore $\mathsf{A}(\Delta(\lambda))=\Delta(\lambda)$, from which it follows that $\mathsf{A}(L(\lambda))=L(\lambda)$. We conclude that $\mathsf{d}=(-)^{\mathrm{tr}}\circ\mathsf{wc}_{-\leftarrow +}=M_e$, completing the proof. \end{proof} Now recall the generalized Mullineux involution $\Phi:\mathcal{F}_{e,\mathbf{s}}\rightarrow\mathcal{F}_{e,-\mathbf{s}_\mathrm{rev}}$ from Theorem \ref{genmul}.
\begin{theorem}\label{ringelmul} We have $\mathsf{d}=\Phi.$ \end{theorem} \begin{proof} Corollary \ref{twistedchapeauchapelle} and Lemma \ref{twistedslinf} combined imply that $\mathsf{d}$ and $\Phi$ satisfy the same commutation relations with the operators for the $\mathfrak{sl}_\infty$-, $\widehat{\mathfrak{sl}_e}$-, and $\widehat{\mathfrak{sl}_\ell}$-crystals. Moreover, as seen in the proof of Lemma \ref{twistedslinf}, $\mathsf{d}$ maps $|\bemptyset,\mathbf{s}\rangle$ to $|\bemptyset,-\mathbf{s}_\mathrm{rev}\rangle$. By Theorem \ref{genmul} (1), $\Phi$ is the unique involution with these properties, so $\mathsf{d}=\Phi$. \end{proof} Recall that the finite-dimensional modules over a rational Cherednik algebra $H_c(W)$ coincide with the cuspidal modules, i.e. those sent to $0$ by every parabolic restriction functor with respect to a proper parabolic subgroup of $W$ \cite{BezrukavnikovEtingof}. \begin{corollary}\label{level2} Suppose $L(\boldsymbol{\la})$ is a finite-dimensional irreducible representation of the rational Cherednik algebra of type $B_n=G(2,1,n)$ for parameters corresponding to Fock space charge $\mathbf{s}=(s_1,s_2)\in\mathbb{Z}^2$. Then $\Phi(\boldsymbol{\la})=\boldsymbol{\la}$, and $\mathsf{D}\left(L(\boldsymbol{\la})\right)=L(\boldsymbol{\la}),$ where we identify $L(\boldsymbol{\la})$ with the complex concentrated in degree $0$. \end{corollary} \begin{proof} First, we make some remarks which hold for arbitrary $\ell$. By Theorem \ref{crystalcusp}, $L(\boldsymbol{\la})$ is finite-dimensional if and only if $|\boldsymbol{\la},\mathbf{s}\rangle$ has depth $0$ in both the $\mathfrak{sl}_\infty$- and $\widehat{\mathfrak{sl}_e}$-crystals. Recall that $\mathsf{D}$ is perverse with respect to filtration by the depth in the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals, i.e. cuspidal depth.
Since $\mathsf{D}$ is not just a derived equivalence but a perverse equivalence, by Definition \ref{def:perverse} it holds that $\mathsf{D}$ induces an abelian equivalence on each associated graded layer of $\mathcal{O}_{\kappa,\mathbf{s}}(n)$ with respect to the filtration by supports of simple modules. In particular, $\mathsf{D}$ restricts to an equivalence of abelian categories on the bottom filtration layer of $\mathcal{O}_{\kappa,\mathbf{s}}(n)$, and this subcategory coincides with the subcategory of cuspidal, that is finite-dimensional, modules. Thus if $\boldsymbol{\la}$ has depth $0$ in both the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals, $\mathsf{D}$ sends $L(\boldsymbol{\la})$ to some $L(\boldsymbol{\mu})$ where $\boldsymbol{\mu}$ again has depth $0$ in both the $\widehat{\mathfrak{sl}_e}$- and $\mathfrak{sl}_\infty$-crystals. Here, we identify $L(\boldsymbol{\la})$ with a complex concentrated in degree $0$, and likewise for $L(\boldsymbol{\mu})$. In this case, $\mathsf{D}(L(\boldsymbol{\la}))=L(\Phi(\boldsymbol{\la}))$ by Theorem \ref{ringelmul}. Now we specialize to the case $\ell=2$. When $\ell=2$, the map $\Phi$ fixes the $\widehat{\mathfrak{sl}_\ell}$-crystal operators $\tilde{\dot{f}}_j$ since $j\equiv -j \bmod 2$. Moreover, $-\mathbf{s}_\mathrm{rev}=(-s_2,-s_1) = (s_1,s_2) - (s_2+s_1,s_2+s_1)$, i.e. $-\mathbf{s}_\mathrm{rev}$ is just an integer shift of $\mathbf{s}$. Recall the remark at the beginning of \S\ref{reps of cyclo RCA} that charges differing by an integer shift define the same rational Cherednik algebra. Thus we may identify the Cherednik algebras defined by $(\kappa,\mathbf{s})$ and $(\kappa,-\mathbf{s}_\mathrm{rev})$, and hence we may identify their categories $\mathcal{O}$.
Now, it is straightforward to see that the action of $\tilde{\dot{f}}_j$ on an element of $\Pi^\ell_\mathbf{s}$ is not affected by an integer shift of the charge, more precisely $$\tilde{\dot{f}}_j|\boldsymbol{\nu},\mathbf{t}\rangle =: |\boldsymbol{\nu}',\mathbf{t}'\rangle \,\, \Rightarrow \,\, \tilde{\dot{f}}_j|\boldsymbol{\nu},\mathbf{t}+k\rangle = |\boldsymbol{\nu}',\mathbf{t}'+k\rangle$$ for all $\boldsymbol{\nu}\in\Pi^\ell$, $\mathbf{t}\in\mathbb{Z}^\ell(s)$ and $k\in\mathbb{Z}$. In particular, we get $\Phi(|\boldsymbol{\la},\mathbf{s}\rangle)=|\boldsymbol{\la},-\mathbf{s}_\mathrm{rev}\rangle$, which proves the claim. \end{proof} \begin{remark} Corollary \ref{level2} implies that Ringel duality can be seen as a self-equivalence of $\mathcal{O}_{e,\mathbf{s}}(B_n)$ which fixes cuspidal (i.e. finite-dimensional) modules, and we recover \cite[Remark 4.12]{GGOR2003}. \end{remark} \begin{example} Take $e=3$, $\mathbf{s}=(-1,3)$ and $\boldsymbol{\la}=(3.3,\emptyset)$. Then \begin{align*} \beta (|\boldsymbol{\la},\mathbf{s}\rangle) & = ( \,\, |(\emptyset,\emptyset),(0,2)\rangle\,\, , \,\, \emptyset \,\, , \,\, |(2,2,1),(-1,-1,0)\rangle \,\, ) \\ & = ( \,\, |(\emptyset,\emptyset),(0,2)\rangle\,\, , \,\, \emptyset \,\, , \,\, \tilde{\dot{f}}_0\tilde{\dot{f}}_0\tilde{\dot{f}}_0\tilde{\dot{f}}_1\tilde{\dot{f}}_1 \, |(\emptyset,\emptyset,\emptyset),(-1,-1,0)\rangle \,\, ), \end{align*} so that $L(\boldsymbol{\la})$ is finite-dimensional in $\mathcal{O}_{1/e,\mathbf{s}}(6)$. 
We have \begin{align*} \Phi(|\boldsymbol{\la},\mathbf{s}\rangle) & = \beta^{-1} ( \,\, |(\emptyset,\emptyset),(-2,0)\rangle\,\, , \,\, \emptyset \,\, , \,\, \tilde{\dot{f}}_0\tilde{\dot{f}}_0\tilde{\dot{f}}_0\tilde{\dot{f}}_1\tilde{\dot{f}}_1 \, |(\emptyset,\emptyset,\emptyset),(0,1,1)\rangle \,\, ) \\ & = \beta^{-1} ( \,\, |(\emptyset,\emptyset),(-2,0)\rangle\,\, , \,\, \emptyset \,\, , \,\, |(1,2,2),(0,1,1)\rangle \,\, ) \\ & = |(3.3,\emptyset),(-3,1)\rangle \\ & = |\boldsymbol{\la},-\mathbf{s}_\mathrm{rev}\rangle. \end{align*} \end{example} \section{Other crystal structures and generalization of Mullineux involutions}\label{perspectives} We now explain a possible generalization of the Mullineux involution defined in \cite{DudasJacon2018} that uses the results of Section \ref{section_genmul}. We already know that we have three different crystal structures on level $\ell$ Fock spaces which are pairwise commuting, namely the $\widehat{\mathfrak{sl}_e}$-, $\mathfrak{sl}_\infty$- and $\widehat{\mathfrak{sl}_\ell}$-crystal. We now slightly generalize this by introducing an additional parameter $d\in \mathbb{Z}_{\geq0}$ (which plays the role of the parameter $\ell$ in \cite{DudasJacon2018} in the level one, that is type $A$, situation). To do this, recall that if $\lambda\in \Pi$, we can uniquely write the decomposition: $$\lambda=\lambda_{(0)}+\lambda_{(1)}^d+\ldots +\lambda_{(n)}^{d^n}$$ for $n\in \mathbb{Z}_{\geq0}$ and where each $\lambda_{(i)}$ is $d$-regular. Let $j\in \mathbb{Z}_{>0}$. One can define an action of a Kashiwara operator ${\tilde{f}}_{k,j}$ (for $k=0,\ldots,d-1$) as follows: $${\tilde{f}}_{k,j}.\lambda=\mu \quad \text{ for all } \lambda\in\Pi$$ where $\mu_{(t)}=\lambda_{(t)}$ when $t\neq j$ and $\mu_{(j)}={\tilde{f}}_k \lambda_{(j)}$ (where ${\tilde{f}}_k$ denotes the usual Kashiwara operator acting on $d$-regular partitions). 
Using the decomposition in Theorem \ref{3crystals}, we get an action of Kashiwara operators on the whole Fock space as follows. Let $|\boldsymbol{\la},\mathbf{s}\rangle \in \Pi^\ell_s$ and write $$\beta (|\boldsymbol{\la}, \mathbf{s}\rangle )=(|\boldsymbol{\la}_1,\mathbf{r}\rangle ,\sigma,|\boldsymbol{\la}_2,\dot{\br} \rangle ).$$ Then, for all $j\in \mathbb{Z}_{>0}$: $${\tilde{f}}_{k,j}. |\boldsymbol{\la}, \mathbf{s}\rangle =\beta^{-1}(|\boldsymbol{\la}_1,\mathbf{r}\rangle,{\tilde{f}}_{k,j}.\sigma,|\boldsymbol{\la}_2,\dot{\br}\rangle).$$ For each $j\in \mathbb{Z}_{>0}$, we thus get an $\widehat{\mathfrak{sl}_d}$-crystal. It is immediate that these actions also commute with the $\widehat{\mathfrak{sl}_e}$-crystal and the $\widehat{\mathfrak{sl}_\ell}$-crystal (this follows directly from the existence of the bijection $\beta$). Finally, there is an obvious analogue of the Mullineux involution for this decomposition, which depends on $d$. Namely, for $|\boldsymbol{\la},\mathbf{s}\rangle \in \Pi^\ell_s$ and $\beta(|\boldsymbol{\la},\mathbf{s}\rangle)=(|\boldsymbol{\la}_1,\mathbf{r}\rangle ,\sigma,|\boldsymbol{\la}_2,\dot{\br} \rangle )$, we define: $$\Phi^{(d)} (|\boldsymbol{\la},\mathbf{s}\rangle )= \beta^{-1} (m_{e,\mathbf{r}}(\boldsymbol{\la}_1) , m_e(\sigma_{(0)})+m_e(\sigma_{(1)})^d+\ldots +m_e(\sigma_{(n)})^{d^n} ,m_{\ell,\dot{\br}}(\boldsymbol{\la}_2) ),$$ so that $\Phi^{(d)}$ simultaneously generalizes $\Phi$ and the version of the Mullineux involution of \cite{DudasJacon2018} (which we recover by taking $\ell=1$). As in \cite{DudasJacon2018}, we believe it would be interesting to look at the case $\ell=2$ and investigate the relationship between $\Phi^{(d)}$ on the one hand, and the Alvis-Curtis duality for unipotent representations of finite unitary groups in transverse characteristic $d$ on the other hand (or more generally of finite groups of Lie type $B$ and $C$). \bibliographystyle{plain}
\section{Introduction} The method of maximum likelihood estimation (MLE) was introduced by R.A. Fisher in the early 20$^\text{th}$ century as a way to estimate the parameters associated with an observed quantity based on some statistical model \cite{Fisher_1922,Fisher_1925,Fisher_1935}. Since then, it has been used in wide-ranging applications in the physical and social sciences \cite{Refregier_2003,Gailmard_2014,King_1998,Ly_2017}. This tutorial concentrates on its application to the measurement of an optical intensity distribution $I({\bf x};{\bf p})$ that depends on some vector of unknown physical parameters ${\bf p}=(p_1,\ldots,p_N)$, for example, the physical dimensions or refractive index of an unknown substrate. These parameters can take a continuous range of values, and in general they might each have different units. In this context, the goal of MLE is to determine the most likely value of ${\bf p}$ from a measurement of $I$. The spatial variable ${\bf x}$ is typically a two-dimensional coordinate in the plane perpendicular to the direction of light propagation, although in some instances it may be replaced by a one-dimensional (1D) coordinate $x$. The treatment shown in this discussion emphasizes the information gained from the shape of $I$ (i.e., its dependence on ${\bf x}$) without regard for the overall intensity (i.e., the total power incident on the detector). One advantage of this approach is that the accuracy of the parameter estimate is not influenced by power fluctuations of the light source, which would otherwise be especially problematic when operating under low-light conditions, as discussed further in Section \ref{sect:MLE_example3}. Useful in-depth tutorials on MLE and the related topic of Fisher information can be found in Refs.~\cite{Ly_2017,Myung_2003}. The key concepts are summarized in Section \ref{sect:MLE_overview} for the case of a discrete random variable that depends on one or more parameters $p_n$. 
This situation applies directly to most real-world optical measurements, in which the detector is divided into a discrete pixel array, implying that a measurement consisting of a finite number of photon detections has a finite number of possible outcomes. A mathematical description of this scenario is derived explicitly in Section \ref{sect:MLE_optics}. For context and further insight, the results are then compared in Section \ref{sect:Bayesian_statistics} to the Bayesian statistical approach employed in Ref.~\cite{Ramkhalawon_2013}. Lastly, Sections \ref{sect:MLE_examples_1param} and \ref{sect:MLE_examples_2param} contain a number of simple one- and two-parameter examples illustrating the procedure of MLE for optical measurements, as well as the role of Fisher information in evaluating and optimizing the accuracy of an experiment. The Mathematica code for these calculations is provided in the appendix. \begin{sloppypar} The theory developed in Sections \ref{sect:MLE_overview} through \ref{sect:Bayesian_statistics} is presented for the multiple-parameter case (vector-valued ${\bf p}$), which can trivially be reduced to the single-parameter case when needed (as in Section \ref{sect:MLE_examples_1param}). The key results of this tutorial are those established in Section \ref{sect:MLE_optics}. \end{sloppypar} \section{Overview of MLE: likelihood, Fisher information, and the Cram\'er-Rao bound}\label{sect:MLE_overview} Before discussing its application to an optical measurement, in this section the basic concepts of MLE are reviewed in a general context. Consider a discrete random variable $Y$, and let $P(y|{\bf p})$ denote the probability mass function (PMF) specifying the conditional probability of the outcome $Y=y$ given some vector of parameters ${\bf p}$. The PMF is normalized such that \begin{equation} \sum_{y\in\mathcal{Y}}P(y|{\bf p}) = 1,\label{eq:MLE_pmf_normalization} \end{equation} where $\mathcal{Y}$ is the set of all possible outcomes of $Y$. 
It should be emphasized that the PMF is interpreted as a function of $y$. That is, given a fixed value of ${\bf p}$, the function $P(y|{\bf p})$ provides the probability of each possible outcome $y$. In a typical measurement, however, we require just the opposite: given an observed value of $y$, we wish to determine the value of ${\bf p}$ that is most likely to have produced the measured outcome. This inverse problem is solved by introducing the likelihood function, defined as\footnote{Often, the likelihood is used to describe a set of measurements $\mathcal{S}=(y_1,y_2,\ldots)$, in which case it could be denoted as $L(\mathcal{S}|{\bf p})$. In this discussion, the notation $L({\bf p}|y)$ is used with the understanding that $y$ could represent either a single measurement or an ensemble of measurements, e.g., an optical intensity distribution, which is a collection of many individual photon detection events.}\linebreak[3] $L({\bf p}|y)=P(y|{\bf p})$. Although the likelihood function and the PMF appear to be mathematically identical (and indeed they are in their unevaluated symbolic forms), they actually have quite different meanings. In contrast to the PMF, the likelihood function is regarded as a continuous function of ${\bf p}$ for some fixed value of $y$. It is not subject to any normalization condition over ${\bf p}$. Given an observation $Y\hspace{-1pt}=\hspace{-1pt} y$,\linebreak[4] $L({\bf p}|y)$ represents the likelihood (relative probability) of a vector ${\bf p}$ of candidate parameter values. Accordingly, the maximum likelihood estimate (also abbreviated as MLE) for the unknown parameter values is obtained by determining the value of ${\bf p}$ that maximizes $L({\bf p}|y)$. For computational convenience, the log-likelihood function $\ell({\bf p}|y)=\ln L({\bf p}|y)$ is often equivalently maximized instead. 
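As a concrete numerical illustration of likelihood maximization, consider the following short Python sketch. (This is our own toy example, separate from the Mathematica code in the appendix; the two-outcome Bernoulli model and all function names are invented for illustration.) It locates the maximum of $\ell({\bf p}|y)$ for a scalar parameter by a simple grid search:

```python
import numpy as np

def log_likelihood(p, counts, pmf):
    """ell(p | data): sum over outcomes y of n_y * ln P(y|p)."""
    return float(np.sum(counts * np.log(pmf(p))))

def mle_grid(counts, pmf, p_grid):
    """Return the grid point that maximizes the log-likelihood."""
    values = [log_likelihood(p, counts, pmf) for p in p_grid]
    return p_grid[int(np.argmax(values))]

# Toy two-outcome (Bernoulli) model: P(0|p) = 1 - p, P(1|p) = p.
bernoulli = lambda p: np.array([1.0 - p, p])
counts = np.array([30, 70])                    # 30 "tails", 70 "heads" observed
p_hat = mle_grid(counts, bernoulli, np.linspace(0.01, 0.99, 99))
# p_hat ≈ 0.70, matching the closed-form estimate 70 / (30 + 70)
```

For smooth likelihoods with more parameters, the grid search would be replaced by a numerical optimizer, but the principle, maximizing $\ell({\bf p}|y)$ over candidate values of ${\bf p}$, is the same.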
Next, consider the related problems of (1) evaluating the uncertainty of a maximum likelihood estimate and (2) designing an experiment for optimal sensitivity. These problems both pertain to the Fisher information, which quantifies the amount of information about ${\bf p}$ that is contained within a measurement of $Y$. For the case of $N$ parameters, the Fisher information matrix (FIM) $\mathbb{J}({\bf p})$ is defined as the $N\times N$ symmetric, positive semi-definite matrix with elements% \begin{subequations}\label{eq:Fisher_1stders} \begin{align} [\mathbb{J}({\bf p})]_{mn} &= \mathrm{E}\left[ \left(\mfrac{\partial}{\partial p_m}\ell({\bf p}|y)\right)\!\hspace{-1pt} \left(\mfrac{\partial}{\partial p_n}\ell({\bf p}|y)\right)\right] \label{eq:Fisher_1stders_E}\\[2pt] &= \sum_{y\in\mathcal{Y}} \left(\mfrac{\partial}{\partial p_m}\ell({\bf p}|y)\right)\!\hspace{-1pt} \left(\mfrac{\partial}{\partial p_n}\ell({\bf p}|y)\right)\! L({\bf p}|y),\label{eq:Fisher_1stders_sum} \end{align} \end{subequations} where $\mathrm{E}$ denotes the expectation value over $\mathcal{Y}$. Under mild regularity conditions \cite{Rao_2017}, the FIM is equivalently defined as\footnote{To prove this result, one can expand the derivatives in Eq.~(\ref{eq:Fisher_2ndder_sum}) using the chain rule and product rule. This produces the RHS of Eq.~(\ref{eq:Fisher_1stders_sum}) plus an additional term $-\sum_{y\in\mathcal{Y}}\frac{\partial^2}{\partial p_m\partial p_n}L({\bf p}|y)=-\frac{\partial^2}{\partial p_m\partial p_n}\sum_{y\in\mathcal{Y}}L({\bf p}|y)$. By Eq.~(\ref{eq:MLE_pmf_normalization}), the sum over $L({\bf p}|y)$ is equal to 1, so its derivative is zero. The ``regularity conditions'' for this proof essentially require that $L({\bf p}|y)$ is twice differentiable and that the order of summation and differentiation can be swapped. 
In practice, these conditions are met in all but the most pathological cases.}% \begin{subequations}\label{eq:Fisher_2ndder} \begin{align} [\mathbb{J}({\bf p})]_{mn} &= -\mathrm{E}\left[\mfrac{\partial^2}{\partial p_m\partial p_n}\ell({\bf p}|y)\right] \label{eq:Fisher_2ndder_E}\\[2pt] &= -\sum_{y\in\mathcal{Y}} \left(\textstyle\mfrac{\partial^2}{\partial p_m\partial p_n}\ell({\bf p}|y)\right)\! L({\bf p}|y).\label{eq:Fisher_2ndder_sum} \end{align} \end{subequations} Since $\mathbb{J}({\bf p})$ represents the information contained in a single observation of the random variable $Y$, it is sometimes called the \emph{unit} Fisher information. If the measurement is repeated for $T$ independent trials, it can be shown that the total information obtained is $T\,\mathbb{J}({\bf p})$. Note that while the Fisher information is a function of the true parameter values ${\bf p}$, it is independent of $y$. This indicates that $\mathbb{J}({\bf p})$ is not a property of an individual measurement, but rather of the measurement scheme (and its expected outcome). For this reason, $\mathbb{J}({\bf p})$ is often referred to as the \emph{expected} Fisher information. Some texts also define the \emph{observed} Fisher information $\mathbb{J}^\text{(obs)}({\bf p};y)$ associated with a particular measured outcome $y$ by dropping the expectation values from Eqs.~(\ref{eq:Fisher_1stders_E}) and (\ref{eq:Fisher_2ndder_E}) and evaluating at the maximum likelihood estimate for ${\bf p}$. There has been debate regarding the conditions under which it is more appropriate to use the observed or expected Fisher information \cite{Efron_1978,Cao_2013}. In the asymptotic limit of a large number of observations, it can be shown that the two definitions are equivalent \cite{Newey_1994}. The statistical significance of the FIM is that its inverse $\mathbb{J}^{-1}({\bf p})$ places a lower limit on the covariance matrix $\mathbb{C}({\bf p})$ for a maximum likelihood estimate of ${\bf p}$. 
More precisely, for any unbiased estimator\footnote{In general, the MLE can be biased. However, it is asymptotically unbiased for a sufficiently large sample size \cite{Naftali_2001}. The form of the Cram\'er-Rao bound given in Eq.~(\ref{eq:Cramer-Rao}) only applies when the MLE is unbiased.}, the Cram\'er-Rao bound \cite{Refregier_2003} states that the matrix $\mathbb{C}-\mathbb{J}^{-1}$ must be positive semi-definite, i.e., for any vector ${\bf v}$, \begin{equation} {\bf v}^{\rm T}\mathbb{C}\,{\bf v} \geq {\bf v}^{\rm T}\mathbb{J}^{-1}{\bf v}. \label{eq:Cramer-Rao} \end{equation} The diagonal elements $[\mathbb{J}^{-1}]_{nn}$ provide the minimum variance of each parameter $p_n$, while the off-diagonal elements $[\mathbb{J}^{-1}]_{mn}$ (where $m\neq n$) represent the expected covariances between parameters $p_m$ and $p_n$. The uncertainty of the measurement can be visualized as an ellipsoid in $N$-dimensional parameter space (centered at the MLE) representing the standard deviation confidence interval. The principal axis orientations of the ellipsoid are given by the eigenvectors of $\mathbb{J}^{-1}$, and the semi-axis lengths are the square roots of the corresponding eigenvalues \cite{Friendly_2013}. Four examples are illustrated in Table~\ref{tbl:ellipse_examples} for the case of a two-parameter measurement in which the true parameter values for $p_1$ and $p_2$ are both zero. Since $\mathbb{J}^{-1}$ is a function of ${\bf p}$, in general the size and shape of the error ellipsoid also vary over the parameter space. This dependence can be visualized for the two-parameter case (or a 2D slice of a higher-dimensional parameter space) by plotting a grid of ellipses over a selection of parameter values, as seen in Section \ref{sect:MLE_examples_2param} and in Ref.~\cite{Vella_2018_fbs_arxiv}. 
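The eigendecomposition described above is straightforward to carry out numerically. The following Python fragment (an illustrative sketch of our own, not part of the Mathematica code in the appendix) extracts the semi-axis lengths and principal directions of the standard-deviation error ellipse from a given $\mathbb{J}^{-1}$, using the third matrix of Table~\ref{tbl:ellipse_examples} as input:

```python
import numpy as np

def error_ellipse(J_inv):
    """Semi-axis lengths and principal directions of the error ellipsoid.
    The semi-axes are the square roots of the eigenvalues of J^{-1};
    the principal directions are the corresponding eigenvectors."""
    eigvals, eigvecs = np.linalg.eigh(J_inv)   # eigh: J_inv is symmetric
    return np.sqrt(eigvals), eigvecs

J_inv = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
semi_axes, directions = error_ellipse(J_inv)
# Eigenvalues are 0.5 and 1.5 (eigh returns them in ascending order),
# so the semi-axes are sqrt(0.5) and sqrt(1.5), along (1,-1) and (1,1).
```

Repeating this calculation over a grid of parameter values yields the grids of ellipses mentioned above.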
\begin{table} \begin{center} \begin{tabular}{M{1in}M{1.1in}M{1.5in}M{1.7in}} \toprule $\mathbb{J}^{-1}$ & Eigenvalues & Eigenvectors & Error ellipse \\ \midrule $\left[\!\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\!\right]$ & 1, 1 & $\left[\!\begin{array}{r} 1 \\ 0 \end{array}\!\right]$\hspace{-1pt}, $\left[\!\begin{array}{r} 0 \\ 1 \end{array}\!\right]$ & \includegraphics[width=1.7in]{Figures/MLE/Ellipse_demo/ell1.pdf} \\ $\left[\!\begin{array}{cc} 1 & 0 \\ 0 & 0.2 \end{array}\!\right]$ & 1, 0.2 & $\left[\!\begin{array}{r} 1 \\ 0 \end{array}\!\right]$\hspace{-1pt}, $\left[\!\begin{array}{r} 0 \\ 1 \end{array}\!\right]$ & \includegraphics[width=1.7in]{Figures/MLE/Ellipse_demo/ell2.pdf} \\ $\left[\!\begin{array}{cc} 1 & 0.5 \\ 0.5 & 1 \end{array}\!\right]$ & 1.5, 0.5 & $\left[\!\begin{array}{r} 0.71 \\ 0.71 \end{array}\!\right]$\hspace{-1pt}, $\left[\!\!\begin{array}{r} 0.71 \\ \scalebox{0.85}[1.0]{$-$} 0.71 \end{array}\!\right]$ & \includegraphics[width=1.7in]{Figures/MLE/Ellipse_demo/ell3.pdf} \\ $\left[\!\!\begin{array}{cc} \,0.2 & \scalebox{0.85}[1.0]{$-$} 0.5 \\ \scalebox{0.85}[1.0]{$-$} 0.5 & \,2 \end{array}\!\right]$ & 2.13, 0.07 & $\left[\!\!\begin{array}{r} \scalebox{0.85}[1.0]{$-$} 0.25 \\ 0.97 \end{array}\!\right]$\hspace{-1pt}, $\left[\!\begin{array}{r} 0.97 \\ 0.25 \end{array}\!\right]$ & \includegraphics[width=1.7in]{Figures/MLE/Ellipse_demo/ell4.pdf}\\ \bottomrule \end{tabular} \caption{Plots of the error ellipses associated with four different $2\times 2$ Fisher information matrices. The square roots of the eigenvalues of $\mathbb{J}^{-1}$ determine the semi-axis lengths of the ellipse, i.e., the dimensions of the bounding rectangle, while the eigenvectors determine the orientation. The blue points in each plot represent the estimated parameters from 250 observations of the random variable $Y$ (assuming a bivariate normal distribution) given true parameter values $p_1=p_2=0$. 
In these examples, $p_1$ and $p_2$ are taken to be unitless, and they are plotted over the range $-3\leq p_1,p_2\leq 3$.} \label{tbl:ellipse_examples} \end{center} \end{table} In summary, the Cram\'er-Rao lower bound can be used to assess the minimum expected error of a maximum likelihood estimate based on the inverse of the expected Fisher information matrix for the measurement. In a similar manner, the FIM can be used to predict and optimize the accuracy of an experiment before any measurements are taken. This is done by optimizing a suitable merit function (chosen based on the desired relative accuracies of each parameter) over the range of interest of ${\bf p}$. It is often convenient to reparametrize ${\bf p}$ to be unitless, such that the intervals $-1\leq p_n\leq 1$ (for $n=1,\ldots,N$) correspond to each physical parameter's range of interest.\footnote{One of the advantages of MLE is that it is invariant to the choice of parametrization \cite{Refregier_2003}.\label{footnote:MLE_parametrization} } Then one reasonable choice for the merit function (to be maximized) would be the product of the eigenvalues of $\mathbb{J}$, which is inversely proportional to the square of the area (for two parameters) or volume/hypervolume (for three or more parameters) of the error ellipsoid. Another option (to be minimized) is the RMS of the semi-axis lengths of the error ellipsoid, i.e., of the square roots of the eigenvalues of $\mathbb{J}^{-1}$, which is proportional to the diagonal length of the rectangle/box containing the ellipse/ellipsoid. This second merit function is used in Ref.~\cite{Vella_2018_fbs_arxiv} since it has a lower tendency to heavily prioritize the accuracy of one parameter at the expense of another. 
The functional form of $I({\bf x};{\bf p})$ (not to be confused with the measured intensity $\tilde{\bf I}$ defined below) is generally obtained from either a theoretical model, simulated data, experimental calibration data, or some combination thereof. Suppose that the detector is discretized into a finite number of pixels $i=1,2,\ldots$ centered at coordinates ${\bf x}_i$, and assume the pixels are sufficiently small so that $I({\bf x};{\bf p})$ is nearly constant over the area of one pixel. Then, given some vector of true parameter values ${\bf p}$, the probability that a single incident photon will hit the detector at pixel $i$ is prescribed by the normalized intensity distribution: \begin{equation} P(i|{\bf p}) = \frac{I({\bf x}_i;{\bf p})}{\sum_i I({\bf x}_i;{\bf p})},\label{eq:photon_pmf} \end{equation} where the sum is taken over all pixels.\footnote{This approximation for small pixels is acceptable for most applications involving sensors with dense pixel arrays. For large pixels, however, one should instead use the exact expression $P(i|{\bf p})=\lrangle{I}_i/\sum_i\lrangle{I}_i$, where $\lrangle{I}_i$ is the integral of $I({\bf x};{\bf p})$ over the area of pixel $i$. For experiments in which the expected intensity distribution is obtained from a set of calibration images (which themselves are discretized), Eq.~(\ref{eq:photon_pmf}) is an exact result.\label{footnote:P(i|p)_exact}} This equation represents the PMF for a single detected photon. Notice that in this context, the outcome of a measurement (denoted as $y$ in the previous section) is the pixel $i$ where a photon is detected. For a classical measurement, each photon detection can be considered as an independent event, so the probability of $M$ photons hitting pixels $i_1,\ldots,i_M$ is given by the product \begin{equation} P(i_1\cap\cdots\cap i_M|{\bf p}) = \prod_{m=1}^M P(i_m|{\bf p}). 
\end{equation} Now consider a measured intensity $\tilde{\bf I}=(\tilde{I}_1,\tilde{I}_2,\ldots)$, where $\tilde{I}_i$ is the number of photons detected at pixel $i$. Since the detector is indifferent to the order in which photons arrive (i.e., photons are indistinguishable), the probability of obtaining this distribution is \begin{equation} P(\tilde{\bf I}|{\bf p}) = P_0\, \prod_i P(i|{\bf p})^{\tilde{I}_i},\label{eq:P(tbI|p)} \end{equation} where the leading factor $P_0=(\sum_i \tilde{I}_i)!\, / \prod_i \tilde{I}_i!$ accounts for all possible permutations. When regarded as a function of ${\bf p}$, the right-hand side of Eq.~(\ref{eq:P(tbI|p)}) represents the likelihood function $L({\bf p}|\tilde{\bf I})$. The log-likelihood is therefore given by \begin{equation} \ell({\bf p}|\tilde{\bf I}) = \ln P_0 + \sum_i \tilde{I}_i \ln P(i|{\bf p}).\label{eq:ell(p|tbI)} \end{equation} Since $P_0$ is a constant, the maximum likelihood estimate for ${\bf p}$ is obtained by maximizing the sum in the second term of this expression. As described in Section \ref{sect:MLE_overview}, the inverse of the Fisher information matrix places a lower bound on the covariance matrix for this estimate. The expected FIM for a single photon can be calculated using Eq.~(\ref{eq:Fisher_1stders}) or (\ref{eq:Fisher_2ndder}), with $y$ replaced by the pixel index $i$ specifying the photon's location. For a measurement of $\mathcal{N}$ photons, the total information is\footnote{Here the FIM is written in terms of the PMF $P(i|{\bf p})$ to emphasize the dependence on the normalized intensity distribution. However, the likelihood function $L({\bf p}|i)$ associated with pixel $i$, which has the same functional form, could also be used. 
Also, note that in this analysis $\mathcal{N}$ is taken as an integer representing the actual number of measured photons (i.e., the number of photoelectrons registered by the detector), as opposed to the mean or expected number of photons over a particular time interval.} \begin{subequations}\label{eq:MLE_Fisher} \begin{align} [\sp\mathcal{N}\sp\mathbb{J}({\bf p})]_{mn} &= \mathcal{N} \sum_i P(i|{\bf p}) \left(\mfrac{\partial}{\partial p_m} \ln P(i|{\bf p})\right)\! \left(\mfrac{\partial}{\partial p_n} \ln P(i|{\bf p})\right)\label{eq:MLE_Fisher_1der}\\ &= -\mathcal{N} \sum_i P(i|{\bf p}) \left(\mfrac{\partial^2}{\partial p_n\partial p_m} \ln P(i|{\bf p})\right)\hspace{-1pt}. \end{align} \end{subequations} On the other hand, the observed FIM associated with a particular measurement $\tilde{\bf I}$ is obtained by summing the derivatives of $\ell({\bf p}|\tilde{\bf I})$ over all detected photons \begin{subequations}\label{eq:MLE_Fisher_obs} \begin{align} \sp[\mathbb{J}^\text{(obs)}({\bf p};\tilde{\bf I})]_{mn} &= \sum_i \tilde{I}_i \left(\mfrac{\partial}{\partial p_m} \ln P(i|{\bf p})\right)\! \left(\mfrac{\partial}{\partial p_n} \ln P(i|{\bf p})\right)\label{eq:MLE_Fisher_obs_1der}\\ &= - \sum_i \tilde{I}_i \left(\mfrac{\partial^2}{\partial p_n\partial p_m} \ln P(i|{\bf p})\right)\hspace{-1pt}. \end{align} \end{subequations} Since $\tilde{I}_i\approx\mathcal{N}P(i|{\bf p})$ when a large number of photons are measured, the expected and observed information converge in the limit as $\mathcal{N}\to\infty$, in agreement with the claim made in the previous section. In practice, they should yield nearly identical results in most applications, with the exception of extreme low-light measurements using single-photon detectors. In the above analysis, it has been implicitly assumed that the detector is capable of measuring any arbitrary number of photons incident on a pixel, i.e., that it can resolve individual photons. 
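The photon-counting log-likelihood and observed Fisher information above can be evaluated numerically with very little code. The Python sketch below is our own illustration (the three-pixel model is invented, and the derivative of $\ln P(i|{\bf p})$ is approximated by a central finite difference whose step size is a tuning choice):

```python
import numpy as np

def log_likelihood(counts, pmf, p):
    """Log-likelihood of the measured counts, up to the additive
    constant ln P0: sum over pixels i of I_i * ln P(i|p)."""
    return float(np.sum(counts * np.log(pmf(p))))

def observed_fisher(counts, pmf, p, h=1e-6):
    """Observed Fisher information for a scalar parameter:
    sum over pixels i of I_i * (d/dp ln P(i|p))^2, with the derivative
    approximated by a central finite difference of step h."""
    dlogP = (np.log(pmf(p + h)) - np.log(pmf(p - h))) / (2 * h)
    return float(np.sum(counts * dlogP**2))

# Toy three-pixel model with P(i|p) = ((1-p)/2, 1/2, p/2).
pmf = lambda p: np.array([(1.0 - p) / 2, 0.5, p / 2])
counts = 1000 * pmf(0.4)        # idealized noiseless 1000-photon measurement
J_obs = observed_fisher(counts, pmf, 0.4)
# For this model J(p) = 1/(2p(1-p)), so J_obs ≈ 1000/(2*0.4*0.6) ≈ 2083.
```

Because the counts here are exactly proportional to the PMF, the observed information equals $\mathcal{N}$ times the unit expected information, illustrating the convergence of the two quantities for large $\mathcal{N}$.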
However, most real detectors have a finite bit depth, meaning that they can only resolve some finite number of distinct intensity levels. For example, in an 8-bit sensor, each pixel has an integer readout value between 0 and 255. This discretization of pixel values is analogous to the discreteness of photons; therefore, in this situation, Eqs.~(\ref{eq:P(tbI|p)}) through (\ref{eq:MLE_Fisher_obs}) can be used with $\tilde{I}_i$ interpreted as the readout value of pixel $i$. In the absence of thermal noise or other sources of error, the equivalent ``photon count'' of the signal from a sensor with finite bit depth must be less than or equal to $\mathcal{N}$, the actual number of photons incident on the detector. As needed, the effective bit depth of the sensor can be increased by averaging the output signal over multiple exposures. This time-averaging has the added benefit of reducing the impact of electronic shot noise. \section{Comparison to Bayesian statistics}\label{sect:Bayesian_statistics} The method of MLE is considered a ``frequentist'' approach in the sense that it does not assign a probability distribution to the unknown parameter ${\bf p}$, but rather it estimates the value of ${\bf p}$ that is most consistent with the observed data. A popular alternative is the Bayesian approach, which is predicated on the calculation of a posterior probability density function (PDF) $P({\bf p}|\tilde{\bf I})$ describing the probability of every possible value of ${\bf p}$ given an observed intensity $\tilde{\bf I}$. In general, $P({\bf p}|\tilde{\bf I})$ depends on a prior distribution $P({\bf p})$ as well as the observed intensity. The prior distribution $P({\bf p})$ may be uniformly distributed (i.e., constant), or it may be used to introduce known (or assumed) information about ${\bf p}$ before the measurement takes place. 
For example, in the polarimetry experiment discussed in Ref.~\cite{Ramkhalawon_2013} (with ${\bf p}=(p_1,p_2,p_3)$ representing the normalized Stokes parameters), $P({\bf p})$ could be used to incorporate prior knowledge about the source's polarization. Another example is the focused beam scatterometry experiment discussed in Ref.~\cite{Vella_2018_fbs_arxiv}, in which it might be possible in some cases to assign a prior distribution $P({\bf p})$ based on the fabrication process of the sample under test. Using Bayes' theorem, the posterior PDF can be written as \begin{equation} P({\bf p}|\tilde{\bf I}) = \frac{P({\bf p})}{P(\tilde{\bf I})} P(\tilde{\bf I}|{\bf p}),\label{eq:P(p|tbI)} \end{equation} where the constant term in the denominator, given by \begin{equation} P(\tilde{\bf I}) = \int\hspace{-1pt} P({\bf p}) P(\tilde{\bf I}|{\bf p})\sp \mathrm{d}^Np, \end{equation} ensures the normalization condition $\int\hspace{-1pt} P({\bf p}|\tilde{\bf I})\sp\mathrm{d}^Np=1$. Substituting Eq.~(\ref{eq:P(tbI|p)}) into Eq.~(\ref{eq:P(p|tbI)}), one obtains \begin{subequations} \begin{align} P({\bf p}|\tilde{\bf I}) &= \frac{P({\bf p})}{P(\tilde{\bf I})}P_0 \prod_i P(i|{\bf p})^{\tilde{I}_i} \\[2pt] &= \frac{P({\bf p})}{P(\tilde{\bf I})}P_0 \exp\!\left(\hspace{-1pt}\sum_i\, \tilde{I}_i \ln P(i|{\bf p}) \hspace{-1pt}\right)\hspace{-1pt}. \end{align} \end{subequations} Notice that $P({\bf p}|\tilde{\bf I})$ is proportional to the prior distribution times the likelihood. If no prior information is assumed about ${\bf p}$ (as is the case for all examples discussed throughout this tutorial), then $P({\bf p})$ is constant and the peak of $P({\bf p}|\tilde{\bf I})$ coincides with the maximum likelihood estimate for ${\bf p}$. More generally, if $P({\bf p})$ is nonuniform, the two values converge in the limit as $\mathcal{N}\to\infty$, assuming that $P({\bf p})$ is smooth and nonzero near the true value of ${\bf p}$. 
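For a scalar parameter, the posterior can be evaluated directly on a grid. The sketch below (our own illustration, reusing an invented two-outcome model; all names are hypothetical) computes the posterior in log space for numerical stability and confirms that, with a flat prior, its peak coincides with the maximum likelihood estimate:

```python
import numpy as np

def posterior(counts, pmf, p_grid, prior=None):
    """Posterior probability of each grid value of p, proportional to
    prior(p) * likelihood(p), normalized to sum to 1 over the grid."""
    if prior is None:
        prior = np.ones_like(p_grid)            # flat (uninformative) prior
    log_lik = np.array([np.sum(counts * np.log(pmf(p))) for p in p_grid])
    post = prior * np.exp(log_lik - log_lik.max())   # subtract max: stability
    return post / post.sum()

bernoulli = lambda p: np.array([1.0 - p, p])
grid = np.linspace(0.01, 0.99, 99)
post = posterior(np.array([30, 70]), bernoulli, grid)
# With a flat prior the posterior peaks at p ≈ 0.7, the MLE.
```

Supplying a nonuniform `prior` array shifts the peak away from the MLE, as described in the text, with the two estimates converging as the photon count grows.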
As discussed in Ref.~\cite{Ramkhalawon_2013}, if the measurement is limited by photon noise (as opposed to other noise mechanisms or systematic errors) and $\mathcal{N}$ is large, then $P({\bf p}|\tilde{\bf I})$ is approximately a narrow, generally anisotropic Gaussian distribution that is maximized by the true parameter values ${\bf p}_0$: \begin{equation} P({\bf p}|\tilde{\bf I}) \propto \exp\!\left[-\tfrac{1}{2}({\bf p}-{\bf p}_0)^{\rm T}{\bm\Sigma}^{-1}({\bf p}-{\bf p}_0)\right]\!.\label{eq:MLE_P(p|I)_Gaussian} \end{equation} Here the covariance matrix ${\bm\Sigma}$ determines the shape and width of the distribution, and its inverse ${\bm\Sigma}^{-1}$ is the negative of the Hessian matrix of second derivatives of $\ln \hspace{-1pt} P(\tilde{\bf I}|{\bf p})$ evaluated at ${\bf p}_0$. Recalling the results of the previous sections, one can see that if $P({\bf p})$ is constant, then ${\bm\Sigma}^{-1}$ is equal to the observed FIM $\mathbb{J}^\text{(obs)}({\bf p}_0;\tilde{\bf I})$, and its expected value (taken over all possible outcomes for $\tilde{\bf I}$) is the expected FIM $\mathbb{J}({\bf p}_0)$. Intuitively, a measurement with high information content, for which the FIM is large and nearly diagonal, will result in a narrow posterior distribution $P({\bf p}|\tilde{\bf I})$, enabling a precise estimate of ${\bf p}$. Thus, even in a Bayesian framework, the maximum likelihood estimate and the Fisher information matrix can both be shown to have clear statistical meanings. \section{One-parameter optical MLE examples}\label{sect:MLE_examples_1param} This section contains a series of four simple thought experiments involving one-dimensional intensity distributions $I_j(x;p_1)$ (where $j=1,2,3,4$) that depend on a single parameter $p_1$. Without loss of generality, let us assume that $p_1$ is unitless and that its range of interest is $-1\leq p_1\leq 1$. 
(As noted on page \pageref{footnote:MLE_parametrization}, any physical parameter can be reparametrized in this way without affecting the MLE.) The one-dimensional coordinate $x$ is also taken to be unitless. In the examples that follow, the function \begin{equation} \Pi(x) = \begin{cases} I_0,&-1\leq x\leq 1,\\ 0 & \text{otherwise}, \end{cases} \end{equation} where $I_0$ represents some reference intensity level, is used as a normalization factor that also serves to limit each intensity distribution to the spatial extent of the sensor (as if the beam were truncated by a hard aperture). Each intensity distribution is normalized such that it reaches a maximum value of $I_0$ over the range of interest of $p_1$. Note, however, that this does not preclude the possibility of intensities greater than $I_0$ when $|p_1|>1$. For simplicity, suppose that the detector consists of a one-dimensional array of 9 pixels, with pixel $i$ centered at coordinate $x_i=(i-5)/4$, so that \begin{equation} (x_1,\ldots,x_9)=(-1,-0.75,-0.5,-0.25,0,0.25,0.5,0.75,1).\label{eq:MLE_examples_u_coords} \end{equation} According to Eq.~(\ref{eq:photon_pmf}), the probability of an incident photon hitting pixel $i$ is \begin{equation} P_j(i|p_1) = \frac{I_j(x_i;p_1)}{\sum_i I_j(x_i;p_1)}.\label{eq:P(i|p1)} \end{equation} As mentioned earlier, for such a sparse array of pixels, this is a relatively poor approximation since the intensity may vary significantly over the width of each pixel. However, since the approximation is reasonable for most real applications, it is used here for instructive purposes. If desired, the exact expression for $P_j(i|p_1)$ (which is provided in footnote \ref{footnote:P(i|p)_exact} following Eq.~(\ref{eq:photon_pmf})) could be substituted into the analysis with minimal modifications required. 
Similarly, while the concepts of Fisher information and the Cram\'er-Rao bound are usually applied to measurements consisting of many observations (photons), the calculations below are demonstrated for measurements of just a few photons and then extended to larger sample sizes. Also note that while the following examples all involve intensity distributions over a 1D spatial coordinate, the more general 2D case can be treated in the same manner by rearranging the numerical output of the detector's 2D pixel array into a 1D array during signal processing. The intensity distributions considered in each of the following sections are summarized in Table \ref{tbl:MLE_1param_intensity_dist}. \begin{table}[b] \renewcommand{\arraystretch}{.8} \begin{center} \begin{tabular}{c@{\hspace{20pt}}l} \toprule Section & Intensity distribution\\ \midrule \phantom{a}&\\[-8pt] \ref{sect:MLE_example1} & $I_1(x;p_1)=\Pi(x)(0.5+0.5\, p_1 x)$\\[8pt] \ref{sect:MLE_example2} & $I_2(x;p_1)=\Pi(x)(0.9+0.1\, p_1 x)$\\[8pt] \ref{sect:MLE_example3} & $I_3(x;p_1)=\Pi(x)\mfrac{1}{(|c|+1)^2}(p_1-cx)^2$, where $c=\text{constant}$\\[12pt] \ref{sect:MLE_example4} & $I_4(x;p_1)=\Pi(x)\mfrac{1}{(|d|+2)^2}(p_1-x-d)^2$, where $d=\text{constant}$\\ \bottomrule \end{tabular} \caption{Intensity distributions for each example considered in Section \ref{sect:MLE_examples_1param}.} \label{tbl:MLE_1param_intensity_dist} \end{center} \end{table} In Section \ref{sect:MLE_example1}, an in-depth analysis is performed for a simple intensity distribution that depends linearly on $p_1$. In Section \ref{sect:MLE_example2}, the results are compared to a similar intensity distribution with a weaker linear dependence on $p_1$. Next, the commonly-used experimental configurations of null and off-null measurements are explored in Section \ref{sect:MLE_example3}. 
Finally, Section \ref{sect:MLE_example4} examines the case of an intensity that may be far from perfect nulling conditions, and the results are compared to the near-null case. \subsection{Linear dependence on $p_1$}\label{sect:MLE_example1} For the first example, consider the intensity distribution \begin{equation} I_1(x;p_1) = \Pi(x)\bigl(0.5+0.5\,p_1x\bigr).\label{eq:MLE_I_Ex1} \end{equation} The distribution is only valid when $-1\leq p_1\leq 1$ since parameter values of larger magnitude would result in negative intensity values, which are not allowed. This is an extreme case of a common real-world scenario in which an approximation is made for the intensity that is only valid over some range of parameter values (for example, the quadratic approximation seen in Ref.~\cite{Vella_2018_fbs_arxiv}). In practice, for reliable parameter estimation, the range of interest of ${\bf p}$ should be smaller than the region where the approximation is valid (within some prescribed accuracy). Using Eq.~(\ref{eq:P(i|p1)}), it is straightforward to calculate the PMF for a detected photon:% \begin{equation} P_1(i|p_1) = \frac{1}{9}\left(1+\frac{i-5}{4}\,p_1\right)\hspace{-1pt}.\label{P1(i|p1)} \end{equation} The continuous intensity distribution $I_1(x;p_1)$ and discrete PMF $P_1(i|p_1)$ are plotted in Figs.~\ref{fig:MLE_0.63_IntPlot_Ex1}(a) and \ref{fig:MLE_0.63_IntPlot_Ex1}(b) for the case that $p_1=0.63$. To visualize the relationship between the intensity and PMF, it is useful to combine the two plots with appropriately chosen scales, as seen in Fig.~\ref{fig:MLE_0.63_IntPlot_Ex1}(c). \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/I_and_P_plots_Ex1_p0_63.pdf} \caption{(a) Linear intensity distribution $I_1(x;p_1)$ and (b) the corresponding PMF for each pixel $i$, both shown for the case that $p_1=0.63$. The two plots are shown together in part (c). For practical reasons, the axis labels for $i$ are excluded from the combined plot. 
In all subsequent figures, the vertical axis labels are also omitted to reduce clutter.} \label{fig:MLE_0.63_IntPlot_Ex1} \end{figure} The dependence of each quantity on $p_1$ is illustrated in Fig.~\ref{fig:MLE_IntPlots_Ex1}, which contains plots of $I_1(x;p_1)$ and $P_1(i|p_1)$ for five different parameter values over the range of interest. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex1.pdf} \caption{Plots of $I_1(x;p_1)$ (left axis) and $P_1(i|p_1)$ (right axis) for several values of $p_1$.} \label{fig:MLE_IntPlots_Ex1} \end{figure} As discussed previously, the likelihood function $L_1(p_1|i)$ has the same algebraic form as $P_1(i|p_1)$, but it is regarded as a continuous function of $p_1$. The likelihood functions associated with individual photons detected at each pixel $i=1,\ldots,9$ are plotted in Fig.~\ref{fig:MLE_Lplot1}. \begin{figure} \centering \includegraphics[width=.65\linewidth]{Figures/MLE/L/Lplot1.pdf} \caption{Likelihood functions $L_1(p_1|i)$ associated with each pixel $i$ in a measurement with theoretical intensity distribution $I_1(p_1)$.} \label{fig:MLE_Lplot1} \end{figure} To illustrate the procedure of calculating the MLE from the likelihood function, let us now consider a simulated measurement of the intensity for which the true parameter value is $p_1=0.63$. The simulated intensity $\tilde{\bf I}$ is constructed by randomly selecting individual photons according to the probability distribution $P(i|p_1\!=\!0.63)$ that was shown previously in Fig.~\ref{fig:MLE_0.63_IntPlot_Ex1}(b). For demonstrative purposes, suppose that the sensor is capable of detecting individual photons, even though this is typically not the case in real experiments where many photons accumulate within the sensor's exposure time. This will allow us to examine the influence of each photon on the likelihood and the MLE, as well as the evolution of the MLE as photons accumulate. 
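This photon-by-photon simulation can be sketched in a few lines of Python (a minimal illustration drawing samples from the PMF of Eq.~(\ref{P1(i|p1)}); the random seed and photon count are arbitrary choices):

```python
import numpy as np

# Draw photons one at a time from P1(i|p1) = (1/9)(1 + (i-5)p1/4)
# with true parameter value p1 = 0.63, accumulating the measured intensity.
rng = np.random.default_rng(0)            # arbitrary seed
i = np.arange(1, 10)                      # pixel indices 1..9
p1_true = 0.63
pmf = (1 + (i - 5)/4*p1_true) / 9         # P1(i|p1); sums to 1

I_meas = np.zeros(9, dtype=int)           # measured intensity I~
for _ in range(1000):                     # accumulate 1000 photons
    pixel = rng.choice(i, p=pmf)
    I_meas[pixel - 1] += 1
```

In practice one would update the MLE after each detection, as done in the discussion that follows; here only the accumulation of $\tilde{\bf I}$ is shown.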
Suppose that the first simulated photon hits the detector at pixel 1. From Eq.~(\ref{P1(i|p1)}), the likelihood of this event is found to be $L_1(p_1|i\hspace{-1pt} =\hspace{-1pt} 1)=\frac{1}{9}(1-p_1)$. The MLE based on this single photon is obtained by maximizing the likelihood with respect to $p_1$. This example illustrates the fact that the MLE is not guaranteed to exist in general, since $L_1(p_1|i=1)$ would be unbounded if $p_1$ were allowed to take any real value. A sufficient condition for the existence of an MLE is that the parameter space is compact \cite{vanderVaart_1992,Demidenko_1999}, such as the closed interval $p_1\in[-1,1]$. Within this interval, the likelihood function is maximized by $p_1=-1$.\footnote{Note that the condition of compactness is sufficient but not necessary. In fact, in the present example, the restriction quickly becomes unnecessary as soon as multiple photons are detected at different pixels. Another example is the polarimetry application in Ref.~\cite{Ramkhalawon_2013}, in which the Stokes parameters are restricted to the interval $[-1,1]$ by definition, guaranteeing the existence of an MLE.} Notice from Fig.~\ref{fig:MLE_Lplot1} that a single photon detected at pixel 2, 3, or 4 also would have produced the same MLE, albeit with lower confidence. Now suppose that a second photon is detected at pixel 7, so that the measured intensity becomes $\tilde{\bf I}=(1,0,0,0,0,0,1,0,0)$. The likelihood function associated with this second photon is $L_1(p_1|i=7)=\frac{1}{9}(1+\frac{1}{2}p_1)$. Using Eq.~(\ref{eq:P(tbI|p)}) (and remembering that the probability and likelihood are algebraically equivalent), the likelihood of measuring this two-photon intensity distribution is \begin{equation} L_1(p_1|\tilde{\bf I}) = \frac{2!}{1!\hspace{-1pt}\times\hspace{-1pt} 1!}L_1(p_1|i=1) L_1(p_1|i=7)=\frac{1}{81}(-p_1^2-p_1+2). \end{equation} It is easy to show that this function is maximized when $p_1=-0.5$, which becomes the new MLE. 
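This maximization is also easy to check numerically, e.g., by a grid search over the compact interval $[-1,1]$ (a minimal sketch; the grid spacing is arbitrary):

```python
import numpy as np

# Grid-search maximization of the two-photon likelihood
# L1(p1|I~) = (1/81)(-p1^2 - p1 + 2) over the compact interval [-1, 1].
p1 = np.linspace(-1, 1, 20001)
L = (-p1**2 - p1 + 2) / 81
p1_mle = p1[np.argmax(L)]                 # analytic maximum is p1 = -0.5
```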
Similarly, suppose that a third photon is detected, also at pixel 7, so that the measured intensity becomes $\tilde{\bf I}=(1,0,0,0,0,0,2,0,0)$. The likelihood of measuring this intensity distribution is \begin{equation} L_1(p_1|\tilde{\bf I}) = \frac{3!}{1!\hspace{-1pt}\times\hspace{-1pt} 2!}L_1(p_1|i=1) L_1(p_1|i=7)^2=\frac{1}{972}(-p_1^3-3p_1^2+4), \end{equation} which is maximized when $p_1=0$. The likelihood functions for individual photons at pixels 1 and 7 are plotted in Fig.~\ref{fig:MLE_L3ph_Ex1}(a), as well as the likelihoods of the two- and three-photon intensity distributions from above. The latter two functions are also plotted separately in Fig.~\ref{fig:MLE_L3ph_Ex1}(b,c). \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{Figures/MLE/L3ph/L3ph_plots_Ex1.pdf} \caption{(a) Likelihood functions (based on intensity distribution $I_1$) for detected photons at pixels $i=1$ and $i=7$ and for intensity measurements consisting of one photon at pixel 1 and one or two photons at pixel 7. The two- and three-photon likelihoods are also shown on independent scales in plots (b) and (c).} \label{fig:MLE_L3ph_Ex1} \end{figure} From these plots one can see the effect of each photon: as photons are detected at pixel 1, then pixel 7, then pixel 7 again, the peak of the likelihood function shifts from $p_1=-1$ to $p_1=-0.5$ to $p_1=0$. Additionally, the distribution becomes more sharply peaked with each accumulated photon, reducing the uncertainty in the MLE. This uncertainty can be quantified by using Eq.~(\ref{eq:MLE_Fisher_obs}) to calculate the observed Fisher information, which is a $1\times 1$ ``matrix'' (i.e., a scalar) in the one-parameter case. 
For example, for the three-photon measurement $\tilde{\bf I}=(1,0,0,0,0,0,2,0,0)$, Eq.~(\ref{eq:MLE_Fisher_obs_1der}) yields \begin{align} J_1^\text{(obs)}(p_1;\tilde{\bf I}) &= \sum_i \tilde{I}_i \left(\frac{\partial}{\partial p_1}\ln P(i|p_1)\right)^{\!2}\nonumber\\[2pt] &= \left(\left.\frac{i-5}{4+(i-5)p_1}\right|_{i=1}\right)^{\!2} + 2\left(\left.\frac{i-5}{4+(i-5)p_1}\right|_{i=7}\right)^{\!2} \nonumber\\[2pt] &=\frac{1}{(p_1-1)^2} + \frac{2}{(p_1+2)^2}, \end{align} which produces $J_1^\text{(obs)}=1.5$ when evaluated at the MLE $p_1=0$. In the one-parameter case, the eigenvalue of the ``matrix'' $J_1^\text{(obs)}$ is just the value of $J_1^\text{(obs)}$ itself. Therefore, the minimum expected standard deviation uncertainty of the measurement is $1/\sqrt{1.5}=0.816$. Considering the fact that only three photons were detected, this large uncertainty (relative to the range of interest) is not surprising. Alternatively, using Eq.~(\ref{eq:MLE_Fisher_1der}), the minimum error for a measurement of $\mathcal{N}$ photons (independent of the specific outcome of the measurement) can be quantified by calculating the expected Fisher information \begin{equation} \mathcal{N}J_1(p_1)=\frac{\mathcal{N}}{36}\sum_{i=1}^{9}\frac{(i-5)^2}{4+(i-5)p_1}.\label{eq:Fisher_MLE_Ex1} \end{equation} For example, for a three-photon measurement with MLE $p_1=0$, the expected standard deviation error is $[3J_1(0)]^{-1/2}=0.894$. Keep in mind, however, that the expected Fisher information is not necessarily appropriate for a measurement containing very few photons. As seen in Fig.~\ref{fig:MLE_Fisher_Ex1}, $J_1(p_1)$ grows infinitely large in the limit that $|p_1|\to 1$, implying that the uncertainty approaches zero. 
\begin{figure} \centering \includegraphics[height=2.25in]{Figures/MLE/Fisher/FisherPlot_Ex1.pdf} \caption{Expected unit Fisher information for a measurement of $I_1(x;p_1)$.} \label{fig:MLE_Fisher_Ex1} \end{figure} Although this is a meaningful limit for the case of large $\mathcal{N}$, it would clearly be nonsensical to suggest that a single photon could produce an MLE with zero uncertainty! To observe these concepts on a larger scale, suppose that the simulation continues until 100,000 photons have accumulated. For a single random trial of the experiment, Table \ref{tbl:MLE_ex1_photonsim} contains the measured intensities and corresponding MLEs obtained throughout the simulation for several values of $\mathcal{N}$. Notice that the MLE approaches the true parameter value ($p_1=0.63$) as $\mathcal{N}$ increases. As seen in Fig.~\ref{fig:MLE_LL_Ex1}, the log-likelihood function $\ell_1(p_1|\tilde{\bf I})$ becomes increasingly narrow as photons accumulate, and its shape becomes approximately parabolic; therefore, the likelihood $L_1(p_1|\tilde{\bf I})$ approaches a Gaussian distribution, i.e., an exponentiated concave-downward quadratic function. Furthermore, as observed above, the location of the peak likelihood (which by definition determines the MLE) approaches the true parameter value. The MLE is plotted against $\mathcal{N}$ in Fig.~\ref{fig:MLE_conf_Ex1}, with shaded regions representing the standard deviation confidence intervals based on the expected and observed Fisher information. Notice that as $\mathcal{N}$ increases, not only does the MLE approach the true value of $p_1$ with increasing confidence, but the expected and observed information rapidly converge. 
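Returning briefly to the three-photon example, the observed and expected information values quoted above ($J_1^\text{(obs)}=1.5$, with uncertainties $0.816$ and $0.894$) can be reproduced in a few lines (a numerical sketch of the one-parameter Fisher information calculations for $I_1$):

```python
import numpy as np

# Observed and expected Fisher information for I1, three-photon measurement
# I~ = (1,0,0,0,0,0,2,0,0), evaluated at the MLE p1 = 0.
i = np.arange(1, 10)
I_meas = np.array([1, 0, 0, 0, 0, 0, 2, 0, 0])
p1 = 0.0

dlogP = (i - 5) / (4 + (i - 5)*p1)        # d/dp1 ln P1(i|p1)
J_obs = np.sum(I_meas * dlogP**2)         # observed information: 1.5
J_unit = np.sum((i - 5)**2 / (4 + (i - 5)*p1)) / 36   # expected unit information
sigma_obs = 1/np.sqrt(J_obs)              # minimum uncertainty ~0.816
sigma_exp = 1/np.sqrt(3*J_unit)           # expected error for 3 photons ~0.894
```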
\begin{table} \begin{center} \begin{tabular}{l@{\hspace{6pt}}lM{0in}l} \toprule $\mathcal{N}$ & MLE $(p_1)$ && $\tilde{\bf I}=(\tilde{I}_1,\ldots,\tilde{I}_9)$ \\ \midrule 1 & $-1.0000$ && $(1,0,0,0,0,0,0,0,0)$ \\ 2 & $-0.5000$ && $(1,0,0,0,0,0,1,0,0)$ \\ 3 & $\phantom{-}0.0000$ && $(1,0,0,0,0,0,2,0,0)$ \\ 4 & $\phantom{-}0.3187$ && $(1,0,0,0,0,0,2,1,0)$ \\ 5 & $\phantom{-}0.5024$ && $(1,0,0,0,0,0,2,1,1)$ \\ 6 & $\phantom{-}0.5429$ && $(1,0,0,0,0,1,2,1,1)$ \\ 7 & $\phantom{-}0.6187$ && $(1,0,0,0,0,1,2,2,1)$ \\ 8 & $\phantom{-}0.6727$ && $(1,0,0,0,0,1,2,3,1)$ \\ 9 & $\phantom{-}0.6916$ && $(1,0,0,0,0,2,2,3,1)$ \\ 10 & $\phantom{-}0.6646$ && $(1,0,0,1,0,2,2,3,1)$ \\ 100 & $\phantom{-}0.7114$ && $(6,1,8,9,8,9,15,19,25)$ \\ 1000 & $\phantom{-}0.6656$ && $(41,56,64,91,112,121,166,160,189)$ \\ 10000 & $\phantom{-}0.6243$ && $(413,583,784,956,1112,1262,1446,1615,1829)$ \\ 100000 & $\phantom{-}0.6329$ && $(4009,5847,7696,9460,11151,12839,14588,16160,18250)$ \\ \bottomrule \end{tabular} \caption{Evolution of the MLE for $p_1$ and the measured intensity distribution $\tilde{\bf I}$ as individual photons accumulate for a simulated measurement of $I_1(x;p_1)$ with true parameter value $p_1=0.63$.} \label{tbl:MLE_ex1_photonsim} \end{center} \end{table} \begin{figure} \centering \includegraphics[height=2.5in]{Figures/MLE/LL/LLplot_with_legend3_Ex1.pdf} \caption{Log-likelihood functions associated with the simulated intensities listed in Table \ref{tbl:MLE_ex1_photonsim}.} \label{fig:MLE_LL_Ex1} \end{figure} \begin{figure} \centering \includegraphics[width=.75\linewidth]{Figures/MLE/MLE_conf/MLEc1.pdf} \caption{Evolution of the maximum likelihood estimate and standard deviation confidence interval for $p_1$ as 100,000 photons accumulate for a simulated measurement of $I_1(x;p_1)$ with true parameter value $p_1=0.63$. 
The solid red and dashed blue regions represent the confidence intervals based on the expected and observed Fisher information, respectively.} \label{fig:MLE_conf_Ex1} \end{figure} Although the above simulation is a representative example of the behavior of the MLE, it is merely a single observation of a random process. To gain a broader view of the statistical behavior of $I_1(x;p_1)$, a Monte Carlo simulation of 50,000 trials of a 100-photon intensity measurement was performed, first for a true parameter value of $p_1=0$ and then for $p_1=0.63$. The results of the simulations are plotted in Figs.~\ref{fig:MLE_retr_hist_Ex1}(a) and \ref{fig:MLE_retr_hist_Ex1}(b), which contain histograms showing the distribution of the MLE over all trials. \begin{figure} \centering \includegraphics[width=.8\linewidth]{Figures/MLE/Hist/hists_Ex1.pdf} \caption{Histograms of the maximum likelihood estimates obtained from 50,000 trials of a 100-photon simulation of $I_1(x;p_1)$ with true parameter values (a) $p_1=0$ and (b) $p_1=0.63$. The mean ($\mu_\text{data}$) and standard deviation ($\sigma_\text{data}$) of each distribution are indicated in the upper left corner of the plot. For comparison, a normal distribution with mean $p_1$ and standard deviation $\sigma=[100J_1(p_1)]^{-1/2}$ is overlaid in red; the value of $\sigma$ is indicated alongside each curve.} \label{fig:MLE_retr_hist_Ex1} \end{figure} As seen in the upper left corner of each plot, the mean MLE over all trials differs from the true parameter value by less than 0.001. The standard deviations of the MLEs obtained for the $p_1=0$ and $p_1=0.63$ cases are 0.1554 and 0.1303, respectively. In comparison, using Eq.~(\ref{eq:Fisher_MLE_Ex1}), the expected Fisher information for the $p_1=0$ case is $100J_1(0)=41.67$, corresponding to a standard deviation error of $0.1549$. Similarly, the expected error for the $p_1=0.63$ case is found to be $0.1285$. These values closely agree with the results of the simulation. 
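A reduced-scale version of this Monte Carlo experiment can be reproduced as follows (a sketch using 5,000 trials rather than 50,000; the grid-search MLE, the slightly truncated grid that avoids $\ln 0$ at $|p_1|=1$, and the seed are implementation choices, not taken from the text):

```python
import numpy as np

# 5,000 trials of a 100-photon measurement of I1 with p1 = 0; the spread of
# the MLEs should approach the predicted error [100 J1(0)]^(-1/2) = 0.1549.
rng = np.random.default_rng(1)
i = np.arange(1, 10)
n_photons, n_trials = 100, 5000

pmf = (1 + (i - 5)/4*0.0) / 9                        # P1(i|p1=0) = 1/9
counts = rng.multinomial(n_photons, pmf, n_trials)   # shape (trials, pixels)

p_grid = np.linspace(-0.995, 0.995, 399)             # avoid log(0) at |p1| = 1
logP = np.log((1 + np.outer(i - 5, p_grid)/4) / 9)   # shape (pixels, grid)
mle = p_grid[np.argmax(counts @ logP, axis=1)]       # grid-search MLE per trial

mle_mean, mle_std = mle.mean(), mle.std()            # ~0 and ~0.155
```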
To help visualize this, a normal distribution with the expected standard deviation is overlaid in red on top of each histogram in Fig.~\ref{fig:MLE_retr_hist_Ex1}; notice that each curve almost exactly matches the distribution of MLEs over 50,000 trials. \subsection{Weaker linear dependence on $p_1$}\label{sect:MLE_example2} For the next example, consider the intensity distribution \begin{equation} I_2(x;p_1) = \Pi(x)\bigl(0.9+0.1\,p_1x\bigr),\label{eq:MLE_I_Ex2} \end{equation} which is valid when $-9\leq p_1\leq 9$. (However, the range of interest is still $-1\leq p_1\leq 1$.) Using Eq.~(\ref{eq:P(i|p1)}), the PMF for a single photon is \begin{equation} P_2(i|p_1) = \frac{1}{9}\left(1+\frac{i-5}{36}\,p_1\right)\hspace{-1pt}.\label{P2(i|p1)} \end{equation} This distribution is nearly the same as the first example except that the linear $p_1$ term is 9 times smaller. As a result, the variations in intensity, PMF, and likelihood with respect to $p_1$ have much lower contrast over the range of interest, as seen in Figs.~\ref{fig:MLE_IntPlots_Ex2} and \ref{fig:MLE_Lplot2}. \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex2.pdf} \caption{Plots of $I_2(x;p_1)$ (left axis) and $P_2(i|p_1)$ (right axis) for several values of $p_1$.} \label{fig:MLE_IntPlots_Ex2} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=.65\linewidth]{Figures/MLE/L/Lplot2.pdf} \caption{Likelihood functions $L_2(p_1|i)$ associated with each pixel $i$ in a measurement with theoretical intensity distribution $I_2(p_1)$.} \label{fig:MLE_Lplot2} \end{figure} Analogously to Section \ref{sect:MLE_example1}, suppose that we simulate a measurement of $I_2(x;p_1)$ and that the first three photons are again detected at pixels 1, 7, and 7. Following the same procedure as in the previous example, it can be shown that the maximum likelihood estimates after each photon detection are $p_1=-9$, $-4.5$, and $0$. 
The corresponding likelihood functions, shown in Fig.~\ref{fig:MLE_L3ph_Ex2}, are nearly flat, which is a sign that the MLE has a large uncertainty. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L3ph/L3ph_plots_Ex2.pdf} \caption{(a) Likelihood functions (based on intensity distribution $I_2$) for detected photons at pixels $i=1$ and $i=7$ and for intensity measurements consisting of one photon at pixel 1 and one or two photons at pixel 7. The two- and three-photon likelihoods are also plotted on independent scales in plots (b) and (c).} \label{fig:MLE_L3ph_Ex2} \end{figure} Indeed, for $\tilde{\bf I}=(1,0,0,0,0,0,2,0,0)$, the observed Fisher information is found to be \begin{equation} J_2^\text{(obs)}(p_1;\tilde{\bf I}) = \frac{1}{(p_1-9)^2} + \frac{2}{(p_1+18)^2}, \end{equation} which yields $J_2^\text{(obs)}=0.0185$ when evaluated at the MLE $p_1=0$, corresponding to a standard deviation uncertainty of $1/\sqrt{0.0185}=7.35$. Similarly, the expected Fisher information \begin{equation} \mathcal{N}J_2(p_1)=\frac{\mathcal{N}}{324}\sum_{i=1}^{9}\frac{(i-5)^2}{36+(i-5)p_1}\label{eq:Fisher_MLE_Ex2} \end{equation} for an $\mathcal{N}$-photon measurement of $I_2$ is significantly smaller than the information contained in a measurement of $I_1$, as shown in Fig.~\ref{fig:MLE_Fisher_Ex2}. For example, the expected standard deviation error for a three-photon measurement, given by $[3J_2(0)]^{-1/2}=8.05$, is nine times larger than it was in the previous example. The discrepancy grows even larger as $|p_1|$ increases. 
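The ninefold penalty follows directly from Eqs.~(\ref{eq:Fisher_MLE_Ex1}) and (\ref{eq:Fisher_MLE_Ex2}), and can be confirmed with a quick numerical check at $p_1=0$:

```python
import numpy as np

# Unit Fisher information of I1 and I2 at p1 = 0: the 9x weaker linear term
# reduces the information by a factor of 81, i.e. the error grows ninefold.
d = np.arange(1, 10) - 5                  # i - 5 for pixels 1..9
J1 = np.sum(d**2/4) / 36                  # J1(0) = 60/144
J2 = np.sum(d**2/36) / 324                # J2(0) = 60/11664
info_ratio = J1/J2                        # 81
err_ratio = np.sqrt(info_ratio)           # 9
```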
\begin{figure} \centering \includegraphics[height=2.25in]{Figures/MLE/Fisher/FisherPlot_Ex2.pdf} \caption{Expected unit Fisher information $J_1(p_1)$ and $J_2(p_1)$ for measurements of $I_1(x;p_1)$ and $I_2(x;p_1)$, respectively, plotted on a logarithmic scale.} \label{fig:MLE_Fisher_Ex2} \end{figure} Similarly to the previous section, a 100,000 photon simulation of $I_2(x;p_1)$ was performed, and the results were monitored along the way as photons accumulated. The intensities and corresponding MLEs obtained at several steps throughout the simulation are listed in Table \ref{tbl:MLE_ex2_photonsim}, and the MLE and standard deviation confidence interval are plotted as a function of $\mathcal{N}$ in Fig.~\ref{fig:MLE_conf_Ex2}. From these results, one can see that the MLE approaches the true parameter value more slowly than in the previous example, with a much larger uncertainty. (Take note of the increased scale of the plot compared to Fig.~\ref{fig:MLE_conf_Ex1}.) \begin{table} \begin{center} \begin{tabular}{l@{\hspace{6pt}}lM{0in}l} \toprule $\mathcal{N}$ & MLE $(p_1)$ && $\tilde{\bf I}=(\tilde{I}_1,\ldots,\tilde{I}_9)$ \\ \midrule 1 & $-9.0000$ && $(1,0,0,0,0,0,0,0,0)$ \\ 2 & $-4.5000$ && $(1,0,0,0,0,0,1,0,0)$ \\ 3 & $\phantom{-}0.0000$ && $(1,0,0,0,0,0,2,0,0)$ \\ 4 & $-3.8285$ && $(1,1,0,0,0,0,2,0,0)$ \\ 5 & $-3.8285$ && $(1,1,0,0,1,0,2,0,0)$ \\ 6 & $-2.3629$ && $(1,1,0,0,1,1,2,0,0)$ \\ 7 & $-5.1192$ && $(1,2,0,0,1,1,2,0,0)$ \\ 8 & $-5.1192$ && $(1,2,0,0,2,1,2,0,0)$ \\ 9 & $-6.0605$ && $(1,2,0,1,2,1,2,0,0)$ \\ 10 & $-4.8152$ && $(1,2,0,1,2,2,2,0,0)$ \\ 100 & $\phantom{-}2.3159$ && $(6,6,12,17,13,11,9,13,13)$ \\ 1000 & $\phantom{-}1.8366$ && $(91,98,89,105,113,108,145,120,131)$ \\ 10000 & $\phantom{-}0.7542$ && $(1000,1044,1101,1077,1117,1088,1204,1168,1201)$ \\ 100000 & $\phantom{-}0.6331$ && $(10278,\hspace{-1pt} 10541,\hspace{-1pt} 10629,\hspace{-1pt} 11026,\hspace{-1pt} 11138,\hspace{-1pt} 11377,\hspace{-1pt} 11438,\hspace{-1pt} 11843,\hspace{-1pt} 11730)$ \\ 
\bottomrule \end{tabular} \caption{Evolution of the MLE for $p_1$ and the measured intensity distribution $\tilde{\bf I}$ as individual photons accumulate for a simulated measurement of $I_2(x;p_1)$ with true parameter value $p_1=0.63$.} \label{tbl:MLE_ex2_photonsim} \end{center} \end{table} \begin{figure} \centering \includegraphics[height=2.5in]{Figures/MLE/LL/LLplot_with_legend3_Ex2.pdf} \caption{Log-likelihood functions associated with the simulated intensities listed in Table \ref{tbl:MLE_ex2_photonsim}.} \label{fig:MLE_LL_Ex2} \end{figure} \begin{figure} \centering \includegraphics[width=.75\linewidth]{Figures/MLE/MLE_conf/MLEc2.pdf} \caption{Evolution of the maximum likelihood estimate and standard deviation confidence interval for $p_1$ as 100,000 photons accumulate for a simulated measurement of $I_2(x;p_1)$ with true parameter value $p_1=0.63$. The solid red and dashed blue regions represent the confidence intervals based on the expected and observed Fisher information, respectively.} \label{fig:MLE_conf_Ex2} \end{figure} \widowpenalty10000 Finally, to complete the comparison to Section \ref{sect:MLE_example1}, a Monte Carlo simulation was performed for 50,000 trials of a 1000-photon measurement of $I_2(x;p_1)$. For true parameter values $p_1=0$ and $p_1=0.63$, the expected standard deviation errors are $0.4409$ and $0.4401$, respectively. Histograms of the results of each simulation for 50,000 trials are shown in Fig.~\ref{fig:MLE_retr_hist_Ex2}; as indicated on the plots, the standard deviations of the MLEs obtained for each case are $0.4413$ and $0.4394$, closely matching expectations. \begin{figure} \centering \includegraphics[width=.8\linewidth]{Figures/MLE/Hist/hists_Ex2.pdf} \caption{Histograms of the maximum likelihood estimates obtained from 50,000 trials of a 1000-photon simulation of $I_2(x;p_1)$ with true parameter values (a) $p_1=0$ and (b) $p_1=0.63$. 
The mean ($\mu_\text{data}$) and standard deviation ($\sigma_\text{data}$) of each distribution are indicated in the upper left corner of the plot. For comparison, a normal distribution with mean $p_1$ and standard deviation $\sigma=[1000J_2(p_1)]^{-1/2}$ is overlaid in red; the value of $\sigma$ is indicated alongside each curve.} \label{fig:MLE_retr_hist_Ex2} \end{figure} Notice that the errors are larger than they were in the previous example ($0.1554$ and $0.1303$) despite the fact that the measured intensity contains ten times as many photons. This is noteworthy because for any value of $p_1$, the total power incident on the detector (given by the sum of the intensity over all pixels) is 1.8 times larger for $I_2$ than it is for $I_1$, indicating that on average nearly twice as many photons will be measured within a given exposure time. Even so, based on the above results, we can conclude that if measurements of $I_1$ and $I_2$ were conducted with identical exposure times, then the measurement of $I_1$ (for which the output signal would contain fewer photons) would be expected to produce a more accurate parameter estimate. This is an important lesson to keep in mind when designing an experiment: the most informative measurement is not always the one with the strongest signal! On the contrary, it can be beneficial to filter out a large fraction of the light before it reaches the detector (e.g., via polarization selection) in such a way that the measured signal contains only the photons emitted from the source that provide the most information about $p_1$.\footnote{When possible, it would be preferable to encode information by rearranging the light rather than filtering it out. However, sometimes this is not possible, e.g., when measuring the coupling induced by a scattering process between a pair of specific input and output polarization states.} This idea is explored further in the next example. 
\subsection{Null and off-null measurements}\label{sect:MLE_example3} For some optical applications, it is advantageous to design the experiment so that low light levels are observed at the detector plane, resulting in increased parameter sensitivity. One notable example is off-null ellipsometry, in which polarization elements are configured to produce a high extinction ratio over the range of interest of the parameter(s) under test \cite{Arwin_1993}. The focused beam scatterometry experiment in Ref.~\cite{Vella_2018_fbs_arxiv} operates on the same principle but with a spatially-varying polarization distribution, resulting in an output intensity of the form $I\propto \left|\sum_n a_n(x)[p_n-\bar{p}_n(x)]\right|^2$, where the functions $a_n(x)$ characterize the sample under test and the functions $\bar{p}_n(x)$ (which determine the required input polarization) can be tailored to optimize the sensitivity to each parameter. As an example of this type of measurement for the one-parameter case, consider the intensity distribution \begin{equation} I_3(x;p_1)=\Pi(x)\frac{1}{(|c|+1)^2}(p_1-cx)^2, \end{equation} where $c$ is a real constant. For $c=0$, this represents a null measurement for which the (spatially uniform) intensity vanishes when $p_1=0$ and increases quadratically with $p_1$. For $c\neq 0$, the value of $p_1$ for zero intensity (i.e., the departure from perfect nulling) varies linearly with the coordinate $x$. Using Eq.~(\ref{eq:P(i|p1)}), the PMF for a detected photon is found to be \begin{equation} P_3(i|p_1)=\frac{(4p_1-(i-5)c)^2}{144p_1^2+60c^2}. \end{equation} Let us begin by examining the case of perfect nulling ($c=0$), for which the intensity $I_3(x;p_1)=\Pi(x)p_1^2$ and PMF $P_3(i|p_1)=1/9$ are plotted in Fig.~\ref{fig:MLE_IntPlots_Ex3_null}. 
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex3.pdf} \caption{Plots of $I_3(x;p_1)$ (left axis) and $P_3(i|p_1)$ (right axis) for several values of $p_1$ for the case of perfect nulling.} \label{fig:MLE_IntPlots_Ex3_null} \end{figure} In contrast to the previous two examples, these plots illustrate that for a given coordinate $x_i$, the ratio between the measured intensities at two different parameter values need not be the same as the ratio between the corresponding PMF values. In fact, in this example the PMF is the same for all values of $p_1$ with the exception of $p_1=0$, for which it is undefined (due to the fact that no photons are detected). Consequently, the likelihood function is completely flat and the Fisher information is zero, implying that it is impossible to determine $p_1$ from the shape of the measured intensity distribution.\footnote{In this case, the MLE exists but it is not unique, since all values of $p_1$ within the range of interest maximize the likelihood function.} (Of course, this is also obvious from the simple fact that the PMF is independent of $p_1$.) In this situation, it would only be possible to deduce the value of $p_1$ from the total optical power incident on the detector, which is beyond the scope of the current statistical approach. Even then, it would only be possible to determine the magnitude of $p_1$ but not its sign (since $I_3$ is an even function of $p_1$), and the measurement would be susceptible to temporal fluctuation errors unless the illumination source power were very stable. The aforementioned shortcomings of a null measurement can be avoided by designing the experiment to operate under an off-null condition, which corresponds to the choice of some constant $c\neq 0$ in the present example. The intensity and PMF are plotted in Fig.~\ref{fig:MLE_IntPlots_Ex3_offnull} for several positive values of $c\sp$; symmetric behavior is observed when $c$ is negative. 
Notice in each plot that the null in intensity (when one exists within the range of interest) is located at $x=p_1/c$. When $|c|=1$, the null shifts across the entire width of the sensor as $p_1$ varies from $-1$ to $1$, causing the shape of $P_3(i|p_1)$ to vary substantially over the entire parameter range. When $|c|\gg 1$, the null is confined to a narrow region near the center of the sensor, resulting in very little variation in $P_3(i|p_1)$ with respect to $p_1$. On the other hand, when $|c|\ll 1$, the null shifts away from the origin very quickly when $p_1$ is nonzero. This results in dramatic variations in $P_3(i|p_1)$ (and very low intensity levels) when $|p_1|$ is small, but much smaller changes near the edge of the parameter range. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex4.pdf}\\[4pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex5.pdf}\\[4pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex6.pdf}\\[4pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex7.pdf}\\[4pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex8.pdf} \caption{Plots of $I_3(x;p_1)$ (left axes) and $P_3(i|p_1)$ (right axes) for several values of $p_1$. Each row of plots corresponds to a different value of $c$, as indicated in the leftmost plot.} \label{fig:MLE_IntPlots_Ex3_offnull} \end{figure} This behavior can also be visualized by plotting the likelihood functions $L_3(p_1|i)$ for each pixel, which are shown in Fig.~\ref{fig:MLE_Lplot3}. \begin{figure} \centering \includegraphics[width=.95\linewidth]{Figures/MLE/L/Lplots3-8.pdf} \vspace{-6pt} \caption{Likelihood functions $L_3(p_1|i)$ associated with each pixel $i$ in a measurement with theoretical intensity distribution $I_3(p_1)$, plotted for several nonnegative values of~$c$. 
Symmetric results are obtained for the corresponding negative values of $c$, with each plot flipped about the vertical $p_1=0$ axis.} \label{fig:MLE_Lplot3} \end{figure} From the definition of the Fisher information, recall that the magnitude of the local slope of $L_3$ is an indicator of the information content of a measurement of $p_1$. In agreement with the observations made above, for $|c|\ll 1$, the likelihood generally has a very large slope when $|p_1|$ is small (enabling a precise estimate of $p_1$), but it becomes nearly flat for larger parameter values. Meanwhile, for $|c|\gg 1$, the likelihood is relatively flat over the entire range of interest, making parameter estimation difficult. Qualitatively, it is evident that the best balance between these two extremes is achieved when $c$ is on the order of unity, so that $L_3(i|p_1)$ exhibits a similar amount of variation over the full range of interest \nolinebreak[4] of $p_1$. For a measurement containing a large number of photons, the uncertainty of the MLE can be calculated from the expected unit Fisher information; a somewhat lengthy but straightforward calculation shows that \begin{equation} J_3(p_1)=\sum_{i=1}^{9}\frac{16c^2[5c+3(i-5)p_1]^2}{3(12p_1^2+5c^2)^3} = \frac{240c^2}{(12p_1^2+5c^2)^2}. \label{eq:Fisher_MLE_Ex3} \end{equation} This function is plotted in Fig.~\ref{fig:MLE_Fisher_Ex3} for several values of $c$. \begin{figure} \centering \includegraphics[height=2.25in]{Figures/MLE/Fisher/FisherPlot_Ex3.pdf} \vspace{-6pt} \caption{Expected unit Fisher information $J_3(p_1)$ for a measurement of $I_3(x;p_1)$, plotted on a logarithmic scale for several values of $c$.} \label{fig:MLE_Fisher_Ex3} \end{figure} Notice that the Fisher information is the same for positive and negative $c$; the $c=0$ case does not appear on the plot since $J_3(p_1)$ goes to zero.
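Equation (\ref{eq:Fisher_MLE_Ex3}) can also be checked numerically. The sketch below assumes, hypothetically, that pixel $i$ samples the intensity at $x_i=(i-5)/4$ (a discretization that reproduces the closed form quoted above), and computes the Fisher information from the PMF by central differences:

```python
import numpy as np

# Hypothetical discretization: pixel i samples the intensity at x_i = (i-5)/4.
# This choice reproduces the closed-form J_3 quoted in the text.
x = (np.arange(1, 10) - 5) / 4

def pmf3(p1, c):
    """PMF over the nine pixels for I_3(x; p1) proportional to (p1 - c*x)^2."""
    f = (p1 - c * x) ** 2
    return f / f.sum()

def j3_numeric(p1, c, h=1e-5):
    """Expected unit Fisher information, sum_i (dP_i/dp1)^2 / P_i, by central differences."""
    P = pmf3(p1, c)
    dP = (pmf3(p1 + h, c) - pmf3(p1 - h, c)) / (2 * h)
    return float(np.sum(dP**2 / P))

def j3_closed(p1, c):
    """Closed form for J_3(p1) from the text."""
    return 240 * c**2 / (12 * p1**2 + 5 * c**2) ** 2
```

For $c=0$ (perfect nulling) the PMF is uniform for every $p_1\neq 0$, so `j3_numeric` returns zero, consistent with the flat likelihood discussed earlier in this section.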
Suppose that we are designing an experiment where the output intensity takes the form of $I_3(x;p_1)$, and we wish to determine the optimal value of $c$ that, on average, will produce the best parameter estimate for any true value of $p_1$ within the range of interest, i.e., the smallest expected error $\sigma(p_1)=J_3(p_1)^{-1/2}$. One approach to do so is by minimizing the average value of the variance $\sigma(p_1)^2$ over the interval $p_1\in[-1,1]$, which is given by \begin{align} \lrangle{\sigma^2} &= \frac{1}{2}\int_{-1}^1 \sigma(p_1)^2 \mathrm{d} p_1 \nonumber\\ &= \frac{1}{480c^2}\int_{-1}^1 (12p_1^2+5c^2)^2 \mathrm{d} p_1 \nonumber \\[4pt] &= \frac{5}{48}c^2 + \frac{3}{25}\frac{1}{c^2}+\frac{1}{6}. \end{align} This function is plotted as a solid line in Fig.~\ref{fig:MLE_sigma_vs_c_Ex3}. \begin{figure} \centering \includegraphics[scale=.62]{Figures/MLE/sigma_vs_c/sigma_vs_c_Ex3.pdf} \caption{Expected variances (averaged over $p_1$) for parameter estimates based on measurements of $I_3(x;p_1)$ containing one detected photon (solid line) and one emitted photon (dashed line), plotted as a function of $c$. For the latter case, the error is scaled by the ratio between $I_0$ and the source power $\Psi_s$, which can be treated as a unitless quantity (see footnote \ref{footnote:I_0_power_units} on page \pageref{footnote:I_0_power_units}).} \label{fig:MLE_sigma_vs_c_Ex3} \end{figure} (The dashed line will be explained shortly.) Note that for a multi-photon measurement, the variance scales as $1/\mathcal{N}$. The average error $\lrangle{\sigma^2}$ is minimized when $c=\pm(144/125)^{1/4}\approx \pm 1.036$, in close agreement with the above prediction that the optimal value of $c$ is on the order of unity. As alluded to in the previous section, all of the statistics and performance metrics discussed thus far have pertained exclusively to photons detected by the sensor.
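The optimization over $c$ can also be performed numerically, which is useful in cases where no closed form is available. A minimal sketch using a grid search, with $\sigma^2=1/J_3$ taken from the closed-form Fisher information:

```python
import numpy as np

def avg_var_detected(c, n=4001):
    # <sigma^2>: the variance 1/J_3(p1) averaged uniformly over p1 in [-1, 1].
    p = np.linspace(-1, 1, n)
    sigma2 = (12 * p**2 + 5 * c**2) ** 2 / (240 * c**2)
    return float(sigma2.mean())

# Grid search for the optimal off-null constant c.
cs = np.linspace(0.5, 2.0, 3001)
c_opt = float(cs[np.argmin([avg_var_detected(c) for c in cs])])
```

The grid minimum lands at $c\approx 1.036=(144/125)^{1/4}$, matching the analytic optimum.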
However, the information contained in each detected photon is not the only thing to take into consideration when designing an experiment. In a typical experiment, the light source emits a constant optical power $\Psi_s$, of which some fraction reaches the detector. The power incident on the detector, which is given by \begin{equation} \Psi_d(p_1) = \int_{-1}^1 I_3(x;p_1)\mathrm{d} x = I_0 \frac{2(3p_1^2+c^2)}{3(|c|+1)^2}\label{eq:MLE_Ex3_Phi} \end{equation} in this example\footnote{The right-hand side of Eq.~(\ref{eq:MLE_Ex3_Phi}) implicitly has units of $I_0$ times the unitless coordinate $x$ (acquired from the integration), i.e., units of power.\label{footnote:I_0_power_units}}, is usually smaller than $\Psi_s$ by some ratio that is influenced by the choice of measurement scheme (e.g., an off-null configuration). During the exposure time of the sensor, the number of detected photons is (on average) equal to $\mathcal{N}=(\Psi_d/\Psi_s)\mathcal{N}_s$, where $\mathcal{N}_s$ is the number of photons emitted by the source. If the speed of the measurement is a priority, then it is important to make efficient use of the source, i.e., to maximize the information acquired per emitted photon. To that end, let us define the expected unit Fisher information per emitted photon as \begin{equation} J^{({\rm e})}(p_1)=\frac{\Psi_d}{\Psi_s}J(p_1), \end{equation} so that the total information acquired in a given time interval is $\mathcal{N}J(p_1)=\mathcal{N}_s J^{({\rm e})}(p_1)$. (Obviously, this is not to suggest that each photon carries information about $p_1$ at the moment that it is emitted from the source; rather, $J^{({\rm e})}(p_1)$ is the average information acquired at the detector plane per photon emitted by the source.)
For the present example, using Eqs.~(\ref{eq:Fisher_MLE_Ex3}) and (\ref{eq:MLE_Ex3_Phi}), the Fisher information per emitted photon is found to be \begin{equation} J_3^{({\rm e})}(p_1) = \frac{I_0}{\Psi_s}\sp\frac{160c^2(3p_1^2+c^2)}{(12p_1^2+5c^2)^2(|c|+1)^2}.\label{eq:MLE_Ex3_FisherEm} \end{equation} This result is plotted in Fig.~\ref{fig:MLE_FisherEm_Ex3} for several values of $c$. \begin{figure} \centering \includegraphics[height=2.25in]{Figures/MLE/Fisher/FisherEmPlot_Ex3.pdf} \caption{Expected unit Fisher information $J_3^{({\rm e})}(p_1)$ per emitted photon for a measurement of $I_3(x;p_1)$, scaled by the ratio of source power to $I_0$ and plotted on a logarithmic scale for several values of $c$.} \label{fig:MLE_FisherEm_Ex3} \end{figure} In comparison to Fig.~\ref{fig:MLE_Fisher_Ex3}, notice that the peak in $J_3^{({\rm e})}(p_1)$ when $|c|\ll 1$ is much less pronounced than that of $J_3(p_1)$. This is because as $|c|$ decreases, the amount of information per detected photon increases, but the number of detected photons decreases by nearly the same ratio. From Eq.~(\ref{eq:MLE_Ex3_FisherEm}), the minimum expected variance $\sigma^{({\rm e})}(p_1)^2=J_3^{({\rm e})}(p_1)^{-1}$ can be calculated for a measurement of one emitted photon, averaged over the range of interest of $p_1$:% \begin{align} \lrangle{(\sigma^{({\rm e})})^2} &= \frac{1}{2}\int_{-1}^1 \sigma^{({\rm e})}(p_1)^2 \mathrm{d} p_1 \nonumber\\[4pt] &= \frac{\Psi_s}{I_0}\frac{(|c|+1)^2}{320c^2}\int_{-1}^1 \frac{(12p_1^2+5c^2)^2}{3p_1^2+c^2} \mathrm{d} p_1 \nonumber \\[4pt] &= \frac{\Psi_s}{I_0}\frac{(|c|+1)^2}{480c^2} \left[\sqrt{3}\,c^3\arctan\Bigl(\!\mfrac{\sqrt{3}}{c}\hspace{.5pt}\Bigr) + 72 c^2 + 48 \right]\hspace{-1pt}. \end{align} This function is plotted as a dashed line in Fig.~\ref{fig:MLE_sigma_vs_c_Ex3}, shown in comparison to the average variance per detected photon derived earlier. 
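The closed form above can be minimized over $c$ numerically; a minimal sketch using a grid search:

```python
import math

def avg_var_emitted(c):
    # <(sigma^e)^2> in units of Psi_s / I_0, from the closed form above.
    return (abs(c) + 1) ** 2 / (480 * c**2) * (
        math.sqrt(3) * c**3 * math.atan(math.sqrt(3) / c) + 72 * c**2 + 48
    )

# Simple grid search over c in [0.5, 2.0] with step 0.0005.
cs = [i / 2000 for i in range(1000, 4001)]
c_opt = min(cs, key=avg_var_emitted)
```

The minimum falls below the value at the detected-photon optimum $c\approx 1.036$, illustrating that the two criteria favor different off-null strengths.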
A numerical calculation shows that the expected error per emitted photon is minimized when $c=\pm0.863$, which is slightly smaller than the optimal value $c=\pm 1.036$ for detected photons. This is due to the fact that for parameter values near $|p_1|=1$, the power on the detector is up to 10\% larger for $|c|=0.863$ than for $|c|=1.036$, compensating for the slight reduction in information per detected photon in the former case. Recall that in this example the intensity is normalized to have a peak value of $I_0$ regardless of the value of $c$. This is not particularly realistic, since in an actual off-null measurement, a change in the (spatially varying) off-null condition is likely to be accompanied by a global scaling factor in the measured intensity. In some cases, this could result in a much more dramatic difference between the Fisher information per emitted and detected photon than in this example. On a separate note, in situations where $\sigma(p_1)^2$ and $\sigma^{({\rm e})}(p_1)^2$ cannot be calculated analytically, the integral over $p_1$ can be evaluated numerically. If the numerical integration is too computationally expensive, a simpler merit function could be constructed by summing the variance over some appropriately chosen set of parameter values. \FloatBarrier \subsection{Far-from-null (high intensity) measurement}\label{sect:MLE_example4} For the final one-parameter example, consider the intensity distribution \begin{equation} I_4(x;p_1)=\Pi(x)\frac{1}{(|d|+2)^2}(p_1-x-d)^2, \end{equation} where the constant $d$ introduces a spatially uniform offset from the off-null condition considered in the previous example. When $d=0$, the intensity is identical to $I_3(x;p_1)$ with $c=1$, which was plotted previously in Fig.~\ref{fig:MLE_IntPlots_Ex3_offnull}(c). For comparison, Fig.~\ref{fig:MLE_IntPlots_Ex4} contains plots of $I_4(x;p_1)$ and the corresponding PMF for several positive values of \nolinebreak[3] $d$. 
(Symmetric results are obtained for negative $d$.) The likelihood functions $L_4(i|p_1)$ for each case are plotted in Fig.~\ref{fig:MLE_Lplot4}.% \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex9_1.pdf}\\[2pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex9_2.pdf}\\[2pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex9_3.pdf}\\[2pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex9_4.pdf}\\[2pt] \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex9_5.pdf} \caption{Plots of $I_4(x;p_1)$ (left axes) and $P_4(i|p_1)$ (right axes) for several values of $p_1$. Each row of plots corresponds to a different value of $d$, as indicated in the leftmost plot.} \label{fig:MLE_IntPlots_Ex4} \end{figure} \begin{figure} \centering \includegraphics[width=.95\linewidth]{Figures/MLE/L/Lplots9.pdf} \caption{Likelihood functions $L_4(i|p_1)$ associated with each pixel $i$ in a measurement with theoretical intensity distribution $I_4(p_1)$, plotted for several nonnegative values of $d$. Notice that the effect of $d$ is simply a horizontal translation; when $d\gg 1$, the range of interest $p_1\in[-1,1]$ only contains a small portion of the left tail of the distribution. Symmetric results are obtained for the corresponding negative values of $d$, for which the curves are translated in the opposite direction (with respect to the $d=0$ case).} \label{fig:MLE_Lplot4} \end{figure} Observe that when $d=1$, the intensity profile and likelihood function are translated in parameter space so that they are symmetric about $p_1=1$. As $d$ increases, the distribution continues to shift farther away from the off-null condition of $I_3(x;p_1)$, so that the intensity becomes large and uniform over the range of interest of $p_1$ and the likelihood function becomes very flat. 
As seen in Fig.~\ref{fig:MLE_Fisher_Ex4}, the expected Fisher information per detected photon\footnote{Henceforth, all mentions of the Fisher information refer to the expected information per detected photon unless specified otherwise.} decreases rapidly as $d$ increases. \begin{figure} \centering \includegraphics[height=2.25in]{Figures/MLE/Fisher/FisherPlot_Ex4.pdf} \caption{Expected unit Fisher information $J_4(p_1)$ for a measurement of $I_4(x;p_1)$, plotted on a logarithmic scale for several values of $d$. The $d=0$ case is identical to $J_3(p_1)$ with $c=1$ (see Fig.~\ref{fig:MLE_Fisher_Ex3}). For negative values of $d$, each curve is flipped about the vertical $p_1=0$ axis.} \label{fig:MLE_Fisher_Ex4} \end{figure} Following the same procedure as in the previous example, it can be shown that the average estimation error over the parameter range is minimized when $d=0$. (This holds true when optimizing for detected or emitted photons, though as noted before, the latter result is in part due to the choice of normalization of the intensity.) The takeaway from this example is that it illustrates the statistical advantage of off-null measurements over a ``far-from-null'' experimental configuration in which the parameter of interest causes a small fractional change in the output intensity. Although the parameter estimation technique outlined in Section \ref{sect:MLE_optics} is only useful for imaging experiments where the off-null condition (and thus the output intensity) varies with position, by looking at Fig.~\ref{fig:MLE_IntPlots_Ex4} one can also appreciate the principle of traditional off-null ellipsometry, in which only the total power is measured. In that case, the off-null configuration greatly increases the contrast of the variation in power with respect to $p_1$, enabling a more accurate measurement while placing less stringent requirements on the fidelity of the sensor. 
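The information loss far from null can be quantified directly. Since $I_4(x;p_1)\propto[(p_1-d)-x]^2$, its PMF is that of $I_3$ (with $c=1$) evaluated at the shifted parameter $p_1-d$, so $J_4(p_1)=240/[12(p_1-d)^2+5]^2$, which collapses rapidly as $|d|$ grows. A numerical sketch, again under the hypothetical assumption that the pixels sample the intensity at $x_i=(i-5)/4$:

```python
import numpy as np

x = (np.arange(1, 10) - 5) / 4  # assumed pixel sample points

def j4_numeric(p1, d, h=1e-5):
    """Fisher information for I_4 via central differences on the pixel PMF."""
    def pmf(p):
        f = (p - x - d) ** 2  # the 1/(|d|+2)^2 prefactor cancels on normalization
        return f / f.sum()
    dP = (pmf(p1 + h) - pmf(p1 - h)) / (2 * h)
    return float(np.sum(dP**2 / pmf(p1)))
```

Evaluating at a fixed $p_1$ for increasing $d$ shows the monotone decay of information described above.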
More generally, a similar argument can be made for a broader class of optical experiments that are applications of the weak measurement formalism in quantum mechanics \cite{Aharonov_1988,Tamir_2013,Svensson_2013}, wherein preselected and postselected states are chosen to enhance the sensitivity to small variations of an unknown parameter. Examples of such applications include the measurement of small optical beam shifts \cite{Hosten_2008,Dennis_2012} and the focused beam scatterometry experiment discussed in Ref.~\cite{Vella_2018_fbs_arxiv}. \section{Two-parameter optical MLE examples}\label{sect:MLE_examples_2param} To illustrate the use of MLE in the multiple-parameter case, this section contains several intensity distributions that depend on two parameters ${\bf p}=(p_1,p_2)$. The procedures for calculating the PMF, FIM, and expected error are fundamentally the same as in the one-parameter case, although the algebra is more complicated. Rather than dwelling on the mathematical details, numerical results are presented in the following discussion. This is representative of most real-world applications, in which MLE techniques are typically implemented numerically. The intensity distributions discussed in Sections \ref{sect:MLE_example5} through \ref{sect:MLE_example10} are summarized in Table \ref{tbl:MLE_2param_intensity_dist}. 
\begin{table} \renewcommand{\arraystretch}{.8} \begin{center} \begin{tabular}{c@{\hspace{20pt}}l} \toprule Section & Intensity distribution\\ \midrule \phantom{a}&\\[-8pt] \ref{sect:MLE_example5} & $I_5(x;{\bf p}) = 0.563\sp\Pi(x)[2 + p_1 x + p_2 \sin(\pi x)]$\\[10pt] \ref{sect:MLE_example6} & $I_6(x;{\bf p}) = 0.250\sp\Pi(x)[2 + p_1 x + p_2 \cos(\pi x)]$\\[8pt] \ref{sect:MLE_example7} & $I_7(x;{\bf p}) = \begin{cases} 0.5\Pi(x)(1 + p_1 x),&x<0\\ 0.5\Pi(x)(1 + p_2 x),&x\geq 0 \end{cases}$\\[20pt] \ref{sect:MLE_example8} & $I_8(x;{\bf p}) = \begin{cases} 0.5\Pi(x)\left[1 + 2p_1 (x+0.625)\right],&x<-0.125\\ 0.5\Pi(x),&-0.125\leq x< 0.125\\ 0.5\Pi(x)\left[1 + 2p_2 (x-0.625)\right],&x\geq 0.125 \end{cases}$\\[26pt] \ref{sect:MLE_example9} & $I_9(x;{\bf p}) = 0.125\Pi(x)\bigl[(p_1-x)^2 + (p_2-\cos(\pi x))^2\sp\bigr]$\\[10pt] \ref{sect:MLE_example10} & $I_{10}(x;{\bf p}) = 0.320\Pi(x)\!\left[(p_1-0.25x)^2 + (p_2-0.25\cos(\pi x))^2\right]$\\ \bottomrule \end{tabular} \caption{Intensity distributions for each example considered in Section \ref{sect:MLE_examples_2param}.} \label{tbl:MLE_2param_intensity_dist} \end{center} \end{table} Similarly to the one-parameter examples, each intensity distribution is normalized so that it attains a maximum value of $I_0$ over the region of interest $-1\leq p_1,p_2\leq 1$. The distributions considered in Sections \ref{sect:MLE_example5} and \ref{sect:MLE_example6} each have a $p_1$ term with linear spatial variation and a $p_2$ term with sinusoidal spatial variation, serving as simple examples for the two-parameter case. Sections \ref{sect:MLE_example7} and \ref{sect:MLE_example8} contain two thought-provoking (albeit unrealistic) examples that illustrate the mathematical mechanisms that can lead to statistical correlations between the parameter estimates for $p_1$ and $p_2$. Finally, a pair of two-parameter off-null measurements are discussed in Sections \ref{sect:MLE_example9} and \ref{sect:MLE_example10}. 
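For numerical experimentation with these examples, the distributions in Table~\ref{tbl:MLE_2param_intensity_dist} can be transcribed directly; in the sketch below, $\Pi(x)$ is taken as an indicator function on the sensor interval $[-1,1]$ (the convention assumed throughout):

```python
import numpy as np

def Pi(x):
    # Rect function: 1 on the sensor interval [-1, 1], else 0 (assumed convention).
    return np.where(np.abs(x) <= 1, 1.0, 0.0)

def I5(x, p1, p2): return 0.563 * Pi(x) * (2 + p1 * x + p2 * np.sin(np.pi * x))
def I6(x, p1, p2): return 0.250 * Pi(x) * (2 + p1 * x + p2 * np.cos(np.pi * x))
def I7(x, p1, p2):
    return 0.5 * Pi(x) * np.where(x < 0, 1 + p1 * x, 1 + p2 * x)
def I8(x, p1, p2):
    return 0.5 * Pi(x) * np.where(
        x < -0.125, 1 + 2 * p1 * (x + 0.625),
        np.where(x < 0.125, 1.0, 1 + 2 * p2 * (x - 0.625)))
def I9(x, p1, p2):
    return 0.125 * Pi(x) * ((p1 - x) ** 2 + (p2 - np.cos(np.pi * x)) ** 2)
def I10(x, p1, p2):
    return 0.320 * Pi(x) * ((p1 - 0.25 * x) ** 2 + (p2 - 0.25 * np.cos(np.pi * x)) ** 2)
```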
\FloatBarrier \subsection{Linear and sinusoidal variations (case 1)}\label{sect:MLE_example5} For the first two-parameter example, consider the intensity distribution \begin{equation} I_5(x;{\bf p}) = 0.563\sp\Pi(x)[2 + p_1 x + p_2 \sin(\pi x)]\sp,\label{eq:MLE_I_Ex5} \end{equation} which is valid over the region of interest $-1\leq p_1,p_2\leq 1$. Similarly to the first example in Section \ref{sect:MLE_examples_1param}, $I_5(x;{\bf p})$ depends linearly on the product of $p_1$ and $x$. The dependence on $p_2$ is also linear, but this additional term varies sinusoidally across the sensor. Therefore, variations in $p_1$ and $p_2$ result in distinct changes in the shape of the intensity $I_5(x;{\bf p})$ and the PMF $P_5(i|{\bf p})$, as shown in Fig.~\ref{fig:MLE_IntPlots_Ex5}. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex21.pdf} \caption{Plots of $I_5(x;{\bf p})$ (left axes) and $P_5(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex5} \end{figure} For instance, when $p_2=0$ (the third row of plots), the intensity is strictly a linear function of $x$ with slope $p_1$. When $p_1=0$ (the third column of plots), it is a sine function with a DC offset. For all other cases, the intensity is a linear combination of the two. For the two-parameter case, the likelihood $L_5({\bf p}|i)=P_5(i|{\bf p})$ can be plotted in two dimensions as a function of $p_1$ and $p_2$. The likelihood functions associated with each pixel are shown in Fig.~\ref{fig:MLE_L2_Ex5}, with contour lines drawn as a visual aid to identify paths of constant likelihood. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex21.pdf} \caption{Likelihood functions $L_5({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_5(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex5} \end{figure} These plots have several interesting features. 
First, notice that $L_5({\bf p}|i=5)$ is constant, meaning that pixel 5 provides no useful information about $p_1$ and $p_2$. (Incidentally, this was also the case for the one-parameter intensity distributions $I_1$ and $I_2$. Since the signal from pixel 5 has no effect on the MLE, it can be ignored.) Secondly, the likelihood functions for pixels 1 and 9 are independent of $p_2$ (as evident from the vertical contour lines) since $\sin(\pi x)=0$ for $x=\pm 1$. In contrast, the likelihood functions associated with pixels 4 and 6 depend more strongly on $p_2$ than $p_1$ as a consequence of the fact that $\sin(\pi x)$ has a larger slope near the center of the sensor than the linear term $x$. Lastly, note that the paths of constant likelihood generally have negative (or vertical) slopes in parameter space. Roughly speaking, this means that if $p_1$ increases and $p_2$ decreases by a similar amount (or if $p_2$ increases and $p_1$ decreases), the likelihood function will only change slightly, making it difficult to distinguish linear combinations of parameters along this direction. On the other hand, a simultaneous increase (or simultaneous decrease) in $p_1$ and $p_2$ will tend to cause a more significant change in the likelihood function, making it easier to distinguish this type of variation in ${\bf p}$. The patterns described above can be quantified by calculating the estimation error based on the $2\times 2$ expected Fisher information matrix, whose elements may be computed using either form of Eq.~(\ref{eq:MLE_Fisher}). For a measurement of $\mathcal{N}=1000$ photons with true parameter values ${\bf p}=(0,0)$, the FIM and its inverse are found to be \begin{equation} \renewcommand{\arraystretch}{.8} \mathcal{N}\mathbb{J}_5 = \left[\! \begin{array}{rr} 104.2 & 67.1 \\ 67.1 & 111.1 \end{array} \!\right]\hspace{-1pt}, \qquad\quad (\mathcal{N}\mathbb{J}_5)^{-1} = \left[\! 
\begin{array}{rr} 0.0157 & -0.0095 \\ -0.0095 & 0.0147 \end{array} \!\right]\hspace{-1pt}.\label{eq:MLE_Ex5_Fisher} \end{equation} As discussed in Section \ref{sect:MLE_overview}, $(\mathcal{N}\mathbb{J}_5)^{-1}$ places a lower limit on the covariance matrix for a 1000-photon measurement of $p_1$ and $p_2$. Since its off-diagonal elements are fairly large in relation to its diagonal elements, a strong coupling between parameters (i.e., large covariance) is expected. Indeed, the principal axes of the error ellipse are given by the eigenvectors $[0.69;0.72]$ and $[0.72;-0.69]$, and the axis lengths (the square roots of the corresponding eigenvalues) are $0.076$ and $0.157$, respectively. Thus, the major axis of the ellipse is oriented at approximately $-45^\circ$ in parameter space, and the standard deviation error is about twice as large along the $-45^\circ$ direction as the $+45^\circ$ direction.\footnote{It is only meaningful to refer to angles in parameter space when $p_1$ and $p_2$ have the same units and are normalized to their respective ranges of interest, as they are in this discussion.} In this example, it turns out that similar results are obtained for all values of ${\bf p}$ within the region of interest. The error ellipses for a selection of true parameter values are plotted in Fig.~\ref{fig:MLE_ellipses_Ex5}. \begin{figure} \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx21.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_5(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex5} \end{figure} Given a measured intensity $\tilde{\bf I}$, the magnitude and orientation of the uncertainty of the MLE are also manifested in the shape of the likelihood function $L_5({\bf p}|\tilde{\bf I})$ and its logarithm $\ell_5({\bf p}|\tilde{\bf I})$. 
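The ellipse parameters quoted above follow from an eigendecomposition of $(\mathcal{N}\mathbb{J}_5)^{-1}$; a minimal sketch:

```python
import numpy as np

# Inverse FIM (N J_5)^(-1) for N = 1000 photons at p = (0, 0), from the text.
cov = np.array([[0.0157, -0.0095],
                [-0.0095, 0.0147]])

evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
minor, major = np.sqrt(evals)        # 1-sigma semi-axis lengths of the error ellipse
# Orientation of the major axis (the slope is invariant under v -> -v):
angle = np.degrees(np.arctan(evecs[1, 1] / evecs[0, 1]))
```

The major axis comes out near $-45^\circ$ with roughly twice the length of the minor axis, as stated in the text.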
Fig.~\ref{fig:MLE_LL2_1000ph_Ex5} contains two examples of the log-likelihood functions obtained for simulated 1000-photon measurements with true parameter values ${\bf p}=(0,0)$ and ${\bf p}=(0.63,-0.25)$. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex21.pdf} \caption{Log-likelihood functions $\ell_5({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_5(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(-0.115,0.064)$ and ${\bf p}=(0.673,-0.366)$, respectively. The dashed contour line indicates where the likelihood $L_5({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE.} \label{fig:MLE_LL2_1000ph_Ex5} \end{figure} Again, these plots contain several interesting features. First, notice that the contours of equal likelihood are approximately elliptical. This behavior is characteristic of a bivariate Gaussian distribution $f({\bf p})=f_0\exp(-\frac{1}{2}{\bf p}^{\rm T}\bm{\Sigma}^{-1}\sp{\bf p})$ with covariance matrix $\bm{\Sigma}$, for which the locus of points satisfying ${\bf p}^{\rm T}\bm{\Sigma}^{-1}\sp{\bf p}=\kappa^2$ (for some constant $\kappa$) traces out an ellipse \cite{Friendly_2013}. Thus, the shape of $\ell_5({\bf p}|\tilde{\bf I})$ supports the claim made earlier (see Eq.~(\ref{eq:MLE_P(p|I)_Gaussian})) that the posterior probability distribution $P({\bf p}|\tilde{\bf I})$, which is a scaled version of the likelihood if no prior distribution is assumed, closely approximates a Gaussian distribution when a large number of photons are measured. 
Comparing Figs.~\ref{fig:MLE_ellipses_Ex5} and \ref{fig:MLE_LL2_1000ph_Ex5}, one can also see that the likelihood function is elongated along the direction with the largest expected estimation error. In Section \ref{sect:MLE_examples_1param} it was noted that the estimation error is largest when the likelihood function is nearly flat; for the multiple-parameter case, it can be further specified that the error is largest along the \emph{direction} where the likelihood function is flattest, i.e., the direction perpendicular to the local gradient of $\ell$ with respect to ${\bf p}$. Each plot in Fig.~\ref{fig:MLE_LL2_1000ph_Ex5} contains a red dot representing the MLE for the measurement, i.e., the location of the peak of $\ell_5({\bf p}|\tilde{\bf I})$. The estimated parameter values (which are listed in the figure caption) differ considerably from the true values, with errors as large as $\sim\! 0.11$ for each parameter. The standard deviation confidence interval for the MLE, which is outlined by a red dashed line, consists of the region where the likelihood function $L_5({\bf p}|\tilde{\bf I})$ is greater than or equal to $1/\sqrt{e}$ times its peak value.\footnote{For the Gaussian distribution $f({\bf p})$ mentioned above, the $\kappa=1$ ellipse encloses one standard deviation. Along this contour, the function value drops to $f_0\exp(-\frac{1}{2})=f_0/\sqrt{e}$.} This is equivalent to an additive decrease in the log-likelihood by $\ln(e^{-1/2})=-0.5$. Notice that this region is elliptical, and its size and shape are virtually identical to the nearest ellipse in Fig.~\ref{fig:MLE_ellipses_Ex5}. In fact, by evaluating the expected FIM at the MLE with $\mathcal{N}=1000$, an extremely close agreement is found between the predicted covariance matrix $(\mathcal{N}\mathbb{J}_5)^{-1}$ and the standard deviation confidence interval of $\ell_5({\bf p}|\tilde{\bf I})$. (When plotted together, the ellipses are virtually indistinguishable even when zoomed in.) 
In general, the correlation between the two grows stronger as the number of photons increases. In this example, 1000 photons are sufficient to obtain a very close agreement; in an experiment with smaller expected error, fewer photons would be required. To conclude this example, similarly to Sections \ref{sect:MLE_example1} and \ref{sect:MLE_example2}, a Monte Carlo simulation was performed for 50,000 trials of a 1000-photon simulated measurement of $I_5(x;{\bf p})$ for which the true parameter values are given by ${\bf p}=(0,0)$. A histogram of the maximum likelihood estimates obtained in all trials is shown in Fig.~\ref{fig:MLE_hist2_Ex5}(a); an overhead view of the distribution is also shown in Fig.~\ref{fig:MLE_hist2_Ex5}(b). \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Hist2/hist2_Ex21.pdf} \caption{(a) Histogram of the maximum likelihood estimates obtained from 50,000 trials of a simulated 1000-photon measurement of $I_5(x;{\bf p})$ with true parameter value ${\bf p}=(0,0)$. (b) Overhead view of the distribution shown in plot (a), with the color of each pixel indicating the number of trials for which the MLE was within a given interval. The black ellipse at the center of the plot represents the expected standard deviation error based on the Fisher information matrix.} \label{fig:MLE_hist2_Ex5} \end{figure} The data closely resembles a Gaussian distribution with the same orientation as the expected error ellipse, which is shown in black in the overhead view. The statistical covariance matrix of the data matches the matrix $(\mathcal{N}\mathbb{J}_5)^{-1}$ given in Eq.~(\ref{eq:MLE_Ex5_Fisher}) to within three significant digits. 
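The Monte Carlo experiment can be reproduced in a few lines. The sketch below uses 2,000 trials (rather than 50,000) and a grid-search MLE, and again assumes, hypothetically, that the nine pixels sample the intensity at $x_i=(i-5)/4$, a discretization that reproduces the FIM in Eq.~(\ref{eq:MLE_Ex5_Fisher}):

```python
import numpy as np

x = (np.arange(1, 10) - 5) / 4  # assumed pixel sample points

# Log-PMF of I_5 over a grid of candidate parameter values (p1, p2).
grid = np.linspace(-1, 1, 101)
P1, P2 = np.meshgrid(grid, grid, indexing="ij")
F = 2 + P1[..., None] * x + P2[..., None] * np.sin(np.pi * x)
logP = np.log(F / F.sum(axis=-1, keepdims=True)).reshape(-1, 9)

rng = np.random.default_rng(0)
n_trials, n_photons = 2000, 1000
true_pmf = np.full(9, 1 / 9)  # PMF of I_5 at the true parameters p = (0, 0)
estimates = np.empty((n_trials, 2))
for t in range(n_trials):
    counts = rng.multinomial(n_photons, true_pmf)  # simulated photon counts
    k = int(np.argmax(logP @ counts))              # grid-search MLE
    estimates[t] = grid[k // 101], grid[k % 101]

cov = np.cov(estimates.T)  # compare with (N J_5)^(-1) from the text
```

With these settings the sample covariance reproduces $(\mathcal{N}\mathbb{J}_5)^{-1}$ to within sampling error.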
\subsection{Linear and sinusoidal variations (case 2)}\label{sect:MLE_example6} For the second two-parameter example, consider the intensity distribution \begin{equation} I_6(x;{\bf p}) = 0.250\sp\Pi(x)[2 + p_1 x + p_2 \cos(\pi x)]\sp, \end{equation} which is similar to $I_5(x;{\bf p})$, but with the sine term replaced by a cosine. The intensity and PMF are plotted for several parameter values in Fig.~\ref{fig:MLE_IntPlots_Ex6}, and the likelihood functions for each pixel are shown in Fig.~\ref{fig:MLE_L2_Ex6}. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex22.pdf} \caption{Plots of $I_6(x;{\bf p})$ (left axes) and $P_6(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex6} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex22.pdf} \caption{Likelihood functions $L_6({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_6(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex6} \end{figure} In this example, it can be seen\nolinebreak[3] that the paths of constant likelihood have different orientations for each pixel. This implies, for instance, that a simultaneous increase in $p_1$ and $p_2$ will cause a significant change in $L_6({\bf p}|i\!=\!1)$, but very little change in $L_6({\bf p}|i\!=\!9)$; meanwhile, a simultaneous increase in $p_1$ and decrease in $p_2$ will do just the opposite. The reason for this can be understood by examining the plots of $x$, $\sin(\pi x)$, and $\cos(\pi x)$ shown in Fig.~\ref{fig:MLE_sincos}. \begin{figure} \centering \includegraphics[height=2.05in]{Figures/MLE/sincos/sincosplot.pdf} \caption{Spatial variations of each term appearing in intensity distributions $I_5(x;{\bf p})$ and $I_6(x;{\bf p})$.} \label{fig:MLE_sincos} \end{figure} Whereas $x$ and $\sin(\pi x)$ always have the same sign, this is not the case for $x$ and $\cos(\pi x)$. 
Therefore, for the intensity distribution $I_5(x;{\bf p})$, an increase in $p_1$ can be compensated (to a certain extent) by a decrease in $p_2$. The distribution $I_6(x;{\bf p})$ is less prone to this situation since any linear combination of $p_1$ and $p_2$ produces distinct fluctuations at different pixels. However, correlations can still arise in cases where very few photons are incident on one or more pixels (for example, when $p_1=p_2=1$), since the contributions of each pixel to the log-likelihood function $\ell_6({\bf p}|\tilde{\bf I})$ associated with a measured intensity $\tilde{\bf I}$ may be imbalanced. Based on the above observations, one can reasonably expect there to be a smaller correlation between the estimated parameters from a measurement of $I_6(x;{\bf p})$ than in the previous example. As a matter of fact, for ${\bf p}=(0,0)$, the FIM and its inverse are diagonal, indicating that there is zero covariance: \begin{equation} \renewcommand{\arraystretch}{.8} \mathcal{N}\mathbb{J}_6 = \left[\! \begin{array}{cc} 104.2 & 0 \\ 0 & 135.8 \end{array} \!\right]\hspace{-1pt}, \qquad\quad (\mathcal{N}\mathbb{J}_6)^{-1} = \left[\! \begin{array}{cc} 0.0096 & 0 \\ 0 & 0.0074 \end{array} \!\right]\hspace{-1pt},\label{eq:MLE_Ex6_Fisher} \end{equation} where $\mathcal{N}=1000$. The eigenvectors of $(\mathcal{N}\mathbb{J}_6)^{-1}$ are $[1;0]$ and $[0;1]$, and the square roots of the corresponding eigenvalues are 0.098 and 0.086, respectively. Thus, the error ellipse is nearly circular, with its principal axes oriented along the $p_1$ and $p_2$ axes. The error ellipses for a selection of parameter values are shown in Fig.~\ref{fig:MLE_ellipses_Ex6}. 
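The diagonal FIM can be verified numerically from the PMF of $I_6$, using central differences and the same assumed pixel discretization $x_i=(i-5)/4$ as before:

```python
import numpy as np

x = (np.arange(1, 10) - 5) / 4  # assumed pixel sample points

def pmf6(p1, p2):
    f = 2 + p1 * x + p2 * np.cos(np.pi * x)
    return f / f.sum()

def fim(pmf, p1, p2, h=1e-5):
    """2x2 expected unit Fisher information matrix via central differences."""
    P = pmf(p1, p2)
    g1 = (pmf(p1 + h, p2) - pmf(p1 - h, p2)) / (2 * h)
    g2 = (pmf(p1, p2 + h) - pmf(p1, p2 - h)) / (2 * h)
    J = np.empty((2, 2))
    J[0, 0] = np.sum(g1 * g1 / P)
    J[0, 1] = J[1, 0] = np.sum(g1 * g2 / P)
    J[1, 1] = np.sum(g2 * g2 / P)
    return J

J6 = 1000 * fim(pmf6, 0.0, 0.0)  # FIM for a 1000-photon measurement at p = (0, 0)
```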
\begin{figure} \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx22.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_6(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex6} \end{figure} As seen in the plot, the expected error is relatively uniform over the entire parameter range, with the smallest error occurring when $p_2$ is close to 1. The covariance between $p_1$ and $p_2$ is also generally small, with one notable exception: as $|p_1|\to 1$ and $p_2\to 1$, the two parameters become highly correlated. At the far upper corners of the region of interest, the error ellipse resembles a straight line, indicating complete correlation between $p_1$ and $p_2$. (Even so, the magnitude of the uncertainty of each parameter is still smaller than the expected errors for other parameter values.) From the uppermost plots in Fig.~\ref{fig:MLE_IntPlots_Ex6}, it can be seen that this correlation arises when the intensity drops to zero at either edge of the sensor (near pixel 1 or pixel 9). This happens because the intensity distribution and the likelihood functions $L({\bf p}|i)$ are distributed such that the remaining pixels cannot easily distinguish between all possible combinations of $p_1$ and $p_2$, as alluded to in the previous paragraph.\footnote{The astute reader might wonder why the expected error is asymmetric with respect to $p_2$ despite the fact that the last term of $I_6(x;{\bf p})$ exhibits symmetry with respect to both $p_2$ and $x$. The answer is that the asymmetry is a sampling artifact of the 9-pixel array, since pixels 1 and 9 sample the periodic function $\cos(\pi x)$ at points that are offset by $2\pi$ radians. This causes the total measured intensity to vary with $p_2$ despite the fact that $\int_{-1}^1\cos(\pi x)\mathrm{d} x=0$. 
As is often the case, the error is smallest in this example when the total intensity is minimized, which occurs when $p_2=1$.} The log-likelihood functions $\ell_6({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_6(x;{\bf p})$ with true parameter values ${\bf p}=(0,0)$ and ${\bf p}=(0.63,-0.25)$ are shown in Fig.~\ref{fig:MLE_LL2_1000ph_Ex6}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex22.pdf} \caption{Log-likelihood functions $\ell_6({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_6(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(-0.043,-0.014)$ and ${\bf p}=(0.591,-0.278)$, respectively. The dashed contour line indicates where the likelihood $L_6({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE.} \label{fig:MLE_LL2_1000ph_Ex6} \end{figure} As in the previous example, the contours of equal likelihood are highly elliptical near the peak, indicating that the likelihood is approximately a Gaussian distribution. The Gaussian approximation weakens away from the peak, with the contours of $\ell_6({\bf p}|\tilde{\bf I})$ becoming slightly distorted. Compared to $\ell_5({\bf p}|\tilde{\bf I})$, the distribution is much more symmetric due to the small covariance between $p_1$ and $p_2$ (for these particular true parameter values). The standard deviation confidence interval, indicated by the dashed red line, is also highly symmetric and slightly narrower than it was in the previous example, matching the expected error based on the FIM. 
The uncertainty is also reflected in the distribution of the MLEs obtained from 50,000 trials of a 1000-photon measurement of $I_6(x;{\bf p})$, as shown in Fig.~\ref{fig:MLE_hist2_Ex6}. The diagonal elements of the covariance matrix of the simulated data agree with the matrix $(\mathcal{N}\mathbb{J}_6)^{-1}$ given in Eq.~(\ref{eq:MLE_Ex6_Fisher}) to within two significant digits; the off-diagonal elements of the matrix are very close to zero (approximately 500 times smaller than the diagonal elements). \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Hist2/hist2_Ex22.pdf} \caption{(a) Histogram of the maximum likelihood estimates obtained from 50,000 trials of a simulated 1000-photon measurement of $I_6(x;{\bf p})$ with true parameter value ${\bf p}=(0,0)$. (b) Overhead view of the distribution shown in plot (a), with the color of each pixel indicating the number of trials for which the MLE was within a given interval. The black ellipse at the center of the plot represents the expected standard deviation error based on the Fisher information matrix.} \label{fig:MLE_hist2_Ex6} \end{figure} \subsection{Piecewise linear dependence (nonzero covariance)}\label{sect:MLE_example7} The next two examples involve intensity distributions for which fluctuations due to $p_1$ and $p_2$ occur in completely separate portions of the sensor. Although this is not a particularly common real-world scenario, some interesting insight can be gained from the analysis. First, consider the piecewise intensity distribution \begin{equation} I_7(x;{\bf p}) = \begin{cases} 0.5\Pi(x)(1 + p_1 x),&x<0,\\ 0.5\Pi(x)(1 + p_2 x),&x\geq 0, \end{cases} \end{equation} which is plotted in Fig.~\ref{fig:MLE_IntPlots_Ex7}.
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex23.pdf} \caption{Plots of $I_7(x;{\bf p})$ (left axes) and $P_7(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex7} \end{figure} This distribution is similar to the one-parameter linear intensity profile $I_1(x;p_1)$, except that the slopes on the left and right halves of the sensor are proportional to $p_1$ and $p_2$, respectively. Since the intensities on each half of the sensor only depend on a single parameter, one would expect the parameters to be completely uncoupled, enabling an estimate with zero covariance. However, this turns out not to be the case when applying the MLE approach outlined in Section \ref{sect:MLE_optics}. (Note: the MLE formalism only requires the PMF to be twice differentiable with respect to ${\bf p}$, so the discontinuity in the derivative of $I_7(x;{\bf p})$ with respect to $x$ is not problematic.) As established previously, this treatment relies on the information contained in the shape of the intensity distribution, that is, the relative intensity or the PMF. Clearly, the value of $p_1$ impacts the probability $P_7(i|{\bf p})$ of detecting a photon at each pixel on the left half of the sensor ($i=1,\ldots,5$); what is perhaps less obvious, however, is that it also affects the probabilities for pixels 6 through 9. Indeed, within any given row of Fig.~\ref{fig:MLE_IntPlots_Ex7} (for which $p_2$ has a fixed value), the intensity on the right half of the sensor is always the same, yet the PMF changes depending on the value of $p_1$. This is possible because the total intensity $\sum_i I_7(x_i|{\bf p})$, which appears in the denominator of $P_7(i|{\bf p})$, varies with $p_1$ and $p_2$ so that each parameter affects the relative number of photons incident on every pixel $i$. Therefore, the estimates for $p_1$ and $p_2$ based on the PMF will generally be correlated to some degree.
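The dependence of the right-half detection probabilities on $p_1$ described above is easy to verify directly. The short sketch below assumes the same 9-pixel geometry used throughout, with pixel centers at $x_i=-1,-0.75,\ldots,1$:

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 9)   # assumed pixel centers

def I7(p1, p2):
    # piecewise linear intensity: slope p1 on the left half, p2 on the right
    return np.where(X < 0, 0.5 * (1.0 + p1 * X), 0.5 * (1.0 + p2 * X))

def P7(p1, p2):
    I = I7(p1, p2)
    return I / I.sum()          # PMF = relative intensity

# With p2 fixed, varying p1 leaves the raw intensity on pixels 6-9 unchanged...
Ia, Ib = I7(0.0, 0.5), I7(0.8, 0.5)
# ...but shifts the detection probabilities there, because the total intensity
# in the denominator depends on p1.
Pa, Pb = P7(0.0, 0.5), P7(0.8, 0.5)
```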
(In this particular example, the best workaround is to treat the signals from each half of the detector as completely separate measurements --- more on this later.) As usual, these effects can also be visualized by plotting the likelihood functions $L_7({\bf p}|i)$ for each pixel, which are shown in Fig.~\ref{fig:MLE_L2_Ex7}. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex23.pdf} \caption{Likelihood functions $L_7({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_7(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex7} \end{figure} Notice that the likelihood function for pixel 1 is most heavily influenced by $p_1$, while that of pixel 9 is mostly influenced by $p_2$. Nevertheless, every pixel contains information about both $p_1$ and $p_2$, since the partial derivatives of $\ell_7({\bf p}|i)$ with respect to each parameter are nonzero. Interestingly, this even implies that photons measured at pixel 5 (the center of the sensor, where $I_7(x_5|{\bf p})=0.5$ for any ${\bf p}$) provide information about $p_1$ and $p_2$ when considered in relation to the number of photons measured at the other eight pixels. The error ellipses for several values of $p_1$ and $p_2$ are shown in Fig.~\ref{fig:MLE_ellipses_Ex7}. \begin{figure}[t] \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx23.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_7(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex7} \end{figure} Unlike the prior two examples, the expected estimation error for a measurement of $I_7(x;{\bf p})$ is strongly dependent on ${\bf p}$, with the largest error (and substantial covariance between $p_1$ and $p_2$) occurring in the upper left quadrant where $p_1<0$ and $p_2>0$.
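These observations can be made quantitative with a small finite-difference computation of the FIM, again assuming pixel centers at $x_i=-1,-0.75,\ldots,1$: the off-diagonal entry is nonzero even at ${\bf p}=(0,0)$, and the total variance is far larger in the upper left quadrant of parameter space than in the lower right one.

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 9)   # assumed pixel centers

def pmf(p):
    I = np.where(X < 0, 0.5 * (1.0 + p[0] * X), 0.5 * (1.0 + p[1] * X))
    return I / I.sum()

def fisher(p, N=1000, h=1e-6):
    """N-photon FIM of the multinomial PMF via central finite differences."""
    p = np.asarray(p, dtype=float)
    P = pmf(p)
    dP = [(pmf(p + e) - pmf(p - e)) / (2.0 * h)
          for e in (np.array([h, 0.0]), np.array([0.0, h]))]
    return N * np.array([[np.sum(dP[j] * dP[k] / P) for k in range(2)]
                         for j in range(2)])

# Nonzero off-diagonal entry -> correlated estimates, even at p = (0, 0).
J0 = fisher([0.0, 0.0])

# The total variance (trace of the covariance matrix J^-1) is much larger in
# the upper left quadrant (p1 < 0, p2 > 0) than in the lower right one.
var_ul = np.trace(np.linalg.inv(fisher([-0.75, 0.75])))
var_lr = np.trace(np.linalg.inv(fisher([0.75, -0.75])))
```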
The distributions of the log-likelihood functions obtained for two 1000-photon measurements with different true parameter values, shown in Fig.~\ref{fig:MLE_LL2_1000ph_Ex7}, are consistent with this trend. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex23.pdf} \caption{Log-likelihood functions $\ell_7({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_7(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(-0.067,0.024)$ and ${\bf p}=(0.582,-0.232)$, respectively. The dashed contour line indicates where the likelihood $L_7({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE.} \label{fig:MLE_LL2_1000ph_Ex7} \end{figure} The magnitude of the expected error grows with the total intensity $\sum_i I_7(x_i|{\bf p})$, which is minimized when $p_1=1$ and $p_2=-1$. Not coincidentally, the errors in $p_1$ and $p_2$ approach zero as $p_1\to 1$ and $p_2\to -1$, respectively. (As in Section \ref{sect:MLE_example1}, this expectation of zero error is only meaningful in the limit of large $\mathcal{N}$.) The dramatic variations in error with respect to ${\bf p}$ can also be understood by revisiting Fig.~\ref{fig:MLE_L2_Ex7}, in which the contours of equal likelihood for each pixel tend to be most closely spaced in the lower right quadrant (where $p_1>0$ and $p_2<0$), indicating high information content.
Pixel 5 in particular provides extremely useful information in this quadrant, not only due to the large slope of $L_7({\bf p}|i\!=\!5)$, but also because the direction of maximum variation (i.e., the gradient with respect to ${\bf p}$) opposes that of pixels 1 and 9. In contrast, pixel 5 is nearly useless in the upper left quadrant of the parameter space since the likelihood changes very slowly with respect to ${\bf p}$. As mentioned before, in practice, the best way to deal with an intensity distribution such as $I_7(x;{\bf p})$ would be to treat it as two separate measurements: one involving pixels 1 through 5 (for which the intensity only depends on $p_1$), and another involving pixels 5 through 9 (for which the intensity only depends on $p_2$). The MLE approach could then be applied separately to each set of data, producing independent estimates for each parameter. In general, whenever it is possible to set up an experiment such that independent measurements can be made in this manner, it is probably best to do so, at least from a statistical standpoint. However, in cases where one does not have this luxury, the above example illustrates how subtle interactions between parameters (of either a physical or mathematical nature) can affect the accuracy of the measurement. Therefore, extra care should be taken to design the experiment such that the error obtained using the chosen statistical method is minimized. \FloatBarrier \subsection{Piecewise linear dependence (zero covariance)}\label{sect:MLE_example8} Next, in comparison to the previous example, consider the intensity distribution \begin{equation} I_8(x;{\bf p}) = \begin{cases} 0.5\Pi(x)\left[1 + 2p_1 (x+0.625)\right],&x<-0.125,\\ 0.5\Pi(x),&-0.125\leq x< 0.125,\\ 0.5\Pi(x)\left[1 + 2p_2 (x-0.625)\right],&x\geq 0.125, \end{cases} \end{equation} which is plotted in Fig.~\ref{fig:MLE_IntPlots_Ex8}. 
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex24.pdf} \caption{Plots of $I_8(x;{\bf p})$ (left axes) and $P_8(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex8} \end{figure} As with $I_7(x;{\bf p})$, this intensity varies linearly with $p_1$ or $p_2$ in either half of the sensor. The key difference in this example is that $I_8(x;{\bf p})$ is contrived in such a way that the total intensity $\sum_i I_8(x_i|{\bf p})$ is independent of ${\bf p}$. As a result, the PMF (relative intensity) $P_8(i|{\bf p})$ only depends on $p_1$ on the left half of the sensor and $p_2$ on the right half of the sensor. Naturally, the same is true of the likelihood function $L_8({\bf p}|i)$, as seen in Fig.~\ref{fig:MLE_L2_Ex8}. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex24.pdf} \caption{Likelihood functions $L_8({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_8(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex8} \end{figure} Since the gradient of $L_8({\bf p}|i)$ always points along $p_1$ or $p_2$ (when it is nonzero), the FIM and its inverse are always diagonal, indicating that there is zero covariance between the parameters. For any value of ${\bf p}$, the principal axes of the error ellipse are oriented along the $p_1$ and $p_2$ axes, as seen in Fig.~\ref{fig:MLE_ellipses_Ex8}. \begin{figure} \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx24.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_8(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex8} \end{figure} When ${\bf p}=(0,0)$, the error ellipse is circular, meaning that the expected error is identical for each parameter. 
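Both properties (a ${\bf p}$-independent total intensity and a diagonal FIM) can be confirmed with a short computation, again assuming pixel centers at $x_i=-1,-0.75,\ldots,1$:

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 9)   # assumed pixel centers

def I8(p):
    # piecewise intensity contrived so that the total is independent of p
    return np.select([X < -0.125, X >= 0.125],
                     [0.5 * (1.0 + 2.0 * p[0] * (X + 0.625)),
                      0.5 * (1.0 + 2.0 * p[1] * (X - 0.625))],
                     default=0.5)

def pmf(p):
    I = I8(p)
    return I / I.sum()

def fisher(p, N=1000, h=1e-6):
    p = np.asarray(p, dtype=float)
    P = pmf(p)
    dP = [(pmf(p + e) - pmf(p - e)) / (2.0 * h)
          for e in (np.array([h, 0.0]), np.array([0.0, h]))]
    return N * np.array([[np.sum(dP[j] * dP[k] / P) for k in range(2)]
                         for j in range(2)])

# The total intensity is the same (4.5) for every p ...
totals = [I8(np.array(p)).sum() for p in ([0.0, 0.0], [1.0, -1.0], [-0.7, 0.3])]

# ... so each half of the PMF depends on only one parameter, and the FIM is
# diagonal at every p, not just at the origin.
J = fisher([0.3, -0.6])
```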
For other values of ${\bf p}$, the relative errors of the two parameters vary in a symmetric fashion over the region of interest. Fig.~\ref{fig:MLE_LL2_1000ph_Ex8} contains plots of the log-likelihood functions $\ell_8({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_8(x;{\bf p})$ with true parameter values ${\bf p}=(0,0)$ and ${\bf p}=(0.63,-0.25)$. In light of the above observations, it should come as no surprise that the distribution is highly symmetric about the MLE in each case. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex24.pdf} \caption{Log-likelihood functions $\ell_8({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_8(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(0.021,-0.029)$ and ${\bf p}=(0.565,-0.256)$, respectively. The dashed contour line indicates where the likelihood $L_8({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE.} \label{fig:MLE_LL2_1000ph_Ex8} \end{figure} To recap, the contrast between $I_7(x;{\bf p})$ and $I_8(x;{\bf p})$ illustrates a limitation of the MLE approach described in Section \ref{sect:MLE_optics}, as well as one of its key strengths. The shortcoming is that the sole reliance of the parameter estimate on the relative intensity can introduce correlations between parameters that are not present in the absolute (unnormalized) intensity; furthermore, any additional information contained within the overall scale of the intensity is ignored. 
On the other hand, the advantage of the method is that with good experimental design, the relative intensity can be tailored for optimal sensitivity and minimal coupling between parameters, so that there is no need to analyze the unnormalized intensity. Conveniently, the MLE formalism includes a straightforward error metric (the FIM) that can be used to predict and optimize the sensitivity of the measurement. As stated earlier, the lack of reliance on total intensity has the added benefit of reducing or eliminating errors arising from fluctuations of the source power. \subsection{Two-parameter off-null measurement}\label{sect:MLE_example9} The final two examples involve a pair of off-null measurements involving two parameters, starting with the intensity distribution \begin{equation} I_9(x;{\bf p}) = 0.125\sp\Pi(x)\bigl[(p_1-x)^2 + (p_2-\cos(\pi x))^2\sp\bigr]. \end{equation} This is a slightly simplified example of the distribution considered in Ref.~\cite{Vella_2018_fbs_arxiv}, with the contributions from each parameter adding incoherently (i.e., in intensity) rather than coherently (i.e., in electric field). Despite this difference, similar statistical behavior is observed in either case. Notice that the $p_1$ term of $I_9(x;{\bf p})$ is identical to that of the one-parameter example $I_3(x;p_1)$ considered in Section \ref{sect:MLE_example3}, with \mbox{$c=1$}. The $p_2$ term introduces an additional departure from the null condition, which varies sinusoidally over the sensor. These spatial variations were chosen to allow comparison between $I_9(x;{\bf p})$ and the earlier two-parameter example $I_6(x;{\bf p})$, for which the terms with $x$ and $\cos(\pi x)$ dependences were linear in $p_1$ and $p_2$, respectively. The intensity and PMF for $I_9(x;{\bf p})$ are shown in Fig.~\ref{fig:MLE_IntPlots_Ex9}. 
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex25.pdf} \caption{Plots of $I_9(x;{\bf p})$ (left axes) and $P_9(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex9} \end{figure} Compared to $I_6(x;{\bf p})$, observe that the off-null configuration employed in the present example produces more dramatic variations in the shape of the intensity profile with respect to $p_1$ and $p_2$, particularly for parameter values close to zero. The likelihood functions $L_9({\bf p}|i)$ for each pixel, which are plotted in Fig.~\ref{fig:MLE_L2_Ex9}, have a far more complex structure than the ones seen in the previous examples. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex25.pdf} \caption{Likelihood functions $L_9({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_9(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex9} \end{figure} The contributions of each pixel have similar shapes, consisting of a peaked distribution that rotates clockwise and changes scale as $i$ runs from 1 to 9. The balance between different pixels and the densely spaced contours of constant likelihood suggest that the FIM is likely to be large and diagonal, which would result in a small and diagonal covariance matrix. As indicated by the ellipse map shown in Fig.~\ref{fig:MLE_ellipses_Ex9}, the expected error is indeed quite small, particularly for parameter values near ${\bf p}=(0,0)$, for which the total measured intensity tends to be the lowest. 
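A finite-difference FIM computation (with the same assumed 9-pixel geometry as before, $x_i=-1,-0.75,\ldots,1$) makes the comparison concrete: at ${\bf p}=(0,0)$ the FIM of $I_9$ is diagonal, and the expected standard errors are several times smaller than the values 0.098 and 0.086 found for $I_6$ in Eq.~(\ref{eq:MLE_Ex6_Fisher}).

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 9)   # assumed pixel centers

def pmf(p):
    # relative intensity of the off-null distribution I_9(x; p)
    I = 0.125 * ((p[0] - X) ** 2 + (p[1] - np.cos(np.pi * X)) ** 2)
    return I / I.sum()

def fisher(p, N=1000, h=1e-6):
    p = np.asarray(p, dtype=float)
    P = pmf(p)
    dP = [(pmf(p + e) - pmf(p - e)) / (2.0 * h)
          for e in (np.array([h, 0.0]), np.array([0.0, h]))]
    return N * np.array([[np.sum(dP[j] * dP[k] / P) for k in range(2)]
                         for j in range(2)])

J = fisher([0.0, 0.0])
errors = np.sqrt(np.diag(np.linalg.inv(J)))
# J is diagonal here, and errors ~ (0.023, 0.022): roughly four times smaller
# than the (0.098, 0.086) expected for a 1000-photon measurement of I_6.
```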
\begin{figure}[tb] \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx25.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_9(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex9} \end{figure} This symmetric ellipse pattern, with the error growing as the departure from null increases, is typical for an off-null measurement. There is a considerable covariance between $p_1$ and $p_2$ near the edge of the parameter range, but in nearly all cases, the error is still smaller (often significantly so) than it would be for a measurement of $I_6(x;{\bf p})$ (see Fig.~\ref{fig:MLE_ellipses_Ex6} for comparison). The log-likelihood functions $\ell_9({\bf p}|\tilde{\bf I})$ obtained for two simulated measurements of $I_9(x;{\bf p})$ with true parameter values ${\bf p}=(0,0)$ and ${\bf p}=(0.63,-0.25)$ can be found in Fig.~\ref{fig:MLE_LL2_1000ph_Ex9}. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex25.pdf} \caption{Log-likelihood functions $\ell_9({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_9(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. (Values smaller than $-1024$ are shown in black.) The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(0.016,0.001)$ and ${\bf p}=(0.648,-0.237)$, respectively. The dashed contour line indicates where the likelihood $L_9({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE. 
(The dashed contour in plot (a) is too small to be seen.)} \label{fig:MLE_LL2_1000ph_Ex9} \end{figure} For the ${\bf p}=(0,0)$ case, the likelihood is a sharply peaked distribution, with the location of the peak (the MLE) nearly coinciding with the true value of ${\bf p}$. (The numerical results are provided in the figure caption.) The distribution is considerably wider and less symmetric for the ${\bf p}=(0.63,-0.25)$ case, but the standard deviation uncertainty is still quite small. These results demonstrate the usefulness of an off-null measurement, which enables the simultaneous estimation of multiple parameters with high precision. \subsection{Two-parameter off-null measurement with smaller departure from null}\label{sect:MLE_example10} For the final example, consider the intensity distribution \begin{equation} I_{10}(x;{\bf p}) = 0.320\sp\Pi(x)\bigl[(p_1-0.25x)^2 + (p_2-0.25\cos(\pi x))^2\sp\bigr]. \end{equation} Notice that the $x$ dependence of $I_{10}(x;{\bf p})$ is identical to the previous case except that the departure from null associated with each parameter is four times smaller. As seen in the plots of the intensity profile (Fig.~\ref{fig:MLE_IntPlots_Ex10}) and the likelihood functions for each pixel (Fig.~\ref{fig:MLE_L2_Ex10}), the measurement is very sensitive to variations in $p_1$ and $p_2$ when both parameters are close to zero. However, similarly to the $c\ll 1$ case in Section \ref{sect:MLE_example3}, this comes at the expense of greatly reduced sensitivity (i.e., slower variations in likelihood) near the edges of the region of interest.
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/Int/IntPlots_Ex26.pdf} \caption{Plots of $I_{10}(x;{\bf p})$ (left axes) and $P_{10}(i|{\bf p})$ (right axes) for several values of $p_1$ and $p_2$.} \label{fig:MLE_IntPlots_Ex10} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param/L2Ex26.pdf} \caption{Likelihood functions $L_{10}({\bf p}|i)$ associated with each pixel $i$ for a measurement of $I_{10}(x;{\bf p})$. Contour lines are shown in increments of $0.01$.} \label{fig:MLE_L2_Ex10} \end{figure} The expected error ellipses based on the FIM are plotted for several parameter values in Fig.~\ref{fig:MLE_ellipses_Ex10}. \begin{figure} \centering \includegraphics[width=.553\linewidth]{Figures/MLE/Ellipses/ellipsesEx26.pdf} \caption{Ellipses representing the expected standard deviation error of a \mbox{1000-photon} measurement of $I_{10}(x;{\bf p})$ with true parameter values $p_1$ and $p_2$, sampled over a $9\times 9$ grid in parameter space.} \label{fig:MLE_ellipses_Ex10} \end{figure} The error for a measurement of $I_{10}(x;{\bf p})$ exhibits the same pattern as that of $I_9(x;{\bf p})$ (see Fig.~\ref{fig:MLE_ellipses_Ex9}), but with a larger disparity between the magnitudes of the errors near the center and edges of the parameter range. More precisely, for a true parameter value of ${\bf p}=(0,0)$, the expected error is exactly four times smaller for a measurement of $I_{10}$ than it is for a measurement of $I_9$; conversely, the errors near the far corners of the parameter range (where $|p_1|\approx|p_2|\approx 1$) are about two to three times larger for $I_{10}$ than for $I_9$. Finally, the log-likelihood functions $\ell_{10}({\bf p}|\tilde{\bf I})$ for simulated measurements of $I_{10}$ with true parameter values ${\bf p}=(0,0)$ and ${\bf p}=(0.63,-0.25)$ are shown in Fig.~\ref{fig:MLE_LL2_1000ph_Ex10}.
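The factor-of-four relation at ${\bf p}=(0,0)$ arises because the two PMFs coincide there, while the logarithmic derivatives of $I_{10}$ with respect to each parameter are exactly four times those of $I_9$ (so the FIM is 16 times larger). A quick numerical check, using the same assumed 9-pixel geometry as in the earlier sketches:

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 9)   # assumed pixel centers
C = np.cos(np.pi * X)

def make_pmf(scale, amp):
    # off-null PMF with departure terms scale*x and scale*cos(pi*x)
    def pmf(p):
        I = amp * ((p[0] - scale * X) ** 2 + (p[1] - scale * C) ** 2)
        return I / I.sum()
    return pmf

pmf9 = make_pmf(1.00, 0.125)    # I_9
pmf10 = make_pmf(0.25, 0.320)   # I_10

def errors(pmf, p, N=1000, h=1e-6):
    """Expected standard errors sqrt(diag(J^-1)) for N detected photons."""
    p = np.asarray(p, dtype=float)
    P = pmf(p)
    dP = [(pmf(p + e) - pmf(p - e)) / (2.0 * h)
          for e in (np.array([h, 0.0]), np.array([0.0, h]))]
    J = N * np.array([[np.sum(dP[j] * dP[k] / P) for k in range(2)]
                      for j in range(2)])
    return np.sqrt(np.diag(np.linalg.inv(J)))

e9 = errors(pmf9, [0.0, 0.0])
e10 = errors(pmf10, [0.0, 0.0])
ratio = e9 / e10    # elementwise; equals 4 at p = (0, 0)
```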
\begin{figure}[tb] \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex26.pdf} \caption{Log-likelihood functions $\ell_{10}({\bf p}|\tilde{\bf I})$ for simulated 1000-photon measurements of $I_{10}(x;{\bf p})$ with true parameter values (a) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0,0)$ and (b) ${\bf p}\hspace{-1pt}=\hspace{-1pt}(0.63,-0.25)$. The plots are shaded on a logarithmic scale with solid contour lines drawn at powers of 2, as indicated in the legend. (Values smaller than $-1024$ are shown in black.) The peak of each distribution is marked with a red dot. The locations of these maxima (i.e., the MLEs for each measurement) are ${\bf p}=(0.004,-2.6\times 10^{-4})$ and ${\bf p}=(0.602,-0.308)$, respectively. The dashed contour line indicates where the likelihood $L_{10}({\bf p}|\tilde{\bf I})$ drops to $1/\sqrt{e}$ times its peak value, representing the standard deviation confidence interval for the MLE. (The dashed contour in plot (a) is too small to be seen.)} \label{fig:MLE_LL2_1000ph_Ex10} \end{figure} As expected, the likelihood for the ${\bf p}=(0,0)$ case is extremely narrowly distributed about its peak, producing an estimate with error on the order of 0.001. In contrast, the distribution for ${\bf p}=(0.63,-0.25)$ is substantially wider; for parameter values with magnitudes closer to 1, the width of the distribution would continue to grow. The practical implication of this example is that an off-null measurement can be tailored for high sensitivity over an arbitrarily small range of parameter values. Therefore, it is possible to design an iterative experiment for which the parameter estimate is refined through a series of successive measurements. For example, in the focused beam scatterometry setup described in Ref.~\cite{Vella_2018_fbs_arxiv}, an SLM could be used to produce an arbitrary spatially-varying polarization state, which can be chosen differently for each iteration of the measurement. 
The experimental details of such an implementation are discussed in Ref.~\cite{Head_2018}. As an example of this iterative procedure, suppose that we wish to refine the measurement of $I_9(x;{\bf p})$ with true parameter values ${\bf p}=(0.63,-0.25)$ obtained in Section \ref{sect:MLE_example9}. The plot of the log-likelihood function $\ell_9({\bf p}|\tilde{\bf I})$ for this measurement is shown again in Fig.~\ref{fig:MLE_LL2_1000ph_Ex41-44}(a); the MLE based on this initial measurement is ${\bf p}=(0.648,-0.237)$. To refine the parameter estimate, the experimental configuration could be altered such that the output intensity follows the distribution \begin{equation} I_9^{(2)}(x;{\bf p}) = \Pi(x)\bigl[(p_1-0.648-0.5x)^2 + (p_2+0.237-0.5\cos(\pi x))^2\sp\bigr], \end{equation} where the constant normalization factor in front of $\Pi(x)$ has been omitted for simplicity.\footnote{In a real experiment, the leading factor (which determines the peak intensity) would typically vary under different experimental configurations. Since the MLE approach ignores any information contained in this scaling factor, it is not important for this discussion.} This distribution is designed so that the departure from null is half as large and centered at the previous MLE. The resulting log-likelihood function $\ell_9^{(2)}({\bf p}|\tilde{\bf I})$ for a simulated measurement of 1000 photons, shown in Fig.~\ref{fig:MLE_LL2_1000ph_Ex41-44}(b), is much more narrowly distributed than $\ell_9({\bf p}|\tilde{\bf I})$. The MLE based on the refined measurement is found to be ${\bf p}=(0.644,-0.255)$. This process can be applied repeatedly to obtain an estimate with arbitrary precision (barring experimental limitations, as discussed in the next paragraph). 
The intensity distributions and resulting MLEs for the first four iterations of the process, including the two mentioned above, are listed in Table~\ref{tbl:MLE_iterative_intensities}, and the log-likelihood functions for simulated measurements of $I_9^{(3)}(x;{\bf p})$ and $I_9^{(4)}(x;{\bf p})$ are plotted in Fig.~\ref{fig:MLE_LL2_1000ph_Ex41-44}(c,d). \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/MLE/L_2param_1000/L21000_Ex41-44.pdf} \caption{Log-likelihood functions for simulated 1000-photon measurements of intensity distributions (a) $I_9(x;{\bf p})$, (b) $I_9^{(2)}(x;{\bf p})$, (c) $I_9^{(3)}(x;{\bf p})$, and (d) $I_9^{(4)}(x;{\bf p})$ obtained throughout a four-step iterative measurement with true parameter values ${\bf p}=(0.63,-0.25)$. The peaks of each distribution are indicated with a red dot, and their locations are listed in the rightmost column of Table~\ref{tbl:MLE_iterative_intensities}. The dashed red contour in plot (a) represents the standard deviation confidence interval; the confidence intervals in plots (b-d) are too small to be seen.} \label{fig:MLE_LL2_1000ph_Ex41-44} \end{figure} As seen in the table, the MLE gets closer to the true value with each iteration, leading to a final estimate of ${\bf p}=(0.631,-0.249)$. As this happens, the likelihood function becomes increasingly compact with an exceptionally sharp peak, which is the reason for the improvement in accuracy. However, note that the calculation of the MLE must be performed carefully in this case since the likelihood function may contain local maxima or regions with very small slopes, which can cause problems with the numerical search procedure. These issues can generally be mitigated by using the previous MLE as the starting point for the search. 
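The iterative procedure can be sketched end to end in a short simulation. The snippet below assumes the usual 9-pixel geometry and substitutes a brute-force grid search for the numerical maximization used in the text; the seed, grid resolution, and helper names are illustrative choices, not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1.0, 1.0, 9)          # assumed pixel centers
C = np.cos(np.pi * X)
P_TRUE = np.array([0.63, -0.25])       # true parameter values
N = 1000                               # photons per measurement

def make_pmf(center, scale):
    """PMF of an off-null distribution with its null re-centered at `center`
    and its departure scaled by `scale` (cf. the definition of I_9^(2))."""
    def pmf(p):
        I = ((p[0] - center[0] - scale * X) ** 2
             + (p[1] - center[1] - scale * C) ** 2)
        return I / I.sum()
    return pmf

def mle(counts, pmf, n=201):
    """Brute-force grid search for the multinomial MLE over [-1, 1]^2."""
    grid = np.linspace(-1.0, 1.0, n)
    best, best_ll = None, -np.inf
    for p1 in grid:
        for p2 in grid:
            ll = np.sum(counts * np.log(pmf((p1, p2)) + 1e-300))
            if ll > best_ll:
                best, best_ll = np.array([p1, p2]), ll
    return best

# Iteration 1: measure I_9 itself (null at the origin, unit departure).
pmf1 = make_pmf((0.0, 0.0), 1.0)
est1 = mle(rng.multinomial(N, pmf1(P_TRUE)), pmf1)

# Iteration 2: re-center the null at the first MLE and halve the departure,
# as in I_9^(2); the refined estimate is correspondingly less uncertain.
pmf2 = make_pmf(est1, 0.5)
est2 = mle(rng.multinomial(N, pmf2(P_TRUE)), pmf2)
```

In a longer run, the remaining iterations of Table~\ref{tbl:MLE_iterative_intensities} would simply repeat the second step with progressively smaller departure scales.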
\begin{table} \small \renewcommand{\arraystretch}{.8} \begin{center} \small \begin{tabular}{l@{\hspace{20pt}}l} \toprule Intensity distribution & MLE for ${\bf p}$\\ \midrule \phantom{a}&\\[-8pt] $I_9(x;{\bf p}) \propto \bigl[(p_1-x)^2 + (p_2-\cos(\pi x))^2\sp\bigr]$ & $(0.648,-0.237)$ \\[8pt] $\mathrlap{I_9^{(2)}(x;{\bf p}) \propto \bigl[(p_1-0.648-0.50x)^2 + (p_2+0.237-0.50\cos(\pi x))^2\sp\bigr]}$ & $(0.644,-0.255)$ \\[8pt] $I_9^{(3)}(x;{\bf p}) \propto \bigl[(p_1-0.644-0.25x)^2 + (p_2+0.255-0.25\cos(\pi x))^2\sp\bigr]$ & $(0.628,-0.246)$ \\[8pt] $I_9^{(4)}(x;{\bf p}) \propto \bigl[(p_1-0.628-0.10x)^2 + (p_2+0.246-0.10\cos(\pi x))^2\sp\bigr]$ & $(0.631,-0.249)$ \\[2pt] \bottomrule \end{tabular} \caption{Intensity distributions used for a simulated four-step iterative measurement with true parameter values ${\bf p}=(0.63,-0.25)$, along with the MLEs obtained from the simulated intensities at each step. The off-null departures for iterations 2 through 4 are each centered at the MLE from the previous iteration. The magnitude of the departure from null decreases with each iteration in order to refine the accuracy of the estimate.} \label{tbl:MLE_iterative_intensities} \end{center} \end{table} \pagebreak[4] As mentioned above, from a statistical standpoint, this iterative MLE approach can be employed to obtain a parameter estimate with arbitrary precision. That is, for any fixed, reasonably large number of detected photons $\mathcal{N}$, the experiment can be designed to make the Cram\'er-Rao bound arbitrarily small, meaning that there is no fundamental limit to the sensitivity of the measurement. 
In practice, the accuracy is determined by experimental factors, including but not limited to: \begin{itemize} \item the bit depth and signal-to-noise ratio of the sensor; \item the power of the source (which affects the number of photons detected in a given time interval); \item the level of precision and temporal stability of the experimental configuration (e.g., SLM control in the application mentioned above); \item the validity of the theoretical model and any approximations made; \item other sources of random or systematic error (e.g., thermal fluctuations or ghost images). \end{itemize} (Note that the second point above can be addressed by optimizing the FIM for emitted photons, as in Section \ref{sect:MLE_example3}.) In any case, the statistical methods discussed in this tutorial are still useful for determining the best nominal design for an experiment, as well as for obtaining parameter estimates from measured data based on a theoretical or empirical model. \section{Concluding remarks} This tutorial has summarized the fundamental concepts of maximum likelihood estimation and their application to the measurement of an optical intensity distribution. In this treatment, one or more parameters are estimated from the shape of the intensity profile, without regard for the total measured power. However, the power incident on the detector is still relevant because it determines the uncertainty of the parameter estimate, which scales as the inverse of the square root of the number of detected photons. Depending on the needs of a given application, the methods discussed in this manuscript may be used to optimize the performance of an experiment for minimal estimation error per photon detected by the sensor or per photon emitted by the source. Some sample code for calculating and evaluating the uncertainty of the maximum likelihood estimate in such an experiment can be found in the appendix. 
\addcontentsline{toc}{section}{Acknowledgments} \section*{Acknowledgments} The author would like to thank Miguel A.~Alonso and Philippe R\'{e}fr\'egier for helpful discussions and suggestions. This work was supported by funding from the National Science Foundation (NSF) (PHY-1507278). \FloatBarrier
\section{INTRODUCTION} In the last decade substantial attention has been given to transport of electrons through individual atomic point contacts or molecules, commonly referred to as quantum junctions (QJ). New experimental results for $I(V)$ curves of QJ are promptly followed by their numerical modeling which, however, frequently leads to results that differ by orders of magnitude from the measured data~\cite{Nitzan03}. These discrepancies triggered interest in the reliability of the often quietly assumed approximations. Prime suspects are incorrect atomic geometries, inappropriate description of the exchange-correlation effects based on ground-state methods, and the possible necessity to use a full time-dependent simulation to arrive at a correct current-carrying steady state. In this short paper we first give a very simple derivation of the Landauer formula for the 2-point conductance of a QJ, $G^{2P}$, based on the uncertainty principle. The aim of this is to introduce this central equation of quantum transport to a general audience. Next we analyse the dynamics of setting up a steady-state current in a simple many-electron system and use these observations to present the physical basis and a formal result for the 4-point conductance $G^{4P}$, rigorously related to the non-local conductivity of an extended system consisting of electrodes and their junction. \section{LANDAUER FORMULA AND THE UNCERTAINTY PRINCIPLE} Let us consider a general QJ. Single-particle quantum states (wavefunctions that can be occupied with electrons and carry current) extend with nonzero amplitude from the left electrode through the junction into the right electrode. These states, as in every metallic system, form a continuum of states of energy $E\in (E_{min},E_{max})$, where $E_{min}$ and $E_{max}$ are the bottom and the top of the conduction band of the electrodes in equilibrium. In equilibrium the Fermi energy $E_{F}$ is located somewhere within this interval. 
Applying a bias voltage $\Delta V$ between the two electrodes means that those states within the continuum that carry current to the right (right-going scattering states) will be occupied up to an energy $\mu_L$ that is $e\Delta V$ higher than the states that carry current to the left (left-going scattering states), occupied up to the energy $\mu_{R}$, i.e. $e\Delta V = \mu_{L} - \mu_{R}$. Since degenerate energy levels with both right- and left-going states occupied carry zero total current, the only contribution to the transport of charge originates from the energy interval $(\mu_{R}, \mu_{L})$, occupied only with the right-going states. At this point we depart from the traditional derivation~\cite{Buttiker87} and make use of the uncertainty principle. Right-going electrons occupying states from the energy interval $(\mu_{R}, \mu_{L})$ can be put into wavepackets with energy uncertainty $\Delta E = \mu_{L} - \mu_{R} = e\Delta V$. The uncertainty principle then states that there is an uncertainty in time $\Delta t$ within which we can observe {\it one electron passing through the junction}, \begin{equation} \Delta E \Delta t \sim h. \end{equation} However, there might be only some probability $T(E_{F})<1$, depending in general on the energy\footnote{Assuming smooth dependence of the transmission probability on the energy and that the applied bias is small we set $T(E)\approx T(E_{F})$ for $E\in(\mu_R,\mu_L)$.}, that an electron will pass through the junction: out of $N$ electrons approaching the junction within the time interval $N \Delta t$, only $N T(E_{F})$ of them will actually get through. Using these observations and the fact that the current is the number of electrons per unit time, we find \begin{equation} I = 2 e \frac{N \sum_i T_i(E_{F})}{N \Delta t} \sim \label{eq-landauer} \frac{2e^2}{h} \sum_i T_i(E_{F}) \Delta V \end{equation} where we have added a factor of $2$ to account for the spin degeneracy and sum over possibly several degenerate right-going states $i$. 
This is the celebrated two-point Landauer formula~\cite{Buttiker87} relating the current and the difference $\Delta V$ between the electrochemical potentials in the electrodes, the basic equation of modern mesoscopic physics. The quantum of conductance $G^0=\frac{2e^2}{h}=(12.9\,\mathrm{k}\Omega)^{-1}$ is obtained for an ``open'' QJ with $T_i(E_{F})=1$ for only one $i$. The Landauer equation represents an important result: it relates a quantum-mechanical property of the QJ, the transmission probability, to a macroscopically measured quantity, the conductance. We would like to note that the weak ``proportional to'' sign can be made into a strong ``equal'' by introducing an occupation-adapted orthogonal wavepacket basis set. Including the mean-field, local-neutrality arguments leads to the 4-point conductance, e.g. in 1D $G^{4P}=G^{2P}/(1-T(E_{F}))$, relating the current to the {\it induced electrostatic drop in potential} in the vicinity of the QJ, $\Delta V^i$~\cite{Buttiker87}. \section{DYNAMICS OF 1D QUANTUM GAS OF ELECTRONS} While the previous derivation gives some hint of the time-dynamics in quantum transport, namely one based on wavepackets of electrons having significant amplitude in a given region of space for a certain time $\Delta t$, the approach really describes a steady state. So we ask: how do we make the transition from equilibrium to a steady non-equilibrium situation in a QJ? Consider a 1D electron gas of density $n$, fixed by the Fermi energy $E_{F} = \frac{\hbar^2 k_F^2}{2m}$. At time $t=0$ we apply a localized electric field of the form \begin{equation} E^e = - \frac{\Delta V}{a} (\theta(x+a/2) - \theta(x-a/2) ), \end{equation} where $\theta()$ is the unit step function, $a$ is the distance on which the field is nonzero, modeling the width of the QJ, and $\Delta V$ corresponds to the applied voltage. The response of the density and the current can be found using linear response theory~\cite{Bokes04}. 
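As a quick numerical check of Eq.~(\ref{eq-landauer}) and the quoted value of $G^0$, the following sketch uses the exact SI values of $e$ and $h$; the helper function name is ours, not from the paper.

```python
# Conductance quantum and the two-point Landauer conductance.
e = 1.602176634e-19   # elementary charge, C (exact, SI 2019)
h = 6.62607015e-34    # Planck constant, J s (exact, SI 2019)

G0 = 2 * e**2 / h     # conductance quantum, siemens

def landauer_conductance(transmissions):
    """G^{2P} = (2e^2/h) * sum_i T_i(E_F) for channel transmissions T_i."""
    return G0 * sum(transmissions)

print(1.0 / G0)                      # ~12906 ohm, i.e. G0 = (12.9 kOhm)^{-1}
print(landauer_conductance([1.0]))   # one fully open channel: G = G0
```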
In general, the non-local conductivity $\vec{\vec{\sigma}}$ gives a causal linear relationship between the total electric field and the current density \begin{equation} \vec{j}(\vec{r},t) = \int_{0}^{t} dt' \int d^3 r' \vec{\vec{\sigma}}(\vec{r}, \vec{r}';t-t') \cdot \vec{E}(\vec{r}',t'). \end{equation} For our simple system the nonlocal conductivity, calculated directly from the continuum of occupied quantum states, is known analytically~\cite{Bokes04}. Using this within the general formula (4) and performing a simple numerical calculation we find that the current indeed settles to a steady value $I=\frac{2e^2}{h} \Delta V$ with a well defined relaxation time $\tau$ (see the inset in Fig.~1). It is instructive to analyze the dependence of the latter on the width of the junction $a$, while keeping the overall bias voltage $\Delta V$ constant. The resulting dependence for a gas with $E_{F}=0.07\,\textrm{Ha}=2$~eV (corresponding to a gold nanowire) is shown in Fig.~1. \begin{figure} [h,t] \begin{center} \includegraphics[width=80mm]{BOKES-fig.eps} \end{center} \vspace{-2mm} \caption{Dependence of the relaxation time $\tau$ (defined by the first local maximum in $j(t)$, see the inset) on the width of the junction $a$. The arrows show the limiting behaviour.} \end{figure} The behavior of $\tau$ has a nice physical interpretation: for junctions smaller than the Fermi wavelength $\lambda_F$, the relaxation time is constant and given by the timescale dictated by the Fermi energy; for junctions larger than the Fermi wavelength, the relaxation is related to the time it takes for an electron moving at the Fermi speed to pass the region of the junction. These limiting results are obtainable by a detailed analysis of the analytical formulas as well. These results demonstrate the mechanism by which the current is established for non-interacting electrodes, which is pertinent to all present ab-initio calculations. 
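The two limiting behaviours of $\tau$ can be made concrete in Hartree atomic units ($\hbar=m_e=1$). This is only a sketch: the sharp crossover exactly at $\lambda_F$ and the order-one prefactors are our simplifying assumptions, not the result of the full linear-response calculation.

```python
import math

# Hartree atomic units: hbar = m_e = 1
E_F = 0.07                     # Fermi energy, hartree (about 2 eV, gold nanowire)
k_F = math.sqrt(2.0 * E_F)     # Fermi wavenumber
lam_F = 2.0 * math.pi / k_F    # Fermi wavelength, bohr
v_F = k_F                      # Fermi velocity, atomic units

def tau_limit(a):
    """Limiting relaxation time for a junction of width a (bohr): a
    Fermi-energy timescale for a << lam_F, the transit time a/v_F for
    a >> lam_F (crossover placed at lam_F as a simplifying assumption)."""
    return 1.0 / E_F if a < lam_F else a / v_F

print(lam_F)                         # ~17 bohr for this gas
print(tau_limit(2.0), tau_limit(100.0))
```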
However, as soon as we include even the Hartree interaction, the process changes dramatically. Namely, the observed current in Fig.~1 leads to a charge imbalance: there are electrons piling up on the right of the junction and electrons being depleted on the left of the junction. Physically this is almost obvious, as any localised electric field should be screened out by electrons. The screening can be overcome only in the limit $a \rightarrow \infty, E^e \rightarrow 0, \Delta V = const$, when the field becomes homogeneous. However, as Fig.~1 clearly shows, in this case the relaxation time becomes infinite as well! The resolution is the following. To set up a steady current in a system of interacting electrons, we need to switch on an external homogeneous field of {\it finite magnitude for a finite time} $t<t_e$. This is to be compared with the infinitesimal magnitude applied for all positive times in the previous paragraph. In the semi-infinite electrode this will create a uniform current even for $t>t_e$ and there is no problem with charge accumulation. Only in the region of the QJ will the charge pile up on the left and deplete on the right. This will lead to an {\it induced field} $E^i$, localised at the QJ and characterised by some induced drop in potential $\Delta V^i$. This induced field will self-consistently develop into such a form that the current $I$, established throughout the semi-infinite electrodes, will be able to pass the junction. \section{GENERAL EXPRESSION FOR CONDUCTANCE} The above ideas can be nicely formalised within the framework of linear response theory. The current is given by \begin{equation} I(x,t) = \int_0^t dt' dx' \sigma(x,x';t-t')\left( E^e(x',t') + E^i(x',t') \right). 
\end{equation} Dividing the conductivity into the electrode's translationally invariant part and the part primarily due to the junction, $\sigma(x,x') = \sigma^H(x-x') + \sigma^J(x,x')$ respectively, and performing the long-time analysis, we arrive at the final expression for the steady-state current~\cite{Bokes06} \begin{equation} I = \frac{\alpha}{\beta} G^{2P} \Delta V^i = G^{4P} \Delta V^i, \label{eq-1} \end{equation} where \begin{eqnarray} \alpha &=& \sigma^H(q=0,t\rightarrow \infty), \\ \beta &=& - \int dq\, \sigma^J(q,q'=0;t\rightarrow \infty), \\ G^{2P} &=& \int \frac{dq\, dq'}{2\pi} \sigma(q,q';t\rightarrow \infty), \end{eqnarray} and $q,q'$ are wavenumbers, reciprocal to $x,x'$ respectively. Equation (\ref{eq-1}) expresses the 4-point conductance in terms of the limiting character of the non-local microscopic conductivity. The utility of this formulation lies in a formally straightforward evaluation of the latter using various ab-initio and, unlike Eq.~(\ref{eq-landauer}), also correlated many-body methods. \noindent ACKNOWLEDGMENT: The author acknowledges Rex Godby, Hector Mera and Jeil Jung for stimulating discussions. This research was supported by the Slovak grant agency VEGA (project No. 1/2020/05) and the NATO Security Through Science Programme (EAP.RIG.981521).
\parskip 6pt \marginparwidth 0pt \oddsidemargin -5mm \evensidemargin 0pt \marginparsep 0pt \topmargin -2cm \textwidth 17cm \textheight 24cm \newcommand{\bo}[1]{\boldsymbol{#1}} \begin{document} \begin{titlepage} \thispagestyle{empty} \begin{center} \font\titlerm=cmr10 scaled\magstep3 \centerline{\titlerm Residual Diffeomorphisms and Symplectic Soft Hairs:} \vskip2mm \centerline{\large{The Need to Refine Strict Statement of Equivalence Principle}} \vspace{1.0cm} \noindent{\textbf{\large{ M. M. Sheikh-Jabbari}}}\\ \vspace{0.8cm} {\small\it School of Physics, Institute for Research in Fundamental Sciences (IPM),\\ P.O.Box 19395-5531, Tehran, Iran}, \vskip 2mm E-mail: [email protected] \today \end{center} \vskip 2cm \begin{abstract} General covariance is the cornerstone of Einstein's General Relativity (GR) and implies that any two metrics related by diffeomorphisms are physically equivalent. There are, however, many examples pointing to the fact that this strict statement of general covariance needs refinement. 
There is a very special (measure-zero) subset of diffeomorphisms, \emph{the residual diffeomorphisms}, to which one can associate well-defined conserved charges. This would hence render these diffeomorphic geometries physically distinct. We discuss that these symmetries may be appropriately called ``symplectic symmetries''. Existence of residual diffeomorphisms and symplectic symmetries can be a quite general feature and is not limited to the examples discussed so far in the literature. We propose that, in the context of black holes, these diffeomorphic, but distinct, geometries may be viewed as ``symplectic soft hair'' on black holes. We comment on how this may remedy the black hole microstate problem; the microstates in this context are dubbed ``horizon fluffs''. \end{abstract} \begin{center} \vskip1cm {\textit{Essay received Honorable Mention\\ in the Gravity Research Foundation 2016 Awards for Essays on Gravitation.}} \end{center} \end{titlepage} It is well known that the equivalence principle was Einstein's guide into the formulation of General Relativity (GR). The equivalence principle implies that all physical information about events should be available to all observers which are in causal contact with, i.e. can send and receive signals from, those events. Of course, to make the equivalence principle precise we usually limit it to ``local observables,'' which are associated with events local in spacetime. The precise meaning of a ``local event'' may be stated on a case-by-case basis. Also, in any standard Einstein GR course we are taught (e.g. see \cite{Padmanabhan-book}) that there is a coordinate system associated with any observer. 
The equivalence principle then implies general covariance of the theory, and that physical observables should necessarily be invariant under general coordinate transformations.\footnote{As it is well known, the equivalence principle is more than just general covariance and implies the minimal coupling.} Einstein's GR is best formulated in the mathematical framework of differential and manifold geometry, where general covariance finds a precise statement: Dynamical degrees of freedom and field equations should be written in a covariant form in terms of tensors, which have a well defined transformation under general coordinate transformations, the \emph{diffeomorphisms}. Physical observables should then be built from scalars. Instead of field equations, we may use an action and the variational principle, where the action is a diffeomorphism invariant quantity and is given through the integral of the Lagrangian over the spacetime manifold; the Lagrangian is a scalar function of the fields. In this setup, all physical observables should hence be made from ``geometric'' objects and should be diffeomorphism invariant. In particular, consider a geometry which may be specified by a metric written in a specific coordinate system, plus all possible coordinate extensions. ``Standard general covariance'' then implies that any metric which is related to this metric upon a coordinate transformation is physically equivalent to it. In other words, standard general covariance implies that any two metrics which are diffeomorphic to each other are physically equivalent. The above statement is of course a generic one in any gauge field theory, i.e. any field theory with a local (gauge) symmetry: Diffeomorphisms may be viewed as gauge transformations; all tensor fields (including scalars) have a prescribed transformation under diffeomorphisms, and in the terminology of gauge theories any field is ``charged'' under diffeomorphisms. 
The metric, as a two-tensor $g_{\mu\nu}$, is no exception and has a particular transformation: $$ x^\mu\to x^\mu-\xi^\mu(x),\qquad g_{\mu\nu}(x)\to g_{\mu\nu}(x)+\delta g_{\mu\nu}(x),\ \delta g_{\mu\nu}= \nabla_\mu\xi_\nu+\nabla_\nu\xi_\mu. $$ The two metrics $g$ and $g+\delta g$ are physically equivalent. Physical observables in any gauge theory are then ``gauge invariant'' quantities. For a metric these are the \emph{geometric} quantities which are independent of how we parametrize our spacetime and of the coordinate system we choose. In GR, this information is what could be probed or measured by different geodesics, like the geodesic distance between two events (points in the spacetime), the causal structure or the global Killing vector fields of the spacetime. In the usual treatment of local gauge theories, gauge symmetries are often said to be ``redundancies of description,'' meaning that, to write the Lagrangian of the theory in a covariant way, we usually introduce extra, unphysical, gauge degrees of freedom. Gauge invariance then guarantees that these extra degrees of freedom are not local propagating degrees of freedom. Moreover, constructing observables from gauge invariant quantities guarantees that these extra degrees of freedom do not appear in physical observables. At the technical level, to make computations in a local (quantum) gauge field theory, we need to fix a gauge. Then Ward-Takahashi identities guarantee that physical observables are independent of the gauge-fixing choice. In the terminology of gauge field theories, any choice of coordinate system is like fixing a gauge. \paragraph{Gauge symmetries and conserved charges.} Symmetries and the Noether theorems have played a pivotal role in the development of modern physics. Symmetries subject to Noether theorems are either global-continuous ones (the subject of Noether's first theorem) or local ones (the subject of Noether's second theorem). 
The former are related to conserved charges, while the latter lead to identities which should be satisfied to guarantee gauge invariance. These identities appear as the integrability condition of field equations at the classical level and as Ward identities at the quantum level. Nonetheless, one may still wonder if it is possible to associate conserved charges to local gauge symmetries. This question has been analyzed in some detail in the literature of gauge theories in general, e.g. see \cite{Soft-photon}, and diffeomorphism invariant theories in particular. The most notable and prime examples are Bondi-Metzner-Sachs (BMS) \cite{BMS}, and Brown and Henneaux \cite{Brown-Henneaux}. As we will review below, the answer is affirmative. The existence of states carrying these conserved charges is what we will be focusing on here. For the purpose of this essay let us focus on $d$ dimensional gravity theories. We can fix diffeomorphisms by choosing a coordinate system. Although our arguments are more general, to be specific, let us fix time-like Gaussian coordinates. With this choice we can fix the form of the metric to $$ ds^2=-dt^2+g_{ij}(t,x^l)dx^i dx^j, \qquad i,j=1,\cdots, d-1. $$ We can always do so locally by an appropriate choice of the $d$ functions in $\xi^\mu(t,x^i)$. However, as is well known, even after this choice, the diffeomorphism freedom is not completely fixed: one has the diffeomorphisms which only depend on $x^i$. These latter diffeomorphisms are in fact necessary for consistently removing the momenta conjugate to the $g_{t\mu}$ components of the metric and the associated constraints \cite{Weinberg-Cosmology}. Even after using this part of the diffeomorphisms we remain with a part of diffeomorphisms which is generically a part of, or of the same cardinality as, $d-2$ dimensional diffeomorphisms. This remaining part is of course a measure-zero subset of the original gauge freedom we started with and is not capable of removing any further propagating degrees of freedom. 
It is not difficult to show that these remaining diffeomorphisms together with the Lie bracket close onto an algebra which in $d\geq 3$ dimensions is infinite dimensional; it has infinitely many elements in it. If one can consistently associate well defined conserved charges to the remaining diffeomorphisms, they will not represent ``gauge,'' unphysical degrees of freedom; they would then become physical. \paragraph{Covariant Phase Space Method.} To examine whether there is a consistent way of associating conserved charges with the remaining diffeomorphisms, there is a powerful technical tool introduced primarily in 1987 in \cite{early-CPSM}, expanded and developed by Robert Wald and collaborators since the early 1990's \cite{Lee-Wald}, and enhanced and made more precise in particular by the later works of the group at ULB in Brussels, e.g. in \cite{Barnich-Brandt}. Here we only sketch the argument and the results; the technical details may be found e.g. in \cite{Seraj-Hajian} and references therein. Consider a solution to a generic diffeomorphism invariant gravity theory, specified by $g_{\mu\nu}$ and other fields collectively denoted by $\Phi$. This set of solutions can be specified by a set of parameters $p_i$ and is generically given in a convenient coordinate system. Make the gauge fixings mentioned above and let $\chi$ denote the vector field generating the remaining diffeomorphisms. One may then construct a continuous set of metrics generated by the successive action of the $\chi$ field on this class of solutions. At the infinitesimal level, these are $g_{\mu\nu}+{\cal L}_\chi g_{\mu\nu}$, where ${\cal L}_\chi g_{\mu\nu}$ denotes the Lie derivative along $\chi$. One may do this in an active way (keeping the coordinate system and changing the components of the metric tensor $g_{\mu\nu}$). We will denote this class of metrics, which generically have some number of independent functions and parameters in them, by $\boldsymbol{{\cal P}}[\chi; p_i]$. 
Constructed in this way, it is obvious that all these geometries are solutions to the original theory. These solutions for a given value of the parameters $p_i$ are, however, deemed to be physically equivalent according to ``standard general covariance.'' \paragraph{Solution Phase Space.} The Covariant Phase Space Method (CPSM) may then be used to promote $\boldsymbol{{\cal P}}[\chi; p_i]$ to a phase space, the \emph{solution phase space}, by providing a symplectic structure current $\boldsymbol{\omega}(\delta_1\Phi, \delta_2\Phi; \Phi)$, where $\boldsymbol{\omega}$ is a $(d-1)$-form on the spacetime while being a two-form on the phase space $\boldsymbol{{\cal P}}[\chi; p_i]$, and $\delta\Phi$ are generic elements in the tangent space of $\boldsymbol{{\cal P}}[\chi; p_i]$. Since $d\boldsymbol{\omega}\approx 0$ ($\approx$ means on-shell equality, i.e., $\delta\Phi$ satisfy the linearized field equations and $\Phi$ is a solution), then $\boldsymbol{\omega}\approx d\boldsymbol k$ and in particular, $\boldsymbol{\omega}(\delta\Phi, \delta_\chi\Phi; \Phi)\approx d\boldsymbol{k}_\chi[\delta\Phi;\Phi]$, where $\bo{k}_\chi$ is a $(d-2)$-form on spacetime and a one-form on $\boldsymbol{{\cal P}}[\chi; p_i]$. The charge variation associated with the field variations $\delta_\chi\Phi$ is defined as a surface integral, an integral over a codimension two spacelike compact surface $\Sigma$, $$ \delta Q_\chi=\oint_{\Sigma} \boldsymbol{k}_\chi[\delta\Phi;\Phi]. $$ If the ``integrability condition'' is met (e.g. see \cite{Lee-Wald, Seraj-Hajian, Hajian-me}) then $\delta Q_\chi$ is an exact one-form on $\boldsymbol{{\cal P}}[\chi; p_i]$ and one can hence define the conserved charge $Q_\chi$ by integrating the charge variation over an arbitrary path in the phase space. These charges are also called ``Hamiltonian generators'' of the $\chi$-transformations. Given a vector field $\chi$ we may, in principle, compute $\delta Q_\chi$. 
If $\delta Q_\chi$ are vanishing, the $\chi$'s generate ``pure gauge transformations'' on the phase space. One needs to mod out the phase space $\boldsymbol{{\cal P}}[\chi; p_i]$ by these pure gauges to define the physical phase space \cite{Lee-Wald}. We will define our \emph{residual diffeomorphisms} after modding out the set of $\chi$'s for which $\delta Q_\chi$ are finite by such pure gauge diffeomorphisms. \paragraph{Symplectic symmetries and charges.} It may happen that, as has been demonstrated in several different examples \cite{Hajian-me}, for a set of $\chi$'s the presymplectic form $\bo{\omega}$ vanishes on-shell: $$ \bo{\omega}(\delta_\chi\Phi, \delta\Phi;\Phi)\approx 0. $$ In this case one may readily prove that the charge $\delta Q_\chi$ is conserved and is independent of $\Sigma$, which may now be chosen arbitrarily. In this case we are dealing with \emph{symplectic symmetries}. Based on different examples discussed e.g. in \cite{Seraj-Hajian}, I will assume that the set of residual diffeomorphisms is not empty and that they are always related to symplectic symmetries. Symplectic symmetries come in two families \cite{Hajian-me}: when $\chi$ generates an exact symmetry of the solution, in which case it will be denoted by $\eta$, with $\delta_\eta\Phi=0$, or when $\delta_\chi\Phi\neq 0$. The Killing vectors are a part of the set of $\eta$'s. To each of these two sets one can associate \emph{symplectic charges}. For the symplectic non-exact symmetries, the charge is defined as we discussed above. For the exact symmetries, we define the charges for \emph{parametric variations}, $\delta_{p_i}\Phi$, explicitly, $$ \delta_{p_i}Q_\eta=\oint_{\Sigma} \boldsymbol{k}_{\eta}[\delta_{p_i}\Phi;\Phi],\qquad \bo{\omega}(\delta_{p_i}\Phi, \delta_\eta\Phi;\Phi)\approx d \boldsymbol{k}_{\eta}[\delta_{p_i}\Phi;\Phi]. $$ The set of $Q_{\eta}$ are essentially the ADM charges for asymptotically flat solutions and also include Wald's entropy \cite{Wald-entropy}. 
Since the set of $\chi$'s is generically defined based on arbitrary functions, their number is infinite (while usually countable). The $(Q_{\eta}, Q_\chi)$ charges may be used to specify points on the solution phase space $\boldsymbol{{\cal P}}[\chi; p_i]$. By construction, we expect this labelling to be unique up to possibly some ``topological'' discrete charges. From a different viewpoint, built upon the symmetries of the symplectic structure, the symplectic charges $Q_{\eta}, Q_\chi$ may be viewed as the generators of symmetry directions on the phase space. As generators of symmetries, one may study the algebra of these charges. A fundamental theorem in the CPSM states that, e.g. see \cite{Seraj-Hajian} and references therein, $$ [Q_{\chi}, Q_{\tilde{\chi}}]=Q_{[\chi,\tilde{\chi}]}+ C_{(\chi,\tilde{\chi})},\qquad [Q_\chi, Q_{\eta}]=0, $$ where $C$ is a central element of the algebra (which may in principle be a function of $Q_\eta$). \paragraph{Residual diffeomorphisms and ``symplectic soft hair'' on black holes.}\footnote{The expression ``soft hair'' was coined in \cite{Hawking} and denotes states which are charged under residual diffeomorphisms.} As we argued, elements in the solution phase space $\boldsymbol{{\cal P}}[\chi; p_i]$ are to be viewed as physically distinct solutions, while as far as ``standard general covariance'' is concerned only elements with distinct $p_i$ are distinguishable; classical gravity observers are blind to the $Q_\chi$ charges and can only resolve the $Q_\eta$'s. One may try to apply the above general arguments and picture to black holes. In this setting our proposal is very simply stated as: for a black hole solution with parameters $p_i$, which may be (uniquely) specified by $Q_\eta$, there are ``hairs'' labelled by $Q_\chi$. The black hole microstates and degrees of freedom relevant to the (thermo)dynamics of black holes are elements of the phase space $\boldsymbol{{\cal P}}[\chi; p_i]$. 
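As a concrete illustration of the central extension $C_{(\chi,\tilde\chi)}$, recall the Brown--Henneaux example \cite{Brown-Henneaux} cited above: for asymptotically AdS$_3$ gravity the charges of the residual (asymptotic) diffeomorphisms form two copies of the Virasoro algebra, $$ [Q_{m}, Q_{n}]=(m-n)\,Q_{m+n}+\frac{c}{12}\,m^{3}\,\delta_{m+n,0},\qquad c=\frac{3\ell}{2G_{N}}, $$ where $\ell$ is the AdS$_3$ radius and $G_N$ the Newton constant; with a different choice of the zero point of $Q_0$ the central term appears as $m(m^2-1)$. In this example the central element is a pure number, independent of the $Q_\eta$'s.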
In a different wording, our proposal is as follows: \emph{To any black hole, one may associate two kinds of information: classical local information and ``quasi-local'' semi-classical information. The classical information consists of the usual geometric quantities which we are familiar with from standard GR and standard general covariance. This information is available to classical local probes/geodesics and includes geodesic distances, causal and horizon structure and the usual ``thermodynamical'' quantities associated with black holes, like their mass, angular momentum, electric charge, surface gravity and horizon angular velocity and electric potentials. On the other hand, there are the symplectic charges $Q_\chi$. These charges are semi-classical and non-local as they cannot be measured by classical local observers; they are given by surface integrals. Being symplectic charges, they can be defined on a generic compact, codimension two spacelike surface $\Sigma$. Geometries distinguished by these symplectic charges, the ``symplectic soft hair,'' are all diffeomorphic to each other and are hence deemed the same from the usual classical GR viewpoint. A black hole and its symplectic hair form a phase space $\boldsymbol{{\cal P}}[\chi;p_i]$. The above clearly states the necessity to revise the strict statement of equivalence principle and general covariance discussed in the opening of this essay, so that it includes and accommodates the residual symmetries and the associated charges. } Our vision and hope is that these ideas are relevant to the identification of black hole microstates as states carrying non-trivial symplectic symmetry charges. The first steps towards an explicit realization of this vision and hope have been taken in \cite{NH-soft-hair, AGS, Hossein-Shahin}. 
Here we sketch the ``horizon fluffs'' or ``fluff ball'' proposal put forward in \cite{AGS}; for the technical details the interested reader may consult these papers and references therein: \emph{In geometries with an (event) horizon, like black holes, one has the possibility of distinct residual symmetries in the near horizon and asymptotic regions. In particular, a black hole state is one identified with asymptotic symmetries (which include ADM-type charges like mass and angular momenta). Moreover, there are near horizon residual symmetries which may not be extended to the asymptotic region. The fluff ball proposal then states that the black hole microstates, the horizon fluffs, are the states which are distinguishable by the near horizon residual symmetry charges. The horizon fluffs of a given black hole are, however, indistinguishable by their asymptotic charges.} The above proposal has been worked through in \cite{AGS, Hossein-Shahin} for generic AdS$_3$ black holes, while it is expected to be generalizable to more realistic astrophysical black holes which may be approximated by Kerr geometries. Our ``fluff ball'' proposal has some similarities with the ``fuzz ball'' proposal of Samir Mathur \cite{Fuzzball}, but also some basic differences from it. In the fuzz ball proposal microstates correspond to an ensemble of geometries, each of which is smooth and horizon free, and which are classically distinct within usual classical GR. Its realization has been successful in the supersymmetric D1D5-P setting in string theory and relies on supersymmetry and string theory machinery, e.g. see \cite{Bena} and references therein. In the fluff ball proposal, however, the microstates are all geometries which are diffeomorphic to a given near horizon geometry (and not distinct in the standard general relativity sense); they differ in their soft hairs. 
That is, they all have the same near horizon and causal structure, and they all correspond to the same black hole with the specified asymptotic charges. Here we discussed surface charges associated with residual diffeomorphisms and argued that their presence calls for revisiting the strict notion of the equivalence principle and general covariance. As a possible application, we used the existence of states labelled by certain residual diffeomorphisms to identify black hole microstates, which in this context we have called horizon fluffs. One may then wonder if these ``soft hairs'' can be used to discuss more dynamical questions regarding black holes, such as the information paradox. Some early steps in this direction have been taken in \cite{Hawking, Compere}. Moreover, for asymptotic AdS geometries where we have a dual CFT picture, one may wonder if there is any relation between our proposal and the CFT description. Some early discussions of this in the AdS$_3$ example were presented in \cite{AdS3}, where it was argued that the ``symplectic soft hair'' indeed fully capture the so-called ``boundary gravitons,'' the degrees of freedom of the presumed dual 2d CFT. We hope our proposal here opens a new window onto the celebrated AdS/CFT correspondence \cite{AdS/CFT}. \paragraph{Acknowledgements.} I would like to thank my collaborators Hamid Afshar, Geoffrey Comp\'ere, Daniel Grumiller, Kamal Hajian, Ali Seraj, Joan Simon and Hossein Yavartanoo for many discussions over the years and for their role in the development of the ideas presented in this essay. I would also like to thank Glenn Barnich, Mehrdad Mirbabayi, Marco Serone and Marko Simonovi\'c for discussions and comments. This work is supported in part by grants from ICTP NET-68, an ICTP Simons fellowship, the Allameh Tabatabaii grant prize of Boniad Melli Nokhbegan of Iran and by the SarAmadan grant of the Iranian vice presidency in science and technology. I would also like to thank ICTP for the hospitality while this note was written down.
\section{INTRODUCTION} Policy exploration has always been one of the critical topics in the field of Reinforcement Learning (RL): an agent's policy may diverge under excessive exploration, but if exploration is insufficient, the policy is prone to converge prematurely. Part of the research on exploration focuses on noise perturbations [1-2] or entropy regularization [3,4]; these methods explore the entire policy space with strong randomness. The other part of the exploratory work mainly obtains a better policy by constructing an ``intrinsic'' reward [5-6]. In reinforcement learning tasks, epsilon-greedy is one of the most frequently applied exploration methods [1], but it does not explore in a targeted way, so an exponential amount of data is required, as with Noisy Net [2]. Haarnoja et al. designed the Q value function in the form of a Boltzmann distribution, which increases the diversity of policies [7]. Osband et al. [8] offered a promising approach to exploring efficiently with generalization, called randomized least-squares value iteration (RLSVI), but it is not suitable for non-linear value functions such as neural networks. Osband et al. [9] developed bootstrapped Deep Q-Network (DQN), which combines deep exploration with deep neural networks. Subsequently, RLSVI was further extended with Multiplicative Normalizing Flows [10], which augment DQN and DDPG with multiplicative normalizing flows in order to track a rich approximate posterior distribution. A wealth of research concerns how to design intrinsic rewards to help exploration. Auer [11] proposed the confidence bounds method, which can be used to deal with situations exhibiting an exploitation-exploration trade-off in low-dimensional state space tasks. An extension of this work is the pseudo-count based method [12], which allocates rewards according to a pseudo-count and guides the agent to visit states with a low count value.
Yet this method is not applicable if the state space is high-dimensional. In order to improve the accuracy of the pseudo-count, PixelCNN was proposed [13]. Zhao and Tresp applied the Curiosity-Driven Prioritization (CDP) framework to encourage the agent to over-sample trajectories with rarely achieved goal states [14], so as to develop the agent's exploration ability. In addition, there is much other extension work [15-16] related to count-based exploration. Unlike count-based methods, Houthooft et al. [17-19] use predictive models to adjust the intrinsic reward of the agent during exploration; Stadie et al. used an Auto Encoder (AE) to encode the state space, estimated the agent's familiarity with the environment with a deep predictive model [20], and then allocated rewards based on the predicted value of the model. Pathak et al. design exploration rewards based on the disagreement of ensembles of dynamics models [21], which guides the agent to explore. For the purpose of alleviating the catastrophic forgetting of neural networks, Guo et al. used previously trained policy models to interact with the environment and generate more training data for the current policy network [22], so as to help the agent remember the explored states. In the past, noise-perturbation methods usually add noise directly to policies, which requires a large amount of data from interacting with the environment in high-dimensional action space tasks. Inspired by Liebig's law of the minimum [23], we propose an Add Noise to Noise (AN2N) policy exploration method. Liebig's law of the minimum states that the capacity of a barrel with staves of unequal length is limited by the shortest stave; by analogy, we regard the policy improvement of the agent in RL as a process of building or repairing a wooden barrel. For the barrel to hold more water at each step, we need to find the shortest stave and repair it first.
Similarly, in reinforcement learning, in order to help agents achieve better performance, we need to find the states they most need to explore and make a greater effort to explore them, which is the core idea of the AN2N algorithm. \section{Preliminaries} Reinforcement learning considers the paradigm of an agent learning policies to maximize the expected reward while interacting with the environment. At each discrete time step $t$, the agent receives an observation $o_t \in \mathcal{O}$, selects an action $a_t \in \mathcal{A}$ according to its policy $\pi$: $\mathcal{O} \rightarrow \mathcal{A}$, and receives a scalar reward $r_t$ and a next observation $o_{t+1}$ from the environment. In general, reinforcement learning can be regarded as a Markov Decision Process (MDP), which models stochastic, discrete-time and finite action space control problems [24-25]. A practical environment may be only partially observed; here, we assume the environment is fully observed, so $s_t=o_t, \mathcal{S}=\mathcal{O}$. In reinforcement learning, the objective is to find the optimal policy $\pi$, which maximizes the expected return; an action-value function $Q^{\pi}$ is used to assess the quality of a policy $\pi$, defined as follows: $$ Q^{\pi}\left(s,a\right)=\mathbb{E}_{s_t \sim p_{\pi},a_t \sim \pi} \left[ \sum_{t=0}^{+\infty}\gamma^t R\left(s_t,a_t\right) \right] \eqno{(1)} $$ where $\gamma \in [0,1]$ is the discount factor determining the importance of future rewards, and $\mathbb{E}_{s_t \sim p_{\pi},a_t \sim \pi}$ is the expectation over the distribution of the trajectories $(s_0, a_0, s_1, a_1, \dots)$ obtained by performing actions $a \sim \pi$ in states $s \sim p_{\pi}$.
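The quantity inside the expectation of (1) can be computed for one sampled trajectory by a backward accumulation; a minimal sketch (the function name `discounted_return` is ours, not from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Return sum_t gamma^t * r_t for a single sampled trajectory of rewards."""
    g = 0.0
    for r in reversed(rewards):  # backward accumulation: g <- r + gamma * g
        g = r + gamma * g
    return g

# Example: three unit rewards with gamma = 0.5 give 1 + 0.5 + 0.25 = 1.75.
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```

Averaging this quantity over many trajectories started from $(s,a)$ gives a Monte-Carlo estimate of $Q^{\pi}(s,a)$.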
The optimal policy attains the largest action-value function, $Q^*(s, a) = \max_{\pi} Q^{\pi}(s, a)$; the value function $V^\pi$ is the mean of $Q^{\pi}$ obtained by selecting action $a$ according to the distribution $\pi(\cdot|s)$ in state $s$, defined as $V^\pi(s)=\mathbb{E}_{a \sim \pi(\cdot|s)} \left[Q^{\pi}(s, a)\right]$. Since we consider reinforcement learning as an MDP problem, we can express the action-value function $Q^{\pi}$ in the form of dynamic programming: $$ \begin{aligned} Q^{\pi}\left(s_t,a_t\right) &=\mathbb{E}_{s_{t+1} \sim p_{\pi}} [r(s_t,a_t) \\ &+\gamma \mathbb{E}_{a_{t+1} \sim \pi} \left[Q^{\pi}(s_{t+1}, a_{t+1})\right]] \end{aligned} \eqno{(2)} $$ In low-dimensional state-action space tasks, the $Q^{\pi}$ function in (2) is usually represented by a look-up table, for example in Q-Learning [26]. As the dimension of the state-action space becomes higher, the look-up table method becomes less and less applicable, especially in complex tasks. Therefore, Deep Reinforcement Learning (DRL) uses deep neural networks as function approximators for RL methods [27]. Since then, more and more algorithms have been proposed in the field of deep reinforcement learning, such as the Deep Deterministic Policy Gradient (DDPG) [28], Trust Region Policy Optimization [29], Asynchronous Advantage Actor-Critic (A3C) [30], Soft Actor-Critic (SAC) [4] and Twin Delayed Deep Deterministic Policy Gradient (TD3) [31] algorithms. DDPG applies neural networks to approximate the action-value function $Q(s,a|\theta^Q)$ and the policy function $\mu(s|\theta^\mu)$, called the critic network and actor network, respectively, with parameters $\theta^Q$ and $\theta^\mu$; the DDPG algorithm introduces a critic target network $\theta^{Q^{'}}$ and a policy target network $\theta^{\mu^{'}}$ so as to improve the stability of policy updates.
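The slow tracking of the target networks described below, $\theta' \leftarrow \tau\theta + (1-\tau)\theta'$, is a simple in-place average; a minimal framework-free sketch on plain lists of scalar parameters (the name `soft_update` is ours):

```python
def soft_update(target_params, online_params, tau=0.005):
    """Polyak-average the online parameters into the target parameters in place:
    theta' <- tau * theta + (1 - tau) * theta', with tau << 1."""
    for i, (tp, op) in enumerate(zip(target_params, online_params)):
        target_params[i] = tau * op + (1.0 - tau) * tp

target = [0.0]
online = [1.0]
soft_update(target, online, tau=0.1)
print(target[0])  # 0.1: the target moves only a small step toward the online value
```

With $\tau \ll 1$ the target parameters change slowly, which is what damps the fluctuations in the learned value estimates.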
Consequently, gradient descent is used to optimize the network weights by minimizing the loss: $$ \begin{aligned} L(\theta^Q)=\mathbb{E}_{s_t \sim p_{\mu(s_t|\theta^\mu)},a_t \sim \mu(s_t|\theta^\mu)} &[(Q(s_t,a_t|\theta^Q) \\ &-y_t)^2] \end{aligned} \eqno{(3)} $$ where $$ y_t=r(s_t,a_t)+\gamma Q^{'}(s_{t+1},\mu^{'}(s_{t+1}|\theta^{\mu^{'}})|\theta^{Q^{'}}) \eqno{(4)} $$ $$ \begin{aligned} \nabla_{\theta^{\mu}}J &\approx \mathbb{E}_{s \sim p_{(s_t|\theta^{\mu})}}\left[ \nabla_{\theta^{\mu}} Q(s,a|\theta^Q ) |_{s=s_t,a=\mu(s_t|\theta^{\mu})}\right] \\ &= \mathbb{E}_{s \sim p_{(s_t|\theta^\mu)}} [ \nabla_a Q(s,a|\theta^Q) |_{s=s_t,a=\mu(s_t)} \\ &\qquad \qquad \qquad \quad \nabla_{\theta^{\mu}}\mu(s_t|\theta^\mu)|_{s=s_t} ] \end{aligned} \eqno{(5)} $$ Equation (4) is derived from (2); the target actor network decouples the processes of policy updating and policy improvement, and the weights of the critic and policy target networks are updated to slowly track the learned networks, $\theta^{'}\leftarrow \tau \theta +(1-\tau)\theta^{'}$ with $\tau \ll 1$, which avoids large fluctuations in the agent's learning process. The actor is updated by (5), following the chain rule on the expected return $Q(s,a|\theta^Q )$ with respect to the actor parameters $\theta^{\mu}$. \section{Exploring More When It Needs} In a reinforcement learning environment, the agent often selects different actions in different states. Due to the vulnerability of the policy, the agent may perform terribly in some states; the agent then proceeds to the next step, getting into a new state that has not been learned before. The poor policy begins to affect the decisions in the following states, and ultimately affects the overall performance of the agent. We decompose this problem into three sub-problems: \begin{itemize} \item When does the agent need to explore as much as possible? \item How to determine whether the current state needs more exploration? \item How to explore?
\end{itemize} The solution of these three problems is the core idea of our proposed AN2N algorithm. \subsection{Exploring More When the Agent is in a Bad State} As the above analysis shows, due to the vulnerability of policies, the agent may find itself in a dilemma in some states. Once an agent falls into a terrible state, it is likely to have an impact on the following trajectory, thus affecting the overall performance. This reminds us of Liebig's law of the minimum, which indicates that the capacity of a barrel is limited by the length of the shortest stave. As shown in Fig. 1, in order to effectively improve the capacity of the barrel, it is necessary to lengthen the shortest stave first. See more details in Fig. 5 in the Appendix. \begin{figure}[thpb] \centering \includegraphics[scale=0.4]{paperpic/4.jpg} \caption{Increasing the capacity of a barrel according to Liebig's law of the minimum} \label{fig: figure1} \end{figure} Similar to the principle of repairing the short stave, the agent needs to focus on these poorly performing states and explore more in order to stabilize the process of policy improvement. We therefore use the cumulative reward of a state to evaluate whether the state is bad. Equation (4) provides a solution, but it is one-step Q-learning: the obtained reward $r$ directly affects only the value of the state-action pair $(s, a)$, while the other state-action pairs are affected indirectly through updates of the $Q(s,a)$ function, which slows down the learning process, since many updates are required to propagate a reward to the relevant preceding states and actions.
Hence, we choose n-step returns [26, 32], which propagate rewards faster, defined as: $$ \begin{aligned} Reward(s_t) &=r_t+\gamma r_{t+1}+\cdots +\gamma^{n-1}r_{t+n-1}\\ &+\max_{a}\gamma^{n}Q(s_{t+n},a) \end{aligned} \eqno{(6)} $$ where the reward of the current state directly affects the values of the $n$ preceding state-action pairs, which makes the process of propagating rewards to the relevant state-action pairs potentially much more efficient. For the purpose of applying it to our algorithm, we rewrite it as follows: $$ \begin{aligned} Reward(s_t) &\approx r_t+\gamma r_{t+1}+\cdots+\gamma^{T-t-1}r_{T-1}\\ &+\gamma^{T-t}Q^{'}(s_T,\mu(s_T|\theta^{\mu})|\theta^{Q^{'}}) \end{aligned} \eqno{(7)} $$ We can calculate the reward value of each state according to (7) after the agent generates a trajectory, and greedily select the worst states to store in a fixed-length FIFO queue. \subsection{Calculating the Similarity between States} When the agent interacts with the environment, it is necessary to determine whether the current interaction state needs more exploration. We use a similarity measure to judge whether the current state is similar to a state in the FIFO queue; if it is, the current state needs more exploration. Similarity measures are widely used in the field of recommender systems; we select two kinds of distance to measure the similarity between different states, namely the Manhattan distance and the cosine distance: $$ \begin{aligned} Manhattan\_sim(s_i,s_j) = \frac{1}{1+\sum_{k}\left| s_{i,k}-s_{j,k} \right|} \end{aligned} \eqno{(8)} $$ $$ \begin{aligned} &Cosine\_sim(s_i,s_j) = \\ &\frac{\sum_{k}(s_{i,k}-\bar{s_k}) \cdot (s_{j,k}-\bar{s_k})}{\sqrt{\sum_k(s_{i,k}-\bar{s_k})^2}\cdot\sqrt{\sum_k(s_{j,k}-\bar{s_k})^2}} \end{aligned} \eqno{(9)} $$ Equation (8) describes the Manhattan distance similarity, which is able to capture local differences between states, while the cosine distance similarity in (9) measures the difference as a whole.
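The two similarity measures in (8) and (9) are straightforward to implement; a minimal sketch on plain lists (the function names and the explicit `mean` argument for the centering vector $\bar{s}$ are our conventions, not the paper's):

```python
import math

def manhattan_sim(si, sj):
    """Eq. (8): 1 / (1 + L1 distance); identical states give similarity 1."""
    return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(si, sj)))

def cosine_sim(si, sj, mean):
    """Eq. (9): cosine similarity of the mean-centered state vectors."""
    ci = [a - m for a, m in zip(si, mean)]
    cj = [b - m for b, m in zip(sj, mean)]
    num = sum(a * b for a, b in zip(ci, cj))
    den = math.sqrt(sum(a * a for a in ci)) * math.sqrt(sum(b * b for b in cj))
    return num / den if den > 0 else 0.0

print(manhattan_sim([1.0, 2.0], [1.0, 2.0]))           # 1.0 for identical states
print(cosine_sim([1.0, 2.0], [3.0, 6.0], [0.0, 0.0]))  # 1.0 for parallel (centered) states
```

As the text notes, the Manhattan form reacts to local per-component differences, while the centered cosine form compares the overall direction of the two state vectors.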
Since this method needs a threshold to judge whether two states are similar, we set a decaying variable $Pct_{add}$, the proportion of bad states among all interaction states; $Pct_{add}$ is used to automatically adjust the similarity threshold: if it is too high, the similarity threshold is increased, otherwise it is decreased. \subsection{Add Noise to Noise} The agent knows whether the current state needs more exploration from its similarity to the bad states; the states that need more exploration are called key states. Many exploration methods were analyzed in Section 1; we choose one of the simplest and most effective to verify our method AN2N, namely adding noise perturbations to the policy. When the agent interacts with the environment normally, a small noise perturbation is added to the policy $\mu(s_t|\theta^\mu)$, so as to ensure the basic exploration ability of the policy. When the agent is in a key state, a noise is added to the small noise, i.e., a big noise $\mathcal{N}_{\textrm{big}}=\mathcal{N}_{\textrm{noise}}+\mathcal{N}_{\textrm{noise}_{\textrm{add}}}$ is applied directly to increase exploration, which is also the origin of the algorithm's name (Add Noise to Noise, AN2N). The pseudo code of the AN2N algorithm is shown in Algorithm~\ref{AN2N}.
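The two AN2N ingredients, computing the per-state returns of (7) over a finished trajectory and gating the noise scale by similarity to remembered bad states, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the names (`state_returns`, `choose_noise`), the Manhattan similarity used as the gate, the threshold, and the FIFO capacity of 50 are all our assumptions; only the small/big noise values 0.05 and 0.4 come from the paper.

```python
from collections import deque

def state_returns(rewards, terminal_value, gamma=0.99):
    """Approximate Reward(s_t) of (7): discounted tail rewards plus a bootstrap
    value for the final state (terminal_value stands in for Q'(s_T, mu(s_T)))."""
    g = terminal_value
    returns = []
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

def choose_noise(state, bad_states, similarity, threshold, small=0.05, big=0.4):
    """AN2N gating: use the big noise when the current state resembles any
    remembered bad (key) state, otherwise keep the small baseline noise."""
    if any(similarity(state, s) >= threshold for s in bad_states):
        return big
    return small

# Bounded FIFO of remembered bad states (hypothetical capacity of 50).
bad_states = deque(maxlen=50)
bad_states.append([0.0, 0.0])

# Manhattan similarity of eq. (8) as the gate.
sim = lambda a, b: 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

print(choose_noise([0.0, 0.1], bad_states, sim, threshold=0.5))  # 0.4 (near a bad state)
print(choose_noise([5.0, 5.0], bad_states, sim, threshold=0.5))  # 0.05
```

After each test trajectory, the states with the lowest values of `state_returns` would be pushed into `bad_states`, so the gate keeps tracking the currently worst-handled regions of the state space.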
\begin{algorithm}[h] \begin{spacing}{0.8} \caption{AN2N} \label{AN2N} \KwIn{Noise $\mathcal{N}_{\textrm{small}}$, $\mathcal{N}_{\textrm{big}}$, Replay buffer size $R$, $R_{\textrm{AN2N}}$, $FIFO_{\textrm{AN2N}}$, Key states $K_{\textrm{upper}}$, $K_{\textrm{lower}}$} Randomly initialize critic network $Q(s,a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$\\ Initialize target network $Q^{'}$ and $\mu^{'}$ with weights $\theta^{Q^{'}} \leftarrow \theta^Q$, $\theta^{\mu^{'}} \leftarrow \theta^{\mu}$\\ \For{episode $ e \in \{$1,...,M$\}$} {Initialize a random process $\mathcal{N}$ for action exploration\\ Receive initial observation state $s_1$\\ \For{$t \in \{$1,...,T$\}$} { \# $S$ is the set of states in $FIFO_{\textrm{AN2N}}$\\ \uIf {\rm{Similarity($s_t, S$)}} { $\mathcal{N}_t$ = $\mathcal{N}_{\textrm{big}}$ } \Else{ $\mathcal{N}_t$ = $\mathcal{N}_{\textrm{small}}$ } Select action $a_t=\mu(s_t|\theta^\mu)+\mathcal{N}_t$ according to the current policy and exploration noise\\ Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$\\ Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R$\\ Test the agent and store the trajectory $(s_t, r_t)$ in $R_{\textrm{AN2N}}$\\ Calculate the cumulative discounted reward of each state:\\ \begin{center} \vspace{2ex} $Reward(s_t) \approx r_t+\gamma r_{t+1}+\cdots+\gamma^{T-t-1}r_{T-1}+\gamma^{T-t}Q^{'}(s_T,\mu(s_T|\theta^{\mu})|\theta^{Q^{'}})$ \end{center} Save the clip$((20\times\frac{\textrm{average reward}}{\textrm{reward}})^2, K_{\textrm{lower}}, K_{\textrm{upper}})$ states with the lowest $Reward(s)$ as key states in $FIFO_{\textrm{AN2N}}$\\ Run the DDPG, SAC, TD3, etc.\ algorithm updates } } \end{spacing} \end{algorithm} \section{Results} We choose two representative algorithms to combine with AN2N.
The first one is DDPG, which used a neural network to represent the action policy $\mu(s_t|\theta^\mu)$ for the first time, thus extending the application of deep reinforcement learning from discrete control to continuous control. It is one of the most famous algorithms in the field of continuous control. The second one is SAC, an off-policy algorithm based on maximizing policy entropy; it is still a state-of-the-art algorithm, benefiting from its good exploration, and has found good applications in industry. \begin{figure}[thpb] \begin{center} \includegraphics[width=0.45\textwidth]{paperpic/Environment.jpg} \caption{Samples of environments we attempt to solve with AN2N. In order from the left: make a 2D cheetah robot run, make a two-dimensional one-legged robot hop forward as fast as possible, make a two-dimensional bipedal robot walk forward as fast as possible, make a 3-link swimming robot swim forward as fast as possible in a viscous fluid, make a four-legged creature walk forward as fast as possible}\label{fig:figure2} \end{center} \end{figure} We evaluate our algorithm combined with DDPG and SAC on 5 continuous control tasks of varying levels of difficulty, all of which are simulated using the MuJoCo physics engine [34], as it offers a unique combination of speed, accuracy and modeling power, and it is the first full-featured simulator designed from the ground up for the purpose of motion control, illustrated in Fig. 2. To test the generalization of the algorithm, we kept the same hyperparameters in the different environments. As introduced in Section 2, DDPG uses two actor networks (actor network $\mu(s|\theta^\mu)$ and target actor network $\mu^{'}(s|\theta^{\mu^{'}})$) and two critic networks (critic network $Q(s,a|\theta^Q)$ and target critic network $Q^{'}(s,a|\theta^{Q^{'}})$) to approximate the policy and the action-state value, respectively.
When the agent interacts with the environment, it first uses a random policy to obtain some interaction data for the initial training of the networks, and then starts to interact with the environment using policies with superimposed perturbation noise. In the test phase, it records the reward of each state of the agent, and calculates the action-state value of the last state according to (7); the pseudo code of DDPG with AN2N is shown in Algorithm~\ref{algorithm1} in Appendix B. It should be noted that in AN2N the superimposed small noise value is set to 0.05, the large noise value is set to 0.4, and the proportion of large noise $Pct_{add}$ decays linearly from 0.4 to 0.2, which limits the integrated noise over the whole interaction process to a reasonable range. \begin{figure}[htbp] \centering \subfigure[HalfCheetah]{\includegraphics[width=0.15\textwidth]{paperpic/ddpg_HalfCheetah.jpeg}} \subfigure[Hopper]{\includegraphics[width=0.15\textwidth]{paperpic/ddpg_Hopper.jpeg}} \subfigure[Walker2d]{\includegraphics[width=0.15\textwidth]{paperpic/ddpg_Walker2d.jpeg}} \\ \centering \subfigure[Swimmer]{\includegraphics[width=0.23\textwidth]{paperpic/ddpg_Swimmer.jpeg}} \subfigure[Ant]{\includegraphics[width=0.23\textwidth]{paperpic/ddpg_Ant.jpeg}} \caption{DDPG agent combined with AN2N versus the baseline within $6\times10^5$ time steps for different test environments. (a) ddpgAN2N agent versus the baseline in HalfCheetah-v2. (b) ddpgAN2N agent versus the baseline in Hopper-v2. (c) ddpgAN2N agent versus the baseline in Walker2d-v2. (d) ddpgAN2N agent versus the baseline in Swimmer-v2.
(e) ddpgAN2N agent versus the baseline in Ant-v2.} \label{fig:figure3} \end{figure} \begin{table*}[h] \caption{Mean value of the cumulative reward of the agent in different test environments} \label{tab:table3} \centering \begin{tabular}{c | c c c c c}\hline \textbf{Environment} &\textbf{Random} &\textbf{DDPG} & \textbf{DDPG with AN2N} & \textbf{SAC} & \textbf{SAC with AN2N} \\\hline HalfCheetah & -284$\pm$27 & 6550 $\pm$ 1291 & 7541 $\pm$ 651 & 8326 $\pm$ 1577 & $\mathbf{8803 \pm 579}$ \\ Hopper & 18$\pm$6 & 1659 $\pm$ 992 & 1067 $\pm$ 726 & 2348 $\pm$ 637 & $\mathbf{2562 \pm 573}$ \\ Walker2d & 2$\pm$2 & 541 $\pm$ 361 & 685 $\pm$ 455 & 2566 $\pm$ 765 & $\mathbf{2834 \pm 752}$ \\ Swimmer & 0$\pm$4 & 63 $\pm$ 26 & $\mathbf{74 \pm 31}$ & 41 $\pm$ 2 & 41 $\pm$ 2 \\ Ant & -58$\pm$35 & 24 $\pm$ 319 & 245 $\pm$ 342 & 1595 $\pm$ 848 & $\mathbf{1835 \pm 989}$ \\\hline \end{tabular} \end{table*} The training process of deep reinforcement learning often fluctuates a lot due to the instability of policy update and policy improvement. In order to increase the credibility of the experimental results, each simulation environment is run for $6\times 10^5$ steps, and every 4000 steps constitute an epoch, at which the learned policies are tested 10 times and the average value is taken as the test performance. We use the same set of parameters in the five environments of Fig. 2, and repeat the experiment with five different random seeds in each environment, the seeds being set to 0, 5, 10, 15 and 20, respectively.
\begin{figure}[htbp] \centering \subfigure[HalfCheetah]{\includegraphics[width=0.15\textwidth]{paperpic/sac_HalfCheetah.jpeg}} \subfigure[Hopper]{\includegraphics[width=0.15\textwidth]{paperpic/sac_Hopper.jpeg}} \subfigure[Walker2d]{\includegraphics[width=0.15\textwidth]{paperpic/sac_Walker2d.jpeg}} \\ \centering \subfigure[Swimmer]{\includegraphics[width=0.23\textwidth]{paperpic/sac_Swimmer.jpeg}} \subfigure[Ant]{\includegraphics[width=0.23\textwidth]{paperpic/sac_Ant.jpeg}} \caption{SAC agent combined with AN2N versus the baseline within $6\times10^5$ time steps for different test environments.} \label{apdfig1} \end{figure} The experimental results of DDPG with AN2N and the DDPG benchmark are shown in Fig.~\ref{fig:figure3}: our algorithm achieves better performance on the HalfCheetah and Walker2d tasks, and faster convergence on HalfCheetah, Walker2d, Swimmer and Ant. Similarly, we combine SAC with AN2N. The Q function is usually updated by minimizing the Bellman residual, while SAC adds a policy entropy term, and the output of the SAC policy network is a distribution, generally expressed by the mean and variance of a Gaussian. Since the benchmark limits the variance of the policy output, in order to combine SAC and AN2N more succinctly we increase the variance limit by a factor of $1.5$ in the key states where more exploration is needed, and reduce the variance to $0.5$ times the original in the other states. The pseudo code of SAC with AN2N is shown in Algorithm \ref{algorithm2} in the Appendix. Compared with the SAC benchmark in the test environments of Fig. 2, the experimental results are shown in Fig. 4. Although SAC has higher stability and better performance than DDPG and other algorithms, SAC with AN2N achieves better convergence speed and performance in continuous action control tasks such as HalfCheetah, Hopper and Walker2d.
We summarize the experimental results in Table 1, where Random means the agent takes a randomly generated policy. Each value represents the average return over 10 trials of 0.6 million time steps with five different seeds; the maximum value for each task is bolded, and $\pm$ corresponds to a single standard deviation over trials. AN2N matches or outperforms all baselines in both final performance and learning speed across all tasks, especially SAC combined with AN2N, which shows that the combination of AN2N with DDPG and SAC achieves a significant performance improvement. \section{Conclusion} We propose a novel policy exploration method called AN2N, based on Liebig's law of the minimum. Owing to its excellent scalability, AN2N can be combined well with frequently used algorithms such as DDPG and SAC, enhancing their exploration ability. The AN2N algorithm is divided into the following three steps: 1. Use the idea of n-step Q-learning to calculate the return of each state, used to measure which states are prone to get the agent into a dilemma, and preserve them; 2. Compare the current state with the dilemma states; if they are similar, the current state needs more exploration, and use the proportion of added noise to automatically adjust the similarity threshold; 3. Add noise to the noise to increase the intensity of exploration. We combine AN2N with the DDPG and SAC algorithms to verify its performance in mainstream test environments for continuous control tasks, and achieve significant improvements in performance and convergence speed. \addtolength{\textheight}{-3.5cm} \section*{APPENDIX} The appendix includes: the schematic diagram of the whole process of increasing the capacity of a barrel in Fig. 5, pseudo code of DDPG with AN2N in \textbf{Algorithm}~\ref{algorithm1}, and pseudo code of SAC with AN2N in \textbf{Algorithm}~\ref{algorithm2}.
\begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{paperpic/3.jpg} \caption{The whole process of increasing the capacity of a barrel according to Liebig's law of the minimum} \label{apdfig: figure7} \end{figure} \begin{algorithm}[h] \begin{spacing}{0.8} \caption{DDPG with AN2N}\label{algorithm1} \KwIn{Noise $\mathcal{N}_{\textrm{small}}$, $\mathcal{N}_{\textrm{big}}$, Replay buffer size $R$, $R_{\textrm{AN2N}}$, $FIFO_{\textrm{AN2N}}$, Key states $K_{\textrm{upper}}$, $K_{\textrm{lower}}$} Randomly initialize critic network $Q(s,a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$\\ Initialize target network $Q^{'}$ and $\mu^{'}$ with weights $\theta^{Q^{'}} \leftarrow \theta^Q$, $\theta^{\mu^{'}} \leftarrow \theta^{\mu}$\\ \For{episode $ e \in \{$1,...,M$\}$} {Initialize a random process $\mathcal{N}$ for action exploration\\ Receive initial observation state $s_1$\\ \For{$t \in \{$1,...,T$\}$} { \# $S$ is the set of states in $FIFO_{\textrm{AN2N}}$\\ \uIf {\rm{Similarity($s_t, S$)}} { $\mathcal{N}_t$ = $\mathcal{N}_{\textrm{big}}$ } \Else{ $\mathcal{N}_t$ = $\mathcal{N}_{\textrm{small}}$ } Select action $a_t=\mu(s_t|\theta^\mu)+\mathcal{N}_t$ according to the current policy and exploration noise\\ Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$\\ Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R$\\ Test the agent and store the trajectory $(s_t, r_t)$ in $R_{\textrm{AN2N}}$\\ Calculate the cumulative discounted reward of each state:\\ \begin{center} \vspace{2ex} $Reward(s_t) \approx r_t+\gamma r_{t+1}+\cdots+\gamma^{T-t-1}r_{T-1}+\gamma^{T-t}Q^{'}(s_T,\mu(s_T|\theta^{\mu})|\theta^{Q^{'}})$ \end{center} Save the clip$((20\times\frac{\textrm{average reward}}{\textrm{reward}})^2, K_{\textrm{lower}}, K_{\textrm{upper}})$ states with the lowest $Reward(s)$ as key states in $FIFO_{\textrm{AN2N}}$\\ \If{t \rm{mod} u} { Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$
from $R$\\ Set $y_i=r_i+\gamma Q^{'}(s_{i+1},\mu^{'}(s_{i+1}|\theta^{\mu^{'}})|\theta^{Q^{'}})$\\ Update the critic by minimizing the loss: $L=\frac{1}{N} \sum_{i}(y_i-Q(s_i, a_i|\theta^Q))^2$\\ Update the actor policy using the sampled policy gradient:\\ \begin{center} \vspace{1ex} $\nabla_{\theta^{\mu}}J\approx\frac{1}{N} \sum\limits_{i}\nabla_aQ(s, a|\theta^Q)|_{s=s_i, a=\mu(s_i)}\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})|_{s=s_i}$ \end{center} Update the target networks: \begin{center} $\theta^{Q^{'}}\leftarrow\tau\theta^Q+(1-\tau)\theta^{Q^{'}}$\\ $\theta^{\mu^{'}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{'}}$ \end{center} } } } \end{spacing} \end{algorithm} \begin{algorithm}[h] \begin{spacing}{0.8} \caption{SAC with AN2N}\label{algorithm2} \KwIn{Noise standard deviation $\sigma_{\textrm{small}}$, $\sigma_{\textrm{big}}$, Replay buffer size $R$, $R_{\textrm{AN2N}}$, $FIFO_{\textrm{AN2N}}$, Key states $K_{\textrm{upper}}$, $K_{\textrm{lower}}$, Temperature parameter $\alpha$} Randomly initialize critic networks $Q(s,a|\theta_1^Q)$, $Q(s,a|\theta_2^Q)$, and actor $\mu(s|\theta^\mu)$ with weights $\theta_1^Q$, $\theta_2^Q$ and $\theta^\mu$\\ Initialize target networks $Q^{'}$ and $\mu^{'}$ with weights $\theta_1^{Q^{'}} \leftarrow \theta_1^Q$, $\theta_2^{Q^{'}} \leftarrow \theta_2^Q$\\ \For{episode $ e \in \{$1,...,M$\}$} {Initialize a random process $\mathcal{N}$ for action exploration\\ Receive initial observation state $s_1$\\ \For{$t \in \{$1,...,T$\}$} { \# $S$ is the set of states in $FIFO_{\textrm{AN2N}}$\\ \uIf {\rm{Similarity($s_t, S$)}} { $\sigma$ = $\sigma_{\textrm{big}}$ } \Else{ $\sigma$ = $\sigma_{\textrm{small}}$ } Select action $a_t \sim \mathcal{N}(\mu(s_t|\theta^\mu),\sigma)$ according to the current policy\\ Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$\\ Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R$\\ Test the agent and store the trajectory $(s_t, r_t)$ in $R_{\textrm{AN2N}}$\\ Calculate the cumulative
discounted reward of each state:\\ \begin{center} \vspace{2ex} $Reward(s_t) \approx r_t+\gamma r_{t+1}+\cdots+\gamma^{T-t-1}r_{T-1}+\gamma^{T-t}Q^{'}(s_T,\mu(s_T|\theta^{\mu})|\theta^{Q^{'}})$ \end{center} Save the clip$((20\times\frac{\textrm{average reward}}{\textrm{reward}})^2, K_{\textrm{lower}}, K_{\textrm{upper}})$ states with the lowest $Reward(s)$ as key states in $FIFO_{\textrm{AN2N}}$\\ \If{t \rm{mod} u} { Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from $R$\\ Set $y_{i}=r_i+\gamma(\min_{j=1,2} Q^{'}(s_{i+1},\mu(s_{i+1}|\theta^{\mu})|\theta_j^{Q^{'}})-\alpha \log \mu(s_{t+1}|\theta^\mu))$\\ Update the critic (soft Q-function) by minimizing the loss:\\ \begin{center} $L=\frac{1}{N} \sum_{i}\sum_{j}(y_i-Q(s_i, a_i|\theta_j^Q))^2$ \end{center} Update the actor policy using the sampled policy gradient:\\ \begin{center} $\nabla_{\theta^{\mu}}J\approx\frac{1}{N} \sum\limits_{i}(\nabla_aQ(s, a|\theta^Q)|_{s=s_i, a=\mu(s_i)}\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})|_{s=s_i}$ \\ $- \nabla_a \log \mu(s_t|\theta^\mu)-\nabla_{\theta^{\mu}}\log \mu(s_t|\theta^\mu)|_{s=s_i})$ \end{center} Update the target networks: \begin{center} $\theta_j^{Q^{'}}\leftarrow\tau\theta_j^Q+(1-\tau)\theta_j^{Q^{'}}$ \end{center} } } } \end{spacing} \end{algorithm} \section*{ACKNOWLEDGMENT} We would like to thank Feng Pan, Weixing Li, Xiaoxue Feng, Yan Gao, Shengyang Ge and many others at the Institute of Pattern Recognition and Intelligent System of BIT for insightful discussions and valuable suggestions. \clearpage
\section{Introduction} Liquid crystals (LCs), rod-like polymers, and disk-shaped particles all involve molecules with a high degree of shape anisotropy \cite{jurasek2017self}. LCs occur in many contexts, ranging from the well-known display applications to biological systems \cite{woltman2007liquid,de2017rod,tian2018self}. Depending on the density and the temperature, the anisotropy of LCs may lead to different structural phases. For the thermotropic LCs of focus here, changing temperature and density may cause transitions from the ordinary crystalline state to smectic, nematic, and isotropic phases \cite{woltman2007liquid}. Pure fluids and mixtures consisting of aspherical particles have been the subject of many theoretical, experimental, and simulation studies \cite{antypov2004role,cifelli2006smectic,berardi2007computer,vadnais2008study}. Theoretical studies are typically based on the Fokker-Planck equation \cite{mendez2010relaxation,doi1988theory}, generalized Langevin equations \cite{kalmykov2001rotational,hernandez1996rotational}, Onsager theory \cite{szalai1998external}, density-functional theory \cite{del1996wetting}, or generalized van der Waals descriptions \cite{del1995surface}. Different numerical techniques like Monte Carlo and molecular dynamics (MD) have been applied for studying the phase behavior, thermodynamics, structure, and dynamics of rigid anisotropic molecules forming LCs \cite{allen1996computer}. Depending on the type of interaction between the molecules, one can classify LC models into two main groups \cite{allen1995simulations}. The first group considers models of hard particles with a non-spherical shape \cite{allen1993hard}. In such models there are no attractive interactions, i.e., the potential is purely repulsive and short-ranged. The main motivation for this approach is the success of the hard-sphere model in explaining the properties of simple liquids \cite{hansen1986theory,dyre2016simple}. 
Extensive simulation studies have used this approach to investigate structure and dynamics of LC fluids for different shape anisotropies (prolate ellipsoids, spherocylinders, rods, disks, etc.) \cite{eppenga1984monte,allen1987observation,frenkel1987computer}. In the other main class of LC models, both short-range repulsive and long-range attractive interactions are taken into account. Several potential models for fluids of aspherical particles have been introduced for LC studies, e.g., the Kihara potential \cite{kihara1963convex}, the site–site potential \cite{streen1977liquids}, the Gaussian overlap model \cite{berne1972gaussian}, and the Gay–Berne (GB) model \cite{gay1981modification}. By using site-site potentials one can realistically mimic the structure of LC molecules and compare results to experiments \cite{egberts1988molecular,komolkin1989computer,wilson1991computer,wilson1992structure,paolini1993simulation,patnaik1995modelling}, but unfortunately such models usually require huge computational resources. This is why most simulations so far have been conducted for relatively small system sizes. An exception is the GB model based on the Lennard-Jones (LJ) pair interaction, which is computationally cheap and still realistic. For this reason, the GB model has become almost the generic LC model. The GB model gives rise to a rich mesogenic behavior \cite{tian2018self}. Previous numerical studies of this model have focused on its phase behavior \cite{adams1987computer,luckhurst1990computer,de1991location,alejandre1994molecular,chalam1991molecular,andrew1993monte,hashim1995computer}, per-particle translational and orientational dynamics \cite{de1992dynamics}, interfacial properties \cite{del1995computer}, elastic constants \cite{stelzer1995molecular}, thermal conductivity \cite{sarman1994molecular,sarman1993self}, and viscosity \cite{sarman1993statistical,smondyrev1995viscosities}. 
Analytical perturbation theories have also been applied in order to explore the phase diagram of GB fluids \cite{gupta1989computer,velasco1995liquid}. The GB model allows one to describe different shape anisotropies, spanning from elongated ellipsoids to thin disks \cite{gay1981modification}. The GB potential depends on four dimensionless parameters, which is often signaled by the notation $GB(\kappa, \kappa^{\prime}, \mu, \nu)$. The four parameters control the shape of the molecules and the strength of the interaction between them. $GB(3,5,2,1)$ is the most studied case, leading to rod-shaped molecules, and in this case the phase diagram and orientational order parameter are known \cite{de1991liquid}. Moreover, the velocity time-autocorrelation function \cite{de1992dynamics}, viscosity \cite{smondyrev1995viscosities}, elastic constants \cite{allen1996molecular}, free energies and enthalpy \cite{del1996wetting}, isotropic-nematic transition \cite{de1991location}, liquid-vapor coexistence curve \cite{rull2017computer}, stress-tensor components \cite{sarman2015non}, and self-diffusion coefficient \cite{sarman2016self} have been studied for the $GB(3,5,2,1)$ model. This paper presents a study of the Gay-Berne model with parameters corresponding to calamitic, i.e., elongated rod-shaped, molecules at high temperatures where the model is shown to obey the symmetry of hidden scale invariance. According to this symmetry, the system is expected to have so-called isomorphs, which are curves in the thermodynamic phase diagram along which structure and dynamics are almost invariant when given in properly reduced units. A recent study of ours showed this for the more exotic discotic Gay-Berne model $GB(0.345,0.2,1,2)$ in the isotropic phase \cite{meh22}; the present paper demonstrates the existence of isomorphs in both the isotropic and the nematic phase of a more standard Gay-Berne model. 
The study given below nicely confirms the recent work by Liszka and co-workers \cite{lis22}, although that paper focused more on density scaling (a particular consequence of isomorph theory) than on demonstrating isomorph invariance of structure and dynamics. \section{Gay-Berne Potential} The GB potential between pairs of particles (``molecules''), $GB(\kappa, \kappa^{\prime}, \mu, \nu)$, is characterized by the following four dimensionless parameters that are all defined below: $\kappa\equiv \sigma_e/\sigma_s$ where $\sigma_e$ and $\sigma_s$ are lengths, $\kappa'\equiv\varepsilon_{ss}/\varepsilon_{ee}$ where $\varepsilon_{ss}$ and $\varepsilon_{ee}$ are energies, and $\mu$ and $\nu$ are exponents. The GB pair potential $v_{\rm GB}$, which is basically a direction-dependent LJ pair potential, is defined as follows \begin{subequations} \label{GB_pot} \begin{align} v_{\rm GB}(\textbf{r}_{ij},\hat{\textbf{e}}_i,\hat{\textbf{e}}_j) &= 4\varepsilon (\hat{\textbf{r}}, \hat{\textbf{e}}_i, \hat{\textbf{e}}_j) \left[\left(\sigma_s/\rho_{ij}\right)^{12} - \left(\sigma_s/\rho_{ij}\right)^{6} \right], \label{GB_pot_a} \\ \rho_{ij} &= r_{ij} - \sigma(\hat{\textbf{r}}, \hat{\textbf{e}}_i, \hat{\textbf{e}}_j) + \sigma_s\,. \label{GB_pot_b} \end{align} \end{subequations} Here, $r_{ij}$ is the distance between molecules $i$ and $j$, $\hat{\textbf{r}}\equiv\textbf{r}_{ij}/r_{ij}$ is the unit vector along the vector from molecule $i$ to molecule $j$ denoted by $\textbf{r}_{ij}$, and $\hat{\textbf{e}}_i$ and $\hat{\textbf{e}}_j$ are unit vectors along the major axis of molecules $i$ and $j$. 
The GB molecule is roughly an ellipsoid of two diameters $\sigma_s$ and $\sigma_e$, and one defines \begin{subequations} \label{GB_sigma} \begin{align} \sigma(\hat{\textbf{r}}, \hat{\textbf{e}}_i, \hat{\textbf{e}}_j) &= \sigma_s \bigg[1-\dfrac{\chi}{2} \bigg(\dfrac{(\hat{\textbf{e}}_i\cdot\hat{\textbf{r}}+\hat{\textbf{e}}_j\cdot\hat{\textbf{r}})^2}{1+\chi(\hat{\textbf{e}}_i\cdot\hat{\textbf{e}}_j)}+ \dfrac{(\hat{\textbf{e}}_i\cdot\hat{\textbf{r}}-\hat{\textbf{e}}_j\cdot\hat{\textbf{r}})^2}{1-\chi(\hat{\textbf{e}}_i\cdot\hat{\textbf{e}}_j)}\bigg) \bigg]^{-1/2} \label{GB_sigma_a}\\ \chi&=\dfrac{\kappa^2-1}{\kappa^2+1}\,. \label{GB_sigma_b} \end{align} \end{subequations} Physically, $\chi$ is a shape anisotropy parameter and $\kappa$ quantifies the molecular elongation. The case $\kappa=1$ ($\chi=0$) represents spherical molecules, the case $\kappa\rightarrow\infty$ ($\chi\rightarrow1$) corresponds to very long rods, and the case $\kappa\rightarrow 0$ ($\chi\rightarrow-1$) corresponds to very thin disks. 
The energy term is given as follows \begin{subequations} \label{GB_epsilon} \begin{align} \varepsilon(\hat{\textbf{r}},& \hat{\textbf{e}}_i, \hat{\textbf{e}}_j) = \varepsilon_0\, \left(\varepsilon_1(\hat{\textbf{e}}_i,\hat{\textbf{e}}_j)\right)^\nu \left(\varepsilon_2(\hat{\textbf{r}}, \hat{\textbf{e}}_i, \hat{\textbf{e}}_j)\right)^\mu \label{GB_epsilon_a}\\ \intertext{in which} \varepsilon_1(\hat{\textbf{e}}_i,\hat{\textbf{e}}_j)&=\big(1-\chi^2(\hat{\textbf{e}}_i\cdot\hat{\textbf{e}}_j)^2\big)^{-1/2} \label{GB_epsilon_b}\\ \varepsilon_2(\hat{\textbf{r}}, \hat{\textbf{e}}_i, \hat{\textbf{e}}_j)&= 1-\frac{\chi'}{2}\biggl(\dfrac{(\hat{\textbf{e}}_i\cdot\hat{\textbf{r}}+\hat{\textbf{e}}_j\cdot\hat{\textbf{r}})^2}{1+\chi'(\hat{\textbf{e}}_i\cdot\hat{\textbf{e}}_j)}+ \dfrac{(\hat{\textbf{e}}_i\cdot\hat{\textbf{r}}-\hat{\textbf{e}}_j\cdot\hat{\textbf{r}})^2}{1-\chi'(\hat{\textbf{e}}_i\cdot\hat{\textbf{e}}_j)}\biggr) \label{GB_epsilon_c}\\ \intertext{and the energy anisotropy parameter is given by} \chi'&=\frac{\kappa'^{1/\mu}-1}{\kappa'^{1/\mu}+1}\,. \end{align} \end{subequations} The energies $\varepsilon_{ss}$ and $\varepsilon_{ee}$ are the well depths of the potential in the side-side and end-end configurations, respectively. Henceforth, unless isomorph-theory reduced units are used (see \sect{sec:isom}), $\sigma_s$ defines the length unit used and $\varepsilon_0$ the energy unit. The density $\rho$ and the temperature $T$ are below always given in these units. The $GB(3,5,2,1)$ model was introduced in 1981 by Gay and Berne, inspired by the Gaussian overlap model of Berne and Pechukas \cite{berne1972gaussian,gay1981modification}. For realistic LCs the length-to-width ratio is at least $3$, leading to the choice of $\kappa=3$ by Gay and Berne \cite{gay1981modification}. 
To obtain the other parameters, the GB pair potential was compared to the case of a pair of linear molecules consisting of four LJ particles placed on a line such that the length-to-width ratio equals $3$. This results in $\kappa^{\prime}=5,~\mu=2$, and $\nu=1$ \cite{gay1981modification}. As mentioned, the $GB(3,5,2,1)$ model shows a rich phase behavior with isotropic, nematic, and smectic B phases \cite{adams1987gr,luckhurst1990computer,de1991liquid}. The model also exhibits the following phases: smectic A \cite{luckhurst1990computer}, tilted smectic B \cite{de1991liquid}, and rippled smectic B phases \cite{hashim1995computer}. In some cases, more involved versions of the GB potential have been investigated by introducing, e.g., dipolar forces \cite{satoh1996monte}, flexibility \cite{la1996rigid}, more complex shapes \cite{neal1997molecular}, or biaxial molecules \cite{cleaver1996extension}. Other sets of parameters have also been studied, and other properties have been examined, e.g., the effect of the $\nu$ exponent on the orientational order parameter \cite{mori2003brownian}, elastic constant for $GB(3,5,1,3)$ \cite{germano2002simultaneous}, diffusion coefficient in the smectic A phase of $GB(4.4,20,1,1)$ \cite{bates2004studies}, stability of the smectic phase, radial distribution function, orientational order parameter \cite{miguel2004stability}, and rotational viscosity coefficient \cite{satoh2006characteristic}. Satoh \textit{et al.} studied the effect of an external magnetic field on GB fluids \cite{satoh2006molecular}. The isotropic-nematic region has been explored for different values of $\kappa$ \cite{mcdonald2000surface,huang2014calculation}. Varying $\kappa^{\prime}$ while keeping the other parameters fixed at $\kappa=3,~\mu=2$ and $\nu=1$ has been investigated in detail, the liquid-vapor region has been analyzed \cite{de1991effect,de1996effect}, and so has the equation of state, structure, and diffusion coefficient \cite{he2009self}. 
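As a concrete illustration of \eq{GB_pot}, \eq{GB_sigma}, and \eq{GB_epsilon}, a minimal Python evaluation of the GB pair energy might look as follows. This is an illustrative sketch with our own function and variable names, not the simulation code used in this work:

```python
import numpy as np

def gb_potential(r_ij, e_i, e_j, kappa=3.0, kappa_p=5.0, mu=2.0, nu=1.0,
                 sigma_s=1.0, eps0=1.0):
    """Gay-Berne pair energy for center-center vector r_ij and unit axes e_i, e_j."""
    r = np.linalg.norm(r_ij)
    rhat = r_ij / r
    chi = (kappa**2 - 1.0) / (kappa**2 + 1.0)                      # shape anisotropy
    chi_p = (kappa_p**(1.0/mu) - 1.0) / (kappa_p**(1.0/mu) + 1.0)  # energy anisotropy

    a, b, c = e_i @ rhat, e_j @ rhat, e_i @ e_j

    def aniso(x):
        # common bracket appearing in both sigma and epsilon_2
        return ((a + b)**2 / (1.0 + x*c) + (a - b)**2 / (1.0 - x*c)) * x / 2.0

    sigma = sigma_s / np.sqrt(1.0 - aniso(chi))
    eps1 = 1.0 / np.sqrt(1.0 - chi**2 * c**2)
    eps2 = 1.0 - aniso(chi_p)
    eps = eps0 * eps1**nu * eps2**mu

    rho = r - sigma + sigma_s          # shifted distance of the LJ-like form
    sr6 = (sigma_s / rho)**6
    return 4.0 * eps * (sr6**2 - sr6)
```

A quick sanity check: for $\kappa=\kappa'=1$ the potential reduces to the plain LJ potential, and for $GB(3,5,2,1)$ the side-side well is exactly five times deeper than the end-end well, as dictated by $\kappa'=5$.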
For discotic GB fluids the phase diagram has been obtained for different $\kappa$ and $\kappa^{\prime}$ parameters \cite{akino2001molecular,caprion2003influence,yamamoto2005brownian,meh22}. The most studied discotic model is $GB(0.345,0.2,1,2)$ \cite{cienega2014phase}, and this incidentally led to an improvement of the viewing angle of liquid crystal displays \cite{bushby2011liquid}. In this work we study the $GB(3,5,2,1)$ model because, as already mentioned, its phase diagram, structure, and dynamics are known. Fig. \ref{fig:rod_conf} presents snapshots of the system at equilibrium in the isotropic, nematic, and smectic phases. There is no positional or orientational ordering in the isotropic phase. In the nematic phase there is no positional ordering, but some long-range orientational ordering. In the smectic phase the molecules form parallel layers with a robust orientational ordering within the layers. \begin{figure}[!h] \includegraphics[width=0.3\textwidth]{fig1a} \includegraphics[width=0.3\textwidth]{fig1b} \includegraphics[width=0.27\textwidth]{fig1c} \caption{Snapshots of the calamitic GB model $GB(3,5,2,1)$ at three state points. (a) shows the isotropic liquid phase at the state point $(\rho, T ) = (0.27, 1.2)$; (b) shows the nematic phase at $(\rho, T ) = (0.33, 1.2)$; (c) shows the smectic phase at $(\rho, T ) = (0.39, 1.2)$. } \label{fig:rod_conf} \end{figure} \section{R-simple systems and isomorphs}\label{sec:isom} Recalling that the virial $W$ quantifies the part of the pressure $p$ deriving from molecular interactions via the defining identity $pV=Nk_BT+W$ (in which $V$ is the volume and $N$ is the number of molecules) \cite{bailey2013statistical}, liquids and solids may be classified according to the correlation between the equilibrium fluctuations of the virial and the potential energy $U$ \cite{ingebrigtsen2012simple}. 
The so-called R-simple systems, which are those with strong correlations of this kind, are particularly simple because in this case the thermodynamic phase diagram is basically one-dimensional instead of two-dimensional in regard to structure and dynamics \cite{ingebrigtsen2012isomorphs,ingebrigtsen2012simple,dyre2014hidden,dyre2018perspective}. Isomorph theory dealing with R-simple systems was developed over the last decade \cite{bailey2008pressure, gnan2009pressure, veldhorst2014scaling, costigliola2016freezing}. The $WU$ Pearson correlation coefficient is defined by \begin{equation} R(\rho,T)=\frac{\langle \Delta W \Delta U \rangle}{\sqrt{\langle (\Delta W)^2 \rangle \langle (\Delta U)^2 \rangle}}\,. \label{pearson} \end{equation} Here the angular brackets denote $NVT$ ensemble averages, $\Delta$ is the deviation from the equilibrium mean value, and $\rho$ is the density. Many systems, including the LJ fluid, have strong $WU$ correlations in the liquid and solid phases, whereas $R(\rho,T)$ usually decreases significantly below the critical density \cite{bel19a}. A system is considered to be R-simple whenever $R>0.9$ at the state points in question. \begin{figure}[!h] \includegraphics[width=.5\textwidth]{fig2} \caption{Scatter plot of $WU$ correlations for the $GB(3,5,2,1)$ model at the state point $(\rho,T)=(0.33, 1.2)$. The system is here strongly correlating with $R=0.92$; the density-scaling exponent $\gamma$ is $8.22$.}\label{U_W} \end{figure} The density-scaling exponent $\gamma$, which is characterized by $\Delta W\cong\gamma \Delta U$, is found from linear-regression fits to a $WU$ scatter plot as shown in Fig.~\ref{U_W} using the defining equation \begin{equation} \gamma=\frac{\langle \Delta W \Delta U \rangle}{\langle (\Delta U)^2 \rangle}\,. \label{gamma} \end{equation} R-simple systems have curves in the phase diagram along which structure and dynamics are approximately invariant, and these curves are termed \textit{isomorphs}. 
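Given $NVT$ time series of $U$ and $W$, evaluating \eq{pearson} and \eq{gamma} is straightforward; a minimal Python sketch (our own naming, not the paper's analysis code):

```python
import numpy as np

def wu_correlation(U, W):
    """Pearson WU correlation coefficient R and density-scaling exponent
    gamma from time series of potential energy U and virial W sampled
    along an NVT trajectory."""
    dU = U - U.mean()
    dW = W - W.mean()
    R = (dW * dU).mean() / np.sqrt((dW**2).mean() * (dU**2).mean())
    gamma = (dW * dU).mean() / (dU**2).mean()
    return R, gamma
```

For perfectly correlated data with $\Delta W=\gamma\,\Delta U$ this returns $R=1$ and the slope $\gamma$ exactly; real simulation data give $R$ slightly below one.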
It is important to emphasize that isomorph invariance only applies when data are presented in so-called reduced units. In the system of reduced units, which in contrast to ordinary units are state-point dependent, the density $\rho \equiv N/V$ defines the length unit $l_0$, the temperature defines the energy unit $e_0$, and the density and thermal velocity define the time unit $t_0$: \begin{equation*} l_0=\rho^{-1/3},~~ e_0=k_{\rm B}T,~~ t_0=\rho^{-1/3}\sqrt{m/k_{\rm B}T}\,. \end{equation*} Here $m$ is the molecule mass. Quantities given in the isomorph-theory reduced units are marked with a tilde. Strong virial potential-energy correlations arise whenever the hidden-scale-invariance symmetry applies. This is the condition that the potential-energy ordering of same-density configurations is maintained under a uniform scaling of all coordinates \cite{schroder2014simplicity}, formally expressed as follows: \begin{equation}\label{HSI} U(\mathbf{R}_a)<U(\mathbf{R}_b)\Rightarrow U(\lambda \mathbf{R}_a)<U(\lambda \mathbf{R}_b) \end{equation} where $\lambda$ is a scaling factor. Consider two configurations with the same potential energy, i.e., $U(\mathbf{R}_a)=U(\mathbf{R}_b)$. After a uniform scaling one has by \eq{HSI} $U(\lambda \mathbf{R}_a)=U(\lambda \mathbf{R}_b)$. By taking the derivative of this with respect to $\lambda$ one easily derives $W(\mathbf{R}_a)=W(\mathbf{R}_b)$ \cite{schroder2014simplicity}; thus same potential energy implies same virial, i.e., 100\% correlation between $W$ and $U$. \Eq{HSI} only applies approximately for realistic systems, however, so in practice one observes strong but not perfect virial potential-energy correlations. It can be shown that \eq{HSI} implies that the reduced-unit structure and dynamics are invariant along the lines of constant excess entropy, the isomorphs \cite{schroder2014simplicity}. 
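\Eq{HSI} holds exactly for an inverse-power-law (IPL) pair potential $v(r)\propto r^{-n}$, because a uniform scaling of all coordinates then multiplies every pair energy by the same factor $\lambda^{-n}$; in that case $R=1$ and $\gamma=n/3$. The preserved energy ordering can be checked numerically with a few lines of Python (an illustrative construction of our own):

```python
import numpy as np

def ipl_energy(pos, n=12):
    """Total pair energy for an inverse-power-law potential u(r) = r**(-n)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)   # each pair counted once
    return np.sum(d[iu]**(-n))

rng = np.random.default_rng(1)
confs = [rng.random((10, 3)) + 0.1 for _ in range(20)]   # random same-density configurations
U = np.array([ipl_energy(c) for c in confs])
U_scaled = np.array([ipl_energy(0.9 * c) for c in confs])
# uniform scaling multiplies every pair energy by the common factor 0.9**(-12),
# so the potential-energy ordering is preserved exactly
assert np.array_equal(np.argsort(U), np.argsort(U_scaled))
```

For realistic potentials such as the GB model the scaling factor is only approximately common to all configurations, which is why $R<1$.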
Recall that a system's entropy $S$ can be expressed as that of an ideal gas plus a term deriving from the intermolecular interactions, $S=S_{\rm id}+S_{\rm ex}$. For an ideal gas one has $S_{\rm ex}=0$; for all other systems $S_{\rm ex}<0$ because these are less disordered than the ideal gas. Along an isomorph one has \begin{equation} dS_{\rm ex}= \left(\frac{\partial S_{\rm ex}}{\partial T}\right)_V dT + \left(\frac{\partial S_{\rm ex}}{\partial V}\right)_T dV = 0\,. \label{ds} \end{equation} Using Maxwell's volume-temperature relation for the configurational degrees of freedom, $\left( {\partial S_{\rm ex}}/{\partial V} \right)_T=\left( {\partial\left( W/V \right)}/{\partial T}\right)_V$, we can rewrite \eq{ds} as \begin{equation} \left( \frac{\partial S_{\rm ex}}{\partial T} \right)_V T\,d\ln T = \left( \frac{\partial W}{\partial T} \right)_V d\ln \rho\,. \end{equation} Using $dU=TdS_{\rm ex}-(W/V)dV$ leads to \begin{equation} \left( \frac{\partial U}{\partial T} \right)_V d\ln T = \left( \frac{\partial W}{\partial T} \right)_V d\ln \rho\,, \end{equation} which via the fluctuation relations $\left( {\partial W}/{\partial T} \right)_V=\langle \Delta W \Delta U \rangle/k_{\rm B}T^2$ and $\left( {\partial U}/{\partial T} \right)_V=\langle (\Delta U)^2 \rangle/k_{\rm B}T^2$ leads to the above \eq{gamma} for $\gamma$, \begin{equation} \gamma\equiv\left( \frac{\partial \ln T}{\partial \ln \rho} \right)_{S_{\rm ex}}=\frac{\langle \Delta W \Delta U \rangle}{\langle (\Delta U)^2 \rangle}\,. \label{iso_gamma} \end{equation} \Eq{iso_gamma} is completely general \cite{gnan2009pressure}. This equation is of particular interest, however, when the system has isomorphs because the equation can then be used for tracing out the isomorphs without knowing the equation of state, which is done as follows. At a given state point $(\rho_1,T_1)$ one first calculates $\gamma$ from the equilibrium fluctuations of the potential energy and virial. 
Then, by scaling the system to a slightly different density $\rho_2$ and numerically calculating $\left( {\partial \ln T}/{\partial \ln \rho} \right)_{S_{\rm ex}}$ from \eq{iso_gamma}, one predicts the temperature $T_2$ with the property that $(\rho_2,T_2)$ is on the same isomorph as $(\rho_1,T_1)$. In the simulations of this paper we used fourth-order Runge-Kutta integration to generate isomorphs \cite{att21}, using density step sizes of approximately 1\%. \section{Properties studied} We simulated a system of $1372$ particles. The pair potential was cut and shifted at $r_c=4.0$ and the time step was $\Delta t = 0.001$. Because of the shape anisotropy, the standard $NVT$ Nos\'{e}-Hoover algorithm for the center-of-mass motion was supplemented by the IMP algorithm for the rotational motion \cite{fincham1984more}. Different thermostats were applied for translational and rotational motion (we eventually concluded that using a single thermostat did not result in any noticeable differences, however). The molecular moment of inertia was set to $I=1$. At each simulated state point, 20 million time steps were taken to equilibrate the system before the production runs, each of which involved 67 million time steps. As a consistency check of our GB implementation, we compared simulation results with those of the literature and found good agreement in all cases. The quantities evaluated in these studies, which are all defined below, were the following (where we also list the reference(s) to which data were compared): the radial distribution function $g(r)$ \cite{bates1999computer,de1991liquid}, the radial distribution orientational correlation function $G_2(r)$ \cite{de1991liquid,de1991location,adams1987computer}, the $S_2$ orientational order parameter \cite{de1991liquid,de1991location,mendez2019equation}, and various time-autocorrelation functions \cite{de1992dynamics,jose2006multiple}. An order parameter is a physical quantity that distinguishes between two phases. 
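Before turning to the order parameters, we note that the Runge-Kutta isomorph tracing described at the end of the previous section amounts to integrating $d\ln T/d\ln\rho=\gamma(\rho,T)$. A minimal Python sketch of this step (our own construction; \texttt{gamma\_of} is a stand-in for the exponent obtained from the $WU$ fluctuations of a short simulation at each state point):

```python
import numpy as np

def trace_isomorph(rho0, T0, gamma_of, drho_frac=0.01, steps=30):
    """Trace an isomorph by RK4 integration of dlnT/dlnrho = gamma(rho, T),
    starting from the reference state point (rho0, T0)."""
    lnr, lnT = np.log(rho0), np.log(T0)
    h = np.log(1.0 + drho_frac)          # ~1% density steps
    points = [(rho0, T0)]
    for _ in range(steps):
        k1 = gamma_of(np.exp(lnr),       np.exp(lnT))
        k2 = gamma_of(np.exp(lnr + h/2), np.exp(lnT + h*k1/2))
        k3 = gamma_of(np.exp(lnr + h/2), np.exp(lnT + h*k2/2))
        k4 = gamma_of(np.exp(lnr + h),   np.exp(lnT + h*k3))
        lnT += h * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        lnr += h
        points.append((np.exp(lnr), np.exp(lnT)))
    return points
```

For a constant $\gamma$ the procedure reproduces the power law $T\propto\rho^{\gamma}$ exactly, which serves as a convenient consistency check.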
We proceed to define the second-rank orientational order parameter $S_2$ that quantifies how much the molecular orientations vary throughout the system \cite{allen2017computer}. For a uniaxial phase, $S_2$ is defined as the following sum over all molecules \begin{equation} S_2 = \left\langle \frac{1}{N} \sum_i P_2(\hat{\mathbf{e}}_i\cdot \hat{\mathbf{e}}_d) \right\rangle\,. \label{eq:S2_1} \end{equation} Here $P_2$ is the second-order Legendre polynomial, $\hat{\mathbf{e}}_d$ is the director of the phase, and the angular brackets denote a time or ensemble average. This quantity takes values between 0 and 1; for a perfectly aligned system $S_2=1$ whereas $S_2=0$ implies an isotropic system. In a simulation $\hat{\mathbf{e}}_d$ is unknown. Here the order parameter can be evaluated by maximizing $S_2$ with respect to $\hat{\mathbf{e}}_d$, which is done by rewriting \eq{eq:S2_1} as follows \cite{eppenga1984monte,allen2017computer} \begin{equation} S_2=\langle \hat{\mathbf{e}}_d \cdot \mathbf{Q} \cdot \hat{\mathbf{e}}_d \rangle\, \label{eq:S2_2}\,. \end{equation} If $\otimes$ denotes a tensor product and $\mathbf{I}$ is the identity matrix, $\mathbf{Q}$ is defined by \begin{equation} \mathbf{Q}= \frac{1}{2N}\sum_i (3\hat{\mathbf{e}}_i \otimes \hat{\mathbf{e}}_i -\mathbf{I})\,. \end{equation} It can be shown that $S_2$ is the largest eigenvalue, $\lambda_{\rm max}$, of the $\mathbf{Q}$ tensor. \begin{figure}[!h] \includegraphics[width=0.5\textwidth]{fig3a} \includegraphics[width=0.5\textwidth]{fig3b} \caption{Density-temperature phase diagram of the $GB(3,5,2,1)$ model with (a) showing the virial potential-energy correlation coefficient $R$ (\eq{pearson}) and (b) showing the second-order orientational order parameter $S_2$ (\eq{eq:S2_2}). Dark green triangles connected by dark green dashed lines delimit the phase boundaries \cite{de2002global}. $I$ stands for the isotropic, $N$ for the nematic, and $S$ for the smectic phase. 
The solid yellow curves are isomorphs not investigated further here (results for these are presented in Ref. \onlinecite{Saeed_thesis}), the two black curves are two isomorphs for which results are reported below, one in the isotropic phase and one in the nematic phase. The light blue triangles mark the isomorph that continues the I-N phase boundary numerical data of Ref. \onlinecite{de2002global}. The isomorphs were determined by numerical integration of \eq{ds} starting from the following reference state points marked by the red filled circles: $(\rho_{\rm ref},T_{\rm ref})=(0.25,1.2)$, $(\rho_{\rm ref},T_{\rm ref})=(0.27,1.2)$, $(\rho_{\rm ref},T_{\rm ref})=(0.30,1.2)$, $(\rho_{\rm ref},T_{\rm ref})=(0.32,1.2)$, $(\rho_{\rm ref},T_{\rm ref})=(0.33,1.1)$, $(\rho_{\rm ref},T_{\rm ref})=(0.33,1.2)$, and $(\rho_{\rm ref},T_{\rm ref})=(0.35,1.2)$.} \label{fig:heatmap} \end{figure} \Fig{fig:heatmap} shows a ``heat-map'' phase diagram of the $GB(3,5,2,1)$ model with respect to the virial potential-energy correlation coefficient $R$ and the orientational order parameter $S_2$. By definition, the regions with $R>0.9$ are R-simple; this is where one expects isomorph theory to apply. This is not a sharp distinction, however, and many systems with $R$ between 0.8 and 0.9 have also been found to have good isomorphs. In Fig. \ref{fig:heatmap} $I$ stands for the isotropic, $N$ for the nematic, and $S$ for the smectic phase; the regions in-between are those of coexisting phases. The dark green triangles marking the phase boundaries were extracted from Ref. \onlinecite{de2002global}. Selected isomorphs are marked as solid yellow lines, each of which starts from a ``reference state point'' marked as a red full circle. The main paper presents results for two isomorphs (black): one in the isotropic phase and one in the nematic phase. Results for the remaining five isomorphs are reported in Ref. \onlinecite{Saeed_thesis}. 
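The $S_2$ values of the heat map can be computed from the largest eigenvalue of the $\mathbf{Q}$ tensor as described above; in Python this is only a few lines (a sketch with our own naming, not the paper's analysis code):

```python
import numpy as np

def order_parameter(e):
    """S2 as the largest eigenvalue of the Q tensor built from the
    N x 3 array e of molecular unit vectors."""
    N = len(e)
    Q = (3.0 * e.T @ e / N - np.eye(3)) / 2.0   # (1/2N) sum_i (3 e_i x e_i - I)
    return np.linalg.eigvalsh(Q).max()
```

A perfectly aligned configuration gives $S_2=1$, while isotropically distributed axes give $S_2\to 0$ for large $N$ (with a finite-size floor of order $N^{-1/2}$).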
We conclude from \fig{fig:heatmap}(a) that there are strong correlations whenever the temperature is above unity (roughly). The isomorph reference state points were selected to obey $R>0.9$; from here on $R$ increases when density and temperature are increased along each isomorph. The Appendix gives details of the two isomorphs studied by listing for each isomorph several state points and the corresponding values of $R$ and $\gamma$. Note that invariance of the physics along the isomorphs -- the main focus of this paper -- is manifested already in \fig{fig:heatmap}(b) as regards the orientational ordering because $S_2$ is clearly approximately isomorph invariant. In particular, the phase boundary approximately follows an isomorph \cite{IV,ped16}. Based on this, the light blue triangles mark the expected isotropic-nematic phase boundary. This gives an example of how isomorph theory may be used for estimating the phase boundary by allowing one to go beyond the numerical phase-boundary data of Ref. \onlinecite{de2002global} without having to perform extensive additional simulations. Having established the phase diagram of the $GB(3,5,2,1)$ model, we next define the quantities studied. We probed the system's dynamics at the different state points by calculating the mean-square displacement (MSD) as a function of time, as well as four time-autocorrelation functions defined by \begin{equation} \phi_A(t)=\langle \mathbf{A}(t_0) \cdot \mathbf{A}(t_0+t) \rangle. \label{autocorr} \end{equation} Here $\mathbf{A}(t)$ is a vector or scalar molecular property and the angular brackets denote an ensemble and particle average. We evaluate below \eq{autocorr} from simulations for $\mathbf{A}$ equal to velocity, angular velocity, force, and torque. 
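A direct estimator of \eq{autocorr}, averaging over time origins and molecules, can be sketched in Python as follows (our own naming; illustrative only):

```python
import numpy as np

def autocorrelation(A, max_lag):
    """Estimate phi_A(t) = <A(t0).A(t0+t)> from a time series A of shape
    (n_frames, n_molecules, dim), averaging over time origins and molecules."""
    n = len(A)
    return np.array([np.mean(np.sum(A[:n - t] * A[t:], axis=-1))
                     for t in range(max_lag)])
```

In practice one would normalize by $\phi_A(0)$ and, for long trajectories, use an FFT-based estimator instead of this $O(n\,\mathrm{max\_lag})$ loop.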
We also study the first- and second-order orientational order parameter time-correlation function defined by \begin{equation} \phi_l(t)= \langle P_l(\hat{\mathbf{e}}_i(t_0) \cdot \hat{\mathbf{e}}_i(t_0+t)) \rangle\,, \label{reor} \end{equation} in which $P_l$ is a Legendre polynomial ($l=1~\text{and}~2$). To quantify the structure we measured the standard radial distribution function, $g(r)$, as well as the radial-distribution orientational correlation function defined by \begin{equation} G_2(r) \equiv \langle P_2(\hat{\mathbf{e}}_i \cdot \hat{\mathbf{e}}_j) \rangle\, \label{G2} \end{equation} where the brackets imply an average over all pairs of molecules $i$ and $j$ that are the distance $r$ apart. \section{Results} This section investigates to which degree the reduced-unit structure and dynamics are invariant along two isomorphs. We present data for one isomorph in the isotropic phase and one in the nematic phase (the black lines in \fig{fig:heatmap}). In realistic models isomorph invariance is only approximate, so in order to put the findings into perspective we compare the results for each isomorph with results for the isochore defined by the reference-state-point density (red points in \fig{fig:heatmap}) with the same temperature variation as that of the isomorph. For the isotropic-phase isomorph the reference state point is $(\rho_{\rm ref},T)=(0.27,1.2)$; here we cover a density variation of about $40\%$ and temperatures in the range $1.2<T<27$. For the nematic-phase isomorph the reference state point is $(\rho_{\rm ref},T)=(0.33,1.2)$; here the density varies by about $35\%$ and temperatures are in the range $1.2<T<16$. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{fig4a} \includegraphics[width=0.45\textwidth]{fig4b} \caption{\label{fig:r027_T12msd} Reduced mean-square displacement as a function of the reduced time $\tilde{t}$ along the isochore and the isomorph in the isotropic phase (left) and in the nematic phase (right). 
In both cases, the data collapse to a good approximation along the isomorph but not along the isochore. } \end{figure} Fig. \ref{fig:r027_T12msd} provides data for the reduced-unit MSD along the isochore and the isomorph in the isotropic and nematic phases, respectively. The level of invariance in the center-of-mass dynamics is clearly higher along the isomorph than along the isochore. At long times, the MSD is proportional to time and the reduced diffusion coefficients may be extracted from these data. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig5} \caption{Reduced diffusion coefficient as a function of temperature along the isochores (upper panel) and the isomorphs (lower panel). The approximate invariance in the latter case is clearly visible.} \label{fig:D} \end{figure} \Fig{fig:D} shows the reduced diffusion coefficient as a function of temperature along the isomorphs and isochores -- approximate isomorph invariance is again clearly visible. \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{fig6a} \includegraphics[width=0.45\textwidth]{fig6b} \includegraphics[width=0.45\textwidth]{fig6c} \includegraphics[width=0.45\textwidth]{fig6d} \caption{The upper figures show the reduced-unit time-autocorrelation functions of the velocity $v$ and the angular velocity $\omega$ along the isochore and the isomorph in the isotropic (left) and the nematic phases (right). The lower figures show the analogous results for the reduced-unit time-autocorrelation functions of the force $f$ and the torque $\tau$. The color codes are the same as in \fig{fig:r027_T12msd}. Good isomorph invariance is generally observed except for significant short-time deviations for the two rotational autocorrelation functions (see the text).} \label{fig:r027_T12_VACF_AVACF} \end{figure} Next we show data for four time-autocorrelation functions. 
\Fig{fig:r027_T12_VACF_AVACF} gives in the two upper figures the velocity ($v$) and angular velocity ($\omega$) time-autocorrelation functions while the two lower figures give the force and torque time-autocorrelation functions. As in \fig{fig:r027_T12msd}, all functions are given in reduced units and as functions of the reduced time $\tilde{t}$. Overall, we see in both the isotropic and the nematic phases good isomorph invariance and a sizable variation along the corresponding isochore. The short-time angular velocity and torque time-autocorrelation functions violate isomorph invariance significantly, however. This is due to the fact that the moment of inertia in the simulations was kept fixed, implying that this quantity is not constant in reduced units. As a consequence, the short-time ballistic motion is not isomorph invariant. At intermediate and long times, we do find good isomorph invariance also for the rotational time-autocorrelation functions; here the moment of inertia plays much less of a role for the dynamics, which for a given molecule is dominated by interactions with the surrounding GB molecules. A weaker, but still clearly visible violation of isomorph invariance occurs at short times for the force autocorrelation function. In our understanding, this reflects the fact that the density-scaling exponent $\gamma$ changes with density along an isomorph, resulting in a changing effective inverse-power-law interaction. The lowest densities have the largest $\gamma$ (see the Appendix), leading to the highest average force squared coming from collisions. The collapse of the isochore rotational autocorrelation functions at short times is a consequence of the definition of reduced units, just as the short-time reduced-unit MSD collapse is. 
\begin{figure}[!h] \includegraphics[width=0.45\textwidth]{fig7a} \includegraphics[width=0.45\textwidth]{fig7b} \caption{The left figures show the first- and second-order orientational order parameter time-autocorrelation functions in the isotropic phase; the right figures show the same in the nematic phase. Good isomorph invariance is observed in both phases.} \label{fig:r027_T12_OACF1_OACF2} \end{figure} Fig. \ref{fig:r027_T12_OACF1_OACF2} shows data for the first- and second-order orientational time-autocorrelation functions, again plotted as functions of the reduced time. In the isotropic phase these functions go to zero at long times, confirming that there are no preferred orientations. This is not the case, of course, in the nematic phase. In both phases, however, we observe good isomorph invariance. According to the phase diagram (\fig{fig:heatmap}), the isochore defined from the nematic isomorph reference state point enters the isotropic phase at high temperatures; this is reflected in the figures by the fact that both time-autocorrelation functions converge to zero as the temperature increases. \begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{fig8a} \includegraphics[width=0.45\textwidth]{fig8b} \caption{Reduced-unit radial distribution functions (upper figures) and radial-distribution orientational correlation function (\eq{G2}, lower figures) along the isochore and the isomorph in the isotropic (left) and nematic phases (right). Good isomorph invariance is observed in both phases, with some deviation at the first peak (see the text).} \label{fig:r027_T12_rdf_G2} \end{figure} So far we have discussed different dynamic signals and seen good isomorph invariance. The isomorph theory also predicts that the reduced-unit structure should be invariant. This is tested in \fig{fig:r027_T12_rdf_G2}, in which the upper panels show the center-of-mass radial distribution function along the isochores and isomorphs.
The data are close to invariant along the isomorph, though with visible deviations around the first peak. This is often observed when isomorph theory is tested over a large density range; it reflects a non-invariance arising when the density-scaling exponent $\gamma$ varies along the isomorph similar to the non-invariance of the short-time force time-autocorrelation function discussed above. As in the latter case, a large $\gamma$ implies a large effective inverse-power-law exponent, which decreases the probability of near contacts and is ``compensated'' by a higher peak in order to arrive at the same coordination number (defined by integration of the radial distribution function over its first peak). This phenomenon was recently rationalized in terms of isomorph invariance of the so-called bridge function of liquid-state theory \cite{cas21}. Confirming this interpretation, the highest first peaks along the isomorphs correspond to the lowest temperatures where $\gamma$ is largest (Appendix). Data for the orientational structure quantified in terms of $G_2(r)$ are given in the lower panels of \fig{fig:r027_T12_rdf_G2}. In the nematic phase, this function does not converge to zero at long times as in the isotropic phase. Except for the first peak in the isotropic phase, there is good isomorph invariance of the structure. Note that the tail of $G_2(r)$ goes to zero as the temperature is increased along the isochore in the nematic phase. This reflects a phase transition into the isotropic phase. In contrast, the tail remains invariant as we increase the temperature along the isomorph. 
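The coordination number invoked above is obtained by integrating the radial distribution function over its first peak, $n_c = 4\pi\rho\int_0^{r_{\rm min}} g(r)\,r^2\,dr$. A schematic numerical version on a toy $g(r)$ (a single Gaussian peak; not the simulation data):

```python
import numpy as np

# Schematic coordination-number computation: n_c = 4*pi*rho * integral of
# g(r) r^2 dr out to the first minimum r_min after the peak. The g(r)
# below is a toy model, not data from the GB simulations.
rho = 0.3                                        # reduced number density
r = np.linspace(0.01, 3.0, 1000)
dr = r[1] - r[0]
g = 1.0 + 2.0 * np.exp(-((r - 1.0) / 0.1) ** 2)  # toy g(r) peaked at r = 1

r_min = 1.5                                      # cutoff at the first minimum
mask = r <= r_min
n_c = 4.0 * np.pi * rho * np.sum(g[mask] * r[mask] ** 2) * dr
print(f"coordination number ~ {n_c:.2f}")
```

A higher, narrower first peak can thus yield the same $n_c$ as a lower, broader one, which is the compensation mechanism described in the text.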
\section{Conclusions} When given in reduced units, the dynamic and structural properties of the $GB(3,5,2,1)$ model are invariant to a good approximation along isomorphs in both the isotropic and the nematic phases, with some deviations at short times for orientational time-autocorrelation functions reflecting that the moment of inertia was assumed to be constant and thus not isomorph invariant in reduced units. In contrast, structure and dynamics are not invariant along isochores with the same temperature variation. Overall, our findings confirm isomorph-theory predictions and are consistent with the fact that the calamitic $GB(3,5,2,1)$ model obeys hidden scale invariance at high temperatures in both phases, i.e., has a virial potential-energy correlation coefficient above 0.9. For future work, it would be interesting to investigate the smectic phase of the model, for which, based on \fig{fig:heatmap}, we expect good isomorphs even at temperatures below unity. \begin{acknowledgments} This work was supported by the VILLUM Foundation's \textit{Matter} grant (16515). \end{acknowledgments} \newpage \section*{Appendix: Isomorph state points} This Appendix provides details of the two isomorphs studied, giving for each of these at several state points: density, temperature, virial potential-energy correlation coefficient $R$ (\eq{pearson}), and density-scaling exponent $\gamma$ (\eq{ds}).
\setlength{\tabcolsep}{30pt} \renewcommand{\arraystretch}{0.3} \begin{table}[!htbp] \centering \resizebox{0.8\textwidth}{!}{% \centering \begin{tabular}{c c c c} \hline $\rho$ & $T$ & $R$ & $\gamma$ \\ [0.1ex] \hline \hline 0.2700 & 1.2000 & 0.9077 & 8.4553\\ 0.2727 & 1.3057 & 0.9158 & 8.5107\\ 0.2754 & 1.4215 & 0.9231 & 8.5535\\ 0.2782 & 1.5480 & 0.9285 & 8.5737\\ 0.2810 & 1.6859 & 0.9330 & 8.5795\\ 0.2838 & 1.8361 & 0.9367 & 8.5756\\ 0.2866 & 1.9995 & 0.9396 & 8.5615\\ 0.2895 & 2.1770 & 0.9416 & 8.5383\\ 0.2924 & 2.3697 & 0.9433 & 8.5126\\ 0.2953 & 2.5788 & 0.9445 & 8.4820\\ 0.2982 & 2.8056 & 0.9457 & 8.4555\\ 0.3012 & 3.0514 & 0.9461 & 8.4252\\ 0.3042 & 3.3177 & 0.9467 & 8.3962\\ 0.3073 & 3.6062 & 0.9469 & 8.3646\\ 0.3104 & 3.9186 & 0.9468 & 8.3364\\ 0.3135 & 4.2570 & 0.9469 & 8.3084\\ 0.3166 & 4.6231 & 0.9465 & 8.2815\\ 0.3198 & 5.0194 & 0.9463 & 8.2542\\ 0.3230 & 5.4483 & 0.9460 & 8.2276\\ 0.3262 & 5.9125 & 0.9456 & 8.2032\\ 0.3295 & 6.4148 & 0.9450 & 8.1802\\ 0.3327 & 6.9583 & 0.9446 & 8.1647\\ 0.3361 & 7.5463 & 0.9442 & 8.1479\\ 0.3394 & 8.1827 & 0.9436 & 8.1289\\ 0.3428 & 8.8712 & 0.9428 & 8.1121\\ 0.3463 & 9.6164 & 0.9423 & 8.0987\\ 0.3497 & 10.4226 & 0.9416 & 8.0867\\ 0.3532 & 11.2953 & 0.9410 & 8.0735\\ 0.3567 & 12.2396 & 0.9403 & 8.0628\\ 0.3603 & 13.2616 & 0.9396 & 8.0562\\ 0.3639 & 14.3680 & 0.9390 & 8.0462\\ 0.3676 & 15.5657 & 0.9383 & 8.0443\\ 0.3712 & 16.8622 & 0.9376 & 8.0396\\ 0.3749 & 18.2660 & 0.9370 & 8.0345\\ 0.3787 & 19.7859 & 0.9362 & 8.0315\\ 0.3825 & 21.4318 & 0.9356 & 8.0285\\ 0.3863 & 23.2145 & 0.9350 & 8.0295\\ 0.3902 & 25.1457 & 0.9342 & 8.0295\\ 0.3941 & 27.2379 & 0.9335 & 8.0343\\\\ \hline \end{tabular} } \caption{Variation of density, temperature, correlation coefficient $R$, and density-scaling exponent $\gamma$ for the state points on the isotropic-phase isomorph with reference state point $(\rho_{\rm ref},T_{\rm ref})=(0.27,1.2)$. 
At each step we increased density by 1\% up to 40\% overall.} \label{table:r0.27_T1.2} \end{table} \setlength{\tabcolsep}{30pt} \renewcommand{\arraystretch}{0.3} \begin{table}[!htbp] \centering \resizebox{0.8\textwidth}{!}{% \centering \begin{tabular}{c c c c} \hline $\rho$ & $T$ & $R$ & $\gamma$ \\ [0.1ex] \hline \hline 0.3300 & 1.2000 & 0.9169 & 8.2217\\ 0.3333 & 1.3027 & 0.9230 & 8.2809\\ 0.3366 & 1.4149 & 0.9274 & 8.3187\\ 0.3400 & 1.5371 & 0.9315 & 8.3349\\ 0.3434 & 1.6700 & 0.9342 & 8.3301\\ 0.3468 & 1.8142 & 0.9359 & 8.3183\\ 0.3503 & 1.9705 & 0.9375 & 8.2957\\ 0.3538 & 2.1398 & 0.9385 & 8.2664\\ 0.3573 & 2.3229 & 0.9389 & 8.2333\\ 0.3609 & 2.5209 & 0.9393 & 8.2066\\ 0.3645 & 2.7349 & 0.9394 & 8.1697\\ 0.3682 & 2.9660 & 0.9392 & 8.1395\\ 0.3719 & 3.2156 & 0.9393 & 8.1057\\ 0.3756 & 3.4851 & 0.9387 & 8.0731\\ 0.3793 & 3.7759 & 0.9384 & 8.0433\\ 0.3831 & 4.0898 & 0.9375 & 8.0058\\ 0.3870 & 4.4283 & 0.9367 & 7.9788\\ 0.3908 & 4.7934 & 0.9362 & 7.9487\\ 0.3947 & 5.1873 & 0.9354 & 7.9212\\ 0.3987 & 5.6119 & 0.9345 & 7.8922\\ 0.4027 & 6.0697 & 0.9337 & 7.8677\\ 0.4067 & 6.5632 & 0.9321 & 7.8438\\ 0.4108 & 7.0953 & 0.9315 & 7.8229\\ 0.4149 & 7.6689 & 0.9306 & 7.7996\\ 0.4190 & 8.2870 & 0.9296 & 7.7791\\ 0.4232 & 8.9533 & 0.9286 & 7.7620\\ 0.4274 & 9.6715 & 0.9277 & 7.7464\\ 0.4317 & 10.4455 & 0.9264 & 7.7286\\ 0.4360 & 11.2797 & 0.9258 & 7.7155\\ 0.4404 & 12.1786 & 0.9244 & 7.6942\\ 0.4448 & 13.1473 & 0.9232 & 7.6844\\ 0.4492 & 14.1912 & 0.9223 & 7.6712\\ 0.4537 & 15.3160 & 0.9209 & 7.6535\\ 0.4583 & 16.5277 & 0.9197 & 7.6442\\ \\ \hline \end{tabular} } \caption{Variation of density, temperature, correlation coefficient $R$, and density-scaling exponent $\gamma$ for the state points on the nematic-phase isomorph with reference state point $(\rho_{\rm ref},T_{\rm ref})=(0.33,1.2)$. } \label{table:r0.33_T1.2} \end{table}
\section{Introduction} The Swampland Program has received a lot of attention over the last few years. Its importance relies on the establishment of some criteria to separate effective quantum field theories $-$considered as consistent with Quantum Gravity, a.k.a. String Theory$-$ from those which are not. The program focuses on different proposals, commonly referred to as Conjectures, which appear to rule out some of the string model engineering constructions presented so far in the literature. Some of those conjectures are involved in our work, such as the instability of non-SUSY Anti de Sitter (AdS) vacua, the AdS scale separation and the Refined de Sitter conjecture, which in turn seem to be interconnected (\cite{Palti:2019pca, vanBeest:2021lhn, Grana:2021zvf}). \\ The refined dS conjecture establishes that the minima of the scalar potential coming from the dimensional reduction of the low energy theory in string theory have to be AdS; otherwise they are tachyonic or not consistent with Quantum Gravity, at least in the asymptotic regions of moduli space \cite{Ooguri:2006in, Garg:2018reu, Ooguri:2018wrx,Lust:2019zwm, Apers:2022zjx}. Even more restrictive, the AdS conjecture establishes that the scale of the lightest moduli is not parametrically separated from the AdS scale, and thus any attempt to uplift an AdS to a dS vacuum shall result in their destabilization\footnote{See \cite{Bena:2018fqc} for the case in the deformed conifold.}. \\ Recently it has been argued that the use of non-BPS states, classified by K-theory, may be an interesting corner from which to evade these restrictions \cite{Blumenhagen:2019kqm,Damian:2019bkb}, although the total K-theory charge must cancel, as pointed out in \cite{Uranga:2000xp} and related to the cobordism conjecture in \cite{Blumenhagen:2021nmi}.
The use of non-BPS states, typically constructed from a pair of stable branes and anti-branes in the presence of an orientifold plane, emulates the role played by non-perturbative contributions in KKLT scenarios by breaking the no-scale structure of the $\mathcal{N}=1$ superpotential and providing a nice mechanism to stabilize all the moduli. However, their inclusion does not suffice to guarantee the presence of apparently stable dS vacua; contributions to the effective scalar potential coming from the RR 5-form are also necessary.\\ We are interested in two main aspects. First, in constructing a (meta)-stable dS vacuum by identifying the minimal set of ingredients the effective scalar potential must possess, in the spirit of \cite{Hertzberg:2007wc, Shiu:2011zt}, and also in finding possible compactification scenarios where such conditions might be present. Second, in case we can construct a classically stable dS vacuum, we want to look for possible sources of instabilities which in turn can be taken as evidence (or not) of the realization of the above-referred Swampland Conjectures.\\ In this work we consider a compactification on a K\"ahler manifold admitting torsion, upon which there is a contribution of the torsional part of $F_5$ to the scalar potential allowing us to find AdS vacua (but not dS). For that to happen it is necessary that the number of orientifold fixed points be greater than the number of $D3$-branes, such that their combined contribution to the tadpole is negative, i.e., $\mathcal{N}_3<0$.
Under this context it is then possible to wrap $D5$-branes on torsional 2-cycles, which we claim are precisely non-BPS $\hat{D5}$ states and which contribute a positive amount of energy that uplifts the AdS vacuum to a dS one.\\ As in the case of the AdS vacua, the realization of dS minima requires some extra conditions, namely that there are fluxes in the RR and NS-NS sectors supported on more than two 3-cycles and that the number of orientifold 3-planes has a lower bound given by \begin{equation} \mathcal{N}_{O3}>4\frac{ (A_{H_3} A_{F_3})^{1/2}}{A_3}+2\mathcal{N}_{\text{flux}},\nonumber \end{equation} where $A_{H_3}, A_{F_3}$ and $A_3$ are the contributions $-$upon dimensional reduction$-$ of 3-form fluxes and 3-dimensional sources as $D3$-branes and $O3^-$-planes, while $\mathcal{N}_{\text{flux}}$ is the usual flux number entering into the $D3$-brane charge tadpole contribution.\\ These conditions were inferred after implementing a Machine Learning (ML) algorithm specifically designed to look for dS vacua. The use of ML algorithms and tools has proved to be a prolific (and more systematic) way to explore the vacua in string theory compactifications (see for instance \cite{He:2018jtw, Ashmore:2019wzb, Parr:2019bta, Bao:2020sqg, Halverson:2020opj, Gal:2020dyc, Erbin:2020tks, CaboBizet:2020cse, Cole:2021nnt, He:2022cpz}). For that we implemented a hybrid algorithm to explore the minima of a scalar potential of the form\footnote{Other sources are considered in the Appendix \ref{ML}, such as fluxes, branes and negative curvature. More exotic fluxes, such as non-geometric ones, have been considered in the literature (see \cite{Plauschinn:2018wbo} and references therein).
} \begin{equation} \mathcal{V}_{\text{eff}} = \mathcal{V}_{\text{eff}}( {H}_3, {F}_3, {F}_5, \widehat{D5})\nonumber \end{equation} subject to the constraints of 1) having a positive value at the minimum, 2) zero value of its derivative with respect to each of the moduli, 3) positive definiteness of the mass matrix, and 4) positivity of the contribution of the $\widehat{D5}$-brane. In the context of ML, these restrictions can be implemented through an objective function written as \begin{equation} \text{Error} = \sum_{i=1}^N \alpha_i \text{error}_i\nonumber \end{equation} where each error$_i$ contribution accounts for one of the restrictions mentioned above, with the real-valued weights $\alpha_i$ setting the relative importance of each contribution. In the present work we employ a hybrid algorithm combining Simulated Annealing (SA) with Gradient Descent (GD). The SA algorithm is a heuristic method for solving optimization problems which, inspired by the annealing procedure of metalworking, looks for an approximate solution to the optimization problem. The GD algorithm, on the other hand, is a first-order iterative optimization algorithm designed to find local minima provided that the first derivative is known. Thus, in a first step, the SA algorithm looks for interesting points of the error function, whereas the GD improves the solution, guaranteeing a vanishing first derivative of the scalar potential. We describe these algorithms in detail in Appendix \ref{ML}.\\ Our work is then organized as follows: In section 2 we present the most usual conditions for a type IIB compactification and specify the notation we use throughout the paper. In section 3 we show that it is possible to construct AdS vacua by compactification of type IIB string theory on a K\"ahler manifold with torsion, such that the RR five-form has a torsional contribution to the effective scalar potential.
For that we implement a ML algorithm through an error function which allows us to easily find a large number of stable and unstable vacua. In this case we report 389 different AdS vacua whose existence relies upon the requirement that the number of $D3$-branes be less than one-half of the number of orientifold $O3^-$-planes. However, no dS vacua were found under these conditions. In section 4, once we take a compactification on a manifold with torsion, we also consider $D5$-branes wrapping torsional 2-cycles while fulfilling the aforementioned conditions on fluxes and the orientifold bound. Extra assumptions were made, such as the absence of torsional components in all 3-form fluxes. For this case, we report over 170 different stable dS vacua. In section 5 we discuss the conditions upon which the AdS vacua can be lifted to dS ones and comment on the implications with respect to the Swampland Conjectures. In section 6 we present our conclusions, while in the Appendix we describe some useful technical information in relation with the Machine Learning algorithm implemented in our search, particularly the incorporation of the two algorithms mentioned above: the Simulated Annealing and the Gradient Descent.\\ \section{Contribution to the scalar potential} Let us review the standard dimensional reduction procedure to construct the effective scalar potential. Consider the type IIB superstring compactified on a manifold $\mathbb{X}_6$ in the presence of 3-form fluxes and 3-dimensional local sources. We are not including 7-branes or orientifold 7-planes.
As usual, the action for the massless modes in the string frame is \begin{equation} S_{IIB}= S_{G}+S_\phi+S_{G_3}+S_{F_5}+S_{CS}+S_{\text{loc}}, \end{equation} with \begin{eqnarray} S_{G}&=&\frac{1}{2\kappa^2_{10}}\int d^{10}x\sqrt{-G}~e^{-2\phi}R,\\ S_{\phi}&=&\frac{1}{2\kappa^2_{10}}\int d^{10}x\sqrt{-G}\left[e^{-2\phi}\left(4(\nabla\phi)^2\right)-\frac{|F_1|^2}{2}\right],\\ S_{G_3}&=-&\frac{1}{4\kappa^2_{10}}\int d^{10}x\sqrt{-G} \left(e^{-2\phi}|H_3|^2-|\hat{F}_3|^2\right),\\ S_{F_5}&=&-\frac{1}{8\kappa^2_{10}}\int d^{10}x\sqrt{-G}~|F_5|^2,\\ S_{CS}&=&-\frac{1}{4\kappa^2_{10}}\int C_4\wedge H_3\wedge F_3,\\ S_{\text{loc}}&=&S_{\text{DBI}}+S_3=T_3\mathcal{N}_3\int d^4x \sqrt{-g_4} e^{-2\phi} +\frac{1}{2}\mathcal{N}_3~\mu_3\int_{\Sigma_4} C_4, \end{eqnarray} where in terms of the string length $l_s$, \begin{equation} \kappa^2_{10}=\frac{l_s^8}{4\pi}, \end{equation} $T_3$ is the $D3$-brane tension, $\mathcal{N}_3= \mathcal{N}_{D3}-\frac{1}{2}\mathcal{N}_{O3}$ counts the number of $D3$-branes minus the number of orientifold planes $O3^-$ with $\mu_3=T_3=\frac{2\pi}{l_s^4}$. We consider the DBI action at leading order in $\alpha'$ for $D3$-branes and $O3^-$-planes along the extended coordinates, where the RR fluxes are \begin{eqnarray} \hat{F}_3&=&F_3-C_0\wedge H_3,\nonumber\\ F_5&=&dC_4-\frac{1}{2}C_2\wedge H_3-\frac{1}{2}B_2\wedge dC_2. \end{eqnarray} Thus, the action $S_{F_5}$ (before self-duality is imposed) can be written as \begin{equation} S_{F_5}=-\frac{5!}{8\kappa^2_{10}}\int F_5\wedge\ast F_5= \frac{15}{\kappa^2_{10}}\int \left[ C_4\wedge d\ast F_5 + (\frac{1}{2}C_2\wedge H_3+\frac{1}{2}B_2\wedge dC_2)\wedge \ast F_5\right]. 
\label{S5} \end{equation} Due to the action of the orientifold planes $O3^-$, the RR and NS-NS potentials $C_2$ and $B_2$ are projected out and the equations of motion from $\delta S/\delta C_4=0$ give us the tadpole condition for the 3-dimensional sources \begin{equation} \mathcal{N}_3+\frac{1}{l_s^4}\int F_3\wedge H_3=\mathcal{N}_3+\mathcal{N}_{\text{fluxes}}=0. \end{equation} Therefore, the contribution from $S_{F_5}+S_{CS}+S_3$ to the effective scalar potential $-$in a compactification on a CY manifold$-$ vanishes. As we shall see, we are going to depart from a CY compactification into a more general setup such that $S_{F_5}$ does have a contribution.\\ In order to construct the effective scalar potential $\mathcal{V}_{\text{eff}}$, we specify the ten-dimensional metric as \begin{eqnarray} ds_{10}^2&=&g_{\mu\nu}dx^\mu dx^\nu+h_{mn}dy^ndy^m,\nonumber\\ &=& e^{-2\Omega}e^{2A(y)}\tilde{g}_{\mu\nu}dx^\mu dx^\nu+ e^{-2A(y)}\tilde{h}_{mn}dy^mdy^n, \label{metric} \end{eqnarray} where $e^{-2\Omega}$ is the conformal factor fixed as \begin{equation} e^{-2\Omega}= e^{-2\phi} \mathcal{V}_6 \end{equation} to change into the Einstein frame, with $\mathcal{V}_6=\int d^6y\sqrt{h_6}$. Notice we are not taking into account warping effects on the internal metric. \\ The moduli fields $\tau$ and $s$ are given by \begin{equation} \tau=e^{-\phi}(\mathcal{V}_6)^{2/3}, \qquad s=e^{-\phi}, \end{equation} so the contributions of the action terms $S_{G_3}$ and $S_{DBI}$ are given by \begin{eqnarray} S_{G_3}&=&\int d^4x\sqrt{-g_4} ~\left(\frac{A_{F_3}}{s\tau^3}+\frac{A_{H_3}s}{\tau^3}\right),\label{g3}\\ S_{\text{DBI}}&=&\int d^4x\sqrt{-g_4} ~\frac{A_3 \mathcal{N}_3} {\tau^3}\label{dbi}, \end{eqnarray} where $A_{F_3}$, $A_{H_3}$ and $A_3$ are the corresponding contributions not depending on $\tau$ and $s$, and where \begin{equation} S=C_0+is, \qquad \text{and} \qquad T= \int C_4 + i\tau.
\end{equation} In the above we have assumed that complex structure moduli $z_i$ are fixed through 3-form fluxes, by $D_{z_i}\mathcal{W}=0$, where as usual \begin{equation} \mathcal{W}=\int (F_3-SH_3)\wedge\Omega(z_i), \end{equation} but $D_S\mathcal{W}\ne 0$. Therefore SUSY is broken at least by the axio-dilaton modulus $S$, and the fluxes we are turning on have no $(1,2)$-components. Together with the K\"ahler potential of the form \begin{equation} \mathcal{K}=-\log(-i(S-\bar{S}))-3\log(- i (T - \bar T )), \end{equation} the flux contribution to the scalar potential reduces to \begin{equation} \mathcal{V}_{\text{fluxes}}=e^{\mathcal{K}}|D_SW|^2K^{S\bar{S}}=\frac{\hat{f}^2+s^2h^2}{2s\tau^3}, \end{equation} with \begin{equation} \hat{f}=\int\hat{F}_3\wedge \Omega, \qquad \text{and}\qquad h=\int H_3\wedge\Omega. \end{equation} Comparing with expression (\ref{g3}), \begin{equation} A_{F_3}=\frac{|\hat{f}|^2}{2\kappa_{10}^2}, \qquad \text{and}\qquad A_{H_3}=\frac{|h|^2}{2\kappa_{10}^2}. \end{equation} As is known, by exploring different values for $A_{F_3}$, $A_{H_3}$ and $A_3$, one finds that no stable vacuum is obtained. More ingredients are required. \section{Stable non-SUSY AdS vacua from torsion} As suggested in the literature (see \cite{Hertzberg:2007wc, Shiu:2011zt} and \cite{Danielsson:2012et, Blumenhagen:2015xpa, Junghans:2016uvg, CaboBizet:2016qsa, Andriot:2018ept, Kallosh:2018nrk, Andriot:2018mav, Andriot:2019wrs, Andriot:2020wpp, Andriot:2021rdy}), it is possible to find stable vacua by turning on different contributions to the scalar potential. Here we are interested in a non-vanishing contribution from $S_{F_5}$ to $\mathcal{V}_{\text{eff}}$. For that, we shall take into account the presence of torsion in the internal manifold $\mathbb{X}_6$ which, as we shall argue, naturally comes into play in the presence of orientifold planes \cite{McAllister:2008hb, Cai:2014vua}.
This implies that the K\"ahler 2-form $J_2$ is no longer closed, i.e., $dJ_2\ne 0$, pointing to the necessity of compactifying on generalized CY manifolds. By using the ML algorithm described in Appendix \ref{ML}, we find that AdS stable vacua are obtained under some specific conditions we shall describe in detail. \subsection{Effective scalar potential from Torsion} Let us start by writing the action component $S_{F_5}$ in (\ref{S5}) as \begin{equation} S_{F_5}= \frac{15}{2\kappa^2_{10}}\int \omega_5\wedge \ast F_5, \end{equation} where \begin{equation} \omega_5=\frac{1}{2} C_2\wedge H_3+\frac{1}{2}B_2\wedge F_3. \end{equation} As said, in generic compactifications on $\mathbb{X}_6$ with orientifold planes $O3^-$, 2-forms are divided into odd and even according to the orientifold action on them \cite{Grimm:2004uq}. Since the 2-form RR and NS-NS potentials are odd under an $O3^-$ action while the fluxes $F_3,H_3$ are even, it follows that \begin{equation} \omega_5\in \Omega^2_-( \mathbb{X}_6,\mathbb{Z})\wedge H^3_+( \mathbb{X}_6,\mathbb{Z}). \end{equation} Therefore, for a generic CY manifold, $\omega_5$ does not contribute to $\mathcal{V}_{\text{eff}}$. Also notice that in the presence of orientifold $O3^-$-planes, the RR potential $C_6$ is projected out and it is not possible to have stable BPS $D5$-branes. The effective 4-dimensional scalar potential only receives contributions from the rest of the terms in the action $S$ and from the Dirac-Born-Infeld action of extended objects wrapping internal cycles on $\mathbb{X}_6$, such as $D3$-branes and orientifold planes $O3^-$.\\ However, in the presence of orientifold planes, it is natural and expected to have torsional cycles.
For instance, in a IIB toroidal orientifold, the quotient space $\mathbb{T}^6/\mathbb{Z}_2$ contains torsional cycles of different dimension (dual to torsional fluxes), meaning that there are cycles that after wrapping them a certain number of times, one ends up with a subspace of $\mathbb{T}^6$ with boundary. Since we are considering the presence of orientifold planes, we shall assume the existence of torsional cycles in generic K\"ahler manifolds. Under this context we shall study whether or not $\omega_5$ contributes to $\mathcal{V}_{\text{eff}}$ via torsional cycles.\\ The $p$th-cohomology group of a six-dimensional K\"ahler manifold is written as \begin{eqnarray} {\text{H}}^p(\mathbb{X}_6;\mathbb{R})&=&{\text{H}}^p(\mathbb{X}_6;\mathbb{Z})+\text{Tor~}{\text{H}}^p(\mathbb{X}_6;\mathbb{Z}),\nonumber\\ &=& \mathbb{Z}^{b_p} + \left(\mathbb{Z}_{k_1}\oplus \dots \oplus\mathbb{Z}_{k_n}\right), \label{torsion} \end{eqnarray} where $b_p$ is the Betti number for ${\text{H}}^p(\mathbb{X}_6,\mathbb{Z})$ and $k_i \in \mathbb{Z}$. Let us consider the case for $p=3$. A 3-form in the torsional part can be decomposed as \begin{equation} \pi^{\text{tor}}_3= \lambda^i \pi^{\text{tor}}_{3,i}, \end{equation} with $i=1,\dots ,n$ according to (\ref{torsion}) and $\lambda^i\in \mathbb{Z}$. In the case in which the set of integers $\lambda^i$ has a greatest common divisor (gcd) $\kappa$, there exists a non-closed 2-form $\hat{\omega}_2$ such that $ d\hat{\omega}_2=\kappa\pi_3^{\text{tor}}$, i.e., $\pi_3^{\text{tor}}\in\mathbb{Z}_k$. The set of such 2-forms is denoted $\hat{\Omega}^2(\mathbb{X}_6)$. If $\lambda^i=\kappa^i k^i$ only for some $i$, then there exists $\hat{\omega}_i\in \hat{\Omega}_i^2(\mathbb{X}_6,\mathbb{Z})$ such that $d\hat{\omega}_i=k_i\pi_{3,i}^{\text{tor}}$. 
In this scenario, generic RR and NS-NS potentials are given by \begin{eqnarray} C_2&=&c^a\omega_a+\tilde{c}^i\hat{\omega}_i,\nonumber\\ B_2&=&b^a\omega_a+\tilde{b}^i\hat{\omega}_i \end{eqnarray} where $\omega_a\in{\text{H}}^2_-(\mathbb{X}_6,\mathbb{Z})$, $\hat{\omega}_i\in\hat{\Omega}^2(\mathbb{X}_6,\mathbb{Z})$ with $a=1\dots h^{1,1}_-(\mathbb{X}_6)$ and $i=1,\dots n$. The presence of 2-forms $\hat{\omega}_i$ implies that the K\"ahler form $J_2$ can also be written as \begin{equation} J_2=t^a\omega_a+\tilde{t}^i\hat{\omega}_i, \end{equation} from which $dJ_2=k_i\tilde{t}^i\pi_{3,i}^{\text{tor}}$. Hence for $\tilde{t}^i=\tau^i/k_i$, $dJ_2$ is non-trivial in ${\text{H}}^3(\mathbb{X}_6,\mathbb{Z})$ and $\mathbb{X}_6$ is not a CY manifold but at least a K\"ahler manifold modulo $k_i$.\\ If we now restrict the compactification to a K\"ahler manifold with torsion as above, the contribution from $S_{F_5}$ is no longer zero, but \begin{equation} S_{F_5}=\frac{5!}{16\kappa_{10}^2}\int d\text{Vol}_4\int \left(H_3\tilde{c}^i-F_3\tilde{b}^i\right)\wedge\hat{\omega}_i\wedge de^{4A(y)}, \end{equation} with $A(y)$ the warping factor in Eq.(\ref{metric}). Therefore, the contribution of the $F_5$-form to the scalar potential, in the Einstein frame, is given by \begin{equation} V_{F_5}\sim \frac{A_5}{\tau^4}, \end{equation} where $A_5= A ~\text{mod}~k_i$ for some $A$. \subsection{Conditions for finding stable AdS vacua} The above contribution to $\mathcal{V}_{\text{eff}}$ from $S_{F_5}$, together with the contributions from 3-form fluxes, $D3$-branes and $O3^-$-planes, leads us to a scalar potential of the form \begin{equation} \mathcal{V}_{\text{eff}} = \frac{A_{H_3} s}{\tau^3}+\frac{A_{F_3}}{s\tau^3}+\frac{A_{F_5}}{\tau^4}+\frac{A_{3}\mathcal{N}_{3}}{\tau^3}, \end{equation} which actually has some stable AdS minima if there is at least one negative contribution from the above terms.
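The role of the sign of $\mathcal{N}_3$ can be made explicit with toy coefficients (illustrative numbers, not derived from any specific compactification): minimizing over $s$ gives $s_*=(A_{F_3}/A_{H_3})^{1/2}$, after which the $\tau^{-3}$ coefficient is $c_3=2(A_{H_3}A_{F_3})^{1/2}+A_3\mathcal{N}_3$, and a minimum at finite $\tau$ requires $c_3<0$:

```python
import numpy as np

# Toy illustration of V_eff = A_H*s/tau^3 + A_F/(s*tau^3) + A_5/tau^4
#   + A_3*N3/tau^3.  All coefficients are illustrative, not derived from
# a specific compactification.
A_H, A_F, A_5, A_3 = 1.0, 4.0, 2.0, 1.0

def V(s, tau, N3):
    return A_H * s / tau**3 + A_F / (s * tau**3) + A_5 / tau**4 + A_3 * N3 / tau**3

s_star = np.sqrt(A_F / A_H)             # dV/ds = 0  =>  s* = sqrt(A_F/A_H)

def c3(N3):                             # tau^-3 coefficient after s-minimization
    return 2.0 * np.sqrt(A_H * A_F) + A_3 * N3

N3 = -10                                # more O3^- planes than D3-branes
tau_star = -4.0 * A_5 / (3.0 * c3(N3))  # from dV/dtau = 0; needs c3 < 0
V_min = V(s_star, tau_star, N3)
print(tau_star, V_min)                  # V_min < 0: an AdS minimum, not dS
```

With these numbers the minimum has $\mathcal{V}_{\text{eff}}<0$, i.e., it is AdS, consistent with the fact that no dS vacua arise from this potential alone.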
However, since the flux contributions $A_{F_3}$ and $A_{H_3}$ are positive definite\footnote{According to our previous analysis, this means that supersymmetry is broken by the dilaton modulus.} and $A_{F_5}$ is defined modulo an integer, the only option left is that, from the contribution of 3-dimensional sources, $\mathcal{N}_3$ must be negative.\\ By restricting the flux configurations and local sources to satisfy that $\mathcal{N}_3<0$, the number of $O3^-$-planes must be larger than twice the number of $D3$-branes, implying that at some points in the internal space, there must be isolated orientifold planes, or in other words that there are no $D3$-branes on top of some of the $O3^-$-planes. This follows from the usual assumption that orientifold planes are immovable and from the fact that there is an attraction between $D3$-branes and $O3^-$-planes due to the RR $D3$-brane charge they carry. For instance, the simplest configuration involving the presence of $D3$-branes with $\mathcal{N}_3<0$ is to have 4 orientifold fixed points and a single $D3$-brane sitting at one of those points. In such a case, $\mathcal{N}_3=-1$ (see Figure \ref{fig:D3-O3} for a schematic representation of this configuration). \begin{figure}[htbp] \centering \includegraphics[scale=0.2]{D3-O3.pdf} \begin{picture}(0,0) \put(-120,132){O3} \put(-90,0){O3} \put(-10,100){O3} \put(-193,7){D3 -O3} \end{picture} \caption{Schematic picture of the compact space and the loci of the orientifold planes and $D3$-branes.} \label{fig:D3-O3} \end{figure} Under these conditions we implemented our ML algorithm described in Appendix \ref{ML}. With it, we were able to find 389 different stable AdS vacua. However, in spite of designing our algorithm such that finding dS vacua was favored over AdS, no dS vacuum was found. Our results are shown in Figure \ref{fig:landsbpsnonbps} where all found vacua, stable or not, are represented by black squares.
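A minimal sketch of how such constraints can be encoded in a weighted error function of the kind described in the introduction (the names, weights and finite-difference derivatives are our illustrative choices, not the implementation of Appendix \ref{ML}; only the first three constraints are encoded here):

```python
import numpy as np

def num_grad(V, x, h=1e-6):
    """Central finite-difference gradient of V at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (V(x + e) - V(x - e)) / (2.0 * h)
    return g

def num_hess(V, x, h=1e-4):
    """Central finite-difference Hessian (mass matrix) of V at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (V(x + ei + ej) - V(x + ei - ej)
                       - V(x - ei + ej) + V(x - ei - ej)) / (4.0 * h * h)
    return H

def error(V, x, alphas=(1.0, 1.0, 1.0)):
    """Weighted penalties for: V < 0, nonzero gradient, tachyonic directions."""
    a1, a2, a3 = alphas
    e1 = max(0.0, -V(x))                        # want V > 0 at the minimum
    e2 = float(np.linalg.norm(num_grad(V, x)))  # want a critical point
    e3 = max(0.0, -np.linalg.eigvalsh(num_hess(V, x)).min())  # no tachyons
    return a1 * e1 + a2 * e2 + a3 * e3

# Toy potential with a dS-like minimum at x = (1, 2), value 0.1 > 0:
Vtoy = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2 + 0.1
print(error(Vtoy, np.array([1.0, 2.0])))   # ~0: all constraints satisfied
print(error(Vtoy, np.array([0.0, 0.0])))   # large: not a critical point
```

In the hybrid search, SA would propose candidate points and GD would then drive this error toward zero.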
\\ \begin{figure}[htbp] \centering a)\includegraphics[scale=0.5]{landscape_bps-nobps.pdf} \begin{picture}(0,0) \put(-150,215){V$_0$} \put(-5,55){min $m^2$} \end{picture}\\ b) \includegraphics[scale=0.5]{zoom_landscape_bps-nobps.pdf} \begin{picture}(0,0) \put(-320,195){V$_0$} \put(-20,105){min $m^2$} \end{picture} \caption{Plot of the critical points found by the hybrid algorithm. The black squares correspond to the cases where $F_5$ contributions were taken into account without non-BPS states, whereas the blue crosses consider the presence of $\hat{D5}$ non-BPS states. On the second image, we present a zoom of the stable cases.} \label{fig:landsbpsnonbps} \end{figure} \section{Stable dS vacua from Non-BPS states} The presence of torsion opens up the possibility of wrapping $D$-branes on torsional cycles. The existence of torsional cycles follows from the dual maps between homology and cohomology, where \begin{equation} \int_{\Sigma^{j,\text{tor}}_2}\hat{\omega}_i=\int_{\mathbb{X}_6}\hat{\omega}_i\wedge PD(\Sigma^{j,\text{tor}}_2)=\delta^j_i, \label{2cycleform} \end{equation} with \begin{equation} k_i\Sigma^{i,\text{tor}}_2=\partial\hat{\Pi}_{3}^i. \end{equation} This last assertion means that the homology group $\text{H}_2(\mathbb{X}_6, \mathbb{Z})$ also has a torsion component, i.e., $ \Sigma^{i,\text{tor}}_2\in \text{Tor~}{\text{H}}_2(\mathbb{X}_6, \mathbb{Z})$ and $\hat{\Pi}_{3}^i\in \hat{\Omega}_3(\mathbb{X}_6,\mathbb{Z})$. It follows then that $\text{Tor~}{{\text{H}}}_2(\mathbb{X}_6,\mathbb{Z})\sim \text{Tor~}{{\text{H}}}^3(\mathbb{X}_6,\mathbb{Z})$.
We shall follow the argument in which these states $-$D-branes wrapped on torsional cycles$-$ are in fact related to the well-known non-BPS states constructed from K-theory \cite{Witten:1998cd}.\\ The existence of non-BPS states in the presence of an orientifold plane $O3^-$ can be inferred by applying T-duality on the corresponding coordinates on a torus compactification of Type I string theory, which actually has non-BPS branes such as $\hat{D7}$, $\hat{D8}$, $\hat{D0}$ and $\hat{D(-1)}$. Hence, by taking for instance a non-BPS $\hat{D7}$-brane spanned on 4 coordinates on $\mathbb{T}^6$ immersed in an $O9^-$-plane and applying T-duality on the compact coordinates, we get an extended $O3^-$-plane and a 5-brane wrapping a 2-dimensional space in the covering space. We expect this object to carry a topological $\mathbb{Z}_2$ charge as its T-dual partner does. Indeed, by computing the 2nd-homology group of $\mathbb{T}^6/\mathbb{Z}_2$ we see that there are torsional 2-cycles. Wrapping $D5$-branes of type IIB theory on such cycles seems to be the way to construct the aforementioned non-BPS states. Moreover, by computing the corresponding T-dual K-theory group one sees that stable non-BPS states are present, carrying discrete topological charge $\mathbb{Z}_2$ with three extended coordinates while the others are wrapped on the compact space.\\ For a more general compactification, one must compute the K-theory groups of intersecting sources, i.e., of configurations of branes intersecting orientifold planes wrapping cycles on a compact manifold. This is indeed a difficult task. However, ignoring the compact component of the space, it is possible to classify intersecting branes with orientifolds by the use of the Kasparov KK-theory \cite{Asakawa:2002nv, Garcia-Compean:2008fzo}.
Since the formulation is quite technical and beyond the scope of this work\footnote{The KK-theory group classifying $Dd$-branes on top of an $Op^-$-plane, with $p=3 ~\text{mod}~ 4$ and $d>p$, is given by \cite{Garcia-Compean:2008fzo} \begin{equation} KKH^{-2}(\mathbb{R}^{d-s,r}, \mathbb{R}^{9-p, p+r-s})=KO(\mathbb{S}^{2p-2s+d-3}), \end{equation} where $s$ is the number of coordinates of the D$d$-brane overlapping the orientifold plane and $r$ is the codimension of the $Dd$-brane inside the orientifold. For a $D5$-brane on top of an $O3^-$-plane with 2 transversal coordinates, $p=s=3$, $r=0$ and $d=5$.}, we just present the KK-theory group which classifies 5-branes fully intersecting an $O3^-$-plane, i.e., with 2 coordinates transversal to the orientifold plane, and its relation to the orthogonal K-theory group. This is: \begin{equation} KKH^{-2}(\mathbb{R}^{2,0}, \mathbb{R}^{6,0})=KO(\mathbb{S}^2)=\mathbb{Z}_2, \end{equation} as expected.\\ Based on these results, we take as valid the construction of stable non-BPS states by wrapping D-branes on torsional cycles of a K\"ahler manifold $\mathbb{X}_6$. In particular, we can construct a non-BPS $\hat{D5}$-brane by wrapping a $D5$-brane on a torsional 2-cycle $\Sigma_2^{\text{tor}}\in {\text{H}}^{\text{tor}}_2(\mathbb{X}_6, \mathbb{Z})$, where $\Sigma^{\text{tor}}_2$ is the cycle on which the 2-form $\hat{\omega}_2$ is supported, as in Eq.(\ref{2cycleform}).\\ Summarizing, a compactification on a K\"ahler manifold $\mathbb{X}_6$ with torsional components in (co)homology allows us to include D-branes wrapping torsional cycles. Here we shall consider the contribution to the effective scalar potential from non-BPS $\hat{D5}$-branes.
However, before that we must discuss possible sources of instability in a configuration constructed with fluxes, $D3$-branes, $O3^-$-planes and non-BPS states.\\ \subsection{Consistency by adding non-BPS $\hat{D5}$-branes} As is known \cite{Witten:1998cd}, the non-BPS brane $\hat{D7}$ in type I theory can be constructed from a pair of $D7$- and $\bar{D7}$-branes, where the tachyon of the open string connecting the two branes is projected out by the orientifold $O9^-$. However, since in type I theory there are 32 $D9$-branes, there is also a tachyon from the open string between $D9$-branes and $D7$-branes, making the non-BPS $\hat{D7}$-brane unstable \cite{Loaiza-Brito:2001yer}.\\ In a T-dual version, upon compactification on a six-dimensional torus, the above configuration is mapped into $D3$-branes and $O3^-$-planes sitting at different points on $\mathbb{T}^6$ and $D5$-branes wrapping torsional 2-cycles on the compact space, corresponding to the non-BPS states $\hat{D5}$. Therefore, by T-duality, it is expected that at a given fixed point in the internal space, a $\hat{D5}$-brane coinciding with at least one $D3$-brane would be unstable to decay into a field configuration while preserving its topological charge $\mathbb{Z}_2$. This instability is not present (at least locally) if at the given fixed point there are no $D3$-branes, a configuration we can have if there are more orientifolds than $D3$-branes, i.e., if $\mathcal{N}_3=\mathcal{N}_{D3}-\frac{1}{2}\mathcal{N}_{O3}<0$. In order to cancel the $D3$-brane charge tadpole, we then require a positive contribution from fluxes. These two characteristics, ${\cal N}_3<0$ and $\mathcal{N}_{\text{flux}}>0$, are essential to guarantee stability when adding non-BPS $\hat{D5}$-branes.
Notice that $\mathcal{N}_3<0$ is one of the conditions to ensure the existence of stable AdS vacua without adding non-BPS states.\\ Under the above circumstances, we shall take a $D5$-brane and wrap it on a torsional 2-cycle $\Sigma^{\text{tor}}_2\in \text{Tor}~{\text{H}}_2(\mathbb{X}_6, \mathbb{Z})$. Following \cite{Bergman:2001rp}, we argue that such a state is classified by the corresponding K-theory group on $\mathbb{X}_6$. Also, we shall consider the contribution of this non-BPS $\hat{D5}$-brane to the effective scalar potential from the DBI term. However, it is important to notice that its contribution must be measured $\text{mod} ~2$, since a pair of non-BPS branes with topological charge $\mathbb{Z}_2$ annihilate each other. This means that if the total discrete charge vanishes, the effective contribution from non-BPS branes is null \cite{Blumenhagen:2019kqm, Blumenhagen:2021nmi}. Another important fact we must keep in mind is that we are ignoring torsional components for 3-form fluxes, although there is no restriction on their presence\footnote{In \cite{Damian:2019bkb} some consequences of turning on torsional components of fluxes are discussed.}. \\ Hence, the effective contribution of a non-BPS brane $\hat{D5}$ at leading order in $\alpha'$ is given by the DBI action, \begin{equation} S_{\hat{D5}}=-2T_5\int d^6\xi ~e^{-\phi} \sqrt{-\widetilde{g}_6}, \end{equation} where $\widetilde{g}_6$ is the determinant of the induced metric on the $\hat{D5}$-brane worldvolume. Therefore, the corresponding effective scalar potential in the Einstein frame reads \begin{equation} V_{\hat{D5}}=\frac{{A}_{\hat{D5}}}{s^{1/2}\tau^{5/2}}, \label{VD5} \end{equation} where $2n{A}_{\hat{D5}}=0$ for $n\in \mathbb{Z}$.\\ \subsection{Stable dS vacua with non-BPS states} In order to look for dS minima we shall employ a hybrid method which consists of applying a stochastic method known as simulated annealing, followed by the gradient descent algorithm (see Appendix \ref{ML}).
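The two-stage search can be sketched in a few lines of code. The snippet below is only an illustration of the strategy, not the authors' implementation: the coefficients are assumed values, loosely rounded from one of the vacua reported in this work, and the search box, temperature schedule, and step sizes are arbitrary choices.

```python
import numpy as np

# Assumed, illustrative coefficients (loosely rounded from one of the
# vacua found in this work -- not the values used by the authors' code).
A_H3, A_F3, A_F5, A3N3, AD5 = 0.25, 0.76, 0.98, -1.70, 0.42

def V_eff(x):
    s, tau = x
    return (A_H3 * s / tau**3 + A_F3 / (s * tau**3) + A_F5 / tau**4
            + A3N3 / tau**3 + AD5 / (np.sqrt(s) * tau**2.5))

def grad(x, h=1e-6):
    # Central finite-difference gradient of V_eff.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (V_eff(x + e) - V_eff(x - e)) / (2.0 * h)
    return g

rng = np.random.default_rng(1)
lo, hi = np.array([0.5, 1.0]), np.array([5.0, 8.0])  # search box in (s, tau)

# Stage 1: simulated annealing, keeping the best point visited.
x = np.array([2.2, 3.5]); best = x.copy(); T = 0.01
for _ in range(6000):
    prop = x + rng.normal(scale=0.1, size=2)
    if np.all(prop > lo) and np.all(prop < hi):
        dV = V_eff(prop) - V_eff(x)
        if dV < 0 or rng.random() < np.exp(-dV / T):
            x = prop
            if V_eff(x) < V_eff(best):
                best = x.copy()
    T *= 0.999  # geometric cooling schedule

# Stage 2: (clipped) gradient descent refines the annealed candidate.
x = best
for _ in range(20000):
    x = x - np.clip(50.0 * grad(x), -0.1, 0.1)

print(x, V_eff(x))
```

With these assumed inputs, the annealing stage lands in the basin of the shallow dS minimum and the gradient stage refines it to a point with a small positive $\mathcal{V}_{\text{eff}}$, of the same kind as the vacua reported in this work.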
The effective scalar potential constructed from the contributions of 3-form fluxes, 3-dimensional sources, a torsional component of $F_5$ and non-BPS $\hat{D5}$-branes is \begin{equation} \mathcal{V}_{\text{eff}} = \frac{A_{H_3} s}{\tau^3}+\frac{A_{F_3}}{s\tau^3}+\frac{A_{F_5}}{\tau^4}+\frac{A_{3}\mathcal{N}_3}{\tau^3}+ \frac{A_{\hat D5}}{{s}^{1/2}\tau^{5/2}}. \end{equation} As discussed in \cite{Shiu:2011zt} (see also \cite{Hertzberg:2007wc}), it is expected that this ansatz evades the no-go theorems and increases the possibility of finding some stable dS vacua. \\ In Figure \ref{fig:landsbpsnonbps}, the blue crosses show the critical points found by the above-mentioned algorithm. Notice the presence of many stable dS vacua. In Table \ref{tab:dSv} we present the explicit values of the scalar potential contributions for some of these vacua. \begin{table}[!ht] \centering $ \begin{array}{|c c c c c c c c c c|} \hline \text{min}\,V & m_{\tau}^2 & m_{\text{s}}^2 & \tau & \text{s} & \text{A}_{F_3} & \text{A}_{H_3} & \text{A}_{F_5} & \text{A}_{3}\mathcal{N}_3 & \text{A}_{\hat D 5} \\ \hline 1.309 \times 10^{-6} & 0.000546 & 0.003728 & 3.695 & 2.363 & 0.7628 & 0.2486 & 0.9769 & -1.704 & 0.4231 \\ 5.676 \times 10^{-6} & 0.0005166 & 0.003874 & 3.727 & 2.298 & 0.7566 & 0.2575 & 0.9755 & -1.708 & 0.4125 \\ 6.980 \times 10^{-5} & 0.0006562 & 0.007878 & 2.917 & 2.111 & 0.7463 & 0.2189 & 0.3022 & -1.135 & 0.1851 \\ 9.561 \times 10^{-5} & 0.0003757 & 0.003277 & 3.855 & 2.373 & 0.7638 & 0.2495 & 0.9778 & -1.702 & 0.4238 \\ 4.039 \times 10^{-4} & 9.460 \times 10^{-7} & 0.004265 & 4.258 & 2.170 & 1.260 & 0.3864 & 0.6982 & -2.067 & 0.3677 \\ \hline \end{array} $ \caption{Selected vacua found by the hybrid SA+CG algorithm.} \label{tab:dSv} \end{table} \section{Uplifting conditions by non-BPS states} In this section we are interested in discussing the uplifting of stable AdS vacua to dS ones by the presence of non-BPS states such as the $\hat{D5}$-branes.
As previously observed, a dimensional reduction in the presence of 3-form fluxes $H_3$ and $F_3$, as well as 3-dimensional sources such as $D3$-branes and $O3^-$-planes, together with a torsional $F_5$ form, leads us to the possibility of constructing stable AdS vacua. For $A_{\hat{D5}}=0$, the minimum of $\mathcal{V}_{\text{eff}}$ is located at \eq{ s_0= \left( \frac{A_{F_3}}{A_{H_3}} \right)^{1/2}, \quad \tau_0 = \frac{4}{3} \frac{A_{F_5}}{\Delta}, \label{vevsAdS} } for $\Delta = -(A_3 \mathcal{N}_3+2 A_{H_3}^{1/2} A_{F_3}^{1/2})$. Notice that in the case where we turn on a single flux $G_3$, meaning that the contribution to the superpotential comes from one single period, $\Delta$ reduces to zero due to tadpole cancellation. Therefore, it is necessary to consider more than one flux, while keeping $|A_3\mathcal{N}_3|>2(A_{H_3}A_{F_3})^{1/2}$ such that $\tau_0>0$. Hence, two specific conditions must be fulfilled: \begin{enumerate} \item $W=\int G_3\wedge \Omega$ must be constructed from more than just one period. \item $\mathcal{N}_{O3}>4\frac{(A_{H_3}A_{F_3})^{1/2}}{A_3}+2\mathcal{N}_{D3}$. \end{enumerate} We shall restrict the rest of our analysis to such a case.\\ The value of the potential at the AdS minimum can be written as a function of the vacuum expectation value (vev) of the K\"ahler modulus as \eq{ {\mathcal{V}}_{AdS} = - \frac{1}{3}\frac{A_{F_5}}{\tau_0^{4}} \,, } thus, the larger $\tau_0$, the shallower the AdS vacuum, which is compatible with the KKLT scenario. The eigenvalues can be written in terms of the vevs as \eq{ m^2_s = \frac{2 A_{H_3}^{1/2}}{s_0 \tau_0} \quad \text{and} \quad m^2_{\tau} = \frac{4 A_{F_5}}{\tau_0^6} \,, } and we see that for large values of $\tau_0$, the smallest eigenvalue is always in the $\tau$ direction.
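Eq.(\ref{vevsAdS}) can be cross-checked numerically: the gradient of the potential must vanish at $(s_0,\tau_0)$ and the depth of the minimum must reproduce ${\mathcal{V}}_{AdS}$. The coefficients below are assumed, illustrative values chosen so that $\Delta>0$; this is a sketch for verification only.

```python
import numpy as np

# Assumed illustrative coefficients with A3N3 < -2*sqrt(A_H3*A_F3),
# so that Delta > 0 and tau_0 > 0.
A_H3, A_F3, A_F5, A3N3 = 0.25, 0.76, 0.98, -1.70

def V(s, tau):
    # Effective potential with the non-BPS contribution switched off.
    return A_H3*s/tau**3 + A_F3/(s*tau**3) + A_F5/tau**4 + A3N3/tau**3

Delta = -(A3N3 + 2.0*np.sqrt(A_H3*A_F3))
s0 = np.sqrt(A_F3/A_H3)
tau0 = (4.0/3.0)*A_F5/Delta

# The gradient should vanish at (s0, tau0), and the depth of the minimum
# should reproduce V_AdS = -A_F5/(3*tau0**4).
h = 1e-6
dVds = (V(s0 + h, tau0) - V(s0 - h, tau0)) / (2*h)
dVdt = (V(s0, tau0 + h) - V(s0, tau0 - h)) / (2*h)
print(s0, tau0, dVds, dVdt, V(s0, tau0) + A_F5/(3.0*tau0**4))
```

Both finite-difference derivatives come out at round-off level, and the value of the potential at the critical point matches $-A_{F_5}/(3\tau_0^4)$ to machine precision.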
\\ Now, to uplift from stable AdS to dS vacua it is necessary to add the energy associated with the non-BPS states $\hat{D5}$ as in Eq.(\ref{VD5}), which changes the vevs of the moduli, shifting their numerical values to greater ones. In this case the K\"ahler modulus is modified to \eq{ \tau = \frac{4(-A_{F_3}+ A_{H_3}s^2)^2}{A_{\hat{D5}}^2 s}\,, } which in the limit $A_{\hat{D5}}\ll 1$ can be written as \eq{ s &= s_0 + \frac{1}{2\sqrt{3}}\left( \frac{A_{F_5}}{A_{F_3}^{1/2} A_{H_3}^{3/2}} \right)^{1/2} \frac{1}{( \Delta )^{1/2}} A_{\hat{D5}}+ \mathcal{O} \left( A_{\hat {D5}}^2 \right) \, \\ \tau &=\tau_0 + \frac{1}{2^6}\frac{\tau_0^2}{A_{F_3}^{3/2} A_{H_3}^{1/2}}A_{\hat{D5}} + \mathcal{O} \left( A_{\hat {D5}}^2 \right) \,. } Notice from this and from Eq.(\ref{vevsAdS}) that for $\Delta > 0$ this branch of solutions takes real values. In this context, one can also express the effective potential at leading order in $A_{\hat{D5}}$ as \eq{ \mathcal{V}_{\text{eff}} = V_{AdS} + \frac{1}{s_0^{1/2} \tau_0^{5/2}} A_{\hat {D5}} + \mathcal{O} \left( A_{\hat {D5}}^2 \right), } where the uplifting from AdS to dS depends on how deep the AdS vacuum is. \\ However, it is important to analyze whether the uplifting is stable or not. For that we shall study under which conditions there are tachyons. Let us start by establishing the required stability criteria for the AdS vacua. Since we are interested only in the presence of tachyons, we shall take the mass matrix as \begin{equation} (M^2_{AdS})_{ij}=\partial_{ij}\mathcal{V}_{\text{AdS}}, \end{equation} with $i,j=s, \tau$. The eigenvalues $\lambda_{AdS}$ are given by \eq{ \lambda_{AdS} = \frac{1}{2}\left({\rm tr \,} M^2_{AdS} \pm \alpha\right), } where $\alpha = \sqrt{({\rm tr \,} M^2_{AdS})^2- 4 \det M^2_{AdS}}$. According to Sylvester's criterion, a stable minimum exists whenever ${\rm tr \,} M^2_{AdS} > 0$ and $\det M^2_{AdS} > 0$. Notice that large values of the eigenvalues $\lambda_{AdS}$ indicate that it is difficult to destabilize the minimum.
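The interplay between uplift and flattening can be illustrated numerically before going through the general argument. The sketch below uses assumed, illustrative coefficients and takes the mass matrix simply as $\partial_{ij}\mathcal{V}$ in the $(s,\tau)$ coordinates (ignoring any K\"ahler-metric normalization); it locates the minimum with and without the $A_{\hat{D5}}$ term and compares the Hessian eigenvalues.

```python
import numpy as np

A_H3, A_F3, A_F5, A3N3 = 0.25, 0.76, 0.98, -1.70  # assumed illustrative values

def V(x, AD5):
    s, tau = x
    return (A_H3*s/tau**3 + A_F3/(s*tau**3) + A_F5/tau**4
            + A3N3/tau**3 + AD5/(np.sqrt(s)*tau**2.5))

def grad(x, AD5, h=1e-6):
    # Central finite-difference gradient.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (V(x+e, AD5) - V(x-e, AD5)) / (2*h)
    return g

def hess(x, AD5, h=1e-4):
    # Finite-difference Hessian, symmetrized.
    H = np.zeros((2, 2))
    for i in range(2):
        e = np.zeros(2); e[i] = h
        H[:, i] = (grad(x+e, AD5) - grad(x-e, AD5)) / (2*h)
    return 0.5*(H + H.T)

def newton_min(x, AD5, steps=30):
    # Newton iterations starting close enough to the minimum.
    for _ in range(steps):
        x = x - np.linalg.solve(hess(x, AD5), grad(x, AD5))
    return x

# AdS minimum (AD5 = 0) and the uplifted minimum (AD5 > 0).
x_ads = newton_min(np.array([1.7, 1.6]), 0.0)
x_ds  = newton_min(np.array([2.3, 3.7]), 0.42)

lam_ads = np.linalg.eigvalsh(hess(x_ads, 0.0))
lam_ds  = np.linalg.eigvalsh(hess(x_ds, 0.42))
print(lam_ads, lam_ds, V(x_ads, 0.0), V(x_ds, 0.42))
```

With these inputs the uplifted minimum has a positive potential value while both Hessian eigenvalues shrink substantially, which is precisely the flattening effect discussed in the text.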
On the contrary, small values of $\lambda_{AdS}$ correspond to very flat potentials from which it is easy to escape. Following this line of reasoning, we want to show that by adding non-BPS states $\hat{D5}$ the eigenvalues related to an AdS vacuum become smaller.\\ For that, let us consider adding the contribution from non-BPS states $\mathcal{V}_{\hat{D5}}$, such that \begin{equation} (M^2)_{ij}=\partial_{ij}\left(\mathcal{V}_{AdS}+\mathcal{V}_{\hat{D5}}\right)=(M_{AdS}^2)_{ij}+(M^2_{\hat{D5}})_{ij}. \end{equation} One realizes that the eigenvalues for each of the moduli decrease as we add the $A_{\hat D5}$ term. To show this clearly, let us split ${\rm tr \,} M^2$ and $\det M^2$ in terms of the contributions of $A_{\hat D5}$ as \eq{ {\rm tr \,} M^2 = {\rm tr \,} M^2_{AdS} + f(A_{\hat D5}), \qquad \det M^2 = \det M^2 _{AdS}+ g(A_{\hat{D5}}) \,, } where $f(A_{\hat D5})$ and $g(A_{\hat D5})$ are positive definite homogeneous functions of degree 1 in $A_{\hat D5}$, provided the added potential is of the form \begin{equation} \mathcal{V}_{\hat{D5}}\sim \frac{1}{s^m\tau^n}, \end{equation} with $n,m>0$, which is indeed our case. Thus, by adding the $A_{\hat D5}$ term, there is a contribution $\delta\lambda$ to the eigenvalues $\lambda_{AdS}$ such that \eq{ \lambda = \lambda_{AdS}+ \delta \lambda. } In this context, we say that if $\delta \lambda < 0$, the eigenvalues of the mass matrix decrease due to the contribution of the non-BPS states. Indeed, the change of the eigenvalues can be written explicitly as \eq{\label{eigenvalues} \delta \lambda = \frac{1}{2}(f\pm\alpha) \left( 1 - \sqrt{1+ \gamma} \right), } where \eq{ \gamma = \frac{2 f \left( {\rm tr \,} M^2_{AdS} \pm \alpha \right) + 4 g}{(f\mp \alpha)^2}. } Since $f$ and $g$ are positive functions and $\alpha < {\rm tr \,} M^2_{AdS}$, $\gamma$ is positive definite. In consequence, the term $\left( 1 - \sqrt{1+ \gamma} \right)$ is negative. This in general implies that \eq{ \frac{2(\delta \lambda)}{(f\pm \alpha)} \leq 0.
} Finally, putting $f$ and $\alpha$ in terms of the determinant and trace of the mass matrix, we find that \eq{ f\pm \alpha = {\rm tr \,} M^2 - {\rm tr \,} M^2_{AdS} \pm \sqrt{({\rm tr \,} M^2_{AdS})^2 - 4\det M^2_{AdS}} > 0, } and hence $\delta \lambda < 0$. \\ Adding non-BPS states drives two important features in the effective potential. On the one hand, it uplifts the value of $\mathcal{V}_{AdS}$ to a dS one; on the other hand, since the contribution to the energy at the minimum is positive, the scalar potential becomes very flat, increasing the probability for this vacuum to be destabilized. We show this behavior, for one case, in Figure \ref{fig:uplift}.\\ \begin{figure}[htbp] \centering a) \includegraphics[scale=0.25]{fixS.pdf} b) \includegraphics[scale=0.25]{fixtau.pdf} \begin{picture}(0,0) \put(-350,5){V$_0$} \put(-195,88){s} \put(-155,5){V$_0$} \put(-5,95){$\tau$} \put(-70,55){V$_0$} \put(-15,20){$\tau$} \end{picture} \caption{Plots of the uplift mechanism employing the A$_{\hat D 5}$ contribution. In red we show the effective potential at an AdS minimum defined by the contributions $A_{F_3}= 0.77046$, $A_{H_3}= 0.24018$, $A_{O_3}=-1.6974$, $A_{F_5}=0.97955$, with moduli vevs given by $s =1.7911$, $\tau = 1.5603$. By adding non-BPS $\hat{D5}$-branes with $A_{\hat{D}5}= 0.43564$, it is observed that the uplift reduces the mass of the scalar field, while the expectation values move to larger values, $s= 2.4473$, $\tau = 3.8434$. Notice that the uplift of the K\"ahler modulus produces a nearly flat direction, which is compatible with the KKLT scenario. } \label{fig:uplift} \end{figure} \subsection{Comments about some Swampland conjectures} We have described a way to construct a dS vacuum by adding the contribution to the scalar potential from a non-BPS $\hat{D5}$-brane to a non-SUSY AdS vacuum ($D_SW\ne 0$). However, as recently studied, there are some constraints on the construction of both states.
First of all, it has been argued that a non-supersymmetric AdS vacuum is at most metastable in the context of the Swampland program \cite{Freivogel:2016qwc,Ooguri:2016pdq}. Second, a constraint is expected on the AdS scale with respect to the lightest modulus mass; and finally, in the case of uplifting the non-SUSY vacuum to a dS one, the final vacuum is at most metastable. Let us comment on these three points and how they are manifested in our setup.\\ As mentioned, one way to ensure the construction of an AdS vacuum, by considering the contribution of $F_5$ in a manifold with torsion, implies the stabilization of the complex structure by $D_U\mathcal{W}=0$ while keeping $D_S\mathcal{W}\ne 0$. Therefore, the AdS vacuum is non-SUSY. According to the Swampland conjectures, such an AdS vacuum must be at most metastable. In our case, the source of instabilities could come from two places: first, from our assumption of not considering torsional components of 3-form fluxes, which usually drive some topological transitions, as pointed out in \cite{Damian:2019bkb}. Second, since the contribution from $F_5$ is based on the existence of torsional cycles, it is possible that the total discrete charge must vanish, following the recent proposal about having zero global charges in Quantum Gravity and its relation to K-theory by cobordisms, as proposed in \cite{Blumenhagen:2021nmi}. We believe that both aspects are in fact related.\\ The second point concerns the AdS scale, which is also conjectured to satisfy a relation of the form \eq{ m_{\text{mod}} R_{\text{AdS}} \gg c', } where $c' \sim 1$ and $R_{\text{AdS}} \sim |V_0|^{-1/2}$, in order to keep a robust realization of a dS vacuum. Recent studies argue that effective models which support such a parametric hierarchy are in fact in the Swampland.
Again, in our case the above two factors can be expressed in terms of the contributions to the scalar potential, from which we obtain \eq{ m_{\text{mod}} R_{\text{AdS}} = \frac{3\sqrt{3}}{2} \frac{\Delta}{A_{{F}_5}} = -\frac{3\sqrt{3}}{2} \frac{2 A_{{H}_3}^{1/2}A_{{F}_3}^{1/2}+A_{3}\mathcal{N}_3}{A_{{F}_5}} . \label{mRAdS} } As all the constants $A_i$ for $i = \{ {H}_3, {F}_3, D3, O3 \}$ are of the same order, the energy added by $F_5$, for a $\mathbb{Z}_k$ discrete torsion, vanishes up to a multiple of $k$. Hence, unless $k$ is too large, the quotient (\ref{mRAdS}) is slightly larger than order 1, and by taking $k=2$, $m_{\text{mod}}R_{AdS}\gtrsim c'$.\\ In this context, it is possible to add energy for the uplifting in such a way that we stay in a region where stability can be (parametrically) controlled. Indeed, in our model, the AdS vacua contain no tachyons, neither in the axio-dilaton nor along the K\"ahler directions. Besides, the scale of the AdS is smaller than the energy coming from the lightest modulus, violating the AdS conjecture. Thus, adding non-BPS states whose energy contribution scales as $s^{-1/2} \tau^{-5/2}$ generates a flattening effect accordingly. \\ Finally, according to the Swampland conjectures, a source of instabilities is expected to affect the uplifted dS vacuum. They could come from the fact that the pair $\hat D5$--$D3$ (dual to the $\hat D7$--$D9$ pair) is unstable \cite{Frau:1999qs, Lerda:1999um} and, although a decay into a final state does not dilute the discrete charge, it is canceled out by requiring a vanishing K-theory charge \cite{Uranga:2000xp, Loaiza-Brito:2001yer}. However, in our case $\hat{D5}$-branes come from $D5$-branes wrapping torsional 2-cycles around an $O3^-$-plane with no $D3$-branes. Hence, at least locally, there are no instabilities at such points.
Thus, the non-BPS states are stable and the only decay channel is through tunneling, leading to the decompactification limit \cite{Brown:2010mf}, probably described by a topological transition driven by torsional 3-form fluxes, as suggested in \cite{Damian:2019bkb}. A detailed study of this process is reserved for future work.\\ \section{Conclusions and Final comments} As expected, the incorporation of the $F_5$ flux contribution to the effective scalar potential $\mathcal{V}_{\text{eff}}$ seems to be fundamental to finding classical stable dS vacua in an orientifolded flux compactification of string theory. However, since in a Calabi-Yau manifold $F_5$ does not contribute to $\mathcal{V}_{\text{eff}}$, we need to consider other internal manifolds, such as those considered in \cite{Candelas:2014jma,Candelas:2014kma}.\\ As shown in \cite{Cai:2014vua}, a K\"ahler manifold admitting torsion is a suitable example in which $F_5$ contributes to $\mathcal{V}_{\text{eff}}$. Moreover, these types of manifolds allow wrapping $D5$-branes on torsional cycles, from which one can construct non-BPS states, actually classified by K-theory, with a non-zero contribution to $\mathcal{V}_{\text{eff}}$.\\ Under these circumstances, and by implementing a novel ML algorithm, we were able to find more than 200 dS critical points of $\mathcal{V}_{\text{eff}}$, out of which 170 are stable.\\ We also find that there are certain specific conditions that our configurations of branes and fluxes must fulfill in order to generate a stable dS vacuum by uplifting an AdS one. First, to obtain a stable AdS it is necessary to turn on the torsional part of $F_5$ and to have a configuration of branes and orientifolds such that the number of $O3$-planes or fixed points is larger than the number of $D3$-branes, implying that $\mathcal{N}_{flux}>0$.
Second, for these vacua to be uplifted to dS by incorporating the non-BPS states $\hat{D5}$, we ought to have that: \begin{enumerate} \item The RR and NS-NS 3-form fluxes are supported on more than a single cycle, \item $\mathcal{N}_{O3}>4\frac{\sqrt{A_{H_3}A_{F_3}}}{A_3}+2\mathcal{N}_{D3}$, \end{enumerate} where $A_{H_3}$ and $A_{F_3}$ are the contributions to $\mathcal{V}_{\text{eff}}$ (independent of moduli) from the fluxes $H_3$ and $F_3$, while $\mathcal{N}_3 A_3$ is the corresponding one from 3-dimensional sources, with $\mathcal{N}_3=\mathcal{N}_{D3}-\frac{1}{2}\mathcal{N}_{O3}$. Under these conditions, it is possible to ensure that all mass eigenvalues remain positive under the uplifting by non-BPS states. We observe that: \begin{enumerate} \item Getting a small positive value for $V_{\text{min}}$ seems to be a natural consequence of uplifting shallow AdS vacua. There are two consequences of this: the resulting uplifted potential is very flat, while the probability of a destabilization of the dS vacuum increases, since in the large-volume limit the potential goes to zero, indicating the presence of a potential barrier between the local dS vacuum and the runaway region for the K\"ahler modulus. \item We believe that this possibility of the scalar potential becoming unstable could be triggered by extra mechanisms or topological transitions driven by torsional components of the 3-form fluxes, as suggested in \cite{Damian:2019bkb}, in consequence establishing a rich scenario where Swampland conjectures can be tested.\\ \end{enumerate} \begin{center} {\bf Acknowledgments}\\ \end{center} We thank S. Ledesma-Orozco for earlier collaboration. O. L.-B. thanks Nana Cabo, Yessenia Olguin and Ivonne Zavala for very nice discussions about related topics. C.D. is supported by CIIC-UG-DAIP No. 126/2022. O. L-B is supported by CONACyT project CB-2015, No. 258982 and by CIIC/UG/DAIP No. 236/2022. \\
\section{Introduction} \label{sec:intro} The blazar class of active galactic nuclei (AGN) includes BL Lacertae objects (BL Lacs) and high polarization quasars \citep[HPQs, which are a subset of flat spectrum radio quasars, FSRQs, characterized by high fractional optical polarization degree ${\rm PD_{opt}}> 3\%$; see][]{1995PASP..107..803Urry}. For these, the total radiative energy output is dominated by the broad-band, non-thermal emission produced in relativistic jets. These jets are launched by supermassive black holes surrounded by magnetized accretion disks, located at the centers of (usually) massive elliptical galaxies; when a jet points close to our line of sight, its emission is substantially Doppler boosted \citep{Begelman84,DeYoung02,Meier12}. The radio-to-optical/X-ray segment of the blazar emission continuum is due to the synchrotron radiation of ultrarelativistic leptons (hereafter ``electrons'', for simplicity), while the high-frequency X-ray-to-$\gamma$-ray segment is most widely believed to be due to the inverse-Comptonization of various circumnuclear photon fields (produced both internally and externally to the outflow) by the jet electrons. Blazars display strong variability from radio to $\gamma$-rays on timescales ranging from decades down to hours, or even minutes. The observed flux changes are often classified broadly into three major types, namely `long-term variability' (LTV, with corresponding timescales of decades-to-months), `short-term variability' (STV; weeks-to-days), and intra-night/day variability \citep[INV/IDV; timescales less than a day; see, e.g.,][]{1995ARA&A..33..163W,Ulrich97,2014A&ARv..22...73F}. Rapid -- hour- and minute-long -- blazar flares are especially pronounced at higher energies, particularly in the $\gamma$-ray regime, with intensity changes of up to even a few magnitudes \citep{2007ApJ...664L..71A,2011ApJ...730L...8Aleksic,2016ApJ...824L..20A,Foschini11,Saito13,Rani13}.
The origin of such dramatic behavior, and in particular its relation to the similarly rapid but smaller-amplitude `microvariability' observed at lower wavelengths, (including INV in the optical range), is still being widely debated. Several competing scenarios have been proposed to explain the rapid variability of blazar sources. Some include {\it extrinsic causes}, such as gravitational lensing \citep[where the observed flux changes are attributed to a lensing of light rays by a foreground compact object; e.g.,][]{Schneider87,1991Natur.349..766G,Webb00} or, in the radio domain, interstellar scintillation \citep[see][for a review]{Melrose94}. Others involve {\it an intrinsic origin}, including some that are purely geometrical in nature, such as `the lighthouse effect', where the precession of a jet results in the differential forward beaming of the emission \citep{1992A&A...255...59C,1992A&A...259..109GK}. A set of promising intrinsic origin models, which may be favored because of polarimetric measurements revealing the accompanying changes in the fractional polarization and polarization angle, involves various plasma instabilities leading to the formation of shocks and turbulence in the outflow, which then heat and accelerate the jet particles \citep[e.g.,][and references therein]{2012MNRAS.420..604N,2012MNRAS.423.1707S,2014ApJ...780...87M, 2015ApJ...809..171S}. Other intrinsic models invoke the annihilation of magnetic field lines of opposite polarity, transferring the energy from the field to particles at the reconnection sites and during the subsequent evolution of the reconnected magnetic field \citep[e.g.,][]{2009MNRAS.395L..29G, 2015MNRAS.450..183S,2016ApJ...818L...9G}. 
At optical frequencies, the LTV of non-blazar AGN, including radio-quiet quasars (RQQs), radio-intermediate quasars (RIQs), lobe-dominated radio-loud quasars (LDQs), and low-polarization flat spectrum radio quasars (LPQs), is characterized by much smoother, yet comparable in amplitude, intensity changes to those observed in blazar sources \citep[e.g.,][]{2005MNRAS.356..607S, 2011ApJ...743L..12M, 2013ApJ...766...16E}. This finding is surprising, keeping in mind that the bulk of the optical emission of these non-blazar type active galaxies is understood to originate in accretion disks, rather than relativistic jets \citep[e.g.,][]{2006ASPC..350..183W}. On the other hand, a clear dichotomy exists on intra-night timescales, where the blazar class shows a considerably higher optical INV in both amplitude, $\psi$ (see equation \ref{eq:psi} below), and duty cycle, DC (equation \ref{eq:DC}), than non-blazar objects \citep[e.g.,][]{2003ApJ...586L..25GK, 2005MNRAS.356..607S, 2005A&A...440..855Gupta, 2009MNRAS.397.1893V, 2010MNRAS.401.2622G}. More recently, based on a systematic study using 262 intra-night light curves, each monitored for a duration of $\geq$ 4 hr, it was shown that the optical INV duty cycle is DC\,$\sim 40\%$ for the blazar class, while it is $\sim 5\%$, $\sim 11\%$, $\sim 3\%$, and $\sim 10\%$ for the RQQs, RIQs, LDQs, and LPQs, respectively, at least whenever clear INV amplitudes of $\psi \geq 3\%$ are considered \citep{2013MNRAS.435.1300AG}. Even though the physical processes giving rise to the flaring emission of blazars remain debatable, considerable progress has been made in characterizing the statistical properties of blazar variability at different wavelengths, and in different time domains. 
In particular, it has been demonstrated repeatedly that the power spectral density (PSD) of blazar light curves is, in general, of a power-law form \citep{1985ApJ...296...46S,2001ApJ...560..659K,2003A&A...397..565P,2003A&A...402..929B,2007ApJ...664L..71A,2007A&A...467..465C,2008ApJ...689...79C,2012ApJ...749..191C,2010ApJ...722..520A,2011AJ....141...49C,2011A&A...531A.123K,2013ApJ...766...16E,2013ApJ...773..177N, 2014ApJ...786..143S,2014ApJ...785...60R,2014ApJ...785...76P,2014MNRAS.445..437M,2015A&A...576A.126A, 2015ApJ...798...27I, 2015MNRAS.451.4328K}. A physical process with such a variability power spectrum, denoted hereafter as $P(\nu_k)=A \, \nu_k^{-\beta}$, where $\nu_k$ is the temporal frequency (corresponding to the timescale $1/\nu_k$), $A$ is the normalization constant, and $\beta$ is the spectral slope, is called white noise when $\beta = 0$, flicker (pink) noise when $\beta = 1$, and Brownian (red) noise when $\beta = 2$ \citep{1978ComAp...7..103P}. The PSD integrated over some variability frequency range is then a measure of the variance of the underlying signal in the time series within the corresponding range of variability timescales. Breaks in the slope or in the normalization of a PSD may appear, signaling characteristic/critical variability timescales in the system. In the case of blazars, various segments of radio, optical, X-ray, and $\gamma$-ray power spectra within the variability time domains from years to days (and in some instances, even sub-hour timescales), are characterized by spectral slopes $1 \leq \beta < 3$, meaning that the variability amplitude increases with increasing variability timescale. Rarely, however, have blazar PSDs been analyzed in a systematic way at different wavelengths across the electromagnetic spectrum and over a truly broad range of temporal frequencies. 
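As an illustration of what such a power-law PSD means in practice, the sketch below simulates an evenly sampled light curve with a prescribed slope (here $\beta=1.5$, an assumed value within the quoted range) by drawing Fourier amplitudes from the desired power law, in the spirit of the widely used Timmer \& K\"onig prescription, and then recovers $\beta$ from a log-log fit to the periodogram.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta_in = 2**14, 1.5      # illustrative length and target PSD slope

# Draw Fourier amplitudes with variance following the desired power law
# (Timmer & Koenig-type prescription), then invert the FFT.
freqs = np.fft.rfftfreq(n, d=1.0)[1:]
spec = freqs**(-beta_in/2.0) * (rng.normal(size=freqs.size)
                                + 1j*rng.normal(size=freqs.size))
flux = np.fft.irfft(np.concatenate(([0.0], spec)), n=n)

# Periodogram of the simulated light curve and a log-log slope fit:
# P(nu) ~ A * nu**(-beta) appears as a straight line of slope -beta.
power = np.abs(np.fft.rfft(flux)[1:])**2
beta_fit = -np.polyfit(np.log10(freqs), np.log10(power), 1)[0]
print(beta_fit)
```

For real monitoring data the fit is rarely this clean: uneven sampling, gaps, and measurement noise distort the periodogram, which is why the dedicated PSD estimators cited above are needed.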
It is important to note that colored noise-type power spectra are expected to flatten on longer variability timescales (to preserve the total finite variance), and to cut off at frequencies corresponding to the shortest variability timescale in a system. The detection of such cutoffs in blazar periodograms would be of primary importance for constraining the physics of blazar jets; however, such detections may be hampered by the finite duration of the available blazar monitoring data, on the one hand, and statistical fluctuations resulting from the measurement errors, on the other hand. In this work, we present our analysis and interpretation of the multi-wavelength (radio, optical, and high-energy $\gamma$-ray), and particularly long-span (decades to minutes) variability power spectrum of the BL Lac object PKS\,0735$+$178. This source is singled out from the blazar class by its persistently weak intra-night optical variability: using 17 nights of data spanning over 11 years of optical monitoring of this blazar, \citet{2009MNRAS.399.1622AG} estimated the INV DC as $\sim 0\%$ for $\psi > 3\%$. Note that while such a low duty cycle is not unusual for non-blazar AGN (see above), it is surprising for highly polarized blazars such as PKS\,0735$+$178. In this study, we present the results of our extended intra-night monitoring programme of the source, now consisting of a total of 25 nights and spanning over 18\,yr of observations (1998--2015). PKS\,0735$+$178 \citep[J2000.0 R.A.\,$\rm=07^{h}38^{m}07\fs39$, Dec.\,$\rm=+17\degr42\arcmin19\farcs99$; redshift $z = 0.45\pm0.06$;][]{2012A&A...547A...1Nilsson} is an otherwise typical example of a low-frequency-peaked BL Lac object detected in the GeV photon energy range \citep{2010ApJ...715..429Abdo}.
It is highly polarized in the optical band \citep[${\rm PD_{opt}} > 3\%$;][]{1996AJ....112.1877G, 2011ApJS..194...19W}, and exhibits a flat-spectrum radio core with a superluminal pc-scale radio jet, both characteristic of blazars in general \citep{1992ApJ...398..454Wills, 2016AJ....152...12L}. Despite its pronounced optical and radio variability on yearly timescales \citep{1988AJ.....95..374W, 2007A&A...467..465C}, the source is relatively quiet in X-rays \citep{1988ApJ...330..776M}, and the existing rather sparse X-ray monitoring data preclude a meaningful power spectral analysis. Within the high-energy $\gamma$-ray regime covered by {\it Fermi}-LAT, the blazar is detected at a high significance level only with weekly or longer temporal binning \citep{2003ApJ...597..615N,2010ApJ...722..520A}. Upper limits for the PKS\,0735$+$178 emission in the very high-energy $\gamma$-ray domain (photon energies $>100$\,GeV) have been recently provided by the VERITAS Collaboration \citep{2016AJ....151..142A}. \begin{table*}[th!] \caption{Observational summary of the INV data and data analysis} \label{tab:result} \tiny \centering \begin{tabular}{ccccccccccccccc}\\\hline Date of obs. & Tel. & Dur. & $N_p$ & $\Delta m_{t-s1}$ & $\Delta m_{t-s2}$ & $\Delta m_{s1-s2}$ & $\sigma_{s1-s2}$ & SD$_{s1-s2}$ & $\psi$ & \multicolumn{3}{c} {$F-test$:value (status)$^\dag$ } & INV$^\dag$ & Ref. 
\\ & & (hr) & & (mag) &(mag) & (mag) & (1$e$-2\,mag)& (1$e$-2\,mag)& (1$e$-2\,mag) & (t-s1) & (t-s2) & (s1-s2) & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) \\ \hline 1998 Dec 26 & ST & 7.8 & 49 & $-$0.531 & $-$0.407 & 0.124 &0.4 & 0.6 & 2.6 & 1.38(N) & 1.05(N) & 0.75(N) &N & (a) \\ 1999 Dec 30 & ST & 7.4 & 64 & $-$0.988 & $-$1.055 & $-$0.067 &0.4 & 0.5 & 2.1 & 0.49(N) & 0.68(N) & 0.62(N) &N & (a) \\ 2000 Dec 24 & ST & 6.0 & 42 & $-$1.676 & $-$1.737 & $-$0.061 &0.4 & 0.5 & 2.2 & 1.26(N) & 0.70(N) & 0.70(N) &N & (a) \\ 2001 Dec 24 & ST & 7.3 & 38 & $-$1.115 & $-$1.306 & $-$0.191 &0.3 & 0.4 & 1.8 & 1.46(N) & 0.98(N) & 0.52(N) &N & (a) \\ 2003 Dec 20 & HCT & 6.0 & 38 & $-$0.774 & 0.045 & 0.819 & 0.2 & 0.3 & 1.5 & 1.76(PV) & 2.13(PV) & 0.80(N) &PV& (b) \\ 2004 Dec 10 & ST & 6.2 & 30 & 0.286 & $-$0.227 & $-$0.513 & 0.2 & 0.3 & 2.1 & 4.48(V) & 3.54(V) & 1.17(N) &V & (b) \\ 2004 Dec 23 & ST & 5.9 & 13 & 0.196 & $-$0.310 & $-$0.506 & 0.2 & 0.3 & 1.5 & 5.59(V) & 3.35(PV) & 1.15(N) &V & (b) \\ 2005 Jan 2 & ST & 4.9 & 22 & 0.108 & $-$0.414 & $-$0.522 & 0.2 & 0.3 & 1.2 & 0.85(N) & 1.65(N) & 0.81(N) &N & (b) \\ 2005 Jan 5 & ST & 5.2 & 26 & 0.269 & 0.111 & $-$0.158 & 0.1 & 0.2 & 1.4 & 4.45(V) & 3.09(V) & 1.08(N) &V & (b) \\ 2005 Jan 9 & ST & 7.1 & 30 & 0.325 & 0.172 & $-$0.153 & 0.1 & 0.2 & 1.5 & 3.50(V) & 3.66(V) & 0.90(N) &V & (b) \\ 2005 Nov 9 & ST & 4.3 & 19 & 0.449 & 0.362 & $-$0.086 & 0.1 & 0.3 & 1.2 & 3.00(PV) & 2.32(PV) & 1.74(N) &PV& (b) \\ 2006 Nov 16 & ST & 5.0 & 21 & 0.580 & 1.550 & 0.970 & 0.2 & 0.4 & 1.4 & 1.72(N) & 0.82(N) & 2.49(N) &N & (b) \\ 2006 Nov 29 & ST & 6.5 & 28 & 0.947 & 0.430 & $-$0.517 & 0.2 & 0.3 & 1.9 & 1.31(N) & 1.27(N) & 1.00(N) &N & (b) \\ 2006 Dec 17 & ST & 6.5 & 28 & 1.013 & 0.505 & $-$0.507 & 0.2 & 0.3 & 2.2 & 1.87(N) & 1.93(PV) & 1.45(N) &N & (b) \\ 2007 Jan 11 & Ch & 3.9 & 90 & 1.053 & 1.924 & 0.870 & 0.5 & 0.6 & 4.8 & 0.59(N) & 0.58(N) & 0.64(N) &N & (c) \\ 2007 Dec 15 & ST & 7.1 & 29 & 
0.364 & 0.201 & $-$0.163 & 0.1 & 0.2 & 2.6 & 4.62(V) & 4.99(V) & 1.35(N) &V & (b) \\ 2007 Dec 16 & ST & 7.1 & 29 & 0.281 & $-$0.228 & $-$0.509 & 0.2 & 0.2 & 1.7 & 2.16(N) & 1.31(N) & 0.46(N) &N & (b) \\ 2008 Nov 22 & ST & 6.0 & 29 & 1.037 & 0.908 & $-$0.129 & 0.2 & 0.2 & 1.4 & 0.64(N) & 0.74(N) & 0.53(N) &N & (b) \\ 2009 Jan 20 & ST & 4.0 & 66 & 0.459 & 2.043 & 1.584 & 0.2 & 0.7 & 5.5 & 6.01(V) & 7.47(V) &5.26(V) &N & (d) \\ 2009 Dec 8 & ST & 6.9 & 31 & 0.455 & 0.326 & $-$0.129 & 0.4 & 0.5 & 3.2 & 1.65(N) & 1.23(N) & 0.91(N) &N & (e) \\ 2011 Jan 5 & ST & 6.8 & 32 & 1.284 & 0.953 & $-$0.331 & 0.3 & 0.4 & 3.8 & 1.13(N) & 1.16(N) & 0.51(N) &N & (e) \\ 2011 Nov 29 & ST & 6.1 & 29 & 1.113 & 0.613 & $-$0.500 & 0.2 & 0.3 & 1.9 & 1.18(N) & 0.65(N) & 0.51(N) &N & (e) \\ 2012 Dec 20 & ST & 6.9 & 115 & & & & & & & & & &N & (f) \\ 2013 Jan 7 & ST & 5.6 & 22 & 0.535 & 0.383 & $-$0.152 & 0.3 & 0.4 & 4.9 & 6.43(V) & 7.00(V) & 0.91(N) &V & (e) \\ 2015 Feb 15 & AOJU& 4.1 & 30 & 0.630 & 1.143 & 0.513 & 0.8 & 1.1 & 9.1 & 1.99(PV) & 1.86(PV) & 0.81(N) &PV& (e) \\ \hline \end{tabular} \begin{minipage}{\textwidth} Columns: (1) date of observation; (2) telescope used; (3) duration of monitoring; (4) number of data points in the DLC; (5) mean magnitude difference of the t-s1 DLC; (6) mean magnitude difference of the t-s2 DLC; (7) mean magnitude difference of the s1-s2 DLC; (8) quadratic mean of the $\sc IRAF$ errors for the s1-s2 DLC; (9) standard deviation of the s1-s2 DLC; (10) INV amplitude ($\psi$); (11) $F-$ value obtained for the t-s1 DLC (variability status of the DLC); (12) $F-$ value obtained for the t-s2 DLC (variability status of the DLC); (13) $F-$ value obtained for the s1-s2 DLC (variability status of the DLC); (14) Variability status of BL Lac; (15) Reference for the INV data : (a) \citet{2004MNRAS.348..176Sagar}; (b) \citet{2009MNRAS.399.1622AG}; (c) \citet{2008AJ....135.1384Gupta}; (d) \citet{2011MNRAS.413.2157Rani}; (e) present work; (f) \citet{2012MNRAS.424.2625B}; 
$^\dag$ V = Variable; N = Non-variable; PV = Probable Variable; \end{minipage} \end{table*} \begin{figure*}[h!] \centering \includegraphics[width=0.41\textwidth]{fig1a.pdf} \includegraphics[width=0.41\textwidth]{fig1b.pdf} \includegraphics[width=0.41\textwidth]{fig1c.pdf} \includegraphics[width=0.41\textwidth]{fig1d.pdf} \includegraphics[width=0.41\textwidth]{fig1e.pdf} \caption{Our 5 newly acquired R-band intra-night optical DLCs of the BL Lac object PKS\,0735$+$178. The date, the telescope used, and the duration of monitoring are given at the top of each night's plots. The upper two panels show the DLCs of the target relative to two steady comparison stars, while the third panel shows the star-star DLC. The bottom panels give the plots of seeing variation for the night, based on the three stars (shown by crosses, open circles, and filled circles, respectively), monitored along with the target blazar on the same CCD frame.} \label{fig:INV} \end{figure*} As revealed by very long baseline interferometry (VLBI) imaging, the radio jet of the blazar underwent several dramatic structural changes between 1981 and 2001 on the scale of 10\,pc, alternating between a `staircase jet' with a highly bent trajectory (circa 1981 and 1995), and a straight jet with a linear trajectory (1985 and 2001); the last morphological transition coincided with the optical flaring but was accompanied by only a mild increase in the radio intensity \citep{2010A&A...515A.105B}. By inspecting milli-arcsec scale resolution images from the MOJAVE database\footnote{\scriptsize{\texttt{http://www.physics.purdue.edu/astro/MOJAVE/sourcepages/0735+178.shtml}}} and the Boston University database\footnote{\scriptsize{\texttt{https://www.bu.edu/blazars/VLBA\_GLAST/0735.html}}}, we confirmed that the jet trajectory has remained linear from 2001 until 2015. Faraday rotation gradients and circular polarization have been detected in the radio core of PKS\,0735$+$178 \citep{2008MNRAS.384.1003G}.
The jet magnetic field structure revealed by the radio polarization maps is rather complex (though again, not unusual for a low-frequency-peaked BL Lac), consisting of magnetic field lines mostly perpendicular to the jet axis near the core region, but parallel to the jet further along the outflow; this change suggests either a helical configuration with changing pitch angle or strong interactions with the ambient medium leading to velocity shears and jet bends \citep{2006A&A...453..477A}. In Section 2 we describe our data and their reduction. Our analysis and results are given in Section 3, and Section 4 provides a discussion of the results and our main conclusions. \section{Data acquisition} \label{sec:obs} \subsection{Optical: intra-night} The vast majority of the intra-night observations were carried out using the 104-cm Sampurnanand telescope (ST) located at the Aryabhatta Research Institute of observational sciencES (ARIES), Naini Tal, India. The ST has Ritchey-Chr\'etien (RC) optics with a f$/$13 beam \citep{1999CSci...77..643G}. The detector was a cryogenically cooled $2048 \times 2048$ chip mounted at the Cassegrain focus. This chip has a readout noise of 5.3\,e$^{-}$/pixel and a gain of 10\,e$^{-}$$/$Analog to Digital Unit (ADU) in slow readout mode. Each pixel has a dimension of 24\,$\mu$m$^{2}$ which corresponds to 0.37\,arcsec$^{2}$ on the sky, covering a total field of $13^{\prime} \times 13^{\prime}$. Our observations were carried out in $2 \times 2$ binned mode to improve the signal-to-noise ratio. We similarly used the 201-cm Himalayan Chandra Telescope (HCT) at the Indian Astronomical Observatory (IAO), located in Hanle, India. This telescope is also of the RC design but has a f$/$9 beam at the Cassegrain focus\footnote{\texttt{http://www.iiap.res.in/$\sim$iao}}. The detector was a cryogenically cooled $2048 \times 4096$ chip, of which the central $2048 \times 2048$ pixels were used.
The pixel size is 15\,$\mu$m$^{2}$, so that the image scale of 0.29\,arcsec$/$pixel covers an area of $10^{\prime} \times 10^{\prime}$ on the sky. The readout noise of the CCD is 4.87\,e$^{-}$/pixel and the gain is 1.22\,e$^{-}$$/$ADU. The CCD was used in the unbinned mode. Lastly, we also employed the 50-cm Cassegrain telescope in the Astronomical Observatory of the Jagiellonian University (AOJU), located in Krak\'ow, Poland. This telescope is also of the RC design with f$/$6.7 beam at the Cassegrain focus. The detector was a thermoelectrically cooled $1024 \times 1024$ chip, corresponding to an image scale of 0.70\,arcsec$/$pixel, covering a total of $12^{\prime} \times 12^{\prime}$ on the sky. The CCD was used in $2 \times 2$ binned mode. The seeing mostly ranged between $\sim 1^{\prime\prime}.5$ and $\sim 3^{\prime\prime}$, as determined using three sufficiently bright stars on each CCD frame. All the observations were made using the R filter, as the CCD responses are maximized in this band. The field positioning was adjusted in order to have within each CCD frame at least two, and usually three, comparison stars. For all the telescopes, bias frames were taken intermittently, and twilight sky flats were also obtained. The pre-processing of the images (bias subtraction, flat-fielding and cosmic-ray removal) was done by applying the standard procedures in the Image Reduction and Analysis Facility ({\sc IRAF})\footnote{\texttt{http://iraf.noao.edu/}} software package. The instrumental magnitudes of the target AGN and the stars (all point-like) in the image frames were determined by the aperture photometry using {\sc APPHOT}. The magnitude of the target AGN was measured relative to a few apparently steady comparison stars present on the same CCD frame; relative star-star magnitudes were also recorded. In this way, the Differential Light Curves (DLCs) for the AGN and the comparison stars were derived.
Out of the resulting three star-star DLCs, we selected the steadiest star-star DLC (based on the lowest variance) for testing the INV of the blazar monitored on a given night. These chosen stars are hereafter named `s1' and `s2', and the corresponding target-star and star-star DLCs are denoted as `t-s1', `t-s2', `s1-s2', respectively. Basic information about the comparison stars (apparent magnitudes, optical colors, positions on the sky) is given in \citet{2009MNRAS.399.1622AG}. The comparison stars we used are typically within about one magnitude of the target AGN; note that avoiding large differences in brightness is of crucial importance for minimizing the possibility of a spurious INV detection \citep[e.g.,][]{2007MNRAS.374..357Cellone}. In this context, spurious variability on account of different second-order extinction coefficients for the AGN and their comparison stars can also be problematic if the target AGN and the comparison stars have very different optical colors. However, as shown by \citet{1992AJ....104...15Carini} and \citet{2004JApA...25....1Stalin}, for color differences of up to 1-2 mag, the differential extinction of photons traveling through varying airmasses does not significantly influence the derived INV parameters of BL Lac objects, given their typical flux measurement uncertainties ($\simeq 0.1-0.2\%$, including the observations analyzed here). For each night, the optimum aperture radius for the photometry was chosen by identifying the minimum dispersion in the steadiest star-star DLC, with trial aperture radii ranging from the median seeing (i.e., full width at half-maximum) value on that night up to four times that value. This also set the threshold for the INV detection on that night \citep[see][]{2013JApA...34..273Goyal}. Typically, the selected aperture radius was $\sim 4^{\prime\prime}$ and the effective seeing was $\sim 2^{\prime\prime}$.
Our entire intra-night data are summarized in Table~\ref{tab:result}, and the newly acquired DLCs are shown in Figure~\ref{fig:INV}. \subsection{Optical: long/short-term} The nightly averaged R-band photo-polarimetric data for the period 2005 October 22 to 2015 September 27 were obtained using (i) the 70-cm AZT-8 telescope of the Crimean Astrophysical Observatory (Nauchnij, Ukraine), (ii) the 40-cm LX-200 telescope of the St. Petersburg State University (St. Petersburg, Russia), (iii) the 1.8-m Perkins telescope of the Lowell Observatory (Flagstaff AZ, USA), (iv) the 1.54-m Kuiper and 2.3-m Bok telescopes of the Steward Observatory (Mt.\ Bigelow AZ and Kitt Peak AZ, USA), and (v) the 2.2-m telescope of the Calar Alto Observatory (Calar Alto, Spain) within the MAPCAT program\footnote{\texttt{http://www.iaa.es/$\sim$iagudo/research/MAPCAT/MAPCAT.html}} (see \citealt{2008A&A...492..389L} for the description of the program and analysis). The photometric data were supplemented with the nightly averaged R-band optical data from 1993 until 2005, obtained from \citet{2007A&A...467..465C}. These nightly averaged data have a quoted photometric uncertainty of the order of $\sim 5-10\%$, arising mainly from large calibration errors in the estimated magnitudes of the stars in the field \citep{2007A&A...467..465C}. It is a standard procedure to use only one or two stars to scale the magnitude of a blazar to the standard system. In the case of intra-night data, however, as a very high photometric accuracy is required to detect variability down to $1\%$ amplitudes, we standardized our comparison stars \citep[Table 2 of][]{2009MNRAS.399.1622AG} using all the standard stars in the field of the target, as listed in \citet{2007A&A...467..465C}; this resulted in an accuracy of $\leq 0.2-0.5\%$.
Finally, for a given R-band magnitude $M_{\rm R}$ the R-band flux (in Jy) was derived as $3064 \times 10^{-0.4\times M_{\rm R}}$, where 3064\,Jy is the zero-point flux of the photometric system \citep{1999hia..book.....G}; the errors in the R-band fluxes were derived using standard error propagation \citep{2003drea.book.....Bevington}. \subsection{High energy $\gamma$-rays: long-term} \label{sec:fermi} We have analyzed the {\it Fermi}-LAT data for the field containing PKS\,0735+178 from 2008 August through 2015 September, and produced a source light curve between 0.1 and 200\,GeV with an integration time of 15 days. We have performed the unbinned likelihood analysis using Fermi ScienceTools-v10r0p5 with {\sc p8r2\_source\_v6} source event selection and instrument response function, for the $20^\circ$ region centered at the blazar, following the Fermi tutorial\footnote{\texttt{http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/}}. The procedure starts with the selection of good data and time intervals (using the tasks {\sc `gtselect'} and {\sc `gtmktime'} with selection cuts {\sc evclass=128 evtype=3}), followed by the creation of an exposure map in the region of interest (ROI) with $30^\circ$ radius for each time bin (tasks {\sc `gtltcube'} and {\sc `gtexpmap'}, while counting photons within zenith angle $< 90^\circ$). We then computed the diffuse source response (task {\sc `gtdifrsp'}), and finally modeled the data through a maximum-likelihood method (task {\sc `gtlike'}). In this last step, we used a model that includes PKS\,0735+178 and 170 other sources inside the ROI \citep[according to the third Fermi Large Area Telescope source catalog, 3FGL;][]{2015ApJ...810...14A}. The model also takes into account the diffuse emission from our Galaxy and the extragalactic $\gamma$-ray background\footnote{{\sc gll\_iem\_v06.fits} and {\sc iso\_p8r2\_source\_v6\_v06.txt}} \citep{2016ApJS..223...26A}.
In the modelling, we followed the usual method and fixed the spectral indices and fluxes of all the point sources within the ROI, other than the target, at their 3FGL values. The $\gamma$-ray spectrum of PKS\,0735+178 was modeled with a simple power law. We considered a measurement to be a successful detection for a test statistic TS\,$\geq$\,10, which corresponds to a signal-to-noise ratio $\geq 3\sigma$ \citep{2009ApJS..183...46A}. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{fig2.pdf} \caption{The multiwavelength, photo-polarimetric, and long-term variability light curves of PKS\,0735+178. Panel (a) shows the {\it Fermi-}LAT light curve in the 0.1--200\,GeV energy range. Panels (b--d) show the R-band total flux, polarization degree (PD), and electric vector position angle ($\chi$). Panels (e--g) show the GHz-band total radio flux, PD, and $\chi$, while panel (h) shows the radio spectral index obtained at GHz frequencies.} \label{fig:LC} \end{figure*} \clearpage \subsection{Radio: long-term} The radio data were obtained from the University of Michigan Radio Astronomy Observatory (UMRAO) 26m dish at 4.8, 8.0, and 14.5\,GHz, and the 40-m Telescope at the Owens Valley Radio Observatory (OVRO) at 15\,GHz. The UMRAO fluxes at 4.8, 8.0, and 14.5\,GHz were typically sampled twice per month \citep{1985ApJ...298..296A}, from 1980 February 16 -- 2010 April 16, from 1977 March 11 -- 2008 April 5, and from 1977 August 19 -- 2011 January 24, respectively, while the OVRO light curve at 15\,GHz was sampled twice a week \citep{2011ApJS..194...29R}, during the period from 2009 March 17 to 2014 May 8. A discussion of the corresponding observing strategy and calibration procedures can be found in \citet{1985ApJ...298..296A} for the UMRAO data, and in \citet{2011ApJS..194...29R} for the OVRO data.
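As a small illustration of the magnitude-to-flux conversion used for the optical data above, the following is a minimal sketch in Python; the function name and interface are ours (not part of any reduction pipeline), and the error propagation follows $\Delta S = 0.4 \ln(10)\, S\, \Delta m$.

```python
import numpy as np

def r_mag_to_flux(m_r, sigma_m):
    """Convert R-band magnitude(s) to flux in Jy, with propagated errors.

    Uses S = 3064 * 10**(-0.4 * m_R), where 3064 Jy is the zero-point
    flux of the photometric system; the flux error follows from standard
    error propagation, dS = 0.4 * ln(10) * S * dm.
    """
    m_r = np.asarray(m_r, dtype=float)
    sigma_m = np.asarray(sigma_m, dtype=float)
    flux = 3064.0 * 10.0 ** (-0.4 * m_r)
    sigma_flux = 0.4 * np.log(10.0) * flux * sigma_m
    return flux, sigma_flux
```

For example, a 0-mag object maps to exactly 3064 Jy, and each 2.5 mag increase dims the flux by a factor of 10.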
\section{Data analysis and Results} \subsection{Optical microvariability} \label{sec:micro} In our analysis, we used the $F-$test for assigning the INV detection significance. The $F-$statistic compares the observed variance $V_{\rm obs}$ to the expected variance $V_{\rm exp}$. The null hypothesis of no variability is rejected when the ratio \begin{equation} F_\nu^\alpha =\frac{V_{\rm obs}}{V_{\rm exp}} = \frac{V_{t-s}}{\langle \eta^2 \, \sigma_{t-s}^2 \rangle} , \label{eq:ftest} \end{equation} exceeds a critical value for a chosen significance level $\alpha$, for a given number of degrees of freedom (DOF) $\nu$; here $V_{t-s}$ is the variance of the `target-star' DLC, and $\langle \sigma_{t-s}^2 \rangle$ is the mean of the squares of the (formal) rms errors of the individual data points in the `target-star' DLC. Since this method requires flux or magnitude estimates along with their error estimates, it is important to determine the photometric errors accurately. As emphasized in several independent studies, the photometric errors returned by {\sc APPHOT} are significantly underestimated (\citealt{2004JApA...25....1Stalin} and references therein). \citet{2013JApA...34..273Goyal} reported the latest attempt to determine this under-estimation factor, $\eta$, using an unprecedented data set consisting of 262 steady star-star DLCs. They found $\eta= 1.54\pm0.05$, which confirms the previous estimates by the same group based on much smaller data sets. \citet{2013JApA...34..273Goyal} also showed that the determination of $\eta$ is quite insensitive to the magnitude difference between the pair of objects used for deriving a DLC, as long as it does not exceed 1.5 mag. Thus, $\eta=1.54$ has been used in the present analysis to scale up the {\sc IRAF} photometric magnitude errors (see also \citealt{1995MNRAS.274..701G, 2010ApJ...723..737Villforth}).
Note that the standard expression for $F$ is given by $F_{\nu_1,\nu_2}^{\alpha} = \sigma_1^2/\sigma_2^2 $, where $\sigma_1$ and $\sigma_2$ are the two variances with the corresponding DOF, $\nu_1$ and $\nu_2$. In our analysis, we have simplified this definition since $\nu_1 = \nu_2 = \nu$ is also the number of DOF for the `star-star' DLC. The significance level set for a given test determines the {\it expected} number of {\it false positives}, which is an indicator of the robustness of the test. We have chosen two significance levels, $\alpha = $ 0.01 and 0.05, corresponding to $p-$values of $\ga$ 0.99 and $\ga$ 0.95, respectively. Recall that the smaller the value of $\alpha$, the less likely it is for the variability to occur by chance. Thus, in order to claim a genuine INV detection, i.e., to assign a `variable' designation (V), we stipulate that the computed statistic value is above the critical value corresponding to $p > 0.99$ (i.e., $\alpha=$ 0.01) for a given number of degrees of freedom ($\nu = N_p - 1$, where $N_p$ stands for the number of data points in a given DLC). We assign a `probable variable' designation (PV) when the computed test statistic value is found to lie between the critical values at $\alpha = $ 0.01 and 0.05; otherwise, a `non-variable' (N) designation is assigned to a DLC. All three DLCs, i.e., $t-s1, t-s2$ and $s1-s2$, are subjected to the $F-$test analysis. In a few cases, the INV status was different for the two blazar-star DLCs, which indicated a small-amplitude variation of one or the other comparison star. Since such small-amplitude variations in star-star DLCs are difficult to ascertain, we only ascribed a ``V'' status if both blazar-star DLCs gave a ``V'' status; otherwise, we quote a ``PV'' or ``N'' status if the star-star DLC itself turned out to be variable. The analysis results are summarized in Table~\ref{tab:result}.
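The classification scheme described above can be sketched as follows; this is a minimal illustration (the function name and interface are ours), with `scipy` supplying the critical values of the $F$ distribution for equal DOF in numerator and denominator.

```python
import numpy as np
from scipy.stats import f as f_dist

ETA = 1.54  # error under-estimation factor for the IRAF/APPHOT magnitude errors

def inv_status(dlc, sigma, alpha_hi=0.01, alpha_lo=0.05):
    """Classify a differential light curve as V / PV / N via the F-test.

    dlc   : target-star differential magnitudes
    sigma : formal rms errors returned by the photometry package
    Returns the F value and the designation string.
    """
    dlc = np.asarray(dlc, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    nu = dlc.size - 1                                   # degrees of freedom
    # Observed variance over mean expected variance (errors rescaled by ETA):
    f_value = np.var(dlc, ddof=1) / np.mean((ETA * sigma) ** 2)
    crit_hi = f_dist.ppf(1.0 - alpha_hi, nu, nu)        # critical F at alpha = 0.01
    crit_lo = f_dist.ppf(1.0 - alpha_lo, nu, nu)        # critical F at alpha = 0.05
    if f_value > crit_hi:
        return f_value, "V"                             # variable
    if f_value > crit_lo:
        return f_value, "PV"                            # probable variable
    return f_value, "N"                                 # non-variable
```

A DLC whose scatter greatly exceeds the (rescaled) photometric errors is flagged "V"; one whose scatter is consistent with the errors is flagged "N".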
Following \citet{1999A&AS..135..477Romero}, the peak-to-peak INV amplitude was calculated as \begin{equation} \psi= \sqrt{({D_{\rm max}}-{D_{\rm min}})^2-2\sigma^2} \, , \label{eq:psi} \end{equation} with $D_{\rm min/max}$ denoting the minimum/maximum in the differential light curve of the source, and $\sigma^2= \eta^2 \, \langle\sigma^2_{i}\rangle$, where $\eta =1.54$ and $\sigma_i$ is the nominal error associated with each data point. The INV DC was computed according to \begin{equation} DC = 100\% \,\,\, \frac{\sum_{j=1}^n N_j \, (1/\Delta t_j)}{\sum_{j=1}^n (1/\Delta t_j)} \, , \label{eq:DC} \end{equation} where $\Delta t_j = \Delta t_{j,\, {\rm obs}} \, (1+z)^{-1}$ is the duration of the monitoring session of a source on the $j^{th}$ night, corrected for the cosmological redshift $z$, and $N_j$ is set equal to 1 if INV was detected, and otherwise to 0 \citep{1999A&AS..135..477Romero,2004JApA...25....1Stalin}. This estimate is essentially the ratio of the number of nights a source is found to be variable to the total number of nights it was monitored. Note that, since for a given source the monitoring times on different nights are not necessarily equal, the evaluation of the DC has been appropriately weighted by the actual monitoring duration $\Delta t_j$. The computed INV DC for the entire data set, consisting of 25 nights spanning 18 years (1998--2015), is 22\% (37\% if the three PV cases are included). The INV DC for the stronger nightly variations, with $\psi > 3\%$, is 4\% (10\% if one PV case is included). In order to validate our analysis procedure, we have performed a sanity check by computing the number of `Type 1 errors', or false positives, for our data set. A false positive arises due to the rejection of a true null hypothesis by a test, when applied to a non-varying DLC (i.e., the inability to discern a non-variable object as non-variable).
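The amplitude estimator of equation~(\ref{eq:psi}) and the duration-weighted duty cycle of equation~(\ref{eq:DC}) can be sketched as follows; this is a minimal illustration with hypothetical function names, not part of any published pipeline.

```python
import numpy as np

ETA = 1.54  # error under-estimation factor, as in the F-test analysis

def inv_amplitude(dlc, sigma):
    """Peak-to-peak INV amplitude psi = sqrt((Dmax - Dmin)^2 - 2*sigma^2),
    with sigma^2 = ETA^2 * <sigma_i^2> (Romero et al. 1999)."""
    d = np.asarray(dlc, dtype=float)
    var_noise = ETA ** 2 * np.mean(np.asarray(sigma, dtype=float) ** 2)
    return np.sqrt((d.max() - d.min()) ** 2 - 2.0 * var_noise)

def duty_cycle(variable_flags, durations, z=0.0):
    """Duration-weighted INV duty cycle in per cent.

    variable_flags : 1 if INV was detected on a night, else 0
    durations      : observed monitoring durations (hours)
    z              : redshift correction, dt -> dt / (1 + z)
    """
    n = np.asarray(variable_flags, dtype=float)
    dt = np.asarray(durations, dtype=float) / (1.0 + z)
    return 100.0 * np.sum(n / dt) / np.sum(1.0 / dt)
```

With equal nightly durations the duty cycle reduces to the plain fraction of variable nights; unequal durations down-weight short sessions.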
Assuming {\it a priori} that the star-star DLCs are steady, the outcome of the statistical test applied to these should be consistent with the {\it expected} number of false positives for the assumed value of $\alpha$. The number of false positives depends only on the number of star-star DLCs examined and the value of $\alpha$ chosen for the test. Thus, if the number of false positives is found to be significantly different from the expected number, either the test is not robust or the measurement errors are not properly accounted for \citep[see, e.g.,][for the robustness of statistical tests in microvariability studies]{2010AJ....139.1269Diego,2013MNRAS.435.1300AG}. We note that for our data set consisting of 25 steady star-star DLCs, the means of the {\it expected} numbers of false positives are $\simeq 0.3$ and $\simeq 1.3$ for $\alpha= 0.01$ and 0.05, respectively. Since the distribution of false positives is expected to be binomial, for $\alpha = 0.01$ the number of false positives should in fact be scattered between 0 and 2, and for most of the cases around $\simeq 0.3\pm 0.5$. Similarly, with $\alpha = 0.05$, the number of false positives should lie between 0 and 5, and should largely cluster at $\simeq 1 \pm 1$. Meanwhile, the {\it observed} numbers of false positives reported by the application of the $F-$test (see column~13 of Table~\ref{tab:result}) are 1 for $\alpha = 0.01$ and 1 for $\alpha = 0.05$. The good agreement between the {\it expected} and the {\it observed} numbers of false positives provides validation for our analysis procedure. \subsection{Multi-wavelength long-term variability} Figure~\ref{fig:LC} presents the long-term, multi-wavelength, and photo-polarimetric light curves of PKS\,0735$+$178. Note that a brief discussion of the long-term optical and radio variability of the source (until 2008) was given in \citet{2009MNRAS.399.1622AG}.
The figure includes the 0.1--200\,GeV integrated $\gamma$-ray flux $S_{\gamma}$ (panel 2a), the optical R-band photometric flux $S_{\rm opt}$, polarization degree ${\rm PD_{opt}}$, and polarization angle $\chi_{\rm opt}$ (panels 2b, 2c, and 2d, respectively), as well as the radio $S_{\rm rad}$, ${\rm PD_{rad}}$, $\chi_{\rm rad}$, and the spectral index $\alpha_{\rm rad}$ (panels 2e--2h). The radio spectral index, defined here as $S_{\nu}\propto{\nu^{\alpha}}$, was calculated by a linear regression analysis of the flux values at the three GHz frequencies in log-log space (with measurements at various radio frequencies performed within 14 days considered as simultaneous). The optical polarization measurements (prior to 2005) were obtained from \citet{1987ApJS...64..459S}, \citet{1998AJ....116.2119V}, \citet{2009MNRAS.397.1893V}, and \citet{2011ApJS..194...19W}. Figure~\ref{fig:ZLC} shows the expanded long-term light curve consisting of our newly acquired data for the period 2005--2015. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{An expanded view of the $\gamma-$ray, R-band photo-polarimetric, and radio long-term variability light curves of PKS\,0735$+$178 for the period 2005--2015.} \label{fig:ZLC} \end{figure} \subsubsection{Power spectral analysis} \label{sec:PSD} The PSDs of the observed light curves of PKS\,0735+178 were generated using the standard methods outlined in \citet{2002MNRAS.332..231U} and \citet{2014MNRAS.445..437M}, which can be summarized as follows. The PSD of an evenly sampled light curve $f(t_i)$ with mean $\mu$ and a total duration of $T$, consisting of $N$ data points, is defined as the rms-normalized periodogram, \begin{equation} P(\nu_k) = \frac{2 \, T}{\mu^2 \, N^2} \, | F(\nu_k) |^2 \, , \label{eq:psdeq} \end{equation} where $| F(\nu_k) |^2$ is the squared modulus of the discrete Fourier transform (DFT) calculated after subtracting the mean from flux measurements. 
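As a concrete illustration of equation~(\ref{eq:psdeq}), the rms-normalized periodogram of an (already interpolated) evenly sampled light curve can be sketched as follows; the function name is ours, and `numpy` is assumed. Integrating the returned periodogram over positive frequencies recovers the fractional excess variance.

```python
import numpy as np

def rms_periodogram(flux, dt):
    """Rms-normalized periodogram of an evenly sampled light curve:
    P(nu_k) = 2T / (mu^2 N^2) * |F(nu_k)|^2, with the mean subtracted
    before the DFT and nu_k = k/T for k = 1, ..., N/2."""
    flux = np.asarray(flux, dtype=float)
    n = flux.size
    t_tot = n * dt                        # total duration T
    mu = flux.mean()
    dft = np.fft.rfft(flux - mu)          # DFT of the mean-subtracted series
    freqs = np.fft.rfftfreq(n, d=dt)
    power = (2.0 * t_tot / (mu ** 2 * n ** 2)) * np.abs(dft) ** 2
    return freqs[1:], power[1:]           # drop the zero-frequency term
```

For a pure sinusoid of fractional variance $\sigma^2/\mu^2$, the sum $\sum_k P(\nu_k)\,\Delta\nu$ with $\Delta\nu = 1/T$ returns exactly that fractional variance, as expected from Parseval's theorem.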
Since our data were necessarily obtained at irregular intervals, in order to achieve a regular sampling of the light curve at evenly spaced time intervals we linearly interpolated between consecutive observed data points, on timescales typically 15--20 times smaller than the original observed sampling interval. Reasons for subtracting the mean and performing data interpolation in the case of colored noise-type sources are discussed in Appendix\,\ref{sec:App}. By definition, the integration of the periodogram over positive frequencies yields the total excess variance. Meanwhile, the noise floor levels corresponding to the variability power due solely to statistical fluctuations are estimated following \citet{2015ApJ...798...27I} as \begin{equation} P_{stat} = \frac{2 \, T}{\mu^2 \, N} \, \sigma_{\rm stat}^2 \, . \label{eq:poi_psd} \end{equation} In the above, $\sigma_{\rm stat}^2= \sum_{j=1}^{j=N} \Delta f(t_j)^2 / N$ is the mean variance of the measurement uncertainties on the flux values $\Delta f\!(t_j)$ in the observed light curve at times $t_j$, with $N$ denoting the number of data points in the original light curve. In the case of Gaussian errors, the noise floor level is scaled using typical sampling intervals of 1 day for the long-term optical light curve, 3 days for the OVRO light curve, and 15 days for the UMRAO light curves \citep[see also][Appendix A]{2003MNRAS.345.1271V}. Note that in the case of the lower-frequency UMRAO data, in particular the 4.8\,GHz ones, the calculated noise floor level is relatively high when compared with the corresponding variability power (see Fig.\,\ref{fig:radio}, bottom panel), and is not obviously reflected as a high-frequency plateau in the raw periodogram; this is solely because the errors on the flux measurements were effectively over-estimated during the first quarter of the survey program. \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{fig4.pdf} \caption{The high-energy $\gamma$-ray ({\it Fermi}-LAT) PSD of PKS\,0735+178, corresponding to the data analyzed in this paper. The dashed gray line and filled upward-pointing triangles denote the raw and binned periodogram estimates, respectively, while the dashed horizontal line indicates the noise floor level due to the measurement errors. The solid black line is the least-squares fit to the binned periodogram.} \label{fig:gamma} \end{figure} PSDs generated using this method are distorted due to the effects of the discrete sampling and the finite duration of the light curve, known as {\it aliasing} and {\it red noise leakage}, respectively. The net effect of aliasing is to fold back the power from frequencies above the Nyquist frequency to lower frequencies; at the same time, the red noise leak causes extra power at higher frequencies through the side-lobes of a sampling window function \citep{1992nrca.book.....Press}. The aliasing is reduced for steeper power-law power spectra \citep{1993MNRAS.261..612P, 2002MNRAS.332..231U}, while the red noise leak can be minimized using a proper sampling window function \citep[see][for a detailed discussion]{2014MNRAS.445..437M}. In our analysis of the PSD, we used the `Hanning' window function, as it has the lowest side lobes and the fastest fall-off rate as a function of frequency when compared to other frequently employed window functions (see Appendix\,\ref{sec:App} for a detailed discussion). \begin{figure}[t!] \centering \includegraphics[width=0.83\columnwidth]{fig5a.pdf} \includegraphics[width=0.83\columnwidth]{fig5b.pdf} \includegraphics[width=0.83\columnwidth]{fig5c.pdf} \includegraphics[width=0.83\columnwidth]{fig5d.pdf} \caption{As in Fig.~\ref{fig:gamma} for the radio (OVRO and UMRAO) PSDs of PKS\,0735$+$178.} \label{fig:radio} \end{figure} \begin{table*}[t!]
\caption{The PSD analysis for the light curves of PKS\,0735+178} \label{tab:PSD} \small \centering \begin{tabular}{ccccccccc}\\\hline Light curve & Range & $N_{obs}$ & $\Delta T_{\rm obs}$ & $\Delta T_{\rm interp}$ & $T_{\rm obs}$ &$\rm \log(P_{stat})$ & $\log (\nu_k) $ range & {$\beta \pm err$} \\ & & & (day) & (day) & & ((rms/mean)$^2$ day) & (day$^{-1})$ & bin \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline &&&&&&&&\\ Fermi-LAT (LT) & 0.1-200 (GeV) & 168 & 15 & 0.5 & 6.9 (yr) & $+$0.50 & $-$3.4 to $-$1.5 &1.1$\pm$0.2 \\ OVRO (LT) & 15 (GHz) & 263 & 3 & 0.5 & 5.1 (yr) & $-$2.24 & $-$3.3 to $-$1.0 &1.8$\pm$0.1 \\ UMRAO (LT) & 14.5 (GHz) & 359 & 15 & 0.5 & 33 (yr) & $-$1.00 & $-$4.1 to $-$1.6 &2.1$\pm$0.1 \\ UMRAO (LT) & 8.0 (GHz) & 299 & 15 & 0.5 & 31 (yr) & $-$1.06 & $-$4.1 to $-$1.6 &2.2$\pm$0.1 \\ UMRAO (LT) & 4.8 (GHz) & 220 & 15 & 0.5 & 30 (yr) & $-$0.99 & $-$4.0 to $-$1.6 &2.6$\pm$0.1 \\ Optical (LT) & R-band & 677 & 1 & 0.5 & 22.6 (yr) & $-$2.27 & $-$3.8 to $-$1.1 &2.0$\pm$0.1 \\ 2004 Dec 10 (IN) & R-band & 30 & 0.008 & 0.0004 & 6.2 (hr) & $-$7.23 & $+$0.6 to $+$1.8 &2.8$\pm$0.7 \\ 2004 Dec 23 (IN) & R-band & 13 & 0.017 & 0.0004 & 5.9 (hr) & $-$6.83 & $+$0.6 to $+$1.3 &4.1$\pm$0.3 \\ 2005 Jan 5 (IN) & R-band & 26 & 0.008 & 0.0004 & 5.2 (hr) & $-$7.23 & $+$0.8 to $+$1.8 &2.2$\pm$0.3 \\ 2005 Jan 9 (IN) & R-band & 30 & 0.008 & 0.0004 & 7.1 (hr) & $-$7.21 & $+$0.5 to $+$1.7 &1.5$\pm$0.2 \\ 2007 Dec 15 (IN) & R-band & 29 & 0.008 & 0.0004 & 7.0 (hr) & $-$7.10 & $+$0.5 to $+$1.7 &2.3$\pm$0.5 \\ 2013 Jan 7 (IN) & R-band & 22 & 0.008 & 0.0004 & 5.6 (hr) & $-$6.73 & $+$0.5 to $+$1.6 &3.3$\pm$0.4 \\ \hline \end{tabular} \begin{minipage}{\textwidth} Columns: (1) Light curve/date of the observation in the case of the optical intra-night datasets (LT: long-term, IN: intra-night); (2) the observed photon energy/frequency; (3) the number of data points in the observed light curve; (4) the typical sampling interval for the observed light curve; (5) the
sampling interval for the interpolated light curve; (6) the total duration of the observed light curve (yr- year, hr- hour); (7) the noise floor level in PSD due to the measurement uncertainty; (8) the temporal frequency range covered by the binned logarithmic power spectra; (9) the power-law slope of the PSD along with the corresponding errors, for the binned logarithmic power spectra (see \S~\ref{sec:PSD}). \end{minipage} \end{table*} The periodogram obtained using equation~(\ref{eq:psdeq}), with the discrete set of frequencies $\nu_{k} = k/T$ with $k=1, ..., N/2$, is known as the `raw' periodogram. It consists of independently distributed $\chi^2$ variables with two DOF. This means that the dispersion in each estimate around the true value equals the true value itself, providing a noisy estimate of the spectral power \citep{1993MNRAS.261..612P, 2003MNRAS.345.1271V}. To circumvent this problem, we have also constructed `binned' logarithmic periodograms, following the procedure outlined in \citet{1993MNRAS.261..612P}, \citet{2005A&A...431..391V}, and \citet{2015ApJ...798...27I}. We binned our periodograms with a constant factor of 1.6 in frequency (and by a factor of 1.5, in case of periodograms derived for the long-term optical light curve) and evaluated the mean power at the representative frequency taken as the geometric mean of each bin. The derived periodogram displays constant variance around the true underlying power spectrum, as if we are observing a noise process following the $\chi^2$ distribution with two DOF. For the binned logarithmic periodogram the variance decreases by a factor of $0.310/M$, where $M$ is the number of data points in each bin. Furthermore, the true power spectrum is related to the observed power spectrum as $P(\nu_k) = (\chi^2/2) \, P_{true}({\nu_k})$. In order to derive the PSD slope we fit the power law function using a least-squares fit method in log-log space. 
Since the scatter is multiplicative in the linear space, it is additive in log-log space and identical at each frequency, so \begin{equation} \log [ P(\nu_k) ] = \log [ P_{true}(\nu_k) ] + \log \left[ \frac{\chi^2}{2} \right] . \label{logpsd} \end{equation} Hence, the expectation value of the periodogram in log-log space is not the same as the expectation value of the power spectrum: there is a bias between the two values. This bias is a constant because of the shape of the $\chi^2$ distribution in log-log space. The expectation value of $\log [\chi^2/2]$ is $-0.25068$ \citep{1993MNRAS.261..612P}. Therefore, a constant offset of $+0.25068$ is added to the binned periodogram estimates to correct for this bias. All the generated PSDs are summarized in Table~\ref{tab:PSD}, and presented in Figures~\ref{fig:gamma} ({\it Fermi}-LAT), \ref{fig:radio} (UMRAO and OVRO), and \ref{fig:optical} (optical R-band) for the actual durations of the corresponding light curves, down to the observed sampling intervals. We have not subtracted the constant noise floor level (shown by the dashed horizontal lines in the figures), as some of the data points are below this level. For the analysis, the logarithmic binned PSDs were fitted with a single power law model $P(\nu_k)\propto{\nu_k^{-\beta}}$ using linear regression with weighted error in log-log space; the results of the fitting, along with the errors calculated as the rms residuals between the model and the data, as well as the corresponding variability frequency ranges of the analyzed light curves, are summarized in Table~\ref{tab:PSD}. Figure\,\ref{fig:optical} presents a composite PSD for the optical data set analyzed in this paper, including the long-term monitoring data (1993--2015; 23 years down to nightly sampling), as well as the intra-night data with confirmed INV detections (spanning 4--8\,h down to $\sim 10-15$\,min sampling; see Table~\ref{sec:obs}).
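For concreteness, the binning-and-fitting recipe described above (Hanning windowing, logarithmic frequency bins with geometric-mean centres, the $+0.25068$ bias correction, and a power-law fit in log-log space) can be sketched in a few lines of NumPy. The function names and the synthetic test series are illustrative only and not part of the actual analysis pipeline; for simplicity the sketch uses an unweighted fit, whereas the analysis above uses weighted regression.

```python
import numpy as np

def binned_log_periodogram(flux, dt, bin_factor=1.6, bias=0.25068):
    """Hanning-windowed periodogram, binned logarithmically in frequency.

    Bin centres are geometric means; `bias` corrects for the expectation
    value of log(chi^2/2) in log-log space (Papadakis & Lawrence 1993).
    """
    n = len(flux)
    x = (flux - flux.mean()) * np.hanning(n)   # window to curb red-noise leakage
    power = np.abs(np.fft.rfft(x)[1:]) ** 2    # drop the zero-frequency term
    freq = np.fft.rfftfreq(n, d=dt)[1:]
    log_f, log_p = [], []
    lo = freq[0]
    while lo < freq[-1]:
        hi = lo * bin_factor                   # constant multiplicative bin width
        m = (freq >= lo) & (freq < hi)
        if m.sum() > 1:
            log_f.append(np.mean(np.log10(freq[m])))   # geometric-mean frequency
            log_p.append(np.log10(np.mean(power[m])) + bias)
        lo = hi
    return np.array(log_f), np.array(log_p)

def psd_slope(log_f, log_p):
    """Least-squares slope beta of a power law P(nu) ~ nu^-beta in log-log space."""
    return -np.polyfit(log_f, log_p, 1)[0]
```

For a white-noise series the recovered slope is close to $\beta \simeq 0$, while a random-walk (red-noise) series yields $\beta \simeq 2$; the additive bias constant does not affect the fitted slope.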
The fitted power-law slope is $2.0\pm0.1$ for the long-term segment of the optical PSD, and ranges from $\sim 1.5$ up to $\sim 4.0$ for the individual intra-night data sets. The composite optical PSD (long-term + intra-night) covers an unprecedented frequency range of nearly 6 dex for a blazar source. This range, from $\sim 10^{-9}$ to $10^{-3}$\,Hz, is accessible primarily due to the high photometric accuracy achieved in our intra-night monitoring program. In all the cases, however, the normalization of intra-night PSDs turns out to be consistent with a simple extrapolation of the red-noise ($\beta \sim 2$) optical PSD from lower temporal frequencies. Figure\,\ref{fig:psd_mf} presents the composite multi-wavelength PSDs of PKS\,0735+178 corresponding to the variability timescales $>10$\,days. This explicitly shows the similarities between the radio and optical bands, and the clear difference of the $\gamma$-ray band from the other bands for which we have enough data to compute sensible PSDs. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{fig6.pdf} \caption{Composite (long-term + intra-night) optical R-band PSD of PKS\,0735+178. Filled symbols denote the binned logarithmic periodogram estimates for each data set considered (labeled `long-term' for the long-term optical monitoring data and by the Julian date for the intra-night data). The gray dot-dashed line shows the raw periodogram for the long-term optical data, while the same color (as that of the filled symbols) has been chosen to display the `raw' periodograms for the intra-night optical data.
The black dashed horizontal lines correspond to the noise floor level due to measurement uncertainty in the long-term optical data, while the blue dashed horizontal lines show the typical noise floor level due to measurement uncertainty in the intra-night data sets (see Table 2), as achieved in our observations.} \label{fig:optical} \end{figure*} \section{Discussion and Conclusions} The main findings from our analysis of the multi-wavelength, multi-epoch, photo-polarimetric data for PKS\,0735+178 can be summarized as follows: \begin{enumerate} \item{ The INV duty cycle of PKS\,0735$+$178 in the extended dataset consisting of 25 nights (22 from our monitoring program and three from the literature; Figure~\ref{fig:INV}), over the time span of 18 years, is DC\,$\sim 22\%$ for all the variability amplitudes $\psi$, but DC\,$\sim 4\%$ when only the nights showing $\psi > 3\%$ are considered. This value of the INV DC is $\sim 5$--$10$ times smaller than that typically observed in blazar sources, indicating that PKS\,0735$+$178 remains in a state of optical quiescence on intra-night timescales, in conformity with our previous study \citep{2009MNRAS.399.1622AG}. } \item{ The newly obtained long-term {\it Fermi}-LAT and optical data reveal weakly-correlated, factor-of-a-few flux changes in the source on month-like timescales (Figure~\ref{fig:ZLC}); meanwhile, the archival radio and optical data show uncorrelated, larger-amplitude (factor of several) variability over the timescale of years (Figure~\ref{fig:LC}). } \item{ The gathered photo-polarimetric data reveal large variations in the optical PA, traced during the period 2005--2015 on timescales of months to years; those changes seem erratic, with no clear repetitive pattern and no correlation with either the total optical flux, the optical PD, or the overall level of the optical polarization degree ($\lessgtr 3\%$; Figure~\ref{fig:ZLC}). Interestingly, the optical PD of this blazar has frequently been found to be below 3\%.
} \item{ The PSDs of PKS\,0735$+$178 at radio (GHz) and optical frequencies, on the timescales from years to weeks, resemble each other closely: both can be well represented by a single power-law function with the slope $\beta \simeq 2$, meaning a ``pure red noise'' type of the source variability at the lower frequencies of the electromagnetic spectrum. On the other hand, the high-energy $\gamma$-ray PSD, within the corresponding temporal frequency range, is best fitted with a much flatter slope of $\beta\simeq 1$, corresponding to a ``pink noise'' behavior (Figure\,\ref{fig:psd_mf}); no low-frequency flattenings in the PSDs have been detected. } \item{ The optical PSDs on intra-night timescales (for the nights during which INV has been detected) are characterized by a range of slopes mostly between 1.5 and 3.3, with the exception of one instance where it was $\sim$4 (however, this was from a shorter observation with relatively long integration times, so that it was derived using only 3 points in the binned periodogram). This implies a non-stationarity of the variability process in the source, at least on hourly timescales; in all the cases, however, the normalization of the intra-night PSDs is consistent with a simple extrapolation from the red-noise ($\beta \sim 2$) optical PSD observed on longer timescales (Figure~\ref{fig:optical}, Table~\ref{tab:PSD}). No high-frequency cut-offs have been detected in the intra-night power spectra down to the noise floor levels. } \end{enumerate} \subsection{Polarization and microvariability} The very low INV DC of PKS\,0735+178 reported previously in the literature \citep{2009MNRAS.399.1622AG} was puzzling, in view of the persistently high optical polarization degree claimed for the source \citep{1996AJ....112.1877G, 2011ApJS..194...19W}.
Our extended monitoring has confirmed the low INV DC but, at the same time, revealed that the ${\rm PD_{opt}}$ of the blazar frequently drops below $3\%$ -- for about one-third of the monitoring time, based on the available optical polarimetry. Traditionally, the degree of optical polarization has been used to differentiate between blazars and LPQs \citep{1980ARA&A..18..321A, 1984ApJ...279..465M}. In blazar sources, states with very high $\rm PD_{opt}$ reaching even $\sim 40-50\%$ are not uncommon \citep[e.g.,][]{2002PASJ...54L..55F, 2013ApJ...768...40L}, indicating highly ordered magnetic fields within the jet regions where the observed optical synchrotron emission is being produced; in the case of LPQs, on the other hand, the optical polarization is presumed to be reduced predominantly through the more significant contribution of (unpolarized) thermal emission from an accretion disk to the radiative output of a source at optical wavelengths \citep[e.g.,][]{1997A&A...325..109S}. Yet, PKS\,0735+178 is a typical example of a BL Lac object lacking any strong emission lines, which would be expected to be visible in the presence of pronounced disk continuum emission \citep{2012A&A...547A...1Nilsson}. Although some blazars are known to alternate between the ``blazar-like'' and the ``LPQ-like'' optical polarization levels \citep[e.g.,][]{1988A&A...205...86F}, the frequency of such transitions is poorly known, presumably due to the lack of regular polarimetric observations. Only a few of the brightest sources have been subjected to long-term optical photo-polarimetric monitoring. For example, the BL Lac object OJ\,287 during the period 2005--2009, displayed $\rm PD_{opt} < 3\%$ for only $\lesssim 0.5\%$ of the total observed time span \citep{2010MNRAS.402.2087V}. Hence, one may conclude that PKS\,0735$+$178 is a peculiar blazar in showing \emph{both} a relatively low microvariability duty cycle, \emph{and} frequent states of low optical polarization.
Based on a comparative study of quasars of the HPQ and LPQ types, \citet{2012A&A...544A..37AG} concluded that the pronounced and frequent INV is correlated with the overall high degree of the optical polarization, rather than with the degree of relativistic beaming, which seems to play a secondary role. Specifically, blazars and quasars with similar degrees of relativistic beaming \citep[i.e., with a similar radio core prominence; e.g.,][]{1982MNRAS.200.1067Orr}, exhibit considerably and systematically different INV DCs. This is in agreement with our new results summarized above for PKS\,0735$+$178, suggesting a basic link between the magnetic field ordering in the jet, as reflected in the optical polarization degree, and the production of rapid flux changes. We finally note in this context that the VLBI radio jets of HPQs show $\chi_{\rm rad}$ of the core/inner jet region parallel to the jet direction, and typically well aligned with $\chi_{\rm opt}$, while the VLBI radio jets of LPQs are characterized by $\chi_{\rm rad}$ misaligned with respect to the jet axis, and uncorrelated with $\chi_{\rm opt}$ \citep{2000ApJ...541...66L}. For PKS\,0735$+$178, we observe erratic changes of $\chi_{\rm opt}$ between 0 and 180\,deg, uncorrelated with $\chi_{\rm rad}$ and with no special relation to the VLBI jet direction in the source \citep[the VLBI jet position angle is $\sim 70$\,deg;][]{2010A&A...515A.105B}. Such a behavior is hard to reconcile with a ``grand design'' helical magnetic field structure inferred for the source by \citet{2006MNRAS.369.1596G} and \citet{2006A&A...453..477A}. \subsection{Power spectra} The LT radio PSDs of blazar sources analyzed in the literature within the temporal frequency range $\sim 10^{-9} - 10^{-7}$\,Hz, are best represented by single power-laws with slopes $\beta \sim 1.5-2.5$ \citep{2014ApJ...785...76P, 2014MNRAS.445..437M}.
Specifically, the PSD of the BL Lac object PKS\,2155$-$304, while consistent with a power-law function $\beta \sim 1.8$, shows a flattening at frequencies below $\sim 10^{-3}$\,d$^{-1}$, suggesting a transition from a red noise to a white noise type of variability on the timescale of a few years \citep{2011A&A...531A.123K}. For PKS\,0735$+$178, \citet{2007A&A...467..465C} analyzed the long-term optical monitoring data (1970--2004) using the structure function method and found the associated PSD slope $\beta$ to be between 1.5 and 2.0 on the timescales ranging from 33 years down to weeks; at the same time, they found a variety of PSD slopes, $\beta \sim 1.5-2.3$, for densely-sampled light curves from separate observing seasons (covering the timescales of a few months to a few days). Also, the optical PSDs of blazars detected with the Kepler telescope are characterized by $\beta \sim1.5-2.0$ within the temporal frequency range $\sim 10^{-7} - 10^{-4}$ Hz \citep{2014ApJ...785...60R, 2013ApJ...766...16E}. The X-ray PSDs of BL Lacs, on the other hand, are typically consistent with a broken power-law model with the slopes $\beta \sim 2-3$ above the break frequency $\sim 1$\,d$^{-1}$, and $\beta \sim 1-2$ below the break \citep{1985ApJ...296...46S, 2001ApJ...560..659K,2002ApJ...572..762Z,2003A&A...402..929B, 2013ApJ...770...60S,2015ApJ...798...27I}. Unfortunately, PKS\,0735$+$178 is quite weak in X-rays, so very few observations of it have been made and it is not possible to compute a sensible X-ray PSD for it. The {\it Fermi}-LAT PSD slopes for the bright BL Lacs and FSRQs, have been estimated by \citet{2010ApJ...722..520A} as $\beta \sim 1.7 \pm 0.3$ and $\sim 1.4 \pm 0.3$, respectively, for temporal frequencies from $\sim 10^{-8}$ to $10^{-6}$\,Hz \citep[see also in this context][]{2013ApJ...773..177N,2010ApJ...721.1383A}. 
\citet{2014ApJ...786..143S}, who modeled in detail the 4\,yr-long {\it Fermi}-LAT light curves of the brightest blazar sources down to week-long sampling intervals, claimed the corresponding PSD slopes to be typically $\beta \simeq 1$, in very good agreement with our result for PKS\,0735$+$178. Rather limited work has been carried out to compare the PSD properties at different wavelengths of the electromagnetic spectrum for a given blazar source. Most notably, \citet{2008ApJ...689...79C}, by comparing the LT PSD of the luminous FSRQ 3C\,279 at X-ray (3--20\,keV), optical (R-band), and radio (GHz) frequencies, covering the total time span of $\sim 11$ years (temporal frequencies from $10^{-8.5}$\,Hz up to $10^{-5}$\,Hz), demonstrated that each PSD can be well fitted by a single power-law with $\beta \sim 2.3$ (X-rays), $\sim 1.7$ (optical), and $\sim 2.3$ (radio). A more extensive study by \citet{2012ApJ...749..191C}, consisting of six blazars observed with {\it Fermi}-LAT and optical/near infra-red telescopes, albeit only for a total duration $\sim 1$ year (temporal frequency range $\sim 10^{-7.5} - 10^{-5.5}$\,Hz), revealed that, on average, the blazar PSDs at different frequencies are roughly consistent with $\beta \sim 1.6$. Only a few blazar studies have addressed the issue of the PSD characteristics on intra-night timescales. \citet{2003A&A...397..565P} estimated the optical PSD slopes for BL Lacertae to be $\beta \simeq 1.87 \pm 0.16$ on five observing nights within the frequency range $\sim 10^{-4} - 10^{-3}$\,Hz. In the case of the highly variable BL Lac object 0716+714, \citet{2005IAPPP.101....1A} noted rather flat optical PSD slopes with $\beta \sim 0.9-1.3$ on 10 monitoring nights for the frequency range $\sim 10^{-5} - 10^{-3}$\,Hz; meanwhile, \citet{2011AJ....141...49C} estimated $\beta \sim 2-3$ on five nights and within the same temporal frequency range \citep[see also][]{2012Ap&SS.342..147M}.
The wide range of PSD slopes cited above on intra-night timescales for 0716$+$714 implies either some significant inconsistency between the different analysis methods applied, or the non-stationarity of blazar variability on hourly timescales. The latter case echoes our results for PKS\,0735$+$178. Finally, in the very high energy $\gamma$-ray domain, \citet{2007ApJ...664L..71A} reported the PSD slope of $\beta \sim 2$ for $\nu_k \sim 10^{-4} - 10^{-2}$\,Hz during the famous TeV outburst of PKS\,2155$-$304. The PSD slopes derived from our detailed analysis of the LT high energy $\gamma$-ray (0.1--200\,GeV), optical (R-band), and radio (GHz frequencies) data on PKS\,0735$+$178, on the timescales ranging from years to weeks/days, suggest that the statistical character of the $\gamma$-ray flux changes is different from that of the radio and optical flux changes: there is increasingly more variability power in $\gamma$-ray fluctuations on comparable temporal frequencies when going to shorter and shorter variability timescales (see Fig.\ \ref{fig:psd_mf}). Our finding is in agreement with the {\it Fermi}-LAT blazar data analysis presented by \citet{2014ApJ...786..143S}. This result is somewhat surprising since, at least within the framework of the standard one-zone leptonic models for the broad-band blazar emission, which is supported to some extent by the multi-wavelength correlations sometimes observed \citep[e.g.,][]{2014MNRAS.439..690H, 2012A&A...537A..32A}, the power spectrum of the synchrotron emission component (radio and optical photon energies) should be the same as, or eventually flatter than, the power spectrum of the inverse-Compton component \citep{2014ApJ...791...21F}.
As discussed by \citet{2009ApJ...698..895K}, a broken power-law form of the PSD with flat low-frequency segment $P(\nu_k) \propto \mathrm{const}$ and the high-frequency slope $\beta = 2$, lacking any peaks indicative of a (quasi-)periodicity, can be understood in terms of a source variability being driven by an underlying stochastic process. In particular, \citet{2009ApJ...698..895K} proposed that such variability can be modelled as a first-order autoregressive process (Gaussian Ornstein-Uhlenbeck process; OU for short), in which the source emissivity responds to some input noise (Gaussian white noise, by assumption), with a given relaxation timescale $\tau_1$. For such a situation, the source variability at temporal frequencies above the break $\nu_{k_1} \equiv (2 \pi \tau_1)^{-1}$ is of the red noise type, and below the break, of the white noise type. \citet{2011ApJ...730...52K} discussed a more complex case of a linear superposition of OU processes with two very different relaxation timescales, $\tau_1 > \tau_2$, resulting in the intermediate pink noise ($\beta = 1$) segment in the source PSD, in between the white noise ($\beta=0$) below $\nu_{k_1}$ and the red noise ($\beta = 2$) above $\nu_{k_2}$. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{fig7.pdf} \caption{Composite multiwavelength PSD of PKS\,0735+178.
Symbols denote the binned logarithmic periodogram estimates for each data set considered.} \label{fig:psd_mf} \end{figure} The shallower slopes of the high-energy $\gamma$-ray PSD (as compared to the optical and radio PSDs) could therefore be understood by hypothesizing, following \citet{2011ApJ...730...52K}, that the $\gamma$-ray variability in PKS\,0735$+$178 is shaped by a linear superposition of two types of stochastic processes: (i) the same process driving synchrotron (optical and radio) variability, with the relaxation time $\tau_1$ larger than decades (since we do not recover the transition from the red noise to the white noise in the optical PSD up to timescales of $\sim 20$ years); and (ii) the additional process relevant only in the inverse-Compton ($\gamma$-ray) domain, characterized by a relaxation timescale $\tau_2$ shorter than, or at most of the order of, days (the shortest variability timescales covered by our analysis of the {\it Fermi}-LAT PSD). Note that in this scenario there need not be some instability operating within the jet on the timescale $\tau_1$ which is driving the synchrotron variability -- the ``drivers'' can instead be random (stochastic) fluctuations in local jet conditions. Such fluctuations dissipate their energy and accelerate plasma particles, thus creating fluctuations in the distribution of ultra-relativistic jet electrons. The acceleration process itself and the radiative response of the accelerated electrons are, however, delayed with respect to the input perturbations by $\tau_1$, so that in the time domain below $\tau_1$ the flux changes are smoothed out and damped (forming the red noise segment of the PSD). The jet adjusts, forgetting about the input perturbations, only on timescales longer than $\tau_1$, for which the flux changes become uncorrelated (forming the white noise segment of the PSD). At this point we can only speculate about the nature of the input perturbations inferred above.
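This two-relaxation-timescale picture is straightforward to illustrate numerically. The short sketch below is illustrative only: the time step, relaxation times, and amplitudes are arbitrary and not fitted to any of the data in this paper. It generates two OU processes by Euler-Maruyama integration and superposes them:

```python
import numpy as np

def ou_process(n, dt, tau, sigma, rng):
    """Euler-Maruyama integration of the OU SDE  dX = -(X/tau) dt + sigma dW."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (x[i - 1] - (x[i - 1] / tau) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

rng = np.random.default_rng(1)
n, dt = 20000, 1.0
slow = ou_process(n, dt, tau=2000.0, sigma=1.0, rng=rng)  # long relaxation time tau_1
fast = ou_process(n, dt, tau=20.0, sigma=1.0, rng=rng)    # short relaxation time tau_2
mix = slow + fast  # superposition: white -> pink -> red segments in the PSD
```

The periodogram of `mix` qualitatively reproduces the behaviour described above: it flattens below $\nu_{k_1}$, steepens towards $\beta = 2$ above $\nu_{k_2}$, and shows an intermediate slope in between the two break frequencies.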
Such input perturbations could be identified with, for example, local fluctuations in the plasma bulk velocity, leading to the formation of a complex system of internal shocks and plasma turbulence, possibly passing through and interacting with a stationary reconfinement shock further down the jet \citep{2014MNRAS.443..299M,2014ApJ...780...87M,2015JApA...36..255C,2016ApJ...820...12P}. Alternatively, input disturbances could be related to a turbulent magnetic field inherited from a strongly magnetized accretion disk and carried out by the jet (see in this context \citealt{2003ApJ...592.1042I,2012MNRAS.423.3083M}; also \citealt{2013ApJ...772...83L}). In both cases, a relatively long relaxation timescale $\tau_1$ would correspond to some global (magneto-)hydrodynamical timescale characterizing an extended blazar emission region. Meanwhile, in the $\gamma$-ray regime, the additional process should be related to inhomogeneities in the local (radiating fluid rest frame) energy densities of soft photons, inverse-Comptonized to higher energies by the jet electrons in the neighboring cells of the outflow. A natural ``relaxation'' timescale in this case should simply be the light crossing timescale for the jet region where the bulk of the observed emission is being produced. For $\tau_2 \lesssim 1$\,day, one then has the corresponding jet radii $R_2 \sim c \tau_2 \, \langle \delta_{\rm j} \rangle \lesssim 0.03$\,pc, and the distances from the core $r_2 \sim R_2 \, \langle \Gamma_{\rm j} \rangle \leq 1$\,pc, where, for order-of-magnitude estimates only, we assumed a conical jet with a half-opening angle $\theta_j \simeq 1/\langle \Gamma_{\rm j} \rangle$, and the volume-averaged jet bulk Lorentz and Doppler factors, $\langle \Gamma_{\rm j} \rangle \simeq \langle \delta_{\rm j} \rangle \simeq 30$. In the interpretation mentioned above, which we will discuss in detail in future work, we must revisit the overly simplistic assumption of a single homogeneous emission zone in blazar jets.
Instead, we propose that \emph{all} the observed blazar emission, variable on timescales of years, months, days, and hours --- i.e., both the long-term large-amplitude variability and the microvariability --- is generated by an underlying single {\it stochastic} process (radio and optical frequencies), or a linear superposition of such processes ($\gamma$-ray regime), within a highly non-uniform portion of the outflow extending from the jet base up to the $\lesssim$\,pc-scale distances. \begin{acknowledgements} The authors wish to thank the referee for making several constructive comments on the manuscript. AG thanks Bindu Rani, Alok C.\ Gupta, Rumen Bachev, Margo Aller, Hugh Aller, Paul Smith and Svetlana Jorstad for kindly providing data in electronic form. AG, {\L}S and MO acknowledge support from the Polish National Science Centre (NCN) through the grant 2012/04/A/ST9/00083. AG also acknowledges partial support from 2013/09/B/ST9/00026 and MS acknowledges the support of 2012/07/B/ST9/04404. VL acknowledges the support of Russian RFBR grant 15-02-00949 and St.\ Petersburg University research grant 6.38.335.2015. IA acknowledges support by a Ram\'on y Cajal grant of the Ministerio de Econom\'ia y Competitividad (MINECO) of Spain. Acquisition and reduction of the MAPCAT data was supported in part by MINECO through grants AYA2010-14844, AYA2013-40825-P, and AYA2016-80889-P, and by the Regional Government of Andaluc\'ia through grant P09-FQM-4784. The MAPCAT observations were carried out at the German-Spanish Calar Alto Observatory, which is jointly operated by the Max-Planck-Institut f\"ur Astronomie and the Instituto de Astrof\'isica de Andaluc\'ia-CSIC. PJW is grateful for hospitality at KIPAC, Stanford University, during a sabbatical visit.
This research has made use of data from the University of Michigan Radio Astronomy Observatory which has been supported by the University of Michigan and by a series of grants from the National Science Foundation, most recently AST-0607523, and NASA Fermi grants NNX09AU16G, NNX10AP16G, and NNX11AO13G. The OVRO 40m Telescope Fermi Blazar Monitoring Program is supported by NASA under awards NNX08AW31G and NNX11A043G, and by the NSF under awards AST-0808050 and AST-1109911. Data from the Steward Observatory spectropolarimetric monitoring project were used. This program is supported by Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, NNX12AO93G, and NNX15AU81G. The paper uses optical photometric and polarimetric data from the BU blazar monitoring programme. The research at BU was supported in part by NASA grants NNX14AQ58G and NNX15AR34G. The Fermi-LAT Collaboration acknowledges support from a number of agencies and institutes for both development and the operation of the LAT as well as scientific data analysis. These include NASA and DOE in the United States, CEA/Irfu and IN2P3/CNRS in France, ASI and INFN in Italy, MEXT, KEK, and JAXA in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council, and the National Space Board in Sweden. Additional support from INAF in Italy and CNES in France for science analysis during the operations phase is also gratefully acknowledged. \end{acknowledgements}
\section{Introduction} Grasping is one of the most important abilities of modern intelligent robots, especially industrial robots, and brings great benefits to society\cite{sanchez2018robotic}. As the most common basic action performed by robots at work, autonomous robotic grasping has broad application prospects and has therefore been studied for a long time. Recently, robot grasping has made rapid progress thanks to the fast development of deep learning. Robot grasping involves many tasks, including object localization, pose estimation, grasp detection, and motion planning. Among these, grasp detection is a key task in the computer vision and robotics disciplines and has been the subject of considerable research. \par However, numerous challenges remain. On the one hand, the algorithms demand substantial computing power. With the widespread use of deep learning in grasp detection, models are deployed directly at the edge (on the robotic arms), where the available hardware often cannot keep up, leading to delays and errors in data processing and grasp configuration. At present, most grasp detection is computed directly at the edge using only local computing power, which makes image detection inefficient and cannot meet the requirements of automatic grasping. On the other hand, security issues in the grasp detection pipeline are often ignored, leading to the leakage of critical information. In recent years, some studies have tried to use cloud computing to compensate for insufficient local computing power: the image data are uploaded directly to the cloud (or fog), and the cloud's powerful computing resources greatly improve grasping efficiency.
However, the direct transmission of data may lead to privacy leakage, and transmitting real-time RGB images is often a major challenge for network bandwidth. \begin{figure}[htbp] \centerline{\includegraphics[height=5cm]{concept_final-eps-converted-to.pdf}} \caption{How the robot arm offloads the local grasp detection task to the cloud. Our model realizes secure and high-fidelity transmission through an encoder-decoder structure. The image is collected locally and transmitted to the cloud after being compressed; the reconstructed image is then obtained by the decoder in the cloud. Cloud computing capabilities assist in grasp detection, and the results are returned to the robot arm.} \label{fig1} \end{figure} \par In this work, we propose a robotic arm grasp detection model based on edge-cloud collaboration. Figure~\ref{fig1} shows the execution flow of our model. We use an encoder to compress the images captured by the camera locally and upload them to the cloud. The uploaded encoded information does not occupy local computing resources, and since it consumes less bandwidth and requires little network configuration, it is well suited to deployment in real scenarios. In the cloud, our model reconstructs the image with a corresponding decoder, after which it performs two-stage multi-object grasp detection and returns the obtained grasp configuration to the local side. \par The encoding and decoding network of our model is implemented as a GAN (Generative Adversarial Network), which consists of a generator and a discriminator. The generator continuously learns the real image distribution and generates ever more realistic images to fool the discriminator, while the discriminator must judge whether the received images are real or generated.
Through this constant confrontation, the generator and the discriminator form a min-max game in which both sides continuously optimize themselves during training until they reach an equilibrium. Compared with other methods, a GAN can compress full-resolution images and images at extreme bit rates, which gives it wide applicability; the reconstructed images also have sharper textures and better perceptual quality. In our model, the decoder serves as the generator and is trained together with the encoder. The model is highly customizable: the compression ratio can be set by adjusting the feature map size and the number of channels before and after compression. At run time, the encoder is kept locally, and RGB images are encoded into feature maps for compression and upload; in the cloud, the images are reconstructed by the decoder. \par The main contribution of this paper is a safe and efficient multi-object grasp detection scheme for robotic arms. The scheme has three advantages: \par (1) High fidelity: We have achieved good results on the DIV2K, flickr30k, and Cornell datasets. A high compression ratio can be achieved, the structural loss of the reconstructed image after transmission is less than 7$\%$, and there is almost no difference in the grasp detection results before and after compression. \par (2) Strong security: We transmit the compressed tensor to the server instead of the original image, which avoids the leakage of production information or private data. Compared with traditional image compression algorithms such as JPEG and JPEG2000, the uploaded data is difficult to decrypt and is highly reliable: in principle, without the corresponding decoder parameters there is no way to reconstruct the picture even if the transmitted information is intercepted.
\par (3) High execution efficiency: First, local computation is offloaded to the cloud, so the limited local computing power is complemented by the computing power provided by the cloud. Second, the compressed information occupies less bandwidth and is transmitted faster. Third, the lightweight neural network fits the actual application scenario. \section{Related Work} Methods for automatic grasping with robotic arms have been improving over the course of long-term research. The field has gradually expanded from traditional perception-based grasping, which reconstructs 3D models of objects and analyzes their geometric features and forces, to deep learning models for image object detection and pose estimation\cite{bicchi2000robotic}. \par The work of \cite{zhihong2017vision} uses a CNN (Fast R-CNN with VGG16) to complete pose estimation after image detection, and demonstrates through experiments that the approach remains practical when objects are occluded. Another work proposes a multimodal method for image detection that uses ResNet for RGB input, which performs better than VGG16\cite{kumra2017robotic}. \par Others use deep learning networks to calibrate and control the behaviour of robotic arms. \par Leoni et al. use an RNN to learn and train the robot's grasping behaviour from sensor data, ensuring the system can achieve its goals \cite{Leoni1998ImplementingRG}. \par Several works use RL to optimize and train a robot's gripping ability. After extensive training, these methods achieve good experimental results in limited scenes. However, in more complex and practical scenarios, the scalability of RL is still unknown \cite{quillen2018deep}. \par It is worth noting that the work of Chu et al.\cite{2018Real} on multi-object grasp detection has achieved good results in recent years. Our work is based on the model they proposed.
\par Due to the computing power demands of deep learning, cloud-edge-fog computing is increasingly applied in robot-related fields. For example, in the work of Sarker et al.\cite{sarker2019offloading}, offloading work to the cloud reduces the energy consumption and hardware requirements of the robot, greatly relieving the pressure on the hardware of the robot and the robotic arm. \par Kumar et al.\cite{tanwani2020rilaas} build a cloud computing framework through which any robot can invoke the virtually unlimited computing power of the cloud. Deng et al.\cite{deng2016optimal} propose a set of invocation algorithms for fog computing that allocate resources more reasonably and efficiently in the compute-limited environments that are closer to real deployments. \par Cloud-edge-fog processing often relies on a stable connection and relatively high bandwidth, so in practical application scenarios image compression is an essential part. \par Some traditional image compression algorithms achieve reasonable results in conventional scenes. Dhawan's survey analyzes the advantages and disadvantages of methods such as JPEG, but these methods do not present a good direction for further improvement \cite{dhawan2011review}. \par Compared with traditional algorithms, image compression using deep learning has yielded many results. \par Ball\'{e} et al. use a CNN as a decoder to deal with image compression and obtain good theoretical results. This method is processed by a convolutional neural network, which reduces both the amount of computation and the amount of compressed image data \cite{balle2018variational}. In practical applications, however, end-to-end joint optimization often struggles to achieve high-ratio compression and high-quality reconstruction of the image at the same time.
\par In addition, the limited receptive field of the convolutional kernel means training often fails to meet expectations, because achieving full-resolution compression tends to increase the difficulty of training the network structure. \par Toderici et al. use an LSTM model and a CNN+RNN model for image compression\cite{Toderici2016VariableRI} \cite{Toderici2017FullRI}. The model built on the LSTM framework is more robust across different pictures. However, experiments show that training the model is quite complex; moreover, image correlations cannot be captured well, and the approach is limited to small images. \par Other works study the application of VAE networks to image compression, achieving a fairly good compression effect through methods such as increasing the quality ratio factor of the VAE and linear scaling \cite{Jiang2021OnlineMA} \cite{Chen2020VariableBI}. However, since VAE networks learn an approximation of the original picture by minimizing the mean squared error, the resulting images are prone to edge blurring. \par Rippel et al. were the first to propose applying GAN networks to image compression \cite{Rippel2017RealTimeAI}. The decoded data is generated by a GAN generator and set against a discriminator supported by real data. The model can not only compress full-resolution images but also achieve image compression at limited bit rates, resulting in reconstructed images with clear textures and better visual quality. \par The large number of applications of cloud, edge, and fog computing systems has spawned a very urgent information security problem. In \cite{Joshi2021AnalyticalRO}, for example, the authors analyze the data security issues posed by cloud computing.
Likewise, the review of Randeep and Jagroo \cite{Kaur2015CloudCS} points to the security issues that cloud computing can bring; they summarize techniques for overcoming data privacy issues and define pixel key patterns and image steganography techniques for overcoming data security issues. \par Several works\cite{Li2020AchievingSA}\cite{Shini2012CloudBM} \cite{Yao2014CloudBasedHI} discuss the security of medical information in cloud storage and data sharing environments and give some feasible solutions. Overall, these studies highlight the security of information (communications) under cloud computing systems. \par To sum up, most existing grasp detection work for robotic arms depends heavily on local edge computing ability, and the security problems arising during grasp detection have received insufficient consideration. \section{Methodology} \par The RGB image is captured by the local camera and sent out by the edge side after encoding and compression. The cloud side receives the data, and the decoder then reconstructs the image for grasp detection. The parameters of the encoder and decoder are obtained by training a generative adversarial network. Two tasks are completed in the grasp detection phase: grasp proposals and grasp configuration; the former determines the location of the object, and the latter configures the grasp angle. The system flowchart is shown in Fig \ref{fig2} and comprises a number of components, which we introduce below. \subsection{Image compression part} \par In this section, we focus on feature extraction, network architecture design, and the customized loss function. \begin{figure*} \begin{center} \includegraphics[height=4.5cm]{GAN+grasp-eps-converted-to.pdf} \end{center} \caption{The figure shows the general technical flow of our approach. The input image is collected at the edge, compressed by the encoder, and then uploaded to the cloud. The image is reconstructed by the decoder in the cloud.
Grasp parameters are then obtained through classification and regression networks to produce the bounding box; according to the probability, the most likely bounding box is selected as the final grasp detection result. The blue box and green box in the figure represent the edge end and cloud end, respectively.} \label{fig2} \end{figure*} \subsubsection{Feature Extraction and Compression} \par Our model uses global generative compression. Before encoding and decoding, the input image first passes through two convolution layers for feature extraction and compression. We found that by adjusting the number of feature channels and the feature map size output here, we can not only balance processing speed against compression quality but also easily change the compression ratio. \par We preprocess the image so that the input is an RGB image with a height of 210 and a width of 150. The encoded representation produced by the encoder is a $52\times 37$ feature map with 16, 8, 4, or 2 channels, corresponding to compression ratios of 32.58$\%$, 16.29$\%$, 8.14$\%$, and 4.07$\%$, respectively. The compression ratio, given by Equation \eqref{eq1}, is the ratio of the number of parameters of the output tensor $R^{C_{c}\times H_{c}\times W_{c}}$ to that of the input image $R^{C_{i}\times H_{i}\times W_{i}}$. The reconstructed images are similar to the original images, with a structural similarity index greater than 0.93. \begin{equation}\label{eq1} \boldmath Compression\ ratio= \frac{C_{c} \times H_{c} \times W_{c} }{C_{i} \times H_{i} \times W_{i} } \times 100\% \end{equation} \par The number of parameters for different compression ratios is shown in Table 1, and detailed results under different compression ratios are given in the experimental section. In Fig \ref{fig3}, we show the reconstructed images under different compression ratios.
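For concreteness, the ratio of compressed to input parameters can be checked with a short script. This is an illustrative sketch, not part of the paper's implementation; the function name is ours, and the shapes are the ones reported in the text (a $3\times 210\times 150$ RGB input and $52\times 37$ feature maps).

```python
# Hypothetical helper (not from the paper): compression ratio as the
# percentage of parameters retained, i.e. compressed count / input count.
def compression_ratio(input_shape, compressed_shape):
    """Each shape is (channels, height, width)."""
    c_i, h_i, w_i = input_shape
    c_c, h_c, w_c = compressed_shape
    return (c_c * h_c * w_c) / (c_i * h_i * w_i) * 100.0

# A 3x210x150 RGB input (94500 parameters) encoded to a 16-channel
# 52x37 feature map (30784 parameters) retains roughly 32.58%.
ratio = compression_ratio((3, 210, 150), (16, 52, 37))
```

Halving the channel count halves the ratio, which is why the four channel settings give the four reported percentages.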
\begin{figure}[!htbp] \centerline{\includegraphics[height=6cm]{8p.jpg}} \caption{Results of reconstructed images. The left column shows the original images, and the right three columns show the reconstructed images at 32.58\%, 16.29\%, and 8.14\% compression, respectively.} \label{fig3} \end{figure} \renewcommand\arraystretch{1.5} \begin{table}[!htbp] \centering \setlength{\tabcolsep}{4mm} \caption{Number of parameters before and after image compression} \begin{tabular}{ccc} \toprule \tabincell{c}{ Input image }&\tabincell{c}{Compressed tensor}&\tabincell{c}{Compression ratio}\\ \midrule \tabincell{c}{94500}&30784&32.58$\%$ \\ \midrule \tabincell{c}{94500}&15392&16.29$\%$ \\ \midrule \tabincell{c}{94500}&7696&8.14$\%$\\ \midrule \tabincell{c}{94500}&3848&4.07$\%$\\ \bottomrule \end{tabular} \end{table} \subsubsection{Network Architectures} \par To keep the network structure as simple as possible, we build a lightweight generative adversarial network similar to DCGAN \cite{Radford2016UnsupervisedRL}. The network consists of a generator and a discriminator: the decoder serves as the generator, and the encoder and decoder are trained together with the same loss function. During training, the generator tries to generate realistic images to deceive the discriminator, while the discriminator tries to separate the generated images from the real ones, assigning them the labels 0 and 1, respectively. \par In the encoder (compressor network), we use three consecutive simple residual blocks (ResNet\cite{He2016DeepRL}) for coding. Correspondingly, in the decoder (decompressor network), two upsampling layers are interleaved with three residual blocks to finally obtain the reconstructed image. We implement upsampling with transposed convolutions and restore the dimensions of the output image.
In the encoding and decoding network, we use LeakyReLU as the activation function, with Tanh in the last layer. In the convolution blocks of the encoding and decoding stages, we keep the size of the feature map constant by setting the stride and padding, which reduces the loss of information. For the discriminator, we build a simple model from a combination of convolution and dropout layers. \subsubsection{Loss Function} \par In GANs, the L1 loss (MAE) and L2 loss (MSE) are commonly used to train discriminators for binary classification. However, simply using the L1 loss often fails to reflect the level of detail preserved by image compression and restoration; structural fidelity is also an important consideration in image compression tasks. To account for both, we divide the loss function into two parts, an adversarial loss and a structural loss, whose weighted sum gives the final loss function. \par There are many loss functions for deep-learning-based image algorithms, such as the L1 and L2 losses. For image compression and restoration, however, these two losses struggle to recover the detailed structure of the image and do not express human perception intuitively. PSNR (Peak Signal-to-Noise Ratio) is another common evaluation criterion, but it shares the same problem as L1 and L2: it is based on pixel-by-pixel differences without considering human visual perception, so a high PSNR does not necessarily indicate high image quality. \par We therefore use MSSIM\cite{wang2004image} as the structural loss, which is based on SSIM. SSIM is a commonly used image quality index built on the assumption that the human eye extracts structural similarity variables when viewing an image.
Its final loss value is obtained by jointly considering brightness, contrast, and structural similarity. For images x and y, the SSIM components are calculated as follows: \begin{equation}\label{eq2} \boldmath l(x,y)=\frac{2\mu_{x}\mu_{y}+C_{1}}{\mu_{x}^{2}+\mu _{y}^{2}+C_{1}} \end{equation} \begin{equation}\label{eq3} \boldmath c(x,y)=\frac{2\sigma_{x}\sigma_{y}+C_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}} \end{equation} \begin{equation}\label{eq4} \boldmath s(x,y)=\frac{\sigma_{xy}+C_{3}}{\sigma_{x}\sigma_{y}+C_{3}} \end{equation} \par In Equations \eqref{eq2}--\eqref{eq4}, $l(x, y)$ estimates luminance from the means, $c(x,y)$ estimates contrast from the variances, and $s(x,y)$ estimates structural similarity from the covariance. SSIM is defined in Equation \eqref{eq5}, where $\alpha$, $\beta$, and $\gamma$ adjust the weights of the three parts. By default we set all three to 1, which yields Equation \eqref{eq6}. \begin{equation}\label{eq5} \boldmath SSIM(x,y)=[l(x,y)]^{\alpha }\cdot[c(x,y)]^{\beta }\cdot[s(x,y)]^{\gamma} \end{equation} \begin{equation}\label{eq6} \boldmath SSIM(x,y) = \frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \end{equation} \par MSSIM takes the reference image and the distorted image as input and divides the image into N blocks with a sliding window. It then weights the mean, variance, and covariance of each window, where the weights $w_{ij}$ satisfy $\sum_{i=0}^{n}\sum_{j=0}^{n} w_{ij} = 1$. We typically use a Gaussian kernel to calculate the structural similarity SSIM of the corresponding block and take the average as the final structural similarity measure of the two images. Suppose the original image is at scale 1 and the highest scale, scale M, is obtained after M-1 iterations.
For the $j^{th}$ scale, only the contrast $c(x,y)$ and the structural similarity $s(x,y)$ are calculated; the brightness similarity $l(x,y)$ is calculated only at scale M. The final result links the results of the various scales: \begin{equation}\label{eq7} \begin{split} \boldmath {MSSIM}(x, y)= \left[l_{M}(x, y)\right]^{\alpha_{M}} \\ \cdot \prod_{j=1}^{M}\left[c_{j}(x, y)\right]^{\beta_{j}} \cdot\left[s_{j}(x, y)\right]^{\gamma_{j}} \end{split} \end{equation} \par Zhao et al.\cite{Zhao2017LossFF} demonstrate the quality of these loss functions through three experiments, showing that MSSIM compares favourably. To obtain higher-quality output images and easier training, we use a loss function that combines MSSIM with the L1 loss. \subsection{Grasp detection part} \par The grasp detection task is divided into two subtasks, grasp proposals and grasp configuration; the former determines the location of the object, and the latter configures the grasp angle. \subsubsection{Grasp Proposals} \par Grasp proposals are implemented with a two-stage detection algorithm consisting of two branches, regression and classification. The model uses ResNet-50 as its backbone. First, the location of the bounding box is determined by regression, which generates the region proposals; this avoids the time-consuming sliding window method and directly predicts region proposals on the entire image. \par Features of these region proposals are extracted through an RPN (Region Proposal Network)\cite{ren2015faster}, and proposal classification is completed while proposal extraction is performed; the classification step labels each region feature as background or object. \par When the RPN generates a region proposal, the position of the object is preliminarily predicted.
At this stage, the two steps of region classification and location refinement are completed. Once the region proposals are obtained, the ROI pooling layer accurately refines and regresses their positions. \par After each region target is mapped to features on the feature map, the characteristics of the region proposals are further represented through a fully connected layer. The category of the region target and the refinement of its position are then completed by classification and regression: classification yields the true category of the object, while regression yields the specific coordinates of the current target, represented as a rectangular box with four parameters. \subsubsection{Grasp Configuration} \par The grasp configuration is determined through classification. The grasp orientation $\theta$ is divided into 20 classes, and the class with the highest confidence is chosen directly for grasping. \par Among these classes is a non-grasp class; if the confidence of the output is lower than that of the non-grasp class, the grasp proposal is considered ungraspable in that direction. Using a non-grasp class instead of a specific threshold handles multi-object, multi-grasp tasks better. \par The final output is shown in Fig \ref{fig4}. In the figure's output bounding box, the red line represents the opening length of a two-fingered gripper, while the blue line represents the parallel plates of the gripper. \begin{figure}[!htbp] \begin{center} \includegraphics[height=4cm]{grasp_label-eps-converted-to.pdf} \end{center} \caption{Results of grasp detection output.
The red line represents the opening length of a two-fingered gripper, while the blue line represents the parallel plates of the gripper.} \label{fig4} \end{figure} \par In this scheme, the loss function has two parts: the grasp proposal loss $L_{gpn}$ and the grasp configuration loss $L_{gcr_{-}reg}$. As shown in Equation \eqref{eq8}, $L_{gp_{-}cls}$ is the cross-entropy loss of the grasp direction classification, and $L_{gp_{-}reg}$ with weight $\lambda$ is the L1 regression loss of the grasp proposal. In the case of no grasp, $p_{i}^{\ast } = 0$; correspondingly, $p_{i}^{\ast } = 1$ when the object can be grasped. The parameters $t_{i}^{\ast }$ and $p_{i}^{\ast }$ correspond to the ground truth. \begin{equation}\label{eq8} \boldmath \begin{split} L_{g p n}\left(\left\{\left(p_{i}, t_{i}\right)_{i=1}^{\mathbf{I}}\right\}\right)= \sum_{i} L_{g p_{-} c l s}\left(p_{i}, p_{i}^{\ast}\right) \\ + \lambda \sum_{i} p_{i}^{\ast} L_{g p_{-} r e g}\left(t_{i}, t_{i}^{\ast}\right) . \end{split} \end{equation} \par Equation \eqref{eq9} defines the loss function for grasp configuration prediction. In this equation, $L_{gcr_{-}cls}$ is the cross-entropy loss of the grasp orientation classification, and $\rho_{l}$ is the confidence of each class. $L_{gcr_{-}reg}$ is the regression loss of the bounding box, $\beta_{c}$ is the corresponding predicted grasp bounding box, $\beta_{c}^{\ast}$ is the ground truth bounding box, and $\lambda_{2}$ is the relative weight. \begin{equation}\label{eq9} \boldmath \begin{split} L_{g c r}\left(\left\{\left(\rho_{l}, \beta_{l}\right)_{l=1}^{\mathbf{C}}\right\}\right)= \sum_{l} L_{gcr_{-} c l s}\left(\rho_{l}\right) \\ + \lambda_{2} \sum_{c}1_{c\neq 0}(c) L_{gcr_{-} r e g}\left(\beta_{c},\beta_{c}^{\ast}\right) .
\end{split} \end{equation} \\ \hspace*{\fill} \\ \par The total loss is the sum of $L_{gpn}$ and $L_{gcr}$, as shown in Equation \eqref{eq10}: \begin{equation}\label{eq10} \boldmath L_{\text {total }}=L_{g p n}+L_{g c r}. \end{equation} \\ \hspace*{\fill} \\ \section{Experiments} \subsection{Experimental Environment} \par The model is trained on a 12-core computer with an Intel(R) Xeon(R) Platinum 8255C CPU, 47 GB of memory, and a GeForce RTX™ 3090 graphics card with 24 GB of video memory. The operating system is Ubuntu 20.04. The test experiments were later conducted on a GeForce RTX™ 2080 Ti graphics card. \subsection{Dataset and Data Preprocessing} \par We used the Flickr30k \cite{Jia2015GuidingLT} dataset alone to train the image compression and reconstruction, and then validated on four datasets: Flickr30k, Div2k\cite{Agustsson_2017_CVPR_Workshops}, Cornell\cite{ainetter2021end}, and OCID\cite{suchi2019easylabel}. The image reconstruction achieved good PSNR and SSIM values on all of them. Grasp training and validation were then performed on the OCID dataset, with an overall accuracy of 92\%. \par Flickr30k: Flickr30k is the first image description dataset; it contains 158,915 descriptions and 31,783 images. It builds on the earlier Flickr8k dataset and focuses on describing everyday human activities. Of these images, 25,426 were used for training, and 6,357 each for validation and testing. \par Div2k: The DIV2K dataset is commonly used for super-resolution image reconstruction. It contains 1000 2K-resolution images, including 800 training images, 100 validation images, and 100 test images, and provides low-resolution versions with reduction factors of 2, 3, 4, and 8. \par Cornell: The Cornell grasping dataset is a standard dataset for robotic autonomous grasping tasks.
The dataset contains 885 RGB-D images of 640 × 480 px with 240 graspable objects. The correct grasp candidates are given by manually annotated rectangular boxes. Each object corresponds to multiple images with different orientations or poses, and each image is labelled with multiple ground truth grasps, corresponding to the many possible ways of grasping the object. \par OCID: We use the OCID\_grasp part of the dataset, which is composed of 1763 selected RGB-D images with more than 75,000 hand-annotated grasp candidates. \subsection{Training schedule} \par We train the whole network for 10 epochs on a single GeForce RTX™ 3090. The initial learning rate is set to 0.0002, the batch size is set to 30, and the log is output every 50 batches. The input image is first cropped to 210 × 160. \subsection{Evaluation Metric} \subsubsection{Compressed image quality metrics} \par PSNR (Peak Signal-to-Noise Ratio) is defined in Equation \eqref{eq11}, where $MAX_I^{2}$ is the square of the maximum possible pixel value of the image and MSE is the mean squared error over the pixels of the two images. The minimum PSNR is 0, and the larger the PSNR, the smaller the difference between the two images. We test 100 images and take the average as the final value. \begin{eqnarray} \label{eq11} P S N R & = & 10 \cdot \log _{10}\left(\frac{M A X_{I}^{2}}{M S E}\right) \end{eqnarray} \par SSIM (Structural Similarity Index Measure) is defined in Equation \eqref{eq6}. SSIM is based on the assumption that the human eye extracts structured information from an image, and it integrates the differences between two images in terms of luminance, contrast, and structure. $SSIM\le 1$, and the larger the SSIM, the more similar the two images are. We test 100 images and take the average as the final value.
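The PSNR metric above is straightforward to sketch in code. The following is an illustrative implementation, not the paper's evaluation script; the function name is ours, and we assume 8-bit images so that $MAX_{I}=255$.

```python
import math

# Hypothetical sketch (not the paper's code) of the PSNR metric:
# PSNR = 10 * log10(MAX_I^2 / MSE), with MAX_I = 255 for 8-bit images.
def psnr(original, reconstructed, max_i=255.0):
    """Both images are nested lists of pixel values with the same shape."""
    flat_o = [p for row in original for p in row]
    flat_r = [p for row in reconstructed for p in row]
    mse = sum((o - r) ** 2 for o, r in zip(flat_o, flat_r)) / len(flat_o)
    if mse == 0:            # identical images: PSNR is unbounded
        return float("inf")
    return 10.0 * math.log10(max_i ** 2 / mse)
```

For example, a uniform error of 10 gray levels on every pixel gives an MSE of 100 and therefore a PSNR of about 28.1 dB, illustrating how larger PSNR values correspond to smaller pixel differences.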
\subsubsection{Grasping accuracy metrics} \par The accuracy of the grasp parameters is evaluated by how close a grasp candidate is to the ground truth. \par A grasp candidate is considered a successful grasp detection if it satisfies the following two criteria: \par (1) The difference between the angle of the predicted grasp $g_{p} $ and the ground truth $g_{t} $ does not exceed 30°. \par (2) The Intersection over Union (IoU) of $g_{p} $ and $g_{t} $ is greater than 25$\%$, that is, \begin{eqnarray} \label{eq12} I o U & = & \frac{\left|g_{p} \cap g_{t}\right|}{\left|g_{p} \cup g_{t}\right|}>0.25 \end{eqnarray} \subsection{Comparative experiment} \subsubsection{Image Compression Quality Experiment} \par We conducted compression encoding experiments on images from the Flickr30k, Cornell, and DIV2K datasets. The encoded tensor sizes obtained for different datasets and compression ratios are shown in Table 2. The data show that the size of the compressed tensor is proportional to the compression ratio, satisfying the linear relationship derived earlier. \renewcommand\arraystretch{1.5} \begin{table}[!htbp] \centering \setlength{\tabcolsep}{4mm} \caption{Output tensor size of the encoder with different compression ratios} \begin{tabular}{cccc} \toprule \tabincell{c}{ Compression Ratio }&\tabincell{c}{ Flickr30k }&\tabincell{c}{ Cornell }&\tabincell{c}{DIV2K}\\ \midrule \tabincell{c}{2.03\%}&15.5kb&74.8kb&74.8kb\\ \midrule \tabincell{c}{4.07\%}&30.3kb&148kb&148kb \\ \midrule \tabincell{c}{8.14\%}&60kb&297kb&297kb\\ \midrule \tabincell{c}{16.29\%}&119kb&593kb&593kb\\ \midrule \tabincell{c}{32.58\%}&237kb&1187kb&1187kb \\ \bottomrule \end{tabular} \end{table} \par We select 200 images from each of the Flickr30k, Cornell, and DIV2K datasets and divide them into 2:3 batches according to image complexity.
The reconstructed images are compared with the original images at different compression ratios. We compute their PSNR and SSIM values and average them to obtain Table 3 and Table 4. The data in Tables 3 and 4 show that our model achieves good results on inputs of different complexity; the average PSNR and SSIM reach 35.576 and 0.948, respectively. \renewcommand\arraystretch{1.5} \begin{table*}[] \centering \setlength{\tabcolsep}{1.3mm} \caption{PSNR on different datasets} \label{tab:psnr} \begin{tabular}{@{}cclccclccc@{}} \toprule \multicolumn{1}{c}{Compression} & \multicolumn{3}{c}{DIV2K} & \multicolumn{3}{c}{Flickr30K} & \multicolumn{3}{c}{Cornell} \\ \cline{2-10} \multicolumn{1}{c}{Ratio} & High complexity & Low complexity & Avg & High complexity & Low complexity & Avg & High complexity & Low complexity & Avg \\ \midrule 32.58\% & 24.48 & 30.31 & 27.98 & 21.31 & 29.78 & 26.392 & 34.785 & 36.10 & 35.5 \\ 16.29\% & 24.885 & 30.26 & 28.108 & 22.29 & 29.56 & 26.652 & 31.41 & 32.01 & 31.768 \\ 8.14\% & 24.16 & 29.31 & 27.248 & 21.82 & 28.31 & 25.714 & 29.93 & 30.41 & 30.216 \\ 4.07\% & 22.265 & 27.98 & 25.692 & 19.7 & 25.85 & 23.39 & 31.57 & 32.66 & 32.226 \\ 2.03\% & 19.515 & 24.44 & 22.472 & 19.52 & 24.44 & 22.472 & 26.875 & 27.91 & 27.496 \\ 0.99\% & 18.255 & 22.67 & 20.9 & 17.31 & 22.34 & 20.328 & 29.535 & 30.44 & 30.078 \\ 0.50\% & 15.55 & 20.51 & 18.528 & 15.91 & 19.02 & 17.776 & 24.38 & 26.86 & 25.866 \\ 0.13\% & 15.175 & 18.92 & 17.422 & 14.90 & 18.45 & 17.03 & 22.49 & 24.26 & 23.552 \\ 0.06\% & 16.225 & 21.21 & 19.218 & 15.46 & 18.49 & 17.276 & 19.475 & 19.11 & 19.254 \\ 0.04\% & 13.09 & 16.41 & 15.084 & 14.26 & 17.91 & 16.446 & 11.915 & 11.43 & 11.626 \\ 0.03\% & 10.875 & 14.99 & 13.344 & 12.24 & 13.59 & 13.05 & 11.915 & 11.43 & 11.626 \\ \bottomrule \end{tabular} \end{table*} \renewcommand\arraystretch{1.5} \begin{table*}[] \centering
\setlength{\tabcolsep}{1.5mm} \caption{SSIM on different datasets} \label{tab:ssim} \begin{tabular}{@{}cccccclccc@{}} \toprule \multicolumn{1}{c}{Compression} & \multicolumn{3}{c}{DIV2K} & \multicolumn{3}{c}{Flickr30K} & \multicolumn{3}{c}{Cornell} \\ \cline{2-10} \multicolumn{1}{c}{Ratio} & High complexity & Low complexity & Avg & High complexity & Low complexity & Avg & High complexity & Low complexity & Avg \\ \midrule 32.58\% & 0.83 & 0.86 & 0.86 & 0.75 & 0.85 & 0.81 & 0.945 & 0.95 & 0.948 \\ 16.29\% & 0.835 & 0.89 & 0.868 & 0.78 & 0.86 & 0.828 & 0.945 & 0.95 & 0.948 \\ 8.14\% & 0.805 & 0.87 & 0.846 & 0.75 & 0.82 & 0.792 & 0.945 & 0.95 & 0.948 \\ 4.07\% & 0.66 & 0.81 & 0.752 & 0.64 & 0.87 & 0.778 & 0.935 & 0.94 & 0.938 \\ 2.03\% & 0.48 & 0.7 & 0.612 & 0.51 & 0.78 & 0.672 & 0.905 & 0.92 & 0.914 \\ 0.99\% & 0.545 & 0.61 & 0.586 & 0.47 & 0.59 & 0.542 & 0.885 & 0.89 & 0.888 \\ 0.50\% & 0.365 & 0.46 & 0.422 & 0.49 & 0.6 & 0.556 & 0.845 & 0.87 & 0.858 \\ 0.13\% & 0.33 & 0.43 & 0.39 & 0.35 & 0.48 & 0.428 & 0.825 & 0.83 & 0.83 \\ 0.06\% & 0.405 & 0.49 & 0.458 & 0.37 & 0.53 & 0.466 & 0.77 & 0.80 & 0.79 \\ 0.04\% & 0.27 & 0.34 & 0.312 & 0.305 & 0.478 & 0.408 & 0.555 & 0.56 & 0.556 \\ 0.03\% & 0.225 & 0.29 & 0.262 & 0.23 & 0.35 & 0.302 & 0.555 & 0.56 & 0.556 \\ \bottomrule \end{tabular} \end{table*} \par In Tables 3 and 4, we use eleven compression ratios to test image compression and reconstruction. The results show that when the compression ratio is above 4.07\%, the reconstruction quality does not decrease much as the compression ratio decreases; when the compression ratio is 2.03\% or less, the loss gradually becomes apparent. These results show that our model has strong feature extraction ability and a wide range of customizable compression ratios. \par As the compression ratio increases, the image reconstruction quality increases sub-linearly and finally approaches a high value.
From the comparison of the eleven groups of values, weighing compression ratio against image quality, the reconstruction quality indices indicate that 8.14\% and 16.29\% are the best compression ratio settings for the network. The data in the tables show that the SSIM of images reconstructed on the three datasets is greater than 0.82 under these two ratios. The Cornell dataset, collected in a real grasping environment, has the highest score, with a PSNR of 31.768 and an average SSIM of 0.948, which is sufficient for grasping. However, in the actual grasp detection process, the requirements on images differ from those of the human eye; we therefore conduct further experiments combined with grasping in the grasp detection accuracy experiment and the network architecture experiment. \begin{figure}[!h] \begin{center} \includegraphics[height=6cm]{muti_display4.png} \end{center} \caption{Performance of our model on the OCID dataset. In the output bounding box, the red line represents the opening length of a two-fingered gripper, while the blue line represents the parallel plates of the gripper.} \label{fig6} \end{figure} \subsubsection{Grasp Detection Accuracy Experiment} \par To evaluate the effect of encoding and decoding on grasp detection, we compare the grasp detection results using the original images and the reconstructed images. The results are shown in Fig \ref{fig7}. \begin{figure} \begin{center} \includegraphics[height=9.5cm]{3x4.png} \end{center} \caption{The first column shows the original input pictures. The second column shows grasp detection results based on the original images. The third column shows the compressed and reconstructed images. The fourth column shows grasp detection results based on the compressed and reconstructed images.
It can be seen from the figure that the loss caused by compression is small, and grasp accuracy is not greatly affected.} \label{fig7} \end{figure} \par We can see from Fig \ref{fig7} that, under the compression ratio of 8.14\%, the accuracy decreases only slightly after compression. At the same time, the processing speed of our grasp detection algorithm reaches 13.62 fps in the implementation environment. \begin{figure} \begin{center} \includegraphics[height=5cm]{angel-eps-converted-to.pdf} \end{center} \caption{Grasp detection experiment based on the same object viewed from multiple angles. The figure shows that our model can accurately mark the bounding box in different directions.} \label{fig8} \end{figure} \par The Cornell dataset provides images and grasp labels from multiple angles of each object. We carry out the grasp detection experiment on the same object viewed from multiple angles. Fig.\ref{fig8} shows the results: our model can accurately mark the bounding box at different angles. \par We evaluate multi-object grasp detection accuracy in three settings: a single object, fewer than ten objects, and more than ten objects, counting the number of successful grasp detections to calculate the grasp accuracy. The results are shown in Table 4. They show that when the number of objects is less than 5, our model can basically achieve 100\% error-free detection on the OCID dataset. Fig \ref{fig6} shows the performance of our model on the OCID dataset. \begin{figure*} \begin{center} \includegraphics[height=11cm]{layer.png} \end{center} \caption{Reconstruction and grasp detection results of different models.
As the number of convolution layers increases, feature extraction becomes more thorough, and the quality of the reconstructed images improves.} \label{fig9} \end{figure*} \subsubsection{Network Architectures Experiment} \par In order to design the parameters of the neural networks reasonably, we carried out parameter optimization experiments along two dimensions: network depth and number of channels. \par We designed models with two, three, and five convolution blocks for the image reconstruction experiments. The results, shown in Fig.\ref{fig9} and Table 5, compare the effect of the number of encoder-decoder layers on model performance. From these figures and tables, the three-layer convolution block model reconstructs images better than the two-layer network. Although the five-layer network scores higher still, weighing operation speed against reconstruction quality, we consider three layers the better choice. \par A comparison of reconstructed image quality under different channel numbers is shown in Fig.\ref{fig15}-\ref{fig11} in the appendix. At low compression ratios the image is blurred, but grasps can still be detected and judged. As the compression ratio increases, the grasp detection result approaches that of the original image. The three rows, from top to bottom, are the input image, the reconstructed image, and the result with the label. These five groups of images show that reconstructions at compression ratios of 0.13\% and above are similar to the original image, which ensures the accuracy of grasping.
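The PSNR and SSIM figures reported throughout these experiments follow the standard formulas. The following is a minimal illustrative sketch, not the evaluation code used for the tables: PSNR is computed exactly, while the SSIM statistic is evaluated over the whole image in a single window, whereas the reported SSIM values would normally be averaged over small sliding windows.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=255.0):
    """SSIM statistic over the whole image (single window); the standard
    metric averages this quantity over small sliding windows."""
    x, y = np.asarray(ref, float), np.asarray(test, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = (np.arange(64 * 64, dtype=float).reshape(64, 64)) % 251.0
assert abs(ssim_global(img, img) - 1.0) < 1e-9      # perfect reconstruction
assert 48.1 < psnr(img, img + 1.0) < 48.2           # MSE = 1 -> 20 log10(255) dB
```

A uniform error of one grey level gives MSE $= 1$ and hence PSNR $= 20\log_{10}255 \approx 48.13$ dB, which puts the 28--35 dB values of Table 5 in context as visibly larger, spatially varying reconstruction errors.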
\renewcommand\arraystretch{1.5} \begin{table}[] \centering \caption{Comparison of different models} \setlength{\tabcolsep}{7mm} \begin{tabular}{ccc} \toprule Number of convolution layers & PSNR & SSIM \\ \midrule 2-layer & 28.346 & 0.9 \\ \midrule 3-layer & 31.768 & 0.948 \\ \midrule 5-layer & 35.438 & 0.952 \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \begin{center} \includegraphics[height=19.5cm]{zhengwen-eps-converted-to.pdf} \end{center} \caption{The first row shows the grasp detection results on the original image, and the rows below show the results under compression ratios of 16.29\%, 4.07\%, 0.99\%, 0.5\%, 0.13\%, 0.05\%, 0.04\%, and 0.03\%. From top to bottom, the loss due to compression gradually increases, which slowly affects the bounding box results of grasp detection.} \label{fig10} \end{figure*} \par The Cornell dataset consists entirely of single-object targets, which are less difficult to grasp. To further refine our choice of compression ratio, we performed the same experiment on the multi-object grasp dataset OCID. With fewer objects and fewer stacking occlusions, grasp detection accuracy changes little as the compression ratio is reduced. However, when the number of objects increases and numerous stacks appear, the influence of the compression ratio on the results gradually emerges. As shown in Fig \ref{fig10}, the first row shows the grasp detection results on the original image, and the eight rows below show the results under compression ratios of 16.29\%, 4.07\%, 0.99\%, 0.5\%, 0.13\%, 0.05\%, 0.04\%, and 0.03\% (additional images are shown in figure X in the appendix). At relatively large compression ratios, reducing the ratio does not necessarily reduce accuracy; in some cases compression even removes interfering clutter and improves grasp detection accuracy.
However, when the compression ratio is reduced to 0.5\%, it is clearly difficult to distinguish the stacked, occluded objects, and at the limiting ratio of 0.03\% grasp detection becomes impossible. Therefore, a compression ratio of at least 0.5\% should be selected for multi-object grasp detection. \par In conclusion: for a single object, or for objects without stacking occlusion, a compression ratio of 0.13\% or above yields high accuracy; in complex scenes where multiple objects are stacked and occluded, a compression ratio of 0.5\% or above is required. \section{Conclusion} \par We propose a new grasp detection model that performs grasp detection on RGB images. With its multi-object, multi-grasp scheme, our model improves the mission success rate of grasping. Through edge-cloud collaboration, the computing task is offloaded to the cloud's powerful resources, greatly improving the speed and accuracy of grasp detection. The encoder and decoder trained by the GAN allow the image to be encrypted while being compressed, preserving privacy. The model demonstrates that the combination of autonomous robot grasping and edge-cloud collaboration is highly promising. The model achieves 92\% accuracy on the OCID dataset, the image compression ratio reaches 2.03\% with a structural similarity value above 0.91, and the average detection speed reaches 13.62 fps. In the future, we will improve the compression scheme and refine the division of tasks between the edge and the cloud to further improve the efficiency of the model. Our solution can also be applied to other robotic tasks, promoting the development of the field of robotics.
This work also has potential in other fields such as federated learning \cite{liu2019lifelong, liu2019federated, zheng2021applications}, cloud-edge collaborative robotics \cite{liuelasticros, liu2021peer}, and smart cities. \section{Data Availability} All data included in this study are available upon request by contacting the author. \section{Conflicts of Interest} The authors declare that there are no known conflicts of interest or personal relationships that could have influenced the work reported in this paper. \columnsep 0.12in \bibliographystyle{IEEEtran}
\section{} $\bf{Abstract}$: A quantum lattice algorithm (QLA) is developed for the solution of Maxwell equations in scalar dielectric media using the Riemann-Silberstein representation. For x-dependent and y-dependent inhomogeneities, the corresponding QLA requires 8 qubits/spatial lattice site. This is because the corresponding Pauli spin matrices have off-diagonal components which permit the collisional entanglement of two qubits. However, z-dependent inhomogeneities require a QLA with 16 qubits/lattice site since the Pauli spin matrix $\sigma_z$ is diagonal. QLA simulations are performed for the time evolution of an initial electromagnetic pulse propagating normally to a boundary layer region joining two media of different refractive index. There is excellent agreement between all three representations, as well as very good agreement with nearly all the standard plane wave boundary condition results for reflection and transmission off a dielectric discontinuity. In the QLA simulation, no boundary conditions are imposed at the continuous, but sharply increasing, boundary layer. \section{Introduction} Quantum lattice algorithms (QLA) [1-20] have been shown to be excellent perturbative representations of physical problems that are ideally parallelized on classical supercomputers, and able to be directly encoded on a quantum computer. QLAs consist of an interleaved sequence of unitary collision and streaming operators on a set of qubits. The collision and streaming operators do not commute. Typically this part of the QLA recovers the differential structures of the physical problem under consideration. For physics problems like the Nonlinear Schr\"odinger equation (NLS), the nonlinear terms are handled perturbatively by an exponential potential operator. The ensuing QLA [9-11] accurately recovered all the physics of soliton collisions, including the signature phase change induced by the actual collisions themselves.
The extension of the NLS equation to three dimensions (3D) now permits the examination of the ground state of a Bose-Einstein condensate (BEC) using QLA [4, 12-16, 19]. This has led to quantum turbulence studies and vortex reconnection in scalar and spinor BECs. Here we continue our studies of a QLA for Maxwell equations in inhomogeneous media. In our earlier papers [17, 31] we presented QLA for 1D propagation of electromagnetic fields in a scalar dielectric medium. These QLAs were based on the Riemann-Silberstein vectors, which in essence give the two polarizations of an electromagnetic pulse. For homogeneous dielectrics, there is remarkable similarity between the Dirac equation and the 4-vector Riemann-Silberstein representation of Maxwell equations. Thus the Pauli spin-$1/2$ matrices play a significant role in the QLA. Khan [21] showed that for inhomogeneous media, the terms proportional to the gradient of the refractive index $n'(x)$ will lead to non-unitary operators in the time evolution of Maxwell equations. When one determines a QLA for Maxwell equations in an inhomogeneous medium, some of the evolution operators will necessarily be Hermitian, rather than unitary. In particular, for 1D propagation in the y-direction [17] two of the evolution operators are Hermitian, while for 1D x-propagation only one of the operators is Hermitian. Interestingly, Childs and Wiebe [22] have shown that algorithms utilizing sums of unitary operators (rather than the standard product of unitary operators) can still be encoded onto a quantum computer. In [17] we discussed how to determine a 1D QLA for Maxwell equations with a y-dependent dielectric and in [31] that for an x-dependent dielectric. Here we discuss how to develop a 1D QLA for Maxwell equations for z-dependent refractive indices.
Once having determined these three orthogonal 1D QLA representations one can immediately stitch these representations together to develop both 2D and fully 3D QLA representations of Maxwell equations. For both the x-dependent and y-dependent dielectrics, an 8-qubit representation is sufficient. However, for z-dependent dielectrics one will require a 16-qubit representation. This difference arises from the fact that the z-component Pauli spin matrix $\sigma_z$ is diagonal. The collision operator requires the coupling of at least 2 qubits locally at each lattice site in order to get entanglement. This entanglement is then spread throughout the lattice by the streaming operators. Khan's representation [21] of the Maxwell equations in an inhomogeneous medium using the Riemann-Silberstein vectors is presented in Sec. 2. In Sec. 3 we present the QLA formulation for x-, y-, and z-dependent media, while simulation results for z-dependent media are given in Sec. 4. Once one has these 1D modular representations, the QLA for Maxwell equations with either 2D or 3D dielectric inhomogeneities can be readily determined. Some preliminary QLA simulation results for 2D Maxwell equations are presented in Sec. 5. \section{General Theory of Khan [21] for Maxwell equations in Inhomogeneous Media} Soon after Dirac [23] was able to determine the square root of the Klein-Gordon wave operator and thus obtain a relativistically invariant counterpart to the Schr\"odinger equation, interest developed in making a formal theoretical connection between the relativistically invariant Maxwell equations and the Dirac equation [24-28]. One particularly intriguing approach has been through the use of the Riemann-Silberstein vectors [21, 24] \begin{equation} \label{R-S vector} \mathbf{F^{\pm}} = \sqrt{\epsilon} \mathbf{E} \pm i \frac{\mathbf{B}}{\sqrt{\mu}}.
\end{equation} where $\mathbf{E}$ is the (real) electric field, $\mathbf{B}$ the magnetic flux density, and $\epsilon$ and $\mu$ are the (scalar) permittivity and permeability of the medium, respectively. Thus the electric displacement $\mathbf{D} = \epsilon \mathbf{E}$, and the magnetic field $\mathbf{H} = \mathbf{B}/ \mu$ . The Maxwell equations (with free charge density $\rho$ and free current density $\mathbf{J}$) \begin{align} \nabla \bsdot \mathbf{D} \ &= \ \rho \ & \ \nabla \bsdot \mathbf{B} \ &= \ 0 \label{gauss} \\ \nabla \times \mathbf{E} \ &= \ - \frac{\partial \mathbf{B}}{\partial t} \ & \ \nabla \times \mathbf{H} \ &= \ \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t} \label{fam} \end{align} can be written in the Riemann-Silberstein form [21], \begin{equation} i \frac{\partial \mathbf{F^{\pm}}}{\partial t} = \pm v \nabla \times \mathbf{F^{\pm}} \pm \frac{1}{2} \nabla v \times \mathbf{F^{\pm}} \pm \frac{v}{2 h} \nabla h \times \mathbf{F^{\mp}} + \frac{i}{2} \left( \frac{\partial \ln v}{\partial t} \mathbf{F^{\pm}} + \frac{\partial \ln h}{\partial t} \mathbf{F^{\mp}} \right) - i \sqrt{\frac{v h}{2}} \mathbf{J}, \end{equation} \begin{equation} \nabla \bsdot \mathbf{F^{\pm}} = \frac{1}{2 v} \nabla v \bsdot \mathbf{F^{\pm}} + \frac{1}{2 h} \nabla h \bsdot \mathbf{F^{\pm}} + \sqrt{\frac{v h}{2}} \rho . \end{equation} where $v$ is the normalized electromagnetic phase velocity of the wave in the medium, and $h$ is a normalized resistance \begin{equation} v = \frac{1}{\sqrt{\epsilon \mu}},\ \ \ \ h = \sqrt{\frac{\mu}{\epsilon}}. \end{equation} Note that the coupling between the two field polarizations $\mathbf{F^{\pm}}$ occurs through the space-time variations in $h$. 
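As a quick numerical sanity check of the definitions above (an illustrative sketch, not part of the QLA itself), one can verify that for real fields $\mathbf{E}$, $\mathbf{B}$ the Riemann-Silberstein vectors satisfy $|\mathbf{F^{\pm}}|^2 = \epsilon E^2 + B^2/\mu$, i.e. twice the electromagnetic energy density, and that $vh = 1/\epsilon$ and $v/h = 1/\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal(3)          # real electric field
B = rng.standard_normal(3)          # real magnetic flux density
eps, mu = 2.5, 1.3                  # arbitrary scalar permittivity / permeability

# Riemann-Silberstein vectors, Eq. (1)
Fp = np.sqrt(eps) * E + 1j * B / np.sqrt(mu)
Fm = np.sqrt(eps) * E - 1j * B / np.sqrt(mu)

v = 1.0 / np.sqrt(eps * mu)         # normalized phase velocity
h = np.sqrt(mu / eps)               # normalized resistance

# |F+|^2 = |F-|^2 = eps E^2 + B^2/mu  (twice the energy density)
energy2 = eps * (E @ E) + (B @ B) / mu
assert np.isclose(np.vdot(Fp, Fp).real, energy2)
assert np.isclose(np.vdot(Fm, Fm).real, energy2)
assert np.isclose(v * h, 1.0 / eps) and np.isclose(v / h, 1.0 / mu)
```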
In determining the QLA representation of Maxwell equations it is convenient to rewrite the system in matrix form [21]: \begin{equation} \begin{aligned} \renewcommand\arraystretch{1.8} \frac{\partial}{\partial t} \begin{pmatrix} \Psi^+ \\ \Psi^- \end{pmatrix} - & \renewcommand\arraystretch{1.8} \frac{1}{2} \frac{\partial \ln v}{\partial t} \begin{pmatrix} \Psi^+ \\ \Psi^- \end{pmatrix} +\frac{i}{2} M_z \alpha_y \frac{\partial \ln h}{\partial t} \begin{pmatrix} \Psi^+ \\ \Psi^- \end{pmatrix} = \\ & \renewcommand\arraystretch{2.3} - v \begin{pmatrix} \mathbf{M} \bsdot \nabla + \boldsymbol{\Sigma} \bsdot \displaystyle{\frac{\nabla \nu}{2 \nu}} & -i M_z \boldsymbol{\Sigma} \bsdot \displaystyle{\frac{\nabla h}{h}} \alpha_y \\ -i M_z \boldsymbol{\Sigma}^{*} \bsdot \displaystyle{\frac{\nabla h}{h}} \alpha_y & \mathbf{M}^{*} \bsdot \nabla + \boldsymbol{\Sigma}^{*} \bsdot \displaystyle{\frac{\nabla \nu}{2 \nu}} \end{pmatrix} \begin{pmatrix} \Psi^+ \\ \Psi^- \end{pmatrix} - \begin{pmatrix} W^+ \\ W^- \end{pmatrix}, \end{aligned} \end{equation} where the 8-spinor Cartesian components of the Riemann-Silberstein vectors are, \begin{equation} \begin{aligned} \renewcommand\arraystretch{1.6} {\Psi^{\pm} } = \begin{pmatrix} - F^{\pm}_x \pm i F^{\pm}_y \\ F^{\pm}_z \\ F^{\pm}_z \\ F^{\pm}_x \pm i F^{\pm}_y \end{pmatrix} . \end{aligned} \end{equation} The $4 \times 4$ matrices $\mathbf{M}$ are the tensor products of the Pauli spin matrices, $\boldsymbol{\sigma} = (\sigma_x , \sigma_y , \sigma_z) $ \begin{equation} \label{Pauli_spin} \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \ \ \ , \sigma_y = \begin{pmatrix} 0 & - i \\ i & 0 \\ \end{pmatrix} \ \ \ , \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}, \end{equation} with the $2 \times 2$ identity matrix $\mathbf{I_2}$, \begin{equation} \mathbf{M} = \boldsymbol{\sigma} \otimes \mathbf{I_2} , \quad \text{and} \quad M_z = \sigma_z \otimes \mathbf{I_2}.
\end{equation} The $4 \times 4$ matrices $\boldsymbol{\alpha}$ and $\boldsymbol{\Sigma}$ are given by, \begin{equation} \boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \\ \end{pmatrix}, \ \ \ \boldsymbol{\Sigma} = \begin{pmatrix} \boldsymbol{\sigma} & 0 \\ 0 & \boldsymbol{\sigma} \\ \end{pmatrix}. \end{equation} The current density and charge density source matrix is, \begin{equation} {W^{\pm} } = \frac{1}{\sqrt 2 \epsilon} \renewcommand\arraystretch{1.6} \begin{pmatrix} -J_x \pm i J_y \\ J_z - v \rho \\ J_z + v \rho \\ J_x \pm i J_y \end{pmatrix} . \end{equation} Moreover we shall find that the QLA representation can be determined in modular form: one need only examine 1D pulse propagation in each of the 3 orthogonal Cartesian directions. We explicitly write down these modular components: \subsection{1D Pulse Propagation in the x-direction} \begin{equation} \label{R_S InHomox1} \frac{\partial}{\partial t} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \\ \end{bmatrix} = - \frac{1}{n(x)} \frac{\partial}{\partial x} \begin{bmatrix} q_2 \\ q_3 \\ q_0 \\ q_1 \\ \end{bmatrix} + \frac{n^\prime (x)}{2n^2(x)} \begin{bmatrix} q_1 + q_6 \\ q_0 - q_7 \\ q_3 - q_4 \\ q_2 + q_5 \\ \end{bmatrix}, \end{equation} \begin{equation} \label{R_S InHomox2} \frac{\partial}{\partial t} \begin{bmatrix} q_4 \\ q_5 \\ q_6 \\ q_7 \\ \end{bmatrix} = - \frac{1}{n(x)} \frac{\partial}{\partial x} \begin{bmatrix} q_6 \\ q_7 \\ q_4 \\ q_5 \\ \end{bmatrix} + \frac{n^\prime (x)}{2n^2(x)} \begin{bmatrix} q_5 + q_2 \\ q_4 - q_3 \\ q_7 - q_0 \\ q_6 + q_1 \\ \end{bmatrix}.
\end{equation} \subsection{1D Pulse Propagation in the y-direction} \begin{equation} \label{R_S InHomo1} \frac{\partial}{\partial t} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \\ \end{bmatrix} = i \frac{1}{n(y)} \frac{\partial}{\partial y} \begin{bmatrix} q_2 \\ q_3 \\ -q_0 \\ -q_1 \\ \end{bmatrix} + i \frac{n^\prime (y)}{2n^2(y)} \begin{bmatrix} q_1 - q_6 \\ -q_0 - q_7 \\ q_3 + q_4 \\ -q_2 + q_5 \\ \end{bmatrix}, \end{equation} \begin{equation} \label{R_S InHomo2} \frac{\partial}{\partial t} \begin{bmatrix} q_4 \\ q_5 \\ q_6 \\ q_7 \\ \end{bmatrix} = i \frac{1}{n(y)} \frac{\partial}{\partial y} \begin{bmatrix} -q_6 \\ -q_7 \\ q_4 \\ q_5 \\ \end{bmatrix} + i \frac{n^\prime (y)}{2n^2(y)} \begin{bmatrix} -q_5 - q_2 \\ q_4 - q_3 \\ -q_7 + q_0 \\ q_6 + q_1 \\ \end{bmatrix}. \end{equation} \subsection{1D Pulse Propagation in the z-direction} \begin{equation} \label{R_S InHomoz1} \frac{\partial}{\partial t} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \\ \end{bmatrix} = - \frac{1}{n(z)} \frac{\partial}{\partial z} \begin{bmatrix} q_0 \\ q_1 \\ -q_2 \\ -q_3 \\ \end{bmatrix} + \frac{n^\prime (z)}{2n^2(z)} \begin{bmatrix} q_0 - q_7 \\ -q_1 - q_6 \\ q_2 + q_5 \\ -q_3 + q_4 \\ \end{bmatrix}, \end{equation} \begin{equation} \label{R_S InHomoz2} \frac{\partial}{\partial t} \begin{bmatrix} q_4 \\ q_5 \\ q_6 \\ q_7 \\ \end{bmatrix} = - \frac{1}{n(z)} \frac{\partial}{\partial z} \begin{bmatrix} q_4 \\ q_5 \\ -q_6 \\ -q_7 \\ \end{bmatrix} + \frac{n^\prime (z)}{2n^2(z)} \begin{bmatrix} q_4 - q_3 \\ -q_5 - q_2 \\ q_6 + q_1 \\ -q_7 + q_0 \\ \end{bmatrix}. \end{equation} \section{QLA Representation of Maxwell Equations in Inhomogeneous Media} From our previous work [3-20] on forming QLA for 1D, 2D, and 3D Nonlinear Schr\"odinger/Gross-Pitaevskii equations, the QLA takes on modular form.
One need only consider an interleaved sequence of unitary collide-stream operators along a particular axis of the generic form \begin{equation} \begin{aligned} \label{U_X} U &= S_{-} C S_{+} C^{\dagger} \bsdot \: S_{+} C S_{-} C^{\dagger}, \\ U^{+} &= S_{+} C^{\dagger} S_{-} C \bsdot \: S_{-} C^{\dagger} S_{+} C \end{aligned} \end{equation} acting on the 8-qubit vector $Q = (q_0 \quad q_1 \quad ... q_7)^{T}$ to give a time-advancement \begin{equation} Q_{t+\delta t} = U^{+} U Q_{t} \end{equation} Since all quantum gates can be boiled down to 1-qubit and 2-qubit gates, we need only consider collision operators that couple 2 qubits. Hence the collision matrices will be, in general, sparse. The interleaved collide-stream operator sequence is used to recover the differentials in the partial differential equation of interest in the continuum limit of the lattice equations. The non-differential terms (in the case of Maxwell equations these are the inhomogeneous refractive index terms) will be modeled by potential matrices $V_{pot}$. We now turn to specific details for the 3 orthogonal propagation directions. \subsection{QLA for 1D inhomogeneous Maxwell Equations for x-propagation} We first consider the construction of the collide-stream operators to recover any partial spatial x-derivatives to second order. For an 8-qubit vector, the collide-stream matrices will be $8 \cross 8$. From Eqs. (13) - (14) we see the pairwise coupling of qubits $(q_0 - q_2), (q_1 - q_3), (q_4 - q_6), (q_5 - q_7) $.
The simplest unitary matrix $C$ with this structure has the form \begin{equation} \label{COLL_X 8x8} C (\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 & 0 & 0 & 0 & 0 \\ 0 & \cos \theta & 0 & \sin \theta & 0 & 0 & 0 & 0 \\ - \sin \theta & 0 & \cos \theta & 0 & 0 & 0 & 0 & 0 \\ 0 & - \sin \theta & 0 & \cos \theta & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cos \theta & 0 & \sin \theta & 0 \\ 0 & 0 & 0 & 0 & 0 & \cos \theta & 0 & \sin \theta \\ 0 & 0 & 0 & 0 & - \sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 0 & 0 & - \sin \theta & 0 & \cos \theta \\ \end{bmatrix}. \end{equation} for some angle $\theta$. The streaming operator $S$ will stream 4 qubits at a time, and leave the other 4 qubits untouched at their lattice site. With the specific 2-qubit coupling $(q_0 - q_2), (q_1 - q_3), (q_4 - q_6), (q_5 - q_7) $ one will choose to stream the 4 qubits $(q_0 \quad q_1 \quad q_4 \quad q_5)$ at one instant and then the other qubits $(q_2 \quad q_3 \quad q_6 \quad q_7)$ at the next instant. Thus we have two basic unitary streaming operators which act diagonally on the qubit index \begin{equation} \label{Streamx012} S_{\pm 1}^{0145} \begin{bmatrix} q_0 (x) \\ q_1 (x)\\ q_2 (x) \\ q_3 (x) \\ q_4 (x) \\ q_5 (x)\\ q_6 (x)\\ q_7 (x) \\ \end{bmatrix} = \begin{bmatrix} q_0 (x \pm 1) \\ q_1 (x \pm 1)\\ q_2 (x) \\ q_3 (x) \\ q_4 (x \pm 1) \\ q_5 (x \pm 1)\\ q_6 (x)\\ q_7 (x) \\ \end{bmatrix}, \quad S_{\pm 1}^{2367} \begin{bmatrix} q_0 (x) \\ q_1 (x)\\ q_2 (x) \\ q_3 (x) \\ q_4 (x) \\ q_5 (x)\\ q_6 (x)\\ q_7 (x) \\ \end{bmatrix} = \begin{bmatrix} q_0 (x) \\ q_1 (x)\\ q_2 (x \pm 1) \\ q_3 (x \pm 1) \\ q_4 (x) \\ q_5 (x)\\ q_6 (x \pm 1)\\ q_7 (x \pm 1) \\ \end{bmatrix} \end{equation} If we now define the unitary interleaved sequence of collision-stream operators \begin{equation} \begin{aligned} U = S_{- \epsilon} ^{0145} . C(\theta) . S_{+ \epsilon} ^{0145} . C^{+}(\theta) . S_{+ \epsilon} ^{2367} . C(\theta) . S_{- \epsilon} ^{2367} . C^{+}(\theta) , \\ U^{+} = S_{+\epsilon} ^{0145} .
C^{+}(\theta) . S_{- \epsilon} ^{0145} . C(\theta) . S_{- \epsilon} ^{2367} . C^{+}(\theta) . S_{+ \epsilon} ^{2367} . C(\theta) \end{aligned} \end{equation} and symbolically evaluate \begin{equation} U^{+} . U . Q(x,t) \end{equation} for what we can define as one time instant $\delta t$ of propagation, we find \begin{equation} \label{Algorx} \begin{bmatrix} q_0 (x,t+\delta t) \\ q_1 (x,t+\delta t)\\ q_2 (x,t+\delta t) \\ q_3 (x,t+\delta t) \\ q_4 (x,t+\delta t) \\ q_5 (x,t+\delta t)\\ q_6 (x,t+\delta t)\\ q_7 (x,t+\delta t) \\ \end{bmatrix} = \begin{bmatrix} q_0 (x,t) \\ q_1 (x,t)\\ q_2 (x,t) \\ q_3 (x,t) \\ q_4 (x,t) \\ q_5 (x,t)\\ q_6 (x,t)\\ q_7 (x,t) \\ \end{bmatrix} - \frac{1}{n(x)} \frac{\partial}{\partial x} \begin{bmatrix} q_2 (x,t) \\ q_3 (x,t)\\ q_0 (x,t) \\ q_1 (x,t) \\ q_6 (x,t) \\ q_7 (x,t)\\ q_4 (x,t)\\ q_5 (x,t) \\ \end{bmatrix} \epsilon^2 + O(\epsilon^4) \end{equation} on choosing the collision angle \begin{equation} \theta = \frac{\epsilon}{4 n(x)} , \quad \text{with} \quad \epsilon \ll 1. \end{equation} Thus to recover the required differentials in the long-time long-wavelength continuum limit we must enforce diffusion ordering on the time scales $\delta t = \epsilon^2$, where the spatial lattice unit spacing is $\epsilon$: \begin{equation} \label{R_S InHomoxxx1} \frac{\partial}{\partial t} \begin{bmatrix} q_0(x,t) \\ q_1(x,t) \\ q_2(x,t) \\ q_3 (x,t) \\ q_4 (x,t) \\ q_5 (x,t) \\ q_6 (x,t) \\ q_7 (x,t) \\ \end{bmatrix} = - \frac{1}{n(x)} \frac{\partial}{\partial x} \begin{bmatrix} q_2 (x,t) \\ q_3 (x,t) \\ q_0 (x,t) \\ q_1 (x,t) \\ q_6 (x,t) \\ q_7 (x,t) \\ q_4 (x,t) \\ q_5 (x,t) \\ \end{bmatrix} + O(\epsilon^2) . \end{equation} So far the QLA is unitary. To recover the two terms associated with the inhomogeneous refractive index \begin{equation} \frac{n^{\prime}(x)}{2 n^2 (x)} \end{equation} we will introduce two potential-like collision operators. Their basic form can be deduced from the required couplings in the x-dependent Maxwell equations.
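The collide-stream construction is straightforward to prototype classically. The sketch below is illustrative only (periodic boundary conditions, constant refractive index $n$, so the potential terms vanish): it builds the $8 \times 8$ collision matrix $C(\theta)$, implements the streaming operators as cyclic shifts, assembles $U$ and $U^{+}$, and checks two properties that hold by construction — the full step $U^{+}U$ preserves the norm of $Q$, since every factor is unitary, and reduces to the identity at $\theta = 0$, where the opposing streamings cancel. It does not verify the Taylor expansion itself, which the text obtains symbolically.

```python
import numpy as np

def collision(theta):
    # 8x8 unitary collision matrix: entangles pairs (q0,q2),(q1,q3),(q4,q6),(q5,q7)
    c, s = np.cos(theta), np.sin(theta)
    r = np.array([[ c, 0, s, 0],
                  [ 0, c, 0, s],
                  [-s, 0, c, 0],
                  [ 0,-s, 0, c]], dtype=float)
    C = np.zeros((8, 8))
    C[:4, :4], C[4:, 4:] = r, r
    return C

def stream(rows, shift):
    # streaming operator: q_i(x) <- q_i(x + shift) for i in rows, periodic lattice
    def op(Q):
        Q = Q.copy()
        Q[rows] = np.roll(Q[rows], -shift, axis=1)
        return Q
    return op

def qla_step(Q, theta):
    # one time step  Q(t + dt) = U+ . U . Q(t)   (rightmost factor acts first)
    A, B = [0, 1, 4, 5], [2, 3, 6, 7]
    C = collision(theta)
    mul = lambda M: (lambda Q: M @ Q)
    U     = [mul(C.T), stream(B, -1), mul(C), stream(B, +1),
             mul(C.T), stream(A, +1), mul(C), stream(A, -1)]
    U_dag = [mul(C), stream(B, +1), mul(C.T), stream(B, -1),
             mul(C), stream(A, -1), mul(C.T), stream(A, +1)]
    for op in U + U_dag:                  # listed in order of application
        Q = op(Q)
    return Q

N, eps, n = 64, 0.1, 1.5
x = np.arange(N)
Q0 = np.exp(-0.05 * (x - N / 2) ** 2)[None, :] * np.ones((8, 1))  # smooth pulse
Q1 = qla_step(Q0, eps / (4 * n))

C = collision(0.3)
assert np.allclose(C @ C.T, np.eye(8))                     # collision is unitary
assert np.allclose(qla_step(Q0, 0.0), Q0)                  # theta = 0: streams cancel
assert np.isclose(np.linalg.norm(Q1), np.linalg.norm(Q0))  # norm-preserving step
```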
The first potential collision operator couples qubits $(q_0 - q_1, q_2 - q_3, q_4 - q_5, q_6 - q_7)$ while the second potential collision operator couples qubits $(q_0 - q_6, q_1 - q_7 , q_2 - q_4, q_3 - q_5)$. An appropriate choice for these $8 \cross 8$ matrices is \begin{equation} \label{V1_X 8x8} V_1 (\alpha) = \begin{bmatrix} \cos \alpha & -\sin \alpha & 0 & 0 & 0 & 0 & 0 & 0 \\ -\sin \alpha & \cos \alpha & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \cos \alpha & -\sin \alpha & 0 & 0 & 0 & 0 \\ 0 & 0 & -\sin \alpha & \cos \alpha & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cos \alpha & -\sin \alpha & 0 & 0 \\ 0 & 0 & 0 & 0 & -\sin \alpha & \cos \alpha & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \cos \alpha & -\sin \alpha \\ 0 & 0 & 0 & 0 & 0 & 0 & -\sin \alpha & \cos \alpha \\ \end{bmatrix}. \end{equation} and \begin{equation} V_2 (\alpha) = \begin{bmatrix} \cos \alpha & 0 & 0 & 0 & 0 & 0 & -\sin \alpha & 0 \\ 0 & \cos \alpha & 0 & 0 & 0 & 0 & 0 & \sin \alpha \\ 0 & 0 & \cos \alpha & 0 & \sin \alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & \cos \alpha & 0 & -\sin \alpha & 0 & 0 \\ 0 & 0 & -\sin \alpha & 0 & \cos \alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & \sin \alpha & 0 & \cos \alpha & 0 & 0 \\ \sin \alpha & 0 & 0 & 0 & 0 & 0 & \cos \alpha & 0 \\ 0 & -\sin \alpha & 0 & 0 & 0 & 0 & 0 & \cos \alpha \\ \end{bmatrix}. \end{equation} for some angle $\alpha$. From symbolic manipulations one finds that the appropriate potential collision angle $\alpha$ is \begin{equation} \alpha = \epsilon^2 \frac{n^{\prime}(x)}{2 n^2 (x)}. \end{equation} It is interesting to note that while $V_2(\alpha)$ is unitary, the potential collision matrix $V_1(\alpha)$ is not unitary, but just Hermitian. The final QLA for 1D propagation in an x-dependent inhomogeneous medium is thus \begin{equation} Q(t + \delta t) = V_2(\alpha). V_1(\alpha). U^{+}(\theta). U(\theta). Q(t) \end{equation} \subsection{QLA for 1D inhomogeneous Maxwell Equations for y-propagation} As can be seen from Eqs.
(15)-(16), the y-dependent 1D Maxwell equations are very similar to those for the x-dependent ones. Of course, this is predicated on the similarity of the Pauli spin matrices $\sigma_x$ and $\sigma_y$: \begin{equation} \label{Pauli_spin_xy} \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \ \ \ , \sigma_y = \begin{pmatrix} 0 & - i \\ i & 0 \\ \end{pmatrix} \end{equation} Hence we will simply write down the corresponding unitary collision-streaming operators. \begin{equation} \label{COLL_Y 8x8} C (\theta) = \begin{bmatrix} \cos \theta & 0 & i \sin \theta & 0 & 0 & 0 & 0 & 0 \\ 0 & \cos \theta & 0 & i \sin \theta & 0 & 0 & 0 & 0 \\ i \sin \theta & 0 & \cos \theta & 0 & 0 & 0 & 0 & 0 \\ 0 & i \sin \theta & 0 & \cos \theta & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cos \theta & 0 & -i \sin \theta & 0 \\ 0 & 0 & 0 & 0 & 0 & \cos \theta & 0 & -i\sin \theta \\ 0 & 0 & 0 & 0 & -i \sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 0 & 0 & -i \sin \theta & 0 & \cos \theta \\ \end{bmatrix}. \end{equation} Since the collision operator couples the same two qubits as for the x-dependent Maxwell equation, the streaming operators will be unchanged. Moreover, for the inhomogeneous refractive index the two-qubit couplings are also unchanged.
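Relative to the x-direction, only the collision matrix changes, acquiring the factors $\pm i$ of $\sigma_y$. A small illustrative check (not part of the algorithm itself) that this complex $C(\theta)$ remains unitary:

```python
import numpy as np

def collision_y(theta):
    # 8x8 collision matrix for y-propagation: +i sin(theta) couplings in the
    # upper 4x4 block, -i sin(theta) in the lower block (cf. sigma_y)
    c, s = np.cos(theta), 1j * np.sin(theta)
    up = np.array([[c, 0, s, 0],
                   [0, c, 0, s],
                   [s, 0, c, 0],
                   [0, s, 0, c]])
    C = np.zeros((8, 8), dtype=complex)
    C[:4, :4] = up
    C[4:, 4:] = up.conj()
    return C

C = collision_y(0.7)
assert np.allclose(C @ C.conj().T, np.eye(8))   # unitary despite complex entries
```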
Hence the two potential collision matrices are \begin{equation} \label{V11 8x8} V_{1} (\beta) = \begin{bmatrix} \cos \beta & \sin \beta & 0 & 0 & 0 & 0 & 0 & 0 \\ -\sin \beta & \cos \beta & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \cos \beta & \sin \beta & 0 & 0 & 0 & 0 \\ 0 & 0 & -\sin \beta & \cos \beta & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cos \beta & -\sin \beta & 0 & 0 \\ 0 & 0 & 0 & 0 & \sin \beta & \cos \beta & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \cos \beta & -\sin \beta \\ 0 & 0 & 0 & 0 & 0 & 0 & \sin \beta & \cos \beta \\ \end{bmatrix}, \end{equation} \begin{equation} \label{V22 8x8} V_{2} (\beta) = \begin{bmatrix} \cos \beta & 0 & 0 & 0 & 0 & 0 & -\sin \beta & 0 \\ 0 & \cos \beta & 0 & 0 & 0 & 0 & 0 & -\sin \beta \\ 0 & 0 & \cos \beta & 0 & \sin \beta & 0 & 0 & 0 \\ 0 & 0 & 0 & \cos \beta & 0 & \sin \beta & 0 & 0 \\ 0 & 0 & -\sin \beta & 0 & \cos \beta & 0 & 0 & 0 \\ 0 & 0 & 0 & -\sin \beta & 0 & \cos \beta & 0 & 0 \\ \sin \beta & 0 & 0 & 0 & 0 & 0 & \cos \beta & 0 \\ 0 & \sin \beta & 0 & 0 & 0 & 0 & 0 & \cos \beta \\ \end{bmatrix}. \end{equation} The 1D y-dependent Maxwell equations are recovered from this QLA provided the collision angles \begin{equation} \theta = \frac{\epsilon}{4 n(y)} , \quad \beta = - i \epsilon^2 \frac{n^{\prime} (y)}{2n^2(y)}. \end{equation} Because of the complex collision angle $\beta$, both the potential collision matrices are Hermitian, but not unitary. The final QLA for y-dependent refractive index has a slightly different collide-stream interleaved sequence \begin{equation} \begin{aligned} U = S_{- \epsilon} ^{2367} . C(\theta) . S_{+ \epsilon} ^{2367} . C^{+}(\theta) . S_{+ \epsilon} ^{0145} . C(\theta) . S_{- \epsilon} ^{0145} . C^{+}(\theta) , \\ U^{+} = S_{+\epsilon} ^{2367} . C^{+}(\theta) . S_{- \epsilon} ^{2367} . C(\theta) . S_{- \epsilon} ^{0145} . C^{+}(\theta) . S_{+ \epsilon} ^{0145} . C(\theta) \end{aligned} \end{equation} so that for the 8-qubit vector Q: \begin{equation} Q(t + \delta t) = V_2(\beta). V_1(\beta). 
U^{+}(\theta). U(\theta). Q(t) \end{equation} \subsection{QLA for 1D inhomogeneous Maxwell Equations for z-propagation} The QLA for z-dependent propagation is very different from that for the other two orthogonal directions. This is because the Pauli spin matrix $\sigma_z$ is diagonal \begin{equation} \label{Pauli_spin z} \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix} \end{equation} resulting in the coupling of $\partial q_i / \partial t$ to $\partial q_i / \partial z$, for each $i = 0, \ldots, 7$. Since the unitary collision operators must couple two different qubits, there is no $8 \cross 8$ representation for z-propagation. Hence we turn to a 16-qubit representation. In our earlier work on developing QLA for solitons and Bose-Einstein condensates [15-16], the physical order-parameter equations (nonlinear Schr\"odinger equation or the Gross-Pitaevskii equation) were represented at a mesoscopic level by twice as many qubits as field components. Especially when dealing with a single scalar field equation, one needed at least 2 qubits per spatial lattice site to represent the field so that there could be quantum entanglement arising from the unitary qubit collision operator. Because of the vector nature of the electromagnetic fields in Maxwell equations there were sufficient qubits per lattice site to represent the fields directly. However, for z-dependent propagation, the diagonal form of the Pauli spin matrix $\sigma_z$ forces us into a mesoscopic qubit representation.
An appropriate unitary collision matrix, which couples the qubit pairs $(q_0 - q_2, q_1 - q_3, q_4 - q_6, q_5 - q_7, q_8 - q_{10}, q_9 - q_{11}, q_{12} - q_{14}, q_{13} - q_{15})$, has the following structure of $4 \cross 4$ blocks \begin{equation} \label{C4x4} C (\theta) = \begin{bmatrix} V_4 (\theta) & 0 & 0 & 0 \\ 0 & V_4 (\theta)^{Tr} & 0 & 0 \\ 0 & 0 & V_4 (\theta) & 0 \\ 0 & 0 & 0 & V_4 (\theta)^{Tr} \\ \end{bmatrix}, \end{equation} where \begin{equation} \label{Csub4x4} V_4 (\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & \cos \theta & 0 & \sin \theta \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & -\sin \theta & 0 & \cos \theta \\ \end{bmatrix}, \end{equation} and $V_4 (\theta)^{Tr}$ is the transpose of the $4 \cross 4$ matrix $V_4 (\theta)$, Eq.~(\ref{Csub4x4}). The streaming operators each stream 8 qubits: let us denote by $S^{0,8}$ the operator which streams qubits $(q_0, q_1, q_4, q_5, q_8, q_9, q_{12}, q_{13})$, while $S^{2,10}$ streams the 8 qubits $(q_2, q_3, q_6, q_7, q_{10}, q_{11}, q_{14}, q_{15})$.
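As a quick numerical consistency check (again an illustrative sketch, not the authors' code), one can assemble the $16 \cross 16$ collision operator from the $V_4(\theta)$ and $V_4(\theta)^{Tr}$ blocks above and confirm that it is orthogonal, with $C^{+}(\theta) = C(-\theta)$:

```python
import numpy as np

def V4(theta):
    # The 4x4 block used in the 16-qubit collision operator.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s, 0],
                     [ 0, c, 0, s],
                     [-s, 0, c, 0],
                     [ 0,-s, 0, c]])

def C16(theta):
    # 16x16 collision operator: alternating V4, V4^Tr diagonal blocks.
    v = V4(theta)
    out = np.zeros((16, 16))
    for k, block in enumerate([v, v.T, v, v.T]):
        out[4*k:4*k+4, 4*k:4*k+4] = block
    return out
```

Each diagonal block is a rotation, so the full operator is unitary for any real collision angle $\theta$, as required for the collide-stream sequence.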
The first of the potential collision operators mimics the coupling of the unitary collision matrix $C$, so its structure of $4 \cross 4$ blocks is \begin{equation} \label{P1_4x4} P_1 (\gamma) = \begin{bmatrix} PV_4 (\gamma) & 0 & 0 & 0 \\ 0 & PV_4 (\gamma) & 0 & 0 \\ 0 & 0 & PV_4 (\gamma) & 0 \\ 0 & 0 & 0 & PV_4 (\gamma) \\ \end{bmatrix}, \end{equation} where \begin{equation} \label{PVsub4x4} PV_4 (\gamma) = \begin{bmatrix} \cos \gamma & 0 & - \sin \gamma & 0 \\ 0 & \cos \gamma & 0 & - \sin \gamma \\ - \sin \gamma & 0 & \cos \gamma & 0 \\ 0 & - \sin \gamma & 0 & \cos \gamma \\ \end{bmatrix}, \end{equation} while the second potential collision operator has a $2 \cross 2$ block structure of $8 \cross 8$ matrices \begin{equation} \label{P2_8x8} P_2 (\gamma) = \begin{bmatrix} PV_{81} (\gamma) & PV_{82} (\gamma) \\ PV_{82} (\gamma) & PV_{81} (\gamma) \\ \end{bmatrix}, \end{equation} where \begin{equation} \label{PV81 8x8} PV_{81} (\gamma) = \begin{bmatrix} \cos \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \cos \gamma & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \cos \gamma & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \cos \gamma & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cos \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \cos \gamma & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \cos \gamma & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cos\gamma \\ \end{bmatrix}, \end{equation} and \begin{equation} \label{PV82 8x8} PV_{82} (\gamma) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & - \sin \gamma \\ 0 & 0 & 0 & 0 & 0 & 0 & - \sin \gamma & 0 \\ 0 & 0 & 0 & 0 & 0 & - \sin \gamma & 0 & 0 \\ 0 & 0 & 0 & 0 & - \sin \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & \sin \gamma & 0 & 0 & 0 & 0 \\ 0 & 0 & \sin \gamma & 0 & 0 & 0 & 0 & 0 \\ 0 & \sin \gamma & 0 & 0 & 0 & 0 & 0 & 0 \\ \sin \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}. \end{equation} With the unitary operators \begin{equation} \begin{aligned} U_{[16]} = S_{-\epsilon}^{0,8}.C(\theta). S_{+\epsilon}^{0,8} . C^{+}(\theta).S_{+\epsilon}^{2,10}.C(\theta). S_{-\epsilon}^{2,10} .
C^{+}(\theta) \\ U_{[16]}^{+} = S_{+\epsilon}^{0,8}.C^{+}(\theta). S_{-\epsilon}^{0,8} . C(\theta).S_{-\epsilon}^{2,10}.C^{+}(\theta). S_{+\epsilon}^{2,10} . C(\theta) \end{aligned} \end{equation} one obtains from \begin{equation} Q_{[16]}(t + \delta t) = P_2(\gamma) . P_1(\gamma) . U_{[16]}^{+}. U_{[16]} . Q_{[16]} (t) \end{equation} on using the collision angles \begin{equation} \theta = \frac{\epsilon}{4 n(z)} , \quad \gamma = \frac{\epsilon^2 n^{\prime}(z)}{2 n^2(z)} \end{equation} the mesoscopic evolution of the 16 qubits. This evolution falls into a 4-block structure of the form \begin{equation} \begin{aligned} q_{0+4k}(t+ \delta t) = q_{0+4k}(t) + \Big\{ \frac{n^{\prime}(z)}{4 n^2(z)} [-q_{2+4k}(z) +(-1)^{k} q_{15-4k}(z) ] - (-1)^{k}\frac{1}{4 n(z)} \frac{\partial q_{2+4k}}{\partial z} \Big \} \epsilon^2 + O(\epsilon^4) \\ q_{1+4k}(t+ \delta t) = q_{1+4k}(t) + \Big\{ \frac{n^{\prime}(z)}{4 n^2(z)} [+q_{3+4k}(z) + (-1)^{k} q_{14-4k}(z) ] - (-1)^{k} \frac{1}{4 n(z)} \frac{\partial q_{3+4k}}{\partial z} \Big \} \epsilon^2 + O(\epsilon^4) \\ q_{2+4k}(t+ \delta t) = q_{2+4k}(t) + \Big\{ \frac{n^{\prime}(z)}{4 n^2(z)} [-q_{0+4k}(z) + (-1)^{k} q_{13-4k}(z) ] -(-1)^{k} \frac{1}{4 n(z)} \frac{\partial q_{0+4k}}{\partial z} \Big \} \epsilon^2 + O(\epsilon^4) \\ q_{3+4k}(t+ \delta t) = q_{3+4k}(t) + \Big\{ \frac{n^{\prime}(z)}{4 n^2(z)} [+q_{1+4k}(z) +(-1)^{k} q_{12-4k}(z) ] -(-1)^{k} \frac{1}{4 n(z)} \frac{\partial q_{1+4k}}{\partial z} \Big \} \epsilon^2 + O(\epsilon^4) \\ \end{aligned} \end{equation} with $k = 0, 1, 2$ or $3$. To recover the 8-spinor representation of the 1D Maxwell equations in Riemann-Silberstein form we need only define \begin{equation} \begin{aligned} \overline{q_0} = q_0 + q_2 , \quad \overline{q_1} = q_1 + q_3 , \quad \overline{q_2} = q_4 + q_6 , \quad \overline{q_3} = q_5 + q_7 , \\ \overline{q_4} = q_8 + q_{10} , \quad \overline{q_5} = q_9 + q_{11} , \quad \overline{q_6} = q_{12} + q_{14} , \quad \overline{q_7} = q_{13} + q_{15} .
\\ \end{aligned} \end{equation} \section{Some Simulations for Electromagnetic Pulses propagating in the z-direction} We have previously considered the QLA for 1D Maxwell equations in inhomogeneous media for propagation in the y-direction [17] and in the x-direction [31]. Both these QLAs require an 8-qubit representation. Here we present some longer time evolution of a z-propagating Gaussian pulse, which now requires 16 qubits per node. In particular we shall consider multiple reflections and transmissions at strongly varying dielectric boundary layers. We shall consider an inhomogeneous medium with vacuum refractive index $n(z) = 1.0$ for $0 < z < 3700$ and for $4300 < z < 6500$, and refractive index $n(z) = 2$ for $3700 < z < 4300$, Fig. 1. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig1} \caption{A localized inhomogeneous dielectric region, $3700 < z < 4300$, within a vacuum. } \end{center} \end{figure} \subsection{Normal incident electromagnetic pulse} The simplest Gaussian vacuum electromagnetic pulse propagating in the z-direction has for the non-zero components of the electric $\mathbf{E}$ and magnetic $\mathbf{B}$ fields \begin{equation} E_x(z,t=0) = 0.01 \, \exp[- \frac{\epsilon^2 (z-z_0)^2}{1500}] = B_y(z,t=0) \end{equation} where the small parameter $\epsilon = 0.3$ and the initial center of the Gaussian pulse is at $z_0 = 2300$. This incident pulse propagates undistorted towards the dielectric slab, as seen in Fig. 2 for times $t = 0$ and $t < 4000$. By $t = 4000$ the forward part of the pulse is just starting to interact with the dielectric boundary layer. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig2} \caption{The Gaussian pulse at $t = 0$ (red) and $t = 4000$ (black). In the vacuum, $E_x = B_y$ and these fields overlay each other. The vertical dashed lines indicate the dielectric slab of refractive index $n = 2$. } \end{center} \end{figure} Fig.
3 shows the pulse straddling the vacuum-dielectric slab region at time $t = 4800$. Within the dielectric slab, the transmitted pulse has $E_x \ne B_y$, with $\max B_y = 2 \max E_x$. The part of the pulse in the vacuum is predominantly the transient reflected part, with the beginnings of a phase shift in $E_x$ since the reflection is occurring at a low-to-higher refractive index interface. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig3} \caption{At $t = 4800$, the incident vacuum pulse is being split into a partially transmitted pulse within the dielectric slab of $n_{slab} = 2$ and a partially reflected pulse back into the vacuum. Because the pulse is moving from vacuum into a higher refractive index region, it is the electric field component $E_x$ that exhibits the phase change. ($E_x$ -- blue, $B_y$ -- red; the dielectric slab lies in $3700 < z < 4300$, its boundaries denoted by the dashed vertical lines). } \end{center} \end{figure} By $t = 6000$ the transients have died down and one sees the transmitted pulse within the dielectric slab, with $B_y \simeq 2 E_x$ and in phase, while the reflected pulse back in the vacuum has $E_x$ out of phase by $\pi$ with $B_y$. The speed of the transmitted pulse is half that of the incident and reflected pulses in the vacuum (Fig. 4). \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig4} \caption{The reflected and transmitted pulses at time $t = 6000$. Since the initial pulse is incident onto a higher refractive index medium, the reflected $E_x$ undergoes a $\pi$ phase change, but with $\abs{E^{refl}_x} = \abs{B^{refl}_y}$. The transmitted pulse has half the speed and half the width of the initial vacuum pulse, with $B^{trans}_y \simeq 2 E^{trans}_x$. ($E_x$ -- blue, $B_y$ -- red; the dielectric slab lies in $3700 < z < 4300$, its boundaries denoted by the dashed vertical lines).
} \end{center} \end{figure} By $t = 9000$ the transmitted pulse has reached the right boundary of the dielectric slab and is undergoing its own transmission and reflection, Fig. 5. Since the reflected pulse is traveling back into the higher refractive index medium, it is now $B_y$ that undergoes a $\pi$ phase change. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig5} \caption{By $t = 9000$, the right-traveling pulse reaches the right end of the dielectric slab and itself undergoes transient transmission and reflection. Now, however, it is the magnetic field $B_y$ that undergoes a phase change. ($E_x$ -- blue, $B_y$ -- red; the dielectric slab lies in $3700 < z < 4300$, its boundaries denoted by the dashed vertical lines). } \end{center} \end{figure} The time asymptotic state of this stage of pulse evolution is shown in Fig. 6 ($t = 10000$). The transmitted pulse ($z > 4300$) is back in the vacuum region, so that $E_x \simeq B_y$ and the field components overlay each other, as at $t = 0$. This pulse has the same speed and width as the initial vacuum pulse, but its amplitude is somewhat reduced (preserving total energy conservation). The reflected pulse in the dielectric slab is propagating to the left with half the speed of the initial vacuum pulse and half its width, and now with $B_y$ out of phase by $\pi$ with its $E_x$ field component. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig6} \caption{A quasi-asymptotic state ($t = 10000$) after the first reflection and transmission off the back interface around $z=4300$. For the reflected pulse in the dielectric slab, it is the magnetic field $B_y$ that undergoes the $\pi$ phase change. ($E_x$ -- blue, $B_y$ -- red; the dielectric slab lies in $3700 < z < 4300$, its boundaries denoted by the dashed vertical lines).
} \end{center} \end{figure} In our simulations we then follow the reflection and transmission of the left-traveling pulse within the dielectric as it hits the inner edge around $z=3700$. The subsequent transmitted pulse, which keeps propagating to the left into the vacuum, has its $B_y$ out of phase with its companion $E_x$. The part of the pulse that is reflected from the dielectric boundary around $z=3700$ has another $\pi$ phase change induced in the magnetic field component $B_y$, so that this right-traveling pulse within the dielectric has its components again in phase (see Fig. 7). \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig7} \caption{A quasi-asymptotic state ($t = 14000$) following a reflection and transmission off the front interface around $z=3700$. In the outgoing pulses traveling to the left, the two pulses are out of phase with each other. ($E_x$ -- blue, $B_y$ -- red; the dielectric slab lies in $3700 < z < 4300$, its boundaries denoted by the dashed vertical lines). } \end{center} \end{figure} Finally, in these 1D simulations of the normal incidence of a Gaussian pulse onto a dielectric slab, we compute the instantaneous Poynting flux $S(t)$ \begin{equation} S(t) = \int_0^L \left[ \mathbf{E}(z,t) \times \mathbf{B}(z,t) \right] \cdot \mathbf{n} \, dz . \end{equation} It is seen from Fig. 8 that QLA conserves energy very well throughout the simulation, except during the overlap of incident/reflected pulses around the slab boundaries $z=3700$ or $z=4300$. During these time intervals it is very difficult to distinguish which part of the pulse is incident and which part is due to reflection -- see e.g., Fig. 3 (for $t=4800$) or Fig. 5 ($t=9000$) -- thus making the identification of the outward pointing normal difficult. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig8} \caption{The instantaneous Poynting flux.
The time intervals $4200 < t < 5300$, $8200 < t < 9300$ and $12400 < t < 13300$ are those in which there is pulse overlap with either the front or the back boundary of the dielectric slab. } \end{center} \end{figure} \section{Preliminary 2D QLA Scattering from a Dielectric Obstacle} We can now readily stitch together the three orthogonal QLAs to obtain a 2D or 3D Maxwell solver for electromagnetic fields in an arbitrary scalar dielectric medium. Here some preliminary 2D $x-z$ QLA simulations for an initial Gaussian pulse propagating towards a conical dielectric obstacle are presented. The $x-z$ QLA is obtained simply by stitching together the evolution equations (32) and (49). A 2D dielectric cone is situated with a base $75 < x/4 < 225$ and $175 < z/4 < 325$, rising to a refractive index of 3 around $x/4 = 150$, $z/4 = 255$. The simulations were performed on a $2048 \times 2048$ grid, but the data was plotted at every 4th point -- hence the figures are drawn on a $0 < x/4, z/4 < 512$ grid. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig9} \caption{The refractive index of the conical dielectric obstacle. } \end{center} \end{figure} The initial Gaussian plane pulse is independent of $z$ and propagates in the x-direction towards the dielectric obstacle, Fig. 10. \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig10} \caption{The initial Gaussian pulse propagating in the $x-$ direction. } \end{center} \end{figure} By $t=1250$, the Gaussian pulse is interacting with the dielectric cone. Since the cone's refractive index is $> 1$, the Gaussian pulse front slows down within the dielectric region (Fig.
11) \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig11} \caption{The electromagnetic pulse as it starts to interact with the dielectric cone, $t = 1250$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig12} \caption{The electromagnetic pulse at $t=2500$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig13} \caption{The electromagnetic pulse at $t=3250$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig14} \caption{The electromagnetic pulse at $t=3500$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig15} \caption{The electromagnetic pulse at $t=4000$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig16} \caption{The electromagnetic pulse at $t=4500$. } \end{center} \end{figure} \begin{figure}[!h!p!b!t] \ \begin{center} \includegraphics[width=5.1in]{Fig17} \caption{The electromagnetic pulse at $t=5500$. } \end{center} \end{figure} \section{Conclusion and Summary} We have determined the QLA for Maxwell equations for 1D propagation in an inhomogeneous medium. From the modular form of the QLA in the three Cartesian coordinates, one can readily move to 2D and to 3D inhomogeneous dielectric media. It was found that for z-propagation one required 16 qubits per lattice site because of the diagonal structure of the Pauli spin matrix $\sigma_z$. For the non-diagonal Pauli spin matrices $\sigma_x$ and $\sigma_y$, one needs only 8 qubits per lattice site for either x-propagation or y-propagation. \section{Acknowledgments} LV was partially supported by an AFRL STTR Phase I with Semicyber LLC, contract number FA864919PA049. GV, LV and MS were partially supported by an AFRL STTR Phase 2 with Semicyber LLC, contract number FA864920P0419. AKR was supported by DoE Grant Numbers DE-FG02-91ER-54109 and DE-SC0018090.
The 2D simulations used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231, as well as the U.S. Department of Defense High Performance Supercomputer at ERDC. \section{References} [1] YEPEZ, J. 2002 An efficient and accurate quantum algorithm for the Dirac equation. arXiv: 0210093. [2] YEPEZ, J. 2005 Relativistic Path Integral as a Lattice-Based Quantum Algorithm. Quant. Info. Proc. 4, 471-509. [3] YEPEZ, J., VAHALA, G. $\&$ VAHALA, L. 2009a Vortex-antivortex pair in a Bose-Einstein condensate: quantum lattice gas model of $\phi^4$ theory in the mean-field approximation. Euro. Phys. J. Special Topics 171, 9-14. [4] YEPEZ, J., VAHALA, G., VAHALA, L. $\&$ SOE, M. 2009b Superfluid turbulence from quantum Kelvin wave to classical Kolmogorov cascades. Phys. Rev. Lett. 103, 084501. [5] YEPEZ, J. 2016 Quantum lattice gas algorithmic representation of gauge field theory. SPIE 9996, paper 9996-22. [6] OGANESOV, A., VAHALA, G., VAHALA, L., YEPEZ, J. $\&$ SOE, M. 2016a Benchmarking the Dirac-generated unitary lattice qubit collision-stream algorithm for 1D vector Manakov soliton collisions. Computers Math. with Applic. 72, 386. [7] OGANESOV, A., FLINT, C., VAHALA, G., VAHALA, L., YEPEZ, J. $\&$ SOE, M. 2016b Imaginary time integration method using a quantum lattice gas approach. Rad. Effects Defects Solids 171, 96-102. [8] OGANESOV, A., VAHALA, G., VAHALA, L. $\&$ SOE, M. 2018 Effects of Fourier Transform on the streaming in quantum lattice gas algorithms. Rad. Eff. Def. Solids 173, 169-174. [9] VAHALA, G., VAHALA, L. $\&$ YEPEZ, J. 2003 Quantum lattice gas representation of some classical solitons. Phys. Lett. A310, 187-196. [10] VAHALA, G., VAHALA, L. $\&$ YEPEZ, J. 2004 Inelastic vector soliton collisions: a lattice-based quantum representation. Phil.
Trans: Mathematical, Physical and Engineering Sciences, The Royal Society, 362, 1677-1690. [11] VAHALA, G., VAHALA, L. $\&$ YEPEZ, J. 2005 Quantum lattice representations for vector solitons in external potentials. Physica A362, 215-221. [12] VAHALA, G., YEPEZ, J., VAHALA, L., SOE, M., ZHANG, B. $\&$ ZIEGELER, S. 2011 Poincaré recurrence and spectral cascades in three-dimensional quantum turbulence. Phys. Rev. E84, 046713. [13] VAHALA, G., YEPEZ, J., VAHALA, L. $\&$ SOE, M. 2012 Unitary qubit lattice simulations of complex vortex structures. Comput. Sci. Discovery 5, 014013. [14] VAHALA, G., ZHANG, B., YEPEZ, J., VAHALA, L. $\&$ SOE, M. 2012 Unitary Qubit Lattice Gas Representation of 2D and 3D Quantum Turbulence. Chpt. 11 (pp. 239-272), in Advanced Fluid Dynamics, ed. H. W. Oh (InTech Publishers, Croatia). [15] VAHALA, G., VAHALA, L. $\&$ SOE, M. 2020 Qubit Unitary Lattice Algorithm for Spin-2 Bose Einstein Condensates: I – Theory and Pade Initial Conditions. Rad. Eff. Def. Solids 175, 102-112. [16] VAHALA, G., SOE, M. $\&$ VAHALA, L. 2020 Qubit Unitary Lattice Algorithm for Spin-2 Bose Einstein Condensates: II – Vortex Reconnection Simulations and non-Abelian Vortices. Rad. Eff. Def. Solids 175, 113-119. [17] VAHALA, G., VAHALA, L., SOE, M. $\&$ RAM, A. K. 2020 Unitary Quantum Lattice Simulations for Maxwell Equations in Vacuum and in Dielectric Media. arXiv: 2002.08450. [18] VAHALA, L., VAHALA, G. $\&$ YEPEZ, J. 2003 Lattice Boltzmann and quantum lattice gas representations of one-dimensional magnetohydrodynamic turbulence. Phys. Lett. A306, 227-234. [19] VAHALA, L., SOE, M., VAHALA, G. $\&$ YEPEZ, J. 2019a Unitary qubit lattice algorithms for spin-1 Bose-Einstein condensates. Rad. Eff. Def. Solids 174, 46-55. [20] VAHALA, L., VAHALA, G., SOE, M., RAM, A. $\&$ YEPEZ, J. 2019b Unitary qubit lattice algorithm for three-dimensional vortex solitons in hyperbolic self-defocusing media. Commun. Nonlinear Sci. Numer. Simulat. 75, 152-159. [21] KHAN, S. A. 2005 Maxwell Optics: I.
An exact matrix representation of the Maxwell equations in a medium. Physica Scripta 71, 440-442; also arXiv: 0205083v1 (2002). [22] CHILDS, A. N. $\&$ WIEBE, N. 2012 Hamiltonian simulation using linear combinations of unitary operations. Quantum Info. Comput. 12, 901-924. [23] DIRAC, P. A. M. 1928 The Quantum Theory of the Electron. Proc. Roy. Soc. A 117, 610-624. [24] BIALYNICKI-BIRULA, I. 1996 Photon Wave Function, in Progress in Optics, Vol. 34, pp. 248-294, ed. E. Wolf (North-Holland). [25] LAPORTE, O. $\&$ UHLENBECK, G. E. 1931 Application of spinor analysis to the Maxwell and Dirac equations. Phys. Rev. 37, 1380-1397. [26] OPPENHEIMER, J. R. 1931 Note on light quanta and the electromagnetic field. Phys. Rev. 38, 725-746. [27] MOSES, E. 1959 Solutions of Maxwell's equations in terms of a spinor notation: the direct and inverse problems. Phys. Rev. 113, 1670-1679. [28] COFFEY, M. W. 2008 Quantum lattice gas approach for the Maxwell equations. Quantum Info. Processing 7, 275-281. [29] KULYABOV, D. S., KOROLKOVA, A. V. $\&$ SEVASTIANOV, L. A. 2017 Spinor representation of Maxwell's equations. IOP Conf. Series: J. Physics: Conf. Series 788, 012025. [30] JACKSON, J. D. 1998 ``Classical Electrodynamics'', 3rd Ed. (Wiley, New York). [31] VAHALA, G., VAHALA, L., SOE, M. $\&$ RAM, A. K. 2020 Unitary Quantum Lattice Simulations for Maxwell Equations in Vacuum and in Dielectric Media (to be published 2020, J. Plasma Physics). \end{document}
\section{Introduction} A significant and growing body of astrophysical \cite{Sofue:2000jx,Markevitch:2003at,Massey:2010hh} and cosmological \cite{Primack:2015kpa,Aghanim:2018eyx} observations strongly suggests the existence of ``dark matter'', a massive substance which interacts very weakly---perhaps only through gravity---with ordinary, visible matter. This dark matter has not yet been observed at particle colliders or in dedicated searches \cite{RevModPhys.90.045002}. Many dark matter direct detection experiments to date have focused on weakly interacting massive particles (WIMPs) with masses around $100~\rm{GeV}$. These technologies are reaching full maturity, and will have either detected or largely excluded WIMPs as viable dark matter candidates within the next generation of experiments \cite{arcadi2018waning}. There is thus a clear need for searches of new dark matter candidates, with new experimental techniques \cite{battaglieri2017us}. Precision measurement techniques have already been deployed in the search for dark matter (see e.g.\ \cite{Ahmed:2018oog,RevModPhys.90.025008} for reviews). In this white paper, we discuss approaches to searching for dark matter using massive, mechanical sensing devices. We include applications of purely classical mechanical sensors, as well as many devices which are now operating in the ``quantum-limited'' regime, in which the dominant noise contributions come from the quantum mechanics of measurement itself. These ultra-high precision systems can enable tests of a wide range of dark matter models with extremely small couplings to ordinary matter (both electromagnetic and otherwise). These approaches complement existing search strategies, and in many cases provide better sensitivity than other available options. The development of mechanical detectors has a rich history. 
Precision measurement in the context of gravitational physics has utilized a range of large-scale systems such as optical interferometers~\cite{abramovici1992ligo}, atom interferometers~\cite{kasevich1992measurement,peters2001high}, torsion balances~\cite{hoyle2001submillimeter,wagner2012torsion}, and Weber bars~\cite{weber1966observation,cerdonio1997ultracryogenic}. The broader landscape of mechanical systems studied as both classical and quantum detectors is wide-reaching, ranging from single ions \cite{biercuk2010ultrasensitive,ivanov2016high}, to tens of thousands of atoms \cite{schreppler2014optically}, to microscale resonators \cite{teufel2009nanomechanical,peterson2016laser} and up to kilogram-scale devices \cite{abramovici1992ligo,hoyle2001submillimeter}. In this white paper, we consider how a variety of mechanical systems can open fundamentally new avenues to search for dark matter over a large range of energy scales. In particular, monitoring solid, massive objects allows for coherent integration of long-wavelength interactions, and for integration of small cross sections over large volumes or large numbers of target atoms or nuclei. Mechanical devices that are read out interferometrically at the shot-noise limit, or even at or below the standard quantum limit (SQL) enforced by quantum backaction \cite{caves1980quantum}, have been demonstrated across a wide range of mass scales, with natural frequencies ranging from millihertz to terahertz in recent years (see \cite{aspelmeyer2014cavity} for a review). Hence, multiple technologies are at an opportune point for contemplating their role in precision experiments. Dark matter detection is a particularly compelling and challenging problem, which may require the development of fundamentally new technologies. Mechanical detection may be poised to contribute to these challenging searches in both near-term and long-term experiments.
Development of new technologies will necessarily proceed with researchers in the sensing and particle physics communities working in tandem. In the following, we outline opportunities and objectives in this new direction in the search for dark matter. We note that the mechanical sensing techniques we focus on have many similarities to proposed dark matter searches with atom interferometry \cite{graham2016dark,geraci2016sensitivity,Coleman:2018ozp} and atomic clock systems \cite{derevianko2014hunting,arvanitaki2015searching,stadnik2015searching}, but here we focus on the domain of solid objects. \section{Motivations for mechanical sensors} \label{section-mass} The present landscape of viable dark matter candidates is enormous, leading to a wide variety of potential experimental signatures. Dark matter particles could range in mass from $10^{-22}~{\rm eV}$ up to hundreds of solar masses, a range of some 90 orders of magnitude.\footnote{In this paper, we use natural units $\hbar = c = 1$ to quote particle physics quantities like masses and momenta.} Moreover, dark matter could interact with the standard model through many possible interactions, although perhaps only through gravity. To span this diverse range of possible models, different regions of parameter space will require different detector architectures and measurement techniques. In particular, for models interacting with the standard model only through mass or other extensive quantities such as nucleon number, massive mechanical sensors may be required. Mechanical sensing technologies offer an extensive set of platforms, as discussed in section~\ref{section-detectors}, and thus have the potential to search for a wide range of such dark matter candidates in regions of parameter space that are complementary to existing searches. The ability to monitor a large number of atoms in aggregate offers two key advantages over other approaches. 
The first advantage is the large volume integration of any putative dark matter signal. Any dark-visible interactions are necessarily tiny, so using a large volume (or a large mass of target nuclei or atoms, for models that can resolve the underlying substructure of the masses) is key to meaningful detection prospects. The second advantage is that long-wavelength signals can be integrated coherently across the full device, leading to greatly enhanced sensitivities. Such coherent detection has applications in the search for wave-like dark matter such as the axion or other ultralight bosons, as well as in the case of impulses delivered with extremely small momentum transfers. In section~\ref{section-applications}, we give some examples of dark matter models leading to these types of signals, and discuss prospects for their detection with mechanical sensors. \section{Detection targets and techniques} \label{section-applications} \begin{figure}[t!] \centering \includegraphics[scale=0.4]{whitepaper-massrange.pdf} \caption{\emph{Range of available dark matter candidates}. Current observations allow for dark matter to consist of quanta with an enormous range of masses. Here we classify these candidates as particle-like when $m \gtrsim 1~{\rm eV}$, and ultralight, wave-like dark matter when $m \lesssim 1~{\rm eV}$. A few prototypical models are listed as examples.} \label{figure-massrange} \end{figure} Possible signals of dark matter are controlled by a few key parameters. Astrophysical observations tell us that the dark matter mass density in our neighborhood is $\rho \sim 0.3~{\rm GeV/cm^3}$ \cite{read2014local}. Assuming this dark matter consists of a single component, with (unknown) mass of an individual dark matter quantum, $m_{\chi}$, this means that the local number density is around \begin{equation} \label{nchi} n_{\chi} = \frac{0.3}{\rm cm^3} \times \left( \frac{1~{\rm GeV}}{m_{\chi}} \right).
\end{equation} Moreover, the Earth is moving through the virialized background dark matter with ``wind speed'' $v_{DM} \sim 200~{\rm km/s}$. These parameters fix the kinematics of any detection experiment. The only additional information is what non-gravitational couplings, if any, the dark matter has with visible matter. See e.g.\ \cite{Lin:2019uvt} for a review and further references. Broadly speaking, the above properties mean that potential dark matter signals fall into two classes determined by the dark matter particle mass (see Fig.~\ref{figure-massrange}). Traditional DM detection has focused on dark matter candidates with masses $m_{\chi} \gtrsim 1~{\rm eV}$, which appear as distinct particles. If these interact with visible matter, they will deposit tiny, discrete impulses (on the order of $p = m_{\chi} v_{DM}$) when they collide with a detector. On the other hand, ultralight dark matter fields of mass $10^{-22}~{\rm eV} \lesssim m_{\chi} \lesssim 1~{\rm eV}$ have enormous occupation numbers, given Eqn.~\eqref{nchi}. The low mass and high occupation number of the quanta mean that the field is bosonic and behaves as a background of oscillating waves of wavelength $\lambda_{\rm dB} \gtrsim 1~{\rm mm}$. This background of waves will be coherent over a timescale $T_{\rm coh} \sim 10^6/\omega_{\chi}$ set by Doppler broadening, where $\omega_{\chi} = m_{\chi} c^2/\hbar$ is the natural frequency of the field~\cite{sikivie1983experimental,hu2000fuzzy}. These models thus produce extremely weak, coherent, persistent signals. Searching for these two classes of signals requires different measurement techniques, which we now discuss separately in more detail. \subsection{Ultralight searches} \begin{figure}[t!] \centering \includegraphics[scale=0.38]{vector.pdf} ~ \includegraphics[scale=0.38]{scalar.pdf} \caption{\emph{Ultralight dark matter searches}.
Left: Detection reach for accelerometer searches of ultralight dark matter \cite{graham2016dark,Carney:2019cio}, taking a vector $B-L$ boson as an example. We assume one day of integration time, and the use of a pair of accelerometers with differential neutron-to-nucleon ratio $\Delta = N_1/A_1 - N_2/A_2 = 0.05$. Upper shaded regions are ruled out by existing torsion-balance \cite{schlamminger2008test,wagner2012torsion,arvanitaki2016sound} and satellite experiments \cite{hees2018violation,berge2018microscope}. Right: Detection reach for strain sensors \cite{arvanitaki2016sound,manley2019searching}, using a scalar field coupled to electrons as an example. The AURIGA Weber bar experiment provides an additional narrow-band constraint \cite{branca2017search}. In both plots, the colored lines labeled by sensitivities represent the lower limit of dark matter parameter space which can be probed with a detector of the given sensitivity. The lower shaded regions give some examples of conjectural theory input: the region in the left plot conflicts with a version of the weak gravity conjecture \cite{ArkaniHamed:2006dz,cheung2018proof}, here applied assuming the lightest $B-L$ coupled particle is a neutrino of mass $0.01~{\rm eV}$. In the right plot, the lower shaded region is favored by naturalness arguments \cite{arvanitaki2016sound}.} \label{figure-ultralightplot} \end{figure} Consider a scenario where a sizeable fraction of the dark matter mass density is made up of a single ultralight field. Examples of such ultralight dark matter candidates include the axion~\cite{sikivie1983experimental}, vector bosons arising by gauging the conservation of baryon minus lepton number ($B-L$) \cite{graham2016dark}, and scalar and pseudoscalar fields coupled through the Higgs portal \cite{piazza2010sub} or the stress tensor \cite{arvanitaki2015searching} (see Table 1 of Ref.~\cite{graham2016dark} for a collation of allowed couplings).
These models are minimal in the sense that they add only a single field to the standard model of particle physics, and introduce no ultraviolet anomalies. The axion couples directly to the electromagnetic and gluon fields, and can thus be searched for using a variety of systems including microwave cavities \cite{du2018search,zhong2018results} and NMR systems \cite{arvanitaki2014resonantly}. The other candidates, however, can couple to quantities proportional to mass density. It is thus natural to search for these types of DM with massive sensors. If DM consists primarily of one of these ultralight fields, the observable signature is an oscillating background of ultralight bosons. This produces a nearly monochromatic, sinusoidal force signal in a massive detector, with strength proportional to the mass, leading to a variety of physical effects. For scalar DM, variations of fundamental constants such as the electron mass or the fine-structure constant would lead to a periodic strain in macroscopic devices, and the possibility of detecting this strain has been explored in several mechanical structures \cite{arvanitaki2016sound,branca2017search,manley2019searching,geraci2019searching}. For pseudoscalar DM candidates, observable signatures can include time-varying nucleon electric dipole moments, spin-torques, and EMFs along magnetic fields \cite{graham2016dark}. For vector DM one can obtain material-dependent couplings, leading to differential accelerations. For a concrete example, consider a vector boson field $A_{\mu}$ arising from a gauged $B-L$ symmetry. This couples to the neutron field $n$ through the neutron number density, that is, through a coupling $g_{B-L} \slashed{A} \overline{n} n$.
The dark matter background of vector bosons then leads to a force on a sensor given by \begin{equation} \label{uldmsignal} F(t) = F_0 N_n g_{B-L} \cos(m_{\chi}c^2 t/\hbar) \end{equation} where $N_n$ is the number of neutrons in the sensor, $F_0 \sim 10^{-15}~{\rm N}$ is set by the dark matter density \eqref{nchi}, and $g_{B-L}$ is an unknown but weak coupling strength \cite{graham2016dark,Carney:2019cio}. Since the coupling is to neutron number as opposed to total mass, a pair of sensors with different neutron-to-nucleon ratios $N/A$ can be used to search for the differential acceleration produced by \eqref{uldmsignal}. In Fig.~\ref{figure-ultralightplot}, we plot the available parameter space in this scenario and the acceleration sensitivities needed for novel searches. At the core, the detection problem here is to sense a weak, persistent, narrow-band signal. Coherent sensing of narrow-band forces is a prototypical application of mechanical sensors, and so these are ideal detection targets for which mechanical sensors are poised to make an immediate impact, particularly at higher frequencies (Hz-GHz) and/or using multiple sensors to coherently integrate the signal. \subsection{Particle-like searches based on recoils} To detect heavier ($m_{\chi} \gtrsim 1~{\rm eV}$), particle-like dark matter candidates, a variety of techniques can be used. The key challenges in this regime can be illustrated by reviewing traditional WIMP detection (see Ref.~\cite{Schumann:2019eaa} for a review). In a liquid noble detector, the WIMPs would occasionally strike an atomic nucleus, causing it to recoil. If sufficient energy is deposited, the recoiling nucleus ionizes or excites nearby atoms, leading to either electron-ion pairs or emission of scintillation photons which can then be detected by charge sensors or photodetectors at the edges of the detector.
This example demonstrates the basic issues: the events are very rare (owing to the tiny dark matter-nucleon cross sections, $\sigma \lesssim 10^{-36}~{\rm cm^2}$ \cite{Akerib:2016vxi}) and the energy deposition is very small (a given WIMP has a mass of about 100 proton masses and a velocity of $\sim 10^5~{\rm m/s}$), leading to only small amounts of ionization or scintillation. Thus any detection program needs to have sufficient target mass to see enough events, as well as very low detection thresholds to see these small energy deposits. We note that many other signals of interest, in particular low-energy neutrinos \cite{cabrera1985bolometric}, have precisely the same properties. The massive mechanical sensing paradigm offers a straightforward solution to the issue of mass: for example, the LIGO detectors have mechanical elements (the interferometer mirrors) with masses of tens of kilograms! On the other hand, smaller mechanical detectors can also enable extremely low-threshold energy detection. There are two basic strategies: detection of localized phonons in bulk materials, and direct monitoring of impulses to the center of mass motion of a single device. \begin{figure}[t] \floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,top},capbesidewidth=.5\linewidth}}]{figure}[\FBwidth] {\caption{Schematic of a phonon-counting experiment with liquid helium in an optomechanical cavity \cite{shkarin2019quantum}. Darker blue indicates superfluid helium, light blue is glass. Blue shading indicates a typical paraxial acoustic mode, and the red shows the optical mode to which it couples. Optical modes with wavelength $1550~{\rm nm}$ couple to acoustic modes with frequency $315~{\rm MHz}$, corresponding to energies around $1.5~{\rm \mu eV}$. An excited phonon mode can convert into an off-resonance photon through a Stokes or anti-Stokes process.
By filtering out the resonant photons, this enables counting of the phonon excitations with temporal resolution set by the photodetector (here on the order of $50~{\rm ns}$). In this example, the fluid is held at a temperature $25~{\rm mK}$ and individual thermal phonons are being counted. These phonons can be cooled out of the cavity mode, to enable detection of athermal phonons (as e.g.\ produced by dark matter collisions with the helium).} \label{figure-phonons}} {\includegraphics[width=.98\linewidth]{harriscavity.pdf}} \end{figure} A number of proposals for the detection of dark matter through bulk phononic excitations currently exist \cite{guo2013concept,Schutz:2016tid,griffin2018directional,knapen2018detection,Kurinsky:2019pgb}, which may extend the sensitivity beyond existing implementations of phonon sensing in cryogenic calorimeters (e.g.~\cite{SuperCDMSSNOLAB:2017,CRESST:2019,Edelweiss:2019}). For example, when a dark matter particle interacts with a nucleus in a bulk crystal, it generates a distortion of the lattice. In particular, if the inverse momentum transfer is larger than the lattice spacing, phonons are excited. The phonons then travel through the material, and can be sensed by calorimetric detectors at the edges of the material. As an example, state-of-the-art transition edge sensors can resolve a total deposited energy in phonons down to energies of a few tens of meV \cite{Fink:2020noh}. This means that searches of this type are sensitive to ``light'' dark matter candidates, of masses in the eV-MeV range. Optomechanical readout of phonons in small samples can reach substantially lower thresholds. For example, single phonons at the micro-eV level can be read out in micromechanical oscillators \cite{cohen2015phonon,riedinger2016non}, superfluid helium~\cite{shkarin2019quantum}, or bulk crystals \cite{jain2020listening}; we show the superfluid helium example in Fig.~\ref{figure-phonons}.
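To see why meV-scale phonon thresholds open up the eV-MeV mass range, one can estimate the maximum elastic recoil energy $E_{\rm max} = 2\mu^2 v^2/m_N$, with $\mu$ the DM-nucleus reduced mass. The numbers below are our own back-of-envelope sketch, assuming a typical relative speed of $\sim 300~{\rm km/s}$ and illustrative target masses:

```python
C = 2.998e8     # speed of light, m/s
V_REL = 3.0e5   # typical DM-nucleus relative speed, m/s (assumed)

def max_recoil_ev(m_chi_ev, m_target_ev):
    """Maximum elastic recoil energy E_max = 2 mu^2 v^2 / m_N (masses in eV/c^2)."""
    mu = m_chi_ev * m_target_ev / (m_chi_ev + m_target_ev)  # reduced mass
    beta2 = (V_REL / C) ** 2
    return 2.0 * mu**2 * beta2 / m_target_ev

# 100 GeV WIMP on a xenon nucleus (~122 GeV): tens-of-keV recoils
print(max_recoil_ev(100e9, 122e9))   # ~5e4 eV
# 1 MeV "light" DM on a helium nucleus (~3.7 GeV): sub-meV recoils
print(max_recoil_ev(1e6, 3.7e9))     # ~5e-4 eV
```

Sub-meV deposits from MeV-scale DM are invisible to keV-threshold noble-liquid detectors but are within reach of the single-phonon optomechanical readout discussed above.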
The primary challenge in such systems is not energy threshold, but instead coupling energy into the phonon modes of interest (which are often purposefully decoupled from the bulk phonon modes in the system to avoid thermal noise). In addition, such systems are small (with mode masses at the $\mu$g to mg scale), so scaling up to a sufficient volume for non-trivial dark matter detection reach is an interesting open problem. If coupling of phonons into the modes of interest could be engineered (even with relatively low efficiencies) such techniques would provide an exciting complement to calorimetric phonon detection experiments. \begin{figure}[t!] \centering \includegraphics[height=4.8cm]{whitepaper-impulse.pdf} \includegraphics[height=4.8cm]{whitepaper-impulse2.pdf} \caption{\emph{Searches for particle-like dark matter}. Here we consider the case where dark matter consists primarily of particles of mass $m_X$, coupling to neutrons through a light mediator (e.g.\ through a potential $V = \alpha_n/r$, where $\alpha_n$ is a small, unknown coupling strength) as an example search target for mechanical impulse sensors. In the left plot, each curve represents a hypothetical sensor (labeled by its mass, readout frequency, and noise level benchmarked to \eqref{SQL}). Sensitivity is lost at low mass because the incoming DM will not have enough momentum to deliver to the device, and at high mass because of the loss of flux (see Eqn.~\eqref{nchi}). In the right plot, we use a nanogram-scale sensor operated at the SQL as an example and show projected constraints compared to currently-existing bounds.
To draw the current bounds, we assume a microscopic realization in which dark matter consists of ``nuggets'' of total mass $m_X$ made of multiple constituents of mass $m_{\chi} \sim 1~{\rm MeV}$, coupled to neutrons through a $B-L$ vector boson of mass $m_{\phi} \sim 0.05~{\rm eV}$ (for discussion of the parametrization of the fiducial DM-nucleon cross section $\sigma_{Xn}$, see Refs.~\cite{coskuner2019direct,Monteiro2020compositeDM}). The XENON1T \cite{Aprile:2017iyp} and CDMS \cite{Agnese:2017jvy} bounds come from pre-existing particle physics experiments, while the fifth-force bounds come from torsion-balance searches \cite{schlamminger2008test,wagner2012torsion,arvanitaki2016sound,heeck2014unbroken}.} \label{figure-impulse} \end{figure} Alternatively, one can monitor the center of mass motion of an entire object (i.e.\ the zero-mode phonon). This technique could be particularly advantageous in the setting where the collision acts coherently on the entire mechanical component, for example when the dark matter couples to the sensor through a long-range force. Here one continuously monitors the center of mass position and looks for momentum transfers larger than the typical noise on the device. The noise floor is ultimately limited by thermal coupling with the environment and by quantum mechanical measurement noise coming from the monitoring of the device \cite{caves1980quantum,caves1981quantum}.
Concretely, the standard quantum limit (SQL) provides a benchmark for a detectable impulse \cite{mozyrsky2004quantum,clerk2004quantum}: \begin{equation} \label{SQL} \Delta p_{\rm SQL} = \sqrt{\hbar m \omega} \approx 1.5~{\rm MeV} \times \left( \frac{m}{1~{\rm ng}} \right)^{1/2} \left( \frac{\omega/2\pi}{1~{\rm kHz}} \right)^{1/2}, \end{equation} where $m, \omega$ are the mass and frequency of the mechanical sensor.\footnote{Here, the frequency $\omega$ should be replaced by the inverse measurement timescale when this exceeds the mechanical frequency, as in the free-mass case $\omega\to 0$.} While methods exist to go below this noise level (see Sec.~\ref{section-detectors}), currently existing devices operating at or even slightly above the SQL are already capable of searching novel regions of DM parameter space, as demonstrated by the initial search in~\cite{Monteiro2020compositeDM}. We describe an example in Fig.~\ref{figure-impulse}. \subsection{Direct gravitational interaction with particle-like dark matter} As an ultimate long-term goal, mechanical sensing could open the possibility of direct detection of particle dark matter \emph{purely through its gravitational interaction with visible matter} \cite{PhysRevD.99.023005,PhysRevD.98.083019,Carney:2019pza}. This coupling is the only one guaranteed to exist, so an experiment with sufficient sensitivity would have the ability to find or completely rule out any dark matter candidate in the mass range for which it is sensitive. This proposal involves the direct monitoring of impulses delivered to sizeable (gram-scale) mechanical sensors, and exploits the coherent nature of the gravitational interaction. Achieving this goal would require realizing noise levels well below the SQL impulse sensing limit, as well as the ability to build and read out a large array of sensors.
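Eq.~\eqref{SQL} is easy to evaluate for any candidate device; the short check below (our sketch, plain SI-unit arithmetic) reproduces the quoted $1.5~{\rm MeV}/c$ benchmark:

```python
import math

HBAR = 1.0546e-34                 # reduced Planck constant, J*s
EV_PER_C = 1.602e-19 / 2.998e8    # 1 eV/c expressed in kg*m/s

def sql_impulse_mev_per_c(mass_kg, freq_hz):
    """Delta p_SQL = sqrt(hbar * m * omega), returned in MeV/c."""
    omega = 2 * math.pi * freq_hz
    dp_si = math.sqrt(HBAR * mass_kg * omega)   # impulse in kg*m/s
    return dp_si / EV_PER_C / 1e6

# 1 ng sensor read out at 1 kHz, the fiducial values in Eq. (SQL)
print(sql_impulse_mev_per_c(1e-12, 1e3))   # ~1.5 MeV/c
```

Since $\Delta p_{\rm SQL} \propto \sqrt{m\omega}$, the benchmark relaxes at lower mass and readout frequency, which must be traded against the interaction rate and bandwidth of the search.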
However, the concept employed is precisely the same as that described in the previous section, namely observation of an impulse to the center of mass of an object. The basic idea can thus be tested in prototype experiments, for example \cite{Monteiro2020compositeDM}. \renewcommand\arraystretch{1.5} \begin{table}[t] \tiny \begin{flushleft} \begin{tabular}{|p{3.5cm}|p{1.3cm}|p{1.7cm}|p{1.2cm}|p{2.2cm}|p{5.6cm}|} \hline Physical device & Mass & Frequency & Temp. & Quantum limit & Sensitivity, e.g. acceleration, strain, force... \\ \hline \end{tabular} \vspace{5pt} Resonant acoustic wave: \vspace{1pt} \\ \begin{tabular}{|p{3.5cm}|p{1.3cm}|p{1.7cm}|p{1.2cm}|p{2.2cm}|p{5.6cm}|} \hline BAW/Weber bar \cite{branca2017search} & 1000 kg & 1 kHz & 4 K & & $h_s \sim 10^{-21}/\sqrt{\rm Hz}$ \\ \cline{1-1} \hline HBAR/phonon counting \cite{chu2017quantum} & 50 $\mu$g & 10 GHz & 10 mK & single phonon & \makecell[lt]{ $\sigma_E \sim 30\ \mu$eV \\ $ h_s \sim 10^{-15}/\sqrt{\rm Hz} $ \\ ($h_s \sim 10^{-9}/\sqrt{\rm Hz}$ broadband below res) } \\ \cline{1-1} \hline superfluid helium cavities \cite{shkarin2019quantum} & 1 ng & 300 MHz & 50 mK & single phonon & \makecell[lt]{ $\sigma_E \sim 1\ \mu$eV } \\ \cline{1-1} \hline \end{tabular} \vspace{5pt} Resonant and below-resonance detectors: \vspace{1pt} \\ \begin{tabular}{|p{3.5cm}|p{1.3cm}|p{1.7cm}|p{1.2cm}|p{2.2cm}|p{5.6cm}|} \hline cantilever optomechanical accelerometer~\cite{guzman2014high} & 25 mg & 10 kHz & 300 K & & \makecell[lt]{$\sqrt{S_{a}} \sim 3 \times 10^{-9}~\mathrm{g/\sqrt{Hz}}$ \\ ($\sqrt{S_{a}} \sim 10^{-7}~\mathrm{g/\sqrt{Hz}}$ broadband below res) } \\ \cline{1-1} \hline SiN-suspended test mass accelerometer \cite{zhou2019testing,krause2012high} & 10 mg & 10 kHz & 300 K & & \makecell[lt]{ $\sqrt{S_{a}} \sim 10^{-7}~\mathrm{g/\sqrt{Hz}}$ \\ ($\sqrt{S_{a}} \sim 10^{-6}~\mathrm{g/\sqrt{Hz}}$ broadband below res)} \\ \cline{1-1} \hline membrane optomechanics \cite{kampel2017improving, mason2019continuous,
underwood2015measurement,tsaturyan2017ultracoherent,norte2016mechanical,reetz2019analysis,st2019swept} & 10 ng & 1.5 MHz & 100 mK & at SQL & \makecell[lt]{$\sqrt{S_{a}} \sim 10^{-7} \mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 10^{-17}~\mathrm{N/\sqrt{Hz}}$ } \\ \cline{1-1} \hline crystalline cantilever for force sensing \cite{mamin2001sub-attonewton} & 0.2 ng & 1 kHz & 200 mK & & \makecell[lt]{$\sqrt{S_{a}} \sim 3 \times 10^{-7} \mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 10^{-18}~\mathrm{N / \sqrt{Hz}}$ } \\ \cline{1-1} \hline \end{tabular} \vspace{5pt} Pendula above resonance: \vspace{1pt} \\ \begin{tabular}{|p{3.5cm}|p{1.3cm}|p{1.7cm}|p{1.2cm}|p{2.2cm}|p{5.6cm}|} \hline LIGO mirror \cite{martynov2016sensitivity} & 10 kg & 10 Hz -- 10 kHz & 300 K & SN limited above 100 Hz & \makecell[lt]{$\sqrt{S_{a}} \sim 4 \times 10^{-15}~\mathrm{g/\sqrt{Hz}}$ at 100 Hz \\ $\sqrt{S_{x}} \sim 10^{-19}~\mathrm{m/\sqrt{Hz}}$} \\ \cline{1-1} \hline suspended mg mirror \cite{corbitt2007all-optical,matsumoto2019demonstration,catano2019high} & 1 mg & 1 -- 10 kHz & 300 K & factor of 20 in displacement from (off-resonant) SQL & \makecell[lt]{$\sqrt{S_{a}} \sim 7 \times 10^{-11} ~\mathrm{g/\sqrt{Hz}}$ at 600 Hz \\ $\sqrt{S_{x}} \sim 5 \times 10^{-17}~\mathrm{m/\sqrt{Hz}}$ } \\ \cline{1-1} \hline crystalline cantilever \cite{cripe2019measurement} & 50 ng & 10 -- 100 kHz & 300 K & at (off-resonant) SQL & \makecell[lt]{$\sqrt{S_{a}} \sim 2 \times 10^{-7}~\mathrm{g/\sqrt{Hz}}$ at 20 kHz \\ $\sqrt{S_{x}} \sim 10^{-16}~\mathrm{m/\sqrt{Hz}}$ } \\ \cline{1-1} \hline \end{tabular} \vspace{5pt} Levitated and free-fall systems: \vspace{1pt} \\ \begin{tabular}{|p{3.5cm}|p{1.3cm}|p{1.7cm}|p{1.2cm}|p{2.2cm}|p{5.6cm}|} \hline LISA pathfinder~\cite{anderson2018experimental} & 15 kg & 1 -- 30 mHz & 300 K & & $\sqrt{S_{a}} \sim 10^{-15}~\mathrm{g/\sqrt{Hz}}$ \\ \cline{1-1} \hline mm magnetically-levitated sphere \cite{timberlake2019magnetic} & 4 mg & 20 Hz & 5~K & & \makecell[lt]{ $\sqrt{S_{a}} \sim 2 
\times 10^{-7}~\mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 8 \times10^{-12}~\mathrm{N / \sqrt{Hz}}$ } \\ \cline{1-1} \hline sub-mm magnetically-levitated sphere \cite{Lewandowski:2020cuq} & 0.25 $\mu$g & 1--20 Hz & laser cool to $<9$~K & & \makecell[lt]{$\sqrt{S_{a}} \sim 10^{-7}~\mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 2 \times10^{-16}~\mathrm{N / \sqrt{Hz}}$ } \\ \cline{1-1} \hline optically trapped microsphere \cite{monteiro2020optical} & 1 ng & 10 -- 100 Hz & laser cool to 50~$\mu$K & factor of 100 in displacement from (off-resonant) SQL & \makecell[lt]{$\sqrt{S_{a}} \sim 10^{-7} ~\mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 10^{-18}~\mathrm{N / \sqrt{Hz}}$ } \\ \cline{1-1} \hline optically trapped nanosphere \cite{delic2019motional,tebbenjohanns2020optical} (rotational \cite{2020NatNa..15...89A}) & 3 fg & 300 kHz & laser cool to 12 $\mu$K & ground state & \makecell[lt]{ $\sqrt{S_{a}} \sim 7 \times 10^{-4}~\mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 2\times10^{-20}~\mathrm{N / \sqrt{Hz}}$ \\ $ \sqrt{S_\tau} \sim 10^{-27}~\mathrm{N m/\sqrt{Hz}}$ } \\ \cline{4-6} \hline trapped ion crystal \cite{biercuk2010ultrasensitive} & $10^{-6}$ fg & 1 MHz & & & \makecell[lt]{$\sqrt{S_{a}} \sim 50~\mathrm{g/\sqrt{Hz}}$ \\ $\sqrt{S_{f}} \sim 4\times10^{-22}~\mathrm{N / \sqrt{Hz}}$ } \\ \cline{1-1} \hline \end{tabular} \end{flushleft} \caption{Examples of currently-available mechanical sensors. Sensitivities for continuous sensing are represented by the relevant noise power spectral densities (e.g.\ $S_a$ is the acceleration noise power), or threshold ($\sigma_E$ is the single-phonon detection threshold). 
Here we summarize solid-state mechanical detectors, although atom interferometers can be characterized by similar metrics.} \label{device_table} \end{table} \section{Available mechanical sensors and future challenges} \label{section-detectors} Mechanical devices have been demonstrated with masses from single ions to kilograms, and on frequency scales from millihertz to terahertz. Precision sensing has long used massive detectors in the context of gravitational wave searches employing interferometric or resonant detectors, e.g.~LIGO. On a smaller scale, accelerometers and other mechanical devices are ubiquitous in modern technology, and increasingly specialized mechanical systems with extreme environmental isolation are important tools for storage and transduction of quantum information~\cite{aspelmeyer2014cavity}. As discussed above, many of the scientific motivations favor larger volumes or masses to increase the rate of dark matter interactions in the detector. This motivates use of more massive systems, which also provide better sensitivity to accelerations (scaling as the square root of the mass). However, also important are the energy range of interest, the available probes of specific mechanical modes, ever-present noise sources, and scalability. To understand the scope of different available platforms, we present in Table~\ref{device_table} different detector types and a sampling of sensitivities achieved to date in specific experiments. This list is meant to be exemplary, not exhaustive. It should also be considered a starting point: rapid progress in mechanical detectors is being made in many fields, and, as exemplified by the workshop on which this white paper is based, increasing cross-development between sensors of widely differing scales will lead to fruitful technical improvements. A central issue is to map the advantages of different physical architectures to different searches.
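A useful cross-check when reading Table~\ref{device_table} is that the acceleration and force noise entries for a single device are tied together by $\sqrt{S_F} = m\sqrt{S_a}$. The arithmetic below (ours, not part of the table) verifies this relation for two of the rows:

```python
G = 9.81  # m/s^2 per "g"

def force_noise_n(mass_kg, accel_noise_g):
    """Convert acceleration noise (g/rtHz) into force noise (N/rtHz): sqrt(S_F) = m * sqrt(S_a)."""
    return mass_kg * accel_noise_g * G

# membrane optomechanics row: 10 ng at ~1e-7 g/rtHz
print(force_noise_n(10e-12, 1e-7))    # ~1e-17 N/rtHz, as quoted
# crystalline force-sensing cantilever row: 0.2 ng at ~3e-7 g/rtHz
print(force_noise_n(0.2e-12, 3e-7))   # ~1e-18 N/rtHz, as quoted
```

The same conversion lets one compare devices quoted only in one metric against those quoted in the other.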
For cases where an impulse detector is desired, an essentially free mass can be created by using a low-frequency pendulum measured above its resonance frequency, i.e.\ at timescales faster than an oscillation period. An interesting alternative is to levitate particles and then release them after state preparation to perform measurements in free-fall. Ultralight searches are likely to be first pursued by resonant detectors---ideally tunable resonant detectors. The center of mass motion of a cantilever, membrane \cite{Manley:2020mjq}, or even levitated sphere is appropriate in this situation. For ultralight searches in which the signal appears as a time-variation of fundamental constants or atomic length scales, producing an effective strain that excites breathing modes, bulk acoustic modes are of interest~\cite{manley2019searching}. Importantly, detection of such bulk acoustic waves may scale to large volumes using clever readout techniques, as exemplified by recent single-phonon detection of a bulk acoustic resonator~\cite{chu2017quantum}, and in the long-standing ability to read out motion of very large Weber bars~\cite{weber1966observation,cerdonio1997ultracryogenic}. Athermal phonon detection may also benefit from this scaling if athermal phonons created in the bulk of a material could be coupled into the readout modes of interest, but could also be pursued in arrays of smaller sensors. Different devices can also support detection of additional signatures or couplings, e.g.\ electric or magnetic charges or the material polarizability. The quest to go beyond the sensitivities presented in Table~\ref{device_table} is ongoing, and we list here a few examples of how advances in both conventional and non-conventional technologies for precision sensors are poised to make interesting progress.
Superfluid helium is a pristine system that hosts mechanical modes; recent advances \cite{shkarin2019quantum} in observing the quantum motion of this liquid in a small cavity are promising, and this system could easily be scaled to larger volumes and numbers of samples by simply immersing more probes in a single vat of liquid helium. SiN micromechanical membranes offer a unique possibility to use strain to move the resonant frequency of a mechanical detector by orders of magnitude while maintaining low dissipation~\cite{thompson2008strong}, allowing searches over a wide range of DM masses. By expanding to larger membranes \cite{moura2018centimeter,Manley:2020mjq} it should be possible to achieve kHz-scale resonant detectors with much larger masses than traditional cantilevers. While optical readout is typical of precision interferometry, electrical readout is poised to make important contributions, both in the context of phonon readout through superconducting qubits~\cite{chu2017quantum} and through advances in magnetic couplings~\cite{zoepfl2019single}. Detection of the motion of levitated nanospheres is reaching quantum measurement limits~\cite{delic2019motional}. Scaling the mass of levitated systems in the quantum regime to the ng scale and above may offer extremely low-threshold mechanical sensors with substantial mass that are well-isolated from environmental noise \cite{childress2017cavity,monteiro2020optical,timberlake2019magnetic}. Readout of ultra-low-energy phonons is currently achieved in small devices; if these techniques could be adapted to read out larger volumes---and if the challenging problem of coupling energy from such a volume into the modes of interest could be overcome---the potential gains are significant. Lastly, the growth of gravitational wave astronomy will undoubtedly bring advances in materials for mirrors, mirror coatings, and suspensions that will advance all precision measurements based upon suspended pendula.
Reducing both technical and quantum measurement-added noise sources will allow for progressively increasing sensitivity to dark matter. In general, devices operating at lower frequencies tend to be dominated by thermal or other technical noise sources, while higher-frequency devices are limited by shot noise or more generally by quantum measurement noise. For systems in a $10~{\rm mK}$ dilution refrigerator, for example, the cutoff is at $\omega \sim k T/\hbar \sim 10^{9}~{\rm s^{-1}}$. The primary contaminant in a dark matter search is heating of the sensor, at a rate $\Gamma \sim T_{\rm bath}/Q$, where $Q$ is the mechanical quality factor. Thus fabrication of lower-dissipation (higher-$Q$) devices will be of critical importance. We can see directly in Table~\ref{device_table} that a range of experiments are now impinging on quantum noise limits, and so methods to operate devices well into the quantum-limited regime (i.e.\ true ``quantum sensors'') are of substantial interest. Measurement-added noise has been suppressed below the shot noise limit at LIGO~\cite{abadie2011gravitational}, and it has likewise been driven to the standard quantum limit~\cite{kampel2017improving,cripe2019measurement} and beyond~\cite{mason2019continuous} with membranes and cantilevers. Quantum sensing techniques can further reduce these noise levels using squeezed readout light \cite{aasi2013enhanced,tse2019quantum} and/or a variety of backaction-evasion techniques \cite{braginsky1980quantum,pereira1994backaction,clerk2008back,Ghosh:2019rsc}. In the context of free-mass targets, nanogram levitated spheres have been cooled to their quantum ground state~\cite{delic2019motional}. Ultimately, to detect momentum transfers far below the SQL, it may be necessary to prepare the mechanics in a more extreme non-classical state, such as a coherent spatial superposition, and then perform interferometric measurement \cite{geraci2015sensing,wan2016free,pino2018chip}.
The sensitivity of such superpositions to small impulses is in principle unbounded, scaling with the spatial extent and temporal duration of the quantum coherence that is achieved. In addition to sub-SQL sensitivities to classical forces, such an approach can offer the unique possibility of detecting sources of anomalous test-mass diffusion (e.g., DM-induced Brownian motion), which can cause decoherence in a matter interferometer \cite{riedel2013direct,riedel2017decoherence} even when the mean momentum transfer is negligible \cite{riedel2015decoherence}. Construction and operation of an \textit{array of mechanical sensors} poses an interesting technical challenge with applications to many of the dark matter searches described above. Performing differential measurements on multiple sensors would allow for rejection of many backgrounds. In particular, use of sensors with different materials will enable discrimination against signals that act in a material-independent fashion, for example gravitational noise. Relative accelerations between objects with different numbers of neutrons could identify ultralight fields coupling to $B-L$. Coherent integration of multiple sensors would be highly valuable, enabling scaling in sensitivity that is linear with the number $N$ of sensors as opposed to the incoherent $\sqrt{N}$ enhancement. Understanding the detailed nature of sensor-sensor interactions in a tightly packed array will be important. These interactions could be exploited to enhance measurement sensitivity, in particular through entanglement of multiple sensors \cite{giovannetti2004quantum}. In the near term, a number of demonstrator experiments could pave the way for future, scalable dark matter detection. Given the existing constraints on ultralight dark matter, current or near-future devices could already perform non-trivial searches in this parameter space.
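The linear-versus-$\sqrt{N}$ distinction above can be made explicit with a schematic signal-to-noise argument (ours, under the idealized assumption of identical sensors with independent noise). For a phase-coherent signal of amplitude $s$ in each of $N$ sensors with per-sensor noise variance $\sigma^2$, summing amplitudes before squaring gives a power signal-to-noise ratio \begin{equation*} {\rm SNR}_{\rm coh} = \frac{(N s)^2}{N \sigma^2} = N \, \frac{s^2}{\sigma^2}, \end{equation*} whereas averaging the $N$ individual power estimates only reduces the fluctuation of the noise background by $\sqrt{N}$, so that ${\rm SNR}_{\rm incoh} \simeq \sqrt{N}\, s^2/\sigma^2$. Realizing the coherent case requires phase stability across the array over the integration time, e.g.\ within one coherence time $T_{\rm coh}$ of an ultralight field.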
Operating a small array of sensors as a coherent detector of ultralight dark matter would demonstrate the basic techniques needed as well as help to identify challenges in scaling to larger numbers. Moving toward detection of short impulses, demonstration of ultra-low threshold phonon readout in a meaningful volume would be of substantial value. Demonstrating that optomechanical impulse sensing allows for backaction noise evasion would likewise be extremely valuable, and allow for a more detailed understanding of the potential limitations of such a technique, in particular due to optical losses. \section{Conclusions} Dark matter constitutes one of the most fundamental mysteries in modern science: what is the nature of this strange mass, taking up a quarter of the universe's energy budget? As the search for dark matter enters maturity, new theoretical and experimental directions are needed. Mechanical sensing technologies, especially with quantum-sensing techniques that can enable measurement past traditional quantum limits, offer an exciting route to new experimental searches. Deploying currently available technology could have immediate impact, while longer-term prospects will require some technical advances. On the experimental side, a number of basic technological challenges must be overcome, and demonstrations of the core search techniques will be of critical importance. Data processing techniques and the application of lessons learned from previous experiments about the nature of potential background signals will require development tailored to these experimental approaches. Looking toward the longer term, interdisciplinary collaborative efforts and the construction and use of multiple sensors as a coherent detector offer a fascinating set of problems. Overall, the wide variety of platforms and scales available with these techniques has the potential to make significant impact across a wide swath of the dark matter landscape.
Future developments should only continue to improve sensitivities and detection reach. Further collaboration between the mechanical quantum sensing and particle physics communities will undoubtedly lead to even more possibilities than those outlined here. \section*{Acknowledgements} We thank Charles W. Clark, Yiwen Chu, Tom Lebrun, and Jon Pratt for comments, and Yoni Kahn and Masha Baryakhtar for suggesting the relevance of the weak gravity conjecture in Fig.~\ref{figure-ultralightplot}. Yogesh S. S. Patil, Lucy Yu, and Sean Frazier produced the images in Fig.~\ref{figure-phonons}. This white paper originated with a workshop held at the Joint Quantum Institute at the University of Maryland, October 28--29, 2019. This workshop was funded in part by the Gordon and Betty Moore Foundation, through Grant GBMF6210. We also gratefully acknowledge support from the JQI (an NSF Physics Frontier Center, award number 1430094), and from JILA (an NSF PFC, award number 1734006) to run the workshop. We thank the Aspen Center for Physics for hospitality during the workshop ``Quantum Information and Systems for Fundamental Physics'', where part of the writing was completed. \bibliographystyle{utphys-dan}
\section{Introduction} The smart grid (SG) is the next generation of the traditional power grid. It integrates information and communication technologies with the traditional power grid to provide two-way communications between the grid's major entities, including grid operators, electricity vendors, and electricity consumers, to ensure the efficient and reliable operation of the grid. One of the main components of the SG is the advanced metering infrastructure (AMI), whose networks connect smart meters (SMs) installed at consumers' houses to the grid operators and vendors. Multi-authority AMI networks, which are deployed in most European countries and several states in the U.S., allow energy deregulation, i.e., electricity retailing through different electricity vendors \cite{EU, DEP2SA, EPDA}. Therefore, not only can consumers choose from a number of independent third-party electricity vendors, but pricing options are also more plentiful due to the competition between these different vendors. \autoref{fig:multicast_network_model} shows the conceptual architecture of a multi-authority AMI network \cite{EPDA}. As shown in the figure, data communication can be either uplink or downlink communication. In the uplink data communication, data is sent by SMs to grid operators and vendors. This allows the automated collection of metering data, in which grid operators and vendors collect fine-grained power consumption data (PCD) at high rates, e.g., every few minutes, for real-time grid monitoring and energy distribution management. For example, fine-grained data analysis can be used for the reduction of the peak-to-average ratio, which can help in preventing blackouts, i.e., failures to supply electricity \cite{managment1,managment2,report_june_2017}. Also, fine-grained PCD are needed for real-time price-based demand/response programs, in which electricity prices vary depending on the supply-to-demand ratio, especially during peak hours \cite{DR1,DR2}.
On the other hand, in downlink data communication, data is sent by grid operators or electricity vendors to a group of SMs. The downlink communication should ensure secure multicast for different applications. For example, sending firmware and configuration updates to a group of SMs in specific areas requires secure multicast \cite{baza2018blockchain,tonyali2017attribute2}. Also, in direct load control (DLC) demand/response programs, grid operators need to send DLC messages to a group of users that subscribe to the same DLC demand/response plan, in order to turn specific loads off/on during peak/regular hours \cite{roy2014lte,saxena2015exploiting}. Moreover, electricity vendors may send charging schedules to a group of users in selected areas to charge their electric vehicles or home batteries \cite{nabil2019priority,pazos2018secure}. Furthermore, electricity vendors may send energy trading requests to a group of users in selected areas that are subscribed to energy charging/discharging plans, asking for energy injection into the SG during peak hours \cite{sherif2018privacy,baza2018blockchain2}. \begin{figure*}[!t] \centering \includegraphics[clip=true,width=0.9 \textwidth]{Figs/Multicast_Network_Model.pdf} \caption{Network model for SG downlink communications.} \label{fig:multicast_network_model} \vspace{-5mm} \end{figure*} Extensive research has been conducted to study the security and privacy issues in AMI networks. However, most of the existing schemes address consumers' privacy, data integrity, and authenticity in uplink communication \cite{survey1}. Few schemes have studied the security of downlink communications in AMI networks. Moreover, the IEEE 802.11 protocol, which is the underlying protocol for AMI networks, cannot be used for secure multicast communication efficiently and effectively \cite{tonyali2017attribute}.
Therefore, there is a need for a scheme that not only allows secure multicast communication but also considers the unique characteristics of multi-authority AMI networks. Specifically, a good secure multicast scheme should ensure message confidentiality, i.e., the multicasted messages can be decrypted only by the intended users. In addition, the scheme should allow dynamic group memberships, i.e., members' enrollment/revocation should be done efficiently and promptly, since members can subscribe/unsubscribe to any plan with any vendor at any time. Moreover, users should be able to authenticate the senders of the multicasted messages. Furthermore, the message non-repudiation property should be achieved. In order to address the aforementioned challenges, we propose in this paper a multi-authority attribute-based signcryption scheme that can be used to secure SG downlink communications. We construct our scheme based on, but not limited to, the multi-authority attribute-based encryption (MA-ABE) scheme proposed in \cite{RW2015_MA_ABE}. To the best of our knowledge, this paper proposes the first fully decentralized multi-authority attribute-based signcryption scheme that simultaneously ensures data confidentiality, sender authentication, and non-repudiation, and allows prompt attribute revocation. The remainder of this paper is organized as follows. Related works are discussed in Section \ref{sec:multicast_related_works}. The considered system models and the design goals are presented in Section \ref{sec:multicast_model_requirements}. Preliminaries are given in Section \ref{sec:multicast_preliminaries}. The proposed scheme is explained in Section \ref{sec:multicast_scheme}. The security analysis and performance evaluation are given in Sections \ref{sec:multicast_security_analysis} and \ref{sec:multicast_performance}, respectively. Conclusions are drawn in Section \ref{sec:multicast_conclusions}.
\section{Related Works} \label{sec:multicast_related_works} The SG downlink communication has been considered in several schemes \cite{ye2015hibass,alharbi2016efficient,baza2015efficient}. However, these schemes consider only broadcast downlink communication and cannot support multicast communication. Several schemes have been proposed to ensure fine-grained access control and/or secure multicast for SG communications \cite{liu2014achieving,fadlullah2012toward,ABSC2015}. Based on the ABE scheme proposed in \cite{LW2011_MA_ABE}, Liu \textit{et al.} proposed a multi-authority access control scheme with attribute revocation for the SG \cite{liu2014achieving}. In the proposed scheme, if a user's attributes can satisfy the access policy associated with a ciphertext, this user can decrypt that ciphertext only after receiving a unique token from a central entity called the third-party auditor (TPA). Therefore, the scheme in \cite{liu2014achieving} cannot support multicast communication efficiently, since the TPA must send a unique token to each member in a multicast group using unicast communication, i.e., a unicast downlink communication is needed to decrypt any multicast message. In \cite{fadlullah2012toward}, Fadlullah \textit{et al.} proposed a secure multicast scheme for SG communications using key-policy attribute-based encryption (KP-ABE) \cite{ABE2}. However, the scheme is limited to a single attribute authority, i.e., a single authority controls all attributes. Also, it does not support sender authentication and non-repudiation. In \cite{ABSC2015}, Hu \textit{et al.} proposed an attribute-based signcryption scheme to secure multicast communications in the SG. The scheme proposes a modification to CP-ABE \cite{ABE3} in order to achieve attribute-based encryption, data-origin authenticity, and non-repudiation. However, the scheme is limited to a single attribute authority and does not support attribute revocation.
Different from the above schemes, our scheme simultaneously supports (1) multiple authorities issuing and controlling their own attributes; (2) data-origin authenticity and non-repudiation; and (3) prompt attribute revocation. \section{System Models and Design Goals} \label{sec:multicast_model_requirements} \subsection{Network Model} The considered network model is shown in \autoref{fig:multicast_network_model}. This model was used in \cite{DEP2SA, EPDA} to secure uplink smart grid communications. In this paper, we aim to secure the downlink communications. The network model has the following entities. \begin{itemize} \item \textit{Distribution Network Operators} (DNOs). We consider a set of DNO companies, $\mathbb{D}=\{D_j, 1 \leq j \leq N_d\}$. Each $D_j$ is licensed to distribute electricity in a particular geographic area $j$. Each DNO manages and operates the distribution networks within its area. \item \textit{Electricity Vendors}. We consider a set of electricity vendor companies, $\mathbb{V}=\{V_k, 1 \leq k \leq N_s\}$. Each $V_k$ is responsible for supplying electricity to its users, who may be located in different areas. \item \textit{Users}. We consider a set of users $\mathbb{U}=\{u_i, 1 \leq i \leq N_u\}$. Users can change from one vendor to another at any time. In addition, users can add, change, and remove plans offered by the same vendor at any time. An SM, which communicates with the DNOs and electricity vendors through a node called the data communication company (DCC), is installed at each user's house. \item \textit{Data Communication Company} (DCC). It has the responsibility of delivering the downlink communications received from operators and vendors to users. \item \textit{Networking Facilities}. They form a hierarchical network structure to connect the DCC to the SMs at the users' side through a WAN-GW, a NAN-GW, and a BAN-GW, as shown in the figure.
\end{itemize} \subsection{Threat Model} There exists an adversary $\mathscr{A}$ that can eavesdrop on all transmitted messages. $\mathscr{A}$ may try to decrypt the multicasted messages to reveal any sensitive information sent to any group of users. Users may also try to breach data confidentiality, i.e., they may try to decrypt the multicasted messages intended for other groups of users. In addition, a malicious user may collude with other users or with $\mathscr{A}$ in order to decrypt a ciphertext that they cannot decrypt individually. Moreover, $\mathscr{A}$ may try to launch active attacks by injecting malicious messages to any group of users, e.g., sending malware instead of firmware updates to gain full control over their devices. \subsection{Design Goals} Based on the aforementioned network and threat models, the following goals should be achieved. \begin{enumerate} \item \textit{Secure multicast and data confidentiality}. Only the users selected by the grid operators or vendors should be able to decrypt the multicasted messages. Other users should be prevented from accessing these messages. \item \textit{Collusion resistance}. Users that are not supposed to decrypt a ciphertext individually should not be able to decrypt it even if they collude by combining their secret keys. \item \textit{Sender authentication and non-repudiation}. Users should be able to authenticate the sender of the multicasted messages. Messages from $\mathscr{A}$ should be detected and discarded. Also, the non-repudiation property should be achieved. \item \textit{Prompt revocation}. Only valid, i.e., non-revoked, users should be able to decrypt the multicasted ciphertext. The revocation process should take effect immediately, without any delays. \end{enumerate} \section{Preliminaries} \label{sec:multicast_preliminaries} \subsection{The Chinese Remainder Theorem} Let $\{q_1, q_2, \dots, q_m\}$ be $m$ pairwise relatively prime positive integers and let $\{b_1, b_2, \dots, b_m\}$ be $m$ arbitrary integers.
The Chinese Remainder Theorem (CRT) states that the system of congruences $ B \equiv b_i \mod q_i \text{ for } 1 \leq i \leq m $ has a unique solution modulo $Q= \prod_{i=1}^{m} q_i$. The unique solution $B$ is given by \begin{equation*} B=\sum_{i=1}^{m} b_iQ_iy_i \mod Q \end{equation*} where $Q_i=\frac{Q}{q_i}$ and $y_i \equiv \frac{1}{Q_i} \mod q_i$ for $1 \leq i \leq m$. \subsection{Multi-Authority Attribute-Based Encryption \cite{RW2015_MA_ABE}} Let $\mathcal{U}=\{u_1, \dots, u_n\}$ be the universe of the attributes, $\mathcal{U}_\Theta=\{\theta_1, \dots, \theta_n\}$ be the universe of the attribute authorities controlling the $n$ attributes, and $\mathcal{GID}=\{\text{GID}_1, \dots, \text{GID}_m\}$ be the universe of the global identities that identify $m$ users. \subsubsection{Linear Secret Sharing and Access Policy} We use the same definition of linear secret sharing (LSS) and access policy as in \cite{RW2015_MA_ABE} and \cite{LW2011_MA_ABE}. Any monotonic boolean formula over $\mathcal{U}$ can be represented as an access matrix as follows. Let $p$ be a prime. A secret-sharing scheme $(\Pi)$ over a set of attributes $\mathcal{U}$ is called linear (over $\mathbb{Z}_p$) if \begin{enumerate} \item The $\ell$ shares of a secret $z \in \mathbb{Z}_p$, one share per attribute, form a vector $\boldsymbol \lambda$ over $\mathbb{Z}_p$. \item There exists a matrix $A$ called the share-generating matrix for $\Pi$. The matrix $A$ has $\ell$ rows and $n$ columns. For all $ 1 \leq x \leq \ell$, the $x^{th}$ row of $A$ is labeled by an attribute $\delta(x)$, where $\delta$ is a function that maps rows of $A$ to attributes from $\mathcal{U}$, i.e., $\delta: \{1,\dots, \ell\} \rightarrow \mathcal{U}$.
When we consider the column vector $\boldsymbol{v} = (z, r_2, \dots , r_n)$, where $\{r_2, \dots, r_n\} \xleftarrow[]{R} \mathbb{Z}_p$ are randomly chosen, then $\boldsymbol \lambda = A\boldsymbol{v}$ is the vector of $\ell$ shares of the secret $z$ according to $\Pi$. The share $\lambda_x$ belongs to the attribute $\delta(x)$. \end{enumerate} As mentioned in \cite{RW2015_MA_ABE,LW2011_MA_ABE}, each secret-sharing scheme should satisfy the following requirements: \begin{itemize} \item A reconstruction requirement, i.e., each authorized set of attributes can reconstruct the secret. \item A security requirement, i.e., other sets of attributes, unauthorized sets, cannot reveal any information about the secret. \end{itemize} More precisely, let $S$ denote an authorized set of attributes and let $I$ be the set of rows whose labels are in $S$. There exist constants $\{c_i\}_{i \in I} \in \mathbb{Z}_p$ such that for any valid shares of a secret $z$ according to $\Pi$, it is true that: $\sum_{i \in I} c_i \lambda_i=z$, or equivalently $\sum_{i \in I} c_i \boldsymbol{A}_i=(1, 0, \dots, 0)$, where $\boldsymbol{A}_i$ is the $i^{th}$ row of $A$. In Appendix \ref{appx_A}, we give an example of generating the access matrix, computing the vector of secret shares $\boldsymbol \lambda$ and the reconstruction coefficients $\{c_i\}_{i\in I}$ from a boolean formula. \subsubsection{Algorithms} The scheme in \cite{RW2015_MA_ABE} consists of the following algorithms. \begin{itemize} \item $\mathsf{GlobalSetup}(1^\kappa)\rightarrow \text{GP}$. This algorithm takes a security parameter $\kappa$ and outputs the public global parameters for the system. The global parameters (GP) include $\mathcal{U}$, $\mathcal{U}_\Theta$, $\mathcal{GID}$, and $\mathsf{T}$, which is a mapping function that maps each attribute in $\mathcal{U}$ to a unique authority in $\mathcal{U}_\Theta$, i.e., $\mathsf{T}: \mathcal{U}\rightarrow \mathcal{U}_\Theta$.
\item $\mathsf{AuthoritySetup}(\text{GP}, \theta)\rightarrow \text{PK}_\theta,\text{SK}_\theta$. This algorithm generates a public/private key pair $\{\text{PK}_\theta,\text{SK}_\theta\}$ for each attribute authority $\theta \in \mathcal{U}_\Theta$. \item $\mathsf{KeyGen}(\text{GID}, \theta , u, \text{SK}_\theta , \text{GP}) \rightarrow \text{SK}_{\text{GID}, u}$. This algorithm takes the global identity of a user $\text{GID} \in \mathcal{GID}$, an attribute $u \in \mathcal{U}$, the authority $\theta$ controlling the attribute $u$, the secret key of the authority $\text{SK}_\theta$, and the global parameters $\text{GP}$. The algorithm outputs $\text{SK}_{\text{GID}, u}$, which is a secret key for the identity-attribute pair used for decryption. \item $\mathsf{Encrypt}(M, (A,\delta) , \{\text{PK}_\theta\}, \text{GP}) \rightarrow \text{CT}$.\\ This algorithm takes a message $M$, an access policy $(A,\delta)$, a set of public keys $\{\text{PK}_\theta\}$ of the authorities controlling attributes in the access policy, and the global parameters $\text{GP}$. The algorithm outputs the ciphertext $\text{CT}$. \item $\mathsf{Decrypt}(\text{CT}, \{\text{SK}_{\text{GID}, u}\}, \text{GP}) \rightarrow M \text{ or } \perp$.\\ This algorithm takes the ciphertext $\text{CT}$, the set of secret keys $\{\text{SK}_{\text{GID}, u}\}$ of a single user with identity $\text{GID}$ corresponding to different attributes, and the global parameters, and outputs $M$ if and only if the attribute set associated with $\{\text{SK}_{\text{GID}, u}\}$ can satisfy the access policy of the ciphertext; otherwise, decryption fails. \end{itemize} \section{The proposed scheme} \label{sec:multicast_scheme} In this section, we first provide the construction of our attribute-based signcryption scheme. Then, we discuss how the scheme can be applied to secure the SG downlink communications.
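As a concrete illustration of the CRT computation recalled in the preliminaries, the following minimal Python sketch evaluates $B=\sum_i b_i Q_i y_i \bmod Q$; the moduli and residues are hypothetical toy values, not parameters of the scheme:

```python
from math import prod

def crt_solve(residues, moduli):
    """Solve B = b_i (mod q_i) for pairwise relatively prime moduli
    via the constructive CRT formula B = sum(b_i * Q_i * y_i) mod Q."""
    Q = prod(moduli)
    B = 0
    for b_i, q_i in zip(residues, moduli):
        Q_i = Q // q_i               # Q_i = Q / q_i
        y_i = pow(Q_i, -1, q_i)      # y_i = Q_i^{-1} mod q_i
        B += b_i * Q_i * y_i
    return B % Q

# Toy example: B = 2 (mod 3), B = 3 (mod 5), B = 2 (mod 7).
B = crt_solve([2, 3, 2], [3, 5, 7])
assert B == 23 and all(B % q == b for b, q in zip([2, 3, 2], [3, 5, 7]))
```

The three-argument `pow(Q_i, -1, q_i)` computes the modular inverse and requires Python 3.8 or later.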
\subsection{Definitions} Let $\mathcal{U}_1=\{u_1, \dots, u_j\}$ be the universe of $j$ attributes, $\mathcal{U}_\Theta=\{\theta_1, \dots, \theta_j\}$ be the universe of the attribute authorities controlling the $j$ attributes, $\mathcal{U}_\Phi=\{\phi_1, \dots, \phi_k\}$ be the universe of entities allowed to signcrypt messages, $\mathcal{U}_2=\{s_1, \dots, s_k\}$ be the universe of $k$ identity attributes corresponding to $k$ signers, $\mathcal{GID}=\{\text{GID}_1, \dots, \text{GID}_m\}$ be the universe of the global identities that identify $m$ users, and $\mathcal{Q}=\{q_1, \dots, q_m\}$ be $m$ pairwise relatively prime positive integers, where each prime $q_i$ is assigned to the user with $\text{GID}_i$. Let $\mathcal{U} = \mathcal{U}_1 \cup \mathcal{U}_2$ and $\mathcal{U}_\Psi = \mathcal{U}_\Theta \cup \mathcal{U}_\Phi$. In addition, we use the same access structure as \cite{RW2015_MA_ABE,LW2011_MA_ABE} with an additional restriction. The access structure, encoded as a monotonic boolean formula over $\mathcal{U}_1$ and $\mathcal{U}_2$, should be of the form ``The signer identity attribute $s \in \mathcal{U}_2$'' \textbf{AND} ``any monotonic boolean formula over $\mathcal{U}_1$''. Therefore, in order to designcrypt a signcrypted text, a user should do the following: \begin{itemize} \item Use the verification key corresponding to the signer $\phi$ controlling the signer identity attribute $s$. \item Possess attributes satisfying the second part of the boolean formula. \end{itemize} Moreover, let $G_x \subset \mathcal{GID}$ be the set of users who hold an attribute $u_x$. We refer to $G_x$ as the access list of the attribute $u_x$. Let $\mathcal{G}=\{G_1, \dots, G_j\}$ be the universe of access lists of the $j$ attributes defined in $\mathcal{U}_1$. \subsection{Algorithms} Our scheme consists of the following eight algorithms.
\begin{enumerate} \item $\mathsf{GlobalSetup}(1^\kappa)\rightarrow \text{GP}$.\\ This algorithm takes a security parameter $\kappa$ and outputs the public global parameters for the system. The global parameters (GP) include $\mathcal{U}$, $\mathcal{U}_\Psi$, $\mathcal{GID}$, and $\mathsf{T}$, which is a mapping function that maps each element in $\mathcal{U}$ to a unique element in $\mathcal{U}_\Psi$, i.e., $\mathsf{T}: \mathcal{U}\rightarrow \mathcal{U}_\Psi$. More specifically, $\mathsf{T}$ maps each element in $\mathcal{U}_1$ to a unique element in $\mathcal{U}_\Theta$ and maps each element in $\mathcal{U}_2$ to a unique element in $\mathcal{U}_\Phi$. \item $\mathsf{SignKeyGen}(\text{GP}, \phi)\rightarrow \text{SK}_\phi$.\\ This algorithm generates a private key $\text{SK}_\phi$ for each entity $\phi \in \mathcal{U}_\Phi$. $\text{SK}_\phi$ is used by a signer $\phi$ to add the signature component to the signcrypted text. \item $\mathsf{AuthoritySetup}(\text{GP}, \theta)\rightarrow \text{PK}_\theta,\text{SK}_\theta$.\\ This algorithm generates a public/secret key pair $\{\text{PK}_\theta,\text{SK}_\theta\}$ for each attribute authority $\theta \in \mathcal{U}_\Theta$. $\text{PK}_\theta$ is used during the signcryption process, whereas $\text{SK}_\theta$ is used by the authority $\theta$ to generate users' decryption keys. \item $\mathsf{DecKeyGen}(\text{GID}, \theta , u, \text{SK}_\theta , \text{GP}) \rightarrow \text{DK}_{\text{GID}, u}$.\\ This algorithm takes the global identity of a user $\text{GID} \in \mathcal{GID}$, an attribute $u \in \mathcal{U}_1$, the authority $\theta$ controlling the attribute $u$, the secret key of the authority $\text{SK}_\theta$, and the global parameters. The algorithm outputs $\text{DK}_{\text{GID}, u}$, which is a decryption key for the identity-attribute pair. \item $\mathsf{VerKeyGen}(\text{GID}, \phi, s, \text{SK}_\phi , \text{GP}) \rightarrow \text{VK}_{\text{GID}, \phi}$.
This algorithm takes the global identity of a user $\text{GID} \in \mathcal{GID}$, the signer identity attribute $s \in \mathcal{U}_2$ corresponding to the signer entity $\phi$, the signer private key $\text{SK}_\phi$, and the global parameters. The algorithm outputs $\text{VK}_{\text{GID}, \phi}$, which is the key used by the user with identity $\text{GID}$ to verify the messages signcrypted by entity $\phi$. \item $\mathsf{Signcrypt}(M, (A,\delta), \text{SK}_\phi , \{\text{PK}_\theta\}, \text{GP}) \rightarrow \text{ST}$.\\ This algorithm takes a message $M$, an access structure $(A,\delta)$, the signer private key $\text{SK}_\phi$, a set of public keys $\{\text{PK}_\theta\}$ of the authorities controlling attributes in the access policy, and the global parameters $\text{GP}$, and outputs the signcrypted text $\text{ST}$. \item $\mathsf{Revoke}(\text{ST}, \mathcal{Q}, \{G\}, \text{GP}) \rightarrow \text{ST}'$.\\ This algorithm takes a signcrypted text $\text{ST}$ including its access policy $(A,\delta)$, the set of prime numbers $\mathcal{Q}$, $\{G\}$, which is a set of access lists corresponding to the attributes defining $A$, and the global parameters $\text{GP}$. The algorithm outputs the re-encrypted signcrypted text $\text{ST}'$ such that only users with valid attributes satisfying the access policy can perform designcryption. \item $\mathsf{Designcrypt}(\text{ST}', \text{VK}_{\text{GID}, \phi}, \{\text{DK}_{\text{GID}, u}\}, \text{GP}) \rightarrow M$.\\ This algorithm takes the re-encrypted signcrypted text $\text{ST}'$, the verification key $\text{VK}_{\text{GID}, \phi}$, the set of decryption keys $\{\text{DK}_{\text{GID}, u}\}$ of a single user with identity $\text{GID}$ corresponding to its attributes, and the global parameters.
The algorithm outputs the message $M$ if and only if the following three conditions are satisfied: (1) the message was signed by $\phi$; (2) the attribute set associated with $\{\text{DK}_{\text{GID}, u}\}$ can satisfy the access policy of the ciphertext; and (3) all attributes associated with $\{\text{DK}_{\text{GID}, u}\}$ are valid, i.e., none of them has been revoked. Otherwise, the designcryption process fails. \end{enumerate} \subsection{System Setup} \textbf{Generation of Global Parameters}. At the initial system setup phase, an offline trusted authority (TA) runs the $\mathsf{GlobalSetup}$ algorithm. First, it defines the universe of the attributes $\mathcal{U}_1$, the universe of the attribute authorities $\mathcal{U}_\Theta$, the universe of the signer entities $\mathcal{U}_\Phi$, the universe of the signers' identity attributes $\mathcal{U}_2$, the universe of the global identities $\mathcal{GID}$, and the mapping function $\mathsf{T}: \mathcal{U}\rightarrow \mathcal{U}_\Psi$, where $\mathcal{U} = \mathcal{U}_1 \cup \mathcal{U}_2$ and $\mathcal{U}_\Psi = \mathcal{U}_\Theta \cup \mathcal{U}_\Phi$. Then, it generates bilinear pairing parameters $\left(p, \mathbb{G}, \mathbb{G}_{T}, g, e\right)$, where $\mathbb{G}$ and $\mathbb{G}_T$ are multiplicative cyclic groups of prime order $p$, $g$ is a generator of $\mathbb{G}$, and $e: \mathbb{G} \times \mathbb{G} \rightarrow \mathbb{G}_T$ is a bilinear map. It also chooses two functions, $H$ and $F$, that map the global identities and the attributes to elements in $\mathbb{G}$, respectively, i.e., $H:\mathcal{GID}\rightarrow\mathbb{G}$ and $F:\mathcal{U}\rightarrow\mathbb{G}$. Finally, it publishes the global parameters $\text{GP}$ as $\text{GP}=\{p, \mathbb{G}, \mathbb{G}_T, g, e, H, F, \mathcal{U}, \mathcal{U}_\Psi, \mathsf{T}\}$. Moreover, the TA generates the set of $m$ pairwise relatively prime positive integers $\mathcal{Q}=\{q_1, \dots, q_m\}$ and assigns $q_i$ to the user with $\text{GID}_i$.
\textbf{Setup of Attribute Authorities}. Each attribute authority $\theta \in \mathcal{U}_\Theta$ runs the $\mathsf{AuthoritySetup}$ algorithm to generate its public/secret key pair $\{\text{PK}_\theta,\text{SK}_\theta\}$. The algorithm chooses two random exponents $\alpha_\theta, y_\theta \in \mathbb{Z}_p$ and publishes $\text{PK}_\theta=\{e(g,g)^{\alpha_\theta}, g^{y_\theta}\}$ as the public key of authority $\theta$, whereas the secret key $\text{SK}_\theta=\{\alpha_\theta,y_\theta\}$ is kept secret. \textbf{Private Key Generation}. Each signer $\phi \in \mathcal{U}_\Phi$ runs the $\mathsf{SignKeyGen}$ algorithm to generate its private key $\text{SK}_\phi$, which is used to add the signature component to the ciphertext. The algorithm chooses two random exponents $\alpha_\phi, y_\phi \in \mathbb{Z}_p$ as the private key $\text{SK}_\phi=\{\alpha_\phi, y_\phi\}$, which is known only to $\phi$. \subsection{Users' Key Generation} The key generation phase consists of two operations: (1) the generation of decryption keys, executed by the attribute authorities, and (2) the generation of verification keys, executed by the signers. \textbf{Generation of Decryption Keys}. Each attribute authority $\theta$ runs the $\mathsf{DecKeyGen}$ algorithm to generate a decryption key for each identity-attribute pair, i.e., for a user with identity $\text{GID}$ holding an attribute $u$. First, the algorithm chooses a random element $t \in \mathbb{Z}_p$. Then, it computes two components $\text{K}_{\text{GID},u}=g^{\alpha_\theta} H(\text{GID})^{y_\theta} F(u)^t$ and $\text{K}_{\text{GID},u}'=g^{t}$. Finally, the algorithm outputs the decryption key as $\text{DK}_{\text{GID}, u}=\{\text{K}_{\text{GID},u}, \text{K}_{\text{GID},u}'\}$. \textbf{Generation of Verification Keys}. Each signer $\phi$ runs the $\mathsf{VerKeyGen}$ algorithm to generate a verification key for each user. First, the algorithm chooses a random element $r \in \mathbb{Z}_p$.
Then, it computes $\text{K}_{\text{GID},\phi}=g^{\alpha_\phi} H(\text{GID})^{y_\phi} F(s)^r$ and $\text{K}_{\text{GID},\phi}'=g^{r}$. Finally, it outputs the verification key as $\text{VK}_{\text{GID}, \phi}=\{\text{K}_{\text{GID},\phi}, \text{K}_{\text{GID},\phi}'\}$. \subsection{Signcryption} \label{sub:signcryption} When a signer $\phi$ wants to signcrypt a message $M$, it defines the access matrix $A$ as explained in Appendix \ref{appx_A}. The boolean formula generating the access policy should be of the form ``The signer identity attribute $s \in \mathcal{U}_2$'' \textbf{AND} ``any monotonic boolean formula over $\mathcal{U}_1$''. It should be the case that $s \in \mathsf{T}^{-1} (\phi)$, i.e., this signer identity attribute $s$ is controlled by the signer $\phi$. Then, the signcryptor $\phi$ runs the $\mathsf{Signcrypt}$ algorithm to generate the signcrypted text. The algorithm takes a message $M$, an access policy $(A,\delta)$ with $A \in \mathbb{Z}_p^{\ell \times n}$, the public keys of the relevant authorities, the private key $\text{SK}_\phi$, and the global parameters. Let $\rho$ be a mapping function that maps rows of the access policy to the corresponding authorities and signers, i.e., $\rho: \{1, \dots, \ell\} \rightarrow \mathcal{U}_\Psi$, defined as $\rho(\cdot)=\mathsf{T}(\delta(\cdot))$. First, the algorithm creates vectors $v=(z,v_2,\dots,v_n)^\top$ and $w=(0,w_2,\dots,w_n)^\top$, where $\{z,v_2,\dots,v_n,w_2,\dots,w_n\} \xleftarrow[]{R} \mathbb{Z}_p$. For the secret $z\in\mathbb{Z}_p$, let $\lambda_x$ denote the share corresponding to row $x$ and $w_x$ the corresponding share of $0$. In Appendix \ref{appx_A}, we explain how these shares can be computed.
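To make the share computation concrete, the toy Python sketch below uses a hypothetical $2\times 2$ share-generating matrix for a two-attribute \textbf{AND} policy (an illustration only, not the Appendix~\ref{appx_A} construction): $\boldsymbol\lambda = Av$ gives the shares of the secret $z$, $Aw$ gives the shares of $0$, and the reconstruction coefficients recover $z$ and $0$, respectively:

```python
p = 101                      # toy prime; real schemes use a large prime p
A = [[1, 1], [0, -1]]        # hypothetical share-generating matrix ("attr1 AND attr2")
z = 42                       # the secret z in Z_p
v = [z, 17]                  # v = (z, v_2) with a "random" v_2
w = [0, 23]                  # w = (0, w_2) with a "random" w_2

def shares(M, vec):
    """Row-by-row inner products over Z_p, i.e. M @ vec mod p."""
    return [sum(a * x for a, x in zip(row, vec)) % p for row in M]

lam = shares(A, v)           # lambda_x: one share of z per attribute row
zero = shares(A, w)          # w_x: one share of 0 per attribute row
c = [1, 1]                   # reconstruction coefficients for this policy

assert sum(ci * li for ci, li in zip(c, lam)) % p == z    # recovers z
assert sum(ci * wi for ci, wi in zip(c, zero)) % p == 0   # recovers 0
```

For this matrix, $c=(1,1)$ works because $c \cdot A = (1,0)$, matching the reconstruction requirement stated in the preliminaries.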
For each row $x$ of $A$, the algorithm chooses a random $t_x \in \mathbb{Z}_p$ and the signcrypted text is computed as \begin{equation} \text{ST}= \ \begin{pmatrix} \begin{split} & \ (A,\delta)\ , \ C=Me(g,g)^{z}\ , \ \\ &\begin{Bmatrix*}[l] &C_{1,x}=e(g,g)^{\lambda_x} e(g,g)^{\alpha_{\rho(x)}t_x},\\ & C_{2,x}=g^{-t_x},\\ &C_{3,x}=g^{y_{\rho(x)}t_x}g^{w_x},\\ &C_{4,x}=F(\delta(x))^{t_x}\\ \end{Bmatrix*}_{x \in A}\\ \end{split} \end{pmatrix} \end{equation} Note that, based on our definition of the boolean formula generating the access structure, there is exactly one row $x$ for which $\rho(x)=\phi$. Therefore, in order to correctly compute the components $C_{1,x}=e(g,g)^{\lambda_x} e(g,g)^{\alpha_{\phi}t_x}$ and $C_{3,x}=g^{y_{\phi}t_x}g^{w_x}$, the signcryptor must have the knowledge of $\alpha_{\phi}$ and $y_{\phi}$. Since these two parameters are the private keys known only to entity $\phi$, no other entity except $\phi$ can compute these components. \subsection{Revocation} Before sending the signcrypted text to users, the $\mathsf{Revoke}$ algorithm is used to re-encrypt the signcrypted text such that only users with valid attributes can perform the designcryption process. For each access list $G_x \in \mathcal{G}$ corresponding to row $x$ in the access policy, the algorithm chooses a random key $\beta_x \in \mathbb{Z}_p^*$, which is a group key for the members of $G_x$, and re-encrypts only one component of the signcrypted text, $C_{2,x}$, to be $C_{2,x}'=C_{2,x}^{\beta_x}=g^{-\beta_x t_x}$. Only users with a valid attribute $\delta(x)$ should be able to recover $\beta_x$ and thus perform designcryption. Therefore, for every user with a valid attribute, i.e., for every $\text{GID}_i \in G_x$, the algorithm computes $b_i=\beta_x \oplus q_i$, where $q_i$ is the prime number corresponding to $\text{GID}_i$ and $\oplus$ is the XOR operation.
Then, the algorithm computes the solution of the CRT congruence system modulo $Q_x=\prod _{i \in G_x} q_i$ as \begin{equation*} B_x=\sum_{i \in G_x} b_iQ_iy_i \mod Q_x \end{equation*} where $Q_i=\frac{Q_x}{q_i}$ and $y_i \equiv \frac{1}{Q_i} \mod q_i$ for $i \in G_x$, and attaches $B_x$ to the re-encrypted signcrypted text as follows \begin{equation} \text{ST}'= \ \begin{pmatrix} \begin{split} & (A,\delta)\ , \ C=Me(g,g)^{z}\ , \ \\ & \begin{Bmatrix*}[l] &C_{1,x}=e(g,g)^{\lambda_x} e(g,g)^{\alpha_{\rho(x)}t_x},\\ &C_{2,x}'=g^{-\beta_x t_x},\\ &C_{3,x}=g^{y_{\rho(x)}t_x}g^{w_x},\\ &C_{4,x}=F(\delta(x))^{t_x},\\ &B_x\\ \end{Bmatrix*}_{x \in A}\\ \end{split} \end{pmatrix} \end{equation} $B_x$ is used to help the users with valid attributes recover the secret $\beta_x$; thus, only this set of users can perform designcryption, as explained in the next subsection. \subsection{Designcryption} If a user with identity $\text{GID}$ has a set of valid attributes $S$ that can satisfy the access policy $(A, \delta)$ associated with the signcrypted text and has the verification key of the signer $\phi$, then for each row $x$ corresponding to the attributes in $S$, the user first recovers the group key $\beta_x$ as follows \begin{equation} \label{eq:decryption_step0} \beta_x=(B_x \mod q_i) \oplus q_i \end{equation} This holds because the CRT guarantees $B_x \equiv b_i \mod q_i$ and $b_i \oplus q_i= (\beta_x \oplus q_i) \oplus q_i = \beta_x$. Since the solution of the CRT congruence system is constructed using only the prime numbers of users with valid attributes, only this set of users can reconstruct $\beta_x$ and proceed with the designcryption process. Then, the user recovers $C_{2,x}$ from $C_{2,x}'$ as $C_{2,x}=C_{2,x}'^{\frac{1}{\beta_x}}$. After that, the user computes \begin{equation} \label{eq:decryption_step1} \begin{split} D_x & = C_{1,x} \cdot e(\text{K}_{\text{GID},\delta(x)},C_{2,x}) \cdot e(H(\text{GID}),C_{3,x}) \ \cdot \\
\ & \ \ \ \ e(\text{K}_{\text{GID},\delta(x)}',C_{4,x})\\ & = e(g,g)^{\lambda_x} \ e(H(\text{GID}),g)^{w_x} \end{split} \end{equation} The correctness proof of \autoref{eq:decryption_step1} is as follows, where we write $\theta=\rho(x)$ and $u=\delta(x)$, and $t$ is the randomness used in $\text{DK}_{\text{GID},u}$. \begin{equation} \begin{split} D_x & = C_{1,x} \cdot e(\text{K}_{\text{GID},u},C_{2,x}) \cdot e(H(\text{GID}),C_{3,x}) \cdot e(\text{K}_{\text{GID},u}',C_{4,x})\\ & = e(g,g)^{\lambda_x} e(g,g)^{\alpha_{\theta} t_x} \cdot e\big(g^{\alpha_\theta} H(\text{GID})^{y_\theta} F(u)^{t},g^{-t_x}\big)\\ & \ \ \ \ \cdot e\big(H(\text{GID}),g^{y_{\theta} t_x}g^{w_x}\big) \cdot e\big(g^{t},F(u)^{t_x}\big)\\ & = e(g,g)^{\lambda_x} \, e(g,g)^{\alpha_\theta t_x} \, e(g,g)^{-\alpha_\theta t_x} \, e(H(\text{GID}),g)^{-y_\theta t_x} \, e(F(u),g)^{-t t_x}\\ & \ \ \ \ \cdot e(H(\text{GID}),g)^{y_\theta t_x} \, e(H(\text{GID}),g)^{w_x} \, e(g,F(u))^{t t_x}\\ & = e(g,g)^{\lambda_x} \ e(H(\text{GID}),g)^{w_x} \end{split} \end{equation} Then, the user computes \begin{equation} \label{eq:decryption_step2} D = \prod_{x \in S} D_x^{c_x} = e(g,g)^z \end{equation} The correctness proof of \autoref{eq:decryption_step2} is as follows.
\begin{equation} \label{eq:dec_step2_proof} \begin{split} D & = \prod_{x \in S} D_x^{c_x}\\ & = \prod_{x \in S} \Big(e(g,g)^{\lambda_x} \ e(H(\text{GID}),g)^{w_x}\Big)^{c_x}\\ & = \prod_{x \in S} e(g,g)^{\lambda_x c_x} \ e(H(\text{GID}),g)^{w_x c_x}\\ & = e(g,g)^{\sum_{x \in S} \lambda_x c_x} \ e(H(\text{GID}),g)^{\sum_{x \in S} w_x c_x}\\ & = e(g,g)^{z} \ e(H(\text{GID}),g)^{0}\\ & = e(g,g)^z \end{split} \end{equation} Finally, the user can recover the message as \begin{equation} \frac{C}{D} = \frac{M \ e(g,g)^z}{e(g,g)^z} = M \end{equation} \subsection{Using our scheme in multi-authority AMI networks} The aforementioned design goals can be achieved using our attribute-based signcryption scheme by mapping the multi-authority AMI network entities to our scheme as follows. The DNOs' set $\mathbb{D}$ and the vendors' set $\mathbb{V}$ are mapped to the universe of attribute authorities $\mathcal{U}_\Theta$. This is because each DNO and vendor should be able to issue different attributes to their customers, such as a location attribute, an electricity plan attribute, a DLC membership attribute, etc. Also, the same sets are mapped to the universe of signer entities $\mathcal{U}_\Phi$, because DNOs and vendors need to send authenticated multicast messages to their customers using signcryption. The users' set $\mathbb{U}$ is mapped to the universe of global identifiers $\mathcal{GID}$. Upon registration, each user $u_i$ should receive its unique prime number $q_i$, decryption keys, and a verification key from the DNOs and vendors. In order to send a multicast message, a DNO or vendor should define the monotonic boolean formula under which the message is signcrypted and call the $\mathsf{Signcrypt}$ algorithm. Then, the signcrypted text is broadcast to all users through the DCC and the hierarchical network structure.
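The CRT-based group-key distribution used by the $\mathsf{Revoke}$ algorithm can be illustrated end to end with the Python sketch below. The primes and the group key are hypothetical toy values, far smaller than real parameters: the re-encryption side folds $\beta_x$ into $B_x$ using only the primes of non-revoked members, and each such member recovers $\beta_x=(B_x \bmod q_i)\oplus q_i$, while a non-member prime fails:

```python
from math import prod

def publish_Bx(beta, member_primes):
    """Re-encryption side: fold group key beta into a single CRT value B_x.
    Assumes each b_i = beta XOR q_i lies in [0, q_i) so that B_x mod q_i = b_i."""
    Q = prod(member_primes)
    B = 0
    for q_i in member_primes:
        b_i = beta ^ q_i             # b_i = beta XOR q_i
        Q_i = Q // q_i
        y_i = pow(Q_i, -1, q_i)      # CRT coefficient y_i = Q_i^{-1} mod q_i
        B += b_i * Q_i * y_i
    return B % Q

def recover_beta(B, q_i):
    """Member side: beta = (B mod q_i) XOR q_i."""
    return (B % q_i) ^ q_i

members = [101, 103, 107]            # primes q_i of non-revoked members (toy values)
beta = 77                            # toy group key beta_x
B = publish_Bx(beta, members)

assert all(recover_beta(B, q) == beta for q in members)   # valid members succeed
assert recover_beta(B, 109) != beta                       # a non-member prime fails
```

Only a single value $B_x$ is attached per attribute row, which is what lets the revocation step avoid any per-user unicast messages.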
According to \cite{DEP2SA} and \cite{EPDA}, the DCC is the entity that manages the supplier-users relationship, i.e., the DCC by default learns the set of attributes of each user, but it does not know their decryption keys. Therefore, the DCC is the entity that can run the $\mathsf{Revoke}$ algorithm to ensure that users with revoked attributes cannot decrypt the multicasted messages. Finally, upon receiving a multicast message, the user checks whether his non-revoked attributes satisfy the access policy; if so, he calls the $\mathsf{Designcrypt}$ algorithm to decrypt the message; otherwise, the message cannot be decrypted and should be discarded. \section{Security Analysis} \label{sec:multicast_security_analysis} \subsection{Collusion Resistance} In a collusion attack, several users may collude by combining their attributes to satisfy the access policy of a ciphertext and decrypt it, i.e., they combine their decryption keys to run the $\mathsf{Designcrypt}$ algorithm on a signcrypted text which none of them can decrypt individually. This attack cannot succeed in our scheme. During the designcryption process, the shares of ``0'', i.e., the $w_x$ values, are bound to the global identifier of the secret key of the user, as in \cite{RW2015_MA_ABE} and \cite{LW2011_MA_ABE}. This is clear in \autoref{eq:dec_step2_proof}, where the term $\prod_{x \in S} e(H(\text{GID}),g)^{w_x c_x}$ can be reduced to $ e(H(\text{GID}),g)^{\sum_{x \in S} w_x c_x}=e(H(\text{GID}),g)^{0}$ if and only if a single $\text{GID}$ is used. Therefore, if two or more users collude and try to decrypt the same signcrypted text, the ``0-shares'' will result in a failed decryption, which thwarts collusion attacks. \subsection{Signature Forgery Resistance and Non-repudiation} In a forgery attack, an adversary $\mathscr{A}$ may try to forge the signcryption of a signer $\phi$.
As discussed in \autoref{sub:signcryption}, computing a valid signature of the signer $\phi$ requires the knowledge of $\alpha_{\phi}$ and $y_{\phi}$ to correctly compute the components $e(g,g)^{\alpha_{\phi}}$ and $g^{y_{\phi}}$. $\mathscr{A}$ cannot obtain $\alpha_{\phi}$ and $y_{\phi}$ from any verification key $\text{K}_{\text{GID},\phi}=g^{\alpha_\phi} H(\text{GID})^{y_\phi} F(s)^r$, as this requires $\mathscr{A}$ to split the three components and solve the discrete logarithm problem (DLP) for each component, which is infeasible. Therefore, our scheme resists signature forgery and thus ensures sender authentication and message non-repudiation, since entity $\phi$ is the only entity that can compute a valid signature. \section{Performance Evaluation} \label{sec:multicast_performance} \begin{figure}[!t] \centering \includegraphics[clip=true,width=0.4 \textwidth]{Figs/multicast_signcryption.pdf} \caption{Signcryption time vs access policy size.} \label{fig:multicast_signcryption} \vspace{-5mm} \end{figure} In this section, we compare our multi-authority attribute-based signcryption (MA-ABSC) scheme to the closest similar scheme in \cite{ABSC2015}, which is a single-authority attribute-based signcryption scheme (ABSC). We implemented both schemes using the Charm cryptographic library for Python \cite{charm}. A supersingular elliptic curve with a symmetric Type-1 pairing of size 512 bits (the SS512 curve) is used for all pairing operations. All cryptographic operations were run 1,000 times and the average measurements are reported. Typically, DNOs, vendors and the DCC have powerful computational resources. Therefore, in our experiments, they are implemented by a workstation with an Intel Core i7-4765T 2.00 GHz CPU and 8 GB RAM. The operations done by the DNOs and vendors are the signcryption processes, whereas the revocation process is executed by the DCC. On the other hand, to implement the resource-limited SMs, we used the Tennessee Tech.
University AMI testbed of 30 \textit{Raspberry-Pi} 3 devices with an ARM Cortex-A53, 1.2 GHz processor and 1 GB RAM. The operation executed by the SMs is the designcryption. \begin{figure}[!t] \centering \includegraphics[clip=true,width=0.4 \textwidth]{Figs/multicast_designcryption.pdf} \caption{Designcryption time vs access policy size.} \label{fig:multicast_designcryption} \vspace{-5mm} \end{figure} \begin{figure}[!t] \centering \includegraphics[clip=true,width=0.4 \textwidth]{Figs/revoc.pdf} \caption{Revocation time vs number of users.} \label{fig:revoc} \vspace{-5mm} \end{figure} Figure \ref{fig:multicast_signcryption} gives the signcryption time versus the size of the access policy used to signcrypt a message. As shown in the figure, our scheme has a slightly higher signcryption time than the scheme in \cite{ABSC2015}. Figure \ref{fig:multicast_designcryption} gives the designcryption time versus the number of attributes used during the designcryption process. As shown in the figure, the designcryption time is close to that of \cite{ABSC2015}. Compared to ABSC \cite{ABSC2015}, the increased computation cost in the signcryption and designcryption processes is needed to allow multiple authorities to control their own attributes, which cannot be achieved in \cite{ABSC2015}, where a single authority controls the whole attribute set. Lastly, we plot in \autoref{fig:revoc} the revocation computation cost of our scheme as the number of users with valid attributes increases. ABSC \cite{ABSC2015} is not considered in this evaluation since it does not support attribute revocation. As shown in the figure, the revocation process adds an acceptable cost to our multicast scheme, in the range of hundreds of milliseconds. For instance, the revocation computation cost is only 0.34 seconds when the access list contains 250 users with valid attributes.
To conclude, compared to the baseline attribute-based signcryption scheme in \cite{ABSC2015}, our scheme achieves more features with an acceptable additional computation cost. \section{Conclusions} \label{sec:multicast_conclusions} In this paper, we proposed an attribute-based signcryption scheme that can be used to secure SG downlink multicast communication. The proposed scheme simultaneously achieves data confidentiality, message source authentication, message non-repudiation, and immediate attribute revocation, which are required for secure multicast communications. In addition, the scheme resists collusion attacks in which several users collude to decrypt a ciphertext they cannot decrypt individually. Our security analysis confirms that the proposed scheme is secure and achieves the aforementioned features. Our experiments conducted on the AMI testbed at Tennessee Tech. University confirm that the proposed scheme has low computational overhead, which is required for resource-constrained SMs. \appendices \section{Generating LSS Matrices from Monotonic Boolean Formulas} \label{appx_A} A monotonic boolean formula can be represented as a binary access tree in which the interior nodes are AND and OR gates while the leaf nodes represent attributes. \autoref{fig:access_tree} shows the access tree for the boolean formula $\text{W AND } \big(\text{X OR }(\text{Y AND Z})\big)$, where W, X, Y, and Z are the attributes.
\begin{figure} \centering \tikzstyle{circle1} = [circle, draw, fill=white, text width=2.4em, text centered, rounded corners, minimum height=1.5em, line width=0.3mm] \scalebox{0.75} { \begin{tikzpicture}[level distance=1.5cm, level 1/.style={sibling distance=4cm}, level 2/.style={sibling distance=2cm}] \node[circle1] {AND} child { node[circle1] {W} } child { node[circle1] {OR} child { node[circle1] {X} } child { node[circle1] {AND} child { node[circle1] {Y} } child { node[circle1] {Z} } } }; \end{tikzpicture} } \caption{Access Tree Example.} \label{fig:access_tree} \vspace{-5mm} \end{figure} According to \cite{LW2011_MA_ABE}, the following algorithm converts a monotonic boolean formula into an equivalent LSS matrix. First, a counter $c$ is initialized to one and the root node of the tree is labeled with the vector $(1)$ (a vector of length $c$). Then, each child node is labeled with a vector determined by the vector assigned to its parent node as follows. If the parent node is an OR gate labeled by the vector $v$, then its children are labeled by $v$ (and the value of $c$ stays the same). If the parent node is an AND gate labeled by $v$, first $v$ is padded with zeros at the end (if necessary) to make it of length $c$. Then, one child node is labeled with the vector $v|1$ (where $|$ denotes appending a new element to the vector $v$) and the other child node is labeled with the vector $(0, \dots, 0)|-1$, where $(0, \dots, 0)$ denotes a zero vector of length $c$. Note that the summation of these two vectors is $v|0$. Finally, $c$ is incremented by one. The process continues in a top-down manner until the entire tree is labeled. Once the entire tree is labeled, the vectors labeling the leaf nodes form the rows of the LSS matrix. If these vectors have different lengths, the shorter vectors are padded with zeros at the end.
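For concreteness, the labeling procedure above can be sketched in a few lines of code. The following Python sketch (function and variable names are ours and purely illustrative) takes a binary access tree given as nested tuples and returns the LSS matrix row by row, following the description above, with the counter incremented at each AND gate.

```python
# Illustrative sketch of the labeling algorithm above. A tree is a nested
# tuple ('AND', l, r) / ('OR', l, r), with attribute names (strings) at leaves.

def boolean_to_lss(tree):
    """Return (matrix, attributes): the LSS matrix rows and their leaf labels."""
    rows = []       # (attribute, vector) pairs, collected in leaf order
    counter = [1]   # the counter c, shared across recursive calls

    def label(node, vec):
        if isinstance(node, str):               # leaf: an attribute
            rows.append((node, vec))
            return
        gate, left, right = node
        if gate == 'OR':                        # both children inherit v
            label(left, vec)
            label(right, vec)
        else:                                   # 'AND' gate
            c = counter[0]
            v = vec + [0] * (c - len(vec))      # pad v to length c
            counter[0] = c + 1                  # increment the counter
            label(left, v + [1])                # one child gets v | 1
            label(right, [0] * c + [-1])        # the other (0,...,0) | -1

    label(tree, [1])                            # the root is labeled (1)
    width = counter[0]
    matrix = [v + [0] * (width - len(v)) for _, v in rows]  # final padding
    return matrix, [a for a, _ in rows]

# The running example W AND (X OR (Y AND Z)):
tree = ('AND', 'W', ('OR', 'X', ('AND', 'Y', 'Z')))
M, attrs = boolean_to_lss(tree)
print(attrs)   # ['W', 'X', 'Y', 'Z']
print(M)       # [[1, 1, 0], [0, -1, 0], [0, -1, 1], [0, 0, -1]]
```

For the running example, the output reproduces the labeled tree and the matrix $A$ derived below.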
For the tree shown in \autoref{fig:access_tree}, the root AND node is labeled $(1)$, its left child, node (W), is labeled $(1, 1)$ while its right child, node (OR), is labeled $(0,-1)$. Then, both children of the OR node, nodes (X) and (AND), are labeled $(0,-1)$ as their parent. Finally, the left child of the AND node, node (Y), is labeled $(0,-1,1)$ while the right child, node (Z), is labeled $(0,0,-1)$. \autoref{fig:access_tree_labeled} shows the fully labeled tree. The resulting LSS matrix, after padding the leaf vectors, is \begin{figure} \centering \tikzstyle{circle1} = [circle, draw, fill=white, text width=2.4em, text centered, rounded corners, minimum height=1.5em, line width=0.3mm] \scalebox{0.75} { \begin{tikzpicture}[level distance=1.5cm, level 1/.style={sibling distance=4cm}, level 2/.style={sibling distance=2cm}] \node[circle1] {1} child { node[circle1] {1,1} } child { node[circle1] {0,-1} child { node[circle1] {0,-1} } child { node[circle1] {0,-1} child { node[circle1] {0,-1,1} } child { node[circle1] {0,0,-1} } } }; \end{tikzpicture} } \caption{Access Tree with Labels.} \label{fig:access_tree_labeled} \vspace{-5mm} \end{figure} \begin{equation*} A= \begin{pmatrix} 1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \\ \end{pmatrix} \end{equation*} To generate the secret shares, let the secret be $z=65$ and construct the column vector $\mathbolditalic{v} = (z, r_2, \dots , r_n)= (65, 3, 4)$, where $3, 4$ are random numbers; then compute the shares vector $\boldsymbol \lambda = A\mathbolditalic{v}$ as \begin{equation*} \boldsymbol \lambda=A\mathbolditalic{v}= \begin{pmatrix} 1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \\ \end{pmatrix} \begin{pmatrix} 65 \\ 3 \\ 4 \\ \end{pmatrix} = \begin{pmatrix} 68 \\ -3 \\ 1 \\ -4 \\ \end{pmatrix} \end{equation*} To reconstruct the secret, we recall that $\sum_{i \in I} c_i \lambda_i=z$. However, the aforementioned algorithm forces the reconstruction coefficients $\{c_i\}_{i \in I}$ to have the value one.
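The share generation and reconstruction of this worked example can be replayed numerically. The following Python snippet is only an illustration of the linear algebra involved; in the actual scheme the arithmetic takes place in $\mathbb{Z}_p$.

```python
# Numerical replay of the worked example above (plain integers; in the actual
# scheme the arithmetic is carried out modulo a prime p).

A = [[1, 1, 0],     # W
     [0, -1, 0],    # X
     [0, -1, 1],    # Y
     [0, 0, -1]]    # Z
v = [65, 3, 4]      # v = (z, r2, r3) with secret z = 65

# shares vector: lambda = A v
shares = [sum(a * c for a, c in zip(row, v)) for row in A]
print(shares)       # [68, -3, 1, -4]

# reconstruction with coefficients c_i = 1 over an authorized attribute set
w, x, y, z_ = shares
print(w + x)        # W AND X       -> 65
print(w + y + z_)   # W AND Y AND Z -> 65
```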
This means that adding the secret shares corresponding to attributes satisfying the boolean formula reconstructs the secret. For example, (W AND X) satisfies the boolean formula; therefore, adding the share corresponding to W, which is $68$, to the share corresponding to X, which is $-3$, reconstructs the secret: $68-3=65$. The same applies to (W AND Y AND Z): $68+1-4=65$. \bibliographystyle{IEEEtran}
\section{Introduction} Spheroidal functions, or more precisely spheroidal wave functions, arise from the wave equation $$ \nabla^2w+k^2w=0. $$ Considering solutions with separated variables in an elliptic cylinder coordinate system, or in prolate or oblate spheroidal coordinates, both the radial and the angular functions satisfy a second-order ODE of the form $$ (1-t^2)w''+2\alpha tw'+(\beta-\gamma^2t^2)w=0. $$ In fact, prolate and oblate spheroidal coordinate systems result from rotating the two-dimensional elliptic coordinate system, consisting of confocal ellipses and hyperbolas, about the major and minor axes of the ellipses, respectively. See \cite{Abramowitzetal}, \cite{Lietal}, \cite{Morais}, \cite{Stratton}, \cite{Strattonetal}, \cite{Osipovetal}. This last equation leads to special functions such as the Bessel and Airy functions, and to special polynomials such as the Gegenbauer, Legendre and Chebyshev polynomials. This is a first idea behind the link between these functions and a first motivation for our work and for its title referring to spheroidal wavelets. Besides, spheroidal functions have been at the basis of the modeling of physical phenomena in which wave behaviour is prominent, such as radars, antennas and 3D-images. Recall also that Gegenbauer polynomials are strongly related to spheroidal functions since their appearance; they are indeed also called ultraspherical polynomials. See \cite{Antoine-Murenzi-Vandergheynst}, \cite{DeSchepper}, \cite{Delanghe}, \cite{Lehar}, \cite{Lietal}, \cite{Michel}, \cite{Moussa}, \cite{Saillardetal}. The use of wavelets in the analysis of functions is widespread, especially in the last decades. Nowadays, wavelets are interesting and useful tools in many fields such as mathematics, quantum physics, electrical engineering, time/image processing, bio-signals, seismology and geology. Wavelets were created to meet needs in signal processing that are not well handled by Fourier theory.
Classical Fourier analysis provides a global approach to signals, as it replaces the analyzed function with a whole-space description (see (\ref{Fourier-transform-onRm}) later). Wavelet analysis, in contrast, decomposes the signal in both time and frequency and describes it locally and globally, as needed. The wavelet analysis of a function $f$ in the space of analyzable functions (generally $L_2$) starts by convoluting it with a local copy of a wavelet mother function $\psi$, known as the analyzing wavelet, relative to two parameters: a real parameter $a>0$ defining the dilation parameter or scale, and a space parameter $b$, in the same space as the domains of $f$ and $\psi$, defining the translation parameter or position. Such a copy is usually denoted by $\psi_{a,b}$ and is defined by \begin{equation}\label{psiab} \psi_{a,b}(x)=a^{-\frac{1}{2}}\psi(\displaystyle\frac{x-b}{a}). \end{equation} To be a good candidate as a wavelet mother, the function $\psi$ is usually assumed to satisfy an admissibility condition, which states that \begin{equation}\label{admissibility-conditionofpsi} \mathcal{A}_\psi=\displaystyle\int_{-\infty}^{+\infty}\displaystyle\frac{|\widehat{\psi}(u)|^2}{|u|}du<+\infty, \end{equation} where $\widehat{\psi}$ is the Fourier transform of $\psi$. The convolution of the analyzed function $f$ with the copy $\psi_{a,b}$ defines the so-called wavelet transform of $f$, or exactly the Continuous Wavelet Transform (CWT), expressed by \begin{equation}\label{waveletcoefficientcab(f)} C_{a,b}(f)=<f,\psi_{a,b}>=\,\displaystyle\int_{-\infty}^{+\infty}f(x) \overline{\psi_{a,b}(x)}dx.
\end{equation} Whenever the admissibility condition is fulfilled, the analyzed function $f$ may be reconstructed as \begin{equation} f(x)=\displaystyle\frac{1}{\mathcal{A}_\psi}\displaystyle\int_{\mathbb{R}}\displaystyle\int_{0}^{+\infty} C_{a,b}(f)\psi_{a,b}(x)\displaystyle\frac{da}{a^2}db, \end{equation} where the equality has to be understood in the $L_2$-sense (see \cite{Jaffard1}, \cite{Jaffard2}). This equality will be proved later in the present context of Clifford Gegenbauer-Jacobi type wavelets. Analyzing wavelets are usually also related to moments: the regularity of the analyzing wavelet $\psi$ is reflected in a number of vanishing moments, \begin{equation}\label{vanishingmomentsofpsi} \displaystyle\int_{-\infty}^{+\infty} x^n \psi(x) dx=0,\quad n=0,1,\dots,N. \end{equation} Such a condition helps to analyze functions of a given regularity. In wavelet theory, the first result relating regularity to wavelet transforms is due to Jaffard (see \cite{Holschneider-Tchamitchan}, \cite{Jaffard1}, \cite{Jaffard2}) and is stated as follows. \begin{prop}\label{PropJaddard} Let $\psi$ be a $C^{r}(\mathbb{R}^m)$ function with all moments of order less than $r$ vanishing and all derivatives of order less than $r$ well localized. \begin{itemize} \item $f\in\mathcal{C}^{\alpha}(\mathbb{R}^m)$ if and only if $|C_{a,b}(f)|\leq\,Ca^{\alpha}$ for all $b$ and $0<a<<1$. \item If $f\in\mathcal{C}^{\alpha}(x_0)$, then for $0<a<<1$ and $|b-x_0|\leq1/2$, \begin{equation}\label{ch3:coeff} |C_{a,b}(f)|\leq\,Ca^{\alpha}\left(1+\frac{|b-x_0|}{a}\right)^{\alpha}. \end{equation} \item If (\ref{ch3:coeff}) holds and if $f\in\mathcal{C}^\varepsilon(\mathbb{R}^m)$ for an $\varepsilon>0$, then there exists a polynomial $P$ such that, if $|x-x_0|\leq1/2$, \begin{equation}\label{ch3:eq:1.11} |f(x)-P(x-x_0)|\leq\,C|x-x_0|^{\alpha}\log\left(\frac{2}{|x-x_0|}\right).
\end{equation} \end{itemize} \end{prop} More about regularity, admissibility, vanishing moments and wavelet properties may be found in \cite{Craddocketal}, \cite{Delanghe}, \cite{Kilbas}, \cite{Mitrea}, \cite{Pena}, \cite{Vieira}. Wavelet theory on the real line, and more generally on Euclidean spaces, has been extended to several settings of Clifford analysis, and the classical wavelet theory can be reconstructed in this framework. Clifford analysis deals with so-called monogenic functions, which are null solutions of the Dirac operator and direct higher-dimensional generalizations of holomorphic functions in the complex plane. Clifford wavelets, and the possibility to construct orthogonal wavelet bases and consequently associated multiresolution analyses, have been the object of several works, but remain a fascinating subject of research. In \cite{Askarietal}, a multiresolution analysis in the context of Clifford analysis has been provided. Clifford scaling functions, Clifford wavelets as well as related wavelet filters have been developed and proved to be applicable in quantum mechanics. In \cite{Kumar1} and \cite{Kumar2}, spheroidal wavelets leading to frames as well as to multiresolution analyses have been developed. It was proved that spheroidal functions may induce good wavelet candidates, characterized by localization in both frequency and space. More facts about Clifford wavelets and discussions on possible associated multiresolution analyses may be found in \cite{Brackx-Schepper-Sommen0}, \cite{Brackx-Schepper-Sommen1}, \cite{Brackx-Schepper-Sommen2}, \cite{Brackx-Schepper-Sommen3}, \cite{Brackx-Schepper-Sommen4}, \cite{Brackx-Schepper-Sommen5}, \cite{Hitzeretal}. Let $\Omega$ be an open subset of $\mathbb{R}^m$ or $\mathbb{R}^{m+1}$ and $f:\Omega\rightarrow\mathbb{A}$, where $\mathbb{A}$ is the real Clifford algebra $\mathbb{R}_{m}$ (or its complex counterpart $\mathbb{C}_{m}$).
$f$ may be written in the form \begin{equation}\label{fincliffordalbebra} f=\displaystyle\sum_{A}f_{A}e_{A} \end{equation} where the functions $f_A$ are $\mathbb{R}$ (or $\mathbb{C}$)-valued and $(e_A)_A$ is a suitable basis of $\mathbb{A}$. Despite the fact that Clifford analysis generalizes the most important features of classical complex analysis, monogenic functions do not enjoy all properties of holomorphic functions of one complex variable. For instance, due to the non-commutativity of the Clifford algebras, the product of two monogenic functions is in general not monogenic. It is therefore natural to look for specific techniques to construct monogenic functions. See \cite{Brackx-Schepper-Sommen0}, \cite{Delanghe}, \cite{Pena}. In the literature, several techniques are available to generate monogenic functions, such as the Cauchy-Kowalevski extension (CK-extension), which consists in finding a monogenic extension $g^*$ of an analytic function $g$ defined on a given subset of $\mathbb{R}^{m+1}$ of positive codimension. For analytic functions $g$ on the plane $\{(x_0, \underline{x}) \in\mathbb{R}^{m+1}, \quad x_0 = 0\}$ the problem may be stated as follows: \textit{Find an $\mathbb{A}$-valued $g^*$ such that \begin{equation}\label{cauchy-kowalevski-extension} \partial_{x_0}g^*=-\partial_{\underline{x}} g^*\quad in \quad \mathbb{R}^{m+1}\quad\hbox{and}\quad g^*(0,\underline{x})=g(\underline{x}). \end{equation}} A formal solution is \begin{equation}\label{ck-formal-solution} g^*(x_0,\underline{x})=\exp(-x_0\partial_{\underline{x}}) g(\underline{x})=\displaystyle\sum_{k=0}^{\infty}\displaystyle\frac{(-x_0)^k}{k!} \partial_{\underline{x}}^kg(\underline{x}). \end{equation} It may be proved that (\ref{ck-formal-solution}) is indeed a monogenic extension of the function $g$ to $\mathbb{R}^{m+1}$. Moreover, by the uniqueness theorem for monogenic functions this extension is also unique.
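As an elementary illustration, in the simplest case $m=1$ the Clifford algebra $\mathbb{R}_1$ is isomorphic to $\mathbb{C}$ via $e_1\leftrightarrow i$, and for a polynomial $g$ the series (\ref{ck-formal-solution}) terminates. The following Python sketch (illustrative, not part of the construction) computes the CK-extension of $g(x_1)=x_1^2$ and checks both the initial condition and the monogenicity numerically.

```python
# Minimal numerical check of the CK-extension in the simplest case m = 1,
# where R_1 is isomorphic to C via e_1 <-> i (so partial_x = i d/dx1).
# For g(x1) = x1^2 the series terminates:
#   g*(x0, x1) = x1^2 - 2i x0 x1 - x0^2   ( = -(x0 + i x1)^2 ).

def ck_extension(x0, x1):
    g, dg, d2g = x1 ** 2, 2 * x1, 2          # g and its x1-derivatives
    return g + (-x0) * (1j * dg) + (-x0) ** 2 / 2 * (1j ** 2 * d2g)

# Initial condition g*(0, x1) = g(x1):
print(ck_extension(0.0, 3.0))                # (9+0j)

# Monogenicity (d/dx0 + e_1 d/dx1) g* = 0, checked by central differences:
h = 1e-6
x0, x1 = 0.7, -1.3
d_x0 = (ck_extension(x0 + h, x1) - ck_extension(x0 - h, x1)) / (2 * h)
d_x1 = (ck_extension(x0, x1 + h) - ck_extension(x0, x1 - h)) / (2 * h)
print(abs(d_x0 + 1j * d_x1) < 1e-6)          # True
```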
See \cite{Brackx-Schepper-Sommen0}, \cite{Delanghe}, \cite{Pena}, \cite{Vieira}, \cite{Winkler} and the references therein. The organization of this paper is as follows. Section 2 is a brief overview of some properties of Clifford analysis and of Fourier analysis. Section 3 is devoted to a review of the class of Gegenbauer-Jacobi polynomials in the framework of Clifford analysis. In Section 4, new classes of polynomials generalizing those of Section 3 are developed by adopting 2-parameter weights, and they are applied to introduce some new wavelets. Section 5 is devoted to the link between the present case and the Legendre and Tchebyshev polynomials, as well as to the role of the parameters $\alpha$ and $\beta$ in the Clifford weight function applied here. We conclude afterward. \section{Clifford analysis revisited} In this section we revisit some basic concepts that will be used later. Let $f$ be in $L^1(\mathbb{R}^m)$. Its Fourier transform, usually denoted $\widehat{f}$ or $\mathcal{F}(f)$, is given by \begin{equation}\label{Fourier-transform-onRm} \widehat{f}(\eta)=\mathcal{F}(f)(\eta)=\displaystyle\frac{1}{(2\pi)^{\frac{m}{2}}}\displaystyle\int_{\mathbb{R}^m}\exp(-ix.\eta)f(x)dx, \end{equation} where $dx$ is the Lebesgue measure on $\mathbb{R}^m$ and $x.\eta$ is the standard inner product of $x$ and $\eta$ in $\mathbb{R}^m$. Clifford analysis appeared as a generalization of complex analysis and of Hamilton's quaternions. It extends complex calculus to a type of finite-dimensional associative algebra, known as a Clifford algebra, endowed with suitable operations as well as inner products and norms. It is now applied widely in a variety of fields including geometry and theoretical physics. See \cite{Brackx-Schepper-Sommen0}, \cite{Delanghe}, \cite{Hitzeretal}, \cite{Lehar}, \cite{Lietal}, \cite{McIntoshetal1}, \cite{Pena}, \cite{Saillardetal}, \cite{Son}, \cite{Vieira} and the references therein.
Clifford analysis offers a functional theory extending that of holomorphic functions of one complex variable. Starting from the real space $\mathbb{R}^m,\;(m\geq2)$ (or $\mathbb{C}^m$) endowed with an orthonormal basis $(e_1,\dots, e_m)$, the Clifford algebra $\mathbb{R}_m$ (or $\mathbb{C}_m$) is constructed by imposing the multiplication rules $$ e_j^2=-1,\quad j=1,\dots,m, $$ $$ e_je_k+e_ke_j=0,\quad j\neq k,\quad j,k=1,\dots,m. $$ This multiplication is clearly non-commutative. Two anti-involutions on the Clifford algebra are important. The conjugation is defined as the anti-involution for which $$ \overline{e_j}=-e_j,\quad j=1,\dots, m. $$ The inversion is defined as the anti-involution for which $$ e_j^{+}=e_j,\quad j=1,\dots,m. $$ These rules yield a basis ($e_A:A\subset\{1,\dots,m\}$) of the Clifford algebra, where $e_{\emptyset}=1$ is the identity element. The Euclidean space $\mathbb{R}^m$ is then embedded in the Clifford algebra $\mathbb{R}_m$ (or $\mathbb{C}_m$) by identifying the vector $x=(x_1,\dots,x_m)$ with the vector $\underline{x}$ given by $$ \underline{x}=\displaystyle\sum_{j=1}^{m}e_jx_j. $$ The product of two vectors is given by $$ \underline{x}\,\underline{y}=\underline{x}.\underline{y}+\underline{x}\wedge\underline{y} $$ where $$ \underline{x}.\underline{y}=-<\underline{x},\underline{y}>=-\displaystyle\sum_{j=1}^{m}x_j\,y_j $$ and $$ \underline{x}\wedge\underline{y}=\displaystyle\sum_{j=1}^{m}\displaystyle\sum_{k=j+1}^{m}e_j\,e_k(x_j\,y_k-x_ky_j) $$ is the wedge product. In particular, $$ \underline{x}^2=-<\underline{x},\underline{x}>=-|\underline{x}|^2. $$ An $\mathbb{R}_m$ or $\mathbb{C}_m$-valued function $F(x_1,\dots,x_m)$, respectively $F(x_0, x_1,\dots,x_m)$, is called right monogenic in an open region of $\mathbb{R}^m$, respectively of $\mathbb{R}^{m+1}$, if in that region $$ F\partial_{\underline{x}}=0, \quad\mbox{respectively}\quad F(\partial_{x_0}+\partial_{\underline{x}})=0.
$$ Here $\partial_{\underline{x}}$ is the Dirac operator in $\mathbb{R}^m$ defined by $$ \partial_{\underline{x}}=\displaystyle\sum_{j=1}^{m} e_j \partial_{x_j}, $$ which factorizes the Laplacian in $\mathbb{R}^m$ as $$ \Delta_m=-\partial_{\underline{x}}^2, $$ whereas $\partial_{x_0}+\partial_{\underline{x}}$ is the Cauchy-Riemann operator in $\mathbb{R}^{m+1}$, for which $$ \Delta_{m+1}=(\partial_{x_0}+\partial_{\underline{x}})(\partial_{x_0}+\overline{\partial_{\underline{x}}}). $$ Introducing spherical co-ordinates in $\mathbb{R}^m$ by $$ \underline{x}=r\underline{\omega},\quad r=|\underline{x}|\in[0,+\infty[,\,\underline{\omega}\in S^{m-1}, $$ where $S^{m-1}$ is the unit sphere in $\mathbb{R}^m$, the Dirac operator takes the form $$ \partial_{\underline{x}}=\underline{\omega}\left( \partial_r+\displaystyle\frac{1}{r} \Gamma_{\underline{\omega}}\right) $$ where $$ \Gamma_{\underline{\omega}}=-\displaystyle\sum_{i<j}e_ie_j(x_i\partial_{x_j}-x_j\partial_{x_i}) $$ is the so-called spherical Dirac operator, which depends only on the angular co-ordinates. As in the Euclidean case, Fourier analysis extends to Clifford Fourier analysis \cite{Brackx-Schepper-Sommen0}, \cite{Brackx-Schepper-Sommen2}, \cite{Brackx-Schepper-Sommen3}, \cite{Craddocketal}. The idea behind the definition of the Clifford Fourier transform originates from the operator exponential representation of the classical Fourier transform by means of Hermite operators. Throughout this article the Clifford-Fourier transform of $f$ is given by $$ \mathcal{F}(f(x))(y)=\displaystyle\int_{\mathbb{R}^m} e^{-i<\underline{x},\underline{y}>}\, f(\underline{x}) dV(\underline{x}), $$ where $dV(\underline{x})$ is the Lebesgue measure on $\mathbb{R}^m$. In the present work, we apply these tools to derive some generalizations of the multidimensional continuous wavelet transform in the context of Clifford analysis.
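The multiplication rules above are easy to experiment with. The following Python sketch (names are ours and purely illustrative) encodes an element of $\mathbb{R}_m$ as a dictionary indexed by basis blades $e_A$ and verifies, for a concrete vector, the identity $\underline{x}^2=-|\underline{x}|^2$.

```python
# Illustrative sketch of the Clifford multiplication rules: an element of R_m
# is a dict {frozenset of indices: real coefficient}, each key a blade e_A.

def blade_mul(A, B):
    """Multiply the basis blades e_A e_B; return (sign, index set of result)."""
    a, sign = sorted(A), 1
    for j in sorted(B):
        sign *= (-1) ** sum(1 for k in a if k > j)   # swaps to move e_j left
        if j in a:
            a.remove(j)                              # e_j e_j = -1
            sign = -sign
        else:
            a.append(j)
            a.sort()
    return sign, frozenset(a)

def mul(u, v):
    """Product in R_m of two elements given blade-wise."""
    out = {}
    for A, ca in u.items():
        for B, cb in v.items():
            s, C = blade_mul(A, B)
            out[C] = out.get(C, 0) + s * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def vector(coords):
    """Embed x = (x_1, ..., x_m) as the 1-vector sum_j x_j e_j."""
    return {frozenset([j + 1]): c for j, c in enumerate(coords) if c != 0}

e1, e2 = vector([1, 0]), vector([0, 1])
print(mul(e1, e2) == {frozenset({1, 2}): 1})      # True:  e1 e2 =  e1e2
print(mul(e2, e1) == {frozenset({1, 2}): -1})     # True:  e2 e1 = -e1e2
x = vector([1.0, 2.0, 3.0])
print(mul(x, x))    # {frozenset(): -14.0}, i.e.  x^2 = -|x|^2
```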
\section{Some old orthogonal polynomials revisited} Firstly, we stress that the results presented in this section are not purely new; the same problem is already studied in \cite{Brackx-Schepper-Sommen1} (see also \cite{Brackx-Schepper-Sommen4}, \cite{Brackx-Schepper-Sommen5}, \cite{DeSchepper} for similar results). We propose to extend the setting of real Gegenbauer polynomials on $\mathbb{R}$, which are associated in this case with the real weight function $\omega(x)=(1+x^2)^\alpha$, $\alpha\in\mathbb{R}$, to the context of Clifford algebra-valued polynomials by considering the same weight function on the Clifford algebra $\mathbb{R}_m$. So, consider the Clifford algebra-valued weight function $$ \omega_\alpha(\underline{x})=(1+|\underline{x}|^2)^\alpha,\, \alpha\in \mathbb{R}. $$ The general Clifford-Gegenbauer polynomials, denoted by $G_{\ell,m,\alpha}(\underline{x})$, are generated by the CK-extension $F^{*}(t,\underline{x})$ defined by $$ F^{*}(t,\underline{x})=\displaystyle\sum_{\ell=0}^{\infty} \displaystyle\frac{t^\ell}{\ell!}G_{\ell,m,\alpha}(\underline{x})\,\omega_{\alpha-\ell}(\underline{x});\;\;t\in\mathbb{R},\;\;\underline{x}\in\mathbb{R}_m. $$ As in the real case of orthogonal polynomials, we impose a left monogenicity condition on $F^{*}$ in $\mathbb{R}^{m+1}$ to obtain a recurrence relation for the general Clifford-Gegenbauer polynomials $G_{\ell,m,\alpha}$. The monogenicity of $F^{*}$ means that \begin{equation}\label{monogenic-property-GCGP} (\partial_{t}+ \partial_{\underline{x}}) F^{*}(t,\underline{x})=0. \end{equation} The first part, related to the time derivative, is evaluated as $$ \partial_tF^*(t,\underline{x})=\displaystyle\sum_{\ell=0}^{\infty}\displaystyle\frac{t^\ell}{\ell!}G_{\ell+1,m,\alpha}(\underline{x}) \,\omega_{\alpha-\ell-1}(\underline{x}). $$ \begin{lem}\label{lemmaevenodd} The action of the Dirac operator on $\underline{x}^n$ is given by \begin{equation} \partial_{\underline{x}}(\underline{x}^n)=\gamma_{n,m} \underline{x}^{n-1}.
\end{equation} where $$ \gamma_{n,m}= \begin{cases} -n & \mbox{if } n \mbox{ is even},\\ -(m+n-1) & \mbox{if } n \mbox{ is odd}. \end{cases} $$ \end{lem} \hskip-20pt Now, observing that $$ \partial_{\underline{x}}(\underline{x})=-m,\quad\partial_{\underline{x}}(\underline{x}^2)=-2\underline{x}\quad\hbox{and}\quad \partial_{\underline{x}}(|\underline{x}|^2)=2\underline{x}, $$ we get $$ \partial_{\underline{x}}F^*(t,\underline{x})=\displaystyle\sum_{\ell=0}^{\infty}\displaystyle\frac{t^\ell}{\ell!}\left(\partial_{\underline{x}} G_{\ell,m,\alpha}(\underline{x})\omega_{\alpha-\ell}(\underline{x})+G_{\ell,m,\alpha}(\underline{x}) \partial_{\underline{x}}\omega_{\alpha-\ell}(\underline{x})\right). $$ Observing again that $$ \partial_{\underline{x}}\omega_{\alpha-\ell}(\underline{x})=2(\alpha-\ell)\underline{x}\,\,\omega_{\alpha-\ell-1}(\underline{x}), $$ the monogenicity property (\ref{monogenic-property-GCGP}) leads to the recurrence relation $$ \begin{array}{lll} &&G_{\ell+1,m,\alpha}(\underline{x})\omega_{\alpha-\ell-1}(\underline{x})+\omega_{\alpha-\ell}(\underline{x})\partial_{\underline{x}}G_{\ell,m,\alpha}(\underline{x})\\ &&+2(\alpha-\ell) \omega_{\alpha-\ell-1}(\underline{x})\underline{x}G_{\ell,m,\alpha}(\underline{x})=0, \end{array} $$ or equivalently \begin{equation}\label{rec} G_{\ell+1,m,\alpha}(\underline{x})=-2(\alpha-\ell)\underline{x}G_{\ell,m,\alpha}(\underline{x})-(1+|\underline{x}|^2) \partial_{\underline{x}}G_{\ell,m,\alpha}(\underline{x}). \end{equation} Starting from $G_{0,m,\alpha}(\underline{x})=1$, we obtain for example $$ G_{1,m,\alpha}(\underline{x})=-2\alpha \underline{x}, $$ $$ G_{2,m,\alpha}(\underline{x})=2\alpha[(2(\alpha-1)+m)\underline{x}^2-m], $$ $$ G_{3,m,\alpha}(\underline{x})=-4\alpha(\alpha-1)(2(\alpha-1)+m)\underline{x}^3+4\alpha(\alpha-1)(m+2)\underline{x}. $$ The Clifford-Gegenbauer polynomials may also be introduced via the Rodrigues formula.
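Before turning to the Rodrigues formula, the recurrence (\ref{rec}) can be checked by a small computation. Writing $G_{\ell,m,\alpha}(\underline{x})=\sum_n c_n\underline{x}^n$, multiplication by $\underline{x}$ shifts the coefficients, $|\underline{x}|^2\,\underline{x}^n=-\underline{x}^{n+2}$, and the Dirac operator acts through \autoref{lemmaevenodd}. The following Python sketch (illustrative names, plain floating point) reproduces the examples above for sample values of $\alpha$ and $m$.

```python
# Illustrative check of the recurrence: G is stored as the list of real
# coefficients c_n of x^n, with |x|^2 x^n = -x^(n+2) and the Dirac action
# partial_x(x^n) = gamma_{n,m} x^(n-1) from the lemma.

def dirac(c, m):
    out = [0.0] * max(len(c) - 1, 1)
    for n in range(1, len(c)):
        gamma = -n if n % 2 == 0 else -(m + n - 1)
        out[n - 1] += gamma * c[n]
    return out

def next_G(c, ell, alpha, m):
    # G_{l+1} = -2(alpha - l) x G_l - (1 + |x|^2) dG_l
    d = dirac(c, m)
    out = [0.0] * (len(c) + 2)
    for n, cn in enumerate(c):        # -2(alpha - l) x G_l : index shift
        out[n + 1] += -2 * (alpha - ell) * cn
    for n, dn in enumerate(d):        # -(1 + |x|^2) dG_l, using |x|^2 = -x^2
        out[n] += -dn
        out[n + 2] += dn
    return out

alpha, m = 2.5, 3.0
G1 = next_G([1.0], 0, alpha, m)
G2 = next_G(G1, 1, alpha, m)
G3 = next_G(G2, 2, alpha, m)
print(G1)                 # [0.0, -5.0, 0.0]  =  -2 alpha x
print(G2[0], G2[2])       # -15.0 30.0  =  2 alpha (-m),  2 alpha (2(alpha-1)+m)
print(G3[1], G3[3])       # 75.0 -90.0  =  4 alpha(alpha-1)(m+2),  x^3 coefficient
```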
\begin{prop} \begin{equation}\label{rod} G_{\ell,m,\alpha}(\underline{x})=(-1)^{\ell}\,\,\omega_{\ell-\alpha}(\underline{x})\partial_{\underline{x}}^\ell(\,\omega_{\alpha}(\underline{x})). \end{equation} \end{prop} \hskip-20pt\textbf{Proof.} We proceed by induction on $\ell$. For $\ell=1$, we have $$ \partial_{\underline{x}}\,\omega_{\alpha}(\underline{x})=2\alpha\,\underline{x}\,\omega_{\alpha-1}(\underline{x}) =(-1)(-2\alpha\underline{x}) \,\omega_{\alpha-1}(\underline{x})=(-1)\,\omega_{\alpha-1}(\underline{x}) G_{1,m,\alpha}(\underline{x}), $$ which means that $$ G_{1,m,\alpha}(\underline{x})=(-1)\,\omega_{1-\alpha}(\underline{x}) \partial_{\underline{x}}\,\omega_{\alpha}(\underline{x}). $$ For $\ell=2$, we get $$ \begin{array}{lll} \partial_{\underline{x}}^{(2)}\, \omega_{\alpha} (\underline{x})&=& 2\alpha[ 2(\alpha-1)\underline{x}^2(1-\underline{x}^2)^{\alpha-2}-m(1-\underline{x}^2)^{\alpha-1} ]\\ &=& (-1)^2 \omega_{\alpha-2}(\underline{x})\,2\alpha[ (2(\alpha-1)+m)\underline{x}^2-m ]\\ &=& (-1)^2 \omega_{\alpha-2}(\underline{x}) \,G_{2,m,\alpha}(\underline{x}). \end{array} $$ Hence, $$ G_{2,m,\alpha}(\underline{x})=(-1)^2\omega_{2-\alpha}(\underline{x})\partial_{\underline{x}}^{(2)}\omega_{\alpha}(\underline{x}). $$ So, assume that $$ G_{\ell,m,\alpha}(\underline{x})=(-1)^\ell\omega_{\ell-\alpha}(\underline{x})\partial_{\underline{x}}^{(\ell)}\omega_{\alpha}(\underline{x}). $$ Denote $$ \Im(\underline{x})=-2(\alpha-\ell)\underline{x}(-1)^{\ell}\omega_{\ell-\alpha}(\underline{x}) \partial_{\underline{x}}^{(\ell)}\omega_{\alpha}(\underline{x}), $$ and $$ \Re(\underline{x})=(1+|\underline{x}|^2)(-1)^\ell2(\ell-\alpha)\underline{x} \omega_{\ell-\alpha-1}(\underline{x})\,\partial_{\underline{x}}^{\ell}\omega_{\alpha}(\underline{x}).
$$ From (\ref{rec}) and (\ref{rod}) we obtain $$ \begin{array}{lll} G_{\ell+1,m,\alpha}(\underline{x})&=&-2(\alpha-\ell)\underline{x}(-1)^{\ell}\omega_{\ell-\alpha}(\underline{x}) \partial_{\underline{x}}^{(\ell)}\omega_{\alpha}(\underline{x})\\ &\quad& -(1+|\underline{x}|^2)\partial_{\underline{x}}[(-1)^{\ell}\omega_{\ell-\alpha}(\underline{x}) \partial _{\underline{x}}^{(\ell)} \omega_{\alpha}(\underline{x})]\\ &=& \Im(\underline{x})- \Re(\underline{x}) -(1+|\underline{x}|^2)(-1)^{\ell}\omega_{\ell-\alpha}(\underline{x}) \partial_{\underline{x}}^{(\ell+1)} \omega_{\alpha}(\underline{x}). \end{array} $$ A simple calculation yields $$ \begin{array}{lll} \Re(\underline{x})&=&(1+|\underline{x}|^2)(-1)^\ell2(\ell-\alpha)\underline{x} \omega_{\ell-\alpha-1}(\underline{x})\,\partial_{\underline{x}}^{\ell}\omega_{\alpha}(\underline{x}) \\&=&(-1)^\ell 2(\ell-\alpha)\underline{x} \omega_{\ell-\alpha}(\underline{x})\,\partial_{\underline{x}}^{\ell}\omega_{\alpha}(\underline{x}) \\ &=& \Im(\underline{x}). \end{array} $$ Hence, $$ \begin{array}{lll} G_{\ell+1,m,\alpha}(\underline{x})&=& -(1+|\underline{x}|^2)(-1)^{\ell}\omega_{\ell-\alpha}(\underline{x})\partial_{\underline{x}}^{(\ell+1)} \omega_{\alpha}(\underline{x})\\ &=& (-1)^{\ell+1}\omega_{\ell-\alpha+1}(\underline{x})\partial_{\underline{x}}^{(\ell+1)}\omega_{\alpha}(\underline{x}). \end{array} $$ \section{A two-parameter class of Clifford-Jacobi polynomials and associated wavelets} We propose in this section to introduce a two-parameter class of polynomials based on the Clifford-Jacobi ones. Throughout this section, we denote such polynomials by $Z_{\ell,m}^{\alpha,\beta}(\underline{x})$.
These are generated by the weight function $$ \omega_{\alpha,\beta}(\underline{x})=(1-|\underline{x}|^2)^\alpha (1+|\underline{x}|^2)^\beta $$ and its CK-extension $F^*$ expressed by $$ \begin{array}{lll} F^*(t,\underline{x})&=& \displaystyle\sum\limits_{\ell=0}^{\infty}\dfrac{t^\ell}{\ell!}Z_{\ell,m}^{\alpha,\beta}(\underline{x})\,\omega_{\alpha-\ell,\beta-\ell}(\underline{x}). \end{array} $$ For more details about the definition of powers of the form $(1\pm\underline{u})^\alpha$ in Clifford analysis, we may refer to \cite{Brackx-Schepper-Sommen0}, \cite{Brackx-Schepper-Sommen1}, \cite{Brackx-Schepper-Sommen2}, \cite{Brackx-Schepper-Sommen3}, \cite{Brackx-Schepper-Sommen4} or \cite{Brackx-Schepper-Sommen5}. Next, we have on the one hand $$ \dfrac{\partial F^*(t,\underline{x})}{\partial t}=\displaystyle\sum_{\ell=0}^{\infty}\dfrac{t^\ell}{\ell!} Z_{\ell+1,m}^{\alpha,\beta}(\underline{x}) \,\omega_{\alpha-\ell-1,\beta-\ell-1}(\underline{x}), $$ and on the other hand, $$ \begin{array}{lll} \dfrac{\partial F^*(t,\underline{x})}{\partial \underline{x}} &=&\displaystyle\sum_{\ell=0}^{\infty}\dfrac{t^\ell}{\ell!}\left( Z_{\ell,m}^{\alpha,\beta}(\underline{x}) \partial_{\underline{x}}\, \omega_{\alpha-\ell,\beta-\ell}(\underline{x})\right.\\ &&\qquad\qquad\quad\left.+\partial_{\underline{x}} (Z_{\ell,m}^{\alpha,\beta}(\underline{x}))\, \omega_{\alpha-\ell,\beta-\ell}(\underline{x}) \right), \end{array} $$ where $$ \begin{array}{lll} \partial_{\underline{x}} \omega_{\alpha-\ell,\beta-\ell}(\underline{x}) &=&-2(\alpha-\ell)\underline{x}\,\omega_{\alpha-\ell-1,\beta-\ell}(\underline{x})\\ &&\qquad\qquad+2(\beta-\ell)\underline{x}\,\omega_{\alpha-\ell,\beta-\ell-1}(\underline{x}).
\end{array} $$ Then $$ \begin{array}{lll} \dfrac{\partial F^*(t,\underline{x})}{\partial \underline{x}} &=&\displaystyle\sum_{\ell=0}^{\infty}\dfrac{t^\ell}{\ell!} Z_{\ell,m}^{\alpha,\beta}(\underline{x}) [-2(\alpha-\ell)\underline{x}\,\omega_{\alpha-\ell-1,\beta-\ell}(\underline{x})\\ &&\qquad+2(\beta-\ell)\underline{x}\,\omega_{\alpha-\ell,\beta-\ell-1}(\underline{x})] +\partial_{\underline{x}} (Z_{\ell,m}^{\alpha,\beta}(\underline{x})) \,\omega_{\alpha-\ell,\beta-\ell}(\underline{x}). \end{array} $$ From the monogenicity relation, we obtain $$ \begin{array}{lll} &&(\partial_t+\partial_{\underline{x}})F^*(t,\underline{x})\\ &=& \displaystyle\sum_{\ell=0}^{\infty}\dfrac{t^\ell}{\ell!}\Big( Z_{\ell+1,m}^{\alpha,\beta}(\underline{x}) \,\omega_{\alpha-\ell-1,\beta-\ell-1}(\underline{x}) +Z_{\ell,m}^{\alpha,\beta}(\underline{x}) [-2(\alpha-\ell)\underline{x}\,\omega_{\alpha-\ell-1,\beta-\ell}(\underline{x})\\ &&+2(\beta-\ell)\underline{x}\,\,\omega_{\alpha-\ell,\beta-\ell-1}(\underline{x})] +\,\omega_{\alpha-\ell,\beta-\ell}(\underline{x})\,\partial_{\underline{x}}( Z_{\ell,m}^{\alpha,\beta}(\underline{x})) \Big) \\ &=& 0. \end{array} $$ Identifying the coefficient of each power $t^\ell$, we get the following result. \begin{prop} The two-parameter Clifford-Jacobi polynomials $Z_{\ell,m}^{\alpha,\beta}$ satisfy the recurrence relation \begin{equation}\label{24} \begin{array}{lll} Z_{\ell+1,m}^{\alpha,\beta}(\underline{x}) &=&[2(\alpha-\ell)\underline{x}(1-\underline{x}^2)-2(\beta-\ell)\underline{x}\,(1+\underline{x}^2)] Z_{\ell,m}^{\alpha,\beta}(\underline{x})\\ &&-\,\omega_{1,1}(\underline{x})\partial_{\underline{x}} (Z_{\ell,m}^{\alpha,\beta}(\underline{x})).
\end{array} \end{equation} \end{prop} \hskip-20pt For example, starting with $Z_{0,m}^{\alpha,\beta}(\underline{x}) =1$, a simple calculation yields $$ \begin{array}{lll} Z_{1,m}^{\alpha,\beta}(\underline{x}) &=& 2\alpha\underline{x}(1-\underline{x}^2)-2\beta\underline{x} (1+\underline{x}^2)\\ &=&2(\alpha-\beta)\underline{x}-2(\alpha+\beta)\underline{x}^3. \end{array} $$ For $\ell=1$, we get $$ \begin{array}{lll} &&Z_{2,m}^{\alpha,\beta}(\underline{x})\\ &=& [2(\alpha-1)\underline{x} (1-\underline{x}^2)-2(\beta-1)\underline{x}(1+\underline{x}^2)] [2(\alpha-\beta)\underline{x}-2(\alpha+\beta)\underline{x}^3] \\ &&-(1-\underline{x}^4) [-2(\alpha-\beta)m+2(\alpha+\beta)(m+2)\underline{x}^2] \\ &=& 2(\alpha-\beta)m+[4\alpha(\alpha-1)+4\beta(\beta-1)-8\alpha\beta-2(\alpha+\beta)m]\underline{x}^2\\ &&+[8\beta(\beta-1)-8\alpha(\alpha-1)+2(\beta-\alpha)m]\underline{x}^4\\ &&+[4\alpha(\alpha-1)+4\beta(\beta-1)+8\alpha\beta+2(\beta+\alpha)m]\underline{x}^6. \end{array} $$ For $\ell=2$, we obtain $$ \begin{array}{lll} &&Z_{3,m}^{\alpha,\beta}(\underline{x})\\ &=&4(m+2)[(\alpha-\beta)^2-(\alpha+\beta)]\,\underline{x}\\ &&+\,8(\alpha-\beta)[(\alpha-\beta)^2-(\alpha+\beta)m+m-5(\alpha+\beta)+4]\,\underline{x}^3\\ &&+\,4[4\alpha\beta m+10(\alpha-\beta)^2-6(\alpha+\beta)(\alpha-\beta)^2+8(\alpha+\beta)^2-12(\alpha+\beta)]\,\underline{x}^5\\ &&+\,8(\alpha-\beta)(\alpha+\beta-1)[m+3(\alpha+\beta)-4]\,\underline{x}^7\\ &&-\,4(\alpha+\beta)(\alpha+\beta-1)[m+2(\alpha+\beta)-2]\,\underline{x}^9. \end{array} $$ Remark that $Z_{\ell,m}^{\alpha,\beta}(\underline{x})$ is a polynomial of degree $3\ell$ in $\underline{x}$.
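As a consistency check (the helper names below are ours and not part of the formal development), the recurrence (\ref{24}) can be iterated symbolically: since $1+|\underline{x}|^2=1-\underline{x}^2$ and $1-|\underline{x}|^2=1+\underline{x}^2$, the multiplier in (\ref{24}) reduces to $2(\alpha-\beta)\underline{x}-2(\alpha+\beta-2\ell)\underline{x}^3$ and $\omega_{1,1}(\underline{x})=1-\underline{x}^4$, so everything takes place among ordinary polynomials in $\underline{x}$ with scalar coefficients. A Python/sympy sketch:

```python
import sympy as sp

a, b, m = sp.symbols('alpha beta m')

def dirac(p):
    # d(x^n) = gamma_{n,m} x^(n-1): gamma = -n (n even), -(m+n-1) (n odd)
    out = [sp.Integer(0)] * max(len(p) - 1, 1)
    for n, c in enumerate(p):
        if n:
            out[n - 1] += (-n if n % 2 == 0 else -(m + n - 1)) * c
    return out

def mul(p, q):
    # product of two polynomials in the single vector variable x
    out = [sp.Integer(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            out[i + j] += ci * cj
    return out

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [sp.Integer(0)] * (n - len(p))
    q = q + [sp.Integer(0)] * (n - len(q))
    return [sp.expand(u - v) for u, v in zip(p, q)]

def step(Z, ell):
    # recurrence (24), rewritten with x^2 = -|x|^2:
    # multiplier 2(a-b)x - 2(a+b-2l)x^3 and w_{1,1}(x) = 1 - x^4
    B = [sp.Integer(0), 2 * (a - b), sp.Integer(0), -2 * (a + b - 2 * ell)]
    w11 = [sp.Integer(1), 0, 0, 0, sp.Integer(-1)]
    return sub(mul(B, Z), mul(w11, dirac(Z)))

Z = [[sp.Integer(1)]]            # Z_0 = 1
for ell in range(3):
    Z.append(step(Z[ell], ell))

deg = lambda p: max(n for n, c in enumerate(p) if sp.expand(c) != 0)
print([deg(Z[l]) for l in (1, 2, 3)])   # degrees 3, 6, 9
```

The printed degrees confirm the remark that $Z_{\ell,m}^{\alpha,\beta}$ has degree $3\ell$ in $\underline{x}$.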
\begin{prop} The two-parameter Clifford-Jacobi polynomials $Z_{\ell,m}^{\alpha,\beta}$ may be obtained via the Rodrigues formula \begin{equation}\label{25} Z_{\ell,m}^{\alpha,\beta}(\underline{x})=(-1)^\ell \,\omega_{\ell-\alpha,\ell-\beta}(\underline{x}) \, \partial_{\underline{x}}^{\ell}\, \omega_{\alpha,\beta}(\underline{x}). \end{equation} \end{prop} \hskip-20pt\textbf{Proof.} For $\ell=0$, the situation is obvious. For $\ell=1$, we have $$ \begin{array}{lll} \partial_{\underline{x}} ( \,\omega_{\alpha,\beta}(\underline{x})) &=&-2\alpha\underline{x}\,\omega_{\alpha-1,\beta}(\underline{x})+2\beta\underline{x}\,\omega_{\alpha,\beta-1}(\underline{x})\\ &=&(-1)\,\,\omega_{\alpha-1,\beta-1}(\underline{x})[2\alpha\underline{x}(1-\underline{x}^2)-2\beta\underline{x}(1+\underline{x}^2)]\\ &=&(-1)\,\,\omega_{\alpha-1,\beta-1}(\underline{x})Z_{1,m}^{\alpha,\beta}(\underline{x}). \end{array} $$ Thus, $$ Z_{1,m}^{\alpha,\beta}(\underline{x})=(-1)\omega_{1-\alpha,1-\beta}(\underline{x})\partial_{\underline{x}}(\omega_{\alpha,\beta}(\underline{x})).
$$ For $\ell=2$, we get $$ \begin{array}{lll} && \partial_{\underline{x}}^2(\omega_{\alpha,\beta}(\underline{x}))\\ &=&-2\alpha[\partial_{\underline{x}}(\underline{x}(1+\underline{x}^2)^{\alpha-1})(1-\underline{x}^2)^\beta +\underline{x}(1+\underline{x}^2)^{\alpha-1}\,\partial_{\underline{x}}(1-\underline{x}^2)^{\beta}]\\ &&+ 2\beta[\partial_{\underline{x}}(\underline{x}(1+\underline{x}^2)^{\alpha})(1-\underline{x}^2)^{\beta-1} +\underline{x}(1+\underline{x}^2)^{\alpha}\partial_{\underline{x}}(1-\underline{x}^2)^{\beta-1}]\\ &=& 2m\alpha\omega_{\alpha-1,\beta}(\underline{x})+4\alpha(\alpha-1)\underline{x}^2\omega_{\alpha-2,\beta}(\underline{x})\\ &&- 4\alpha\beta\underline{x}^2\omega_{\alpha-1,\beta-1}(\underline{x})-2m\beta\,\omega_{\alpha,\beta-1}(\underline{x})\\ &&- 4\alpha\beta\underline{x}^2\omega_{\alpha-1,\beta-1}(\underline{x})+4\beta(\beta-1)\underline{x}^2\omega_{\alpha,\beta-2}(\underline{x})\\ &=&(-1)^2\omega_{\alpha-2,\beta-2}(\underline{x})[2m\alpha\omega_{1,2}(\underline{x})\\ &&+4\alpha(\alpha-1)\underline{x}^2(1-\underline{x}^2)^2-4\alpha\beta\underline{x}^2\omega_{1,1}(\underline{x})\\ &&-2m\beta\omega_{2,1}(\underline{x})-4\alpha\beta\underline{x}^2\omega_{1,1}(\underline{x})+4\beta(\beta-1)\underline{x}^2(1+\underline{x}^2)^2]\\ &=&(-1)^2\omega_{\alpha-2,\beta-2}(\underline{x})Z_{2,m}^{\alpha,\beta}(\underline{x}). \end{array} $$ Then $$ Z_{2,m}^{\alpha,\beta}(\underline{x})=(-1)^2\omega_{2-\alpha,2-\beta}(\underline{x})\partial_{\underline{x}}^{2}(\omega_{\alpha,\beta}(\underline{x})). $$ Now assume that $$ Z_{\ell,m}^{\alpha,\beta}(\underline{x})=(-1)^\ell \,\omega_{\ell-\alpha,\ell-\beta}(\underline{x}) \, \partial_{\underline{x}}^{\ell} \,\omega_{\alpha,\beta}(\underline{x}).
$$ Denote $$ \begin{array}{lll} \wp&=&[2(\alpha-\ell)\underline{x}(1-\underline{x}^2)-2(\beta-\ell)\underline{x}\,(1+\underline{x}^2)]\\ &&\qquad\qquad\qquad[(-1)^\ell \,\omega_{\ell-\alpha,\ell-\beta}(\underline{x}) \, \partial_{\underline{x}}^{\ell} \,\omega_{\alpha,\beta}(\underline{x})] \end{array} $$ and $$ \begin{array}{lll} \aleph&=&(-1)^\ell\,\omega_{1,1}(\underline{x})\, [2(\alpha-\ell)\underline{x}\,\omega_{\ell-\alpha-1,\ell-\beta}(\underline{x})\\ &&\qquad\qquad\qquad-2(\beta-\ell)\underline{x}\,\,\omega_{\ell-\alpha,\ell-\beta-1}(\underline{x})] \partial_{\underline{x}}^{\ell} \,\omega_{\alpha,\beta}(\underline{x}) . \end{array} $$ Then we derive from (\ref{24}) and (\ref{25}) that $$ \begin{array}{lll} \;&\;& Z_{\ell+1,m}^{\alpha,\beta}(\underline{x})\\ &=& [2(\alpha-\ell)\underline{x}(1-\underline{x}^2)-2(\beta-\ell)\underline{x}\,(1+\underline{x}^2)]Z_{\ell,m}^{\alpha,\beta}(\underline{x})\\ &&\qquad\qquad\qquad-\,\omega_{1,1}(\underline{x})\partial_{\underline{x}} Z_{\ell,m}^{\alpha,\beta}(\underline{x}) \\ &=&\wp-\aleph-(-1)^\ell \,\omega_{\ell-\alpha+1,\ell-\beta+1}(\underline{x})\,\partial_{\underline{x}}^{\ell+1}\omega_{\alpha,\beta}(\underline{x}). \end{array} $$ On the other hand, we have $$ \begin{array}{lll} \aleph&=& (-1)^\ell [2(\alpha-\ell)\underline{x}\,\omega_{\ell-\alpha,\ell-\beta+1}(\underline{x})\\ &&\qquad\qquad\qquad-2(\beta-\ell)\underline{x}\,\,\omega_{\ell-\alpha+1,\ell-\beta}(\underline{x})] \partial_{\underline{x}}^{\ell}\omega_{\alpha,\beta}(\underline{x})\\ &=& (-1)^\ell[2(\alpha-\ell)\underline{x}(1-\underline{x}^2)\\ &&\qquad\qquad\qquad-2(\beta-\ell)\underline{x}(1+\underline{x}^2)]\,\omega_{\ell-\alpha,\ell-\beta}(\underline{x}) \partial_{\underline{x}}^{\ell}\omega_{\alpha,\beta}(\underline{x})\\ &=& \wp. \end{array} $$ Hence, $$ Z_{\ell+1,m}^{\alpha,\beta}(\underline{x})= (-1)^{\ell+1} \,\omega_{\ell-\alpha+1,\ell-\beta+1}(\underline{x})\,\partial_{\underline{x}}^{\ell+1}(\omega_{\alpha,\beta}(\underline{x})).
$$ The following orthogonality relation holds. \begin{prop}\label{ZlmOrthogonality} Let $$ I_{\ell,t,p}^{\alpha,\beta}=\displaystyle\int_{\mathbb{R}^m}\underline{x}^{\ell} Z_{t,m}^{\alpha+p,\beta+p}(\underline{x})\, \omega_{\alpha,\beta}(\underline{x})\, dV(\underline{x}). $$ For $4t<1-m-2(\alpha+\beta)$ we have \begin{equation}\label{26} I_{\ell,t,t}^{\alpha,\beta}=0 . \end{equation} \end{prop} \hskip-20pt\textbf{Proof.} Denote $$ I_{\ell,t}= \displaystyle\int_{\mathbb{R}^m} \underline{x}^{\ell}\,\partial_{\underline{x}}^t( \omega_{\alpha+t,\beta+t}(\underline{x}))\,dV(\underline{x}). $$ Using Stokes's theorem, we obtain $$ \begin{array}{lll} &&\displaystyle\int_{\mathbb{R}^m}\underline{x}^{\ell} Z_{t,m}^{\alpha+t,\beta+t}(\underline{x})\omega_{\alpha,\beta}(\underline{x})dV(\underline{x})\\ &=&\displaystyle\int_{\mathbb{R}^m} \underline{x}^{\ell} (-1)^t \omega_{t-\alpha-t,t-\beta-t}(\underline{x}) \partial_{\underline{x}}^t(\omega_{\alpha+t,\beta+t}(\underline{x}))\omega_{\alpha,\beta}(\underline{x}) \,dV(\underline{x})\\ &=& (-1)^t\displaystyle\int_{\mathbb{R}^m} \underline{x}^{\ell}\,\partial_{\underline{x}}^t( \omega_{\alpha+t,\beta+t}(\underline{x}))\,dV(\underline{x})\\ &=& (-1)^t\displaystyle\int_{\mathbb{R}^m} \underline{x}^{\ell}\,\partial_{\underline{x}}\partial_{\underline{x}}^{t-1}(\omega_{\alpha+t,\beta+t}(\underline{x}))\,dV(\underline{x})\\ &=&(-1)^t\left[\displaystyle\int_{\partial\mathbb{R}^m}\underline{x}^{\ell}\partial_{\underline{x}}^{t-1}\omega_{\alpha+t,\beta+t}(\underline{x})\,dV(\underline{x})\right.\\ &&\qquad\qquad\qquad \left.-\displaystyle\int_{\mathbb{R}^m}\partial_{\underline{x}}(\underline{x}^{\ell})\partial_{\underline{x}}^{t-1}\omega_{\alpha+t,\beta+t}(\underline{x})dV(\underline{x})\right].
\end{array} $$ Denote $$ I=\displaystyle\int_{\partial\mathbb{R}^m}\underline{x}^{\ell}\partial_{\underline{x}}^{t-1}\omega_{\alpha+t,\beta+t}(\underline{x})\,dV(\underline{x}) $$ and $$ II=\displaystyle\int_{\mathbb{R}^m}\partial_{\underline{x}}(\underline{x}^{\ell})\partial_{\underline{x}}^{t-1}\omega_{\alpha+t,\beta+t}(\underline{x})dV(\underline{x}). $$ The integral $I$ vanishes due to the assumption $$ 0<t<\dfrac{1-m-2(\alpha+\beta)}{4}. $$ Due to Lemma \ref{lemmaevenodd}, the second integral satisfies $$ II= \gamma_{\ell,m}\displaystyle\int_{\mathbb{R}^m}\underline{x}^{\ell-1} \partial_{\underline{x}}^{t-1}\omega_{\alpha+t,\beta+t}(\underline{x})\,dV(\underline{x})=\gamma_{\ell,m} I_{\ell-1,t-1}. $$ Hence, we obtain $$ \begin{array}{lll} &&\displaystyle\int_{\mathbb{R}^m}\underline{x}^\ell Z_{t,m}^{\alpha+t,\beta+t}(\underline{x})\omega_{\alpha,\beta}(\underline{x}) dV(\underline{x})\\ &=&(-1)^{t+1} \gamma_{\ell,m} I_{\ell-1,t-1}\\ &=& (-1)^{t+1} \gamma_{\ell,m}[ (-1)^{t} \gamma_{\ell-1,m}I_{\ell-2,t-2}]\\ &=& (-1)^{2t+1} \gamma_{\ell,m} \gamma_{\ell-1,m} I_{\ell-2,t-2}\\ &\vdots&\\ &=&C(m,\ell,t) I_0\\ &=& 0, \end{array} $$ where $C(m,\ell,t)=(-1)^{m\ell+1}\displaystyle\prod_{k=0}^{m}\gamma_{k,m}$. \begin{defn} The generalized two-parameter Clifford-Jacobi mother wavelet is defined by $$ \psi_{\ell,m}^{\alpha,\beta}(\underline{x}) =Z_{\ell,m}^{\alpha+\ell,\beta+\ell}(\underline{x}) \omega_{\alpha,\beta}(\underline{x}) =(-1)^\ell\partial_{\underline{x}}^{(\ell)}\omega_{\alpha+\ell,\beta+\ell}(\underline{x}). $$ \end{defn} \hskip-20pt Furthermore, the wavelet $\psi_{\ell,m}^{\alpha,\beta}(\underline{x})$ has vanishing moments, as shown in the next proposition. \begin{prop} The following assertions hold. \begin{enumerate} \item For $0<k<-m-\ell-2(\alpha+\beta) $ and $k<\ell$ we have \begin{equation}\label{27} \displaystyle\int_{\mathbb{R}^m} \underline{x}^k \psi_{\ell,m}^{\alpha,\beta}(\underline{x}) dV(\underline{x})=0.
\end{equation} \item Its Clifford-Fourier transform is \begin{equation} \widehat{\psi_{\ell,m}^{\alpha,\beta}}(\underline{u})=(-i)^\ell\,\underline{\xi}^\ell(2\pi)^{\frac{m}{2}}\rho^{1-\frac{m}{2}+\ell}\,\displaystyle\int_{0}^\infty\widetilde{\omega}_{\alpha,\beta}^\ell(r)\,J_{\frac{m}{2}-1}(r\rho)dr, \end{equation} where $$ \widetilde{\omega}_{\alpha,\beta}^\ell(r)=((1-r^2)\varepsilon_r)^{\alpha+\ell} (1+r^2)^{\beta+\ell} r^{\frac{m}{2}} $$ with $\varepsilon_r=\mbox{sign}(1-r)$. \end{enumerate} \end{prop} \hskip-20pt\textbf{Proof.} The first assertion is a natural consequence of Proposition \ref{ZlmOrthogonality}. We prove the second. We have $$ \begin{array}{lll} \widehat{\psi}_{\ell,m}^{\alpha,\beta}(\underline{u}) &=&\displaystyle\int_{\mathbb{R}^m}\psi_{\ell,m}^{\alpha,\beta}(\underline{x})e^{-i\underline{x}.\underline{u}}\,dV(\underline{x})\\ &=&(-1)^\ell\displaystyle\int_{\mathbb{R}^m}\partial_{\underline{x}}^\ell\left(\omega_{\alpha+\ell,\beta+\ell}(\underline{x})\right)e^{-i\underline{x}.\underline{u}}\,dV(\underline{x})\\ &=&(-1)^\ell\displaystyle\int_{\mathbb{R}^m}\omega_{\alpha+\ell,\beta+\ell}(\underline{x})e^{-i\underline{x}.\underline{u}}(i\underline{u})^\ell \,dV(\underline{x})\\ &=&(-1)^\ell\,(i\underline{u})^\ell\displaystyle\int_{\mathbb{R}^m}\omega_{\alpha+\ell,\beta+\ell}(\underline{x})\, e^{-i\underline{x}.\underline{u}}\,dV(\underline{x})\\ &=&(-1)^{\ell}\,(i\underline{u})^{\ell}\displaystyle\int_{\mathbb{R}^m} (1-|\underline{x}|^2)^{\alpha+\ell}(1+|\underline{x}|^2)^{\beta+\ell}\,e^{-i\underline{x}.\underline{u}} \,dV(\underline{x})\\ &=&(-1)^\ell(i\underline{u})^\ell\widehat{\omega_{\alpha+\ell,\beta+\ell}}(\underline{u}).\end{array} $$ This Fourier transform can be simplified by using spherical coordinates.
By definition, we have \begin{equation}\label{spherical-co-ordinates} \widehat{\omega_{\alpha+\ell,\beta+\ell}}(\underline{u})=\displaystyle\int_{\mathbb{R}^m} (1-|\underline{x}|^2)^{\alpha+\ell} (1+|\underline{x}|^2)^{\beta+\ell}\, e^{-i<\underline{x},\underline{u}>} dV(\underline{x}). \end{equation} Introducing spherical coordinates $$ \underline{x}=r\underline{\omega},\quad \underline{u}=\rho\underline{\xi},\quad r=|\underline{x}|,\quad \rho=|\underline{u}|, \quad \underline{\omega}\in S^{m-1}, \,\underline{\xi}\in S^{m-1}, $$ expression (\ref{spherical-co-ordinates}) becomes $$ \begin{array}{lll} \widehat{\omega_{\alpha+\ell,\beta+\ell}}(\underline{u})&=&\displaystyle\int_{0}^\infty\widetilde{\omega}_{\alpha,\beta}^\ell(r)\,r^{\frac{m}{2}-1}\,dr\displaystyle\int_{S^{m-1}} e^{-i<r\underline{\omega},\rho\underline{\xi}>} d\sigma(\underline{\omega}) \end{array} $$ where $d\sigma(\underline{\omega})$ stands for the Lebesgue measure on $S^{m-1}$.\\ We now use the following technical result, which is known in the theory of Fourier analysis of radial functions and the theory of Bessel functions. \begin{lem}\label{BesselFourierTransform} $$ \displaystyle\int_{S^{m-1}}e^{-i<r\underline{\omega},\rho\underline{\xi}>} d\sigma(\underline{\omega})=\displaystyle\frac{(2\pi)^{\frac{m}{2}} J_{\frac{m}{2}-1} (r\rho)}{r^{\frac{m}{2}-1} \rho^{\frac{m}{2}-1}}, $$ where $J_{\frac{m}{2}-1}$ is the Bessel function of the first kind of order $\frac{m}{2}-1$. \end{lem} The proof of this result is a good exercise in Fourier analysis and orthogonal transformations and may be found in \cite{Stein-Weiss}.\\ Now, according to Lemma \ref{BesselFourierTransform}, we obtain $$ \begin{array}{lll} \widehat{\omega_{\alpha+\ell,\beta+\ell}}(\underline{u})&=&(2\pi)^{\frac{m}{2}}\rho^{1-\frac{m}{2}+\ell}\,\displaystyle\int_{0}^\infty \widetilde{\omega}_{\alpha,\beta}^\ell(r)J_{\frac{m}{2}-1} (r\rho) dr.
\end{array} $$ Consequently, we obtain the following expression for the Fourier transform of the $(\alpha,\beta)$-Clifford-Jacobi wavelets $$ \begin{array}{lll} \widehat{\psi_{\ell,m}^{\alpha, \beta}}(\underline{u})&=&(-i)^\ell\,\underline{\xi}^\ell (2\pi)^{\frac{m}{2}}\rho^{1-\frac{m}{2}+\ell}\,\displaystyle\int_{0}^\infty \widetilde{\omega}_{\alpha,\beta}^\ell(r)J_{\frac{m}{2}-1} (r\rho) dr. \end{array} $$ \begin{defn} The copy of the generalized two-parameter Clifford-Jacobi wavelet at the scale $a>0$ and the position $\underline{b}$ is defined by $$ _a^{\underline{b}}\psi_{\ell,m}^{\alpha,\beta}(\underline{x})=a^{-\frac{m}{2}}\psi_{\ell,m}^{\alpha,\beta}\left(\dfrac{\underline{x}-\underline{b}}{a}\right). $$ \end{defn} \begin{defn} The wavelet transform of a function $f$ in $L_2$ with respect to the generalized two-parameter Clifford-Jacobi wavelet at the scale $a$ and the position $\underline{b}$ is $$ C_{a,\underline{b}}(f)=<f,\,_a^{\underline{b}}\psi_{\ell,m}^{\alpha,\beta}>= \displaystyle\int_{\mathbb{R}^m}f(\underline{x})\,_a^{\underline{b}}\psi_{\ell,m}^{\alpha,\beta}(\underline{x})dV(\underline{x}). $$ \end{defn} \hskip-20pt The following Lemma guarantees that the candidate $\psi_{\ell,m}^{\alpha,\beta}$ is indeed a mother wavelet. An analogous result has already been checked in \cite{Brackx-Schepper-Sommen1}. The proof is based on the asymptotic behaviour of Bessel functions and is thus left to the reader. \begin{lem}\label{Admissibilityofthenewwavelets} The quantity $$ \mathcal{A}_{\ell,m}^{\alpha,\beta}=\dfrac{1}{\omega_m}\displaystyle\int_{\mathbb{R}^m}|\widehat{\psi_{\ell,m}^{\alpha,\beta}}(\underline{x})|^2 \dfrac{dV(\underline{x})}{|\underline{x}|^m} $$ is finite. ($\omega_m$ is the volume of the unit sphere $S^{m-1}$ in $\mathbb{R}^m$.)
\end{lem} To state the final result, namely the reconstruction formula relative to the newly constructed wavelets, we first introduce the inner product $$ <C_{a,\underline{b}}(f),C_{a,\underline{b}}(g)>=\dfrac{1}{\mathcal{A}_{\ell,m}^{\alpha,\beta}}\displaystyle\int_{\mathbb{R}^m}\displaystyle\int_{0}^{+\infty}\overline{C_{a,\underline{b}}(f)}C_{a,\underline{b}}(g)\dfrac{da}{a^{m+1}}dV(\underline{b}). $$ We obtain the following result. \begin{thm}\label{ReconstructionFormula} Any function $f\in L_2$ may be reconstructed by $$ f(\underline{x})=\dfrac{1}{\mathcal{A}_{\ell,m}^{\alpha,\beta}}\displaystyle\int_{a>0}\displaystyle\int_{\underline{b}\in\mathbb{R}^m}C_{a,\underline{b}}(f)\,\psi_{\ell,m}^{\alpha,\beta}\left(\dfrac{\underline{x}-\underline{b}}{a}\right)\dfrac{da\,dV(\underline{b})}{a^{m+1}}, $$ where the equality has to be understood in the $L_2$-sense. \end{thm} \hskip-20pt The proof rests on the following result. \begin{lem}\label{ProduitScalaireCoefficient} It holds that $$ \displaystyle\int_{a>0}\displaystyle\int_{\underline{b}\in\mathbb{R}^m}\overline{C_{a,\underline{b}}(f)}C_{a,\underline{b}}(g)\displaystyle\frac{da\,dV(\underline{b})}{a^{m+1}} =\mathcal{A}_{\ell,m}^{\alpha,\beta}\displaystyle\int_{\mathbb{R}^m}f(\underline{x})\overline{g(\underline{x})}dV(\underline{x}). $$ \end{lem} \hskip-20pt\textbf{Proof.} Using the Clifford Fourier transform, we observe that $$ C_{a,\underline{b}}(f)(\underline{b})=\widetilde{a^{\frac{m}{2}}\widehat{\widehat{f}(\underline{.})\widehat{\psi}(a\underline{.})}}(\underline{b}), $$ where $\widetilde{h}(\underline{u})=h(-\underline{u})$ for every $h$. Thus, $$ \overline{C_{a,\underline{b}}(f)}C_{a,\underline{b}}(g)=\overline{\widehat{\left(\widehat{f}(\underline{.})a^{\frac{m}{2}} \widehat{\psi}(a\underline{.})\right)}}(-\underline{b}) \widehat{\left(\widehat{g}(\underline{.})a^{\frac{m}{2}} \widehat{\psi}(a\underline{.})\right)}(-\underline{b}).
$$ Consequently, $$ \begin{array}{lll} \displaystyle\int_{a>0}\displaystyle\int_{\mathbb{R}^m}\overline{C_{a,\underline{b}}(f)}C_{a,\underline{b}}(g)\displaystyle\frac{da\,dV(\underline{b})}{a^{m+1}} &=&\displaystyle\int_{a>0}\displaystyle\int_{\mathbb{R}^m}\overline{\widehat{\widehat{f}(\underline{.})a^{\frac{m}{2}}\widehat{\psi}(a\underline{.})}}\,\widehat{\widehat{g}(\underline{.})a^{\frac{m}{2}}\widehat{\psi}(a\underline{.})}\displaystyle\frac{da\,dV(\underline{b})}{a^{m+1}}\\ &=& \displaystyle\int_{a>0}\displaystyle\int_{\mathbb{R}^m} \overline{\widehat{f}(\underline{b})}\widehat{g}(\underline{b}) \displaystyle\frac{a^m|\widehat{\psi}(a\underline{b})|^2}{a^{m+1}} \,da\,dV(\underline{b})\\ &=&{\mathcal{A}_{\ell,m}^{\alpha,\beta}}\displaystyle\int_{\underline{b}\in\mathbb{R}^m}\overline{\widehat{f}(\underline{b})}\widehat{g}(\underline{b})\,dV(\underline{b})\\ &=&{\mathcal{A}_{\ell,m}^{\alpha,\beta}}<\widehat{f},\widehat{g}>\\ &=& {\mathcal{A}_{\ell,m}^{\alpha,\beta}}<f,g> . \end{array} $$ \textbf{Proof of Theorem \ref{ReconstructionFormula}.} It follows immediately from Lemma \ref{ProduitScalaireCoefficient}. \section{Back to Legendre and Chebyshev polynomials} This section serves several purposes. One of them is to discuss the link between the present work and the possibility of constructing Clifford-Legendre and Clifford-Tchebyshev wavelets. Such an aim is itself a motivation to show the role of the parameters appearing in the present construction, and the fact that it provides a large class of polynomials, and consequently of wavelets, which encompasses for instance the Legendre and Tchebyshev cases. Before doing that, we stress that, to our knowledge, no previous works have developed the special cases of Legendre and Tchebyshev polynomials in the context of Clifford analysis. The only work we found deals with general orthogonal polynomials in the Clifford context, with only a brief return to the explicit expression of Legendre polynomials in the real case.
Recall firstly that, when relaxing the parameter $\alpha$ in the weight function applied in the present work, we recover the existing Gegenbauer case developed in \cite{Brackx-Schepper-Sommen1} and \cite{DeSchepper}. Recall also that classical real analysis affirms that Legendre, Tchebyshev and Gegenbauer polynomials arise from three-term recurrence relations such as \begin{equation}\label{LegendrePolynomials} L_{n+1}=\dfrac{2n+1}{n+1}XL_{n}-\dfrac{n}{n+1}L_{n-1},\quad \forall n \in\mathbb{N}^{*} \end{equation} for Legendre polynomials, \begin{equation}\label{TchebychevPolynomials} T_{n+1}=2XT_{n}-T_{n-1},\quad \forall n \in\mathbb{N}^{*} \end{equation} for Tchebyshev polynomials and \begin{equation}\label{GegenbauerPolynomials} mG_{m}^{p}(x)=2x(m+p-1)G_{m-1}^{p}(x)-(m+2p-2)G_{m-2}^{p}(x), \end{equation} for Gegenbauer ones. These three classes of polynomials are also defined by means of the Rodrigues rule. Legendre polynomials are given by \begin{equation}\label{LegendrePolynomialsRodrigues} L_{n}(x)=\dfrac{d^n}{dx^{n}}[\dfrac{(x^{2}-1)^{n}}{2^{n}n!}], \end{equation} which, with the Leibniz rule, yields the explicit form \begin{equation}\label{LegendrePolynomialsExplicit} L_{n}(x)=\dfrac{1}{2^{n}}\displaystyle\sum_{k=0}^n(C_n^k)^2(x-1)^{n-k}(x+1)^k. \end{equation} Tchebyshev polynomials are expressed via the Rodrigues rule as \begin{equation}\label{ChebyshevPolynomialsRodrigues} T_{n}(x)=\dfrac{2^n(-1)^nn!}{(2n)!}(1-x^{2})^{\frac{1}{2}}\dfrac{d^{n}}{dx^{n}}((1-x^{2})^{n-\frac{1}{2}}), \end{equation} which, again with the Leibniz rule, induces the explicit form \begin{equation}\label{TchebyshevPolynomialsExplicit} T_{n}(x)=\dfrac{1}{2^{n}}\displaystyle\sum_{k=0}^nC_{2n}^{2k}(x-1)^{n-k}(x+1)^k.
\end{equation} Gegenbauer polynomials, also called ultraspherical polynomials, may be introduced via the Rodrigues rule as \begin{equation}\label{GegenbauerPolynomialsRodrigues} G^{p}_{m}(x)=(-1)^{m}\omega_{m,p}(1-x^{2})^{\frac{1}{2}-p}\dfrac{d^{m}}{dx^{m}}\left((1-x^{2})^{p+m-\frac{1}{2}}\right), \end{equation} where $$ \omega_{m,p}=\dfrac{2^{m}m!\Gamma(p+\frac{1}{2})\Gamma(m+2p)}{\Gamma(2p)\Gamma(p+m+\frac{1}{2})}. $$ Applying the Leibniz derivation rule, we obtain \begin{equation}\label{GegenbauerPolynomialsExplicit} G^{p}_{m}(x)=\dfrac{\omega_{m,p}}{2^{2m}m!}\displaystyle\sum_{k=0}^mC_{2m}^{2k}(x-1)^{m-k}(x+1)^k. \end{equation} As for the weight functions, Legendre polynomials are related to the weight $\omega_L(x)=(1-x^2)^n$, Tchebyshev polynomials arise from the weight function $\omega_T(x)=(1-x^2)^{-1/2}$, and Gegenbauer polynomials are deduced from the weight function $\omega_G(x)=(1-x^2)^{p-\frac{1}{2}}$. This means that both Legendre polynomials and Tchebyshev ones may be deduced from the Gegenbauer case by simple choices of the parameter $p$ in $\omega_G$, namely $p=n+\frac{1}{2}$ for Legendre polynomials and $p=0$ for Tchebyshev ones. So, a motivated extension to the setting of Clifford analysis may be conducted by applying the same values of the parameter $p$ in the explicit form of the Clifford-Gegenbauer polynomials, in order to obtain explicit forms for Clifford-Legendre and Clifford-Tchebyshev extensions.
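The classical one-dimensional identities recalled above are easy to verify symbolically. As an illustrative check (purely classical, with helper names of our own), the following sympy sketch confirms that the Rodrigues rules (\ref{LegendrePolynomialsRodrigues}) and (\ref{ChebyshevPolynomialsRodrigues}) agree with the explicit sums (\ref{LegendrePolynomialsExplicit}) and (\ref{TchebyshevPolynomialsExplicit}) for small degrees.

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    # L_n(x) = d^n/dx^n [ (x^2-1)^n / (2^n n!) ]
    return sp.expand(sp.diff((x**2 - 1)**n / (2**n * sp.factorial(n)), x, n))

def legendre_explicit(n):
    # L_n(x) = 2^{-n} sum_k C(n,k)^2 (x-1)^{n-k} (x+1)^k
    return sp.expand(sum(sp.binomial(n, k)**2 * (x - 1)**(n - k) * (x + 1)**k
                         for k in range(n + 1)) / 2**n)

def chebyshev_rodrigues(n):
    # T_n(x) = (-2)^n n!/(2n)! (1-x^2)^{1/2} d^n/dx^n (1-x^2)^{n-1/2}
    d = sp.diff((1 - x**2)**(n - sp.Rational(1, 2)), x, n)
    pref = sp.Integer(-2)**n * sp.factorial(n) / sp.factorial(2 * n)
    # distributing sqrt(1-x^2) makes all powers of (1-x^2) integer again
    return sp.expand(pref * sp.expand(sp.sqrt(1 - x**2) * d))

def chebyshev_explicit(n):
    # T_n(x) = 2^{-n} sum_k C(2n,2k) (x-1)^{n-k} (x+1)^k
    return sp.expand(sum(sp.binomial(2 * n, 2 * k) * (x - 1)**(n - k) * (x + 1)**k
                         for k in range(n + 1)) / 2**n)

print(legendre_explicit(2))   # -> 3*x**2/2 - 1/2
```

Both constructions can also be compared with sympy's built-in `legendre` and `chebyshevt` polynomials.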
Therefore, a possible form for Clifford-Legendre polynomials may be deduced from (\ref{LegendrePolynomialsExplicit}) as \begin{equation}\label{CliffordLegendrePolynomialsExplicit} L_{n}(\underline{x})=\dfrac{1}{2^{n}}\displaystyle\sum_{k=0}^n(C_n^k)^2(-1)^{n-k}(1-\underline{x})^{n-k}(1+\underline{x})^k, \end{equation} and similarly an explicit Clifford extension of Tchebyshev polynomials may be \begin{equation}\label{CliffordTchebyshevPolynomialsExplicit} T_{n}(\underline{x})=\dfrac{1}{2^{n}}\displaystyle\sum_{k=0}^nC_{2n}^{2k}(-1)^{n-k}(1-\underline{x})^{n-k}(1+\underline{x})^k. \end{equation} A second idea may be developed by comparing the recurrence rules in the real case and adopting similar changes in the Gegenbauer recurrence rule in Clifford analysis, in order to obtain recurrence rules for the Legendre and Tchebyshev cases. The explicit forms (\ref{CliffordLegendrePolynomialsExplicit}) and (\ref{CliffordTchebyshevPolynomialsExplicit}) may be good starting points for deriving the expected recurrence rules. We conclude the paper by proposing these questions as future work. As a result of these facts, it is immediate that the present case encompasses a large set of Clifford polynomials, and consequently of Clifford wavelets, that may be adapted to the cases cited above. \section{Conclusion} In this paper, we have introduced new classes of orthogonal polynomials with respect to a new two-parameter weight in the context of Clifford analysis. The new class generalizes the well-known Jacobi and Gegenbauer polynomials. Such polynomial sets are then applied to introduce new wavelets in Clifford analysis. Fourier-Plancherel type results are proved for the new classes.
\section{Introduction} \label{sec:mot} The top quark is the heaviest among the standard model (SM) quarks and is therefore the best candidate to be studied for any departure from particle point-like behavior. Such a departure would point to physics beyond the SM, possibly related to the dynamics behind the electro-weak symmetry breaking (EWSB) mechanism. The Large Hadron Collider (LHC) is expected to produce by the end of its run-3, with a collected integrated luminosity of $300\;$fb$^{-1}$, roughly $2\times 10^8$ top-quark pairs, effectively acting as a \emph{top factory} and thus providing the possibility of scrutinising the top quark intrinsic properties with an unprecedented precision. Moreover, the top quark enters in the dominant Higgs production mechanism at the LHC, the production via gluon fusion, which is also expected to be measured with high accuracy by the end of the LHC program. The study of the properties of the top quark has been performed both in terms of anomalous couplings~\cite{Grzadkowski:2003tf,Lillie:2007hd,Choudhury:2009wd,Hioki:2013hva,Kamenik:2011dk,Biswal:2012dr,Rindani:2015vya} and of SM higher dimensional effective operators~\cite{Kumar:2009vs,Zhang:2010dr,Zhang:2012cd,Degrande:2010kt,Englert:2012by,Degrande:2012gr,Cirigliano:2016nyn,Englert:2016aei}, often with an overlap between the two approaches. While the anomalous-coupling approach has the advantage of a more direct physical interpretation and a lower number of parameters, the effective lagrangian framework provides a more general and unbiased view, based on the possibility of performing global fits on a larger number of operators affecting various processes, see \emph{e.g.}~\cite{Buckley:2015nca,Englert:2015hrx}. In this work, motivated by the fact that strong interactions dominate $t\bar t$ production at the LHC, we follow the anomalous coupling approach, by studying the top-quark hypothetical structure only by means of its interaction with gluons.
We parametrize it in terms of the following $SU(3)_C \times U(1)_{em}$ effective operators \begin{equation} {\cal O}_1 = \frac{C_1}{\Lambda^2}\, \bar t \gamma^\mu T^a t \, D^\nu G^a_{\mu\nu} \label{q1} \end{equation} \begin{equation} {\cal O}_2 = \frac{C_2}{\Lambda^2}\, v \, \bar t \sigma^{\mu\nu} T^a t \, G^a_{\mu\nu} \label{q2}\, , \end{equation} where $T^a = \lambda^a/2$ are the $SU(3)_{C}$ generators, $[T^a,T^b]=i f_{abc}T^c$ and Tr$[\lambda^a \lambda^b]=2 \delta^{ab}$, $D^\nu=\partial^\nu-i g_s G^{\nu,a}T^a$ and $G^a_{\mu\nu}=\partial_\mu G^a_\nu-\partial_\nu G^a_\mu + g_s f^{abc}G_\mu^b G_\nu^c$ are the $SU(3)_C$ covariant derivative and the field strength tensor, respectively, and $\sigma^{\mu\nu} = i/2 [\gamma^\mu,\gamma^\nu]$. These two effective operators can also be seen as the leading terms coming from the Taylor expansion of the strong version of the Dirac and Pauli form factors in the top gluon interaction~\cite{Fabbrichesi:2013bca}, thus making perhaps more evident the relationship with the study of the internal structure of the top quark. This point will be discussed in Section~\ref{sec:form_factors}. The vacuum expectation value $v=174$ GeV in Eq.~\eqref{q2} is a reminder of the presence of the Higgs boson in the $SU(2)_L$ invariant operator before EWSB. This will induce further interactions affecting Higgs phenomenology which we will discuss in Sec.~\ref{sec:higgs}. The effective operators of Eq.~\eqref{q1} and Eq.~\eqref{q2} affect both $t\bar t$ and Higgs production processes, which can then be used to constrain the corresponding Wilson coefficients. The relation between the operator $\mathcal O_1$ of Eq.~\eqref{q1} and the four-fermions operators in the Warsaw basis~\cite{Grzadkowski:2010es} is given in Appendix~\ref{appendix}.
Because of its space-time structure, the three-point function arising from the operator ${\cal O}_1$ vanishes when coupled to on-shell gluons, and thus does not affect the dominant Higgs production mechanism at the LHC, the one via gluon fusion, which can then be used to constrain the size of the operator ${\cal O}_2$ independently of ${\cal O}_1$. On the other hand, even though the operator ${\cal O}_2$ enters both processes, its contribution only marginally modifies the shape of the top quark pair invariant mass and transverse momentum distributions~\cite{Franzosi:2015osa}. Modifications are present in the high energy regime when quadratic terms in $C_2$ are retained. Therefore, for small values of the Wilson coefficient, negligible departures with respect to the SM predictions are expected. In other words, the shapes of the normalized $1/\sigma\;\mbox{d}\sigma/\mbox{d}{m_{t \bar t}}$ and $1/\sigma\;\mbox{d}\sigma/\mbox{d}{p_{T}^t}$ distributions are essentially unaffected by the presence of the ${\cal O}_2$ operator. We thus conclude that the combined study of the inclusive Higgs production and of the differential cross sections for $t\bar t$ production could offer two observables constraining the operators ${\cal O}_1$ and ${\cal O}_2$ independently of each other, thus providing, in principle, more stringent limits than those we can obtain from other processes, like the total cross section, where the simultaneous presence of both operators requires some marginalization in order to set the constraints. As we will show, the use of these independent observables to set more stringent limits is only possible if the uncertainties in the differential cross section measurements can be reduced, especially in the high momentum-transfer region. While this is expected to happen as more data are collected, it is not the case yet for those currently available.
For this reason we still use the total $t\bar t$ production cross section---where both ${\cal O}_1$ and ${\cal O}_2$ enter---to set the strongest limits available today. We then identify the reduction in uncertainty necessary for the $t\bar t$ differential cross section and the Higgs production process to set the most stringent limits on the operator ${\cal O}_1$ independently of ${\cal O}_2$ at the LHC, with future luminosities of 300 and 3000 fb$^{-1}$. \subsection{Form factors and gauge invariance} \label{sec:form_factors} The physical interpretation of the contribution of the operators in Eq.~\eqref{q1} and Eq.~\eqref{q2} to cross-section measurements is in terms of a departure from the point-like behavior of the top quark. From this point of view, as already mentioned in the previous section, these operators can be seen as the leading terms coming from the Taylor expansion of the strong version of the electromagnetic form factors. In order to explain this point, it is useful to first recall how nucleon electromagnetic form factors are defined. They are usually introduced through an effective parametrization of the nucleon-photon vertex $\Gamma_\mu$ which in momentum space reads as follows: \begin{equation} \label{emff} \Gamma_\mu(q,k)=e\, \gamma_\mu F_1(q^2)+i e\,\frac{\sigma_{\mu\nu}}{2M}q^\nu F_2(q^2)\,, \end{equation} where $q$ is the photon momentum and $F_1$ and $F_2$ are, respectively, the Dirac and Pauli form factors, with $F_1(0)=1$ and $F_2(0)=\kappa$. This parametrization of the vertex respects electromagnetic gauge invariance when considering on-shell external nucleons. In the case of strong interactions, where the underlying $SU(3)_C$ symmetry is non-abelian, a parametrization similar to that of Eq.~\eqref{emff} would violate gauge invariance.
Therefore, form factors that respect gauge invariance have to be introduced by considering, in addition to the covariant kinetic term, the following operators: \begin{equation} \small \label{strff} \bar \psi \left[ \frac{C_1}{\Lambda^2}\gamma^\mu f_1\left (\frac{D^2}{\Lambda^2} \right) D^\nu G_{\mu\nu}+\frac{C_2}{\Lambda}\sigma^{\mu\nu}f_2 \left( \frac{D^2}{\Lambda^2} \right) G_{\mu\nu} \right]\psi\,, \end{equation} where $D^2 =D_\mu D^\mu$. The functions $f_1$ and $f_2$ are the strong analogues of the Dirac and Pauli form factors. These form factors are assumed to admit a Taylor expansion. The leading terms of the expansion are what we consider in our study; they are represented by the operators introduced in Eq.~\eqref{q1} and Eq.~\eqref{q2}. While, in the case of electromagnetic interactions, form factors can be introduced in a way that their presence affects just the interaction vertex between a single photon and the fermion, in the case of strong interactions, gauge invariance requires that form factors affect also interaction vertices between the fermion and multiple gluons. This can be seen by expanding the functions $f_1$ and $f_2$ in Eq.~\eqref{strff} and substituting the explicit expression of the covariant derivative. \subsection{The fine print} \label{sec:fine_print} The reliability of the perturbative expansion of the effective theory depends on the relative size of the higher order operators with respect to those we retain in the cross section. This size is controlled by the energy of the process, the energy scale of the effective theory and the estimated strength of the couplings.
Concerning the leading corrections to the SM result, the size of which is controlled by $g_{SM}$, and indicating with $\bar E$ the energy probed in the process, we have terms \begin{equation} O\left( \frac{g_{SM} C^{(6)}\bar{E}^2}{\Lambda^2}\right) \, , \label{inter} \end{equation} which arise from the interference between the SM amplitude and the leading dimension-six operators, terms \begin{equation} O\left( \frac{C^{(6)} \bar{E}^2}{\Lambda^2}\right)^2 \label{square} \, , \end{equation} which come from the square (or the double insertion) of the same dimension-six operators, and terms \begin{equation} O\left( \frac{g_{SM} C^{(8)}\bar{E}^4}{\Lambda^4}\right) \label{8} \, , \end{equation} which originate from the interference between the SM amplitude and the dimension-eight operators. The terms in Eq.~\eqref{8} are formally comparable to those in Eq.~\eqref{square}. Without any assumption about the strength of the interactions behind the effective operators, it is not possible to decide whether the terms in Eq.~\eqref{8} should be included or can be safely neglected. To make such an assumption manifest we can re-write the coefficients $C^{(6)}$ and $C^{(8)}$ as $g_\star \tilde C^{(6)}$ and $g_\star \tilde C^{(8)}$, where $g_\star$ indicates the strength of these interactions. Accordingly, the condition for the terms in Eq.~\eqref{8} to be smaller than those in Eq.~\eqref{square} is simply \begin{equation} g_\star > g_{SM} \, . \label{gg} \end{equation} In our study, we look into departures from point-like behavior of the top quark. It is then reasonable to assume that such physics originates in interactions that are at least stronger than those of the standard model. This assumption makes the condition in Eq.~\eqref{gg} satisfied. 
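To make the power counting explicit (a short derivation, assuming the dimensionless coefficients $\tilde C^{(6)}$ and $\tilde C^{(8)}$ are of order one), the ratio of the terms in Eq.~\eqref{8} to those in Eq.~\eqref{square} reads
\begin{equation}
\frac{g_{SM}\, C^{(8)}\, \bar E^4/\Lambda^4}{\left(C^{(6)}\, \bar E^2/\Lambda^2\right)^2} = \frac{g_{SM}\, g_\star\, \tilde C^{(8)}}{g_\star^2 \big(\tilde C^{(6)}\big)^2} \sim \frac{g_{SM}}{g_\star}\, ,
\end{equation}
which is smaller than one precisely when the condition of Eq.~\eqref{gg} holds.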
This argument must be taken with a grain of salt: it is an assumption that $C^{(8)} = g_\star \tilde C^{(8)}$, rather than higher powers of $g_\star$, and it is another assumption that the numerical coefficients are sufficiently small for making dimensional analysis valid. When the terms in Eq.~\eqref{inter} are larger than the SM result itself, it is necessary to include also those in Eq.~\eqref{square} in order to make a likelihood test well defined (this point was already made in \cite{Contino:2016jqw}). The reason is that otherwise the observable to be estimated could be negative for negative values of the coefficient $C^{(6)}$. This can happen if the energy $\bar E$ in the process is large enough to overcome in Eq.~\eqref{inter} the suppression from $O(g_{SM} C^{(6)}/\Lambda^2)$. This is the case in our estimate of the total and differential cross sections because $\bar E=m_{t\bar t}$ (where $m_{t\bar t}$ is the invariant mass of the top quark pair) can become large enough. We therefore must include the terms in Eq.~\eqref{square}. On the other hand, the cross section for Higgs boson production is safe because $\bar E = m_H$ (where $m_H$ is the mass of the Higgs boson) and we can keep only terms of the type of Eq.~\eqref{inter}. Another comment is in order. Next-to-leading order (NLO) corrections to the processes under consideration are crucial in order to match the theoretical predictions with the experimental measurements. It is therefore in principle necessary to evaluate all amplitudes at least to this order, both for the SM and in the presence of the operators ${\cal O}_1$ and ${\cal O}_2$ of Eq.~\eqref{q1} and Eq.~\eqref{q2}. It has however been recently shown~\cite{Franzosi:2015osa} that these corrections, at least for what concerns the operator ${\cal O}_2$, only affect the cross section by an overall $k$-factor which is equal for the SM and for the SM augmented by the operator ${\cal O}_2$.
This holds true both for the total cross section and for the differential ones, where the $k$-factors are now approximately equal to each other bin by bin. Pending a formal proof, we will assume that the same holds true also for the operator ${\cal O}_1$. For this reason, we perform our calculation at the leading order (LO). \section{Top pair production cross section measurements} \label{sec:top} In order to calculate total and differential event rates for the $t\bar t$ process, we implement the operators of Eq.~\eqref{q1} and Eq.~\eqref{q2} in the {\tt UFO}~\cite{Degrande:2011ua} format through the {\tt Feynrules}~\cite{Alloul:2013bka} package and use {\tt MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca} as event generator. We then analyse the generated events via the {\tt MadAnalysis5}~\cite{Conte:2012fm} package. We perform our calculation at the leading order and, in comparing our results with the $t\bar t$ rates (both total and differential), we assume that the central value of the experimental measurement corresponds to the SM predicted cross section, computed by fixing $C_1=C_2=0$ in our numerical calculation. In other words, we are computing \emph{expected limits} on the two Wilson coefficients, as is usually done when calculating limits for projected measurements. In the case of actual data, we are assuming that the mismatch between the measured values and the SM predictions, when folded with the relevant $k$ factors, is due to statistical fluctuations, which we ignore. \subsection{Limits from the total cross section} \label{sec:tt_total} The contribution of the two operators in Eq.~\eqref{q1} and Eq.~\eqref{q2} to the total cross section for $t \bar t$ production has been previously estimated and limits on their size obtained.
The most recent analysis of the two operators taken by themselves can be found in~\cite{Fabbrichesi:2013bca}, while one considering the full set of operators affecting top quark phenomenology has been presented in~\cite{Buckley:2015nca} by means of the dedicated package {\tt TopFitter}. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.22\textwidth]{./diag_ggtt_1.pdf} \includegraphics[width=0.22\textwidth]{./diag_ggtt_2.pdf}\\ \includegraphics[width=0.22\textwidth]{./diag_ggtt_3.pdf} \includegraphics[width=0.22\textwidth]{./diag_qqtt.pdf} \caption{\small Representative Feynman diagrams for $t\bar t$ production through gluon fusion (a)-(c) and quark-antiquark annihilation (d). The black dot represents the insertion of one of the two operators of Eq.~\eqref{q1} and Eq.~\eqref{q2}.} \label{fig:ttbar-diagrams} \end{center} \end{figure} We update these constraints here by means of the most precise 13 TeV LHC data. The CMS collaboration recently released a measurement of the top quark pair total cross section performed in the single lepton channel with an integrated luminosity of $3.2\;$fb$^{-1}$~\cite{Sirunyan:2017uhy}. This measurement yields a value for the total cross section of \begin{equation} \sigma(pp\to t \bar t)=835\pm 3\;(\rm stat) \pm 23\;(\rm syst) \pm 23\;(\rm lum)\;{\rm pb}. \end{equation} The relative error on this measurement, after summing in quadrature the various sources of uncertainty, is about 3.9\%, comparable to the one obtained with the combination of 7 and 8 TeV data in the dileptonic channel~\cite{Khachatryan:2016mqs}. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_exclusive_new.pdf} \vskip0.4cm \includegraphics[width=0.45\textwidth]{./C2_exclusive_new.pdf} \caption{Relative modification of the $t\bar t$ total cross section, $\Delta\sigma(t\bar t)/\sigma(t\bar t)=\sigma(t\bar t)^{\rm BSM}/\sigma(t\bar t)^{\rm SM}-1$, induced by the presence of the operators $\mathcal O_1$ and $\mathcal O_2$.
The blue and green shaded regions correspond to the 95\% confidence level intervals on the Wilson coefficients $C_1$ and $C_2$ from the cross section determination from LHC and Tevatron data respectively. The limits can be found by looking at the intersections of the curves with the regions of the same color: $-5.48/{\rm TeV^2}<C_1<1.08/{\rm TeV^2}$ and $-0.30/{\rm TeV^2}<C_2<0.28/{\rm TeV^2}$ for the LHC and $-0.38/{\rm TeV^2}<C_1<0.35/{\rm TeV^2}$ and $-0.49/{\rm TeV^2}<C_2<0.45/{\rm TeV^2}$ for the Tevatron.} \label{fig:exclusive} \end{center} \end{figure} The operator of Eq.~\eqref{q1} does not affect the partonic process $gg\to t\bar t$, thus only modifying the $q\bar q$ initiated reaction, which at the LHC is subdominant in the $t\bar t$ cross section, given that the anti-quark parton has to be extracted from the sea quarks of the proton. This comes about because of gauge invariance and the presence of a contact vertex with two gluons attached to the quark lines (see Fig.~\ref{fig:ttbar-diagrams} (a) and (c)), a contribution which cancels out that of the vertex with a single gluon. For this reason the Wilson coefficient $C_1$ can be more effectively constrained by Tevatron data, where the anti-quark state is extracted from the valence quarks of the colliding anti-proton. A combination of the results from the CDF and D0 collaborations gives the following measurement of the total $t\bar t$ cross section~\cite{Schilling:2013nca} \begin{equation} \sigma(p\bar p\to t \bar t)=7.65 \pm 0.42\;{\rm pb} \end{equation} with a relative precision of about 5.5\%, which we use throughout our analysis. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_C2_total.pdf}\hfill \caption{95\% confidence intervals on the Wilson coefficients $C_1$ and $C_2$ from the measurements of the $t\bar t$ total cross section at the LHC, blue, and Tevatron, green.
The corresponding combined limits are listed in Table~\ref{tab:current}.} \label{fig:combined} \end{center} \end{figure} Following the procedure described at the beginning of this Section, and by fixing one of the two Wilson coefficients to zero, we obtain the limits from the total $t\bar t$ cross section measurements which are shown in Fig.~\ref{fig:exclusive}, where the blue and green shaded areas correspond to the 95\% confidence level uncertainties on the cross section determination at the LHC and Tevatron respectively, and the solid lines correspond to the relative modification of the SM cross section due to the presence of the operators $\mathcal O_1$ and $\mathcal O_2$. If we allow for the presence of both operators at the same time, we obtain the limits shown in Fig.~\ref{fig:combined}. The two exclusion regions have different inclinations because the contribution of the operator $\mathcal O_1$ depends on the relative importance of the gluon- and quark-initiated reactions, which differs between the Tevatron and the LHC. The importance of the Tevatron data in constraining the $C_1$ Wilson coefficient is thus manifest, the bound being a factor of three better for $C_1>0$ (taking $C_2=0$). \subsection{Limits from the differential cross sections} \label{sec:tt} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_C2_shape_mtt.pdf} \vskip 0.4cm \includegraphics[width=0.45\textwidth]{./C1_C2_shape_pt.pdf} \caption{ Differential distributions for top quark pair production with respect to the top pair invariant mass and the top transverse momentum, normalized to unity, for the case where just the operator ${\cal O}_1$ is inserted, solid blue, and just the operator ${\cal O}_2$ is inserted, solid green. The SM prediction is shown in dashed black. The relative independence from ${\cal O}_2$ is manifest.
Also, it is for large $m_{t\bar t}$ and $p^t_T$ that the distributions are most sensitive to the insertion of ${\cal O}_1$.} \label{fig:shape} \end{center} \end{figure} The current center of mass energy of the LHC proton collisions and the large number of top quark pairs expected to be produced during the present run of the CERN machine will allow the measurement of top quark differential cross sections with an unprecedented precision, with a potentially large number of events populating the tails of such distributions, thus allowing for a more stringent comparison between experimental measurements and theoretical predictions. In fact, besides modifying the total rate for $t\bar t$ production, the effective operators in Eq.~\eqref{q1} and Eq.~\eqref{q2} can in principle affect the shape of the differential distributions, altering them with respect to the SM predictions. Therefore the possibility of using \emph{differential measurements} in addition to total cross sections potentially offers a powerful means to constrain the coefficients of these higher dimensional operators. In particular, both the top quark pair invariant mass differential distribution ($\mbox{d} \sigma/\mbox{d} m_{t\bar t}$) and the top quark transverse momentum differential distribution ($\mbox{d} \sigma/\mbox{d} p^t_{T}$) present an interesting behavior with respect to the two operators of Eq.~\eqref{q1} and Eq.~\eqref{q2}. The insertion of the ${\cal O}_1$ operator gives rise to the typical tail enhancement of the distributions at large invariant masses and transverse momenta, as shown in Fig.~\ref{fig:shape}, where the differential rates normalized to the total cross section are computed both for the top pair invariant mass distribution and for the top quark transverse momentum.
On the other hand, for high invariant masses and transverse momenta, the shapes of the differential distributions computed in the presence of the operator ${\cal O}_2$ are not modified with respect to the SM when just the linear order in the Wilson coefficient $C_2$ is retained. This is true at LO~\cite{Zhang:2010dr} but also at NLO, as shown in~\cite{Franzosi:2015osa}, where both the SM and the EFT contributions are evaluated at NLO. The computation of~\cite{Franzosi:2015osa} shows that evaluating both terms at NLO avoids an overestimation of the enhancement of the contribution of the ${\cal O}_2$ operator in the high energy regime. The inclusion of quadratic terms in $C_2$ modifies the high energy tails of the distributions already at tree level. In~\cite{Aguilar-Saavedra:2014iga}, the authors retain up to quartic terms in the effective operator coefficients when computing the cross sections and find an enhancement of the sensitivity in the ultra boosted regime. However, the contribution of these quadratic terms is negligible if the specific values of $C_2$ used to generate the distributions in the relevant energy range are sufficiently small (see Fig.~\ref{fig:shape}). For what concerns our analysis, the different behavior of the two operators suggests that the \emph{normalized} differential cross section measurements can be used to set a limit on the coefficient of the ${\cal O}_1$ operator, irrespective of the value taken by the ${\cal O}_2$ operator. From the experimental side, while the invariant mass distribution of the top quark pairs, $m_{t\bar t}$, has been previously measured by both the CDF and D0 collaborations at the Tevatron~\cite{Aaltonen:2009iz,Abazov:2014vga}, more recently both the ATLAS and CMS collaborations have provided unfolded measurements of this and other observables, normalized both to the total event rate and to unity~\cite{ATLAS:2016soq,CMS-PAS-TOP-16-007,CMS-PAS-TOP-16-013,ATLAS:2016jct,Aaboud:2016syx}.
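The insensitivity of normalized distributions to the linear-in-$C_2$ term can be illustrated with a minimal numerical sketch (the bin contents below are made-up numbers, not actual predictions): a correction that rescales every bin by the same factor cancels exactly in $1/\sigma\;\mbox{d}\sigma/\mbox{d}x$.

```python
# Minimal sketch: a bin-independent multiplicative correction (as the
# linear-in-C2 term effectively is, shape-wise) drops out of the
# normalized distribution 1/sigma * dsigma/dx.
# The bin contents are illustrative numbers, not actual predictions.

def normalize(bins):
    total = sum(bins)
    return [b / total for b in bins]

sm_bins = [120.0, 80.0, 40.0, 15.0, 5.0]   # hypothetical dsigma/dx per bin
k = 1.07                                   # uniform rescaling factor

bsm_bins = [k * b for b in sm_bins]        # total rate changes by k ...
sm_shape = normalize(sm_bins)
bsm_shape = normalize(bsm_bins)

# ... but the shape is unchanged: the overall factor k cancels.
assert all(abs(s - b) < 1e-12 for s, b in zip(sm_shape, bsm_shape))
```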
We will use the ATLAS differential measurements of~\cite{ATLAS:2016jct}, which have been performed in the all hadronic channel with an integrated luminosity of 14.7 fb$^{-1}$ exploiting a final state with highly boosted tops, which has been shown to be effective in testing the top quark intrinsic structure~\cite{Aguilar-Saavedra:2014iga}. \begin{table}[ht] \begin{tabular}{c || c | c | c | c | c | c | c | c | c } $m_{t\bar t}$ [TeV] & 1.0 & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.7 & 2.0 & 2.3-3.0 \\ \hline Error [\%] & 36 & 20 & 25 & 30 & 31 & 32 & 63 & 58 & 123 \\ \end{tabular} \vskip 10pt \begin{tabular}{c || c | c | c | c | c | c } $p_{T}^t$ [TeV] & 0.5 & 0.55 & 0.6 & 0.65 & 0.75 & 0.9-1.2 \\ \hline Error [\%] & 19 & 25 & 28 & 45 & 73 & 95 \\ \end{tabular} \caption{Top pair invariant mass and top quark transverse momentum binning of the ATLAS measurements of the $t\bar t$ differential cross sections and relative errors in \%~\cite{ATLAS:2016jct}. The quoted values indicate the lower edge of the considered bin, except for the last bin, where the upper value is explicitly indicated.} \label{tab:binning} \end{table} We thus perform a $\chi^2$ fit to the measured top quark normalized invariant mass and transverse momentum distributions, see Fig.~\ref{fig:chi}, again assuming that the central value of the experimental measurements coincides with our predictions when $C_1=C_2=0$, with the uncertainties reported in Tab.~\ref{tab:binning}. The number of degrees of freedom for the $\chi^2$ fit corresponds to the number of bins of the considered distribution minus one, since one degree of freedom is fixed by the requirement that the area under the curve is equal to unity.
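The fitting procedure can be sketched as follows (a toy illustration: the $C_1$ dependence of the bins is a made-up parametrization, not our actual EFT prediction; only the relative errors are taken from the $p_T^t$ part of Tab.~\ref{tab:binning}):

```python
# Toy chi^2 fit for C1 from a normalized p_T^t spectrum. The "data"
# central values are set equal to the SM prediction (C1 = 0), as in the
# expected-limit setup, with the ATLAS per-bin relative errors. The C1
# dependence below is a purely illustrative tail enhancement.

rel_err = [0.19, 0.25, 0.28, 0.45, 0.73, 0.95]   # per-bin relative errors

def normalize(bins):
    total = sum(bins)
    return [b / total for b in bins]

def toy_bins(c1):
    base = [100.0, 60.0, 35.0, 18.0, 8.0, 3.0]   # hypothetical SM bins
    # toy enhancement growing with the bin index (the "tail") and C1^2
    return [b * (1.0 + 0.1 * c1 * c1 * i) for i, b in enumerate(base)]

data = normalize(toy_bins(0.0))                  # central values = SM

def chi2(c1):
    pred = normalize(toy_bins(c1))
    return sum(((p - d) / (e * d)) ** 2
               for p, d, e in zip(pred, data, rel_err))

# 95% CL threshold for a chi^2 with 5 degrees of freedom
# (6 bins minus the normalization constraint): 11.07.
THRESHOLD = 11.07
```

In the actual analysis the toy parametrization is replaced by the generated predictions; the allowed region is the set of $C_1$ values for which $\chi^2(C_1)$ stays below the threshold.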
With this procedure, and taking the data for the $p^t_T$ distribution, which turns out to provide the most stringent constraint, we set a limit of $-0.80/{\rm TeV^2}<C_1/\Lambda^2<0.68/{\rm TeV^2}$, which is comparable to the one that can be obtained through the total cross section measurements. We will show in Sec.~\ref{sec:combine} the prospects for the determination of the $C_1$ coefficient with the increase of the data collected by the LHC. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_shape.pdf} \caption{$\chi^2$ distribution for the Wilson coefficient $C_1$ from the differential cross section measurements of the $t\bar t$ invariant mass and the top quark transverse momentum of~\cite{Aaboud:2016syx}. The horizontal lines represent the 95\% confidence level limits taken for a $\chi^2$ with 8 degrees of freedom, corresponding to the 9 bins of data considered in the $1/\sigma\;\mbox{d}\sigma/\mbox{d} m_{t\bar t}$ distribution, and 5 degrees of freedom, corresponding to the 6 bins of data considered in the $1/\sigma\;\mbox{d}\sigma/\mbox{d} p^t_{T}$ distribution.} \label{fig:chi} \end{center} \end{figure} \section{Higgs production cross section measurements} \label{sec:higgs} \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.22\textwidth]{./diag_ggh_1.pdf} \includegraphics[width=0.22\textwidth]{./diag_ggh_2.pdf} \includegraphics[width=0.22\textwidth]{./diag_ggh_3.pdf} \caption{\small Representative Feynman diagrams for Higgs boson production through gluon fusion. The black dot represents the insertion of the operator of Eq.~\eqref{q2}.} \label{gghdiagrams} \end{center} \end{figure*} The production of the Higgs boson at the LHC is dominated by the gluon fusion channel. This process arises in the SM from a one loop diagram mediated by colored fermions, the amplitude being dominated by the top quark contribution because of its large Yukawa coupling to the Higgs boson.
It has been discussed as an observable sensitive to the operator ${\cal O}_2$ in \cite{Degrande:2010kt,Chien:2015xha}. The presence of the higher dimensional operators of Eq.~\eqref{q1} and Eq.~\eqref{q2} introduces modifications to the coupling between the top quark and the gluon, thus affecting the Higgs boson production rate. Only the ${\cal O}_2$ operator contributes to this process because for on-shell gluons the correction arising from ${\cal O}_1$ identically vanishes. This is obvious if one recalls that the operator ${\cal O}_1$ can be written in terms of four-fermion operators, as shown in Appendix~\ref{appendix}. Therefore the amplitude for $gg\to H$ can be written as the sum of two contributions \begin{equation} {\cal M}={\cal M}_{\rm SM}+{\cal M}_{{\cal O}_2}\,, \label{eq:higgs-amplitude} \end{equation} where ${\cal M}_{\rm SM}$ is the SM contribution and ${\cal M}_{\rm {\cal O}_2}$ is the contribution coming from one insertion of the ${\cal O}_2$ operator. Terms coming from two insertions of the dipole operator are neglected, since in the end we are going to retain only contributions linear in $C_2$, as discussed in Section~\ref{sec:fine_print}. We assume that the Yukawa coupling between the top quark and the Higgs boson takes its SM value and that the contribution of the operator ${\cal O}_{HG}=H^\dag H G^a_{\mu\nu} G^{\mu\nu}_a$ vanishes. Furthermore, we assume that its mixing with the operators of Eq.~\eqref{q1} and Eq.~\eqref{q2} is negligible. With these assumptions we can use the $gg\to H$ process to set a direct limit on the coefficient of the ${\cal O}_2$ operator. We rewrite the effective operator of Eq.~\eqref{q2} in its $SU(2)_L \times U(1)_Y$ invariant form in order to correctly take into account all the contributions affecting Higgs phenomenology arising from the operator $\mathcal O_2$, see Fig.~\ref{gghdiagrams}.
We compute the Higgs production cross section analytically, cross checking the results by means of {\tt Package X}~\cite{Patel:2015tea}. The final numerical integration of the Feynman integrals has also been checked against {\tt FormCalc8}~\cite{ChokoufeNejad:2013qja}. A factor 4 takes into account the identical contributions coming from crossing of the gluon lines and switching the vertex insertion of the dipole operator. The contribution of the diagram (b) of Fig.~\ref{gghdiagrams} turns out to be identically zero in dimensional regularization. We therefore have \begin{widetext} \begin{equation} \begin{split} & ({\cal M}_{\mathcal O_2})^{ab}_{\lambda_1 \lambda_2} = 4 \times g_s \frac{m_t}{\sqrt{2}}\frac{2\;C_2}{\Lambda^2} \frac{1}{16\pi^2}(m_H^2 g_{\mu\nu} - 2 q_{2\mu} q_{1\nu}) \varepsilon^\mu_{\lambda_1} (q_1) \varepsilon^\nu_{\lambda_2} (q_2) \, \mbox{Tr}\, \left[T^a T^b \right] \times \left\{ \frac{1}{\bar \epsilon} + 1- \log \frac{\mu^2}{m_t^2} \right. \\ & \left. + \frac{m_t^2}{m_H^2} \log^2 \left( \frac{ \sqrt{m_H^4 - 4 m_t^2 m_H^2} + 2 m_t^2 - m_H^2}{2 m_t^2} \right) + \frac{\sqrt{m_H^4 - 4 m_t^2 m_H^2}}{m_H^2} \log \left( \frac{\sqrt{m_H^4 - 4 m_t^2 m_H^2} + 2 m_t^2 - m_H^2}{2 m_t^2}\right) \right\} \, \end{split} \end{equation} \end{widetext} where $m_t$ and $m_H$ are the masses of the top quark and the Higgs boson. The vectors $\varepsilon^\mu_{\lambda_1}(q_1)$ and $\varepsilon^\nu_{\lambda_2} (q_2)$ represent the polarizations for the two incoming gluons with momenta $q_1$ and $q_2$. We regularize the divergent loop integral by means of dimensional regularization where the pole in 4 dimensions is written in the $\overline{MS}$ scheme, {\it i.e.} $ 1/\bar \epsilon= 1/\epsilon-\gamma_E+\log(4\pi)$. In order to have a finite amplitude we subtract the $ 1/\bar \epsilon$ pole by a counter-term proportional to the effective operator describing the direct coupling of the Higgs boson to the gluon fields: ${\cal O}_{HG}=H^\dag H G^a_{\mu\nu} G^{\mu\nu}_a$. 
This renormalization procedure leaves a logarithmic dependence on the subtraction scale, which we take to be $\mu=m_H$ to match the factorization scale of the process. We also explicitly checked that the double insertion of the $\mathcal O_2$ operator gives rise to a small correction that can be neglected, as discussed in Section~\ref{sec:fine_print}. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{./ggH.pdf} \caption{ $\chi^2$ distribution for the Wilson coefficient $C_2$ from Higgs production via gluon fusion. The horizontal line represents the 95\% confidence level limit taken for a $\chi^2$ with 1 degree of freedom. \label{fig:chi2} } \end{center} \end{figure} In computing the squared amplitude of Eq.~\eqref{eq:higgs-amplitude}, the leading correction to the SM cross section is a term linear in the $C_2$ Wilson coefficient. By fixing $m_{t}=172\;$ GeV and $m_H=125\;$ GeV we find that the ratio of the gluon fusion Higgs production cross section with respect to its SM value is \begin{equation} \mu_{{\cal O}_2} \simeq 1 + 0.375~{\rm TeV^2} \frac{C_2}{\Lambda^2} \,. \label{eq:higgs-mod} \end{equation} This ratio is measured experimentally and usually presented by the experimental collaborations either in terms of \emph{signal strength} values, which are precisely the ratio of the experimental measurements with respect to the SM expectation, or of coupling modifiers, the ratio of the effective Higgs-gluon-gluon coupling compared with the SM prediction. In either case, the result of Eq.~\eqref{eq:higgs-mod} allows us to directly use the current precision on the Higgs production measurements and set a limit on the $C_2$ Wilson coefficient. As for the computation of the $t\bar t$ production cross section, this ratio has been obtained at LO.
We however assume this result to hold also at NLO, since the $k$ factors induced by higher order corrections are expected to be the same for the SM and the effective operator cases, therefore cancelling out in the ratio. The ATLAS and CMS collaborations have performed a combined measurement of the Higgs signal strength with about 5 and 20 fb$^{-1}$ of data collected during the 7 and 8 TeV runs of the LHC, yielding a value for the gluon fusion Higgs production signal strength~\cite{TheATLASandCMSCollaborations:2015bln} \begin{equation} \mu_{ggH} = 1.03^{+0.17}_{-0.15}. \end{equation} The $\chi^2$ value for the parameter $C_2$ is shown in Fig.~\ref{fig:chi2}, from which we find the 95\% confidence level limits $-0.77/{\rm TeV^2}<C_2/\Lambda^2<0.93/{\rm TeV^2}$, also reported in Table~\ref{tab:current}. This estimate provides limits on the coefficient of the ${\cal O}_2$ operator which are not yet competitive with those obtained from the measurements of the top pair production cross section. We will show in the next Section how the expected improvement in the determination of this signal strength will provide stronger limits on the $C_2$ Wilson coefficient. \section{Combination and prospects} \label{sec:combine} In the previous sections we have shown that the measurement of the normalized top quark transverse momentum differential distribution in top pair production and the measurement of the Higgs boson production cross section through gluon fusion can be used to set \textit{independent limits} on the coefficients of the operators ${\cal O}_1$ and ${\cal O}_2$ respectively. We show in Fig.~\ref{fig:results_present} the limits on the $C_1$ and $C_2$ Wilson coefficients obtained through this method, together with those obtained only by means of the measurements of the total top quark pair production cross sections performed at both the Tevatron and the LHC. Table~\ref{tab:current} summarizes the various bounds.
These bounds are the most stringent among those so far available for the operators ${\cal O}_1$ and ${\cal O}_2$ (compare with those in \cite{Fabbrichesi:2013bca} and \cite{Buckley:2015nca}). \begin{figure}[h!] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_C2_total_diff_higgs.pdf}\hfill \caption{95\% confidence intervals for $C_1$ and $C_2$ from the measurements of the top quark transverse momentum differential cross section and Higgs production via gluon fusion cross section, vertical gray and horizontal gray shaded area respectively, with currently available data. The limits from the measurements of the $t\bar t$ total cross section at the LHC (blue) and Tevatron (green) are also shown.} \label{fig:results_present} \end{center} \end{figure} \begin{table*}[ht!] \begin{center} \vspace{0.2cm} \begin{tabular}{|c|c|c|} \hline $\sigma_{t\bar t}$ (Tevatron + LHC) & $\mu_{ggH}$ & $ \mbox{d} \sigma_{t \bar t}/\mbox{d} p_{T}^t$ \cr \hline \hline $\qquad -0.74 < C_1/\Lambda^2 < 0.71 \qquad$ & --- & $\qquad -0.80 < C_1/\Lambda^2 < 0.68 \qquad $ \cr \hline $\qquad -0.49 < C_2/\Lambda^2 < 0.42 \qquad$ & $\qquad -0.77 < C_2/\Lambda^2 < 0.93 \qquad $ & --- \cr \hline \end{tabular} \end{center} \caption{Limits at 95\% confidence level on the coefficients $C_{1}$ and $C_2$ from current data. Values in the first column come from the total cross sections and are obtained by marginalization of one operator against the other. The limits in the next two columns are obtained for the two operators independently by means of Higgs production and the indicated differential cross section. All values are in units of TeV$^{-2}$.} \label{tab:current} \end{table*} The proposed method thus sets limits comparable to those obtained from total $t\bar t$ cross section measurements on the operator $\mathcal O_1$ and roughly a factor of two weaker on the operator $\mathcal O_2$.
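As a rough numerical cross-check of the $C_2$ bound from the Higgs signal strength, a simplified Gaussian inversion of Eq.~\eqref{eq:higgs-mod} (using $\pm 1.96\sigma$ with the asymmetric errors on each side; the full $\chi^2$ profile used in the text differs from this at the ten-percent level) gives:

```python
# Invert mu = 1 + 0.375 * C2/Lambda^2 for the measured gluon-fusion
# signal strength mu_ggH = 1.03 (+0.17 / -0.15). Simplified Gaussian
# 95% CL treatment (1.96 sigma on each side with the corresponding
# asymmetric error); the chi^2 profile used in the text yields slightly
# different (quoted) values.
SLOPE = 0.375                         # TeV^2 coefficient of C2/Lambda^2
mu, err_up, err_down = 1.03, 0.17, 0.15

c2_max = (mu + 1.96 * err_up - 1.0) / SLOPE    # upper 95% CL limit
c2_min = (mu - 1.96 * err_down - 1.0) / SLOPE  # lower 95% CL limit

print(f"{c2_min:.2f} < C2/Lambda^2 < {c2_max:.2f}  [TeV^-2]")
```

This crude inversion gives roughly $-0.70 < C_2/\Lambda^2 < 0.97~{\rm TeV^{-2}}$, within about ten percent of the quoted interval $-0.77 < C_2/\Lambda^2 < 0.93~{\rm TeV^{-2}}$.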
However, while the current uncertainties on the measurement of the top quark pair total cross section, which are about 4\%, are not going to improve substantially, this is not the case for the top quark differential cross sections and for the Higgs production cross section measurements, which are expected to become more precise. In order to infer the projected limits on the $C_1$ and $C_2$ Wilson coefficients we thus proceed in the following way. For the measurements of the top quark transverse momentum differential cross section we rescale the uncertainties reported in Tab.~\ref{tab:binning} by the luminosity dependent factor $\sqrt{\mathcal L_0/\mathcal L}$, where $\mathcal L_0=14.7$ fb$^{-1}$ indicates the currently collected luminosity and $\mathcal L$ the projected luminosity. We finally take the error associated with this measurement to be \begin{equation} \frac{\Delta \sigma}{\sigma}\bigg\rvert_{\mathcal{L}} = {\rm Max}\left[ 0.15,\frac{\Delta \sigma}{\sigma}\bigg\rvert_{\mathcal{L}_0}\times \sqrt{\frac{\mathcal L_0}{\mathcal L}} \right], \end{equation} thus assuming a conservative floor of 15\% for the error estimate. For the Higgs production through gluon fusion, we use the projected uncertainties on the measurements as provided by the CMS collaboration~\cite{CMS-NOTE-2012-006}, which are 5.7\% (2.7\%) for a collected integrated luminosity of 300 (3000) fb$^{-1}$. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{./C1_C2_total_diff_higgs_proj.pdf}\\ \caption{95\% confidence intervals for $C_1$ and $C_2$ from the measurements of the top quark transverse momentum differential cross section and of the Higgs production via gluon fusion cross section, vertical and horizontal gray shaded areas respectively. The lighter (darker) gray areas correspond to an integrated luminosity of 300 (3000) fb$^{-1}$, respectively.
The limits from the measurements of the $t\bar t$ total cross section at the LHC (blue) and Tevatron (green) are also shown.} \label{fig:results_proj} \end{center} \end{figure} Through this procedure we obtain the expected limits on the Wilson coefficients $C_1$ and $C_2$ shown in Fig.~\ref{fig:results_proj}, where light and dark gray regions correspond to an integrated luminosity of 300 and 3000 fb$^{-1}$ respectively. For comparison, the previous limits obtained from the measurements of the total $t\bar t$ cross section at the Tevatron and the LHC are also shown. The plot shows that with an integrated luminosity of 300 fb$^{-1}$ the combination of the differential measurements in $t\bar t$ production together with the measurements of the Higgs production rate through gluon fusion will be able to set comparable limits on the $C_2$ Wilson coefficient, and a stronger limit on $C_1$ for $C_1>0$. At the end of the LHC program, that is, with an integrated luminosity of 3000 fb$^{-1}$, these measurements will provide the most stringent limits on the coefficients of the $\mathcal O_1$ and $\mathcal O_2$ operators. We report these values in Tab.~\ref{tab:future}. All the limits can be re-expressed as lower bounds on $\Lambda$, the scale of the effective theory, by fixing $C_1=C_2= 4 \pi $ and taking the absolute value of the limits in Table~\ref{tab:current}. Accordingly we find \begin{equation} \Lambda \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}\ 4.3 \; \mbox{TeV}\quad \mbox{and} \quad \Lambda \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}\ 5.5 \; \mbox{TeV} \label{eq:lambda-limit} \end{equation} from, respectively, the operator in Eq.~\eqref{q1} and Eq.~\eqref{q2}. The reliability of the effective field theory expansion requires the probed energies to satisfy $\bar E < \Lambda$. This is true for Higgs production.
It also holds for the analysis of the differential top-pair production measurements, even though in this case, with explored transferred energies of up to about 3 TeV, we are approaching the limit. The bounds of Eq.~\eqref{eq:lambda-limit} could be raised to almost 9 TeV with the expected reduced uncertainties. \begin{table}[ht!] \small \begin{center} \vspace{0.2cm} \begin{tabular}{|c|c|} \hline LHC 300 fb$^{-1}$ & LHC 3000 fb$^{-1}$ \cr \hline \hline $\quad -0.49 < C_1/\Lambda^2 < 0.19 \quad$ & $ \quad -0.47 < C_1/\Lambda^2 < 0.19 \quad $ \cr \hline $\quad -0.30 < C_2/\Lambda^2 < 0.30 \quad$ & $ \quad -0.14 < C_2/\Lambda^2 < 0.14 \quad $ \cr \hline \end{tabular} \end{center} \caption{Expected limits at 95\% confidence level on the coefficients $C_{1}$ and $C_2$ from future data from $ \mbox{d} \sigma_{t \bar t}/\mbox{d} p_{T}^t$ and $\mu_{ggH}$ respectively. Values are in units of TeV$^{-2}$.} \label{tab:future} \end{table} \begin{acknowledgments} We thank Marina Cobal and Michele Pinamonti for discussions. MF is associated with SISSA and the Department of Physics, University of Trieste. The work of AT is supported by Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES). AT would like to thank T. Hahn for the help with \texttt{FormCalc} and \texttt{LoopTools}. AT would like to thank ICTP-SIAFR and IFT-UNESP for hospitality. \end{acknowledgments}
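The translation of a 95\% CL limit on $C/\Lambda^2$ into a lower bound on $\Lambda$ at $|C|=4\pi$, used in Eq.~\eqref{eq:lambda-limit}, amounts to $\Lambda \geq \sqrt{4\pi/|C/\Lambda^2|_{\rm max}}$. A minimal numerical sketch (illustrative only; the helper name is our own, and the inputs are the one-sided limits from Table~\ref{tab:current} that reproduce the quoted numbers):

```python
import math

def lambda_lower_bound(c_over_lambda2_limit_tev2):
    """Lower bound on the EFT scale Lambda (in TeV), obtained by
    fixing |C| = 4*pi in the 95% CL limit |C|/Lambda^2 < limit."""
    return math.sqrt(4 * math.pi / abs(c_over_lambda2_limit_tev2))

# One-sided limits from the current-data table (TeV^-2); with |C| = 4*pi
# these reproduce the ~4.3 TeV and ~5.5 TeV bounds quoted in the text.
print(round(lambda_lower_bound(0.68), 1))  # O_1: 4.3
print(round(lambda_lower_bound(0.42), 1))  # O_2: 5.5
```

Feeding in the projected limits of Table~\ref{tab:future} instead gives the corresponding expected reach.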
\section{Notation and Preliminary Results}\label{sec:notation} \subsection{Notation} For an $r$-graph $\mc{F}$ and $v \in V(\mc{F})$, we denote by $ L_{\mc{F}}(v)$ the \emph{link of the vertex \(v\)}: $$ L_{\mc{F}}(v):=\{I \in (V(\mc{F}))^{(r-1)} \: | \: I \cup \{v\} \in \mc{F} \}.$$ More generally, for \(I\subseteq V(\mc{F})\) the \emph{link $L_{\mc{F}}(I)$ of \(I\)} is defined as $$L_{\mc{F}}(I) := \{J\subseteq V(\mc{F}) \: | \: J \cap I = \emptyset, I \cup J\in \mc{F}\}.$$ In this notation, we will omit the index $\mc{F}$ whenever $\mc{F}$ is understood from the context. We say that an $r$-graph $\mc{G}$ is obtained from an $r$-graph $\mc{F}$ by \emph{cloning a vertex $v$ to a set $W$} if $\mc{F} \subseteq \mc{G}$, $V(\mc{G}) \setminus V(\mc{F}) = W\setminus\{v\}$ and $ L_{\mc{G}}(w)= L_{\mc{F}}(v)$ for every $w \in W$. We say that $\mc{G}$ is \emph{a blowup of $\mc{F}$} if $\mc{G}$ is isomorphic to an $r$-graph obtained from $\mc{F}$ by repeatedly cloning and deleting vertices. We denote the set of all blowups of $\mc{F}$ by $\mf{B}(\mc{F})$. We say that a family $\mf{F}$ of $r$-graphs is \emph{clonable} if every blowup of any $r$-graph in $\mf{F}$ also lies in $\mf{F}$. The Hypergraph Removal Lemma~\cite{Gow07,RodSko06} allows one to restrict many arguments related to Tur\'an-type problems to clonable families, and some of the more general results of this paper hold for all clonable families. Let us introduce another class of hypergraph families, which will be important for us. For a family of $r$-graphs $\mf{F}$, let $$m(\mf{F},n):=\max_{\substack{\mc{F} \in \mf{F} \\ \brm{v}({\mc{F}}) = n}} |\mc{F}|.$$ We say that $\mf{F}$ is \emph{smooth} if the limit $\lim_{n \to \infty}m(\mf{F},n)/n^r$ exists. For a smooth family $\mf{F}$ we denote this limit by $m(\mf{F})$. Our first lemma establishes a connection between clonable and smooth families. \begin{lem}\label{lem:smooth} Every clonable family is smooth.
\end{lem} \begin{proof}Let $\mf{F}$ be a clonable family of $r$-graphs. Let $$d:=\limsup_{n \to \infty} \frac{m(\mf{F},n)}{n^r}.$$ We need to show that for every $0<\varepsilon<1$ there exists $N>0$ such that $ m(\mf{F},n)/n^r \geq d-\varepsilon$ for every $n \geq N$. Let $\mc{F} \in \mf{F}$ be chosen so that $|\mc{F}| \geq (d -\delta)\brm{v}(\mc{F})^r$ for $\delta:=\varepsilon/(d+1)$. Let $s:=\brm{v}(\mc{F})$. For a positive integer $k$, let $\mc{F}^{(k)}$ be an $r$-graph obtained by cloning every vertex of $\mc{F}$ to a set of size $k$. Then $\mc{F}^{(k)} \in \mf{F}$, $\brm{v}(\mc{F}^{(k)})=ks$ and $|\mc{F}^{(k)}|=k^r|\mc{F}| \geq (d -\delta)(ks)^r$. Therefore, for $n \geq (s-1)r/\delta$, we have \begin{align*} \frac {m(\mf{F},n)}{n^r} &\geq (d -\delta)\left(\frac{s\lfloor n/s \rfloor}{n}\right)^r \geq (d -\delta)\left(1 -\frac{s-1}{n}\right)^r \\ &\geq (d -\delta)\left(1 -\frac{(s-1)r}{n}\right)\geq (d-\delta)(1-\delta) \geq d -\varepsilon, \end{align*} as desired. \end{proof} \subsection{Stability} \label{sec:stability} In this subsection we formalize and extend the notion of stability, which is ubiquitous in the analysis of Tur\'an-type problems. Let $\mf{F}$ and $\mf{H}$ be two families of $r$-graphs. The definitions in this subsection will typically be applied to situations in which $\mf{F}$ is the family whose maximum density we are trying to determine, $\mf{H}$ is a substantially more structured subfamily of $\mf{F}$, and our goal is to show that $m(\mf{F},n)=m(\mf{H},n)$ for sufficiently large $n$.
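The cloning step in the proof of Lemma~\ref{lem:smooth} above is easy to experiment with. The following sketch (illustrative Python; the function name and the test graph are our own choices, not part of the text) builds $\mc{F}^{(k)}$ by cloning every vertex to $k$ copies and checks the count $|\mc{F}^{(k)}|=k^r|\mc{F}|$, which makes the edge density $|\mc{F}^{(k)}|/\brm{v}(\mc{F}^{(k)})^r$ independent of $k$:

```python
from itertools import product

def clone_all_vertices(edges, k):
    """Blowup F^(k): every vertex v is replaced by k clones (v, 0..k-1);
    an r-set of clones is an edge iff the underlying vertices form an edge."""
    blowup = []
    for edge in edges:
        verts = sorted(edge)
        for labels in product(range(k), repeat=len(verts)):
            blowup.append(frozenset(zip(verts, labels)))
    return blowup

# F = the complete 3-graph on 4 vertices (r = 3, |F| = 4)
F = [frozenset(t) for t in ([0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3])]
for k in (1, 2, 3):
    Fk = clone_all_vertices(F, k)
    assert len(Fk) == k**3 * len(F)  # |F^(k)| = k^r |F|
```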
We define \emph{the distance $d_{\mf{F}}(\mc{F})$ from an $r$-graph $\mc{F}$ to a family $\mf{F}$} as \[ d_{\mf{F}}(\mc{F}):=\min_{\substack{\mc{F'}\in\mf{F} \\ \brm{v}(\mc{F}) = \brm{v}(\mc{F}')}}{|\mc{F}\triangle\mc{F}'|}.\] For $\varepsilon, \alpha>0 $, we say that $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-\emph{locally stable} if there exists $n_0 \in \mathbb{N}$ such that for all $\mc{F}\in\mf{F}$ with $\brm{v}(\mc{F}) =n \geq n_0$ and $d_{\mf{H}}(\mc{F})\leq \varepsilon n^r$ we have \begin{equation}\label{eq:localstability} |\mc{F}|\leq m(\mf{H},n) - \alpha d_{\mf{H}}(\mc{F}). \end{equation} We say that $\mf{F}$ is $\mf{H}$-\emph{locally stable} if $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-locally stable for some choice of $\varepsilon$ and $\alpha$. We say that $\mf{F}$ is $(\mf{H}, \alpha)$-\emph{stable} if it is $(\mf{H}, 1, \alpha)$-\emph{locally stable}, that is, if the inequality (\ref{eq:localstability}) holds for all $\mc{F}\in\mf{F}$ with $\brm{v}(\mc{F}) =n \geq n_0$. We say that \(\mf{F}\) is \(\mf{H}\)-stable if \(\mf{F}\) is \((\mf{H}, \alpha)\)-stable for some choice of \(\alpha\). \begin{remark}\label{rem:stability} The classical notion of stability differs from the one we introduce here. To parallel that notion, we could define $\mf{F}$ to be $\mf{H}$-stable if for every $\varepsilon>0$ there exists $\delta>0$ such that for all $\mc{F}\in\mf{F}$ with $\brm{v}(\mc{F}) =n$ and $|\mc{F}|\geq m(\mf{H},n) - \delta n^r$ one has $d_{\mf{H}}(\mc{F}) \leq \varepsilon n^r$. Our notion of stability is stronger in two respects: \begin{itemize} \item It implies a linear dependence between $\delta$ and $\varepsilon$ in the above definition. \item It is meaningful in the regime $d_{\mf{H}}(\mc{F}) = o(n^r)$, allowing us to compute Tur\'an numbers exactly. Note that if $\mf{F}$ is $\mf{H}$-stable using our definition then $m(\mf{H},n) \geq m(\mf{F},n)$ for sufficiently large $n$.
\end{itemize} We refer to our notion of stability as simply ``stability" as opposed to, for example, ``sharp stability", for brevity. \end{remark} \subsection{Vertex local stability} We also introduce a weaker version of stability (that is, one in which the requirements imposed on the $r$-graphs under consideration are stronger); however, in certain cases, as we will see, stability (as defined in Section~\ref{sec:stability}) can be derived from this version. Let $\mf{H}$ be a smooth family of $r$-graphs. For $\varepsilon, \alpha>0$, we say that a family $\mf{F}$ of $r$-graphs is $(\mf{H}, \varepsilon, \alpha)$-\emph{vertex locally stable} if there exists $n_0 \in \mathbb{N}$ such that for all $\mc{F}\in\mf{F}$ with $\brm{v}(\mc{F}) =n \geq n_0$, $d_{\mf{H}}(\mc{F})\leq \varepsilon n^{r}$, and $| L_{\mc{F}}(v)| \geq \left(rm(\mf{H}) - \varepsilon \right) n^{r-1}$ for every $v \in V(\mc{F})$, we have \[|\mc{F}|\leq m(\mf{H},n) - \alpha d_{\mf{H}}(\mc{F}).\] We say that $\mf{F}$ is $\mf{H}$-\emph{vertex locally stable} if $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-vertex locally stable for some choice of $\varepsilon$ and $\alpha$. In some cases vertex local stability implies local stability, which informally means that when proving inequality~(\ref{eq:localstability}) for an $r$-graph $\mc{F}$, we can assume that all the vertices of $\mc{F}$ have large degree. \subsection{Weighted hypergraphs and Lagrangians} Let $\mc{F}$ be an $r$-graph. Let $\mc{M}(\mc{F})$ denote the set of probability distributions on $V(\mc{F})$, that is, the set of functions $\mu: V(\mc{F}) \to [0,1]$ such that $\sum_{v \in V(\mc{F})}\mu(v)=1$. We call a pair $(\mc{F},\mu)$, where $\mu \in \mc{M}(\mc{F})$, a \emph{weighted graph}. Two weighted graphs $(\mc{F},\mu)$ and $(\mc{F}',\mu')$ are \emph{isomorphic} if there exists an isomorphism $\varphi: V(\mc{F}) \to V(\mc{F}')$ between $\mc{F}$ and $\mc{F}'$ such that $\mu'(\varphi(v))=\mu(v)$ for every $v \in V(\mc{F})$.
As in the case of unweighted graphs, we generally do not distinguish between isomorphic weighted graphs. We define \emph{the density $\lambda(\mc{F},\mu)$ of a weighted graph $(\mc{F},\mu)$} by $$\lambda(\mc{F},\mu):=\sum_{F \in \mc{F} }{\prod_{v \in F}\mu(v)}.$$ The \emph{Lagrangian $\lambda(\mc{F})$ of an $r$-graph $\mc{F}$} is defined by $$\lambda(\mc{F}):=\max_{\mu \in \mc{M}(\mc{F})}\lambda(\mc{F},\mu).$$ For a family of $r$-graphs $\mf{F}$, let $\lambda(\mf{F}):=\sup_{\mc{F} \in \mf{F}}\lambda(\mc{F})$. If an $r$-graph $\mc{F}'$ is obtained from an $r$-graph $\mc{F}$ by cloning a vertex $u \in V(\mc{F})$ to a set $W$, $\mu \in \mc{M}(\mc{F})$, $\mu' \in \mc{M}(\mc{F'})$, then we say that $(\mc{F}', \mu')$ is \emph{a one vertex blowup of $(\mc{F}, \mu)$} if $\mu(v)=\mu'(v)$ for all $v \in V(\mc{F}) \setminus \{u\}$ and $\mu(u)=\sum_{w \in W}\mu'(w)$. We say that $(\mc{F}', \mu')$ is \emph{a blowup} of $(\mc{F}, \mu)$ if $(\mc{F}', \mu')$ is isomorphic to a weighted $r$-graph which can be obtained from $(\mc{F}, \mu)$ by repeatedly taking one vertex blowups. We denote by $\mf{B}(\mc{F},\mu)$ the family of weighted graphs isomorphic to the blowups of $(\mc{F},\mu)$. \begin{remark}\label{rem:blowup} An $r$-graph $\mc{F}'$ is a blowup of $\mc{F}$ with $V(\mc{F})=[n]$ if and only if there exists a partition $\{P_1,P_2,\ldots,P_n\} $ of $V(\mc{F}')$ such that for all $v_1 \in P_{i_1}, v_2 \in P_{i_2}, \ldots, v_r \in P_{i_r}$ we have $\{v_1,v_2,\ldots,v_r\} \in \mc{F'}$ if and only if $\{i_1,i_2,\ldots,i_r\} \in \mc{F}$. When $\mc{F}$ is understood from the context we refer to $\mc{P}=\{P_1,P_2,\ldots,P_n\}$ as \emph{a blowup partition of $\mc{F'}$}. If $\mc{F}$ \emph{covers pairs}, that is, if for every \(u,v\in V(\mc{F})\) there exists some $F\in \mc{F}$ containing $u$ and $v$, then the blowup partition is unique up to the order of parts and its elements are the maximal independent sets in $\mc{F}$.
Let us also note that a weighted $r$-graph $(\mc{F}', \mu')$ is a blowup of $(\mc{F}, \mu)$ if and only if there exists a partition as above with the additional property $\sum_{v \in P_i}\mu'(v)=\mu(i)$ for every $i \in [n]$. \end{remark} Next we define the distance between weighted graphs. If $\mc{F}_1,\mc{F}_2$ are two $r$-graphs such that $V(\mc{F}_1)=V(\mc{F}_2)$ and $\mu \in \mc{M}(\mc{F}_1)(=\mc{M}(\mc{F}_2))$, we define $$d'(\mc{F}_1,\mc{F}_2,\mu):=\sum_{F \in \mc{F}_1 \triangle \mc{F}_2}\prod_{v\in F}{\mu(v)}.$$ We define \emph{the distance between general weighted $r$-graphs $(\mc{F}_1, \mu_1)$ and $(\mc{F}_2, \mu_2)$} as $$d((\mc{F}_1, \mu_1), (\mc{F}_2, \mu_2)):=\inf d'(\mc{F}'_1,\mc{F}'_2,\mu),$$ where the infimum is taken over all $r$-graphs $\mc{F}'_1,\mc{F}'_2,$ with $V(\mc{F}'_1)=V(\mc{F}'_2)$ and $\mu \in \mc{M}(\mc{F}_1')=\mc{M}(\mc{F}_2')$ satisfying $(\mc{F}'_i,\mu) \in \mf{B}(\mc{F}_i,\mu_i)$ for $i=1,2$. If $(\mc{F}, \mu)$ is a weighted $r$-graph and $\mf{F}$ is a family of $r$-graphs we define \emph{the distance from $(\mc{F}, \mu)$ to $\mf{F}$} as $$d^w_{\mf{F}}(\mc{F}, \mu) :=\inf_{\mc{F'} \in \mf{F}, \mu' \in \mc{M}(\mc{F}')}d((\mc{F}, \mu), (\mc{F}', \mu')).$$ We write \(d_{\mf{F}}(\mc{F}, \mu)\) instead of \(d_{\mf{F}}^{w}(\mc{F}, \mu)\), except when we want to emphasize the difference between weighted and unweighted distance. \begin{lem}\label{lem:distancevsweight} For any family \(\mf{H}\), if \(\mc{F}\) is an $r$-graph with $\brm{v}(\mc{F})=n$ and $\xi \in \mc{M}(\mc{F})$, then $$d_{\mf{H}}(\mc{F}) \leq \frac{r! n}{n-r^2} \binom{n}{r}d^w_{\mf{H}}(\mc{F},\xi).$$ \end{lem} \begin{proof} Choose an arbitrary $0<\varepsilon<1$ and let $d:=d^w_{\mf{H}}(\mc{F},\xi)$. Let $(\mc{B},\mu)$ be a blowup of $(\mc{F},\xi)$ such that there exists $\mc{H} \in \mf{H}$ satisfying $d((\mc{B},\mu),(\mc{H},\mu)) \leq d+\varepsilon$. Let $\mc{P}=\{P_1,P_2,\ldots,P_n\}$ be a blowup partition of $V(\mc{B})$.
Suppose $v_1,v_2,\ldots,v_r$ are chosen independently at random from $V(\mc{H})$ according to the distribution $\mu$. Let $A$ be the event that $\{v_1,v_2,\ldots,v_r\}$ is a \emph{transversal} of $\mc{P}$, that is, $|\{v_1,v_2,\ldots,v_r\} \cap P_j| \leq 1$ for every $P_j \in \mc{P}$. We have $$\Pr[A]= \prod_{i=0}^{r-1}\left(1-\frac{i}{n}\right) \geq \left(1-\frac{r}{n}\right)^r \geq 1-\frac{r^2}{n}.$$ Thus, it follows that \begin{equation}\label{eq:distancevsweight1} \Pr\left[\{v_1,v_2,\ldots,v_r\} \in \mc{B} \triangle \mc{H} \:|\: A\right] \leq \frac{r!(d+\varepsilon)n}{n-r^2}. \end{equation} Now consider $v_1,v_2,\ldots,v_n$ to be chosen independently at random according to the distribution given by $\mu$, such that $v_i \in P_i$ for every $i \in [n]$. Let $\mc{H}'$ and $\mc{B}'$ be the random subgraphs induced by $\{v_1,v_2,\ldots,v_n\}$, respectively, in $\mc{H}$ and $\mc{B}$. It follows from (\ref{eq:distancevsweight1}) and the linearity of expectation that \begin{equation}\label{eq:distancevsweight2} \mathbb E[|\mc{B}' \triangle \mc{H}'|] \leq \frac{r!(d+\varepsilon)n}{n-r^2} \binom{n}{r}. \end{equation} As $\mc{B}'$ is isomorphic to $\mc{F}$, the inequality (\ref{eq:distancevsweight2}) implies the lemma. \end{proof} \subsection{Weighted Stability} \label{sec:weghtedstability} In this subsection we introduce the notion of weighted stability and relate it to (unweighted) stability. Let \(\mf{F}, \mf{H}\) be two graph families. For $\varepsilon, \alpha>0 $, we say that $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-\emph{weight locally stable} if for every $\mc{F} \in \mf{F}, \mu \in \mc{M}(\mc{F})$ such that $d_{\mf{H}}(\mc{F}, \mu)\leq \varepsilon$, we have $$\lambda(\mc{F}, \mu) \leq \lambda(\mf{H})-\alpha d_{\mf{H}}(\mc{F}, \mu).$$ We say that $\mf{F}$ is $\mf{H}$-\emph{weight locally stable} if $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-weight locally stable for some choice of $\varepsilon$ and $\alpha$.
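As a concrete illustration of the quantities entering these definitions, the density $\lambda(\mc{F},\mu)$ from the previous subsection can be evaluated directly. A minimal Python sketch (the example graph $K_3$ and the helper name are our own; the value $\lambda(K_3)=1/3$ is the classical Motzkin--Straus bound for graphs):

```python
import math
import random

def weighted_density(edges, mu):
    """lambda(F, mu): sum over the edges of the product of vertex weights."""
    return sum(math.prod(mu[v] for v in edge) for edge in edges)

# The triangle K3 viewed as a 2-graph; its Lagrangian is 1/3,
# attained at the uniform distribution xi.
K3 = [(0, 1), (0, 2), (1, 2)]
xi = {v: 1 / 3 for v in range(3)}
assert abs(weighted_density(K3, xi) - 1 / 3) < 1e-12

# Random points of the probability simplex never exceed the Lagrangian,
# in line with the definition of lambda(F) as a maximum over M(F).
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    mu = {v: w[v] / s for v in range(3)}
    assert weighted_density(K3, mu) <= 1 / 3 + 1e-9
```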
We say that $\mf{F}$ is $(\mf{H}, \alpha)$-\emph{weight stable} if $\mf{F}$ is $(\mf{H}, 1, \alpha)$-\emph{weight locally stable}. We say that $\mf{F}$ is $\mf{H}$-\emph{weight stable} if $\mf{F}$ is $(\mf{H}, \alpha)$-weight stable for some choice of $\alpha$. Finally, for weighted graphs we also consider the direct analogue of the classical notion of stability discussed in Remark~\ref{rem:stability}. We say that $\mf{F}$ is $\mf{H}$-\emph{weakly weight stable} if for every $\varepsilon>0$ there exists $\delta>0$ such that for every $\mc{F} \in \mf{F}$ and $\mu \in \mc{M}(\mc{F})$, if $\lambda(\mc{F},\mu) \geq \lambda(\mf{H})-\delta$, then $d_{\mf{H}}(\mc{F}, \mu) \leq \varepsilon$. The following lemma establishes a connection between weighted and unweighted stability. \begin{lem}\label{lem:localplusweight} Let \(\mf{H}\) be a clonable family. If the family $\mf{F}$ is $\mf{H}$-locally stable and $\mf{H}$-weight stable, then $\mf{F}$ is $\mf{H}$-stable. \end{lem} \begin{proof} Let $\alpha,\varepsilon>0$ be such that the family $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-locally stable and $(\mf{H},\alpha)$-weight stable. We will show that $\mf{F}$ is $(\mf{H},\alpha/2)$-stable, that is, for every $\mc{F} \in \mf{F}$ with $n:=\brm{v}(\mc{F})$ sufficiently large, \begin{equation}\label{eq:localplusweight} |\mc{F}| \leq m(\mf{H},n)-\frac{\alpha}{2} d_{\mf{H}}(\mc{F}). \end{equation} We can assume that $d_{\mf{H}}(\mc{F})>\varepsilon n^r$, since otherwise (\ref{eq:localplusweight}) holds because $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-locally stable. By Lemma~\ref{lem:smooth} the family $\mf{H}$ is smooth.
We choose $n$ to be sufficiently large so that $ 1 - r^2/n \geq 3/4 $ and $$m(\mf{H},n) \geq \left(m(\mf{H})-\frac{\alpha\varepsilon}{4}\right)n^r.$$ Using Lemma~\ref{lem:distancevsweight} and the fact that $\mf{F}$ is $(\mf{H},\alpha)$-weight stable, we have \begin{align*} \frac{|\mc{F}|}{n^r}&= \lambda(\mc{F},\xi_{\mc{F}})\leq m(\mf{H})-\alpha d^w_{\mf{H}}(\mc{F},\xi) \\ &\leq \left(\frac{m(\mf{H},n)}{n^r}+\frac{\alpha\varepsilon}{4}\right)-\alpha\left(1-\frac{r^2}{n}\right) \frac{d_{\mf{H}}(\mc{F})}{r!\binom{n}{r}} \\ &\leq \left(\frac{m(\mf{H},n)}{n^r}+\frac{\alpha\varepsilon}{4}\right)-\frac{3\alpha d_{\mf{H}}(\mc{F})}{4n^r} \\&=\frac{(m(\mf{H},n) - \alpha d_{\mf{H}}(\mc{F})/2) + \alpha(\varepsilon n^r - d_{\mf{H}}(\mc{F}))/4}{n^r} \\&\leq \frac{m(\mf{H},n) - \alpha d_{\mf{H}}(\mc{F})/2}{n^r}, \end{align*} implying (\ref{eq:localplusweight}). \end{proof} When both families \(\mf{F}\) and \(\mf{H}\) are clonable, local stability implies weight local stability, as follows. For an $r$-graph $\mc{F}$, let $\xi_{\mc{F}} \in \mc{M}(\mc{F})$ denote the uniform distribution on $V(\mc{F})$, that is, $\xi_{\mc{F}}(v)=1/\brm{v}(\mc{F})$ for every $v \in V(\mc{F})$. We will omit the index and write $\xi$ instead of $\xi_{\mc{F}}$ when $\mc{F}$ is understood from the context. Note that $\lambda(\mc{F},\xi)=|\mc{F}|/(\brm{v}(\mc{F}))^r$. In the other direction, let $(\mc{F},\mu)$ be a weighted graph, and choose an integer $k$ such that $\mu(v)k$ is an integer for every $v \in V(\mc{F})$. Let $\mc{F}'$ be an $r$-graph obtained by cloning every $v \in V(\mc{F})$ to a set of size $\mu(v)k$. Then, clearly, $\brm{v}(\mc{F}')=k$ and $|\mc{F}'|=\lambda(\mc{F},\mu)k^r$. This second observation routinely implies the following lemma.
\begin{lem}\label{lem:weightedbasics} For every weighted $r$-graph $(\mc{F}, \mu)$ there exists a sequence $\{\mc{F}_n\}$ of blowups of $\mc{F}$ such that \begin{itemize} \item $\brm{v}(\mc{F}_n) \to \infty$ as $n \to \infty$, \item $\lim_{n \to \infty}\frac{|\mc{F}_n|}{\brm{v}(\mc{F}_n)^r}=\lambda(\mc{F},\mu)$, \item $\lim_{n \to \infty}\frac{d_{\mf{H}}(\mc{F}_n)}{\brm{v}(\mc{F}_n)^r}=d_{\mf{H}}(\mc{F},\mu)$ for every clonable family $\mf{H}$. \end{itemize} \end{lem} Lemma~\ref{lem:weightedbasics} immediately implies the following. \begin{corollary}\label{lem:localtoweighted} Let $\mf{F}, \mf{H}$ be two clonable families. If $\mf{F}$ is $\mf{H}$-locally stable then $\mf{F}$ is $\mf{H}$-weight locally stable. \end{corollary} \begin{section}{Local Stability From Vertex Local Stability} \label{sec:genstability} The main result of this section is the following important tool used in the proof of Theorem~\ref{thm:general}. \begin{theorem} \label{thm:narrowlocaltolocal} Let $\mf{F},\mf{H}$ be families of $r$-graphs such that $\mf{H}$ is clonable. If $\mf{F}$ is $\mf{H}$-vertex locally stable, then $\mf{F}$ is $\mf{H}$-locally stable. \end{theorem} In the proof of Theorem~\ref{thm:narrowlocaltolocal} we use the following two auxiliary lemmas. \begin{lem}\label{degreelem}Let $\mf{F}$ be a clonable family of $r$-graphs. Then for every $\varepsilon>0$ there exist $\delta>0$ and $n_0\in\mathbb{N}$ satisfying the following. For every $\mc{F}\in \mf{F}$ with $\brm{v}(\mc{F})=n \geq n_0$ and $|\mc{F}|\geq \left(m(\mf{F}) - \delta\right) n^r $ there exists $X\subseteq V(\mc{F})$ such that $|X|\geq (1-\varepsilon)n$ and \[\left| | L_{\mc{F}}(v)| - rm(\mf{F}) n^{r-1}\right| \leq \varepsilon n^{r-1}\] for every $v\in X$. \end{lem} \begin{proof} Clearly, it is enough to prove the lemma for sufficiently small $\varepsilon$. Thus we assume without loss of generality that $\max\{\varepsilon,\varepsilon^2r^2m(\mf{F})\}<1$.
We show that $\delta:=(\varepsilon^6 -\varepsilon^8r^2m(\mf{F}))/(1+r+r^2)$ satisfies the lemma for sufficiently large $n_0$. Let $X \subseteq V(\mc{F})$ be the set of all $v \in V(\mc{F})$ satisfying \[\left| | L_{\mc{F}}(v)| - rm(\mf{F}) n^{r-1}\right| \leq \varepsilon n^{r-1}.\] To prove that \(|X|\geq (1-\varepsilon)n\), we first show the following claim. \begin{claim} \[| L_{\mc{F}}(v)| \leq (rm(\mf{F}) + \varepsilon^2) n^{r-1}\] for every $v \in V(\mc{F})$. \end{claim} \begin{proof} Suppose for a contradiction that $$| L_{\mc{F}}(v)| > (rm(\mf{F}) + \varepsilon^2) n^{r-1}$$ for some $v \in V(\mc{F})$. Let $n':=\lceil(1+\varepsilon^4)n\rceil$ and let $\mc{F}'$ be obtained from $\mc{F}$ by cloning $v$ into a set of size $\lceil\varepsilon^4 n\rceil+1$. We have $\mc{F}'\in \mf{F}$, as $\mf{F}$ is clonable. For sufficiently large $n$, we have \begin{align}m(\mf{F},n') &\leq (m(\mf{F}) + \delta)n'^r \leq (m(\mf{F}) + \delta)(1+\varepsilon^4 r+ \varepsilon^8r^2) n^r. \label{eq:Fprimeupperbound} \end{align} On the other hand, \begin{align}m(\mf{F}, n') \geq |\mc{F}'| &> |\mc{F}| + \varepsilon^4 n (rm(\mf{F}) + \varepsilon^2 ) n^{r-1}\notag \\ &\geq (m(\mf{F}) - \delta) n^r + \varepsilon^4 (rm(\mf{F}) + \varepsilon^2 ) n^{r}\label{Fprimelowerbound}. \end{align} But now (\ref{eq:Fprimeupperbound}) and (\ref{Fprimelowerbound}) together imply that $$\varepsilon^6 - \delta < \delta(1+\varepsilon^4 r + \varepsilon^8r^2)+\varepsilon^8r^2m(\mf{F}),$$ which contradicts our choice of $\delta$. Thus, the claim holds. \end{proof} By the preceding claim we have that \[| L_{\mc{F}}(v)| < \left(rm(\mf{F}) - \varepsilon\right) n^{r-1}\] for all $v \in V(\mc{F}) \setminus X$. Now suppose for a contradiction that $|X|< (1-\varepsilon)n$.
Then \begin{align*}|\mc{F}| &= \frac{1}{r} \left( \sum_{v \in V(\mc{F}) \setminus X}{| L_{\mc{F}}(v)|}+ \sum_{v\in X}{| L_{\mc{F}}(v)|}\right) \\ &< \frac{1}{r} \left( (n-|X|) \left(rm(\mf{F}) - \varepsilon\right) + |X| \left(rm(\mf{F}) + \varepsilon^2\right) \right)n^{r-1}\\ &=m(\mf{F})n^r + \frac{\varepsilon}{r}\left((1+\varepsilon)|X| - n\right) n^{r-1} \\&<m(\mf{F})n^r -\frac{\varepsilon^3}{r}n^r\\ &\leq (m(\mf{F}) -\delta) n^r, \end{align*} a contradiction. \end{proof} \begin{lem}\label{lem:upperbound} Let $\mf{F}$ be a clonable family of $r$-graphs. Then for every $\varepsilon >0$ there exists $n_0\in\mathbb{N}$ such that for all $n_2 \geq n_1 \geq n_0$, we have \[m(\mf{F}, n_2) \geq m(\mf{F}, n_1) + (n_2-n_1) (rm(\mf{F}) - \varepsilon)n_1^{r-1}.\] \end{lem} \begin{proof} Consider $\mc{F}_1\in \mf{F}$ with $\brm{v}(\mc{F}_1)= n_1$ such that $|\mc{F}_1| = m(\mf{F}, n_1).$ For large enough $n_1$ we have \begin{align*}m(\mf{F},n_1) &\geq \left(m(\mf{F}) - \frac{\varepsilon}{r}\right)n_1^r. \end{align*} By averaging, there exists $v \in V(\mc{F}_1)$ such that \begin{equation}\label{highdegvertex}| L_{\mc{F}_1}(v)| \geq \left(r m(\mf{F}) - \varepsilon\right) n_1^{r-1}. \end{equation} Let $\mc{F}_2$ be obtained from $\mc{F}_1$ by cloning $v$ to a set of size $n_2-n_1+1$. As $\mc{F}_2\in \mf{F}$, we have \begin{align*}m(\mf{F}, n_2) \geq |\mc{F}_2| &\geq |\mc{F}_1| + (n_2-n_1)\left(rm(\mf{F}) - \varepsilon\right) n_1^{r-1} \\ &= m(\mf{F}, n_1)+ (n_2-n_1)\left(rm(\mf{F}) - \varepsilon\right) n_1^{r-1}, \end{align*} as desired. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:narrowlocaltolocal}:] Let $\varepsilon, \alpha$ be such that $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-vertex locally stable. We choose constants \(\varepsilon', \varepsilon''\) with $0< \varepsilon' \ll \varepsilon'' \ll \varepsilon$ so that the inequalities throughout the proof are satisfied. Let $\alpha':=\min\{\alpha, 2\varepsilon''r^2(1 - m(\mf{H})) \}$.
We will show that $\mf{F}$ is $(\mf{H},\varepsilon',\alpha')$-locally stable. Consider $\mc{F}\in \mf{F}$ with $V(\mc{F})=[n]$ and $d_{\mf{H}}(\mc{F}) \leq \varepsilon' n^r$. We assume that \[|\mc{F}| \geq m(\mf{H}, n)- \varepsilon' n^r, \] since otherwise the result follows, as $\alpha'<1$. Let $\mc{H}\in \mf{H}$ be such that $|\mc{F}\triangle \mc{H}| = d_{\mf{H}}(\mc{F})$. For large enough $n$, we have $|\mc{H}| \geq (m(\mf{H}) - \varepsilon')n^r$. By Lemma~\ref{degreelem} applied to $\mc{H}$ with $\varepsilon=\varepsilon''$, there exists $X\subseteq [n]$ with $|X| \geq (1-\varepsilon'')n$ such that for each $v\in X$, \begin{equation} \label{degreecond} \left|| L_{\mc{H}}(v)| - rm(\mf{H}) n^{r-1}\right| \leq \varepsilon '' n^{r-1}. \end{equation} Consider the set \[J = \{v\in V(\mc{F}) : | L_{\mc{F}}(v)| < (rm(\mf{H}) - (2r^2+1)\varepsilon'')n^{r-1}\}.\] We will show that \(J\) has relatively small size. From the definition of $J$ and \(X\), it follows that for each \(v\in J\cap X\), we have \(| L_{\mc{F}}(v)\triangle L_{\mc{H}}(v)|\geq \varepsilon'' n^{r-1}.\) Thus, \[|J\cap X|\varepsilon'' n^{r-1} \leq \sum_{v\in V(\mc{F})}{| L_{\mc{F}}(v) \triangle L_{\mc{H}}(v)|} = r|\mc{F}\triangle \mc{H}| \leq \varepsilon' r n^r,\] and therefore, \(|J|\leq | J \cap X| + |J\setminus X| \leq (\frac{\varepsilon'r}{\varepsilon''} + \varepsilon'')n \leq 2\varepsilon''n.\) Let $\mc{F}':=\mc{F}|_{V(\mc{F})\setminus J}$, $\mc{H}':=\mc{H}|_{V(\mc{F})\setminus J}$ and $n':=n-|J|$. We have \begin{equation} \label{distanceOfF1}d_{\mf{H}}(\mc{F}')\leq |\mc{F}' \triangle \mc{H}'| \leq |\mc{F} \triangle \mc{H}| \leq \varepsilon' n^r \leq \varepsilon n'^r. \end{equation} Also, for every $v\in V(\mc{F})\setminus J$, we have \begin{align} | L_{\mc{F}'}(v)|\geq | L_{\mc{F}}(v)| - |J| n^{r-2} &\geq \left(rm(\mf{H}) - 2r\varepsilon''-2\varepsilon''\right)n^{r-1} \notag \\ &\geq(rm(\mf{H}) - \varepsilon)n'^{r-1} \label{degreeF2}. 
\end{align} Since $\mf{F}$ is $(\mf{H}, \varepsilon, \alpha)$-vertex locally stable, (\ref{distanceOfF1}) and (\ref{degreeF2}) imply that \begin{equation} \label{boundonF2}|\mc{F}'|\leq m(\mf{H}, n') - \alpha d_{\mf{H}}(\mc{F}'). \end{equation} Let $\mc{H}''\in \mf{H}$ be such that $|\mc{H}''\triangle\mc{F}'| = d_{\mf{H}}(\mc{F}')$. Let $\mc{H}_0$ be obtained from $\mc{H}''$ by blowing up a vertex in $V(\mc{F})\setminus J$ to a set of size $n -n'+1$. We have \begin{align} |\mc{F}\triangle \mc{H}_0| &\leq |\mc{F}'\triangle \mc{H}''| + |J|n^{r-1}\label{upperboundonFB}. \end{align} By Lemma~\ref{lem:upperbound}, for sufficiently large $n$, we have \begin{align}m(\mf{H}, n) &\geq m(\mf{H}, n') + (n-n')\left(rm(\mf{H}) -\frac{\varepsilon''}{1-2r\varepsilon''}\right)n'^{r-1}\notag \\ &\geq m(\mf{H}, n') + |J|\left(rm(\mf{H}) - \frac{\varepsilon''}{1-2r\varepsilon''}\right)(1-2r\varepsilon'')n^{r-1}. \label{maxFprimeOnN:maxFprimeOnN2} \end{align} Now we are ready to put all the obtained inequalities together to show that $\mf{F}$ is $(\mf{H}, \varepsilon', \alpha')$-locally stable. \begin{align*} |\mc{F}|&\leq |\mc{F}'| + |J|(rm(\mf{H}) - (2r^2+1)\varepsilon'')n^{r-1} \\ &\stackrel{(\ref{boundonF2})}{\leq}\ m(\mf{H}, n') - \alpha d_{\mf{H}}(\mc{F}') + |J|(rm(\mf{H}) - (2r^2+1)\varepsilon'')n^{r-1} \\ &\stackrel{(\ref{maxFprimeOnN:maxFprimeOnN2})}{\leq} m(\mf{H}, n) - |J| \left(rm(\mf{H}) -\frac{\varepsilon''}{1-2r\varepsilon''}\right)(1-2r\varepsilon'')n^{r-1} \\&\qquad \qquad \;\;\;- \alpha |\mc{F}'\triangle\mc{H}''| + |J|(rm(\mf{H}) - (2r^2+1)\varepsilon'')n^{r-1} \\ &= m(\mf{H}, n) - \alpha |\mc{F}'\triangle \mc{H}''| - 2\varepsilon''r^2(1- m(\mf{H}))|J|n^{r-1} \\ &\leq m(\mf{H}, n) - \alpha'|\mathcal{F}'\triangle \mc{H}''| - \alpha' |J| n^{r-1} \\ &\stackrel{(\ref{upperboundonFB})}{\leq} m(\mf{H}, n) - \alpha'|\mathcal{F}\triangle \mc{H}_0| \\ &\leq m(\mf{H}, n) - \alpha' d_{\mf{H}}(\mc{F}), \end{align*} as desired.
\end{proof} \end{section} \section{Weak Stability from Lagrangians}\label{sec:lagrangian} In this section we prove that, under certain restrictions, every sufficiently dense graph in a family is close to some graph maximizing the Lagrangian in that family. The arguments we use in this and the next section are continuous in nature. We say that an $r$-graph $\mc{F}$ is \emph{thin} if for every $(r-1)$-element subset $I\subseteq V(\mc{F})$ there exists at most one edge containing \(I\). In other words, $\mc{F}$ is thin if and only if it is $\mc{D}_r$-free, where $\mc{D}_r$ is an $r$-graph with two edges $D_1$ and $D_2$ such that $|D_1 \cap D_2|=r-1$. Note that every $(m,r,r-1)$ Steiner system is thin. We say that the family $\mf{F}$ is \emph{thin} if every $\mc{F} \in \mf{F}$ is thin. In the applications of the next result the family \(\mf{F}^*\) will consist of the \(r\)-graphs which cover pairs. In particular, we do not assume that \(\mf{F}^*\) is clonable. \begin{theorem}\label{thm:compactness} If the family $\mf{F}^*$ is thin and the family $$\mf{F}^{**}=\{\mc{F}^*|_{\pl{supp}(\mu)} \: | \: \mc{F}^* \in \mf{F}^{*}, \: \lambda(\mc{F}^*,\mu)= \lambda(\mf{F}^{*}) \text{ for some } \mu \in \mc{M}(\mc{F}^*)\}$$ is not empty, then $\mf{F}^*$ is $\mf{F}^{**}$-weakly weight stable. \end{theorem} \begin{proof} We will consider infinite $r$-graphs in the proof of this theorem. Let $\mf{F}_{\bb{N}}$ denote the family of $r$-graphs such that $V(\mc{F})=\bb{N}$ for every $\mc{F} \in \mf{F}_{\bb{N}}$ and every finite subgraph $\mc{H}$ of a graph in $\mf{F}_{\bb{N}}$ is obtained from a subgraph of a graph in $\mf{F}^*$ by adding isolated vertices. Clearly, $\mf{F}_{\bb{N}}$ is thin. We equip $\mf{F}_{\bb{N}}$ with a metric $\varsigma$ defined as follows. For $\mc{F},\mc{F}' \in \mf{F}_{\bb{N}}$, let $\varsigma(\mc{F},\mc{F}'):=1/2^k$, where $k$ is the minimum integer such that $\mc{F}|_{[k]} \neq \mc{F}'|_{[k]}$. Note that $(\mf{F}_{\bb{N}},\varsigma)$ is compact.
Let $$\mc{M}(\bb{N}):=\{\mu: \bb{N} \to \bb{R}_+ \: | \: \mu(1) \geq \mu(2) \geq \mu(3) \geq \ldots, \; \sum_{i=1}^{\infty}\mu(i) \leq 1\}.$$ It is not hard to verify that $\mc{M}(\bb{N})$ is compact with respect to the $L^1$ norm $\|\cdot\|_1$. Let $\mf{X}$ be the product of $(\mf{F}_{\bb{N}},\varsigma)$ and $(\mc{M}(\bb{N}),\|\cdot\|_1)$. Note that every pair $(\mc{F}, \mu)$ with $\mc{F} \in \mf{F}^*, \mu \in \mc{M}(\mc{F})$ naturally corresponds to an element of $\mf{X}$, as we can assume that $V(\mc{F})=[v(\mc{F})]$ and $\mu(i) \geq \mu(j)$ for all $i \leq j$, $i,j \in V(\mc{F})$. For $(\mc{F},\mu) \in \mf{X}$, define $\lambda(\mc{F},\mu):=\sum_{F \in \mc{F}} \mu(F).$ \begin{claim} \label{lambdacontonX} $\lambda$ is continuous on $\mf{X}$. \end{claim} \begin{proof} It is easy to see that $$|\lambda(\mc{F},\mu) - \lambda(\mc{F},\mu')| \leq \|\mu-\mu'\|_{1}$$ for every $\mc{F} \in \mf{F}_{\bb{N}}$ and all $\mu,\mu' \in \mc{M}(\bb{N})$. Thus, it suffices to show that for every $\varepsilon >0$ there exists $N\in\mathbb{N}$ such that for all $\mc{F},\mc{F}' \in\mf{F}_{\bb{N}}$ with $\mc{F}'|_{[N]}=\mc{F}|_{[N]}$ we have $|\lambda(\mc{F},\mu) - \lambda(\mc{F}',\mu)| \leq \varepsilon$ for every $\mu \in \mc{M}(\bb{N})$. We show that $N:=\lceil \frac{1}{\varepsilon(r-1)!} \rceil$ satisfies the above. Let $\mc{H}:=\mc{F}'|_{[N]}=\mc{F}|_{[N]}$. By symmetry, it suffices to show that $\lambda(\mc{F},\mu) \leq \lambda(\mc{H},\mu)+\varepsilon$, as $\lambda(\mc{H},\mu) \leq \lambda(\mc{F},\mu)$ holds trivially. We have \begin{align*} \lambda(\mc{F},\mu) - \lambda&(\mc{H},\mu) = \sum_{F \in \mc{F}, F \not \subseteq [N]} \prod_{i \in F}\mu(i) \\ &\leq \mu(N+1) \sum_{I \in \bb{N}^{(r-1)}} \prod_{i \in I}\mu(i) \\ &\leq \mu(N+1)\frac{1}{(r-1)!}\left(\sum_{i \in \bb{N}}\mu (i)\right)^{r-1} \leq \frac{1}{N(r-1)!} \leq \varepsilon, \end{align*} as desired; in the last line we used $\mu(N+1) \leq 1/N$ and $\sum_{i \in \bb{N}}\mu(i) \leq 1$. Note that in the second inequality above we use the fact that \(\mc{F}\) is thin.
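To spell out the role of thinness in the second inequality above, we record the following elaboration (added for the reader's convenience, in the notation of the claim):

```latex
Each $F \in \mc{F}$ with $F \not\subseteq [N]$ contains an element larger
than $N$; write $F = I \cup \{\max F\}$ with
$I := F \setminus \{\max F\} \in \bb{N}^{(r-1)}$. Since $\mc{F}$ is thin,
the $(r-1)$-tuple $I$ lies in at most one edge of $\mc{F}$, so the map
$F \mapsto I$ is injective; moreover, $\mu(\max F) \leq \mu(N+1)$, as
$\max F \geq N+1$ and $\mu$ is non-increasing. Hence
\[
  \sum_{F \in \mc{F},\, F \not\subseteq [N]} \;\prod_{i \in F}\mu(i)
  \;\leq\; \mu(N+1) \sum_{I \in \bb{N}^{(r-1)}} \prod_{i \in I}\mu(i).
\]
```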
\end{proof} It follows from the above claim that \begin{equation} \label{eq:lambdamax} \lambda(\mf{F}^{*})=\max\limits_{(\mc{F},\mu) \in \mf{X}}\lambda(\mc{F},\mu), \end{equation} as every \((\mc{F},\mu)\in \mf{X}\) is a limit of a sequence of weighted graphs in $\mf{F}^*$. Let $$\mf{X}^{**}:=\{ (\mc{F},\mu) \in \mf{X} \: | \: \mc{F}|_{\brm{supp}(\mu)}\in \mf{F}^{**}\}.$$ That is, $\mf{X}^{**}$ is the set of weighted graphs in $\mf{X}$ with finite support which coincide with some graph in $\mf{F}^{**}$ on their support. \begin{claim}\label{claim:weak2} If $\lambda(\mc{F},\mu)=\lambda(\mf{F}^{*})$ for some \((\mc{F},\mu)\in \mf{X}\), then $(\mc{F},\mu) \in \mf{X}^{**}$. \end{claim} \begin{proof} Suppose for a contradiction that there exists some $(\mc{F},\mu) \in \mf{X}\setminus\mf{X}^{**}$ such that $\lambda(\mc{F},\mu)=\lambda(\mf{F}^{*})$. By definition of \(\mf{F}^{**}\), it follows that $\brm{supp}(\mu)$ must be infinite, and hence, $\brm{supp}(\mu) = \mathbb{N}$, since $\mu$ is non-increasing. As $\lambda(\mc{F}, \nu)$ considered as a function of $\nu$ is maximized at $\nu=\mu$ we have \[\frac{ \partial{\lambda(\mc{F}, \nu)}}{ \partial \nu(i)}\Big|_{\nu=\mu} = r\lambda(\mf{F}^{*}),\] for every \(i\in \mathbb{N}\); this is the standard first-order condition: the partial derivatives coincide for all coordinates in the support of a maximizer and, by Euler's identity for the degree-$r$ homogeneous function $\lambda(\mc{F},\cdot)$, their common value is $r\lambda(\mc{F},\mu)=r\lambda(\mf{F}^{*})$. Thus, we have \begin{equation}\label{eq:lagrangianderivative} \sum_{\substack{J \in \bb{N}^{(r-1)} \\ J \cup \{i\} \in \mc{F}}} \prod_{j \in J} \mu(j)=r\lambda(\mf{F}^{*}) \end{equation} for every $i \in \bb{N}$. To show that (\ref{eq:lagrangianderivative}) cannot hold we employ an argument similar to the one used in the proof of the previous claim. Choose an integer $N$ such that $N> \frac{1}{r(r-2)!\lambda(\mf{F}^{*})}$, and let $i$ be such that $|F \cap [N]| \leq r-2$ for every $F \in \mc{F}$ with $i \in F$; such an $i$ exists because, by thinness, at most $\binom{N}{r-1}$ edges of $\mc{F}$ contain at least $r-1$ vertices of $[N]$, and these edges cover only finitely many vertices. Then \begin{align*} \sum_{\substack{J \in \bb{N}^{(r-1)} \\ J \cup \{i\} \in \mc{F}}} \prod_{j \in J} \mu(j) &\leq \mu(N+1)\sum_{K \in \bb{N}^{(r-2)}} \prod_{j \in K}\mu(j)\\ & \leq \frac{1}{N(r-2)!} < r\lambda(\mf{F}^{*}).
\end{align*} This contradiction finishes the proof of the claim. \end{proof} Now we are ready to finish the proof. We will show that for every \(\varepsilon>0\) there exists \(\delta>0\) such that for every \(\mc{F}\in \mf{F}^*\) and \(\mu \in \mc{M}(\mc{F})\), if \(\lambda(\mc{F}, \mu) \geq \lambda(\mf{F}^{*}) - \delta\), then \(d_{\mf{F}^{**}}(\mc{F}, \mu) \leq \varepsilon\). (Clearly, $\lambda(\mf{F}^*)=\lambda(\mf{F}^{**})$, so the above implies the theorem.) Abusing notation slightly, we consider pairs $(\mc{F},\mu)$ as above as elements of $\mf{X}.$ From the continuity of $\lambda$ and Claim~\ref{claim:weak2} it follows that for every \(\varepsilon>0\) there exists \(\delta>0\) such that for every $(\mc{F},\mu) \in \mf{X}$ satisfying \(\lambda(\mc{F}, \mu) \geq \lambda(\mf{F}^{*}) - \delta\) there exists \((\mc{F}^{**}, \mu^{**})\in \mf{X}^{**}\) such that \(\mc{F}|_{[n]} = \mc{F}^{**}|_{[n]}\) for all \(n\leq \frac{2}{\varepsilon}(r-1)! + 1\). Following the argument in Claim~\ref{lambdacontonX}, let \(\mc{H}: = \mc{F}|_{[N]}\ (= \mc{F}^{**}|_{[N]})\), for \(N:= \lceil \frac{2}{\varepsilon}(r-1)!\rceil\). As in Claim~\ref{lambdacontonX} we have $$\lambda(\mc{F},\mu) - \lambda(\mc{H},\mu) \leq \frac{1}{N(r-1)!},$$ $$\lambda(\mc{F}^{**},\mu) - \lambda(\mc{H},\mu) \leq \frac{1}{N(r-1)!}.$$ Finally, we have \begin{align*}d_{\mf{F}^{**}}(\mc{F}, \mu) &\leq d((\mc{F}, \mu), (\mc{H}, \mu)) + d((\mc{H}, \mu), (\mc{F}^{**}, \mu)) \\ &\leq (\lambda(\mc{F},\mu) - \lambda(\mc{H},\mu)) + (\lambda(\mc{F}^{**},\mu) - \lambda(\mc{H},\mu)) \\ &\leq \frac{2}{N(r-1)!} \leq \varepsilon, \end{align*} as desired. \end{proof} \begin{section}{Stability from local stability}\label{sec:symmetrization} Our next result can be considered as a generalization of the symmetrization argument of Sidorenko~\cite{sidorenko}, which was subsequently modified and employed by Pikhurko~\cite{pikhurko} and Hefetz and Keevash~\cite{HefKee13}.
It can serve as a general tool to obtain global stability from local stability for clonable families. However, note that although our main result, Theorem~\ref{thm:general}, uses this tool, it is not a direct application of it, since the family of interest to us, \(\pl{Forb}(\mc{T}_r)\), is not clonable. \begin{theorem}\label{thm:symmetrization} Let $\mf{F},\mf{H}$ be clonable families of $r$-graphs. Let $\mf{F}^*$ consist of all $r$-graphs in $\mf{F}$ that cover pairs. If $\mf{F}^*$ is $\mf{H}$-weakly weight stable and $\mf{F}$ is $\mf{H}$-locally stable then $\mf{F}$ is $\mf{H}$-stable. \end{theorem} \begin{proof} By Lemma~\ref{lem:localplusweight}, it suffices to show that $\mf{F}$ is $\mf{H}$-weight stable. By Corollary~\ref{lem:localtoweighted} the family $\mf{F}$ is $\mf{H}$-weight locally stable. Let $\varepsilon, \alpha>0$ be such that $\mf{F}^*$ is $(\mf{H},\alpha)$-weakly weight stable and $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-locally weight stable. Define $\delta:=\alpha\varepsilon/2$. We will prove that for every $\mc{F} \in \mf{F}$ and $\mu \in \mc{M}(\mc{F})$ such that \begin{equation} \label{eq:stability0} \lambda(\mc{F},\mu) \geq \lambda(\mf{H})-\delta, \end{equation} we have \begin{equation} \label{eq:stability1} d_{\mf{H}}(\mc{F},\mu) \leq \varepsilon. \end{equation} Note that this statement implies that $\mf{F}$ is $(\mf{H},\delta)$-weight stable as $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-locally weight stable and $\delta \leq \alpha$. The proof is by induction on $\brm{v}(\mc{F})$. The base of induction is trivial. For the induction step we assume that $\mc{F} \not \in \mf{F}^*$, as otherwise (\ref{eq:stability1}) holds. Indeed, if $\mc{F} \in \mf{F}^*$, we have $$d_{\mf{H}}(\mc{F},\mu) \leq \frac{\lambda(\mf{H})-\lambda(\mc{F},\mu)}{\alpha} \leq \frac{\delta}{\alpha} \leq \varepsilon,$$ as $\mf{F}^*$ is $(\mf{H},\alpha)$-weakly weight stable and $\delta \leq \alpha\varepsilon$.
Thus, \(\mc{F}\notin \mf{F}^*\), and there exist $v_1,v_2 \in V(\mc{F})$ such that $\{v_1,v_2\} \not \subseteq F$ for every $F \in \mc{F}$. We assume that \(\mu(v_1)\neq 0\) and \(\mu(v_2)\neq 0\), since otherwise the conclusion follows from the induction hypothesis. We will consider a family of probability distributions on $V(\mc{F})$ defined as follows. For $t \in [0,1]$, let $\mu_t \in \mc{M}(\mc{F})$ be defined by $\mu_t(v) = \mu(v)$ for all $v \in V(\mc{F})\setminus\{v_1,v_2\}$, $\mu_t(v_1)=t(\mu(v_1)+\mu(v_2))$, and $\mu_t(v_2)=(1-t)(\mu(v_1)+\mu(v_2))$. Note that $\mu=\mu_x$, for $x:=\mu(v_1)/(\mu(v_1)+\mu(v_2))$. As \(\mu(v_1)\neq 0\) and \(\mu(v_2)\neq 0\), it follows that $x \not \in \{0,1\}$. Note that $(\mc{F},\mu_0)$ and $(\mc{F},\mu_1)$ can be considered as weighted $r$-graphs on $\brm{v}(\mc{F})-1$ vertices and, therefore, the induction hypothesis is applicable to them. Moreover, since no edge of $\mc{F}$ contains both $v_1$ and $v_2$, the function $t \mapsto \lambda(\mc{F},\mu_t)$ is affine, and so \begin{equation}\label{eq:stability2} \lambda(\mc{F},\mu)=(1-x)\lambda(\mc{F},\mu_0)+x\lambda(\mc{F},\mu_1). \end{equation} If $\lambda(\mc{F},\mu_i) < \lambda(\mf{H}) - \delta$ for $i \in \{0,1\}$, then by (\ref{eq:stability2}), $\lambda(\mc{F},\mu) < \lambda(\mf{H})-\delta$, in contradiction with (\ref{eq:stability0}). Thus, without loss of generality (swapping $v_1$ and $v_2$ if necessary), we assume that $\lambda(\mc{F},\mu_0) \geq \lambda(\mf{H}) - \delta$. By the induction hypothesis we have $d_{\mf{H}}(\mc{F},\mu_0) \leq \varepsilon$. Now suppose for a contradiction that $d_{\mf{H}}(\mc{F},\mu) > \varepsilon$. As $d_{\mf{H}}(\mc{F},\mu_t)$ is a continuous function of $t$, there exists $y \in [0,x]$ such that $d_{\mf{H}}(\mc{F},\mu_y)=\varepsilon$. Since $\mf{F}$ is $(\mf{H},\varepsilon,\alpha)$-locally weight stable, we have \begin{equation}\label{eq:stability3} \lambda(\mc{F},\mu_y) \leq \lambda(\mf{H}) - \alpha\varepsilon.
\end{equation} On the other hand, \begin{align}\label{eq:stability4} \lambda&(\mc{F},\mu_y) = \frac{x-y}{x}\lambda(\mc{F},\mu_0)+\frac{y}{x}\lambda(\mc{F},\mu_x)\notag \\ &\geq \frac{x-y}{x}(\lambda(\mf{H}) - \delta) + \frac{y}{x}(\lambda(\mf{H}) - \delta) = \lambda(\mf{H}) - \delta> \lambda(\mf{H}) - \alpha\varepsilon, \end{align} as $\delta< \alpha\varepsilon$. The contradiction between inequalities (\ref{eq:stability3}) and (\ref{eq:stability4}) concludes the proof. \end{proof} \section{Erd\H{o}s-Simonovits Stability Theorem via local and weighted stability.}\label{sec:example} In this section we give a sample application of the techniques we developed thus far. We give a proof of the classical Erd\H{o}s-Simonovits Stability Theorem~\cite{Sim68}, which can be stated in the language of this paper as follows. \begin{theorem}[Erd\H{o}s-Simonovits Stability Theorem~\cite{Sim68}] Let $t \geq 2$ be a fixed positive integer, and let $K_t$ denote the complete graph on $t$ vertices. Then $\brm{Forb}(K_t)$ is $\mf{B}(K_{t-1})$-stable. \end{theorem} \noindent \emph{Proof.} Let $\mf{F}:=\brm{Forb}(K_t)$ and $\mf{H}:=\mf{B}(K_{t-1})$. \begin{claim}\label{claim:erdosstoneaux}\(\mf{F}\) is \(\mf{H}\)-vertex locally stable. \end{claim} Our theorem follows from this claim. Indeed, by Theorem~\ref{thm:narrowlocaltolocal}, Claim~\ref{claim:erdosstoneaux} implies that $\mf{F}$ is $\mf{H}$-locally stable. Theorem~\ref{thm:symmetrization} in turn implies that \(\mf{F}\) is \(\mf{H}\)-stable, as the family $\mf{F}^*$ in the statement of Theorem~\ref{thm:symmetrization} is the family of cliques on at most $(t-1)$ vertices, and is, trivially, $\mf{H}$-weakly weight stable. Thus it remains to prove the claim.
\begin{proof}[Proof of Claim~\ref{claim:erdosstoneaux}] We will show that $\mf{F}$ is $(\mf{H},\varepsilon,1)$-vertex locally stable, that is, there exist $\varepsilon>0$, \(n_0\in \mathbb{N}\) such that if $\mc{F} \in \mf{F}$ satisfies $\brm{v}(\mc{F})=n \geq n_0$, $d_{\mf{H}}(\mc{F}) \leq \varepsilon n^2$ and \begin{equation}\label{eq:degree} | L_{\mc{F}}(v)| \geq \left(\frac{t-2}{t-1} -\varepsilon\right)n, \end{equation} for every $v \in V(\mc{F})$, then $|\mc{F}| \leq m(\mf{H},n)-d_{\mf{H}}(\mc{F})$. In fact, we prove a stronger statement. We show that if the above conditions hold then there exists $\mc{H}_0 \in \mf{H}$ such that $\mc{F} \subseteq \mc{H}_0$, that is, $\mc{F}$ is $(t-1)$-partite. \begin{remark}An even stronger result was proved by Andr\'{a}sfai, Erd\H{o}s and S\'{o}s~\cite{AndErdSos74}. They showed that the condition $d_{\mf{H}}(\mc{F}) \leq \varepsilon n^2$ is unnecessary, and (\ref{eq:degree}) suffices to deduce that $\mc{F}$ is $(t-1)$-partite for $\varepsilon < \frac{1}{(3t-4)(t-1)}$. We, however, include the proof which exploits the bound on the distance from $\mc{F}$ to $\mf{H}$ to demonstrate the methods used in the proof of Theorem~\ref{thm:general}. \end{remark} Let $0 < \varepsilon \ll \gamma \ll 1/t$ be chosen to satisfy the inequalities appearing further in the proof and let \(n\) be sufficiently large. Given $\mc{F}$ as above, let $\mc{H} \in \mf{H}$ be such that $V(\mc{H})=V(\mc{F})$ and $|\mc{F} \triangle \mc{H}| = d_{\mf{H}}(\mc{F})$. Since $d_{\mf{H}}(\mc{F}) \leq \varepsilon n^2$ and, by (\ref{eq:degree}), $|\mc{F}| \geq \left(\frac{t-2}{t-1}-\varepsilon\right)\frac{n^2}{2}$, we have \begin{equation}\label{eq:ES2} |\mc{H}| \geq |\mc{F}|- \varepsilon n^2 \geq \left(\frac{t-2}{t-1} -3\varepsilon\right)\frac{n^2}{2}. \end{equation} Let $\mc{P}=\{P_1,P_2,\ldots,P_{t-1}\}$ be the blowup partition of $V(\mc{H})$. It is easy to see that (\ref{eq:ES2}) implies that $$\left||P_i| - \frac{n}{t-1}\right| \leq \gamma n,$$ for all $i \in [t-1]$ with an appropriate choice of $\varepsilon \ll\gamma$.
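The last deduction can be verified by a short computation, which we sketch here for completeness (this elaboration is ours; we write $y_j := |P_j|/n$):

```latex
We have $\sum_{j=1}^{t-1} y_j = 1$ and, as $\mc{H}$ is complete
$(t-1)$-partite, $|\mc{H}| = \bigl(1 - \sum_{j} y_j^2\bigr)n^2/2$. For any
fixed $j$, Cauchy--Schwarz applied to the remaining $t-2$ parts gives
\[
  \sum_{i} y_i^2 \;\geq\; y_j^2 + \frac{(1-y_j)^2}{t-2}
  \;\geq\; \frac{1}{t-1} + \frac{t-1}{t-2}\left(y_j - \frac{1}{t-1}\right)^2,
\]
since the quadratic $y \mapsto y^2 + (1-y)^2/(t-2)$ attains its minimum
$\frac{1}{t-1}$ at $y = \frac{1}{t-1}$. Thus, if
$\bigl||P_j| - \frac{n}{t-1}\bigr| > \gamma n$ for some $j$, then
\[
  |\mc{H}| \;\leq\; \left(\frac{t-2}{t-1} - \frac{t-1}{t-2}\,\gamma^2\right)
  \frac{n^2}{2},
\]
contradicting (\ref{eq:ES2}) once $3\varepsilon < \frac{t-1}{t-2}\gamma^2$.
```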
Next we show that the neighborhood of every vertex in $\mc{F}$ is ``close'' to the neighborhood of some vertex in $\mc{H}$. The corresponding part of the proof of Theorem~\ref{thm:general}, Lemma~\ref{theorem2}, is longer and more technical than the argument below, yet the main ideas are very similar. For $v \in V(\mc{F})$, let $I(v)=\{ i \: | \: |N(v) \cap P_i| \geq \gamma n \}$, where $N(v)$ denotes the neighborhood of $v$. Then (\ref{eq:degree}) implies that $|I(v)| \geq t-2$ for every $v \in V(\mc{F})$. Suppose for a contradiction that $|I(v)|=t-1$ for some $v \in V(\mc{F})$, and choose $Q_i \subseteq N(v) \cap P_i$ so that $|Q_i|=\gamma n$ for $i \in [t-1]$. For simplicity, we assume that $\gamma n$ is an integer. Let $Q = \cup_{i\in [t-1]}Q_i \subseteq N(v)$. Then $\mc{F}|_{Q}$ is $K_{t-1}$-free (as $Q \subseteq N(v)$ and $\mc{F}$ is $K_t$-free) and, therefore, Tur\'{a}n's theorem implies that \begin{equation}\label{eq:ES3} |\mc{F}|_{Q}| \leq \frac{(t-3)((t-1)\gamma n)^2}{2 (t-2)}. \end{equation} On the other hand, $\mc{H}|_{Q}$ is the complete $(t-1)$-partite graph with parts $Q_1,Q_2,\ldots,Q_{t-1}$, thus \begin{equation}\label{eq:ES4} |\mc{H}|_{Q}| = \binom{t-1}{2}(\gamma n)^2 = \frac{(t-2)((t-1)\gamma n)^2}{2 (t-1)}. \end{equation} Combining (\ref{eq:ES3}) and (\ref{eq:ES4}), we deduce that \begin{align*} |\mc{F} \triangle \mc{H}| &\geq|\mc{F}|_Q \triangle \mc{H}|_Q | \\ &\geq \left( \frac{t-2}{t-1} - \frac{t-3}{t-2}\right) \frac{((t-1)\gamma n)^2}{2} > \varepsilon n^2.\end{align*} This contradiction implies that $|I(v)|= t-2$ for all $v \in V(\mc{F})$. Finally, we construct a partition $\mc{P}'=\{P_1',P_2',\ldots,P_{t-1}'\}$ of $V(\mc{F})$ so that $\mc{F} \subseteq \mc{F}''$, where $\mc{F}''$ is a blowup of $K_{t-1}$ with the blowup partition $\mc{P'}$. Define $P_i':= \{v \in V(\mc{F}) \: | \: i \not \in I(v)\}$ for $i \in [t-1]$. Note that (\ref{eq:degree}) and the bounds on the size of $P_j$ imply that $$|N(v) \cap P_j| \geq n/(t-1) - (t-1)\gamma n$$ for every $v \in P_i$, $i \neq j$. It follows that, if $v,v' \in P_i$, then $\{v,v'\} \not \in \mc{F}$.
(Otherwise, $\mc{F}|_{N(v) \cap N(v')}$ is $K_{t-2}$-free and $|N(v) \cap N(v') \cap P_j| \geq n/(t-1) - (2t-1)\gamma n$ for every $j \in [t-1] \setminus \{i\}$. This leads to a contradiction using an argument completely analogous to the one used in the preceding paragraph.) Thus, $\mc{F} \subseteq \mc{F}''$, as desired. \end{proof} \end{section} \begin{section}{Local stability of Forb($\mc{T}_r$)}\label{sec:local} Recall that an $(m,r,r-1)$ \emph{Steiner system} is an $r$-graph on $m$ vertices such that every $(r-1)$-tuple is contained in a unique $r$-edge. Let $\mc{S}$ be an $(m,r,r-1)$ Steiner system; it is easy to see that $|\mc{S}| =\frac{{m \choose r-1}}{r}$ and $| L_{\mc{S}}(v)|=\frac{{m-1 \choose r-2}}{r-1}$ for every $v \in V(\mc{S})$. We frequently use the following notation for related densities: \begin{align*} \pl{e}(m,r)&:=\frac{{m \choose r-1}}{rm^r},\\ \pl{d}(m,r)&:=\frac{{m-1 \choose r-2}}{(r-1)m^{r-1}}. \end{align*} We say that an $(m,r,r-1)$ Steiner system $\mc{S}$ is \emph{balanced} if $\lambda(\mc{S})=\lambda(\mc{S},\xi_{\mc{S}})$ (recall that $\xi_{\mc{S}}$ is defined in Section~\ref{sec:weghtedstability}; it is the uniform distribution on $V(\mc{S})$). It is easy to see that $m(\mf{B}(\mc{S}))=\pl{e}(m,r)$ when $\mc{S}$ is balanced. The main result of this section, stated below, applies to all balanced Steiner systems. \begin{theorem} \label{thm:localstability} If $\mc{S}$ is a balanced $(m,r,r-1)$ Steiner system for some $m \geq r \geq 3$, then Forb$(\mc{T}_r)$ is $\mathfrak{B}(\mc{S})$-vertex locally stable. \end{theorem} In all the following statements, $m\geq r \geq 3$ are fixed and $\mc{S}$ is a balanced $(m,r,r-1)$ Steiner system. We denote $\mf{B}(\mc{S})$ simply by $\mf{B}$. The proof of Theorem~\ref{thm:localstability} uses three auxiliary lemmas. The first ensures that if a large blowup \(\mc{B}\in \mf{B}\) has density close to the maximum possible (i.e. \(\pl{e}(m,r)\)), then the blowup partition is close to being an equipartition.
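To fix ideas, the following remark works out these quantities in the smallest interesting case; everything in it is direct computation except the final sentence, which records what balancedness would additionally require.

```latex
\begin{remark}
Take $\mc{S}$ to be the Fano plane, the unique $(7,3,2)$ Steiner system,
with edge set $\{i, i+1, i+3\}$, $i \in \bb{Z}_7$. Then $m=7$, $r=3$, and
\[
  |\mc{S}| = \frac{\binom{7}{2}}{3} = 7, \qquad
  | L_{\mc{S}}(v)| = \frac{\binom{6}{1}}{2} = 3, \qquad
  \pl{e}(7,3) = \frac{1}{49}, \qquad
  \pl{d}(7,3) = \frac{3}{49}.
\]
The uniform distribution gives
$\lambda(\mc{S},\xi_{\mc{S}}) = 7 \cdot (1/7)^3 = 1/49 = \pl{e}(7,3)$, so
$\mc{S}$ is balanced precisely when $\lambda(\mc{S}) = 1/49$, which
requires a separate verification.
\end{remark}
```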
More formally, we say that the blowup \(\mc{B}\in \mf{B}\) with the blowup partition \(\mc{P}=\{P_1, P_2, \dots, P_m\}\) is \(\varepsilon\)-\emph{balanced} for some \(0<\varepsilon <1\), if for each $ j \in [m]$, \[\left||P_j| - \frac{n}{m}\right|\leq \varepsilon n.\] \begin{lem} \label{sizelemma} For every $\varepsilon >0$ there exist $\delta >0$ and $n_0\in \mathbb{N}$ such that the following holds. If $\mc{B}\in\mf{B}$ with $\pl{v}(\mc{B})=n\geq n_0$ and $|\mc{B}|\geq \left(\pl{e}(m,r) - \delta \right) n^r$, then \(\mc{B}\) is \(\varepsilon\)-balanced. \end{lem} \begin{proof}Let \(\mc{P}=\{P_1,P_2, \dots,P_m\}\) be the blowup partition of \(\mc{B}\). Define a vector \(\pl{y}\) with $y_j = \frac{|P_j|}{n}$ for each $j \in [m]$. We have $\sum_{j=1}^m{y_j} = 1$ and \begin{equation}\label{lagr}\lambda(\mc{S},\pl{y}) = \frac{|\mc{B}|}{n^r} \geq \pl{e}(m,r)-\delta. \end{equation} Since $\mc{S}$ is balanced and $\lambda(\mc{S},\cdot)$ is a continuous function, for every $\varepsilon>0$ there exists $\delta>0$ such that (\ref{lagr}) implies that $|y_j-1/m|\leq \varepsilon$ for every $j \in [m]$, as desired. \end{proof} Before stating the second auxiliary lemma, we introduce additional definitions. Let $\mc{B} \in \mf{B}$ have the blowup partition \(\mc{P}=\{P_1, P_2, \dots, P_m\}\), and let $\mc{F}$ be an \(r\)-graph with $V(\mc{F})=V(\mc{B})$. We call the edges in $\mathcal{F}\setminus \mathcal{B}$ \emph{bad}, the edges in $\mc{B}\setminus \mc{F}$ \emph{missing} and, finally, the edges in $\mathcal{F} \cap \mc{B}$ \emph{good}. Given a collection of sets $\mc{X}=\{X_1,X_2,\ldots,X_k\}$ we say that a set $F$ is \emph{$\mc{X}$-transversal} if $|X_i \cap F| \leq 1$ for every $1 \leq i \leq k$. We say that an $r$-graph $\mc{F}$ is \emph{$\mc{X}$-transversal} if every $F \in \mc{F}$ is $\mc{X}$-transversal.
Informally speaking, the next lemma tells us that if graphs \(\mc{F}\) and \(\mc{B}\) are ``locally sufficiently close'' and \(\mc{F}\) has density close to \(\pl{e}(m,r)\), then \(\mc{F}\) must be \(\mc{P}\)-transversal. This result will be useful in the proof of Lemma~\ref{theorem2}, where, working with bad edges, we will be able to restrict our attention to transversal ones. \begin{lem}\label{transversal} There exist $\varepsilon>0$ and $n_0\in \mathbb{N}$ such that the following holds. Let $\mc{F}$ be a $\mc{T}_r$-free $r$-graph with $\pl{v}(\mc{F}) = n\geq n_0$, $\mathcal{B}\in\mf{B}$ with $\pl{v}(\mc{B})=n$ and the blowup partition \(\mc{P}=\{P_1, P_2, \dots, P_m\}\). If $| L_{\mathcal{F}}(v)\triangle L_{\mathcal{B}}(v)|\leq \varepsilon n^{r-1}$ for every $v\in V(\mc{F})$, and $|\mc{F}|\geq \left(\pl{e}(m,r) - \varepsilon \right) n^r$, then $\mc{F}$ is $\mc{P}$-transversal. Moreover, if $\mc{F}'$ is a $\mc{T}_r$-free $r$-graph such that \(\mc{F} \subseteq \mc{F}'\), then $\mc{F}'$ is \(\mc{P}\)-transversal. \end{lem} \begin{proof} Clearly, it suffices to verify the last conclusion. Note that our choice of $n_0$ here (and in later proofs as well) is not explicit. We assume, for a contradiction, that there exists a non-transversal edge \(F \in \mc{F'}\), with \(v_1,v_2\in F\cap {P_j}\) for some \(j\). We will then show that \(\mc{F} \cup \{F\}\) contains a copy of \(\mc{T}_r\). We will find such a copy by showing the existence of an \((r-1)\)-tuple \(F'\in L(v_1)\cap L(v_2)\) (the links being taken in \(\mc{F}\)) that is disjoint from \(F\). Then, clearly, \(F, F'\cup \{v_1\}\) and \(F'\cup\{v_2\}\) together form a copy of \(\mc{T}_r\). Let us specify the choice of constants used in the proof. Fix $\varepsilon_{\ref{sizelemma}} :=\frac{1}{m+1}$. Let $\delta_{\ref{sizelemma}}$ be derived from Lemma~\ref{sizelemma} applied with $\varepsilon = \varepsilon_{\ref{sizelemma}}$.
We choose $0<\varepsilon <1$ satisfying the following constraints \begin{align} \varepsilon &< \pl{e}(m,r),\\ \varepsilon&\left(1+\frac{1}{r}\right) \leq \delta_{\ref{sizelemma}}, \label{epsilontrans:deltasize} \\ \varepsilon &<\frac{1}{2}\pl{d}(m,r)\left(\frac{1}{m}-\varepsilon_{\ref{sizelemma}}\right)^{r-1} \label{epsilontrans:epsilonsize}. \end{align} First, note that the links of both \(v_1\) and \(v_2\) are large. We have \begin{equation}\label{linksize}|L(v_i)| \geq \pl{d}(m,r) \min_{j\in[m]}{|P_j|^{r-1}}- \varepsilon n^{r-1} \end{equation} for $i = 1,2$. Moreover, \(\mc{B}\) is \(\varepsilon_{\ref{sizelemma}}\)-balanced. Indeed, since \[|\mc{F}\triangle\mc{B}| =\frac{1}{r}\sum_{i\in [n]}{| L_{\mc{F}}(i)\triangle L_{\mc{B}}(i)|} \leq \frac{1}{r}\varepsilon n^r,\] we have \[|\mc{B}|\geq |\mc{F}| - \frac{1}{r}\varepsilon n^r \geq \left(\pl{e}(m,r) - \varepsilon\left(1+\frac{1}{r}\right)\right)n^r \stackrel{(\ref{epsilontrans:deltasize})}{\geq} (\pl{e}(m,r)-\delta_{\ref{sizelemma}})n^r.\] By Lemma~\ref{sizelemma}, applied to $\mc{B}$ with $\varepsilon = \varepsilon_{\ref{sizelemma}}$, we have \[\left||P_j|-\frac{n}{m} \right|\leq \varepsilon_{\ref{sizelemma}}n\] for each \(j\in[m]\). Thus, from (\ref{linksize}) it follows that \[|L(v_i)| \geq \pl{d}(m,r)\left(\frac{1}{m} - \varepsilon_{\ref{sizelemma}}\right)^{r-1}n^{r-1} - \varepsilon n^{r-1}\] for each $i\in \{1,2\}$. Now we can show that the intersection of the links of \(v_1\) and \(v_2\) is large as well. Note that every $(r-1)$-tuple in $L(v_1)\triangle L(v_2)$ is either in a bad or in a missing edge with $v_1$ or $v_2$, but the total number of such edges is bounded by the initial assumptions, hence \[|L(v_1)\triangle L(v_2)| \leq 2\varepsilon n^{r-1}.
\] Thus, \begin{align}\label{commonlink}|L(v_1)\cap L(v_2)| &= \frac{1}{2}\left(|L(v_1)| + |L(v_2)| - |L(v_1)\triangle L(v_2)| \right)\\ &\geq \pl{d}(m,r)\left(\frac{1}{m} - \varepsilon_{\ref{sizelemma}}\right)^{r-1}n^{r-1} - 2\varepsilon n^{r-1}\\ &>rn^{r-2}, \end{align} where the last inequality is true for \(n\) sufficiently large. On the other hand, the number of $(r-1)$-tuples that contain neither $v_1$ nor $v_2$ and share a vertex with $F$ is at most $(r-2)n^{r-2}$. Hence, there exists an $(r-1)$-tuple \(F'\) in $L(v_1)\cap L(v_2)$ that is disjoint from $F$ and, as we discussed at the beginning of the proof, a contradiction follows. \end{proof} In the next lemma we show that for every $r$-graph $\mc{F} \in \brm{Forb}(\mc{T}_r)$ with sufficiently large minimum degree there exists a blowup $\mc{B}_0$ of $\mc{S}$ such that every vertex of $\mc{F}$ has ``similar'' neighborhoods in $\mc{F}$ and $\mc{B}_0$. The proof of this lemma contains the bulk of technical difficulties involved in proving Theorem~\ref{thm:localstability}. \begin{lem} \label{theorem2} For all integers $m\geq r\geq 3$ and $\varepsilon>0$ there exist $\delta >0$ and $n_0\in \mathbb{N}$ such that the following holds. If $\mathcal{F}$ is a $\mc{T}_r $-free $r$-graph with $\pl{v}(\mc{F})= n \geq n_0$, $d_{\mf{B}}(\mathcal{F})\leq \delta n^r $, $|\mc{F}|\geq (\pl{e}(m,r)-\delta)n^r$ and for every $v\in V(\mc{F})$, $| L_{\mc{F}}(v)|\geq (\pl{d}(m,r) - \delta) n^{r-1}$, then there exists $\mc{B}_0\in\mf{B}$ with $\pl{v}(\mc{B}_0)=n$ such that for every $v\in V(\mc{F})$ \[| L_{\mathcal{F}}(v)\triangle L_{\mc{B}_0}(v)|\leq \varepsilon n^{r-1}.\] \end{lem} \begin{proof}[Proof of Lemma~\ref{theorem2}] Let $\varepsilon_{\ref{transversal}}$ be chosen to satisfy Lemma~\ref{transversal}. We choose $$ 0<\delta\ll\varepsilon_{\ref{sizelemma}} \ll \gamma\ll\min\{\varepsilon_{\ref{transversal}},\varepsilon\}$$ to satisfy the constraints appearing further in the proof.
Let $\delta_{\ref{sizelemma}}$ be chosen to satisfy Lemma~\ref{sizelemma} applied with $\varepsilon=\varepsilon_{\ref{sizelemma}}$. We assume that $\delta \ll \delta_{\ref{sizelemma}}$. Let $\mc{B}\in \mf{B}$ be such that $ |\mc{F}\triangle\mc{B}|= d_{\mf{B}}(\mc{F})$, and let $\mc{P}=\{P_1, P_2, \dots, P_m\}$ be the blowup partition of $\mc{B}$. Since $2\delta\leq \delta_{\ref{sizelemma}}$ we have \[|\mc{B}|\geq |\mc{F}| - d_{\mf{B}}(\mc{F}) \geq (\pl{e}(m,r) - 2\delta)n^r \geq (\pl{e}(m,r) - \delta_{\ref{sizelemma}})n^r.\] Hence, \(\mc{B}\) is \(\varepsilon_{\ref{sizelemma}}\)-balanced by Lemma~\ref{sizelemma}. Consider the set \[J:=\left\{v\in V(\mc{F}) | \left| L_{\mc{F}}(v)\triangle L_{\mc{B}}(v)\right| > \gamma n^{r-1}\right\}.\] We have \[|J|\gamma n^{r-1}<\sum_{i\in [n]}{\left| L_{\mc{F}}(i)\triangle L_{\mc{B}}(i)\right| } =r|\mc{F}\triangle \mc{B}|\leq \delta rn^r.\] Let $\delta_1:=\delta r /\gamma$, then $|J| \leq \delta_1 n$, by the above. Let $\mathcal{F}' := \mathcal{F}|_{V(\mc{F})\setminus J}$, $n'=\pl{v}(\mc{F}')$, $\mathcal{B}':= \mathcal{B}|_{V(\mc{F})\setminus J}$, $P_j':=P_j\setminus J$ for each $j\in[m]$, and $\mc{P}'=\{P'_1, P'_2, \dots, P'_m\}$. The graph \(\mc{F}'\) satisfies the assumptions of Lemma~\ref{transversal}. Indeed, for every \(v\in V(\mc{F}')\), \[| L_{\mc{F}'}(v)\triangle L_{\mc{B}'}(v)| \leq \gamma n^{r-1} \leq\varepsilon_{\ref{transversal}} (1-\delta_1)^{r-1} n^{r-1} \leq \varepsilon_{\ref{transversal}} (n')^{r-1}.\] Similarly, \begin{align*}|\mc{F}'|\geq |\mc{F}| - |J|n^{r-1} \geq (\pl{e}(m,r) -\delta - \delta_1)n^r \geq (\pl{e}(m,r) -\varepsilon_{\ref{transversal}}) (n')^{r}. \end{align*} Thus both $\mc{F}'$ and $\mc{F}$ are $\mc{P}'$-transversal by Lemma~\ref{transversal}. Our next goal is to extend \(\mc{B}'\) to a blowup \(\mc{B}_0\) of $\mc{S}$ with $V(\mc{B}_0)=V(\mc{F})$, as follows. For each $u\in J$ we will find a unique index $j_{u} \in [m]$, such that $u$ ``behaves'' as the vertices in the partition class $P_{j_u}'$, and add the vertex \(u\) to this partition class.
By doing so for all vertices of \(J\), we will extend the partition \(\mc{P}'\), and since \(J\) has relatively small size, this operation will not increase the degrees of vertices in \(\mc{F}'\) drastically. So let us fix some \(u \in J\) and show that such an index \(j_u\) exists. For $I \subseteq [m]$, let $$E_I(u):=\{F \in \mc{F} \: | \: u\in F, \: |F \cap P'_i| =1 \ \text{for every}\ i \in I \}.$$ We construct an auxiliary $(r-1)$-graph $\mc{L}(u)$ with $V(\mc{L}(u))=[m]$ such that $I\in \mc{L}(u)$ if and only if $\left|E_I(u)\right|\geq \gamma n^{r-1}$. We aim to show that there exists a unique $j_u \in [m]$ such that $\mc{L}(u)$ coincides with the link graph $L_{\mc{S}}(j_u)$ of $j_u$ in $\mc{S}$. We start by proving that $\mc{L}(u)$ is at least as large as any of the link graphs \(L_{\mc{S}}(j)\), for \(j\in [m]\). Denote by $E_J(u)$ the set of all the edges in $\mc{F}$ that contain $u$ and at least one other vertex from $J$. Clearly, \(|E_J(u)|\leq |J|n^{r-2} \leq \delta_1 n^{r-1}.\) Therefore, \begin{align*} (\pl{d}(m,r)-\delta)n^{r-1}&\leq | L_{\mc{F}}(u)| \leq |E_J(u)|+\sum_{I\in \mc{L}(u)}{|E_I(u)|} + \sum_{I\notin \mc{L}(u)}{|E_I(u)|}\\ &\leq \delta_1 n^{r-1}+ |\mc{L}(u)| \left(\frac{1}{m}+\varepsilon_{\ref{sizelemma}}\right)^{r-1}n^{r-1}+ \gamma {m \choose r-1} n^{r-1}. \end{align*} It follows that \begin{align*} |\mc{L}(u)|&\geq \frac{\pl{d}(m,r)m^{r-1}}{(1+\varepsilon_{\ref{sizelemma}}m)^{r-1}} - \frac{(\delta+\delta_1 +\gamma/(r-1)!)m^{r-1}}{(1+\varepsilon_{\ref{sizelemma}}m)^{r-1}}\\ &> \pl{d}(m,r)m^{r-1}-1, \end{align*} where the last inequality holds as long as $\varepsilon_{\ref{sizelemma}},\delta, \delta_1$ and $\gamma$ are sufficiently small compared to $1/m^r$. It follows that $|\mc{L}(u)|\geq \pl{d}(m,r)m^{r-1} = |L_{\mc{S}}(j)|$ for any \(j\in[m]\), as both $|\mc{L}(u)|$ and $\pl{d}(m,r)m^{r-1}$ are integers. Next, we find \(j_u\) such that \(\mc{L}(u)\subseteq L_{\mc{S}}(j_u)\).
For every $j \in [m]$ consider \[ L_{j}(u) := \{v \in P_j' : |L_{\mc{F}}(\{u,v\})|\geq \gamma n^{r-2} \},\] that is, \( L_{j}(u)\) is the set of vertices in the partition class \(P_j'\) which are in relatively many edges with \(u\). Let \(K=\{j : \left| L_{j}(u)\right|< \gamma n\}\). We want to show that $|K|=1$, from which it will follow that \(u\) essentially behaves as the vertices of the partition class corresponding to this unique index in \(K\). First, let us prove that \(K \neq \emptyset\). Fix $I\in \mc{L}(u)$. As $\mc{S}$ is a Steiner system, there exists a unique $j$ such that $I\cup \{j\}\in \mc{S}$. We claim that $j \in K$. Assume not, and further assume, without loss of generality, that \(I=\{1,2,\dots, r-1\}\). Then there exist $\{u, v_1,v_2, \dots, v_{r-1}\}\in E_I(u)$ and $v_r\in L_{j}(u)$ such that $\{v_1,v_2, \dots, v_{r-1},v_r\}\in\mc{F}$. Otherwise, for every $F\in E_I(u)$ and every \(v\in L_j(u)\), $\left(F\setminus\{u\}\right)\cup \{v\}$ is a missing edge. Hence, \[|\mc{F}\triangle \mc{B}| \geq |E_I(u)| | L_j(u)| \geq \gamma n^{r-1}\cdot \gamma n >\delta n^r, \] a contradiction. Let $v_1,v_2, \dots, v_{r-1}, v_r$ be as above. Since $\mc{F}$ is $\mc{T}_r $-free, every edge in $\mc{F}$ that contains both $u$ and $v_r$ must also contain a vertex among $\{v_1,v_2, \dots, v_{r-1}\}$. Therefore, we must have \(|L(\{u,v_r\})| \leq (r-1)n^{r-3}\), while, by definition of \( L_j(u)\), \(|L(\{u,v_r\})|\geq\gamma n^{r-2}\), yielding a contradiction when $n$ is large enough. Thus $K \neq \emptyset$. Note that if we prove that $K = \{j_u\}$ for some index \(j_u\), then $\mc{L}(u)\subseteq L_{\mc{S}}(j_u)$ (as we have shown that for every $I\in \mc{L}(u)$ the unique $j$ with $I\cup\{j\}\in \mc{S}$ belongs to $K$), and since $|\mc{L}(u)|\geq m^{r-1}\pl{d}(m,r)=|L_{\mc{S}}(j_u)|$, it will follow that $\mc{L}(u)=L_{\mc{S}}(j_u)$. \begin{claim}\label{claim:K=1} $|K|= 1$. \end{claim} \begin{proof} Let \(k:=|K|\); we have already shown that \(k\geq 1\). Suppose for a contradiction that \(k\geq 2\). Let \(A\) be a $\mc{P}'$-transversal $(r-2)$-tuple.
We want to show that \begin{equation}\label{linkofiandI}|L(A\cup \{u\})|\leq\left (\frac{1}{m} + \varepsilon_{\ref{sizelemma}} + \gamma(m-1)\right) n. \end{equation} Suppose for a contradiction that there exist $j_1\neq j_2$ such that $|L(A\cup \{u\})\cap P_{j_1}| \geq \gamma n$ and $|L(A\cup \{u\})\cap P_{j_2}| \geq \gamma n$. Since $\mc{F}$ is $\mc{T}_r $-free, for every $v_1\in L(A\cup \{u\})\cap P_{j_1}$ and $v_2\in L(A\cup \{u\})\cap P_{j_2}$, we must have $$|L(\{v_1,v_2\})|\leq (r-1)n^{r-3}.$$ It follows that \begin{align*} |\mc{F}\triangle\mc{B}|&\geq \gamma^2 n^2\left(\left(\frac{1}{m}-\varepsilon_{\ref{sizelemma}}\right)^{r-2} n^{r-2} - (r-1)n^{r-3}\right)\\ &{\geq} \frac{1}{2}\gamma^2 \left(\frac{1}{m}-\varepsilon_{\ref{sizelemma}}\right)^{r-2} n^{r}>\delta n^r, \end{align*} which is a contradiction. Thus, no such $j_1$ and $j_2$ exist, and (\ref{linkofiandI}) follows. Using (\ref{linkofiandI}), we obtain an upper bound on \(|E_I(u)|\), for every \((r-2)\)-tuple \(I\subseteq [m]\). Without loss of generality, suppose $I=\{1,2, \dots, {r-2}\}$. We apply (\ref{linkofiandI}) to every \(A\in [n]^{(r-2)}\) which is \(I\)-transversal (i.e.\ $|A\cap P'_i| =1$ for every $i\in I$). As $$\prod_{j\in I} |P'_{j}| \leq \left(\frac{n}{m}+\varepsilon_{\ref{sizelemma}}n\right)^{r-2},$$ we derive \begin{equation}\label{linkofiandI2}|E_I(u)|\leq\left(\frac{1}{m}+\varepsilon_{\ref{sizelemma}}\right)^{r-2} \left(\frac{1}{m} + \varepsilon_{\ref{sizelemma}} + \gamma(m-1)\right)n^{r-1}.
\end{equation} Finally, we derive an upper bound on \(| L_{\mc{F}}(u)|\), which will contradict the initial assumption \(| L_{\mc{F}}(u)| \geq (\pl{d}(m,r)-\delta) n^{r-1}\): \begin{align*} | L_{\mc{F}}(u)|&\leq |E_J(u)|+ \sum_{I\subseteq[m],|I|=r-1 \atop{I\cap K = \emptyset}}{|E_I(u)|} + \sum_{I\subseteq[m],|I|=r-1 \atop{I\cap K \neq \emptyset}}{|E_I(u)|} \\ & \leq |J|n^{r-2}+ \frac{1}{r-1}\sum_{I\subseteq[m],|I|=r-2 \atop{I\cap K = \emptyset}}{|E_I(u)|} \\& + \sum_{j\in K}\left(| L_{j}(u)| n^{r-2} +(n-| L_{j}(u)|)\gamma n^{r-2}\right) \\ &\stackrel{(\ref{linkofiandI2})}\leq \frac{1}{r-1}\binom{m-k}{r-2}\left(\frac{1}{m}+\varepsilon_{\ref{sizelemma}}\right)^{r-2} \left(\frac{1}{m} + \varepsilon_{\ref{sizelemma}} + \gamma(m-1)\right)n^{r-1} \\&+\delta_1n^{r-1}+ 2\gamma m n^{r-1}\\ &\leq \left( \left(\frac{\binom{m-2}{r-2}}{(r-1)m^{r-1}} +\gamma\right)+2\gamma m + \delta_1\right) n^{r-1}\\ &< (\pl{d}(m,r)-\delta) n^{r-1}, \end{align*} a contradiction. Thus, \(k = 1\). \end{proof} As discussed above, Claim~\ref{claim:K=1} implies that for every \(u\in J\) there exists a unique \(j_u\) such that $\mc{L}(u)=L_{\mc{S}}(j_u)$. We extend the blowup \(\mc{B}'\) as we discussed earlier. For every \(j\in [m]\), define $$P_{j}^{0}:=P_j'\cup \{u\in J \: |\: j_u = j\},$$ and let \(\mc{P}_0:=\{P^0_1,P^0_2,\dots,P^0_m\}\). Let $\mc{B}_0 \supseteq \mc{B}'$ be the blowup of $\mc{S}$ with the blowup partition \(\mc{P}_0\). \begin{claim}\label{B0claim}For every \(v\in V(\mc{F})\), \[| L_{\mc{B}_{0}}{(v)}\triangle L_{\mc{F}}(v)|\leq \varepsilon n^{r-1}.\] \end{claim} \begin{proof} For each $v\in V(\mc{F})\setminus J$, we have \begin{align*}| L_{\mc{B}_{0}}{(v)}\triangle L_{\mc{F}}(v)|&\leq | L_{\mc{B}'}{(v)}\triangle L_{\mc{F'}}(v)| + |J|n^{r-2} \\ &\leq \gamma n^{r-1} + \delta_1 n^{r-1} \leq \varepsilon n^{r-1}. \end{align*} We now consider $v\in J$.
Since $\mc{F}$ is $\mc{P}'$-transversal, it follows that for every $F \in L_{\mc{F} \setminus \mc{B}_0}(v)$, either $F \cap J \neq \emptyset$, or there exists $I \not \in \mc{L}(v)$ such that $F \in L_I(v)$. Thus, \begin{equation}\label{eq:theorem2} | L_{\mc{F} \setminus \mc{B}_0}(v) | \leq \delta_1 n^{r-1} + \left({m \choose r-1} - |\mc{L}(v)|\right)\gamma n^{r-1}< \frac{\varepsilon}{8}n^{r-1}. \end{equation} Finally, \begin{align*} |& L_{\mc{F}}(v)\triangle L_{\mc{B}_0}(v)| = 2| L_{\mc{F} \setminus\mc{B}_0}(v) | + | L_{\mc{B_0}}(v)| - | L_{\mc{F}}(v)| \\ &\stackrel{(\ref{eq:theorem2})}\leq \frac{\varepsilon}{2}n^{r-1} +\pl{d}(m,r) \left(\frac{1}{m}+\varepsilon_{\ref{sizelemma}}+\delta_1\right)^{r-1}n^{r-1}-(\pl{d}(m,r)-\delta)n^{r-1}\\&\leq \varepsilon n^{r-1}, \end{align*} as desired. \end{proof} By Claim~\ref{B0claim} the blowup \(\mc{B}_0\) satisfies the conclusion of the lemma, thus finishing the proof. \end{proof} \vskip 10pt We are now ready for the proof of Theorem~\ref{thm:localstability}. \begin{proof}[Proof of Theorem~\ref{thm:localstability}.] Our goal is to show that there exist $\varepsilon,\alpha,n_0>0$ such that the following holds. If $\mc{F}\in \brm{Forb}(\mc{T}_r)$ with $\pl{v}(\mc{F})= [n]$, $n \geq n_0$ such that $d_{\mf{B}}(\mc{F})\leq \varepsilon n^r$, and $| L_{\mc{F}}(v)|\geq (\pl{d}(m,r)- \varepsilon)n^{r-1}$ for every $v\in V(\mc{F})$, then \begin{equation} \label{eq:localstab} |\mc{F}|\leq m(\mf{B},n) - \alpha d_{\mf{B}}(\mc{F}). \end{equation} In fact, we show that one can take $\alpha = \frac{1}{2}$. Now we specify dependencies between constants used further in the proof. Let $\varepsilon_{\ref{transversal}}$ be taken to satisfy Lemma~\ref{transversal}. Define $\varepsilon_{\ref{sizelemma}}:=\frac{1}{4m}$. Let $\delta_{\ref{sizelemma}}$ be taken to satisfy Lemma~\ref{sizelemma} applied with $\varepsilon=\varepsilon_{\ref{sizelemma}}$. 
We choose $0\ll \varepsilon \ll \varepsilon_{\ref{theorem2}}\ll \min\{\delta_{\ref{sizelemma}},\varepsilon_{\ref{transversal}}\}$ to satisfy the inequalities appearing in the proof. In particular, we will use $\varepsilon < \delta_{\ref{theorem2}}/2$, where $\delta_{\ref{theorem2}}$ is chosen to satisfy Lemma~\ref{theorem2} applied with $\varepsilon_{\ref{theorem2}}$. We can assume that \[|\mathcal{F}|\geq (\pl{e}(m,r)-2\varepsilon)n^r \geq (\pl{e}(m,r)-\delta_{\ref{theorem2}})n^r,\] since otherwise the result follows directly with $\alpha =1$. By Lemma~\ref{theorem2} there exists $\mc{B}\in \mf{B}$ with $V(\mc{B}) =V(\mc{F})$ such that $$| L_{\mc{F}}(v)\triangle L_{\mc{B}}(v)|\leq \varepsilon_{\ref{theorem2}}n^{r-1}$$ for every $v\in V(\mc{F}).$ Recall the definitions of missing and bad edges at the beginning of this section. Generalizing these notions, we introduce the following notation. For every $I\subset V(\mc{F})$ with $0\leq |I|\leq r$, we denote \[A(I) :=\{F\in \mc{B}\setminus \mathcal{F}| I\subseteq F\},\] \[B(I) :=\{F\in \mathcal{F}\setminus \mc{B}| I\subseteq F\},\] $a(I) := |A(I)|$ and $b(I) :=|B(I)|$. So \(a(I)\) and \(b(I)\) respectively denote the number of missing and bad edges that the tuple \(I\) is in. We have $\mc{F}\triangle \mc{B} = A(\emptyset) \cup B(\emptyset)$ and \(|\mc{F}\triangle \mc{B}| = a(\emptyset) + b(\emptyset)\). It is easy to see that for every $I$, such that $0\leq |I|\leq r-1$, the following inequalities hold \begin{equation} \label{a} \sum_{j\notin I}{a(I\cup \{j\})}\geq a(I)\geq \frac{1}{r}\sum_{j\notin I}{a(I\cup \{j\})}, \end{equation} \begin{equation} \label{b} \sum_{j\notin I}{b(I\cup \{j\})}\geq b(I)\geq \frac{1}{r}\sum_{j\notin I}{b(I\cup \{j\})}. \end{equation} It is not hard to see that to derive the inequality (\ref{eq:localstab}) it suffices to show that $a(\emptyset)\geq 3 b(\emptyset)$. Let us assume for a contradiction that $b(\emptyset)>\frac{1}{3}a(\emptyset)$. 
Our next claim shows that the number of bad edges containing a given \(i\)-tuple can be bounded from above in terms of the number of missing edges containing any of its \((i-1)\)-subtuples. \begin{claim}\label{amplificationlem}There exists $c>0$ such that for every $I \subseteq V(\mc{F}), 1 \leq |I| \leq r$, and every $I'\subset I$ with $|I'|=|I|-1$, we have $a(I')\geq c b(I)n$. \end{claim} \begin{proof} We proceed by induction on $r-|I|$. We prove that for each $1 \leq i \leq r$, and every $I\subseteq [n]$ with $|I|=i$ there exists $c_i>0$ such that for all $I'\subset I$ and $|I'|=i-1$, we have $a(I')\geq c_ib(I)n$. This clearly implies the claim. We begin with the base case: $|I|=r$, and we assume that $I$ is a bad edge, as otherwise the statement is trivial. Let $\mc{P}=\{P_1, P_2, \dots, P_m\}$ be the blowup partition of $\mathcal{B}$. By our assumptions, $|\mathcal{F}|\geq (\pl{e}(m,r)-\varepsilon_{\ref{transversal}})n^r$ and $| L_{\mc{F}}(v) \triangle L_{\mc{B}}(v)|\leq \varepsilon_{\ref{theorem2}} n^{r-1} \leq \varepsilon_{\ref{transversal}}n^{r-1}$ for every \(v\in V(\mc{F})\). Thus by Lemma~\ref{transversal} all bad edges in $\mc{F}$ are $\mc{P}$-transversal. Without loss of generality, assume $I=\{v_1,v_2,\dots,v_r\}$, where $v_j\in P_j$, and $I'=\{v_1,v_2,\dots,v_{r-1}\}$. Since \(I\) is a bad edge, we have $\{1,2,\dots, r\}\notin \mc{S}$, which implies that $\{1,2,\dots, r-1, k\}\in \mc{S}$ for some \(k\neq r\). Without loss of generality, we assume $k=r+1$. Let $N:=L(I') \cap { P_{r+1}}$. For every $u\in N$, we have $$a(\{u,v_r\}) \geq (\min_{i}|P_i|)^{r-2}-|L(\{u,v_r\})|.$$ However, every edge that covers $u$ and $v_r$ must have a non-empty intersection with $\{v_1,v_2,\dots,v_{r-1}\}$, as $\mc{F}$ is $\mc{T}_r $-free, therefore \[|L(\{u,v_r\})|\leq (r-1)n^{r-3}.\] As $|\mathcal{F}| \geq (\pl{e}(m,r)-2\varepsilon)n^r \geq (\pl{e}(m,r)-\delta_{\ref{sizelemma}})n^r$, the blowup \(\mc{B}\) is \(\varepsilon_{\ref{sizelemma}}\)-balanced by Lemma~\ref{sizelemma}. 
Therefore \[a(\{v_r\}) \geq |N|\left(\left(\frac{n}{m}-\varepsilon_{\ref{sizelemma}} n\right)^{r-2} - (r-1)n^{r-3} \right).\] But \( a(\{v_r\})\leq \varepsilon_{\ref{theorem2}} n^{r-1}\) and we have \[|N|\leq \frac{2\varepsilon_{\ref{theorem2}}}{\left(\frac{1}{m}-\varepsilon_{\ref{sizelemma}}\right)^{r-2}}n = 2\varepsilon_{\ref{theorem2}}\left(\frac{4m}{3}\right)^{r-2}n \leq \frac{n}{2m}, \] for sufficiently large $n$. The latter directly implies that \(a(I')\geq |P_{r+1}\setminus N| \geq \frac{n}{4m}\), thus concluding the proof of the base case with $c_r= \frac{1}{4m}$. We now turn to the induction step. For every $I'\subset I$ with $|I'|=|I|-1$ we have \begin{align*} ra(I') &\stackrel{(\ref{a})}{\geq} \sum_{I'\subset J, \atop{ |J| = i}}{a(J)} &\geq\sum_{I'\subset J, J\neq I \atop{ |J| = i}}{c_{i+1}b(J\cup I)n} &\stackrel{(\ref{b})}{\geq}c_{i+1}b(I)n, \end{align*} where the second inequality follows from the induction hypothesis. Thus $a(I')\geq c_{i} b(I) n$, where $ c_i:=\frac{c_{i+1}}{r}>0$, as desired. \end{proof} Let $c$ be as in Claim~\ref{amplificationlem}. Then $a(\emptyset)\geq cb(\{v\})n$ for every $v\in V(\mc{F})$. Direct averaging shows that for every $I \subseteq V(\mc{F})$ with $0\leq |I|\leq r-1$ and every $c'>0$ such that $b(I) > c' a(I)$, there exists $v\notin I$ such that $b(I\cup \{v\})> c'a(I\cup \{v\})$. Therefore, since $b(\emptyset)>\frac{1}{3}a(\emptyset)$, there exists $v_1\in V(\mc{F})$ such that $b(\{v_1\})>\frac{1}{3}a(\{v_1\})$. Similarly, $a(\{v_1\})\geq cb(\{v_1,v\})n$ for every $v\in V(\mc{F}) \setminus \{v_1\}$, and there exists $v_2\in V(\mc{F}) \setminus \{v_1\}$, such that $b(\{v_1,v_2\})>\frac{1}{3}a(\{v_1,v_2\})$. 
Applying this argument iteratively, we get the following series of inequalities: \begin{align*} a(\emptyset)&\geq cb(\{v_1\})n >\frac{c}{3}a(\{v_1\})n\geq\frac{c^2}{3}b(\{v_1,v_2\})n^2>\frac{c^2}{9}a(\{v_1,v_2\})n^2\geq \dots \\ &>\frac{c^{r-1}}{3^{r-1}}a(\{v_1,v_2,\dots,v_{r-1}\})n^{r-1} \geq \frac{c^r}{3^{r-1}}b(\{v_1,v_2,\dots,v_r\})n^r \\& > \frac{c^r}{3^{r}}a(\{v_1,v_2,\dots,v_r\})n^r. \end{align*} In particular, $b(\{v_1,v_2,\dots,v_r\})>0$, i.e. $b(\{v_1,v_2,\dots,v_r\})=1$. Thus, \[a(\emptyset) > \frac{c^r}{3^{r-1}}n^r \geq\frac{\varepsilon_{\ref{theorem2}}}{r}n^r \geq |\mc{F}\triangle\mc{B}|,\] a contradiction. \end{proof} \end{section} \begin{section}{Proof of Theorem~\ref{thm:maintheorem}}\label{sec:finale} In this section we combine all of the preceding results to prove Theorem~\ref{thm:maintheorem}. In fact, we prove a stronger theorem which directly implies Theorem~\ref{thm:maintheorem}. We adopt the following notation for the rest of the section: $\hat{\mf{F}}=\brm{Forb}(\Sigma_r)$ and \(\mf{F}^*:=\{\mc{F}\in \hat{\mf{F}} \: | \: \mc{F} \text{ covers pairs }\}\). We say that an $r$-graph $\mc{F}$ is \emph{uniquely dense} (\emph{around} \(\Sigma_r\)) if $\lambda(\mc{F},\xi_{\mc{F}}) \geq \lambda(\mc{F}^*,\mu)$ for every $\mc{F}^* \in \mf{F}^*$, $\mu \in \mc{M}(\mc{F}^*)$ and, further, the equality holds only when $\mc{F}^*$ is isomorphic to $\mc{F}$ and $\mu = \xi_{\mc{F}^*}$. \begin{theorem}\label{thm:general} If \(\mc{S}\) is a uniquely dense $(m,r,r-1)$ Steiner system for some \(m\geq r\geq 3\), then $\brm{Forb}(\mc{T}_r)$ is $\mf{B}(\mc{S})$-stable. \end{theorem} Let \(\mc{S}_5\) and \(\mc{S}_6\) denote the unique $(11,5,4)$ and $(12, 6,5)$ Steiner systems respectively. The following result of Frankl and F\"{u}redi allows us to immediately derive Theorem~\ref{thm:maintheorem} from Theorem~\ref{thm:general}. \begin{theorem}[P. Frankl, Z. F\"{u}redi, \cite{franklfuredi}] \label{thm:FranklFuredi} $\mc{S}_5$ and $\mc{S}_6$ are uniquely dense. 
\end{theorem} It remains to prove Theorem~\ref{thm:general}. \begin{proof}[Proof of Theorem~\ref{thm:general}.] Let $\mf{F}:=\brm{Forb}(\mc{T}_r)$ and $\mf{B}:=\mf{B}(\mc{S})$. Note that a uniquely dense Steiner system is, in particular, balanced. Therefore, by Theorem~\ref{thm:localstability}, $\mf{F}$ is $\mf{B}$-vertex locally stable. Clearly $\mf{B}$ is clonable, thus from Theorem~\ref{thm:narrowlocaltolocal} it follows that $\mf{F}$ is $\mf{B}$-locally stable. We derive $\mf{B}$-stability of $\mf{F}$ from $\mf{B}$-stability of $\hat{\mf{F}}$ (which we prove shortly) and the $\mf{B}$-local stability of \(\mf{F}\), combined with the following application of the Hypergraph Removal Lemma. \begin{theorem}\label{FisFhatstable}For every $\varepsilon>0$ there exists $n_0\in \mathbb{N}$ such that for every $\mc{F} \in \mf{F}$ with $\brm{v}(\mc{F})=n \geq n_0$ there exists $\hat{\mc{F}} \subseteq \mc{F}$, $\hat{\mc{F}} \in \hat{\mf{F}}$ such that $$|\hat{\mc{F}}| \geq |\mc{F}|-\varepsilon n^r.$$ \end{theorem} We omit the proof of Theorem~\ref{FisFhatstable}; the interested reader can find it in~\cite{pikhurko}. It is easy to see that $\mf{F}^*$ is thin. Since \(\mc{S}\) is uniquely dense, we have $$\{\mc{S}\}=\{\mc{F} \in \mf{F}^{*} \: | \: \lambda(\mc{F},\mu)= \lambda(\mf{F}^{*})\; \mathrm{for\; some} \; \mu \in \mc{M}(\mc{F})\}=:\mf{F}^{**}.$$ Thus Theorem~\ref{thm:compactness} implies that \(\mf{F}^*\) is \(\mf{B}\)-weakly weight stable, and therefore, by Theorem~\ref{thm:symmetrization} the family $\hat{\mf{F}}$ is $\mf{B}$-stable. Note that here we are using the fact that both $\hat{\mf{F}}$ and $\mf{B}$ are clonable families (unlike $\mf{F}$ which is not clonable). Let $\alpha,\varepsilon>0$ be such that $\mf{F}$ is $(\mf{B},\alpha,\varepsilon)$-locally stable and $\hat{\mf{F}}$ is $(\mf{B},\alpha)$-stable. We claim that $\mf{F}$ is $(\mf{B},\alpha/2)$-stable. Indeed, consider $\mc{F} \in \mf{F}$ with $\brm{v}(\mc{F})=n$. 
We want to show that \begin{equation}\label{eq:localstabilityend} |\mc{F}|\leq m(\mf{B},n) - \frac{\alpha}{2} d_{\mf{B}}(\mc{F}), \end{equation} if $n$ is sufficiently large. If $d_{\mf{B}}(\mc{F}) \leq \varepsilon n^r$ then (\ref{eq:localstabilityend}) holds, as $\mf{F}$ is $(\mf{B},\alpha,\varepsilon)$-locally stable, and so we can assume that $d_{\mf{B}}(\mc{F}) > \varepsilon n^r$. By Theorem~\ref{FisFhatstable} there exists $\hat{\mc{F}} \subseteq \mc{F}$ such that $|\hat{\mc{F}}| \geq |\mc{F}|-\varepsilon' n^r$, where we choose $\varepsilon':=\frac{\alpha}{2(\alpha+1)}\varepsilon$. As $\hat{\mf{F}}$ is $(\mf{B},\alpha)$-stable, we have \begin{align*} |\mc{F}| &\leq |\hat{\mc{F}}| + \varepsilon' n^r \leq m(\mf{B},n) - \alpha d_{\mf{B}}(\hat{\mc{F}})+ \varepsilon' n^r \\&\leq m(\mf{B},n) - \alpha( d_{\mf{B}}(\mc{F}) - \varepsilon'n^r)+ \varepsilon' n^r \\&= m(\mf{B},n) - \frac{\alpha}{2} d_{\mf{B}}(\mc{F}) + \left((\alpha+1)\varepsilon'n^r-\frac{\alpha}{2} d_{\mf{B}}(\mc{F})\right)\\&\leq m(\mf{B},n) - \frac{\alpha}{2} d_{\mf{B}}(\mc{F}), \end{align*} where the last inequality holds by the choice of $\varepsilon'$. This concludes the proof of the theorem. \end{proof} \end{section} \bibliographystyle{amsplain}
\section{Introduction} As imaging systems become ubiquitous, the ability to recognize human actions is becoming increasingly important. \textbf{Activity Recognition} is an elemental task in computer vision that identifies human actions. These actions are detected after the complete execution of the action in a video, so that both the action and its purpose can be identified. Applications of activity recognition are becoming highly visible in surveillance, video retrieval, human-robot interaction and self-driving vehicles. Given a set of videos $S$ and its corresponding action labels $L$, each video $V \in S$ contains one or more actions $l_V$. The aim of the activity recognition problem is thus to predict the labels $l_V$ from an understanding of the video $V$. In the case of action recognition, the video is generally segmented to contain only one execution of a human action. In more general cases, the video may contain multiple actions, and the goal of action detection is not just to recognize the actions being performed in the video but also to determine their spatio-temporal boundaries. One of the reasons why the transition from 2D images to 3D videos has not been smooth is the lack of temporal understanding. This paper proposes a network that enhances not only temporal features but also spatial features. The spatial dimensions X and Y in 2D images are treated with equal importance. \cite{feichtenhofer2019slowfast} observes that the time dimension T in a video is not as important as the spatial dimensions X and Y. Our paper exploits this property to perform action recognition from videos. \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{attention3.PNG} \end{center} \caption{An overview of our three stream network with attention head. The network architecture for our baseline, i.e. 
the three stream network with bi-LSTM, follows a similar architecture with a bidirectional LSTM block instead of the Attention block} \label{fig:short} \end{figure*} Our method proposes a three lane network where the pathways are differentiated by frame rate. The single pathway operates at a single frame rate and captures spatial information, the slow pathway operates at a low frame rate and captures spatio-temporal information, and the fast pathway operates at a high frame rate and captures fine temporal information. Inspired by textual tasks, we propose two networks, one with a bi-directional LSTM head (which we call the baseline) and the other with an attention mechanism, to achieve state-of-the-art accuracy. We test our models on multiple datasets: UCF-101 \cite{soomro2012ucf101}, Kinetics-600 \cite{carreira2017quo} and Atomic Visual Actions (AVA) \cite{gu2018ava}. AVA is a particularly difficult dataset as it requires detecting multiple people in videos semi-densely in time and recognizing multiple basic actions. With the three stream network and bi-directional LSTM head we achieve state-of-the-art accuracy on the UCF-101 and Kinetics datasets, while the three stream network with attention head outperforms all methods on all three datasets. \section{Related Works} Traditionally, hand-crafted features like Histogram of Oriented Gradients (HOG)\cite{hog} and Histogram of Optical Flow (HOF)\cite{hof} were used by the research community for video action recognition. These features can be either sparsely\cite{hof} or densely\cite{dense} sampled. These early methods consider interest points independently across frames. In recent times, deep learning has brought remarkable improvements to image understanding\cite{krizhevsky2012imagenet}. The high performance of image classification networks has inspired networks for video understanding \cite{karpathy2014large} with minimal modifications. 
One of these methods extracts features independently from each frame of the video, applies an image classification CNN, and finally pools the predictions across all frames of the video. The drawback of this method is that it ignores the temporal structure; for example, such models cannot distinguish opening a door from closing a door. Adding a recurrent layer to a CNN can encode temporal state by unrolling the RNN through time. An interesting and practical model was proposed by~\cite{2str}. Here a 2D ConvNet was used on short temporal snapshots of videos. The outputs from a single RGB frame and a stack of 10 optical flow frames were averaged after passing them through the 2D ConvNet. This model achieved very strong performance on existing datasets while being computationally efficient. 3D Convolutional Networks are important to video modeling; they are similar to standard convolutional networks, but with spatio-temporal filters. 3D CNNs have a key attribute: the ability to generate hierarchical representations of the input video data. Temporal Segment Network \cite{tsn} is based on the idea of long-range temporal structure modelling. It was successful at tackling the limited access to temporal context of earlier networks, which operated on single frames or a single stack of multiple frames. Instead of working on a single short snippet, TSN works on short snippets sparsely sampled from the entire video. \subsection{Two stream networks} Two stream networks were introduced by \cite{2str} for action recognition in videos. They used two CNNs, one each for the spatial and temporal information in videos. In this architecture, RGB images extracted from frames of the video and optical flow computed from consecutive frames are fed into the two streams. This work was later expanded to give the inflated two stream 3D network. \cite{lan2017deep} proposed to use neural networks along with shallow local features. \cite{2str} proposed a spatiotemporal architecture and explored various layer fusion schemes. 
They claimed that fusing the last convolutional layers spatially increases accuracy. \cite{tsn} introduced a neural network that performs pixel-level action recognition and segmentation by adding temporal aggregation to a two-stream network. \cite{feichtenhofer2019slowfast} proposes a two stream network where the CNN on each stream works at a different frame rate. \subsection{Temporal Understanding} 2D image models have also been extended to videos, as seen in \cite{carreira2017quo}. Long-term filtering and pooling using temporal strides \cite{varol2017long} were then used. Video compression methods were also used \cite{wu2018compressed} for a faster network. Extending 3D CNNs, some works have separated the 2D spatial and 1D temporal filters \cite{tran2018closer}. Eventually, 2D image classification networks were inflated to 3D and extended to action recognition. Our work proposes to perform temporal filtering at different rates, along with adding extra blocks to understand the temporal structure. \subsection{Bidirectional LSTM} Since RNNs can encode long-term sequence information in data, they have recently been applied to action recognition tasks. Bidirectional LSTMs add a hidden layer that allows information to flow in the backward direction, so the output layer receives information from past and future states simultaneously. Thus bi-LSTMs \cite{graves2013hybrid} increase the amount of input information available to the network. CNNs are restricted in the adaptability of their input, as they require fixed-size input data. A standard RNN is also restricted, as upcoming input information cannot be reached from the present state. In contrast, bi-LSTMs do not require fixed-size input data, and future input information can be accessed from the current state. LSTMs have seldom been used in the literature. \cite{yue2015beyond} connects LSTM cells to CNN feature maps for action recognition. 
\cite{ullah2017action} proposes to use a bi-LSTM along with a convolutional encoder. \subsection{Attention models} The concept of attention was introduced by Bahdanau, Cho, and Bengio (2014) for machine translation. The attention mechanism is based on the idea that the neural network learns how relevant particular feature maps are with respect to the output state. These importance values are specified as attention weights and are generally learned jointly with the other model parameters trained for a specific goal. Attention was used in first-person action recognition by jointly learning gaze and actions \cite{li2018eye}, by using object-centric attention for egocentric activity recognition \cite{sudhakaran2018attention}, and by event-modulated attention \cite{hu2018squeeze}. Attention was used to extract spatial information by generating spatial masks trained on video labels\cite{zhang2018image}. Temporal attention was used for action recognition by detecting changes in gaze \cite{shen2018reinforced}. \section{Methodology} Since prior works were not able to extract temporal information effectively, we explore the network under multiple temporal settings, namely a bidirectional LSTM and attention. For better spatial understanding, we use a combined three stream CNN-based network as the encoder. Our baseline (the three stream network with bi-LSTM) performs well on the UCF-101 and Kinetics datasets. To further improve performance, we use an attention mechanism that gives state-of-the-art performance on all three datasets. \subsection{Three stream} Section 2 discusses works relating to two stream networks. Unfortunately, these (and current) methods cannot show decent performance on a difficult dataset like AVA. Hence we suggest a novel architecture with three streams. \subsubsection{Single pathway} The main aim of the single pathway is to extract spatial features. The network can have any 2D architecture. 
We see in \cite{karpathy2014large} that a single 2D network can extract a great deal of spatial information. This pathway is essentially equivalent to a 3D network with temporal stride $\theta_1$, where $\theta_1$ matches the frame rate at which the video is processed. Hence for a 30fps video the stride is 30, so only the first frame is ingested. This is a very lightweight stream with much lower MACs compared to the other streams. \subsubsection{Slow pathway} The main aim is to extract spatio-temporal features. Here the temporal stride $\theta_2$ is smaller than $\theta_1$. After experimenting as mentioned in Section 7, the best value for $\theta_2$ is 16, matching the choice in \cite{feichtenhofer2019slowfast}. This means that approximately 2 frames of a 30fps input video are processed per second. \subsubsection{Fast pathway} Similar to the slow pathway, this is a 3D CNN, but with a much smaller temporal stride so as to encode finer temporal information. To understand finer representations, we select alternate frames, making the stride 2. Hence $\theta_3$ < $\theta_2$ < $\theta_1$. One of the key features of the three stream network is the different values of $\theta$: the pathways work at different temporal speeds, which drives the expertise of the subnets instantiating them. Our Fast pathway also differs from existing models in that it can use significantly lower channel capacity. The low channel capacity can also be interpreted as a weaker ability to represent spatial semantics. \subsection{Bi-directional LSTM} In a bi-directional LSTM \cite{ullah2017action}, the output at time $t$ depends not only on the previous frames in the sequence, but also on the upcoming frames. In our work we use multiple LSTM layers, so our scheme has two LSTM layers for both the forward pass and the backward pass. Figure 2 shows the overall concept of the bidirectional LSTM used in the proposed method. 
The input data is fed to the bidirectional RNN, and the hidden states of the forward pass and the backward pass are combined in the output layer. Both the forward and the backward pass consist of two LSTM cells, making our model deeper. The proposed method outperforms other state-of-the-art methods due to its mechanism of computing the output: the output for a frame at time $t$ is calculated from the previous frame at time $t-1$ and the upcoming frame at time $t+1$, because the layers process the sequence in both directions. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{bi-lstm.PNG} \end{center} \caption{Our bi-directional LSTM head } \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Attention head} We use the self-attention mechanism explained in \cite{plizzari2020spatial} for our attention head. Our self-attention head computes correlations between arbitrary positions of a sequence input. An attention function consists of a query $A_Q$, keys $A_K$ and values $A_V$. The query and keys have the same vector dimension $d_k$, and the values and outputs have the same size $d_v$. The output is computed as a weighted sum of the values, with the weights computed by a scaled dot-product of the query and keys. The attention function is defined as \begin{center} $Attention(Q,K,V) = softmax\left(\frac{QK^T}{\sqrt{d_k}}\right)V$ \end{center} where $\sqrt{d_k}$ is a scaling factor. The equation computes scaled dot-product attention, and the network computes the attention multiple times in parallel (multi-head) to extract different correlation information. The multi-head attention outputs are concatenated and transformed to the same vector dimension as the input sequence. A residual connection is adopted between the input and output of the multi-head self-attention layer, and layer normalization is applied to the summed output. A fully-connected feed-forward network with a residual connection is applied to the normalized self-attention output. 
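A minimal NumPy sketch of the scaled dot-product attention above (single head; the toy sequence length and dimensions are illustrative assumptions, not values from our network):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, computed row-wise over the keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (T_q, T_k) correlation scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of the values

rng = np.random.default_rng(0)
T, d_k, d_v = 5, 8, 4  # toy sequence length and vector dimensions
Q = rng.standard_normal((T, d_k))
K = rng.standard_normal((T, d_k))
V = rng.standard_normal((T, d_v))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 4)
```

Multi-head attention runs this function several times in parallel with separate projections and concatenates the results.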
\section{Network Architecture} A major advantage of this framework is that one can use any network architecture for each of the streams. We use an inflated ResNet \cite{resnet} as the 3D backbone network, for its promising performance on various datasets \cite{res3d}. We study the effect of various other backbone networks on the three stream design in the ablation study; ResNet-52 serves as our 3D backbone. We use the output features of res2, res3, res4 and res5 to build our network, where they are respectively spatially downsampled. \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{threestreamtable.png} \end{center} \caption{Overview of ResNet-52 for different pathways of the network encoder} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Pathway link} Similar to \cite{2str}, \cite{yang2020temporal}, we attach one lateral connection between the three pathways at every stage. Specifically for ResNets, these connections are right after pool1, res2, res3, and res4. The pathways have different temporal dimensions, so the lateral connections perform a transformation to match them. We use unidirectional connections that fuse features of the Fast pathway into the Slow pathway, and finally features of the Slow pathway into the Single pathway. To aggregate all of the features, and to ensure that consecutive features are compatible for addition, we apply a down/up-sampling operation during aggregation and multiply by a factor. \section{Experiments} We evaluate our approach on three video recognition datasets using standard evaluation protocols. For the action classification experiments presented in this section, we consider the widely used Kinetics-600 \cite{DBLP:journals/corr/abs-1808-01340} and UCF-101 \cite{soomro2012ucf101}. For the action detection experiments in Sec. 5, we use the challenging AVA \cite{gu2018ava} dataset. 
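The lateral connections of the Pathway link subsection can be sketched as follows. This is a simplified illustration: the strided temporal subsampling, fusion by addition, and equal channel widths are our simplifying assumptions; the paper only states that a transformation matches the temporal dimensions and that features are multiplied by a factor.

```python
import numpy as np

def fuse(src, dst, factor=1.0):
    """Fuse features of a faster pathway into a slower one.

    src: (T_src, C) features, dst: (T_dst, C) features with T_dst <= T_src.
    Temporal dimensions are matched by strided subsampling, then the
    scaled source features are added to the destination features.
    """
    ratio = src.shape[0] // dst.shape[0]
    return dst + factor * src[::ratio][: dst.shape[0]]

fast = np.ones((32, 16))    # Fast pathway: 32 frames, 16 channels
slow = np.ones((4, 16))     # Slow pathway: 4 frames
single = np.ones((1, 16))   # Single pathway: 1 frame

slow = fuse(fast, slow)      # Fast -> Slow
single = fuse(slow, single)  # Slow -> Single
print(single.shape)  # (1, 16)
```

In the actual network the fusion happens on 5D feature tensors after each listed ResNet stage; the 2D arrays here only illustrate the temporal matching.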
\subsection{Dataset} In this subsection, we discuss the different datasets we use to evaluate the performance of our networks. \subsubsection{UCF-101} The UCF-101 \cite{soomro2012ucf101} dataset is an open-source dataset collected from videos on YouTube. It contains 101 action classes with over 13000 videos and 27 hours of data in total. The dataset consists of realistic user-uploaded videos containing camera motion and cluttered, uneven backgrounds. The videos are recorded in unconstrained environments and typically include camera movement, varying lighting conditions, partial occlusions and low-quality clips. The action categories can be divided into five types: Human-Object interaction, Body-Motion, Human-Human interaction, Playing Musical Instruments, and Sports. Categories like sports contain multiple actions performed against similar backgrounds, such as greenery in most cases. Some clips were captured with different illuminations, poses, and viewpoints. A crucial challenge of this dataset is that the actions are performed in real life and are very realistic, which is significant compared to other datasets where an actor is used to perform the actions. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{confusionmatrixucf.png} \end{center} \caption{Performance of the Bi-LSTM based three stream network on the UCF-101 dataset, shown as a confusion matrix.} \label{fig:long} \label{fig:onecol} \end{figure} \subsubsection{Kinetics} Kinetics is a large open-source dataset by Google that contains around seven hundred action classes. Each class contains at least 600 clips. Each clip is 10s long and is taken from a different YouTube video. The actions in the videos are human-centric and cover a wide assortment of classes, including human-object interactions such as playing instruments and human-human interactions such as hugging. The videos have variable resolution and frame rate. 
\subsubsection{AVA} The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels, with multiple labels per person occurring frequently. The dataset is sourced from the 15th to 30th minute time intervals of 430 different movies, which, given the 1 Hz sampling frequency, gives us nearly 900 keyframes for each movie. Training/validation/test sets are split at the video level, so that all segments of one video appear only in one split. There are 211k training and 57k validation video segments. The performance metric is mean Average Precision (mAP) over 80 classes, using a frame-level IoU threshold of 0.5. \subsection{Training} Following the setting in \cite{feichtenhofer2019slowfast}, the input frames are sampled at the strides $\theta_1$, $\theta_2$ and $\theta_3$ from the video frames. We randomly crop 224×224 patches. A dropout of 0.5 is adopted to reduce overfitting. BatchNorm is not frozen. We use a momentum of 0.9, a weight decay of 0.00001 and synchronized SGD training over 8 GPUs. Each GPU has a batch size of 8, resulting in a mini-batch of 64 in total. \subsubsection{AVA dataset} The AVA \cite{gu2018ava} dataset focuses on spatiotemporal localization of human actions. Hence for this dataset, unlike the other datasets, we need a detection architecture. The architecture we use is inspired by and similar to Faster RCNN \cite{ren2015faster} and \cite{feichtenhofer2019slowfast}. We use the three stream network backbone along with a modified RCNN. In the fifth residual block of our implementation for AVA, we set the stride to 1 instead of 2 and dilate its filters by 2. Region-of-interest (ROI) features are therefore extracted from the last (fifth) residual block. Like \cite{DBLP:journals/corr/abs-1808-01340}, we extend each 2D ROI to a 3D ROI by replicating it along the temporal dimension. These features are fed through max pooling and a per-class sigmoid-based classifier. 
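The 2D-to-3D ROI extension described above can be sketched as below (the feature-map shapes and box coordinates are illustrative assumptions; the real ROI extraction also involves ROIAlign-style interpolation):

```python
import numpy as np

def roi_2d_to_3d(feat, roi):
    """Extend a 2D ROI along time by replication and crop a 3D feature map.

    feat: (T, H, W, C) features from the last residual block;
    roi: (y0, y1, x0, x1) box in feature-map coordinates, applied to
    every temporal slice (i.e. the 2D box replicated along time).
    """
    y0, y1, x0, x1 = roi
    return feat[:, y0:y1, x0:x1, :]

feat = np.arange(2 * 8 * 8 * 3, dtype=float).reshape(2, 8, 8, 3)
crop = roi_2d_to_3d(feat, (2, 6, 1, 5))
pooled = crop.max(axis=(0, 1, 2))  # max pooling before the per-class classifier
print(crop.shape, pooled.shape)  # (2, 4, 4, 3) (3,)
```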
\subsection{Inference} We uniformly sample 10 clips from a video along its temporal axis. Following \cite{wang2018videos} and \cite{feichtenhofer2019slowfast}, we scale the shorter spatial side to 256 pixels and take 3 crops of 256×256 to cover the spatial dimensions, as an approximation of fully-convolutional testing. We average the softmax scores for prediction. \section{Results} \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{avabargraph.PNG} \end{center} \caption{Bar chart of per-category AP for each action on the AVA dataset} \label{fig:short} \end{figure*} We evaluate our baseline method and attention based model along with current state-of-the-art models on the Kinetics, AVA and UCF datasets in Tables 1 and 2. We observe that our models outperform the state-of-the-art networks, with the attention based model performing best. It is worth noting that we obtain this performance using only raw RGB frames as input, while prior works use RGB, flow, and in some cases audio as well. \subsection{Baseline model for UCF-101 and Kinetics} Table 1 shows a comparison of various network architectures along with our implementation. In comparison with the previous state of the art, our model provides 3.1$\%$ higher top-1 accuracy on the Kinetics dataset and 1.4$\%$ higher accuracy on the UCF-101 dataset. Note that our results are better even without ImageNet pre-training. In particular, our model is 6$\%$ better than the best previous result of its kind. We experimented with ImageNet pre-training and observed less than 1 percent improvement compared to training from scratch. 
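The inference protocol described above (10 temporal clips times 3 spatial crops, with softmax scores averaged) can be sketched as follows; the random logits are a stand-in for the trained network's per-view outputs:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_video(view_logits):
    """Average softmax scores over all clip/crop views, return the top class.

    view_logits: (num_views, num_classes) raw scores, one row per view
    (10 temporal clips x 3 spatial crops = 30 views in our protocol).
    """
    probs = softmax(view_logits).mean(axis=0)
    return int(probs.argmax()), probs

rng = np.random.default_rng(1)
logits = rng.standard_normal((30, 101))  # 30 views, 101 classes (UCF-101)
logits[:, 7] += 4.0                      # make class 7 the consistent winner
label, probs = predict_video(logits)
print(label)  # 7
```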
\begin{table}[] \centering \begin{tabular}{|l|l|l|l|} \hline Model & \multicolumn{2}{l|}{Kinetics-600} & UCF-101 \\ \hline & top-1 & top-5 & \\ \hline Two stream \cite{2str} & 63.2 & 79.9 & 88 \\ \hline I3D \cite{carreira2017quo} & 72.1 & 89.9 & 94 \\ \hline Two stream I3D \cite{carreira2017quo} & 75.7 & 90.1 & 95.6 \\ \hline S3D \cite{s3d} & 69.4 & 88.0 & 95.9 \\ \hline Nonlocal \cite{nl} & 77.3 & 92.6 & 93.2 \\ \hline R(2+1)D \cite{tran2018closer} & 73.8 & 88.5 & 96.8 \\ \hline STC \cite{diba2018spatio} & 68.2 & 88.4 & 85.3 \\ \hline ARTNet \cite{wang2018appearance} & 69.2 & 88.0 & 89.7 \\ \hline ECO \cite{zolfaghari2018eco} & 70.0 & 89.4 & 90.1 \\ \hline SlowFast \cite{feichtenhofer2019slowfast} & 77.0 & 92.6 & 96.8 \\ \hline CoViAR \cite{covair} & 65.4 & 75.6 & 86.4 \\ \hline \textbf{3S+Bi-LSTM(ours)} & 80.1 & 94.0 & 98.2 \\ \hline \textbf{3S+Attention(ours)} & \textbf{82.3} & \textbf{95.5} & \textbf{99.0} \\ \hline \end{tabular} \caption{A comparison of different methods, including the current state-of-the-art, with our proposed methods on UCF-101 and Kinetics-600. Both of our methods outperform the networks proposed in the current literature.} \end{table} \subsection{Attention based model for UCF-101 and Kinetics} While our baseline already demonstrates the best performance in Section 6.1, the attention-based three-stream network outperforms it as well. This is largely due to the attention mechanism's ability to build more effective temporal representations. Our attention-based model outperforms the current state of the art by 5.3$\%$. Although the attention-based model has enhanced temporal understanding, spatial features are not compromised, as indicated by the lack of improvement when using ImageNet pre-trained weights. \subsection{Results on AVA dataset} We compare the performance of both of our methods against previous results.
We observe that the three-stream network with Bi-LSTM improves on the previous state of the art by +0.1 mAP, while our attention-based method improves on it by +6.1 mAP. It is important to note that methods which additionally use a flow stream can double the computational cost. We also observe that Kinetics pre-training brings a large accuracy improvement of +6.8 mAP. \begin{table}[h] \centering \begin{tabular}{|l|l|} \hline \textit{Model(+our version RCNN)} & \textit{AVA (mAP \%)} \\ \hline Two stream \cite{2str} & 7.4 \\ \hline I3D \cite{carreira2017quo} & 14.5 \\ \hline Two stream I3D \cite{carreira2017quo} & 15.8 \\ \hline S3D \cite{s3d} & 17.2 \\ \hline Nonlocal \cite{nl} & 20.0 \\ \hline R(2+1)D \cite{tran2018closer} & 21.1 \\ \hline STC \cite{diba2018spatio} & 15.6 \\ \hline ARTNet \cite{wang2018appearance} & 13.2 \\ \hline ECO \cite{zolfaghari2018eco} & 12.8 \\ \hline SlowFast \cite{feichtenhofer2019slowfast} & 26.3 \\ \hline CoViAR \cite{covair} & 21.9 \\ \hline \textbf{3S+bi-LSTM(ours)} & 26.4 \\ \hline \textbf{3S+Attention(ours)} & \textbf{32.2} \\ \hline \end{tabular} \caption{A comparison of different methods, including the current state-of-the-art, with our proposed methods on AVA. While our baseline model shows a very small improvement on the AVA dataset, the attention head shows a significant improvement over the current state-of-the-art.} \end{table} \section{Ablation Study} All of our models use class-agnostic regression and data augmentation (and Kinetics pre-training for the AVA models), techniques we observed early on to be critical for good performance. A key intuition in designing the different pathways was to use a lower channel capacity for the Fast pathway. We experimented with values of $\theta_2$ ranging from 4 to 32. The accuracy (measured on Kinetics) varied only slightly with $\theta_2$, while the GFLOPs increased with it. Hence, as a balance, we choose 16 frames.
\begin{table}[h] \centering \begin{tabular}{|l|l|l|} \hline \textit{$\theta_2$ value} & \textit{top-1 ($\%$)} & \textit{top-5 ($\%$)} \\ \hline 4 & 79.9 & 94.8 \\ \hline 6 & 80.2 & 94.9 \\ \hline 12 & \textbf{80.3} & \textbf{95.6} \\ \hline 16 & 80.0 & 95.3 \\ \hline 32 & 80.1 & 95.0 \\ \hline \end{tabular} \caption{Effect of the number of frames $\theta_2$ on Kinetics accuracy} \end{table} We also compared the effectiveness of adding a temporal block, i.e. Bi-LSTM and attention blocks, to our three-stream network on the AVA dataset. Temporal blocks improve performance on 74 out of 80 actions compared to the vanilla three-stream network. The categories that showed major gains were: \textit{run/jog} (21.2 AP), \textit{hand wave} (15.7 AP), \textit{hand clap} (24.6 AP) and \textit{eat} (13.5 AP). These are categories where modeling dynamics is of vital importance. Our three-stream CNN encoder can have any architecture type, and we experimented with various networks. We observe that ResNet-152 performs best as the backbone compared to InceptionV3 \cite{szegedy2016rethinking}, ResNet-101 \cite{resnet}, ResNet-52, EfficientNet-B5 \cite{tan2019efficientnet}, and EfficientNet-B7. However, we select ResNet-52 as a trade-off against GFLOPs. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{ boxesinaclip.PNG} \end{center} \caption{Performance of the attention-based three-stream network with respect to the number of bounding boxes in the clip. } \label{fig:long} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{ areacovered.PNG} \end{center} \caption{Performance of the attention-based three-stream network with respect to the area covered by bounding boxes in the clip. } \label{fig:onecol} \end{figure} \section{Conclusion} In this paper we proposed two network architectures that significantly improve the spatio-temporal understanding of actions in a video.
They achieve state-of-the-art accuracy for video action classification and detection. We hope the three-stream network will foster further research in action and video understanding. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Malaria has always been a public health problem and, since the discovery of malaria parasites in human blood by Charles Laveran in 1880, remains so despite more than 100 years of research. Malaria continues to have a significant impact on the world, with over 400,000 deaths each year \cite{WorldHealthOrganization2019}. It is a vector-borne disease caused by five plasmodial species: {\it Plasmodium falciparum, P. vivax, P. malariae, P. ovale, and P. knowlesi}, with \textit{P. falciparum} being the most pathogenic species infecting humans \cite{KhouryEtAl2018}. The malaria parasite has a complex life cycle involving sexual reproduction occurring in the insect vector \cite{AlanoCarter1990} and two stages of infection within a human (or animal) host: a liver stage \cite{Frevert2004} and a blood stage \cite{BannisterMitchell2003}. Human infection starts with the bite of an infected mosquito, which injects the sporozoite form of \textit{Plasmodium} during a blood meal. The sporozoites enter the host peripheral circulation and rapidly transit to the liver, where they infect liver cells (hepatocytes) \cite{Frevert2004}. The parasite replicates within the liver cell before rupturing to release extracellular parasite forms (merozoites) into the host circulation, where they may invade red blood cells (RBCs) to initiate the blood-stage infection \cite{MillerEtAl2013}. Then follows a series of cycles of replication, rupture, and re-invasion of RBCs. Some asexual parasite forms commit to an alternative developmental pathway and become sexual forms (gametocytes) \cite{RussellEtAl2013}. Gametocytes can be taken up by mosquitoes during a blood meal, where they undergo a cycle of sexual development to produce sporozoites \cite{AlanoCarter1990}, which completes the parasite life cycle. The classical model of within-host parasite multiplication in malaria infections was formulated by Anderson {\it et al.} \cite{AndersonEtAl1989}.
This model tracks uninfected red blood cells (RBCs), parasitized RBCs (pRBCs) and merozoites. The pioneering work of Anderson {\it et al.} \cite{AndersonEtAl1989} has been further developed in several directions, including in particular the immune response; see for instance \cite{GravenorLloyd1998,Hellriegel1992,HetzelAnderson1996,HoshenEtAl2000,LiEtAl2011,MitchellCarr2010,MolineauxDietz1999,AgustoEtAl2019} for human malaria infection. We also mention discrete-time models such as in \cite{DietzEtAl2006}. Those models use an exponential process to describe the rate of rupture of pRBCs and, as a consequence, fail to capture realistic lifetimes of the pRBCs on short time scales \cite{Saul1998}. One reason for this is that they are essentially Markovian, {\it i.e.} 'memoryless': a RBC that has been parasitized for 40 hours has the same probability of producing merozoites as, {\it e.g.}, a RBC parasitized less than an hour ago. Moreover, those models treat some processes that are in reality continuous as occurring only in a narrow window ({\it e.g.}, the development of parasites within RBCs and the rupture of pRBCs followed by the release of merozoites). To correct this issue, some models of malaria infection include $K$-compartments ordinary differential equations (ODEs) representing a progression through the parasite's developmental cycle, {\it e.g.} \cite{GravenorLloyd1998,SaralambaEtAl2011,ZaloumisEtAl2012,IggidrEtAl2006}, or delay differential equations (DDEs) to capture the time pRBCs take to mature before producing new merozoites, {\it e.g.} \cite{HoshenEtAl2000,KerlinGatton2013,CaoEtAl2019,McKenzieBossert2005,SuEtAl2011}. Other approaches use partial differential equations (PDEs) to track the age structure of the pRBC population \cite{AntiaEtAl2008,KhouryEtAl2018,CromerEtAl2009,DemasseDucrot2013}.
It is shown in \cite{FonsecaVoit2015} that DDEs perform better than ODEs in representing the dynamics of red blood cells during malaria infection. The $K$-compartments ODE model can be interpreted as the application of the method of stages (or the ''linear chain trick'') to the life cycle of pRBCs; {\it e.g.} see \cite{IggidrEtAl2006,FengEtAl2007,HurtadoKirosingh2019} and references therein. One problem with the $K$-compartments ODE model is how to decide upon the number of repeated compartments needed to capture the realistic dynamics of the pRBCs \cite{GravenorEtAl2002}. Determining the distribution of mean waiting times across compartments for the ODE model is also an issue: if the compartments can be considered equivalent to the developmental stages of pRBCs, then parasites might not spend equal time in each stage. We first introduce both mathematical models (PDE and $K$-compartments ODE) and define the model parameters and outputs. Next, using gametocyte production as a proxy variable for infectiousness, we compare the outputs of a PDE stage-structured formulation to those of the classical $K$-compartments ODE model. Furthermore, the output of both mathematical models is used to qualitatively recover the time course of parasitemia, defined as the proportion of all infected RBCs among the total number of RBCs. Finally, the $K$-compartments ODE model (when $K$ is properly chosen) and the PDE model are used to highlight a strong qualitative connection between gametocyte density and parasitemia. \section{Material and method} \subsection{Data and methodology} Our analysis is based on malariatherapy data taken from \cite{EichnerEtAl2001}. Malaria inoculation was a recommended treatment for neurosyphilis between 1940 and 1963. We also refer to \cite{Chernin1984} for a review of malariatherapy and the knowledge it provided on malaria infection. The data we shall use consist of daily records of gametocyte density for twelve patients.
Although malariatherapy has been abandoned for obvious ethical reasons, the advantages of using such data are multiple. Indeed, the patients are naive to malaria infection and the dynamics are not perturbed by anti-malarial treatments. Let us note that such data have been widely used in the literature, in particular to estimate mathematical model parameters; we refer to \cite{EichnerEtAl2001} and the references therein. The method we develop consists of devising a mathematical model describing the intra-host development of the infection and fitting the model to the available data. The output of the mathematical model then gives us access to various quantities related to the time course of the infection, including parasitemia. \begin{figure} \begin{center} \centerline{\includegraphics[width=1.1\textwidth] {Fig1.png}} \caption{($S_1$) The RBC development chain, ($S_2$) the parasite development chain. $T_D$= average duration ($\pm$ one standard deviation) spent in an RBC age class given in \cite{McQueenEtAl2013}, $\Lambda_0$ is the RBC production rate from the marrow source. In our model, the parameter $1/\mu_{r\to m}$ (resp. $1/\mu_{m\to s }$, $1/\mu_{s\to d}$) is the time spent in the RBC reticulocyte (resp. mature, senescent) class. A continuous parameter $a$ denotes the time since the concerned RBC was parasitized: ring stage ($0<a< 26$ hours), trophozoite ($26<a< 38$ hours) and schizont ($38<a<48$ hours). In the case of \emph{P. falciparum} infection, one has ($\gamma_r =\gamma_m = \gamma_s = 1$), while for \emph{P. vivax} one has ($\gamma_r =1$, $\gamma_m = \gamma_s = 0$) and for \emph{P. malariae} ($\gamma_r = \gamma_m =0$, $\gamma_s = 1$) \cite{PaulEtAl2003}.} \label{flow_diagram} \end{center} \end{figure} \subsection{Mathematical model} As discussed above, we now present the mathematical model we shall use to recover the parasitemia of twelve patients from observed time courses of gametocyte density.
We shall describe the within-host malaria infection coupled with red blood cell (RBC) production as well as immune effectors. Fig. \ref{flow_diagram} presents the flow diagram of the model considered in this note. Our model is divided into four parts: (i) uninfected RBC (uRBC) dynamics; (ii) changes in parasite stage or parasite maturity; (iii) gametocyte production and dynamics; and (iv) immune response dynamics. For the uRBC dynamics, we divide cells into three age classes: reticulocyte (young), mature and senescent. All three ages are vulnerable to \textit{P. falciparum} infection; this can be different for other species of \emph{Plasmodium}. Although we focus in this work on the case of \emph{P. falciparum}, the model described below could be applied to study other species such as \emph{P. vivax} or \emph{P. malariae}, which have specific RBC-age preferences \cite{PaulEtAl2003}. Such age-structured dynamics for uRBCs are well known in the literature, see for instance \cite{McQueenMcKenzie2008}. For the parasites, we consider stage-structured dynamics for their development within pRBCs. Here the stage is a continuous variable representing the time since the concerned RBC was parasitized. Such a continuous stage structure will allow us to track the development of parasites within RBCs, and also to have a refined description of pRBC rupture and of the merozoite release phenomenon. We also emphasize that such a model easily allows for the inclusion of anti-malarial treatments acting on only some parasite developmental stages. \paragraph{Uninfected RBC dynamics.} We denote by $R_r(t)$, $R_m(t)$ and $R_s(t)$ respectively the density of reticulocytes, mature RBCs and senescent RBCs at time $t$.
In the absence of malaria parasites, the evolution of circulating red blood cells is assumed to follow a discrete age maturation system of ordinary differential equations of the form \begin{equation}\label{uRBC-model} \begin{cases} \frac{dR_r(t)}{dt} = \Lambda_0-\mu_{r\to m}R_r(t),\\ \frac{dR_m(t)}{dt} = \mu_{r\to m}R_r(t)-\mu_{m\to s}R_m(t),\\ \frac{dR_s(t)}{dt} = \mu_{m\to s}R_m(t)-\mu_{s\to d}R_s(t). \end{cases} \end{equation} The parameters $1/\mu_{r\to m}$, $1/\mu_{m\to s}$ and $1/\mu_{s\to d}$ respectively denote the average durations of RBCs in the reticulocyte, mature and senescent age classes, while $\Lambda_0$ represents the normal value of the RBC production from the marrow source (i.e. the production rate of RBCs). System \eqref{uRBC-model} can also be found in \cite{McQueenMcKenzie2008}. The parameters of this system are selected from \cite{HetzelAnderson1996,McQueenMcKenzie2008} (see Table \ref{Tab1}) so that, in the absence of parasites, the equilibrium age distribution is given by \begin{equation}\label{eq-initial-RBC} \left(R^*_r;R^*_m;R^*_s\right)=\left(62.50; 4853;83.30\right)\times 10^6 \hbox{ cell/ml}. \end{equation} This leads to a homeostatic equilibrium concentration of RBCs $\left(R^*_r+R^*_m+R^*_s\right)$ around $4.99\times10^{9}$ cells/ml, which is in the range expected for humans. \paragraph{Parasite dynamics with stage-structured formulation (PDE model).} Here we consider the interaction between free merozoites and the circulating RBCs. We denote by $m(t)$, $p(t,a)$ and $G(t)$, respectively, the density of merozoites, parasitized RBCs, and mature gametocytes at time $t$. The variable $a$ denotes the time since the concerned RBC was parasitized (i.e. $\int_{a_1}^{a_2}p(t,a)da$ corresponds to the density of pRBCs at time $t$ which have been parasitized for a time $a_1<a<a_2$).
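Returning to the uninfected RBC system \eqref{uRBC-model}: at equilibrium each class satisfies $R^*_j=\Lambda_0/\mu_j$, which can be checked with a short forward-Euler sketch. The parameter values below are illustrative choices that roughly reproduce \eqref{eq-initial-RBC}; the values actually used in the simulations are those of Table \ref{Tab1}.

```python
def simulate_urbc(lam0, mu_rm, mu_ms, mu_sd, t_end=2000.0, dt=0.01):
    """Forward-Euler integration of the uninfected RBC chain (time in days)."""
    rr = rm = rs = 0.0
    for _ in range(int(t_end / dt)):
        drr = lam0 - mu_rm * rr
        drm = mu_rm * rr - mu_ms * rm
        drs = mu_ms * rm - mu_sd * rs
        rr, rm, rs = rr + dt * drr, rm + dt * drm, rs + dt * drs
    return rr, rm, rs

# illustrative values: production 41.6e6 cells/ml/day and stage durations
# of 1.5, 116.7 and 2 days (hypothetical, roughly matching the quoted equilibrium)
rr, rm, rs = simulate_urbc(41.6e6, 1 / 1.5, 1 / 116.7, 1 / 2.0)
```

After a long transient, $(R_r,R_m,R_s)$ settles near the product $\Lambda_0/\mu_j$ for each class, close to the quoted equilibrium distribution.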
The system we shall consider reads as: \begin{equation}\label{model1} \begin{cases} p(t,0)=\beta m(t) \sum\limits_{j=r,m,s}\gamma_jR_j(t),\\ \partial_t p(t,a)+\partial_a p(t,a)= -\left(\mu(a)+d_0\right)p(t,a),\\ \dot m(t)=(1-\alpha_G)\int_0^\infty r\mu(a)p(t,a)da -\mu_{m}m(t)-\beta m(t) \sum\limits_{j=r,m,s} \gamma_jR_j(t),\\ \dot G(t) =\alpha_G \int_0^\infty r\mu(a)p(t,a)da -\mu_G G(t). \end{cases} \end{equation} We briefly sketch the interpretation of the parameters arising in \eqref{model1}. The parameters $d_0$, $\mu_m$ and $\mu_G$, respectively, denote the natural death rates of RBCs, free merozoites and mature gametocytes. The function $\mu(a)$ denotes the additional death rate of pRBCs due to the parasites at stage $a$ and leading to rupture. The rupture of a pRBC at stage $a$ results in the release of an average number $r$ of merozoites into the blood stream, so that pRBCs at stage $a$ produce merozoites at rate $r \mu(a)$. With this description, the quantity $\int_0^\infty r \mu(a) p(t,a)da$ corresponds to the number of merozoites produced by pRBCs at time $t$. The parameter $\beta$ describes the contact rate between uRBCs and free merozoites. The parameters $\gamma_k$ with $k=r,m,s$ describe the age preference of the parasites' target cells. Here we are concerned with \emph{P. falciparum} infection, which does not have any RBC-age preference, so that $\gamma_r =\gamma_m = \gamma_s = 1$. However, when considering \emph{P. vivax} infection one has $\gamma_r =1$ and $\gamma_m = \gamma_s = 0$, so that target RBCs mostly consist of reticulocytes, while for \emph{P. malariae} infection target RBCs are mostly senescent cells, that is $\gamma_r = \gamma_m =0$ and $\gamma_s = 1$ \cite{PaulEtAl2003}. The parameter $\alpha_G$ represents the proportion of merozoites from a bursting asexual schizont that will enter the gametocyte compartment, {\it i.e.}, are ''committed'' to the gametocyte developmental pathway.
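A minimal explicit upwind step for the transport part of \eqref{model1}, $\partial_t p+\partial_a p=-(\mu(a)+d_0)p$ with inflow $p(t,0)=\beta m(t)\sum_j\gamma_jR_j(t)$, can be sketched as follows. This is schematic only (the simulations reported later use a finite volume scheme), and the CFL condition $\Delta t\le\Delta a$ is assumed.

```python
def upwind_step(p, da, dt, mu, d0, inflow):
    """One explicit upwind step for dp/dt + dp/da = -(mu(a) + d0) * p.

    p      : densities on the age grid a_i = i * da
    mu     : callable age-dependent rupture rate
    inflow : boundary value p(t, 0) = beta * m * sum_j gamma_j * R_j
    """
    c = dt / da                     # CFL number, must satisfy c <= 1
    new_p = [0.0] * len(p)
    new_p[0] = inflow               # newly parasitized RBCs enter at age 0
    for i in range(1, len(p)):
        transport = p[i] - c * (p[i] - p[i - 1])
        new_p[i] = transport - dt * (mu(i * da) + d0) * p[i]
    return new_p
```

With $\mu\equiv 0$, $d_0=0$ and $c=1$ the step reduces to an exact shift of the age profile, which is a convenient sanity check.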
For simplicity, we have ignored the age structure of gametocytes and consider $G$ as capturing mature, measurable gametocytes. \paragraph{Parasite dynamics with $K$-compartments ODE formulation (ODE model).} For the ODE model formulation, we consider $K$ stages for the pRBCs before rupture and set $p=(p_1,p_2,p_3,\cdots,p_K)$, such that $p_j(t)$ denotes the concentration of pRBCs in stage $j$ at time $t$. Then, setting $\dot z= \frac{\rm{d}z}{\rm{d} t}$, the ODE model writes \begin{equation}\label{model-ODE} \begin{cases} \dot p_1(t)=\beta m(t)\sum\limits_{j=r,m,s}\gamma_jR_j(t) - \left(\mu_1+d_1\right)p_1(t),\\ \dot p_2(t)=\mu_1p_1(t)- \left(\mu_2+d_2\right)p_2(t),\\ \vdots \\ \dot p_K(t)=\mu_{K-1}p_{K-1}(t)- \left(\mu_K+d_K\right)p_K(t),\\ \dot m(t)=(1-\alpha_G) r\mu_K p_K(t) -\mu_{m}m(t)-\beta m(t) \sum\limits_{j=r,m,s} \gamma_jR_j(t),\\ \dot G(t) =\alpha_G r\mu_K p_K(t) -\mu_G G(t), \end{cases} \end{equation} wherein $1/\mu_i$ is the duration of the $i$-th stage and $d_i$ the death rate of pRBCs in that stage. The number of stages $K$ is variable, and the other parameters and state variables are the same as for the PDE model. \paragraph{The immune responses.} Following \cite{DietzEtAl2006}, we consider two immune responses (IRs) controlling the growth of the parasite population: (i) an innate IR $S_I(t)$ at time $t$, representing the effect of the pro-inflammatory cytokine cascade, and (ii) an adaptive IR $S_A(t)$ at time $t$. The effect of the innate IR is a function of the present parasite (merozoite) density that takes the form \begin{equation}\label{Sc} S_I(t)= \frac{m(t)}{m(t)+S_I^*}, \end{equation} where $S_I^*$ is the critical parasite density at which the current multiplication factor is reduced by $50\%$.
The adaptive IR is a function of the cumulative parasite density; this function is determined by two host-specific parameters and one constant: (1) $S^*_A$ is the critical cumulative parasite density at which the current multiplication factor is reduced by $50\%$; (2) $\Delta_0=16$ days is the average delay required by the adaptive IR to become effective \cite{DietzEtAl2006}, i.e., for times $t$ before $\Delta_0$ the cumulative density is set to zero (the adaptive IR has no effect and $S_A(t)=0$ for $t\leq \Delta_0$); and (3) $\Delta_1=8$ days is the delay that determines the last term in the cumulative density for times $t\ge \Delta_0$, i.e., \begin{equation}\label{Sm} S_A(t)= \left\{ \begin{split} &\frac{\int_{\Delta_0}^{t}m(s)ds}{ \int_{\Delta_0}^{t}m(s)ds+S_A^*},\quad \Delta_0 \le t< \Delta_0+\Delta_1;\\ & \frac{\int_{\Delta_0}^{\Delta_0+\Delta_1}m(s)ds}{ \int_{\Delta_0}^{\Delta_0+\Delta_1}m(s)ds+S_A^*},\quad t \ge \Delta_0+\Delta_1. \end{split} \right. \end{equation} Thus, including these two IR effects, the equation for the asexual parasite concentration $m(t)$ should be replaced in Models \eqref{model1} and \eqref{model-ODE} respectively by: \begin{equation}\label{model1-IR} \dot m(t) = (1-\alpha_G) \int_0^\infty r\mu(a)p(t,a)da - \left(\mu_{m} +\beta \sum\limits_{j=r,m,s} \gamma_jR_j(t)+ S_A(t)\right)m(t) -S_I(t), \end{equation} and \begin{equation}\label{model1-IR-ODE} \dot m(t) = (1-\alpha_G) r\mu_K p_K(t) - \left(\mu_{m} +\beta \sum\limits_{j=r,m,s} \gamma_jR_j(t)+ S_A(t)\right)m(t) -S_I(t). \end{equation} \paragraph{Initial conditions.} For both the PDE and ODE models, the initial RBCs are assumed to be at their homeostatic equilibrium distribution in the absence of parasites given by \eqref{eq-initial-RBC}, {\it i.e.}, $R_r(0) = R^*_r$; $R_m(0)=R^*_m$; $R_s(0)=R^*_s$. The above models are also assumed to be free of pRBCs at the initial time, and the initial density of malaria parasites is such that $m(0)=m_0$, with $m_0$ a positive constant.
These initial conditions are summarized in Table \ref{Tab2}. \paragraph{Parasitemia.} The output of both mathematical models can be used to recover the time course of parasitemia, defined as the proportion of all infected RBCs among the total number of RBCs. Using the notation of the model, the parasitemia at time $t$, denoted by $P(t)$, is calculated as follows \begin{equation}\label{eq-parasitemia} P(t)=\underbrace{\frac{\int_{0}^\infty p(t,a)da}{\int_{0}^\infty p(t,a)da+\sum\limits_{j=r,m,s} R_j(t)}}_{\text{PDE model}} \text{ or } \underbrace{\frac{\sum_{l=1}^K p_l(t)} {\sum_{l=1}^K p_l(t)+ \sum\limits_{j=r,m,s} R_j(t)}}_{\text{ODE model}}. \end{equation} \section{Results} \subsection{Development of parasites within RBCs and rupture of pRBCs} An important characteristic of {\it P. falciparum} is the development of parasites within RBCs. The parasite within a RBC takes an average of 48 hours to mature and release free merozoites. With a sequential progression through $K$ stages of parasite maturity before the rupture of the pRBC, the ODE model quantifies the average parasite development period by \begin{equation}\label{eq-duration-sum-mui} \frac{1}{\mu_1} + \cdots + \frac{1}{\mu_K}=48 \text{ hours}, \end{equation} where $1/\mu_i$ is the waiting time in the $i$-th stage of maturity. Indeed, the probability that a pRBC remains in the $i$-th stage after $a$ hours in that stage is given by $D_i(a)= \mathbb{P}(\tau_i> a)$, where $\tau_i$ denotes the duration within the $i$-th compartment. One assumption of the $K$-compartments ODE model is that the variables $\tau_i$ are independent and exponentially distributed with parameter $\mu_i$ ({\it i.e.} $D_i(a)= e^{-\mu_i a}$, without taking into account other mechanisms such as natural mortality), such that \eqref{eq-duration-sum-mui} is satisfied. Thus, $\sum_{i=1}^K \mathbb{E}[\tau_i] = \sum_{i=1}^K 1/\mu_i =48$ hours.
With the PDE model, the development of parasites within RBCs is characterized by the rupture function $\mu(a)$, which takes the form $$ \mu(a)=\begin{cases} 0\text{ if $a<48$ hours},\\ \overline{\mu} \text{ if $a\ge 48$ hours}, \end{cases} $$ where $a$ is the age of the pRBC and $\overline{\mu}$ is a positive parameter. With such a formulation, the overall average development period is $\approx 48$ hours, as for the ODE model. Indeed, let $D(a)= \exp{\left(- \int_0^a \mu(\sigma) d \sigma\right)}$ be the probability that a pRBC remains parasitized after $a$ hours (without taking into account other mechanisms such as natural mortality). Then, the average parasite development period is $$\int_0^\infty D(a) da= 48 + \dfrac{1}{\overline{\mu}}.$$ Here we fix, {\it e.g.}, $\overline{\mu}=10$, such that $\int_0^\infty D(a) da= 48 + \dfrac{1}{\overline{\mu}} \approx 48$. The precise value of $\overline{\mu}$ is therefore not significant as long as the last approximation holds. Consequently, the PDE model formulation allows one to continuously track the development of parasites within RBCs and thus to have a refined description of the pRBC rupture followed by the merozoite release phenomenon. By contrast, besides the issue of determining the maturation probabilities $\{D_i\}_{i=1,\cdots,K}$, such a continuous process is quite difficult to capture with the ODE model with $K$ repeated stages. One option for defining the maturation probabilities $\{D_i\}_{i=1,\cdots,K}$ is obtained through a ''linear chain trick'' formulation. Indeed, by assuming that the duration of each repeated compartment is the same ({\it i.e.}, $\mu_i=\mu_0$ for all $i=1,\cdots,K$), by \eqref{eq-duration-sum-mui} we then have $D_i(a)= e^{-a K /48}$ for all $i=1,\cdots,K$. The total duration before rupture becomes $T_K=\sum_{i=1}^K \tau_i$, where the $\tau_i$ are independent and identically distributed with exponential law of parameter $\mu_0=K/48$.
Hence, $T_K$ follows a Gamma distribution $\Gamma(K,\mu_0^{-1})=\Gamma\left(K,\frac{48}{K}\right)$. We recover that the mean value of $T_K$ is $48$ hours and also that its variance is given by ${\rm var}(T_K)=48^2/K$. The latter quantity tends to $0$ as $K\to \infty$, meaning that $T_K\to 48$ as $K\to \infty$. As a consequence of the above computations, when $K$ is very large the probability that a pRBC remains parasitized after $a$ hours is approximately given by $\mathbb{P}(T_K\geq a)\approx 1$ if $a\leq 48$ and $0$ otherwise, which is close to $D(a)$ when $\overline{\mu}$ is large (Fig. \ref{Fig-Sequestration}). However, the main problem with such a formulation is that a very large number of repeated compartments is necessary to capture a realistic development process of parasites within a RBC. For instance, the probability that a pRBC is still parasitized after 48 hours of infection is approximately 37\%, 45\% and 48\%, respectively, for $K=1,10,50$ (Fig. \ref{Fig-Sequestration}). \begin{figure} \begin{center} \centerline{\includegraphics[width=.7\textwidth] {Fig2.pdf}} \caption{The probability that a pRBC is still parasitized after $a$ hours of infection for both the PDE and ODE models. Different values of $K$ correspond to the maturation probability $\mathbb{P}(T_K\ge a)$ for the ODE model, while values of $\overline{\mu}$ are for the function $D(a)$ of the PDE model.} \label{Fig-Sequestration} \end{center} \end{figure} \subsection{Within-host reproduction number} The basic reproduction number, usually denoted by $\mathcal{R}_0$, is defined as the total number of parasites arising from one newly parasitized RBC introduced into an uninfected host. It can be used to study the spread of the malaria parasite in an uninfected host: the parasite will spread if $\mathcal{R}_0>1$.
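The percentages of 37\%, 45\% and 48\% quoted above follow from the closed-form Erlang survival function $\mathbb{P}(T_K\ge a)=e^{-\mu_0 a}\sum_{n=0}^{K-1}(\mu_0 a)^n/n!$ with $\mu_0=K/48$, which is easy to check numerically:

```python
import math

def erlang_survival(a, k, mean=48.0):
    """P(T_K >= a) for T_K ~ Gamma(k, mean/k), i.e. a sum of k exponential stages."""
    mu0 = k / mean                  # rate of each of the k identical stages
    x = mu0 * a
    return math.exp(-x) * sum(x ** n / math.factorial(n) for n in range(k))

# probability that a pRBC is still parasitized at the 48-hour mean
for k in (1, 10, 50):
    print(k, round(erlang_survival(48.0, k), 3))   # approx 0.368, 0.458, 0.481
```

The survival probability at the mean approaches $1/2$ only slowly (at rate $O(K^{-1/2})$), which is another way of seeing why many compartments are needed.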
The $\mathcal{R}_0$ of the $K$-compartments ODE and PDE models are given by \begin{equation}\label{R0-ODE-PDE} \mathcal{R}_0= \begin{cases} \dfrac{ \beta }{\mu_{m} + \beta \sum\limits_{j=r,m,s} R_j^*} (1-\alpha_G) r \prod_{i=1}^K \dfrac{\mu_i}{\mu_i+d_i} \left( \sum\limits_{j=r,m,s} R_j^*\right), \text{ ODE} \\ \dfrac{ \beta }{\mu_{m} + \beta \sum\limits_{j=r,m,s} R_j^*} (1-\alpha_G) r \dfrac{\overline{\mu} }{\overline{\mu} +d_0} e^{-48\times d_0} \left( \sum\limits_{j=r,m,s} R_j^*\right), \text{ PDE}. \end{cases} \end{equation} We refer to \cite{IggidrEtAl2006,DemasseDucrot2013} for details on the derivation of \eqref{R0-ODE-PDE}. While the probability of merozoite production for each infection cycle is always $1$ for the ODE model, this probability is $\frac{\overline{\mu} }{\overline{\mu} +d_0}$ for the PDE model. One reason for this is that the $K$-compartments ODE model is essentially Markovian, {\it i.e.} 'memoryless': a RBC that has been parasitized for 40 hours has the same probability of producing merozoites as, {\it e.g.}, a RBC parasitized less than an hour ago. The parameter $\overline{\mu}$ can however be chosen such that these probabilities are close to unity. For instance, for the PDE model, $\frac{\overline{\mu} }{\overline{\mu} +d_0} \approx 1$ as soon as $\overline{\mu}$ is sufficiently large compared to $d_0$; with the value $\overline{\mu}=10$ introduced in the previous section, we have $\frac{\overline{\mu} }{\overline{\mu} +d_0} \approx 0.99$. Finally, one of the main differences between the $\mathcal{R}_0$ expressions of the two models is the probability that pRBCs survive the 48-hour parasite development period of each infection cycle. While this probability is quantified by the term $ e^{-48\times d_0}$ for the PDE model \eqref{R0-ODE-PDE}, it is $\prod_{i=1}^K \frac{\mu_i}{\mu_i+d_i}$ for the ODE model \eqref{R0-ODE-PDE}.
However, in some configurations of the parameters $\mu_i$ and for $K$ sufficiently large, we can have $\prod_{i=1}^K \frac{\mu_i}{\mu_i+d_i} \approx e^{-48\times d_0}$. For instance, by assuming that the duration of each repeated compartment is the same for the ODE model ({\it i.e.}, $\mu_i=\mu_0$ and $d_i=d_0$ for all $i=1,\cdots,K$), equality \eqref{eq-duration-sum-mui} gives $K/\mu_0=48$, such that \begin{equation*} \begin{split} \prod_{i=1}^K \frac{\mu_i}{\mu_i+d_i}= \left(1+d_0/\mu_0\right)^{-K} = \exp\left[-K\ln \left(1+\frac{48d_0}{K}\right)\right] \approx \exp(-48d_0)\text{ if $K\gg 1$}. \end{split} \end{equation*} \subsection{Fitting the model parameters with data} The model presented above is solved numerically using finite volume numerical schemes (implemented in MATLAB). The model is then fitted to the data for the time course of gametocytes of each patient. Let us observe that most of the parameters are estimated from the literature \cite{AndersonEtAl1989,McQueenMcKenzie2008,EichnerEtAl2001,McQueenEtAl2013,HetzelAnderson1996}; Table \ref{Tab1} provides the values we shall use for these fixed parameters. Three parameters need to be estimated from the data: the proportion of merozoites committed to the gametocyte pathway ($\alpha_G$), the initial merozoite density ($m_0$), and the duration of the sexual stage ($1/\mu_G$). In addition to these three parameters, the number $K$ of compartments is also estimated for the ODE model. These parameters are adjusted to the data for each patient using a least-squares method: we find the values which minimize the difference between the gametocyte density predicted by the ODE model and the observed data, using the MATLAB nonlinear least-squares solver {\it lsqcurvefit}. The optimal parameters for the ODE model are then used to run the PDE model. The superposition of the data and the gametocyte density output of the mathematical models is presented in Fig.
\ref{figDataModel}, while the estimated parameter values for each patient are given in Table \ref{Table-parameter-fitted}. \subsection{Comparison of ODE and PDE model outputs} We have presented two modelling frameworks to model the within-host dynamics of malaria infection. Within this context, we compare a classical model based on ordinary differential equations (ODE) with a model based on partial differential equations (PDE). Our first observation is on the parameterisation of both models. More precisely, a good description of the rupture of pRBC requires at least one additional parameter $K$ for the repeated compartments, see \eqref{model1} versus \eqref{model-ODE}. Such a parameter $K$ is necessary to capture the delay in the production or quantification of gametocytes imposed by the development of parasites within RBCs in each infection cycle. This delay in gametocyte production is nicely highlighted by the PDE model formulation (Fig. \ref{figDataModel}). Through a ``linear chain trick'' formulation, it is then possible to find the number $K$ of repeated compartments such that the ODE model can slow down the production dynamics of gametocytes. Indeed, by assuming that the duration of each repeated compartment is the same, we can suitably choose the parameter $K$ such that both models perform quite similarly in terms of goodness-of-fit for gametocyte production. Overall, for our dataset, we find that between 47 and 68 compartments are needed for a good fit of the ODE model (Fig. \ref{figDataModel}). While both models perform quite similarly in terms of goodness-of-fit for a suitable value of $K$, the $K$-compartments ODE model particularly overestimates parasite densities early on in infections when the number of repeated compartments is not large enough (Fig. \ref{figDataModel}). Importantly, the number $K$ of compartments required for a good fit of the ODE model is quite large and can be variable across individuals (Fig. \ref{figDataModel}).
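As a quick numerical sanity check of the linear chain trick approximation above (an illustrative sketch, not part of the original analysis; it only uses the table value $d_0=0.00833$, the value $\overline{\mu}=10$ from the text, and the formulas as written):

```python
import math

# Illustrative check that (1 + 48*d0/K)**(-K) -> exp(-48*d0) as K grows,
# assuming equal compartment durations (mu_i = mu_0 with K/mu_0 = 48).
d0 = 0.00833            # natural death rate of uRBC (Table 1)
x = 48.0 * d0

pde = math.exp(-x)      # PDE survival term e^{-48 d0}

def ode(K):
    # ODE survival product with K equal compartments (linear chain trick)
    return (1.0 + x / K) ** (-K)

for K in (1, 47, 68, 100):      # K = 47..68 is the range reported for the fits
    print(K, round(ode(K), 4), round(pde, 4))

# Probability of merozoite production per cycle in the PDE model,
# mu_bar/(mu_bar + d0) with mu_bar = 10 as in the text
prod = 10.0 / (10.0 + d0)
```

For $K=1$ the two survival probabilities differ noticeably, while for $K$ in the fitted range the ODE product is already very close to $e^{-48 d_0}$, consistent with the approximation above.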
Regarding the infection dynamics, our comparative results show that the PDE model and the $K$-compartments ODE model (when $K$ is suitably chosen) reproduce, at least qualitatively, the true dynamics of malaria infection parasitemia. Although we do not have real parasitemia data for the patients considered here, a qualitative comparison with some studies ({\it e.g.}, \cite{ChildsBuckee2015}) suggests that both models mimic the parasitemia dynamics when the number of compartments for the ODE model is adequately chosen (Fig. \ref{figPara}). \begin{figure} \centerline{\includegraphics[width=\textwidth] {Fig3.png}} \caption{Comparison of the data and the mathematical model output for gametocyte density for patients G161, G104, S1050 and G299. The ODE model is illustrated for three values of $K$ ($K=1$, $K=100$, and an intermediate value corresponding to the optimal $K$ for each patient). The comparison for the other patients is provided in Figure \ref{figDataModel-SuppMat}.} \label{figDataModel} \end{figure} \begin{figure} \centerline{\includegraphics[width=\textwidth] {Fig4.png}} \caption{The time course of parasitemia (in percentage) for patients G161, G104, S1050 and G299. The ODE model is illustrated for three values of $K$ ($K=1$, $K=100$, and an intermediate value corresponding to the optimal $K$ for each patient). The other patients are provided in Figure \ref{figPara-SuppMat}.} \label{figPara} \end{figure} \begin{figure} \centerline{\includegraphics[width=\textwidth] {Fig5.png}} \caption{The time course of parasitemia (in percentage) and gametocyte density computed from the PDE model for patients G161, G104, S1050 and G299. The ODE model is illustrated only for the optimal $K$ of each patient.
Other patients are provided in Figure \ref{figGametoPara-SuppMat}.} \label{figGametoPara} \end{figure} \subsection{Relationship between parasitemia and gametocyte density} Our mathematical model has been fitted to the available data for each patient under consideration, which consists of gametocyte densities over time. We now use the output of this mathematical model to recover the time course of parasitemia, defined as the proportion of all infected RBC among the total number of RBC, see \eqref{eq-parasitemia}. The time course of parasitemia, $P(t)$, computed from our model is presented in Fig. \ref{figGametoPara} for each patient together with the fitted gametocyte trajectories. As observed for each patient, the relationship between these curves exhibits two different regimes. During some period of time $[2,T_0]$, the two curves are increasing with rather similar shapes up to a time shift (of length $2$ days). This means that, in this increasing regime, the gametocyte density at time $t$ depends on the parasitemia at time $t-2$, a delay which reflects the life cycle of the parasites inside the RBCs. After this period of increasing parasitemia and gametocyte density, namely after time $T_0$, both curves are decreasing and the shapes seem to depend upon the specific patient considered. To make these comments more quantitative, we introduce the following formula for an estimation of the gametocyte density $G(t)$ from the parasitemia $P(t)$: \begin{equation}\label{eq-form} G(t)=\begin{cases} k_1 P(t-2)^{\theta_1}&\text{ if } 2\leq t\leq T_0\text{ days},\\ k_2 P(t)^{\theta_2}&\text{ if $T_0\leq t\leq 30$},\end{cases} \end{equation} where $k_1$, $k_2$, $\theta_1$ and $T_0$ are four positive parameters while $\theta_2$ is a negative parameter. Let us mention that this 2-day delay between gametocyte density and parasitemia should not be confused, for example, with the time needed to distinguish the mature gametocytes via microscopy.
Such time is longer than 2 days and can be well captured by the PDE model or by the ODE model when $K$ is adequately chosen (Fig. \ref{figDataModel}). While there is a precise biological relationship between parasitemia and gametocyte density at some point in the future, here we seek a robust statistical relationship and so the delay need not match, {\it e.g.}, the gametocyte maturation time. To determine the unknown parameters $k_1$, $k_2$, $\theta_1$, $\theta_2$ and the changing time $T_0$ for each patient, we perform a least-squares analysis. More specifically, we adjust these parameters on a logarithmic scale, that is, through the following formula \begin{equation*} \log_{10}G(t)=\begin{cases}\log_{10} k_1+\theta_1\log_{10}P(t-2) &\text{ if $2\leq t\leq T_0$},\\ \log_{10} k_2+\theta_2\log_{10}P(t) &\text{ if $T_0< t\leq 30$}. \end{cases} \end{equation*} To be more precise, our analysis couples the estimation of the time parameter $T_0$, at which the above formula changes, with two linear regressions on each part of the graph. We find that the parameters $k_1$, $k_2$, $\theta_1$ and $T_0$ are remarkably robust across individuals while $\theta_2$ depends upon each individual. The results of this analysis as well as the estimated parameters are presented in Fig. \ref{figRegression} and summarized in Table \ref{Tab_parameters_fit} for each patient. The quality of the fit is quantified using the coefficient of determination $R^2$ (for linear regression). It is computed for each patient and for the two regimes independently. This adjustment metric is computed using the sample of points induced by the time discretization of the partial differential equation model. For our four cases, this coefficient of determination $R^2$ is approximately $0.99$ in the first part of the curve and even closer to one in the second part of the graph.
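The two-regime fitting procedure described above can be sketched as follows. The parasitemia trajectory and parameter values here are synthetic and purely illustrative (they are not the patient fits); the sketch only shows that, once $T_0$ is fixed, the two log-scale linear regressions recover $(k_1,\theta_1)$ and $(k_2,\theta_2)$:

```python
import numpy as np

# Illustrative (non-fitted) parameter values for the two regimes
k1, th1, k2, th2, T0 = 1e7, 1.03, 1e9, -0.05, 14.0

def P(t):
    # Toy parasitemia: exponential rise up to T0, then slow decay
    return np.where(t <= T0,
                    1e-6 * np.exp(0.3 * t),
                    1e-6 * np.exp(0.3 * T0 - 0.1 * (t - T0)))

t = np.arange(2.0, 31.0)                     # days 2..30
G = np.where(t <= T0, k1 * P(t - 2) ** th1, k2 * P(t) ** th2)

# One linear regression in log10 scale per regime
up = t <= T0
th1_hat, a1 = np.polyfit(np.log10(P(t - 2)[up]), np.log10(G[up]), 1)
th2_hat, a2 = np.polyfit(np.log10(P(t)[~up]), np.log10(G[~up]), 1)
k1_hat, k2_hat = 10.0 ** a1, 10.0 ** a2      # intercepts give k1, k2
```

With noise-free synthetic data the slopes and intercepts are recovered essentially exactly; on real data the quality of the recovery is what the $R^2$ values above quantify.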
Coming back to the adjusted parameters described in Table \ref{Tab_parameters_fit}, one may observe that the four parameters $k_1$, $k_2$, $\theta_1$ and $T_0$ have robust values across patients while the parameter $\theta_2$ depends on the patient. Using the average values of the adjusted parameters on the set of malariatherapy data, we derive the following clinical formula: \begin{equation}\label{operational_formula} G(t)=\begin{cases} 3.843\times 10^7 \cdot P(t-2)^{+1.0304}&\text{ if } 2< t\leq \overline{T}_0\text{ days},\\ 2.981\times 10^9 \cdot P(t)^{-0.0470}&\text{ if } \overline{T}_0\leq t\leq 30\text{ days},\end{cases} \end{equation} with $\overline{T}_0=14.5636 \pm 0.0064$ days. The relative error for \eqref{operational_formula} is such that $\left|\frac{\Delta G}{G} \right|^2 \le 2.3884\times 10^{-4}+1.0617 \left|\frac{\Delta P}{P} \right|^2$, where $\left|\frac{\Delta P}{P} \right|$ is the relative error on the measurement of parasitemia. Therefore, if $\left|\frac{\Delta P}{P} \right|\le 5\%$, then $\left|\frac{\Delta G}{G} \right| < 5.38\%$. From a practical point of view, notice that formula \eqref{operational_formula} can be very useful to estimate the gametocyte density from the parasitemia measurement without necessarily using the quite `complex' mathematical model described in this note. \section{Discussion} Many models of within-host malaria infection dynamics have been formulated since the pioneering work of Anderson {\it et al.} \cite{AndersonEtAl1989} in 1989. These models are based on dynamical systems, with standard approaches ranging from ordinary differential equations (ODEs) to delay differential equations (DDEs) and partial differential equations (PDEs). Most ODE model formulations \cite{GravenorLloyd1998,Hellriegel1992,HetzelAnderson1996,HoshenEtAl2000,LiEtAl2011,MitchellCarr2010,MolineauxDietz1999} assume an exponential process to describe the rate of pRBC rupture and therefore fail to capture realistic lifetimes of the pRBCs.
This issue is somewhat corrected when the development of parasites within RBCs and the rupture of pRBCs are modeled either by a set of $K$-compartments ODEs \cite{GravenorLloyd1998,SaralambaEtAl2011,ZaloumisEtAl2012,IggidrEtAl2006} or by DDEs \cite{HoshenEtAl2000,KerlinGatton2013,FonsecaVoit2015,CaoEtAl2019}. Another approach is the use of PDEs to track the infection history of a pRBC \cite{AntiaEtAl2008,KhouryEtAl2018,CromerEtAl2009,DemasseDucrot2013}. Using gametocyte production and parasitemia (the proportion of all infected RBC among the total number of RBC) as proxy variables, together with malariatherapy data, we found that the PDE model and the ODE model perform similarly in terms of goodness-of-fit when a suitable value of $K$ is chosen (Fig. \ref{figDataModel}). Some disadvantages of the ODE model are that the number $K$ of compartments required to achieve a good fit is quite large and can be variable across individuals, and that the ODE model particularly overestimates parasite densities early on in infections when the number of repeated compartments is not large enough (Fig. \ref{figDataModel}). A similar comparison holds for the parasitemia dynamics (Fig. \ref{figPara}). Not least, the $K$-compartments ODE model (for suitably chosen $K$) and the PDE model highlight a strong qualitative connection between gametocyte density and parasitemia. From a practical point of view, such a relation, given by \eqref{operational_formula}, can be very useful to estimate the gametocyte density from the parasitemia measurement without necessarily using the quite `complex' mathematical model described in this note. Here, immune-mediated parasite killing is only considered for merozoites. This choice of immunity targeting merozoites, rather than parasitized red blood cells, is mostly because it is much easier to handle with our PDE model formulation, particularly in terms of parameterization.
However, in some studies, {\it e.g.} \cite{DietzEtAl2006}, parasite levels are not distinguished between merozoites and parasitized red blood cells, such that immunity acts against both merozoites and parasitized red blood cells. Also note that, while there is evidence that mature live gametocytes evade the immune response clearance pathway, immune-mediated parasite killing of immature gametocytes has been shown in some studies \cite{BansalKumar2018}. Finally, taking the best-fit parameters and then altering the immunity parameters between their minimum, median, and maximum values estimated in \cite{DietzEtAl2006} has very little impact on the model outputs, namely, parasitemia and gametocyte density (figures not shown). This can be explained by the facts that (i) merozoites are only short-lived and (ii) the validation of the model presented here is relatively short-term. Reducing infections in mosquitoes---vectors of \emph{Plasmodium} parasites---is a crucial component of global efforts to control and eliminate malaria \cite{AlonsoEtAl2011}. Because a strong correlation exists between the gametocyte density within a host and the infectivity of mosquitoes \cite{CollinsJeffery2003,GravesEtAl1988,BousemaDrakeley2011,ChurcherEtAl2013}, progress towards this goal would be bolstered by quantifying gametocytes and identifying highly infectious hosts \cite{StoneEtAl2015,Transmission2017,BousemaDrakeley2017}. On the other hand, parasitemia is easily quantified by light microscopy and therefore is more technically accessible, particularly in regions where malaria is endemic. Therefore, quantifying the relationship between the gametocyte density and parasitemia is of great interest to define simpler tools for the prediction of mosquito infection. The results presented in this note provide one such tool.
From a public health or population dynamics point of view, the time course of the disease at the between-host level is strongly related to the basic reproduction number (also denoted here by $\mathcal{R}_0$). At the between-host level, the $\mathcal{R}_0$ is defined as the number of secondary infections from a single infected individual introduced into a fully susceptible population. This important metric can be estimated from real data but also using mathematical models. The simplest (deterministic) mathematical model is the Ross system of equations, from which one can compute this threshold number $\mathcal R_0$ as follows: \begin{equation*} \text{Between-host} \left\| \mathcal R_0=\mathcal R_0^{VH}\times \mathcal R_0^{HV}\text{ with }\begin{cases} \mathcal R_0^{VH}=abd_M,\\ \mathcal R_0^{HV}=macd_H.\end{cases} \right. \end{equation*} Note that the above $\mathcal R_0$ is for between-host malaria dynamics and not for the within-host models presented here. In the above formula of $\mathcal R_0$, $m$ represents the number of mosquitoes per person, $a$ denotes the mosquito biting rate, $b$ and $c$ denote the per-bite transmission probabilities from mosquito to human and from human to mosquito, respectively, while $d_H$ and $d_M$ correspond respectively to the human recovery and mosquito death rates. Although more ingredients can be included in the mathematical model, leading to different formulations for $\mathcal R_0$, the above expression contains the most important parameters \cite{Macdonald1955,SmithEtAl2012,MandalEtAl2011,RuanEtAl2008}. The parameters $b$ and $c$ serve as the link between within- and between-host dynamics, since the transmission rates from (to) a host will depend on the dynamics of what is happening within that host (vector).
Furthermore, there is a clear relationship between the gametocyte density ($G$) and the transmission probability per bite from human to mosquito ($c$) \cite{JohnstonEtAl2013,CollinsJeffery2003,BousemaEtAl2012,BousemaDrakeley2011,ChurcherEtAl2013,StepniewskaEtAl2008,DrakeleyEtAl1999}. From a practical point of view, the parameter $c$ is difficult to estimate in a relevant way. Indeed, an efficient measurement of $c$ requires a good measure of the gametocyte density, which is quite difficult to obtain in practice. While both gametocytes and parasitemia can be read from microscopy, gametocytes tend to be at lower densities (sometimes orders of magnitude lower), so there is a detectability issue. Thus, a simple way to estimate the gametocyte density will help to infer the parameter $c$. Thanks to formula \eqref{eq-form} proposed here, we have a robust relationship between parasitemia (easier to measure) and gametocyte density, at least during the first days of infection (approximately the first two weeks). Overall, the proposed model for the dynamics of gametocytes is probably valid for the first asexual wave, which lasts approximately 40 days for each patient in this malariatherapy dataset. This relatively short-term validation is enough for the aim of the current study. However, for the long-term gametocyte dynamics, we need to bring more complexity into the model proposed here. Indeed, the conversion probability of asexual parasites to circulating gametocytes ($\alpha_G$) should be allowed to vary among successive waves of asexual parasitemia to tackle the issue of long-term gametocyte dynamics \cite{EichnerEtAl2001}, particularly since smear-positive asymptomatic malaria infections detectable by microscopy are an important gametocyte reservoir and often persist for months \cite{LinEtAl2014}.
The robustness of the model proposed here, especially the formula linking parasitemia and gametocyte density, is only guaranteed during the first two weeks after infection. Beyond this time, the estimate is highly variable from one patient to another. This variability is explained, at least in part, by the variability of the duration of the sexual stage ($1/\mu_G$) which governs the decrease in gametocyte density (Table \ref{Table-parameter-fitted}). One interpretation of the variation in this parameter is that there exists variation in how well individuals clear gametocytes or kill them through immune responses \cite{LinEtAl2014,DoolanEtAl2009}. Less variability is expected at the beginning of infection, where the whole system is less constrained by immunity. This is likely to be true for the malariatherapy patient data presented here, since hosts were initially naive. However, in high-transmission settings, acquired immunity---particularly in older hosts---may obscure the relationship between early gametocyte and asexual parasite densities that our work has revealed. Furthermore, fine-scale, longitudinal data could assess the applicability of this relationship across settings and age groups, although such data are understandably difficult to obtain. Finally, while our work reveals a simple tool for linking aspects of the early dynamics of malaria infections, it also offers specific suggestions for how best to mathematically describe those infection dynamics more broadly. Both PDE and $K$-compartments ODE models have been adopted to capture the subtleties of malaria parasite life cycles in blood-stage infections \cite{KhouryEtAl2018}. Our work provides guidance for choosing between the PDE model and the ODE model for within-host human malaria dynamics.
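As a numerical illustration of the clinical formula \eqref{operational_formula} and of its quoted error bound (a sketch using only the constants reported above; recall that the first branch takes the parasitemia measured two days earlier):

```python
import math

T0_bar = 14.5636        # average changing time (days) reported in the text

def gametocytes(P, t):
    # Clinical formula with the average fitted values from the text.
    # For the first branch, P should be the parasitemia at time t-2.
    if 2 < t <= T0_bar:
        return 3.843e7 * P ** 1.0304
    return 2.981e9 * P ** (-0.0470)

# Error propagation: |dG/G|^2 <= 2.3884e-4 + 1.0617 * |dP/P|^2
rel_P = 0.05                                     # 5% error on parasitemia
rel_G = math.sqrt(2.3884e-4 + 1.0617 * rel_P ** 2)
print(f"{100 * rel_G:.2f}%")                     # -> 5.38%, as quoted above
```

Evaluating the bound at a 5\% parasitemia error reproduces the 5.38\% figure stated after \eqref{operational_formula}.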
\begin{table}[!h] \begin{center} \caption{Fixed model parameters} \label{Tab1} \hspace{-3cm} \begin{tabular}{llll} \hline Parameters & Description (unit) & Values & References \\ \hline $\Lambda_0$ & Production rate of RBC (RBC/h/ml)& $1.73\times 10^{6}$ & \cite{AndersonEtAl1989,McQueenEtAl2013} \\ $1/\mu_{r\to m}$ & Duration of the RBC reticulocyte stage (h) & $36$ & \cite{McQueenMcKenzie2008} \\ $1/\mu_{m\to s}$ & Duration of the RBC mature stage (day) & $116.5$ & \cite{McQueenMcKenzie2008} \\ $1/\mu_{s\to d}$ & Duration of the RBC senescent stage (h) & $48$ & \cite{McQueenMcKenzie2008} \\ $\beta$ & Infection rate of uRBC (RBC/ml/day) & $6.27\times 10^{-10}$ & \cite{AndersonEtAl1989} \\ $d_0$ & Natural death rate of uRBC (RBC.day$^{-1}$) & $0.00833$ & \cite{AndersonEtAl1989} \\ $\mu_{m}$ & Decay rate of malaria parasites (RBC.day$^{-1}$) & $48$ & \cite{HetzelAnderson1996} \\ $r$ & Merozoites multiplication factor (dimensionless) & 16 & \cite{AndersonEtAl1989}\\ $\alpha_G$ & Proportion of sexual merozoites (dimensionless) & 0.05 & \cite{McQueenEtAl2013} \\ $S^*_I$ & Innate IR density for $50\%$ of parasite killing (cells.$\mu l^{-1}$) & 2,755 & \cite{DietzEtAl2006} \\ $S^*_A$ & Adaptive IR density for $50\%$ of parasite killing (cells.$\mu l^{-1}$) & 20.4 & \cite{DietzEtAl2006} \\ $\Delta_0$ & Delay required by adaptive IR to be effective (day) & $16$ & \cite{DietzEtAl2006} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!h] \begin{center} \caption{Initial values for the model} \label{Tab2} \hspace{-1cm} \begin{tabular}{lll} \hline Variables & Description & Initial Values \\ \hline $R_r(0)$ & Population of reticulocyte RBC & $62\times 10^6$ RBC.$ml^{-1}$ \\ $R_m(0)$ & Population of mature RBC & $4.85\times 10^9$ RBC.$ml^{-1}$ \\ $R_s(0)$ & Population of senescent RBC & $83\times 10^6$ RBC.$ml^{-1}$ \\ $p(0,.)$& Population of pRBC for the PDE model & $0$ cells.$ml^{-1}$ \\ $p_j$ & Population of pRBC for the ODE model & $0$ cells.$ml^{-1}$ \\ $G(0)$
& Population of mature gametocyte & $0$ cells.$ml^{-1}$ \\ $m(0)$ & Population of malaria parasites & variable \\ \hline \end{tabular} \end{center} \end{table} \section*{Acknowledgements} The authors thank Samuel Alizon, Laurence Ayong, Carole Eboumbou and Christophe Rogier for comments and suggestions to improve the manuscript. \section*{Code availability} The code (written in the MatLab Programming Language) used to simulate the model can be accessed through the Zenodo platform at \url{https://doi.org/10.5281/zenodo.5526271} \bibliographystyle{abbrv}
\section{Introduction} \IEEEPARstart{D}{ementia} affects $850,000$ people in the UK and over 50 million globally, and is set to become the developed world's largest socioeconomic healthcare burden over the coming decades \cite{world2012dementia, alz}. In the absence of any current treatment, there is an urgent need to focus on reducing the effects of symptoms and helping to improve the quality of life and well-being of those already affected \cite{3livingston2020dementia}. The 2020 report of the Lancet Commission on dementia prevention, intervention, and care stresses the importance of individualised interventions to address complex medical problems, multimorbidity and neuropsychiatric symptoms in dementia, which lead to unnecessary hospital admissions, faster functional decline, and worse quality of life \cite{4pickett2018roadmap}. People with dementia have complex problems with symptoms in many domains. It is estimated that up to $90\%$ will develop behavioural and psychological symptoms of dementia (BPSD) over the course of their illness, with agitation being one of the most common symptoms \cite{5feast2016behavioural}, and a frequent reason for nursing home placement \cite{6buhr2006caregivers}. Furthermore, patients with dementia often suffer from a number of co-morbid conditions and have a higher frequency of medical problems such as falls, incontinence, dehydration or urinary tract infection (UTI) - the commonest bacterial infection in the older patient population, and the commonest cause of sepsis in older adults \cite{7peach2016risk}, with an associated in-hospital mortality of $33\%$ in this age group \cite{8tal2005profile}. If not detected and treated early, both BPSD and medical comorbidities frequently lead to emergency hospital admissions of dementia patients. Alzheimer's Research UK estimates that $20\%$ of hospital admissions in dementia patients are for preventable conditions, such as urinary tract infections.
Besides significant costs, hospitalisation places dementia patients at risk of serious complications, with longer hospital stays, higher risk of iatrogenic complications, delayed discharge and functional decline during admission, which contributes to higher rates of transfer to residential care and in-patient mortality \cite{9fogg2018hospital}. Therefore, increased medical supervision, early recognition of deterioration in health status and rapid treatment are key to preventing unnecessary hospitalization for 'ambulatory' conditions that could be treated outside of hospital, such as UTIs. Furthermore, ongoing monitoring of people with dementia allows immediate detection of behavioural disturbances, enabling earlier psychosocial and environmental interventions to reduce patients’ distress and prevent further escalation and hospitalization. However, monitoring and supporting individuals in an ongoing manner is a resource- and cost-intensive task, often not scalable to larger populations. Utilising remote monitoring technologies with the help of caregivers can enable practical and generalisable solutions. As part of the research in the Care Research and Technology Centre at the UK Dementia Research Institute (UK DRI), we have been developing and deploying in-home monitoring technologies to help and support people affected by dementia. Our research has led to the development of a digital platform that allows collecting and integrating in-home observation and measurement data using network-connected sensory devices \cite{Enshaeifar20}. In this paper, we discuss how our in-home monitoring data and machine learning algorithms are used to detect early symptoms of agitation and UTI in people with dementia living in their own homes. Sensing technologies have been increasingly used to monitor the activities and movements of elderly patients living in their own homes \cite{11majumder2017smart, 12turjamaa2019smart, 13peetoom2015literature}.
Interpreting this information, however, demands considerable human effort, which is not always feasible. The use of analytical algorithms allows integration and analysis of rich environmental and physiological data at scale, enabling rapid detection of clinically significant events and the development of personalized, predictive and preventative healthcare. Deep learning models have been applied in a variety of healthcare scenarios to identify the risk of various clinical conditions or predict outcomes of treatment \cite{miotto2016deep, ross2017risk}. Recently, there have been several implementations of Recurrent Neural Networks (RNNs) to create learning models for time-series healthcare data analysis \cite{lipton2015learning, esteban2016predicting, choi2016doctor}. The behavioural and physiological symptoms and patterns in long-term conditions such as dementia appear in the data over a long period of time and can fluctuate and change over the course of the disease. Machine learning models such as RNNs, however, are not suitable for analysing long sequences of time points. To address the long-sequence analysis issue in RNNs, other methods such as Bidirectional RNNs, LSTMs and GRUs have been used \cite{baytas2017patient, harutyunyan2019multitask}. There have also been attempts to apply attention mechanisms to clinical datasets \cite{choi2016retain, ma2017dipole, bai2018interpretable, ma2019adacare,song2018attend} to improve the performance of analysing imbalanced and long-tail time-series data. A fundamental limitation of these models is their adaptivity and generalisability. When long-distance symptoms and patterns are related to a specific condition, the generalisability and performance of the existing models are limited. The long sequences of data points and the changes in the ongoing conditions vary across patients, and often there are no large labelled training samples to train the models for all the variations.
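For context, the attention mechanism referred to above \cite{vaswani2017attention} relates every pair of time steps directly, instead of propagating information step by step as a recurrent model does, so the path length between distant time points is constant. A minimal, self-contained sketch (illustrative only, not the implementation used in this work):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (timesteps, d). Each output row is a weighted average of the
    # rows of V, with weights computed over ALL time steps at once.
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 8))      # e.g. 48 hypothetical hourly feature vectors
out, w = scaled_dot_product_attention(X, X, X)   # self-attention over time
# Each row of w sums to 1: a distribution of attention over all time steps.
```

In practice $Q$, $K$ and $V$ are learned linear projections of the input; the point here is only that the weight matrix `w` couples every pair of time steps in a single step.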
Deep learning models offer a new opportunity to train models that can pay attention to correlations and long-distance relations between patterns and sequences. However, off-the-shelf deep learning models require large training samples. When applying neural networks to clinical data, there are two main challenges: 1) selecting the important timesteps and features from long sequences of data to create generalisable models; and 2) imbalance in the datasets. Neural networks are very effective in finding trends in datasets. Models such as recurrent networks use the positions of the input and output sequences to generate a sequence of hidden states. This is computationally expensive and limits the computation of global dependencies \cite{vaswani2017attention}. In these models, the computational complexity of relating input or output positions also grows as the distance between positions increases. The latter makes it very challenging to learn dependencies and correlations between long-distance patterns and time points \cite{hochreiter2001gradient}. Additionally, clinical datasets are often imbalanced, with content spanning ensembles of heterogeneous data. Most clinical datasets contain more normal cases (i.e. true negatives) than abnormal data points (i.e. true positives). In our dataset, which includes a large set of in-home environmental and physiological data from people with dementia, the number of positive cases for infections is much smaller than the number of true negative cases. In large parts of the data, the true status of the infection is unknown (i.e. the data is partially labelled due to the limitations in accessing the patients' clinical records or knowing the presence of any infections without a test). This issue causes learning models to exhibit a bias towards the majority class. They may ignore the minority class or make decisions based on a partial set which is not a broad representation of the cases \cite{johnson2019survey}.
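A common baseline for the imbalance problem, shown here only for contrast (the model proposed in this work instead handles imbalance without modifying the data), is to re-weight the loss by inverse class frequency so that the minority class cannot simply be ignored. A minimal sketch with toy labels:

```python
import numpy as np

# Toy imbalanced labels: 95 negatives (normal days), 5 positives (e.g. infection)
y = np.array([0] * 95 + [1] * 5)
p = np.full(len(y), 0.5)                  # some predicted probabilities

counts = np.bincount(y)                   # [95, 5]
w = len(y) / (2.0 * counts)               # inverse-frequency class weights
sample_w = w[y]                           # per-sample weights

# Weighted cross-entropy: each class now contributes equally overall,
# so minimising the loss cannot be achieved by ignoring the rare class.
loss = -np.mean(sample_w * (y * np.log(p) + (1 - y) * np.log(1 - p)))
```

With uniform predictions of 0.5, the two classes contribute exactly equal weighted loss, which is the intended effect of the re-weighting; resampling approaches such as SMOTE achieve a similar balance but at the cost of altering the data itself.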
There have been several works on implementing attention mechanisms \cite{vaswani2017attention} to improve the generalisability of learning models in analysing time-series data. However, Jain \textit{et al.} \cite{jain2019attention} found that there are limitations in the weights generated by attention-based models which can lead to wrong predictions. Hence, we need to be cautious in using attention mechanisms and their explanations in designing deep learning models. While attention-based models are promising in healthcare time-series data analysis, considering the time and feature dependencies of the predictions poses a challenge for this type of model. Over-sampling, which augments the data by generating synthetic samples \cite{chawla2002smote}, and down-sampling, which prunes the samples in the majority classes, are among the typical methods used to deal with imbalance issues in datasets \cite{liu2008exploratory}. However, the samples in clinical data and the variations in the real data are important aspects of the observations and measurements that may not be present in augmented data generated by sampling methods. It is crucial to find an efficient way to address the imbalance issue without modifying or reducing the original data in pre-processing steps \cite{krawczyk2016learning}. Our goal is to propose a model that addresses the challenges mentioned above. To support clinical treatment and adapt to real-world sensory data readings, the model should filter out redundant and less informative data. Furthermore, the model should explain its predictions by indicating which time periods and sensors are important for a given prediction. Last but not least, the model should adapt to imbalanced data. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/introduction/architecture.png} \caption{An overview of the proposed solution for healthcare data analysis. The data is encoded by positional encoding before being passed to the model.
The proposed rationalising block extracts important information and passes it to the higher layers. The rationalising block contains a rational layer to extract the important time steps. A Long Short-Term Memory (LSTM) model processes the extracted data, and an attention layer weights the relevant features. The rationalising block first extracts the important time steps; it then pays attention to the salient features of the pruned data, and the resulting representation is used to make a prediction. All the layers are trained simultaneously. } \label{fig:overview} \end{figure*} \section{Design, setting and participants} Real-time, continuous measurement methodologies enabled by the recent advances in pervasive computing and ‘smart-home’ technologies provide opportunities to monitor the behaviour and health status of elderly people using wearable technology or environmental sensors \cite{11majumder2017smart, 12turjamaa2019smart, 13peetoom2015literature}. Computer-derived algorithms have been developed to analyse sensor data and identify patterns of activity over time. These can be applied to detect changes in activities of daily living in order to predict disease progression and cognitive decline. For instance, the ORCATECH group used a continuous in-home monitoring system and pervasive computing technologies to track activities and behaviours such as sleep, computer use and medication adherence to capture changes in cognitive status \cite{35lyons2015corrigendum}. They also demonstrated the ability of machine learning algorithms to autonomously detect mild cognitive impairment in older adults \cite{36akl2015autonomous}. Machine learning models have also been used to detect clinically significant events and changes in health status.
Much of the previous work focused on the detection and prediction of falls using wearable accelerometers or other motion detectors \cite{37schwickert2013fall}, as well as tracking behavioural symptoms such as sleep disturbances \cite{38lazarou2016novel}, agitation \cite{39bankole2012validation}, and wandering \cite{40fleiner2016sensor} in elderly patients. However, there is limited research on the use of machine learning models for the detection of health changes such as infection in the context of smart-homes. An early supervised UTI detection model has been described using in-home PIR sensors \cite{41rantz2011using}; however, it relied on the activity labels and annotations in the training dataset, which are extremely time-consuming to produce and not generalisable to real-world situations with large amounts of unlabelled data collected from uncontrolled environments. We have previously proposed an unsupervised technique that could learn an individual’s movement patterns directly from the unlabelled PIR sensor data \cite{42enshaeifar2019machine}. Furthermore, the existing research and data-driven solutions are mostly applied to small-scale pilot studies and do not provide evidence for scalability and generalisability. They are also limited in analysing long-term patterns and correlations that appear in the data. Attention-based models, which can overcome these problems, have never been applied to sensor data for detecting clinically significant events or changes in health status in dementia patients. This is the first study to use deep learning and attention-based methods to perform risk analysis for behavioural symptoms and health conditions such as UTIs in people living with dementia. The proposed model improves the accuracy and generalisability of machine learning models that use imbalanced and noisy in-home sensory data for risk analysis. An analysis of the suitability of the digital markers and the use of in-home sensory data is explored in an ablation study.
The proposed model is compared with several baseline models and state-of-the-art methods. The proposed approach has been evaluated in an observational clinical study. Participants (n=88, age $= 81 \pm 6.5$) were recruited for a six-month trial period. The proposed solution provides a recall of $91\%$ and precision of $83\%$ in detecting the risk of agitation and UTIs. We have also set up a framework and a clinical response team that use the risk alerts generated by the models for ongoing support and management of the conditions in people living with dementia. Using high-resolution in-home observation and measurement data in association with advanced machine learning methods leads to early and timely interventions and has a significant impact on reducing preventable and unplanned hospital admissions in people affected by dementia. A key challenge in using analytical and predictive models for risk analysis is identifying and collecting digital marker data using in-home sensory devices. The capacity of the proposed model to address time-series feature identification and data imbalance enables its use in a very wide range of healthcare and risk analysis applications using in-home digital markers. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{Figures/method/raw_activity.png} \caption{Visualisation of the sensor readings. The x-axis represents the time of the day for activation of the sensors. The y-axis represents the days for a period of 8 months for a patient. Each colour represents a type of an environmental activity sensor. Similar colours along the y-axis represent similar patterns of activities around the same time in consecutive days. More colour distortion/merging of colours along the y-axis represents more changes in the pattern of activity over time.
} \label{fig:visual_raw_data} \end{figure*} \section{Method} We introduce a model that can identify the important time steps and features and utilise long-distance dependencies to make better predictions. The proposed model provides a prediction based on the selected time points and the selected features from the raw observation and measurement data. Figure \ref{fig:overview} shows how the data changes during the processing. The model selects important time steps through a pruning process. After pruning the data, it pays attention to different features and uses them to make the predictions. Different from methods such as clustering sampling \cite{wu2020stratified}, we select the important time steps of each sample instead of selecting a portion of samples for training. In contrast to statistical feature selection methods such as sequential feature selection \cite{aha1996comparative}, the proposed model selects important time steps based on the data itself. We use focal loss \cite{lin2017focal} to assign priority to the minority class without generating synthetic samples. \begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/agg_data.png} \captionof{figure}{A heat-map of the aggregation of the raw data. The readings are aggregated per hour within each day.} \label{fig:visual_agg_data} \end{Figure} \subsection*{Data sources and pre-processing} We have collected the data as part of an observational clinical study in people living with dementia from December 2018 to April 2020. Each of the participants has had a confirmed diagnosis of dementia (mild to severe) for at least three months prior to recruitment and has been stable on dementia medication. The collected data contains continuous environmental sensor data from the houses of patients with dementia who live in the UK. The sensors include Passive Infra-Red (PIR) motion, smart power plug and door sensors, produced by Develco in Aarhus, Denmark.
The sensors were installed in the bathroom, hallway, bedroom, living room (or lounge) and kitchen in the homes and also on the fridge door, kettle and microwave (or toaster). The sensors also include network-connected physiological monitoring devices that are used for submitting daily measurements of vital signs, weight and hydration. The data is integrated into a digital platform, designed in collaboration with clinicians and user groups to support people with dementia, that we have developed in our past research \cite{Enshaeifar20}. A clinical monitoring team that was set up as part of our observational study has used the platform to annotate the data daily and verify the risk analysis alerts. Based on the annotations, we select four incidents, including agitation, Urinary Tract Infection (UTI), abnormal blood pressure and abnormal body temperature, to label our data with binary labels. More specifically, a label is set to true when the abnormal incident is verified by the monitoring team, and false otherwise. We then use the environmental data to infer whether any incident happens within one day. Figure \ref{fig:visual_raw_data} shows an example of the collected data. To pre-process the data, we aggregate the readings of the sensors within each hour of the day, as shown in Figure \ref{fig:visual_agg_data}. Appendix 1 shows a list of potential digital markers and sensory data that can be used in dementia care. In the appendix, we also show a screenshot of the platform that is used for collecting the data. \subsection*{Machine learning model} We aim to use the environmental sensors to predict possible incidents and avoid delayed treatment. Furthermore, the model should provide the reason, i.e.\ which periods of time and which sensors are important for the predictions, to explain the inference. In other words, the model can remove the redundant or less informative information and use the rest of the data to give the prediction, as shown in Figure \ref{fig:visual_select_data}.
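The hourly aggregation described above can be sketched as follows. This is an illustrative example only, not the study's actual pre-processing code: the `aggregate_hourly` helper, the `time`/`sensor` column names and the sensor names are assumptions.

```python
import numpy as np
import pandas as pd

def aggregate_hourly(events: pd.DataFrame, sensors: list) -> np.ndarray:
    """Aggregate raw sensor activations for one day into hourly counts.

    `events` is assumed to hold one row per activation, with a datetime
    column 'time' and a categorical column 'sensor'. The result is a
    24 x f matrix (hours x sensors), matching the per-day input
    described in the paper.
    """
    counts = np.zeros((24, len(sensors)))
    for j, name in enumerate(sensors):
        # hour of day (0-23) of every activation of this sensor
        hours = events.loc[events["sensor"] == name, "time"].dt.hour
        for hour, count in hours.value_counts().items():
            counts[hour, j] = count
    return counts
```

A stack of such daily matrices, one per patient-day, gives inputs of the $24 \times 8$ shape used in the experiments.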
\begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/selected.png} \captionof{figure}{Selected time steps from the raw data. These time steps are selected by the model. The model learns to identify time steps that are more important in predicting the outcome.} \label{fig:visual_select_data} \end{Figure} As discussed earlier, in healthcare data analysis, the predictions are often based on a long sequence of data measured and collected at different time points. Accessing and feeding more data helps to train more accurate models. However, more information can also mean more noise in the data, and the imbalance in the samples that are given to the model can also lead to decision bias. An efficient model should be able to process and utilise as much data as available. However, the model should also avoid the common pitfalls of noise and bias. To address these issues, we have studied the use of attention-based models. This group of models utilises all the available information and, in each sequence, identifies the time points that provide the most information for training and prediction. This attention and selection process is an embedded step in the model. It allows the model to be flexible and generalisable for different sequences with variable lengths and for different combinations of features and values that are represented in the data. Before explaining our proposed model and its contributions to creating a generalisable solution for time-series healthcare data analysis, we provide an overview of the related work. We discuss the use of attention-based models in other domains and explain how the ideas presented in the existing work have led to the design of our current model. \begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/attention.png} \captionof{figure}{After selecting the important time steps, the model learns which sensors should be attended to.
In this case, the model considers the bathroom sensor to have the largest contribution to the prediction.} \label{fig:visual_attention_map} \end{Figure} Attention mechanisms were introduced in Natural Language Processing (NLP) by Bahdanau \textit{et al.} \cite{bahdanau2014neural}. Attention-based models are widely used in NLP due to their capability of detecting important parts of a sequence and efficiently interpreting it. Attention-based models have also been used in continuous healthcare and clinical data analysis \cite{usama2020self}. Continuous clinical data are multivariate time-series data with temporal and sequential relationships. For each patient, the data is a set of time steps, and each time step contains medical features ($\mathbf{X} \in \mathbb{R}^{t \times d}$). The REverse Time AttentIoN model (RETAIN) is one of the first systems that used an attention mechanism for medical data \cite{choi2016retain}. In this model, there are two separate RNNs, one to generate the visit-level attention weights ($\boldsymbol{\alpha}$) and the other for the variable-level ($\boldsymbol{\beta}$) attention weights. In this model, the most relevant time step is the one associated with the largest value in $\boldsymbol{\alpha}$. Choi \textit{et al.} provided a method to find the most influential medical feature \cite{choi2016retain}. However, RETAIN cannot handle long-distance dependencies. To deal with this issue, Ma \textit{et al.} proposed Dipole, a predictive model for clinical data using Bidirectional RNNs \cite{ma2017dipole}. They have implemented the model using two different attention mechanisms: General attention and Concatenation-based attention. The results show that Concatenation-based attention outperforms General attention because it incorporates all the long-distance dependencies. In the above models, the input layer is simple and the data follows the same pipeline, but in the Timeline model, Bai \textit{et al.} adapted the data pipeline \cite{bai2018interpretable}.
They use an attention layer to aggregate the medical features, and by modelling each disease progression pattern, they find the most important time steps. To deal with long-distance dependencies, Timeline implements Bidirectional LSTMs. One of the recent studies in this area is AdaCare \cite{ma2019adacare}, which uses Gated Recurrent Units (GRU). AdaCare utilises a convolutional structure to extract all the dependencies in the clinical data. AdaCare showed promising results in the explainability of the model. The models mentioned above have been developed based on recurrent networks. However, the sequential aspect of recurrent models is computationally inefficient. The SAnD model was developed solely based on the multi-head attention mechanism \cite{song2018attend}. Song \textit{et al.} implemented a positional encoding to include the sequential order in the model. The models mentioned above show significant improvements in the accuracy and performance of predictive models in the clinical field. However, incorporating both long-distance dependencies and feature associations is a challenging task. In the existing models, the analysis is either on the time-step level or the feature level. In this paper, we propose a model to detect and predict the risk of healthcare conditions by analysing long-distance dependencies in the patterns and sequences of the data. This information can be useful for clinical experts in the ongoing management of the conditions. The work also helps to automate alerting on the risk of adverse health conditions and to explore the symptoms related to the detected conditions. Our proposed model consists of two main components, a rationalising block and a classification block, as shown in Figure \ref{fig:overview}. In a high-level overview, the rational layers select the important time steps and pass them to an LSTM layer. The LSTM layer will ignore the trivial time steps and process the data for the attention block.
The classifier uses these time points for the predictions. After processing by the attention block, the model gives a prediction. The details of these blocks are explained in the following sections. \subsection*{Positional Encoding} To use the order of the sequence in the analysis, we add positional encoding (PE) before passing the data into the model. We use the sine and cosine positional encoding \cite{vaswani2017attention}, shown in Equation \ref{eq:pe}, where $pos$ is the position of the time step, $i$ is the position of the sensor and $d$ is the dimension of each time step. \begin{equation} \label{eq:pe} \begin{split} PE(pos, 2i) = \sin(pos/10000^{2i/d}) \\ PE(pos, 2i + 1) = \cos(pos/10000^{2i/d}) \end{split} \end{equation} \begin{comment} \begin{table*}[] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & Rational & Attention & Residual & Focal loss & PE\\ \hline AUC - PR & 0.6412 & 0.7429 & 0.7827 & 0.7584 & 0.7239\\ AUC - RC & 0.7675 & 0.8110 & 0.8360 & 0.8244 & 0.7801\\ \hline \end{tabular} \caption{Ablation Study Results. We remove each component one at a time to evaluate the performance of the model.} \label{tab:ablation} \end{table*} \end{comment} \subsection*{Rationalising Prediction} To add more focus on the time steps in the data that are more relevant to the predictions, the generator produces a binary mask to select or ignore specific time points. For example, if $\textbf{x} \in \mathbb{R}^{k \times f}$ contains $k$ time points with $f$ features each, the generator will produce a binary vector $\textbf{z}=\{z_1,z_2,\dots,z_k\}$. The $i^{th}$ variable $z_i \in \{0,1\}$ indicates whether the $i^{th}$ time point in $\textbf{x}$ is selected or not. Whether the $i^{th}$ time point is selected is a conditional probability given the input $x$. We assume that the selections of the time points are independent. The generator uses a probability distribution over $\mathbf{z}$, which is a joint probability of the selections.
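As a minimal sketch of this independence assumption, each $z_i$ can be drawn from its own Bernoulli distribution. This is illustrative only: in the model, the probabilities $p(z_i|x)$ are produced by the learned generator network, which is not shown here.

```python
import numpy as np

def sample_mask(probs, seed=None):
    """Sample a binary selection mask z = (z_1, ..., z_k) given the
    per-time-step probabilities p(z_i = 1 | x), assuming the selections
    are independent Bernoulli draws."""
    rng = np.random.default_rng(seed)
    probs = np.asarray(probs, dtype=float)
    # z_i = 1 with probability probs[i], independently of the others
    return (rng.random(probs.shape) < probs).astype(int)
```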
The joint probability is given by: \begin{equation} \label{eq:joint_prob} p(z|x) = \prod^k_{i=1}p(z_i|x) \end{equation} \subsection*{Classifier} After exploring and selecting the most relevant time points, we train a classifier to provide the predictions. The trained classifier contains attention blocks and residual blocks. The attention block is an application of the self-attention mechanism to detect the important features. The attention mechanism detects important parts of a sequence. It has three key components: the input structure, the compatibility function and the distribution function \cite{galassi2019attention}. \noindent There are three inputs in the structure; Keys ($\mathbf{K} \in \mathbb{R}^{{n}_{k} \times {d}_{k}}$), Values ($\mathbf{V} \in \mathbb{R}^{{n}_{v} \times {d}_{v}}$) and Query ($\mathbf{q} \in \mathbb{R}^{{n}_{q}}$), where $n_k$ and $n_v$ are the numbers of keys and values, and $d_k$, $d_v$ and $n_q$ are their dimensions. They can have different or the same sources. If $\mathbf{K}$ and $\mathbf{q}$ come from the same source, it is self-attention \cite{vaswani2017attention}. $\mathbf{K}$ and $\mathbf{V}$ represent the input sequence, which could be either annotated or raw data. $\mathbf{q}$ is the reference sequence for computing the attention weights. The compatibility function combines and compares the $\mathbf{q}$ and $\mathbf{K}$ values. The distribution function computes the attention weights ($\mathbf{a} \in \mathbb{R}^{{d}_{k}}$) from the output of the compatibility function ($\mathbf{c} \in \mathbb{R}^{{d}_{k}}$). We obtain the attention by Equation \ref{eq:attention}. The $Q, K, V$ are matrices formed by the query, key and value vectors, respectively. Since we use self-attention, $Q, K, V$ are calculated from the inputs with different weight matrices.
\begin{equation} \label{eq:attention} \textup{Attention}(Q,K,V) = \textup{softmax}(\frac{QK^T}{\sqrt{d_k}})V \end{equation} The architecture of the attention block is the same as described in \cite{vaswani2017attention}. We employ a residual connection \cite{he2016deep} followed by a normalisation layer \cite{ba2016layer} inside the attention block. Residual blocks and the output layer process the output of the attention block. \subsection*{Objective function} The training samples in healthcare datasets are often imbalanced due to the low prevalence and sporadic occurrence of incidents. In other words, some of the classes contain more samples than others. For example, only 25\% of the data we collected are labelled as positive. More details of the dataset are given in the following section. To deal with the imbalance issue, we use focal loss \cite{lin2017focal} as the objective function of the classifier, shown in Equation \ref{eq:loss_clf}: \begin{equation} \label{eq:loss_clf} \mathit{L_c} = - \alpha(1-p)^\beta \log(p) \end{equation} \noindent where $\alpha$ and $\beta$ are hyper-parameters to balance the variant of the focal loss, and $p=f(x,z)y + (1-f(x,z))(1-y)$. $f(x,z)$ is the probability estimated by the classifier and $y \in \{0,1\}$ is the label of $x$. In addition to the loss function used in the classifier, the generator encourages a short rational selection and calculates a corresponding loss.
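A per-sample sketch of the focal loss in Equation \ref{eq:loss_clf} (the defaults $\alpha = 0.25$ and $\beta = 2$ are the common choices from the focal-loss paper, not values reported for this study):

```python
import numpy as np

def focal_loss(f_xz, y, alpha=0.25, beta=2.0):
    """Focal loss L_c = -alpha * (1 - p)^beta * log(p) for one sample,
    where p = f(x,z)*y + (1 - f(x,z))*(1 - y) is the probability the
    classifier assigns to the true label y in {0, 1}."""
    p = f_xz * y + (1.0 - f_xz) * (1.0 - y)
    return -alpha * (1.0 - p) ** beta * np.log(p)
```

The modulating factor $(1-p)^\beta$ down-weights well-classified samples, so the abundant easy samples of the majority class contribute little to the gradient.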
This is shown in Equation \ref{eq:loss_gen}, where $\lambda$ is the parameter to weight the selection: \begin{equation} \label{eq:loss_gen} \mathit{L_g} = \lambda||\textbf{z}|| \end{equation} We then combine the focal loss and the loss from the generator to construct the overall loss function, as shown in Equation \ref{eq:loss_combin}: \begin{equation} \label{eq:loss_combin} \mathit{L} = \sum_{(x,y)\in D}\mathbb{E}[\mathit{L_c} + \mathit{L_g}] \end{equation} \section{Results}\label{sec:res} \begin{figure*} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/PR.png} \caption{PR} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ROC.png} \caption{ROC} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/loss.png} \caption{Loss} \label{fig:TIHM_loss} \end{subfigure} \caption{Evaluation of the proposed methods using the in-home sensory dataset. (a) shows the Precision-Recall (PR) curve; (b) shows the Receiver Operating Characteristic (ROC) curve and (c) shows the changes to the loss during the training. In (a) and (b) the results of the proposed model are also compared with a set of baseline models.} \label{fig:evaluation_TIHM} \end{figure*} \begin{figure*} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ablation_PR.png} \caption{PR} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ablation_ROC.png} \caption{ROC} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/Rate_changes.png} \caption{Selection Rate changes} \label{fig:TIHM_rate} \end{subfigure} \caption{An ablation study to evaluate the model; (a) shows the Precision-Recall (PR) curve; (b) shows the Receiver Operating Characteristic (ROC) curve and (c) shows the selection rate changes.
In (a) and (b), the results are obtained by eliminating different components from the model. } \label{fig:ablation_tihm} \end{figure*} \noindent\textbf{Evaluation Metrics}: To evaluate our proposed method and compare it with the baseline models, we calculated different metrics. One of the primary metrics to assess the model is accuracy, which measures how close the predicted class is to the actual class. However, accuracy alone cannot be a good measure to evaluate the performance of a classifier. As a result, we also calculated the Area Under the Curve of the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves. The precision of class A is the ratio of samples predicted as class A which are correct, and recall is the ratio of true class A samples which have been detected. The ROC curve measures the model's capability of differentiating between classes. We do not report the results in terms of specificity and sensitivity. The reason is that in this study, we do not have access to the full electronic healthcare records and hospital admission data of all the participants. So reporting the specificity and sensitivity only based on the detected and evaluated labels in our dataset, which can only be a sub-set of the true and false cases for the cohort, can be misleading in terms of an actual and generalisable clinical finding. Instead, we have opted to evaluate the precision and generalisability of the prediction algorithm based on the existing labelled data and the known cases for which we could evaluate and verify the performance of the model. \\ \noindent\textbf{Baseline Models}: We compare our model with Logistic Regression (LR) \cite{sperandei2014understanding}, Long-Short Term Memory (LSTM) neural networks \cite{gers1999learning} and a fully connected Neural Network (NN) model \cite{hassoun1995fundamentals}.
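As an aside, the AUC of the ROC used above can be computed directly from predicted scores via the rank (Mann-Whitney) formulation; this sketch is illustrative and does not reflect the study's actual evaluation tooling:

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC-ROC as the probability that a randomly chosen positive
    sample is scored above a randomly chosen negative one (ties
    count one half)."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    # pairwise comparison of every positive score with every negative score
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```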
LR is a discriminative model which can avoid confounding effects by analysing the association of all variables together \cite{sperandei2014understanding}. It is also a commonly used baseline model to evaluate the performance of proposed models \cite{harutyunyan2019multitask}. NN has the ability to learn complex relationships. Unlike LR, NN does not need to assume the variables are linearly separable. It has also been applied to a variety of clinical datasets \cite{lasko2013computational, che2015deep}. In the experiment, we used a Neural Network with one hidden layer containing 200 neurons, a softmax output layer containing two neurons, cross-entropy loss and the Adam optimiser. LSTM is a powerful neural network for analysing sequential data, including time-series clinical datasets \cite{choi2016doctor, baytas2017patient}. It can associate the relevant inputs even if they are widely separated. Since our dataset consists of time-series sequences, we take the LSTM as another baseline model. In the experiment, we used a model that contains one residual block, one LSTM layer containing 128 neurons, a softmax output layer containing two neurons, cross-entropy loss and the Adam optimiser. In the experiments, we aggregate the readings of each sensor per hour. Hence each data point contains 24 time points and eight features. We set the batch size to $32$, the learning rate to $0.0001$ and the sparsity to $0.001$. We divide the data into a train set and a test set. The numbers of training and testing samples in the datasets are 209 and 103 cases with their associated time-series data, respectively. The data is anonymous, and only the anonymous data without any personally identifiable information is used in this research. \\ \noindent\textbf{Experiments}: The ROC and PR changes during training are shown in the first two graphs in Figure \ref{fig:evaluation_TIHM}. Overall, the proposed model outperforms the other baseline methods. The LSTM performs well in dealing with the time-series data.
Compared to the other methods, the neural network converges much faster. However, the performance of the model fluctuates around 30 epochs. The convergence and the fluctuation are due to the rational process. The model has to learn how to extract important time steps and pay attention to the features. This process is also reflected in Figure \ref{fig:TIHM_loss}; the loss fluctuates during that period. However, the model adjusts this fluctuation automatically and improves the performance. The overall results are also summarised in Table \ref{tab:results}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|} \hline & LR & LSTM & NN & Proposed method\\ \hline AUC - PR & 0.3472 & 0.6901 & 0.5814 & \textbf{0.8313}\\ AUC - ROC & 0.5919 & 0.7644 & 0.7601 & \textbf{0.9131}\\ \hline \end{tabular} \caption{The evaluation results in comparison with a set of baseline models: Logistic Regression (LR), Long-Short Term Memory (LSTM) neural networks and a fully connected Neural Network (NN) model. Since the dataset is imbalanced, we calculated the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves to evaluate the performance.} \label{tab:results} \end{table} \section{Discussion} \textbf{Ablation Study}: We begin the discussion with an ablation study. Our model contains five important components: rational layers, attention layers, residual layers, focal loss and positional encoding. We omit each component one at a time and explore how removing it impacts the performance of the model. The experiments are shown in the first two graphs of Figure \ref{fig:ablation_tihm}. The orange line represents the model without the rational layer. Although the performance of the model without the rational layer keeps increasing, it significantly underperforms the others. In other words, the rational layer plays an important role in the model.
Removing the positional encoding, attention layer, residual layer, or the focal loss decreases the performance as well. The performance changes caused by omitting each of these four components are quite similar. As shown in Figure \ref{fig:ablation_tihm}, the positional encoding helps the model to identify relevant patterns of the data over time and plays an important role in the performance of the model. The change in the rate of selected time steps is shown in Figure \ref{fig:TIHM_rate}. \\ \textbf{Rationalising prediction}: The rational component helps to increase the accuracy of the model. Generally, the proposed rationalising method shows that the model knows which time steps and features to use for the prediction. These patterns and time steps can also be explored to identify and observe relevant data and symptoms for a condition in each patient. Using this component, a personalised set of patterns and symptoms can be explored for each patient. The last graph in Figure \ref{fig:ablation_tihm} shows the selection rate changes during the training phase. The model learns to extract the time steps, and the accuracy increases after the changes become stable. As mentioned in the ablation study, after learning to extract the important time steps, the proposed model outperforms the baseline models without rational mechanisms. In other words, the model extracts a sub-set of the time steps (e.g.\ part of the time steps are extracted from Figure \ref{fig:visual_agg_data} to Figure \ref{fig:visual_select_data}) to obtain a better prediction. As the learning process continues, the model tries different selections and finds the optimised selection rate. Compared to the other models, the performance of the proposed model does not decrease during the training. The model learns to pay attention to the most relevant segments of the data and to consider long-distance dependencies in the time-series data.
In summary, the proposed model can not only explain the prediction but also automatically abandon the redundant information in the data. According to our experiments, the proposed model on average selects $61\%$ of the time points in the datasets to estimate the predictions.\\ \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/discussion/rational_pair_sim.png} \caption{Visualisation of the outputs within the rational block. The top figure visualises a sample which is validated with a True incident. The bottom figure is a sample which is validated with a False incident.} \label{fig:rational_pair} \end{figure*} \textbf{Pair analysis}: We then analyse the rational block processing on the positive and negative samples. As shown in Figure \ref{fig:rational_pair}, the rational block assigns weights to the positive and negative samples differently. More specifically, the model has learnt to extract different amounts and series of time steps based on the inputs. In this case, the model extracts more time steps for the positive case than the negative case. Furthermore, the model pays attention differently based on the input data. In the example above, the model considers the bathroom to be the most important sensor in the positive sample. However, the model takes the bathroom and kettle as almost equally important sensors for predicting the negative case. After the model pays attention to the sensors of the selected time steps, the classifier gives the correct predictions. \begin{comment} \begin{itemize} \item We propose a novel rationalising block which is based on the rational and attention mechanism to process healthcare time-series data. \item We show how the focal loss helps to deal with the imbalanced issues. \item The proposed model can be parallelised, and this improves the scalability in dealing with large datasets. \item We demonstrate how attention-based models can be used effectively in healthcare data analysis.
\item We have evaluated our model on an observational clinical study and show how it outperforms conventional machine learning methods. \item We present the effectiveness of the model in a real-world setting and describe how it is used to support people with dementia. \end{itemize} \end{comment} \subsection*{Translating machine learning research into clinical practice} Improving the quality of life by preventing illness‐related symptoms and the negative consequences of dementia has been set out as a major goal to advance dementia care. Agitation and infections have been highlighted as areas for priority development \cite{6buhr2006caregivers}. Our proposed model directly addresses these priorities in dementia care and intervention by enabling early detection of agitation and urinary tract infections in a remote healthcare monitoring scenario, providing an opportunity for delivering more personalised, predictive and preventative healthcare. When applied to a real-world clinical dataset in the context of the current clinical study, our proposed algorithm provided a recall of 91\% and precision of 83\% in detecting early signs of agitation and UTI from physiological and environmental sensor data. A clinical monitoring team verified the predictions by contacting the patient or carer when an agitation or UTI alert was generated. A set of clinical pathways for early interventions has also been developed for the clinical monitoring team to use when responding to the alerts. \subsubsection*{Relevance to patient outcomes} We would like to highlight an important aspect of using this type of analysis to evaluate healthcare and patient outcomes. Focusing only on accuracy as a metric for assessment of the solution within a specific cohort goes only so far \cite{MITNews}. Large studies and further experiments with different cohorts and various in-home deployment settings are required to assess how such algorithms will perform in noisy and dynamic real-world environments.
There are several examples of AI and machine learning algorithms that perform very well in controlled and laboratory settings, but the real-world experience is different \cite{MITNews}. In this study, the sensors and data collection operate in an uncontrolled, real-world environment. We have performed several cross-validation, comparison and ablation studies to avoid overfitting the model and to make sure the results are robust and reproducible. However, further independent trials and validation studies with larger cohorts are required to transform the current work into a product that can be used in real-world clinical and care settings. Another important point is that focusing only on the accuracy of the algorithm will not give a complete picture of the real effectiveness and impact of the solution on patient outcomes. Our agitation intervention protocol follows all current guidelines, which agree that individualised and person-centred non-pharmacological therapies are the first-line treatment for agitation in people with dementia \cite{52duff2018dementia, 53ijaopo2017dementia}. In line with the current guidelines, the initial assessment explores possible reasons for patients' distress and addresses clinical or environmental causes first. The clinical monitoring team asks a set of standardised questions to evaluate the symptoms and to help the carer identify potential causes of agitation such as pain, illness, discomfort, hunger, loneliness, boredom or environmental factors (temperature, light, noise level). The recognition and treatment of possible organic causes or triggering factors remains the mainstay of the intervention. In particular, detection of delirium and a possible underlying infection is of great importance, and the clinical monitoring team facilitates early diagnosis and treatment by liaising with the study's clinical team and the patient's GP.
Finally, the clinical monitoring team provides psychological support for the caregivers in order to reduce caregiver distress. In the future, we are planning to use multimodal sensor data to improve the classification of the agitation state, including measuring sound levels along with activity detected by environmental sensors. Similarly to the agitation protocol, in case of a UTI alert the clinical monitoring team first responds by contacting the patient/carer to evaluate the symptoms. However, the diagnosis of UTI in dementia patients can be problematic, as these patients are less likely to present with a typical clinical history and localised urinary symptoms compared with younger patients \cite{54lutters2008antibiotic}. The team, therefore, arranges a home visit to perform a dipstick urine test. If the urine dipstick test is suggestive of infection (positive nitrites or leukocytes), the clinical monitoring team advises the person with dementia/carer to visit the GP the same day to obtain a prescription for antibiotics. The monitoring team also informs the GP of the test results and requests that antibiotics be prescribed. One potential criticism of our UTI intervention algorithm could be the possibility of antibiotic over-prescribing contributing to the spread of antibiotic resistance. However, recent evidence demonstrates that in elderly patients with a diagnosis of UTI in primary care, no antibiotics and delayed antibiotics are associated with a significant increase in bloodstream infection and all-cause mortality compared with immediate treatment \cite{55gharbi2019antibiotic}. Therefore, early prescription of antibiotics for this vulnerable group of older adults is advised in view of their increased susceptibility to sepsis after UTI, despite growing pressure to reduce inappropriate antibiotic use.
The impact of our in-home monitoring technologies and the embedded machine learning models on clinical outcomes, including hospitalisation, institutionalisation and mortality rates, is part of an ongoing study. Nevertheless, the current work demonstrates the effectiveness of the proposed algorithm and its translation into real-life clinical interventions. Fig \ref{fig:rational_pair} illustrates individual cases of agitation and UTI correctly identified by the algorithm, with the digital markers demonstrating a behavioural anomaly. \section{Conclusion} To avoid unplanned hospital admissions and to provide early clues for detecting the risk of agitation and infections, we collected daily activity data and vital signs with in-home sensory devices. The noise and redundant information in the data lead to inaccurate predictions for traditional machine learning algorithms. Furthermore, traditional machine learning models cannot explain their predictions. To address these issues, we proposed a model that not only outperforms traditional machine learning methods but also provides explanations for its predictions. The proposed rationalising block, which is based on the rational and attention mechanism, can process healthcare time-series data by filtering out redundant and less informative information. Furthermore, the retained data can be regarded as the salient information supporting clinical treatment. We also demonstrate that the focal loss helps to improve performance on the imbalanced clinical dataset and that attention-based models can be used effectively in healthcare data analysis. The evaluation shows the effectiveness of the model on a real-world clinical dataset and describes how it is used to support people with dementia. \\ \section*{Acknowledgment} This research is funded by the UK Medical Research Council (MRC), Alzheimer's Society and Alzheimer's Research UK and supported by the UK Dementia Research Institute. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Radio observations have always played an important role in \ac{GRB} studies. Besides complementing the broadband spectrum analysis, they allow for direct and indirect measurements of the source geometry and dynamics. In particular, using only observations of the total flux density around the \ac{LC} peak, it is very challenging to constrain the observing angle, \textit{i.e.}, the angle between the \ac{GRB} jet axis and the observer \ac{LOS}. This results in a degeneracy among the model parameters \cite{Nakar:2020pyd}. Observations of the shift of the radio image centroid allow us to break this degeneracy. However, \ac{GRB} jets at cosmological distances are less than a parsec in size and thus their imaging is challenging even with the most sensitive \ac{VLBI} facilities. Examples of successful imaging include \ac{GRB}030329 and GRB170817A{}. \ac{GRB}030329 was imaged via global \ac{VLBI}, which reached sub-\ac{mas} resolution. The image size, approximated with the \ac{FWHM} (assuming a circular Gaussian model for the image), was $0.07\,$\ac{mas} and $0.17\,$\ac{mas} at $23$ and $83$ days, respectively \cite{Taylor:2004wd}. Multiple observations at different epochs yielded an average expansion velocity of $3-5\,c$. This superluminal motion hinted at a relativistic expansion of the \ac{GRB} jet. This source was also imaged $217\,$days \cite{Taylor:2004ru} and $806\,$days \cite{Pihlstrom:2007zz} after the original trigger. Combined analysis of the radio images and broadband data yielded estimates of the jet parameters and its lateral spreading, as well as of the angle between the jet axis and the \ac{LOS} \cite{Granot:2004qf,Pihlstrom:2007zz,Mesler:2012,Mesler:2013fza}.
Another example of a successful jet imaging is GRB170817A{} \citep{Savchenko:2017ffs,Alexander:2017aly,Troja:2017nqp,Monitor:2017mdv,Nynka:2018vup,Hajela:2019mjy}, a short \ac{GRB} detected by the space observatories Fermi \citep{TheFermi-LAT:2015kwa} and INTEGRAL \citep{Winkler:2011} and localized to the S$0$ galaxy NGC$4993$. GRB170817A{} was an electromagnetic counterpart to the \ac{GW} event GW170817~\citep{TheLIGOScientific:2017qsa,Abbott:2018wiz,LIGOScientific:2018mvr}. This \ac{GRB} was dimmer than other events of its class and was followed by an afterglow with a prolonged rising part. The most widely accepted explanation for this is that GRB170817A{} was a structured jet observed off-axis \citep[\textit{e.g.}][]{Fong:2017ekk,Troja:2017nqp,Margutti:2018xqd,Lamb:2017ych,Lamb:2018ohw,Ryan:2019fhz,Alexander:2018dcl,Mooley:2018dlz,Ghirlanda:2018uyx}. This interpretation is in contrast to the commonly considered uniform jet structure, also called ``top-hat'' \cite{Rhoads:1997ps,Panaitescu:1998zf,Sari:1999mr,Kumar:2000gj,Moderski:1999ct,Granot:2001cw,Granot:2002za,Ramirez-Ruiz:2004cvd,Ramirez-Ruiz:2004gvs}, where energy and momenta do not depend on the angle (within the jet opening angle). This explanation was in part derived from the analysis of radio images at $75$ and $230\,$days after the burst by the Karl G. Jansky \ac{VLA} and the Robert C. Byrd Green Bank Telescope \cite{Mooley:2018dlz}. The observations showed that the position of the flux centroid changed between the two observational epochs, with a mean apparent velocity along the plane of the sky of $\beta_{\rm app}=4.1 \pm 0.5$. The source, however, remained unresolved. That placed upper limits on the source size of $1\,$\ac{mas} and $10\,$\ac{mas} in the directions perpendicular and parallel to the motion, respectively \cite{Mooley:2018dlz}.
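For reference, the apparent superluminal velocity quoted above follows from the standard relativistic projection effect (a textbook relation, not specific to the analyses cited here): for emitting material moving with velocity $\beta c$ at an angle $\theta$ to the \ac{LOS},

```latex
\begin{equation}
  \beta_{\rm app} = \frac{\beta \sin\theta}{1 - \beta\cos\theta}\, ,
  \qquad
  \max_{\theta}\, \beta_{\rm app} = \beta\Gamma = \sqrt{\Gamma^2 - 1}
  \quad \text{at} \quad \cos\theta = \beta\, .
\end{equation}
```

The measured $\beta_{\rm app}=4.1$ therefore already implies a Lorentz factor $\Gamma \gtrsim \sqrt{1+\beta_{\rm app}^2} \simeq 4.2$ for the material dominating the emission at those epochs.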
The high compactness of the source was further supported by the observed quick turnover around the peak of the radio \acp{LC} and a steep decline, $F_{\nu}\propto t_{\rm obs}^{-2}$, after $200\,$days \cite{Mooley:2018clx}. Notably, the superluminal motion was also observed in the optical band \cite{Mooley:2022uqa}. \citet{Ghirlanda:2018uyx} also obtained a radio image at $207\,$days, confirming the previous findings. Together with the analysis of multi-wavelength \acp{LC}, the information obtained from radio images made it possible to confirm that GRB170817A{} was produced by a narrow, core-dominated jet rather than by a wide, quasi-isotropic ejecta \cite{Hotokezaka:2018gmo,Gill:2018kcw}. A comparison with \ac{GRB}030329, where no proper motion was observed, only the expansion speed, indicates a difference in source geometry. A sizable fraction of \acp{GRB} occurs further off-axis than GRB170817A{}. For them, the prompt $\gamma$-ray emission, as well as the early afterglow, may not be seen, as they would be beamed away from the observer's \ac{LOS}. At later times, however, as the jet decelerates and spreads laterally, the afterglow should become visible. Such an afterglow is referred to as an ``orphan afterglow'' \cite{Rhoads:1997ps}. No such afterglow has been found so far despite extensive search campaigns in the X-ray \cite{Woods:1999vx,Nakar:2002un}, optical \cite{Dalal:2001ym,Totani:2002ay,Nakar:2002ph,Rhoads:2003ya,Rau:2006}, and radio \cite{Perna:1998ny,Levinson:2002aw,Gal-Yam:2005gbc,Soderberg:2005vp,Bietenholz:2013hha} bands (see also \citet{Huang:2020pxr}). In addition to the \ac{GRB} and its afterglow, GW170817{} was accompanied by a quasi-thermal electromagnetic counterpart, the \ac{kN} \AT~\citep{Arcavi:2017xiz,Coulter:2017wya,Drout:2017ijr,Evans:2017mmy,Hallinan:2017woc,Kasliwal:2017ngb,Nicholl:2017ahq,Smartt:2017fuw,Soares-santos:2017lru,Tanvir:2017pws,Troja:2017nqp,Mooley:2018dlz,Ruan:2017bha,Lyman:2018qjg}.
The ejecta responsible for the \ac{kN} was enriched with heavy elements, lanthanides and actinides, produced via $r$-process{} nucleosynthesis{} \citep{Lattimer:1974slx,Li:1998bw,Kulkarni:2005jw,Rosswog:2005su,Metzger:2010,Roberts:2011,Kasen:2013xka,Tanaka:2013ana}. The angular and velocity distributions of these ejecta are quite challenging to infer due to the complex atomic properties of these heavy elements. Nevertheless, at least two ejecta components were needed to account for the observed \acp{LC}: a lanthanide-poor one (for the early blue signal) and a lanthanide-rich one (for the late red signal) \cite{Cowperthwaite:2017dyu,Villar:2017wcc,Tanvir:2017pws,Tanaka:2017qxj,Perego:2017wtu,Kawaguchi:2018ptg,Coughlin:2018fis}. A fit of the \AT{} \acp{LC} to a semi-analytical two-component spherical \ac{kN} model yielded blue (red) components of mass $2.5\times10^{-2}M_{\odot}$ ($5.0\times10^{-2}M_{\odot}$) and velocity $0.27\,c$ ($0.15\,c$) \citep{Cowperthwaite:2017dyu,Villar:2017wcc}. The estimated ejecta mass and velocity could be significantly modified if anisotropic effects are taken into account~\cite{Kawaguchi:2018ptg}. \Ac{NR} simulations of \ac{BNS} mergers predict that mass ejection can be triggered by different mechanisms acting on different timescales (see \citet{Metzger:2019zeh,Shibata:2019wef,Radice:2020ddv,Bernuzzi:2020tgt} for reviews on various aspects of the problem). Specifically, dynamical ejecta of mass $\mathcal{O}(10^{-4}-10^{-2})\,{\rm M_{\odot}}$ can be launched during mergers at average velocities of $0.1-0.3\,c$, \textit{e.g.}~ \cite{Rosswog:1998hy,Rosswog:2005su,Hotokezaka:2013iia,Bauswein:2013yna,Wanajo:2014wha,Sekiguchi:2015dma,Radice:2016dwd,Sekiguchi:2016bjd,Vincent:2019kor,Zappa:2022rpd,Fujibayashi:2022ftg}.
After the merger, quasi-steady state winds were shown to emerge from the post-merger disk \cite{Dessart:2008zd,Fernandez:2014bra,Perego:2014fma,Just:2014fka,Kasen:2014toa,Metzger:2014ila,Martin:2015hxa,Wu:2016pnw,Siegel:2017nub,Fujibayashi:2017puw,Fahlman:2018llv,Metzger:2018uni,Fernandez:2018kax,Miller:2019dpt,Fujibayashi:2020qda,Nedora:2020pak,Nedora:2019jhl}. \ac{NR} simulations also show that a small fraction of the dynamical ejecta $({\sim}(10^{-6}-10^{-5})\,{\rm M_{\odot}})$ has velocities exceeding ${\simeq}0.6\,c$ \citep{Hotokezaka:2013b,Metzger:2014yda,Hotokezaka:2018gmo,Radice:2018pdn,Radice:2018ghv,Nedora:2021eoj,Fujibayashi:2022ftg}. Such fast ejecta are capable of producing bright non-thermal late-time afterglow-like emission, with the \ac{SED} peaking in the radio band \citep[\textit{e.g.}][]{Nakar:2011cw,Piran:2012wd,Hotokezaka:2015eja,Radice:2018pdn,Hotokezaka:2018gmo,Kathirgamaraju:2018mac,Desai:2018rbc,Nathanail:2020hkx,Hajela:2021faz,Nakar:2019fza}. The mechanism behind the fast tail of the ejecta is not yet clear. Possible options include shocks launched at core \bnc{} \citep{Hotokezaka:2013b,Radice:2018pdn} and shocks generated at the collisional interface between the \acp{NS} \cite{Bauswein:2013yna}. Notably, despite the large number of \ac{BNS} \ac{NR} simulations, there is no robust relationship between the binary parameters and \ac{NS} \ac{EOS} on the one hand and the properties of the ejected matter on the other. While there exist fitting formulae of various complexity for the properties of the bulk of the ejecta, \textit{e.g.}, its mass and velocity \cite{Dietrich:2016fpt,Radice:2018pdn,Kruger:2020gig,Nedora:2020qtd,Dietrich:2020efo}, such formulae for the fast ejecta tail are currently absent. Thus, we are limited to employing published dynamical ejecta profiles from \ac{NR} simulations.
In \citet{Nedora:2021eoj} (hereafter \citetalias{Nedora:2021eoj}) we showed how a \ac{kN} afterglow emission from the fast tail of the dynamical ejecta may contribute to the radio \acp{LC} of GRB170817A{}, employing \ac{NR}-informed ejecta profiles \cite{Perego:2019adq,Nedora:2019jhl,Nedora:2020pak,Bernuzzi:2020tgt}. In \citet{Nedora:2022kjv} (hereafter \citetalias{Nedora:2022kjv}) we modified the afterglow model by including an additional electron population that follows a Maxwellian distribution in energy behind the \ac{kN} \ac{BW} shock. We showed that the radio flux from these ``thermal electrons'' can be higher than the radio flux from the commonly considered ``power-law'' electrons at early times and if the ejecta is sufficiently fast. It is thus natural to investigate whether the emission from thermal electrons affects the radio image of the source. As in \citetalias{Nedora:2022kjv}, we consider a GRB170817A{}-inspired \ac{GRB} afterglow model of a Gaussian jet, seen off-axis, while for the \ac{kN} afterglow we consider \ac{NR}-informed ejecta profiles extracted from \ac{NR} simulations with various \ac{NS} \acp{EOS} and system \mr{}s. The paper is organized as follows. In Sec.~\ref{sec:method}, we recall the main assumptions and methods used to calculate the observed \ac{GRB} and \ac{kN} afterglow emission, as well as how to compute the sky map. In Sec.~\ref{sec:result:kn}, we present and discuss the \ac{kN} afterglow sky maps, focusing on their overall properties, \textit{e.g.}, the image size and the flux centroid position and their evolution. In Sec.~\ref{sec:result:kn_grb}, we consider both the \ac{GRB} and \ac{kN} afterglows and discuss how the properties of the former change when the latter is included in the modeling. In Sec.~\ref{sec:result:interaction}, we briefly remark on how the \ac{GRB} plus \ac{kN} sky map changes if the \ac{ISM} in front of the \ac{kN} ejecta has been pre-accelerated and partially removed by the passage of the laterally spreading \ac{GRB} ejecta.
Finally, in Sec.~\ref{sec:conclusion}, we provide the discussion and conclusions. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{fig01a.pdf} \includegraphics[width=0.49\textwidth]{fig01b.pdf} \includegraphics[width=0.49\textwidth]{fig01c.pdf} \includegraphics[width=0.49\textwidth]{fig01d.pdf} \caption{ \textit{Top figure}: sky map for a \ac{BNS} merger simulation with the BLh \ac{EOS} and $q=1.00$, computed $120\,$days after the merger at $\nu=1\,$GHz and observed at $\theta_{\rm obs}=45\,$deg. The left and right columns of plots correspond to different \ac{ISM} densities, $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$ on the left and $n_{\rm ISM}=0.00031\,\,\text{cm}^{-3}$ on the right. In each plot column, the top and top-right subplots display the $X$- and $Z$-averaged brightness distributions, respectively. Dotted lines mark the \ac{FWHM} and dashed lines mark the location of the flux centroid of the image. The \ac{FWHM} and the location of the flux centroid are also shown on the main panel of the figure as error bars and a circular marker, respectively.
%
Thin gray dotted lines indicate the $X$ and $Z$ axes.
%
Notably, we plot the $I_{\nu}/I_{\nu;\,\rm max}\in (0.1,1)$ range of the normalized specific intensity in order to resolve the image structure more clearly.
%
\textit{Bottom panel}: the same sky map but viewed at three different angles, $\theta_{\rm obs}$. } \label{fig:results:skymap_example} \end{figure*} \section{Methods} \label{sec:method} In order to compute the \ac{GRB} and \ac{kN} afterglows, we employ the semi-analytic code \texttt{PyBlastAfterglow}{} discussed in \citetalias{Nedora:2022kjv} and \citetalias{Nedora:2021eoj}. In the model, both ejecta types are discretized into velocity and angular elements; for each of these, the equations of \ac{BW} evolution are solved independently, and the synchrotron radiation is computed, accounting for relativistic and time-of-arrival effects.
The effect of the pre-processing of the \ac{ISM} by a passing \ac{GRB} \ac{BW} is considered in Sec.~\ref{sec:result:interaction}; otherwise this effect is not included, and the \ac{kN} \acp{BW} evolve independently from the \ac{GRB} \acp{BW}. For the \ac{kN} afterglow, both thermal and non-thermal electron populations are considered, while for the \ac{GRB} afterglow only the latter is employed in the model. The sky maps are computed using the spherical coordinate system discussed in Sec.~$2$ of \citetalias{Nedora:2022kjv} (figure~1). For both ejecta types axial symmetry is assumed. Each elemental \ac{BW} then has a radial coordinate $R_{ij}$ and angular coordinates $\theta_{i}$ and $\phi_{ij}$, where the single index of $\theta_{i}$ reflects the axial symmetry. The coordinate vector of the elemental \ac{BW} is given by $\vec{v}_{ij} = R_{ij}\big( \sin{(\theta_{i})}\cos{(\phi_{ij})}\vec{x},\, \sin{(\theta_{i})}\sin{(\phi_{ij})}\vec{y},\, \cos{(\theta_{i})}\vec{z} \big)$. The cosine of the angle between the \ac{LOS} and $\vec{v}_{ij}$ reads \begin{equation} \label{eq:method:mu_obs} \mu_{ij} = \sin{(\theta_{i})}\sin{(\phi_{ij})}\sin(\theta_{\rm obs}) + \cos{(\theta_{i})}\cos(\theta_{\rm obs}) \, . \end{equation} The image plane, $xz$, is perpendicular to the \ac{LOS} of the observer. We choose the basis in which the principal jet moves in the positive $\tilde{x}$-direction. The basis vectors of the plane are then $\tilde{\vec{x}}_{ij}=\sin(\theta_{\rm obs})\vec{z}_{ij}-\cos(\theta_{\rm obs})\vec{x}_{ij}$ and $\tilde{\vec{y}}_{ij}=\vec{x}_{ij}$, as in \citet{Fernandez:2021xce}, and the coordinates of the $ij$-th \ac{BW} on the image plane (for the principal jet) are given by \begin{equation}\label{eq:method:xccoord} \begin{aligned} \tilde{x}_{ij} & = -R_{ij}[\cos(\theta_{\rm obs})\sin(\theta_i)\sin(\phi_{ij}) \\ & + \sin(\theta_{\rm obs})\cos(\theta_{i})], \\ \tilde{z}_{ij} & = R_{ij}\sin(\theta_{i})\cos(\phi_{ij})\, .
\end{aligned} \end{equation} In the following, we omit the tildes for simplicity. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{fig02.pdf} \caption{ Evolution of the sky map for the \ac{BNS} merger simulation with the BLh \ac{EOS} and $q=1.00$, observed at $\theta_{\rm obs}=45\,$deg and $\nu_{\rm obs}=1\,$GHz. The \ac{ISM} density is $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$. As in Fig.~\ref{fig:results:skymap_example}, the marker and the error bar indicate the location of the flux centroid and the \ac{FWHM} of the image, while gray dotted lines mark the axes. } \label{fig:results:kn_skymap_example_evolution} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{fig03.pdf} \caption{ Evolution of the sky map spectral index for the \ac{BNS} merger simulation with the BLh \ac{EOS} and $q=1.00$, observed at $\theta_{\rm obs}=45\,$deg and at $\nu=1\,$GHz. Here $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$. Thin dotted lines mark the axes. For clarity, we did not apply the Gaussian smoothing kernel to this image. } \label{fig:results:kn_skymap_spectral_example_evolution} \end{figure*} In order to characterize the sky maps we consider the following main quantities. Specifically, following \citet{Zrake:2018eml,Fernandez:2021xce} we compute the surface brightness-weighted center of the image, the image centroid, defined as \begin{equation} x_c = \frac{1}{\int I_{\nu}\, dx\, dz}\int x I_{\nu}\, dx\, dz, \end{equation} where $I_{\nu}$ is computed via Eq.~(37) in \citetalias{Nedora:2022kjv}. We also compute the $X$- and $Z$-averaged brightness distributions \begin{equation} \begin{aligned} I_{\nu; \rm m}(x) &= \frac{1}{\Delta z} \int I_{\nu}(x,z)\, dz\, , \\ I_{\nu; \rm m}(z) &= \frac{1}{\Delta x} \int I_{\nu}(x,z)\, dx\, . \end{aligned} \end{equation} As the available ejecta profiles are limited in angular resolution, which severely limits the accuracy of the sky map analysis, we ``rebin'' the angular ejecta distribution histograms.
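For concreteness, the projection of Eq.~(\ref{eq:method:xccoord}) and the image-plane diagnostics defined above can be sketched in a few lines of Python. This is an illustrative sketch on a uniform grid, not an excerpt from \texttt{PyBlastAfterglow}; all function and variable names are hypothetical.

```python
import numpy as np

def image_plane_coords(R, theta, phi, theta_obs):
    """Project elemental BW positions (R, theta_i, phi_ij) onto the
    observer image plane, mirroring Eqs. for mu_ij and (x, z) in the
    text (tildes omitted)."""
    mu = (np.sin(theta) * np.sin(phi) * np.sin(theta_obs)
          + np.cos(theta) * np.cos(theta_obs))
    x = -R * (np.cos(theta_obs) * np.sin(theta) * np.sin(phi)
              + np.sin(theta_obs) * np.cos(theta))
    z = R * np.sin(theta) * np.cos(phi)
    return mu, x, z

def centroid_and_fwhm_x(I, x, z):
    """Flux centroid x_c and FWHM_x of an intensity map I[i, j] given on
    a uniform grid x[i], z[j]; the FWHM is measured on the z-averaged
    brightness profile I_m(x)."""
    I_m_x = I.mean(axis=1)                       # z-averaged profile
    x_c = np.sum(x * I_m_x) / np.sum(I_m_x)      # brightness-weighted center
    half = x[I_m_x >= 0.5 * I_m_x.max()]         # half-maximum support
    return x_c, half.max() - half.min()
```

In practice, the intensity map produced by the radiation module would be passed to `centroid_and_fwhm_x` at each observer time to trace the evolution of $x_c$ and FWHM$_{x}${}.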
To do this rebinning we assume a uniform distribution within each bin \cite{Knoll:2000fj}. \section{Results} \label{sec:result} For an extended source with a uniform \ac{LF} $\Gamma$, the maximal apparent velocity is $\beta_{\rm app} < \Gamma$, while the image size increases with $\Gamma$ \cite{Boutelier:2011}. A spherically symmetric source that expands isotropically would appear as a ring expanding with $\Gamma$, with no motion of the image centroid. Due to the non-trivial angular and velocity structure of the ejecta, the shape, size, and structure of a \ac{kN} sky map have a complex dependency on the observer time $t_{\rm obs}$ and angle $\theta_{\rm obs}$. Moreover, if both thermal and non-thermal electron populations are present behind the shock, there is a non-trivial dependency on the microphysical parameters and the \ac{ISM} density. It is beyond the scope of this work to study all possible combinations of free parameters. Instead, we focus on several representative cases. Specifically, we fix the source to be located at a luminosity distance $D_L=41.3\,$Mpc with redshift $z=0.0099$. The microphysics parameters are the following. The fractions of the shock energy that go into electron acceleration and magnetic field amplification are $\epsilon_e=0.1$ and $\epsilon_b=0.001$, with $\epsilon_t=1$. The slope of the power-law electron distribution is $p=2.05$. Unless stated otherwise, the observational frequency is $1\,$GHz and the observer angle is $45\,$deg. We focus on two values of $n_{\rm ISM}$: the fiducial value $n_{\rm ISM} = 0.05\,\,\text{cm}^{-3}$ and the value inferred for GRB170817A{}, $n_{\rm ISM} = 0.00031\,\,\text{cm}^{-3}$. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{fig04a.pdf} \includegraphics[width=0.49\textwidth]{fig04b.pdf} \includegraphics[width=0.49\textwidth]{fig04c.pdf} \includegraphics[width=0.49\textwidth]{fig04d.pdf} \caption{ Time evolution of the \ac{kN} afterglow sky map properties.
\textit{Top left panel} shows the evolution of the image size, FWHM$_{x}${}. Circular markers indicate the image size at the \ac{LC} peak. \textit{Top right panel} shows the evolution of the flux centroid position, $x_c$. Square markers indicate the minimum of the \ac{LC} spectral index $A_{\nu}$. \textit{Bottom left panel} and \textit{bottom right panel} display the image size and the position of the flux centroid at the time of the \ac{LC} peak, respectively, for three values of the observational angle. Each panel contains a subpanel enlarging the early-time part of the plot. Here $n_{\rm ISM} = 0.00031\,\,\text{cm}^{-3}$. } \label{fig:results:kn_skymaps_width_evol_all_sims} \end{figure*} \subsection{Kilonova afterglow sky maps} \label{sec:result:kn} We begin by considering the \ac{BNS} merger simulation with the BLh \ac{EOS} and $q=1.00$. The sky map for $\theta_{\rm obs}=45\,$deg, $\nu_{\rm obs}=1\,$GHz and $t_{\rm obs}=120\,$days after the merger is shown in Fig.~\ref{fig:results:skymap_example}. At $t_{\rm obs}=120\,$days the \ac{kN} afterglow at $1\,$GHz for this \ac{BNS} merger model is dominated by the emission from the thermal electron population behind the shocks. The fast tail of the dynamical ejecta in this simulation is predominantly equatorial, confined to ${\gtrsim}60\,$deg (see Fig.~3 in \citetalias{Nedora:2021eoj}), with a mass-averaged half-\ac{RMS} angle $\theta_{\rm RMS}\simeq70\,$deg. As $\theta_{\rm RMS} > \theta_{\rm obs}$, the synthetic image resembles a wheel with the brightest parts offset from the center into the negative half of the $x$ axis, \textit{i.e.}, $x_c < 0$ (see the $\theta_{\rm obs}=15\,$deg. and $\theta_{\rm obs}=45\,$deg. subpanels in Fig.~\ref{fig:results:skymap_example}). An observer with $\theta_{\rm obs} \gtrsim \theta_{\rm RMS}$ would, however, be able to see the beamed emission from the fast ejecta tail (bright spots at $x\simeq0\,$\ac{mas}, $z\simeq\pm0.3\,$\ac{mas} in the $\theta_{\rm obs}=75\,$deg.
sub-panels of Fig.~\ref{fig:results:skymap_example}). Correspondingly, the image flux centroid would lie near $x_c \simeq 0$, with the brightest part of the image lying in the $x>0$ half-plane. As the \ac{kN} \acp{BW} propagate through the \ac{ISM}, the size of the sky map increases in both the $x$ and $z$ directions. Due to the axial symmetry of the ejecta properties, $\theta_{\rm obs}$ and relativistic effects primarily affect FWHM$_{x}${} and $x_c$. An example of the sky map evolution is shown in Fig.~\ref{fig:results:kn_skymap_example_evolution}. The deceleration of the \ac{kN} \acp{BW} reduces the contribution of the thermal electron population to the observed flux. Additionally, relativistic effects become increasingly less important. Consequently, the image becomes more spherically symmetric and centered around $x_c=z_c=0$. Specifically, for this simulation, at $t_{\rm obs}=600\,$days after the merger the emission from the equatorial and polar \acp{BW} becomes comparable, and thereafter the sky map resembles a circle with two bright spots near the image's outer boundaries on the $x=0$ axis. These spots mark the geometrically overlapping emitting areas and reflect the equatorial nature of the ejecta fast tail. Notably, the presence of thermal electrons that we assume in our model does not affect this qualitative picture, as the emissivities of both the thermal and non-thermal electron populations depend on the shock velocity, albeit to different degrees \citep[\textit{e.g.}][]{Ozel:2000,Margalit:2021kuf}. The presence of two electron populations behind the \ac{BW} shocks, however, implies a spectral evolution of the emission in every pixel of the sky map. We define the sky map spectral index as $A_{\nu} = d\log_{10}(I_{\nu})/d\log_{10}(\nu)$ and show its evolution in Fig.~\ref{fig:results:kn_skymap_spectral_example_evolution} for $\theta_{\rm obs}=45\,$deg. and $\nu=1\,$GHz.
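In practice, the per-pixel spectral index defined above can be estimated with a simple finite difference between intensity maps at two nearby observing frequencies. The sketch below is illustrative (the function name is hypothetical, not \texttt{PyBlastAfterglow} code):

```python
import numpy as np

def spectral_index_map(I_nu1, I_nu2, nu1, nu2, floor=1e-300):
    """Per-pixel spectral index A_nu = dlog10(I_nu)/dlog10(nu),
    approximated by a finite difference between intensity maps
    I_nu1, I_nu2 evaluated at two nearby frequencies nu1 < nu2."""
    I1 = np.maximum(np.asarray(I_nu1, dtype=float), floor)  # guard empty pixels
    I2 = np.maximum(np.asarray(I_nu2, dtype=float), floor)
    return np.log10(I2 / I1) / np.log10(nu2 / nu1)
```

For a pixel with a pure power-law spectrum, $I_\nu\propto\nu^{-0.75}$, the estimator recovers $A_\nu=-0.75$ independently of the chosen frequency pair.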
At early times, most of the sky map displays a relatively low $A_{\nu}\simeq-1.25$, indicative of the emission from the thermal electron population (figure~3 in \citetalias{Nedora:2022kjv}). As the \acp{BW} decelerate, the emission from the thermal electron population subsides. At the point where the spectrum transitions, the spectral index reaches a minimum. After that, the spectral index rises as the sky map becomes increasingly dominated by the emission from the non-thermal electron population. At very late times the spectral map becomes uniform, as the emission from power-law electrons with a fixed distribution slope $p$ dominates in every pixel. If resolved in observations, such an evolution of the spectral sky map would allow a detailed study of the ejecta velocity and angular distribution, besides constraining the physics of particle acceleration at mildly relativistic shocks. It is interesting to examine the evolution of the key sky map properties, the image size FWHM$_{x}${} and the position of the flux centroid $x_c$, at the very low \ac{ISM} density generally inferred for GRB170817A{}. In Fig.~\ref{fig:results:kn_skymaps_width_evol_all_sims}, we show the evolution of FWHM$_{x}${} and $x_c$, as well as their values at the peak time $t_p$ of the respective \acp{LC}. The sky map size at a given epoch is primarily determined by the energy budget of the ejecta. Simulations with $q=1$ and soft \acp{EOS}, \textit{e.g.}, the SFHo and SLy4 \acp{EOS}, display larger image sizes throughout the evolution. On the other hand, equal-mass simulations with stiffer \acp{EOS}, such as the BLh and LS220 \acp{EOS}, demonstrate smaller image sizes. More asymmetric binaries display, in general, intermediate image sizes. At the time of the \ac{LC} peak, the image size depends on whether the emission from the thermal or the non-thermal electron population dominates the observed flux.
If the former is true, $t_p$ is generally small, $t_p < 500\,$days for our simulations and the assumed $n_{\rm ISM}=0.00031\,\,\text{cm}^{-3}$, and the image size does not exceed $4\,$\ac{mas}. Notably, at higher $n_{\rm ISM}$, $t_p$ is shorter and thus FWHM$_{x}${} is smaller. Simulations with $q=1.00$ and soft (SLy4 and SFHo) \acp{EOS} are examples of that. If the emission from power-law electrons dominates the observed flux at the time of the \ac{LC} peak, the image size is significantly larger, ${\simeq}15-20\,$\ac{mas}. Importantly, $t_p$ also depends on the observer angle $\theta_{\rm obs}$ due to relativistic beaming of the early-time emission from thermal electrons. For example, the simulation with the SLy4 \ac{EOS} and $q=1.00$, which has a sufficiently spherically symmetric distribution of the fast tail, displays an early $t_p < 500\,$days at all three observing angles considered. A characteristic feature of the changing dominant contributor (\textit{e.g.}, electron population) to the observed emission is seen here as a sharp increase in the evolution of the image size (sub-panel in the top left panel of Fig.~\ref{fig:results:kn_skymaps_width_evol_all_sims}). This rapid increase in FWHM$_{x}${} occurs when the emission from fast \acp{BW}, which dominates the observed flux at first, subsides, and the less beamed, more isotropic emission from non-thermal electrons becomes equally important. As discussed before, the evolution of the image flux centroid position $x_c$, besides the ejecta energy budget, depends strongly on the observational angle. At $\theta_{\rm obs}=45\,$deg., for \ac{BNS} merger models with a sufficiently fast and equatorial fast tail, $x_c$ is negative at early times (\textit{e.g.}, for the simulation with the BLh \ac{EOS} and $q=1.00$). For simulations with $\theta_{\rm RMS} < \theta_{\rm obs}$, $x_c$ moves into the positive half of the $x$ axis from the beginning, as is the case for the equal-mass simulations with the SFHo, SLy4 and LS220 \acp{EOS}.
The time evolution of $x_c$ in most cases exhibits an extremum after which $x_c \rightarrow 0$. We find that the time of the extremum corresponds to the time where the spectral index evolution of the \ac{LC} reaches its minimum (see figure~$4$ in \citetalias{Nedora:2022kjv} for the \ac{LC} spectral index evolution). In the top right panel of Fig.~\ref{fig:results:kn_skymaps_width_evol_all_sims} this point is shown with a square marker. At the time of the \ac{LC} peak, the position of the flux centroid is generally determined by whether the thermal or non-thermal electrons dominate the observed flux. This in turn depends on $\theta_{\rm obs}$. In the former case $|x_c|$ tends to be larger, reaching up to $|x_c| \simeq 0.5\,$\ac{mas}, as bright beamed emission from thermal electrons in fast \acp{BW} makes the image very asymmetric. Consequently, if the \ac{LC} peaks at late times, $|x_c|$ is closer to zero for most models. \subsection{kN and GRB skymaps} \label{sec:result:kn_grb} One of the key observables of GRB170817A{} that confirmed the jetted nature of the outflow and allowed for a more precise estimate of the inclination angle $\theta_{\rm obs}$ was the motion of the \ac{GRB} flux centroid \cite{Mooley:2018dlz}. Here we investigate how the presence of the \ac{kN} afterglow affects the \ac{GRB} afterglow sky map $x_c$ and FWHM$_{x}${}, assuming that these two ejecta types do not interact. We briefly remark on this interaction in Sec.~\ref{sec:result:interaction}. For modeling \ac{GRB} afterglows, we consider the same parameters as in \citetalias{Nedora:2022kjv}, motivated by the analysis of GRB170817A{} \citep[\textit{e.g.}][]{Hajela:2019mjy,Fernandez:2021xce}, varying only the observer angle $\theta_{\rm obs}$ and the \ac{ISM} density $n_{\rm ISM}$. Specifically, we set the jet half-opening angle $\theta_{\rm w} = 15\,$deg. and the core half-opening angle $\theta_{\rm c} = 4.9\,$deg.
The isotropic equivalent energy is $E_{\rm iso}=10^{52}\,$ergs, and the initial \ac{LF} of the core is $\Gamma_{\rm c} = 300$. The microphysical parameters are set as: $\epsilon_e=0.05$, $\epsilon_B=0.0045$, and $p=2.16$. The luminosity distance to the source is set to $D_{\rm L} = 41.3\,$Mpc. Unless stated otherwise, we consider $\theta_{\rm obs}=45\,$deg. and $n_{\rm ISM} = 0.00031\,\,\text{cm}^{-3}$ as fiducial values. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{fig05.pdf} \caption{ Combined \ac{kN} and \ac{GRB} sky map. The former can be seen as a dim blob on the left, while the latter appears as a bright crescent on the right. The sizes and the locations of the flux centroids of the two individual components are shown in yellow and cyan respectively. The size and $x_c$ of the combined image are shown in lime. As in Fig.~\ref{fig:results:skymap_example}, the top and right sub-panels display the $z$- and $x$-averaged brightness distributions respectively. The sky map corresponds to $\nu_{\rm obs}=1\,$GHz, $\theta_{\rm obs}=45\,$deg., $t_{\rm obs}=60\,$days, and $n_{\rm ISM}=0.00031\,\,\text{cm}^{-3}$. } \label{fig:results:kn_grb_skymap_example} \end{figure} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{fig06a.pdf} \includegraphics[width=0.49\textwidth]{fig06b.pdf} % \includegraphics[width=0.49\textwidth]{fig06c.pdf} \includegraphics[width=0.49\textwidth]{fig06d.pdf} % \includegraphics[width=0.49\textwidth]{fig06e.pdf} \includegraphics[width=0.49\textwidth]{fig06f.pdf} \caption{ \textit{Top panels}: time evolution of the combined sky map properties, shown in terms of $\Delta x_c^{\rm GRB} = (x_c^{\rm GRB} - x_c^{\rm kN})/x_c^{\rm GRB}$ and $\Delta \text{FWHM}_x^{\rm GRB} = \text{FWHM}_x^{\rm GRB} - \text{FWHM}_x^{\rm kN}$ in the \textit{left} and \textit{right} panels respectively. The dashed gray line corresponds to the time of the \ac{GRB} \ac{LC} peak.
% \textit{Bottom panels}: properties of the combined sky map extracted at the time of the \ac{kN} afterglow \ac{LC} peak. Different colors correspond to various \acp{EOS}. Filled and empty markers indicate $q=1.00$ and $q=1.43$ simulations respectively. Different markers correspond to various observing angles. % In all panels, an inner sub-panel serves to enlarge the early-time part of the figure. } \label{fig:results:kn_grb_skymaps_xc_evol_all_sims} \end{figure*} In Fig.~\ref{fig:results:kn_grb_skymap_example}, we show a combined \ac{kN} plus \ac{GRB} afterglow radio sky map assuming $\theta_{\rm obs}=45\,$deg. and $t_{\rm obs}=60\,$days. At this early time the \ac{GRB} afterglow is significantly brighter than the \ac{kN} one: $F_{\nu=1\,{\rm GHz}}^{\rm GRB}=7.5\times10^{-3}\,$mJy and $F_{\nu=1\,{\rm GHz}}^{\rm kN}=4\times10^{-4}\,$mJy. However, despite being dimmer, the \ac{kN} afterglow affects the properties of the total sky map significantly, shifting the position of the image flux centroid back toward the center of the explosion. Consequently, the apparent velocity computed from the motion of the flux centroid would be underestimated if the effect of the \ac{kN} afterglow is not taken into account. In our case, the apparent velocity is reduced from $2.5\,c$ to $2.1\,c$ at $t_{\rm obs}=60\,$days. Such a systematic underestimation of the apparent velocity may, in turn, result in an overestimation of $\theta_{\rm obs}$ or $\Gamma$. This can be understood from the following considerations. Consider the case $\theta_{\rm s} \leq \theta_{\rm obs} - \theta_{\rm s}$, where $\theta_s$ is the average angular size of the extended source. The maximum apparent velocity $\beta_{\rm app}$ is attained when $\theta_{\rm obs} = 1/\Gamma$, where it equals the source \ac{LF} $\Gamma$. Then, assuming that the observed emission from an extended source comes predominantly from a compact region, we have $(\theta_{\rm obs} - \theta_{\rm s}) \approxeq 1/\beta_{\rm app}$.
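The beaming argument relies on the standard superluminal-motion formula, $\beta_{\rm app}=\beta\sin\theta/(1-\beta\cos\theta)$, which is maximized at $\cos\theta=\beta$, i.e., $\theta\simeq1/\Gamma$, where $\beta_{\rm app}=\beta\Gamma$. A minimal numerical check (the Lorentz factor here is illustrative, not a fit value):

```python
import numpy as np

def beta_app(beta, theta):
    """Apparent transverse velocity (in units of c) of emission from material
    moving with speed beta at angle theta to the line of sight."""
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

gamma = 4.0                                  # illustrative Lorentz factor
beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
theta = np.linspace(1e-4, np.pi / 2, 100000)
b = beta_app(beta, theta)
i = int(b.argmax())
# the maximum lies at cos(theta) = beta, i.e. theta ≈ 1/Gamma,
# where beta_app = beta * Gamma ≈ Gamma for Gamma >> 1
```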
These arguments were used to infer $\Gamma$ from the radio image of GRB170817A{} \cite{Mooley:2018dlz}. Notably, at smaller observational angles the early \ac{GRB} afterglow is significantly brighter, and at $\theta_{\rm obs}\simeq20\,$deg., the value generally inferred for GRB170817A{}, the \ac{kN} afterglow does not affect the estimated $\beta_{\rm app}$ to an appreciable degree. At slightly later times, when the \ac{GRB} afterglow reaches its peak emission, we find that even for $\theta_{\rm obs}=45\,$deg. the effect of the \ac{kN} afterglow on the \ac{GRB} afterglow sky map properties is negligible. At the time of the \ac{GRB} \ac{LC} peak, $t_{\rm p}^{\rm GRB}=800\,$days, $\beta_{\rm app}$ is reduced only by ${\simeq}0.1\,c$. The \ac{kN} afterglow becomes important again later, when the \ac{GRB} afterglow emission subsides. Numerical and semi-analytic jet models show that both the principal and the counter jet contribute to the late-time flux \cite{Zrake:2018eml,Fernandez:2021xce}. This forces the position of the flux centroid to move back, $x_c^{\rm GRB}\rightarrow 0$. Before that, the jet deceleration reduces the contribution to the observed emission from the fast jet core and consequently slows down the motion of the flux centroid. The jet lateral spreading contributes to this by pushing parts of the jet to $\theta > \theta_{\rm obs}$, making them move back on the image plane. In this regard, the presence of a \ac{kN} afterglow might be confused with a more rapid lateral spreading or an earlier emergence of the counter jet. \textit{ Thus, we conclude that even if the \ac{kN} afterglow does not contribute significantly to the observed total flux, it should be taken into account for an accurate estimation of the jet energy and geometry from sky map observations.
} Importantly, the relative brightness of the two afterglows considered here depends on all free parameters of the model, \textit{i.e.}, the microphysics parameters of both shock types (relativistic and mildly relativistic) as well as the angular and velocity structure of the ejecta. Considering the available \ac{BNS} merger simulations, we recall that the \ac{kN} afterglow from $q=1$ and soft \acp{EOS} simulations is brighter, and thus it would affect the properties of the combined sky map more strongly, at least before the \ac{GRB} \ac{LC} peak $t_{\rm p}^{\rm GRB}$. In Fig.~\ref{fig:results:kn_grb_skymaps_xc_evol_all_sims}, we show the change in the \ac{GRB} afterglow $x_c^{\rm GRB}$ and FWHM$_{x}${} in terms of $\Delta v = (v^{\rm GRB}-v^{\rm kN}) / v^{\rm GRB}$, where $v\in\{x_c,\text{FWHM$_{x}$}\}$. As expected, the general effect of the inclusion of the \ac{kN} afterglow is a decrease in $x_c$ and, consequently, in the apparent velocity $\beta_{\rm app}$, and an increase in the image FWHM$_{x}${} (top right and left panels of Fig.~\ref{fig:results:kn_grb_skymaps_xc_evol_all_sims}). Specifically, $\Delta x_c$ and $\Delta$FWHM$_{x}${} reach ${\gtrsim}0.5$ and ${\gtrsim}-8$ respectively. At $t_{\rm p}^{\rm GRB}$ the effect of the \ac{kN} afterglow presence is minimal in all cases, as the \ac{GRB} afterglow dominates the total emission and the sky map properties. \textit{ Thus, the image properties estimated at this time convey the most reliable information about the \ac{GRB} afterglow. } At higher $n_{\rm ISM}$ and $\theta_{\rm obs}$, the picture is qualitatively similar. The influence of the \ac{kN} afterglow is most prominent at $t < t_{\rm p}^{\rm GRB}$ and for equal mass \ac{BNS} simulations with soft \acp{EOS}, such as SLy4 and SFHo. For $q>1$ simulations, the maximum $\Delta x_c$ and $\Delta$FWHM$_{x}${} are about two times smaller than in $q=1$ cases.
On the other hand, at $\theta_{\rm obs}=21.5\,$deg. and $n_{\rm ISM}=0.00031\,\,\text{cm}^{-3}$, the influence of the \ac{kN} afterglow is negligible even at $t<t_{\rm p}^{\rm GRB}$ for all simulations. In this case the \ac{GRB} afterglow provides the dominant contribution to the total \ac{LC} and the sky map, and the presence of the \ac{kN} afterglow can only be seen at very late times, $t \gg t_{\rm p}^{\rm GRB}$, when the \ac{kN} afterglow emission comes predominantly from the non-thermal electron population. Meanwhile, in cases when the early \ac{GRB} emission is beamed away, $\theta_{\rm obs} \gtrsim 45\,$deg., the maximum in $\Delta x_c$ and $\Delta$FWHM$_{x}${} occurs before the extremum in the \ac{kN} afterglow spectral index evolution, in the regime where the emission from thermal electrons dominates the observed flux. \subsection{Effect of the GRB-modified ISM on kN afterglow sky map}\label{sec:result:interaction} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{fig07.pdf} \caption{ The ratio between two \ac{kN} afterglow sky maps, the only difference between them being whether the \ac{CBM}, altered by the passage of the \ac{GRB} \acp{BW}, is taken into account ($I_{\nu}^{\rm w}$) or not ($I_{\nu}^{w/o}$). The image size and the position of the flux centroid are shown as before with error bars and markers, in blue for the ``w'' case and in red for the ``w/o'' case. Sky maps are computed assuming $\nu_{\rm obs}=1\,$GHz, $\theta_{\rm obs}=60\,$deg., and $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$. } \label{fig:results:kn_wg_example} \end{figure} In \citetalias{Nedora:2022kjv} we showed that when the \ac{kN} ejecta moves behind the \ac{GRB} \ac{BW}, it encounters an altered density profile, which we called an altered \ac{CBM}, and the afterglow signature changes.
Specifically, the observed flux first decreases, as most of the \ac{kN} ejecta moves subsonically behind the laterally spreading \ac{GRB} \ac{BW}, and then increases, as the \ac{kN} ejecta shocks the overdense fluid behind the \ac{GRB} \ac{BW} forward shock. However, the decrease and increase in the observed flux were found to be rather small: $\lesssim40\%$ and $\lesssim10\%$, respectively. The reason for this is the non-uniform nature of the \ac{kN} ejecta and the finite time that the \ac{GRB} lateral spreading takes. Thus, different parts of the \ac{kN} ejecta encounter different regions of the altered \ac{CBM} at a given time, producing either an excess or a reduction in the observed emission. Nevertheless, for the sake of completeness, it is worth looking at how the \ac{kN} afterglow sky map changes when the altered \ac{CBM} is taken into account. In Fig.~\ref{fig:results:kn_wg_example} we show the effect of an altered \ac{CBM} on the \ac{kN} afterglow sky map for $t_{\rm obs}=80\,$days, $\theta_{\rm obs}=60\,$deg., and $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$. The red and blue colors indicate the excess and the reduction of the observed emission with respect to the sky map computed when the altered \ac{CBM} is not taken into account. As expected, the change in the observed intensity occurs primarily near the poles ($z=0$) and corresponds to \ac{kN} ejecta moving subsonically and not producing synchrotron emission. Fast elements of the \ac{kN} ejecta have shocked the overdense region behind the \ac{GRB} shock and produced an emission excess. Slower elements of the ejecta catch up with the underdense part of the altered \ac{CBM} later, and thus the part of the image where the emission is suppressed lies ahead of the one with the emission excess. The more equatorial part of the ejecta (along the $z$ axis) avoids interacting with the altered \ac{CBM} and, thus, its emission remains unchanged. In certain parts of the image, the emission excess can be significant ($I_{\nu}^{\rm w}/I_{\nu}^{\rm w/o}\lesssim3$).
However, combined with the emission suppression in other parts of the image, the overall emission excess is rather small. Thus, even at this relatively high $n_{\rm ISM}$ and large $\theta_{\rm obs}$, the effects of the altered \ac{CBM} on the sky map properties, \textit{i.e.}, the position of the flux centroid and the image size, are negligible. \section{Discussion and conclusion}\label{sec:conclusion} In this work, we considered synthetic radio images of the \ac{GRB} and \ac{kN} afterglows. For the former, we considered GRB170817A{}-motivated model settings, \textit{i.e.}, a laterally structured jet observed off-axis \cite{Hajela:2019mjy,Fernandez:2021xce}. For the latter, we considered a set of ejecta profiles from \ac{NR} \ac{BNS} merger simulations targeted to GW170817{}, \textit{i.e.}, with the corresponding chirp mass. For all calculations, we use the semi-analytic afterglow code \texttt{PyBlastAfterglow}{}, presented and discussed in \citetalias{Nedora:2021eoj} and \citetalias{Nedora:2022kjv}. The key aspect of the input physics is the inclusion of two electron populations behind the \ac{kN} \ac{BW} shocks, which follow power-law (non-thermal electrons) and Maxwellian (thermal electrons) distributions. The main limitation of our work is the semi-analytical nature of the model we employ. It remains to be investigated how \ac{GRB} and \ac{kN} afterglow sky maps computed with numerical \ac{HD} codes compare to ours. It is, however, numerically very challenging to perform such simulations on the temporal and spatial scales discussed in this work, as well as to perform them for the various possible choices of the model free parameters and \ac{kN} ejecta profiles. The aforementioned limitations notwithstanding, we find that the \ac{kN} afterglow sky map at early times resembles a wheel or a doughnut due to the emission from thermal electrons, enhanced by relativistic effects, dominating the observed flux.
At later times, the sky map is largely spherical with a remaining ring structure reflecting $(a)$ the assumed axial symmetry and $(b)$ the initial ejecta velocity distribution. The image size evolves monotonically, albeit not smoothly, reaching ${\simeq}10\,$\ac{mas} at $3000\,$days and ${\simeq}25\,$\ac{mas} at $20000\,$days. If the \ac{kN} afterglow \ac{LC} at its peak is dominated by the emission from thermal electrons, the image size is smaller, reaching ${\lesssim}5\,$\ac{mas}. Thus, the properties of the fast ejecta tail can be inferred from the sky map size and its evolution. Despite the asymmetry in the ejecta velocity distribution, however, the position of the image flux centroid $x_c$ does not deviate much from $0$, and is largest ($|x_c| < 0.4\,$\ac{mas}) at early times, in cases when the emission from thermal electrons dominates the observed flux. Notably, however, the asymmetry can lead to negative values of $x_c$ (assuming more on-axis observers), which, if observed, might hint at the equatorial nature of the fast ejecta tail. Crucially, the presence of the \ac{kN} ejecta can affect the \ac{GRB} afterglow sky map to an appreciable degree even if the former does not appreciably contribute to the total observed flux. For that to occur, however, the source must be observed sufficiently off-axis so that the early \ac{GRB} afterglow emission is beamed away, while the \ac{kN} afterglow emission, dominated at this time by the emission from thermal electrons, is instead beamed more toward an observer. Specifically, at $t_{\rm obs}=80\,$days and assuming $\theta_{\rm obs}=45\,$deg., the change in the inferred value of the apparent velocity $\beta_{\rm app}$ can reach $0.5\,c$. At smaller $\theta_{\rm obs}$ the \ac{kN} afterglow affects the \ac{GRB} afterglow sky map properties significantly less, and at $\theta_{\rm obs}\simeq20\,$deg. we find the effect to be negligible.
Importantly, the relative brightness between these two types of afterglow depends on their respective sets of free parameters, which are largely unconstrained. It is thus important to conduct a more thorough statistical analysis of the combined parameter space to assess the upper and lower limits of the degree to which the \ac{kN} afterglow influences the combined sky map properties. The detectability of the \ac{kN} and \ac{GRB} sky maps with the Next Generation Very Large Array (ngVLA), which is currently under development, will be discussed in a separate study by Eddins et al. (2023, in prep.). Overall, in order for the \ac{kN} afterglow itself to be detectable, the flux density at the \ac{LC} peak should be $\gtrsim 5\times10^{-3}\,$mJy in radio \cite{Kathirgamaraju:2019xwu}. For the \ac{BNS} merger simulations considered here, this is only possible at sufficiently high density, $n_{\rm ISM}\gtrsim0.005\,\,\text{cm}^{-3}$ at $D_{\rm L}\simeq40\,$Mpc. In order to distinguish the \ac{GRB} and \ac{kN} afterglows, $\theta_{\rm obs}$ should be much larger than the jet opening angle (\textit{e.g.}, see figure~9 in \citetalias{Nedora:2022kjv}). At the same time, at large $\theta_{\rm obs}$ the change in the position of the sky map flux centroid due to the presence of the \ac{kN} afterglow can become detectable. It is, however, difficult to determine what value of $x_{c}^{\rm GRB}-x_{c}^{\rm kN}$ can be resolved. At $n_{\rm ISM}=0.05\,\,\text{cm}^{-3}$ and $\theta_{\rm obs}=60\,$deg., $x_{c}^{\rm GRB}-x_{c}^{\rm kN}$ reaches $0.5\,$\ac{mas} for equal mass \ac{BNS} models within the first $200\,$days after the burst, which, in principle, should be detectable (Eddins et al. 2023, in prep.) with an angular resolution of $0.1\,$\ac{mas}.
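For a rough sense of the angular scales involved, a centroid displacement at a given apparent velocity can be converted to milliarcseconds with a back-of-the-envelope calculation (the numbers below are illustrative and not taken from the simulations):

```python
import math

MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # milliarcseconds per radian
C_KM_S = 299792.458                               # speed of light [km/s]
MPC_KM = 3.0857e19                                # one Mpc in km
DAY_S = 86400.0                                   # one day in seconds

def displacement_mas(beta_app, t_days, d_l_mpc):
    """Angular displacement of a flux centroid moving with apparent velocity
    beta_app (in units of c) for t_days, at luminosity distance d_l_mpc."""
    dx_km = beta_app * C_KM_S * t_days * DAY_S
    return dx_km / (d_l_mpc * MPC_KM) * MAS_PER_RAD

# e.g., beta_app = 2 over 200 days at 41.3 Mpc
dx = displacement_mas(2.0, 200.0, 41.3)   # ≈ 1.7 mas
```

Displacements of this order are comfortably above a $0.1\,$mas resolution, whereas the $x_{c}^{\rm GRB}-x_{c}^{\rm kN}$ offsets quoted in the text are several times smaller.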
\section*{Acknowledgements} The simulations were performed on the national supercomputer HPE Apollo Hawk at the High Performance Computing (HPC) Center Stuttgart (HLRS) under the grant number GWanalysis/44189 and on the GCS Supercomputer SuperMUC at the Leibniz Supercomputing Centre (LRZ) [project pn29ba]. \textit{Software:} We are grateful to the countless developers contributing to the open source projects that were used in the analysis of the simulation results of this work: \texttt{NumPy} \citep{numpy}, \texttt{Matplotlib} \cite{matplotlib}, and \texttt{SciPy} \cite{scipy}. \section*{Data Availability:} The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
\section{Introduction} Given a set $S$ of dominant rational maps on $\mathbb{P}^N$ and an infinite sequence $\gamma=(\theta_1,\theta_2, \dots)$ of elements of $S$, we are interested in two types of iterated processes attached to $\gamma$. Namely, the \emph{left iterative sequence} of maps, \[\gamma_n^-:=\theta_n\circ\theta_{n-1}\circ\dots\circ\theta_1\;\;\text{for all $n\geq1$},\] and the \emph{right iterative sequence} of maps, \[\;\;\,\gamma_n^+:=\theta_1\circ\theta_{2}\circ\dots\circ\theta_n\;\;\text{for all $n\geq1$}.\] In particular, given a suitable initial point $P\in\mathbb{P}^N$ we wish to study the \emph{left and right orbits} of the pair $(\gamma,P)$ given by \[\Orb_\gamma^-(P):=\big\{\gamma_n^-(P):n\geq0\big\}\;\;\text{and}\;\; \Orb_\gamma^+(P):=\big\{\gamma_n^+(P):n\geq0\big\}\] respectively; here we include the identity function $\gamma_0:=\text{Id}_{\mathbb{P}^N}$ for convenience. The analytic and topological properties of these orbits have been previously studied in complex dynamics \cite{random1,random2,random3,random4,random5,random6}, and in this paper, we consider arithmetic analogs of this work. Specifically, if both $P$ and the maps in $S$ are defined over $\overline{\mathbb{Q}}$ and $h:\mathbb{P}^N(\overline{\mathbb{Q}})\rightarrow\mathbb{R}_{\geq0}$ is the absolute Weil height function \cite[\S 3.1]{SilvDyn}, then we are interested in the growth rates of $h(\gamma_n^-(P))$ and $h(\gamma_n^+(P))$ as we move within the left and right orbits of $(\gamma,P)$ respectively. For sets of morphisms, the growth rates of $h(\gamma_n^-(P))$ for left iteration were first studied in \cite{Kawaguchi} and revisited in \cite{stochastic}. In particular, one may construct canonical heights in this setting and recover several familiar facts from the standard theory of arithmetic dynamics \cite{SilvDyn}, where one iterates a single function (i.e., $\gamma$ is a constant sequence).
However, there appears to be relatively little known about heights when iterating on the right. Moreover, when $N=1$, the arithmetic properties of $\Orb_\gamma^+(P)$ (for certain $P$ and certain $S$) control the size of the Galois extensions generated by the equations $\gamma_n^+(x)=0$ for $n\geq1$; see Section \ref{sec:Galois}. Therefore, the growth rate of $h(\gamma_n^+(P))$ may be of interest to those studying dynamically generated Galois groups. \begin{remark} A further application of our work on left and right orbits is to the growing field of monoid (or semigroup) arithmetic dynamics \cite{monoid1,monoid2,stochastic,IJNT,monoid3,monoid4}. Here, one is instead interested in understanding the arithmetic properties of \emph{total orbits}, \begin{equation}\label{eq:totalorbit} \Orb_S(P):=\{f(P):\,f\in M_S\}=\bigcup_\gamma \Orb_\gamma^+(P)=\bigcup_\gamma \Orb_\gamma^-(P); \end{equation} here $M_S$ is the monoid generated by $S$ (and the identity) with the operation of composition. However, in practice, if one understands left and right orbits for sufficiently many $\gamma$, then one has gained nontrivial insight into total orbits; for some examples of this heuristic, see \cite[Corollary 1.4]{stochastic}, \cite[Theorem 1.18]{Me:dyndeg}, \cite[Theorem 1.7]{IJNT}, Theorem \ref{thm:zero-one}, and Section \ref{sec:totalorbits}. \end{remark} As in the case of iterating a single map, some useful tools for analyzing heights in left and right orbits are the left and right dynamical degrees, i.e., the limiting values of $\deg(\gamma_n^-)^{1/n}$ and $\deg(\gamma_n^+)^{1/n}$ respectively. However, without much difficulty, one can construct examples for which the aforementioned limits do not exist \cite[Example 1.1]{Me:dyndeg}. Nevertheless, one expects that these limits converge for most sequences.
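For endomorphisms of $\mathbb{P}^N$ the degree is multiplicative under composition (for general dominant rational maps it can drop, which is one reason for the degree independence hypothesis below), so $\deg(\gamma_n^{\pm})$ is the same product of degrees in either direction of iteration, and $\deg(\gamma_n^{\pm})^{1/n}$ is a geometric mean along the sequence. A quick numerical sketch of this convergence, for a hypothetical two-map set with degrees $2$ and $3$ drawn uniformly (all names are illustrative):

```python
import math
import random

# Degrees of a hypothetical two-map set S; composition multiplies degrees
# exactly for endomorphisms of P^N.
degrees = {"phi1": 2, "phi2": 3}

def degree_growth(seq):
    """Return deg(gamma_n)^(1/n) for n = 1, ..., len(seq); since degree
    products are commutative, left and right iteration agree."""
    log_deg, out = 0.0, []
    for n, name in enumerate(seq, start=1):
        log_deg += math.log(degrees[name])
        out.append(math.exp(log_deg / n))
    return out

random.seed(0)
seq = [random.choice(["phi1", "phi2"]) for _ in range(5000)]
growth = degree_growth(seq)
# with nu = (1/2, 1/2), the almost-sure limit is the weighted geometric
# mean 2^(1/2) * 3^(1/2) = sqrt(6) ≈ 2.449
```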
To test this heuristic, we fix a probability measure $\nu$ on $S$, and extend to a probability measure $\bar{\nu}$ on the set of sequences of elements of $S$ via the product measure; see Section \ref{sec:notation} for more details. With this perspective, we prove that the limits of $\deg(\gamma_n^-)^{1/n}$ and $\deg(\gamma_n^+)^{1/n}$ (as we vary over sequences of $S$) are $\bar{\nu}$-almost surely constant and independent of the direction of iteration. Moreover, for finite sets $S$, we show that this constant bounds both $h(\gamma_n^-(P))^{1/n}$ and $h(\gamma_n^+(P))^{1/n}$ for large $n$; compare to \cite[Theorem 1.8]{Me:dyndeg} and \cite[Theorem 1]{KawaguchiSilverman}. However, to prove this second fact about heights we must enforce a condition on $S$, namely, that as we compose elements of $S$ we manage to avoid maps of degree one: \begin{definition} A set of dominant rational maps $S$ on $\mathbb{P}^N$ is called \emph{degree independent} if $\deg(f)\geq2$ for all $f$ in the semigroup generated by $S$; here the operation is composition. \end{definition} Likewise, since the maps in $S$ may have non-trivial indeterminacy loci, we must take care to ensure that the orbits we consider are actually well defined: \begin{definition} Let $f$ be in the compositional semigroup generated by $S$, and let $I_f\subset \mathbb{P}^N$ be the indeterminacy locus of $f$. Then we set $\mathbb{P}^N(\overline{\mathbb{Q}})_S:=\displaystyle{\mathbb{P}^N(\overline{\mathbb{Q}})\setminus\bigcup_{f} I_f}$. \end{definition} With these notions in place, we prove our most general result relating the growth rate of degrees and the growth rate of heights in orbits. The proof is an adaptation and combination of the arguments given for left iteration (only) in Theorems 1.3 and 1.8 of \cite{Me:dyndeg}. Namely, we apply Kingman's subadditive ergodic theorem, Birkhoff's ergodic theorem, and ideas from \cite{SilvermanPN}.
In what follows, $\mathbb{E}_\nu[\log\deg(\phi)]=\int_S\log\deg(\phi)d\nu$ denotes the expected value of the random variable $\log\deg$ on $S$. \begin{theorem}\label{thm:rationalmaps} Let $S$ be a set of dominant rational self-maps on $\mathbb{P}^N(\overline{\mathbb{Q}})$ and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] If $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then there is a constant $\delta_{S,\nu}$ such that the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}\deg(\gamma_n^{-})^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}\deg(\gamma_n^{+})^{1/n}\vspace{.1cm}\] hold (simultaneously) for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. \vspace{.25cm} \item[\textup{(2)}] If $S$ is finite and degree independent, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the bounds \vspace{.075cm} \[\limsup_{n\rightarrow\infty} h(\gamma_n^{\pm}(P))^{1/n}\leq\delta_{S,\nu}\vspace{.075cm}\] hold (simultaneously) for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. \end{enumerate} \end{theorem} Motivated by the existence of the constant $\delta_{S,\nu}$ we make the following definition: \begin{definition} For $(S,\nu)$ as in Theorem \ref{thm:rationalmaps}, we call $\delta_{S,\nu}$ the \emph{dynamical degree} of $(S,\nu)$.\end{definition} Although Theorem \ref{thm:rationalmaps} gives an upper bound on the growth rate of heights in orbits that is independent of the direction of iteration and the initial point, the same cannot be said in general for lower bounds. Heuristically, if $P$ has small height, then the direction of iteration can matter greatly. We illustrate this point with the following example. \begin{example}{\label{eg:left-right difference}} Let $S=\{x^2-x,3x^2\}$ with $\phi_1=x^2-x$ and $\phi_2=3x^2$, and define $\nu$ on $S$ determined by $\nu(\phi_1)=1/2=\nu(\phi_2)$. 
Then viewing $S$ as a set of maps on $\mathbb{P}^1$, we consider the possible left and right orbits of $P=1$ and compute that \vspace{.1cm} \begin{equation*} \begin{split} \liminf_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=0\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=2\qquad\text{($\bar{\nu}$-almost surely)}\\ \;\;\liminf_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=0\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=0\qquad\text{($\bar{\nu}$-probability $1/2$)} \\ \;\;\liminf_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=2\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=2\qquad\text{($\bar{\nu}$-probability $1/2$)} \end{split} \end{equation*} In particular, the direction of iteration may greatly affect the growth rate of heights in orbits. \end{example} However, for morphisms and sufficiently generic initial points, we are able to prove fairly uniform results. Namely, outside of a set of points $P$ of bounded height, we prove that the limits (not merely the limsups) of both $h(\gamma_n^-(P))^{1/n}$ and $h(\gamma_n^+(P))^{1/n}$ are equal to the dynamical degree, almost surely. Moreover, the dynamical degree is easy to compute for finite sets of morphisms; it is a weighted geometric mean of the degrees of the maps in $S$; compare to \cite[Theorem 1.5]{Me:dyndeg}. The main tools we use to prove this result are Birkhoff's Ergodic Theorem and the Law of Iterated Logarithms for simple random walks; see Section \ref{sec:notation} for statements. \begin{theorem}\label{thm:iteratedlogs} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two, and let $\nu$ be a discrete probability measure on $S$. Then there exists a constant $B_S$ such that the following statements hold: \vspace{.25cm} \begin{enumerate} \item[\textup{(1)}] The dynamical degree is given by $\displaystyle{\delta_{S,\nu}=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}}$. 
\vspace{.3cm} \item[\textup{(2)}] For $\bar{\nu}$-almost every $\gamma\in\Phi_S$, the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}h(\gamma_n^-(P))^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}h(\gamma_n^+(P))^{1/n}\vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>B_S$. \vspace{.4cm} \item[\textup{(3)}] If the variance $\sigma_{S,\nu}^2$ of $\log(\deg(\phi))$ is nonzero, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, \vspace{.1cm} \[\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}},\vspace{.3cm}\] hold (simultaneously) for all $P$ with $h(P)>B_S$. \vspace{.1cm} \end{enumerate} \end{theorem} We can rewrite the bounds in Theorem \ref{thm:iteratedlogs} to give improved estimates for $h(\gamma_n^-(P))$ and $h(\gamma_n^+(P))$ that work almost surely. In particular, these bounds have a main term of $\delta_{S,\nu}^n$ and are (at least in an asymptotic sense) independent of both $\gamma$ and $P$; hence, we have reduced the randomness of heights in generic left and right orbits. Specifically, suppose that $S$, $\nu$, $\delta_{S,\nu}$, $B_S$, $\sigma_{S,\nu}^2$ and $P$ satisfy the conditions of Theorem \ref{thm:iteratedlogs}, and let $\epsilon>0$. Then for almost every $\gamma$ there exists $N_{\gamma,P,\epsilon}$ such that \vspace{.15cm} \[\delta_{S,\nu}^{\, n-(1+\epsilon)\log_{\delta_{S,\nu}}(e)\,\sigma_{S,\nu}\sqrt{2n\log\log n}}\leq h(\gamma_n^{\pm}(P))\leq \delta_{S,\nu}^{\, n+(1+\epsilon)\log_{\delta_{S,\nu}}(e)\,\sigma_{S,\nu}\sqrt{2n\log\log n}} \vspace{.15cm}\] holds for all $n\geq N_{\gamma,P,\epsilon}$.
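The shape of these bounds can be illustrated with a toy model: for $h(P)$ large, $\log h(\gamma_n^{\pm}(P))$ behaves, up to bounded error, like the random walk $\sum_{i\leq n}\log\deg(\theta_i)$, whose fluctuations about $n\log\delta_{S,\nu}$ are governed by the law of the iterated logarithm. The sketch below (a hypothetical set with degrees $2$ and $3$ and $\nu$ uniform; purely illustrative) tracks the normalized deviation appearing in statement (3):

```python
import math
import random

# Toy model (illustrative only): log h(gamma_n(P)) modeled by the random
# walk sum_i log deg(theta_i), for a two-map set with degrees 2 and 3.
log_degs = [math.log(2), math.log(3)]
mu = sum(log_degs) / len(log_degs)       # log of the dynamical degree
sigma = math.sqrt(sum((x - mu) ** 2 for x in log_degs) / len(log_degs))

random.seed(1)
n = 200000
walk, max_dev = 0.0, 0.0
for k in range(1, n + 1):
    walk += random.choice(log_degs)      # one compositional step
    if k > 10:
        # the law-of-the-iterated-logarithm normalization from statement (3)
        band = sigma * math.sqrt(2 * k * math.log(math.log(k)))
        max_dev = max(max_dev, abs(walk - k * mu) / band)
# walk / n converges to mu = log(delta), while the normalized deviation
# max_dev stays of order one, consistent with the limsup value 1
```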
It would be interesting to know if and when similar bounds hold for rational functions; for a conjecture along these lines in the case of iterating a single rational map, see \cite[Conjecture 2]{SilvermanPN}. As an application, we can use Theorem \ref{thm:iteratedlogs} to count the number of iterates in left and right orbits of bounded height; compare to \cite[Corollary 1.16]{Me:dyndeg} and \cite[Proposition 3]{KawaguchiSilverman}. \vspace{.1cm} \begin{corollary}\label{cor:escapeptshtbds} Let $S$, $\nu$, $\delta_{S,\nu}$, and $B_S$ be as in Theorem \ref{thm:iteratedlogs}. Then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the limits \vspace{.15cm} \[\lim_{B\rightarrow\infty}\frac{\#\{Q\in\Orb_\gamma^-(P)\,:\,h(Q)\leq B\}}{\log(B)}=\frac{1}{\log\delta_{S,\nu}}=\lim_{B\rightarrow\infty}\frac{\#\{W\in\Orb_\gamma^+(P)\,:\,h(W)\leq B\}}{\log(B)} \vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>3B_S$. \end{corollary} Although Theorem \ref{thm:iteratedlogs} and Corollary \ref{cor:escapeptshtbds} give nice descriptions of the growth rate of heights in generic left and right orbits, it is natural to ask what can be said in the non-generic case. Is it possible to prove a result somewhere in-between Theorem \ref{thm:rationalmaps} and Theorem \ref{thm:iteratedlogs}? Likewise, can we prove a result for (suitable) infinite sets $S$? For left iteration of morphisms, we have canonical heights at our disposal \cite{stochastic,Kawaguchi}, but this is not the case when iterating on the right; see Remark \ref{rem:nocanht} below. Moreover, understanding heights in right orbits can be useful for understanding (generalized) dynamical Galois groups; see Section \ref{sec:Galois}. As a first step (with the case of left iteration in mind), we assume that $S$ has further properties, which we now discuss.
It is well known that if $\phi:\mathbb{P}^N(\overline{\mathbb{Q}})\rightarrow\mathbb{P}^N(\overline{\mathbb{Q}})$ is a morphism defined over $\overline{\mathbb{Q}}$ of degree $d_\phi$, then \begin{equation}\label{functoriality} h(\phi(P))=d_\phi h(P)+O_{\phi}(1)\;\;\;\text{for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$;} \vspace{.1cm} \end{equation} see, for instance, \cite[Theorem 3.11]{SilvDyn}. With this in mind, we let \begin{equation}{\label{htconstant}} C(\phi):=\sup_{P \in \mathbb{P}^N(\bar{\mathbb{Q}})} \Big\vert h(\phi(P))-d_\phi h(P)\Big\vert \end{equation} be the smallest constant needed for the bound in (\ref{functoriality}). Then, in order to control height growth rates for sequences in $S$, we define the following fundamental notion; compare to \cite{stochastic,Me:dyndeg,Kawaguchi}. \begin{definition}\label{def:htcontrolled} A set $S$ of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ is called \emph{height controlled} if the following properties hold: \vspace{.1cm} \begin{enumerate} \item $d_S:=\inf\{d_\phi:\phi\in S\}$ is at least $2$. \vspace{.15cm} \item $C_S:=\sup\{C(\phi): \phi\in S\}$ is finite. \vspace{.1cm} \end{enumerate} \end{definition} \begin{remark}We note first that any finite set of morphisms of degree at least $2$ is height controlled. To construct infinite collections, let $T$ be any finite set of non-constant maps on $\mathbb{P}^1$ and let $S_T=\{\phi\circ x^d\,: \phi\in T,\, d\geq2\}$. Then $S_T$ is height controlled and infinite; a similar construction works for $\mathbb{P}^N$ in any dimension. For another type of example, let $\mathcal{U}$ be the set of roots of unity in $\overline{\mathbb{Q}}$. Then $S=\{x^2+u\,:\, u\in \mathcal{U}\}$ is a height controlled collection of maps on $\mathbb{P}^1$. Moreover, it is worth pointing out that $S$ has a corresponding probability measure given by embedding $\mathcal{U}$ in the unit circle (in $\mathbb{C}$) and then taking the Haar measure on the circle.
\end{remark} With the notion of height controlled morphisms in place, we prove a result for right iteration in-between Theorem \ref{thm:rationalmaps} and Theorem \ref{thm:iteratedlogs} above; compare to stronger results for left iteration \cite[Theorem 1.2]{stochastic} and \cite[Theorem 1.15]{Me:dyndeg}. However, before stating this result, we make a few more notes on the differences between left and right iteration. First, as was mentioned before, canonical heights (in the usual sense) do not exist for right iteration. In particular, one must in principle keep track of both the corresponding liminf and limsup; see statement (1) of Theorem \ref{thm:zero-one} and Remark \ref{rem:nocanht} below. This is a drawback of right iteration. On the other hand, there are certain advantages as well. For instance, ideally one would like to determine whether or not the total orbit (\ref{eq:totalorbit}) has a certain property by sampling a right or left orbit (and testing that same property). As an example, if a right (or left) orbit of $P$ is finite with positive probability, is it true that $\Orb_S(P)$ is necessarily finite? This statement turns out to be true for right orbits and false for left orbits; for justification, see both Theorem \ref{thm:zero-one} below and \cite[Example 1.10]{Me:dyndeg}. \begin{theorem}\label{thm:zero-one} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.3cm} \begin{enumerate} \item[\textup{(1)}] For all $P$ and all $\gamma$, both \[\displaystyle{\liminf_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\;\;\; \text{and}\;\;\;\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\] exist and equal $h(P)+O(1)$.
\\[3pt] \item[\textup{(2)}] For all $P$, the total orbit $\Orb_S(P)$ of $P$ is infinite if and only if \vspace{.1cm} \[\qquad0<\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\qquad\qquad\text{($\bar{\nu}$-almost surely).}\vspace{.1cm}\] Hence, $\Orb_S(P)$ is finite if and only if $\Orb_\gamma^+(P)$ is finite with positive $\bar{\nu}$-probability. \\[3pt] \item[\textup{(3)}] If $\Orb_S(P)$ is infinite and $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then \vspace{.2cm} \[\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=\delta_{S,\nu}\qquad\qquad\text{($\bar{\nu}$-almost surely).}\] Moreover, the dynamical degree $\delta_{S,\nu}=\exp\big(\mathbb{E}_\nu[\log\deg(\phi)]\big)$ is given explicitly. \vspace{.025cm} \end{enumerate} \end{theorem} \begin{remark}{\label{rem:nocanht}} Note that the $\liminf$ and $\limsup$ in statement (1) of Theorem \ref{thm:zero-one} can be distinct. See Example \ref{eg:left-right difference} above. \end{remark} Having obtained results for left and right orbits, we turn to height counting problems for total orbits. Intuitively, one expects that if the maps in $S$ are related in some way (for instance, if they commute with each other), then this should cut down the number of possible points in total orbits. More formally, the asymptotic growth rate of the set \[\{Q\in\Orb_S(P)\,:\,h(Q)\leq B\}\] should depend on the structure of the compositional monoid $M_S$ that $S$ generates, at least for generic initial points $P$. As an illustration, we have the following related asymptotic, \[\lim_{B\rightarrow\infty}\frac{\#\Big\{f\in M_S\,:\,h\big(f(P)\big)\leq B\Big\}}{(\log B)^s}=\frac{1}{s!\cdot\prod_{i=1}^s\log\deg(\phi_i)}, \vspace{.25cm}\] when $S$ is a free basis (of cardinality $s$) for the commutative monoid $M_S$ and $P$ has sufficiently large height. 
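For instance (a back-of-the-envelope illustration with maps of our choosing), take $S=\{x^2,x^3\}$ on $\mathbb{P}^1$. These maps commute, and $M_S$ is the free commutative monoid that they generate, since $f=(x^2)^{\circ a_1}\circ(x^3)^{\circ a_2}=x^{2^{a_1}3^{a_2}}$ and the integers $2^{a_1}3^{a_2}$ are pairwise distinct. Moreover, $h(f(P))=2^{a_1}3^{a_2}\,h(P)$ for all $P$, so that for $P$ with $h(P)>0$, \[\#\big\{f\in M_S\,:\,h(f(P))\leq B\big\}=\#\Big\{(a_1,a_2)\in\mathbb{Z}_{\geq0}^2\,:\,a_1\log 2+a_2\log 3\leq \log(B/h(P))\Big\},\] a lattice point count in a right triangle of area $\frac{(\log B)^2}{2\log 2\log 3}+O(\log B)$. This agrees with the displayed limit when $s=2$.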
For justification of this fact, as well as a discussion of the problem of counting points of bounded height in total orbits more generally, see Section \ref{sec:totalorbits}. In particular, we discuss how this problem in dynamics relates to the (weighted) growth rate problem for semigroups and to restricted weighted compositions in combinatorics \cite{growth1, compositions, growth2, growth3}.\\[3pt] \textbf{Acknowledgements:} We are happy to thank Andrew Bridy, James Douthitt, Joseph Gunther, Vivian Olsiewski Healey, Trevor Hyde, Rafe Jones, and Joseph Silverman for discussions related to the work in this paper. \section{Notation and tools from probability}\label{sec:notation} We begin by fixing some notation. For more information on these standard constructions in probability, see \cite{Durrett, ProbabilityText}. \begin{align*} S \;\;\;& \text{a set of dominant rational self-maps on $\mathbb{P}^N$, all defined over $\overline{\mathbb{Q}}$}.\\ \nu \;\;\;& \text{a probability measure on $S$}.\\ \Phi_S \;\;& \text{the infinite product $\Phi_S=\Pi_{i=1}^\infty S=S^{\mathbb{N}}$}.\\[2pt] \bar{\nu} \;\;\;& \text{the product measure $\bar{\nu}=\Pi_{i=1}^\infty \nu$ on $\Phi_S$}. \\ \gamma\;\;\;& \text{an element of $\Phi_S$, viewed as an infinite sequence.}\\ \mathbb{E}_{\bar\nu}[f]\;\,& \text{the expected value $\mathlarger{\smallint}_{\hspace{-.1cm}\mathsmaller{\Phi_S}} f\,d\bar{\nu}$ of a random variable $f:\Phi_S\rightarrow\mathbb{R}$.} \end{align*} \begin{remark} It is likely that many of our results on dynamical degrees hold without assumptions on the field of definition of the maps in $S$. However, since we wish to study heights, we assume that every map in $S$ has $\overline{\mathbb{Q}}$-coefficients. In particular, the sets $S$ we consider are countable, and for this reason, we assume that $\nu$ is a discrete measure with $\nu(\phi)>0$ for all $\phi\in S$.
Likewise, since there may be no natural choice of probability measure $\nu$ on $S$, we keep the measures $\nu$ and $\bar{\nu}$ in much of the notation (e.g., $\mathbb{E}_{\bar\nu}[f]$) to remind the reader of the dependence of our formulas and bounds on the choice of $\nu$. \end{remark} When $S=\{\phi\}$ is a single map, a crucial tool for establishing the convergence of the limit defining the dynamical degree is Fekete's lemma (see the proof of \cite[Proposition 7]{SilvermanPN}), which states that if $a_n$ is a subadditive sequence of non-negative real numbers, then $\lim a_n/n$ exists. In turn, the following landmark theorem due to Kingman \cite{kingman} may be viewed as a random version of Fekete's lemma. In what follows, the expected value $\mathbb{E}_{\mu}[f]$ of a random variable $f: \Omega\rightarrow \mathbb{R}$ on a probability space $(\Omega,\Sigma, \mu)$ is the integral $\int_\Omega f d\mu$. \begin{theorem}[Kingman's Subadditive Ergodic Theorem]\label{thm:kingman} Let $T$ be a measure preserving transformation on a probability space $(\Omega,\Sigma, \mu)$, and let $(g_n)_{n\geq1}$ be a sequence of $L^1$ random variables that satisfy the subadditivity relation \begin{equation}\label{subadd} g_{m+n}\leq g_n+g_m\circ T^n \end{equation} for all $n,m\geq1$. Then there exists a $T$-invariant function $g$ such that \[\lim_{n\rightarrow\infty}\frac{g_n(x)}{n}=g(x)\] for $\mu$-almost every $x\in\Omega$. Moreover, if $T$ is ergodic, then $g$ is constant and \vspace{.1cm} \[\lim_{n\rightarrow\infty}\frac{g_n(x)}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_\mu[g_n]}{n} =\inf_{n\geq1}\frac{\mathbb{E}_\mu[g_n]}{n}\] for $\mu$-almost every $x\in\Omega$. \end{theorem} \begin{remark} A transformation $T:\Omega\rightarrow\Omega$ on a probability space $(\Omega,\Sigma,\mu)$ is called \emph{ergodic} if for all $E\in\Sigma$ such that $T^{-1}(E)=E$, either $\mu(E)=0$ or $\mu(E)=1$. \end{remark} We also need a similar (yet weaker) ergodic theorem due to Birkhoff.
\begin{theorem}[Birkhoff's Ergodic Theorem]\label{birk} If $T$ is an ergodic, measure preserving transformation on a probability space $(\Omega,\Sigma, \mu)$, then for every random variable $f\in L^1(\Omega)$, \begin{equation}\label{birkhoff} \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{j=0}^{n-1} f\circ T^j(x)=\mathbb{E}_\mu[f] \end{equation} for $\mu$-almost every $x\in\Omega$. \end{theorem} To apply Kingman's Subadditive Ergodic Theorem to dynamical degrees, we use the following well known example of an ergodic, measure preserving transformation. In particular, the lemma below is a simple consequence of Kolmogorov's $0$\,-$1$ law \cite[Theorem 10.6]{ProbabilityText}; for nice further discussions, see \cite[Example 7.1.6]{Durrett} or \cite[Example 5.5]{steve2} and \cite[Exercise 5.11]{steve2}. \begin{lemma}\label{shift} Let $S$ be a set with probability measure $\nu$ and let $(\Phi_S,\bar{\nu})$ be the corresponding infinite product space. Then the shift map, \[T\big(\theta_1,\theta_2, \dots \big)=(\theta_2, \theta_3,\dots)\] is an ergodic, measure preserving transformation on $\Phi_S$. \end{lemma} \begin{remark} When $S$ is a finite set, the probability space $\Phi_S$ and the map $T$ as in Lemma \ref{shift} are often called Bernoulli schemes and Bernoulli shifts, respectively. \end{remark} Finally, to obtain the improved height bounds in part (3) of Theorem \ref{thm:iteratedlogs} with a main term of $\delta_{S,\nu}^n$, we use the following result due to Hartman and Wintner, known as the Law of the Iterated Logarithm; see \cite[Theorem 8.11.3]{Durrett}. As with certain classical theorems in probability (e.g., the Law of Large Numbers, the Central Limit Theorem, etc.), the Law of the Iterated Logarithm for simple random walks is normally stated in terms of independent and identically distributed (or \emph{i.i.d.} for short) random variables; see \cite[\S2.1]{Durrett} or \cite[\S10]{ProbabilityText} for a definition and discussion of i.i.d.\ sequences.
However, for our purposes, it suffices to know that if $f:S\rightarrow\mathbb{R}$ is any $\nu$-measurable function, then the corresponding projection maps $X_{n,f}:\Phi_S\rightarrow\mathbb{R}$ on the product space $(\Phi_S,\bar{\nu})$ given by $X_{n,f}(\theta_1,\theta_2, \dots)=f(\theta_n)$ form an i.i.d.\ sequence of random variables; this is a simple consequence of the relevant definitions \cite[Corollary 10.2]{ProbabilityText}. \begin{theorem}[Law of the Iterated Logarithm]\label{thm:lawiterlogs} Suppose that $X_1$, $X_2$, $\dots$ are i.i.d. random variables on $(\Omega,\Sigma, \mu)$ with $\mathbb{E}_\mu[X_i]=0$ and $\mathbb{E}_\mu[X_i^2]=1$. Then, if $S_n=X_1+\dots+X_n$ denotes the $n$-th partial sum, we have that \begin{equation}\label{brownian} \qquad\limsup_{n\rightarrow\infty} \frac{\pm S_n}{\sqrt{2n\log\log n}}=1\qquad\text{($\mu$-almost surely).} \end{equation} \end{theorem} \begin{remark} Interestingly, the Law of the Iterated Logarithm (for simple random walks) stated above is proven by first establishing the analogous fact for Brownian motion and then deducing (\ref{brownian}) from that case. \end{remark} \section{Rational maps: dynamical degrees and height bounds} In this section, we prove Theorem \ref{thm:rationalmaps} on dynamical degrees and height bounds for rational maps; for strengthened results on morphisms, see Section \ref{sec:morphisms}. \begin{proof}[(Proof of Theorem \ref{thm:rationalmaps})] We begin with the proof of statement (1) on dynamical degrees. For $n\geq1$, we define the random variables $g_n^{-}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ and $g_n^{+}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by \[g_n^{-}(\gamma):=\log\deg(\gamma_n^{-})\;\;\text{and}\;\;g_n^{+}(\gamma):=\log\deg(\gamma_n^{+})\] respectively. Note that each $g_n^{\pm}$ is non-negative since $S$ is a collection of dominant maps. We will show that the sequences $(g_n^-)_{n\geq1}$ and $(g_n^+)_{n\geq1}$ satisfy the hypotheses of Kingman's Subadditive Ergodic Theorem.
Note first that each $g_n^{\pm}$ factors through the finite product $S^n$, and that $S^n$ (a countable set) is equipped with the discrete product measure; a finite product of discrete spaces is discrete. In particular, $g_n^{\pm}$ is $\bar{\nu}$-measurable by \cite[Corollary 10.2]{ProbabilityText}. Likewise, define $f_i:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by $f_i(\gamma)=\log\deg(\theta_i)$ for $\gamma=(\theta_s)_{s=1}^\infty$. Then $f_i$ is also measurable by \cite[Corollary 10.2]{ProbabilityText}. Moreover, we see that $g_n^{\pm}\leq\sum_{i=1}^nf_i$, since \begin{equation}\label{degbd} \deg(F\circ G)\leq\deg(F)\deg(G)\;\;\;\;\text{for any}\; F,G\in \Dom(\mathbb{P}^N); \end{equation} here, $\Dom(\mathbb{P}^N)$ is the set of dominant self-maps on $\mathbb{P}^N$. In particular, \[\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]\leq\sum_{i=1}^n\mathbb{E}_{\bar{\nu}}[f_i]=n\,\mathbb{E}_{\bar{\nu}}[f_1]=n\,\mathbb{E}_\nu[\log\deg(\phi)];\] here we use that the $f_i$ form an i.i.d.\ sequence of random variables. Hence, each $g_n^{\pm}$ is an $L^1$ function, since $\mathbb{E}_\nu[\log\deg(\phi)]$ is finite by assumption. Now we check the subadditivity relation in (\ref{subadd}), a simple consequence of (\ref{degbd}). Let $n,m>0$, let $\gamma=(\theta_s)_{s=1}^\infty$, and let $T$ be the shift map on $\Phi_S$. Then we compute that \vspace{.25cm} \begin{equation*} \begin{split} g_{n+m}^{\,-}(\gamma)=\log\deg(\theta_{m+n}\circ\dots\circ\theta_1)&\leq\log\deg(\theta_{m+n}\circ\dots\circ\theta_{n+1})+\log\deg(\theta_n\circ\dots\circ\theta_1)\\[3pt] &=g_m^-(T^n(\gamma))+g_n^-(\gamma)=g_n^-(\gamma)+g_m^-(T^n(\gamma)), \vspace{.25cm} \end{split} \end{equation*} by (\ref{degbd}).
Likewise for right iteration, we see that \vspace{.1cm} \begin{equation*} \begin{split} g_{n+m}^{\,+}(\gamma)=\log\deg(\theta_{1}\circ\dots\circ\theta_{n+m})&\leq\log\deg(\theta_{1}\circ\dots\circ\theta_{n})+\log\deg(\theta_{n+1}\circ\dots\circ\theta_{n+m})\\[3pt] &=g_n^+(\gamma)+g_m^+(T^n(\gamma)). \vspace{.25cm} \end{split} \end{equation*} In particular, Theorem \ref{thm:kingman} and Lemma \ref{shift} together imply that \vspace{.2cm} \begin{equation}\label{kinglim} \lim_{n\rightarrow\infty}\log\deg(\gamma_n^{\pm})^{1/n}=\lim_{n\rightarrow\infty}\frac{g_n^{\pm}(\gamma)}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]}{n}=\inf_{n\geq1}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]}{n} \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. However, a priori the limits \[\log\delta_{S,\nu}^-:=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}\;\;\text{and}\;\; \log\delta_{S,\nu}^+:=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}\] could be distinct (in fact, if we were to allow maps over $\mathbb{C}$ so that $S$ could be uncountable, then we expect that this could be the case). But $S$ is countable and discrete by assumption, and so these limits are in fact equal. To see this, we define the bijections $\tau_n:S^n\rightarrow S^n$ given by \[\tau_n(\theta_1,\dots,\theta_n)=(\theta_n,\dots,\theta_1)\] and let $\nu_n=\nu\times\dots\times\nu$ be the product probability measure on $S^n$. Then it follows from the definition of $\nu_n$ that \[\nu_n(\theta_1,\dots,\theta_n)=\nu(\theta_1)\cdots\nu(\theta_n)=\nu(\theta_n)\cdots\nu(\theta_1)=\nu_n(\tau_n(\theta_1,\dots,\theta_n));\] see \cite[\S10]{ProbabilityText}.
Now let $G_n^{\pm}$ be the random variables on $S^n$ given by \vspace{.1cm} \[G_n^-(\theta_1,\dots,\theta_n)=\log\deg(\theta_n\circ\dots\circ\theta_1)\;\;\;\text{and}\;\;\;G_n^+(\theta_1,\dots,\theta_n)=\log\deg(\theta_1\circ\dots\circ\theta_n).\vspace{.1cm}\] In particular, it is straightforward to check that $G_n^-=G_n^+\circ\tau_n$. Therefore, since $S^n$ is countable and discrete, $\tau_n$ is a bijection, and the series below are absolutely convergent, we have:\vspace{.1cm} \begin{equation}{\label{eq:directionswap}} \mathbb{E}_{\nu_n}[G_n^{-}]=\sum_{x\in S^n}G_n^-(x)\nu_{n}(x)=\sum_{x\in S^n}G_n^+(\tau_n(x))\nu_{n}(\tau_n(x))=\sum_{y\in S^n}G_n^+(y)\nu_{n}(y)=\mathbb{E}_{\nu_n}[G_n^{+}].\vspace{.1cm} \end{equation} On the other hand, $g_n^{\pm}$ factors through $G_n^{\pm}$, so that \cite[Theorem 10.4]{ProbabilityText} and (\ref{eq:directionswap}) together imply that \vspace{.1cm} \begin{equation}\label{eq:swap2} \mathbb{E}_{\bar{\nu}}[g_n^{-}]=\mathbb{E}_{\nu_n}[G_n^{-}]=\mathbb{E}_{\nu_n}[G_n^{+}]=\mathbb{E}_{\bar{\nu}}[g_n^{+}]\qquad\text{for all $n\geq1$}. \vspace{.1cm} \end{equation} Hence, it follows from (\ref{kinglim}) and (\ref{eq:swap2}) that \begin{equation}{\label{eq:swap3}} \lim_{n\rightarrow\infty}\log\deg(\gamma_n^{-})^{1/n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}=\lim_{n\rightarrow\infty}\log\deg(\gamma_n^{+})^{1/n} \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$; here we also use that the intersection of almost sure events is almost sure.
Moreover, applying the exponential map to (\ref{eq:swap3}) and exchanging $\exp$ with the limit (justified by continuity) gives \begin{equation}\label{eq:dendegdef} \lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}:=\exp\Big(\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}\Big)=\exp\Big(\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}\Big) \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ as claimed. Now for the proof of statement (2) of Theorem \ref{thm:rationalmaps}. Suppose that $S$ is finite and degree independent. Let $k\geq1$ be an integer, and let \begin{equation}\label{def:strings} M_{S,k}:=\big\{\theta_1\circ\dots\circ\theta_k\,\big\vert\;(\theta_1,\dots,\theta_k)\in S^k\big\} \end{equation} be the set of possible functions generated by $k$-term strings of elements of $S$. Then a standard triangle inequality estimate (see the proof of \cite[Theorem 3.11]{SilvDyn}) implies that \begin{equation}\label{rat:bd1} \,h(f(Q))\leq\deg(f) \,h(Q)+C(k,S)\qquad \text{for all $f\in M_{S,k}$ and all $Q\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$}. \end{equation} To see this, note that there is such a constant for each $f$ and only finitely many $f$'s, since $S$ is a finite set. Moreover, it is important to note that the estimate above does not depend on the direction of iteration (but only on the length of the string).
In particular, we see that if $P\in \mathbb{P}^N(\overline{\mathbb{Q}})_S$, if $n\geq1$, and if $F_{nk}=f_n\circ f_{n-1}\circ\dots\circ f_1$ is an arbitrary element of $M_{S,nk}$ for some choice of $f_i\in M_{S,k}$, then repeated application of the bound in (\ref{rat:bd1}) implies that \vspace{.25cm} \begin{equation}\label{eq:stringbd} \begin{split} h(F_{nk}(P))\leq&\deg(f_n)\deg(f_{n-1})\dots\deg(f_1)\Scale[.84]{\Big(h(P)+\frac{C(k,S)}{\deg(f_1)}+\frac{C(k,S)}{\deg(f_1)\deg(f_2)}+\dots+\frac{C(k,S)}{\deg(f_1)\dots\deg(f_n)}\Big)} \\[5pt] \leq&\deg(f_n)\deg(f_{n-1})\dots\deg(f_1) \Big(h(P)+C(k,S)\Big). \vspace{.15cm} \end{split} \end{equation} Here we use our assumption that $S$ is degree independent, so that $\deg(f_i)\geq2$ for all $i$. Now we apply this bound to sequences. For $\gamma=(\theta_s)_{s=1}^{\infty}\in\Phi_S$ and $i,k\geq1$, let \vspace{.15cm} \[f_{i,k}^-(\gamma)=\theta_{ik}\circ\theta_{ik-1}\circ\dots\circ\theta_{(i-1)k+1}\;\;\;\text{and}\;\;\;f_{i,k}^+(\gamma)=\theta_{(i-1)k+1}\circ\theta_{(i-1)k+2}\circ\dots\circ\theta_{ik}. \vspace{.15cm} \] In particular, it is straightforward to check that \vspace{.15cm} \[\gamma_{nk}^-=f_{n,k}^-(\gamma)\circ f_{n-1,k}^-(\gamma)\circ\dots \circ f_{1,k}^-(\gamma)\;\;\; \text{and}\;\;\;\gamma_{nk}^+=f_{1,k}^+(\gamma)\circ f_{2,k}^+(\gamma)\circ\dots\circ f_{n,k}^+(\gamma). \vspace{.15cm} \] Moreover, each $f_{i,k}^{\pm}(\gamma)\in M_{S,k}$ is the composition of a $k$-term string from $S$.
Therefore, (\ref{eq:stringbd}) above applied separately to $F_{nk}=\gamma_{nk}^-$ and $F_{nk}=\gamma_{nk}^+$ implies that \vspace{.15cm} \begin{equation}\label{rat:bd2} \begin{split} h(\gamma_{nk}^-(P))\leq\deg(f_{1,k}^-(\gamma))\deg(f_{2,k}^-(\gamma))\dots\deg(f_{n,k}^-(\gamma)) \,C(k,S,P)\\[8pt] h(\gamma_{nk}^+(P))\leq\deg(f_{1,k}^+(\gamma))\deg(f_{2,k}^+(\gamma))\dots\deg(f_{n,k}^+(\gamma))\,C(k,S,P) \end{split} \end{equation} holds for all $n,k\geq1$, all $\gamma\in\Phi_{S}$ and all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$; here $C(k,S,P):=h(P)+C(k,S)$, and we reverse the order of the product of the degrees for left iteration,\vspace{.1cm} \[\deg(f_{n,k}^-(\gamma))\deg(f_{n-1,k}^-(\gamma))\dots\deg(f_{1,k}^-(\gamma))=\deg(f_{1,k}^-(\gamma))\deg(f_{2,k}^-(\gamma))\dots\deg(f_{n,k}^-(\gamma)),\vspace{.1cm} \] to streamline the argument to come. From here we use Birkhoff's Ergodic Theorem to control the right-hand side of (\ref{rat:bd2}) above. Namely, let $T_{(k)}:\Phi_S\rightarrow\Phi_S$ denote the $k$-shift map, $T_{(k)}:=T^k=T\circ T\circ\dots\circ T$. In particular, since the shift map $T$ is ergodic and measure preserving by Lemma \ref{shift}, so is $T_{(k)}$ for all $k\geq1$. Now consider the random variables $F_{(k)}^{-}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ and $F_{(k)}^{+}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by \vspace{.1cm} \[ F_{(k)}^{\pm}(\gamma)=\frac{\log\deg(\gamma_k^{\pm})}{k}=\frac{\log\deg(f_{1,k}^{\pm}(\gamma))}{k}\;\;\;\;\text{for $\gamma\in\Phi_S$}.\] Then, it follows from the definition of $f_{i,k}^\pm$ that $F_{(k)}^{\pm}\circ T_{(k)}^{i-1}=1/k\cdot\log\deg(f_{i,k}^{\pm})$.
Hence, rewriting the bounds in (\ref{rat:bd2}) and taking $nk$-th roots, we see that \vspace{.1cm} \begin{equation}\label{rat:bd3} h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq \bigg(\exp\frac{1}{n}\sum_{j=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^j(\gamma)\big)\bigg) \,C(k,S,P)^{1/nk}.\end{equation} In particular, (\ref{rat:bd3}) implies that \begin{equation}\label{rat:bd4} \limsup_{n\rightarrow\infty}h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq\limsup_{n\rightarrow\infty}\bigg(\exp\,\frac{1}{n}\sum_{i=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^i(\gamma)\big)\bigg). \end{equation} However, Birkhoff's Ergodic Theorem \ref{birk} implies that \begin{equation}\label{rat:lim} \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^i(\gamma)\big)=\mathbb{E}_{\bar{\nu}}[F_{(k)}^{\pm}] \end{equation} for almost every $\gamma\in\Phi_{S}$; note that this claim is independent of the point $P$. Moreover, since a countable intersection of almost sure events is almost sure, we see that the limit in (\ref{rat:lim}) is \textbf{true for all $k$} (for both left and right iteration), for almost every $\gamma\in\Phi_S$. On the other hand, (\ref{eq:swap2}) above implies that \begin{equation}\label{eq:exp=} \mathbb{E}_{\bar{\nu}}[F_{(k)}^{-}]=\frac{\mathbb{E}_{\bar{\nu}}[g_k^{-}]}{k}=\frac{\mathbb{E}_{\bar{\nu}}[g_k^{+}]}{k}=\mathbb{E}_{\bar{\nu}}[F_{(k)}^{+}]. \end{equation} Hence, the limit on the right-hand side of (\ref{rat:lim}) does not depend on the direction. Therefore, (\ref{rat:bd4}), (\ref{rat:lim}), and the fact that the exponential function is continuous together imply that \vspace{.1cm} \begin{equation}\label{rat:bigbd} \limsup_{n\rightarrow\infty}h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg) \vspace{.1cm} \end{equation} holds for all $k$ (for both left and right iteration), for almost every $\gamma\in\Phi_S$. From here, we handle left and right iteration separately and begin with left iteration.
In particular, we show that the full limsup in part (2) of Theorem \ref{thm:rationalmaps} can be computed along the subsequence of multiples of $k$ (for any fixed $k\geq1$). This line of reasoning does not work for right iteration in general; see Example \ref{eg:left-right difference}. To do this, define constants \begin{equation}\label{rat:degbd} d_{S,k}:=\max_{\substack{f\in M_{S,r}\\ 0\leq r<k}}\deg(f)\;\;\; \text{and}\;\;\; B_{S,k}:=\max_{0\leq r<k} C(r,S); \end{equation} here, we remind the reader that $C(r,S)$ is the height bound constant given by \vspace{.1cm} \begin{equation}\label{rat:degbd2} C(r,S)=\max_{f\in M_{S,r}}\sup_{Q\in\mathbb{P}^N(\overline{\mathbb{Q}})}\{h(f(Q))-\deg(f)h(Q)\}. \vspace{.1cm} \end{equation} In particular, both $d_{S,k}$ and $B_{S,k}$ are finite since $S$ is a finite set. From here we proceed as in the proof of \cite[Proposition 12]{SilvermanPN}. Namely, for any $k\geq1$ and $m\geq k$, we can write $\gamma_m^-=f\circ\gamma_{nk}^-$ for some $f\in M_{S,r}$, some $0\leq r< k$, and some $n\geq1$.
With this in mind, \vspace{.15cm} \begin{equation}\label{rat:subseq} \begin{split} \limsup_{m\rightarrow\infty} h(\gamma_m^{-}(P))^{1/m}&=\limsup_{n\rightarrow\infty} \max_{0\leq r<k} h(\gamma_{r+nk}^{-}(P))^{1/(r+nk)}\\[5pt] &\leq\limsup_{n\rightarrow\infty} \Big(d_{S,k}\,h(\gamma_{nk}^{-}(P))+B_{S,k}\Big)^{1/nk}\;\;\;\;\;\ \text{by (\ref{rat:bd1}), (\ref{rat:degbd}), and (\ref{rat:degbd2})}\\[5pt] &=\limsup_{n\rightarrow\infty} h(\gamma_{nk}^{-}(P))^{1/nk} \vspace{.1cm} \end{split} \end{equation} Hence, combining the bound in (\ref{rat:bigbd}) with (\ref{rat:subseq}), we see that \vspace{.2cm} \begin{equation}{\label{rat:bd6}} \limsup_{m\rightarrow\infty} h(\gamma_m^{-}(P))^{1/m}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg)=\exp\bigg(\frac{\mathbb{E}_{\bar{\nu}}[\log\deg(\gamma_k^-)]}{k}\bigg) \vspace{.2cm} \end{equation} holds for all $k\geq1$, for all $P\in\mathbb{P}^N(\bar{\mathbb{Q}})_S$, for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. Now for iteration on the right. For any $k\geq1$ and $m\geq k$, write $\gamma_m^+=\gamma_{nk}^+\circ f$ for some $f\in M_{S,r}$, some $0\leq r<k$, and some $n\geq1$. Now let \[M_{S,k}(P):=\big\{Q\in\mathbb{P}^N(\overline{\mathbb{Q}})\,:\, Q=f(P)\;\text{for some $f\in M_{S,r}$ and $0\leq r<k$} \big\}\] In particular, $M_{S,k}(P)$ is a finite set of points since $S$ is finite. Therefore, \[\mathcal{C}_{S,k,P}:=\max_{Q\in M_{S,k}(P)}\{h(Q)+B_{S,k}\}\] is a finite constant. Moreover, $h(\gamma_m^+(P))=h(\gamma_{nk}^+(Q))$ for some $Q\in M_{S,k}(P)$ by construction. On the other hand, (\ref{eq:stringbd}) and (\ref{rat:bd3}) hold for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. In particular, these bounds hold for all $Q\in M_{S,k}(P)$. 
Therefore, \begin{equation}{\label{rat:bd7}} h(\gamma_m^+(P))^{1/m}=h(\gamma_{nk}^+(Q))^{1/m}\leq h(\gamma_{nk}^+(Q))^{1/nk}\leq \bigg(\exp\frac{1}{n}\sum_{j=0}^{n-1}F_{(k)}^{+}\big(T_{(k)}^j(\gamma)\big)\bigg) \,\mathcal{C}_{S,k,P}^{1/nk}. \end{equation} As before, letting $m\rightarrow\infty$ (and therefore $n\rightarrow\infty$), Birkhoff's Ergodic Theorem implies that \begin{equation}{\label{rat:bd8}} \limsup_{m\rightarrow\infty} h(\gamma_m^{+}(P))^{1/m}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg)=\exp\bigg(\frac{\mathbb{E}_{\bar{\nu}}[\log\deg(\gamma_k^-)]}{k}\bigg) \vspace{.2cm} \end{equation} holds for all $k\geq1$, for all $P\in\mathbb{P}^N(\bar{\mathbb{Q}})_S$, for $\bar{\nu}$-almost every $\gamma\in\Phi_S$; recall that the expected values of $F_{(k)}^-$ and $F_{(k)}^+$ are equal by (\ref{eq:exp=}). In particular, letting $k\rightarrow\infty$, we deduce from (\ref{eq:dendegdef}) and our combined bounds in (\ref{rat:bd6}) and (\ref{rat:bd8}) that for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the bounds \vspace{.1cm} \[\limsup_{m\rightarrow\infty} h(\gamma_m^{\pm}(P))^{1/m}\leq \delta_{S,\nu}\] hold (simultaneously) for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. This completes the proof of Theorem \ref{thm:rationalmaps}. \end{proof} \section{Morphisms: dynamical degrees and height bounds}\label{sec:morphisms} Throughout this section, let $S$ be a height controlled set of endomorphisms on $\mathbb{P}^N$. Ideally, one would like to strengthen part (2) of Theorem \ref{thm:rationalmaps} for rational maps in two ways: to replace the limsup with a limit, and to replace the inequality with an equality; compare to \cite[Conjecture 6.d]{KawaguchiSilverman} and \cite[Conjecture 1.b]{SilvermanPN}. We succeed in proving this when $S$ is a finite set and the initial point $P$ has sufficiently large height.
Moreover (perhaps surprisingly), the resulting limit is (almost surely) independent of the direction of iteration. To prove both Theorems \ref{thm:iteratedlogs} and \ref{thm:zero-one}, we need the following generalization of Tate's telescoping argument. In what follows, $M_S$ is the monoid generated by $S$ under composition, and $d_S$ and $C_S$ are the height controlled constants in Definition \ref{def:htcontrolled}. \begin{lemma}{\label{lem:tate}} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$, and let $d_S$ and $C_S$ be the corresponding height controlling constants. Then for all $\rho\in M_S$, \[\bigg|\frac{h(\rho(Q))}{\deg(\rho)}-h(Q)\bigg|\leq \frac{C_S}{d_S-1} \;\;\;\; \text{for all $Q\in \mathbb{P}^N(\overline{\mathbb{Q}})$.}\] \end{lemma} \begin{proof} Suppose that $\rho=\theta_r\circ\theta_{r-1}\circ\dots\circ\theta_1$ for $\theta_i\in S$, and let $\theta_0$ be the identity map on $\mathbb{P}^N$. Then define \[\rho_{i}:=\theta_i\circ\theta_{i-1}\circ\dots\circ\theta_1\circ\theta_0 \;\;\;\; \text{for $0\leq i\leq r$}.\vspace{.05cm}\] Note that $\rho=\rho_r$ and $\rho_0=\theta_0$ is the identity map. In particular, inspired by Tate's telescoping argument, we rewrite \vspace{.05cm} \begin{equation}{\label{Tate}} \begin{split} \bigg|\frac{h(\rho(Q))}{\deg(\rho)}-h(Q)\bigg|&=\bigg|\sum_{i=0}^{r-1}\frac{h(\rho_{r-i}(Q))}{\deg(\rho_{r-i})}- \frac{h(\rho_{r-i-1}(Q))}{\deg(\rho_{r-i-1})}\bigg|\\[5pt] &\leq \sum_{i=0}^{r-1}\bigg|\frac{h(\rho_{r-i}(Q))}{\deg(\rho_{r-i})}- \frac{h(\rho_{r-i-1}(Q))}{\deg(\rho_{r-i-1})}\bigg| \\[5pt] &=\sum_{i=0}^{r-1}\frac{\Big|h(\rho_{r-i}(Q))-\deg(\theta_{r-i})h(\rho_{r-i-1}(Q))\Big|}{\deg(\rho_{r-i})} \\[5pt] &\leq \sum_{i=1}^{r}\frac{C_S}{(d_S)^{i}}\leq \sum_{i=1}^{\infty}\frac{C_S}{(d_S)^i}=\frac{C_S}{d_S-1}. \end{split} \end{equation} This completes the proof of Lemma \ref{lem:tate}.
\end{proof} With this height bound in place, we are nearly ready to prove our main result for sets of morphisms, Theorem \ref{thm:iteratedlogs}. In fact, we are able to prove a stronger result. Namely, both $h(\gamma_n^{\pm}(P))^{1/n}$ approach the dynamical degree (almost surely) whenever $P$ is a so-called escape point for $S$; see Definition \ref{def:escapepts} below. Moreover, every point $P$ of sufficiently large height is an escape point for $S$, and we therefore recover Theorem \ref{thm:iteratedlogs}. \begin{remark} This improved version can be useful for analyzing dynamical Galois groups; see Section \ref{sec:Galois}. For instance, if $S=\{x^{d_1}+c_1, \dots, x^{d_s}+c_s\}$ is a set of unicritical polynomials, then the right orbits of $P=0$ (i.e., the critical orbits) control the ramification in the associated towers of splitting fields; see Proposition \ref{prop:discriminant} below. However, $P=0$ does not have large enough height to apply Theorem \ref{thm:iteratedlogs} directly. Nevertheless, $P=0$ is very often an escape point for $S$ (see Corollary \ref{cor:unicritescape} below), in which case the conclusions of Theorem \ref{thm:iteratedlogs} still hold. \end{remark} To define escape points, recall that $M_{S,r}$ denotes the set of functions generated by tuples of elements of $S$ of length $r$; see (\ref{def:strings}) above. Moreover, by convention, $M_{S,0}$ is the singleton set containing the identity function. \begin{definition}\label{def:escapepts} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ and define $B_S:=C_S/(d_S-1)$. If there exists $r\geq0$ such that $h(g(P))> B_S$ for all $g\in M_{S,r}$, then we say that $P$ is an \emph{escape point} for $S$. Moreover, we call the minimum such value of $r$ the \emph{escape level} of $P$. \end{definition} The importance of escape points is explained by the following auxiliary result.
Namely, if $P$ is an escape point for $S$, then we can bound quantities of the form $h(f(P))/\deg(f)$ from below (in a nontrivial way). This may be viewed as analogous to $P$ having positive canonical height when iterating a single function. However, this is not a perfect analogy, since canonical heights do not exist in general for right iteration; see Example \ref{eg:left-right difference} above. \begin{lemma}\label{lem:escapept} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two and let $P$ be an escape point for $S$ with escape level $r\geq0$. Then there exist positive constants $B_{S,P,1}$ and $B_{S,P,2}$ (depending on $P$) such that \[0<B_{S,P,1}\leq\frac{h(f(P))}{\deg(f)}\leq B_{S,P,2}\] for all $f\in M_{S,n}$ with $r\leq n$. \end{lemma} \begin{proof} The upper bound on $h(f(P))/\deg(f)$ follows directly from Lemma \ref{lem:tate} applied to the map $\rho=f$ and the point $Q=P$. For the lower bound, let $r\geq0$ be the escape level of $P$ and let $f\in M_{S,n}$ for some $n\geq r$. Then we can write $f=j\circ g$ for some $j\in M_{S,n-r}$ and some $g\in M_{S,r}$. Then Lemma \ref{lem:tate} applied to the map $\rho=j$ and the point $Q=g(P)$ implies that \vspace{.1cm} \begin{equation}\label{lbdespace} \begin{split} \frac{h(f(P))}{\deg(f)}=\frac{h(j(g(P)))}{\deg(j)\deg(g)}&\geq\frac{1}{\deg(g)}\big(h(g(P))-B_S\big)\\[8pt] &\geq \frac{1}{\displaystyle{\max_{g\in M_{S,r}}\{\deg{g}\}}}\cdot\min_{g\in M_{S,r}}\big\{h(g(P))-B_S\big\} \end{split} \end{equation} However, since $S$ is a finite set and $r$ is fixed, the degree of $g\in M_{S,r}$ is absolutely bounded. Likewise, since $P$ is an escape point for $S$, the quantity $h(g(P))-B_S$ is positive for all $g\in M_{S,r}$. Therefore, the minimum on the right hand side of (\ref{lbdespace}) is positive, since it is the minimum value of a finite set of positive numbers. 
In particular, there is a positive constant $B_{S,P,1}$, depending only on $S$ and $P$, such that $h(f(P))/\deg(f)\geq B_{S,P,1}$ as claimed. \end{proof} With Lemma \ref{lem:escapept} in place, we are ready to prove an improved version of Theorem \ref{thm:iteratedlogs} from the Introduction for escape points. \begin{theorem}\label{thm:escapepoints} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two, and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.25cm} \begin{enumerate} \item[\textup{(1)}] The dynamical degree is given by $\displaystyle{\delta_{S,\nu}=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}}$. \vspace{.3cm} \item[\textup{(2)}] For $\bar{\nu}$-almost every $\gamma\in\Phi_S$, the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}h(\gamma_n^-(P))^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}h(\gamma_n^+(P))^{1/n}\vspace{.15cm}\] hold (simultaneously) for all escape points $P$ for $S$. \vspace{.4cm} \item[\textup{(3)}] If the variance $\sigma_{S,\nu}^2$ of $\log\deg(\phi)$ is nonzero, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, \vspace{.1cm} \[\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}},\vspace{.3cm}\] hold (simultaneously) for all escape points $P$ for $S$. \vspace{.1cm} \end{enumerate} \end{theorem} \begin{remark} Note that if $h(P)>B_S$, then $P$ is an escape point for $S$ of level $r=0$. In particular, Theorem \ref{thm:escapepoints} implies Theorem \ref{thm:iteratedlogs} from the Introduction. 
\end{remark} \begin{proof}[(Proof of Theorem \ref{thm:escapepoints})] For statement (1), consider $f_1:\Phi_S\rightarrow\mathbb{R}$ given by: \[f_1(\gamma)=\log\deg(\theta_1)\qquad \text{for\;\,$\gamma=(\theta_i)_{i=1}^\infty\in\Phi_S$}.\] Then Birkhoff's Ergodic Theorem \ref{birk} and Lemma \ref{shift} together imply that \[\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{j=0}^{n-1} f_1\circ T^j(\gamma)=\mathbb{E}_{\bar{\nu}}[f_1]\] for almost every $\gamma\in\Phi_S$; here $T:\Phi_S\rightarrow\Phi_S$ is the shift map. On the other hand, since \[\deg(F\circ G)=\deg(F)\cdot\deg(G)=\deg(G)\cdot\deg(F)=\deg(G\circ F)\] for all endomorphisms $F$ and $G$ on $\mathbb{P}^N$, we have that \[\log\deg(\gamma^{\pm}_n)^{1/n}=\frac{1}{n}\sum_{j=0}^{n-1} f_1\circ T^j(\gamma).\] In particular, $\delta_{S,\nu}=\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}}=\exp\big(\mathbb{E}_{\bar{\nu}}[f_1]\big)$ almost surely. However, $f_1:\Phi_S\rightarrow\mathbb{R}$ factors through $S$, so that \cite[Theorem 10.4]{ProbabilityText} implies that \[\delta_{S,\nu}=\exp\big(\mathbb{E}_{\bar{\nu}}[f_1]\big)=\exp\big(\mathbb{E}_{\nu}[\log\deg(\phi)]\big)=\exp\Big(\sum_{\phi\in S}\log\deg(\phi)\nu(\phi)\Big)=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}\] as claimed. For statement (2), let $\gamma\in\Phi_S$ be such that $\lim\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}$, true of almost every $\gamma\in \Phi_S$, and let $P$ be an escape point for $S$. Then Lemma \ref{lem:escapept} implies that there are positive constants $B_{S,P,1}$ and $B_{S,P,2}$ such that \[\qquad B_{S,P,1}\cdot\deg(\gamma_n^{\pm})<h(\gamma_n^{\pm}(P))<B_{S,P,2}\cdot\deg(\gamma_n^{\pm}),\qquad\text{for all $\gamma\in\Phi_S$ and all $n\geq r$;}\] here $r$ is the escape level of $P$. 
Therefore, taking $n$th roots throughout and letting $n$ tend to infinity, we see that \[\delta_{S,\nu}=\lim_{n\rightarrow\infty}B_{S,P,1}^{1/n}\cdot\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}\leq \liminf_{n\rightarrow\infty}h(\gamma_n^{\pm}(P))^{1/n}\leq\limsup_{n\rightarrow\infty}h(\gamma_n^{\pm}(P))^{1/n}\leq\lim_{n\rightarrow\infty}B_{S,P,2}^{1/n}\cdot\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}.\] Hence, for almost every $\gamma\in\Phi_S$ the limits \[\lim_{n\rightarrow\infty}h(\gamma_n^{\pm}(P))^{1/n}=\delta_{S,\nu}\] hold (simultaneously) for all escape points $P$ for $S$ as claimed. For statement (3), suppose that $P$ is an escape point for $S$ and that the variance $\sigma^2$ of the random variable $\log\deg(\cdot): S\rightarrow\mathbb{R}$ is nonzero; here, $\sigma^2$ is given explicitly by \[\sigma^2=\sum_{\phi\in S} \big(\log\deg(\phi)-\log(\delta_{S,\nu})\big)^2\nu(\phi).\] Then it follows from Lemma \ref{lem:escapept} that \vspace{.1cm} \begin{equation}\label{escapept:loght} \lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}=0=\lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\qquad \text{for all $\gamma\in\Phi_{S}$;} \vspace{.15cm} \end{equation} here we simply use that the quantities $\log\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}$ are bounded independently of $n\geq r$ by Lemma \ref{lem:escapept}. On the other hand, consider the i.i.d.\ random variables $Y_n:\Phi_S\rightarrow\mathbb{R}$ given by \[Y_n(\gamma)=\frac{1}{\sigma}\big(\log\deg(\theta_n)-\log\delta_{S,\nu}\big),\qquad\text{for $\gamma=(\theta_i)_{i\geq1}\in\Phi_S$;}\] In particular, each $Y_n$ has mean $0$ and unit variance. 
Therefore, the Hartman-Wintner Law of the Iterated Logarithm (Theorem \ref{thm:lawiterlogs}) for the simple random walk $S_n=Y_1+\dots +Y_n$ implies that \vspace{.05cm} \begin{equation}\label{escapept:lawiterlog} \limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}} \qquad \text{($\bar{\nu}$-almost surely).} \end{equation} Hence, the conclusions in both (\ref{escapept:loght}) and (\ref{escapept:lawiterlog}) hold for almost every $\gamma\in\Phi_S$. Therefore, \vspace{.1cm} \begin{equation*} \begin{split} 1&=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}+\lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[10pt] &=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)+\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[5pt] \end{split} \end{equation*} for almost every $\gamma\in\Phi_S$. 
Likewise, (\ref{escapept:loght}) and (\ref{escapept:lawiterlog}) imply that \vspace{.3cm} \begin{equation*} \begin{split} 1&=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}+ \lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[10pt] &=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)+\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[8pt] \end{split} \end{equation*} holds for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, whenever $P$ is an escape point for $S$ and $\sigma^2$ is nonzero. This completes the proof of Theorem \ref{thm:escapepoints}. \end{proof} As an application of Theorem \ref{thm:escapepoints}, we can prove an asymptotic formula for the number of points in generic left and right orbits. \begin{proof}[(Proof of Corollary \ref{cor:escapeptshtbds})] We mostly follow the proof of \cite[Proposition 3]{KawaguchiSilverman}. However, there is an added step, which allows us to pass from superscripts of $\gamma_{n}^{\pm}(P)$ to points in orbits $Q\in\Orb_\gamma^{\pm}(P)$; see Lemma \ref{lem:n'stopoints} below. Let $P$ be an escape point for $S$ and let $\gamma\in\Phi_S$ be such that $\lim h(\gamma_n^{\pm}(P))^{1/n}=\delta_{S,\nu}$, true of almost every $\gamma$ by Theorem \ref{thm:escapepoints}. Then for every $\epsilon>0$ there is an integer $n_0=n_0(\epsilon,\gamma)$ so that \[(1-\epsilon)\delta_{S,\nu}\leq h(\gamma_n^{\pm}(P))^{1/n}\leq(1+\epsilon)\delta_{S,\nu}\] for all $n\geq n_0$; here we take $n_0$ to be the maximum of the corresponding $n_0(\epsilon,\gamma,-)$ and $n_0(\epsilon,\gamma,+)$. 
In particular, it follows that \vspace{.1cm} \begin{equation}\label{basiccount1} \begin{split} \{n\geq n_0\,:\,(1+\epsilon)\delta_{S,\nu}\leq B^{1/n}\}&\subset\{n\geq n_0\,:\,h(\gamma_n^{\pm}(P))\leq B\} \\[2pt] &\text{and}\\[2pt] \{n\geq n_0\,:\,h(\gamma_n^{\pm}(P))\leq B\}&\subset\{n\geq n_0\,:\,(1-\epsilon)\delta_{S,\nu}\leq B^{1/n}\}. \end{split} \end{equation} Therefore, after counting the number of elements in the sets in (\ref{basiccount1}), we see that \begin{equation*} \begin{split} \frac{\log(B)}{\log((1+\epsilon)\delta_{S,\nu})}-n_0&\leq\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}\\[2pt] &\text{and}\\[2pt] \#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}&\leq\frac{\log(B)}{\log((1-\epsilon)\delta_{S,\nu})}+n_0+1. \\[3pt] \end{split} \end{equation*} Hence, dividing by $\log(B)$ and letting $B$ tend to infinity gives \vspace{.1cm} \begin{equation*} \begin{split} \frac{1}{\log((1+\epsilon)\delta_{S,\nu})}&\leq\liminf_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}}{\log(B)}\\[3pt] &\text{and} \\[3pt] \limsup_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}}{\log(B)}&\leq \frac{1}{\log((1-\epsilon)\delta_{S,\nu})}. \\[4pt] \end{split} \end{equation*} In particular, since $\epsilon$ was arbitrary, we deduce that \vspace{.25cm} \begin{equation}\label{count:n's} \lim_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{-}(P))\leq B\}}{\log(B)}=\frac{1}{\log(\delta_{S,\nu})}=\lim_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{+}(P))\leq B\}}{\log(B)} \vspace{.2cm} \end{equation} hold (simultaneously) for almost every $\gamma\in\Phi_S$. From here, we pass from superscripts $n$ to points in orbits by the following lemma; however, we must assume that the initial point $P$ has height at least $3B_S$ (instead of $B_S$). 
In particular, we deduce from (\ref{count:n's}) and Lemma \ref{lem:n'stopoints} below that for almost every $\gamma\in\Phi_S$ the limits \vspace{.15cm} \[\lim_{B\rightarrow\infty}\frac{\#\{Q\in\Orb_\gamma^-(P)\,:\,h(Q)\leq B\}}{\log(B)}=\frac{1}{\log\delta_{S,\nu}}=\lim_{B\rightarrow\infty}\frac{\#\{W\in\Orb_\gamma^+(P)\,:\,h(W)\leq B\}}{\log(B)} \vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>3B_S$ as claimed. \end{proof} \begin{lemma}\label{lem:n'stopoints} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$. If $h(P)>3B_S$, then $\gamma_n^{-}(P)\neq\gamma_m^{-}(P)$ and $\gamma_n^{+}(P)\neq\gamma_m^{+}(P)$ for all $n\neq m$ and all $\gamma\in\Phi_S$. \end{lemma} \begin{proof} Suppose that $n>m$ and that $\gamma_n^{\pm}(P)=\gamma_m^{\pm}(P)$. In particular, $h(\gamma_n^{\pm}(P))=h(\gamma_m^{\pm}(P))$. Then Lemma \ref{lem:tate} applied separately to $\rho=\gamma_n^{\pm}$ and then to $\rho=\gamma_m^{\pm}$ implies that \[\deg(\gamma_n^{\pm})\cdot(h(P)-B_S)\leq h(\gamma_n^{\pm}(P))=h(\gamma_m^{\pm}(P))\leq \deg(\gamma_m^{\pm})\cdot(h(P)+B_S).\] Rearranging terms, we deduce that \begin{equation}\label{distinctorbits} \frac{\deg(\gamma_n^{\pm})}{\deg(\gamma_m^{\pm})}\leq \frac{(h(P)+B_S)}{(h(P)-B_S)}. \end{equation} However, $n>m$ so that $\gamma_n^-=g_1\circ\gamma_m^-$ and $\gamma_n^+=\gamma_m^+\circ g_2$ for some $g_1,g_2\in M_{S,n-m}$. Moreover, $S$ is height controlled, so that $\deg(g_i)\geq2$. Furthermore, $\deg(\gamma_n^-)=\deg(g_1)\cdot\deg(\gamma_m^-)$ and $\deg(\gamma_n^+)=\deg(g_2)\cdot\deg(\gamma_m^+)$. Combining these facts with (\ref{distinctorbits}), we deduce that \[2\leq \frac{(h(P)+B_S)}{(h(P)-B_S)}.\] However, this statement immediately implies that $h(P)\leq 3B_S$, contradicting the assumption that $h(P)>3B_S$, and the result follows. 
\end{proof} Since we are particularly interested in arithmetic aspects of right orbits for their relation to dynamical Galois groups (see Section \ref{sec:Galois}), we give a more explicit version of the height bounds in Theorem \ref{thm:escapepoints} for finite sets of unicritical maps. \begin{remark} If one is interested in trying to generalize known primitive prime divisor results to right iteration, especially those which are useful for understanding dynamical Galois groups \cite{Tucker,AvgZig, Riccati}, then one likely needs (among other things) a fairly refined understanding of the growth rates of heights in right orbits. \end{remark} \begin{corollary}{\label{cor:unicritescape}} Let $S=\{x^{d_1}+c_1, \dots, x^{d_s}+c_s\}$ for some $d_i\geq2$ and some $c_i\in\mathbb{Z}\setminus\{0\}$. Furthermore, assume that $P\in\mathbb{Q}$ satisfies \[ h((P^{d_i}+c_i)^{d_j}+c_j)\geq\max_i\{\log|2c_i|\}\;\;\;\;\; \text{for all $i,j$.}\] Then $P$ is an escape point for $S$. Therefore, if $S$ is equipped with the uniform measure and the $d_i$ are not all identical, then for all $\epsilon>0$ and almost every $\gamma\in\Phi_S$, there exists $N_{\gamma,P,\epsilon}$ such that \vspace{.15cm} \[(d_1d_2\dots d_s)^{\frac{n-(1+\epsilon)\log_{\delta}(e)\sigma\sqrt{2n\log\log n}}{s}}\leq h(\gamma_n^+(P))\leq (d_1d_2\dots d_s)^{\frac{n+(1+\epsilon)\log_\delta(e)\sigma\sqrt{2n\log\log n}}{s}} \vspace{.15cm}\] for all $n\geq N_{\gamma,P,\epsilon}$. \end{corollary} \begin{remark}{\label{rmk:escape}} In particular, if $|c_i^{d_j}+c_j|\geq2\max_i\{|c_i|\}$, then $0$ is an escape point for $S$ and the height bounds in Corollary \ref{cor:unicritescape} hold for $P=0$; for an application to Galois theory, see Corollary \ref{cor:Galoisescp}. We also note that in practice, the condition on $P$ in Corollary \ref{cor:unicritescape} holds for every rational point (for many sets $S$). 
\end{remark} \begin{proof} Let $\phi(x)=x^d+c$ for some $d\geq2$ and some $c\in\mathbb{Z}\setminus\{0\}$. Then it is straightforward to prove that \[|h(\phi(P))-dh(P)|\leq\log|2c|\;\;\;\;\;\;\;\text{for all $P\in\mathbb{Q}$;}\] see \cite[Lemma 12]{Ingram}. In particular, the set $S$ as in Corollary \ref{cor:unicritescape} is height controlled with height constants $C_S=\max_i\{\log|2c_i|\}$ and $d_S\geq2$. Moreover, the condition on $P$ implies that $P$ is an escape point for $S$ with escape level $r\leq2$; see Definition \ref{def:escapepts} above. The claim then follows from Theorem \ref{thm:escapepoints} part (3) and the fact that the dynamical degree $\delta_{S,\nu}=\prod d_i^{1/s}$ is a geometric mean of the degrees of the maps in $S$; see Theorem \ref{thm:escapepoints} part (1). \end{proof} We now move on to study right iteration more carefully for more general initial points, including some points of small height. \begin{remark} For left iteration this analysis is accomplished by using canonical heights. In particular, several of our results on arithmetic and dynamical degrees above (for left iteration) hold for so-called almost surely wandering points; see \cite[Theorem 1.5]{Me:dyndeg}. \end{remark} Here, the key assumption we make on the initial point $P$ is that it has infinite total orbit, i.e., the action of the entire monoid generated by $S$ on $P$ gives an infinite set; see (\ref{eq:totalorbit}) above. In particular, this condition is weaker than the assumption that $P$ be an escape point for $S$. In this case (and among other things), we prove that $\limsup h(\gamma_n^+(P))^{1/n}=\delta_{S,\nu}$ almost surely; compare to Theorems \ref{thm:rationalmaps} and \ref{thm:lawiterlogs} above. Moreover, this result holds for infinite height controlled sets of endomorphisms as well. For a statement of the following result, see Theorem \ref{thm:zero-one} from the Introduction. 
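Before giving that proof, we note that the escape criterion of Remark \ref{rmk:escape} is straightforward to test computationally. The following Python sketch is illustrative only; the sample set $S=\{x^2+3,\,x^3-5\}$ is our own choice and does not appear in the text above.

```python
from itertools import product

def zero_is_escape_point(cs, ds):
    """Check the sufficient condition of Remark rmk:escape:
    |c_i^{d_j} + c_j| >= 2 * max_i |c_i| for all pairs (i, j),
    which guarantees that P = 0 is an escape point for
    S = {x^{d_1} + c_1, ..., x^{d_s} + c_s}."""
    bound = 2 * max(abs(c) for c in cs)
    return all(abs(ci**dj + cj) >= bound
               for (ci, _), (cj, dj) in product(zip(cs, ds), repeat=2))

# Hypothetical example set S = {x^2 + 3, x^3 - 5}: here max|c_i| = 5,
# so the criterion requires |c_i^{d_j} + c_j| >= 10 for all i, j.
print(zero_is_escape_point([3, -5], [2, 3]))  # True
```

Of course, the criterion is only sufficient: a set $S$ failing it may still have $0$ as an escape point, of possibly higher escape level.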
\begin{proof}[(Proof of Theorem \ref{thm:zero-one})] For statement (1), let $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ be any point. Note that if $P$ is fixed and the sequence $\gamma\in\Phi_S$ is allowed to vary, then Lemma \ref{lem:tate} implies that the height-degree quotient sequence $h(\gamma^+_n(P))/\deg(\gamma_n^+)$ lies between $h(P)-B_S$ and $h(P)+B_S$. Therefore, both \[\displaystyle{\liminf_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\;\;\; \text{and}\;\;\;\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\] exist and are $h(P)+O(1)$ for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ and all $\gamma\in\Phi_S$. For statement (2), suppose that all of the maps in $S$ are defined over a fixed number field $K$. Moreover, we assume (without loss of generality) that the initial point $P\in\mathbb{P}^N(K)$. In particular, Northcott's Theorem (over $K$) implies that if $\Orb_S(P)$ is infinite, then there exists $g\in M_S$ such that $h(g(P))>B_S$. On the other hand, the Infinite Monkey Theorem, a simple consequence of Borel-Cantelli \cite[pp. 96-100]{InfiniteMonkey}, implies that \[\gamma_n^+=f_{\gamma,n}\circ g \;\;\text{for some $f_{\gamma,n}\in M_S$ and infinitely many $n$}\] for almost every $\gamma\in\Phi_S$; that is, with probability $1$ the infinite sequence $\gamma$ contains the finite substring $g$ infinitely many times. In particular, for such $\gamma$ and $n$, the bound in Lemma \ref{lem:tate} applied to $Q=g(P)$ and $\rho=f_{\gamma,n}$ implies that \begin{equation}\label{bdas} \frac{h(\gamma^+_n(P))}{\deg(\gamma^+_n)}=\frac{h(f_{\gamma,n}(g(P)))}{\deg(f_{\gamma,n})\deg(g)}\geq\frac{1}{\deg(g)}\big(h(g(P))-B_S\big)>0. \end{equation} It follows that the limsup of the quotient $h(\gamma^+_n(P))/\deg(\gamma^+_n)$ must be strictly positive for almost every $\gamma\in\Phi_S$. 
Conversely, if the limsup of $h(\gamma^+_n(P))/\deg(\gamma^+_n)$ is positive for a single $\gamma$ (in particular, if it is true almost surely), then the right orbit $\Orb_\gamma^+(P)$ must be infinite. Therefore, the total orbit $\Orb_S(P)$ is infinite as well. Finally, we prove statement (3). Let $P$ be any initial point and let $\gamma$ be any sequence. We first show that $\limsup h(\gamma_n^+(P))^{1/n}\leq\delta_{S,\nu}$ almost surely. Note that for finite sets, this is known by Theorem \ref{thm:rationalmaps}; however, we wish to allow suitable infinite sets. To do this (and to ease notation), let \begin{equation}\label{upperht} \bar{h}_\gamma^+(P)=\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}. \end{equation} Then by definition of $\limsup$ and Theorem \ref{thm:zero-one} part (1), we know that for all $\epsilon>0$ there is an $N_{P,\gamma,\epsilon}$ such that \[\frac{h(\gamma^+_n(P))}{\deg(\gamma^+_n)}\leq(1+\epsilon)\bar{h}^+_\gamma(P)\] holds for all $n>N_{P,\gamma,\epsilon}$. In particular, \begin{equation}\label{arithdegbd1} h(\gamma^+_n(P))^{1/n}\leq(1+\epsilon)^{1/n}\,\bar{h}^+_\gamma(P)^{1/n}\,\deg(\gamma^+_n)^{1/n} \end{equation} holds for such $n$. On the other hand, if $\Orb_S(P)$ is infinite, then $\bar{h}^+_\gamma(P)$ is positive almost surely by part (2) above. Likewise, if $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then Birkhoff's Ergodic Theorem \ref{birkhoff} (and an identical argument given for Theorem \ref{thm:iteratedlogs} part (1) above) implies that \vspace{.1cm} \[\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma^+_n)^{1/n}}=\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma^-_n)^{1/n}}=\delta_{S,\nu}=\exp\big(\mathbb{E}_\nu[\log\deg(\phi)]\big)\qquad\;\;\;\;\text{(almost surely)};\vspace{.1cm}\] alternatively, we can quote \cite[Theorem 1.5]{Me:dyndeg}. 
Therefore, if both $\Orb_S(P)$ is infinite and the quantity $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then the bound in (\ref{arithdegbd1}) implies that \[\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}\leq\delta_{S,\nu}\] is true for almost every $\gamma\in\Phi_S$. For the reverse inequality, suppose that $\Orb_S(P)$ is infinite and the quantity $\mathbb{E}_\nu[\log\deg(\phi)]$ exists. Then by definition of $\bar{h}_\gamma^+(P)$, for all $0<\epsilon<1$ there exists an infinite sequence $\{n_k\}\subseteq\mathbb{N}$, depending on both $\epsilon$ and $\gamma$, such that \vspace{.1cm} \[\bar{h}^+_\gamma(P)(1-\epsilon)\leq \frac{h(\gamma_{n_k}^+(P))}{\deg(\gamma_{n_k}^+)}\] for all $n_k$. In particular, we see that \vspace{.1cm} \[\bar{h}_\gamma^+(P)^{1/n_k}(1-\epsilon)^{1/n_k}\deg(\gamma_{n_k}^+)^{1/n_k}\leq h(\gamma_{n_k}^+(P))^{1/n_k}.\] Therefore, it follows that \vspace{.05cm} \begin{equation}\label{ineq:reverse1} \begin{split} \limsup_{n_k\rightarrow\infty}\Big(\bar{h}_\gamma^+(P)^{1/n_k}(1-\epsilon)^{1/n_k}\deg(\gamma_{n_k}^+)^{1/n_k}\Big) &\leq \limsup_{n_k\rightarrow\infty} h(\gamma_{n_k}^+(P))^{1/n_k} \\[3pt] &\leq \limsup_{n\rightarrow\infty} h(\gamma_{n}^+(P))^{1/n}. \end{split} \end{equation} On the other hand, $\bar{h}_\gamma^+(P)$ is almost surely positive by part (2) of Theorem \ref{thm:zero-one} above. Hence, \begin{equation}\label{ineq:reverse2} \begin{split} \lim_{n_k\rightarrow\infty} \bar{h}_\gamma^+(P)^{1/n_k}=1&=\lim_{n_k\rightarrow\infty}(1-\epsilon)^{1/n_k}\\[3pt] \;\;\;\;\;\;\;\;\;\;\text{and}&\\[3pt] \lim_{n_k\rightarrow\infty}\deg(\gamma_{n_k}^+)^{1/n_k}=& \lim_{n\rightarrow\infty}\deg(\gamma_{n}^+)^{1/n}=\delta_{S,\nu} \end{split} \end{equation} almost surely. Therefore, (\ref{ineq:reverse1}) and (\ref{ineq:reverse2}) together imply that \[\delta_{S,\nu}\leq \limsup_{n\rightarrow\infty} h(\gamma_{n}^+(P))^{1/n}\] holds for almost every $\gamma\in\Phi_S$ as claimed. 
\end{proof} \begin{remark} The liminf and limsup in Theorem \ref{thm:zero-one} part (1) can be distinct for initial points $P$ of small height, even if the total orbit of $P$ is infinite; see Example \ref{eg:left-right difference} above. \end{remark} We note the following consequence of Theorem \ref{thm:zero-one}, a sort of zero-one law for finite orbit points. In particular, the analogous statement fails for left iteration; see \cite[Example 1.10]{Me:dyndeg}. \begin{corollary} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Then for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$, the probability that $\Orb_\gamma^+(P)$ is finite is either $0$ or $1$. \end{corollary} \begin{proof} Suppose that $\Orb_S(P)$ is finite. Then $\Orb^+_\gamma(P)\subseteq\Orb_S(P)$ is finite for all $\gamma\in\Phi_S$. In particular, the probability that $\Orb^+_\gamma(P)$ is finite is $1$. On the other hand, if $\Orb_S(P)$ is infinite, then part (2) of Theorem \ref{thm:zero-one} implies that \[I_P=\{\gamma\in\Phi_S\::\; \bar{h}^+_{\gamma}(P)>0\}\] has full measure in $\Phi_S$; see (\ref{upperht}) for the definition of $\bar{h}^+_{\gamma}(P)$. Moreover, it is clear that \[\{\gamma\in\Phi_S\::\; \text{$\Orb^+_\gamma(P)$ is finite}\}\subseteq \Phi_S\setminus I_P.\] Therefore, the probability that $\Orb^+_\gamma(P)$ is finite is $0$. Hence, the probability that $\Orb^+_\gamma(P)$ is finite is either $0$ or $1$ as claimed. \end{proof} As a further application of Theorem \ref{thm:zero-one}, we record the following result for sets of quadratic polynomials with integral coefficients; see \cite{IJNT} for related work on sets of quadratic polynomials with rational coefficients. \begin{corollary}\label{cor:quad} Let $S=\{x^2+c_1,x^2+c_2,\dots,x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$. 
If $s\geq3$, then \[0<\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}\qquad\text{(almost surely)}\] for all $P\in\mathbb{Q}$ (independent of the choice of $\nu$). \end{corollary} \begin{proof} Combine Theorem \ref{thm:zero-one} part (2) with \cite[Corollary 1.2]{IJNT}. \end{proof} Finally, we apply Theorem \ref{thm:zero-one} to the height counting problem in orbits; compare to similar results in \cite[Corollary 1.16]{Me:dyndeg} and Corollary \ref{cor:escapeptshtbds} above. However, without further conditions on the initial point $P$, we can only give lower bounds. \begin{corollary}{\label{cor:orbitcount}} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Moreover, suppose the following conditions hold: \vspace{.1cm} \begin{enumerate} \item $\mathbb{E}_\nu[\log\deg(\phi)]$ exists. \vspace{.1cm} \item $\Orb_S(P)$ is infinite. \vspace{.15cm} \end{enumerate} Then \[\frac{1}{\mathbb{E}_\nu[\log\deg(\phi)]}\leq\liminf_{B\rightarrow\infty}\;\frac{\#\{n\geq0\,:\, h(\gamma_n^+(P))\leq B\}}{\log B}\vspace{.15cm}\] for almost every $\gamma\in\Phi_S$. \end{corollary} We suppress the proof of Corollary \ref{cor:orbitcount} due to its similarity to Corollary \ref{cor:escapeptshtbds} above. \section{Height counting in total orbits}\label{sec:totalorbits} We now turn briefly to the height counting problem for total orbits from the Introduction. However, the reader should bear in mind that the work in this section is preliminary. Nevertheless, we include it to motivate future work; for instance, we shall see how this problem relates to growth rates in semigroups and lattice point counting in various domains. 
As a reminder, if $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ is fixed, then our overall goal is to understand the asymptotic size of the set of points in the total orbit of $P$ of height at most $B$, \[\{Q\in\Orb_S(P):\, h(Q)\leq B\},\] as $B$ grows. However, at the moment this problem seems quite difficult (since distinct functions can agree on subvarieties), and we instead study the asymptotic size of the related set of functions \begin{equation}\label{monoidcount} \{f\in M_S:\, h(f(P))\leq B\}, \end{equation} in hopes that this count will shed light on the number of points in $\Orb_S(P)$ of bounded height. The basic idea, consistent with our work on orbits coming from sequences, is that the height of a point $f(P)\in \Orb_S(P)$ is roughly determined by the size of $\deg(f)$, as long as the initial point $P$ is sufficiently generic; see Lemma \ref{lem:escapept}. With this in mind, to count the number of functions $f\in M_S$ with $h(f(P))\leq B$, we should in some sense simply be counting the number of $f$'s of bounded degree. Moreover, when $M_S$ is (in a nice way) generated by a set of morphisms, this problem may be tractable. To make this heuristic precise, we briefly discuss weighted lengths on monoids. Let $M$ be a monoid generated by a finite set $S=\{\phi_1,\dots,\phi_s\}$ and let $c=(c_1,\dots,c_s)\in\mathbb{R}^s$ be a vector of positive weights. Then we define the \emph{weighted length} $\mathit{l}_{S,c}(f)$ of any $f\in M$ as follows. First let $\Sigma(S)$ be the free monoid generated by $S$ (i.e., $\Sigma(S)$ is the set of all words in the alphabet $S$) and define $\mathit{l}_{S,c}(\phi_i)=c_i$. Then extend $\mathit{l}_{S,c}$ to any word $\sigma\in\Sigma(S)$ by setting $\mathit{l}_{S,c}(\sigma)=\mathit{l}_{S,c}(s_1)+\dots+\mathit{l}_{S,c}(s_k)$ whenever $\sigma=s_1\cdots s_k$ and $s_i\in S$. 
Finally, for $f\in M$ we define $\mathit{l}_{S,c}(f)$ to be \[\mathit{l}_{S,c}(f):=\inf\big\{\mathit{l}_{S,c}(\sigma): \sigma\in\Sigma(S)\;\text{and $\sigma$ represents $f$}\big\}.\] Moreover, given a notion of length, one can study the growth function $g_{S,c}:\mathbb{R}\rightarrow\mathbb{N}$ given by \begin{equation}\label{growth} g_{S,c}(B):=\#\{f\in M\,:\, \mathit{l}_{S,c}(f)\leq B\}. \end{equation} In particular, the growth rate of $g_{S,c}$ may be used to encode information about the monoid $M$ and the generating set $S$. \begin{remark} Historically, most of the work on this problem has focused on the case when $M$ is a group and each $c_i=1$ (with some additional work on the case when $c_i\in\mathbb{N}$ also); see \cite{growth1,growth2}. However, the relevant definitions make sense for $c_i\in\mathbb{R}_{>0}$ and monoids, and this is the situation that arises most naturally in our work here. \end{remark} Back to dynamics. Let $S=\{\phi_1, \dots,\phi_s\}$ be a finite set of endomorphisms on $\mathbb{P}^N$ all of degree at least $2$, let $c_i=\log\deg(\phi_i)$, and define $\mathit{l}(f):=\log\deg(f)$ for all $f\in M_S$. Then it is straightforward to check that $\mathit{l}(f)=\mathit{l}_{S,c}(f)$ independent of $S$ (the degree of a composite morphism is the product of the degrees of its components, and the degree of a function is intrinsic, i.e., does not depend on how it is written as a composition of other functions). Now suppose that $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ is such that $h(P)>B_S:=C_S/(d_S-1)$; here $C_S$ and $d_S$ are the constants from Definition \ref{def:htcontrolled} above. 
Then, Tate's telescoping Lemma \ref{lem:tate} implies that \vspace{.1cm} \[\deg(f)(h(P)-B_S)\leq h(f(P))\leq\deg(f)(h(P)+B_S).\vspace{.1cm}\] Therefore, for all $B$ we have the subset relations: \vspace{.1cm} \begin{equation}\label{subset} \Scale[.835]{\;\,\bigg\{f\in M_S\,:\,\mathit{l}(f)\leq \log\big(\frac{B}{h(P)+B_S}\big)\bigg\}\subseteq \big\{f\in M_S:\, h(f(P))\leq B\big\}\subseteq\bigg\{f\in M_S\,:\,\mathit{l}(f)\leq \log\big(\frac{B}{h(P)-B_S}\big)\bigg\}}. \vspace{.1cm} \end{equation} In particular, (\ref{growth}) and (\ref{subset}) imply that \begin{equation}\label{ht-wt} \#\{f\in M_S:\, h(f(P))\leq B\}\sim g_{S,c}(\log\,B) \end{equation} as $B$ tends to infinity. As an application, we consider the case when $S$ is a free basis of the commutative monoid $M_S$ (as an example, one may take $S=\{x^{d_1}, \dots, x^{d_s}\}$ where the $d_i\in\mathbb{N}$ are multiplicatively independent). In this case, $M_S\cong \mathbb{N}^s$ with the operation of coordinate addition, and it is straightforward to check that \[g_{S,c}(B')=\#\{(e_1,\dots, e_s)\in\mathbb{N}^s\,:\,e_1c_1+e_2c_2+\dots+e_sc_s\leq B'\}.\] However, this is evidently a count of the number of lattice points in a dilate of the bounded, Jordan measurable region \[\Omega=\{(x_1,\dots, x_s)\in\mathbb{R}^s\,:\, 0\leq x_i\;\text{and}\; x_1c_1+\dots+x_sc_s\leq 1\}.\] In particular, since the volume of $\Omega$ is $(s!c_1c_2\dots c_s)^{-1}$ it follows that \[g_{S,c}(B')\sim (s!c_1c_2\dots c_s)^{-1} (B')^s\] as $B'$ tends to infinity; see, for instance, \cite[Theorem 12.2]{Pollack}. Letting $B'=\log(B)$, we deduce from (\ref{ht-wt}) that \[\lim_{B\rightarrow\infty}\frac{\#\Big\{f\in M_S\,:\,h\big(f(P)\big)\leq B\Big\}}{(\log B)^s}=\frac{1}{s!\cdot\prod_{i=1}^s\log\deg(\phi_i)}\] as claimed in the Introduction. 
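The lattice-point count underlying this asymptotic is easy to experiment with numerically. The following Python sketch is our own illustration, with the assumed example $S=\{x^2,x^3\}$ (so $c_1=\log 2$ and $c_2=\log 3$, which are multiplicatively independent degrees); it compares $g_{S,c}(B')$ with the volume term $(B')^s/(s!\,c_1\cdots c_s)$.

```python
import math
from itertools import product

def lattice_count(cs, Bp):
    """Count tuples (e_1,...,e_s) in N^s with e_1*c_1 + ... + e_s*c_s <= B',
    i.e., the growth function g_{S,c}(B') for a free commutative monoid."""
    ranges = [range(int(Bp // c) + 1) for c in cs]
    return sum(1 for e in product(*ranges)
               if sum(ei * ci for ei, ci in zip(e, cs)) <= Bp)

# Assumed example: S = {x^2, x^3}, so the weights are c_i = log(deg(phi_i)).
cs = [math.log(2), math.log(3)]
Bp = 25.0
exact = lattice_count(cs, Bp)
asymptotic = Bp**len(cs) / (math.factorial(len(cs)) * math.prod(cs))
print(exact, round(asymptotic))
```

Even at this modest value of $B'$, the exact count is within a few percent of the volume term; the discrepancy is the usual boundary error in lattice-point counting.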
However, it seems that generically $M_S$ is a free (non-commutative) monoid, and there appears to be little (precise) information known about the growth rate function $g_{S,c}$ in this case, limiting what we can say about the dynamics. \begin{remark} When $M_S$ is a free non-commutative monoid (with basis $S$) and $c_i\in\mathbb{N}$, then $g_{S,c}(B)$ is a sum over restricted compositions of integers $n\leq B$; see \cite[\S2]{compositions}. In particular, one may be able to use the associated generating function to obtain an asymptotic for $g_{S,c}(B)$ in this case. However, the weights coming from dynamics are never integers (they are logs of integers). Nevertheless, since we are mainly interested in asymptotics for (\ref{monoidcount}), it is possible that the integer weight case could provide sufficient information to answer the general case. \end{remark} \section{Galois groups generated by multiple unicritical polynomials}{\label{sec:Galois}} We now discuss the relation between the arithmetic of right orbits and certain dynamical Galois groups. Many of the results in this section are straightforward adaptations of analogous results for constant sequences (i.e., iterating one function); see, for instance, \cite{Jones} and \cite{Jones-Survey}. For additional work on Galois groups generated by iterating multiple maps, see \cite{Ferraguti}. We begin with some notation. Let $K$ be a field of characteristic $0$, and let $S$ be a set of polynomials over $K$. Then given an infinite sequence $\gamma\in\Phi_S$, we can form a tower of Galois extensions $K_{\gamma,n}:=K(\gamma_n^+)$ for $n\geq0$; here $K(\gamma_n^+)$ denotes the splitting field of the equation $\gamma_n^+(x)=0$ in a fixed algebraic closure $\overline{K}$. 
We note that the direction of iteration is crucial to create nested extensions: \[K\subseteq K_{\gamma,1}\subseteq\dots \subseteq K_{\gamma,n}\,.\] As in the case of iterating a single function (under some separability assumptions), the Galois group $G_{\gamma,n}:=\Gal(K_{\gamma,n}/K)$ acts naturally on the corresponding truncated \emph{preimage tree} with vertices \[T_{\gamma,n}:=\big\{\alpha\in\overline{K}\,:\,\gamma_m^+(\alpha)=0\; \text{for some $1\leq m\leq n$}\big\}\] and edge relation: if $\gamma_m^+(\alpha)=0$ for some $1\leq m\leq n$ and $\gamma_m^+=\theta_1\circ\dots\circ \theta_m$, then there is an edge between $\alpha$ and $\theta_m(\alpha)$. Likewise, the inverse limit of Galois groups $\displaystyle{G_{\gamma,K}:=\lim_{\leftarrow} G_{\gamma,n}}$ acts continuously on the complete preimage tree $\displaystyle{T_\gamma=\cup_{n\geq1} T_{\gamma,n}}$ and we obtain an embedding, \[G_{\gamma,K}\leq \Aut(T_\gamma),\] called the \emph{arboreal representation} of $\gamma$; see \cite[\S2]{Ferraguti} for more details. In particular, in light of our probabilistic approach in this paper and the recent finite index theorems and conjectures in \cite{Bridy-Tucker,Jones-Survey}, we pose the following question. \begin{question}\label{question:Galois} Let $\nu$ be a probability measure on $S$. Under what assumptions on the polynomials in $S$ can we conclude that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,[\Aut(T_\gamma):G_{\gamma,K}]<\infty\big\}\Big)>0?\] That is, when are the arboreal representations above finite index subgroups with positive probability? \end{question} As a first step in understanding this problem, we simplify the setup substantially. Let $S$ be a set of unicritical polynomials with a common critical point $c\in K$, that is \begin{equation}\label{unicrit} S=\big\{a(x-c)^{d}+b:\,a,b\in K, a\neq0, d\geq2\big\}.
\end{equation} \begin{remark} In practice, especially given our work on heights in the previous sections, we usually restrict ourselves to finite subsets of (\ref{unicrit}). However, for completeness, we keep the Galois theory results in this section as general as possible. \end{remark} In particular, if $K$ is a global field and $S$ is a set of polynomials as in (\ref{unicrit}), then we can restrict the ramification of the extensions $K_{\gamma,n}/K$ to the primes dividing elements of the \emph{critical orbits} $\Orb_\gamma^+(c)$ and the primes dividing the leading coefficients or degrees of the polynomials $\gamma_m^+$ for some $1\leq m\leq n$; compare to \cite[Lemma 2.6]{Jones}. In what follows, we use the shorthand $\ell(f)$ and $d(f)$ for the leading term and degree respectively of a polynomial $f\in K[x]$. Moreover, because this section is entirely devoted to right iteration, we (at times) drop the superscript $+$ and simply write $\gamma_n$ for $\gamma_n^+$ when convenient. \begin{proposition}\label{prop:discriminant} Let $S$ be a set of polynomials as in (\ref{unicrit}). Moreover, given $\gamma=(\theta_i)_{i=1}^\infty\in\Phi_S$ and $n\geq0$, let $\ell_{\gamma,n}$, $d_{\gamma,n}$, and $\Delta_{\gamma,n}$ be the leading term, the degree, and the discriminant of $\gamma_n^+$ respectively. Then \[\Delta_{\gamma,n}=\pm\, d(\theta_n)^{d_{\gamma,n}}\cdot\ell_{\gamma,n-1}^{\,d(\theta_n)-1}\cdot\ell(\theta_n)^{d_{\gamma,n-1}(d_{\gamma,n}-1)}\cdot\gamma_n^+(c)\cdot\Delta_{\gamma,n-1}^{d(\theta_n)}\] for all $n\geq1$. \end{proposition} \begin{proof} We begin with a few well known facts about discriminants and resultants; see, for instance, \cite[IV \S8]{Lang}. Let $h_1, h_2, h_3 \in K[x]$ be nonconstant polynomials. 
Then the resultant $\Res(h_1,h_2)$ of $h_1$ and $h_2$ is given by \begin{equation}\label{resultant} \Res(h_1,h_2)=\ell(h_1)^{d(h_2)}\prod_{h_1(\alpha)=0}h_2(\alpha), \end{equation} where the product above is taken over roots $\alpha\in\overline{K}$ of $h_1$ with multiplicity. Then the discriminant $\Delta(h_1)$ of $h_1$ satisfies \vspace{.1cm} \begin{equation}\label{discriminant1} \Res(h_1,h_1')=(-1)^{d(h_1)(d(h_1)-1)/2}\ell(h_1)\Delta(h_1).\vspace{.1cm} \end{equation} In particular, it is straightforward to check that $\Res(h_1,h_2)=(-1)^{d(h_1)d(h_2)}\Res(h_2,h_1)$, that $\Res(h_1h_2,h_3)=\Res(h_1,h_3)\Res(h_2,h_3)$, and that \vspace{.1cm} \begin{equation}\label{discriminant2} \Res(h_1\circ h_2, h_1'\circ h_2)=\ell(h_2)^{(d(h_1)^2-d(h_1))d(h_2)}\Res(h_1,h_1')^{d(h_2)}. \vspace{.1cm} \end{equation} We now apply these facts to the discriminants in Proposition \ref{prop:discriminant}. Specifically, it follows from (\ref{discriminant1}) and that $\gamma_n^+=\gamma_{n-1}^+\circ\theta_n$ that \vspace{.1cm} \begin{equation}\label{discriminant3} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \frac{\Res(\gamma_n,\gamma_n')}{\Res(\gamma_{n-1},\gamma_{n-1}')^{d(\theta_n)}}; \vspace{.1cm} \end{equation} here we have dropped the superscript $+$ to avoid overly cumbersome notation. On the other hand, the chain rule implies that $\gamma_n'=(\gamma_{n-1}'\circ\theta_n)\cdot\theta_n'$. 
In particular, the standard resultant facts above together with (\ref{discriminant2}) imply that \vspace{.1cm} \begin{equation}\label{discriminant4} \begin{split} \Res(\gamma_n,\gamma_n')=&\pm \Res(\gamma_n',\gamma_n)\\[3pt] =&\pm\Res((\gamma_{n-1}'\circ\theta_n)\cdot\theta_n', \gamma_n)\\[3pt] =&\pm \Res(\gamma_{n-1}'\circ\theta_n, \gamma_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\Res(\gamma_{n-1}'\circ\theta_n, \gamma_{n-1}\circ\theta_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\Res(\gamma_{n-1}\circ\theta_n,\gamma_{n-1}'\circ\theta_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\Res(\gamma_{n-1},\gamma_{n-1}')^{d(\theta_n)}\,\Res(\theta_n', \gamma_n). \end{split} \end{equation} Therefore, combining the expression in (\ref{discriminant3}) with the bottom line of (\ref{discriminant4}), we see that \begin{equation}\label{discriminant5} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\Res(\theta_n', \gamma_n). \end{equation} However, using the definition of the resultant in (\ref{resultant}) and the fact that $\theta_n$ has a unique critical point $c$, we see that $\Res(\theta_n', \gamma_n)=\ell(\theta_n')^{d_{\gamma,n}}\gamma_n(c)$. Hence, (\ref{discriminant5}) may be rewritten as \begin{equation}\label{discriminant6} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\ell(\theta_n')^{d_{\gamma,n}}\gamma_n(c). \end{equation} Hence, we need only control the relevant leading terms to complete the proof. First, since $\gamma_n=\gamma_{n-1}\circ\theta_n$, we see that $\ell_{\gamma,n}=\ell(\theta_n)^{d_{\gamma,n-1}}\ell_{\gamma,n-1}$. Moreover, $\ell(\theta_n')=d(\theta_n)\ell(\theta_n)$.
Therefore, after substituting these expressions into (\ref{discriminant6}) and simplifying like terms, we obtain the formula in Proposition \ref{prop:discriminant}. \end{proof} In particular for global fields $K$ and finite subsets $S$ of (\ref{unicrit}), we expect that $\Orb_\gamma^+(c)$ controls most of the ramification in $K_{\gamma,n}$. Specifically, suppose that $a_1,\dots, a_s$ and $d_1,\dots, d_s$ are the leading terms and degrees of a subset of the polynomials in $S$ respectively. Then by inducting on the formula in Proposition \ref{prop:discriminant} we see that if $\mathfrak{p}$ is a prime in $K$ that ramifies in $K_{\gamma,n}$, then $\mathfrak{p}\big\vert (d_1d_2\dots d_sa_1a_2\dots a_s)$ or $\mathfrak{p}\big\vert\gamma_m^+(c)$ for some $1\leq m\leq n$. Hence, if the total orbit of $c$ is finite, then Proposition \ref{prop:discriminant} provides a method for constructing many examples of finitely ramified, infinite extensions. \begin{example}\label{eg:finitelyramified} Let $S=\{\pm{x^2}, \pm{(x^2-1)}, 2x^2-1\}$, a finite set of quadratic polynomials of the form in (\ref{unicrit}) over the rational numbers. Then we check that $\Orb_S(0)=\{0,\pm{1}\}$. In particular, it follows from Proposition \ref{prop:discriminant} that the extensions $K_{\gamma,n}=\mathbb{Q}(\gamma_n^+)$ are unramified outside of the prime $p=2$ for all $\gamma\in\Phi_S$ and all $n\geq1$. Moreover, if $\gamma=(2x^2-1, 2x^2-1, \theta_3, \dots)$, then $\gamma_n^+$ is irreducible for all $n\geq1$ by Proposition \ref{prop:irreducible} below; the point here is that after the second stage of iteration, one may choose any element of $S$. In particular, it would be interesting to compute the arboreal representations associated to such $\gamma$. The finite ramification precludes finite index in all of $\Aut(T_\gamma)$, but perhaps some subgroup of $\Aut(T_\gamma)$ furnishes the correct overgroup (for finite index with positive probability). 
\end{example} \begin{example}\label{eg:finitelyramified2} Likewise, for $a,c\in\mathbb{Z}$ and $a\neq0$, let $S_{a,c}=\big\{a(x-c)^2+\frac{ac-2}{a}, -a(x-c)^2+\frac{ac+2}{a}\big\}$. Then for all sequences $\gamma\in\Phi_{S_{a,c}}$ the extensions (over $\mathbb{Q}$) generated by $\gamma_n^+$ are unramified outside of the primes dividing $a$, $ac-2$, or $ac+2$. \end{example} We now move on to prove an irreducibility test for right iteration when $S$ is a set of quadratic polynomials; compare to \cite[Proposition 4.2]{Jones} and \cite[Lemma 1.2]{Stoll}. \begin{proposition}\label{prop:irreducible} Let $S$ be a set of quadratic polynomials of the form in (\ref{unicrit}), and let $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$. If \begin{equation}\label{criticalorbit} -\ell_{\gamma,1}\,\gamma_1^+(c),\,\ell_{\gamma,1}\,\gamma_2^+(c),\, \dots,\, \ell_{\gamma,1}\,\gamma_n^+(c) \end{equation} are all non-squares in $K$, then $\gamma_n^+$ is irreducible over $K$. \end{proposition} \begin{proof} We proceed by induction. It is clear that if $-\ell_{\gamma,1}\,\gamma_1^+(c)$ is not a square in $K$, then $\gamma_1^+(x)=\ell_{\gamma,1}(x-c)^2+\gamma_1^+(c)$ is an irreducible quadratic polynomial over $K$. For $n\geq2$, assume that Proposition \ref{prop:irreducible} holds for $n-1$ and that the elements listed in (\ref{criticalorbit}) are all non-squares in $K$. Then $\gamma_{n-1}^+$ is irreducible by the induction hypothesis. Now let $\alpha\in\overline{K}$ be any root of $\gamma_{n-1}^+$ and let $\theta_n(x)=a(x-c)^2+b$. Moreover, assume (for a contradiction) that $\theta_n(x)-\alpha$ is reducible over $K(\alpha)$. Then $a(\alpha-b)$ must be a square in $K(\alpha)$. However, since $\gamma_{n-1}^+$ is irreducible over $K$, we see that $(1/\ell_{\gamma,n-1})\gamma_{n-1}^+(x+b)$ is a minimal polynomial of $\alpha-b$ over $K$. 
Hence, we have the following norm computation: \begin{equation*} \begin{split} N_{K(\alpha)/K}(a(\alpha-b))=a^{[K(\alpha):K]}\cdot N_{K(\alpha-b)/K}(\alpha-b)&=a^{2^{n-1}}\frac{\;(-1)^{2^{n-1}}}{\ell_{\gamma,n-1}}\,\gamma_{n-1}^+\big(0+b\big) \\[3pt] &=\frac{a^{2^{n-1}}}{\ell_{\gamma,n-1}}\gamma_{n-1}^+(\theta_n(c))=\frac{a^{2^{n-1}}}{\ell_{\gamma,n-1}}\gamma_n^+(c). \vspace{.05cm} \end{split} \end{equation*} Therefore (since norms of squares are squares) if $\theta_n(x)-\alpha$ is reducible over $K(\alpha)$, then $\ell_{\gamma,n-1}\gamma_n^+(c)$ is a square in $K$. On the other hand, it is straightforward to check that \begin{equation}\label{leadingterm} \ell_{\gamma,m}=\ell(\theta_m)^{2^{m-1}}\,\ell(\theta_{m-1})^{2^{m-2}}\dots\,\ell(\theta_1)\;\;\; \text{for all $m\geq1$}. \vspace{.05cm} \end{equation} Hence, the square class of $\ell_{\gamma,n-1}\,\gamma_n^+(c)$ in $K$ is the square class of $\ell(\theta_1)\,\gamma_n^+(c)=\ell_{\gamma,1}\,\gamma_n^+(c)$. In particular, we have contradicted our assumption that $\ell_{\gamma,1}\gamma_n^+(c)$ is a non-square in $K$. Therefore, $\theta_n(x)-\alpha$ must be an irreducible polynomial over $K(\alpha)$. Hence, Capelli's Lemma (stated directly below) applied to $g=\gamma_{n-1}^+$ and $f=\theta_n$ implies that $\gamma_n^+=\gamma_{n-1}^+\circ\theta_n$ is irreducible over $K$ as desired. \end{proof} \begin{lemma}[Capelli's Lemma] Let $K$ be a field, let $f,g\in K[x]$, and let $\alpha\in\overline{K}$ be a root of $g$. Then $g\circ f$ is irreducible over $K$ if and only if both $g$ is irreducible over $K$ and $f-\alpha$ is irreducible over $K(\alpha)$. \end{lemma} \begin{remark}\label{eg:finitelyramified+irre} Let $S=\{\pm{x^2}, \pm{(x^2-1)}, 2x^2-1\}$ be as in Example \ref{eg:finitelyramified}. Then, it is easy to check that if $\gamma$ is of the form $\gamma=(2x^2-1, 2x^2-1, \theta_3, \dots)$, then $\ell_{\gamma,1}\gamma_{n}^+(0)=2$ for all $n\geq1$. 
In particular, it follows from Proposition \ref{prop:irreducible} that the polynomials $\gamma_n^+$ are irreducible over the rational numbers for all $n\geq1$. Moreover, it is worth noting that the $\gamma_n^+$ (and their reciprocal polynomials for $n\geq2$) are not Eisenstein at $p=2$. \end{remark} In particular, we can use the irreducibility test in Proposition \ref{prop:irreducible} to make some progress towards Question \ref{question:Galois} for finite sets of quadratic polynomials with integral coefficients. For a reminder of the definition of escape points, see Definition \ref{def:escapepts} above. \begin{theorem}\label{thm:stability} Let $S=\{x^2+c_1, x^2+c_2,\dots, x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$, and assume that $S$ has the following properties: \vspace{.05cm} \begin{enumerate} \item[\textup{(1)}] Some $-c_i$ is not a square in $\mathbb{Z}$. \vspace{.05cm} \item[\textup{(2)}] $0$ is an escape point for $S$. \vspace{.05cm} \end{enumerate} Then for all discrete probability measures $\nu$ on $S$, we have that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,\gamma_n^+\,\text{is irreducible over $\mathbb{Q}$ for all $n\geq1$}\big\}\Big)>0.\] Equivalently, $G_{\gamma,\mathbb{Q}}$ acts transitively on $T_\gamma$ with positive probability. \end{theorem} \begin{proof} Without loss of generality, we may assume that $-c_1$ is not a square in $\mathbb{Z}$. Therefore, if $\phi_1=x^2+c_1$, then it follows from the proof of \cite[Corollary 1.3]{Stoll} that $\phi_1^n(0)$ is not a square in $\mathbb{Z}$ for all $n\geq2$. In particular, $\phi_1^n$ is irreducible over $\mathbb{Q}$ for all $n\geq1$ by \cite[Corollary 1.3]{Stoll} and our assumption on $c_1$. Now consider the affine equation $E: y^2=\phi_1^2(x)$. Note that $E$ is nonsingular, since $\phi_1^2(x)$ is irreducible and hence square-free. In particular, there are only finitely many integer solutions $(x,y)\in\mathbb{Z}^2$ to $E$ by Siegel's Theorem.
Now suppose that $\gamma\in\Phi_S$ is of the form $\gamma=(\phi_1,\phi_1, \theta_3,\dots)$ and that $\gamma_n^+(0)=y_n^2$ for some $y_n\in\mathbb{Z}$ and some $n\geq r+2$; here $r\geq0$ is the escape level of $0$ for $S$. Then $(x,y)=(\theta_3\circ\dots\circ\theta_n(0),y_n)$ is an integral solution to $E$. Therefore, there is a positive constant $B_E$ such that $h(\theta_3\circ\dots\circ\theta_n(0))\leq B_E$. Combining this bound with the lower bound in Lemma \ref{lem:escapept} applied to the function $f=\theta_3\circ\dots\circ\theta_n$ and the point $P=0$, we see that there is a positive constant $B_1=B_{S,0,1}$ such that \[0<B_{1}<\frac{h(\theta_3\circ\dots\circ\theta_n(0))}{2^{n-2}}\leq\frac{B_E}{2^{n-2}};\] here we use that $\deg(\theta_3\circ\dots\circ\theta_n)=2^{n-2}$, since $S$ is a set of quadratic polynomials. Hence, such indices $n$ are bounded: $n\leq n_{E,0}:=\log_2(B_E/B_1)+2$. From here, define $N:=\max\{r+2,n_{E,0}\}$ and consider the sequences \[\Phi_{S,1,N}:=\big\{\gamma\in\Phi_S\,:\,\gamma=(\phi_1,\phi_1,\dots, \phi_1, \theta_{N+1}, \dots)\big\}.\] Then by definition of $N$, if $\gamma\in\Phi_{S,1,N}$, we see that $\gamma_n^+(0)$ cannot be a square in $\mathbb{Z}$ for all $n> N$. On the other hand, if $\gamma\in\Phi_{S,1,N}$, then $-\gamma_1^+(0), \gamma_2^+(0),\dots, \gamma_N^+(0)$ are all non-squares in $\mathbb{Q}$, since $\gamma_m^+(x)=\phi_1^m(x)$ for all $1\leq m\leq N$, since $\phi_1^n(0)$ is not a square in $\mathbb{Q}$ for all $n\geq2$, and since $-\phi_1(0)=-c_1$. Therefore, it follows from Proposition \ref{prop:irreducible} above that if $\gamma\in\Phi_{S,1,N}$, then $\gamma_n^+$ is irreducible over $\mathbb{Q}$ for all $n\geq1$. However, $\bar{\nu}(\Phi_{S,1,N})=\nu(\phi_1)^N>0$ by \cite[Theorem 10.4]{ProbabilityText}, and the result follows. \end{proof} In particular, we have the following immediate consequence of Theorem \ref{thm:stability} and Corollary \ref{cor:unicritescape} above; see also Remark \ref{rmk:escape}.
\begin{corollary}\label{cor:Galoisescp} Let $S=\{x^2+c_1, x^2+c_2,\dots, x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$, and assume that $S$ has the following properties: \vspace{.05cm} \begin{enumerate} \item[\textup{(1)}] Some $-c_i$ is not a square in $\mathbb{Z}$. \vspace{.075cm} \item[\textup{(2)}] $|c_i^2+c_j|\geq2\max_{1\leq k\leq s}\{|c_k|\}$ for all $1\leq i,j\leq s$. \vspace{.075cm} \end{enumerate} Then for all discrete probability measures $\nu$ on $S$, we have that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,\gamma_n^+\,\text{is irreducible over $\mathbb{Q}$ for all $n\geq1$}\big\}\Big)>0.\] Equivalently, $G_{\gamma,\mathbb{Q}}$ acts transitively on $T_\gamma$ with positive probability. \end{corollary} We next generalize Stoll's maximality lemma \cite[Lemma 1.6]{Stoll} to sets of quadratic polynomials; see also \cite[Lemma 3.2]{Jones}. In practice, this maximality lemma is the main tool for showing a given arboreal representation has finite index in the automorphism group of its associated preimage tree. \begin{proposition}\label{prop:maximality} Let $S$ be a set of quadratic polynomials of the form in (\ref{unicrit}), and let $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$. Assume that $n\geq1$ and that $\gamma_{n-1}^+$ is irreducible over $K$. Then the following statements are equivalent: \vspace{.15cm} \begin{enumerate} \item[\textup{(1)}] $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{2^{n-1}}$. \vspace{.15cm} \item[\textup{(2)}] $\ell_{\gamma,1}\,\gamma_n^+(c)$ is not a square in $K_{\gamma,n-1}$. \end{enumerate} \end{proposition} \begin{remark} Since $K_{\gamma,n}/K_{\gamma,n-1}$ is the compositum of at most $2^{n-1}$ quadratic extensions (one for each root of $\gamma_{n-1}^+$), we see that $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{m}$ for some $0\leq m\leq 2^{n-1}$. For this reason, when $m=2^{n-1}$ we say that the extension $K_{\gamma,n}/K_{\gamma,n-1}$ is maximal. \end{remark} \begin{proof} We begin with a few observations analogous to those in the proof of \cite[Lemma 1.6]{Stoll}.
Let $\theta_n(x)=a(x-c)^2+b$, let $d=2^{n-1}$, and let $\alpha_1, \alpha_2, \dots, \alpha_{d}$ be the roots of $\gamma_{n-1}^+$ in $K_{\gamma,n-1}$. Then $K_{\gamma,n}=K_{\gamma,n-1}\big(\sqrt{a(\alpha_i-b)}:\,1\leq i\leq d\big)$ since $c\pm\frac{1}{a}\sqrt{a(\alpha_i-b)}$ are the roots of $\gamma_n^+$. Hence, $K_{\gamma,n}/K_{\gamma,n-1}$ is a $2$-Kummer extension and $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{d-\dim(V)}$, where $V$ is the $\mathbb{F}_2$-vector space given by \[V:=\Big\{(e_1,\dots, e_{d})\in\mathbb{F}_2^{d}\,:\,\prod_{i=1}^d(a(\alpha_i-b))^{e_i}\in (K_{\gamma,n-1})^2\Big\};\] see \cite[VI \S8]{Lang}. On the other hand, since $G_{\gamma,n-1}:=\Gal(K_{\gamma,n-1}/K)$ permutes the roots of $\gamma_{n-1}^+$, we obtain an induced linear action of $G_{\gamma,n-1}$ on $V$. Moreover, since $G_{\gamma,n-1}$ is a $2$-group, either $\dim(V)=0$ or $V$ has a non-trivial $G_{\gamma,n-1}$-fixed vector; see \cite[I Lemma 6.3]{Lang}. However, $\gamma_{n-1}^+$ is irreducible over $K$, so that $G_{\gamma,n-1}$ acts transitively on the roots of $\gamma_{n-1}^+$. In particular, $(1,\dots,1)$ is the only possible non-trivial fixed vector. Therefore, we have deduced the following fact: either $\dim(V)=0$ or $(1,\dots,1)\in V$. However, if $(1,\dots,1)\in V$, then \[\prod_{i=1}^da(\alpha_i-b)=\frac{a^d\cdot(-1)^{d}}{\ell_{\gamma,n-1}}\cdot\Big(\ell_{\gamma,n-1}\prod_{i=1}^d(b-\alpha_i)\Big)=\frac{a^d}{\ell_{\gamma,n-1}}\cdot\gamma_{n-1}^+(b)=\frac{a^d}{\ell_{\gamma,n-1}}\cdot\gamma_{n}^+(c)\] is a square in $K_{\gamma,n-1}$; here we use that $d$ is even. Moreover, (\ref{leadingterm}) implies that $\ell_{\gamma,n-1}$ is a square in $K$ times $\ell_{\gamma,1}$. In particular, $(1,\dots,1)\in V$ if and only if $\ell_{\gamma,1}\,\gamma_n^+(c)$ is a square in $K_{\gamma,n-1}$. The result easily follows.
\end{proof} We combine the discriminant formula and the maximality lemma above to obtain a sufficient criterion for ensuring that a given arboreal representation (associated to a sequence of quadratic polynomials) has finite index in the automorphism group of its preimage tree. To do this, we briefly fix some notation. Let $K$ be a global field of characteristic $0$, i.e., a number field or a finite extension $K/k(t)$ of a rational function field in one variable; here $k$ has characteristic $0$. Given a finite prime $\mathfrak{p}$ of $K$, we let $v_{\mathfrak{p}}$ denote the normalized valuation on $K$ associated to $\mathfrak{p}$. Moreover, when $K$ is a number field, we let $\mathfrak{o}_K$ denote the ring of integers of $K$. When $K$ is a function field, we choose a prime $\mathfrak{p}_0$, and let $\mathfrak{o}_K$ denote the set $\{z\in K\,:\,v_{\mathfrak{p}}(z)\geq0\,\text{for all $\mathfrak{p}\neq\mathfrak{p}_0$}\}$. With these notions in place, we have the following arithmetic finite index test. \begin{theorem}{\label{maxtest}} Let $K$ be a global field of characteristic zero and let $S$ be a set of quadratic polynomials in $\mathfrak{o}_K[x]$ with common critical point $c\in\mathfrak{o}_K$. Assume that a sequence $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$ is such that $\gamma_m^+$ is irreducible for all $m\geq1$. Moreover, assume that for all $n$ sufficiently large there exists a prime $\mathfrak{p}_{\gamma,n}$ of $K$ with the following properties: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] $v_{\mathfrak{p}_{\gamma,n}}(2)=0$. \vspace{.1cm} \item [\textup{(2)}] $\mathfrak{p}_{\gamma,n}\neq\mathfrak{p}_0$ if $K$ is a function field. \vspace{.1cm} \item[\textup{(3)}] $v_{\mathfrak{p}_{\gamma,n}}(\ell(\theta_m))=0$ for all $1\leq m\leq n$.\vspace{.1cm} \item [\textup{(4)}] $v_{\mathfrak{p}_{\gamma,n}}(\gamma_m^+(c))=0$ for all $1\leq m\leq n-1$. \vspace{.1cm} \item [\textup{(5)}] $v_{\mathfrak{p}_{\gamma,n}}(\gamma_n^+(c))\equiv1\pmod{2}$. 
\vspace{.1cm} \end{enumerate} Then $G_{\gamma,K}$ is a finite index subgroup of $\Aut(T_\gamma)$. \end{theorem} \begin{proof} By the discriminant formula in Proposition \ref{prop:discriminant}, if $\mathfrak{p}_{\gamma,n}$ has properties $(1)$-$(4)$ above, then $\mathfrak{p}_{\gamma,n}$ must be unramified in $K_{\gamma,n-1}$. Hence properties (3) and (5) together imply that $\ell_{\gamma,1}\gamma_n^+(c)$ cannot be a square in $K_{\gamma,n-1}$. In particular, it follows from Proposition \ref{prop:maximality} that $K_{\gamma,n}/K_{\gamma,n-1}$ is maximal for all $n$ sufficiently large. Therefore, $G_{\gamma,K}$ is a finite index subgroup of $\Aut(T_\gamma)$ as claimed. \end{proof} As a consequence of Theorem \ref{maxtest}, we construct examples over the global field $K=\mathbb{Q}(t)$ for which Question \ref{question:Galois} has an affirmative answer; here we take $\mathfrak{o}_K:=\mathbb{Q}[t]$. In what follows, $\frac{d}{dt}$ denotes the usual derivative on polynomials and $\overline{c}\in\mathbb{Z}/2\mathbb{Z}[t]$ denotes the image of $c\in\mathbb{Z}[t]$ under the ring homomorphism $\mathbb{Z}[t]\rightarrow\mathbb{Z}/2\mathbb{Z}[t]$ given by reducing coefficients. \begin{theorem}{\label{thm:functionfield}} Let $K=\mathbb{Q}(t)$ and let $S$ be a set of quadratic polynomials of the form $x^2+c$ such that each $c$ satisfies all of the following conditions: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] $c\in\mathbb{Z}[t]$ and $\ell(c)=\pm{1}$. \vspace{.2cm} \item [\textup{(2)}] $\deg(c)=d>0$. \vspace{.1cm} \item[\textup{(3)}] $\displaystyle{\frac{d}{dt}}\,\overline{c}=1$.\vspace{.1cm} \end{enumerate} Then $G_{\gamma,K}=\Aut(T_\gamma)$ for all $\gamma\in\Phi_S$. \end{theorem} \begin{example} In particular, the set $S=\big\{x^2+(-t^2+t+3),\; x^2+(t^2-5t)\big\}$ satisfies the hypothesis of Theorem \ref{thm:functionfield} above (with $d=2$). 
\end{example} \begin{remark} Although the conditions in Theorem \ref{thm:functionfield} may seem strange, their utility may be summarized as follows: conditions (1) and (3) ensure that $\gamma_n^+(0)$ is square-free and condition (2) ensures that $\deg(\gamma_n^+(0))=2^{n-1}d$. In particular, putting these facts together we deduce that $\gamma_n^+(0)$ has an irreducible factor appearing to exponent $1$, which is coprime to $\gamma_m^+(0)$ for all $1\leq m\leq n-1$ (by simple degree considerations). In particular, it follows that $K_{\gamma,n}/K_{\gamma, n-1}$ is maximal for all $n\geq1$ by Proposition \ref{prop:maximality}. \end{remark} \begin{proof} Suppose that conditions (1)-(3) of Theorem \ref{thm:functionfield} hold for $S$, and let $\gamma=(\theta_n)_{n=1}^\infty\in\Phi_S$. Then it follows easily by induction, using only that $\deg(f+g)=\max\{\deg(f),\deg(g)\}$ when $\deg(f)\neq\deg(g)$ and $\deg(f^2)=2\deg(f)$, that \vspace{.1cm} \begin{equation}\label{fact1} \deg(\gamma_n^+(0))=2^{n-1}d\;\;\;\; \text{for all $n\geq1$, $\gamma\in\Phi_S$}. \end{equation} Likewise, the leading term $\ell(\gamma_n^+(0))=\pm{1}$ by property (1) above. In particular, $\gamma_n^+(0)\in\mathbb{Z}[t]$ is a primitive polynomial (the gcd of its coefficients is $1$). We next show that each polynomial $\gamma_n^+(0)\in\mathbb{Q}[t]$ (a unique factorization domain) is square-free. To see this, suppose, for a contradiction, that $\gamma_n^+(0)=f_n\cdot g_n^2$ for some $f_n,g_n\in\mathbb{Q}[t]$ and some non-constant $g_n$. Note that by Gauss' Lemma, we can assume that $f_n,g_n\in\mathbb{Z}[t]$; here we use that $\gamma_n^+(0)$ is primitive. In particular, after writing $\theta_1=x^2+c$ for some $c$ satisfying (1)-(3) above, we have that \begin{equation}\label{fact2} f_n\cdot g_n^2=\gamma_n^+(0)=y_n^2+c \end{equation} for some $y_n\in\mathbb{Z}[t]$. Moreover, since the leading term of $\gamma_n^+(0)$ is $\pm{1}$, the leading term of $g_n$ must be $\pm{1}$ also.
Therefore, $\deg(g_n)=\deg(\overline{g_n})>0$, and the reduction of $g_n$ modulo $2$ is non-constant. On the other hand, after reducing coefficients and taking derivatives of both sides of (\ref{fact2}), we see that \[\Big(\frac{d}{dt}\overline{f_n}\,\Big)\cdot{\overline{g_n}}^2=\frac{d}{dt}\overline{c}=1\] by property (3). Hence, $\overline{g_n}$ is a unit in $\mathbb{Z}/2\mathbb{Z}[t]$. However, this contradicts the fact that $\deg(\overline{g_n})>0$. Therefore, $\gamma_n^+(0)\in\mathbb{Q}[t]$ is square-free as claimed. We use this fact to analyze the relevant Galois groups. Note first that since $\gamma_n^+(0)$ is non-constant and square-free in $\mathbb{Q}[t]$, Proposition \ref{prop:irreducible} implies that $\gamma_n^+$ is irreducible over $K=\mathbb{Q}(t)$ for all $n\geq1$. Likewise, if no prime $\mathfrak{p}_n$ (corresponding to an irreducible polynomial) of $\mathbb{Q}[t]$ as in Theorem \ref{maxtest} exists for $n\geq2$, then each irreducible factor $q(t)$ of $\gamma_n^+(0)$ must also divide some $\gamma_{m_q}^+(0)$ for some $1\leq m_q\leq n-1$: conditions (1)-(3) of Theorem \ref{maxtest} hold trivially, and condition (5) holds since $\gamma_n^+(0)$ is square-free. In particular, it follows that the polynomial $\gamma_n^+(0)$ divides the product $\gamma_1^+(0)\gamma_2^+(0)\cdots\gamma_{n-1}^+(0)$. However, in this case we deduce from (\ref{fact1}) that \[2^{n-1}d=\deg(\gamma_n^+(0))\leq\deg(\gamma_1^+(0)\gamma_2^+(0)\cdots\gamma_{n-1}^+(0))=d+2d+\dots+2^{n-2}d=(2^{n-1}-1)d.\] But this inequality forces $d\leq0$, a contradiction. Therefore, for all $n\geq2$ a prime $\mathfrak{p}_n$ of $K=\mathbb{Q}(t)$ as in Theorem \ref{maxtest} exists. In particular, the argument in the proof of Theorem \ref{maxtest} implies that the extensions $K_{\gamma,n}/K_{\gamma,n-1}$ are maximal for all $n\geq2$. Likewise, since $-\gamma_1^+(0)$ is not a square in $K$ (it is non-constant and square-free), the extension $K_{\gamma,1}/K$ is also maximal.
Hence, $G_{\gamma,K}=\Aut(T_\gamma)$ for all $\gamma\in\Phi_S$ as claimed. \end{proof}
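The mechanism of this proof is easy to check numerically on the example above. The following Python sketch (our own, assuming the SymPy library is available; the helper name \texttt{critical\_value} is ours) computes $\gamma_n^+(0)$ for a sample sequence drawn from $S=\big\{x^2+(-t^2+t+3),\;x^2+(t^2-5t)\big\}$ and verifies the two facts driving the argument: $\deg_t(\gamma_n^+(0))=2^{n-1}d$ and square-freeness, the latter via $\gcd(p,p')$ being constant.

```python
from sympy import symbols, expand, degree, gcd, diff

t = symbols('t')
c1 = -t**2 + t + 3  # satisfies (1)-(3): leading term -1, degree 2, (d/dt)(c mod 2) = 1
c2 = t**2 - 5*t     # likewise

def critical_value(cs):
    """gamma_n^+(0) for theta_i(x) = x^2 + cs[i-1], i.e. theta_1(theta_2(...theta_n(0)...))."""
    v = 0
    for c in reversed(cs):  # in right iteration theta_n is applied first
        v = v**2 + c
    return expand(v)

d = 2  # common degree of the polynomials c above
for n in (1, 2, 3):
    p = critical_value([c1, c2, c1][:n])
    print(n,
          degree(p, t) == 2**(n - 1) * d,      # deg gamma_n^+(0) = 2^{n-1} d
          degree(gcd(p, diff(p, t)), t) == 0)  # gcd(p, p') constant <=> p square-free
```

Both checks succeed for every $n$ tested, as the theorem predicts.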
\section{Introduction} This paper deals with the solution of the Sturm-Liouville problem on a quantum computer. Quantum computers have shown great promise in solving problems as diverse as the discrete problems of searching and factoring \cite{gro-96a,sho-94} and the continuous problems including integration, path integration, and approximation \cite{nov-01,hei-01,tra-woz-01,hei-03a,hei-03b}. The main motivation for quantum computing is its potential to solve these important problems efficiently. Shor's algorithm achieves an exponential speedup over any known classical algorithm for factoring, but until the classical complexity of factoring is proven, the exponential speedup remains a conjecture. The quantum algorithms for integration provide provable exponential speedups over classical \emph{worst-case} algorithms, but only polynomial speedups over classical \emph{randomized} algorithms. Recently Papageorgiou and Wo{\'z}niakowski introduced a quantum algorithm for the Sturm-Liouville problem \cite{pap-woz-05} which uses the quantum phase estimation algorithm. They showed that quantum algorithms with power queries\footnote{We will define power queries rigorously in Definition \ref{defi:sl-power-query}. Informally they are just an arbitrary (integer) power of a specific unitary matrix.} achieve a provable exponential reduction in the number of power queries over the number of queries needed in the classical worst-case or randomized setting. Naturally query complexity results neglect the cost of actually implementing the queries. At the end of this paper we will discuss this problem for power queries, but it is currently not clear under which conditions power queries are sufficiently inexpensive to implement for the Sturm-Liouville problem. In this paper we will prove \emph{lower bounds} on the number of power queries for quantum algorithms that solve the Sturm-Liouville problem. This can be used to show the optimality of the algorithm proposed in \cite{pap-woz-05}. 
To prove lower bounds for algorithms with power queries, the previously known quantum lower bound techniques, such as the ``polynomial method'' of Beals et al.\ \cite{bea-buh-cle-mos-wol-98, nay-wu-99}, do not suffice. Our lower bound method builds on the ``trigonometric polynomial method'' \cite{bes-04}, which is an extension of the above-mentioned polynomial method and was modified to be used with power queries in \cite{bes-04a} to prove lower bounds for the phase estimation algorithm. Our method uses frequency analysis instead of a maximum degree argument, since the latter is not applicable in the case of arbitrary powers. \section{The Sturm-Liouville eigenvalue problem} Papageorgiou and Wo{\'z}niakowski study in \cite{pap-woz-05} a simplified version of the univariate Sturm-Liou\-vil\-le problem. Consider the eigenvalue problem for the differential equation \begin{equation}\label{eqn:sl-pap-woz} \begin{split} - u''(x) + q(x) u(x) = \lambda u(x)\\ u(0) = u(1) = 0 \end{split} \end{equation} for a given nonnegative function $q$ belonging to the class $\mathbf{Q}$ defined as \begin{equation}\label{eqn:Q} \mathbf{Q} = \Big\{ q:[0,1] \to [0,1] \ : \ q\in C^2([0,1]) \text{ and } \max_{i=0,1,2} \max_{x\in [0,1]}|q^{(i)}(x)| \leq 1 \Big\} . \end{equation} We are looking for the smallest eigenvalue $\lambda$ such that there exists a non-zero function $u_\lambda$ that satisfies (\ref{eqn:sl-pap-woz}). What is the minimal number of queries of $q$ that permits the determination of the smallest eigenvalue $\lambda$ in this equation with error $\varepsilon$ and probability $3/4$ on a classical or quantum computer? The one-dimensional time-independent Schr{\"o}dinger equation \begin{equation}\label{eqn:schroedinger} - \frac{\hbar^2}{2 m} \frac{\mathrm{d}^2}{\mathrm{d} x^2} \Psi(x) + V(x) \Psi(x) = E \Psi(x) \end{equation} of a particle in a box, see \cite{mes-61}, is an instance of (\ref{eqn:sl-pap-woz}).
We are given a potential $V$ and are looking for the eigenfunctions $\Psi$ of this equation and their corresponding energies $E$. In particular, we are interested in the ground state and its energy, i.e., for a given potential $V$, we want to determine the eigenfunction $\Psi_0$ and its energy $E_0$, such that all other eigenfunctions $\Psi_n$ have higher energies $E_n \geq E_0$. Since quantum systems obey equation (\ref{eqn:schroedinger}), it seems plausible that quantum computers could solve the eigenvalue problem faster than a classical computer. In the next section we define a quantum algorithm with power queries. We especially have to tackle the question of how the input (i.e., the function $q$ in the Sturm-Liouville problem) enters the quantum algorithm. \section{Quantum algorithms for the Sturm-Liou\-ville problem} Let us denote the differential operator associated with the Sturm-Liouville problem for a certain $q \in \mathbf{Q}$ as $\mathbb{L}_q : C^2(\intervalcc{0}{1}) \rightarrow C^0(\intervalcc{0}{1})$, defined by \begin{equation*} \mathbb{L}_q u(x) = - \frac{\mathrm{d}^2}{\mathrm{d} x^2} u(x) + q(x) u(x). \end{equation*} We discretize $\mathbb{L}_q$ by approximating the second derivative at the points $\frac{1}{n+1}$, $\frac{2}{n+1}$, $\ldots$, $\frac{n}{n+1}$ and obtain an $n \times n$ matrix $M_q$: {\small% \begin{equation}\label{eqn:Mq} M_q = (n+1)^2 \begin{bmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{bmatrix} + \begin{bmatrix} q(\frac{1}{n+1}) \hspace{-5pt} & & & & \\ & \hspace{-5pt} q(\frac{2}{n+1}) \hspace{-5pt} & & & \\ & & \hspace{-5pt} \ddots \hspace{-5pt} & & \\ & & & \hspace{-5pt} q(\frac{n-1}{n+1}) \hspace{-5pt} & \\ & & & & \hspace{-5pt} q(\frac{n}{n+1}) \end{bmatrix} . \end{equation}% }% The eigenvalues of $\mathbb{L}_q$ and $M_q$ are closely related.
Let us denote the smallest eigenvalue of $\mathbb{L}_q$ by $\lambda(q)$ and let us write $\lambda_1(M_q)$ for the smallest eigenvalue of $M_q$. Then (see e.g. \cite{kel-68}) \begin{equation}\label{eqn:eigenvalue-discretization-error} \lambda(q) - \lambda_1(M_q) = \mathcal{O} ( n^{-2} ). \end{equation} The input $q \in \mathbf{Q}$ enters the quantum computer in the form of a unitary black-box transformation called a quantum query. For the Sturm-Liouville problem we define this query to be the unitary operator $\exp( \tfrac{i}{2} M_q)$. One can show that the smallest eigenvalue $\lambda(q)$ of the Sturm-Liouville equation satisfies $\pi^2 \leq \lambda(q) \leq \pi^2 + 1$. To avoid ambiguity we use proper scaling, i.e., instead of $\exp(i M_q)$ we use $\exp(\tfrac{i}{2} M_q)$, which defines a unique phase $\varphi \in \intervalco{0}{1}$ by $ 2 \pi i \varphi = \tfrac{i}{2} \lambda(q) $. We now define an associated quantum \emph{power query} for $\exp( \tfrac{i}{2} M_q )$. \begin{defi}\label{defi:sl-power-query} Let $\mathbb{L}_q$ be the differential operator for a Sturm-Liouville problem and $M_q$ its discretization at $n$ points as in (\ref{eqn:Mq}) for $q \in \mathbf{Q}$. We define the power query $W_l^p(\exp(\tfrac{i}{2} M_q))$, where $l \in \Set{1, 2, \ldots c}$ and $p \in \mathbb{N}$, acting on $\mathbb{C}^{2^c} \otimes \mathbb{C}^n$ as \begin{equation*} W_l^p(\exp(\tfrac{i}{2} M_q)) \Ket{x_1} \ldots \Ket{x_c} \Ket{\psi} = \begin{cases} \Ket{x_1} \ldots \Ket{x_c} \exp(\tfrac{i}{2} p M_q) \Ket{\psi} & \text{for } x_l = 1 \\ \Ket{x_1} \ldots \Ket{x_c} \Ket{\psi} & \text{otherwise} \end{cases} \end{equation*} for all $x_1, \ldots, x_c \in \Set{0,1}$ and arbitrary normalized vectors $\Ket{\psi} \in \mathbb{C}^n$ and extend this definition to all quantum states by linearity. \end{defi} Suppose that the $\Ket{\psi_s}$, $s=1, \ldots, n$, are the eigenvectors of $M_q$ and that $M_q \Ket{\psi_s} = \lambda_s \Ket{\psi_s}$. 
Then for $\Ket{\psi} = \sum_{s=1}^n \alpha_s \Ket{\psi_s}$ and $\Ket{x} = \Ket{x_1} \ldots \Ket{x_c}$ with $x_l = 1$ \begin{equation*} W_l^p(\exp(\tfrac{i}{2} M_q)) \Ket{x} \Ket{\psi} = \Ket{x} \exp(\tfrac{i}{2} p M_q) \Ket{\psi} = \sum_{s=1}^n \alpha_s \Ket{x} e^{\tfrac{i}{2} p \lambda_s} \Ket{\psi_s} . \end{equation*} Quantum algorithms are products of unitary transformations. Every quantum algorithm that approximates $\lambda(q)$ can be divided into stages that use powers of $\exp(\tfrac{i}{2} M_q)$ and therefore depend on $q$, and stages that are independent of $q$. Let us define a quantum algorithm with power queries. \begin{defi}\label{defi:sl-power-query-algo} For a Sturm-Liouville problem given by the input $q \in \mathbf{Q}$ with the solution $\lambda(q)$, we define a quantum algorithm \begin{equation*} \mathcal{A} = (\ket{\psi^{(0)}}; U_0, \ldots, U_T; l_1, p_1, \ldots, l_T, p_T; \widetilde{\lambda}) \end{equation*} with $T$ power queries that solves this problem as follows. Let $U_0$, $U_1$, $\ldots$, $U_{T}$ be arbitrary but fixed unitary transformations and $\ket{\psi^{(0)}}$ a fixed initial state. Let $W_{l_j}^{p_j}(\exp(\tfrac{i}{2} M_q))$ be a power query as in Definition \ref{defi:sl-power-query}. A measurement of the state \begin{equation*} \ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} = U_{T} W_{l_{T}}^{p_{T}}(\exp(\tfrac{i}{2} M_q)) \ldots U_1 W_{l_1}^{p_1}(\exp(\tfrac{i}{2} M_q)) U_0 \ket{\psi^{(0)}} \end{equation*} in the standard basis yields a state $\Ket{k}$ with probability $p_{k}(q)$. For each $k$ compute an approximation $\widetilde{\lambda}(k) \in \mathbb{R}$ to the eigenvalue of interest $\lambda(q)$ on a classical computer. For every $q \in \mathbf{Q}$ the probability that an $\varepsilon$-approximation $\widetilde{\lambda}(k)$ of $\lambda(q)$ is computed is given by \begin{equation}\label{eqn:prob-condition} \sum_{ k : | \lambda(q) - \widetilde{\lambda}(k) | < \varepsilon} p_{k}(q) .
\end{equation} For any algorithm $\mathcal{A}$ with $T$ power queries we define \begin{equation*} e (\mathcal{A}, T) = \inf \Set{ \varepsilon \, : \, \varepsilon \text{ chosen such that (\ref{eqn:prob-condition}) is larger than } \tfrac{3}{4} \text{ for all } q \in \mathbf{Q} } \end{equation*} as the worst-case quantum error of $\mathcal{A}$. \end{defi} We measure in the standard basis for convenience only; a measurement in any other basis is easily achieved by modifying the operator $U_T$ accordingly. A model like this was introduced in \cite{bea-buh-cle-mos-wol-98} for discrete inputs $q$. It was extended to continuous functions by Heinrich in \cite{hei-01}. Our model is an extension of this model to incorporate power queries. \section{Upper bounds} To estimate $\lambda(q)$ on a quantum computer with power queries Papageorgiou and Wo{\'z}niakowski used the quantum phase estimation algorithm, see e.g. \cite{nie-chu-00}. This algorithm takes a unitary transformation $Q$ with an eigenvector $\Ket{\xi}$ as input, i.e., $ Q \Ket{\xi} = e^{2 \pi i \varphi} \Ket{\xi} $. Here $\varphi \in \intervalco{0}{1}$ is called the ``phase'' of the eigenvalue corresponding to $\Ket{\xi}$, and the phase estimation algorithm gives us an approximation $\widetilde{\varphi}$ to $\varphi$. This algorithm has the final state \begin{equation*} \Ket{\psi^{(T)}(Q)} = (\mathcal{F}_{2^T}^{-1} \otimes I) W_1^{2^{T-1}}(Q) W_2^{2^{T-2}}(Q) \ldots W_T^{2^0}(Q) (H^{\otimes T} \otimes I) \Ket{0} \Ket{\xi}, \end{equation*} and is depicted in Figure \ref{fig:phase-est-algo}. 
\begin{figure}[htbp] \begin{equation*} \xymatrix @*=<0em> @C=7pt @R=9pt { \lstick{\Ket{0}} & \gate{H} & \qw & \qw & \qw & \qwdot & \qw & \qw & \ctrl{5} & \multigate{4}{\mathcal{F}_{2^T}^{-1}} & \rstick{\Ket{k_{1}}} \qw \\ \lstick{\Ket{0}} & \gate{H} & \qw & \qw & \qw & \qwdot & \qw & \ctrl{4} & \qw & \ghost{\mathcal{F}_{2^T}^{-1}} & \rstick{\Ket{k_{2}}} \qw \\ \lstick{\vdots\ }& & & & & & & & & \mathcal{F}_{2^T}^{-1} & \rstick{\ \vdots} \\ \lstick{\Ket{0}} & \gate{H} & \qw & \ctrl{2} & \qw & \qwdot & \qw & \qw & \qw & \ghost{\mathcal{F}_{2^T}^{-1}} & \rstick{\Ket{k_{T-1}}} \qw \\ \lstick{\Ket{0}} & \gate{H} & \ctrl{1} & \qw & \qw & \qwdot & \qw & \qw & \qw & \ghost{\mathcal{F}_{2^T}^{-1}} & \rstick{\Ket{k_{T}}} \qw \\ \lstick{\Ket{\xi}} & {/} \qw & \gate{Q^{2^0}} & \gate{Q^{2^1}} & \qw & \qwdot & \qw & \gate{Q^{2^{T-2}}} & \gate{Q^{2^{T-1}}} & {/} \qw & \rstick{\Ket{\xi}} \qw \\ } \end{equation*} \caption{The quantum phase estimation algorithm. $\mathcal{F}_{2^T}^{-1}$ is the inverse quantum Fourier transform on $T$ qubits. \label{fig:phase-est-algo}} \end{figure} Suppose $Q$ is an $r$-qubit transformation. A measurement of $\Ket{\psi^{(T)}(Q)}$ returns a state \begin{equation*} \Ket{k} = \Ket{k_1} \ldots \Ket{k_T} \Ket{k_{T+1}} \ldots \Ket{k_{T+r}}. \end{equation*} The algorithm then uses $k$ to compute an approximation $ \widetilde{\varphi}(k) = k_{1} 2^{-1} + k_{2} 2^{-2} + \ldots + k_T 2^{-T} $ to $\varphi$ classically. One can show, see e.g. \cite{nie-chu-00}, that with probability greater than $\frac{3}{4}$ the algorithm approximates $\varphi$ up to precision $\varepsilon$ with $\mathcal{O} ( \log (1/\varepsilon) )$ power queries. Papageorgiou and Wo{\'z}niakowski use this algorithm to approximate the smallest eigenvalue $\lambda(q)$ of the Sturm-Liouville operator $\mathbb{L}_q$ and use the operator $Q = \exp( \tfrac{i}{2} M_q )$ as a query.
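The success probability just quoted can be checked directly for the ideal circuit: after the inverse Fourier transform, outcome $k$ is measured with probability $\Abs{ 2^{-T} \sum_{n=0}^{2^T-1} e^{2 \pi i n (\varphi - k/2^T)} }^2$, and the mass on outcomes within $2^{-T}$ of $\varphi$ exceeds $3/4$. A minimal numerical sketch (NumPy; the phase $\varphi = 0.3$ and $T = 6$ are illustrative choices, not values from the text):

```python
import numpy as np

def phase_estimation_probs(phi, T):
    """Measurement distribution of ideal T-qubit phase estimation for eigenphase phi."""
    N = 2 ** T
    n = np.arange(N)
    # amplitude of outcome k: (1/N) * sum_n exp(2*pi*i*n*phi) * exp(-2*pi*i*k*n/N)
    amps = np.exp(-2j * np.pi * np.outer(np.arange(N), n) / N) @ np.exp(2j * np.pi * n * phi) / N
    return np.abs(amps) ** 2

phi, T = 0.3, 6
probs = phase_estimation_probs(phi, T)
# distance on the circle between phi and each candidate estimate k/2^T
dist = np.abs((np.arange(2 ** T) / 2 ** T - phi + 0.5) % 1.0 - 0.5)
success = probs[dist <= 2.0 ** (-T)].sum()
print(success)  # comfortably above 3/4
```

Doubling the precision (adding one more control qubit) adds exactly one power query, which is the $\mathcal{O}(\log(1/\varepsilon))$ scaling.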
Since the phases of $\exp( \tfrac{i}{2} M_q )$ and $\exp( \tfrac{i}{2} \mathbb{L}_q )$ are related through equation (\ref{eqn:eigenvalue-discretization-error}), we have to discretize at $n = \mathcal{O}(\varepsilon^{-1/2})$ points. The quantum phase estimation algorithm requires the knowledge of the eigenvector for which the phase is estimated. For the Sturm-Liouville problem we need the eigenvector $\Ket{z_1(M_q)}$ of $M_q$ corresponding to the smallest eigenvalue $\lambda_1(M_q)$. We can compute $\Ket{z_1(M_q)}$ through the method of Jaksch and Papageorgiou \cite{jak-pap-03}, which computes a superposition of eigenvectors $\Ket{z_j(M_q)}$ of $M_q$, with a sufficiently large $\Ket{z_1(M_q)}$ component. For details see \cite{jak-pap-03,pap-woz-05}. \section{Lower Bounds} Our goal is to prove that the algorithm described in the previous section is optimal with respect to the number of power queries. We have to prove that every quantum algorithm $\mathcal{A}$ with $T$ power queries that returns a correct answer with precision $e(\mathcal{A},T) \leq \varepsilon$ has to use $T = \Omega (\log (1/\varepsilon) )$ power queries. We will show that even for a much simplified version of the problem this lower bound still holds. Consider as input only constant functions $q(x)=q \in \intervalcc{0}{1}$. Obviously $q \in \mathbf{Q}$. It is easy to see that in this case the eigenfunctions which fulfill the boundary condition in (\ref{eqn:sl-pap-woz}) are \begin{equation}\label{eqn:eigfunc-Lq} u_s(x) = \sin (s \pi x) \end{equation} for $s \in \mathbb{N}$ and that they have eigenvalues $\lambda_s = s^2 \pi^2 + q$, which means that the smallest eigenvalue $\lambda(q)$ is $\lambda(q) = \pi^2 + q$. 
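For constant $q$ the claims above can also be confirmed numerically: building $M_q$ from (\ref{eqn:Mq}) and comparing its smallest eigenvalue with $\lambda(q) = \pi^2 + q$ exhibits the $n^{-2}$ rate of (\ref{eqn:eigenvalue-discretization-error}). A small sketch (NumPy; the choice $q = 1/2$ and the grid sizes are arbitrary):

```python
import numpy as np

def smallest_eigenvalue(q, n):
    """Smallest eigenvalue of the n x n discretization M_q for constant q."""
    lap = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    return np.linalg.eigvalsh(lap + q * np.eye(n))[0]

q = 0.5
exact = np.pi ** 2 + q          # smallest eigenvalue of L_q for constant q
errors = [abs(smallest_eigenvalue(q, n) - exact) for n in (50, 100, 200)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)                   # both close to 4: doubling n quarters the error
```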
Similarly for the discretization $M_q$ of $\mathbb{L}_q$ with constant $q \in \intervalcc{0}{1}$ the eigenvectors are \begin{equation}\label{eqn:eigvec-Mq} \Ket{u_s} = \sqrt{\tfrac{2}{n+1}} \sum_{x=1}^n \sin \big( \tfrac{s \pi x}{n+1} \big) \Ket{x} \end{equation} with eigenvalues $4 (n+1)^2 \sin^2 \big( \tfrac{s \pi}{2(n+1)} \big) + q$. We want to investigate how different power queries lead to different outputs and turn to the techniques in \cite{bes-04a}. \begin{theo}\label{theo:trig-poly-ap} Any quantum algorithm with power queries $W_l^p(\exp(\tfrac{i}{2} M_q))$ for $q(x) = q \in \intervalco{0}{1}$, see Definition \ref{defi:sl-power-query-algo}, that uses $c \in \mathbb{N}$ control qubits, can be written as \begin{equation}\label{eqn:psiTMq} \begin{split} \Ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} & = U_{T} W_{l_{T}}^{ p_{T}}(\exp(\tfrac{i}{2} M_q)) \ldots U_1 W_{l_1}^{ p_1}(\exp(\tfrac{i}{2} M_q)) U_0 \ket{\psi^{(0)}} \\ & = \sum_{k=0}^{n 2^c - 1} S_k^{(T)} (q) \Ket{k} , \end{split} \end{equation} where $U_1, \ldots, U_T$ are unitary operators and the $S_k^{(T)} (q)$ are trigonometric polynomials of the following form: \begin{equation}\label{eqn:sktq} S_k^{(T)} (q) = \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,m} e^{ \frac{i}{2} m q} , \end{equation} with $\mathcal{M}_T$ defined as $\mathcal{M}_0 = \Set{0}$ and \begin{equation}\label{eqn:M-recursive} \mathcal{M}_{T+1} = \Set{ m \, : \, m \in \mathcal{M}_T } \cup \Set{ m + p_{T+1} \, : \, m \in \mathcal{M}_T }, \end{equation} and the coefficients $\eta^{(T)}_{k,m} \in \mathbb{C}$ do not depend on $q$ and are normalized: \begin{equation}\label{eqn:eta-norm} \sum_{k} \sum_{m \in \mathcal{M}_{T}} \abs{\eta^{(T)}_{k,m}}^2 = 1 . \end{equation} \end{theo} \begin{proof} The proof is by induction on the number of queries $T$. 
We will write the state of the algorithm after $T$ steps $\Ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))}$ in the basis $(\Ket{k}\Ket{\psi_s})_{k,s}$, $k=0, 1, \ldots, 2^c-1$, $s=1, 2, \ldots, n$, which is split into a control part $\Ket{k}$ and an eigenvector part $\Ket{\psi_s}$. We will not address the ancilla qubits in our proof, but they can easily be treated (after possibly reordering the qubits) as control bits that are never used. For $T=0$ power queries we can write \begin{equation*} \Ket{\psi^{(0)}(\exp(\tfrac{i}{2} M_q))} = U_0 \Ket{\psi^{(0)}} = \sum_{k,s} \eta_{k,s,0}^{(0)} \Ket{k} \Ket{\psi_s}, \end{equation*} which contains only powers $e^{\frac{i}{2} m q}$ from $m \in \mathcal{M}_0 = \Set{ 0 }$ and obviously \begin{equation*} \sum_{k,s} \sum_{m \in \mathcal{M}_0} \abs{\eta_{k,s,m}^{(0)}}^2 = \sum_{k,s} \abs{\eta_{k,s,0}^{(0)}}^2 = 1. \end{equation*} Let us now assume $\Ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))}$ can be written as \begin{equation*} \ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} = \sum_{k,s} \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} e^{ \frac{i}{2} m q} \Ket{k} \Ket{\psi_s}, \end{equation*} with coefficients $\eta^{(T)}_{k,s,m}$ fulfilling condition (\ref{eqn:eta-norm}). If we apply $W_{l_{T+1}}^{p_{T+1}}(\exp(\tfrac{i}{2} M_q))$ to $\ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))}$ we get ($k_{l_{T+1}}$ is the control bit, i.e., the $l_{T+1}$-th bit in the binary representation of $k$): \begin{multline}\label{eqn:W-applied} W_{l_{T+1}}^{p_{T+1}}(\exp(\tfrac{i}{2} M_q)) \ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} = \sum_{\substack{k,s\\k_{l_{T+1}} = 0}} \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} e^{ \frac{i}{2} m q } \Ket{k} \Ket{\psi_s} \\ + \sum_{\substack{k,s\\k_{l_{T+1}} = 1}} \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} e^{ \frac{i}{2} m q} \Ket{k} \exp( \tfrac{i}{2} p_{T+1} M_q ) \Ket{\psi_s} .
\end{multline} We define $\zeta_s := e^{ \frac{i}{2} 4 (n+1)^2 \sin^2 \big( \tfrac{s \pi}{2(n+1)} \big) }$ and proceed to analyze the second term in (\ref{eqn:W-applied}), where the control bit $k_{l_{T+1}} = 1$ and get the following \begin{equation*} \begin{split} & \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} e^{ \frac{i}{2} m q} \Ket{k} \exp(\tfrac{i}{2} p_{T+1} M_q) \Ket{\psi_s} \\ = & \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} e^{ \frac{i}{2} m q} e^{ \frac{i}{2} p_{T+1} \big( 4 (n+1)^2 \sin^2 \big( \tfrac{s \pi}{2(n+1)} \big) + q \big) } \Ket{k} \Ket{\psi_s} \\ = & \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,s,m} \zeta_s^{p_{T+1}} e^{ \frac{i}{2} (m + p_{T+1}) q } \Ket{k} \Ket{\psi_s} . \end{split} \end{equation*} If we define $\widetilde{\eta}^{(T+1)}_{k,s,m}$ for all $m \in \mathcal{M}_{T+1}$ as \begin{equation*} \widetilde{\eta}^{(T+1)}_{k,s,m} := \begin{cases} \eta^{(T)}_{k,s,m-p_{T+1}} \zeta_s^{p_{T+1}} & \text{ for } k_{l_{T+1}} = 1 \text{ and } m - p_{T+1} \in \mathcal{M}_T \\ \eta^{(T)}_{k,s,m} & \text{ for } k_{l_{T+1}} = 0 \text{ and } m \in \mathcal{M}_{T} \\ 0 & \text{ otherwise} \end{cases} , \end{equation*} we can write \begin{equation*} W_{l_{T+1}}^{p_{T+1}}(\exp(\tfrac{i}{2} M_q)) \ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} = \sum_{k,s} \sum_{m \in \mathcal{M}_{T+1}} \widetilde{\eta}^{(T+1)}_{k,s,m} e^{ \frac{i}{2} m q} \Ket{k} \Ket{\psi_s} . \end{equation*} We check our normalization condition (\ref{eqn:eta-norm}) for $\widetilde{\eta}^{(T+1)}_{k,s,m}$, \begin{equation*} \begin{split} & \sum_{k,s} \sum_{m \in \mathcal{M}_{T+1}} \abs{\widetilde{\eta}^{(T+1)}_{k,s,m}}^2 \\ = & \sum_{\substack{k,s\\k_{l_{T+1}}=0}} \sum_{m \in \mathcal{M}_{T}} \abs{\eta^{(T)}_{k,s,m}}^2 + \sum_{\substack{k,s\\k_{l_{T+1}}=1}} \sum_{m - p_{T+1} \in \mathcal{M}_{T}} \abs{\eta^{(T)}_{k,s,m - p_{T+1}} \zeta_s^{p_{T+1}}}^2 \\ = & \sum_{k,s} \sum_{m \in \mathcal{M}_{T}} \abs{\eta^{(T)}_{k,s,m}}^2 = 1. 
\end{split} \end{equation*} The next step in the algorithm is to apply the unitary transformation $U_{T+1}$. For $k,l = 0, \ldots, 2^c-1$ and $s,t=1, \ldots, n$ define the coefficients $u_{l,t,k,s} = \Bra{l}\Bra{\psi_t} U_{T+1} \Ket{k}\Ket{\psi_s}$ and let \begin{equation*} \eta_{l,t,m}^{(T+1)} := \sum_{k,s} \widetilde{\eta}^{(T+1)}_{k,s,m} u_{l,t,k,s} \end{equation*} This allows us to write \begin{equation*} \begin{split} & U_{T+1} W_{l_{T+1}}^{p_{T+1}}(\exp(\tfrac{i}{2} M_q)) \ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} \\ = & \sum_{k,s} \sum_{m \in \mathcal{M}_{T+1}} \widetilde{\eta}^{(T+1)}_{k,s,m} e^{ \frac{i}{2} m q} U_{T+1} \Ket{k} \Ket{\psi_s} \\ = & \sum_{l,t} \sum_{m \in \mathcal{M}_{T+1}} \sum_{k,s} \widetilde{\eta}^{(T+1)}_{k,s,m} u_{l,t,k,s} e^{ \frac{i}{2} m q} \Ket{l} \Ket{\psi_t} \\ = & \sum_{l,t} \sum_{m \in \mathcal{M}_{T+1}} \eta^{(T+1)}_{l,t,m} e^{ \frac{i}{2} m q} \Ket{l} \Ket{\psi_t} . \end{split} \end{equation*} It remains to check that \begin{equation*} \begin{split} & \sum_{l,t} \sum_{m \in \mathcal{M}_{T+1}} \hspace{-3pt} \Abs{\eta^{(T+1)}_{l,t,m}}^2 \\ = & \sum_{l,t} \sum_{m \in \mathcal{M}_{T+1}} \left[ \sum_{k,s} \Big( \widetilde{\eta}^{(T+1)}_{k,s,m} \Big)^{\ast} \Big( u_{l,t,k,s} \Big)^{\ast} \right] \hspace{-3.5pt} \left[ \sum_{k',s'} \widetilde{\eta}^{(T+1)}_{k',s',m} u_{l,t,k',s'} \right] \\ = & \sum_{k,s,k',s'} \sum_{m \in \mathcal{M}_{T+1}} \Big( \widetilde{\eta}^{(T+1)}_{k,s,m} \Big)^{\ast} \left[ \sum_{l,t} \Big( u_{l,t,k,s} \Big)^{\ast} u_{l,t,k',s'} \right] \widetilde{\eta}^{(T+1)}_{k',s',m} \\ = & \sum_{k,s} \sum_{m \in \mathcal{M}_{T+1}} \Abs{ \widetilde{\eta}^{(T+1)}_{k,s,m} }^2 = 1 , \end{split} \end{equation*} where we used that $U_{T+1}$ is unitary. This completes the proof. \end{proof} We can use Theorem \ref{theo:trig-poly-ap} to get explicit formulas for the probability of measuring a certain state. 
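Theorem \ref{theo:trig-poly-ap} lends itself to a direct numerical check: for constant $q$ the matrix $M_q = A + qI$ has $q$-independent eigenvectors, so the computational-basis amplitudes of the final state must be trigonometric polynomials with frequencies in $\mathcal{M}_T$, with squared coefficients summing to one as in (\ref{eqn:eta-norm}). The sketch below (NumPy; the random unitaries, the powers $p_1 = 1$, $p_2 = 2$, and the control-bit convention are illustrative choices) fits simulated amplitudes against the basis $e^{\frac{i}{2} m q}$, $m \in \mathcal{M}_T$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 3, 2                                   # 3 grid points, 2 control qubits
dim = 2 ** c * n
powers, controls = [1, 2], [1, 2]             # powers p_j and control-bit indices l_j

def M_q(q):
    lap = (n + 1) ** 2 * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    return lap + q * np.eye(n)

def random_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

Us = [random_unitary(dim) for _ in range(len(powers) + 1)]
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

def power_query(q, l, p):
    """W_l^p(exp(i M_q / 2)): apply exp(i p M_q / 2) iff control bit l is set."""
    lam, V = np.linalg.eigh(M_q(q))
    expM = V @ np.diag(np.exp(0.5j * p * lam)) @ V.conj().T
    W = np.zeros((dim, dim), dtype=complex)
    for x in range(2 ** c):                   # convention: bit l-1 of x is x_l
        block = expM if (x >> (l - 1)) & 1 else np.eye(n)
        W[x * n:(x + 1) * n, x * n:(x + 1) * n] = block
    return W

def final_state(q):
    psi = Us[0] @ psi0
    for U, l, p in zip(Us[1:], controls, powers):
        psi = U @ power_query(q, l, p) @ psi
    return psi

M_T = [0, 1, 2, 3]                            # {0} -> {0,1} -> {0,1,2,3} by (M-recursive)
qs = np.linspace(0.0, 0.99, 25)
Phi = np.exp(0.5j * np.outer(qs, M_T))        # basis functions e^{i m q / 2}
states = np.array([final_state(q) for q in qs])
eta = np.linalg.lstsq(Phi, states, rcond=None)[0]
assert np.allclose(Phi @ eta, states, atol=1e-8)   # amplitudes are trig polynomials in q
assert np.isclose((np.abs(eta) ** 2).sum(), 1.0)   # normalization (eta-norm)
print("Theorem verified for this random instance")
```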
\begin{lemm}\label{lemm:pbc} Let $\mathcal{A}$ be a $T$ power query quantum algorithm for the Sturm-Liouville problem with powers $p_1, \ldots, p_T$ and $c \in \mathbb{N}$ control bits as defined in Definition \ref{defi:sl-power-query-algo}. Let $\mathcal{B}$ be a partition of the set of all basis vectors, i.e. \begin{equation*} \bigcup_{B \in \mathcal{B}} B = \Set{ \Ket{k} \, : \, k = 0, 1, \ldots, n 2^c -1 } \text{ and } B \cap C = \emptyset \text{ for } B, C \in \mathcal{B},\ B \neq C . \end{equation*} If the input $q \in \mathbf{Q}$ is a constant function $q(x) = q \in \intervalco{0}{1}$, the probability of measuring a state $\Ket{k}$ from $B \in \mathcal{B}$ is a trigonometric polynomial \begin{equation}\label{eqn:pbc} p_B (q) = \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{B,l} e^{ \tfrac{i}{2} l q} , \end{equation} with coefficients $\beta^{(T)}_{B,l} \in \mathbb{C}$ that are bounded by \begin{equation*} \sum_{B \in \mathcal{B}} \abs{\beta^{(T)}_{B,l}} \leq 1 \end{equation*} for all possible partitions $\mathcal{B}$, and the set $\mathcal{L}_T$ is given by $\mathcal{L}_0 = \Set{0}$ and \begin{equation}\label{eqn:L_T} \mathcal{L}_{T+1} = \bigcup_{ l \in \mathcal{L}_T } \Set{ l, l+p_{T+1}, l-p_{T+1} } . \end{equation} \end{lemm} \begin{proof} Consider quantum queries $\exp (\tfrac{i}{2} M_q)$ for constant functions $q(x) = q \in \intervalco{0}{1}$ in the Sturm-Liouville problem. From equations (\ref{eqn:psiTMq}), (\ref{eqn:sktq}) we know that the final state of every $T$ power query algorithm can be written as \begin{equation*} \Ket{\psi^{(T)}(\exp(\tfrac{i}{2} M_q))} = \sum_{k} \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,m} e^{ \tfrac{i}{2} m q } \Ket{k}. \end{equation*} Let $\mathcal{B}$ be a partition of the set of all basis states $\Ket{k}$. 
Thus the probability to measure a state from the set $B \in \mathcal{B}$ is \begin{equation*} \begin{split} p_B (q) = & \sum_{k \in B} \Abs{ \sum_{m \in \mathcal{M}_{T}} \eta^{(T)}_{k,m} e^{ \tfrac{i}{2} m q } }^2 \\ = & \sum_{k \in B} \left[ \sum_{m_1 \in \mathcal{M}_{T}} \left( \eta^{(T)}_{k,m_1} \right)^{\ast} e^{- \tfrac{i}{2} m_1 q } \right] \left[ \sum_{m_2 \in \mathcal{M}_{T}} \eta^{(T)}_{k,m_2} e^{ \tfrac{i}{2} m_2 q } \right] \\ = & \sum_{k \in B} \sum_{m_1,m_2 \in \mathcal{M}_{T}} \left(\eta^{(T)}_{k,m_1}\right)^{\ast} \eta^{(T)}_{k,m_2} e^{ \tfrac{i}{2} (m_2-m_1) q } \\ =: & \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{B,l} e^{ \tfrac{i}{2} l q } , \end{split} \end{equation*} with coefficients $\beta^{(T)}_{B,l}$ defined as \begin{equation}\label{eqn:beta-def} \beta^{(T)}_{B,l} := \sum_{k \in B} \sum_{\substack{m_1, m_2 \in \mathcal{M}_{T}\\m_2 - m_1 = l}} \left( \eta^{(T)}_{k,m_1} \right)^{\ast} \eta^{(T)}_{k,m_2} , \end{equation} and the set $\mathcal{L}_T$ is given by \begin{equation}\label{eqn:L_Tv2} \mathcal{L}_T = \Set{m_1 - m_2 \, : \, m_1, m_2 \in \mathcal{M}_T }. \end{equation} For any partition $\mathcal{B}$ we can now bound the $\beta^{(T)}_{B,l}$ as follows \begin{equation*} \begin{split} \sum_{B \in \mathcal{B}} \Abs{\beta^{(T)}_{B,l}} = & \sum_{B \in \mathcal{B}} \bigg| \sum_{k \in B} \sum_{\substack{m_1, m_2 \in \mathcal{M}_{T}\\m_2 - m_1 = l}} \left( \eta^{(T)}_{k,m_1} \right)^{\ast} \eta^{(T)}_{k,m_2} \bigg| \\ \leq & \sum_k \sum_{\substack{m_1, m_2 \in \mathcal{M}_{T}\\m_2 - m_1 = l}} \Abs{ \eta^{(T)}_{k,m_1} \eta^{(T)}_{k,m_2} } , \end{split} \end{equation*} where $\sum_k$ is the sum over all possible states $\Ket{k}$. 
From (\ref{eqn:eta-norm}) we now derive by the Cauchy-Schwarz inequality \begin{equation*} \begin{split} & \sum_k \sum_{\substack{m_1, m_2 \in \mathcal{M}_{T}\\m_2 - m_1 = l}} \Abs{ \eta^{(T)}_{k,m_1} \eta^{(T)}_{k,m_2} } = \sum_k \sum_{m : m, m+l \in \mathcal{M}_{T}} \Abs{ \eta^{(T)}_{k,m} \eta^{(T)}_{k,m+l} } \\ \leq & \sum_k \left( \sum_{m \in \mathcal{M}_{T}} \Abs{\eta^{(T)}_{k,m}}^2 \right)^{1/2} \left( \sum_{m + l \in \mathcal{M}_{T}} \Abs{\eta^{(T)}_{k,m+l}}^2 \right)^{1/2} \leq \sum_k \sum_{m \in \mathcal{M}_{T}} \Abs{\eta^{(T)}_{k,m}}^2 \leq 1 . \end{split} \end{equation*} It remains to show that the two definitions of $\mathcal{L}_T$ in equations (\ref{eqn:L_T}) and (\ref{eqn:L_Tv2}) are identical. The proof is by induction. The case $T=0$ is trivially true. We use the definition (\ref{eqn:M-recursive}) of $\mathcal{M}_T$ to see that \begin{equation*} \begin{split} \mathcal{L}_{T+1} = & \Set{ m_1 - m_2 \, : \, m_1, m_2 \in \mathcal{M}_{T+1} } \\ = & \big\{ m_1 - m_2, m_1 + p_{T+1} - m_2, m_1 - m_2 - p_{T+1}, \\ & \ \ \ \ \ \ m_1 + p_{T+1} - m_2 - p_{T+1}\, : \, m_1, m_2 \in \mathcal{M}_{T} \big\} \\ = & \Set{ l, l+p_{T+1}, l-p_{T+1} \, : \, l \in \mathcal{L}_T } , \end{split} \end{equation*} which completes the proof. \end{proof} Note that $\Abs{\mathcal{L}_T} \leq 3^T$. This bound is sharp, since for the choice of $p_i = 3^{i-1}$ we have $\mathcal{L}_0 = \Set{0}$, $\mathcal{L}_1 = \Set{-1, 0, 1}$, $\mathcal{L}_2 = \Set{-4, -3, -2, \ldots, 3, 4}$ and in general \begin{equation*} \mathcal{L}_T = \Set{-3^{T-1} - 3^{T-2} - \ldots - 1, \ldots, 3^{T-1} + 3^{T-2} + \ldots + 1} = \Set{-\tfrac{3^T-1}{2}, \ldots, \tfrac{3^T-1}{2}}. \end{equation*} \subsection{Fourier Analysis of Power Query Algorithms} With Theorem \ref{theo:trig-poly-ap} and Lemma \ref{lemm:pbc} we have the tools needed to provide a lower bound for the Sturm-Liouville problem. We are now able to apply our frequency analysis technique to this problem.
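Before turning to the lower bound itself, the recursions (\ref{eqn:M-recursive}) and (\ref{eqn:L_T}), their equivalence with the difference-set definition (\ref{eqn:L_Tv2}), and the sharpness of $\Abs{\mathcal{L}_T} \leq 3^T$ for $p_i = 3^{i-1}$ can be confirmed with a few lines of code; a sketch (plain Python, function name illustrative):

```python
def frequency_sets(powers):
    """M_T via the recursion (M-recursive) and L_T via (L_T) for a power sequence."""
    M, L = {0}, {0}
    for p in powers:
        M = M | {m + p for m in M}
        L = {l + d for l in L for d in (0, p, -p)}
    return M, L

T = 4
M, L = frequency_sets([3 ** i for i in range(T)])  # p_i = 3^{i-1}
assert L == {m1 - m2 for m1 in M for m2 in M}      # the two definitions of L_T agree
assert len(L) == 3 ** T                            # |L_T| <= 3^T is attained
print(min(L), max(L))                              # -40 40, i.e. +-(3^3 + 3^2 + 3 + 1)
```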
\begin{theo}\label{theo:lower-bound-cap} Any quantum algorithm $\mathcal{A}$ with $T$ power queries which estimates the smallest eigenvalue $\lambda(q)$ in the Sturm-Liouville eigenvalue problem for all inputs $q(x) = q \in \intervalco{0}{1}$ with precision $e(\mathcal{A}, T) \leq \varepsilon$ and probability greater than $3/4$ has to use $T = \Omega ( \log (1/\varepsilon) )$ power queries. \end{theo} Notice that a lower bound on the ``easy'' subset of constant functions $q(x)=q$ implies that the same lower bound holds for any set of inputs that includes the constant functions, hence it also holds for the class $\mathbf{Q}$. We also would like to remark that the lower bound $T = \Omega ( \log (1/\varepsilon) )$ does not depend on the number of discretization points $n$. \begin{proof} After $T$ power queries we measure the final state and receive a state $\Ket{k}$ with probability $p_k (q)$. From the integer $k$ we classically compute a solution $\widetilde{\lambda}(k)$. A successful algorithm has to return an $\varepsilon$-approximation for every $q \in \intervalco{0}{1}$ with probability \begin{equation*} \sum_{ k : \Abs{ \lambda(q) - \widetilde{\lambda}(k) } \leq \varepsilon} p_{k}(q) \geq \frac{3}{4}, \end{equation*} see Definition \ref{defi:sl-power-query-algo}. Define \begin{equation*} A_{q, \varepsilon} := \set{ k : \abs{ \lambda(q) - \widetilde{\lambda}(k) } \leq \varepsilon} \end{equation*} as the set of states that are mapped to $\varepsilon$-correct answers for input $q$. Choose $N \in \mathbb{N}$ such that $\frac{1}{N}$ is slightly bigger than $2 \varepsilon$, i.e., $ \frac{1}{N+1} \leq 2 \varepsilon < \frac{1}{N} $ and define the points $x_r := (r+1/2)/N$ for $r=0,1, \ldots,N-1$. For the inputs $q=x_r$ we can visualize the quantum algorithm $\mathcal{A}$ as in Figure \ref{fig:algomapping}. 
\begin{figure}[!tbh] \begin{center} \ifx\pdfoutput\undefined% \includegraphics[angle=270,width=\columnwidth]{algomapping_cropped}% \else% \includegraphics[width=\columnwidth]{algomapping_cropped}% \fi% \caption{A quantum algorithm for the Sturm-Liouville problem with inputs $q=x_r$, $r=0, \dots, N-1$, will result in a probability distribution $p_k(q)$ on the states $\Ket{k}$ that are measured. Each state $\Ket{k}$ is mapped to an answer $\widetilde{\lambda}(k)$. We write $A_{x_r,\varepsilon}$ for the set of all states $\Ket{k}$ that are mapped to $\varepsilon$-approximations of $\lambda(x_r)$. \label{fig:algomapping}}% \end{center}% \end{figure} Notice that the sets $A_{x_r, \varepsilon}$ are mutually disjoint for $r=0, \ldots, N-1$, because $x_r$ and $x_{r+1}$ are chosen such that \begin{equation*} \Abs{\lambda(x_r) - \lambda(x_{r+1})} = \Abs{ \pi^2 + \tfrac{r+\frac{1}{2}}{N} - \pi^2 - \tfrac{r+1+\frac{1}{2}}{N} } = \frac{1}{N} > 2 \varepsilon . \end{equation*} Therefore there can be no state $\Ket{k}$ that is mapped to an output $\widetilde{\lambda}(k)$ which is an $\varepsilon$-approximation to both $\lambda(x_r)$ and $\lambda(x_{r+1})$ at the same time. Let \begin{equation} \label{eqn:preps} p_{r,\varepsilon}(q) = \sum_{k \in A_{x_r,\varepsilon}} p_k(q) \end{equation} be the probability of measuring an $\varepsilon$-appro\-xi\-ma\-tion to $\lambda(x_r)$. Since the disjoint sets $A_{x_r, \varepsilon}$ can be completed to a partition of the set of all outputs, Lemma \ref{lemm:pbc} allows us to write \begin{equation*} p_{r,\varepsilon} (q) = p_{A_{x_r,\varepsilon}} (q) = \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{r,\varepsilon,l} e^{ \tfrac{i}{2} l q }.
\end{equation*} We apply the $N$-point inverse discrete Fourier transform to $p_{r,\varepsilon}$, evaluated at the points $x_n$, and get the following value at $k = 0, 1, \ldots, N-1$: \begin{equation}\label{eqn:fourier-beta} \begin{split} DFT_N[p_{r,\varepsilon}](k) = & \sum_{n=0}^{N-1} p_{r,\varepsilon} (x_n) e^{ - 2 \pi i k n/N } \\ = & \sum_{n=0}^{N-1} \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{r,\varepsilon,l} e^{ \tfrac{i}{2} l (n+1/2)/N } e^{ - 2 \pi i k n/N } \\ = & \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{r,\varepsilon,l} e^{ \tfrac{i}{2} l/(2N) } \sum_{n=0}^{N-1} e^{ 2 \pi i (\frac{l}{4 \pi} - k) n/N } \\ = & \sum_{l \in \mathcal{L}_{T}} \beta^{(T)}_{r,\varepsilon,l} e^{ \tfrac{i}{2} l/(2N) } \left\{ \begin{array}{ll} \frac{e^{2 \pi i (\frac{l}{4 \pi} - k)} - 1}{e^{2 \pi i (\frac{l}{4 \pi} - k) / N} - 1} \hspace{-3mm} & , \frac{l}{4 \pi} \not\equiv k \hspace{-3mm} \pmod{N} \\ N & , \frac{l}{4 \pi} \equiv k\hspace{-3mm} \pmod{N} \end{array} \right\} \end{split} \end{equation} where $\frac{l}{4 \pi} \equiv k \pmod{N}$ indicates that there exists an integer $z$ such that $\frac{l}{4 \pi} = k + z N$. For every $l$ define $l_{/4 \pi (N)} \in \intervalco{0}{N}$ as \begin{equation*} l_{/4 \pi (N)} := \min \Set{ l/(4 \pi) - z N \, : \, z \in \mathbb{Z} \text{ and } l/(4 \pi) - z N \geq 0}. \end{equation*} Then $\exp(2 \pi i \frac{l}{4 \pi}/N) = \exp(2 \pi i l_{/4 \pi (N)} / N)$.
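The closed form (\ref{eqn:fourier-beta}) is easy to verify numerically; the sketch below (NumPy; the frequency set and coefficients are arbitrary illustrative values, with $l = 0$ omitted so that the geometric-series branch always applies) compares the direct sum with the closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
freqs = np.array([-4, -1, 2, 7])                 # hypothetical nonzero frequencies l
beta = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)

x = (np.arange(N) + 0.5) / N                     # the points x_n = (n + 1/2)/N
p = np.array([np.sum(beta * np.exp(0.5j * freqs * xn)) for xn in x])

for k in range(N):
    direct = np.sum(p * np.exp(-2j * np.pi * k * np.arange(N) / N))
    theta = freqs / (4 * np.pi) - k              # l/(4 pi) - k, never a multiple of N here
    closed = np.sum(beta * np.exp(0.5j * freqs / (2 * N))
                    * (np.exp(2j * np.pi * theta) - 1) / (np.exp(2j * np.pi * theta / N) - 1))
    assert abs(direct - closed) < 1e-8
print("closed form (fourier-beta) verified")
```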
To take the absolute value of equation (\ref{eqn:fourier-beta}), we use $ \Abs{e^{i 2 \theta} - 1} = 2 \Abs{\sin (\theta)} $ and get \begin{multline}\label{eqn:fourier-beta-abs} \Abs{DFT_N[p_{r,\varepsilon}](k)} \leq \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \left\{ \begin{array}{ll} \frac{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)) } }{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)/N) } } & , l_{/4 \pi (N)} \neq k \\ N & , l_{/4 \pi (N)} = k \end{array} \right\} \end{multline} We can bound the Fourier transform (\ref{eqn:fourier-beta}) by separating the correct answers, i.e., the $\varepsilon$-approximations to $\lambda(x_r)$, from the rest: if the input is $q = x_r$, then the algorithm has to return an answer $\widetilde{\lambda}$ that is $\varepsilon$-close to the correct answer $\lambda(x_r)$ with probability greater than or equal to $3/4$. This probability is given by $p_{r,\varepsilon} (q)$, i.e., we demand that $p_{r,\varepsilon}(x_r) \geq 3/4$. Then: \begin{equation}\label{eqn:fourier-lower} \begin{split} \Big| \sum_{n=0}^{N-1} p_{r,\varepsilon}( x_n ) e^{- 2 \pi i k n/N } \Big| \geq & \Big| p_{r,\varepsilon}( x_r ) \Big| - \sum_{\substack{n=0\\n \neq r}}^{N-1} \Big| p_{r,\varepsilon}( x_n ) \Big| \\ \geq & \frac{3}{4} - \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) . \end{split} \end{equation} Consider the second term in (\ref{eqn:fourier-lower}), $ \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) $. Recall that $p_{r,\varepsilon} (q)$ is the probability that the algorithm measures a state $\Ket{k}$ that is mapped to an answer $\widetilde{\lambda}(k)$ that is an $\varepsilon$-approximation to $\lambda(x_r)$, i.e., $\Ket{k} \in A_{x_r,\varepsilon}$, see (\ref{eqn:preps}). This probability $p_{r,\varepsilon}(q)$ depends on the actual input $q$.
For input $q = x_n \neq x_r$ a state $\Ket{k} \in A_{x_r,\varepsilon}$ will \emph{not} yield an $\varepsilon$-correct answer: we chose the $x_n$, $n=0, \ldots, N-1$, such that $\Abs{ \lambda(x_n) - \lambda(x_r) } > 2 \varepsilon$ for $n \neq r$, and thus there cannot be an $\varepsilon$-close answer for both $x_r$ and $x_n$. The sum $ \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) $ now tells us how often the algorithm chooses a state from $A_{x_r,\varepsilon}$. If we knew that none of the wrong answers is preferred by our algorithm, say, e.g., $ \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) < \frac{1}{2} $, equation (\ref{eqn:fourier-lower}) would read \begin{equation}\label{eqn:fourier-lower-bound} \Big| \sum_{n=0}^{N-1} p_{r,\varepsilon}( x_n ) e^{- \frac{2 \pi i k n}{N} } \Big| \geq \frac{3}{4} - \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) > \frac{1}{4} . \end{equation} We will show that this property has to be true for some $r=0, \ldots, N-1$, indexing the set of states $A_{x_r,\varepsilon}$ that represents numbers $\varepsilon$-close to $x_r$. Let $R^<$ be the set of all $r$ for which $ \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon}( x_n ) < \frac{1}{2} $ holds and $R^\geq$ the set for which it does not.
We estimate the number of elements of $R^<$ by splitting \begin{equation*} N = \sum_{n = 0}^{N-1} 1 \geq \sum_{n = 0}^{N-1} \sum_{r = 0}^{N-1} p_{r,\varepsilon} (x_n) = \sum_{r = 0}^{N-1} p_{r,\varepsilon} (x_r) + \sum_{r = 0}^{N-1} \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon} (x_n) \end{equation*} into the following parts: \begin{alignat*}{3} N & \geq \sum_{r = 0}^{N-1} p_{r,\varepsilon} (x_r) && + \sum_{r \in R^<} \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon} (x_n) && + \sum_{r \in R^\geq} \sum_{\substack{n=0\\n \neq r}}^{N-1} p_{r,\varepsilon} (x_n) \\ & \geq N \frac{3}{4} && + \Abs{R^<} \cdot 0 && + \Abs{R^\geq} \frac{1}{2} \end{alignat*} and therefore we can conclude that $ \Abs{R^\geq} \leq \frac{1}{2} N $ and thus $ \Abs{R^<} \geq \frac{1}{2} N $. Now $\Abs{R^<} > 0$ implies that we can actually choose an element $r \in R^<$. Fix such an $r$; we can then combine equations (\ref{eqn:fourier-beta-abs}) and (\ref{eqn:fourier-lower-bound}) to \begin{equation}\label{eqn:fourier-final} 1/4 < \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \left\{ \begin{array}{ll} \frac{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)) } }{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)/N) } } & , l_{/4 \pi (N)} \neq k \\ N & , l_{/4 \pi (N)} = k \end{array} \right\} . \end{equation} We will now fix the parameter $k=0,1,\ldots,N-1$ in inequality (\ref{eqn:fourier-final}) in such a way that the terms in the sum on the right-hand-side of the inequality are as small as possible. This will imply that the sum must run over a large number of elements, i.e., that $\Abs{\mathcal{L}_T}$ is large. Since $\Abs{\mathcal{L}_T} \leq 3^T$, it suffices to exhibit an $\alpha > 0$ such that $\Abs{\mathcal{L}_T}^{\alpha} = \Omega ( N )$ in order to conclude that $T = \Omega( \log N)$. More specifically, we will show that $\Abs{\mathcal{L}_T}^2 \geq \frac{1}{10} N$, which proves $T = \Omega( \log N)$.
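The averaging argument above can be illustrated numerically: on any synthetic table of success probabilities with diagonal entries at least $3/4$ and column sums at most $1$ (the construction below is ours and purely illustrative), at most half of the rows can have off-diagonal sum at least $1/2$:

```python
import random

random.seed(0)
N = 40
# Success probabilities p[r][n] ~ p_{r,eps}(x_n): diagonal entries are at
# least 3/4 (success on input x_r) and each column sums to at most 1
# (for a fixed input, the measurement outcomes form a distribution).
p = [[0.0] * N for _ in range(N)]
for n in range(N):
    p[n][n] = 0.75 + 0.25 * random.random()
    rest = 1.0 - p[n][n]
    for r in range(N):
        if r != n:
            p[r][n] = random.random() * rest / (N - 1)  # stay within column budget
# R^>= collects rows whose off-diagonal sum is at least 1/2; the counting
# argument forces |R^>=| <= N/2, hence |R^<| >= N/2.
R_geq = sum(1 for r in range(N)
            if sum(p[r][n] for n in range(N) if n != r) >= 0.5)
assert R_geq <= N / 2
assert N - R_geq >= N / 2
```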
We prove $\Abs{\mathcal{L}_T}^2 \geq \frac{1}{10} N$ by contradiction. Assume $\Abs{\mathcal{L}_T}^2 < \frac{1}{10} N$. This assumption allows us to find a $k$ such that the right-hand-side of inequality (\ref{eqn:fourier-final}) is smaller than the left-hand-side, which will lead to our desired contradiction. If we project $\mathcal{L}_T$ into the interval $\intervalco{0}{N}$ through $l \mapsto l_{/4 \pi (N)}$ we will get a set $\Set{l_{/4 \pi (N)} \, : \, l \in \mathcal{L}_T}$. Order this set as $0 \leq t_1 \leq t_2 \leq \ldots \leq t_{\Abs{\mathcal{L}_T}} < N$. This defines ``gaps'' between these numbers, i.e., intervals $G=\intervaloo{t_j}{t_{j+1}}$ for $j=1, \ldots, \Abs{\mathcal{L}_T}$ if we define $t_{\Abs{\mathcal{L}_T}+1} = t_1+N$ (we ``wrap around''). Define the width $w(G)$ of such a gap $G$ as the distance between its endpoints. Thus $w(\intervaloo{t_j}{t_{j+1}}) = t_{j+1}-t_j$. Let $G_m$ be the gap of maximal width $w(G_m)$. Its width must be $w(G_m) \geq N / \Abs{\mathcal{L}_T}$, since \begin{equation*} N = \sum_G w(G) \leq \sum_G \max_{G'} w(G') = \Abs{\mathcal{L}_T} \max_G w(G). \end{equation*} Additionally $w(G_m) > 10$, since we assumed $\Abs{\mathcal{L}_T}^2 < \frac{1}{10} N$ and therefore $ \frac{N}{\Abs{\mathcal{L}_T}} > 10 \Abs{\mathcal{L}_T} \geq 10. $ Thus there are at least ten integers $k \in \Set{0, 1, \ldots, N-1}$ that fall into this largest gap $G_m$, i.e.\ $k \in G_m$. One of these $k$ has maximum distance to \emph{both} boundaries $t_j$ and $t_{j+1}$ of $G_m$: it is the $k$ that is closest to the middle $m = \frac{t_{j+1}+t_j}{2}$ of $G_m = \intervaloo{t_j}{t_{j+1}}$. This integer $k$ fulfills $\Abs{k-m} \leq \frac{1}{2}$ and \begin{equation*} \begin{split} \min \Set{ k-t_j, t_{j+1}-k} & = \min \Set{ m-t_j + k-m, m-k + t_{j+1}-m} \\ & \geq \frac{w(G_m)}{2} - \frac{1}{2} \geq \frac{N}{2 \Abs{\mathcal{L}_T}} - \frac{1}{2}. \end{split} \end{equation*} Fix this $k \in \intervaloo{t_j}{t_{j+1}}$.
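The pigeonhole bound on the widest gap is easy to check numerically; the sample points below are arbitrary stand-ins for the projected values $l_{/4 \pi (N)}$:

```python
def max_circular_gap(points, N):
    # Width of the widest gap between consecutive points on the circle [0, N),
    # including the wrap-around gap t_1 + N - t_{|L_T|}.
    ts = sorted(points)
    gaps = [b - a for a, b in zip(ts, ts[1:])] + [ts[0] + N - ts[-1]]
    return max(gaps)

N = 1000
projected = [3.0, 110.5, 400.25, 401.0, 977.5]  # stand-ins for the l_{/4 pi (N)}
# The widths of all gaps sum to N, so the widest one is at least N/|L_T|.
assert max_circular_gap(projected, N) >= N / len(projected)
```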
Now $\Abs{\sin(x)} \geq 2/\pi \Abs{x}$ for $-\pi/2 \leq x \leq \pi/2$ and therefore \begin{equation*} \begin{split} \min_{l \in \mathcal{L}_T} \Abs{\sin \frac{\pi (l_{/4 \pi (N)} - k)}{N}} \geq & \min_{l \in \mathcal{L}_T} \frac{2}{N} \Abs{l_{/4 \pi (N)} - k} = \frac{2}{N} \min \Set{ k-t_j, t_{j+1}-k} \\ \geq & \frac{1}{\Abs{\mathcal{L}_T}} - \frac{1}{N}. \end{split} \end{equation*} Then we can use this to estimate (\ref{eqn:fourier-final}): \begin{equation*} 1/4 < \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \frac{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)) } }{ \Abs{ \sin (\pi (l_{/4 \pi (N)} - k)/N) } } \leq \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \frac{1}{1/\Abs{\mathcal{L}_T} - 1/N} \end{equation*} We sum the last inequality over all $r \in R^<$ for which it is valid, and get: \begin{equation}\label{eqn:last-inequality} \sum_{r \in R^<} \frac{1}{4} \leq \frac{1}{1/\Abs{\mathcal{L}_T} - 1/N} \sum_{r \in R^<} \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \end{equation} Since the number of elements in $R^<$ satisfies $\Abs{R^<} \geq \frac{1}{2} N$, the left-hand-side of (\ref{eqn:last-inequality}) is bounded from below by $\Abs{R^<} \frac{1}{4} \geq \frac{1}{8} N $. The right-hand-side of inequality (\ref{eqn:last-inequality}) can be bounded through Lemma \ref{lemm:pbc}: \begin{equation*} \sum_{r \in R^<} \sum_{l \in \mathcal{L}_{T}} \Abs{\beta^{(T)}_{r,\varepsilon,l}} \leq \Abs{\mathcal{L}_T} . \end{equation*} If we put both sides together again and recall that we assumed $\Abs{\mathcal{L}_T}^2 < \frac{1}{10} N$ we get \begin{equation*} \frac{1}{8} N \leq \frac{\Abs{\mathcal{L}_T}}{1/\Abs{\mathcal{L}_T} - 1/N} = \frac{\Abs{\mathcal{L}_T}^2}{1 - \Abs{\mathcal{L}_T} / N} < \frac{\frac{1}{10} N}{1 - \frac{1}{10 \Abs{\mathcal{L}_T}}} \leq \frac{\frac{1}{10} N}{1 - \frac{1}{10}} = \frac{1}{9} N, \end{equation*} which is a contradiction. Therefore $\Abs{\mathcal{L}_T}^2 \geq \frac{1}{10} N$ must hold.
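As a numerical illustration of the closing contradiction (not a replacement for the argument above), one can check exhaustively that $\Abs{\mathcal{L}_T}^2 < \frac{1}{10} N$ forces $\Abs{\mathcal{L}_T}^2/(1 - \Abs{\mathcal{L}_T}/N) < \frac{1}{9} N$, which is incompatible with the lower bound $\frac{1}{8} N$:

```python
# For a fixed N, run over every L with L^2 < N/10 (the assumption to be
# contradicted) and check that L^2 / (1 - L/N) stays strictly below N/9,
# so it can never reach N/8.
N = 10_000
L = 1
while L * L < N / 10:
    assert L ** 2 / (1 - L / N) < N / 9
    L += 1
```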
This, together with $\Abs{\mathcal{L}_T} \leq 3^T$, leads us to $ N \leq 10 \cdot 9^T $. Taking the logarithm, we get $T = \Omega(\log N)$. We chose $N$ such that $ \frac{1}{N+1} \leq 2 \varepsilon < \frac{1}{N} $ which finally proves that the number of power queries $T$ for any algorithm $\mathcal{A}$ with error $e(\mathcal{A},T) \leq \varepsilon$ has to be of the order $T = \Omega(\log (1/\varepsilon) )$. \end{proof} \section{Discussion} In this paper we have proven lower bounds for the number of quantum power queries for the Sturm-Liouville problem and settled an open problem in \cite{pap-woz-05}. How does this number $T=\Theta(\log(1/\varepsilon))$ of power queries relate to the cost of quantum algorithms? Here we understand ``cost'' as an abstraction of the number of elementary quantum gates or the duration for which a Hamiltonian has to be applied to a quantum system. Suppose the function $q$ is from a class $\mathbf{Q}' \subseteq \mathbf{Q}$ where each power query $W_l^p ( \exp(\tfrac{i}{2} M_q) )$ can be implemented with $\text{cost}(W_l^p ( \exp(\tfrac{i}{2} M_q) )) = \text{cost}(\mathbf{Q}',p)$. If we implement $W_l^p ( \exp(\tfrac{i}{2} M_q) )$ naively as $$W_l^p ( \exp(\tfrac{i}{2} M_q) ) = \left( W_l^1 ( \exp(\tfrac{i}{2} M_q) ) \right)^p, $$ then $\text{cost}(\mathbf{Q}',p) = p \cdot \text{cost}(\mathbf{Q}')$ and the cost of the Sturm-Liouville algorithm with $T = \Theta(\log(1/\varepsilon))$ power queries grows as \begin{equation*} \sum_{j=0}^{T-1} \text{cost}(\mathbf{Q}', 2^j) = \sum_{j=0}^{T-1} 2^j \cdot \text{cost}(\mathbf{Q}') = (2^T - 1) \cdot \text{cost}(\mathbf{Q}') = \Theta ( 1/\varepsilon ) \cdot \text{cost}(\mathbf{Q}') . \end{equation*} This is polynomial in $1/\varepsilon$ just like the Sturm-Liouville algorithm with bit queries discussed in \cite{pap-woz-05}.
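The geometric cost sum above amounts to the following elementary identity; the function name is ours:

```python
# Naive cost of T power queries with powers 2^0, ..., 2^{T-1}: implementing
# W^p as p repetitions of W gives a total of (2^T - 1) * cost(Q'), so with
# T = Theta(log(1/eps)) the total cost grows like 1/eps.
def naive_total_power(T):
    return sum(2 ** j for j in range(T))

for T in range(1, 25):
    assert naive_total_power(T) == 2 ** T - 1
```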
To take advantage of the proposed power query algorithm it is therefore necessary to realize power queries $W_l^p ( \exp(\tfrac{i}{2} M_q) )$ on a quantum computer in such a way that $\text{cost}(\mathbf{Q}',p) = o ( p ) \cdot \text{cost}(\mathbf{Q}')$. The implementation of power queries with cost that is not linear in the power $p$ of the query is still not settled and requires more work. It would be of interest to identify subclasses $\mathbf{Q}' \subseteq \mathbf{Q}$ for which we are able to prove $\text{cost}(\mathbf{Q}',p) = o ( p ) \cdot \text{cost}(\mathbf{Q}') $. Another open question is whether it is possible to extend the methods we used for upper and lower bounds for the Sturm-Liouville problem in one dimension to similar problems in higher dimensions. Most important for this problem is probably the extension of the results in \cite{jak-pap-03} on approximations of the eigenvector with the smallest eigenvalue to higher dimensions. \section{Acknowledgments} The author would like to thank M. Kwas, A. Papageorgiou, J. Traub and H. Wo{\'z}niakowski for inspiring discussions. Special thanks to an anonymous referee, who pointed out a gap in a previous version of this paper. Partial funding was provided by Columbia University through a Presidential Fellowship. This research was supported in part by the National Science Foundation and the Defense Advanced Research Projects Agency. \bibliographystyle{plain}
\section{Introduction} Over the last decade, numerous efforts have been invested in the digitization of fine art, yielding digital collections allowing the preservation and remote access to cultural heritage. Such collections, even when available online, can only be fruitfully browsed through the metadata associated with images. In recent years, several research teams have developed search engines dedicated to fine arts for different recognition tasks: Replica \cite{seguin_visual_2016} for visual similarity search or the Oxford Painting Search \cite{crowley_art_2016} for semantic recognition of arbitrary objects. Often, those search engines are based on convolutional neural networks (CNNs). Transfer learning from large-scale natural image datasets such as ImageNet, mostly by fine-tuning large pre-trained networks, has become a de facto standard for art analysis applications. Nevertheless, there are large differences in dataset sizes, image style and task specifications between natural images and the target artistic images, and there is little understanding of the effects of transfer learning in this context. In this work, we explore some properties of transfer learning for artistic images, by using both visualization techniques and quantitative studies. Visualization techniques make it possible to understand what the networks have learned on specific artistic datasets, by showing some of their internal representations or giving hints at what aspects of artistic images are important for their understanding. In particular, we will see that the networks can specialize some pre-trained filters in order to adapt them to the new image modality and also that the network can learn new, highly structured filters specific to artistic images from scratch. We also look at the set of maximal activation images for a given channel to complete our observations. Quantitative results can confirm some intuitive facts about the way networks are modified during fine-tuning.
To quantify the amount of change experienced by networks in different fine-tuning modalities, we rely on feature similarity and the $\ell_2$ distance between models. We also compute metrics (overlapping ratio and entropy) on the sets of maximal activation images to this end. Moreover, we experimentally show that fine-tuning first a pretrained ImageNet model on an intermediate artistic dataset may lead to better performance than a direct fine-tuning on the target small artistic dataset (for a different task). Let us emphasize that the goal of this work is not to provide state-of-the-art classification performances, but rather to investigate the way CNNs are modified by classical fine-tuning operations in the specific case of artwork images. \section{Related Work} Our analysis of the adaptation of a deep network to artistic databases uses already well-established tools and methods. In the following we describe these methods and list the relevant related works. \subsection{Deep Transfer Learning for Art Classification Problems} Transfer learning consists in adapting a model trained on a large image database (such as ImageNet \cite{russakovsky_imagenet_2014}) for a new task. This method is the de facto standard when faced with relatively small datasets and has proven its relevance in many works. Two main modalities are possible for transfer learning. The first consists in taking the penultimate output of the pre-trained network to make it the input of a simple classifier \cite{donahue_decaf_2013}. In the following, we refer to this approach as the \emph{off-the-shelf} method. The second option consists in \emph{fine-tuning} (FT) the pre-trained network for the new task \cite{girshick_rich_2014}. One can also argue that the bare architecture of a successful network is in itself a form of transfer learning, as this architecture has proven its relevance to the task of image classification. On bigger datasets, one can fine-tune the weights to adapt the network to a new task.
This approach is by far the most used one. For the domain of artistic images, it has been used for style classification~\cite{tan_ceci_2016,lecoutre_recognizing_2017,cetinic_finetuning_2018}, object recognition in drawings \cite{yin_object_2016} or iconographic characters \cite{madhu_recognizing_2019}, people detection across a variety of artworks \cite{westlake_detecting_2016}, visual link retrieval \cite{seguin_visual_2016}, author classification~\cite{vannoord_learning_2017,sabatelli_deep_2018} or several of those tasks at the same time \cite{bianco_multitask_2019}. More precisely, \cite{tan_ceci_2016} show that fine-tuning a CNN pretrained on ImageNet outperforms off-the-shelf and training from scratch strategies for style, genre or artist classification. In \cite{cetinic_finetuning_2018}, the authors evaluated the impact of domain-specific weight initialization and the kind of fine-tuning used (number of frozen layers for instance) for different art classification tasks. They compared pre-trainings on different natural image datasets and showed that the bigger the dataset (in terms of training images and number of labels), the better the results. A midway strategy between directly fine-tuning a pre-trained network and the mere use of the final network features, when the dataset is small, is to have a two-phase fine-tuning, the first one with a relatively large dataset of artworks and the second on the target dataset. This strategy was shown to be helpful in \cite{sabatelli_deep_2018}, using the Rijksmuseum dataset for the first fine-tuning. Their findings suggest that the double fine-tuned model focuses more on fine details to perform artist attribution. In this work, we will look at the two ways of fine-tuning and the various effects they have on what the network learns to adapt itself to artworks. When using a double fine-tuning, the middle dataset will always be the RASTA dataset (described below).
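As a toy illustration only (not the authors' code), the two transfer modalities can be sketched with a mock network modeled as a list of layers carrying a trainable flag; every name below is ours:

```python
# Toy model of the two transfer-learning modalities: a "network" is simply
# a list of named layers, each with a trainable flag.
class Layer:
    def __init__(self, name, trainable=True):
        self.name = name
        self.trainable = trainable

def pretrained_inception_like():
    return [Layer(n) for n in ("conv1", "mixed3", "mixed4", "mixed5", "logits")]

def off_the_shelf(net):
    # Freeze everything: the penultimate output feeds a separate classifier.
    for layer in net:
        layer.trainable = False
    return net

def fine_tune(net, n_frozen=0):
    # Replace the task-specific head and optionally freeze the first layers.
    net[-1] = Layer("new_head")
    for layer in net[:n_frozen]:
        layer.trainable = False
    return net

net = fine_tune(pretrained_inception_like(), n_frozen=2)
assert [l.trainable for l in net] == [False, False, True, True, True]
```

A two-phase fine-tuning would simply call `fine_tune` twice, once per intermediate and target dataset.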
We will also look at the transfer of the bare architecture, which means initializing the weights to random values. Intermediate strategies such as partial freezing of the network will also be studied. \subsection{Deep Convolutional Neural Network Understanding} The deep learning community has provided several tools for trying to better understand deep CNNs: feature visualization \cite{erhan_visualizing_2009,olah_feature_2017} and attribution \cite{simonyan_deep_2014}. Feature visualization answers questions about what a deep network is responding to in a dataset by generating examples that yield maximum activation. Nevertheless, to achieve visually good results and output results that are understandable by humans, it is necessary to add constraints \cite{olah_feature_2017} to the optimization, thus avoiding adversarial structured noise. Based on those works, several papers have proposed methodologies to determine how the different features contribute to a classification \cite{olah_building_2018} by combining feature visualization with attribution methods. Visualization of the optimized images also makes it possible to group the filters of the first layers of an InceptionV1 model into comprehensible groups \cite{olah_overview_2020}. Such works \cite{erhan_visualizing_2009,olah_overview_2020,olah_feature_2017} tend to show that a CNN learns meaningful features (for instance an eye detector or a dog-head detector) whereas others show that those networks are primarily detecting textures \cite{geirhos_imagenettrained_2018}. By looking at the channel responses, the authors of \cite{tan_ceci_2016} concluded that lower layers learn simple patterns and higher ones complex object parts, such as portrait shapes. In \cite{strezoski_plugandplay_2017}, the authors look at the feature visualizations and attributions of a small convolutional network trained on an artistic dataset.
Some of the characteristic patterns of the classes (such as the circular shape for the portrait class) can be found in the visualizations. In \cite{szabo_visualizing_2020}, the authors visualize the impact of the fine-tuning of a network on fine-grained datasets. They demonstrate various properties of the transfer learning process such as the speed and characteristics of adaptation, neuron reuse and spatial scale of the represented image features, on natural image datasets. Another way to understand the inner structure of networks is to compute feature similarity between different layers or different models. The recent work \cite{kornblith_similarity_2019} proposes to do this through Centered Kernel Alignment (CKA), a measure that we will use later in this work. \subsection{Datasets} Most artistic datasets only contain style or author metadata \cite{lecoutre_recognizing_2017,tan_ceci_2016} instead of depicted objects or iconographic elements. Some datasets are specific to a given class of objects such as persons in paintings \cite{westlake_detecting_2016} or to concepts that are specific to art history \cite{cetinic_learning_2019}. In~\cite{wilber_bam_2017}, an annotated database of 2.2M contemporary artworks from Behance (a website hosting portfolios of professional artists) is introduced, on which it is shown that fine-tuning improves recognition performances. The OmniArt dataset introduced in \cite{strezoski_omniart_2018} contains 1M historical artworks of 4 different types (from craft to paintings). Those two large-scale datasets are not openly accessible yet and no model pretrained on them has been shared with the community. For our experiments, we use three datasets which come from different research works. The first one contains the largest number of samples and comes from the WikiArt website. It contains 80,000 images tagged with one among 25 artistic styles \cite{lecoutre_recognizing_2017} and is named \emph{RASTA}.
Many other works referred to this dataset as the “WikiArt paintings” \cite{tan_ceci_2016}, but this variant contains only 25 classes instead of 27. Due to its size and large diversity, we will mainly use this dataset in the experimental section. The second one is the \emph{Paintings} Dataset introduced in \cite{crowley_search_2014}, made of 8629 British painting images with 10 different labels corresponding to common objects. The last dataset is the \emph{IconArt} dataset from \cite{gonthier_weakly_2018} composed of 5955 painting images from Wikicommons with 7 iconographic labels, for instance angel or the crucifixion of Jesus. These two datasets are designed for object classification, similarly to ImageNet. \section{Analyzing CNNs Trained for Art Classification Tasks} In this work, we investigate the effect of fine-tuning in the case of artistic images. In order to do so, we rely both on visualization techniques and quantification of the change the network undergoes. Our experimental results are organized in five sections. First, we consider an Inception V1 network~\cite{szegedy_going_2015} pre-trained on ImageNet and fine-tuned on RASTA for artistic style classification (\cref{subsec:ImageNetToArt}). Then we consider the same architecture with a random initialization (from scratch) trained on RASTA (\cref{subsec:FromScratch}). \cref{sec:perfo_RASTA,sec:CKA_l2_RASTA} are dedicated to the classification performance and the evaluation of the changes implied by the training on RASTA. Finally, we study the same architecture pre-trained on ImageNet and then fine-tuned first on RASTA and then on a smaller art dataset for object classification (\cref{subsec:ArtToArt}) to see how using an intermediate art dataset can help. \paragraph{Feature visualization} \label{par:featVizu} The first visualization technique we use consists in generating \emph{optimized images}, as introduced in \cite{olah_feature_2017}.
These images are obtained by maximizing the response to a given channel. The entire feature map at a given layer can be decomposed into two spatial dimensions and one dimension depending on the convolutional kernels. A \emph{channel} denotes one element according to this last dimension. We use the \href{https://github.com/tensorflow/lucid}{Lucid framework} for visualizing convolutional channels via activation maximization. We use Lucid's 2D FFT image representation with decorrelation and 2048 iterations of gradient ascent. \paragraph{Maximal activation images} We devise another indicator that might be useful for the analysis of the transformation that a network undergoes during its transfer to a different domain. This indicator is the evolution of the \emph{maximal activation images}. For a given channel, we compute the \emph{top 100} images in the target dataset that trigger it the most. We also compute the information \emph{entropy} over classes for each top 100 set, in order to evaluate the clustering power of the corresponding channel. The entropy is defined as $\frac{1}{maxE} \sum_{c \in classes} -p_c \log_2 (p_c)$ with $p_c$ the fraction of images in the top 100 belonging to the class $c$ and $maxE$ the maximal entropy with this number of classes. Moreover, the top 100 can be computed twice, once at the beginning and once at the end of the fine-tuning. The percentage of the images that lie in both sets is an indicator of how much the channel has drifted during its adaptation. These percentages are named \emph{overlapping ratio} in the following. They are, in many cases, much higher than what we would expect from a random reshuffling of the dataset. Besides, the combination of this indicator with the visualization technique from \cite{olah_feature_2017} leads to several findings that we will present thereafter. \paragraph{Experimental Setup:} All our visualization experiments use the InceptionV1 \cite{szegedy_going_2015} CNN (also called GoogLeNet).
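Before turning to the setup details, the two indicators defined above (class entropy of a top-100 set and the overlapping ratio between two such sets) can be sketched as follows; the paper leaves the normalization constant $maxE$ partly implicit, so the explicit `n_classes` argument below is our reading:

```python
import math
from collections import Counter

def normalized_entropy(labels, n_classes):
    # (1/maxE) * sum_c -p_c log2(p_c), with maxE = log2(n_classes) the
    # maximal entropy reachable with n_classes classes (our assumption).
    n = len(labels)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
    return h / math.log2(n_classes)

def overlapping_ratio(top_before, top_after):
    # Fraction of the top-100 images shared before and after fine-tuning.
    return len(set(top_before) & set(top_after)) / len(top_after)

assert normalized_entropy(["a"] * 100, 2) == 0.0                  # one class only
assert abs(normalized_entropy(["a", "b"] * 50, 2) - 1.0) < 1e-9   # maximally mixed
assert overlapping_ratio(list(range(100)), list(range(50, 150))) == 0.5
```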
InceptionV1 is a 22-layer network with only 7M parameters thanks to the introduction of Inception modules. The last layer of the network is replaced by a fully connected layer with the number of outputs corresponding to the dataset at hand, whose activation function is a softmax for RASTA and a sigmoid for the Paintings and IconArt datasets. The loss function is the usual cross-entropy in the first case, and the sum over the classes of binary cross-entropy in the two others. The InceptionV1 network is the classical and efficient choice for feature visualization by optimization \cite{olah_feature_2017} although it no longer produces the best classification performances. We ran experiments with various hyperparameters such as the learning rate for the last layer (classification layer), the learning rate for the transferred layers, the use of deep supervision, the maximum number of epochs or the possible use of random crops within the input image. The input size of the network is 224 $\times$ 224. For all experiments, we selected the model with the best loss value on the corresponding validation set. In the following sections, we analyze how the networks have been modified by fine-tuning processes. We present qualitative observations using optimized images and the maximal activation images, as well as quantitative evaluations relying on the $\ell_{2}$ norm of the difference between convolution kernels and the linear CKA measure~\cite{kornblith_similarity_2019}. \subsection{From Natural to Art Images} \label{subsec:ImageNetToArt} The first feature visualizations we report have been obtained by fine-tuning an InceptionV1 architecture pretrained on ImageNet on the RASTA classification dataset, with different sets of hyperparameters. \paragraph{Low-level layers are only slightly modified by the fine-tuning.} The first observation is that low-level layers from the original network trained on ImageNet are hardly modified by the new training on RASTA.
This fact will be confirmed by the CKA measure (see \cref{fig:linearCKA_plot}) and the overlapping ratio of the top 100 maximal activation images (see \cref{fig:100_Boxplots_per_layer}) in \cref{sec:CKA_l2_RASTA}. \paragraph{Mid-level layers adapt to the new dataset.} Some of the filters have been adapted to the specificities of the new dataset by the fine-tuning process, as illustrated in \cref{fig:AdaptationFiltersRASTA_BlueDrapery,fig:AdaptationFiltersRASTA_montagnes,fig:AdaptationFiltersRASTA_fronton}. These figures display, for some channels, the optimized images defined in \cref{par:featVizu}. The model learned a red and blue drapery detector, a blue mountain one and a house pediment one. It is worth mentioning that other channels are hardly modified by the fine-tuning process. First, among the 70k training samples, some maximal activation images are present in the top 100 both before and after fine-tuning. Those images are surrounded by a green line in the last row of \cref{fig:MidFiltersRASTA_Top100Im_And_Vizu}. Second, in those maximal activation images, we can recognize the pattern that emerged in the optimized image (when we compare the third and last rows). For instance, in the third column of \cref{fig:MidFiltersRASTA_Top100Im_And_Vizu}, a flower-like structure is transformed into a house pediment one. Finally, we observe that the detector fine-tuned on RASTA concentrates images with this specific pattern (last row of \cref{fig:MidFiltersRASTA_Top100Im_And_Vizu}). The first group of images of the last row contains characters with a blue dress (such as the Mary character), the second one blue mountains and the last one buildings depicted with some perspective. On the other hand, for other channels, the pattern is already present in the optimized image and the detector is slightly adapted to the new dataset. This appears in the form of a minor modification of the optimized image.
An arch detector within the pretrained ImageNet model has been modified to detect bigger arches, as can be seen in \cref{fig:AdaptationFiltersRASTA_archs}. The set of maximal activation images before fine-tuning was already composed of many building images. In this case, the overlapping ratio between the two sets of maximal activation images is equal to 46\%. We highlight those visualizations, but the reader must keep in mind that some channels are not modified by the fine-tuning or are not interpretable at all\footnote{The reader can find more feature visualizations at \url{https://artfinetune.telecom-paris.fr/data/}}. \begin{figure}[ht!] \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{ cccc } \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed4c\_3x3\_bottleneck\_pre\_relu:78 ~} \end{minipage} & \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed4d\_pool\_reduce\_pre\_relu:63 ~} \end{minipage} & \begin{minipage}[c]{0.24\textwidth}\centering \footnotesize{mixed4d\_3x3\_pre\_relu:52 ~} \end{minipage} & \begin{minipage}[c]{0.24\textwidth}\centering \footnotesize{mixed4b\_3x3\_bottleneck\_pre\_relu:35 ~} \end{minipage}\\ \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/mixed4c_3x3_bottleneck_pre_reluConv2D_78_Imagnet_Deco_toRGB} \caption{} \label{fig:MidFilters_Vizu_ImageNet_4c_78} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/mixed4d_pool_reduce_pre_reluConv2D_63_Imagnet_Deco_toRGB} \caption{} \label{fig:MidFilters_Vizu_ImageNet_4d_63} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/mixed4d_3x3_pre_reluConv2D_52_Imagnet_Deco_toRGB} \caption{} \label{fig:MidFilters_Vizu_ImageNet_4d_52} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/mixed4b_3x3_bottleneck_pre_reluConv2D_35_Imagnet_Deco_toRGB} \caption{}
\label{fig:MidFilters_Vizu_ImageNet_4b_35} \end{subfigure} \\ \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/ActivationsImages/RASTA_mixed4c_3x3_bottleneck_pre_relu_78_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{} \label{fig:AdaptationFiltersRASTA_BlueDrapery_Top100_Init} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/ActivationsImages/RASTA_mixed4d_pool_reduce_pre_relu_63_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{} \label{fig:AdaptationFiltersRASTA_montagnes_Top100_Init} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/ActivationsImages/RASTA_mixed4d_3x3_pre_relu_52_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{} \label{fig:AdaptationFiltersRASTA_fronton_Top100_Init} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/pretrained/ActivationsImages/RASTA_mixed4b_3x3_bottleneck_pre_relu_35_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{} \label{fig:AdaptationFiltersRASTA_archs_Top100_Init} \end{subfigure} \\ \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed4c_3x3_bottleneck_pre_reluConv2D_78_RASTA_small01_modif_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_BlueDrapery} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed4d_pool_reduce_pre_reluConv2D_63_RASTA_small01_modif_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_montagnes} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed4d_3x3_pre_reluConv2D_52_RASTA_small01_modif_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_fronton} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} 
\includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed4b_3x3_bottleneck_pre_reluConv2D_35_RASTA_small01_modif_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_archs} \end{subfigure} \\ \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed4c_3x3_bottleneck_pre_relu_78_Most_Pos_Images_NumberIm100_meanAfterRelu_GreenIfInInit} \caption{2\%} \label{fig:AdaptationFiltersRASTA_BlueDrapery_Top100} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed4d_pool_reduce_pre_relu_63_Most_Pos_Images_NumberIm100_meanAfterRelu_GreenIfInInit} \caption{2\%} \label{fig:AdaptationFiltersRASTA_montagnes_Top100} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed4d_3x3_pre_relu_52_Most_Pos_Images_NumberIm100_meanAfterRelu_GreenIfInInit} \caption{18\%} \label{fig:AdaptationFiltersRASTA_fronton_Top100} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed4b_3x3_bottleneck_pre_relu_35_Most_Pos_Images_NumberIm100_meanAfterRelu_GreenIfInInit} \caption{46\%} \label{fig:AdaptationFiltersRASTA_archs_Top100} \end{subfigure}\\ \end{tabular} } \end{center} \vspace*{-5mm} \caption{First row: optimized images for InceptionV1 pretrained on ImageNet. Second row: top 100 maximal activation examples for the same model. Third and fourth rows: optimized images and maximal activation examples for the same channel of the model fine-tuned on RASTA. The images surrounded by a green line are already present in the top 100 of the pretrained model.
The overlapping ratio between the two sets of maximal activation images is displayed at the bottom of each column.} \label{fig:MidFiltersRASTA_Top100Im_And_Vizu} \end{figure} \paragraph{Learned filters have a high variability.} We ran 2 distinct fine-tunings for each of the 5 considered optimization schemes, named Mode A to E\footnote{See \cref{tab:HyperParams}.}. Between the two runs, the initialization of the last layer differs, as does the order of the images in the mini-batches during training. From the same starting point (the ImageNet weights) but with different hyper-parameters, the training process may sometimes converge to similar optimized images. Conversely, two optimizations with the same hyper-parameters (same mode) may lead to very different detectors. These phenomena are illustrated in \cref{tab:SeveralModels_sameFeat_mixed4d_52}. For this given channel, depending on the mode and the training run, one can recognize houses (\cref{fig:ModeA_FT1}), flowers (\cref{fig:ModeC_FT1}), or a mix of houses and more abstract patterns (\cref{fig:ModeE_FT1}). ImageNet pre-trained filters seem to be a good initialization for learning useful new filters adapted to artistic style classification, and they also permit learning a variety of new filters. The percentage of overlap between the sets of maximal activation images before and after fine-tuning seems to be correlated with the amount of visual change. \begin{figure}[ht!]
\begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{ cccccccccc } \multicolumn{10}{c}{ImageNet Pretrained} \\ \multicolumn{10}{c}{ \includegraphics[width=0.2\textwidth]{im/pretrained/mixed4d_3x3_pre_reluConv2D_52_Imagnet_Deco_toRGB}} \\ \hline \multicolumn{2}{c}{Mode A} & \multicolumn{2}{c}{Mode B} & \multicolumn{2}{c}{Mode C} & \multicolumn{2}{c}{Mode D} & \multicolumn{2}{c}{Mode E} \\ \begin{subfigure}[c]{0.2\textwidth} \caption{Training 1 : 18\%} \label{fig:ModeA_FT1} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed4d_3x3_pre_reluConv2D_52_RASTA_small01_modif_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 2 : 24\%} \label{fig:ModeA_FT2} \includegraphics[width=\textwidth]{im/RASTA_small01_modif1/mixed4d_3x3_pre_reluConv2D_52_RASTA_small01_modif1_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 1 : 34\%} \label{fig:ModeB_FT1} \includegraphics[width=\textwidth]{im/RASTA_small001_modif/mixed4d_3x3_pre_reluConv2D_52_RASTA_small001_modif_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 2 : 42\%} \label{fig:ModeB_FT2} \includegraphics[width=\textwidth]{im/RASTA_small001_modif1/mixed4d_3x3_pre_reluConv2D_52_RASTA_small001_modif1_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 1 : 22\%} \label{fig:ModeC_FT1} \includegraphics[width=\textwidth]{im/RASTA_big001_modif/mixed4d_3x3_pre_reluConv2D_52_RASTA_big001_modif_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 2 : 8\%} \label{fig:ModeC_FT2} \includegraphics[width=\textwidth]{im/RASTA_big001_modif1/mixed4d_3x3_pre_reluConv2D_52_RASTA_big001_modif1_Deco_toRGB}\end{subfigure} & \begin{subfigure}[c]{0.2\textwidth} \caption{Training 1 : 10\%} \label{fig:ModeD_FT1} \includegraphics[width=\textwidth]{im/RASTA_small001_modif_deepSupervision/mixed4d_3x3_pre_reluConv2D_52_RASTA_small001_modif_deepSupervision_Deco_toRGB} \end{subfigure}&
\begin{subfigure}[c]{0.2\textwidth} \caption{Training 2 : 13\%} \label{fig:ModeD_FT2} \includegraphics[width=\textwidth]{im/RASTA_small001_modif_deepSupervision1/mixed4d_3x3_pre_reluConv2D_52_RASTA_small001_modif_deepSupervision1_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 1 : 2\%} \label{fig:ModeE_FT1} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_deepSupervision/mixed4d_3x3_pre_reluConv2D_52_RASTA_big001_modif_deepSupervision_Deco_toRGB} \end{subfigure}& \begin{subfigure}[c]{0.2\textwidth} \caption{Training 2 : 3\%} \label{fig:ModeE_FT2} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_deepSupervision1/mixed4d_3x3_pre_reluConv2D_52_RASTA_big001_modif_deepSupervision1_Deco_toRGB} \end{subfigure} \\ \end{tabular} } \end{center} \vspace*{-3mm} \caption{Optimized image for a given channel (mixed4d\_3x3\_pre\_relu:52) with different trainings. The overlapping ratio between the two sets of maximal activation images is displayed on top of the images.} \label{tab:SeveralModels_sameFeat_mixed4d_52} \end{figure} \paragraph{High-level filters concentrate images from the same classes.} The visualizations of high-level layers (near the classification output) are more difficult to interpret, as illustrated in \cref{fig:HighLevelFiltersRASTA}. The network seems to mix different visual information from the previous layers. Nevertheless, the groups of images with maximal activation for these two channels gather images from the same artistic style after fine-tuning. The first channel is mostly fired by Ukiyo-e images (\cref{fig:HighLevel_Ukiyoe}), while the second one gathers Western Renaissance artworks (\cref{fig:HighLevel_Renaissance}). There is no visual clue to such clustering in the optimized images. In the last image, one may see a green tree in front of a blue sky and some drapery.
The fact that the Early\_Renaissance, High\_Renaissance and Mannerism\_(Late\_Renaissance) classes are clustered together may be due to their strong visual similarity. Deep models commonly mislabel one of these classes as another, as mentioned in \cite{lecoutre_recognizing_2017}. \begin{figure}[ht!] \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{ cccc } \multicolumn{2}{c}{mixed5b\_pool\_reduce\_pre\_relu:92 } & \multicolumn{2}{c}{ mixed5b\_5x5\_pre\_relu:82 } \\ \begin{subfigure}[t]{0.25\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed5b_pool_reduce_pre_reluConv2D_92_RASTA_small01_modif_Deco_toRGB} \caption{Optimized Image} \label{fig:HighFilters_Vizu_RASTA_5b_92} \end{subfigure} & \begin{subfigure}[t]{0.25\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed5b_pool_reduce_pre_relu_92_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{Maximal activation examples : 1\%} \label{fig:HighLevel_Ukiyoe} \end{subfigure} & \begin{subfigure}[t]{0.25\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/mixed5b_5x5_pre_reluConv2D_82_RASTA_small01_modif_Deco_toRGB} \caption{Optimized Image} \label{fig:HighFilters_Vizu_RASTA_5b_82} \end{subfigure} & \begin{subfigure}[t]{0.25\textwidth} \includegraphics[width=\textwidth]{im/RASTA_small01_modif/ActivationsImages/RASTA_mixed5b_5x5_pre_relu_82_Most_Pos_Images_NumberIm100_meanAfterRelu} \caption{Maximal activation examples : 0\%} \label{fig:HighLevel_Renaissance} \end{subfigure} \\ \multicolumn{4}{c}{Top 100 composition :} \\ \multicolumn{2}{c}{\small{ Ukiyo-e 82 \% }} &\multicolumn{2}{c}{ \small{Early\_Renaissance 48\%}} \\ \multicolumn{2}{c}{\small{ Northern\_Renaissance 14 \% }} &\multicolumn{2}{c}{ \small{High\_Renaissance 27\%}} \\ \multicolumn{2}{c}{\small{ Early\_Renaissance 3 \% }} &\multicolumn{2}{c}{ \small{Mannerism\_(Late\_Renaissance) 12\%}} \\ \end{tabular} } \end{center} \vspace*{-5mm} \caption{Optimized Images and Maximal
Activation Examples for two high level layers for the model fine-tuned on RASTA. The overlapping ratio between the set of maximal activation images before and after fine-tuning is displayed under the images. The percentages of the 3 most common classes are displayed below.} \label{fig:HighLevelFiltersRASTA} \end{figure} \subsection{Training from Scratch} \label{subsec:FromScratch} \paragraph{Mid-level detectors can be learned from scratch when low-level layers are transferred from ImageNet.} The next experiment consists in fine-tuning a network whose low-level layers are initialized from the ImageNet training and frozen, whereas the mid and high-level layers are initialized randomly. In this case, the network is able to learn useful and comprehensible mid-level detectors such as drapery or checkerboard, as illustrated in \cref{fig:AdaptationFiltersRASTA_checkboard,fig:AdaptationFiltersRASTA_draperie}. This phenomenon is most likely triggered by the low-level layers inherited from the ImageNet training, but the emergence of such structured detectors with a relatively small dataset is rather surprising. \paragraph{The optimized images are more difficult to interpret with a full training from scratch.} A network trained fully from scratch seems to yield the same kind of low-level filters as the one pretrained on ImageNet, whereas the mid and high-level layers provide optimized images that are much more difficult to interpret; see \cref{fig:LearnedFromScratch_Ukiyoe,fig:LearnedFromScratch_Magic}. A possible explanation is that the network may not need to learn very specific filters given its high capacity. The training of the network provides filters that are able to fire for a given class such as Ukiyo-e (\cref{fig:LearnedFromScratch_Top100_Ukiyoe}) or Magic\_Realism (\cref{fig:LearnedFromScratch_Top100_Magic}) without being interpretable for humans. \begin{figure}[ht!]
\begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{ cc|cc } \multicolumn{2}{c|}{The end from scratch} & \multicolumn{2}{c}{All from scratch} \\ \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed4d\_5x5\_pre\_relu:50} \end{minipage} & \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed5a\_3x3\_bottleneck\_pre\_relu:1 ~} \end{minipage} & \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed4d:16} \end{minipage} & \begin{minipage}[c]{0.24\textwidth} \centering \footnotesize{mixed4d:66} \end{minipage} \\ \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200/mixed4d_5x5_pre_reluConv2D_50_RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_checkboard} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200/mixed5a_3x3_bottleneck_pre_reluConv2D_1_RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200_Deco_toRGB} \caption{} \label{fig:AdaptationFiltersRASTA_draperie} \end{subfigure} &\begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG/mixed4dRelu_16_RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG_Deco_toRGB} \caption{} \label{fig:LearnedFromScratch_Ukiyoe} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG/mixed4dRelu_66_RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG_Deco_toRGB} \caption{} \label{fig:LearnedFromScratch_Magic} \end{subfigure} \\ \begin{subfigure}[c]{0.24\textwidth} 
\includegraphics[width=\textwidth]{im/RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200/ActivationsImages/RASTA_mixed4d_5x5_pre_relu_50_Most_Pos_Images_NumberIm100} \caption{ Overlapping : 0\% } \label{fig:RandUnfreeze_Lines} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big0001_modif_adam_unfreeze50_RandForUnfreezed_SmallDataAug_ep200/ActivationsImages/RASTA_mixed5a_3x3_bottleneck_pre_relu_1_Most_Pos_Images_NumberIm100} \caption{0\% } \label{fig:RandUnfreeze_Renaissance} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG/ActivationsImages/RASTA_mixed4d_16_Most_Pos_Images_NumberIm100} \caption{0\% } \label{fig:LearnedFromScratch_Top100_Ukiyoe} \end{subfigure} & \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{im/RASTA_big001_modif_RandInit_randomCrop_deepSupervision_ep200_LRschedG/ActivationsImages/RASTA_mixed4d_66_Most_Pos_Images_NumberIm100}\caption{0\%} \label{fig:LearnedFromScratch_Top100_Magic} \end{subfigure} \\ \multicolumn{2}{c|}{Top 100 composition :} & \multicolumn{2}{c}{Top 100 composition :} \\ \scriptsize{Abstract\_Expressionism 24\%} & \scriptsize{ Northern\_Renaissance 39\%} & \scriptsize{Ukiyo-e 85\%} & \scriptsize{ Magic\_Realism 78\% } \\ \scriptsize{Minimalism 13\%} & \scriptsize{Romanticism 20\%} & \scriptsize{Art\_Nouveau\_(Modern) 11\%} & \scriptsize{ Ukiyo-e 22\% } \\ \scriptsize{Art\_Informel 9\%}& \scriptsize{Early\_Renaissance 18\%} & \scriptsize{Northern\_Renaissance 2\%} & \\ \end{tabular} } \end{center} \vspace*{-5mm} \caption{Optimized Image and Maximal activation examples from different mid-level layers. On the left, fine-tuning is performed starting from low-level layers initialized from ImageNet and upper layers initialized at random. On the right, the fine-tuning is fully performed from scratch (randomly initialized layers).
The overlapping ratio between the set of maximal activation images before and after fine-tuning is displayed under the images. The percentages of the 3 most common classes are displayed below. } \label{fig:RASTA_RandomNets} \end{figure} \subsection{Classification Performance.} \label{sec:perfo_RASTA} Even though the goal of this work is not to reach the best possible classification performance, we display the corresponding results in \cref{tab:RASTA_accs} to further characterize the considered fine-tuning. From this table, one sees that a simple and short fine-tuning of a pre-trained model yields better performance than the off-the-shelf strategy. The latter method is based on extracting features from the ImageNet pretrained model and training only the last layer. The features extracted may be too specific to the ImageNet classification task, and the classification head too small. With training from scratch, we failed to obtain a model as good as the ImageNet pretraining. This can be due to the relatively small size of the RASTA dataset. Some data augmentation and a longer training were required to reach 45.29\% Top-1 accuracy. This experiment confirms the conclusions of Lecoutre et al. \cite{lecoutre_recognizing_2017} for another deep architecture. We conclude this section by observing that a simple way to improve results is to average the predictions of three models trained with different strategies (see last row of \cref{tab:RASTA_accs}). \begin{table}[ht!] \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline Method & Top-1 & Top-3 & Top-5 \\ \hline Off-the-shelf InceptionV1 pretrained on ImageNet & 30.95 & 58.71 &74.10 \\ \hline FT of InceptionV1 pretrained on ImageNet (Mode A training 1) & \textbf{55.18} & \textbf{82.25} & \textbf{91.06}\\ \hline Training the end of the model from scratch with pretrained frozen low-level layers.
& 50.35 & 78.04 & 88.42\\ \hline InceptionV1 trained from scratch & 45.29 & 73.44 & 84.67\\ \hline Ensemble of the 3 previous models & \textit{58.76} & \textit{83.99} & \textit{92.23} \\ \hline \end{tabular} } \caption{Top-k accuracies (\%) on the RASTA dataset \protect\cite{lecoutre_recognizing_2017} for different methods. The hyperparameters differ between the methods.} \label{tab:RASTA_accs} \end{table} \subsection{Quantitative Evaluation of the CNNs Modification} \label{sec:CKA_l2_RASTA} In order to quantify some of the previous observations, we make use of the linear CKA \cite{kornblith_similarity_2019} as a measure of similarity between the output features at a given layer for two instances of a network. For computational reasons, we use the global spatial average of each channel. The results are shown in \cref{fig:linearCKA_plot}. When comparing the pretrained model with its fine-tuned version (dark blue line), we observe a decrease of the CKA as the layer depth increases. This confirms what we observed previously with the optimized images (\cref{subsec:ImageNetToArt}). The fine-tuned models are the closest ones according to the green and light blue lines. The high-level layers of those models are close because the models have been trained on the same dataset from the same initialization point. The CKA also decreases with depth when we compare a model trained from scratch to its random initialization (purple and orange curves). The CKA values reported here are higher than the ones obtained in \cite{neyshabur_what_2020} for X-ray images. In the case of the model trained from scratch, we even observe several orders of magnitude of difference. This confirms and quantifies the fact that the structure of artistic images is closer to that of natural images than the structure of X-ray images is.
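For reference, linear CKA has a simple closed form. The sketch below is our own minimal implementation (not the authors' code) of the similarity index of Kornblith et al., applied to two activation matrices that are assumed to hold, for every test image, the global spatial average of each channel:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape
    (n_examples, n_features), following Kornblith et al. (2019)."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))
```

The index lies in $[0, 1]$ and is invariant to isotropic scaling and orthogonal transformations of the features, which is what makes it suitable for comparing layers across two independent trainings.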
\begin{figure}[h] \begin{center} \resizebox{\textwidth}{!}{ \input{imInTex/ForPaper__CKA_per_layer.tex} } \end{center} \vspace*{-7mm} \caption{CKA computed on RASTA test set for different models trained or fine-tuned on RASTA train set.} \label{fig:linearCKA_plot} \end{figure} \begin{figure}[!tbp] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{ \input{imInTex/RASTA_small01_modif/Overlapping/OverLap_100_Boxplots_per_layer.tex} } \caption{Boxplots of Overlapping ratio.} \label{fig:100_Boxplots_per_layer} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{ \input{imInTex/RASTA_small01_modif/Overlapping/Purity_entropy100_Boxplots_per_layer.tex} } \caption{Boxplots of Entropy over classes.} \label{fig:Entropy_class_top100_Boxplots_per_layer} \end{subfigure} \caption{Boxplots of some metrics on the top 100 maximal activation images for the model fine-tuned on RASTA (Mode A1). For each box, the horizontal orange line corresponds to the average result and the star to the median.} \end{figure} In addition to feature similarity, we also look at the distance between two models in the parameter space in \cref{tab:RASTA_l2} (as in the recent work \cite{neyshabur_what_2020}). We can see that the fine-tuned models are still close to one another and also close to the ImageNet pretrained initialization. In contrast, the models trained from scratch are much farther away from their initialization.
\begin{table} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline NetA & NetB & mean $\ell_{2}$ norm \\ \hline Pretrained on ImageNet & FT on RASTA (Mode A training 1) & 1.26\\ FT on RASTA (Mode A training 1) & FT on RASTA (Mode B training 1)& 1.24\\ FT on RASTA (Mode A training 1) & FT on RASTA (Mode A training 2) & 1.23\\ \hline The end from scratch & Its Random initialization & 6.52\\ From scratch & Its Random initialization & 8.13\\ \hline \end{tabular} } \caption{Mean over all layers of the $\ell_{2}$ norm of the difference between the convolutional kernels of two models.} \label{tab:RASTA_l2} \end{table} We also observe the evolution of the overlapping ratio between the ImageNet pretrained model and the fine-tuned one for the top 100 maximal activation images in \cref{fig:100_Boxplots_per_layer}. We can see a monotonic decrease of this ratio with the depth of the layer. This is another illustration of the fact that the high-level layers are more modified by the fine-tuning. The behavior is the same if we consider the top 1000 maximal activation images. One also observes that channels with a low overlapping ratio seem to correspond to optimized images that are more modified by the fine-tuning. This fact should be investigated further and could yield a simple way to browse through optimized images. Finally, in order to quantify the class concentration described in \cref{subsec:ImageNetToArt}, we display the entropy over classes in \cref{fig:Entropy_class_top100_Boxplots_per_layer}, showing a decrease of the average entropy with layer depth, starting roughly in the middle of the network architecture.
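Both per-channel diagnostics used above are simple statistics. The sketch below is our own illustration (function names are ours; the top-100 lists are assumed to come from ranking images by their mean channel activation):

```python
import numpy as np

def overlap_ratio(top_k_before, top_k_after):
    # Fraction of a channel's top-k maximal activation images after
    # fine-tuning that already appeared in its top-k before fine-tuning.
    return len(set(top_k_before) & set(top_k_after)) / len(top_k_after)

def class_entropy(class_labels):
    # Shannon entropy (in nats) of the class distribution among the
    # top-k maximal activation images of a channel; lower entropy means
    # the channel concentrates images from fewer classes.
    _, counts = np.unique(class_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

A channel that fires almost exclusively on one style (e.g.\ Ukiyo-e) thus has an entropy close to zero, while a channel firing uniformly over all classes has maximal entropy.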
\subsection{From One Art Dataset to Another} \label{subsec:ArtToArt} \Cref{tab:ArtUK_IconArt_perf} compares the classification results obtained for the object classification task on the Paintings dataset \cite{crowley_search_2014} and on the IconArt dataset \cite{gonthier_weakly_2018} when using a model pretrained on ImageNet or fine-tuned on the RASTA dataset (from an ImageNet initialization or from scratch). Contrary to \cite{sabatelli_deep_2018}, where pretraining on the Rijksmuseum dataset was compared to ImageNet pretraining for the Antwerp dataset, the task here is not the same between the two artistic datasets: artistic style classification versus object classification. Once again the fine-tuning strategy is better than the off-the-shelf one. The most important observation is that the double fine-tuning (first using RASTA, then using the considered dataset) outperforms the direct fine-tuning using only the dataset at hand. The filters learned on RASTA seem to be more adapted to other artistic datasets and ease the transfer in these two cases (IconArt and Paintings) where the datasets are relatively small. Finally, a model only trained on RASTA (last row of \cref{tab:ArtUK_IconArt_perf}) does not provide a good initialization point for fine-tuning, neither for IconArt nor for Paintings. This is most probably due to the size of the RASTA dataset.
\begin{table} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline Method & Paintings & IconArt \\ \hline Off-the-shelf InceptionV1 pretrained on ImageNet & 56.0 & 53.2 \\ Off-the-shelf InceptionV1 pretrained on ImageNet and RASTA & 52.4 & 54.4 \\ \hline FT of InceptionV1 pretrained on ImageNet (Mode A training 1) & 64.8 & 59.2 \\ \hline FT of InceptionV1 pretrained on ImageNet and RASTA & \textbf{65.6} & \textbf{67.4} \\ FT of InceptionV1 with the end trained from scratch on RASTA (pretrained frozen low-level layers) & 59.6 & 59.4 \\ FT of InceptionV1 trained from scratch on RASTA & 49.1 & 50.1 \\ \hline \end{tabular} } \caption{Mean Average Precision (\%) on the Paintings \protect\cite{crowley_search_2014} and IconArt \protect\cite{gonthier_weakly_2018} test sets.} \label{tab:ArtUK_IconArt_perf} \end{table} In \cref{tab:SmallArtDataset_cka_l2}, we use the two previously mentioned metrics to compare the different models fine-tuned on the IconArt and Paintings datasets. The model fine-tuned on a small art dataset stays similar to its ImageNet pretrained initialization (with a CKA of 0.90 or 0.91 for the IconArt and Paintings datasets). A fine-tuning on the large RASTA dataset changes the network more ($CKA=0.77$ and $\ell_2$ norm = 1.26). A double fine-tuning moves even further away from the original pretrained weights ($CKA=0.73$ and $0.76$). As already mentioned, this method provides the best classification performance. In the case of the models trained from scratch (last two lines of \cref{tab:SmallArtDataset_cka_l2}), the change between initialization and final model is also large due to the randomness of the initialization, but those models are worse in terms of classification. \begin{savenotes} \begin{table}[ht!]
\resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|cc|cc|} \hline \multicolumn{2}{|c|}{Nets / Small art dataset used :} & \multicolumn{2}{c|}{IconArt} & \multicolumn{2}{c|}{Paintings} \\ \hline NetA & NetB & mean CKA & mean $\ell_2$ norm & mean CKA & mean $\ell_2$ norm \\ \hline Pretrained on ImageNet & FT on small art dataset & 0.90 & 0.14 & 0.91 & 0.15 \\ Pretrained on ImageNet & FT on RASTA + FT on small dataset & 0.73 & 1.61 & 0.76 & 1.67 \\ FT on RASTA (Mode A) & FT on RASTA + FT on small dataset & 0.79 & 0.78 & 0.77 & 0.86\\ \hline The end from scratch on RASTA &The end from scratch on RASTA + FT on small dataset & 0.70 & 0.91 & 0.72 & 1.01 \\ From scratch on RASTA & From scratch on RASTA + FT on small dataset & 0.83 & 0.27 & 0.79 & 0.52 \\ \hline \end{tabular} } \caption{Mean linear CKA (on IconArt or Paintings test set) and mean $\ell_2$ norm between models.} \label{tab:SmallArtDataset_cka_l2} \end{table} \end{savenotes} \vspace*{-5mm} \section{Conclusion} In this work, we have investigated the effect of fine-tuning a network pre-trained on ImageNet using artistic datasets. We made use of visualization techniques and quantitative assessments of the changes of the networks. Among other things, we have shown that some of the intermediate layers of the networks exhibit easily recognizable patterns that appear to be more related to art images than the patterns learned on natural images, while lower layers of the network are hardly changed. We have also shown that higher layers tend to concentrate classes after fine-tuning. Finally, we have shown that a double fine-tuning involving a medium-sized artistic dataset can help the classification of small artistic datasets, relying on visual patterns more related to the domain. The classification tasks for the two artistic datasets do not need to be identical: in our case, the intermediate task is style classification whereas the final one is object classification.
This study provides some insights into the way networks are modified by fine-tuning in the case of artistic databases. Several of the findings in this work necessitate further confirmation, possibly on larger databases \cite{strezoski_omniart_2018,wilber_bam_2017}. Another perspective would be to go further in the use of feature visualization, as is done in \cite{olah_overview_2020,olah_building_2018}. For instance, it could be more informative to look at the patches that fire a given channel rather than the whole input image, as in \cite{olah_building_2018}. In \cite{olah_overview_2020}, the authors argue for universality across different deep convolutional architectures, and it would be of interest to check whether the same holds for artistic datasets. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} Within the $\Lambda$CDM paradigm, the growth of cosmic structure proceeds as overdensities collapse into dark matter halos, which eventually serve as the sites for galaxy formation \citep{white78, blumenthal84, davis85}. Over time, the hierarchical accretion and merging of halos drives the development of substructure, such that some halos reside within the bounds of larger parent halos \citep{moore98}. The galaxies hosted by these subhalos are typically referred to as satellites and are important probes of the evolution of substructure, as they serve as tracers of dark matter on small scales. High-resolution $N$-Body and hydrodynamic simulations confirm this picture of hierarchical structure formation, while also making predictions regarding the properties of subhalos and the satellite galaxies they host. In particular, simulated subhalos are not isotropically distributed with respect to their parent dark matter halo. Instead, simulations across a broad range of mass scales predict that satellite galaxies should preferentially lie along orbits aligned with the major axis of the host halo \citep[e.g.][]{vdb99, knebe04, libeskind05, kang07, lovell11, wang13}. Two possible physical drivers are often associated with this predicted alignment of substructure with the shape of the larger gravitational potential: [\emph{i}] preferential destruction (or suppression) of satellites on orbits anti-aligned with the halo's major axis \citep{zaritsky99, penarrubia02, pawlowski12b} or [\emph{ii}] accretion of satellites along preferred directions, perhaps associated with large-scale filaments \citep{zentner05, libeskind11}. 
Observations of galaxies in nearby groups and clusters largely support the predicted anisotropies found in simulations, such that satellites in massive dark matter halos are preferentially aligned with the major axis of the central galaxy and with the larger-scale, filamentary structure \citep[e.g.][]{west00, plionis03, faltenbacher07, hao11, tempel15}. When pushing to lower-mass, more-isolated halos, studies based on large spectroscopic samples similarly find that satellites preferentially reside along the major axis of red (or early-type) hosts, while the distribution of satellites around blue (or late-type) hosts is consistent with being isotropic \citep[][but see also \citealt{zaritsky97}]{brainerd05, sales04, sales09, yang06, azzaro07, bailin08}. This apparent lack of spatial anisotropy for satellites of late-type hosts is potentially driven by random misalignment between the major axis of the host's disc and the dark matter halo, such that the satellites may be aligned with the latter but not the former \citep{bailin05, libeskind07, deason11}. In contrast to the satellites of comparable star-forming hosts in the local Universe, observations of the Local Group suggest that the spatial distribution of satellites around both the Milky Way and M31 are significantly anisotropic. In particular, the satellites of the Milky Way preferentially reside near the northern and southern Galactic poles \citep[i.e.~along the minor axis of the Milky Way disc,][]{holmberg69}, possibly following a planar arrangement \citep{lb76, kunkel76, metz07, pawlowski12a}. The satellites of M31 are similarly anisotropic in their distribution, with a large subset belonging to a thin disc or plane \citep{karachentsev96, koch06, mcconnachie06, metz09, conn13}. 
When including velocity information, the anisotropy of the Local Group satellite distribution becomes even more pronounced, with many of the Milky Way satellites following polar obits, consistent with a vast, coherently-rotating plane \citep{metz08, pawlowski13a, pawlowski13b}. For M31, a yet more-striking planar structure is observed, such that a large number of satellites exhibit coherent rotation along the line-of-sight to the Milky Way, forming a vast plane with a diameter of $\sim400$~kpc and a thickness of less than $\sim15$~kpc \citep{ibata13}. While simulations predict that satellites should preferentially align with the major axis of the host dark matter halo, the strong anisotropies observed for the Local Group satellites (especially those around M31) are inconsistent with the expectations of simulated subhalo populations \citep[][but see also \citealt{buck15}, who argue that co-rotating planar arrangements of satellites are predicted by $\Lambda$CDM]{kroupa05, kroupa10, pawlowski12b, pawlowski14c, ibata14b}. The striking nature of the M31 satellite disc has served as fuel for many recent studies investigating the possibility of similar, strongly-anisotropic satellite distributions around galaxies outside of the Local Group, such as the discovery of possible planar structure in the satellite distribution of the Centaurus A group \citep{tully15, libeskind15}. In particular, recent analysis of satellite pairs in the Sloan Digital Sky Survey \citep[SDSS,][]{york00} points towards the possibility of co-rotating planar satellite structures around nearby massive galaxies \citep[][hereafter I14]{ibata14}; for $20$ out of $22$ systems, with satellite pairs located on diametrically-opposed sides of the host galaxy, I14 detect co-rotation along the line-of-sight, suggesting that thin satellite planes -- similar to that of M31 -- may be relatively common. 
Specifically, this result, which is supported by an analysis of the spatial positions of photometrically-selected satellite samples, indicates that $\gtrsim50\%$ of the satellite population may reside in thin co-rotating planes \citep[][although \citealt{cautun15} argue that the evidence for the ubiquity of such planar structures is not robust]{ibata14c}. Given the scarcity of such structures in modern simulations \citep[][see \citealt{cautun15b} for an argument that the diversity of properties of these structures accounts for their perceived rarity.]{ibata14b, pawlowski14a}, the analysis of I14 poses a strong test of the $\Lambda$CDM cosmology and thereby warrants further investigation. In this paper, we re-examine the kinematic evidence for the existence of co-rotating planes of satellites around nearby massive hosts by comparing the coherence of line-of-sight velocities of observed satellite galaxies to simple models of satellite spatial distributions and kinematics. The structure of the paper is as follows: in \S\ref{sec:data}, we discuss the selection of the observational sample and the measured abundance of co-rotating satellite pairs. In \S\ref{sec:models}, we introduce our numerical models and compare the mock observations derived from the models to the observational data. Finally, in \S\ref{sec:discuss}, we discuss our results in the context of the search for M31-like planes elsewhere in the Universe. Throughout our analysis, we employ a $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with WMAP7+BAO+$H_{0}$ parameters $\Omega_{\Lambda} = 0.73$, $\Omega_{m} = 0.27$, and $h =0.70$ \citep{komatsu11}, and unless otherwise noted all logarithms are base $10$. Throughout the paper, we use the terms ``[satellite] disc'' and ``[satellite] plane'' interchangeably, referring to co-rotating planar satellite configurations. 
\begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{alpha_def_figure_no_border.jpeg} \caption{Definition of the opening angle ($\alpha$) between satellite pairs (here, S1 and S2), as measured with respect to the host galaxy. Each satellite pair has a uniquely defined $\alpha$, ranging from $0^{\circ}$ to $180^{\circ}$, such that satellites on diametrically-opposed sides of a host correspond to $\alpha = 0^{\circ}$. } \label{fig:alpha} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=6.5in]{zoom.png} \caption{\emph{Left}: the cumulative number of co-rotating (green line) and counter-rotating (orange line) satellite pairs as a function of opening angle ($\alpha$), including uncertainties based on Poisson statistics. \emph{Right}: the fraction of co-rotating satellite pairs as a function of opening angle, computed in distinct bins of $\alpha$ (bins have width $6^{\circ}$), with error bars derived according to the binomial theorem. There is a significant excess of co-rotating satellite pairs over counter-rotating pairs at small $\alpha$ (i.e.~for satellites that are nearly diametrically opposed to each other). This overabundance of co-rotating pairs suggests the possible presence of coherently-rotating planar satellite structures around nearby massive host galaxies (however, see Fig.~\ref{fig:full}).} \label{fig:zoom} \end{center} \end{figure*} \section{Observational data} \label{sec:data} \subsection{Sample Selection} We draw our observational data from Data Release 7 \citep[DR7,][]{abazajian07} of the SDSS, making use of the derived data products from the NYU Value-Added Galaxy Catalog \citep[VAGC,][]{blanton05} including absolute $r$-band magnitudes ($M_{r}$) that are $K$-corrected to $z = 0.1$ using \textsc{KCORRECT} \citep{blanton07}. Throughout this work, we restrict our analysis to regions of the SDSS where the spectroscopic completeness exceeds $70\%$ (i.e.~\textsc{FGOTMAIN} $>$ 0.7). 
We also reject all galaxies with line-of-sight velocity errors greater than $25~{\rm km}/{\rm s}$. In selecting our galaxy sample, we adhere closely to the procedure of I14. We select a sample of hosts in the magnitude range $-23 < M_{r} < -20$ and within a redshift range of $0.002 < z < 0.05$. A host is considered isolated if there are no brighter objects within $500$~kpc (in projection) on the sky and within $1500~{\rm km}/{\rm s}$ in velocity space. Only isolated systems are retained, reducing the number of hosts to $22,780$ isolated galaxies. From this set of host systems, we identify galaxies as satellites of a given host if \vspace*{0.1in} \noindent (i) their magnitudes fall in the range \\ \indent \indent \indent $M_{r,{\rm host}} + 1 < M_{r,{\rm sat}} < -16$, \vspace*{0.1in} \noindent (ii) they are located between $20$~kpc and $150$~kpc from their \\ \indent host in projected distance ($d_{\rm proj}$), and \vspace*{0.1in} \noindent (iii) their velocity offset from the host lies in the range $$25~{\rm km}/{\rm s} < |V_{\rm sat} - V_{\rm host}| < 300~{\rm km}/{\rm s} \times e^{-(d_{\rm proj}/300~{\rm kpc})^{0.8}}.$$ \noindent This velocity bound is taken from I14, and is designed to reduce the contamination from interlopers in the satellite sample. Since our interest is in pairs of satellites, we retain only hosts with two or more satellites. Our final sample contains $427$ such hosts, with $965$ associated satellites. Note that individual hosts are allowed to harbor more than two satellites; on average, the SDSS hosts (as well as our model hosts, see \S\ref{sec:models}) have $2.3$ satellites. \subsection{Co-rotation signal} In this subsection, we investigate pairs of satellites for evidence of co-rotation with respect to their host. 
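The selection cuts (i)--(iii) above can be expressed compactly in code. The sketch below is our own illustrative reimplementation, not code from any released pipeline; the function and variable names are ours:

```python
import math

def is_satellite(M_r_host, M_r_sat, d_proj_kpc, dV_kms):
    """Apply the satellite selection cuts (i)-(iii) described above (sketch)."""
    # (i) at least 1 mag fainter than the host, but brighter than M_r = -16
    if not (M_r_host + 1 < M_r_sat < -16):
        return False
    # (ii) projected separation between 20 and 150 kpc
    if not (20.0 < d_proj_kpc < 150.0):
        return False
    # (iii) velocity-offset window, with the distance-dependent upper
    #       bound of I14 designed to suppress interloper contamination
    v_max = 300.0 * math.exp(-(d_proj_kpc / 300.0) ** 0.8)
    return 25.0 < abs(dV_kms) < v_max
```

For a satellite at $d_{\rm proj} = 100$~kpc, for example, the upper velocity bound evaluates to $\approx198~{\rm km \, s}^{-1}$.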
To facilitate this, we introduce the parameter $\alpha$, defined as the angle between the line extending from one satellite through the host and the position vector of the second satellite relative to the host, as projected on the sky (see Figure~\ref{fig:alpha}). We define this opening angle $\alpha$ such that a satellite pair located on diametrically-opposed sides of a host will have an opening angle of $0^{\circ}$. For the duration of this work, we will refer to a satellite pair as ``co-rotating'' if the satellites have opposite-signed (i.e.~one + and one -) line-of-sight velocity offsets relative to their host and their associated opening angle ($\alpha$) is less than $90^{\circ}$, or if they have same-signed velocity offsets relative to their host and their associated $\alpha$ is greater than $90^{\circ}$. Otherwise, the satellite pair is deemed to be counter-rotating. \begin{figure*} \begin{center} \includegraphics[width=6.5in]{full_red_alt_binning.png} \caption{\emph{Left}: the cumulative number of co-rotating (green line) and counter-rotating (orange line) satellite pairs as a function of $\alpha$, including Poisson errors, over the full $180^{\circ}$ domain. \emph{Right}: the fraction of co-rotating satellite pairs as a function of opening angle, computed in distinct bins of $\alpha$ (bins have width $10^{\circ}$), with error bars derived according to the binomial theorem (solid black line). To facilitate visualization, a coarser binning is adopted than that employed in Fig.~\ref{fig:zoom}b; the dashed line, however, shows the corresponding dependence of co-rotating fraction on $\alpha$ utilizing this narrower binning. 
Over the full range of opening angles, the observations are largely consistent with no excess of co-rotating satellite pairs, suggesting an isotropic velocity distribution.} \label{fig:full} \end{center} \end{figure*} Figure~\ref{fig:zoom}a shows the cumulative number of co-rotating and counter-rotating satellite pairs in our sample as a function of opening angle at $\alpha<35^{\circ}$. At small opening angles there is a clear excess of co-rotating pairs, as first reported by I14. The overabundance of co-rotating pairs as a function of $\alpha$ is better illustrated in Figure~\ref{fig:zoom}b, which shows the fraction of satellite pairs that are co-rotating as a function of opening angle, computed in distinct bins of $\alpha$. While the surplus of co-rotating pairs at small opening angle is readily apparent, at $10^{\circ}<\alpha<35^{\circ}$ the signal is consistent with the sample being divided equally between co-rotating and counter-rotating (i.e.~a co-rotation fraction of $0.5$, as would be expected in the absence of any co-rotating structure). At first glance, the data would seem to indicate the presence of coherently rotating structures that can only be detected at small values of $\alpha$ (i.e.~co-rotating planes of satellites viewed close to edge-on). When examining the co-rotating fraction of satellite pairs over the full range of opening angles (i.e.~$0^{\circ}<\alpha<180^{\circ}$), however, the evidence for planes of satellites is far less convincing. In Figure~\ref{fig:full}, we show (a) the cumulative counts of co-rotating and counter-rotating satellite pairs and (b) the co-rotating fraction, again in discrete bins of $\alpha$, over the full range of opening angles.\footnote{Note that the co-rotating fraction as a function of opening angle is computed using different binning procedures in Fig.~\ref{fig:zoom}b and Fig.~\ref{fig:full}b, so as to aid in visualization. Our results do not strongly depend on how the data are binned. 
The increase in co-rotating fraction at small opening angle appears less significant in Fig.~\ref{fig:full}b as a result of allowing satellite pairs with opening angles of $\sim 10^{\circ}$ in the innermost bin.} In this light, the excess of co-rotating pairs at $\alpha<10^{\circ}$ seems less likely to result from structured, coherent rotation associated with planar distributions of satellites. For example, repeatedly resampling $400$ satellite pairs placed randomly in phase space (as in our ``isotropic'' model in \S\ref{sec:models}) will frequently produce satellite samples with excess co-rotating fractions at random opening angles that are, by definition, not indicative of any underlying physics. Thus, care must be taken not to overinterpret the observed overabundance of co-rotating pairs at small angles, as it may merely be the result of random fluctuations associated with undersampling of an underlying isotropic distribution. The remainder of this paper will examine the argument that the excess of co-rotating pairs at small $\alpha$ is significant and indicative of ubiquitous coherent co-rotation (similar to that observed for M31's satellite population), by comparing the SDSS data to statistical models of satellite kinematics. \section{Comparison to Toy Models} \label{sec:models} In order to gain insight as to whether the data presented in Figures \ref{fig:zoom} and \ref{fig:full} do indeed argue for the existence of coherently rotating satellite structures, we compare the SDSS data to mock observations of simple, idealized ``toy'' models of satellite systems. 
These models are not intended to give a detailed description of satellite phase-space distributions.\footnote{In particular, the toy models do not account for potential velocity correlations due to group infall, which \cite{cautun15} argues may be important, nor do they capture the complexities of observing against a background of interloper galaxies with potentially correlated velocities.} However, they do provide a meaningful basis for comparison to the observational data. We begin by detailing how each toy model is constructed: \vspace*{0.1in} \noindent (i) Isotropic model --- For this model, each host is randomly assigned $2-5$ satellite galaxies.\footnote{In each case where the model is permitted to have more than two satellites, we set the probability of a host having $n$ satellites to be four times greater than the probability of having $n+1$ satellites.} The position and velocity of each satellite, with respect to the host, is randomly chosen to be between $0$ and $200$~kpc from the host (with random angular coordinates) and $0$ and $200~{\rm km \, s}^{-1}$, respectively. The model is randomly rotated and observed along the $z$ direction --- i.e.~the $z$ direction is taken to be the line-of-sight and the $xy$ plane is taken to be the plane of the sky. \vspace*{0.1in} \noindent (ii) Disc model --- In this model, $2-5$ satellites are placed randomly between $0$ and $200$~kpc from the origin on the $xy$ plane (prior to any rotation taking place) and then randomly given a $z$ coordinate between $-10$ and $10$~kpc. All satellites are assigned a 3D velocity of $100~{\rm km \, s}^{-1}$, such that each satellite is in circular motion about the host, initially rotating in the $xy$ plane. The model is then randomly rotated and viewed along the $z$ axis. 
Finally, to mimic observational error in the line-of-sight velocities, we add to the $z$ component of each satellite's velocity a random offset drawn from a normal distribution with a standard deviation of $\sigma_{V} = 20~{\rm km \, s}^{-1}$. Our qualitative and quantitative results are not strongly dependent upon the assumed velocity structure or thickness of the model satellite discs; since we are only concerned with the sign (+ or -) of the $z$ component (post-rotation) of the satellite's velocity vector, the magnitude of that vector -- and any radial dependencies it might have -- is largely unimportant. \vspace*{0.1in} \noindent (iii) M31 model --- This model is based on the positions and velocities of the 13 satellites belonging to the co-rotating plane identified around M31 by \citet{ibata13}. The three-dimensional positions of the satellites are taken from \citet{mcconnachie12} and the line-of-sight velocities are compiled from \citet{mcconnachie12} and \citet{collins13}. Note that we only consider the 13 satellites exhibiting coherent rotation; the two satellites aligned with the planar structure, but with counter-aligned line-of-sight velocities, are excluded. We assign each mock satellite a velocity whose radial component is consistent with the observed line-of-sight velocities of the true M31 satellites, such that each satellite's total velocity puts it in circular motion around the host.\footnote{The origin prior to rotation is taken to be the position of a Milky Way observer.} We then randomly select $2-5$ satellites to mock observe (independent of the luminosity of the true M31 satellites), and the system is randomly rotated and viewed along the $z$ axis. We again add to the $z$ component of the velocity a random offset drawn from a normal distribution with $\sigma_{V}=30~{\rm km \, s}^{-1}$, representative of measurement error in the line-of-sight velocity. 
While constructed using the positions and velocities of the co-rotating satellites in the M31 plane, it is useful to note that our model is not a true analog of the observed system as every member of the M31 planar structure is fainter (by $\sim1-2$ magnitudes or more) than our satellite luminosity limit ($M_{r} < -16$). \vspace*{0.1in} \noindent (iv) Dumbbell model --- In this model, each host is restricted to exactly two satellites. Once the first satellite is randomly placed on the $xy$ plane (again, prior to any rotation), the placement of the second satellite is restricted, such that the opening angle between the two satellites is less than $10^{\circ}$ when the system is viewed along the $z$ axis. From there, each satellite is assigned a $z$ coordinate between $-10$ and $10$~kpc and the system is subjected to random rotation and the assignment of line-of-sight velocity errors, and is finally viewed along the $z$ axis. In essence, this model requires two satellites to be on opposite sides of their host and orbiting in rigid-body rotation. As was the case for the disc model, our results do not strongly depend on the adopted thickness of the dumbbell. \vspace*{0.1in} For each realization of a model, we rotate the system to a random orientation before observing along the $z$ axis. In our analysis, we simulate a variety of statistical samples, each consisting of $N=10^6$ model realizations, where most samples include a mix of realizations drawn from the isotropic model along with one of the other three models. For example, the $50\%$ disc + $50\%$ isotropic model consists of $5 \times 10^{5}$ realizations of the disc model and $5 \times 10^5$ realizations of the isotropic model. Note that such a sample does \emph{not} consist of $10^6$ hosts whose satellites have a $50\%$ probability to be placed in a disc and a $50\%$ probability to be placed randomly. 
While we do not explicitly explore cases of this type, they are equivalent to the cases we explore that have a percentage disc composition of approximately $p_{\rm sat}^{2}$, where $p_{\rm sat}$ is the probability of an individual satellite being placed in a disc. Note that this is only an approximation, as some hosts in our models have three or more satellites. For example, a case where satellites of each host independently have a $50\%$ chance of being placed in a disc, would be equivalent to our $25\%$ disc + $75\%$ isotropic model. \begin{table} \centering \begin{tabular}{l c c c c} \hline \hline Model &$p_{\rm sat}$&$\chi ^2$ & $\tilde{\chi}^2$ & $p$\\ \hline 100\% Disc &1.0&460.07 & 28.76 & $< 0.001$ \\ 50\% Disc + 50\% Isotropic &0.71&179.77 & 11.39 & $< 0.001$ \\ 25\% Disc + 75\% Isotropic &0.50&71.20 & 4.45 & $< 0.001$ \\ 10\% Disc + 90\% Isotropic &0.32&22.44 & 1.40 & 0.13 \\ \hline 50\% M31 + 50\% Isotropic &0.71&221.79 & 13.05 & $< 0.001$ \\ \hline 50\% Dumbbell + 50\% Isotropic &0.71&8.70 & 0.58 & 0.89 \\ 10\% Dumbbell + 90\% Isotropic &0.32&21.62 & 1.44 & 0.12 \\ \hline 100\% Isotropic &0&11.40 & 0.67 & 0.83 \\ \hline \hline \end{tabular} \label{tab:chis} \caption{$\chi ^2$, reduced $\chi ^2$, and $p$ values based on a comparison of the observed fraction of co-rotating satellite pairs in the SDSS versus that for various statistical samples of model satellite distributions as described in \S\ref{sec:models}. Models in which a large fraction of hosts harbor discs of satellites (including the model based on the M31 plane) are disfavored relative to our dumbbell model or the simple isotropic case. Calculations are made taking satellite pairs over the full range of $\alpha$ (i.e.~$0^{\circ} < \alpha < 180^{\circ}$), but the $p$-values are largely unchanged when we restrict the comparison to a narrower range of opening angles (e.g.~$\alpha < 35^{\circ}$). 
Also shown is the probability that an \textit{individual} satellite is found in a disc traced by bright satellites, $p_{\rm sat}$.} \end{table} In Table~\ref{tab:chis}, we present a summary of our analysis of the $\chi^2$ values describing the goodness of fit for several statistical samples of modeled systems in comparison to the observed fraction of co-rotating satellite pairs as a function of opening angle in the SDSS (see Fig.~\ref{fig:full}b). The number of degrees of freedom that enter into each $p$-value calculation is based on how restrictive the model under consideration is: disc models are taken to have one fewer degree of freedom than the purely isotropic model or the model that is based on the observed positions of M31 satellites, since satellites are restricted to being placed in the disc. The dumbbell model essentially introduces an additional constraint on the disc model, so dumbbell models have yet one fewer degree of freedom. \subsection{Disc Model and M31 model} \label{sec:disc_models} In Figure \ref{fig:discs}, we show the observed fraction of SDSS satellite pairs that are co-rotating as a function of the opening angle $\alpha$ in comparison to mock observations of various disc models. The thick blue line corresponds to a statistical sample composed purely of satellite discs, while the green, orange, and magenta lines correspond to samples composed of $50\%$, $25\%$, and $10\%$ satellite discs, respectively, with the remainder of each sample consisting of isotropic satellites. In addition, the dashed green line corresponds to a sample in which the satellites of $50\%$ of the simulated hosts follow an M31 model and the other $50\%$ are distributed according to an isotropic satellite population. For comparison, the solid black line shows the co-rotating fraction for a purely isotropic sample. 
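The mock-observation machinery shared by all of the toy models (a uniformly random 3D rotation followed by projection along the $z$ axis, plus the co-rotation rule defined in \S\ref{sec:data}) can be sketched as follows; this is an illustrative reimplementation under our own conventions, not the authors' code:

```python
import math
import random

def random_rotation():
    """Uniformly random 3D rotation matrix, built from a random unit
    quaternion (used to place each model system at a random orientation)."""
    while True:
        q = [random.gauss(0.0, 1.0) for _ in range(4)]
        norm = math.sqrt(sum(c * c for c in q))
        if norm > 1e-12:
            break
    w, x, y, z = (c / norm for c in q)
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w), 2 * (x * z + y * w)],
        [2 * (x * y + z * w), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ]

def rotate(R, v):
    """Apply rotation matrix R to the 3-vector v."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def opening_angle(p1, p2):
    """Opening angle alpha in degrees between two satellites as projected
    on the sky (x, y); alpha = 0 for a diametrically-opposed pair."""
    x1, y1 = p1[0], p1[1]
    x2, y2 = p2[0], p2[1]
    # angle of satellite 2 measured from the ray opposite satellite 1
    cos_a = -(x1 * x2 + y1 * y2) / (math.hypot(x1, y1) * math.hypot(x2, y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def is_corotating(p1, v1z, p2, v2z):
    """Co-rotation rule: opposite-signed line-of-sight offsets with
    alpha < 90 deg, or same-signed offsets with alpha > 90 deg."""
    alpha = opening_angle(p1, p2)
    opposite = (v1z > 0) != (v2z > 0)
    return (opposite and alpha < 90.0) or (not opposite and alpha > 90.0)
```

For example, an edge-on pair at opposition with opposite-signed velocities (satellites at $(\pm100, 0, 0)$~kpc with $v_z = \pm100~{\rm km \, s}^{-1}$) has $\alpha = 0^{\circ}$ and is classified as co-rotating.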
\begin{figure*} \begin{center} \includegraphics[width=4.25in]{discs.png} \caption{The fraction of co-rotating satellite pairs as a function of opening angle ($\alpha$) for various versions of the disc model, including the model based on the M31 plane, in comparison to the observed co-rotating fraction measured in the SDSS (cyan line). The observed kinematics of bright satellites in the SDSS are consistent with at most $10\%$ of hosts having disc-like or planar satellite distributions, with the remainder being distributed isotropically.} \label{fig:discs} \end{center} \end{figure*} We find strong disagreement between the fraction of co-rotating satellite pairs as a function of opening angle for models in which $25\%$ or more of hosts harbor planes of satellites in comparison to that for the observed SDSS sample. In particular, the presence of inclined planes (relative to the line-of-sight) in the toy models results in a significant overabundance of co-rotating pairs at $20^{\circ}\lesssim\alpha\lesssim60^{\circ}$ in comparison to the SDSS observations. Overall, the SDSS data agree reasonably well with models where at most $\sim10\%$ of hosts have satellites residing in planes (or $\sim90-100\%$ of the hosts have satellites distributed isotropically in phase space); however, the $100\%$ isotropic model does fail to reproduce the overabundance of co-rotating pairs at very small opening angles ($\alpha \lesssim 10^{\circ}$), as measured in the SDSS sample. As shown in Fig.~\ref{fig:discs}, the M31 sample follows the $50\%$ disc + $50\%$ isotropic sample very closely, as they fundamentally represent the same satellite arrangement (with the caveat that the positions and velocities of the satellites in the M31 model are tailored to match the observed positions of the true M31 satellites). 
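The $p$-values in Table~\ref{tab:chis} can be cross-checked from the tabulated $\chi^2$ values alone: the ratio $\chi^2/\tilde{\chi}^2$ implies 17 degrees of freedom for the isotropic and M31-based samples, 16 for the disc samples, and 15 for the dumbbell samples, consistent with the counting described above. A quick sketch of this check, using the Wilson--Hilferty approximation to the $\chi^2$ tail (our choice of approximation, adequate at this precision; the inferred degrees of freedom are our own reading of the table):

```python
import math

def chi2_sf(x, k):
    """Upper-tail probability P(chi^2_k > x) via the Wilson-Hilferty
    cube-root normal approximation (stdlib-only sketch)."""
    mu = 1.0 - 2.0 / (9.0 * k)
    sigma = math.sqrt(2.0 / (9.0 * k))
    z = ((x / k) ** (1.0 / 3.0) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Entries from Table 1, with dof inferred from chi2 / reduced-chi2:
p_iso      = chi2_sf(11.40, 17)   # 100% isotropic: ~0.83
p_disc10   = chi2_sf(22.44, 16)   # 10% disc + 90% isotropic: ~0.13
p_dumbbell = chi2_sf(8.70, 15)    # 50% dumbbell + 50% isotropic: ~0.89
```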
While our results suggest that coherently rotating discs of bright satellites are not common, the objection could be raised that the velocity selection criteria used to select the SDSS systems systematically remove inclined satellite planes (i.e.~those systems with satellite pairs at opening angles of $10^{\circ} \lesssim \alpha \lesssim 170^{\circ}$); we address this possible selection effect in \S \ref{sec:velocity}. \subsection{Velocity Modeling and Cuts} \label{sec:velocity} As highlighted in \S\ref{sec:disc_models}, those toy models in which a high fraction of host galaxies harbor satellite planes are disfavored in part due to the lack of excess co-rotating satellite pairs at intermediate opening angles (i.e.~$20^{\circ} \lesssim \alpha \lesssim 60^{\circ}$), corresponding to satellites in discs at non-zero inclination angles. If our sample selection criteria, in particular our velocity cut, are biasing us strongly against such systems, we could perhaps reconcile the apparent discrepancies between the disc models and the SDSS data. In a simplified test case where each satellite orbits its host in a disc at a velocity of $V_0$, imposing a velocity cut of exactly $V_0$ on the host-satellite velocity offset would retain only perfectly edge-on discs, leading to a signal much like the one observed at small opening angles. Relaxing this velocity cut would permit progressively more face-on discs, and imposing no velocity cut would in principle permit any disc inclination angle. In constructing our model, we assigned a characteristic velocity of $100~{\rm km \, s}^{-1}$ to the satellites and only selected those satellites that have a 1D velocity offset (relative to their host) greater than $\sqrt{2} \times 25~{\rm km \, s}^{-1}$. 
Since the toy model is essentially scale-free, this is equivalent to removing satellites that have a 1D velocity offset less than $\sqrt{2} \times 25\%$ of the characteristic 3D velocity for satellites of the host --- i.e.~a velocity threshold of $0.35~V_{0}$ is applied, where $V_0$ is the characteristic 3D velocity of the satellites. The higher this velocity cut, the more we would expect a contribution to the co-rotating fraction at intermediate $\alpha$ from inclined discs to be suppressed, as such systems would have a significant component of their velocity moving perpendicular to the line-of-sight. \begin{figure} \begin{center} \includegraphics[width=0.96\columnwidth]{velocity_cuts.png} \caption{The fraction of co-rotating satellite pairs in the SDSS satellite sample (cyan line) and in the $25\%$ disc + $75\%$ isotropic model (red lines), varying the value of the velocity selection limit applied to the models. The dotted, dashed, and solid red lines correspond to models in which only systems with $\Delta V_{los}>0.3$, $0.6$, $0.9~V_{0}$ are included, respectively, and where $V_{0}$ denotes the 3D characteristic velocity of satellites in the plane/disc. The dramatic overabundance of co-rotating pairs at small $\alpha$ in the SDSS data is only captured by models that contain nearly exclusively edge-on discs (i.e.~applying the $\Delta V_{los}>0.9~V_{0}$ cut).} \label{fig:cuts} \end{center} \end{figure} In Figure~\ref{fig:cuts}, we demonstrate the impact of altering our velocity selection criterion on the measured co-rotation fraction versus opening angle for the $10\%$ disc + $90\%$ isotropic model. The various red lines range from selection limits of $30\%$ of the 3D velocity to $90\%$ of the 3D velocity, with the SDSS data included for comparison (cyan line). 
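The equivalence quoted above is simple arithmetic, which can be made explicit (velocities in ${\rm km \, s}^{-1}$; a minimal check of our own, not code from the analysis):

```python
import math

V0 = 100.0                       # characteristic 3D satellite speed in the toy models (km/s)
floor = math.sqrt(2.0) * 25.0    # observational 1D velocity floor, ~35.4 km/s

# The floor as a fraction of the characteristic 3D velocity:
frac = floor / V0                # ~0.354, i.e. the 0.35 V0 threshold quoted above

# Conversely, for this floor to act as a 0.9 V0 cut (retaining only
# nearly edge-on discs), the characteristic velocity would have to be:
V0_needed = floor / 0.9          # ~39.3 km/s, i.e. roughly 40 km/s
```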
Reproducing the lack of excess co-rotation at $20^{\circ}\lesssim\alpha\lesssim 60^{\circ}$ --- or similarly the sharpness of the increase in co-rotating satellite pairs at very small $\alpha$ --- requires a very high velocity cut (i.e.~$\sim 0.9 V_0$). Given that our adopted velocity limit is $\Delta V_{los} >\sqrt{2} \times 25~{\rm km \, s}^{-1}$, such a strong selection bias would require the characteristic velocity ($V_{0}$) of satellites in planes to be $\sim40~{\rm km \, s}^{-1}$. If the planar satellites have such low velocities, a velocity cut of $\sqrt{2} \times 25~{\rm km \, s}^{-1}$ would indeed correspond to $0.9$ times the characteristic velocity of the plane members, and we could confidently state that we had removed all but edge-on planes. The ``toy model'' adopted by I14 as a comparison to their measurements of the co-rotating satellite fraction in the SDSS utilizes exactly this characteristic velocity ($40~{\rm km \, s}^{-1}$); as a result, their toy model only includes edge-on (or nearly edge-on) discs, thereby reproducing the overabundance of co-rotating satellite pairs at small opening angles (see their Fig.~1b). Assuming a 3D characteristic velocity of $40~{\rm km \, s}^{-1}$ for satellites in planes, however, is inconsistent with the broad expectations from subhalo kinematics in $N$-body simulations as well as the observed line-of-sight velocities of satellites in the M31 plane, which have a median value of $|\Delta V_{los}| \sim 92~{\rm km \, s}^{-1}$. Moreover, such a low characteristic velocity directly conflicts with the measured line-of-sight velocity offsets of the co-rotating satellite pairs identified by I14 (see their Table~1) as well as those in our sample. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{dvs.png} \caption{The line-of-sight velocity offset for each satellite in the SDSS sample relative to its host as a function of its projected distance from the host (red circles). 
Those satellites corresponding to co-rotating pairs with opening angles of $\alpha<10^{\circ}$ are highlighted as cyan squares. Given that most satellites in the sample (especially those that are co-rotating and at small opening angles) have velocity offsets greater than the lower selection limit, we conclude that our observational sample is not strongly biased against inclined planes (or discs).} \label{fig:dvs} \end{center} \end{figure} Figure~\ref{fig:dvs} shows the host-satellite (line-of-sight) velocity offset plotted against host-satellite projected distance for each satellite in our SDSS sample, highlighting those systems belonging to pairs with $\alpha<10^{\circ}$. The mean absolute value of the 1D (line-of-sight) velocities for the $44$ satellites in co-rotating pairs with $\alpha < 10^{\circ}$ is $109.4~{\rm km \, s}^{-1}$, consistent with that for the overall sample ($111.4~{\rm km \, s}^{-1}$). This indicates that a velocity cut of $\sqrt{2} \times 25~{\rm km \, s}^{-1}$ would be insufficient to select only edge-on planes, such that we should also detect co-rotation from planes at moderate inclination angles (i.e.~yielding an elevated co-rotating fraction at $10^{\circ} < \alpha < 60^{\circ}$). As a result, we argue that the observational data are inconsistent with co-rotating planes being ubiquitous in the local Universe --- at least with respect to satellites brighter than $M_{r} = -16$. \subsection{Dumbbell model} Recognizing that the observed variation in co-rotating satellite fraction with opening angle is strongly inconsistent with discs or planes of satellites being prevalent, we now discuss a potential alternative scenario: the dumbbell model (see \S\ref{sec:models}). 
Figure~\ref{fig:bells} shows the co-rotating satellite fraction as a function of opening angle for the SDSS satellite sample in comparison to two formulations of the dumbbell model, one with $50\%$ dumbbells (and $50\%$ isotropic satellites) and one with $10\%$ dumbbells (and $90\%$ isotropic satellites). Both models are in relatively good agreement with the observational data. In particular, the dumbbell model is able to reproduce the observed overabundance of co-rotating pairs at small opening angles and the corresponding sharp decrease at $\alpha\sim10^{\circ}$. In \S\ref{sec:discuss}, we address the physical motivation (or lack thereof) for dumbbell-like satellite configurations. For now, we note that the dumbbell model yields a better fit to the observed kinematic data than the disc model. The $p$-value for the toy model with a $10\%$ contribution from dumbbells is $0.948$, meaning we fail to reject it at any confidence level. We also fail, at the $90\%$ confidence level, to reject the $50\%$ dumbbell model ($p = 0.152$). While we similarly fail to reject the $10\%$ disc model at $90\%$ confidence ($p = 0.223$), we reject all other disc models at $>99\%$ confidence (see Table~\ref{tab:chis}). \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{dumbbells.png} \caption{The fraction of co-rotating satellite pairs as a function of opening angle $\alpha$ for the observed SDSS sample (cyan line) and various simulated dumbbell samples (green, magenta, and black lines). 
While unlikely to be physically accurate, a model in which a small fraction ($\sim10\%$) of massive galaxies host satellites in dumbbell-like distributions is able to reproduce the overabundance of co-rotating satellite pairs at small opening angles.} \label{fig:bells} \end{center} \end{figure} \section{Discussion} \label{sec:discuss} Our analysis is motivated by the work of I14, which found an increase in the incidence of co-rotation in satellite pairs very near diametric opposition with respect to their host. If this excess co-rotation is the result of physical processes, it would tell us something significant about the co-evolution of satellite systems and the behavior of dark matter on small scales -- perhaps indicating a serious problem with the $\Lambda$CDM model. On the other hand, we must consider the possibility that the co-rotation signal of I14 is not robust and is the product of random chance. In this section, we weigh these competing possibilities. In selecting our sample, we adhere closely to the selection criteria of I14; however, we did not reproduce the I14 satellite sample exactly. Of the host systems listed in Table~1 of I14, two galaxies fail our selection criteria. In one of these cases, the host magnitude fell just outside of our selection window, while the other host had a neighbor of nearly equal, but slightly brighter magnitude ($\Delta M_{r} = 0.01$). In addition, our selection includes several systems not identified by I14, though we do not expect these differences between our samples to bias our results in any meaningful way. Of the $22$ satellite pairs with $\alpha<8^{\circ}$ in I14, $20$ ($91\%$) appear in our sample. Moreover, we reproduce the principal result from I14, finding an excess of co-rotating satellite pairs located on diametrically-opposed sides of their host galaxy (i.e.~at small opening angles). 
At $\alpha < 8^{\circ}$, we find $21$ out of $27$ satellite pairs to be co-rotating, corresponding to a co-rotating fraction of $0.78 \pm 0.08$. However, when examining the kinematics of the entire population of satellite pairs (i.e.~at all opening angles), the dependence of co-rotating fraction on $\alpha$ is inconsistent with the expectations from simple models of rotating discs (or planes) of satellites. For example, at large $\alpha$, we would naively expect an overabundance of co-rotating satellites mirroring that detected at small opening angles. \cite{ibata14c} argue that caution must be taken when considering satellites on the same side of the host, as relative motion of the satellites with respect to each other (e.g.~in orbital binary pairs) could dominate the motion of a satellite around the host, particularly when viewed in projection. As a result, when examining satellite pairs with large opening angles, \cite{ibata14c} require that the brighter of the satellite pair be at least two magnitudes fainter than the host to minimize the impact of infalling sub-groups; to mitigate deblending problems, identified pairs closer than $25$~kpc in separation are also excluded. With these cuts in place, the I14 sample contains $15$ pairs of satellites with $\alpha>172^{\circ}$, $10$ of which are co-rotating \citep{ibata14c}. Applying identical cuts to our sample, we find $13$ out of $21$ satellite pairs co-rotating over the same range in $\alpha$, while removing these additional selection criteria yields $44$ pairs of satellites, $23$ of which are co-rotating. In each case, the satellite sample at large $\alpha$ is divided nearly evenly between co-rotating and counter-rotating pairs, and the co-rotating excess at large $\alpha$ is significantly less substantial than that at small $\alpha$. This agrees well with the work of \cite{cautun15}, which also finds no evidence of excess co-rotation in satellite pairs located on the same side of their host. 
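The counts quoted above can also be assessed with simple binomial statistics; the following quick check (our own, one-sided, and ignoring any look-elsewhere effect) contrasts the small-$\alpha$ excess with the large-$\alpha$ counts:

```python
import math

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k co-rotating
    pairs out of n under an isotropic (50/50) velocity distribution."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Small-alpha excess: 21 of 27 pairs co-rotating at alpha < 8 deg
frac = 21 / 27                             # ~0.78
err = math.sqrt(frac * (1 - frac) / 27)    # binomial error, ~0.08
p_small = binom_tail(27, 21)               # ~0.003 under isotropy

# Large-alpha counts after the I14 cuts: 13 of 21 pairs at alpha > 172 deg
p_large = binom_tail(21, 13)               # ~0.19, consistent with no excess
```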
The differences between the expected kinematics of satellites belonging to discs (or planes) and those observed in the SDSS satellite sample extend far beyond the large $\alpha$ regime. If the observed overabundance of co-rotating satellite pairs at small $\alpha$ is associated with a large population of satellite planes, then we would expect planes at non-zero inclination angles to contribute co-rotating pairs at intermediate opening angles to the observed sample, resulting in a gradual decrease in the co-rotating fraction at $0^{\circ} < \alpha < 90^{\circ}$ (see Fig.~\ref{fig:discs}). Instead, what is observed is a precipitous drop in the co-rotating fraction, with $21$ out of $27$ ($78 \pm 8.0\%$) satellite pairs found to be co-rotating at $\alpha < 8^{\circ}$, but only $27$ out of $63$ pairs ($44 \pm 6.2\%$) co-rotating over $8^{\circ} < \alpha < 28^{\circ}$. Moreover, the dependence of co-rotating fraction on $\alpha$ is quite noisy (see Fig.~\ref{fig:full}), such that excesses in the co-rotating fraction -- comparable to that found at $\alpha < 10^{\circ}$ -- are detected at other relatively narrow ranges of opening angle. For example, $20$ out of the $31$ satellite pairs with $70^{\circ} < \alpha < 80^{\circ}$ are co-rotating, which (if significant) is difficult to reconcile with the predicted kinematics of satellites in planes. The discrepancies between the data and the disc models are borne out in the statistical analysis, with models involving more than $10\%$ contribution from discs (or planes) strongly disfavored (see Table 1). This is in good agreement with the predictions of \cite{cautun15}, which argue that only $\sim15\%$ of bright satellites should share a coherent sense of rotation to within $25^{\circ}$, as would be required for planar co-rotation. Our analysis strongly disfavors the claim that planar configurations of satellites are ubiquitous over the magnitude range ($M_{r}<-16$) considered.
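The quoted uncertainties are consistent with simple binomial counting errors. The following sketch (an illustrative aside, not part of the original analysis; the function name is our own and the standard error $\sigma=\sqrt{p(1-p)/N}$ is an assumed estimator) reproduces the fractions quoted above:

```python
# Sketch: reproduce the quoted co-rotating fractions and their binomial
# uncertainties, sigma = sqrt(p*(1-p)/N); the pair counts are from the text.
import math

def corotating_fraction(n_co, n_tot):
    """Return the co-rotating fraction and its binomial standard error."""
    p = n_co / n_tot
    return p, math.sqrt(p * (1.0 - p) / n_tot)

# 21 of 27 pairs at alpha < 8 deg; 27 of 63 pairs at 8 deg < alpha < 28 deg.
p_small, err_small = corotating_fraction(21, 27)
p_mid, err_mid = corotating_fraction(27, 63)
print(f"alpha < 8 deg:       {p_small:.2f} +/- {err_small:.2f}")
print(f"8 < alpha < 28 deg:  {p_mid:.3f} +/- {err_mid:.3f}")
```

The two printed values match the $78 \pm 8.0\%$ and $44 \pm 6.2\%$ fractions quoted above.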
While the disc model does a relatively poor job of replicating the observational data, our toy model with satellites arranged in dumbbell configurations provides a better fit to the observed satellite kinematics. By construction, the satellites in the dumbbell model are located at opposition, such that they are able to replicate the sharpness of the increase in co-rotating fraction at small $\alpha$ while being consistent with no excess co-rotation at larger opening angles. The physicality of the dumbbell model is questionable, however, as it is difficult to imagine a scenario that involves satellites co-rotating in such a highly-constrained configuration. Additionally, the satellites belonging to M31's plane are not in a dumbbell configuration. For these reasons, we conclude that the data are not likely to be the result of dumbbell configurations either. In the absence of viable alternative models, we argue that the excess of co-rotating satellite pairs at small $\alpha$ is very likely the result of random noise. Among the disc models, only the model with a $10\%$ contribution from satellite discs is as good a fit to the SDSS data as the purely isotropic model. Moreover, 25\% of random realizations of an isotropic model comprised of $400$ hosts (i.e.~comparable in number to the observed SDSS sample) yield excess co- and counter-rotating pairs of satellites comparable in significance to that measured in the SDSS sample, although just $\sim1-2\%$ show an overabundance of co-rotating pairs at small opening angles ($\alpha < 20^{\circ}$). These results should not be taken to say that there are no planes of satellites in the Universe. Statistically, the observational data are roughly consistent with $\lesssim10\%$ of isolated, massive galaxies playing host to planes, or more specifically planes with multiple luminous satellites.
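For intuition on the scale of the small-$\alpha$ excess, a naive significance estimate can be sketched as a binomial tail probability. This is our own illustrative assumption, not the paper's analysis: it treats each pair as independently co- or counter-rotating with probability $1/2$ and applies no look-elsewhere correction.

```python
# Back-of-the-envelope sketch (illustrative assumption, not the full
# random-realization test): probability of >= 21 co-rotating pairs out of 27
# under a fair-coin null for each pair.
from math import comb

def binomial_tail(k, n):
    """P(X >= k) for X ~ Binomial(n, p = 1/2)."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

p_tail = binomial_tail(21, 27)
print(f"P(>=21 of 27 co-rotating | naive null) = {p_tail:.4f}")  # 0.0030
```

That this naive figure ($\sim0.3\%$) is smaller than the $\sim1$--$2\%$ rate found in the random realizations plausibly reflects the additional freedom (range of hosts and opening-angle windows) available to the full analysis.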
Our result does not exclude the possibility that planar structures preferentially populated by faint satellites could be ubiquitous; if true, this could provide powerful insight into the formation of planar satellite structures. With the caveat that all of the satellites under our consideration here are brighter than the satellites belonging to the M31 plane, we exclude the possibility of ubiquitous planes analogous to the M31 plane at a very high confidence level. \section{Conclusions} In this work, we investigate the ubiquity of co-rotating planes of satellites similar to that observed around M31. Using data drawn from the SDSS, we study the orientation and kinematics of bright ($M_{r} < -16$) satellite pairs around isolated galaxies, selected to be roughly analogous to the Milky Way and M31. By comparing the fraction of co-rotating satellite pairs as a function of opening angle to the predictions of simple models, we investigate the signatures of coherent rotation arising from satellites arranged in planar structures. Our findings are as follows: \begin{itemize} \item We confirm an excess of co-rotating pairs of satellites at opening angles of $\alpha<10^{\circ}$ --- i.e.~satellite pairs located on diametrically-opposed sides of their host (see Fig.~\ref{fig:zoom}). This overabundance of co-rotating pairs at small opening angles, as first identified by I14, has been cited as evidence that $50\%$ of satellites belong to coherently-rotating planes \citep{ibata14c}. \vspace*{0.07in} \item We find that the excess of co-rotating pairs at small opening angles is unlikely to be due to ubiquitous co-rotating planes of satellites. While the behavior at small $\alpha$ is suggestive of planes aligned along the line-of-sight (i.e.~with an orbital inclination of zero), the signal is strongly inconsistent with mock observations of satellite galaxy planes (or discs).
In particular, we find no contribution to the co-rotating fraction from planes inclined relative to the line-of-sight (i.e.~with satellites configured at opening angles of $10^{\circ} \lesssim \alpha \lesssim 60^{\circ}$). For our sample of isolated systems, we find that at most $\sim10\%$ of hosts harbor satellite planes -- as traced by the luminous (LMC-like) satellite population. \vspace*{0.07in} \item The excess of co-rotating pairs of satellites at small $\alpha$ is better fit by a ``dumbbell'' model, where satellites have co-rotating partners located opposite the host galaxy (but not a true plane). However, this model is very likely unphysical. \vspace*{0.07in} \item We do not rule out the possibility that $\sim10\%$ of hosts have co-rotating planes of satellites, or correspondingly that $\sim30\%$ of satellites reside in such planes. This case is similar enough to the isotropic case that the observed SDSS co-rotating fraction as a function of opening angle is roughly consistent with $\lesssim10\%$ of massive host galaxies harboring planes of satellites. \end{itemize} \section{Acknowledgements} We thank Rodrigo Ibata, Manoj Kaplinghat, and Kev Abazajian for helpful discussions regarding this work. JSB was supported by NSF grants AST-1009973 and AST-1009999. Support for this work was provided by NASA through a Hubble Space Telescope theory grant (program AR-12836) from the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS5-26555. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy13}. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S.
Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\section{Introduction} Quantum computation provided the inspiration for holographic algorithms \cite{valiant_holographic_2008}, which in turn inspired the holant framework for computational counting problems (first introduced in the conference version of \cite{cai_complexity_2014}). Computational counting problems include a wide variety of problems, from combinatorial problems defined on graphs to the problems of computing partition functions in statistical physics and computing amplitudes in quantum computation. They are analysed in different frameworks, including that of counting constraint satisfaction problems (counting CSPs) and that of holant problems. Computational counting problems are an area of active research, yet so far there appear to have been no attempts to apply knowledge from quantum information theory or quantum computation to their analysis. Nevertheless, as we show in the following, quantum information theory, and particularly the theory of quantum entanglement, offers promising new avenues of research into holant problems. A holant problem is parameterised by a set of functions $\mathcal{F}$; in this paper we consider finite sets of algebraic complex-valued functions of Boolean inputs. The restriction to finite sets follows standard practice in the counting CSP community. We use it to avoid issues around efficient computability that arise when allowing problems to be parameterised by infinite sets of functions. In the following, the set of all algebraic complex-valued functions of Boolean inputs is denoted $\Upsilon$. We also write $\Upsilon_n:=\{f\in\Upsilon\mid\ari(f)=n\}$ for the restriction of $\Upsilon$ to functions of arity $n$. An instance of the problem $\mathsf{Holant}(\mathcal{F})$ consists of a multigraph $G=(V,E)$ with vertices $V$ and edges $E$, and a map $\pi$. This map assigns to each vertex $v\in V$ a function $\pi(v)=f_v\in\mathcal{F}$.
The map also sets up a bijection between the edges incident on $v$ and the arguments of $f_v$, so the degree of $v$ must equal the arity of $f_v$. Given the map $\pi$, any assignment $\sigma:E\to\{0,1\}$ of Boolean values to edges induces a weight \[ \wt(\sigma) := \prod_{v\in V} f_v(\sigma|_{E(v)}), \] where $\sigma|_{E(v)}$ is the restriction of $\sigma$ to the edges incident on $v$. The desired output of the holant problem is the total weight $\sum_{\sigma:E\to\{0,1\}}\wt(\sigma)$, where the sum is over all assignments $\sigma$. Formally, we define the problem $\mathsf{Holant}(\mathcal{F})$ as follows. \begin{description}[noitemsep] \item[Name] $\mathsf{Holant}(\mathcal{F})$ \item[Instance] A tuple $(G,\mathcal{F},\pi)$. \item[Output] $\Holant_\Omega = \sum_{\sigma: E\to\{0,1\}} \prod_{v\in V}f_v(\sigma|_{E(v)})$. \end{description} For example, let $N\in\mathbb{N}_{>0}$, and define $\mathcal{M}_N:=\{\mathrm{ONE}_n \mid 1\leq n\leq N\}$, where \[ \mathrm{ONE}_n(\vc{x}{n}) = \begin{cases} 1 &\text{if } \sum_{k=1}^n x_k=1 \\ 0 &\text{otherwise.} \end{cases} \] Then $\mathsf{Holant}(\mathcal{M}_N)$ corresponds to counting the number of perfect matchings on graphs of maximum degree $N$. To see this, consider some graph $G=(V,E)$ with maximum degree $N$. The map $\pi$ must assign to each vertex a function of the appropriate arity; since all functions in $\mathcal{M}_N$ are invariant under permutations of their arguments, $\pi$ is fully determined by this requirement. Consider an assignment $\sigma: E\to\{0,1\}$; this corresponds to a subset of edges $E_\sigma:=\{e\in E\mid \sigma(e)=1\}$. Note that $\wt(\sigma)\in\{0,1\}$ for all $\sigma$, since each weight is a product of functions taking values in the set $\{0,1\}$. Now suppose $\wt(\sigma)=1$; then for each $v\in V$, $\sigma$ assigns the value 1 to exactly one edge in $E(v)$. Thus, $E_\sigma$ is a perfect matching. Conversely, if $\wt(\sigma)=0$ then there exists a vertex $v$ for which $f_v(\sigma|_{E(v)})=0$.
This implies $\sum_{e\in E(v)}\sigma(e)\neq 1$, i.e.\ either 0 or at least 2 of the edges incident on $v$ are assigned 1 by $\sigma$. Thus, $E_\sigma$ contains either no edges incident on $v$, or at least 2 edges incident on $v$, so $E_\sigma$ is not a perfect matching. We have shown that $\wt(\sigma)=1$ if $\sigma$ corresponds to a perfect matching and $\wt(\sigma)=0$ otherwise. The correspondence between assignments $\sigma:E\to\{0,1\}$ and subsets $E_\sigma\sse E$ is a bijection. Therefore, the total weight -- and hence the output of $\mathsf{Holant}(\mathcal{F})$ -- is exactly the number of perfect matchings of $G$. Sometimes it is interesting to restrict holant problems to instances defined on planar graphs. In that case, the correspondence between the edges incident on a vertex $v$ and the arguments of its assigned function $f_v$ must respect the ordering of the edges in the given planar embedding of the graph and the ordering of the arguments of the function. This means that choosing an edge to correspond to the first argument of $f_v$ fixes the bijection between edges and arguments for all the remaining edges and arguments. Given a finite set of functions $\mathcal{F}$, the problem of computing holant values on planar graphs is denoted $\mathsf{Pl\text{-}Holant}(\mathcal{F})$. For example, $\mathsf{Pl\text{-}Holant}(\mathcal{M}_N)$ is the problem of counting perfect matchings on planar graphs of maximum degree $N$. The complexity of holant problems has not been fully classified yet, but there exist a number of classifications for subfamilies of holant problems. These classifications restrict the family of problems in one or more of the following ways: \begin{itemize} \item by considering only symmetric functions, i.e.\ functions that are invariant under any permutation of the arguments (these functions depend only on the Hamming weight of their input), or \item by restricting the codomain, e.g. 
to algebraic real numbers or even non-negative real numbers, or \item by considering only sets of functions that contain some specified subset of functions, such as arbitrary unary functions or the two `pinning functions' $\dl_0(x)=1-x$ and $\dl_1(x)=x$. \end{itemize} Known classifications include a full dichotomy for holant problems where all unary functions are assumed to be available \cite{cai_dichotomy_2011}, and a classification for $\holp[c]{\mathcal{F}}:=\holp{\mathcal{F}\cup\{\dl_0,\dl_1\}}$ with symmetric functions \cite{cai_holant_2012}. There is also a dichotomy for real-valued $\mathsf{Holant}^c$, where functions need not be symmetric but must take values in $\mathbb{R}$ instead of $\mathbb{C}$ \cite{cai_dichotomy_2017}. Both existing results about $\mathsf{Holant}^c$ are proved via dichotomies for counting CSPs with complex-valued, not necessarily symmetric functions (see Section~\ref{s:csp} for a formal definition): in the first case, a dichotomy for general $\#\mathsf{CSP}$ \cite{cai_complexity_2014}, and in the second case, a dichotomy for $\#\mathsf{CSP}_2^c$, a subfamily of $\#\mathsf{CSP}$ in which each variable must appear an even number of times and variables can be pinned to 0 or 1 \cite{cai_dichotomy_2017}. The only broad classifications not assuming availability of certain functions are the dichotomy for symmetric holant \cite{cai_complete_2013} and the dichotomy for non-negative real-valued holant \cite{lin_complexity_2016}. The problem of computing amplitudes in quantum computation can also be expressed as a holant problem. This makes certain holant problems examples of \emph{(strong) classical simulation of quantum computations}, an area of active research in the theory of quantum computation. If a quantum computation can be simulated efficiently on a classical computer, the quantum computer yields no advantage.
If, on the other hand, the complexity of classically simulating a certain family of quantum computations can be shown to satisfy some hardness condition, this provides evidence that quantum computers are indeed more powerful than classical ones. Indeed, while the origin of holant problems is not related to the notion of classical simulation of quantum computations, quantum computing did provide the inspiration for its origins \cite{valiant_holographic_2008,cai_complexity_2014}. Nevertheless, so far there have been no attempts to apply knowledge from quantum information theory or quantum computation to the analysis of holant problems. Yet, as we show in the following, quantum information theory, and particularly the theory of quantum entanglement, offers promising new avenues of research into holant problems. The complexity of $\holp{\mathcal{F}}$ may depend on \emph{decomposability} properties of the functions in $\mathcal{F}$. A function $f\in\Upsilon_n$ is considered to be decomposable if there exists a permutation $\rho:[n]\to [n]$, an integer $k$ satisfying {$1\leq k<n$}, and functions $f_1,f_2$ such that \[ f(\vc{x}{n}) = f_1(x_{\rho(1)},\ldots, x_{\rho(k)})f_2(x_{\rho(k+1)},\ldots, x_{\rho(n)}). \] For example, the constant function $f(x_1,x_2)=1$ is equal to $f_1(x_1)f_2(x_2)$, where $f_1(x)=f_2(x)=1$. Not all functions are decomposable: there are no functions $g_1,g_2\in\Upsilon_1$ such that $\mathrm{EQ}_2(x_1,x_2)=g_1(x_1)g_2(x_2)$. The functions in $\Upsilon_n$ are in bijection with the algebraic complex-valued vectors of length $2^n$ by mapping each function to the list of its values, or conversely. Under this map, the notion of a function being non-decomposable corresponds exactly to the quantum-theoretical notion of a vector being \emph{genuinely entangled}. We therefore draw on the large body of research about the properties of entangled vectors from quantum theory, and apply it to the complexity classification of holant problems.
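The non-decomposability of $\mathrm{EQ}_2$ can be checked numerically: under the function--vector correspondence just described, a binary function factors into two unary functions exactly when its $2\times2$ matrix of values has rank at most $1$ (a standard linear-algebra fact; the representation choices below are our own illustrative sketch, not notation from the text).

```python
# Sketch (assumption: represent each binary function by its 2x2 value matrix,
# rows indexed by x1 and columns by x2): f(x1,x2) = g1(x1)*g2(x2) for some
# unary g1, g2 exactly when this matrix has rank at most 1.
import numpy as np

const1 = np.array([[1, 1], [1, 1]])  # f(x1,x2) = 1, decomposable
eq2 = np.array([[1, 0], [0, 1]])     # EQ_2, not decomposable

print(np.linalg.matrix_rank(const1))  # 1 -> factors into unary functions
print(np.linalg.matrix_rank(eq2))     # 2 -> no such factorisation exists
```

The same rank criterion, applied to reshapings across every bipartition of the arguments, detects decomposability for higher arities as well.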
In the process, we also derive a new result about entanglement previously unknown in the quantum theory literature; this is Theorem~\ref{thm:three-qubit-entanglement}. For relating the complexity of two computational problems, the main technique used in this paper is that of \emph{gadget reductions}. An $n$-ary gadget over some set of functions $\mathcal{F}$ is a fragment of a signature grid using functions from $\mathcal{F}$, which is connected to the rest of the graph by $n$ external edges. This signature grid is assigned an effective function in $\Upsilon_n$ by treating the external edges as variables and summing over all possible assignments of Boolean values to the internal edges. If the function $g$ is the effective function of some gadget over $\mathcal{F}$, then we say $g$ is \emph{realisable} over $\mathcal{F}$. Allowing functions realisable over $\mathcal{F}$ in addition to the functions in $\mathcal{F}$ does not affect the complexity of computing the holant \cite{cai_dichotomy_2011}. Now, the problem $\mathsf{Holant}(\mathcal{F})$ is known to be polynomial-time computable if $\mathcal{F}$ contains only functions that decompose as products of unary and binary functions \cite[Theorem~2.1]{cai_dichotomy_2011}. For $\mathsf{Holant}(\mathcal{F})$ to be \#\textsf{P}-hard, $\mathcal{F}$ must thus contain a function that does not decompose in this way. Borrowing terminology from quantum theory, we say functions that do not decompose as products of unary and binary functions have \emph{multipartite entanglement}. By applying results from quantum theory, we show how to build a gadget for a non-decomposable ternary function using an arbitrary function with multipartite entanglement and the four unary functions \[ \dl_0(x) = 1-x, \qquad \dl_1(x) = x, \qquad \dl_+(x) = 1, \quad\text{and}\quad \dl_-(x) = (-1)^x.
\] We further apply entanglement theory to analyse a family of symmetric ternary gadgets, and show that given some non-decomposable ternary function, this family lets us realise a symmetric non-decomposable ternary function. Furthermore, we show how to realise symmetric non-decomposable binary functions with specific properties. Together, these gadgets allow us to classify the complexity of the problem $\Holp[+]{\mathcal{F}}:=\holp{\mathcal{F}\cup\{\dl_0,\dl_1,\dl_+,\dl_-\}}$. \begin{theorem}[Informal statement of Theorem~\ref{thm:holant_plus}] Let $\mathcal{F}\sse\Upsilon$ be finite. Then $\Holp[+]{\mathcal{F}}$ can be computed in polynomial time if $\mathcal{F}$ satisfies one of five explicit conditions. In all other cases, $\Holp[+]{\mathcal{F}}$ is \sP-hard. The same dichotomy holds when the problem is restricted to instances defined on planar graphs. \end{theorem} By combining these entanglement-based techniques with techniques from the real-valued $\mathsf{Holant}^c$ classification in \cite{cai_complexity_2017}, we then develop a complexity classification for $\holp[c]{\mathcal{F}}:=\holp{\mathcal{F}\cup\{\dl_0,\dl_1\}}$. The problem $\mathsf{Holant}^c$ was first described about a decade ago, with a full complexity classification remaining open until now. \begin{theorem}[Informal statement of Theorem~\ref{thm:holant-c}] Let $\mathcal{F}\sse\Upsilon$ be finite. Then $\Holp[c]{\mathcal{F}}$ can be computed in polynomial time if $\mathcal{F}$ satisfies one of six explicit conditions. In all other cases, $\Holp[c]{\mathcal{F}}$ is \sP-hard. \end{theorem} The remainder of this paper is structured as follows. We introduce preliminary definitions in Section~\ref{s:Holant_problem} and give an overview of known holant complexity classifications in Section~\ref{s:existing_results}. In Section~\ref{s:quantum_states}, we introduce relevant notions and results from quantum theory, particularly about entanglement.
The complexity classification for $\mathsf{Holant}^+$ is derived in Section~\ref{s:Holant_plus} and the complexity classification for $\mathsf{Holant}^c$ is derived in Section~\ref{s:dichotomy}. \section{Preliminaries} \label{s:Holant_problem} Holant problems are a framework for computational counting problems on graphs, introduced by Cai \etal{} \cite{cai_complexity_2014}, and based on the theory of holographic algorithms developed by Valiant \cite{valiant_holographic_2008}. Throughout, we consider algebraic complex-valued functions with Boolean inputs. Let $\AA$ be the set of algebraic complex numbers. For any non-negative integer $k$, let $\Upsilon_k$ be the set of functions $\{0,1\}^k\to\AA$, and define $\Upsilon:=\bigcup_{k\in\mathbb{N}} \Upsilon_k$. Let $\mathcal{F}\sse\Upsilon$ be a finite set of functions, and let $G=(V,E)$ be an undirected graph with vertices $V$ and edges $E$. Throughout, graphs are allowed to have parallel edges and self-loops. A \emph{signature grid} is a tuple $\Omega=(G,\mathcal{F},\pi)$ where $\pi$ is a function that assigns to each vertex $v\in V$ of degree $n$ a function $f_v:\{0,1\}^n\to\AA$ in $\mathcal{F}$, specifying which edge incident on $v$ corresponds to which argument of $f_v$. The Holant for a signature grid $\Omega$ is: \begin{equation}\label{eq:Holant_dfn} \Holant_\Omega = \sum_{\sigma:E\to\{0,1\}} \prod_{v\in V} f_v(\sigma|_{E(v)}), \end{equation} where $\sigma$ is an assignment of Boolean values to each edge and $\sigma|_{E(v)}$ is the restriction of $\sigma$ to the edges incident on $v$. \begin{description}[noitemsep] \item[Name] $\mathsf{Holant}(\mathcal{F})$ \item[Instance] A signature grid $\Omega=(G,\mathcal{F},\pi)$. \item[Output] $\Holant_\Omega$. \end{description} \begin{rem} We restrict the definition of holant problems to finite sets of functions, as is common in the counting CSP literature and some of the holant literature.
This is to avoid issues with the representation of the function values, and it is also relevant for some of the results in Section~\ref{s:interreducing_planar}. \end{rem} For any positive integer $n$, let $[n]:=\{1,2,\ldots, n\}$. Suppose $f\in\Upsilon_n$ and suppose $\rho:[n]\to[n]$ is a permutation; then $f_\rho(\vc{x}{n}):=f(x_{\rho(1)},\ldots, x_{\rho(n)})$. A function is called \emph{symmetric} if it depends only on the Hamming weight of the input, {i.e.\ the number of non-zero bits in the input bit string}. An $n$-ary symmetric function is often written as $f = [f_0,f_1,\ldots,f_n]$, with $f_k$, for $0\leq k\leq n$, being the value $f$ takes on inputs of Hamming weight $k$. For any positive integer $k$, define the \emph{equality function} of arity $k$ as $\mathrm{EQ}_k:=[1,0,\ldots,0,1]$, where there are $(k-1)$ zeroes in the expression. Furthermore, define $\mathrm{ONE}_k:=[0,1,0,\ldots,0]$, where there are $(k-1)$ zeroes at the end. Let $\dl_0:=[1,0]$ and $\dl_1:=[0,1]$; these are called the \emph{pinning functions} because they allow `pinning' inputs to 0 or 1, respectively. Finally, define $\dl_+:=[1,1]$ (this is equal to $\mathrm{EQ}_1$), $\dl_-:=[1,-1]$, and $\mathrm{NEQ}:=[0,1,0]$. {We will also occasionally use the unary functions $\dl_i:=[1,i]$ and $\dl_{-i}:=[1,-i]$, where $i$ is the imaginary unit, i.e.\ $i^2=-1$.} An $n$-ary function is called \emph{degenerate} if it is a product of unary functions in the following sense: there exist functions $\vc{u}{n}\in\Upsilon_1$ such that $f(\vc{x}{n}) = u_1(x_1)\ldots u_n(x_n)$. Any function that cannot be expressed as a product of unary functions in this way is called \emph{non-degenerate}. Suppose $f\in\Upsilon_n$, $k$ is an integer satisfying $1\leq k< n$, and $\rho:[n]\to [n]$ is a permutation such that $f_\rho(\vc{x}{n})=g(\vc{x}{k})h(x_{k+1},\ldots, x_n)$ for some functions $g\in\Upsilon_k,h\in\Upsilon_{n-k}$, then $f$ is said to be \emph{decomposable}.
Any way of writing $f$ as a product of two or more functions with disjoint sets of arguments is called a \emph{decomposition} of $f$. {Borrowing terminology from linear algebra (which will be justified later), we say $g$ and $h$ in the above equation are \emph{tensor factors} of $f$.} A function that cannot be decomposed in this way is called \emph{non-decomposable}. Any degenerate function {of arity $n\geq 2$} is decomposable, but not all decomposable functions are degenerate. For example, $f(x_1,x_2,x_3,x_4)=\mathrm{EQ}_2(x_1,x_2)\mathrm{NEQ}(x_3,x_4)$ is decomposable but not degenerate since $f$ cannot be written as a product of unary functions. For any $f\in\Upsilon_n$, the \emph{support} of $f$ is $\supp(f):=\{\mathbf{x}\in\{0,1\}^n\mid f(\mathbf{x})\neq 0\}$. Suppose $f\in\Upsilon_4$, $g\in\Upsilon_3$, and $h\in\Upsilon_2$. As a shorthand, let $f_{xyzw}:=f(x,y,z,w)$, $g_{xyz}:=g(x,y,z)$, and $h_{xy}:=h(x,y)$ for any $x,y,z,w\in\{0,1\}$. We sometimes identify these functions with the following matrices of their values: \[ f = \begin{pmatrix} f_{0000}&f_{0001}&f_{0010}&f_{0011} \\ f_{0100}&f_{0101}&f_{0110}&f_{0111} \\ f_{1000}&f_{1001}&f_{1010}&f_{1011} \\ f_{1100}&f_{1101}&f_{1110}&f_{1111} \end{pmatrix}, \quad g = \begin{pmatrix}g_{000}&g_{001}&g_{010}&g_{011}\\g_{100}&g_{101}&g_{110}&g_{111}\end{pmatrix} \quad\text{and}\quad h = \begin{pmatrix}h_{00}&h_{01}\\h_{10}&h_{11}\end{pmatrix}. \] {Here, for binary functions, matrix rows are labelled by the first input and columns are labelled by the second input. For ternary functions, rows are labelled by the first input and columns by the second and third inputs in lexicographic order. For functions of arity four, rows are labelled by the first two inputs and columns by the last two inputs, again in lexicographic order.} For any pair of counting problems $A$ and $B$, we say $A$ reduces to $B$ and write $A\leq_T B$ if there exists a polynomial-time Turing reduction from problem $A$ to problem $B$.
If $A\leq_T B$ and $B\leq_T A$, we say $A$ and $B$ are \emph{interreducible} and write $A\equiv_T B$. The following result is well-known in the literature; see e.g.~\cite[p.~12]{cai_complexity_2017}. \begin{lemma}\label{lem:scaling} Suppose $\mathcal{F}\sse\Upsilon$ is finite, $g\in\Upsilon$, and $c\in\AA\setminus\{0\}$. Then $\holp{\mathcal{F}\cup\{c\cdot g\}} \equiv_T \holp{\mathcal{F}\cup\{g\}}$. \end{lemma} Given a bipartite graph $G=(V,W,E)$, with vertex partitions $V$ and $W$, we can define a \emph{bipartite signature grid}. Let $\mathcal{F}$ and $\mathcal{G}$ be two finite subsets of $\Upsilon$ and let $\pi:V\cup W\to\mathcal{F}\cup\mathcal{G}$ be a map assigning functions to vertices, with the property that $\pi(v)\in\mathcal{F}$ for all $v\in V$ and $\pi(w)\in\mathcal{G}$ for all $w\in W$. The bipartite signature grid specified in this way is denoted by the tuple $(G,\mathcal{F}|\mathcal{G},\pi)$. The corresponding \emph{bipartite holant problem} is $\holp{\mathcal{F}\mid\mathcal{G}}$. The following reductions relate bipartite and non-bipartite holant problems. \begin{proposition}[{\cite[Proposition~2]{cai_holant_2012}}]\label{prop:make_bipartite} For any finite $\mathcal{F}\sse\Upsilon$, we have \[ \holp{\mathcal{F}}\equiv_T\holp{\mathcal{F}\mid\{\mathrm{EQ}_2\}}. \] \end{proposition} \begin{proposition}[{\cite[Proposition~3]{cai_holant_2012}}]\label{prop:bipartite} Suppose $\mathcal{G}_1,\mathcal{G}_2\sse\Upsilon$ are finite, then \[ \holp{\mathcal{G}_1\cup\{\mathrm{EQ}_2\}\mid\mathcal{G}_2\cup\{\mathrm{EQ}_2\}} \equiv_T \holp{\mathcal{G}_1\cup\mathcal{G}_2}. \] \end{proposition} A signature grid $\Omega=(\mathcal{F},G,\pi)$ is called \emph{planar} if $G$ is a plane graph and, for each $v$, the arguments of $f_v$ are ordered counterclockwise starting from an edge specified by $\pi$ \cite[Section~2.1]{cai_holographic_2017}.
We denote by $\mathsf{Pl\text{-}Holant}(\mathcal{F})$ the problem $\mathsf{Holant}(\mathcal{F})$ restricted to planar signature grids. \subsection{Signature grids in terms of vectors} \label{s:vector_perspective} As noted in \cite{cai_valiants_2006}, any function $f\in\Upsilon_n$ can be considered as a vector in $\AA^{2^n}$, which is the list of values of $f$, indexed by $\{0,1\}^n$. Let $\{\ket{\mathbf{x}}\}_{\mathbf{x}\in\{0,1\}^n}$ be an orthonormal basis\footnote{In using this notation for vectors, called \emph{Dirac notation} and common in quantum computation and quantum information theory, we anticipate the interpretation of the vectors associated with functions as quantum states, cf.\ Section \ref{s:quantum_states}.} for $\AA^{2^n}$. The vector corresponding to the function $f$ is then denoted by $\ket{f} := \sum_{\mathbf{x}\in\{0,1\}^n} f(\mathbf{x})\ket{\mathbf{x}}$. Denote by $\otimes$ the Kronecker product of matrices, which we usually call the \emph{tensor product}. Based on this notion of tensor product, we define \emph{tensor powers} of a matrix $M$ as follows: $M\t{1}:=M$ and $M\t{k+1}=M\t{k}\otimes M$ for any positive integer $k$. The operations of tensor product and tensor power can be extended to vectors by considering them as single-column matrices. Denote by $M^T$ the transpose of the matrix $M$. Suppose $\Omega=(G,\mathcal{F}|\mathcal{G},\pi)$ is a bipartite signature grid, where $G=(V,W,E)$ has vertex partitions $V$ and $W$. Then the holant for $\Omega$ can be written as: \begin{equation}\label{eq:bipartite_Holant_vectors} \Holant_\Omega = \left(\bigotimes_{w\in W} \left(\ket{g_w}\right)^T\right) \left(\bigotimes_{v\in V} \ket{f_v}\right) = \left(\bigotimes_{v\in V} \left(\ket{f_v}\right)^T\right) \left(\bigotimes_{w\in W} \ket{g_w}\right), \end{equation} where the tensor products are assumed to be ordered such that, in each inner product, two components associated with the same edge meet.
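Eq.~(\ref{eq:bipartite_Holant_vectors}) can be sanity-checked on the smallest bipartite grid: one left-hand vertex and one right-hand vertex joined by two parallel edges, where the holant reduces to a single inner product. A minimal sketch (the particular functions below are arbitrary illustrative choices, not taken from the text):

```python
# Illustrative check: for a bipartite grid with one LHS vertex, one RHS
# vertex, and two parallel edges, the brute-force holant equals <g|f>.
from itertools import product

f = [1, 2, -1, 3]  # values of a binary function at inputs 00, 01, 10, 11
g = [2, 0, 1, -2]  # values of a second binary function, same input order

# Brute force: sum over all 0/1 assignments to the two edges.
brute = sum(f[2 * a + b] * g[2 * a + b] for a, b in product((0, 1), repeat=2))

# Vector form: the holant is the inner product of the two value vectors.
inner = sum(x * y for x, y in zip(f, g))

print(brute, inner)  # -5 -5
```

For larger grids the same identity holds, provided the tensor factors are ordered so that the two components attached to each edge meet in the inner product.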
\subsection{Holographic reductions} \emph{Holographic transformations} are the origin of the name `holant problems'. Let $\operatorname{GL}_2(\AA)$ be the set of all invertible 2 by 2 matrices over $\AA$, and let $\mathcal{O}:=\{M\in\operatorname{GL}_2(\AA)\mid M^TM=\smm{1&0\\0&1}\}$ be the set of orthogonal matrices. Suppose $M\in\operatorname{GL}_2(\AA)$, then for any $f\in\Upsilon_0$ let $M\circ f=f$, and for any $f\in\Upsilon_n$ with $n>0$ let $M\circ f$ denote the function corresponding to the vector $M\t{n}\ket{f}$. Furthermore, for any set of functions $\mathcal{F}$, define $M\circ\mathcal{F} := \{ M\circ f \mid f\in\mathcal{F} \}$. \begin{theorem}[Valiant's Holant Theorem \cite{valiant_holographic_2008} as stated in {\cite[Proposition~4]{cai_holant_2012}}]\label{thm:Valiant_Holant} For any $M\in\operatorname{GL}_2(\AA)$ and any finite sets $\mathcal{F},\mathcal{G}\sse\Upsilon$, \[ \Holp{\mathcal{F}\mid\mathcal{G}} \equiv_T \Holp{M\circ\mathcal{F}\mid (M^{-1})^T\circ\mathcal{G}}. \] \end{theorem} \begin{corollary}[{\cite[Proposition~5]{cai_holant_2012}}]\label{cor:orthogonal-holographic} Let $O\in\mathcal{O}$ and let $\mathcal{F}\sse\Upsilon$ be finite, then \[ \Holp{\mathcal{F}} \equiv_T \Holp{O\circ\mathcal{F}}. \] \end{corollary} Going from a set of functions $\mathcal{F}\mid\mathcal{G}$ to $M\circ\mathcal{F}\mid (M^{-1})^T\circ\mathcal{G}$ or from $\mathcal{F}$ to $O\circ\mathcal{F}$ is a \emph{holographic reduction}. \subsection{Gadgets and realisability} A \emph{gadget} over a finite set of functions $\mathcal{F}$ (also called $\mathcal{F}$-gate) is a fragment of a signature grid with some `dangling' edges. Any such gadget can be assigned an effective function $g$. Formally, let $G=(V,E,E')$ be a graph with vertices $V$, (normal) edges $E$, and dangling edges $E'$, where $E\cap E'=\emptyset$. Unlike a normal edge, each dangling edge has only one end incident on a vertex in $V$; the other end is left dangling.
The gadget is determined by a tuple $\Gamma=(\mathcal{F},G,\pi)$, where $\pi:V\to\mathcal{F}$ assigns a function to each vertex $v$ in such a way that each argument of the function corresponds to one of the edges (normal or dangling) incident on $v$. Suppose $E'=\{\vc{e}{n}\}$, then the effective function associated with this gadget is \[ g_\Gamma(\vc{y}{n}) = \sum_{\sigma:E\to\{0,1\}} \prod_{v\in V} f_v(\hat{\sigma}|_{E(v)}), \] where $\hat{\sigma}$ is the extension of $\sigma$ to domain $E\cup E'$ which satisfies $\hat{\sigma}(e_k)=y_k$ for all $k\in[n]$, and $\hat{\sigma}|_{E(v)}$ is the restriction of $\hat{\sigma}$ to edges (both normal and dangling) which are incident on $v$. If $g$ is the effective function of some gadget over $\mathcal{F}$, then $g$ is said to be \emph{realisable over $\mathcal{F}$}. This notion is sometimes extended to say $g$ is realisable over $\mathcal{F}$ if there exists a gadget over $\mathcal{F}$ with effective function $c\cdot g$ for some $c\in\AA\setminus\{0\}$. By Lemma~\ref{lem:scaling}, the extended definition does not affect the validity of the following lemma. \begin{lemma}[{\cite[p.~1717]{cai_dichotomy_2011}}]\label{lem:realisable} Suppose $\mathcal{F}\sse\Upsilon$ is finite and ${g}$ is realisable over $\mathcal{F}$. Then \[ \Holp{\mathcal{F}\cup\{{g}\}} \equiv_T \Holp{\mathcal{F}}. \] \end{lemma} Following \cite{lin_complexity_2016}, we define $S(\mathcal{F}) = \{ {g} \mid {g} \text{ is realisable over } \mathcal{F} \}$ for any set of functions $\mathcal{F}$. Then Lemma~\ref{lem:realisable} implies that, for any finite subset $\mathcal{F}'\sse S(\mathcal{F})$, we have $\Holp{\mathcal{F}'} \leq_T \Holp{\mathcal{F}}$. The following lemma will be useful later. This result is stated e.g.\ in \cite[Lemma~2.1]{cai_dichotomy_2017}, but as it is not proved there, we give a quick proof here. Note this lemma uses the scaled definition of realisability. 
\begin{lemma}\label{lem:decomposable} Suppose $f(\vc{x}{n+m})=g(\vc{x}{n})h(x_{n+1},\ldots, x_{n+m})$, where none of these functions are identically zero. Then $g,h\in S(\{f,\dl_0,\dl_1\})$. \end{lemma} \begin{proof} As $g$ is not identically zero, there exists $\mathbf{a}\in\{0,1\}^n$ such that $g(\mathbf{a})\neq 0$. But then \[ g(\mathbf{a})\,h(\vc{y}{m}) = \sum_{\vc{x}{n}\in\{0,1\}} f(\vc{x}{n},\vc{y}{m}) \prod_{k\in [n]} \dl_{a_k}(x_k). \] The right-hand side is the effective function of some gadget over $\{f,\dl_0,\dl_1\}$, which consists of one copy of $f$ connected to $n$ unary functions; since $g(\mathbf{a})\neq 0$, the scaled definition of realisability gives $h\in S(\{f,\dl_0,\dl_1\})$. An analogous argument with the roles of $g$ and $h$ swapped shows that $g\in S(\{f,\dl_0,\dl_1\})$. \end{proof} When considering a bipartite Holant problem $\holp{\mathcal{F}\mid\mathcal{G}}$ for some finite $\mathcal{F},\mathcal{G}\sse\Upsilon$, we need to use gadgets that respect the bipartition. Suppose $\Gm=(\mathcal{F}|\mathcal{G},G,\pi)$ where $G=(V,W,E,E')$ is a bipartite graph with vertex partitions $V$ and $W$, (normal) edges $E$ and dangling edges $E'$, and suppose $\pi:V\cup W\to\mathcal{F}\cup\mathcal{G}$ satisfies $\pi(v)\in\mathcal{F}$ for all $v\in V$ and $\pi(w)\in\mathcal{G}$ for all $w\in W$. If furthermore all dangling edges are incident on vertices from $V$, then $\Gm$ is called a \emph{left-hand side (LHS) gadget} over $\mathcal{F}|\mathcal{G}$. Otherwise, if all dangling edges are incident on vertices from $W$, then $\Gm$ is called a \emph{right-hand side (RHS) gadget} over $\mathcal{F}|\mathcal{G}$. The following result is a straightforward extension of Lemma~\ref{lem:realisable}. \begin{lemma}\label{lem:realisable_planar} Let $\mathcal{F},\mathcal{G}\sse\Upsilon$ be two finite sets of functions. Suppose $f$ is the effective function of an LHS gadget over $\mathcal{F}|\mathcal{G}$ and $g$ is the effective function of an RHS gadget over $\mathcal{F}|\mathcal{G}$.
Then \[ \holp{\mathcal{F}\cup\{f\}\mid\mathcal{G}} \equiv_T \holp{\mathcal{F}\mid\mathcal{G}} \quad\text{and}\quad \holp{\mathcal{F}\mid\mathcal{G}\cup\{g\}} \equiv_T \holp{\mathcal{F}\mid\mathcal{G}}. \] \end{lemma} A gadget is called \emph{planar} if it is defined by a plane graph and if the dangling edges, ordered counterclockwise corresponding to the order of the arguments of the effective function, are in the outer face in a planar embedding \cite[Section~2.4]{cai_holographic_2017}. In reductions between planar holant problems, only planar gadgets may be used. \subsection{Polynomial interpolation} Finally, there is the technique of \emph{polynomial interpolation}. Let $\mathcal{F}$ be a set of functions and suppose $g$ is a function that cannot be realised over $\mathcal{F}$. If, given any signature grid over $\mathcal{F}\cup\{g\}$, it is possible to set up a family of signature grids over $\mathcal{F}$ such that the holant for the original problem instance can be determined efficiently from the holant values of the family by solving a system of linear equations, then $g$ is said to be \emph{interpolatable} over $\mathcal{F}$. We do not directly use polynomial interpolation here, though the technique is employed by many of the results we build upon. A rigorous definition of polynomial interpolation can be found in \cite{cai_complexity_2014}. {\subsection{Linear algebra lemmas for holographic transformations} \label{s:linear-algebra} Holographic transformations are, at their core, linear maps. In this section, we give a few lemmas about decompositions of matrices that will significantly simplify later arguments about these transformations. In particular, we extend the orthogonal QR decomposition from real to complex matrices and prove two further lemmas building on it. 
These results are straightforward and may not be novel, but we have not been able to find a reference for them in the literature, so we provide proofs for completeness.} Let $K := \left(\begin{smallmatrix}1&1\\i&-i\end{smallmatrix}\right)$, $X := \left( \begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and $T:=\smm{1&0\\0&\exp(i\pi/4)}$ {where $i$ is the imaginary unit}; these are all elements of $\operatorname{GL}_2(\AA)$. Recall that any real square matrix $M$ can be written as $M=QR$, where $Q$ is an orthogonal matrix and $R$ is upper (or lower) triangular. The equivalent result for complex matrices requires $Q$ to be unitary instead of orthogonal. Nevertheless, many complex 2 by 2 matrices do admit a decomposition with a complex orthogonal matrix and an upper or lower triangular matrix. Where this is not possible, we give an alternative decomposition using the matrix $K$ {defined above} instead of an orthogonal matrix. \begin{lemma}[Orthogonal QR decomposition for complex matrices]\label{lem:QR_decomposition} Let $M$ be an invertible 2 by 2 complex matrix, write {$\mathcal{O}$} for the set of all 2 by 2 complex orthogonal matrices, and let $K$ be as defined above. Then the following hold: \begin{itemize} \item There exists $Q\in \mathcal{O}\cup\{K, KX\}$ such that $Q^{-1}M$ is upper triangular. \item There exists $Q\in \mathcal{O}\cup\{K,KX\}$ such that $Q^{-1}M$ is lower triangular. \item If $Q^{-1}M$ is neither lower nor upper triangular for any orthogonal $Q$, then $M=KD$ or $M=KXD$, where $D$ is diagonal. \end{itemize} \end{lemma} \begin{proof} Write $M$ as: \[ M = \begin{pmatrix} x&y\\z&w\end{pmatrix}. \] We assumed $M$ was invertible, so $\det M = xw-yz\neq 0$. {Note that $K^{-1} = \frac{1}{2}\smm{1&-i\\1&i}$ and $(KX)^{-1} = \frac{1}{2}\smm{1&i\\1&-i}$. For a lower triangular decomposition, we want the top right element of $Q^{-1}M$ to vanish; this element is $(Q^{-1})_{00} y + (Q^{-1})_{01} w$.
The elements $y$ and $w$ cannot both be zero since $M$ is invertible. Suppose first $y^2+w^2=0$, i.e.\ $w=\pm iy$. Then \[ \pmm{1&\pm i\\1&\mp i}\pmm{x&y\\z&\pm i y} = \begin{pmatrix} x\pm iz & 0 \\ x\mp iz & 2y \end{pmatrix} \] so $Q^{-1}M$ is \new{lower} triangular for some $Q\in\{K,KX\}$. If instead $y^2+w^2\neq 0$, then \[ \frac{1}{\sqrt{y^2+w^2}}\pmm{w&-y\\y&w}\pmm{x&y\\z&w} = \frac{1}{\sqrt{y^2+w^2}}\pmm{wx-yz&0\\yx+wz&y^2+w^2}, \] and $\frac{1}{\sqrt{y^2+w^2}}\smm{w&-y\\y&w}\in\mathcal{O}$. Analogously, for an upper triangular decomposition, some $Q\in\{K,KX\}$ works if $x^2+z^2=0$, and some $Q\in\mathcal{O}$ works otherwise. We have $Q\in\{K,KX\}$ for both decompositions if and only if} $x^2+z^2=0$ and $y^2+w^2=0$ simultaneously. Write $z=\pm ix$. Then, by invertibility of $M$, $w=\mp iy$. Thus, letting $D=\left(\begin{smallmatrix}x&0\\0&y\end{smallmatrix}\right)$: \[ M = \begin{pmatrix} x&y\\\pm ix&\mp iy\end{pmatrix} = \begin{pmatrix} 1&1\\\pm i&\mp i\end{pmatrix} \begin{pmatrix} x&0\\0&y\end{pmatrix} = \begin{cases} KD &\text{if $\pm$ goes to $+$, or} \\ KXD & \text{if $\pm$ goes to $-$.} \end{cases} \] This completes the proof. \end{proof} \begin{lemma}\label{lem:ATA-D} Suppose $M$ is a 2 by 2 invertible complex matrix such that $M^TM=\smm{\ld&0\\0&\mu}$ for some $\ld,\mu\in\mathbb{C}\setminus\{0\}$. Then there exists a 2 by 2 orthogonal matrix $Q$ and a diagonal matrix $D$ such that $M=QD$. \end{lemma} \begin{proof} {Given invertibility of $M$, the property $(M^TM)_{01} = (M^TM)_{10} = 0$ means that the columns of $M$ are orthogonal to each other (under the `real' inner product). Moreover, the diagonal entries $\ld$ and $\mu$ are the `squared norms' of the columns under this inner product; as both are non-zero, each column can be rescaled to have squared norm 1. Hence there exists $Q\in\mathcal{O}$ such that the columns of $M$ are scalings of the columns of $Q$, which implies $M=QD$.} \end{proof} Using the complex QR decomposition, we can also consider the solutions of $A^TA\doteq X$, where `$\doteq$' denotes equality up to scalar factor.
\begin{lemma}\label{lem:ATA-X} The solutions of: \begin{equation}\label{eq:ATA-X} A^TA \doteq X \end{equation} are exactly those matrices $A$ satisfying $A=KD$ or $A=KXD$ for some invertible diagonal matrix $D$. \end{lemma} \begin{proof} First, we check that matrices of the form $KD$ or $KXD$ for some invertible diagonal matrix $D$ satisfy \eqref{eq:ATA-X}. Indeed: \begin{equation} (KD)^TKD = D^T K^T K D = D \begin{pmatrix}1&i\\1&-i\end{pmatrix} \begin{pmatrix}1&1\\i&-i\end{pmatrix} D = 2 D X D = 2 xy X \doteq X, \end{equation} where $D = \left(\begin{smallmatrix}x&0\\0&y\end{smallmatrix}\right)$ with $x,y\in\mathbb{C}\setminus\{0\}$. Similarly: \begin{equation} (KXD)^TKXD = D^T X^T K^T KXD = 2DX^3D = 2DXD = 2xyX \doteq X. \end{equation} This completes the first part of the proof. It remains to be shown that these are the only solutions of $A^TA\doteq X$. Assume, for the purposes of deriving a contradiction, that there is a solution that has an orthogonal QR decomposition. In particular, suppose $A=QR$ for some upper triangular matrix $R$. Then, by orthogonality of $Q$: \begin{equation} A^T A = R^TQ^TQR = R^T R = \begin{pmatrix} R_{00} & 0 \\ R_{01} & R_{11} \end{pmatrix} \begin{pmatrix} R_{00} & R_{01} \\ 0 & R_{11} \end{pmatrix} = \begin{pmatrix} R_{00}^2 & R_{00}R_{01} \\ R_{00}R_{01} & R_{01}^2+R_{11}^2 \end{pmatrix}. \end{equation} The only way for the top left component of this matrix to be zero, as required, is if $R_{00}$ is zero. Yet, in that case, the top right and bottom left components of $A^T A$ are zero too, hence $A^T A$ cannot be invertible. That is a contradiction because any non-zero scalar multiple of $X$ is invertible. A similar argument applies if $R$ is lower triangular. Thus, all solutions of \eqref{eq:ATA-X} have to fall into the third case of Lemma \ref{lem:QR_decomposition}: i.e.\ all solutions must be of the form $A=KD$ or $A=KXD$. 
\end{proof} For any $M\in\operatorname{GL}_2(\AA)$, denote by $f_M(x,y):=M_{xy}$ the binary function corresponding to this matrix. {The following is straightforward, but we give a proof for completeness.} \begin{lemma}\label{lem:hc_gadget} Suppose $M\in\operatorname{GL}_2(\AA)$ and $g\in\Upsilon$, then $M\circ g, M^T\circ g\in S(\{g,f_M\})$. \end{lemma} \begin{proof} Let $n=\ari(g)$. We have \begin{align*} (M\circ g)(\vc{x}{n}) &= \sum_{\vc{y}{n}\in\{0,1\}} \left(\prod_{j=1}^{n}M_{x_j y_j}\right) g(\vc{y}{n}) \\ &= \sum_{\vc{y}{n}\in\{0,1\}} \left(\prod_{j=1}^{n} f_M(x_j,y_j)\right) g(\vc{y}{n}), \end{align*} so $M\circ g\in S(\{g,f_M\})$. Furthermore, for any $M\in\operatorname{GL}_2(\AA)$, $f_{M^T}(x,y)=f_M(y,x)$; therefore \[ (\new{M^T}\circ g)(\vc{x}{n}) = \sum_{\vc{y}{n}\in\{0,1\}} \left(\prod_{j=1}^{n} f_M(y_j,x_j)\right) g(\vc{y}{n}) \] and $M^T\circ g\in S(\{g,f_M\})$. \end{proof} \section{Known results about holant problems} \label{s:existing_results} We now introduce the existing families of holant problems and their associated dichotomy results. Gadget constructions (which are at the heart of many reductions) are easier the more functions are known to be available. As a result, several families of holant problems have been defined, in which certain sets of functions are freely available, {i.e.\ are always added to the set of functions parameterising the holant problem. More formally, suppose $\mathcal{G}\sse\Upsilon$ is a finite set of functions and denote by $\mathsf{Holant}^\mathcal{G}$ the holant problem where functions in $\mathcal{G}$ are freely available, then $\mathsf{Holant}^\mathcal{G}(\mathcal{F}) := \mathsf{Holant}(\mathcal{F}\cup \mathcal{G})$ for any set of functions $\mathcal{F}$. 
Effectively, the problem $\mathsf{Holant}^\mathcal{G}$ restricts analysis to cases where the set of constraint functions contains $\mathcal{G}$.} \subsection{Conservative holant} \label{s:Holant_star} Write $\mathcal{U}:=\Upsilon_1$ for compatibility with earlier notation. We will not use the notation $\mathsf{Holant}^*$ \cite{cai_complexity_2014,cai_dichotomy_2011} here as we do not define holant problems for infinite sets of constraint functions. Instead, as is common in the counting CSP literature, we refer to holant problems where arbitrary finite subsets of $\mathcal{U}$ are freely available as `conservative'. We begin with some definitions. Given a bit string $\mathbf{x}$, let $\bar{\mathbf{x}}$ be its bit-wise complement and let $\abs{\mathbf{x}}$ denote its Hamming weight. Denote by $\avg{\mathcal{F}}$ the closure of a set of functions $\mathcal{F}$ under tensor products. Furthermore, define: \begin{itemize} \item the set of all unary and binary functions \[ \mathcal{T} := \Upsilon_1\cup\Upsilon_2, \] \item the set of functions which are non-zero on at most two \new{complementary} inputs \[ \mathcal{E} := \{f\in\Upsilon \mid \exists \mathbf{a}\in\{0,1\}^{\ari(f)} \text{ such that } f(\mathbf{x})=0 \text{ if } \mathbf{x}\notin\{\mathbf{a},\bar{\mathbf{a}}\} \}, \] \item the set of functions which are non-zero only on inputs of Hamming weight at most 1 \[ \mathcal{M} := \{f\in\Upsilon \mid f(\mathbf{x})=0 \text{ if } \abs{\mathbf{x}}> 1\}. \] \end{itemize} {Note that $\mathcal{U}\sse\mathcal{T}$, $\mathcal{U}\sse\mathcal{E}$ and $\mathcal{U}\sse\mathcal{M}$.} The following result has been adapted to our notation. \begin{theorem}[{\cite[Theorem~2.2]{cai_dichotomy_2011}}]\label{thm:Holant-star} Suppose $\mathcal{F}$ is a finite subset of $\Upsilon$. 
If \begin{itemize} \item $\mathcal{F}\subseteq\avg{\mathcal{T}}$, or \item there exists $O\in\mathcal{O}$ such that $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}=\avg{KX\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, \end{itemize} then, for any finite subset $\new{\mathcal{U}'}\sse\mathcal{U}$, the problem $\mathsf{Holant}(\mathcal{F},\new{\mathcal{U}'})$ is polynomial-time computable. Otherwise, there exists a finite subset $\new{\mathcal{U}'}\sse\mathcal{U}$ such that $\mathsf{Holant}(\mathcal{F},\new{\mathcal{U}'})$ is \sP-hard. The dichotomy is still true even if the inputs are restricted to planar graphs. \end{theorem} {To get an intuition for the polynomial-time computable cases, first note that every tensor product can be thought of as a gadget over its factors, and that tensor closure commutes with holographic transformations. Furthermore, if a signature grid is not connected, its holant is just the product of the holant values of the individual connected components. Now, in the first polynomial-time computable case of Theorem~\ref{thm:Holant-star}, $\mathcal{F}\sse\ang{\mathcal{T}}$, the signature grid can be transformed to one in which every vertex has degree at most 2 by replacing every function with a disconnected gadget over unary and binary functions. Then each connected component is a path or cycle, and its holant value can be computed by matrix multiplication. In the second polynomial-time computable case, $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, again replace decomposable functions by disconnected gadgets. Suppose $O$ is the identity matrix, then an assignment of a Boolean value to one edge contributes to at most one non--zero-weight assignment of values to all edges in the same connected component. 
Thus the holant value for any connected graph is a sum of at most two terms, which can be computed efficiently, and hence the overall value can be found. If $O$ is not the identity, apply Corollary~\ref{cor:orthogonal-holographic} to reduce to the identity case. For the third polynomial-time computable case, $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}$, note that given any finite $\mathcal{G}\sse\Upsilon$, \[ \mathsf{Holant}\left(K\circ\mathcal{G}\right) \equiv_T \mathsf{Holant}(K\circ\mathcal{G}\mid\{\mathrm{EQ}_2\}) \equiv_T \mathsf{Holant}(\mathcal{G}\mid\{\mathrm{NEQ}\}) \leq_T \mathsf{Holant}(\mathcal{G}\cup\{\mathrm{NEQ}\}), \] where the first equivalence is Proposition~\ref{prop:make_bipartite}, the second is Theorem~\ref{thm:Valiant_Holant}, and the final reduction is because dropping the restriction to bipartite signature grids cannot make the problem easier. Hence, since $\mathrm{NEQ}\in\mathcal{E}$, this case reduces to the second one. For the final polynomial-time computable case, $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, use the first two steps of the above reduction. It can be shown that all LHS gadgets over $\mathcal{M}|\{\mathrm{NEQ}\}$ are in $\ang{\mathcal{M}}$ (cf.\ e.g.\ \cite[Lemma~43]{backens_holant_2018}). Thus, vertices from the right-hand side partition can be removed one-by-one, updating the signature grid in the process, until it decomposes into a discrete graph whose holant value can be computed straightforwardly. The polynomial-time computable sets of Theorem~\ref{thm:Holant-star} will also be related to quantum entanglement in Section~\ref{s:existing_quantum}. If $\mathcal{F}$ is not one of the exceptional sets defined above, then the closure of $\mathcal{F}\cup\mathcal{U}$ under taking gadgets contains all functions~\cite[Theorem~67]{backens_holant_2018}. 
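The reductions above lean repeatedly on the fact that holographic transformations preserve the holant value (Theorem~\ref{thm:Valiant_Holant}). As a quick numerical sanity check (purely illustrative, with arbitrary example data), the following Python sketch verifies the invariance on a minimal two-edge bipartite grid: pairing $M\t{2}\ket{f}$ with $\left((M^{-1})^T\right)\t{2}\ket{g}$ gives the same inner product as pairing $\ket{f}$ with $\ket{g}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example data: binary functions f, g as vectors in C^4,
# and an arbitrary invertible transformation matrix M.
f = rng.standard_normal(4) + 1j * rng.standard_normal(4)
g = rng.standard_normal(4) + 1j * rng.standard_normal(4)
M = np.array([[1.0, 2.0], [0.5, -1.0]])   # det = -2, so invertible
N = np.linalg.inv(M).T                    # the matrix (M^{-1})^T

# Two parallel edges: M acts on both argument slots of f, N on both of g.
holant_before = g @ f
holant_after = (np.kron(N, N) @ g) @ (np.kron(M, M) @ f)

assert np.allclose(holant_before, holant_after)
```

The invariance is immediate algebraically, since $N^T M$ is the identity and the Kronecker product is multiplicative.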
} \subsection{Counting constraint satisfaction problems} \label{s:csp} The family of complex-weighted Boolean $\#\mathsf{CSP}$ (counting constraint satisfaction problems) is defined as follows. Let $\mathcal{F}\sse\Upsilon$ be a finite set of functions and let $V$ be a finite set of variables. A \emph{constraint} $c$ over $\mathcal{F}$ is a tuple consisting of a function $f_c\in\mathcal{F}$ and a \emph{scope}, which is a tuple of $\ari(f)$ (not necessarily distinct) elements of $V$. If $C$ is a set of constraints, then any assignment $\mathbf{x}:V\to\{0,1\}$ of values to the variables induces a \emph{weight} $\wt_{(V,C)}(\mathbf{x}):=\prod_{c\in C}f_c(\mathbf{x}|_c)$, where $\mathbf{x}|_c$ denotes the restriction of $\mathbf{x}$ to the scope of $c$. \begin{description}[noitemsep] \item[Name] $\#\mathsf{CSP}(\mathcal{F})$ \item[Instance] A tuple $(V,C)$, where $V$ is a finite set of variables and $C$ is a finite set of constraints over $\mathcal{F}$. \item[Output] The value $Z_{(V,C)} = \sum_{\mathbf{x}:V\to\{0,1\}} \prod_{c\in C} f_c(\mathbf{x}|_c)$. \end{description} Counting constraint satisfaction problems are closely related to holant problems. {In particular, a counting constraint satisfaction problem can be thought of as a bipartite holant problem with the constraints as one part and the variables as the other part. Each vertex corresponding to a variable is assigned the function $\mathrm{EQ}_k$, with $k$ the total number of times that variable appears across the scopes of all constraints. The straightforward formalisation of this idea would require parameterising the holant problem with an infinite set of functions, the set containing all equality functions of any arity. Yet the bipartite structure is not necessary: if two variable vertices are adjacent to each other, they can be merged into one, and if two constraint vertices are adjacent to each other, a new variable vertex assigned $\mathrm{EQ}_2$ can be introduced between them. 
Now, $S(\{\mathrm{EQ}_3\})$ is exactly the set of all equality functions, hence:} \begin{proposition}[{\cite[Proposition~1]{cai_holant_2012}}]\label{prop:CSP_holant} $\#\mathsf{CSP}(\mathcal{F}) \equiv_T \Holp{\mathcal{F}\cup\{ \mathrm{EQ}_3 \}}$. \end{proposition} {This proposition shows that any counting constraint satisfaction problem can be expressed as a holant problem. On the other hand, some holant problems can only be expressed as counting constraint satisfaction problems with the additional restriction that every variable must appear exactly twice across the scopes of all the constraints. This means the holant framework is more general than counting constraint satisfaction problems. For example, as is well known, the problem of counting matchings on a graph can be expressed in the holant framework but not in the standard $\#\mathsf{CSP}$ framework as defined above (cf.~\cite[pp.~23:3--4]{backens_holant_2018}).} The dichotomies for $\#\mathsf{CSP}$ and its variants feature families of tractable functions which do not appear in Theorem~\ref{thm:Holant-star}. \begin{definition}\label{dfn:affine_function} A function $f:\{0,1\}^n\to\AA$ for some non-negative integer $n$ is called \emph{affine} if it has the form: \begin{equation} f(\mathbf{x}) = c i^{l(\mathbf{x})} (-1)^{q(\mathbf{x})} \chi_{A\mathbf{x}=\mathbf{b}}(\mathbf{x}), \end{equation} where $c\in\AA$, $i^2=-1$, $l:\{0,1\}^n\to\{0,1\}$ is a linear function, $q:\{0,1\}^n\to\{0,1\}$ is a quadratic function, $A$ is an $m$ by $n$ matrix with Boolean entries for some $0\leq m\leq n$, $\mathbf{b}\in\{0,1\}^m$, and $\chi$ is an indicator function which takes value 1 on inputs satisfying $A\mathbf{x}=\mathbf{b}$, and 0 otherwise. The set of all affine functions is denoted by $\mathcal{A}$. \end{definition} The support of the function $\chi_{A\mathbf{x}=\mathbf{b}}$ is an affine subspace of $\{0,1\}^n$, hence the name `affine functions'. 
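As a concrete illustration of Definition~\ref{dfn:affine_function} (a sketch with one hypothetical choice of parameters, not a general affineness test), the function $\mathrm{EQ}_3$ is affine: take $c=1$, $l=q=0$, and let $A\mathbf{x}=\mathbf{b}$ encode the parity checks $x_1\oplus x_2=0$ and $x_2\oplus x_3=0$.

```python
import itertools
import numpy as np

# EQ_3 in the affine form: c = 1, l = q = 0, and chi_{Ax=b} given by the
# parity checks x1 + x2 = 0 and x2 + x3 = 0 over GF(2).
A = np.array([[1, 1, 0], [0, 1, 1]])
b = np.array([0, 0])

def affine_form(x):
    """Evaluate the affine expression c * i^l * (-1)^q * chi_{Ax=b} at x."""
    return 1 if np.array_equal(A @ np.array(x) % 2, b) else 0

# The affine expression reproduces EQ_3 on all eight inputs.
for x in itertools.product((0, 1), repeat=3):
    assert affine_form(x) == int(x[0] == x[1] == x[2])
```

The support $\{000, 111\}$ is exactly the affine subspace cut out by the two parity checks.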
There are different definitions of this family in different parts of the literature, but they are equivalent. For the reader familiar with quantum information theory, the affine functions correspond -- up to scaling -- to stabiliser states (cf.\ Section \ref{s:existing_quantum} and also independently \cite{cai_clifford_2018}). \begin{lemma}[{\cite[Lemma~3.1]{cai_clifford_2018}}]\label{lem:cai_clifford} If $f(\vc{x}{n}), g(\vc{y}{m})\in\mathcal{A}$, then so are \begin{enumerate} \item $(f\otimes g)(\vc{x}{n},\vc{y}{m}) = f(\vc{x}{n}) g(\vc{y}{m})$, \item $f(x_{\rho(1)},\ldots, x_{\rho(n)})$ for any permutation {$\rho:[n]\to [n]$}, \item $f^{x_j=x_\ell}(\vc{x}{j-1},x_{j+1},\ldots, x_n) = f(\vc{x}{j-1},x_\ell,x_{j+1},\ldots, x_n)$, setting the variable $x_j$ to be equal to $x_\ell$, and \item $f^{x_j=*}(\vc{x}{j-1},x_{j+1},\ldots, x_n) = \sum_{x_j\in\{0,1\}} f(\vc{x}{n})$. \end{enumerate} \end{lemma} \begin{definition}[\new{adapted from \cite[Definition~10]{cai_dichotomy_2017}}] A function $f:\{0,1\}^n\to\AA$ for some non-negative integer $n$ is called \emph{local affine} if it satisfies $\left( \bigotimes_{j=1}^{n} T^{a_j} \right) \ket{f} \in \mathcal{A}$ for any $\mathbf{a}\in\supp(f)$, where \[ T^1 = T = \begin{pmatrix}1&0\\0&e^{i\pi/4}\end{pmatrix} \qquad\text{and}\qquad T^0 = \begin{pmatrix}1&0\\0&1\end{pmatrix}. \] The set of all local affine functions is denoted $\mathcal{L}$. \end{definition} Both $\mathcal{A}$ and $\mathcal{L}$ are closed under tensor products, i.e.\ $\ang{\mathcal{A}}=\mathcal{A}$ and $\ang{\mathcal{L}}=\mathcal{L}$. \begin{theorem}[{\cite[Theorem~3.1]{cai_complexity_2014}}]\label{thm:csp} Suppose $\mathcal{F}\sse\Upsilon$ is finite. If $\mathcal{F}\subseteq\mathcal{A}$ or $\mathcal{F}\subseteq\avg{\mathcal{E}}$, then $\#\mathsf{CSP}(\mathcal{F})$ is computable in polynomial time. Otherwise, $\#\mathsf{CSP}(\mathcal{F})$ is \#\textsf{P}-hard. 
\end{theorem} {Unlike a holant problem, the complexity of a counting CSP does not change if the pinning functions $\dl_0$ and $\dl_1$ are added to the set of constraint functions. \begin{lemma}[{Pinning lemma \cite[Lemma~8]{dyer_complexity_2009}}]\label{lem:csp^c} For any finite $\mathcal{F}\sse\Upsilon$, \[ \#\mathsf{CSP}^c(\mathcal{F}):=\#\mathsf{CSP}(\mathcal{F}\cup\{\dl_0,\dl_1\})\equiv_T\#\mathsf{CSP}(\mathcal{F}). \] \end{lemma} The dichotomy of Theorem~\ref{thm:csp} also holds for a variant counting constraint satisfaction problem called \#\textsf{R$_3$-CSP} with a restriction on the number of times each variable may appear \cite[Theorem~3.2]{cai_complexity_2014}. \begin{description}[noitemsep] \item[Name] \#\textsf{R$_3$-CSP}$(\mathcal{F})$ \item[Instance] A tuple $(V,C)$, where $V$ is a finite set of variables and $C$ is a finite set of constraints over $\mathcal{F}$ such that each variable appears at most three times across all scopes. \item[Output] The value $Z_{(V,C)} = \sum_{\mathbf{x}:V\to\{0,1\}} \prod_{c\in C} f_c(\mathbf{x}|_c)$. \end{description} For any finite $\mathcal{F}\sse\Upsilon$, the problem \#\textsf{R$_3$-CSP}$(\mathcal{F})$ is equivalent to the bipartite holant problem $\Holp{\mathcal{F}\mid\{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\}}$ \cite[Section~2]{cai_complexity_2014}. Its} dichotomy follows immediately from that for $\#\mathsf{CSP}$ if $\mathcal{F}$ contains the binary equality function (or indeed any equality function of arity at least 2), but \new{this interreduction} is non-trivial if $\mathcal{F}$ does not contain any equality function of arity at least 2. We will also consider a variant of counting CSPs in which each variable has to appear an even number of times. 
\begin{description}[noitemsep] \item[Name] $\#\mathsf{CSP}_2(\mathcal{F})$ \item[Instance] A tuple $(V,C)$, where $V$ is a finite set of variables and $C$ is a finite set of constraints over $\mathcal{F}$ such that each variable appears an even number of times across all scopes. \item[Output] The value $Z_{(V,C)} = \sum_{\mathbf{x}:V\to\{0,1\}} \prod_{c\in C} f_c(\mathbf{x}|_c)$. \end{description} Based on the above definition, we also define $\#\mathsf{CSP}_2^c(\mathcal{F}):=\#\mathsf{CSP}_2(\mathcal{F}\cup\{\dl_0,\dl_1\})$. The dichotomy for this problem differs from that for plain $\#\mathsf{CSP}$. \begin{theorem}[{\cite[Theorem 4.1]{cai_dichotomy_2017}}]\label{thm:csp_2^c} Suppose $\mathcal{F}\sse\Upsilon$ is finite. A $\#\mathsf{CSP}_2^c(\mathcal{F})$ problem has a polynomial time algorithm if one of the following holds: \begin{itemize} \item $\mathcal{F}\subseteq\avg{\mathcal{E}}$, \item $\mathcal{F}\subseteq\mathcal{A}$, \item $\mathcal{F}\subseteq T\circ\mathcal{A}$, or \item $\mathcal{F}\subseteq\mathcal{L}$. \end{itemize} Otherwise, it is \#\textsf{P}-hard. \end{theorem} {For any finite $\mathcal{F}\sse\Upsilon$, we have \[ \#\mathsf{CSP}(\mathcal{F}\cup\{\dl_+\})\leq_T\#\mathsf{CSP}_2(\mathcal{F}\cup\{\dl_+\}), \] where, for each variable $y$ that appears an odd number of times in the original instance, we add a new constraint $(\dl_+,(y))$ to make it appear an even number of times instead. Yet if no non-zero scaling of $\dl_+$ is present or realisable (via gadgets or other methods), then there is no general reduction from $\#\mathsf{CSP}(\mathcal{F})$ to $\#\mathsf{CSP}_2(\mathcal{F})$. This lack of a general reduction can be seen for example by noting the differences between Theorem~\ref{thm:csp_2^c} (which classifies the complexity of $\#\mathsf{CSP}_2^c$) and Theorem~\ref{thm:csp} (which by Lemma~\ref{lem:csp^c} also applies to $\#\mathsf{CSP}^c$).} By analogy with planar holant problems, we also define planar counting CSPs. 
\begin{definition}\label{dfn:planar_CSP} $\mathsf{Pl}\text{-}\#\mathsf{CSP}(\mathcal{F}):=\mathsf{Pl\text{-}Holant}\left(\mathcal{F}\cup\{ \mathrm{EQ}_3 \}\right)$, i.e.\ $\mathsf{Pl}\text{-}\#\mathsf{CSP}(\mathcal{F})$ is the restriction of $\#\mathsf{CSP}(\mathcal{F})$ to planar instances of the corresponding holant problem according to Proposition~\ref{prop:CSP_holant}. \end{definition} The complexity dichotomy for $\mathsf{Pl}\text{-}\#\mathsf{CSP}$ involves an additional tractable family as compared to the general case. This new family is called \emph{matchgate functions} and consists of those functions which correspond to computing a weighted sum of perfect matchings. We denote the set of all matchgate functions by\footnote{Note that we use a different symbol than \cite{cai_holographic_2017} for the set of matchgate functions, to avoid clashing with other established notation.} $\mathcal{H}\sse\Upsilon$. As the rigorous definition of this set is somewhat intricate and not required for our work, we do not reproduce it here; the interested reader can find it in \cite[pp.~STOC17-65f]{cai_holographic_2017}. Indeed, the only property of $\mathcal{H}$ which we will require is the following lemma, which is adapted to avoid having to define concepts not used in this paper. \begin{lemma}[{\cite[first part of Lemma~2.29]{cai_holographic_2017}}]\label{lem:unary_matchgate} If $f\in\Upsilon$ has arity $\leq 3$, then $f\in\mathcal{H}$ if and only if one of the following parity conditions is satisfied: \begin{itemize} \item $f(\mathbf{x})=0$ whenever $\abs{\mathbf{x}}$ is even, or \item $f(\mathbf{x})=0$ whenever $\abs{\mathbf{x}}$ is odd. \end{itemize} \end{lemma} For $f\in\Upsilon_1$, this means $f$ is a matchgate function if and only if $f=c\cdot\delta_0$ or $f=c\cdot\delta_1$ for some {$c\in\AA$}. We can now state the dichotomy for $\mathsf{Pl}\text{-}\#\mathsf{CSP}$.
\begin{theorem}[{\cite[Theorem~6.1$'$]{cai_holographic_2017}}]\label{thm:planar_csp} Let $\mathcal{F}$ be any finite set of complex-valued functions in Boolean variables. Then $\mathsf{Pl}\text{-}\#\mathsf{CSP}(\mathcal{F})$ is \#\textsf{P}-hard unless $\mathcal{F}\sse\mathcal{A}$, $\mathcal{F}\sse\ang{\mathcal{E}}$, or $\mathcal{F}\sse\smm{1&1\\1&-1}\circ\mathcal{H}$, in which case the problem is computable in polynomial time. \end{theorem} \subsection{Partial results for \textsf{Holant}\texorpdfstring{\textsuperscript{c}}{\textasciicircum c} and \textsf{Holant}} \label{s:Holant_c} $\mathsf{Holant}^c$ is the holant problem in which the unary functions pinning edges to 0 or 1 are freely available \cite{cai_complexity_2014,cai_holant_2012}, i.e.\ $\Holp[c]{\mathcal{F}} := \Holp{\mathcal{F}\cup\{\delta_0,\delta_1\}}$ for any finite $\mathcal{F}\sse\Upsilon$. We will give a full dichotomy for this problem in Section~\ref{s:dichotomy}, building on the following dichotomies for symmetric functions and for real-valued functions. \begin{theorem}[{\cite[Theorem~6]{cai_holant_2012}}]\label{thm:Holant-c-sym} Let $\mathcal{F}\sse\Upsilon$ be a finite set of symmetric functions. $\Holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard unless $\mathcal{F}$ satisfies one of the following conditions, in which case it is polynomial-time computable: \begin{itemize} \item $\mathcal{F}\subseteq\avg{\mathcal{T}}$, or \item there exists $O\in\mathcal{O}$ such that $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}=\avg{KX\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, or \item there exists $B\in\mathcal{B}$ such that $\mathcal{F}\subseteq B\circ\mathcal{A}$, where: \begin{equation}\label{eq:cS_definition} \mathcal{B} = \left\{ M \,\middle|\, M^T\circ \{ \mathrm{EQ}_2, \delta_0, \delta_1 \} \subseteq \mathcal{A} \right\}. 
\end{equation} \end{itemize} \end{theorem} Note that the first four polynomial-time computable cases are exactly the ones appearing in Theorem~\ref{thm:Holant-star}. The preceding results all apply to algebraic complex-valued functions, but the following theorem is restricted to algebraic real-valued functions. \begin{theorem}[{\cite[Theorem 5.1]{cai_dichotomy_2017}}] \label{thm:real-valued_Holant-c} Let $\mathcal{F}$ be a set of algebraic real-valued functions. Then $\Holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard unless $\mathcal{F}$ is a tractable family for conservative holant or for $\#\mathsf{CSP}_2^c$. \end{theorem} In the case of $\mathsf{Holant}$ with no \new{freely available functions}, there exists a dichotomy for complex-valued symmetric functions \cite[Theorem~31]{cai_complete_2013} and a dichotomy for (not necessarily symmetric) functions taking non-negative real values \cite[Theorem~19]{lin_complexity_2016}. We will not explore those results in any detail here. \subsection{Results about ternary symmetric functions} \label{s:results_ternary_symmetric} The computational complexity of problems of the form $\Holp{\{[y_0,y_1,y_2]\} \mid \{[x_0,x_1,x_2,x_3]\}}$, where $[y_0,y_1,y_2]\in\Upsilon_2$ and $[x_0,x_1,x_2,x_3]\in\Upsilon_3$, has been fully determined. These are holant problems on bipartite graphs where one partition only contains vertices of degree~2, the other partition only contains vertices of degree~3, and all vertices of the same arity are assigned the same symmetric function. If $[x_0,x_1,x_2,x_3]$ is degenerate, the problem is tractable by the first case of Theorem \ref{thm:Holant-star}. If $[x_0,x_1,x_2,x_3]$ is non-degenerate, it can always be mapped to either $[1,0,0,1]$ or $[1,1,0,0]$ by a holographic transformation \cite[Section~3]{cai_holant_2012}, cf.\ also Section~\ref{s:quantum_states} below. 
By Theorem \ref{thm:Valiant_Holant}, it thus suffices to consider the cases $\{[y_0,y_1,y_2]\}\mid\{[1,0,0,1]\}$ and $\{[y_0,y_1,y_2]\}\mid\{[1,1,0,0]\}$. There are also some holographic transformations which leave the function $[1,0,0,1]$ invariant. {In particular, if} $M=\smm{1&0\\0&\om}$, where $\om^3=1$, i.e.\ $\om$ is a third root of unity, {then $M\circ\mathrm{EQ}_3=\mathrm{EQ}_3$ \cite[Section~4]{cai_holant_2012}}. By applying Theorem~\ref{thm:Valiant_Holant} with this $M$, \begin{equation}\label{eq:normalisation} \Holp{\{[y_0,y_1,y_2]\}\mid\{[1,0,0,1]\}} \equiv_T \Holp{\{[y_0, \omega y_1, \omega^2 y_2]\}\mid\{[1,0,0,1]\}}. \end{equation} This relationship can be used to reduce the number of symmetric binary functions needing to be considered in this section. Following \cite[Section~4]{cai_holant_2012}, a symmetric binary function $[y_0,y_1,y_2]$ is called \emph{$\omega$-normalised}\footnote{We use the term $\omega$-normalisation to distinguish it from other notions of normalisation, e.g.\ ones relating to the norm of the vector associated with a function.} if \begin{itemize} \item $y_0=0$, or \item there does not exist a primitive $(3t)$-th root of unity $\lambda$, where the greatest common divisor $\operatorname{gcd}(t,3)=1$, such that $y_2=\lambda y_0$. \end{itemize} Similarly, a unary function $[a,b]$ is called $\omega$-normalised if \begin{itemize} \item $a=0$, or \item there does not exist a primitive $(3t)$-th root of unity $\lambda$, where $\operatorname{gcd}(t,3)=1$, such that $b=\lambda a$. \end{itemize} If a binary function is not $\omega$-normalised, it can be made so through application of a holographic transformation of the form given in \eqref{eq:normalisation}. Unary functions will only be required when the binary function has the form $[0,y_1,0]$; in that case the binary function is automatically $\omega$-normalised, and it remains so under a holographic transformation that $\omega$-normalises the unary function. 
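As an informal aside, the invariance underlying \eqref{eq:normalisation} can be sanity-checked numerically: $M=\smm{1&0\\0&\om}$ with $\om^3=1$ fixes $\mathrm{EQ}_3$, while on the binary side it multiplies the entries of $[y_0,y_1,y_2]$ by $1,\om,\om^2$. The following NumPy sketch (not part of the formal development; the test values $2,3,5$ are arbitrary) verifies both facts.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)      # primitive third root of unity, w**3 == 1
M = np.diag([1.0 + 0j, w])

# EQ_3 as a vector in C^8, indexed by bit strings: nonzero on 000 and 111 only
eq3 = np.zeros(8, dtype=complex)
eq3[0b000] = eq3[0b111] = 1

# diag(1, w) applied to all three edges of the equality vertex fixes EQ_3
assert np.allclose(np.kron(np.kron(M, M), M) @ eq3, eq3)

# on the binary side, [y0, y1, y2] is sent to [y0, w*y1, w^2*y2]
y0, y1, y2 = 2, 3, 5                           # arbitrary test values
g = np.array([y0, y1, y1, y2], dtype=complex)  # vector of the symmetric function
assert np.allclose(np.kron(M, M) @ g, [y0, w * y1, w * y1, w**2 * y2])
```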
These definitions allow a complexity classification of all holant problems on bipartite signature grids where there is a ternary equality function on one partition and a non-degenerate symmetric binary function on the other partition. \begin{theorem}[{\cite[Theorem~5]{cai_holant_2012}}]\label{thm:GHZ-state} Let $\mathcal{G}_1,\mathcal{G}_2\sse\Upsilon$ be finite and let $[y_0,y_1,y_2]\in\Upsilon_2$ be an $\omega$-normalised and non-degenerate function. In the case of $y_0=y_2=0$, further assume that $\mathcal{G}_1$ contains a unary function $[a,b]$ which is $\omega$-normalised and satisfies $ab\neq 0$. Then: \[ \Holp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \mid \{[1,0,0,1]\}\cup\mathcal{G}_2} \equiv_T \#\mathsf{CSP}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2). \] \end{theorem} \begin{theorem}[{\cite[Theorem~4]{cai_holant_2012}}]\label{thm:W-state} $\Holp{\{[y_0,y_1,y_2]\}\mid\{[x_0,x_1,x_2,x_3]\}}$ is \#\textsf{P}-hard unless $[y_0,y_1,y_2]$ and $[x_0,x_1,x_2,x_3]$ satisfy one of the following conditions, in which case the problem is polynomial-time computable: \begin{itemize} \item $[x_0,x_1,x_2,x_3]$ is degenerate, or \item there is $M\in\operatorname{GL}_2(\AA)$ such that: \begin{itemize} \item $[x_0,x_1,x_2,x_3]=M\circ[1,0,0,1]$ and $M^T\circ[y_0,y_1,y_2]$ is in $\mathcal{A}\cup\avg{\mathcal{E}}$, \item $[x_0,x_1,x_2,x_3]=M\circ[1,1,0,0]$ and $[y_0,y_1,y_2]=(M^{-1})^T\circ[0,a,b]$ for some $a,b\in\AA$. \end{itemize} \end{itemize} \end{theorem} Here, we have combined the last two cases of \cite[Theorem~4]{cai_holant_2012} into one case, since one can be mapped to the other by a bit flip: a holographic transformation using the matrix $X$. We will also use a result about planar holant problems involving the ternary equality function. 
As the notation used in the original statement of this theorem differs significantly from the notation used in this paper, we first state the original theorem and then prove a corollary which translates the theorem into our notation. \begin{theorem}[{\cite[Theorem~7]{kowalczyk_holant_2016}}]\label{thm:kowalczyk-cai} Let $a,b\in\AA$ and define $X:=ab$, $Y:=a^3+b^3$. The problem $\mathsf{Pl\text{-}Holant}\left(\{[a,1,b]\}\mid \{\mathrm{EQ}_3\}\right)$ is \sP-hard for all $a,b\in\AA$ except in the following cases, for which the problem is polynomial-time computable: \begin{enumerate} \item $X=1$ \item $X=Y=0$ \item $X=-1$ and $Y\in\{0,\pm 2i\}$, or \item $4X^3=Y^2$. \end{enumerate} \end{theorem} \begin{corollary}\label{cor:pl-holant_binary} Suppose $g\in\Upsilon_2$ is symmetric. The problem $\mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right)$ is \sP-hard except in the following cases, for which the problem is polynomial-time computable: \begin{enumerate} \item\label{c:gen_eq} $g\in\ang{\mathcal{E}}$, \item\label{c:affine} $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$ for some $\ld\in\AA$ such that $\ld^3=1$, or \item\label{c:matchgate} $g=c\cdot [a,1,b]$, where $a,b,c\in\AA\setminus\{0\}$ and $a^3=b^3$. \end{enumerate} \end{corollary} \begin{rem} Note that the three exceptional cases of Corollary~\ref{cor:pl-holant_binary} overlap, e.g.\ $g=\mathrm{EQ}_2$ satisfies both Case~\ref{c:gen_eq} and Case~\ref{c:affine}, and $g=[1,1,1]$ satisfies all three cases. We also include binary functions of the form $[a,0,b]$ in the corollary for completeness. \end{rem} \begin{proof}[Proof of Corollary~\ref{cor:pl-holant_binary}.] First, we show that the problem is tractable in the exceptional cases. Let $M=\smm{1&0\\0&\ld}$ with $\ld^3=1$ be such that $g':=M\circ g$ is $\om$-normalised. 
Then \begin{align*} \mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right) &\equiv_T \mathsf{Pl\text{-}Holant}\left(\{g'\}\mid \{\mathrm{EQ}_3\}\right) \\ &\leq_T \holp{\{g', \mathrm{EQ}_3\}} \\ &\leq_T \#\mathsf{CSP}(\{g'\}), \end{align*} where the first step is by Theorem~\ref{thm:Valiant_Holant}, the second step is by forgetting about the bipartition and the planarity constraint, and the third step is by Proposition~\ref{prop:CSP_holant}. Hence if $\#\mathsf{CSP}(\{g'\})$ is polynomial-time computable, then so is $\mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right)$. By Theorem~\ref{thm:csp}, $\#\mathsf{CSP}(\{g'\})$ is polynomial-time computable if $g'\in\mathcal{A}$ or $g'\in\ang{\mathcal{E}}$. Since $M$ is diagonal, $g'\in\ang{\mathcal{E}}$ is equivalent to $g\in\ang{\mathcal{E}}$, so we have tractability for Case~\ref{c:gen_eq}. Furthermore, $g'\in\mathcal{A}$ is equivalent to $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$ for some $\ld$ with $\ld^3=1$, which establishes tractability for Case~\ref{c:affine}. This leaves Case~\ref{c:matchgate}. Assume $g=c\cdot [a,1,b]$ for some $a,b,c\in\AA\setminus\{0\}$ with $a^3=b^3$, and define $g'':=[a,1,b]$. The functions $g$ and $g''$ differ only by a non-zero factor, so by a straightforward extension of Lemma~\ref{lem:scaling} to bipartite signature grids we have \begin{equation}\label{eq:scaling} \mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right) \equiv_T \mathsf{Pl\text{-}Holant}\left(\{g''\}\mid \{\mathrm{EQ}_3\}\right). \end{equation} Now, $a^3=b^3$ implies \[ 0 = (a^3-b^3)^2 = (a^3+b^3)^2 - 4(ab)^3, \] hence $g''$ satisfies condition~4 of Theorem~\ref{thm:kowalczyk-cai}. This establishes tractability for Case~\ref{c:matchgate}. It remains to prove the hardness part of the corollary. If $g=[a,0,b]$ for some $a,b\in\AA$, then $g\in\ang{\mathcal{E}}$ and $\mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right)$ is polynomial-time computable by the above arguments.
Thus, from now on, we may assume $g=c\cdot [a,1,b]$, where $a,b,c\in\AA$ and $c\neq 0$. Let $g'':=[a,1,b]$ as before, then again \eqref{eq:scaling} holds. We will show that if $g''$ satisfies one of the tractability conditions of Theorem~\ref{thm:kowalczyk-cai}, then $g$ satisfies one of Cases~\ref{c:gen_eq}--\ref{c:matchgate}. Consider each tractability condition of Theorem~\ref{thm:kowalczyk-cai} in turn. \begin{enumerate} \item $X=1$ is equivalent to $ab=1$. This is exactly the condition for the function $[a,1,b]$ to be degenerate. But then $g=c\cdot [a,1,b]$ is degenerate as well, so $g\in\ang{\mathcal{E}}$ and $g$ satisfies Case~\ref{c:gen_eq}. \item $X=Y=0$ is equivalent to $ab=0$ and $a^3+b^3=0$. Together, the two equalities imply that $a=b=0$. Thus $g''=\mathrm{NEQ}$, which implies $g=c\cdot\mathrm{NEQ}\in\ang{\mathcal{E}}$, so $g$ satisfies Case~\ref{c:gen_eq}. \item $X=-1$ implies that $a,b\neq 0$. We can therefore rewrite $X=ab=-1$ to $b=-a^{-1}$, which in turn implies $Y=a^3-a^{-3}$. This case therefore reduces to $a^3-a^{-3}\in\{0,\pm 2i\}$. We distinguish subcases. \begin{itemize} \item Suppose $a^3-a^{-3}=0$. Then $a^6=1$, i.e.\ $a=e^{ik\pi/3}$ for some $k\in\{0,1,2,3,4,5\}$. \begin{itemize} \item If $k=0$, let $\ld=1$, then $\ld^3=1$ and $\smm{1&0\\0&\ld}\circ g=c\cdot[1,1,-1]\in\mathcal{A}$. \item If $k=1$, let $\ld=e^{4i\pi/3}$, then $\ld^3=1$ and \[ \smm{1&0\\0&\ld}\circ g = c\cdot [e^{i\pi/3}, \ld, -\ld^2 e^{-i\pi/3}] = c\cdot [e^{i\pi/3}, e^{4i\pi/3}, -e^{7i\pi/3}] = e^{4i\pi/3}c\cdot [-1,1,1] \] so $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$. \item If $k=2$, let $\ld=e^{2i\pi/3}$, then $\ld^3=1$ and \[ \smm{1&0\\0&\ld}\circ g = c\cdot [e^{2i\pi/3}, \ld, -\ld^2 e^{-2i\pi/3}] = c\cdot [e^{2i\pi/3}, e^{2i\pi/3}, -e^{2i\pi/3}] = e^{2i\pi/3}c\cdot [1,1,-1] \] so $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$. \end{itemize} The remaining subcases are similar. \item Suppose $a^3-a^{-3}= 2i$. Then $a^6-1=2ia^3$, or equivalently $a^6 - 2ia^3 -1=(a^3 - i)^2 = 0$. 
Thus $a = e^{(4k+1)i\pi/6}$ for some $k\in\{0,1,2\}$. \begin{itemize} \item If $k=0$, let $\ld=e^{2i\pi/3}$, then $\ld^3=1$ and \[ \smm{1&0\\0&\ld}\circ g = c\cdot [e^{i\pi/6}, \ld, -\ld^2 e^{-i\pi/6}] = c\cdot [e^{i\pi/6}, e^{2i\pi/3}, -e^{7i\pi/6}] = e^{i\pi/6}c\cdot [1,i,1] \] so $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$. \item If $k=1$, let $\ld=e^{4i\pi/3}$, then $\ld^3=1$ and \[ \smm{1&0\\0&\ld}\circ g = c\cdot [e^{5i\pi/6}, \ld, -\ld^2 e^{-5i\pi/6}] = c\cdot [e^{5i\pi/6}, e^{4i\pi/3}, -e^{11i\pi/6}] = e^{5i\pi/6}c\cdot [1,i,1] \] so $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$. \item If $k=2$, let $\ld=1$, then $\ld^3=1$ and \[ \smm{1&0\\0&\ld}\circ g = c\cdot [e^{9i\pi/6}, \ld, -\ld^2 e^{-9i\pi/6}] = c\cdot [e^{3i\pi/2}, 1, -e^{-3i\pi/2}] = c\cdot [-i,1,-i] \] so $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$. \end{itemize} \item Suppose $a^3-a^{-3}= -2i$. This subcase is analogous to the subcase $a^3-a^{-3}= 2i$. \end{itemize} In each subcase, we were able to find $\ld\in\AA$ such that $\ld^3=1$ and $\smm{1&0\\0&\ld}\circ g\in\mathcal{A}$, i.e.\ the function $g$ satisfies Case~\ref{c:affine}. \item $4X^3=Y^2$ implies $4(ab)^3=(a^3+b^3)^2$, which is equivalent to $(a^3-b^3)^2=0$. Hence this condition immediately implies that $g$ satisfies Case~\ref{c:matchgate}. \end{enumerate} This completes the case distinction. We have shown that if $g''=[a,1,b]$ satisfies any of the tractability conditions of Theorem~\ref{thm:kowalczyk-cai}, then $g=c\cdot g''$ satisfies one of Cases~\ref{c:gen_eq}--\ref{c:matchgate}. Hence, conversely, if $g=c\cdot [a,1,b]$ and $g$ does not satisfy any of Cases~\ref{c:gen_eq}--\ref{c:matchgate}, then $g''=[a,1,b]$ does not satisfy any of the tractability conditions of Theorem~\ref{thm:kowalczyk-cai}. In that case, Theorem~\ref{thm:kowalczyk-cai} implies that $\mathsf{Pl\text{-}Holant}\left(\{g''\}\mid \{\mathrm{EQ}_3\}\right)$ is \sP-hard. Thus, by \eqref{eq:scaling}, $\mathsf{Pl\text{-}Holant}\left(\{g\}\mid \{\mathrm{EQ}_3\}\right)$ is \sP-hard.
\end{proof} \subsection{Results about functions of arity 4} Besides the above results about ternary functions, we will also make use of the following result about realising or interpolating the arity-4 equality function from a more general function of arity 4. \begin{lemma}[{\cite[Part of Lemma~2.40]{cai_holographic_2017}}]\label{lem:interpolate_equality4} Suppose $\mathcal{F}\sse\Upsilon$ is finite and contains a function $f$ of arity 4 with matrix \[ \begin{pmatrix} a&0&0&b \\ 0&0&0&0 \\ 0&0&0&0 \\ c&0&0&d \end{pmatrix} \] where $\smm{a&b\\c&d}$ has full rank. Then $\mathsf{Pl\text{-}Holant}(\{\mathrm{EQ}_4\}\cup\mathcal{F}) \leq_T \mathsf{Pl\text{-}Holant}(\mathcal{F})$. \end{lemma} {The lemma can of course also be used in the non-planar setting.} Functions in $\mathcal{E}$ are often called `generalised equality functions'. Recall from Section~\ref{s:csp} that $\#\mathsf{CSP}_2$ is a counting CSP in which each variable has to appear an even number of times. \begin{lemma}[{\cite[Lemma~5.2]{cai_dichotomy_2017}}]\label{lem:generalised_equality4} Suppose $\mathcal{F}\sse\Upsilon$ is finite and contains a generalised equality function $f$ of arity 4, then \[ \Holp{\mathcal{F}}\equiv_T\#\mathsf{CSP}_2(\mathcal{F}). \] \end{lemma} \begin{rem} Note that the statement of Lemma~5.2 in \cite{cai_dichotomy_2017} has `$\mathsf{Holant}^c$' instead of plain `$\mathsf{Holant}$', but the proof does not use pinning functions, so this is presumably a typo. \end{rem} \section{The quantum state perspective} \label{s:quantum_states} In Section \ref{s:vector_perspective}, we introduced the idea of considering functions as complex vectors. This perspective is not only useful for proving Valiant's Holant Theorem (which is at the heart of the theory of holant problems), it also gives a connection to the theory of quantum computation. 
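To make the function--vector correspondence concrete, here is a minimal sketch (an informal aside; the helper name \texttt{to\_vector} is our own and does not appear in the holant literature) that encodes a Boolean-input function as its table of values, recovering the vectors of $\mathrm{EQ}_3$ and $\mathrm{ONE}_3$.

```python
import numpy as np

def to_vector(f, n):
    """Table of values of an n-ary Boolean-input function, with the input
    bit string x1...xn read as a binary number (x1 most significant)."""
    return np.array([f(*[(i >> (n - 1 - k)) & 1 for k in range(n)])
                     for i in range(2 ** n)], dtype=complex)

eq3 = to_vector(lambda x, y, z: 1 if x == y == z else 0, 3)
one3 = to_vector(lambda x, y, z: 1 if x + y + z == 1 else 0, 3)

assert list(eq3.real) == [1, 0, 0, 0, 0, 0, 0, 1]   # |000> + |111>
assert list(one3.real) == [0, 1, 1, 0, 1, 0, 0, 0]  # |001> + |010> + |100>
```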
In quantum computation and quantum information theory, the basic system of interest is a \emph{qubit} (quantum bit), which takes the place of the usual bit in standard computer science. The state of a qubit is described by a vector\footnote{Strictly speaking, vectors only describe \emph{pure} quantum states: there are also \emph{mixed} states, which need to be described differently; but we do not consider those here.} in the two-dimensional complex Hilbert space $\mathbb{C}^2$. State spaces compose by tensor product, i.e.\ the state of $n$ qubits is described by a vector in $\left(\mathbb{C}^2\right)\t{n}$, which is isomorphic to $\mathbb{C}^{2^n}$. Here, $\otimes$ denotes the tensor product of Hilbert spaces and tensor powers are defined analogously to tensor powers of matrices in Section~\ref{s:vector_perspective}. Thus, the vector associated with an $n$-ary function can be considered to be a quantum state of $n$ qubits. The vectors describing quantum states are usually required to have norm 1, but for the methods used here, multiplication by a non-zero complex number does not make a difference, so we can work with states having arbitrary norms. Let $\{\ket{0},\ket{1}\}$ be an orthonormal basis for $\mathbb{C}^2$. This is usually called the \emph{computational basis}. The induced basis on $\left(\mathbb{C}^2\right)\t{n}$ is labelled by $\{\ket{\mathbf{x}}\}_{\mathbf{x}\in\{0,1\}^n}$ as a short-hand, e.g.\ we write $\ket{00\ldots 0}$ instead of $\ket{0}\otimes\ket{0}\otimes\ldots\otimes\ket{0}$. This is exactly the same as the basis introduced in Section \ref{s:vector_perspective}. Holographic transformations also have a natural interpretation in quantum information theory: going from an $n$-qubit state $\ket{f}$ to $M\t{n}\ket{f}$, where $M$ is some invertible 2 by 2 matrix, is a `stochastic local operation with classical communication' (SLOCC) \cite{bennett_exact_2000,dur_three_2000}. 
These are physical operations that can be applied locally (without needing access to more than one qubit at a time) using classical (i.e.\ non-quantum) communication between the sites where the different qubits are held, and which succeed with non-zero probability. {If two quantum states are equivalent under SLOCC, they can be used for the same quantum information tasks, albeit potentially with different probabilities of success. Two $n$-qubit states $\ket{\psi}$ and $\ket{\phi}$ are equivalent under SLOCC if and only if there exist invertible complex 2 by 2 matrices $M_1,M_2,\ldots, M_n$ such that $\ket{\psi} = (M_1\otimes M_2\otimes \ldots \otimes M_n) \ket{\phi}$ \cite[Section~II.A]{dur_three_2000}. In particular, SLOCC operations do not need to be symmetric under permutations.} From now on, we will sometimes mix standard holant terminology (or notation) and quantum terminology (or notation). \subsection{Entanglement and its classification} \label{s:entanglement} One major difference between quantum theory and preceding theories of physics (known as `classical physics') is the possibility of \emph{entanglement} in states of multiple systems. \begin{definition}\label{def:entanglement} A state of multiple systems is \emph{entangled} if it cannot be written as a tensor product of states of individual systems. \end{definition} \begin{ex} In the case of two qubits, \begin{equation} \ket{00}+\ket{01}+\ket{10}+\ket{11} \end{equation} is a product state -- it can be written as $(\ket{0}+\ket{1})\otimes(\ket{0}+\ket{1})$. On the other hand, consider the state \begin{equation} \ket{00}+\ket{11}. \end{equation} It is impossible to find single-qubit states $\ket{f},\ket{g}\in\mathbb{C}^2$ such that $\ket{f}\otimes\ket{g} = \ket{00}+\ket{11}$. 
{In function notation, this can be seen by noting that if $h(x,y)=f(x)g(y)$, then \[ h(0,0)h(1,1)-h(0,1)h(1,0) = f(0)g(0)f(1)g(1) - f(0)g(1)f(1)g(0) = 0, \] whereas for $h(x,y)=\mathrm{EQ}_2(x,y)$, we have $h(0,0)h(1,1)-h(0,1)h(1,0) = 1$.} Thus, $\ket{00}+\ket{11}$ is entangled. \end{ex} Where a state involves more than two systems, it is possible for some of the systems to be entangled with each other and for other systems to be in a product state with respect to the former. We sometimes use the term \emph{genuinely entangled state} to refer to a state in which no subsystem is in a product state with the others. The term \emph{multipartite entanglement} refers to entangled states in which more than two qubits are mutually entangled. {Under the bijection between functions and vectors described in Section~\ref{s:vector_perspective}, a state vector is entangled if and only if the corresponding function is non-degenerate. A state is genuinely entangled if and only if the corresponding function is non-decomposable. Finally, a state has multipartite entanglement if and only if the corresponding function has a non-decomposable factor of arity at least three. In other words, entangled states correspond to functions in $\Upsilon\setminus\ang{\mathcal{U}}$ and multipartite entangled states correspond to functions in $\Upsilon\setminus\ang{\mathcal{T}}$.} Entanglement is an important resource in quantum computation, where it has been shown that quantum speedups are impossible without the presence of unboundedly growing amounts of entanglement \cite{jozsa_role_2003}. Similarly, it is a resource in quantum information theory \cite{nielsen_quantum_2010}, featuring in protocols such as quantum teleportation \cite{bennett_teleporting_1993} and quantum key distribution \cite{ekert_quantum_1991}.
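The determinant criterion from the two-qubit example above can be sketched numerically (an illustrative aside, not part of the formal development):

```python
import numpy as np

def is_product(h):
    """A two-qubit vector [h00, h01, h10, h11] is a product state iff
    h00*h11 - h01*h10 = 0 (the determinant condition from the example)."""
    return np.isclose(h[0] * h[3] - h[1] * h[2], 0)

plus_plus = np.array([1, 1, 1, 1], dtype=complex)  # (|0>+|1>) tensor (|0>+|1>)
bell = np.array([1, 0, 0, 1], dtype=complex)       # |00> + |11>, i.e. EQ_2

assert is_product(plus_plus)
assert not is_product(bell)
```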
Many quantum information protocols have the property that two quantum states can be used to perform the same task if one can be transformed into the other by SLOCC, motivating the following equivalence relation. \begin{definition}\label{def:SLOCC-equivalence} Two $n$-qubit states are \emph{equivalent under SLOCC} if one can be transformed into the other using SLOCC. More formally: suppose $\ket{f},\ket{g}\in(\mathbb{C}^2)\t{n}$ are two $n$-qubit states. Then $\ket{f} \sim_{SLOCC} \ket{g}$ if and only if there exist invertible complex 2 by 2 matrices $M_1,M_2,\ldots, M_n$ such that $\left(M_1\otimes M_2\otimes \ldots \otimes M_n\right) \ket{f} = \ket{g}$. \end{definition} The equivalence classes of this relation are called \emph{entanglement classes} or \emph{SLOCC classes}. {This definition is justified because SLOCC does not affect the decomposition of a state into tensor factors. To see this, suppose $\ket{f}, \ket{g}$ are two $n$-qubit states satisfying Definition~\ref{def:SLOCC-equivalence}. Furthermore, suppose $\ket{f}$ decomposes as $\ket{f_1^k}\otimes\ket{f_{k+1}^n}$ for some $k$ with $1\leq k<n$, where $\ket{f_1^k}$ is a state on the first $k$ qubits and $\ket{f_{k+1}^n}$ is a state on the last $(n-k)$ qubits. Then \begin{align*} \ket{g} &= \left(M_1\otimes M_2\otimes \ldots \otimes M_n\right) \ket{f} \\ &= \left(M_1\otimes M_2\otimes \ldots \otimes M_n\right) \left(\ket{f_1^k}\otimes\ket{f_{k+1}^n}\right) \\ &= \left((M_1\otimes\ldots\otimes M_k)\ket{f_1^k}\right) \otimes \left((M_{k+1}\otimes\ldots\otimes M_n)\ket{f_{k+1}^n}\right), \end{align*} so $\ket{g}$ decomposes in the same way as $\ket{f}$. Since the matrices $M_1,M_2,\ldots, M_n$ are invertible, the converse also holds. Hence $\ket{g}$ can be decomposed as a tensor product according to some partition if and only if $\ket{f}$ can be decomposed according to the same partition. Therefore SLOCC does not affect entanglement. 
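The factorisation computation above can be sanity-checked numerically via the mixed-product property of the Kronecker product; the following sketch (an informal aside, with randomly chosen matrices) checks it for three qubits with $k=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
# three random single-qubit matrices (invertible with probability 1)
Ms = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]

f1 = np.array([1, 2], dtype=complex)         # state of the first qubit
f23 = np.array([1, 0, 0, 1], dtype=complex)  # EQ_2 on qubits 2 and 3
f = np.kron(f1, f23)                         # |f> = |f_1> (x) |f_23>

lhs = np.kron(np.kron(Ms[0], Ms[1]), Ms[2]) @ f
rhs = np.kron(Ms[0] @ f1, np.kron(Ms[1], Ms[2]) @ f23)
assert np.allclose(lhs, rhs)  # the tensor factorisation survives the SLOCC map
```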
Due to the correspondences between vectors and functions outlined in the first part of the section, the same holds for holographic transformations.} For two qubits, there is only one class of entangled states: all entangled two-qubit states are equivalent to $\ket{00}+\ket{11}$ (the vector corresponding to $\mathrm{EQ}_2$) under SLOCC. For three qubits, there are two classes of genuinely entangled states, the GHZ class and the $W$ class \cite{dur_three_2000}. The former contains states that are equivalent under SLOCC to the GHZ state: \begin{equation} \ket{\mathrm{GHZ}} := \frac{1}{\sqrt{2}}(\ket{000}+\ket{111}), \end{equation} the latter those equivalent to the $W$ state: \begin{equation} \ket{W} := \frac{1}{\sqrt{3}}(\ket{001}+\ket{010}+\ket{100}). \end{equation} Note that, up to scalar factor, $\ket{\mathrm{GHZ}}$ is the vector corresponding to $\mathrm{EQ}_3$ and $\ket{W}$ is the vector corresponding to $\mathrm{ONE}_3$. We say that a function has GHZ type if it is equivalent to the GHZ state under local holographic transformations and that a function has $W$ type if it is equivalent to the $W$ state under local holographic transformations. Local holographic transformations include invertible scaling, so non-zero scalar factors do not affect the entanglement classification. In the holant literature, GHZ-type functions have been called the \emph{generic case} and $W$-type functions have been called the \emph{double-root case}, cf.\ \cite[Section~3]{cai_holant_2012}. The two types of genuinely entangled ternary functions can be distinguished as follows. \begin{lemma}[\cite{li_simple_2006}]\label{lem:li} Let $f$ be a ternary function and write $f_{klm}:=f(k,l,m)$ for all $k,l,m \in\{0,1\}$. Then $f$ has GHZ type if the following polynomial in the function values is non-zero: \begin{equation}\label{eq:GHZ_polynomial} (f_{000}f_{111} - f_{010}f_{101} + f_{001}f_{110} - f_{011}f_{100})^2 - 4(f_{010}f_{100}-f_{000}f_{110})(f_{011}f_{101}-f_{001}f_{111}).
\end{equation} The function $f$ has $W$ type if the polynomial \eqref{eq:GHZ_polynomial} is zero, and furthermore each of the following three expressions is satisfied: \begin{align} (f_{000}f_{011}\neq f_{001}f_{010}) &\vee (f_{101}f_{110}\neq f_{100}f_{111}) \label{eq:W1} \\ (f_{001}f_{100}\neq f_{000}f_{101}) &\vee (f_{011}f_{110}\neq f_{010}f_{111}) \label{eq:W2} \\ (f_{011}f_{101}\neq f_{001}f_{111}) &\vee (f_{010}f_{100}\neq f_{000}f_{110}). \label{eq:W3} \end{align} If the polynomial \eqref{eq:GHZ_polynomial} is zero and at least one of the three expressions evaluates to false, $f$ is decomposable. In fact, for any decomposable $f$, at least two of the expressions are false. \end{lemma} The above lemma can be specialised to symmetric functions as follows. \begin{lemma}\label{lem:li_symmetric} Let $f$ be a ternary symmetric function and write $f=[f_0,f_1,f_2,f_3]$. Then $f$ has GHZ type if the following polynomial in the function values is non-zero: \begin{equation}\label{eq:GHZ_polynomial_symmetric} (f_0 f_3 - f_1 f_2)^2 - 4(f_1^2 - f_0 f_2)(f_2^2 - f_1 f_3) \neq 0. \end{equation} The function $f$ has $W$ type if the above polynomial is zero and furthermore \[ (f_1^2 \neq f_0 f_2) \vee (f_2^2 \neq f_1 f_3). \] If the polynomial in \eqref{eq:GHZ_polynomial_symmetric} is zero and the above expression evaluates to false, $f$ is decomposable. \end{lemma} For joint states of more than three qubits, there are infinitely many SLOCC classes. It is possible to partition these into families which share similar properties. Yet, so far, there is no consensus on how to partition the classes: there are different schemes for partitioning even the four-qubit entanglement classes, yielding different families \cite{verstraete_four_2002,lamata_inductive_2007,backens_inductive_2016}. A \emph{generalised GHZ state} is a vector corresponding (up to scalar factor) to $\mathrm{EQ}_k$ for some integer $k\geq 3$. 
A \emph{generalised $W$ state} is a vector corresponding (up to scalar factor) to $\mathrm{ONE}_k$ for some integer $k\geq 3$. \subsection{The existing results in the quantum picture} \label{s:existing_quantum} {Using the correspondence between vectors and functions,} several of the existing dichotomies have straightforward descriptions in the quantum picture. The tractable cases of Theorem~\ref{thm:Holant-star} can be described as follows: \begin{itemize} { \item The case $\mathcal{F}\subseteq\avg{\mathcal{T}}$ corresponds to vectors with no multipartite entanglement. Unbounded multipartite entanglement is needed for quantum computation to offer any advantage over non-quantum computation \cite{jozsa_role_2003}, so it makes sense that its absence would lead to a holant problem that is polynomial-time computable. \item In the cases $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$ or $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}$, assuming $\mathcal{F}\nsubseteq\avg{\mathcal{T}}$, there is GHZ-type multipartite entanglement. To see this, note first that if $f\in\ang{\mathcal{E}}\setminus\avg{\mathcal{T}}$ is non-decomposable, then $f$ must have arity at least 3 and be non-zero on exactly two \new{complementary} inputs. Suppose $f$ has arity $n$, and let $\mathbf{a}\in\new{\{0,1\}^n}$ be such that $f(\mathbf{a})\neq 0$. \new{Without loss of generality, assume $a_1=0$ (if $a_1=1$, replace $\mathbf{a}$ by $\bar{\mathbf{a}}$)}. Then \[ \ket{f} = \left(\pmm{f(\mathbf{a})&0\\0&f(\bar{\mathbf{a}})} \otimes \bigotimes_{k=2}^n X^{a_k}\right) \ket{\mathrm{EQ}_n}, \] where $X^0$ is the identity matrix; hence $\ket{f}\sim_{SLOCC}\ket{\mathrm{EQ}_n}$. Further holographic transformations do not affect the equivalence under SLOCC. As formally shown in \cite[Lemma~46]{backens_holant_2018}, the sets $\avg{O\circ\mathcal{E}}$ and $\avg{K\circ\mathcal{E}}$ are already closed under taking gadgets, so every non-decomposable function of arity $n$ in these sets is SLOCC-equivalent to $\mathrm{EQ}_n$.
This means it is impossible to realise $W$-type multipartite entanglement \new{from $\mathcal{F}$} via gadgets, which again indicates these cases are insufficient to describe full quantum computation. \item Finally, in the case $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, again assuming $\mathcal{F}\nsubseteq\avg{\mathcal{T}}$, there is $W$-type multipartite entanglement. To see this, note first that if $f\in\ang{\mathcal{M}}\setminus\ang{\mathcal{T}}$ is non-decomposable, then $f$ must have arity at least 3. Suppose $n:=\ari(f)\geq 3$ and there exists an index $k\in[n]$ such that $f$ is 0 on the bit string that has a 1 only on position $k$ and zeroes elsewhere. Then by the definition of $\mathcal{M}$, the function $f$ is 0 whenever input $k$ is non-zero; hence $f$ can be decomposed as $f(\vc{x}{n}) = \dl_0(x_k) f'(\vc{x}{k-1},x_{k+1},\ldots, x_{n})$. Thus, any non-decomposable $f\in\ang{\mathcal{M}}$ has support on all bit strings of Hamming weight exactly 1. Now suppose $f\in\mathcal{M}$ is a non-decomposable function of arity $n$, then if $f(0\ldots 0)=0$, we have \[ \ket{f} = \left(\bigotimes_{k=1}^n \pmm{1&0\\0&f(\mathbf{e}_k)}\right) \ket{\mathrm{ONE}_n}, \] where $\mathbf{e}_k$ for $1\leq k\leq n$ is the $n$-bit string that contains a single 1 in position $k$ and zeroes elsewhere. If $f(0\ldots 0)\neq 0$, then \[ \ket{f} = f(0\ldots 0) \left(\bigotimes_{k=1}^n \pmm{1&\frac{1}{n}\\0&\frac{f(\mathbf{e}_k)}{f(0\ldots 0)}}\right) \ket{\mathrm{ONE}_n}. \] In both cases $\ket{f}\sim_{SLOCC}\ket{\mathrm{ONE}_n}$, i.e.\ any non-decomposable function in $\mathcal{M}$ has $W$-type entanglement. Further holographic transformations do not affect SLOCC-equivalence. Again, it was shown in \cite[Lemma~46]{backens_holant_2018} that $\ang{K\circ\mathcal{M}}$ and $\ang{KX\circ\mathcal{M}}$ are closed under taking gadgets, so every non-decomposable function of arity $n$ they contain is SLOCC-equivalent to $\mathrm{ONE}_n$. 
In particular, it is impossible to realise GHZ-type multipartite entanglement. } \end{itemize} The family of affine functions (cf.\ Definition~\ref{dfn:affine_function}) also has a natural description in quantum information theory: the quantum states corresponding to affine functions are known as \emph{stabiliser states} \cite{dehaene_clifford_2003}. These states and the associated operations play an important role in the context of quantum error-correcting codes \cite{gottesman_heisenberg_1998} and are thus at the core of most attempts to build large-scale quantum computers \cite{devitt_quantum_2013}. The fragment of quantum theory consisting of stabiliser states and operations that preserve the set of stabiliser states can be efficiently simulated on a classical computer \cite{gottesman_heisenberg_1998}; this result is known as the Gottesman-Knill theorem. The connection between affine functions and stabiliser quantum mechanics has recently been independently noted and explored in \cite{cai_clifford_2018}. The examples in this section show that holant problems and quantum information theory are linked not only by quantum algorithms being an inspiration for holographic ones: instead, many of the known tractable sets of functions of various holant problems correspond to state sets that are of independent interest in quantum computation and quantum information theory. The one exception is the set of local affine functions, which seems not to have been described in the quantum literature, possibly because this set does not contain any interesting unitary operations. The restriction to algebraic numbers is not a problem from the quantum perspective, not even when considering the question of universal quantum computation: there exist (approximately) universal sets of quantum operations where each operation can be described using algebraic complex coefficients.
One such example is the Clifford+T gate set \cite{boykin_universal_1999,giles_exact_2013}, which is generated by the operators \[ \pmm{1&0\\0&e^{i\pi/4}}, \qquad \frac{1}{\sqrt{2}}\pmm{1&1\\1&-1}, \quad\text{and}\quad \pmm{1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0}. \] {\subsection{Affine functions, holographic transformations, and entanglement} The lemmas in this section, like those in Section~\ref{s:linear-algebra}, are straightforward. They will be useful in the complexity classification proofs later. The first result is well known in quantum theory. It is also} closely related to the theory of functional clones \cite{bulatov_expressibility_2013,bulatov_functional_2017} and holant clones \cite{backens_holant_2018}, though we will not introduce the full formalisms of those frameworks here. {Instead,} we give a proof in the language of gadgets. \begin{lemma}\label{lem:affine_closed} The set of affine functions is closed under taking gadgets, i.e.\ $S(\mathcal{A})=\mathcal{A}$. \end{lemma} \begin{proof} Suppose $G=(V,E,E')$ is a graph with vertices $V$, (normal) edges $E$, and dangling edges $E'=\{\vc{e}{n}\}$, where $E\cap E'=\emptyset$. Let $\Gamma=(\mathcal{A},G,\pi)$ be a gadget with effective function \[ g_\Gamma(\vc{y}{n}) = \sum_{\sigma:E\to\{0,1\}} \prod_{v\in V} f_v(\hat{\sigma}|_{E(v)}), \] where $\hat{\sigma}$ is the extension of $\sigma$ to domain $E\cup E'$ which satisfies $\hat{\sigma}(e_k)=y_k$ for all $k\in[n]$, and $\hat{\sigma}|_{E(v)}$ is the restriction of $\hat{\sigma}$ to edges (both normal and dangling) which are incident on $v$. We prove $g_\Gamma\in\mathcal{A}$ by induction on the number of normal edges $m:=\abs{E}$. The base case $m=0$ implies that all edges are dangling and $g_\Gamma = \bigotimes_{v\in V} f_v$. Then by associativity of $\otimes$ and by repeated application of Lemma~\ref{lem:cai_clifford}~(1), we have $g_\Gamma\in\mathcal{A}$. For the inductive step, assume the desired property holds if there are $m$ normal edges.
Consider a gadget $\Gamma=(\mathcal{A},G,\pi)$ with $m+1$ normal edges and $n$ dangling edges. Pick some $e=\{u,v\}\in E$ and `cut it', i.e.\ replace it by two dangling edges $e_{n+1},e_{n+2}$, where $e_{n+1}$ is incident on $u$ and $e_{n+2}$ is incident on $v$. Let $\bar{E}=E\setminus\{e\}$ and let $E''=E'\cup\{e_{n+1},e_{n+2}\}$. The resulting graph is $G'=(V,\bar{E},E'')$. Since $G'$ has the same vertices as $G$ and each vertex has the same degree in both graphs, $\Gamma'=(\mathcal{A},G',\pi)$ is a valid gadget, where $\pi$ is the same map as before. Then \[ g_{\Gamma'}(\vc{y}{n+2}) = \sum_{\sigma:\bar{E}\to\{0,1\}} \prod_{v\in V} f_v(\hat{\sigma}'|_{E(v)}), \] where $\hat{\sigma}'$ is the extension of $\sigma$ to domain $\bar{E}\cup E''$ which satisfies $\hat{\sigma}'(e_k)=y_k$ for all $k\in[n+2]$. Now $\Gamma'$ is a gadget with $m$ normal edges, so by the inductive hypothesis, $g_{\Gamma'}(\vc{y}{n+2})\in\mathcal{A}$. But $g_\Gamma(\vc{y}{n}) = \sum_{y_{n+1}\in\{0,1\}} g_{\Gamma'}(\vc{y}{n},y_{n+1},y_{n+1})$, i.e.\ $g_\Gamma = (g_{\Gamma'}^{y_{n+2}=y_{n+1}})^{y_{n+1}=*}$. Thus, by Lemma~\ref{lem:cai_clifford}~(3) and (4), we have $g_\Gamma\in\mathcal{A}$. \end{proof} \begin{lemma}\label{lem:cS_cA-group} The set $\mathcal{B}_\mathcal{A} := \{L\in\operatorname{GL}_2(\AA)\mid f_L\in\mathcal{A}\}$ is a group under matrix multiplication. \end{lemma} \begin{proof} {Closure under matrix multiplication follows directly from Lemma~\ref{lem:affine_closed}. The identity matrix corresponds to $\mathrm{EQ}_2$, which is affine, so $\mathcal{B}_\mathcal{A}$ contains the identity. 
For closure under inverse, note that by Definition~\ref{dfn:affine_function} any matrix $A\in\mathcal{B}_\mathcal{A}$ corresponds to a function \[ f_A(x,y) = ci^{\ell(x,y)}(-1)^{q(x,y)}\chi(x,y), \] where $c\in\AA\setminus\{0\}$ by invertibility of $A$, $\ell,q:\{0,1\}^2\to\{0,1\}$ with $\ell$ being linear and $q$ being quadratic, and $\chi\in\{[1,1,1],\,[1,0,1],\,[0,1,0]\}$ is the indicator function for an affine support. (With support on some other affine subspace of $\{0,1\}^2$, $A$ could not be invertible.) Constant terms in $\ell$ or $q$ can be absorbed into $c$, and for Boolean variables, $x^2=x$, so the square terms of $q$ can be absorbed into $\ell$. Thus, without loss of generality, $\ell(x,y)=\ld x + \mu y$ and $q(x,y)= \kappa xy$ for some $\ld,\mu\in\{0,1,2,3\}$ and $\kappa\in\{0,1\}$. Let $P:=\smm{1&0\\0&i}$. Now if $\chi=[1,1,1]$, then \[ A = c\pmm{1 & i^{\mu} \\ i^{\ld} & (-1)^\kappa i^{\ld+\mu}} = c P^\ld \pmm{1&1\\1&(-1)^\kappa} P^\mu. \] For $A$ to be invertible, $\kappa$ must be 1. But then $A^{-1} = \tfrac{1}{2} c^{-1} P^{4-\mu} \smm{1&1\\1&-1} P^{4-\ld} \in \mathcal{B}_\mathcal{A}$, since $P^4=I$, $\smm{1&1\\1&-1}^2=2I$, and affine functions are closed under non-zero scaling. On the other hand, if $\chi=[1,0,1]$ or $\chi=[0,1,0]$, then \[ A = c\pmm{1 & 0 \\ 0 & (-1)^\kappa i^{\ld+\mu}} \qquad\text{or}\qquad A = c\pmm{0 & i^{\mu} \\ i^{\ld} & 0}. \] Again, in both cases $A^{-1}$ has the same form as $A$, so $A^{-1}\in\mathcal{B}_\mathcal{A}$. Hence $\mathcal{B}_\mathcal{A}$ is a group.} \end{proof} Up to scaling, the unary elements of $\mathcal{A}$ are \[ \dl_0:=[1,0],\quad \dl_1:=[0,1],\quad \dl_+:=[1,1],\quad \dl_-:=[1,-1],\quad \dl_i:=[1,i],\quad\text{and}\quad \dl_{-i}:=[1,-i]. \] We say the pairs $\{\dl_0,\dl_1\}$, $\{\dl_+,\dl_-\}$, and $\{\dl_i,\dl_{-i}\}$ are orthogonal pairs\footnote{This is because the two corresponding vectors are orthogonal \new{under the complex inner product}.}; any other pair of distinct unary functions $u,v\in\mathcal{A}$ is called non-orthogonal. 
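As a quick numerical sanity check of the inverse computation in the proof of Lemma~\ref{lem:cS_cA-group} (illustrative only, not part of the argument), the following Python sketch verifies that for $\kappa=1$ and every $\ld,\mu\in\{0,1,2,3\}$, the matrix $A = cP^\ld\smm{1&1\\1&-1}P^\mu$ has inverse $\tfrac{1}{2c}P^{4-\mu}\smm{1&1\\1&-1}P^{4-\ld}$; the helper functions and the sample constant $c$ are ad hoc choices, not from the paper.

```python
# Check: with P = diag(1, i) and H' = [[1,1],[1,-1]], the matrix
# A = c * P^lam * H' * P^mu satisfies
#   A^{-1} = (1/(2c)) * P^(4-mu) * H' * P^(4-lam),
# using P^4 = I and H'^2 = 2I.

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    """n-th power of a 2x2 matrix (n >= 0)."""
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def scal(c, A):
    """Scalar multiple of a 2x2 matrix."""
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

P = [[1, 0], [0, 1j]]
H = [[1, 1], [1, -1]]
I2 = [[1, 0], [0, 1]]

c = 3 - 2j  # arbitrary non-zero constant (example value)
ok = True
for lam in range(4):
    for mu in range(4):
        A = scal(c, mat_mul(mat_pow(P, lam), mat_mul(H, mat_pow(P, mu))))
        Ainv = scal(1 / (2 * c), mat_mul(mat_pow(P, 4 - mu),
                                         mat_mul(H, mat_pow(P, 4 - lam))))
        ok = ok and close(mat_mul(A, Ainv), I2) and close(mat_mul(Ainv, A), I2)
print(ok)  # prints True
```

The $\tfrac{1}{2}$ is a harmless scalar: non-zero scalings do not affect membership in $\mathcal{B}_\mathcal{A}$, since the constant $c$ in an affine function is arbitrary.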
{It is straightforward to see that if $u,u^\perp$ are an orthogonal pair and $v\in\mathcal{A}$ is a unary function that is not a scaling of $u$ or $u^\perp$, then $v(x)=\alpha\cdot u(x) + \beta\cdot u^\perp(x)$ where $\alpha^4=\beta^4$.} \begin{lemma}\label{lem:cS-cA} Suppose $M\in\mathcal{B}=\left\{L\in\operatorname{GL}_2(\AA) \,\middle|\, L^T\circ\{\mathrm{EQ}_2,\dl_0,\dl_1\}\sse\mathcal{A} \right\}$ and $M^T\circ\dl_+,M^T\circ\dl_-\in\mathcal{A}$. Then $M\in\mathcal{B}_\mathcal{A}=\{L\in\operatorname{GL}_2(\AA)\mid f_L\in\mathcal{A}\}$. \end{lemma} \begin{proof} {The property $M\in\mathcal{B}$ implies there are $u,v\in\{\dl_0,\dl_1,\dl_+,\dl_-,\dl_i,\dl_{-i}\}$ and $\ld,\mu\in\AA\setminus\{0\}$ such that $M^T\circ\dl_0=\ld\cdot u$ and $M^T\circ\dl_1=\mu\cdot v$. But $\dl_\pm(x)=\dl_0(x)\pm\dl_1(x)$, so by linearity $(M^T\circ\dl_\pm)(x) = \ld\cdot u(x) \pm\mu\cdot v(x)$. First, suppose $u$ and $v$ are orthogonal. Then $M^T = M'\smm{\ld&0\\0&\mu}$ where \[ M' \in \left\{ \pmm{1&0\\0&1}, \pmm{0&1\\1&0}, \pmm{1&1\\1&-1}, \pmm{1&1\\-1&1}, \pmm{1&1\\i&-i} , \pmm{1&1\\-i&i} \right\}, \] since this set contains all matrices that map $\dl_0$ and $\dl_1$ to a pair of orthogonal functions in $\mathcal{A}$, including permutations. It is straightforward to check that if $u$ and $v$ are orthogonal, then $M^T\circ\dl_\pm\in\mathcal{A}$ if and only if $\ld^4=\mu^4$. But then $M^T$ is a product of two matrices corresponding to functions in $\mathcal{A}$, so $M\in\mathcal{B}_\mathcal{A}$ by Lemma~\ref{lem:affine_closed}. Now suppose $u$ and $v$ are not orthogonal; denote by $u^\perp$ the function which forms an orthogonal pair with $u$. Then there exist $\alpha,\beta\in\AA\setminus\{0\}$ with $\alpha^4=\beta^4$ such that $v(x) = \alpha\cdot u(x) + \beta\cdot u^\perp(x)$. Thus \[ (M^T\circ\dl_\pm)(x) = (\ld\pm\mu\alpha)\cdot u(x) \pm\mu\beta\cdot u^\perp(x). 
\] These two functions are in $\mathcal{A}$ if and only if both $(\ld+\mu\alpha)^4=\mu^4\beta^4$ and $(\ld-\mu\alpha)^4=\mu^4\beta^4$. That means \[ (\ld+\mu\alpha)^4 = (\ld-\mu\alpha)^4 \quad\Longleftrightarrow\quad \ld\mu\alpha(\ld^2 + \mu^2\alpha^2) = 0 \quad\Longleftrightarrow\quad \ld = \pm i \mu\alpha \] Then $\mu^4\beta^4 = (1+i)^4\mu^4\alpha^4$, so since all of the numbers are non-zero, we have $\beta^4 = -4\alpha^4$. This contradicts the assumption $\alpha^4=\beta^4$. Therefore this case cannot happen and we always have $M\in\mathcal{B}_\mathcal{A}$ by the previous case.} \end{proof} {The following makes more precise some of the arguments about entanglement types in Section~\ref{s:existing_quantum} in the context of ternary functions, and extends the argument to functions in $\mathcal{A}$.} \begin{lemma}\label{lem:family-types} Suppose $f\in\ang{\mathcal{E}}$ is a non-decomposable ternary function, then $f$ has GHZ type. Similarly, suppose $g\in\ang{\mathcal{M}}$ is a non-decomposable ternary function, then $g$ has $W$ type. Finally, suppose $h\in\mathcal{A}$ is a non-decomposable ternary function, then $h$ has GHZ type. \end{lemma} \begin{proof} Suppose $f\in\ang{\mathcal{E}}$ is a non-decomposable ternary function. Non-decomposability implies $f\in\mathcal{E}$, so there exists $\mathbf{a}\in\{0,1\}^3$ such that $f(\mathbf{x})=0$ unless $\mathbf{x}\in\{\mathbf{a},\bar{\mathbf{a}}\}$. Since $f$ is non-decomposable, $f(\mathbf{a})$ and $f(\bar{\mathbf{a}})$ must both be non-zero. We can thus find matrices $A,B,C\in\{I,X\}$ such that $(A\otimes B\otimes C)\ket{f}= f_{\mathbf{a}}\ket{000}+f_{\bar{\mathbf{a}}}\ket{111}$, which is clearly a GHZ-type state. But this is an SLOCC operation, which does not affect the entanglement class, so $f$ has GHZ type. Now suppose $g\in\ang{\mathcal{M}}$ is a non-decomposable ternary function. Non-decomposability implies $g\in\mathcal{M}$, hence $g(\mathbf{x})=0$ whenever $\abs{\mathbf{x}}>1$. 
The polynomial in \eqref{eq:GHZ_polynomial} becomes \begin{multline*} (g_{000}g_{111} - g_{010}g_{101} + g_{001}g_{110} - g_{011}g_{100})^2 - 4(g_{010}g_{100}-g_{000}g_{110})(g_{011}g_{101}-g_{001}g_{111}) \\ = (0 - 0 + 0 - 0)^2 - 4(g_{010}g_{100}-0)(0-0) = 0. \end{multline*} Yet $g$ is non-decomposable by assumption, therefore Lemma~\ref{lem:li} implies that $g$ must have $W$~type. Finally, suppose $h\in\mathcal{A}$ is a non-decomposable ternary function. Then by Definition~\ref{dfn:affine_function}, $h(\mathbf{x}) = c i^{l(\mathbf{x})} (-1)^{q(\mathbf{x})} \chi_{A\mathbf{x}=\mathbf{b}}(\mathbf{x})$ where $c\in\AA\setminus\{0\}$ is a constant, $l$ is a linear Boolean function, $q$ is a quadratic Boolean function, and $\chi$ is a 0-1 valued indicator function for an affine subspace of $\{0,1\}^3$. By~\cite{montanaro_hadamard_2006}, for any function $h'\in\Upsilon_n$ there exist matrices $\vc{M}{n}\in\{I,H\}$, where $H=\smm{1&1\\1&-1}$ is the Hadamard matrix, such that $(M_1\otimes\ldots\otimes M_n)\ket{h'}$ is everywhere non-zero. Both $I$ and $H$ correspond to affine functions, so by Lemmas~\ref{lem:hc_gadget} and~\ref{lem:affine_closed}, if $h'\in\mathcal{A}$ then the function corresponding to $(M_1\otimes\ldots\otimes M_n)\ket{h'}$ is also in $\mathcal{A}$. Hence, since SLOCC operations do not affect the entanglement class, we may assume without loss of generality that $h$ has full support by replacing it with the function transformed according to \cite{montanaro_hadamard_2006} if necessary. Then $\chi$ is the constant-1 function and can be ignored. {Now, a SLOCC transformation by $P := \smm{1&0\\0&i}$ on argument $x_k$ contributes a factor $i^{x_k}$ to the overall function. Thus, by such transformations, we can make $l$ trivial and remove all terms of the form $x_k^2$ from $q$ without changing the entanglement. 
It thus suffices to consider the function $h'(x_1,x_2,x_3) = c(-1)^{\gamma_{12} x_1 x_2 + \gamma_{13} x_1 x_3 + \gamma_{23} x_2 x_3}$, where $\gamma_{12},\gamma_{13},\gamma_{23}\in\{0,1\}$. Then the first term of the polynomial in \eqref{eq:GHZ_polynomial} becomes \begin{multline*} (h'_{000}h'_{111} - h'_{010}h'_{101} + h'_{001}h'_{110} - h'_{011}h'_{100})^2 \\ = c^4 \Big((-1)^{\gamma_{12}+\gamma_{13}+\gamma_{23}} - (-1)^{\gamma_{13}} + (-1)^{\gamma_{12}} - (-1)^{\gamma_{23}}\Big)^2. \end{multline*} The second term becomes \begin{align*} -4(h'_{010}h'_{100}-h'_{000}h'_{110})(h'_{011}h'_{101} &-h'_{001}h'_{111}) \\ &=- 4c^4 \Big(1-(-1)^{\gamma_{12}}\Big) \Big((-1)^{\gamma_{13}+\gamma_{23}}-(-1)^{\gamma_{12}+\gamma_{13}+\gamma_{23}}\Big) \\ &=- 4c^4 (-1)^{\gamma_{13}+\gamma_{23}} \Big(1-(-1)^{\gamma_{12}}\Big)^2. \end{align*} Thus, if $\gamma_{12}=0$, the polynomial in \eqref{eq:GHZ_polynomial} is equal to \[ c^4 \Big((-1)^{\gamma_{13}+\gamma_{23}} - (-1)^{\gamma_{13}} + 1 - (-1)^{\gamma_{23}}\Big)^2, \] which is $16c^4$ if $\gamma_{13}=\gamma_{23}=1$, and 0 otherwise. If $\gamma_{12}=1$, the polynomial becomes \[ c^4 \Big(-(-1)^{\gamma_{13}+\gamma_{23}} - (-1)^{\gamma_{13}} - 1 - (-1)^{\gamma_{23}}\Big)^2 - 16 c^4 (-1)^{\gamma_{13}+\gamma_{23}}, \] which is 0 if $\gamma_{13}=\gamma_{23}=0$, and non-zero otherwise. Hence the function has GHZ type if and only if $\gamma_{12}+\gamma_{13}+\gamma_{23}\geq 2$. It remains to see what happens if $\gamma_{12}+\gamma_{13}+\gamma_{23} < 2$. Now, the condition \eqref{eq:W1}, $(h'_{000}h'_{011}\neq h'_{001}h'_{010}) \vee (h'_{101}h'_{110}\neq h'_{100}h'_{111})$, becomes \[ \Big( (-1)^{\gamma_{23}} \neq 1 \Big) \vee \Big((-1)^{\gamma_{12}+\gamma_{13}} \neq (-1)^{\gamma_{12}+\gamma_{13}+\gamma_{23}} \Big), \] which reduces to the single inequality $\gamma_{23}\neq 0$. Similarly, \eqref{eq:W2} becomes $\gamma_{13}\neq 0$ and \eqref{eq:W3} becomes $\gamma_{12}\neq 0$. 
Hence $h$ either has GHZ type or it decomposes.} Thus, any non-decomposable ternary function in $\mathcal{A}$ has GHZ type. \end{proof} \section{\textsf{Holant}\texorpdfstring{\textsuperscript{+}}{\textasciicircum +}} \label{s:Holant_plus} Before deriving the dichotomy for $\mathsf{Holant}^c$, we consider a new family of holant problems, called $\mathsf{Holant}^+$, which fits between conservative holant problems and $\mathsf{Holant}^c$: it has four freely available functions, which are all unary and include the pinning functions. Using results from quantum information theory, these four functions can be shown to be sufficient for constructing the gadgets required to apply the dichotomies in Section \ref{s:results_ternary_symmetric}. Formally, for any finite $\mathcal{F}\sse\Upsilon$: \begin{equation} \Holp[+]{\mathcal{F}} := \Holp{\mathcal{F}\cup\{\dl_0,\dl_1,\dl_+,\dl_-\}}. \end{equation} Note that the vectors $\ket{0}$ and $\ket{1}$ corresponding to $\dl_0$ and $\dl_1$ are orthogonal to each other. {Similarly,} the vectors $\ket{+}$ and $\ket{-}$ corresponding, {up to a scalar factor,} to $\dl_+$ and $\dl_-$ are orthogonal to each other. In quantum theory, the set $\{\ket{+},\ket{-}\}$ is known as the \emph{Hadamard basis} of $\mathbb{C}^2$, since these vectors are related to the computational basis vectors by a Hadamard transformation: $\{\ket{+},\ket{-}\}\doteq H\circ\{\ket{0},\ket{1}\}$, where $H = \frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\right)$. {Hence $\ket{+}=\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right)$ and $\ket{-}=\frac{1}{\sqrt{2}}\left(\ket{0}-\ket{1}\right)$, i.e.\ the Hadamard basis vectors differ from the vectors corresponding to $\delta_+$ and $\delta_-$ by a factor of $\frac{1}{\sqrt{2}}$, which does not affect any of the following arguments. In the next subsection, we first state and extend a result from quantum theory that is used in the later proofs. 
Specifically, we prove that if $\mathcal{F}\nsubseteq\ang{\mathcal{T}}$, then $S(\mathcal{F}\cup\{\dl_0,\dl_1,\dl_+,\dl_-\})$ contains a non-decomposable ternary function. It is vital for the proof to have all four unary functions available, e.g.\ if $\mathcal{F}=\{\mathrm{EQ}_4\}$, it would be impossible to produce a non-decomposable ternary function using only pinning. In Section~\ref{s:symmetrising_ternary}, we furthermore show that, under some mild assumptions on $\mathcal{F}$, the set $S(\mathcal{F}\cup\{\dl_0,\new{\dl}_1,\dl_+,\dl_-\})$ actually contains a \emph{symmetric} non-decomposable ternary function. In Section~\ref{s:binary}, we exhibit gadget constructions for certain binary functions and show that the assumptions of the previous section are satisfied if $\mathcal{F}$ is not one of the exceptional cases of Theorem~\ref{thm:Holant-star}. All the gadgets in these subsections are planar. To ensure that the full complexity classification works for planar holant problems, we next give a reduction between planar holant problems and planar counting CSPs in Section~\ref{s:interreducing_planar}. This is based on results sketched in the literature, but to our knowledge the full proof has not been written out before. Finally, in Section~\ref{s:hardness}, we combine all of the parts to prove the complexity classification for $\mathsf{Holant}^+$, which holds even when restricted to the planar case. } \subsection{Why these free functions?} The definition of $\mathsf{Holant}^+$ is motivated by the following results from quantum theory. We first state the results in quantum terminology and translate them into holant terminology at the end of the section. \begin{theorem}[{\cite[Lemma on p.~296]{popescu_generic_1992},\cite{gachechiladze_addendum_2016}}]\label{thm:popescu-rohrlich} Let $\ket{\Psi}$ be an $n$-system genuinely entangled quantum state. 
For any two of the $n$ systems, there exists a projection, onto a tensor product of states of the other $(n-2)$ systems, that leaves the two systems in an entangled state. \end{theorem} Here, `projection' means a (partial) inner product between $\ket{\Psi}$ and the tensor product of single-system states. {The $n$ systems do not have to be qubits, but for the purposes of the following arguments, it suffices to think of them as $n$ qubits.} The original proof of this statement in \cite{popescu_generic_1992} was flawed but it was recently corrected \cite{gachechiladze_addendum_2016}. The following corollary is not stated explicitly in either paper, but can be seen to hold by inspecting the proof in \cite{gachechiladze_addendum_2016}. \begin{corollary}\label{cor:popescu-rohrlich_restricted} {Let $\ket{\Psi}$ be an $n$-qubit genuinely entangled quantum state. For any two of the $n$ qubits, there exists a projection, onto a tensor product of computational and Hadamard basis states of the other $(n-2)$ qubits, that leaves the remaining two qubits in an entangled state.} \end{corollary} In other words, Theorem \ref{thm:popescu-rohrlich} holds when the systems are restricted to qubits and the projectors are restricted to products of computational and Hadamard basis states. Here, it is crucial to have projectors taken from two bases that are linked by the Hadamard transformation: the proof applies only in that case. {Intuitively, the corollary states that if $n$ parties share a genuinely entangled $n$-qubit state, then this can be converted into an entangled 2-qubit state shared by two of the parties using local projections\footnote{In the real world, these projections correspond to post-selected measurements, so without post-selection the protocol may fail in some runs.}. 
In holant terminology, the corollary corresponds to the following proposition about producing binary non-decomposable functions from a higher-arity non-decomposable function via gadgets with unary functions.} \begin{proposition}[Restatement of Corollary~\ref{cor:popescu-rohrlich_restricted}]\label{prop:popescu-rohrlich_gadget} Let $f$ be a non-decomposable function of arity {$n\geq 2$}. Suppose $j,k\in [n]$ with $j<k$. Then there exist $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in [n]\setminus\{j,k\}$ such that the following binary function is non-decomposable: \[ g(x_j,x_k) = \sum_{x_s\in\{0,1\}\text{ for } s\in [n]\setminus\{j,k\}} f(\vc{x}{n}) \prod_{m\in [n]\setminus\{j,k\}} u_m(x_m). \] This $g$ is the effective function of the gadget in Figure~\ref{fig:pr}a. \end{proposition} \begin{rem} Proposition~\ref{prop:popescu-rohrlich_gadget} can be considered an alternative definition of what it means to be non-decomposable. To see this, suppose the function $f\in\Upsilon_n$ is decomposable, i.e.\ there exists $1\leq k<n$ and functions $f_1,f_2$ such that: \[ f(\vc{x}{n}) = f_1(x_{\rho(1)},\ldots, x_{\rho(k)})f_2(x_{\rho(k+1)},\ldots, x_{\rho(n)}). \] Choose one argument from each partition, say $x_{\rho(1)}$ and $x_{\rho(n)}$; the argument is analogous for any other choice. Then for all $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ we have \begin{align*} g(x_{\rho(1)},x_{\rho(n)}) &= \sum_{x_s\in\{0,1\}\text{ for } s\in [n]\setminus\{\rho(1),\rho(n)\}} f(\vc{x}{n}) \prod_{m\in [n]\setminus\{\rho(1),\rho(n)\}} u_m(x_m) \\ &= \left( \sum_{x_{\rho(2)},\ldots, x_{\rho(k)}\in\{0,1\}} f_1(x_{\rho(1)},\ldots, x_{\rho(k)}) \prod_{m=2}^k u_m(x_{\rho(m)}) \right) \\ &\qquad\times \left( \sum_{x_{\rho(k+1)},\ldots, x_{\rho(n-1)}\in\{0,1\}} f_2(x_{\rho(k+1)},\ldots, x_{\rho(n)}) \prod_{m=k+1}^{n-1} u_m(x_{\rho(m)}) \right) \end{align*} which is clearly decomposable. 
Thus if the conclusion of Proposition~\ref{prop:popescu-rohrlich_gadget} holds for some $f$, then $f$ must be non-decomposable. \end{rem} We extend this proposition as follows. Note that all gadgets are planar. \begin{figure} \centering (a) \input{tikz_files/PR.tikz} \hfill (b) \input{tikz_files/PR-extended.tikz} \caption{(a) The gadget from Proposition~\ref{prop:popescu-rohrlich_gadget} and (b) the gadget from Theorem~\ref{thm:three-qubit-gadget}.} \label{fig:pr} \end{figure} \begin{theorem}\label{thm:three-qubit-gadget} Let $f$ be a non-decomposable function of arity $n\geq 3$. Then there exist $j,k,\ell\in [n]$ with $j<k<\ell$, and $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in [n]\setminus\{j,k,\ell\}$, such that the following ternary function is non-decomposable: \[ g(x_j,x_k,x_\ell) = \sum_{x_s\in\{0,1\}\text{ for } s\in [n]\setminus\{j,k,\ell\}} f(\vc{x}{n}) \prod_{m\in [n]\setminus\{j,k,\ell\}} u_m(x_m). \] This $g$ is the effective function of the gadget in Figure~\ref{fig:pr}b. \end{theorem} {\begin{proof} The result is proved by induction on $n$. If $n=3$, $f$ itself is the desired non-decomposable ternary function; this is the base case. Now suppose the result holds for all $n$ satisfying $3\leq n\leq N$. We prove the result for $n=N+1$ by contradiction, i.e.\ we begin by assuming that for some non-decomposable function $f$ of arity $n=N+1$ there does not exist any choice $j,k,\ell\in [n]$ with $j<k<\ell$, and $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in [n]\setminus\{j,k,\ell\}$ such that the function $g$ defined in the theorem statement is non-decomposable. Note the function $f$ cannot be identically zero since such functions are trivially decomposable. First, consider the family of gadgets that arise by composing one input of $f$ with one of the allowed unary functions: \[ h_{j,u}(\vc{x}{j-1},x_{j+1},\ldots, x_{n}) := \sum_{x_j\in\{0,1\}} f(\vc{x}{n}) u(x_j) \] where $j\in [n]$ and $u\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$. 
\new{Note that \begin{align*} f(\vc{x}{n}) &= h_{j,\dl_0}(\vc{x}{j-1},x_{j+1},\ldots, x_{n})\dl_0(x_j) + h_{j,\dl_1}(\vc{x}{j-1},x_{j+1},\ldots, x_{n})\dl_1(x_j) \\ &= h_{j,\dl_+}(\vc{x}{j-1},x_{j+1},\ldots, x_{n})\dl_+(x_j) + h_{j,\dl_-}(\vc{x}{j-1},x_{j+1},\ldots, x_{n})\dl_-(x_j) \end{align*} for any $j\in[n]$. Hence, if $h_{j,u}$ was identically zero for some $j$ and $u$, then $f$ would be decomposable. But we assumed $f$ was non-decomposable, therefore the functions $h_{j,u}$ cannot be identically zero.} Furthermore, if one of the functions $h_{j,u}$ has a non-decomposable tensor factor of arity at least $3$, then we can remove the other tensor factors by Lemma~\ref{lem:decomposable}, replace $f$ with the resulting function of arity between 3 and $N$ (inclusive), and be done by the inductive hypothesis. Thus, for all $j$ and $u$, we must have $h_{j,u}\in\ang{\mathcal{T}}$, i.e.\ $h_{j,u}$ decomposes as a tensor product of unary and binary functions. Now by Proposition~\ref{prop:popescu-rohrlich_gadget}, we can find $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in [n]\setminus\{1,2\}$ such that \[ b(x_1,x_2) := \sum_{x_s\in\{0,1\}\text{ for } s\in [n]\setminus\{1,2\}} f(\vc{x}{n}) \prod_{m\in [n]\setminus\{1,2\}} u_m(x_m) \] is non-decomposable. Since $b$ arises from $h_{n,u_n}$ by contraction with unary functions, the arguments $x_1$ and $x_2$ must appear in the same tensor factor of $h_{n,u_n}$. Yet $h_{n,u_n}\in\ang{\mathcal{T}}$ by the argument of the previous paragraph, so we must have $h_{n,u_n}(\vc{x}{n-1}) = b(x_1,x_2) h(x_3,\ldots, x_{n-1})$ for some $h\in\ang{\mathcal{T}}$, which is not identically zero. Similarly, by Proposition~\ref{prop:popescu-rohrlich_gadget}, we can find $v_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in [n]\setminus\{2,3\}$ such that \[ b'(x_2,x_3) := \sum_{x_s\in\{0,1\}\text{ for } s\in [n]\setminus\{2,3\}} f(\vc{x}{n}) \prod_{m\in [n]\setminus\{2,3\}} v_m(x_m) \] is non-decomposable. 
Then, analogous to the above, $h_{1,v_1}(x_2,\ldots, x_n) = b'(x_2,x_3) h'(x_4,\ldots, x_n)$ for some $h'\in\ang{\mathcal{T}}$, which is not identically zero. Now consider the gadget \[ f'(x_2,\ldots, x_{n-1}) := \sum_{x_1,x_n\in\{0,1\}} f(\vc{x}{n}) v_1(x_1) u_n(x_n). \] If we perform the sum over $x_n$ first, we find \begin{equation}\label{eq:f-prime1} f'(x_2,\ldots, x_{n-1}) = \sum_{x_1\in\{0,1\}} h_{n,u_n}(\vc{x}{n-1}) v_1(x_1) = \sum_{x_1\in\{0,1\}} b(x_1,x_2) h(x_3,\ldots, x_{n-1}) v_1(x_1), \end{equation} which is not identically zero since $h$ is not, and $v'(x_2) := \sum_{x_1\in\{0,1\}} b(x_1,x_2) v_1(x_1)$ being identically zero would imply $b$ is decomposable. By inspection, the arguments $x_2$ and $x_3$ appear in different tensor factors of $f'$. If, on the other hand, we perform the sum over $x_1$ first, we find \begin{equation}\label{eq:f-prime2} f'(x_2,\ldots, x_{n-1}) = \sum_{x_n\in\{0,1\}} h_{1,v_1}(x_2,\ldots, x_n) u_n(x_n) = \sum_{x_n\in\{0,1\}} b'(x_2,x_3) h'(x_4,\ldots, x_n) u_n(x_n). \end{equation} This could be identically zero if $h'(x_4,\ldots, x_n) = h''(x_4,\ldots, x_{n-1}) u_n^{\perp}(x_n)$ for some $h''$, where the function $u_n^\perp$ satisfies $\sum_{x_n\in\{0,1\}}u_n^{\perp}(x_n) u_n(x_n)=0$. Yet from \eqref{eq:f-prime1} we deduced that $f'$ is not identically zero, so this cannot happen. Then, inspection of \eqref{eq:f-prime2} shows that the arguments $x_2$ and $x_3$ appear in the same non-decomposable tensor factor of $f'$. This contradicts the finding from \eqref{eq:f-prime1} that they appear in different tensor factors. Hence the assumption must have been wrong and we have $h_{1,v_1}\notin\ang{\mathcal{T}}$ or $h_{n,u_n}\notin\ang{\mathcal{T}}$. Thus, by Lemma~\ref{lem:decomposable} and the induction hypothesis, we can realise the desired non-decomposable ternary function. 
\end{proof} In quantum terminology, this corresponds to the following theorem.} \begin{theorem}[Restatement of Theorem~\ref{thm:three-qubit-gadget}]\label{thm:three-qubit-entanglement} Let $\ket{\Psi}$ be an $n$-qubit genuinely entangled state with $n\geq 3$. There exists some choice of three of the $n$ qubits and a projection of the other $(n-3)$ qubits onto a tensor product of computational and Hadamard basis states that leaves the three qubits in a genuinely entangled state. \end{theorem} This result, which was not previously known in the quantum information theory literature, is stronger than Corollary~\ref{cor:popescu-rohrlich_restricted} in that we construct entangled three-qubit states rather than two-qubit ones. On the other hand, our result may not hold for arbitrary choices of three qubits: all we show is that there exists some choice of three qubits for which it does hold. The original proof of this theorem in an earlier version of this paper was long and involved; this new shorter proof was suggested by Gachechiladze and G\"{u}hne \cite{gachechiladze_personal_2017}. \subsection{Symmetrising ternary functions} \label{s:symmetrising_ternary} The dichotomies given in Section \ref{s:results_ternary_symmetric} apply to symmetric ternary non-decomposable functions. The functions constructed according to Theorem \ref{thm:three-qubit-gadget} are ternary and non-decomposable, but they are not generally symmetric. Yet, these general ternary non-decomposable functions can be used to realise symmetric ones, possibly with the help of an additional binary non-decomposable function. We prove this by distinguishing cases according to whether the ternary non-decomposable function constructed using Theorem \ref{thm:three-qubit-gadget} is in the GHZ or the $W$ entanglement class (cf.\ Section~\ref{s:entanglement}). Consider a function $f\in\Upsilon_3$ which is in the GHZ class. 
By definition, there exist matrices $A,B,C\in\operatorname{GL}_2(\AA)$ such that $\ket{f} = (A\otimes B\otimes C)\ket{\mathrm{GHZ}}$, i.e. \[ f(x_1,x_2,x_3) = \sum_{y_1,y_2,y_3\in\{0,1\}} A_{x_1, y_1} B_{x_2, y_2} C_{x_3, y_3} \mathrm{EQ}_3(y_1,y_2,y_3). \] We can thus draw $f$ as the `virtual gadget' shown in Figure \ref{fig:virtual_gadget}. The `boxes' denoting the matrices are non-symmetric to indicate that $A,B,C$ are not in general symmetric. The white dot is assigned $\mathrm{EQ}_3$. This notation is not meant to imply that the binary functions associated with $A,B,C$ or the ternary equality function are available on their own. Instead, thinking of the function as such a composite will simply make future arguments more straightforward. Similarly, if $f$ is in the $W$ class then there exist matrices $A,B,C\in\operatorname{GL}_2(\AA)$ such that $\ket{f} = (A\otimes B\otimes C)\ket{W}$, or equivalently, \[ f(x_1,x_2,x_3) = \sum_{y_1,y_2,y_3\in\{0,1\}} A_{x_1, y_1} B_{x_2, y_2} C_{x_3, y_3} \mathrm{ONE}_3(y_1,y_2,y_3). \] In this case, $f$ can again be represented as a virtual gadget, but now the white dot in Figure~\ref{fig:virtual_gadget} is assigned the function $\mathrm{ONE}_3$. In both the GHZ and the $W$ case, three vertices assigned $f$ can be connected to form the rotationally symmetric gadget shown in Figure \ref{fig:symmetrising_GHZ}a. In fact, since $f$ is a function over the Boolean domain, the effective function $g$ of that gadget is fully symmetric: its value depends only on the Hamming weight of the inputs. On the other hand, $g$ may be decomposable (in fact, any symmetric decomposable function must be degenerate) and it may be the all-zero function. For a general non-symmetric $f$ there are three such symmetric gadgets that can be constructed by `rotating' $f$, i.e.\ by replacing $f(x_1,x_2,x_3)$ with $f(x_2,x_3,x_1)$ or $f(x_3,x_1,x_2)$. Rotating $f$ in this way does not affect the planarity of the gadget. The idea leads to the following lemmas. 
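The two observations above admit a direct numerical illustration (not part of the proofs): for an arbitrary ternary function over the Boolean domain, the triangle gadget is fully symmetric, because cyclic invariance already forces the value to depend only on the Hamming weight; and for a GHZ-form function with $A=I$, the effective function is $[a^3,abc,bcd,d^3]$ where $\smm{a&b\\c&d}=C^TB$, as computed in the proof of Lemma~\ref{lem:GHZ_symmetrise}. The Python sketch below checks both claims; the wiring convention in `triangle` and the sample matrices $B$, $C$ are illustrative choices corresponding to one of the three rotations of the gadget.

```python
import itertools
import random

def triangle(f):
    # One concrete cyclic wiring of three copies of f: the first
    # argument of each copy is dangling, the other two arguments are
    # shared with the neighbouring copies.
    def g(x1, x2, x3):
        return sum(f(x1, s1, s3) * f(x2, s2, s1) * f(x3, s3, s2)
                   for s1, s2, s3 in itertools.product((0, 1), repeat=3))
    return g

# An arbitrary complex-valued ternary function on {0,1}^3.
rng = random.Random(0)
table = {x: complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
         for x in itertools.product((0, 1), repeat=3)}
f_random = lambda *x: table[x]
g = triangle(f_random)

# Full symmetry: the value depends only on the Hamming weight.
sym_ok = all(abs(g(*x) - g(*y)) < 1e-9
             for x in itertools.product((0, 1), repeat=3)
             for y in itertools.permutations(x))

# GHZ-form function with A = I: f(x,y,z) = B[y][x] * C[z][x], i.e. the
# function of the state (I (x) B (x) C)|GHZ>; B and C are sample matrices.
B = [[2, 1], [1, 1]]
C = [[1, 3], [0, 1]]
f_ghz = lambda x, y, z: B[y][x] * C[z][x]
M = [[sum(C[k][i] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]  # M = C^T B
a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
expected = {0: a**3, 1: a*b*c, 2: b*c*d, 3: d**3}
gg = triangle(f_ghz)
ghz_ok = all(abs(gg(*x) - expected[sum(x)]) < 1e-9
             for x in itertools.product((0, 1), repeat=3))
print(sym_ok and ghz_ok)  # prints True
```

Rotating the gadget amounts to cyclically permuting the roles of $A$, $B$, and $C$ in the wiring, which leaves the symmetry argument unchanged.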
\begin{figure} \centering \input{tikz_files/GHZ_class_state.tikz} \caption{A `virtual gadget' for a non-decomposable ternary function. The white vertex represents either $\mathrm{EQ}_3$ or $\mathrm{ONE}_3$ and the boxes represent (not necessarily symmetric) binary functions corresponding to the matrices $A$, $B$, and $C$, respectively.} \label{fig:virtual_gadget} \end{figure} \begin{figure} \centering (a) \; \input{tikz_files/symmetrising_GHZ.tikz} \qquad (b) \input{tikz_files/symmetrising_GHZ2.tikz} \caption{(a) A symmetric gadget constructed from three copies of the ternary function from Figure~\ref{fig:virtual_gadget}. (b) A simplified version of the same gadget, where $M:=C^TB$. The variable names next to the edges are those used in \eqref{eq:symmetrising}.} \label{fig:symmetrising_GHZ} \end{figure} \begin{lemma}\label{lem:GHZ_symmetrise} Suppose $f\in\Upsilon_3$ has GHZ type, i.e.\ $\ket{f}=(A\otimes B\otimes C)\ket{\mathrm{GHZ}}$ for some matrices $A,B,C\in\operatorname{GL}_2(\AA)$. Then there exists a non-decomposable symmetric ternary function $g\in S(\{f\})$, which is furthermore realisable by a planar gadget. \end{lemma} \begin{proof} The function $f$ can be represented by a virtual gadget as in Figure~\ref{fig:virtual_gadget}, where the white vertex is assigned $\mathrm{EQ}_3$ and the matrices $A$, $B$, and $C$ are those appearing in the statement of the lemma. We will realise a symmetric ternary function by using the triangle gadget in Figure~\ref{fig:symmetrising_GHZ}a. Note that there are three different planar versions of this gadget by cyclically permuting the inputs of $f$ (and thus the roles of $A$, $B$, and $C$) in the gadget. The desired result will follow from arguing that either at least one of these three gadgets is non-decomposable and therefore yields the desired function $g$, or $f$ is already symmetric. In the latter case, we simply take $g:=f$. Consider the gadget in Figure~\ref{fig:symmetrising_GHZ}a. 
To simplify the argument, we will not parameterise each of the three matrices $A$, $B$, and $C$ individually; instead we let \begin{equation}\label{eq:M} M := C^T B = \begin{pmatrix}a&b\\c&d\end{pmatrix}, \end{equation} yielding the gadget in Figure~\ref{fig:symmetrising_GHZ}b. The effective function of that gadget, which we will denote $h(x_1,x_2,x_3)$ is equal to \begin{multline} \sum A_{x_1,y_1} A_{x_2,y_2} A_{x_3,y_3} \mathrm{EQ}_3(y_1,z_1,w_1) \mathrm{EQ}_3(y_2,z_2,w_2) \mathrm{EQ}_3(y_3,z_3,w_3) M_{z_1,w_2} M_{z_2,w_3} M_{z_3,w_1} \\ = \sum_{y_1,y_2,y_3\in\{0,1\}} A_{x_1,y_1} A_{x_2,y_2} A_{x_3,y_3} M_{y_1,y_2} M_{y_2,y_3} M_{y_3,y_1}, \label{eq:symmetrising} \end{multline} where the first sum is over all $y_1,y_2,y_3,z_1,z_2,z_3,w_1,w_2,w_3\in\{0,1\}$. Let the function $h'$ be such that $h=A\circ h'$, then, by invertibility of $A$, \[ h'(y_1,y_2,y_3) = M_{y_1,y_2} M_{y_2,y_3} M_{y_3,y_1}. \] Plugging in the values from \eqref{eq:M}, we find that $h' = [a^3, abc, bcd, d^3]$. Since $h$ and $h'$ are connected by a holographic transformation, they must be in the same entanglement class: in particular, $h$ is non-decomposable if and only if $h'$ is non-decomposable. Hence we may work with $h'$ instead of $h$ for the remainder of the proof. Recall from Section~\ref{s:entanglement} that a symmetric ternary function is non-decomposable if and only if it has either GHZ type or $W$ type. By Lemma~\ref{lem:li_symmetric}, $h'$ has GHZ type if and only if: \[ P_{h'}:=a^2 d^2 (ad+3bc)(ad-bc)^3\neq 0. \] It has $W$ type if $P_{h'} = 0$ and furthermore \begin{equation}\label{eq:W-conditions-h'} \left( (abc)^2 \neq a^3 bcd \right) \vee \left( (bcd)^2 \neq abcd^3 \right). \end{equation} If neither of these conditions is satisfied, $h'$ is decomposable (and in fact degenerate). Now, as $M$ is invertible, we have $ad-bc\neq 0$. Thus, $h'$ fails to have GHZ type only if at least one of $a$, $d$, or $(ad+3bc)$ is zero. We consider the cases individually. 
\begin{itemize} \item Suppose $a=0$. Then $P_{h'}=0$ and \eqref{eq:W-conditions-h'} becomes $(0\neq 0)\vee(bcd\neq 0)$. Hence $h'$ has $W$ type if $bcd\neq 0$ and is degenerate otherwise. Since $M$ is invertible, $a=0$ implies $bc\neq 0$. Thus, $h'$ is degenerate (in fact, it is identically zero) if $a=d=0$ and it has $W$ type otherwise. \item Suppose $a\neq 0$ and $d=0$. Then $P_{h'}=0$ and \eqref{eq:W-conditions-h'} becomes $(abc\neq 0)\vee (0\neq 0)$. Hence $h'$ has $W$ type if $abc\neq 0$. But invertibility of $M$ together with $d=0$ implies that $bc\neq 0$, so in this case $a,b,c$ are all guaranteed to be non-zero. Therefore $h'$ always has $W$ type. \item Suppose $a,d\neq 0$ and $ad+3bc=0$, i.e.\ $bc=-\frac{1}{3}ad$. Then $P_{h'}=0$ and by substituting for $bc$, \eqref{eq:W-conditions-h'} becomes \[ \left( \frac{1}{9} a^4 d^2 \neq -\frac{1}{3} a^4 d^2 \right) \vee \left(\frac{1}{9} a^2 d^4 \neq -\frac{1}{3} a^2 d^4 \right), \] which is true for all $a,d\neq 0$. Therefore $h'$ always has $W$ type. \end{itemize} By combining the three cases, we find that $h'$ is degenerate if and only if $a=d=0$, or equivalently if and only if $(C^TB)_{00}=(C^TB)_{11}=0$. As noted before, there are three different planar gadgets that can be constructed from the same non-symmetric ternary function, by `rotating' it. Assume that all three gadget constructions yield a decomposable function. For the original gadget, this means that $(C^TB)_{00}=(C^TB)_{11}=0$. For the rotated versions of the gadget, the assumption furthermore implies that \[ (B^TA)_{00}=0=(B^TA)_{11} \quad\text{and}\quad (A^TC)_{00}=0=(A^TC)_{11}. \] These equalities indicate that $C^TB$, $B^TA$, and $A^TC$ are purely off-diagonal matrices. Additionally, since $A,B,C$ are invertible, $C^TB$, $B^TA$, and $A^TC$ must also be invertible. 
Hence there exist invertible diagonal matrices $D_1,D_2,D_3\in\operatorname{GL}_2(\AA)$ such that: \begin{equation}\label{eq:GHZ_fail_conditions} B^TA=XD_1, \qquad C^TB=XD_2, \quad\text{and}\quad A^TC = XD_3. \end{equation} By rearranging the second equation of \eqref{eq:GHZ_fail_conditions}, we find $B = (C^T)^{-1} X D_2$. Now, by transposing, rearranging, and then inverting, the third equation of \eqref{eq:GHZ_fail_conditions} is equivalent to \[ C^T A = D_3 X \quad\Longleftrightarrow\quad C^T = D_3 X A^{-1} \quad\Longleftrightarrow\quad (C^T)^{-1} = A X D_3^{-1}, \] so $B = A X D_3^{-1} X D_2$. Similarly, transposing and then rearranging the second equation of \eqref{eq:GHZ_fail_conditions} yields $C = (B^T)^{-1} D_2 X$. By rearranging and inverting, the first equation of \eqref{eq:GHZ_fail_conditions} is equivalent to \[ B^T = X D_1 A^{-1} \quad\Longleftrightarrow\quad (B^T)^{-1} = A D_1^{-1} X, \] so $C = A D_1^{-1} X D_2 X$. Both $D_1^{-1} X D_2 X$ and $X D_3^{-1} X D_2$ are diagonal matrices; write them as \begin{equation} D_C = D_1^{-1} X D_2 X = \begin{pmatrix}\gamma_0&0\\0&\gamma_1\end{pmatrix} \quad\text{and}\quad D_B = X D_3^{-1} X D_2 = \begin{pmatrix}\beta_0&0\\0&\beta_1\end{pmatrix} \end{equation} for some $\beta_0,\beta_1,\gamma_0,\gamma_1\in\AA\setminus\{0\}$. Then $B=A D_B$ and $C=A D_C$, so: \begin{equation} \ket{f} = (A\otimes B\otimes C)\ket{\mathrm{GHZ}} = A\t{3} (I\otimes D_B\otimes D_C) \ket{\mathrm{GHZ}} = A\t{3}\left( \beta_0\gamma_0\ket{000}+\beta_1\gamma_1\ket{111} \right), \end{equation} where $I$ is the 2 by 2 identity matrix. Hence the assumption that all three gadgets are decomposable implies that $f=A\circ [\beta_0\gamma_0, 0, 0, \beta_1\gamma_1]$, i.e.\ the assumption implies that $f$ is already symmetric. We have shown that either one of the three possible planar triangle gadgets yields a symmetric non-degenerate function, or $f$ itself is already symmetric. 
Thus there always exists a non-degenerate symmetric ternary function in $S(\{f\})$ which is realisable by a planar gadget. \end{proof} \begin{lemma}\label{lem:W_symmetrise} Suppose $f\in\Upsilon_3$ has $W$ type, i.e.\ $\ket{f}=(A\otimes B\otimes C)\ket{W}$ for some matrices $A,B,C\in\operatorname{GL}_2(\AA)$. {If $f\notin (K\circ\mathcal{M})\cup(KX\circ\mathcal{M})$, there exists a non-degenerate symmetric ternary function $g\in S(\{f\})$. The function} $g$ has GHZ type and is realisable by a planar gadget. \end{lemma} \begin{proof} Since $f$ has $W$ type, we can write $\ket{f}=(A\otimes B\otimes C)\ket{W}$: i.e.\ the function $f$ can be thought of as the `virtual gadget' given in Figure~\ref{fig:virtual_gadget}, where the white dot is now assigned $\mathrm{ONE}_3$. We can therefore combine three copies of $f$ into the triangle gadget given in Figure~\ref{fig:symmetrising_GHZ}a, analogous to the previous lemma. {By basic linear algebra, we can apply the LDU decomposition to $C^T B$: \[ C^T B = LDU = \pmm{1&0\\a&1}\pmm{b&0\\0&c}\pmm{1&d\\0&1}, \] where $a,b,c,d\in\AA$ with $b,c\neq 0$ and $L=\smm{1&0\\a&1}$ is lower triangular, $D=\smm{b&0\\0&c}$ is diagonal, and $U=\smm{1&d\\0&1}$ is upper triangular. This is illustrated in Figure~\ref{fig:sym-W}a. Now, note that each of the white dots assigned $\mathrm{ONE}_3$ is connected to a copy of $U$ and a transposed copy of $L$. It is straightforward to show that \[ (I\otimes U\otimes L^T)\ket{W} = (a+d)\ket{000} + \ket{W} = \left( \pmm{1&a+d\\0&1}\otimes I \otimes I\right)\ket{W}, \] so the diagram can be transformed to the one of Figure~\ref{fig:sym-W}b, where $A':=A\smm{1&a+d\\0&1}$. \begin{figure} \centering (a) \input{tikz_files/symmetrising_W.tikz} \qquad (b) \input{tikz_files/symmetrising_W2.tikz} \caption{(a) The symmetrisation gadget after the matrix $C^T B$ has been converted into $LDU$ decomposition, where $L$ is lower triangular, $D$ is diagonal (and hence symmetric), and $U$ is upper triangular. 
(b) If the white vertices are assigned the function $\mathrm{ONE}_3$, the triangular matrices can be replaced by a different matrix on the outer leg, which is absorbed into $A$ to form $A'$.} \label{fig:sym-W} \end{figure} The effective function of the gadget in Figure~\ref{fig:sym-W}b, which we will denote $g(x_1,x_2,x_3)$, equals \[ \sum A_{x_1,y_1}' A_{x_2,y_2}' A_{x_3,y_3}' \mathrm{ONE}_3(y_1,z_1,w_1) \mathrm{ONE}_3(y_2,z_2,w_2) \mathrm{ONE}_3(y_3,z_3,w_3) D_{z_1,w_2} D_{z_2,w_3} D_{z_3,w_1}, \] where the sum is over all $y_1,y_2,y_3,z_1,z_2,z_3,w_1,w_2,w_3\in\{0,1\}$. Let $g':=(A')^{-1}\circ g$, then \[ g'(y_1,y_2,y_3) = \sum_{z_1,z_2,z_3\in\{0,1\}} \mathrm{ONE}_3(y_1,z_1,z_3) \mathrm{ONE}_3(y_2,z_2,z_1) \mathrm{ONE}_3(y_3,z_3,z_2) D_{z_1,z_1} D_{z_2,z_2} D_{z_3,z_3}, \] where we have used the property that $D$ is diagonal. Recall that $\mathrm{ONE}_3$ is the `perfect matchings' constraint. This means the gadget for $g'$ is a symmetric matchgate on three vertices, where every internal edge that is \new{in the matching} contributes a factor $c$ to the weight, and every internal edge that is not \new{in the matching} contributes a factor $b$ to the weight. Clearly, $g'(1,1,1)=b^3$ since if the three external edges are \new{in the matching}, there can be no internal edges \new{in the matching}. As a matchgate, $g'$ must satisfy a parity condition: in particular, given the odd number of vertices, it is 0 on inputs of even Hamming weight. The remaining value on inputs of Hamming weight 1, having one external edge and thus one internal edge to make a matching, is $b^2c$. Combining these, we have $g' = [0,\; b^2c,\; 0,\; b^3]$. Note that $\ket{g}$ and $\ket{g'}$ are by definition equivalent under SLOCC, i.e.\ $g$ and $g'$ are in the same entanglement class. We may therefore reason about $g'$ instead of $g$. By Lemma~\ref{lem:li_symmetric}, $g'$ has GHZ type if and only if \[ 0\neq (0 - 0)^2 - 4\left((b^2c)^2 - 0\right)\left(0 - b^2c b^3\right) = 4 b^9 c^3. 
\] Yet $b,c\neq 0$, so $g'$ always has GHZ type. The gadget is planar.} \end{proof} {The following lemma shows an analogous result for the case where the ternary function $f\in K\circ\mathcal{M}$ or $f\in KX\circ\mathcal{M}$. In that case, some additional support from a binary function outside the respective set is needed; we will show in the next section that such a binary function can be realised unless all functions are in $\ang{K\circ\mathcal{M}}$ or $\ang{KX\circ\mathcal{M}}$.} \begin{lemma}\label{lem:W_symmetrise-K} {Let $\cM'$ be one of $K\circ\mathcal{M}$ and $KX\circ\mathcal{M}$. Suppose $f\in\Upsilon_3\cap\cM'$ and $h\in\Upsilon_2\setminus\ang{\cM'}$, then there exists a non-degenerate symmetric ternary function $g\in S(\{f,h\})$. In both} cases, $g$ has GHZ type and is realisable by a planar gadget. \end{lemma} \begin{proof} {Note that by Lemma~\ref{lem:family-types} and the property that holographic transformations do not affect the entanglement class, $f$ has $W$ type in both cases, i.e.\ $\ket{f}=(A\otimes B\otimes C)\ket{W}$ for some matrices $A,B,C\in\operatorname{GL}_2(\AA)$. Now first suppose} $f\in K\circ\mathcal{M}$ and $h\notin\ang{K\circ\mathcal{M}}$. Note that any binary function $h$ with this property is non-degenerate (and thus non-decomposable), since $\Upsilon_1\sse K\circ\mathcal{M}$. Let $f':=K^{-1}\circ f\in\mathcal{M}$ and $h':=K^{-1}\circ h\notin\mathcal{M}$, then we can write \[ f' = \begin{pmatrix}f_{000}&f_{001}&f_{010}&0\\f_{100}&0&0&0\end{pmatrix} \quad\text{and}\quad h' = \begin{pmatrix}h_{00}&h_{01}\\h_{10}&h_{11}\end{pmatrix}, \] where $f_{001}f_{010}f_{100}\neq 0$ by non-decomposability of $f'$, $h_{00}h_{11}-h_{01}h_{10}\neq 0$ by non-de\-com\-posa\-bi\-li\-ty of $h'$, and $h_{11}\neq 0$ because $h'\notin \mathcal{M}$. \begin{figure} \centering \input{tikz_files/not_KcM.tikz} \caption{Gadget for constructing a ternary function that is not in $K\circ\mathcal{M}$ (or $KX\circ\mathcal{M}$). 
The degree-3 vertex is assigned the function $f$ and the degree-2 vertex is assigned the function $h$, with the second input of $h$ connected to the first input of $f$.} \label{fig:not_KcM} \end{figure} Consider the gadget in Figure~\ref{fig:not_KcM}, where the second input of $h$ is connected to the first input of $f$. The effective function associated with this gadget is \begin{align*} f''(x_1,x_2,x_3) &= \sum_{y\in\{0,1\}} h(x_1,y) f(y,x_2,x_3) \\ &= \sum_{y,\vc{z}{5}\in\{0,1\}} K_{x_1,z_1}K_{y,z_2} h'(z_1,z_2) K_{y,z_3} K_{x_2,z_4} K_{x_3,z_5} f'(z_3,z_4,z_5) \\ &= \sum_{\vc{z}{5}\in\{0,1\}} K_{x_1,z_1} K_{x_2,z_4} K_{x_3,z_5} h'(z_1,z_2) f'(z_3,z_4,z_5) \sum_{y\in\{0,1\}} K_{y,z_2} K_{y,z_3} \\ &= \sum_{z_1,z_4,z_5\in\{0,1\}} K_{x_1,z_1} K_{x_2,z_4} K_{x_3,z_5} \sum_{z_2,z_3\in\{0,1\}} 2\cdot h'(z_1,z_2) f'(z_3,z_4,z_5) \mathrm{NEQ}(z_2,z_3). \end{align*} Let $f''':=\frac{1}{2}(K^{-1}\circ f'')$ or equivalently $f''=2(K\circ f''')$, then \[ f''' = \begin{pmatrix} f_{000}h_{01}+f_{100}h_{00} & f_{001}h_{01} & f_{010}h_{01} & 0 \\ f_{000}h_{11}+f_{100}h_{10} & f_{001}h_{11} & f_{010}h_{11} & 0 \end{pmatrix}. \] Note that $f''\in K\circ\mathcal{M}$ if and only if $f'''\in\mathcal{M}$, and $f''\in KX\circ\mathcal{M}$ if and only if $f'''\in X\circ \mathcal{M}$. We now show $f''\notin (K\circ\mathcal{M})\cup (KX\circ\mathcal{M})$. Assume for a contradiction that $f''\in K\circ\mathcal{M}$, i.e.\ $f'''\in\mathcal{M}$. This implies that $0=f'''(1,0,1)=f_{001}h_{11}$ and $0=f'''(1,1,0)=f_{010}h_{11}$. But, as stated above, $f$ being non-decomposable and $h$ not being in $K\circ\mathcal{M}$ imply that $f_{001},f_{010}$ and $h_{11}$ must be non-zero. Therefore $f'''\notin\mathcal{M}$ and $f''\notin K\circ\mathcal{M}$. Similarly, assume for a contradiction that $f''\in KX\circ\mathcal{M}$, i.e.\ $f'''\in X\circ\mathcal{M}$. 
This implies \begin{align*} 0 &= f'''(0,0,0) = f_{000}h_{01}+f_{100}h_{00} \\ 0 &= f'''(0,0,1) = f_{001}h_{01} \\ 0 &= f'''(0,1,0) = f_{010}h_{01} \\ 0 &= f'''(1,0,0) = f_{000}h_{11}+f_{100}h_{10}. \end{align*} Now, the condition $f_{001}h_{01}=0$ implies $h_{01}=0$ since $f_{001}\neq 0$ by non-decomposability of $f$. Then the first equality reduces to $0=f_{100}h_{00}$, which implies $h_{00}=0$ since $f_{100}\neq 0$ by non-decomposability of $f$. Yet $h_{01}=h_{00}=0$ would imply that $h$ is degenerate, contradicting the assumption of the lemma. Thus $f'''\notin X\circ\mathcal{M}$ and $f''\notin KX\circ\mathcal{M}$. Hence, we may replace $f$ by $f''$ and proceed as in Lemma~\ref{lem:W_symmetrise}. Since $f''\in S(\{f,h\})$ and $f''$ is realisable by a planar gadget, the function $g\in S(\{f''\})$ that results from applying this lemma to $f''$ is in $S(\{f,h\})$ and $g$ is again realisable by a planar gadget. {Now suppose $f\in KX\circ\mathcal{M}$ and $h\notin\ang{KX\circ\mathcal{M}}$. As before, let $f':=K^{-1}\circ f$ and $h':=K^{-1}\circ h$, then $f'\in X\circ\mathcal{M}$ and $h'\notin\ang{X\circ\mathcal{M}}$. The properties of these functions differ from the ones in the previous case by bit flips on all inputs. Otherwise the argument is the same.} \end{proof} \subsection{Realising binary functions} \label{s:binary} We have shown in the previous section that it is possible to realise a non-decomposable ternary symmetric function from an arbitrary non-decomposable ternary function under some mild further assumptions. Now, we show that if the full set of functions $\mathcal{F}$ is not a subset of $\ang{K\circ\mathcal{M}}$, there exists a planar gadget in $S(\mathcal{F}\cup\{\dl_0,\dl_1,\dl_+,\dl_-\})$ whose effective function $g$ is binary, symmetric, non-decomposable and satisfies $g\notin\ang{K\circ\mathcal{M}}$. An analogous result holds with $KX$ instead of $K$. 
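As an aside, the displayed matrix for $f'''$ in the proof of Lemma~\ref{lem:W_symmetrise-K} above can be cross-checked by enumerating the reduced sum $\sum_{z_2,z_3} h'(z_1,z_2)\,f'(z_3,z_4,z_5)\,\mathrm{NEQ}(z_2,z_3)$ directly. The numeric values below are arbitrary non-zero samples, not taken from the paper.

```python
from itertools import product

# Hypothetical non-zero values for the entries of f' and h' (generic choice)
f000, f001, f010, f100 = 2, 3, 5, 7
h00, h01, h10, h11 = 11, 13, 17, 19

def fp(x1, x2, x3):
    # f' as displayed: support only on inputs of Hamming weight <= 1
    table = {(0, 0, 0): f000, (0, 0, 1): f001, (0, 1, 0): f010, (1, 0, 0): f100}
    return table.get((x1, x2, x3), 0)

def hp(x1, x2):
    return [[h00, h01], [h10, h11]][x1][x2]

def NEQ(x, y):
    return 1 if x != y else 0

# f'''(x1,x2,x3) = sum_{z2,z3} h'(x1,z2) f'(z3,x2,x3) NEQ(z2,z3)
def f3(x1, x2, x3):
    return sum(hp(x1, z2) * fp(z3, x2, x3) * NEQ(z2, z3)
               for z2, z3 in product([0, 1], repeat=2))

# Compare with the closed-form matrix displayed in the proof
expected = {
    (0, 0, 0): f000*h01 + f100*h00, (0, 0, 1): f001*h01,
    (0, 1, 0): f010*h01,            (0, 1, 1): 0,
    (1, 0, 0): f000*h11 + f100*h10, (1, 0, 1): f001*h11,
    (1, 1, 0): f010*h11,            (1, 1, 1): 0,
}
for x in product([0, 1], repeat=3):
    assert f3(*x) == expected[x]
```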
\begin{lemma}\label{lem:not_in_cM} Suppose $f\in\Upsilon_n$ satisfies $f(1,1,a_3,\ldots, a_n)\neq 0$ for some $a_3,\ldots, a_n\in\{0,1\}$. Suppose furthermore that there exist functions $u_3,\ldots, u_n\in\mathcal{U}$ such that setting \[ f'(x_1,x_2) := \sum_{x_3,\ldots, x_n\in\{0,1\}} f(\vc{x}{n}) \prod_{j=3}^n u_j(x_j) \] implies $f'(0,0)f'(1,1)-f'(0,1)f'(1,0)\neq 0$. Then $f\notin\ang{\mathcal{M}}$. Conversely, if $f\in\Upsilon_n\setminus\ang{\mathcal{M}}$, then there exists some permutation $\rho:[n]\to [n]$ such that $f_\rho$ satisfies the above properties. \end{lemma} \begin{proof} Consider the first part of the lemma and assume for a contradiction that $f\in\ang{\mathcal{M}}$. Then there exists a decomposition of $f$ where each factor is in $\mathcal{M}$. In particular, the first and second arguments of $f$ must belong to different factors: if they belonged to the same factor, that factor would be a function which is non-zero on an input of Hamming weight at least 2, so it could not be in $\mathcal{M}$. Hence there exists some permutation $\rho:\{3,4,\ldots, n\}\to\{3,4,\ldots, n\}$ and $k\in\{3,4,\ldots, n\}$ such that \[ f_\rho(\vc{x}{n}) = f_1(x_1,x_{k+1},\ldots, x_n)f_2(x_2,\ldots, x_k), \] where $f_1,f_2$ may be further decomposable. But then \[ f'(x_1,x_2) = \sum_{x_3,\ldots, x_n\in\{0,1\}} f_1(x_1,x_{k+1},\ldots, x_n)f_2(x_2,\ldots, x_k) \prod_{j=3}^n u_j(x_j) = w_1(x_1) w_2(x_2), \] where \begin{align*} w_1(x_1) &:= \sum_{x_{k+1},\ldots, x_n\in\{0,1\}} f_1(x_1,x_{k+1},\ldots, x_n) \prod_{j=k+1}^n u_j(x_j) \\ w_2(x_2) &:=\sum_{x_3,\ldots, x_k\in\{0,1\}} f_2(x_2,\ldots, x_k) \prod_{j=3}^k u_j(x_j). \end{align*} This implies that $f'(0,0)f'(1,1)-f'(0,1)f'(1,0)=0$, contradicting the assumption in the lemma. Thus $f$ cannot be in $\ang{\mathcal{M}}$. For the second part of the lemma, assume $f\in\Upsilon_n\setminus\ang{\mathcal{M}}$. In particular, $f$ is not the all-zero function, so it has a decomposition. 
If all the factors in the decomposition of $f$ are in $\mathcal{M}$, then $f\in\ang{\mathcal{M}}$. Hence there exists a non-decomposable factor $g$ of $f$ which is not in $\mathcal{M}$. Without loss of generality, assume that $g$ contains exactly the first $k$ arguments of $f$ for some $k\in [n]$, otherwise permute the arguments. As $\mathcal{M}$ contains all unaries, $g$ must be a function of arity $k\geq 2$. Furthermore, there must exist some bit string $\mathbf{a}\in\{0,1\}^k$ of Hamming weight at least 2 such that $g(\mathbf{a})\neq 0$. Without loss of generality, assume $a_1=a_2=1$, otherwise permute the arguments. Then $f$ satisfies the first property. If $f=g$, i.e.\ $f$ itself is non-decomposable, the second property is satisfied by Proposition~\ref{prop:popescu-rohrlich_gadget}. Otherwise, by Lemma~\ref{lem:decomposable}, there exist $u_{k+1},\ldots, u_n\in\{\dl_0,\dl_1\}$ such that \[ g(\vc{x}{k}) = \sum_{x_{k+1},\ldots, x_n\in\{0,1\}} f(\vc{x}{n}) \prod_{j=k+1}^n u_j(x_j). \] We then find that the second property is satisfied by combining this equation with an application of Proposition~\ref{prop:popescu-rohrlich_gadget} to $g$. \end{proof} \begin{lemma}\label{lem:binary_notin_KcircM} {Let $\cM'$ be one of $K\circ\mathcal{M}$ and $KX\circ\mathcal{M}$.} Suppose $f\in\Upsilon\setminus\ang{\cM'}$. Then there exists $g\in S(\{f,\dl_0,\dl_1,\dl_+,\dl_-\})$ which is binary, non-decomposable, and satisfies $g\notin\ang{\cM'}$. Furthermore, this function is realised by a planar gadget. \end{lemma} \begin{proof} {First, suppose $\cM' = K\circ\mathcal{M}$.} Note that $\Upsilon_1\sse K\circ\mathcal{M}\sse\ang{K\circ\mathcal{M}}$, so $f\notin\ang{K\circ\mathcal{M}}$ implies $\ari(f)\geq 2$. Furthermore, $\ang{\Upsilon_1}\sse\ang{K\circ\mathcal{M}}$, thus any binary function which is not in $\ang{K\circ\mathcal{M}}$ must be non-decomposable. 
We will show that we can realise a gadget like that in Figure~\ref{fig:pr}a, whose effective function $g$ is non-decomposable and satisfies $g\notin\ang{K\circ\mathcal{M}}$. As the gadget consists of a number of unary functions connected to one larger-arity function, planarity is assured even if we arbitrarily permute the arguments of $f$ at intermediate stages of the proof. The result will be proved by induction on the arity of $f$. The base case is $\ari(f)=2$, in which case we may take $g=f$. This function is binary, non-decomposable, a gadget in $S(\{f,\dl_0,\dl_1,\dl_+,\dl_-\})$ and it is not in $\ang{K\circ\mathcal{M}}$. Furthermore, the gadget is trivially planar, so $g$ has all the required properties. Now suppose the desired result has been proved for functions of arity at most $n$, and suppose $\ari(f)=n+1$. We distinguish cases according to the decomposability of $f$. \textbf{Case~1}: Suppose $f$ is decomposable and consider some decomposition of $f$. If all factors are in $\ang{K\circ\mathcal{M}}$, then $f$ is in $\ang{K\circ\mathcal{M}}$, so there must be a factor $f'\notin\ang{K\circ\mathcal{M}}$. By Lemma~\ref{lem:decomposable}, $f'\in S(\{f,\dl_0,\dl_1,\dl_+,\dl_-\})$. Thus, we are done by the inductive hypothesis. \textbf{Case~2}: Suppose $f$ is non-decomposable. We will show that there exists $u\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ and $k\in [n+1]$ such that $\sum_{x_k\in\{0,1\}}f(\vc{x}{n+1})u(x_k)$ is not in $\ang{K\circ\mathcal{M}}$. It is simpler to work with $\ang{\mathcal{M}}$ than $\ang{K\circ\mathcal{M}}$, so let $h=K^{-1}\circ f$; then $h\notin\ang{\mathcal{M}}$. 
Thus, by Lemma~\ref{lem:not_in_cM}, there exists a permutation $\rho:[n+1]\to [n+1]$ such that $h_\rho(1,1,a_3,\ldots, a_{n+1})\neq 0$ for some $a_3,\ldots, a_{n+1}\in\{0,1\}$ and there exist functions $u_3,\ldots, u_{n+1}\in\mathcal{U}$ such that letting \begin{equation}\label{eq:h-prime} h'(x_1,x_2) := \sum_{x_3,\ldots, x_{n+1}\in\{0,1\}} h_\rho(\vc{x}{n+1}) \prod_{j=3}^{n+1} u_j(x_j) \end{equation} implies $h'(0,0)h'(1,1)-h'(0,1)h'(1,0)\neq 0$. Note that \begin{align*} h'(x_1,x_2) &= \sum_{x_3,\ldots, x_{n+1}\in\{0,1\}} (K^{-1}\circ f_\rho)(\vc{x}{n+1}) \prod_{j=3}^{n+1} u_j(x_j) \\ &= K^{-1}\circ\left(\sum_{x_3,\ldots, x_{n+1}\in\{0,1\}} f_\rho(\vc{x}{n+1}) \prod_{j=3}^{n+1} ((K^{-1})^T\circ u_j)(x_j)\right) \end{align*} and holographic transformations do not affect whether a function is decomposable. Thus, by applying Proposition~\ref{prop:popescu-rohrlich_gadget} to $f_\rho$, we find that it suffices to take \[ (K^{-1})^T\circ u_j\in\{\dl_0,\dl_1,\dl_+,\dl_-\} \quad\Leftrightarrow\quad u_j\in\{\dl_+,\dl_-,\dl_i,\dl_{-i}\}, \] where $\dl_i:=[1,i]$ and $\dl_{-i}:=[1,-i]$ and we have ignored some scalar factors which, by Lemma~\ref{lem:scaling}, do not affect the complexity. For each $v\in\{\dl_+,\dl_-,\dl_i,\dl_{-i}\}$ define \[ h_v(\vc{x}{n}) := \sum_{x_{n+1}\in\{0,1\}} h_\rho(\vc{x}{n+1}) v(x_{n+1}). \] We now argue that for at least one $v\in\{\dl_+,\dl_-,\dl_i,\dl_{-i}\}$, the function $h_v$ is not in $\ang{\mathcal{M}}$. Write $v=[1,\alpha]$, where $\alpha\in\{1,-1,i,-i\}$. First, consider the value $h_v(1,1,a_3,\ldots, a_{n})$, where $a_3,\ldots, a_{n}$ are the above values resulting from applying Lemma~\ref{lem:not_in_cM} to $h$. Then \begin{equation}\label{eq:linear_alpha} h_v(1,1,a_3,\ldots, a_{n}) = h_\rho(1,1,a_3,\ldots, a_{n},0)+\alpha h_\rho(1,1,a_3,\ldots, a_{n},1) \end{equation} is a linear polynomial in the variable $\alpha$. Furthermore, this polynomial is not identically zero since $h_\rho(1,1,a_3,\ldots, a_{n+1})\neq 0$. 
Hence, this expression vanishes for at most one value of $\alpha$, i.e.\ one of the $h_v$. Secondly, for each $v$ let \[ h_v'(x_1,x_2) := \sum_{x_3,\ldots, x_{n+1}\in\{0,1\}} h_\rho(\vc{x}{n+1}) v(x_{n+1}) \prod_{j=3}^{n} u_j(x_j), \] where $u_j\in\{\dl_+,\dl_-,\dl_i,\dl_{-i}\}$ are the unary functions determined by applying Lemma~\ref{lem:not_in_cM} to $h$ as in \eqref{eq:h-prime}. Then \begin{equation}\label{eq:quadratic_alpha} h_v'(0,0)h_v'(1,1)-h_v'(0,1)h_v'(1,0) \end{equation} is a quadratic polynomial in $\alpha$ which is not identically zero: for the value of $\alpha$ corresponding to $v=u_{n+1}$, we have $h_v'=h'$, and the expression is then non-zero by the choice of the $u_j$. Thus this polynomial vanishes for at most two distinct values of $\alpha$, i.e.\ two of the $h_v$. Therefore, there must be at least one $h_v$ such that both \eqref{eq:linear_alpha} and \eqref{eq:quadratic_alpha} are non-zero. By Lemma~\ref{lem:not_in_cM}, this $h_v$ is not in $\ang{\mathcal{M}}$. Furthermore, \[ (K\circ h_v)(\vc{x}{n}) = \sum_{x_{n+1}\in\{0,1\}} f_\rho(\vc{x}{n+1}) v'(x_{n+1}), \] where $v'=(K^{-1})^T\circ v\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$. Thus, $K\circ h_v\in S(\{f,\dl_0,\dl_1,\dl_+,\dl_-\})\setminus\ang{K\circ\mathcal{M}}$ and we are done by the inductive hypothesis. {Now suppose $\cM' = KX\circ\mathcal{M}$. The argument of Case~1 is the same as before, with $KX\circ\mathcal{M}$ instead of $K\circ\mathcal{M}$. In Case~2, again define $h:=K^{-1}\circ f$, then the properties of this function differ from the ones written out above by a bit flip on all inputs. Note that, up to scalar factor, the set $\{\dl_+,\dl_-,\dl_i,\dl_{-i}\}$ is invariant under bit flip. Thus, the argument is analogous to before.} \end{proof} \subsection{Interreducing planar holant problems and planar counting CSPs} \label{s:interreducing_planar} The interreducibility of certain bipartite holant problems and counting CSPs, as in Theorem~\ref{thm:GHZ-state}, will be a crucial ingredient in our hardness proof. We therefore need to ensure this result holds in the planar case.
Recall that $\#\mathsf{R_3\text{-}CSP}\left(\mathcal{F}\right) \equiv_T \holp{\mathcal{F}\mid\{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\}}$. The following lemma will be useful. \begin{lemma}[{\cite[Lemma~6.1]{cai_complexity_2014}}] Let $g\in\Upsilon_2$ be a non-degenerate binary function. Then, for any finite $\mathcal{F}\sse\Upsilon$ containing $g$, we have $\#\mathsf{R_3\text{-}CSP}\left(\mathcal{F}\cup\{\mathrm{EQ}_2\}\right) \leq_T \#\mathsf{R_3\text{-}CSP}\left(\mathcal{F}\right)$. \end{lemma} In fact, this result can straightforwardly be extended to the following. \begin{lemma}\label{lem:interpolate_equality2} Let $g\in\Upsilon_2$ be a non-degenerate binary function. Suppose $\mathcal{G}_1,\mathcal{G}_2\sse\Upsilon$ are finite, then \begin{multline*} \mathsf{Pl\text{-}Holant}\left(\mathcal{G}_1\cup\{g,\mathrm{EQ}_2\}\mid\mathcal{G}_2\cup\{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\}\right) \\ \leq_T \mathsf{Pl\text{-}Holant}\left(\mathcal{G}_1\cup\{g\}\mid\mathcal{G}_2\cup\{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\}\right). \end{multline*} \end{lemma} \begin{proof} The proof is analogous to that of \cite[Lemma~6.1]{cai_complexity_2014}, noting that the constructions used in gadgets and in polynomial interpolation are planar, and that adding more functions on the RHS does not affect the argument. \end{proof} The following result about a polynomial-time reduction from any counting CSP to some problem in \sP{} is known, see e.g.\ \cite[p.~212]{cai_complexity_2017}. Nevertheless, we have not been able to find an explicit proof, so we give one here for completeness. This proof is based on a similar reduction for graph homomorphism problems \cite[Lemma~7.1]{cai_graph_2010}. First, we define the field within which we will be working, and the computational problem to which the counting CSP will be reduced. 
For any finite $\mathcal{F}\sse\Upsilon$, let $A_\mathcal{F}$ be the set of algebraic complex numbers that appear as a value of some function in $\mathcal{F}$: \begin{equation}\label{eq:function_values} A_\mathcal{F} := \left\{ z\in\AA \,\middle|\, \exists f\in\mathcal{F} \text{ and } \mathbf{x}\in\{0,1\}^{\ari(f)} \text{ such that } f(\mathbf{x})=z \right\}. \end{equation} Since $\mathcal{F}$ is fixed and finite, the set $A_\mathcal{F}$ is also fixed and finite. Let $\mathbb{Q}(A_\mathcal{F})$ be the algebraic extension of the field of rational numbers by the numbers in $A_\mathcal{F}$. Note that, given an instance $(V,C)$ of $\#\mathsf{CSP}(\mathcal{F})$, the weight $\wt_{(V,C)}(\xi)$ associated with any assignment $\xi:V\to\{0,1\}$ is in $\mathbb{Q}(A_\mathcal{F})$, as is the total weight $Z_{(V,C)}$. We may thus define the following counting problem: \begin{description}[noitemsep] \item[Name] $\mathsf{COUNT}(\mathcal{F})$ \item[Instance] A tuple $((V,C),x)$, where $V$ is a finite set of variables, $C$ is a finite set of constraints over $\mathcal{F}$, and $x\in\mathbb{Q}(A_\mathcal{F})$. \item[Output] $\#_\mathcal{F}((V,C), x) = \abs{ \{\text{assignments } \xi: V\to\{0,1\} \mid \wt_{(V,C)}(\xi) = x\} }$. \end{description} $\mathsf{COUNT}(\mathcal{F})$ is the problem of counting the number of accepting paths of a Turing machine that accepts an input $((V,C),x,\xi)$ if and only if $\wt_{(V,C)}(\xi) = x$. Here, $\xi:V\to\{0,1\}$ has to be an assignment of values to the variables $V$, otherwise the input is rejected. Given an assignment $\xi$, computing the associated weight in the fixed algebraic extension field $\mathbb{Q}(A_\mathcal{F})$ can be done in time polynomial in the size of $(V,C)$. Furthermore, numbers within $\mathbb{Q}(A_\mathcal{F})$ can be compared efficiently. Therefore the Turing machine runs in time polynomial in the size of its input, and $\mathsf{COUNT}(\mathcal{F})$ is in \sP. 
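To make the definition concrete, the following sketch computes the outputs $\#_\mathcal{F}((V,C),x)$ of $\mathsf{COUNT}$ by brute force on a hypothetical toy instance (the function $g$ and the instance are illustrative choices, not from the paper).

```python
from itertools import product
from collections import Counter

# Toy instance (hypothetical): a single binary function g(x,y) = 1 + x + y,
# variables v1, v2, and the two constraints g(v1,v2) and g(v2,v1).
def g(x, y):
    return 1 + x + y

scopes = [(0, 1), (1, 0)]

def wt(xi):
    # weight of an assignment: product of the constraint function values
    w = 1
    for i, j in scopes:
        w *= g(xi[i], xi[j])
    return w

# COUNT on this instance: for each weight value x, the number of
# assignments xi with wt(xi) = x
counts = Counter(wt(xi) for xi in product([0, 1], repeat=2))
assert counts == {1: 1, 4: 2, 9: 1}
```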
\begin{lemma}\label{lem:NCSP_to_numP} Let $\mathcal{F}\sse\Upsilon$ be finite. Then $\#\mathsf{CSP}(\mathcal{F}) \leq_T \mathsf{COUNT}(\mathcal{F})$. \end{lemma} \begin{proof} Consider an instance $(V,C)$ of $\#\mathsf{CSP}(\mathcal{F})$, where $V$ is a finite set of variables and $C$ is a finite set of constraints over $\mathcal{F}$. Since $\mathcal{F}$ is a fixed finite set, its elements can be enumerated in some order $\vc{f}{m}$, where $m:=\abs{\mathcal{F}}$ is a constant. For each $j\in [m]$, define $a_j:=\ari(f_j)$ as a short-hand. Let $n:=\abs{C}$ be the number of constraints, and define the following set of algebraic complex numbers: \[ {\mathcal{X}} = \left\{ \prod_{j\in[m]} \prod_{\mathbf{x}_j\in\{0,1\}^{a_j}} (f_j(\mathbf{x}_j))^{k_{j,\mathbf{x}_j}} \;\middle|\; k_{j,\mathbf{x}_j}\in\mathbb{Z}_{\geq 0} \text{ and } \sum_{j\in[m]} \sum_{\mathbf{x}_j\in\{0,1\}^{a_j}} k_{j,\mathbf{x}_j} = n \right\}. \] Each element of $\mathcal{X}$ arises from at least one choice of the integers $k_{j,\mathbf{x}_j}$ (distinct choices may yield the same product), and there are $M:=\sum_{j\in[m]}2^{a_j}$ such integers. Thus, the elements of $\mathcal{X}$ are indexed by the $M$-tuples of non-negative integers satisfying the property that the sum of all elements of the tuple is $n$. The set of all such $M$-tuples is exactly the support set of a multinomial distribution with $n$ trials and $M$ possible outcomes for each trial; therefore \[ \abs{\mathcal{X}} \leq \binom{n+M-1}{M-1} \leq \frac{(n+M-1)^{M-1}}{(M-1)!}, \] where the second inequality uses the straightforward-to-derive bound $\binom{n}{k} \leq \frac{n^k}{k!}$ on binomial coefficients. Now the parameter $(M-1)$ is constant (it depends only on the properties of the fixed finite set $\mathcal{F}$), so $\abs{\mathcal{X}}$ is polynomial in $n$ and thus polynomial in the instance size. Note that $\mathcal{X}\sse \mathbb{Q}(A_\mathcal{F})$.
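The stars-and-bars count can be checked mechanically on a toy example. Here the signature set is a hypothetical choice; in general, distinct exponent tuples may yield the same product, so the binomial coefficient bounds $\abs{\mathcal{X}}$ from above.

```python
from itertools import product
from math import comb

# Hypothetical tiny setting: F contains a single binary function f_1 taking
# the values 1, 2, 3, 5 on its four inputs, so M = 2^2 = 4; take n = 3.
values = [1, 2, 3, 5]
M, n = len(values), 3

# All M-tuples of non-negative integers summing to n (stars and bars)
tuples = [k for k in product(range(n + 1), repeat=M) if sum(k) == n]
assert len(tuples) == comb(n + M - 1, M - 1)

# The set X of attainable products; distinct tuples can collide in general,
# so |X| is bounded by the number of tuples
X = set()
for k in tuples:
    p = 1
    for v, e in zip(values, k):
        p *= v ** e
    X.add(p)
assert len(X) <= comb(n + M - 1, M - 1)
```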
Consider an element of $\mathcal{X}$ of the form \[ \prod_{j\in[m]} \prod_{\mathbf{x}_j\in\{0,1\}^{a_j}} (f_j(\mathbf{x}_j))^{k_{j,\mathbf{x}_j}}. \] The condition on the sum of the integers $k_{j,\mathbf{x}_j}$, together with non-negativity, implies that at most $n$ of these integers are non-zero. Thus, each element of $\mathcal{X}$ can be computed in time polynomial in $n$. Since the size of $\mathcal{X}$ is also polynomial in $n$, this means the elements of $\mathcal{X}$ can be enumerated in time polynomial in $n$. Recall from Section~\ref{s:csp} that, for any assignment $\xi:V\to\{0,1\}$, we have $\wt_{(V,C)}(\xi) = \prod_{c\in C}f_c(\xi|_c)$, where $f_c$ is the function associated with the constraint $c$ and $\xi|_c$ is the restriction of the assignment $\xi$ to the scope of $c$. Now, for any $j\in [m]$ and $\mathbf{x}_j\in\{0,1\}^{a_j}$, define $\kappa_{j,\mathbf{x}_j} := \abs{\{c\in C \text{ such that } f_c=f_j \text{ and } \xi|_c=\mathbf{x}_j\}}$, then \[ \wt_{(V,C)}(\xi) = \prod_{c\in C}f_c(\xi|_c) = \prod_{j\in[m]} \prod_{\mathbf{x}_j\in\{0,1\}^{a_j}} (f_j(\mathbf{x}_j))^{\kappa_{j,\mathbf{x}_j}} \] and \[ \sum_{j\in[m]} \sum_{\mathbf{x}_j\in\{0,1\}^{a_j}} \kappa_{j,\mathbf{x}_j} = \abs{C} = n. \] Hence $\wt_{(V,C)}(\xi)\in\mathcal{X}$. This in turn implies that \begin{equation}\label{eq:CSP-to-COUNT} Z_\mathcal{F}(V,C) = \sum_{x\in\mathcal{X}} x\cdot \#_\mathcal{F}((V,C), x). \end{equation} {As noted above, $\abs{\mathcal{X}}$ is polynomial in the instance size and its elements can be enumerated in polynomial time.} Multiplication and addition within $\mathbb{Q}(A_\mathcal{F})$ are also efficient, {hence} \eqref{eq:CSP-to-COUNT} gives the desired reduction $\#\mathsf{CSP}(\mathcal{F}) \leq_T \mathsf{COUNT}(\mathcal{F})$. \end{proof} We are now ready to prove the planar version of Theorem~\ref{thm:GHZ-state}.
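Identity \eqref{eq:CSP-to-COUNT} can also be checked by enumerating all assignments of a small hypothetical instance (the function $f$ and the instance below are illustrative choices, not from the paper).

```python
from itertools import product
from collections import Counter

# Toy instance (hypothetical): one binary function f(x,y) = 2 if x = y else 3,
# variables v1, v2, v3, and the two constraints f(v1,v2), f(v2,v3).
def f(x, y):
    return 2 if x == y else 3

scopes = [(0, 1), (1, 2)]

def wt(xi):
    w = 1
    for i, j in scopes:
        w *= f(xi[i], xi[j])
    return w

assignments = list(product([0, 1], repeat=3))
Z = sum(wt(xi) for xi in assignments)       # partition function Z_F(V,C)

# Right-hand side of eq. (CSP-to-COUNT): sum over attained weight values x
# of x times the number of assignments of weight x
hist = Counter(wt(xi) for xi in assignments)
assert Z == sum(x * cnt for x, cnt in hist.items())
```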
\begin{theorem}\label{thm:GHZ-state-planar} Let $\mathcal{G}_1,\mathcal{G}_2\sse\Upsilon$ be finite. Let $[y_0,y_1,y_2]\in\Upsilon_2$ be an $\omega$-normalised and non-degenerate function. In the case $y_0=y_2=0$, further assume that $\mathcal{G}_1$ contains a unary function $[a,b]$ which is $\omega$-normalised and satisfies $ab\neq 0$. Then: \[ \plholp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \mid \{\mathrm{EQ}_3\}\cup\mathcal{G}_2} \equiv_T \mathsf{Pl}\text{-}\#\mathsf{CSP}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2). \] \end{theorem} \begin{proof} First, consider the reduction from the holant problem to the counting CSP. We have \begin{align*} \plholp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \mid \{\mathrm{EQ}_3\}\cup\mathcal{G}_2} &\leq_T \plholp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \cup \{\mathrm{EQ}_3\}\cup\mathcal{G}_2} \\ &\leq_T \mathsf{Pl}\text{-}\#\mathsf{CSP}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2), \end{align*} where the first step is by forgetting the bipartition and the second step is by Definition~\ref{dfn:planar_CSP}. The reduction from the counting CSP to the holant problem is more complicated and separates into multiple cases. \textbf{Case~1}: Assume $\plholp{\{[y_0,y_1,y_2]\}\mid\{\mathrm{EQ}_3\}}$ is \sP-hard. Then the more general counting problem $\plholp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \mid \{\mathrm{EQ}_3\}\cup\mathcal{G}_2}$ is also \sP-hard. The set $\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2$ is finite, hence $\mathsf{COUNT}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2)$ is in \sP. Therefore, \[ \mathsf{COUNT}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2) \leq_T \plholp{\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1 \mid \{\mathrm{EQ}_3\}\cup\mathcal{G}_2}. 
\] Furthermore, by Lemma~\ref{lem:NCSP_to_numP} with $\mathcal{F}=\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2$, \[ \mathsf{Pl}\text{-}\#\mathsf{CSP}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2) \leq_T \mathsf{COUNT}(\{[y_0,y_1,y_2]\}\cup\mathcal{G}_1\cup\mathcal{G}_2). \] Combining the two reductions yields the desired result. \textbf{Case~2}: Assume $\plholp{\{[y_0,y_1,y_2]\}\mid\{\mathrm{EQ}_3\}}$ is not \sP-hard. By Corollary~\ref{cor:pl-holant_binary}, this implies at least one of the following properties holds: \begin{enumerate} \item $[y_0,y_1,y_2]\in\ang{\mathcal{E}}$, or \item $[y_0,y_1,y_2]\in\mathcal{A}$, or \item $y_0,y_1,y_2\neq 0$ and $y_0^3=y_2^3$. \end{enumerate} We have dropped the holographic transformation from Subcase~2 because $[y_0,y_1,y_2]$ is required to be $\om$-normalised, which forces the holographic transformation to be trivial. For Properties~1 and 2, the desired reduction follows from Lemmas~2--4 of \cite{cai_holant_2012} since all the gadget constructions in those proofs are planar. For Property~3, note that the equation $y_0^3=y_2^3$ implies that $y_2 = e^{2ik\pi/3}y_0$ for some $k\in\{0,1,2\}$. Since $[y_0,y_1,y_2]$ is $\om$-normalised, we must have $k=0$ and thus $y_2=y_0$. 
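The simplification of Property~3 uses only the elementary fact that $y_0^3=y_2^3$ has exactly the three solutions $y_2=e^{2ik\pi/3}y_0$; this can be confirmed numerically (with an arbitrary non-zero $y_0$):

```python
import numpy as np

y0 = 1.2 - 0.4j                                  # arbitrary non-zero value
# the three cube-root-of-unity multiples of y0 all satisfy y2^3 = y0^3 ...
cands = [np.exp(2j * np.pi * k / 3) * y0 for k in range(3)]
assert all(np.isclose(c ** 3, y0 ** 3) for c in cands)
# ... and they are the only solutions, being the roots of t^3 - y0^3 = 0
roots = np.roots([1, 0, 0, -y0 ** 3])
assert all(any(np.isclose(r, c) for c in cands) for r in roots)
```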
We will now prove the following chain of reductions, where $g:=[y_0,y_1,y_0]$: \begin{align*} \mathsf{Pl}\text{-}\#\mathsf{CSP}\left(\{g\}\cup \mathcal{G}_1 \cup \mathcal{G}_2\right) &\leq_T \mathsf{Pl\text{-}Holant}\left(\{g, \mathrm{EQ}_3\}\cup \mathcal{G}_1 \cup \mathcal{G}_2\right) \\ &\leq_T \mathsf{Pl\text{-}Holant}\left(\{g, \mathrm{EQ}_2\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_2,\mathrm{EQ}_3\} \cup \mathcal{G}_2\right) \\ &\leq_T \mathsf{Pl\text{-}Holant}\left(\{g,\mathrm{EQ}_2\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\} \cup \mathcal{G}_2\right) \\ &\leq_T \mathsf{Pl\text{-}Holant}\left(\{g\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\} \cup \mathcal{G}_2\right) \\ &\leq_T \mathsf{Pl\text{-}Holant}\left(\{g\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_3\} \cup \mathcal{G}_2\right). \end{align*} The first reduction is the definition of $\mathsf{Pl}\text{-}\#\mathsf{CSP}$, the second step is by Proposition~\ref{prop:bipartite}, and the third step is because adding an additional function on the RHS cannot make the problem easier. The fourth reduction step is by Lemma~\ref{lem:interpolate_equality2}. \begin{figure} \centering (a) \qquad \input{tikz_files/EQ1-gadget.tikz} \qquad\qquad\qquad\qquad (b) \qquad \input{tikz_files/EQ2-gadget.tikz} \caption{(a) A gadget for $y_0\cdot\mathrm{EQ}_1$ and (b) a gadget for $y_0(y_0+y_1)\cdot\mathrm{EQ}_2$, where each black degree-2 vertex is assigned $[y_0,y_1,y_0]$ and each white degree-3 vertex is assigned $\mathrm{EQ}_3$.} \label{fig:equality_gadgets} \end{figure} The first three reduction steps do not use any of the specific properties of $g$, and the fourth step only uses its property of being non-degenerate. It is only the fifth (and last) reduction step -- which we will now prove -- that uses the specific symmetry properties of $g$. 
Consider the gadgets in Figure~\ref{fig:equality_gadgets}, which can both be used on the RHS of the problem $\mathsf{Pl\text{-}Holant}\left(\{g\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_3\} \cup \mathcal{G}_2\right)$. The first gadget has effective function $y_0\cdot\mathrm{EQ}_1$ and the second gadget has effective function $y_0(y_0+y_1)\cdot\mathrm{EQ}_2$. Recall that $y_0\neq 0$ by the assumption of the subcase and $y_0^2=y_0y_2\neq y_1^2$ by non-degeneracy of $g$. The latter implies that $y_0+y_1\neq 0$. We thus have non-zero scalings of $\mathrm{EQ}_1$ and $\mathrm{EQ}_2$ on the RHS. Therefore, by Lemmas~\ref{lem:scaling} and~\ref{lem:realisable}, \[ \mathsf{Pl\text{-}Holant}\left(\{g\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_1,\mathrm{EQ}_2,\mathrm{EQ}_3\} \cup \mathcal{G}_2\right) \leq_T \mathsf{Pl\text{-}Holant}\left(\{g\}\cup \mathcal{G}_1 \mid \{\mathrm{EQ}_3\} \cup \mathcal{G}_2\right). \] This establishes the desired result. \end{proof} \subsection{Proof of the \textsf{Holant}\texorpdfstring{\textsuperscript{+}}{\textasciicircum +} dichotomy theorem} \label{s:hardness} \newcommand{\stateholantplusdichotomy}{ Let $\mathcal{F}\sse\Upsilon$ be finite. $\Holp[+]{\mathcal{F}}$ can be computed in polynomial time if $\mathcal{F}$ satisfies one of the following conditions: \begin{itemize} \item $\mathcal{F}\subseteq\avg{\mathcal{T}}$, or \item there exists $O\in\mathcal{O}$ such that $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}=\avg{KX\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, or \item $\mathcal{F}\subseteq\mathcal{A}$. \end{itemize} In all other cases, the problem is \#\textsf{P}-hard. The same dichotomy holds for $\mathsf{Pl\text{-}Holant}^+(\mathcal{F})$. } \begin{theorem}\label{thm:holant_plus} \stateholantplusdichotomy \end{theorem} \begin{proof} Define $\mathcal{F}':=\mathcal{F}\cup\{\dl_0,\dl_1,\dl_+,\dl_-\}$.
The tractability part of the theorem follows by reduction to a conservative holant problem or to $\#\mathsf{CSP}$, respectively: if $\mathcal{F}$ is a subset of one of the tractable sets of Theorem~\ref{thm:Holant-star}, then $\mathcal{F}'$ is also a subset of one of the tractable sets of Theorem~\ref{thm:Holant-star}, and thus $\Holp[+]{\mathcal{F}}$ can be solved in polynomial time. Similarly, if $\mathcal{F}\subseteq\mathcal{A}$, then $\mathcal{F}'\subseteq\mathcal{A}$. Furthermore, \[ \Holp[+]{\mathcal{F}} = \Holp{\mathcal{F}'} \leq_T \holp{\mathcal{F}'\cup\{\mathrm{EQ}_3\}} \leq_T \#\mathsf{CSP}(\mathcal{F}'), \] where the first reduction holds because adding a function cannot make the problem easier, and the second reduction is Proposition~\ref{prop:CSP_holant}. Now, by Theorem~\ref{thm:csp}, $\mathcal{F}'\sse\mathcal{A}$ implies that \new{$\#\mathsf{CSP}(\mathcal{F}')$ can be solved in polynomial time. Thus, by the above reduction, $\Holp[+]{\mathcal{F}}$ can be solved in polynomial time.} Hence from now on we may assume that we are not in one of the known tractable cases. We will then prove the hardness of $\Holp[+]{\mathcal{F}}$ via Theorem~\ref{thm:GHZ-state} (or Theorem~\ref{thm:GHZ-state-planar} in the planar case), which requires ternary and binary symmetric non-decomposable functions. Not being in one of the known tractable cases implies in particular that $\mathcal{F}\not\sse\ang{\mathcal{T}}$, i.e.\ there is some function $f\in\mathcal{F}$ having at least one factor which is a non-decomposable function of arity $\geq 3$. Thus, we can apply Theorem~\ref{thm:three-qubit-gadget} to realise a non-decomposable ternary function $f'\in S(\{f,\dl_0,\dl_1,\dl_+,\dl_-\})$ via a planar gadget. This function has either $W$ or GHZ type; we distinguish cases accordingly. \begin{enumerate} \item\label{c:W} Suppose $f'$ has $W$ type. There are several subcases.
\begin{itemize} \item If $f'\notin (K\circ\mathcal{M})\cup (KX\circ\mathcal{M})$, then there exists a non-decomposable symmetric ternary function $h\in S(\{f'\})$ by Lemma~\ref{lem:W_symmetrise}. \item If $f'\in K\circ\mathcal{M}$, since $\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$, there exists $g\in\mathcal{F}\setminus\ang{K\circ\mathcal{M}}$. We can realise a non-decomposable binary function $g'\in S(\{g,\dl_0,\dl_1,\dl_+,\dl_-\})\setminus\ang{K\circ\mathcal{M}}$ via a planar gadget by Lemma~\ref{lem:binary_notin_KcircM}. Then Lemma~\ref{lem:W_symmetrise-K} can be applied, yielding a non-decomposable symmetric ternary function $h\in S(\{f',g'\})$. \item If $f'\in KX\circ\mathcal{M}$, the process is analogous to the subcase $f'\in K\circ\mathcal{M}$. \end{itemize} In each subcase, by Lemma~\ref{lem:W_symmetrise} or Lemma~\ref{lem:W_symmetrise-K}, the gadget for $h$ is planar, the non-decomposable symmetric ternary function $h$ is in $S(\mathcal{F}')$, and $h$ has GHZ type. \item Suppose $f'$ has GHZ type. Again, there are several subcases. \begin{itemize} \item If $f'$ is already symmetric, let $h:=f'$. \item If $f'$ is not symmetric, we can realise a non-decomposable symmetric ternary function $f''\in S(\{f',\dl_0,\dl_1,\dl_+,\dl_-\})$ by Lemma~\ref{lem:GHZ_symmetrise}. The gadget for $f''$ is planar. \begin{itemize} \item If $f''$ has GHZ type, let $h:=f''$. \item If $f''$ has $W$ type, go back to Case~\ref{c:W} with $f''$ in place of $f'$ and apply the symmetrisation procedure for $W$-type functions to get a {symmetric} GHZ-type function. \end{itemize} \end{itemize} \end{enumerate} To summarise, if $\mathcal{F}$ is not one of the tractable sets, then there exists a non-decomposable symmetric ternary function $h\in S(\mathcal{F}')$ which can be realised via a planar gadget and which has GHZ type. 
\new{Recall from Section~\ref{s:results_ternary_symmetric} that this means} there exists $M\in\operatorname{GL}_2(\AA)$ such that $h=M\circ\mathrm{EQ}_3$ and either $M^T\circ\mathrm{EQ}_2$ is $\om$-normalised, or $M^T\circ\mathrm{EQ}_2=c\cdot\mathrm{NEQ}$ for some $c\in\AA\setminus\{0\}$. In the latter case, since $\smm{1&0\\0&\ld}\circ\mathrm{NEQ}=\ld\cdot\mathrm{NEQ}$ for any $\ld$, \new{recall that} we can choose $M$ such that $M^T\circ\dl_0$ is $\om$-normalised. We may thus apply the following chain of interreductions: \begin{align*} \holp[+]{\mathcal{F}} &= \holp{\mathcal{F}'} \\ &\equiv_T \holp{\mathcal{F}'\cup\{h\}} \\ &\equiv_T \holp{\mathcal{F}'\cup\{\mathrm{EQ}_2\}\mid\{h,\mathrm{EQ}_2\}} \\ &\equiv_T \holp{M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\mid\{\mathrm{EQ}_3,M^{-1}\circ\mathrm{EQ}_2\}} \\ &\equiv_T \#\mathsf{CSP}\left( M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\} \right) \end{align*} where the first step is the definition of $\mathsf{Holant}^+$, the second step is by Lemma~\ref{lem:realisable}, the third step is by Proposition~\ref{prop:bipartite} with $\mathcal{G}_1=\mathcal{F}'$ and $\mathcal{G}_2=\{h\}$, the fourth step is by Theorem~\ref{thm:Valiant_Holant}, and the last step is by Theorem~\ref{thm:GHZ-state}. To prove $\holp[+]{\mathcal{F}}$ is hard, it therefore suffices to show that the counting CSP is hard whenever $\mathcal{F}$ is not one of the tractable families of Theorem~\ref{thm:holant_plus}. We show the contrapositive: if the counting CSP is polynomial-time computable according to Theorem~\ref{thm:csp}, then $\mathcal{F}$ is one of the tractable families of Theorem~\ref{thm:holant_plus}. The argument is split into cases according to the tractable cases of Theorem~\ref{thm:csp}. \begin{itemize} \item Suppose $M^T\circ(\mathcal{F}\cup\{\mathrm{EQ}_2,\dl_0,\dl_1,\dl_+,\dl_-\})\cup\{M^{-1}\circ\mathrm{EQ}_2\}\sse\mathcal{A}$. 
The condition $M^T\circ\{\mathrm{EQ}_2,\dl_0,\dl_1\}\sse\mathcal{A}$ is equivalent to $M\in\mathcal{B}$ by \eqref{eq:cS_definition}. The remaining conditions of the case are $M^T\circ\dl_+,M^T\circ\dl_-,M^{-1}\circ\mathrm{EQ}_2\in\mathcal{A}$. Denote by $f_L$ the binary function corresponding to a matrix $L\in\operatorname{GL}_2(\AA)$. By Lemma~\ref{lem:cS-cA}, $M\in\mathcal{B}$ and $M^T\circ\dl_+,M^T\circ\dl_-\in\mathcal{A}$ together imply that $M\in\mathcal{B}_\mathcal{A}:=\{L\in\operatorname{GL}_2(\AA)\mid f_L\in\mathcal{A}\}$, i.e.\ $M$ is a matrix corresponding to a binary function in $\mathcal{A}$. Furthermore, by Lemma~\ref{lem:cS_cA-group}, $\mathcal{B}_\mathcal{A}$ is a group, so $M^{-1}\in\mathcal{B}_\mathcal{A}$. Now, by Lemma~\ref{lem:affine_closed}, $\mathcal{A}$ is closed under taking gadgets, so by Lemma~\ref{lem:hc_gadget}, $M^{-1}\circ\mathrm{EQ}_2\in\mathcal{A}$. Transposition of a matrix permutes the inputs of the corresponding function, so $M\in\mathcal{B}_\mathcal{A}$ also implies $M^T, (M^T)^{-1}\in\mathcal{B}_\mathcal{A}$. Thus, $M^T\circ\mathcal{F}\sse\mathcal{A}$ implies that $\mathcal{F}\sse(M^T)^{-1}\circ\mathcal{A}\sse\mathcal{A}$, one of the known tractable cases. \item Suppose $M^T\circ(\mathcal{F}\cup\{\mathrm{EQ}_2,\dl_0,\dl_1,\dl_+,\dl_-\})\cup\{M^{-1}\circ\mathrm{EQ}_2\}\sse\ang{\mathcal{E}}$. Now, $M^T\circ\mathrm{EQ}_2$ and $M^{-1}\circ\mathrm{EQ}_2$ are non-decomposable symmetric binary functions with matrices $M^TM$ and $M^{-1}(M^{-1})^T$. All non-decomposable symmetric binary functions in $\ang{\mathcal{E}}$ take the form $[\ld,0,\mu]$ or $[0,\ld,0]$ for some $\ld,\mu\in\AA\setminus\{0\}$. \begin{itemize} \item If $M^TM=\smm{\ld&0\\0&\mu}$, then $M=QD$ for some $Q\in\mathcal{O}$ and some invertible diagonal matrix $D$ by Lemma~\ref{lem:ATA-D}. Now $(QD)^T\circ\mathcal{F}\sse\ang{\mathcal{E}}$ implies $\mathcal{F}\sse\ang{(QD^{-1})\circ\mathcal{E}}=\ang{Q\circ\mathcal{E}}$, which is one of the known tractable families. 
\item Similarly, if $M^TM=\ld X$, then $M=KD$ or $M=KXD$ for some invertible diagonal matrix $D$ by Lemma~\ref{lem:ATA-X}. Now $K^T\doteq XK^{-1}$, so $(KD)^T\circ\mathcal{F}\sse\ang{\mathcal{E}}$ implies $\mathcal{F}\sse\ang{(KXD^{-1})\circ\mathcal{E}}=\ang{K\circ\mathcal{E}}$, which is another of the known tractable families. An analogous argument holds for $KX$ instead of $K$. \end{itemize} \end{itemize} We have shown that if $\#\mathsf{CSP}\left( M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\} \right)$ can be solved in polynomial time, then $\mathcal{F}$ is one of the tractable families listed in Theorem~\ref{thm:holant_plus}. By Theorem~\ref{thm:csp}, the counting CSP is \#\textsf{P}-hard in all other cases. Thus, if $\mathcal{F}$ is not one of the tractable families listed in the theorem, then $\holp[+]{\mathcal{F}}$ is \#\textsf{P}-hard. This completes the proof of the theorem for the non-planar case. As all gadgets used in this proof are planar, the above constructions also work in the planar case. The only difference is that, when considering planar holant problems, we need to use Theorem~\ref{thm:GHZ-state-planar} instead of Theorem~\ref{thm:GHZ-state} and apply the planar $\#\mathsf{CSP}$ dichotomy from Theorem~\ref{thm:planar_csp} instead of Theorem~\ref{thm:csp}. In addition to the tractable cases from the general $\#\mathsf{CSP}$ dichotomy, which we have already excluded, Theorem~\ref{thm:planar_csp} contains one additional tractable family: $\mathsf{Pl}\text{-}\#\mathsf{CSP}\left( M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\} \right)$ can be solved in polynomial time if $M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\}\sse \smm{1&1\\1&-1}\circ\mathcal{H}$, where $\mathcal{H}$ is the set of matchgate functions. All other cases remain \#\textsf{P}-hard.
By Lemma~\ref{lem:unary_matchgate}, the only unary matchgate functions are scalings of the pinning functions $\dl_0$ and $\dl_1$, so the only unary functions in $\smm{1&1\\1&-1}\circ\mathcal{H}$ are $\smm{1&1\\1&-1}\circ\dl_0\doteq\dl_+$ and $\smm{1&1\\1&-1}\circ\dl_1\doteq\dl_-$ (up to scaling). Yet $M^T\circ\mathcal{F}'$ contains at least the four unary functions $M^T\circ\{\dl_0,\dl_1,\dl_+,\dl_-\}$ and, by invertibility of $M$, these four functions have to be mapped to four pairwise linearly-independent functions. Thus, $M^T\circ(\mathcal{F}'\cup\{\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\}$ cannot be a subset of $\smm{1&1\\1&-1}\circ\mathcal{H}$. Therefore, the $\mathsf{Holant}^+$ dichotomy remains unchanged when restricted to planar instances. \end{proof} \section{The full \textsf{Holant}\texorpdfstring{\textsuperscript{c}}{\textasciicircum c} dichotomy} \label{s:dichotomy} We now combine the techniques developed in deriving the $\mathsf{Holant}^+$ dichotomy with techniques from the real-valued $\mathsf{Holant}^c$ dichotomy \cite{cai_dichotomy_2017} to get a full complexity classification for complex-valued $\mathsf{Holant}^c$. As in the $\mathsf{Holant}^+$ case, the general proof strategy is to realise a non-de\-com\-posable ternary function and then a non-decomposable symmetric ternary function. Without $\dl_+$ and $\dl_-$, we can no longer use Theorem~\ref{thm:three-qubit-gadget}. Instead, we employ a technique using $\dl_0$, $\dl_1$ and self-loops from the proof of Theorem~5.1 in \cite{cai_dichotomy_2017} with some slight modifications. Self-loops reduce the arity of a function in steps of 2, so sometimes this technique fails to yield a non-decomposable ternary function. In that case, the process instead yields a non-decomposable arity-4 function with specific properties. The complexity classification for the latter case was resolved in \cite{cai_dichotomy_2017} even for complex values.
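The arity-reduction mechanism mentioned above is simply tensor contraction: a self-loop connects two arguments of a function through $\mathrm{EQ}_2$ and sums them out, so the arity drops by exactly 2 each time. A minimal sketch (with a random arity-5 function):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal((2,) * 5)       # a random arity-5 function
# a self-loop on the last two arguments: sum over w of h(x, y, z, w, w)
g = np.einsum('xyzww->xyz', h)
assert g.shape == (2, 2, 2)             # the arity drops from 5 to 3
assert np.allclose(g, h[..., 0, 0] + h[..., 1, 1])
```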
The symmetrisation constructions for binary and ternary functions, as well as the subsequent hardness proofs, occasionally require a little extra work in the $\mathsf{Holant}^c$ setting as compared to the $\mathsf{Holant}^+$ setting; we deal with those issues before proving the main theorem. \subsection{Hardness proofs involving a non-decomposable ternary function} \label{s:ternary} First, we prove several lemmas that give a complexity classification for $\mathsf{Holant}^c$ problems in the presence of a non-decomposable ternary function. These results adapt techniques used in the $\mathsf{Holant}^+$ complexity classification to the $\mathsf{Holant}^c$ setting. They also replace Lemmas 5.1, 5.3, and 5.5--5.7 of \cite{cai_dichotomy_2017}. Whereas the last three of those only apply to real-valued functions, our new results work for complex values. \begin{lemma}\label{lem:case_KM} Suppose $\mathcal{F}\sse\Upsilon$ is finite and $f\in\mathcal{F}$ is a non-decomposable ternary function. If $f\in K\circ\mathcal{M}$ and {$\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$}, then $\holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard. Similarly, if $f\in KX\circ\mathcal{M}$ and {$\mathcal{F}\nsubseteq\ang{KX\circ\mathcal{M}}$}, then $\holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard. \end{lemma} \begin{proof} We consider the case $f\in K\circ\mathcal{M}$ and {$\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$}; the proof for the second case is analogous {since $X\circ\mathcal{M}$ differs from $\mathcal{M}$ only by a bit flip on all function inputs}. As {$\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$}, we can find $h\in\mathcal{F}\setminus\ang{K\circ\mathcal{M}}$. Then $h$ has arity at least 2 and is non-degenerate because all unary functions are in $K\circ\mathcal{M}$. We distinguish cases according to the arity and decomposability properties of $h$. \textbf{Case~1}: Suppose $h$ has arity 2; then non-degeneracy implies $h$ is non-decomposable.
Thus, by Lemma~\ref{lem:W_symmetrise-K}, there exists a non-decomposable symmetric ternary function $g\in S(\{f,h\})$. This function $g$ is guaranteed to have GHZ type by the same lemma. We thus have \begin{align*} \holp[c]{\mathcal{F}} = \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}} &\equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1,g\}} \\ &\equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1,\mathrm{EQ}_2\} \mid \{g,\mathrm{EQ}_2\}} \end{align*} where the equality is the definition of $\mathsf{Holant}^c$, the first reduction is by Lemma~\ref{lem:realisable}, and the second reduction is by {Proposition~\ref{prop:bipartite}}. Furthermore, there exists $M\in\operatorname{GL}_2(\AA)$ such that $g=M\circ\mathrm{EQ}_3$ and either $M^T\circ\mathrm{EQ}_2$ is $\om$-normalised or $M^T\circ\mathrm{EQ}_2=c\cdot\mathrm{NEQ}$ for some $c\in\AA\setminus\{0\}$. In the latter case, we can choose $M$ such that $M^T\circ\dl_0$ is $\om$-normalised. Therefore, \begin{align*} \holp[c]{\mathcal{F}} &\equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1,\mathrm{EQ}_2\} \mid \{g,\mathrm{EQ}_2\}} \\ &\equiv_T \holp{M^T\circ(\mathcal{F}\cup\{\dl_0,\dl_1,\mathrm{EQ}_2\})\mid\{\mathrm{EQ}_3,M^{-1}\circ\mathrm{EQ}_2\}} \\ &\equiv_T \#\mathsf{CSP}\left( M^T\circ(\mathcal{F}\cup\{\dl_0,\dl_1,\mathrm{EQ}_2\})\cup\{M^{-1}\circ\mathrm{EQ}_2\} \right) \end{align*} where the first step is from above, the second step is Theorem~\ref{thm:Valiant_Holant}, and the third step is by Theorem~\ref{thm:GHZ-state}. Now, $f\in\mathcal{F}\cap K\circ\mathcal{M}$ is a non-decomposable ternary function. By Lemma~\ref{lem:family-types}, any non-decomposable ternary function in $\mathcal{M}$ has $W$ type, and holographic transformations by definition do not affect the entanglement class. Thus, $M^T\circ f$ has $W$ type. But Lemma~\ref{lem:family-types} also shows that any non-decomposable ternary function in $\ang{\mathcal{E}}$ or in $\mathcal{A}$ has GHZ type. 
Therefore $M^T\circ f\notin\ang{\mathcal{E}}$ and $M^T\circ f\notin\mathcal{A}$, hence the counting CSP is \#\textsf{P}-hard. \textbf{Case~2}: Suppose $h$ is an $n$-ary function with $n>2$, and $h$ is non-decomposable. Write the ternary function $f$ as $K\circ f'$, where $f'\in\mathcal{M}$ means that it takes the form $f'=(a,b,c,0,d,0,0,0)$ for some $a,b,c,d\in\AA$. Non-decomposability of $f$ implies $bcd\neq 0$. Consider the three different gadgets that consist of a vertex assigned function $f$ with a self-loop (where the three gadgets differ in which argument of $f$ corresponds to the dangling edge). The gadget where the first edge is dangling has effective function \begin{align*} \sum_{y\in\{0,1\}} f(x,y,y) &= \sum_{y,z_1,z_2,z_3\in\{0,1\}} K_{x z_1}K_{y z_2} K_{y z_3} f'(z_1,z_2,z_3) \\ &= 2\sum_{z_1,z_2,z_3\in\{0,1\}} K_{x z_1} \mathrm{NEQ}(z_2,z_3) f'(z_1,z_2,z_3) \\ &= 2\sum_{z_1\in\{0,1\}} K_{x z_1} \left( f'(z_1,0,1) + f'(z_1,1,0) \right) \\ &= 2(b+c) (K\circ\dl_0)(x). \end{align*} Using a self-loop on a vertex assigned function $f$, we can therefore realise the unary function $2(b+c) (K\circ\dl_0) = 2(b+c)[1,i]$, {where $i$ is the imaginary unit}. The other two gadgets similarly yield $2(b+d)[1,i]$ and $2(c+d)[1,i]$. Since $bcd\neq 0$, at least one of those gadgets is non-zero. Thus we can realise $\dl_i=[1,i]$ up to irrelevant scaling. We can now prove the following chain of interreductions: \begin{align*} \holp[c]{\mathcal{F}} &= \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}} \\ &\equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1,\dl_i,f,h\}} \\ &\equiv_T \holp{\mathcal{F}\cup\{f,h,\mathrm{EQ}_2\}\mid\{\dl_0,\dl_1,\dl_i,\mathrm{EQ}_2\}} \\ &\equiv_T \holp{ K^{-1}\circ(\mathcal{F}\cup\{f,h,\mathrm{EQ}_2\}) \mid K^T\circ\{\dl_0,\dl_1,\dl_i,\mathrm{EQ}_2\} } \end{align*} Here, the equality is the definition of $\mathsf{Holant}^c$. The first reduction is by Lemma~\ref{lem:realisable} and the above gadget constructions.
The second reduction is by {Proposition~\ref{prop:bipartite}} and the third reduction is by Theorem~\ref{thm:Valiant_Holant}. Recall that $K=\smm{1&1\\i&-i}$, so $K^T\circ\dl_0\doteq \dl_+$, $K^T\circ\dl_1\doteq \dl_-$, $K^T\circ\dl_i\doteq \dl_1$ and $K^T\circ\mathrm{EQ}_2\doteq\mathrm{NEQ}$. Let $h':=K^{-1}\circ h$ and recall from above that $f'=K^{-1}\circ f = (a,b,c,0,d,0,0,0)$. Thus, the effective function of the gadget in Figure~\ref{fig:unaries}a is $k(x,y)=\sum_{z\in\{0,1\}} f'(x,y,z)\dl_\pm(z)$. This is a LHS gadget and $k=(a\pm b, c, d, 0)$. Since $b\neq 0$, there is a choice of sign such that $k(0,0)\neq 0$. Then the gadget in Figure~\ref{fig:unaries}b has effective function \[ k'(x,y) = \sum_{z_1,z_2\in\{0,1\}} k(z_1,x)\mathrm{NEQ}(z_1,z_2)k(z_2,y). \] It is a symmetric LHS gadget, and $k' = (2d(a\pm b),cd,cd,0) \doteq [\frac{2}{c}(a\pm b),1,0]$. Note that with the above choice of sign, $z:=k'(0,0)\neq 0$. \begin{figure} \centering (a) \input{tikz_files/k-gadget.tikz} \qquad\qquad (b) \input{tikz_files/k_prime-gadget.tikz} \qquad\qquad (c) \input{tikz_files/unary-gadget.tikz} \caption{(a) The gadget for $k$, where we denote $f$ by a non-symmetric box to indicate that it is not generally a symmetric function. (b) The gadget for $k'$, which is symmetric. (c) A family of gadgets for producing unary functions.} \label{fig:unaries} \end{figure} A chain of $\ell$ of these symmetric gadgets, connected to $\dl_{\pm}$ at one end and connected by copies of $\mathrm{NEQ}$, as shown in Figure~\ref{fig:unaries}c, gives a RHS gadget with function $[1,\ell z\pm 1]$. Thus, since $z\neq 0$, we can realise polynomially many different unary functions on the RHS. Since $h'=K^{-1}\circ h\notin\mathcal{M}$, there exists a bit string $\mathbf{a}\in\{0,1\}^n$ of Hamming weight at least 2 such that $h'(\mathbf{a})\neq 0$. Without loss of generality, assume $a_1=a_2=1$.
Otherwise, permute the arguments of $h$; the resulting function is in $S(\{h\})$, so it can be added to the LHS of our holant problem without affecting the complexity. Let $v_m=\dl_{a_m}$ for $m\in\{3,4,\ldots, n\}$ and define \[ g_1(x_1,x_2) := \sum_{x_3,\ldots, x_n\in\{0,1\}} h'(\vc{x}{n}) \prod_{m=3}^n v_m(x_m). \] Then \begin{equation}\label{eq:not_in_cM} g_1(1,1)\neq 0. \end{equation} Furthermore, by Proposition~\ref{prop:popescu-rohrlich_gadget}, we know that there exist $u_m\in\{\dl_0,\dl_1,\dl_+,\dl_-\}$ for all $m\in\new{\{3,4,\ldots, n\}}$ such that the following function is non-decomposable: \[ g_2(x_1,x_2) := \sum_{x_3,\ldots, x_n\in\{0,1\}} h'(\vc{x}{n}) \prod_{m=3}^n u_m(x_m). \] The non-decomposability condition for binary functions is \begin{equation}\label{eq:entangled} g_2(0,0)g_2(1,1)-g_2(0,1)g_2(1,0) \neq 0. \end{equation} \new{Now consider a third function \[ g_3(x_1,x_2) := \sum_{x_3,\ldots, x_n\in\{0,1\}} h'(\vc{x}{n}) \prod_{m=3}^n w_m(x_m) \] where $w_m = [1,1+\ell_m z]$ for each $m\in\{3,4,\ldots, n\}$, with the $\ell_m$ being integer variables whose values are yet to be determined. Define \[ p(\ell_3,\ldots,\ell_n) := g_3(1,1) \big( g_3(0,0)g_3(1,1)-g_3(0,1)g_3(1,0) \big). \] Then $p$ is a multivariate polynomial where the maximum exponent of any variable is 3. By \eqref{eq:not_in_cM} and \eqref{eq:entangled}, this polynomial is not identically zero. Now since the variable $\ell_3$ has degree at most 3 in $p$, there exists a value $\ld_3\in\{0,1,2,3\}$ such that $p(\ld_3,\ell_4,\ldots,\ell_n)$ is not identically zero. We may repeat this argument for $\ell_4,\ldots,\ell_n$ until we have found values $\ld_4,\ldots,\ld_n$ for all the variables such that $p(\ld_3,\ldots,\ld_n)\neq 0$. Each resulting $w_m$ is realisable by a RHS gadget. Thus, $g_3$ is realisable by a LHS gadget. The function $g_3$} is binary, non-decomposable, and not in $K\circ\mathcal{M}$. Therefore we can proceed as in the case where $h$ is binary.
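The two computations driving Case~2 -- the self-loop identity $\sum_y f(x,y,y)=2(b+c)(K\circ\dl_0)(x)$ and the chain gadget producing $[1,\ell z+1]$ -- can be double-checked numerically. This NumPy sketch uses arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal(4) + 1j * rng.standard_normal(4)
K = np.array([[1, 1], [1j, -1j]])

# f' = (a, b, c, 0, d, 0, 0, 0) as a tensor over (z1, z2, z3)
fp = np.zeros((2, 2, 2), dtype=complex)
fp[0, 0, 0], fp[0, 0, 1], fp[0, 1, 0], fp[1, 0, 0] = a, b, c, d

# f = K∘f': the holographic transformation applied to every argument
f = np.einsum('xu,yv,zw,uvw->xyz', K, K, K, fp)

# a self-loop over the last two arguments of f gives 2(b+c)·(K∘δ0)
loop = np.einsum('xyy->x', f)
assert np.allclose(loop, 2 * (b + c) * K[:, 0])

# a chain of ell copies of k' ≐ [z,1,0] joined by NEQ, capped with δ+,
# has effective function ≐ [1, ell·z + 1]
z = 0.3 - 0.7j
kp = np.array([[z, 1], [1, 0]])
NEQ = np.array([[0, 1], [1, 0]])
ell = 5
vec = np.array([1, 1], dtype=complex)   # δ+
for _ in range(ell):
    vec = vec @ kp @ NEQ
assert np.allclose(vec, [1, ell * z + 1])
```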
\textbf{Case~3}: Suppose $h$ is an $n$-ary function with $n>2$, and $h$ is decomposable. Since $h\notin\ang{K\circ\mathcal{M}}$, in any decomposition of $h$ there must be one factor $h'$ which is not in $\ang{K\circ\mathcal{M}}$. This factor has arity at least 2 since all unary functions are in $\ang{K\circ\mathcal{M}}$, and its arity is strictly smaller than that of $h$. Furthermore, by Lemma~\ref{lem:decomposable}, $h'\in S(\{h,\dl_0,\dl_1\})$. We may thus apply the argument to $h'$ instead of $h$. If $h$ satisfies the conditions of Case~1, we are immediately done. Case~2 straightforwardly reduces to Case~1. Finally, Case~3 yields a function of smaller arity than the original one, to which the case distinction can then be applied. Because the arity decreases every time we hit Case~3, the argument terminates. Thus the proof is complete. \end{proof} \begin{lemma}\label{lem:arity3_hardness} Suppose $\mathcal{F}\sse\Upsilon$ is finite and contains a non-decomposable ternary function $f$. Then $\holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard unless: \begin{enumerate} \item\label{c:orthogonal} There exists $O\in\mathcal{O}$ such that $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, or \item\label{c:KcE} $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}=\avg{KX\circ\mathcal{E}}$, or \item\label{c:KcM} $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, or \item\label{c:McA} $\mathcal{F}\subseteq M\circ\mathcal{A}$ for some $M\in\mathcal{B}$, as defined in \eqref{eq:cS_definition}. \end{enumerate} In all the exceptional cases, the problem $\holp[c]{\mathcal{F}}$ can be solved in polynomial time. \end{lemma} \begin{rem} The case $\mathcal{F}\sse\ang{\mathcal{T}}$ does not appear here because $f$ is a non-decomposable ternary function and $f\in\mathcal{F}$, which implies $\mathcal{F}\nsubseteq\ang{\mathcal{T}}$. \end{rem} \begin{proof} We distinguish two cases according to whether the ternary function $f$ is symmetric. 
\textbf{Case~1}: Suppose $f$ is symmetric. By the entanglement classification (cf.\ Section~\ref{s:entanglement}), $f$ has either GHZ type or $W$ type. We treat these two subcases separately. \textbf{Subcase~a}: Suppose $f$ is of GHZ type. By Proposition~\ref{prop:make_bipartite}, \[ \holp[c]{\mathcal{F}} = \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}} \equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}\mid\{\mathrm{EQ}_2\}}. \] Furthermore, we have \[ \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}\mid\{\mathrm{EQ}_2\}} \equiv_T \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}\mid\{\mathrm{EQ}_2,\dl_0,\dl_1\}}. \] The $\leq_T$ direction is immediate; for the other direction, note that any occurrence of $\dl_0$ or $\dl_1$ on the RHS can be replaced by a gadget consisting of a LHS copy of the unary function connected to $\mathrm{EQ}_2$. By Theorem~\ref{thm:Valiant_Holant}, for any $M\in\operatorname{GL}_2(\AA)$, \begin{multline}\label{eq:Holant-c_holographic} \holp{\mathcal{F}\cup\{\dl_0,\dl_1\}\mid\{\mathrm{EQ}_2,\dl_0,\dl_1\}} \\ \equiv_T \holp{M^{-1}\circ(\mathcal{F}\cup\{\dl_0,\dl_1\}) \,\middle|\, M^T\circ\{\mathrm{EQ}_2,\dl_0,\dl_1\}}. \end{multline} In the following, we aim to identify a matrix $M$ such that Theorem~\ref{thm:GHZ-state} can be applied to {show that the RHS of \eqref{eq:Holant-c_holographic} is equivalent to a counting CSP. To apply the theorem, the following three properties must hold for $M$ and $\mathcal{F}$: \begin{itemize} \item $\mathrm{EQ}_3\in M^{-1}\circ\mathcal{F}$, \item $M^T\circ\mathrm{EQ}_2$ is $\om$-normalised, and \item if $M^T\circ\mathrm{EQ}_2=[0,\ld,0]$ for some $\ld\in\AA\setminus\{0\}$, then there must exist an $\om$-normalised function in $M^T\circ\{\dl_0,\dl_1\}$ which is not a pinning function. \end{itemize}} We now show that it is always possible to choose $M$ so that these conditions are satisfied. As $f$ has GHZ type, there exists $A\in\operatorname{GL}_2(\AA)$ such that $f=A\circ\mathrm{EQ}_3$.
The matrix $A$ is not unique {(cf.\ Section~\ref{s:results_ternary_symmetric})}; for now we pick an arbitrary one among all matrices that satisfy $f=A\circ\mathrm{EQ}_3$. \begin{itemize} \item Suppose $A^T\circ\mathrm{EQ}_2\neq \ld\cdot\mathrm{NEQ}$ for any $\ld\in\AA$. If $A^T\circ\mathrm{EQ}_2$ is $\om$-normalised, let $M:=A$. Otherwise, by the argument in Section~\ref{s:ternary}, there exists $D_\om:=\smm{1&0\\0&\om}$ with $\om\in\{e^{2i\pi/3}, e^{4i\pi/3}\}$ such that $(D_\om A^T)\circ\mathrm{EQ}_2$ is $\om$-normalised. Let $M:=A D_\om$; then $f=M\circ\mathrm{EQ}_3$. Now, since $\mathrm{EQ}_3\in M^{-1}\circ\mathcal{F}$ and $M^T\circ\mathrm{EQ}_2$ is an $\om$-normalised symmetric binary function, Theorem \ref{thm:GHZ-state} can be applied. \item Suppose $A^T\circ\mathrm{EQ}_2=\ld\cdot\mathrm{NEQ}$ for some $\ld\in\AA$; then $A^T A\doteq X$. Thus, by Lemma~\ref{lem:ATA-X}, $A=KD$ or $A=KXD$ for some invertible diagonal matrix $D$. In either of these cases, all {entries} of $A$ are non-zero, so $A^T\circ\dl_0=[a,b]$ for some $a,b\in\AA\setminus\{0\}$. Thus there exists $M:=A D_\om$ for some $\om$ with $\om^3=1$ such that $f=M\circ\mathrm{EQ}_3$ and $M^T\circ\dl_0$ is $\om$-normalised, i.e.\ the conditions of Theorem \ref{thm:GHZ-state} are satisfied. \end{itemize} In either case, Theorem~\ref{thm:GHZ-state} yields \begin{equation}\label{eq:Holant-c_csp} \holp[c]{\mathcal{F}} \equiv_T \#\mathsf{CSP}\left( M^{-1}\circ\left( \mathcal{F} \cup \left\{ \dl_0, \dl_1 \right\} \right) \cup M^T \circ\{ \mathrm{EQ}_2, \dl_0, \dl_1 \} \right), \end{equation} where $\mathrm{EQ}_3\in M^{-1}\circ\mathcal{F}$. The matrix $M$ may still not be uniquely defined, but the remaining ambiguity does not affect any of the subsequent arguments.
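Two facts used in this subcase lend themselves to a quick numerical check: for $A=KD$ the matrix $A^TA$ is antidiagonal, so $A^T\circ\mathrm{EQ}_2$ is a multiple of $\mathrm{NEQ}$; and $D_\om\circ\mathrm{EQ}_3=\mathrm{EQ}_3$ because the only non-trivial entry picks up a factor $\om^3=1$. A NumPy sketch with an arbitrary invertible diagonal $D$:

```python
import numpy as np

K = np.array([[1, 1], [1j, -1j]])
D = np.diag([1.5, 0.5 - 0.25j])          # arbitrary invertible diagonal
A = K @ D

# A^T A is antidiagonal, i.e. A^T∘EQ2 is a non-zero multiple of NEQ
ATA = A.T @ A
assert np.isclose(ATA[0, 0], 0) and np.isclose(ATA[1, 1], 0)
assert not np.isclose(ATA[0, 1], 0)

# D_ω fixes EQ3, since its only non-trivial entry gains ω^3 = 1
w = np.exp(2j * np.pi / 3)
Dw = np.diag([1, w])
EQ3 = np.zeros((2, 2, 2), dtype=complex)
EQ3[0, 0, 0] = EQ3[1, 1, 1] = 1
EQ3w = np.einsum('xu,yv,zw,uvw->xyz', Dw, Dw, Dw, EQ3)
assert np.allclose(EQ3w, EQ3)
```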
By \eqref{eq:Holant-c_csp} and Theorem~\ref{thm:csp}, $\holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard unless \[ \mathcal{F}':= M^{-1}\circ\left( \mathcal{F} \cup \left\{ \dl_0, \dl_1 \right\} \right) \cup M^T \circ\{\mathrm{EQ}_2, \dl_0, \dl_1\} \] is a subset of either $\avg{\mathcal{E}}$ or $\mathcal{A}$. Again, we treat these two cases separately. \begin{itemize} \item Suppose $\mathcal{F}'\sse\ang{\mathcal{E}}$. All non-decomposable binary functions in $\avg{\mathcal{E}}$ are of the form \[ \begin{pmatrix}\alpha&0\\0&\beta\end{pmatrix} \qquad\text{or}\qquad \begin{pmatrix}0&\alpha\\\beta&0\end{pmatrix} \] for some $\alpha,\beta\in\AA\setminus\{0\}$. Note that $M^T\circ\mathrm{EQ}_2$ corresponds to the matrix $M^T M$. As $M^T\circ\mathrm{EQ}_2\in\mathcal{F}'$, there are two subcases. \begin{itemize} \item Suppose $M^T M = \smm{\alpha&0\\0&\beta}$. Then by Lemma~\ref{lem:ATA-D}, $M=QD$ for some orthogonal 2 by 2 matrix $Q$ and some invertible diagonal matrix $D$. Now, $\mathcal{F}'\sse\ang{\mathcal{E}}$ implies $M^{-1}\circ\mathcal{F}\sse\ang{\mathcal{E}}$, which is equivalent to $\mathcal{F}\sse\ang{M\circ\mathcal{E}}$ since holographic transformations and closure under tensor products commute. But $D\circ\mathcal{E}=\mathcal{E}$ for any invertible diagonal matrix $D$. Hence $\mathcal{F}\sse\ang{Q\circ\mathcal{E}}$, where $Q$ is orthogonal, i.e.\ $\mathcal{F}$ satisfies Item~\ref{c:orthogonal} of this lemma. \item Suppose $M^T M = \smm{0&\alpha\\\beta&0}$. Then by Lemma~\ref{lem:ATA-X}, $M=KD$ or $M=KXD$ for some invertible diagonal matrix $D$. As in the previous case, $\mathcal{F}'\sse\ang{\mathcal{E}}$ implies $\mathcal{F}\sse\ang{M\circ\mathcal{E}}$. Since $\mathcal{E}$ is invariant under holographic transformations by diagonal matrices and under bit flips, this is the same as $\mathcal{F}\sse\ang{K\circ\mathcal{E}}$. Hence $\mathcal{F}$ satisfies Item~\ref{c:KcE} of this lemma.
\end{itemize} This completes the analysis {of the subcase $\mathcal{F}'\sse\ang{\mathcal{E}}$.} \item Suppose $\mathcal{F}'\sse\mathcal{A}$. This implies in particular \begin{equation}\label{eq:condition_cA} M^T\circ\left\{\mathrm{EQ}_2,\dl_0,\dl_1\right\}\sse\mathcal{A}, \end{equation} which is the definition of $M\in\mathcal{B}$, cf.\ \eqref{eq:cS_definition}. The assumption $\mathcal{F}'\sse\mathcal{A}$ also implies $M^{-1}\circ\{\dl_0,\dl_1\}\sse\mathcal{A}$. This does not yield any further restrictions on $M=\smm{a&b\\c&d}$. To see this, note that we already know that $M^T\circ\{\dl_0,\dl_1\}\sse\mathcal{A}$, i.e.\ $[a,b],[c,d]\in\mathcal{A}$. Now, up to an irrelevant scalar factor, $M^{-1}\circ\{\dl_0,\dl_1\}$ is $\{[d,-c],[-b,a]\}$. Note that $[-b,a]=-\smm{0&1\\-1&0}\circ [a,b]$ and $[d,-c]=\smm{0&1\\-1&0}\circ [c,d]$. The function corresponding to $\smm{0&1\\-1&0}$ is affine and $\mathcal{A}$ is closed under taking gadgets (cf.\ Lemma~\ref{lem:affine_closed}) and under scalings. Hence, by Lemma~\ref{lem:hc_gadget}, $[a,b],[c,d]\in\mathcal{A}$ implies $[d,-c],[-b,a]\in\mathcal{A}$. Thus $\mathcal{F}'\sse\mathcal{A}$ implies $M\in\mathcal{B}$. Furthermore, $\mathcal{F}'\sse\mathcal{A}$ implies $M^{-1}\circ\mathcal{F}\sse\mathcal{A}$, or equivalently $\mathcal{F}\sse M\circ\mathcal{A}$, i.e.\ $\mathcal{F}$ satisfies Item~\ref{c:McA} of this lemma. \end{itemize} To summarise, if $f$ has GHZ type, the problem is tractable if $\mathcal{F}\subseteq\avg{Q\circ\mathcal{E}}$ for some orthogonal 2 by 2 matrix $Q$, if $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}$, or if there exists $M\in\mathcal{B}$ such that $\mathcal{F}\subseteq M\circ\mathcal{A}$. In all other cases, the problem is \#\textsf{P}-hard by reduction from $\#\mathsf{CSP}$. \textbf{Subcase~b}: Suppose $f$ is of $W$ type, then: \begin{itemize} \item If $f\notin (K\circ\mathcal{M})\cup (KX\circ\mathcal{M})$, $\holp{f}$ is \#\textsf{P}-hard by Theorem~\ref{thm:W-state}.
\item If {$\mathcal{F}\subseteq\ang{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\ang{KX\circ\mathcal{M}}$}, the problem is polynomial-time computable by Theorem~\ref{thm:Holant-star}; this is Item~\ref{c:KcM} of this lemma. \item If $f\in K\circ\mathcal{M}$ but {$\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$}, the problem is \#\textsf{P}-hard by Lemma~\ref{lem:case_KM}, and analogously with $KX$ instead of $K$. \end{itemize} \textbf{Case~2}: Suppose $f$ is not symmetric. \begin{itemize} \item If $f\notin (K\circ\mathcal{M})\cup (KX\circ\mathcal{M})$, we can realise a non-decomposable symmetric ternary function by Lemmas \ref{lem:GHZ_symmetrise} and \ref{lem:W_symmetrise} and then proceed as in Case~1. \item If {$\mathcal{F}\subseteq\ang{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq \ang{KX\circ\mathcal{M}}$}, the problem is polynomial-time computable by Theorem~\ref{thm:Holant-star}; this is Item~\ref{c:KcM} of the lemma. \item Finally, if $f\in K\circ\mathcal{M}$ but {$\mathcal{F}\nsubseteq\ang{K\circ\mathcal{M}}$}, or $f\in KX\circ\mathcal{M}$ but {$\mathcal{F}\nsubseteq\ang{KX\circ\mathcal{M}}$}, the problem is \#\textsf{P}-hard by Lemma~\ref{lem:case_KM}. \end{itemize} This covers all cases. \end{proof} \subsection{Main theorem} \label{s:main_theorem} We now have all the components required to prove the main dichotomy for $\mathsf{Holant}^c$. The following theorem generalises Theorem 5.1 of \cite{cai_dichotomy_2017}, which applies only to real-valued functions. The proof follows the one in \cite[Theorem~5.1]{cai_dichotomy_2017} fairly closely. \begin{theorem}\label{thm:holant-c} Let $\mathcal{F}\sse\Upsilon$ be finite. 
Then $\holp[c]{\mathcal{F}}$ is \#\textsf{P}-hard unless: \begin{itemize} \item $\mathcal{F}\subseteq\avg{\mathcal{T}}$, or \item there exists $O\in\mathcal{O}$ such that $\mathcal{F}\subseteq\avg{O\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{E}}=\avg{KX\circ\mathcal{E}}$, or \item $\mathcal{F}\subseteq\avg{K\circ\mathcal{M}}$ or $\mathcal{F}\subseteq\avg{KX\circ\mathcal{M}}$, or \item there exists $B\in\mathcal{B}$ such that $\mathcal{F}\subseteq B\circ\mathcal{A}$, or \item $\mathcal{F}\subseteq\mathcal{L}$. \end{itemize} In all of the exceptional cases, $\holp[c]{\mathcal{F}}$ is polynomial-time computable. \end{theorem} \begin{proof} If $\mathcal{F}$ is in one of the first four exceptional cases, then polynomial-time computability follows from Theorem~\ref{thm:Holant-star}. In the \new{penultimate exceptional case, let $\mathcal{F}':= B^{-1}\circ\mathcal{F}\sse\mathcal{A}$, then \begin{align*} \holp[c]{\mathcal{F}} &\leq_T \holp[c]{\mathcal{F}\mid\{\mathrm{EQ}_2\}} \\ &\leq_T \holp[c]{\mathcal{F}'\mid\{B^T\circ\mathrm{EQ}_2\}} \\ &\leq_T \holp[c]{\mathcal{F}'\cup\{B^T\circ\mathrm{EQ}_2\}} \\ &\leq_T \holp[c]{\mathcal{F}'\cup\{B^T\circ\mathrm{EQ}_2,\mathrm{EQ}_3\}} \\ &\leq_T \#\mathsf{CSP}(\mathcal{F}'\cup\{B^T\circ\mathrm{EQ}_2\}) \end{align*} where the first reduction is Proposition~\ref{prop:make_bipartite} and the second reduction is Theorem~\ref{thm:Valiant_Holant} with $M=B^{-1}$. The third and fourth reductions hold because dropping the bipartite restriction and adding a function cannot make the problem easier. The final reduction combines Proposition~\ref{prop:CSP_holant} and Lemma~\ref{lem:csp^c}. Now, $B^T\circ\mathrm{EQ}_2\in\mathcal{A}$ by \eqref{eq:cS_definition}. A counting CSP with affine constraint functions is polynomial-time computable by Theorem~\ref{thm:csp}. 
In the final exceptional case, we have \[ \holp[c]{\mathcal{F}} \leq_T \holp[c]{\mathcal{F}\cup\{\mathrm{EQ}_4\}} \leq_T \#\mathsf{CSP}_2^c(\mathcal{F}) \] where the final reduction can be proved analogously to Proposition~\ref{prop:CSP_holant}. Polynomial-time computability thus follows from Theorem~\ref{thm:csp_2^c}.} From now on, assume $\mathcal{F}$ does not satisfy any of the tractability conditions. In particular, this implies that $\mathcal{F}\not\subseteq\avg{\mathcal{T}}$, i.e.\ $\mathcal{F}$ has multipartite entanglement. By Lemma~\ref{lem:decomposable}, without loss of generality, we may focus on non-decomposable functions. So assume that there is some non-decomposable function $f\in S(\mathcal{F}\cup\{\dl_0,\dl_1\})$ of arity $n\geq 3$. If the function has arity 3, we are done by Lemma~\ref{lem:arity3_hardness}. Hence assume $n\geq 4$. Now if there was a bit string $\mathbf{a}\in\{0,1\}^n$ such that $f(\mathbf{x})=0$ for all $\mathbf{x}\in\{0,1\}^n\setminus\{\mathbf{a}\}$, then $f$ would be a scaling of some function in $\ang{\{\dl_0,\dl_1\}}$, so it would be degenerate and thus decomposable. Hence, $f$ being non-decomposable implies that there exist two distinct $n$-bit strings $\mathbf{a},\mathbf{b}\in\{0,1\}^n$ such that $f(\mathbf{a})f(\mathbf{b})\neq 0$. As in the proof of \cite[Theorem~5.1]{cai_dichotomy_2017}, let: \begin{equation}\label{eq:D0} D_0 = \min \left\{ d(\mathbf{x},\mathbf{y}) \mid \mathbf{x}\neq\mathbf{y}, f(\mathbf{x})\neq 0, f(\mathbf{y})\neq 0 \right\}, \end{equation} where $d(\cdot,\cdot)$ is the Hamming distance. By the above argument, $D_0$ exists. We now distinguish cases according to the values of $D_0$. \textbf{Case $D_0\geq 4$ and $D_0$ is even}: Pick a pair of bit strings $\mathbf{a},\mathbf{b}$ such that $f(\mathbf{a})f(\mathbf{b})\neq 0$ with {minimum} Hamming distance $d(\mathbf{a},\mathbf{b})$. 
Pin all inputs where the two bit strings agree (without loss of generality, we always assume bit strings agree on the last $n-D_0$ bits; otherwise permute the arguments). This realises a function $f'\in\Upsilon_{D_0}$ of the form \begin{equation}\label{eq:generalised_eq} f'(\mathbf{x}) = \begin{cases} \alpha & \text{if } x_j=a_j \text{ for {all} } j\in [D_0] \\ \beta & \text{if } x_j=\bar{a}_j \text{ for {all} } j\in [D_0] \\ 0 & \text{otherwise}, \end{cases} \end{equation} where $\alpha\beta\neq 0$. {If $D_0=4$, this is a generalised equality of arity 4. If $D_0>4$, there must exist $j,k\in[D_0]$ with $j\neq k$ but $a_j=a_k$. Via a self-loop, we can realise the function $\sum_{x_j,x_k\in\{0,1\}}\mathrm{EQ}_2(x_j,x_k)f'(\mathbf{x})$, which has arity $(D_0-2)$ and still satisfies a property like \eqref{eq:generalised_eq} for the remaining arguments. This process can be repeated, reducing the arity in steps of 2, to finally realise an arity-4 generalised equality function.} Then \#\textsf{P}-hardness follows by Lemma~\ref{lem:generalised_equality4}. \textbf{Case $D_0\geq 3$ and $D_0$ is odd}: Pin and use self-loops analogously to the previous case to realise a function of the form \eqref{eq:generalised_eq} with arity 3. Then \#\textsf{P}-hardness follows by Lemma~\ref{lem:arity3_hardness}. \textbf{Case $D_0=2$}: We can realise by pinning either a function $g=[\alpha,0,\beta]$ or a function $\tilde{g} = \smm{0&\alpha\\\beta&0}$ (up to permutation of the arguments), i.e.\ a generalised equality or a generalised disequality. In the second subcase, suppose the indices of $f$ that were not pinned in order to realise $\tilde{g}$ are $j$ and $k$. Replace $f$ by the gadget $\sum_{z\in\{0,1\}}\tilde{g}(x_j,z)f(\vc{x}{j-1},z,x_{j+1},\ldots, x_n)$. The new function has the same entanglement and the same $D_0$ as the old one, but pinning all inputs except $j$ and $k$ now gives rise to a generalised equality function as in the first subcase.
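The pinning and self-loop gadgets used throughout this case analysis have a direct computational reading if a function $f\in\Upsilon_n$ is stored as a table of its non-zero values. The following sketch is purely illustrative (the dictionary representation and the function names are ours, not part of the reduction); it shows how one self-loop turns an arity-6 generalised equality into an arity-4 one, as in the even case above:

```python
from itertools import product

def pin(f, j, a):
    """Pin input j of a function f (dict over bit tuples) to the bit a."""
    return {x[:j] + x[j+1:]: v for x, v in f.items() if x[j] == a}

def self_loop(f, n, j, k):
    """Connect inputs j < k of an n-ary f via EQ_2: sum over x_j = x_k = z."""
    assert j < k
    g = {}
    for y in product((0, 1), repeat=n - 2):
        total = 0
        for z in (0, 1):
            x = list(y)
            x.insert(j, z)   # re-insert z at position j ...
            x.insert(k, z)   # ... and at position k (valid since j < k)
            total += f.get(tuple(x), 0)
        if total != 0:
            g[tuple(y)] = total
    return g

# Generalised equality of arity 6 with a = 000000, alpha = 2, beta = 3.
f6 = {(0,) * 6: 2, (1,) * 6: 3}
# One self-loop on two agreeing arguments reduces the arity by 2:
f4 = self_loop(f6, 6, 0, 1)   # arity-4 generalised equality, same alpha, beta
```

Repeating the self-loop step reduces the arity in steps of 2, exactly as in the case $D_0\geq 4$.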
\new{If necessary, redefine $\alpha,\beta$ according to this new function. In the following, we assume $j=1$, $k=2$; this is without loss of generality as permuting arguments is a gadget operation.} Following the proof of \cite[Theorem~5.1]{cai_dichotomy_2017}, define $f_\mathbf{x} (z_1,z_2):=f(z_1,z_2,\mathbf{x})$ for any $\mathbf{x}\in\{0,1\}^{n-2}$ and let \begin{align*} A_1 &:= \{ \mathbf{x}\in\{0,1\}^{n-2} \mid f_\mathbf{x}(z_1,z_2) = c[\alpha,0,\beta] \text{ for some } c\in\AA\setminus\{0\} \}, \\ B_1 &:= \{ \mathbf{x}\in\{0,1\}^{n-2} \mid f_\mathbf{x}(z_1,z_2) \neq c[\alpha,0,\beta] \text{ for any } c\in\AA \}. \end{align*} \new{Note that $A_1$ is non-empty since we can realise $g$ by pinning. The set $B_1$ must then be non-empty since otherwise $f$ would be decomposable as $f(z_1,z_2,\mathbf{x}) = g(z_1,z_2)f'(\mathbf{x})$ for some $f'\in\Upsilon_{n-2}$. Furthermore,} note that $A_1\cap B_1=\emptyset$ \new{and that} if $f_\mathbf{y}$ is identically 0, then $\mathbf{y}$ is not in $A_1\cup B_1$. Thus we can define: \[ D_1 = \min\{ d(\mathbf{x},\mathbf{y}) \mid \mathbf{x} \in A_1, \mathbf{y}\in B_1 \}. \] Pick a pair $\mathbf{a}\in A_1,\mathbf{b}\in B_1$ with {minimum} Hamming distance and pin wherever they are equal, as in the cases where $D_0\geq 3$. This realises a function \[ h(\vc{x}{D_1+2}) = g(x_1,x_2)\prod_{k=1}^{D_1}\dl_{a_k}(x_{k+2}) + g'(x_1,x_2)\prod_{k=1}^{D_1}\dl_{\bar{a}_k}(x_{k+2}), \] where $g'(x_1,x_2):=f_\mathbf{b}(x_1,x_2)$. Note that the assumption $D_0=2$ implies that either $f_\mathbf{y}(0,1)=f_\mathbf{y}(1,0)=0$ or $f_\mathbf{y}(0,0)=f_\mathbf{y}(1,1)=0$ for all $\mathbf{y}\in B_1$. {(Assume otherwise, i.e.\ each of the sets $\{f_\mathbf{y}(0,1),f_\mathbf{y}(1,0)\}$ and $\{f_\mathbf{y}(0,0),f_\mathbf{y}(1,1)\}$ contains at least one non-zero value. 
Then it is straightforward to see that there are two inputs for which $f$ is non-zero and whose Hamming distance is 1, so by \eqref{eq:D0} we should have $D_0=1$, a contradiction.)} Thus, $g'$ is either $\smm{\ld&0\\0&\mu}$ or $\smm{0&\ld\\\mu&0}$ for some $\ld,\mu\in\AA$ such that $\ld,\mu$ are not both zero. If $g'=\smm{\ld&0\\0&\mu}$, then the definition of $B_1$ furthermore implies $\alpha\mu-\beta\ld\neq 0$. We now distinguish cases according to the values of $D_1$. \begin{itemize} \item If $D_1\geq 3$, \new{then $\ari(h)=D_1+2\geq 5$}. Distinguish cases according to $g'$. \begin{itemize} \item Suppose $g'=\smm{\ld&0\\0&\mu}$. If $\ld\neq 0$, we can pin the first two inputs of $h$ to 00 to get a function ${\alpha}(\prod_{k=1}^{D_1}\dl_{a_k}(x_{k+2}))+{\ld}(\prod_{k=1}^{D_1}\dl_{\bar{a}_k}(x_{k+2}))$. \new{The resulting function still has arity at least 3, so we can} proceed as in the cases $D_0\geq 4$ or $D_0\geq 3$. If $\ld=0$ then $\mu\neq 0$ and we can pin to 11 instead {to get the function $\beta(\prod_{k=1}^{D_1}\dl_{a_k}(x_{k+2}))+\mu(\prod_{k=1}^{D_1}\dl_{\bar{a}_k}(x_{k+2}))$, and proceed analogously}. \item Suppose $g'=\smm{0&\ld\\\mu&0}$. If $\ld\neq 0$, we can pin the first input of $h$ to 0 to realise: \[ {\alpha}\delta_0(x_2)\prod_{k=1}^{D_1}\dl_{a_k}(x_{k+2}) + \ld\delta_1(x_2)\prod_{k=1}^{D_1}\dl_{\bar{a}_k}(x_{k+2}) \] at which point we again proceed as in the cases $D_0\geq 4$ or $D_0\geq 3$. If $\ld=0$ then $\mu\neq 0$ and we can pin the first input to 1 instead {to get the function \[ \beta\delta_1(x_2)\prod_{k=1}^{D_1}\dl_{a_k}(x_{k+2}) + \mu\delta_0(x_2)\prod_{k=1}^{D_1}\dl_{\bar{a}_k}(x_{k+2}), \] and proceed analogously.} \end{itemize} \item If $D_1=2$, there are eight subcases, depending on the form of $g'$ and the value of $\mathbf{a}$. They can be considered pairwise, grouping $\mathbf{a}$ with $\bar{\mathbf{a}}$.
\begin{itemize} \item If $g'=\smm{\ld&0\\0&\mu}$ {and $\mathbf{a}=00$}, the function after pinning is \[ \begin{pmatrix} \alpha&0&0&\ld \\ 0&0&0&0 \\ 0&0&0&0 \\ \beta&0&0&\mu \end{pmatrix} \] with $\alpha\beta\neq 0$ and $\alpha\mu-\beta\ld\neq 0$. In this case, apply Lemma~\ref{lem:interpolate_equality4} {to show \[ \mathsf{Holant}^c(\mathcal{F}\cup\{\mathrm{EQ}_4\})\leq_T\mathsf{Holant}^c(\mathcal{F}). \] Now by Lemma~\ref{lem:generalised_equality4}, $\#\mathsf{CSP}_2^c(\mathcal{F}\cup\{\mathrm{EQ}_4\})\leq_T\mathsf{Holant}^c(\mathcal{F}\cup\{\mathrm{EQ}_4\})$. Therefore hardness follows by Theorem~\ref{thm:csp_2^c}.} Here, the original proof in \cite{cai_dichotomy_2017} used a different technique requiring real values. {If $\mathbf{a}=11$, the argument is analogous with the first and last columns swapped.} \item {If $g'=\smm{\ld&0\\0&\mu}$ and $\mathbf{a}=01$, the function after pinning is \[ \begin{pmatrix} 0&\alpha&\ld&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&\beta&\mu&0 \end{pmatrix} \] where again $\alpha\beta\neq 0$ and $\alpha\mu-\beta\ld\neq 0$. Call this function $h'$. Then the gadget $\sum_{y,z\in\{0,1\}} h'(x_1,x_2,y,z)h'(x_3,x_4,y,z)$ corresponds to taking the product of the above matrix with its transpose; it takes values \[ \begin{pmatrix} \alpha^2+\lambda^2&0&0&\alpha\beta+\ld\mu \\ 0&0&0&0 \\ 0&0&0&0 \\ \alpha\beta+\ld\mu&0&0&\beta^2+\mu^2 \end{pmatrix} \] Now \[ \det \pmm{\alpha^2+\lambda^2&\alpha\beta+\ld\mu \\ \alpha\beta+\ld\mu&\beta^2+\mu^2} = (\alpha\mu-\beta\ld)^2 \neq 0, \] which means Lemma~\ref{lem:interpolate_equality4} can be applied again. Thus hardness follows as in the first subcase. If $\mathbf{a}=10$, the middle two columns of the first matrix are swapped, but the argument is analogous.} \item If $g'=\smm{0&\ld\\\mu&0}$ {and $\mathbf{a}=00$}, the function after pinning is \[ \begin{pmatrix} \alpha&0&0&0 \\ 0&0&0&\ld \\ 0&0&0&\mu \\ \beta&0&0&0 \end{pmatrix}. 
\] If $\ld\neq 0$, pin the first input of $h$ to 0 to get the function $[\alpha,0,0,\ld]$. If $\ld=0$ then $\mu\neq 0$, so pin the first input of $h$ to 1 to get the function {$\smm{0&0&0&\mu\\\beta&0&0&0}$. If $\mathbf{a}=11$, then the first and last columns are swapped; the argument is otherwise analogous. In each case, the resulting function has arity~3 and is non-decomposable (in fact, it is a generalised equality), so we can show hardness via Lemma~\ref{lem:arity3_hardness}.} \item {If $g'=\smm{0&\ld\\\mu&0}$ and $\mathbf{a}=01$, the function after pinning is \[ \begin{pmatrix} 0&\alpha&0&0 \\ 0&0&\ld&0 \\ 0&0&\mu&0 \\ 0&\beta&0&0 \end{pmatrix} \] and we can pin as in the previous subcase. If $\mathbf{a}=10$, the middle columns are swapped and the process is analogous. All resulting functions have arity~3 and are non-decomposable (in fact, they are generalised equalities), so we can show hardness via Lemma~\ref{lem:arity3_hardness}.} \end{itemize} \item If $D_1=1$, then $h(x_1,x_2,x_3)=g(x_1,x_2)\dl_{a_1}(x_3) + g'(x_1,x_2)\dl_{\bar{a}_1}(x_3)$ is a non-decomposable ternary function since $g$ and $g'$ are linearly independent. Thus, we can apply Lemma~\ref{lem:arity3_hardness} to show hardness. \end{itemize} \textbf{Case $D_0=1$}: By pinning, we can realise $g=[\alpha,\beta]$ for some $\alpha,\beta\neq 0$. \new{Without loss of generality, assume this function arises by pinning the last $(n-1)$ inputs of $f$; otherwise permute the arguments.} Define $f_\mathbf{x} (z):=f(z,\mathbf{x})$ for any $\mathbf{x}\in\{0,1\}^{n-1}$ and let \begin{align*} A_2 &:= \{ \mathbf{x}\in\{0,1\}^{n-1} \mid f_\mathbf{x}(z) = c[\alpha,\beta] \text{ for some } c\in\AA\setminus\{0\} \}, \\ B_2 &:= \{ \mathbf{x}\in\{0,1\}^{n-1} \mid f_\mathbf{x}(z) \neq c[\alpha,\beta] \text{ for any } c\in\AA \}.
\end{align*} \new{Analogously to $A_1$ and $B_1$, these two sets are non-empty, do not intersect, and do not contain any $\mathbf{x}$ such that $f_\mathbf{x}(z)$ is identically zero.} Then let: \[ D_2 = \min\{ d(\mathbf{x},\mathbf{y}) \mid \mathbf{x}\in A_2, \mathbf{y}\in B_2 \}, \] \new{which is well-defined.} By pinning we can realise a function \[ h(\vc{x}{D_2+1}) = g(x_1)\prod_{k=1}^{D_2}\dl_{a_k}(x_{k+1}) + g'(x_1)\prod_{k=1}^{D_2}\dl_{\bar{a}_k}(x_{k+1}), \] where $g'=[\ld,\mu]$ with $\alpha\mu-\beta\ld\neq 0$. \begin{itemize} \item If $D_2\geq 3$, then $h$ has arity greater than 3. If $\ld\neq 0$, pin the first input of $h$ to 0, which yields a generalised equality function of arity at least 3, {and then proceed as in the cases $D_0\geq 4$ or $D_0\geq 3$}. If $\ld=0$ then $\mu\neq 0$, so we can pin to 1 instead for an analogous result. \item If $D_2=2$, $h$ is a non-decomposable ternary function so we are done by Lemma~\ref{lem:arity3_hardness}. This is another change compared to the proof in \cite{cai_dichotomy_2017}, where hardness was only shown for a real-valued function of the given form. \item If $D_2=1$, $h$ is a non-decomposable binary function $\smm{\alpha&\beta\\\ld&\mu}$ (possibly up to a bit flip of the second argument, which does not affect the proof). Unlike in \cite{cai_dichotomy_2017}, we do not attempt to use this binary function for interpolation. Instead we immediately proceed to defining $A_3$, $B_3$, and $D_3$ analogous to before. \new{Without loss of generality, assume that the two variables of $h$ correspond to the first two variables of $f$ (otherwise permute arguments). 
Let} \begin{align*} A_3 &:= \{ \mathbf{x}\in\{0,1\}^{n-2} \mid f_\mathbf{x}(z_1,z_2) = c\cdot h(z_1,z_2) \text{ for some } c\in\AA\setminus\{0\} \}, \\ B_3 &:= \{ \mathbf{x}\in\{0,1\}^{n-2} \mid f_\mathbf{x}(z_1,z_2) \neq c\cdot h(z_1,z_2) \text{ for any } c\in\AA \}, \end{align*} \new{then $A_3$ and $B_3$ are non-empty, they do not intersect, and they do not contain any $f_\mathbf{x}$ which is identically zero. Thus we may define} $D_3 = \min\{ d(\mathbf{x},\mathbf{y}) \mid \mathbf{x}\in A_3, \mathbf{y}\in B_3 \}$. Let $\mathbf{a}\in A_3$ be such that there exists $\mathbf{b}\in B_3$ with $d(\mathbf{a},\mathbf{b})=D_3$. By pinning in all places that $\mathbf{a}$ and $\mathbf{b}$ agree, we realise a function \begin{equation}\label{eq:function3} h(x_1,x_2)\prod_{k=1}^{D_3}\dl_{a_k}(x_{k+2}) + h'(x_1,x_2)\prod_{k=1}^{D_3}\dl_{\bar{a}_k}(x_{k+2}) \end{equation} where $h'(x_1,x_2):=f_\mathbf{b}(x_1,x_2)$ is not a scaling of $h$. Suppose $h'=\smm{\alpha'&\beta'\\\ld'&\mu'}$. Distinguish cases according to $D_3$. {It will be useful to consider these in ascending order since the case $D_3=2$ follows straightforwardly from $D_3=1$.} \begin{itemize} \item {If $D_3=1$, the function in \eqref{eq:function3} is ternary. Note that the case $a_1=1$ differs from the case $a_1=0$ only by a bit flip on the third input, which is a SLOCC transformation by $(I\otimes I\otimes X)$ and thus does not affect entanglement. So it suffices to consider $a_1=0$; the function then takes values \[ \pmm{\alpha&\alpha'&\beta&\beta'\\\ld&\ld'&\mu&\mu'}. \] Recall that $\alpha,\beta\neq 0$, the unprimed variables satisfy $\alpha\mu-\beta\ld\neq 0$, and $(\alpha',\beta',\ld',\mu')$ is not a scaling of $(\alpha,\beta,\ld,\mu)$. 
By Lemma~\ref{lem:li}, the ternary function is non-decomposable unless at least two of the following three expressions are false: \begin{align*} (\alpha\beta'\neq\beta\alpha') &\vee (\mu\ld'\neq\ld\mu') \\ (\ld\alpha'\neq\alpha\ld') &\vee (\mu\beta'\neq\beta\mu') \\ (\beta'\ld'\neq\alpha'\mu') &\vee (\beta\ld\neq\alpha\mu) \end{align*} Now, the third expression is always true since $\alpha\mu-\beta\ld\neq 0$ implies $\beta\ld\neq\alpha\mu$. So the function being decomposable would imply $\alpha\beta'=\beta\alpha'$, $\mu\ld'=\ld\mu'$, $\ld\alpha'=\alpha\ld'$ and $\mu\beta'=\beta\mu'$. But the first of these equations implies $\beta'=\beta\alpha'/\alpha$ since $\alpha\neq 0$. Similarly, the third equation implies $\ld'=\ld\alpha'/\alpha$. Also, since $\beta\neq 0$, the fourth equation implies $\mu'=\mu\beta'/\beta = \mu\alpha'/\alpha$. Thus, \[ h' = \pmm{\alpha'&\beta'\\\ld'&\mu'} = \frac{\alpha'}{\alpha}\pmm{\alpha&\beta\\\ld&\mu} = \frac{\alpha'}{\alpha}\cdot h, \] which is a contradiction since $h'$ is not a scaling of $h$. Thus, the ternary function must be non-decomposable, and hardness follows by Lemma~\ref{lem:arity3_hardness}. This is a change compared to the proof in \cite{cai_dichotomy_2017}, where multiple cases were distinguished and the hardness lemmas only applied to real-valued functions.} \item {If $D_3=2$, the function in \eqref{eq:function3} has arity 4. By connecting $[\alpha,\beta]$ to the last input, we get one of the functions \begin{align*} \alpha\cdot h(x_1,x_2)\dl_{a_1}(x_3) + \beta\cdot h'(x_1,x_2)\dl_{\bar{a}_1}(x_3) \\ \beta\cdot h(x_1,x_2)\dl_{a_1}(x_3) + \alpha\cdot h'(x_1,x_2)\dl_{\bar{a}_1}(x_3) \end{align*} depending on whether $a_2$ is 0 or 1. Note that these functions differ from the function considered in the previous subcase -- $h(x_1,x_2)\dl_{a_1}(x_3) + h'(x_1,x_2)\dl_{\bar{a}_1}(x_3)$ -- only by a SLOCC with $\smm{\alpha&0\\0&\beta}$ or $\smm{\beta&0\\0&\alpha}$ on the third input (depending on whether $a_1=a_2$). 
Hence they must be in the same entanglement class and hardness follows as in the previous subcase. This is another change compared to \cite{cai_dichotomy_2017}, where the hardness lemma only applied to real values and the construction did not employ the function $[\alpha,\beta]$.} \item If $D_3\geq 3$, we consider different cases according to the relationships between the values of $h$ and $h'$. {If $\alpha\alpha'=\beta\beta'=\ld\ld'=\mu\mu'=0$, then $\alpha'=\beta'=0$ since $\alpha,\beta$ are non-zero by assumption. Thus, at least one of $\ld'$ and $\mu'$ must be non-zero because $h'$ is not identically zero. Also, at least one of $\ld$ and $\mu$ must be non-zero because $h$ is non-decomposable. Therefore we have either $\ld=\mu'=0$ and $\ld',\mu\neq 0$, or $\ld'=\mu=0$ and $\ld,\mu'\neq 0$.} In the first case, $\ld=\mu'=0$ and $\ld',\mu\neq 0$, pin the first input to 1 to get: \[ \mu\dl_1(x_2)\prod_{k=1}^{D_3}\dl_{a_k}(x_{k+2})+\ld'\dl_0(x_2)\prod_{k=1}^{D_3}\dl_{\bar{a}_k}(x_{k+2}). \] This is a generalised equality function of arity at least 4, so we can proceed as before. In the second case, $\ld'=\mu=0$ and $\ld,\mu'\neq 0$, pinning the first input to 1 {works again, albeit resulting in a different generalised equality function}. Otherwise, there exists a pair of primed and unprimed coefficients of the same label that are both non-zero. If these are $\alpha$ and $\alpha'$, pin the first two inputs to 00 to get a generalised equality of arity at least 3. If the non-zero pair are $\beta$ and $\beta'$, pin to 01, and so on. Given the generalised equality function, we may proceed as in the cases $D_0\geq4$ or $D_0\geq 3$, depending on whether the arity of this function is even or odd. \end{itemize} \end{itemize} We have covered all cases, hence the proof is complete.
\end{proof} \section*{Acknowledgements} I would like to thank Pinyan Lu for pointing out a flaw in the original statement of the main theorem, Jin-Yi Cai for interesting discussions about extending the $\mathsf{Holant}^+$ result to the planar case, and Mariami Gachechiladze and Otfried G\"uhne for pointing out a significantly shorter and more elegant proof of Theorem~\ref{thm:three-qubit-entanglement}, and for letting me use it here. Thanks also go to Leslie Ann Goldberg, Ashley Montanaro, and William Whistler for helpful comments on earlier versions of this paper, as well as to the anonymous referees for their insightful feedback and suggestions. \bibliographystyle{plain}
\section{Experiments} Here, we present benchmark results and analysis on USB. Thereafter, we rethink state-of-the-art methods on COCO with the USB protocols. See Supplementary Material\xspace for the details of the experimental settings and results, including additional analysis and ablation studies. \subsection{Experimental Settings} \label{sec:experimental_settings} We compared and analyzed eight methods (detectors), using ResNet-50-B~\cite{ResNet_CVPR2016, BagOfTricks_Classification_CVPR2019} as the default backbone. Two of them are popular multi-stage detectors: (1) Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} with FPN~\cite{FPN_CVPR2017} and (2) Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018}. Three of them are popular single-stage detectors: (3) RetinaNet~\cite{RetinaNet_ICCV2017}, (4) ATSS~\cite{ATSS_CVPR2020}, and (5) GFL~\cite{GFL_NeurIPS2020}. We designed three additional detectors for USOD by collecting methods for multi-scale object detection. (6) \textit{ATSEPC}: ATSS~\cite{ATSS_CVPR2020} with SEPC (without iBN)~\cite{SEPC_CVPR2020}. (7) \textit{UniverseNet\xspace}: ATSEPC with Res2Net-50-v1b~\cite{Res2Net_TPAMI2020}, Deformable Convolutional Networks (DCN)~\cite{DCN_ICCV2017}, and multi-scale training. (8) \textit{UniverseNet-20.08\xspace}: A variant of UniverseNet\xspace designed around August 2020 with GFL~\cite{GFL_NeurIPS2020}, SyncBN~\cite{MegDet_CVPR2018}, iBN~\cite{SEPC_CVPR2020}, and light use of DCN~\cite{DCN_ICCV2017, SEPC_CVPR2020}. See Supplementary Material\xspace for the details of the methods and architectures used in UniverseNets\xspace. Our code is built on MMDetection~\cite{MMDetection} v2. We trained all models with Stochastic Gradient Descent (SGD). COCO models were fine-tuned from ImageNet~\cite{ImageNet_IJCV2015} pre-trained backbones. We trained the models for WOD and M109s\xspace from the corresponding COCO pre-trained models.
\vspace{-2mm} \begin{table}[ht] \setlength{\tabcolsep}{0.9mm} \renewcommand\arraystretch{0.85} \begin{minipage}{0.71\hsize} \scalebox{0.62}{\begin{tabular}{lccc} \toprule Hyperparameters & COCO & WOD & M109s\xspace \\ \midrule LR for multi-stage detectors & 0.02 & 0.02 & 0.16 \\ LR for single-stage detectors & 0.01 & 0.01 & 0.08 \\ Test scale & 1333$\times$800 & 1248$\times$832 & 1216$\times$864 \\ Range for multi-scale training$^*$ & 480--960 & 640--1280 & 480--960 \\ \bottomrule \end{tabular}} \end{minipage} \hfill \begin{minipage}{0.26\hsize} \scalebox{0.62}{\begin{tabular}{lc} \toprule Hyperparam. & Common \\ \midrule Epoch & 12 \\ Batch size & 16 \\ Momentum & 0.9 \\ Weight decay & $10^{-4}$ \\ \bottomrule \end{tabular}} \end{minipage} \caption{ Default hyperparameters. $*$: Shorter side pixels. } \label{table:hyperparameters} \end{table} \vspace{-3mm} The default hyperparameters are listed in Table~\ref{table:hyperparameters}. Most values follow standard settings~\cite{MMDetection, RetinaNet_ICCV2017, ATSS_CVPR2020, FPN_CVPR2017}. We used some dataset-dependent values. For M109s\xspace, we roughly tuned the learning rates (LR) based on a preliminary experiment with the RetinaNet~\cite{RetinaNet_ICCV2017} baseline model. Test scales were determined within the Standard USB protocol, considering the typical aspect ratio of the images in each dataset. We used multi-scale training for UniverseNets\xspace (see Supplementary Material\xspace for ablation results). The ranges for multi-scale training for COCO and M109s\xspace follow prior work~\cite{SEPC_CVPR2020}. We used larger scales for WOD because objects in WOD are especially small. We follow the learning rate schedules of MMDetection~\cite{MMDetection}. Unless otherwise specified, we used the 1$\times$ schedule (12 epochs). 
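For concreteness, the shared hyperparameters in Table~\ref{table:hyperparameters} can be written as an MMDetection-v2-style config fragment. This is a hedged sketch: the field names follow MMDetection v2 conventions, and the split of the batch size into 8 GPUs $\times$ 2 images is our assumption, not stated above.

```python
# Hedged sketch: the common hyperparameters of the table above as an
# MMDetection-v2-style config fragment (field names per MMDetection v2;
# the 8 GPUs x 2 images batch split is an assumption).
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4)
data = dict(samples_per_gpu=2, workers_per_gpu=2)  # 8 GPUs x 2 = batch size 16
runner = dict(type='EpochBasedRunner', max_epochs=12)  # the 1x schedule
# Multi-stage detectors on COCO/WOD double the base LR to 0.02; on M109s
# the LRs are 0.16 and 0.08 (see the table above).
```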
\subsection{Benchmark Results on USB} \label{sec:usb_results} \begin{table}[t] \setlength{\tabcolsep}{0.8mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.72}{\begin{tabularx}{1.388\linewidth}{lc*{5}{>{\centering\arraybackslash}X}ccc} \toprule Method & mCAP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace & COCO & WOD & M109s\xspace \\ \midrule Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} & 45.9 & 68.2 & 49.1 & 15.2 & 38.9 & 62.5 & 37.4 & 34.5 & 65.8 \\ Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018} & 48.1 & 68.5 & 51.5 & 15.6 & 41.3 & 65.9 & 40.3 & 36.4 & 67.6 \\ RetinaNet~\cite{RetinaNet_ICCV2017} & 44.8 & 66.0 & 47.4 & 12.9 & 37.3 & 62.6 & 36.5 & 32.5 & 65.3 \\ ATSS~\cite{ATSS_CVPR2020} & 47.1 & 68.0 & 50.2 & 15.5 & 39.5 & 64.7 & 39.4 & 35.4 & 66.5 \\ ATSEPC\xspace~\cite{ATSS_CVPR2020, SEPC_CVPR2020} & 48.1 & 68.5 & 51.2 & 15.5 & 40.5 & 66.8 & 42.1 & 35.0 & 67.1 \\ GFL~\cite{GFL_NeurIPS2020} & 47.7 & 68.3 & 50.6 & 15.8 & 39.9 & 65.8 & 40.2 & 35.7 & 67.3 \\ UniverseNet\xspace & 51.4 & 72.1 & 55.1 & 18.4 & 45.0 & 70.7 & 46.7 & 38.6 & 68.9 \\ UniverseNet-20.08\xspace & \textbf{52.1} & \textbf{72.9} & \textbf{55.5} & \textbf{19.2} & \textbf{45.8} & \textbf{70.8} & \textbf{47.5} & \textbf{39.0} & \textbf{69.9} \\ \bottomrule \end{tabularx}} \end{center} \vspace{-3mm} \caption{ Benchmark results on USB. } \label{table:usb} \end{table} \noindent \textbf{Main results.} We trained and evaluated the methods on USB. All the methods follow the Standard USB 1.0 protocol using the default hyperparameters in Sec.~\ref{sec:experimental_settings}. The results are shown in Table~\ref{table:usb}. UniverseNet-20.08\xspace achieves the highest results on all the datasets, resulting in 52.1\% mCAP. In most cases, methods that work on COCO also work on the other datasets. 
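As the per-dataset columns of Table~\ref{table:usb} suggest, mCAP appears to be the arithmetic mean of the three dataset-wise CAPs; a quick sanity check against the reported rows (a sketch, with rounding to one decimal assumed):

```python
# Sketch: mCAP as the arithmetic mean of the per-dataset CAPs, checked
# against rows of the benchmark table (values in percent).
def mcap(caps):
    return sum(caps) / len(caps)

universenet_2008 = [47.5, 39.0, 69.9]  # COCO, WOD, M109s
atss = [39.4, 35.4, 66.5]
```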
Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018} and ATSS~\cite{ATSS_CVPR2020} achieve over 2\% more mCAP than Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} and RetinaNet~\cite{RetinaNet_ICCV2017}, respectively. In some cases, however, methods that work on COCO show small or negative effects on WOD and M109s\xspace (see especially the decrease of WOD CAP from ATSS to ATSEPC). COCO-biased methods that greatly improve \textit{only} COCO CAP do not improve mCAP much. For example, adding SEPC~\cite{SEPC_CVPR2020} to ATSS~\cite{ATSS_CVPR2020} improves mCAP by 1.0\%, although it improves COCO CAP by 2.7\%. Thus, USB can impose a penalty on COCO-biased methods. \noindent \textbf{Scale-wise AP.} We show RSAP on USB in Figure~\ref{fig:usb_rsap}. It does not increase monotonically but rather decreases at relative scales greater than $1/4$. We cannot find this weakness from the coarse COCO-style scale-wise AP in Table~\ref{table:usb}. The difficulty of very large objects may be caused by truncation or unusual viewpoints~\cite{DiagDet_ECCV2012}. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/finer_scale_rsap_mean_thinline.pdf} \caption{ Relative Scale AP on USB. } \label{fig:usb_rsap} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/usb_corr_thinline.pdf}% \caption{ Correlation between mCAP and CAP on each dataset. 
} \label{fig:usb_correlation} \end{figure} \begin{table}[t] \setlength{\tabcolsep}{0.8mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.72}{\begin{tabularx}{1.388\linewidth}{lc*{5}{>{\centering\arraybackslash}X}ccc} \toprule Backbone & mCAP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace & COCO & WOD & M109s\xspace \\ \midrule ResNet-50-B~\cite{ResNet_CVPR2016, BagOfTricks_Classification_CVPR2019} & 47.1 & 68.0 & 50.2 & 15.5 & 39.5 & 64.7 & 39.4 & 35.4 & \textbf{66.5} \\ Swin-T~\cite{SwinTransformer_ICCV2021} & \textbf{49.0} & \textbf{70.6} & \textbf{52.0} & \textbf{17.2} & \textbf{41.8} & \textbf{67.2} & \textbf{43.7} & \textbf{37.2} & 66.2 \\ \bottomrule \end{tabularx}} \end{center} \vspace{-3mm} \caption{ ResNet \vs Swin Transformer on USB with ATSS~\cite{ATSS_CVPR2020}. } \label{table:usb_swin} \end{table} \begin{table*}[t] \setlength{\tabcolsep}{0.15em} \renewcommand\arraystretch{0.85} \begin{minipage}[c]{0.289\hsize} \begin{center} \scalebox{0.66}{\begin{tabularx}{1.5\textwidth}{l*{6}{>{\centering\arraybackslash}X}} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} & 37.4 & 58.1 & 40.4 & 21.2 & 41.0 & 48.1 \\ Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018} & 40.3 & 58.6 & 44.0 & 22.5 & 43.8 & 52.9 \\ RetinaNet~\cite{RetinaNet_ICCV2017} & 36.5 & 55.4 & 39.1 & 20.4 & 40.3 & 48.1 \\ ATSS~\cite{ATSS_CVPR2020} & 39.4 & 57.6 & 42.8 & 23.6 & 42.9 & 50.3 \\ ATSEPC\xspace~\cite{ATSS_CVPR2020, SEPC_CVPR2020} & 42.1 & 59.9 & 45.5 & 24.6 & 46.1 & 55.0 \\ GFL~\cite{GFL_NeurIPS2020} & 40.2 & 58.4 & 43.3 & 23.3 & 44.0 & 52.2 \\ UniverseNet\xspace & 46.7 & 65.0 & 50.7 & \textbf{29.2} & 50.6 & 61.4 \\ UniverseNet-20.08\xspace & \textbf{47.5} & \textbf{66.0} & \textbf{51.9} & 28.9 & \textbf{52.1} & \textbf{61.9} \\ \bottomrule \end{tabularx}} \end{center} \vspace{-3.5mm} \caption{ Results on 
COCO \texttt{minival}. } \label{table:coco_minival} \end{minipage} \hfill \begin{minipage}[c]{0.330\hsize} \begin{center} \scalebox{0.66}{\begin{tabularx}{1.5\textwidth}{l*{9}{>{\centering\arraybackslash}X}} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace & veh. & ped. & cyc. \\ \midrule Faster & 34.5 & 55.3 & 36.3 & 6.0 & 35.8 & 67.4 & 42.7 & 34.6 & 26.1 \\ Cascade & 36.4 & 56.3 & 38.6 & 6.5 & 38.1 & 70.6 & 44.5 & 36.3 & 28.5 \\ RetinaNet & 32.5 & 52.2 & 33.7 & 2.6 & 32.8 & 67.9 & 40.0 & 32.5 & 25.0 \\ ATSS & 35.4 & 56.2 & 37.0 & 6.1 & 36.6 & 69.8 & 43.6 & 35.6 & 27.0 \\ ATSEPC\xspace & 35.0 & 55.3 & 36.5 & 5.8 & 35.5 & 70.5 & 43.5 & 35.3 & 26.3 \\ GFL & 35.7 & 56.0 & 37.1 & 6.2 & 36.7 & 70.7 & 44.0 & 36.0 & 27.1 \\ Univ\xspace & 38.6 & 59.8 & \textbf{40.9} & 7.4 & 41.0 & \textbf{74.0} & 46.0 & 37.6 & \textbf{32.3} \\ Univ20.08\xspace & \textbf{39.0} & \textbf{60.2} & 40.4 & \textbf{8.3} & \textbf{41.7} & 73.3 & \textbf{47.1} & \textbf{38.7} & 31.0 \\ \bottomrule \end{tabularx}} \end{center} \vspace{-3.5mm} \caption{ Results on WOD \texttt{f0val}. 
} \label{table:waymo_f0_train_f0val_832} \end{minipage} \hfill \begin{minipage}[c]{0.370\hsize} \begin{center} \scalebox{0.66}{\begin{tabularx}{1.5\textwidth}{l*{10}{>{\centering\arraybackslash}X}} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace & body & face & frame & text \\ \midrule Faster & 65.8 & 91.1 & 70.6 & 18.4 & 39.9 & 72.1 & 58.3 & 47.5 & 90.1 & 67.1 \\ Cascade & 67.6 & 90.6 & 72.0 & 17.9 & 41.9 & 74.3 & 60.8 & \textbf{48.2} & 92.5 & 69.0 \\ RetinaNet & 65.3 & 90.5 & 69.5 & 15.7 & 38.9 & 71.9 & 58.3 & 46.3 & 88.8 & 67.7 \\ ATSS & 66.5 & 90.1 & 70.8 & 16.8 & 38.9 & 74.0 & 60.9 & 44.6 & 91.3 & 69.0 \\ ATSEPC\xspace & 67.1 & 90.2 & 71.5 & 16.2 & 39.8 & 74.9 & 62.3 & 44.6 & 92.1 & 69.4 \\ GFL & 67.3 & 90.6 & 71.5 & 17.9 & 38.9 & 74.4 & 61.7 & 45.7 & 92.2 & 69.4 \\ Univ\xspace & 68.9 & 91.4 & 73.7 & 18.7 & 43.4 & 76.6 & 65.8 & 46.6 & 93.0 & 70.3 \\ Univ20.08\xspace & \textbf{69.9} & \textbf{92.5} & \textbf{74.3} & \textbf{20.5} & \textbf{43.6} & \textbf{77.1} & \textbf{66.6} & 48.0 & \textbf{93.7} & \textbf{71.2} \\ \bottomrule \end{tabularx}} \end{center} \vspace{-3.5mm} \caption{ Results on Manga109-s\xspace \texttt{15test}. } \label{table:Manga109s_15test} \end{minipage} \end{table*} \noindent \textbf{Affinity between methods and datasets.} To compare the effectiveness of each method on each dataset, we show the correlation between mCAP and CAP on each dataset in Figure~\ref{fig:usb_correlation}. SEPC~\cite{SEPC_CVPR2020} improves COCO CAP and deteriorates WOD CAP. Multi-stage detectors~\cite{Faster_R-CNN_NIPS2015, Cascade_R-CNN_CVPR2018} show relatively high CAP on WOD and relatively low CAP on COCO. Adding GFL~\cite{GFL_NeurIPS2020} is especially effective on M109s\xspace (see improvements from ATSS to GFL and from UniverseNet\xspace to UniverseNet-20.08\xspace). \noindent \textbf{Details on each dataset.} Table~\ref{table:coco_minival} shows the COCO results. 
Since the effectiveness of existing methods has already been verified on COCO, their improvements there are steady. Table~\ref{table:waymo_f0_train_f0val_832} shows the WOD results. AP$_\textit{S}$\xspace on WOD is less than 10\%, which is much lower than AP$_\textit{S}$\xspace on COCO. This highlights the limitations of COCO and current detectors. Adding SEPC~\cite{SEPC_CVPR2020} to ATSS~\cite{ATSS_CVPR2020} decreases all metrics except for AP$_\textit{L}$\xspace. We found that this reduction does not occur at large test scales in higher USB evaluation protocols (see Supplementary Material\xspace). UniverseNet-20.08\xspace shows worse results than UniverseNet\xspace in some metrics, probably due to the light use of DCN for fast inference. Table~\ref{table:Manga109s_15test} shows the M109s\xspace results. Interestingly, improvements by ATSS~\cite{ATSS_CVPR2020} are smaller than those on COCO and WOD due to the drop in face AP. We conjecture that this phenomenon comes from the domain differences discussed in Sec.~\ref{sec:usb_datasets} and prior work~\cite{Manga109_detection_Ogawa_2018}, although it should be explored in future work. \noindent \textbf{ResNet \vs Swin Transformer.} Transformer-based backbones have shown promising results recently. We evaluated a representative one: Swin Transformer~\cite{SwinTransformer_ICCV2021}. Table~\ref{table:usb_swin} shows the results with ATSS~\cite{ATSS_CVPR2020} detectors (see Supplementary Material\xspace for hyperparameters). Swin-T~\cite{SwinTransformer_ICCV2021} shows lower AP than ResNet-50-B~\cite{ResNet_CVPR2016, BagOfTricks_Classification_CVPR2019} on M109s\xspace. \noindent \textbf{Qualitative results.} We show some qualitative results of the best detector (UniverseNet-20.08\xspace) in Figure~\ref{fig:teaser}. Although most detections are accurate, it still suffers from classification errors, localization errors, and missed detections of tiny vehicles and small manga faces.
\input{universenet_table_coco_sota} \subsection{Rethinking COCO with USB Protocols} \label{sec:rethink_coco_sota} We classify state-of-the-art methods on COCO \texttt{test-dev} (as of November 14, 2020) according to the proposed protocols, without considering compatibility. The results are shown in Table~\ref{table:coco_sota}. Although state-of-the-art detectors on the COCO benchmark were trained with various settings, the introduced divisions enable us to compare methods in each division. UniverseNet-20.08d\xspace achieves the highest AP (51.3\%) in the Standard USB 1.0 protocol. Despite 12.5$\times$ fewer epochs, the speed-accuracy trade-offs of UniverseNets\xspace are comparable to those of EfficientDet~\cite{EfficientDet_CVPR2020} (see also Figure~\ref{fig:coco_speed_accuracy}). With 13-scale TTA, UniverseNet-20.08d\xspace achieves the highest AP (54.1\%) in the Huge USB 1.0 protocol. Results under protocols higher than USB 1.0 are scattered. If we ignore the difference in hyperparameter optimization, YOLOv4~\cite{YOLOv4_2020} shows a better speed-accuracy trade-off than EfficientDet~\cite{EfficientDet_CVPR2020} in Standard USB 3.x. Comparisons across different divisions are difficult. In particular, long training is problematic because it can secretly increase AP without decreasing FPS, unlike large test scales. Nevertheless, the EfficientDet~\cite{EfficientDet_CVPR2020}, YOLOv4~\cite{YOLOv4_2020}, and SpineNet~\cite{SpineNet_CVPR2020} papers compare methods in their tables without specifying the difference in training epochs. The compatibility of the USB training protocols (Sec.~\ref{sec:usb_training}) resolves this disorder. We hope that many papers report results with the protocols for inclusive, healthy, and sustainable development of detectors. To simulate the compatibility from Standard USB 3.0 to 1.0, we refer to the training log of the EfficientDet author.
The AP of EfficientDet-D4~\cite{EfficientDet_CVPR2020} on COCO \texttt{minival} is 43.8\% at epoch 23~\cite{EfficientDetGitHubComment}. Although it could be improved by changing the learning rate schedule, EfficientDet's inference efficiency is not matched by its training efficiency. \section*{Supplementary Material\xspace} \input{universenet_supplementary} \end{document} \section{Introduction} \label{sec:introduction} Humans can detect various objects. See Figure~\ref{fig:teaser}. One can detect nearby equipment in everyday scenes, distant vehicles in traffic scenes, and texts and persons in manga (Japanese comics). If computers can automatically detect various objects, they will yield significant benefits to humans. For example, they will help impaired people and the elderly, save lives by autonomous driving, and provide safe entertainment during pandemics by automatic translation. Researchers have pushed the limits of object detection systems by establishing datasets and benchmarks~\cite{object_detection_survey_Liu_IJCV2020}. One of the most important milestones is \textsc{Pascal}\xspace VOC~\cite{PASCALVOC_IJCV2015}. It has enabled considerable research on object detection, leading to the success of deep learning-based methods and successor datasets such as ImageNet~\cite{ImageNet_IJCV2015} and COCO~\cite{COCO_ECCV2014}. Currently, COCO serves as \textit{the} standard dataset and benchmark for object detection because it has several advantages over \textsc{Pascal}\xspace VOC~\cite{PASCALVOC_IJCV2015}. COCO contains more images, categories, and objects (especially small objects) in their natural context~\cite{COCO_ECCV2014}. Using COCO, researchers can develop and evaluate methods for multi-scale object detection. However, the current object detection benchmarks, especially COCO, have the following two problems.
\textbf{Problem 1: Variations in object scales and image domains remain limited.} To realize human-level perception, computers must handle various object scales and image domains as humans can. Among various domains~\cite{UniversalObjectDetection_CVPR2019}, the traffic and artificial domains have extensive scale variations (see Sec.~\ref{sec:usb}). COCO is far from covering them. Nevertheless, the current computer vision community is overconfident in COCO results. For example, most studies on state-of-the-art methods in 2020 only report COCO results~\cite{ATSS_CVPR2020, SEPC_CVPR2020, PAA_ECCV2020, GFL_NeurIPS2020, RepPointsv2_NeurIPS2020, RelationNet2_NeurIPS2020} or those for bounding box object detection~\cite{EfficientDet_CVPR2020, SpineNet_CVPR2020, YOLOv4_2020, DetectoRS_2020}. Readers cannot assess whether these methods are specialized for COCO or generalizable to other datasets and domains. \textbf{Problem 2: Protocols for training and evaluation are not well established.} There are standard experimental settings for the COCO benchmark~\cite{Detectron2018, MMDetection, FPN_CVPR2017, RetinaNet_ICCV2017, FCOS_ICCV2019, ATSS_CVPR2020, GFL_NeurIPS2020}. Many studies train detectors within 24 epochs using a learning rate of 0.01 or 0.02 and evaluate them on images within 1333$\times$800. These settings are not obligations but non-binding agreements for fair comparison. Some studies do not follow the settings for accurate and fast detectors\footnote{YOLOv4 was trained for 273 epochs~\cite{YOLOv4_2020}, DETR for 500 epochs~\cite{DETR_ECCV2020}, EfficientDet-D6 for 300 epochs~\cite{EfficientDet_CVPR2020}, and EfficientDet-D7x for 600 epochs~\cite{EfficientDet_arXiv}. SpineNet uses a learning rate of 0.28~\cite{SpineNet_CVPR2020}, and YOLOv4 uses a searched learning rate of 0.00261~\cite{YOLOv4_2020}. EfficientDet finely changes the image resolution from 512$\times$512 to 1536$\times$1536~\cite{EfficientDet_CVPR2020}.}. 
Their abnormal and scattered settings hinder the assessment of the most suitable method (see Figure~\ref{fig:coco_speed_accuracy}). Furthermore, by ``buying stronger results''~\cite{GreenAI_CACM2020}, they build a barrier for those without considerable funds to develop and train detectors. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{images/coco_ap_time_epoch_thinline.pdf} \caption{ Speed-accuracy trade-offs in the current standard COCO benchmark. Most studies train models with standard settings (\eg, within 24 epochs), while some studies train with abnormal settings (\eg, 300 epochs). For fair comparison, we propose \textit{USB protocols} that urge the latter studies to report results with standard settings. } \label{fig:coco_speed_accuracy} \vspace{-2mm} \end{figure} This study makes the following two contributions to resolve the problems. \textbf{Contribution 1:} We introduce the \textit{Universal-Scale object detection Benchmark (USB)} that consists of three datasets. In addition to COCO, we selected the Waymo Open Dataset~\cite{WaymoOpenDataset_CVPR2020} and Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} to cover various object scales and image domains. They are the largest public datasets in their domains and enable reliable comparisons. To the best of our knowledge, USB is the first benchmark beyond COCO that evaluates finer scale-wise metrics across multiple domains. We conducted experiments using eight methods and found weaknesses of existing COCO-biased methods. \textbf{Contribution 2:} We established the \textit{USB protocols} for fair training and evaluation, inspired by weight classes in sports and the backward compatibility of the Universal Serial Bus. Specifically, USB protocols enable fair and easy comparisons by defining multiple divisions for training epochs and evaluation image resolutions. 
Furthermore, we introduce compatibility across training protocols by requesting participants to report results with not only higher protocols (longer training) but also lower protocols (shorter training). To the best of our knowledge, our training protocols are the first ones that allow for both fair comparisons with shorter training and strong results with longer training. Our protocols promote inclusive, healthy, and sustainable object detection research. \section{Related Work} \label{sec:related_work} \subsection{Object Detection Methods} Deep learning-based detectors dominate the recent progress in object detection~\cite{object_detection_survey_Liu_IJCV2020}. They can be divided~\cite{object_detection_survey_Liu_IJCV2020, MMDetection, Faster_R-CNN_NIPS2015} into single-stage detectors without region proposal~\cite{YOLO_CVPR2016, SSD_ECCV2016, RetinaNet_ICCV2017} and multi-stage (including two-stage) detectors with region proposal~\cite{Faster_R-CNN_NIPS2015, FPN_CVPR2017, Cascade_R-CNN_CVPR2018}. Detecting multi-scale objects is a fundamental challenge in object detection~\cite{object_detection_survey_Liu_IJCV2020, UniversalObjectDetection_ZhaoweiCai_2019}. Various components have been improved, including backbones and modules~\cite{Inception_CVPR2015, ResNet_CVPR2016, BagOfTricks_Classification_CVPR2019, Res2Net_TPAMI2020, DCN_ICCV2017}, necks~\cite{FPN_CVPR2017, SEPC_CVPR2020, EfficientDet_CVPR2020}, heads and training sample selection~\cite{Faster_R-CNN_NIPS2015, SSD_ECCV2016, ATSS_CVPR2020}, and multi-scale training and testing~\cite{Rowley_PAMI1998, SNIP_Singh_CVPR2018, ATSS_CVPR2020} (see Supplementary Material\xspace for the details of related work). Unlike most prior studies, we analyzed these methods across various object scales and image domains through the proposed benchmark. \subsection{Object Detection Benchmarks} There are numerous object detection benchmarks.
For specific (category) object detection, recent benchmarks such as WIDER FACE~\cite{WIDERFACE_CVPR2016} and TinyPerson~\cite{TinyPerson_WACV2020} contain tiny objects. Although they are useful for evaluating a specific category, many applications must detect multiple categories. For autonomous driving, KITTI~\cite{KITTI_CVPR2012} and Waymo Open Dataset~\cite{WaymoOpenDataset_CVPR2020} mainly evaluate three categories (car, pedestrian, and cyclist) in their leaderboards. For generic object detection, \textsc{Pascal}\xspace VOC~\cite{PASCALVOC_IJCV2015} and COCO~\cite{COCO_ECCV2014} include 20 and 80 categories, respectively. The number of categories has been further expanded by recent benchmarks, such as Open Images~\cite{OpenImagesDataset_IJCV2020}, Objects365~\cite{Objects365_ICCV2019}, and LVIS~\cite{LVIS_CVPR2019}. All the above datasets comprise photographs, whereas Clipart1k, Watercolor2k, Comic2k~\cite{CrossDomainDetection_Inoue_CVPR2018}, and Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} comprise artificial images. Although Waymo Open Dataset~\cite{WaymoOpenDataset_CVPR2020} and Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} have extensive scale variations (see Sec.~\ref{sec:usb_datasets}), scale-wise metrics have not been evaluated~\cite{WaymoOpenDataset_CVPR2020, Manga109_detection_Ogawa_2018}. Detectors evaluated on a specific dataset may perform worse on other datasets or domains. To address this issue, some benchmarks consist of multiple datasets. In the Robust Vision Challenge 2020~\cite{RVC2020}, detectors were evaluated on three datasets in the natural and traffic image domains. For \textit{universal-domain} object detection, the Universal Object Detection Benchmark (UODB)~\cite{UniversalObjectDetection_CVPR2019} comprises 11 datasets in the natural, traffic, aerial, medical, and artificial image domains.
Although it is suitable for evaluating detectors in various domains, variations in object scales are limited. Unlike UODB, our USB focuses on \textit{universal-scale} object detection. The datasets in USB contain more instances, including tiny objects, than the datasets used in UODB. As discussed in Sec.~\ref{sec:introduction}, the current benchmarks allow extremely unfair settings (\eg, 25$\times$ training epochs). We resolved this problem by establishing USB protocols for fair training and evaluation. \section{Benchmark Protocols of USB} \label{sec:usb} Here, we present the principle, datasets, protocols, and metrics of USB. See Supplementary Material\xspace for additional information. \subsection{Principle} \label{sec:usb_principle} We focus on the \textit{Universal-Scale Object Detection (USOD)} task, which aims to detect various objects in terms of object scales and image domains. Unlike separate discussions for multi-scale object detection (Sec.~\ref{sec:related_work}) and universal (-domain) object detection~\cite{UniversalObjectDetection_CVPR2019}, USOD does not ignore the relation between scales and domains (Sec.~\ref{sec:usb_datasets}). For various applications and users, benchmark protocols should cover training schedules from short to long and test scales from small to large. On the other hand, they should not be so scattered that benchmarks become meaningless. To satisfy these conflicting requirements, we define multiple divisions for training epochs and evaluation image resolutions. Furthermore, we urge participants who have access to extensive computational resources to report results with standard training settings. This request enables fair comparison and allows many people to develop and compare object detectors. \subsection{Definitions of Object Scales} \label{sec:object_scale_definitions} Following the TinyPerson benchmark~\cite{TinyPerson_WACV2020}, we consider two types of object scales.
The absolute scale is calculated as $\sqrt{wh}$, where $w$ and $h$ denote the object's width and height, respectively. The relative scale is calculated as $\sqrt{\frac{wh}{WH}}$, where $W$ and $H$ denote the image's width and height, respectively. \subsection{Datasets} \label{sec:usb_datasets} \begin{figure}[t] \centering \begin{minipage}[c]{0.46\linewidth} \includegraphics[width=\linewidth]{images/instance_scale_distribution_kde_cumulative_thinline.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.53\linewidth} \caption{ Object scale distributions based on relative scale~\cite{TinyPerson_WACV2020, SNIP_Singh_CVPR2018, Waymo2d_1st_2020}. USB covers extensive scale variations quantitatively. } \label{fig:instance_scale_distribution} \end{minipage} \end{figure} \begin{table}[t] \setlength{\tabcolsep}{1.4mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.75}{\begin{tabular}{llll} \toprule Dataset & Domain & Color & Main sources of scale variation \\ \midrule COCO~\cite{COCO_ECCV2014} & Natural & RGB & Categories, distance \\ WOD~\cite{WaymoOpenDataset_CVPR2020} & Traffic & RGB & Distance \\ Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} & Artificial & Grayscale$^*$ & Viewpoints, page layouts \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Characteristics of datasets. USB covers many qualitative variations related to scales and domains. $*$: Few RGB images. 
} \label{table:usb_characteristics} \end{table} \begin{table}[t] \setlength{\tabcolsep}{0.5mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.75}{\begin{tabular}{llcccc} \toprule Benchmark & Dataset & Boxes & Images & B/I & Scale variation$^\dagger$ \\ \midrule \multirow{3}{*}{USB (Ours)} & COCO~\cite{COCO_ECCV2014} & \textbf{897\,k\xspace} \small{(\textbf{3.1}$\times$)} & \textbf{123\,k\xspace} & \textbf{7.3} & 88.8 \small{(1.0$\times$)} \\ & WOD~\cite{WaymoOpenDataset_CVPR2020} v1.2 \texttt{f0} & \textbf{1.0\,M\xspace} \small{(\textbf{29}$\times$)} & \textbf{100\,k\xspace} & \textbf{10.0} & \textbf{96.7} \small{(\textbf{5.8}$\times$)} \\ & Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} & \textbf{401\,k\xspace} \small{(\textbf{63}$\times$)} & \textbf{8.2\,k\xspace} & \textbf{49.2} & \textbf{28.6} \small{(\textbf{1.5}$\times$)} \\ \midrule \multirow{3}{*}{UODB~\cite{UniversalObjectDetection_CVPR2019}} & COCO~\cite{COCO_ECCV2014} \texttt{val2014} & 292\,k\xspace & 41\,k\xspace & 7.2 & \textbf{89.6} \\ & KITTI~\cite{KITTI_CVPR2012} & 35\,k\xspace & 7.5\,k\xspace & 4.7 & 16.6 \\ & Comic2k~\cite{CrossDomainDetection_Inoue_CVPR2018} & 6.4\,k\xspace & 2.0\,k\xspace & 3.2 & 19.1 \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Statistics of datasets in USB and counterpart datasets in UODB~\cite{UniversalObjectDetection_CVPR2019}. Values are based on publicly available annotations. B/I: Average number of boxes per image. $\dagger$: Calculated by the ratio of the 99 percentile to 1 percentile of relative scale. } \label{table:dataset_stats} \end{table} To establish USB, we selected the COCO~\cite{COCO_ECCV2014}, Waymo Open Dataset (WOD)~\cite{WaymoOpenDataset_CVPR2020}, and Manga109-s\xspace (M109s\xspace)~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020}. WOD and M109s\xspace are the largest public datasets with many small objects in the traffic and artificial domains, respectively. 
Object scales in these domains vary significantly with distance and viewpoints, unlike those in the medical and aerial domains\footnote{Aerial datasets contain abundant small objects but scarce large ones (see Table 4 in~\cite{DOTA2_TPAMI2021}). WOD has larger scale variation by distance variation, where 1\% of objects are larger than $1/4$ of the image area.}. USB covers extensive scale variations quantitatively (Figure~\ref{fig:instance_scale_distribution}) and qualitatively (Table~\ref{table:usb_characteristics}). As shown in Table~\ref{table:dataset_stats}, these datasets contain more instances and larger scale variations than their counterpart datasets in UODB~\cite{UniversalObjectDetection_CVPR2019} (COCO~\cite{COCO_ECCV2014} \texttt{val2014}, KITTI~\cite{KITTI_CVPR2012}, and Comic2k~\cite{CrossDomainDetection_Inoue_CVPR2018}). USOD needs to evaluate detectors on datasets with many instances because more instances enable more reliable comparisons of scale-wise metrics. For the first dataset, we adopted the COCO dataset~\cite{COCO_ECCV2014}. COCO contains natural images of everyday scenes collected from the Internet. Annotations for 80 categories are used in the benchmark. As shown in Figure~\ref{fig:teaser} (left), object scales mainly depend on the categories and distance. Although COCO contains objects smaller than those of \textsc{Pascal}\xspace VOC~\cite{PASCALVOC_IJCV2015}, objects in everyday scenes (especially indoor scenes) are relatively large. Since COCO is the current standard dataset for multi-scale object detection, we adopted the same training split \texttt{train2017} (also known as \texttt{trainval35k}) as the COCO benchmark to eliminate the need for retraining across benchmarks. We adopted the \texttt{val2017} split (also known as \texttt{minival}) as the test set. 
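As a concrete illustration, the two object scales defined in Sec.~\ref{sec:object_scale_definitions} and the scale-variation measure used in Table~\ref{table:dataset_stats} (the ratio of the 99th to the 1st percentile of relative scale) can be sketched in a few lines of Python. This is a minimal sketch with illustrative function names; the percentile uses simple linear interpolation, which may differ slightly from the exact implementation behind the table.

```python
import math

def absolute_scale(w, h):
    """Absolute scale of a box: sqrt(w * h), in pixels."""
    return math.sqrt(w * h)

def relative_scale(w, h, W, H):
    """Relative scale of a box: sqrt(w * h / (W * H)), dimensionless."""
    return math.sqrt((w * h) / (W * H))

def percentile(values, q):
    """Percentile (q in [0, 100]) of a non-empty list, linear interpolation."""
    xs = sorted(values)
    pos = (len(xs) - 1) * q / 100.0
    lo, hi = int(math.floor(pos)), int(math.ceil(pos))
    frac = pos - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

def scale_variation(relative_scales):
    """Ratio of the 99th to the 1st percentile of relative scale."""
    return percentile(relative_scales, 99) / percentile(relative_scales, 1)
```

For example, a 32$\times$32 box in a 1333$\times$800 image has a relative scale of about 0.031, while the same box fills half of a 64$\times$64 image (relative scale 0.5).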
For the second dataset, we adopted the WOD, which is a large-scale, diverse dataset for autonomous driving~\cite{WaymoOpenDataset_CVPR2020} with many annotations for tiny objects (Figure~\ref{fig:instance_scale_distribution}). The images were recorded using five high-resolution cameras mounted on vehicles. As shown in Figure~\ref{fig:teaser} (middle), object scales vary mainly with distance. The full data splits of WOD are too large for benchmarking methods. Thus, we extracted 10\% size subsets from the predefined training split (798 sequences) and validation split (202 sequences)~\cite{WaymoOpenDataset_CVPR2020}. Specifically, we extracted splits based on the ones place of the frame index (frames 0, 10, ..., 190) in each sequence. We call the subsets \texttt{f0train} and \texttt{f0val} splits. Each sequence in the splits contains $\sim$20 frames (20\,s, 1\,Hz\xspace), and each frame contains five images for five cameras. We used three categories (vehicle, pedestrian, and cyclist) following the official \textit{ALL\_NS} setting~\cite{WaymoOpenDataset_2D_detection_leaderboard} used in WOD competitions. For the third dataset, we adopted the M109s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020}. M109s\xspace contains artificial images of manga (Japanese comics) and annotations for four categories (body, face, frame, and text). Many characteristics differ from those of natural images. Most images are grayscale. The objects are highly overlapped~\cite{Manga109_detection_Ogawa_2018}. As shown in Figure~\ref{fig:teaser} (right), object scales vary unrestrictedly with viewpoints and page layouts. Small objects differ greatly from downsampled versions of large objects because small objects are drawn with simple lines and points. For example, small faces look like a sign ($\because$). This characteristic may ruin techniques developed mainly for natural images. Another challenge is ambiguity in annotations. 
Sometimes, a small-scale object is annotated, and sometimes, a similar-scale object on another page is not annotated. Since annotating small objects is difficult and labor-intensive, this is an important and practical challenge. We carefully selected 68, 4, and 15 volumes for training, validation, and testing splits, and we call them the \texttt{68train}, \texttt{4val}, and \texttt{15test} splits, respectively. We selected the test splits from images with publicly available annotations to reduce labor for submissions. Participants should not fine-tune hyperparameters based on the test splits to prevent overfitting. \subsection{Motivation of Training Protocols} \label{sec:usb_training_motivation} \begin{table}[t] \setlength{\tabcolsep}{0.95mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.7}{\begin{tabular}{lccccc} \toprule \multirow{2}{*}{Protocol} & \multirow{2}{*}{Fair} & Suitable for & Strong & Selectable & Comparable \\ & & each model & results & divisions & across divisions \\ \midrule A) Standard (short) training & \cm & & & & \\ B) Lawless (no regulations) & & \cm & \cm & & \\ C) Ours w/o compatibility & \cm & \cm & \cm & \cm & \\ D) Ours & \cm & \cm & \cm & \cm & \cm \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Comparison of training protocols. } \label{table:training_protocols_comparison} \end{table} We describe the motivation of our training protocols with Table~\ref{table:training_protocols_comparison}, which compares existing protocols (A and B) and novel protocols (C and D). Protocol A is the current standard training protocol within 24 epochs, popularized by successive detectors, Detectron~\cite{Detectron2018}, and MMDetection~\cite{MMDetection}. This protocol is fair but not suitable for slowly convergent models (\eg, DETR~\cite{DETR_ECCV2020}). Protocol B is lawless, without any regulations.
Participants can train their models with arbitrary settings suitable for them, even if they are unfair settings (\eg, standard training for existing methods and longer training for proposed ones). Since object detectors can achieve high accuracy with long training schedules and strong data augmentation~\cite{SpineNet_CVPR2020, EfficientDet_arXiv, SimpleCopyPaste_CVPR2021}, participants can buy stronger results~\cite{GreenAI_CACM2020}. Both existing protocols A and B have advantages and disadvantages. Neither is appropriate. Thus, we considered novel protocols to bridge them. We first defined multiple divisions for training epochs, inspired by weight classes in sports. This Protocol C enables fair comparison in each division. Participants can select divisions according to their purposes and resources. However, we cannot compare models across divisions. To resolve this, we then came up with an idea: introducing backward compatibility like the Universal Serial Bus, resulting in Protocol D. 
As described above, \emph{our protocols introduce a completely different paradigm from existing limited or unfair protocols.} To the best of our knowledge, \emph{our protocols are the first ones that satisfy the five requirements in Table~\ref{table:training_protocols_comparison}.} \subsection{Training Protocols} \label{sec:usb_training} \begin{table}[t] \setlength{\tabcolsep}{1.05mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.8}{\begin{tabular}{lrcll} \toprule Protocol & Max epoch & AHPO & Compatibility & Example \\ \midrule USB 1.0 & 24 & \xm & --- & 2$\times$ schedule~\cite{Detectron2018, RethinkingImageNet_ICCV2019} \\ USB 2.0 & 73 & \xm & USB 1.0 & 6$\times$ schedule~\cite{RethinkingImageNet_ICCV2019} \\ USB 3.0 & 300 & \xm & USB 1.0, 2.0 & EfficientDet-D6~\cite{EfficientDet_CVPR2020} \\ USB 3.1 & 300 & \cm & USB 1.0, 2.0, 3.0 & YOLOv4~\cite{YOLOv4_2020} \\ Freestyle & $\infty$ & \cm & --- & EfficientDet-D7x~\cite{EfficientDet_arXiv} \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ USB training protocols. AHPO: Aggressive hyperparameter optimization. } \label{table:USB_training} \end{table} For fair training, we propose the \textit{USB training protocols} shown in Table~\ref{table:USB_training}. By analogy with the backward compatibility of the Universal Serial Bus\footnote{ Higher protocols can adapt the data transfer rate to lower protocols. }, USB training protocols emphasize compatibility between protocols. Importantly, \textit{participants should report results with not only higher protocols but also lower protocols.} For example, when a participant trains a model for 150 epochs with standard hyperparameters, it corresponds to USB 3.0. The participant should also report the results of models trained for 24 and 73 epochs in a paper. This reveals the effectiveness of the method by ablating the effect of long training. The readers of the paper can judge whether the method is useful for standard training epochs. 
Since many researchers and practitioners do not have access to extensive computational resources, such information is important for selecting methods. The maximum number of epochs for USB 1.0 is 24, which is the most popular setting in COCO (Table~\ref{table:coco_sota}). We adopted 73 epochs for USB 2.0, where models trained from scratch can catch up with those trained from ImageNet pre-trained models~\cite{RethinkingImageNet_ICCV2019}. This serves as a guideline for comparison between models with and without pre-training, although perfectly fair comparisons are impossible considering large differences caused by pre-training~\cite{Shinya_ICCVW2019}. We adopted 300 epochs for USB 3.x such that YOLOv4~\cite{YOLOv4_2020} and most EfficientDet models~\cite{EfficientDet_arXiv} correspond to this protocol. Models trained for more than 300 epochs are regarded as Freestyle. They are not suitable for benchmarking methods, although they may push the empirical limits of detectors~\cite{EfficientDet_arXiv, DETR_ECCV2020}. The correspondences between Tables~\ref{table:training_protocols_comparison} and \ref{table:USB_training} are as follows: Protocol A corresponds to only USB 1.0; Protocol B corresponds to only Freestyle; Protocol C corresponds to all protocols (divisions) in Table~\ref{table:USB_training} without compatibility; and Protocol D corresponds to all protocols (divisions) in Table~\ref{table:USB_training} with compatibility. For models trained with annotations other than 2D bounding boxes (\eg, instance/stuff/panoptic segmentation, keypoint, caption, and point cloud), $0.5$ is added to their protocol number. Participants should also report results without such annotations if possible for their algorithms. For ease of comparison, we limit the pre-training datasets to the three datasets and ImageNet-1k (ILSVRC 1,000-class classification)~\cite{ImageNet_IJCV2015}. Other datasets are welcome only when the results with and without additional datasets are reported.
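The epoch divisions in Table~\ref{table:USB_training}, together with the $+0.5$ rule for extra annotation types and the $+0.1$ rule for AHPO (Sec.~\ref{sec:usb_training}), can be condensed into a small classifier. The sketch below is illustrative only; the function name and return strings are ours, not part of the protocol definition.

```python
def usb_training_protocol(epochs, extra_annotations=False, ahpo=False):
    """Map a training configuration to a USB training protocol number.

    epochs: total training epochs.
    extra_annotations: trained with annotations beyond 2D boxes (+0.5).
    ahpo: aggressive hyperparameter optimization was used (+0.1).
    Returns a string such as "USB 1.0", "USB 3.1", or "Freestyle".
    """
    if epochs <= 24:
        base = 1.0
    elif epochs <= 73:
        base = 2.0
    elif epochs <= 300:
        base = 3.0
    else:
        # More than 300 epochs is regarded as Freestyle.
        return "Freestyle"
    if extra_annotations:
        base += 0.5
    if ahpo:
        base += 0.1
    return f"USB {base:.1f}"
```

For instance, 300 epochs with AHPO maps to USB 3.1, matching the classification of YOLOv4 in Table~\ref{table:USB_training}, while 500 epochs (as for DETR) maps to Freestyle.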
Participants should describe how they use the datasets. A possible way is to fine-tune models on WOD and M109s\xspace from COCO pre-trained models. Another way is to train a single model jointly~\cite{UniversalObjectDetection_CVPR2019} on the three datasets. In addition to long training schedules, hyperparameter optimization is resource-intensive. If the authors of a paper tune hyperparameters specifically for their architecture, others without sufficient computational resources cannot compare methods fairly. For hyperparameters that need to be tuned exponentially, such as learning rates and $1 - m$, where $m$ denotes momentum, the minimum ratio between adjacent hyperparameter choices should be at least $2$ (\eg, choices $\{0.1, 0.2, 0.4, 0.8, ...\}$, $\{0.1, 0.2, 0.5, 1.0, ...\}$, and $\{0.1, 0.3, 1.0, ...\}$). For hyperparameters that need to be tuned linearly, the number of choices should be at most $11$ (\eg, choices $\{0.0, 0.1, 0.2, ..., 1.0\}$). When participants perform aggressive hyperparameter optimization (AHPO) by manual fine-tuning or automatic algorithms, $0.1$ is added to their protocol number. They should report results both with and without AHPO. To further improve fairness without sacrificing the practicality and simplicity of the protocols, we consider the use of data augmentation techniques that more than double the time per epoch to be a kind of AHPO. \subsection{Evaluation Protocols} \label{sec:usb_evaluation} \begin{table}[t] \setlength{\tabcolsep}{0.95mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.8}{\begin{tabular}{lrrl} \toprule Protocol & Max pixels
& Typical scale & Reference \\ \midrule Standard USB & 1,066,667 & 1333$\times$\;\:800 & Popular in COCO~\cite{COCO_ECCV2014, MMDetection, Detectron2018} \\ Mini USB & 262,144 & 512$\times$\;\:512 & Popular in VOC~\cite{PASCALVOC_IJCV2015, SSD_ECCV2016} \\ Micro USB & 50,176 & 224$\times$\;\:224 & Popular in ImageNet~\cite{ImageNet_IJCV2015, ResNet_CVPR2016} \\ Large USB & 2,457,600 & 1920$\times$1280 & WOD front cameras~\cite{WaymoOpenDataset_CVPR2020} \\ Huge USB & 7,526,400 & 3360$\times$2240 & WOD top methods (\hspace{1sp}\cite{Waymo2d_1st_2020}, ours) \\ Freestyle & $\infty$ & ---\:\;\;\;\;\;\; & --- \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ USB evaluation protocols. } \label{table:USB_resolutions} \end{table} For fair evaluation, we propose the \textit{USB evaluation protocols} shown in Table~\ref{table:USB_resolutions}. By analogy with the size variations of Universal Serial Bus connectors for various devices, USB evaluation protocols have variations in test image scales for various devices and applications. The maximum resolution for Standard USB follows the popular test scale of 1333$\times$800 in the COCO benchmark (Table~\ref{table:coco_sota}, \cite{MMDetection, Detectron2018}). For Mini USB, we base the resolution limit on 512$\times$512. This resolution is popular in the \textsc{Pascal}\xspace VOC benchmark~\cite{PASCALVOC_IJCV2015, SSD_ECCV2016}, which contains small images and large objects. It is also popular in real-time detectors~\cite{EfficientDet_CVPR2020, YOLOv4_2020}. We adopted an even smaller scale, 224$\times$224, for Micro USB. This resolution is popular in ImageNet classification~\cite{ImageNet_IJCV2015, ResNet_CVPR2016}. Although small object detection at this resolution is extremely difficult, the protocol is suitable for low-power devices. Additionally, it enables people to manage object detection tasks using one or a few GPUs.
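A test scale can likewise be mapped to the smallest evaluation protocol admitting it by comparing pixel counts against Table~\ref{table:USB_resolutions}. A minimal sketch (function name ours; small-difference exceptions are ignored):

```python
# (protocol, maximum pixel count) in ascending order of resolution.
USB_EVAL_PROTOCOLS = [
    ("Micro USB",    50_176),     # 224 x 224
    ("Mini USB",     262_144),    # 512 x 512
    ("Standard USB", 1_066_667),  # ~1333 x 800
    ("Large USB",    2_457_600),  # 1920 x 1280
    ("Huge USB",     7_526_400),  # 3360 x 2240
]

def usb_eval_protocol(width: int, height: int) -> str:
    """Return the smallest USB evaluation protocol admitting this test scale."""
    pixels = width * height
    for name, max_pixels in USB_EVAL_PROTOCOLS:
        if pixels <= max_pixels:
            return name
    return "Freestyle"

print(usb_eval_protocol(1333, 800))   # -> Standard USB (1,066,400 pixels)
print(usb_eval_protocol(1920, 1280))  # -> Large USB
```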
To cover larger test scales than Standard USB, we define Large USB and Huge USB based on WOD resolutions (see Supplementary Material\xspace for the top methods). Although larger inputs (regarded as Freestyle) may be preferable for accuracy, excessively large inputs reduce the practicality of detectors. In addition to test image scales, the presence and degree of Test-Time Augmentation (TTA) make large differences in accuracy and inference time. When using TTA, participants should report its details (including the number of scales of multi-scale testing) and results without TTA. \subsection{Evaluation Metrics} We mainly use the COCO metrics~\cite{COCO_ECCV2014, cocoapi} to evaluate the performance of detectors on each dataset. We provide data format converters for WOD\footnote{Our GitHub repository (anonymized for review).} and M109s\xspace\footnote{Our GitHub repository (anonymized for review).}. We first describe the calculation of COCO metrics according to the official evaluation code~\cite{cocoapi}. True or false positives are judged by measuring the Intersection over Union (IoU) between predicted bounding boxes and ground truth bounding boxes~\cite{PASCALVOC_IJCV2015}. For each category, the Average Precision (AP) is calculated as precision averaged over $101$ recall thresholds $\{0, 0.01, ..., 1\}$. The COCO-style AP (CAP) for a dataset $d$ is calculated as \begin{equation} \mathrm{CAP}_d = \frac{1}{|T|}\sum_{t \in T} \frac{1}{|C_d|}\sum_{c \in C_d} \mathrm{AP}_{t, c}, \end{equation} where $T = \{0.5, 0.55, ..., 0.95\}$ denotes the predefined $10$ IoU thresholds, $C_d$ denotes categories in the dataset $d$, $|\cdot|$ denotes the cardinality of a set (\eg, $|C_d| = 80$ for COCO), and $\mathrm{AP}_{t, c}$ denotes AP for an IoU threshold $t$ and a category $c$. For detailed analysis, five additional AP metrics (averaged over categories) are evaluated. AP$_{50}$ and AP$_{75}$ denote AP at single IoU thresholds of $0.5$ and $0.75$, respectively.
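The CAP equation above reduces to a double mean over IoU thresholds and categories. A minimal sketch (not the official cocoapi implementation):

```python
def coco_style_ap(ap):
    """CAP_d as in the equation above: the mean of AP_{t,c} over the |T| = 10
    IoU thresholds and the |C_d| categories. `ap[t][c]` holds AP_{t,c}."""
    num_t = len(ap)
    assert num_t == 10  # T = {0.50, 0.55, ..., 0.95}
    num_c = len(ap[0])
    return sum(sum(row) / num_c for row in ap) / num_t

# Two categories with AP constant across IoU thresholds, for illustration:
print(coco_style_ap([[0.6, 0.4]] * 10))  # -> 0.5
```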
AP$_\textit{S}$\xspace, AP$_\textit{M}$\xspace, and AP$_\textit{L}$\xspace are variants of CAP, where target objects are limited to small (area $\leq 32^2$), medium ($32^2 \leq$ area $\leq 96^2$), and large ($96^2 \leq$ area) objects, respectively. The area is measured using mask annotations for COCO and bounding box annotations for WOD and M109s\xspace. As the primary metric for USB, we use the mean COCO-style AP (mCAP) averaged over all datasets $D$ as \begin{equation} \mathrm{mCAP} = \frac{1}{|D|}\sum_{d \in D} \mathrm{CAP}_d. \end{equation} Since USB adopts the three datasets described in Sec.~\ref{sec:usb_datasets}, $\mathrm{mCAP} = (\mathrm{CAP_{COCO}}+\mathrm{CAP_{WOD}}+\mathrm{CAP_{M109s\xspace}}) / 3.$ Similarly, we define five metrics from AP$_{50}$, AP$_{75}$, AP$_\textit{S}$\xspace, AP$_\textit{M}$\xspace, and AP$_\textit{L}$\xspace by averaging them over the datasets. The three COCO-style scale-wise metrics (AP$_\textit{S}$, AP$_\textit{M}$, and AP$_\textit{L}$) are too coarse for detailed scale-wise analysis. They confuse objects of significantly different scales. For example, the absolute scale of a large object might be 100 or 1600. Thus, we introduce finer scale-wise metrics. We define the \textit{Absolute Scale AP (ASAP)} and \textit{Relative Scale AP (RSAP)} using exponential thresholds. ASAP partitions object scales based on absolute scale $(0, 8, 16, 32, ..., 1024, \infty)$. RSAP partitions object scales based on relative scale $(0, \frac{1}{256}, \frac{1}{128}, ..., \frac{1}{2}, 1)$. We call the partitions by their maximum scales. For ease of quantitative evaluation, we limit the number of detections per image to 100 across all categories, following the COCO benchmark~\cite{cocoapi}. For qualitative evaluation, participants may raise the limit to 300 (1\% of images in the M109s\xspace \texttt{15test} set contain more than 100 annotations). 
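The ASAP and RSAP partitions can be made concrete as follows. This sketch names each bin by its maximum scale as described above; the inclusive upper edges are our assumption:

```python
import math

# Exponential partition edges, named by their maximum scale (see text):
ASAP_EDGES = [8 * 2 ** i for i in range(8)] + [math.inf]   # 8, 16, ..., 1024, inf
RSAP_EDGES = [2.0 ** -i for i in range(8, 0, -1)] + [1.0]  # 1/256, ..., 1/2, 1

def scale_bin(scale, edges):
    """Return the name (maximum scale) of the partition containing `scale`."""
    for edge in edges:
        if scale <= edge:
            return edge
    raise ValueError(f"scale {scale} out of range")

# A 40x60 box has absolute scale sqrt(2400) ~= 49.0, falling in the 64 bin;
# a relative scale of 0.3 falls in the 1/2 bin.
print(scale_bin(math.sqrt(40 * 60), ASAP_EDGES))  # -> 64
print(scale_bin(0.3, RSAP_EDGES))                 # -> 0.5
```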
\input{universenet_experiment} \section{Conclusions and Limitations}\label{sec:conclusions} We introduced USB, a benchmark for universal-scale object detection. To resolve unfair comparisons in existing benchmarks, we established USB training/evaluation protocols. Using the proposed benchmark, we found weaknesses in existing methods that should be addressed in future research. There are four limitations to this work. (1) USB depends on datasets with many instances. Reliable scale-wise metrics for small datasets should be considered. (2) USB does not cover the resolution of recent smartphone cameras (\eg, 4000$\times$3000). Such high-resolution images may encourage completely different approaches. (3) We could not train detectors with protocols higher than USB 1.0 due to limited resources. Although compatibility enables comparisons under lower protocols, only well-funded researchers can compare detectors under higher protocols. Other efforts are also needed to ensure fairness and inclusion in research. (4) The architectures and results of the eight methods are still biased toward COCO due to development and pre-training on COCO. Less biased and more universal detectors should be developed in future research. The proposed USB protocols can be applied to other tasks with modifications. We believe that our work is an important step toward recognizing universal-scale objects by connecting various experimental settings. \noindent \textbf{Research ethics.} See Supplementary Material\xspace for discussion on research ethics, including negative societal impact. \section{Discussions on Research Ethics} \iftoggle{cvprfinal}{}{ \paragraph{Code and reproducibility.} We attached the ``code.zip'', which includes the following codes. {\setlength{\leftmargini}{15pt} \begin{itemize} \setlength{\itemsep}{0.0mm} \setlength{\parskip}{0.0mm} \item UniverseNet: The main code to train and evaluate object detectors. \item WaymoCOCO: The data format converter for WOD.
\item manga109api: The data format converter for M109s\xspace. \end{itemize} } \noindent To reproduce our results, see the instructions included in them. The usage of the main code is the same as that of MMDetection~\cite{MMDetection} v2. } \noindent \textbf{Potential negative societal impacts.} Improving the accuracy and universality of object detectors could improve the performance of autonomous weapons. One mitigation is to develop more detectors for entertainment, which may increase people's happiness and decrease their hatred. In addition, detectors might be misused in surveillance systems (\eg, as a part of person tracking methods). To mitigate this risk, the computer vision community will need to discuss appropriate regulation with national and international organizations. \noindent \textbf{Existing assets.} We used the assets listed in Table~\ref{table:assets}. See our codes for more details. Refer to the papers~\cite{COCO_ECCV2014, WaymoOpenDataset_CVPR2020, Manga109_Aizawa_IEEEMM2020} and the URLs for how the datasets were collected.
\begin{table*}[t] \setlength{\tabcolsep}{1.0mm} \renewcommand\arraystretch{0.72} \begin{center} \scalebox{0.75}{\begin{tabular}{llll} \toprule Asset & Version & URL & License \\ \midrule COCO~\cite{COCO_ECCV2014} & 2017 & \url{https://cocodataset.org/} & Annotations: CC-BY 4.0; images: various licenses \\ WOD~\cite{WaymoOpenDataset_CVPR2020} & 1.2 & \url{https://waymo.com/open/} & Custom license \\ Manga109-s\xspace~\cite{Manga109_Matsui_MTAP2017, Manga109_Aizawa_IEEEMM2020} & 2020.12.18 & \url{http://www.manga109.org/} & Custom license \\ \midrule COCO API~\cite{cocoapi} & 2.0 & \url{https://github.com/cocodataset/cocoapi} & 2-Clause BSD License \\ WOD (code) & --- & \url{https://github.com/waymo-research/waymo-open-dataset} & Apache License 2.0 \\ Manga109 API & 0.3.1 & \url{https://github.com/manga109/manga109api} & MIT License \\ MMDetection~\cite{MMDetection} & 2.17.0 & \url{https://github.com/open-mmlab/mmdetection} & Apache License 2.0 \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Existing assets we used. } \label{table:assets} \end{table*} \noindent \textbf{Consent.} See~\cite{Manga109_Aizawa_IEEEMM2020} for M109s\xspace. For the other datasets, we could not find whether and how consent was obtained. Obtaining consent from people recorded in autonomous driving datasets such as WOD~\cite{WaymoOpenDataset_CVPR2020} is practically impossible. \noindent \textbf{Privacy.} Faces and license plates in WOD~\cite{WaymoOpenDataset_CVPR2020} are blurred. COCO images raise privacy concerns because they likely contain personally identifiable information. However, COCO~\cite{COCO_ECCV2014} is so popular that the computer vision community cannot abruptly stop using it. This paper will be a step toward reducing the dependence on COCO. \noindent \textbf{Offensive contents.} M109s\xspace covers various contents~\cite{Manga109_Aizawa_IEEEMM2020}. This characteristic is useful for developing universal-scale object detectors.
One of the authors visually inspected many images from the three datasets and found that some images in M109s\xspace may be considered offensive (\eg, violence in battle manga and nudity in romantic comedy). Thus, researchers should be careful how they use it. It is also valuable to develop methods that detect such scenes using the dataset. \noindent \textbf{Compute.} Considering the numbers of training images, training for USB takes about 1.7 ($\approx \frac{118287+79735+6467}{118287}$) times as long as that for COCO. This is reasonable for a next-generation benchmark after COCO. Furthermore, the proposed protocols provide incentives to avoid computationally intensive settings~\cite{EfficientDet_CVPR2020}. \section{Details of Related Work} \subsection{Components for Multi-Scale Object Detection} \noindent \textbf{Backbones and modules.} The Inception module~\cite{Inception_CVPR2015} arranges $1{\times}1$, $3{\times}3$, and $5{\times}5$ convolutions to cover multi-scale regions. The residual block~\cite{ResNet_CVPR2016} adds multi-scale features from shortcut connections and $3{\times}3$ convolutions. ResNet-C and ResNet-D~\cite{BagOfTricks_Classification_CVPR2019} replace the first layer of ResNet with the deep stem (three $3{\times}3$ convolutions)~\cite{Inceptionv3_CVPR2016}. The Res2Net module~\cite{Res2Net_TPAMI2020} stacks $3{\times}3$ convolutions hierarchically to represent multi-scale features. Res2Net-v1b~\cite{Res2Net_TPAMI2020} adopts the deep stem with the Res2Net module. The deformable convolution module in Deformable Convolutional Networks (DCN)~\cite{DCN_ICCV2017} adjusts the receptive field adaptively by deforming the sampling locations of standard convolutions. These modules are mainly used in backbones. \noindent \textbf{Necks.} Necks follow backbones to combine and enhance their representations. Feature Pyramid Networks (FPN)~\cite{FPN_CVPR2017} adopt a top-down path and lateral connections, similar to architectures for semantic segmentation.
Scale-Equalizing Pyramid Convolution (SEPC)~\cite{SEPC_CVPR2020} introduces pyramid convolution across feature maps with different resolutions and utilizes DCN to align the features. \noindent \textbf{Heads and training sample selection.} Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} spreads multi-scale anchors over a feature map. SSD~\cite{SSD_ECCV2016} spreads multi-scale anchors over multiple feature maps with different resolutions. Adaptive Training Sample Selection (ATSS)~\cite{ATSS_CVPR2020} eliminates the need for multi-scale anchors by dividing positive and negative samples according to object statistics across pyramid levels. \noindent \textbf{Multi-scale training and testing.} Traditionally, the image pyramid has been an essential technique for handling multi-scale objects~\cite{Rowley_PAMI1998}. Although recent detectors can output multi-scale objects from a single-scale input, many studies use multi-scale inputs to improve performance~\cite{Faster_R-CNN_NIPS2015, RetinaNet_ICCV2017, ATSS_CVPR2020, SEPC_CVPR2020}. In a popular implementation~\cite{MMDetection}, multi-scale training randomly chooses a scale at each iteration for (training-time) data augmentation. Multi-scale testing infers multi-scale inputs and merges their outputs for Test-Time Augmentation (TTA). Scale Normalization for Image Pyramids (SNIP)~\cite{SNIP_Singh_CVPR2018} limits the range of object scales at each image scale during training and testing. \subsection{Scale-Wise Metrics} Many studies have introduced different scale-wise metrics~\cite{DiagDet_ECCV2012, Caltech_PAMI2012, COCO_ECCV2014, ImageNet_IJCV2015, TinyPerson_WACV2020, TIDE_ECCV2020, REVISE_ECCV2020}. Unlike these studies, we introduce two types of finer scale-wise metrics based on the absolute scale and relative scale~\cite{TinyPerson_WACV2020}. More importantly, we evaluated them on datasets that have extensive scale variations and many instances in multiple domains.
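Multi-scale training as mentioned above amounts to sampling one target scale per iteration. A minimal sketch in the style of a popular implementation (the candidate scales are illustrative):

```python
import random

# Candidate (long side, short side) target scales; illustrative, not fixed.
TRAIN_SCALES = ((1333, 640), (1333, 672), (1333, 704), (1333, 736),
                (1333, 768), (1333, 800))

def sample_train_scale(scales=TRAIN_SCALES):
    """Multi-scale training: randomly pick one target scale per iteration
    as (training-time) data augmentation."""
    return random.choice(scales)

random.seed(0)  # for reproducibility of the demo
print(sample_train_scale())  # one of the candidate scales
```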
\subsection{Criticism of Experimental Settings} For fair, inclusive, and efficient research, many previous studies have criticized experimental settings (\eg, \cite{GreenAI_CACM2020, MetricLearningRealityCheck_ECCV2020}). These previous studies do not propose fair and practical protocols for object detection benchmarks. \section{Details of Protocols} \subsection{Dataset Splits of Manga109-s\xspace} \begin{table}[t] \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.8}{\begin{tabular}{ll} \toprule Volume & Genre \\ \midrule \multicolumn{2}{l}{\texttt{15test} \textit{set:}} \\ Aku-Ham & Four-frame cartoons \\ Bakuretsu! Kung Fu Girl & Romantic comedy \\ Doll Gun & Battle \\ Eva Lady & Science fiction \\ Hinagiku Kenzan\textit{!} & Love romance \\ Kyokugen Cyclone & Sports \\ Love Hina vol. 1 & Romantic comedy \\ Momoyama Haikagura & Historical drama \\ Tennen Senshi G & Humor \\ Uchi no Nyan's Diary & Animal \\ Unbalance Tokyo & Science fiction \\ Yamato no Hane & Sports \\ Youma Kourin & Fantasy \\ Yume no Kayoiji & Fantasy \\ Yumeiro Cooking & Love romance \\ \midrule \multicolumn{2}{l}{\texttt{4val} \textit{set:}} \\ Healing Planet & Science fiction \\ Love Hina vol. 14 & Romantic comedy \\ Seijinki Vulnus & Battle \\ That's! Izumiko & Fantasy \\ \midrule \multicolumn{2}{l}{\texttt{68train} \textit{set: All the other volumes}} \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Manga109-s dataset splits (87 volumes in total). } \label{table:manga109_split} \end{table} The \textit{Manga109-s} dataset (87 volumes) is a subset of the full \textit{Manga109} dataset (109 volumes)~\cite{Manga109_Aizawa_IEEEMM2020}. Unlike the full Manga109 dataset, the Manga109-s dataset can be used by commercial organizations. The dataset splits for the full Manga109 dataset used in prior work~\cite{Manga109_detection_Ogawa_2018} cannot be used for the Manga109-s dataset. We defined the Manga109-s dataset splits shown in Table~\ref{table:manga109_split}. 
Unlike the alphabetical-order splits used in the prior work~\cite{Manga109_detection_Ogawa_2018}, we selected the volumes carefully. The \texttt{15test} set was selected to be well balanced for reliable evaluation. Five volumes in the \texttt{15test} set were selected from the 10 test volumes used in~\cite{Manga109_detection_Ogawa_2018} to enable partially direct comparison. All the authors of the \texttt{15test} and \texttt{4val} sets are different from those of the \texttt{68train} set to evaluate generalizability. \subsection{Number of Images} There are 118,287 images in COCO \texttt{train2017}, 5,000 in COCO \texttt{val2017}, 79,735 in WOD \texttt{f0train}, 20,190 in WOD \texttt{f0val}, 6,467 in M109s \texttt{68train}, 399 in M109s \texttt{4val}, and 1,289 in M109s \texttt{15test}. Following prior work~\cite{Manga109_detection_Ogawa_2018}, we exclude M109s images without annotations because objects in irregular pages are not annotated. \subsection{Importance of Many Instances} Here, we highlight the importance of having more instances than UODB~\cite{UniversalObjectDetection_CVPR2019}. We show that if we introduced scale-wise metrics to UODB, the results would be unreliable. Watercolor2k, one of the datasets adopted by UODB, has $6$ classes and $27$ bicycle instances~\cite{CrossDomainDetection_Inoue_CVPR2018}. If we divided the dataset equally into training and evaluation splits and small, medium, and large bicycles were equally frequent, the average number of bicycles at a given scale in the evaluation split would be $4.5$. Since these $4.5$ bicycles account for $\frac{1}{6}$ of a scale-wise metric, a single error can change the result by about $3.7\%$. Thus, randomness can easily reverse the ranking between methods, making the benchmark results unreliable.
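The back-of-the-envelope calculation above can be checked directly (all numbers from the Watercolor2k example):

```python
# Numbers from the Watercolor2k example above.
classes = 6
bicycles = 27
eval_bicycles = bicycles / 2   # equal train/eval split -> 13.5
per_scale = eval_bicycles / 3  # small/medium/large scales -> 4.5
# One error moves the bicycle AP of that scale by roughly 1 / 4.5, and the
# scale-wise metric averages over 6 classes:
swing = 1 / per_scale / classes
print(f"{swing:.1%}")  # -> 3.7%
```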
\begin{table*}[t] \setlength{\tabcolsep}{1.3mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.75}{\begin{tabular}{lcccccccccc@{\hspace{.9em}}cccccc} \toprule \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{Method} & \multicolumn{2}{c}{Head} & \multicolumn{3}{c}{Neck} & \multicolumn{3}{c}{Backbone} & Input & \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{FPS} & \multicolumn{6}{c}{COCO (1$\times$ schedule)} \\ \cmidrule(l{.2em}r{.2em}){2-3}\cmidrule(l{.2em}r{.2em}){4-6}\cmidrule(l{.2em}r{.2em}){7-9}\cmidrule(l{.2em}r{.2em}){10-10}\cmidrule{12-17} & ATSS & GFL & PConv & DCN & iBN & Res2 & DCN & SyncBN & MStrain & & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule RetinaNet~\cite{RetinaNet_ICCV2017} & & & & & & & & & & 33.9 & 36.5 & 55.4 & 39.1 & 20.4 & 40.3 & 48.1 \\ ATSS~\cite{ATSS_CVPR2020} & \cm & & & & & & & & & 35.2 & 39.4 & 57.6 & 42.8 & 23.6 & 42.9 & 50.3 \\ GFL~\cite{GFL_NeurIPS2020} & \cm & \cm & & & & & & & & 37.2 & 40.2 & 58.4 & 43.3 & 23.3 & 44.0 & 52.2 \\ ATSEPC\xspace~\cite{ATSS_CVPR2020, SEPC_CVPR2020} & \cm & & \cm & P, LC & & & & & & 25.0 & 42.1 & 59.9 & 45.5 & 24.6 & 46.1 & 55.0 \\ UniverseNet\xspace & \cm & & \cm & P, LC & & \cm & c3-c5 & & \cm & 17.3 & 46.7 & 65.0 & 50.7 & 29.2 & 50.6 & 61.4 \\
UniverseNet$+$GFL\xspace & \cm & \cm & \cm & P, LC & & \cm & c3-c5 & & \cm & 17.5 & 47.5 & 65.8 & 51.8 & 29.2 & 51.6 & 62.5 \\ UniverseNet-20.08d\xspace & \cm & \cm & \cm & P, LC & \cm & \cm & c3-c5 & \cm & \cm & 17.3 & \textbf{48.6} & \textbf{67.1} & \textbf{52.7} & \textbf{30.1} & \textbf{53.0} & \textbf{63.8} \\ UniverseNet-20.08\xspace & \cm & \cm & \cm & LC & \cm & \cm & c5 & \cm & \cm & 24.9 & 47.5 & 66.0 & 51.9 & 28.9 & 52.1 & 61.9 \\ \midrule UniverseNet-20.08\xspace w/o SEPC~\cite{SEPC_CVPR2020} & \cm & \cm & & & & \cm & c5 & \cm & \cm & 26.7 & 45.8 & 64.6 & 50.0 & 27.6 & 50.4 & 59.7 \\ UniverseNet-20.08\xspace w/o Res2Net-v1b~\cite{Res2Net_TPAMI2020} & \cm & \cm & \cm & LC & \cm & & c5 & \cm & \cm & 32.8 & 44.7 & 62.8 & 48.4 & 27.1 & 48.8 & 59.5 \\ UniverseNet-20.08\xspace w/o DCN~\cite{DCN_ICCV2017} & \cm & \cm & \cm & & \cm & \cm & & \cm & \cm & 27.8 & 45.9 & 64.5 & 49.8 & 28.9 & 49.9 & 59.0 \\ UniverseNet-20.08\xspace w/o iBN, SyncBN~\cite{SEPC_CVPR2020, MegDet_CVPR2018} & \cm & \cm & \cm & LC & & \cm & c5 & & \cm & 25.7 & 45.8 & 64.0 & 50.2 & 27.9 & 50.0 & 59.8 \\ UniverseNet-20.08\xspace w/o MStrain & \cm & \cm & \cm & LC & \cm & \cm & c5 & \cm & & 24.8 & 45.9 & 64.5 & 49.6 & 27.4 & 50.5 & 60.1 \\ \bottomrule \end{tabular}} \end{center} \vspace{-3.6mm} \caption{ Architectures of UniverseNets\xspace with a summary of ablation studies on COCO \texttt{minival}. See Sec.~\ref{sec:coco_ablation} for step-by-step improvements. All results are based on MMDetection~\cite{MMDetection} v2. The ``Head'' methods (ATSS and GFL) affect losses and training sample selection. Res2: Res2Net-v1b~\cite{Res2Net_TPAMI2020}. PConv (Pyramid Convolution) and iBN (integrated Batch Normalization) are the components of SEPC~\cite{SEPC_CVPR2020}. The DCN columns indicate where to apply DCN. ``P'': The PConv modules in the combined head of SEPC~\cite{SEPC_CVPR2020}. ``LC'': The extra head of SEPC for localization and classification~\cite{SEPC_CVPR2020}. 
``c3-c5'': conv3\_x, conv4\_x, and conv5\_x layers in ResNet-style backbones~\cite{ResNet_CVPR2016}. ``c5'': conv5\_x layers in ResNet-style backbones~\cite{ResNet_CVPR2016}. ATSEPC\xspace: ATSS with SEPC (without iBN). MStrain: Multi-scale training. FPS: Frames per second on one V100 with mixed precision. } \label{table:coco_ablation_details} \end{table*} \subsection{Exceptions of Protocols} The rounding error of epochs between epoch- and iteration-based training can be ignored when calculating the maximum epochs. Small differences of eight pixels or less can be ignored when calculating the maximum resolutions. For example, DSSD513~\cite{DSSD_2017} will be compared in Mini USB. \subsection{Constraints on Training Time} We do not adopt constraints on training time as the major constraints of the training protocols because they have the following issues. {\setlength{\leftmargini}{15pt} \begin{itemize} \setlength{\itemsep}{0.0mm} \setlength{\parskip}{0.0mm} \item It is difficult to measure training time on unified hardware. \item It is complicated to measure training time, calculate allowable epochs, and set learning rate schedules for each model. \item It is difficult to compare with previous studies, which align the number of epochs. \item They will reduce the value of huge existing resources for standard training epochs (trained models, configuration files, and experimental results) provided by popular object detection libraries such as MMDetection~\cite{MMDetection}. \item They overemphasize implementation optimization rather than trial and error of novel methods. \item There are overlaps between the factors of training time and those of inference time. \end{itemize} } The proposed constraints on training epochs are much easier to adopt and more reasonable. Furthermore, our protocols compensate for the shortcomings of the training epoch constraints by defining the provisions for hyperparameter optimization and data augmentation. 
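The hyperparameter provisions above are mechanical and can be validated automatically. A sketch (function names ours; a small floating-point tolerance is added):

```python
def valid_exponential_grid(choices, tol=1e-9):
    """USB rule for exponentially tuned hyperparameters: the ratio between
    adjacent choices must be at least 2."""
    s = sorted(choices)
    return all(b / a >= 2 - tol for a, b in zip(s, s[1:]))

def valid_linear_grid(choices):
    """USB rule for linearly tuned hyperparameters: at most 11 choices."""
    return len(choices) <= 11

print(valid_exponential_grid([0.1, 0.2, 0.4, 0.8]))    # -> True
print(valid_exponential_grid([0.1, 0.15, 0.2]))        # -> False (ratio 1.5)
print(valid_linear_grid([i / 10 for i in range(11)]))  # -> True (11 choices)
```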
\subsection{Characteristics of Scale-Wise Metrics} \label{sec:scale_metrics_characteristics} ASAP and the COCO-style scale-wise metrics are based on absolute scale, whose weakness is that it changes with image resizing. To limit inference time and GPU memory consumption, and to ensure fair comparisons, input images are typically resized. If they are smaller than the original images, relative scales affect accuracy more directly than absolute scales. Furthermore, objects with the same absolute scale in the original images may have different absolute scales in the input images. Fluctuating object scale thresholds are not desirable for scale-wise metrics. In addition, ASAP is not suitable for evaluating accuracy on very large objects. It may be impossible to calculate ASAP for large absolute scales on some datasets. In the case of USB, we cannot calculate ASAP$_\infty$ on COCO because the absolute scales of COCO objects are smaller than 1024 (we filled ASAP$_\infty$ on COCO with zero in experiments). Furthermore, ASAP for large absolute scales may show unusual behavior. For example, in the evaluation of ASAP$_\infty$ on M109s\xspace, all predictions with absolute scales larger than 1024 have IoUs larger than $0.5$ with an object of image resolution size (1654$\times$1170). We prefer RSAP to ASAP due to the above-mentioned weaknesses of ASAP. Absolute scales may still be important depending on whether and how participants resize images. In that case, RSAP and ASAP can be used complementarily. \section{Details of UniverseNets\xspace} To build fast and accurate detectors for USOD, we designed UniverseNets\xspace. We adopted single-stage detectors for efficiency. We show the detailed architectures in Table~\ref{table:coco_ablation_details}. As a baseline model, we used the RetinaNet~\cite{RetinaNet_ICCV2017} implemented in MMDetection~\cite{MMDetection}.
Specifically, the backbone is ResNet-50-B~\cite{BagOfTricks_Classification_CVPR2019} (a variant of ResNet-50~\cite{ResNet_CVPR2016}, also known as the PyTorch style). The neck is FPN~\cite{FPN_CVPR2017}. We used focal loss~\cite{RetinaNet_ICCV2017}, single-scale training, and single-scale testing. Built on the RetinaNet baseline, we designed \textit{UniverseNet\xspace} by collecting human wisdom about multi-scale object detection as of May 2020. We used ATSS~\cite{ATSS_CVPR2020} and SEPC without iBN~\cite{SEPC_CVPR2020} (hereafter referred to as \textit{ATSEPC}). The backbone is Res2Net-50-v1b~\cite{Res2Net_TPAMI2020}. We adopted Deformable Convolutional Networks (DCN)~\cite{DCN_ICCV2017} in the backbone and neck. We used multi-scale training. Unless otherwise stated, we used single-scale testing for efficiency. By adding GFL~\cite{GFL_NeurIPS2020}, SyncBN~\cite{MegDet_CVPR2018}, and iBN~\cite{SEPC_CVPR2020}, we designed three variants of UniverseNet\xspace around August 2020. \textit{UniverseNet-20.08d\xspace} heavily uses DCN~\cite{DCN_ICCV2017}. \textit{UniverseNet-20.08\xspace} speeds up inference (and training) by the light use of DCN~\cite{DCN_ICCV2017, SEPC_CVPR2020}. \textit{UniverseNet-20.08s\xspace} further speeds up inference using the ResNet-50-C~\cite{BagOfTricks_Classification_CVPR2019} backbone. \section{Details of Experiments} Here, we show the details of experimental settings and results. See also the code to reproduce our settings including minor hyperparameters. \subsection{Common Settings} We follow the learning rate schedules of MMDetection~\cite{MMDetection}, which are similar to those of Detectron~\cite{Detectron2018}. Specifically, the learning rates are reduced by 10$\times$ in two predefined epochs. Epochs for the first learning rate decay, the second decay, and ending training are $(8, 11, 12)$ for the 1$\times$ schedule, $(16, 22, 24)$ for the 2$\times$ schedule, and $(16, 19, 20)$ for the 20e schedule. 
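The step schedules above can be written as a small lookup. A sketch covering the three MMDetection-style schedules (1-indexed epochs; warmup and the custom WOD schedule are omitted):

```python
SCHEDULES = {  # (first decay, second decay, last epoch)
    "1x":  (8, 11, 12),
    "2x":  (16, 22, 24),
    "20e": (16, 19, 20),
}

def lr_at_epoch(base_lr: float, epoch: int, schedule: str = "1x") -> float:
    """Step schedule: the learning rate is divided by 10 at each decay epoch."""
    first, second, last = SCHEDULES[schedule]
    assert 1 <= epoch <= last
    if epoch <= first:
        return base_lr
    if epoch <= second:
        return base_lr / 10
    return base_lr / 100

print(lr_at_epoch(0.01, 12, "1x"))  # -> 0.0001 (after both decays)
```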
The 20e schedule is reasonable for avoiding overfitting caused by small learning rates~\cite{Shinya_ICCVW2019}. We mainly used the 1$\times$ schedule (12 epochs). For comparison with state-of-the-art methods on COCO, we used the 2$\times$ schedule (24 epochs) for most models and the 20e schedule (20 epochs) for UniverseNet-20.08d\xspace due to overfitting with the 2$\times$ schedule. For comparison with state-of-the-art methods on WOD, we trained UniverseNet\xspace on the WOD full training set for 7 epochs. We used a learning rate of $10^{-3}$ for 6 epochs and $10^{-4}$ for the last epoch. We mainly used ImageNet~\cite{ImageNet_IJCV2015} pre-trained backbones that are standard in MMDetection~\cite{MMDetection}. Some pre-trained Res2Net backbones not supported in MMDetection were downloaded from the Res2Net~\cite{Res2Net_TPAMI2020} repository. We used the COCO pre-trained models of the MMDetection~\cite{MMDetection} repository for existing methods (Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} with FPN~\cite{FPN_CVPR2017}, Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018}, RetinaNet~\cite{RetinaNet_ICCV2017}, ATSS~\cite{ATSS_CVPR2020}, and GFL~\cite{GFL_NeurIPS2020}). We trained most models with mixed precision and 4 GPUs ($\times$ 4 images per GPU). We mainly used NVIDIA T4 GPUs on the Google Cloud Platform. All results on USB and all results of UniverseNets\xspace are single-model results without ensembling. We excluded images without annotations during training. We could not train each object detector multiple times with different random seeds to report error bars because training object detectors is too computationally expensive. For training ATSS~\cite{ATSS_CVPR2020} with Swin Transformer~\cite{SwinTransformer_ICCV2021}, we followed the optimizer settings of the Swin paper~\cite{SwinTransformer_ICCV2021}. Specifically, we used the AdamW optimizer~\cite{AdamW_ICLR2019} with an initial learning rate of $10^{-4}$ and a weight decay of $0.05$.
For M109s\xspace, we used an initial learning rate of $4{\times}10^{-4}$, roughly tuned from choices $\{2{\times}10^{-4}, 4{\times}10^{-4}, 8{\times}10^{-4}\}$. \subsection{Settings on COCO} For comparison with state-of-the-art methods with TTA on COCO, we used soft voting with 13-scale testing and horizontal flipping following the original implementation of ATSS~\cite{ATSS_CVPR2020}. Specifically, shorter side pixels are (400, 500, 600, 640, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1800), while longer side pixels are their 1.667$\times$. For the 13 test scales, target objects are limited to corresponding 13 predefined ranges ((96, $\infty$), (96, $\infty$), (64, $\infty$), (64, $\infty$), (64, $\infty$), (0, $\infty$), (0, $\infty$), (0, $\infty$), (0, 256), (0, 256), (0, 192), (0, 192), (0, 96)), where each tuple denotes the minimum and maximum absolute scales. We also evaluated 5-scale TTA because the above-mentioned ATSS-style TTA is slow. We picked (400, 600, 800, 1000, 1200) for shorter side pixels, and ((96, $\infty$), (64, $\infty$), (0, $\infty$), (0, $\infty$), (0, 256)) for absolute scale ranges. \iftoggle{cvprarxiv}{ \subsection{Settings on NightOwls} NightOwls~\cite{NightowlsDataset_ACCV2018} is a dataset for person detection at night. It contains three categories (pedestrian, bicycle driver, and motorbike driver). In contrast to WOD, it is important to detect medium or large objects because the evaluation of NightOwls follows the \textit{reasonable} setting~\cite{Caltech_PAMI2012} where small objects (less than 50 pixels tall) are ignored. We prevented the overfitting of the driver categories (bicycle driver and motorbike driver) in two ways. The first is to map the classifier layer of a WOD pre-trained model. We transferred the weights for cyclists learned on the richer WOD to those for the NightOwls driver categories. The second is early stopping. We trained the model for 2 epochs (4,554 iterations) without background images. 
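The scale-aware filtering in the ATSS-style TTA described above (for the COCO settings) keeps a detection only when its absolute scale falls in the range assigned to that test scale. A sketch (boundary handling is our assumption):

```python
import math

# (shorter side, (min, max) absolute scale) for the 13-scale ATSS-style TTA:
TTA_SCALE_RANGES = list(zip(
    (400, 500, 600, 640, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1800),
    ((96, math.inf), (96, math.inf), (64, math.inf), (64, math.inf),
     (64, math.inf), (0, math.inf), (0, math.inf), (0, math.inf),
     (0, 256), (0, 256), (0, 192), (0, 192), (0, 96)),
))

def keep_detection(box_wh, scale_range):
    """Keep a detection only if its absolute scale (sqrt of box area) lies
    within the range assigned to that test scale."""
    lo, hi = scale_range
    scale = math.sqrt(box_wh[0] * box_wh[1])
    return lo <= scale <= hi

# At the smallest test scale (shorter side 400), only large objects survive:
print(keep_detection((200, 100), TTA_SCALE_RANGES[0][1]))  # -> True
print(keep_detection((50, 50), TTA_SCALE_RANGES[0][1]))    # -> False
```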
} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/finer_scale_asap_mean_thinline.pdf} \vspace{-4mm} \caption{ Absolute Scale AP on USB. } \label{fig:usb_asap} \end{figure} \begin{table}[t] \setlength{\tabcolsep}{2.3mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.75}{\begin{tabular}{llccc} \toprule \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{Rank} & \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{Method} & \multicolumn{2}{c}{\# Models} & \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{AP/L2} \\ \cmidrule(l{.2em}r{.2em}){3-4} & & Multi-stage & Single-stage & \\ \midrule \multicolumn{5}{l}{\textit{Methods including multi-stage detector:}} \\ 1 & RW-TSDet~\cite{Waymo2d_1st_2020} & 6+ & & \textbf{74.43} \\ 2 & HorizonDet~\cite{Waymo2d_2nd_2020} & 4 & 8 & 70.28 \\ 3 & SPNAS-Noah~\cite{Waymo2d_3rd_2020} & 2 & & 69.43 \\ \midrule \multicolumn{5}{l}{\textit{Single-stage detectors:}} \\ 7 & \textbf{UniverseNet (Ours)} & & \textbf{1} & \textbf{67.42} \\ 13 & YOLO V4~\cite{YOLOv4_2020} & & 1+ & 58.08 \\ 14 & ATSS-Efficientnet~\cite{ATSS_CVPR2020, EfficientNet_ICML2019} & & 1+ & 56.99 \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ Waymo Open Dataset Challenge 2020 2D
detection~\cite{WaymoOpenDataset_2D_detection_leaderboard}. } \label{table:waymo_leaderboard} \end{table} \iftoggle{cvprarxiv}{ \begin{table}[t] \setlength{\tabcolsep}{2.4mm} \renewcommand\arraystretch{0.85} \begin{center} \scalebox{0.75}{\begin{tabular}{lllcc} \toprule Rank & Method & Test scale & Test flip & MR \\ \midrule 1 & UniverseNet\xspace & 1280$\times$800, 1536$\times$960 & \cm & \;\:\textbf{5.67} \\ -- & UniverseNet\xspace & 1280$\times$800 & & \;\:7.49 \\ 2 & DeepBlueAI~\cite{NightOwls_talks_CVPRW2020_anonymize}\xspace & 1920$\times$1280, 2048$\times$1280 & \cm & \;\:8.06 \\ 3 & dereyly~\cite{NightOwls_talks_CVPRW2020_anonymize}\xspace & --- & --- & 10.29 \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ NightOwls Detection Challenge 2020 all objects track. MR: Average Miss Rate (\%) on \texttt{test} set (\textit{reasonable} setting). } \label{table:nightowls_test} \end{table} } \subsection{Evaluation with Scale-Wise Metrics} We show RSAP and ASAP on USB in \iftoggle{cvprarxiv}{Figures~\ref{fig:usb_rsap}}{Figures~4} and \ref{fig:usb_asap}, respectively. They do not increase monotonically but rather decrease at relative scales greater than $1/4$ or absolute scales greater than $512$. The difficulty of very large objects may be caused by truncation or unusual viewpoints~\cite{DiagDet_ECCV2012}. Except for the issues of ASAP$_\infty$ discussed in Sec.~\ref{sec:scale_metrics_characteristics}, ASAP shows similar changes to RSAP. We conjecture that this is because image resolutions do not change much in each dataset of USB. RSAP$_{\frac{1}{64}}$ and ASAP$_{16}$ are less than 10\%, which indicates the difficulty of tiny object detection~\cite{TinyPerson_WACV2020}. RetinaNet~\cite{RetinaNet_ICCV2017} shows low AP for small objects, while Faster R-CNN~\cite{Faster_R-CNN_NIPS2015} with FPN~\cite{FPN_CVPR2017} shows low AP for large objects.
These results are consistent with the benchmark results of previous work~\cite{SpeedAccuracyTradeOffs_CVPR2017}, which compares SSD~\cite{SSD_ECCV2016} with Faster R-CNN without FPN on COCO. For further analysis, it will be worth comparing the design choice of pyramid levels~\cite{RetinaNet_ICCV2017, TinyPerson_WACV2020}. \begin{table*}[t] \captionsetup[sub]{font=footnotesize,width=.85\linewidth} \setlength{\tabcolsep}{1.2mm} \renewcommand\arraystretch{0.85} \begin{minipage}[c]{0.46\hsize} \begin{center} \scalebox{0.8}{\begin{tabular}{p{29mm}cccccc} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule ATSS~\cite{ATSS_CVPR2020} & 39.4 & 57.6 & 42.8 & 23.6 & 42.9 & 50.3 \\ ATSEPC\xspace~\cite{ATSS_CVPR2020, SEPC_CVPR2020} & \textbf{42.1} & \textbf{59.9} & \textbf{45.5} & \textbf{24.6} & \textbf{46.1} & \textbf{55.0} \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ AP improvements by SEPC without iBN~\cite{SEPC_CVPR2020}. } \label{table:coco_ATSEPC} \vspace{-2mm} \begin{center} \scalebox{0.8}{\begin{tabular}{p{29mm}cccccc} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule ATSEPC\xspace~\cite{ATSS_CVPR2020, SEPC_CVPR2020} & 42.1 & 59.9 & 45.5 & 24.6 & 46.1 & 55.0 \\ UniverseNet\xspace & \textbf{46.7} & \textbf{65.0} & \textbf{50.7} & \textbf{29.2} & \textbf{50.6} & \textbf{61.4} \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ AP improvements by Res2Net-v1b~\cite{Res2Net_TPAMI2020}, DCN~\cite{DCN_ICCV2017}, and multi-scale training. 
} \label{table:coco_OurOrig} \vspace{-2mm} \begin{center} \scalebox{0.8}{\begin{tabular}{p{29mm}cccccc} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule UniverseNet\xspace & 46.7 & 65.0 & 50.7 & \textbf{29.2} & 50.6 & 61.4 \\ UniverseNet$+$GFL\xspace & \textbf{47.5} & \textbf{65.8} & \textbf{51.8} & \textbf{29.2} & \textbf{51.6} & \textbf{62.5} \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ AP improvements by GFL~\cite{GFL_NeurIPS2020}. } \label{table:coco_OurGFL} \vspace{-2mm} \begin{center} \scalebox{0.8}{\begin{tabular}{p{29mm}cccccc} \toprule Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule UniverseNet$+$GFL\xspace & 47.5 & 65.8 & 51.8 & 29.2 & 51.6 & 62.5 \\ UniverseNet-20.08d\xspace & \textbf{48.6} & \textbf{67.1} & \textbf{52.7} & \textbf{30.1} & \textbf{53.0} & \textbf{63.8} \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ AP improvements by SyncBN~\cite{MegDet_CVPR2018} and iBN~\cite{SEPC_CVPR2020}. } \label{table:coco_OurAugustD} \end{minipage} \hfill \begin{minipage}[c]{0.55\hsize} \begin{center} \scalebox{0.8}{\begin{tabular}{lcccccccc} \toprule Method & DCN & FPS & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule UniverseNet-20.08d\xspace & heavy & 17.3 & \textbf{48.6} & \textbf{67.1} & \textbf{52.7} & \textbf{30.1} & \textbf{53.0} & \textbf{63.8} \\ UniverseNet-20.08\xspace & light & \textbf{24.9} & 47.5 & 66.0 & 51.9 & 28.9 & 52.1 & 61.9 \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ Speeding up by the light use of DCN~\cite{DCN_ICCV2017, SEPC_CVPR2020}. 
} \label{table:coco_OurAugust} \vspace{-1.5mm} \begin{center} \scalebox{0.8}{\begin{tabular}{p{38mm}ccccccc} \toprule Method & FPS & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule UniverseNet-20.08\xspace & 24.9 & \textbf{47.5} & \textbf{66.0} & \textbf{51.9} & \textbf{28.9} & \textbf{52.1} & \textbf{61.9} \\ w/o SEPC~\cite{SEPC_CVPR2020} & 26.7 & 45.8 & 64.6 & 50.0 & 27.6 & 50.4 & 59.7 \\ w/o Res2Net-v1b~\cite{Res2Net_TPAMI2020} & \textbf{32.8} & 44.7 & 62.8 & 48.4 & 27.1 & 48.8 & 59.5 \\ w/o DCN~\cite{DCN_ICCV2017} & 27.8 & 45.9 & 64.5 & 49.8 & \textbf{28.9} & 49.9 & 59.0 \\ w/o multi-scale training & 24.8 & 45.9 & 64.5 & 49.6 & 27.4 & 50.5 & 60.1 \\ w/o SyncBN, iBN~\cite{MegDet_CVPR2018, SEPC_CVPR2020} & 25.7 & 45.8 & 64.0 & 50.2 & 27.9 & 50.0 & 59.8 \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ Ablation from UniverseNet-20.08\xspace. Replacing Res2Net-v1b backbone with ResNet-B~\cite{BagOfTricks_Classification_CVPR2019} has the largest effects. } \label{table:coco_OurAugust_ablation} \vspace{-1.5mm} \begin{center} \scalebox{0.8}{\begin{tabular}{p{38mm}ccccccc} \toprule Backbone & FPS & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace \\ \midrule ResNet-50-B~\cite{BagOfTricks_Classification_CVPR2019} & \textbf{32.8} & 44.7 & 62.8 & 48.4 & 27.1 & 48.8 & 59.5 \\ ResNet-50-C~\cite{BagOfTricks_Classification_CVPR2019} & 32.4 & 45.8 & 64.2 & 50.0 & 28.8 & 50.1 & 60.0 \\ Res2Net-50~\cite{Res2Net_TPAMI2020} & 25.0 & 46.3 & 64.7 & 50.3 & 28.2 & 50.6 & 60.8 \\ Res2Net-50-v1b~\cite{Res2Net_TPAMI2020} & 24.9 & \textbf{47.5} & \textbf{66.0} & \textbf{51.9} & \textbf{28.9} & \textbf{52.1} & \textbf{61.9} \\ \bottomrule \end{tabular}} \end{center} \vspace{-4mm} \subcaption{ UniverseNet-20.08\xspace with different backbones. } \label{table:coco_backbone} \end{minipage} \caption{ Ablation studies on COCO \texttt{minival}. 
} \label{table:coco_ablation} \end{table*} \subsection{Comparison with State-of-the-Art} \label{sec:sota_comparison} \noindent \textbf{WOD.} For comparison with state-of-the-art methods on WOD, we submitted the detection results of UniverseNet\xspace to the Waymo Open Dataset Challenge 2020 2D detection, a competition held at a CVPR 2020 workshop. The primary metric is AP/L2, a KITTI-style AP evaluated with LEVEL\_2 objects~\cite{WaymoOpenDataset_CVPR2020, WaymoOpenDataset_2D_detection_leaderboard}. We used multi-scale testing with soft-NMS~\cite{SoftNMS_ICCV2017}. The shorter side pixels of test scales are $(960, 1600, 2240)$, including 8 pixels padding. These scales enable utilizing SEPC~\cite{SEPC_CVPR2020} (see Sec.~\ref{sec:test_scales}) and detecting small objects. Table~\ref{table:waymo_leaderboard} shows the top teams' results. UniverseNet\xspace achieves 67.42\% AP/L2 without multi-stage detectors, ensembles, expert models, or heavy backbones, unlike other top methods. RW-TSDet~\cite{Waymo2d_1st_2020} far outperforms the other multi-stage detectors, whereas UniverseNet\xspace far outperforms the other single-stage detectors. These two methods used light backbones and large test scales~\cite{ashraf2016shallow}. Interestingly, the maximum test scales are the same (3360$\times$2240). We conjecture that this is not a coincidence but a convergence caused by each team searching for the accuracy saturation point. \noindent \textbf{Manga109-s\xspace.} To the best of our knowledge, no prior work has reported detection results on the \textit{Manga109-s\xspace} dataset (87 volumes). Although many settings differ, the state-of-the-art method on the full \textit{Manga109} dataset (109 volumes, non-public to commercial organizations) achieves 77.1--92.0\% (mean: 84.2\%) AP$_{50}$ on ten test volumes~\cite{Manga109_detection_Ogawa_2018}. The mean AP$_{50}$ of UniverseNet-20.08\xspace on the \texttt{15test} set (92.5\%) is higher than those results.
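Soft-NMS, used in our multi-scale testing, decays the scores of overlapping boxes instead of discarding them outright. A minimal sketch of the Gaussian variant (our simplified illustration, not the exact competition code):

```python
import math

def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    """Gaussian soft-NMS: repeatedly keep the highest-scoring box and
    decay the scores of the remaining boxes by exp(-iou^2 / sigma)."""
    scores = list(scores)
    remaining = list(range(len(boxes)))
    keep = []
    while remaining:
        i = max(remaining, key=lambda k: scores[k])
        if scores[i] < score_thr:
            break
        keep.append(i)
        remaining.remove(i)
        for j in remaining:
            scores[j] *= math.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
    return keep

# Overlapping boxes survive with decayed scores; distant boxes are untouched.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (100, 100, 110, 110)]
order = soft_nms(boxes, [0.9, 0.8, 0.7])
```

Unlike hard NMS, the second box here is only down-weighted by its overlap with the first, so it is still kept (last in the ranking).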
\iftoggle{cvprarxiv}{ \noindent \textbf{NightOwls.} To evaluate the generalization ability, we show the results on another dataset outside USB. We trained UniverseNet\xspace on NightOwls~\cite{NightowlsDataset_ACCV2018}, a dataset for person detection at night, from the WOD pre-trained model in Sec.~\ref{sec:sota_comparison}. The top teams' results of the NightOwls Detection Challenge 2020 are shown in Table~\ref{table:nightowls_test}. UniverseNet\xspace is more accurate than other methods, even without TTA, and should be faster than the runner-up method that uses larger test scales and a heavy model (Cascade R-CNN, ResNeXt-101, CBNet, Double-Head, DCN, and soft-NMS)~\cite{NightOwls_talks_CVPRW2020_anonymize}\xspace. } \subsection{Ablation Studies for UniverseNets\xspace} \label{sec:coco_ablation} We show the results of ablation studies for UniverseNets\xspace on COCO in Table~\ref{table:coco_ablation}. As shown in Table~\ref{table:coco_ATSEPC}, ATSEPC\xspace (ATSS~\cite{ATSS_CVPR2020} with SEPC without iBN~\cite{SEPC_CVPR2020}) outperforms ATSS by a large margin. The effectiveness of SEPC for ATSS is consistent with that for other detectors reported in the SEPC paper~\cite{SEPC_CVPR2020}. As shown in Table~\ref{table:coco_OurOrig}, UniverseNet\xspace further improves AP metrics by $\sim$5\% by adopting Res2Net-v1b~\cite{Res2Net_TPAMI2020}, DCN~\cite{DCN_ICCV2017}, and multi-scale training. As shown in Table~\ref{table:coco_OurGFL}, adopting GFL~\cite{GFL_NeurIPS2020} improves AP by 0.8\%. There is room for improvement of AP$_\textit{S}$\xspace in the Quality Focal Loss of GFL~\cite{GFL_NeurIPS2020}. As shown in Table~\ref{table:coco_OurAugustD}, UniverseNet-20.08d\xspace achieves 48.6\% AP by making more use of BatchNorm (SyncBN~\cite{MegDet_CVPR2018} and iBN~\cite{SEPC_CVPR2020}).
It is much more accurate than other models trained for 12 epochs using ResNet-50-level backbones (\eg, ATSS: 39.4\%~\cite{ATSS_CVPR2020, MMDetection}, GFL: 40.2\%~\cite{GFL_NeurIPS2020, MMDetection}). On the other hand, the inference is not so fast (less than 20 FPS) due to the heavy use of DCN~\cite{DCN_ICCV2017}. UniverseNet-20.08\xspace speeds up inference by the light use of DCN~\cite{DCN_ICCV2017, SEPC_CVPR2020}. As shown in Table~\ref{table:coco_OurAugust}, UniverseNet-20.08\xspace is 1.4$\times$ faster than UniverseNet-20.08d\xspace at the cost of a $\sim$1\% AP drop. To further verify the effectiveness of each technique, we conducted ablation from UniverseNet-20.08\xspace shown in Table~\ref{table:coco_OurAugust_ablation}. All techniques contribute to the high AP of UniverseNet-20.08\xspace. Ablating the Res2Net-v1b backbone (replacing Res2Net-50-v1b~\cite{Res2Net_TPAMI2020} with ResNet-50-B~\cite{BagOfTricks_Classification_CVPR2019}) has the largest effects. Res2Net-v1b improves AP by 2.8\% and increases the inference time by 1.3$\times$. To further investigate the effectiveness of backbones, we trained variants of UniverseNet-20.08\xspace as shown in Table~\ref{table:coco_backbone}. Although the Res2Net module~\cite{Res2Net_TPAMI2020} makes inference slower, the deep stem used in ResNet-50-C~\cite{BagOfTricks_Classification_CVPR2019} and Res2Net-50-v1b~\cite{Res2Net_TPAMI2020} improves AP metrics with similar speeds. UniverseNet-20.08s\xspace (the variant using the ResNet-50-C backbone) shows a good speed-accuracy trade-off by achieving 45.8\% AP and over 30 FPS. \subsection{Effects of Test Scales} \label{sec:test_scales} We show the results on WOD at different test scales in Figure~\ref{fig:waymo_scales_cap_supmat}. Single-stage detectors require larger test scales than multi-stage detectors to achieve peak performance, probably because they cannot extract features from precisely localized region proposals. 
Although ATSEPC\xspace shows lower AP than ATSS at the default test scale (1248$\times$832 in Standard USB), it outperforms ATSS at larger test scales (\eg, 1920$\times$1280 in Large USB). We conjecture that we should enlarge object scales in images to utilize SEPC~\cite{SEPC_CVPR2020} because its DCN~\cite{DCN_ICCV2017} enlarges effective receptive fields. SEPC and DCN prefer large objects empirically (Tables~\ref{table:coco_ATSEPC}, \ref{table:coco_OurAugust_ablation}, \cite{SEPC_CVPR2020, DCN_ICCV2017}), and DCN~\cite{DCN_ICCV2017} cannot increase the sampling points for objects smaller than the kernel size in principle. By utilizing the characteristics of SEPC and multi-scale training, UniverseNets\xspace achieve the highest AP in a wide range of test scales. \iftoggle{cvprarxiv}{ \begin{figure}[t] \centering \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{images/wod_ap_test_scale_cap_thinline.pdf} \vspace{-5mm} \subcaption{COCO-style AP} \label{fig:waymo_scales_cap_supmat} \vspace{2mm} \end{minipage} \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{images/wod_ap_test_scale_kap_thinline.pdf} \vspace{-5mm} \subcaption{KITTI-style AP} \label{fig:waymo_scales_kap_supmat} \end{minipage} \caption{ Test scales \vs different AP metrics on WOD \texttt{f0val}. } \label{fig:waymo_scales_supmat} \end{figure} }{ \begin{figure}[t] \centering \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=\linewidth]{images/wod_ap_test_scale_cap_thinline.pdf} \vspace{-5mm} \subcaption{COCO-style AP} \label{fig:waymo_scales_cap_supmat} \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=\linewidth]{images/wod_ap_test_scale_kap_thinline.pdf} \vspace{-5mm} \subcaption{KITTI-style AP} \label{fig:waymo_scales_kap_supmat} \end{minipage} \caption{ Test scales \vs different AP metrics on WOD \texttt{f0val}. 
} \label{fig:waymo_scales_supmat} \end{figure} } \subsection{Evaluation with KITTI-Style AP} We evaluated the KITTI-style AP (KAP) on WOD. KAP is a metric used in benchmarks for autonomous driving~\cite{KITTI_CVPR2012, WaymoOpenDataset_CVPR2020}. Using different IoU thresholds (0.7 for vehicles, and 0.5 for pedestrians and cyclists), KAP is calculated as $\mathrm{KAP} = (\mathrm{AP_{0.7, veh.}}+\mathrm{AP_{0.5, ped.}}+\mathrm{AP_{0.5, cyc.}}) / 3.$ The results of KAP are shown in Figure~\ref{fig:waymo_scales_kap_supmat}. GFL~\cite{GFL_NeurIPS2020} and Cascade R-CNN~\cite{Cascade_R-CNN_CVPR2018}, which focus on localization quality, are less effective for KAP. \subsection{Effects of COCO Pre-Training} To verify the effects of COCO pre-training, we trained UniverseNet-20.08\xspace on M109s\xspace from different pre-trained models. Table~\ref{table:Manga109s_15test_pretrain} shows the results. COCO pre-training improves all the metrics, especially body AP. We also trained models with the eight methods on M109s\xspace from ImageNet pre-trained backbones. We halved the learning rates in \iftoggle{cvprarxiv}{Table~\ref{table:hyperparameters}}{Table~6} and doubled warmup iterations~\cite{Goyal2017AccurateLM} (from 500 to 1,000) because the training of single-stage detectors without COCO pre-training or SyncBN~\cite{MegDet_CVPR2018} is unstable. The CAP without COCO pre-training is 1.9\% lower than that with COCO pre-training \iftoggle{cvprarxiv}{(Table~\ref{table:usb})}{(Table~7)} on average. 
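The class-specific IoU thresholds enter KAP only through which per-class AP is evaluated; the metric itself is a plain mean, as in this trivial sketch (the function name is ours):

```python
def kitti_style_ap(ap_vehicle_iou70, ap_pedestrian_iou50, ap_cyclist_iou50):
    """KAP: mean of per-class APs, each evaluated at its class-specific
    IoU threshold (0.7 for vehicles, 0.5 for pedestrians and cyclists)."""
    return (ap_vehicle_iou70 + ap_pedestrian_iou50 + ap_cyclist_iou50) / 3.0
```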
\begin{table}[t] \setlength{\tabcolsep}{0.9mm} \renewcommand\arraystretch{0.72} \begin{center} \scalebox{0.8}{\begin{tabular}{lcccccccccc} \toprule Pre-training & AP & AP$_{50}$ & AP$_{75}$ & AP$_\textit{S}$\xspace & AP$_\textit{M}$\xspace & AP$_\textit{L}$\xspace & body & face & frame & text \\ \midrule ImageNet & 68.9 & 92.2 & 73.3 & 19.9 & 42.6 & 75.8 & 64.3 & 47.6 & 93.0 & 70.7 \\ COCO 1$\times$ & \textbf{69.9} & \textbf{92.5} & \textbf{74.3} & \textbf{20.5} & \textbf{43.6} & \textbf{77.1} & \textbf{66.6} & \textbf{48.0} & 93.7 & \textbf{71.2} \\ COCO 2$\times$ & 69.8 & 92.3 & 74.0 & \textbf{20.5} & 43.4 & 77.0 & 66.5 & 47.8 & \textbf{93.8} & \textbf{71.2} \\ \bottomrule \end{tabular}} \end{center} \vspace{-3mm} \caption{ UniverseNet-20.08\xspace on Manga109-s \texttt{15test} with different pre-trained models. } \label{table:Manga109s_15test_pretrain} \end{table}
\section{Introduction} The study of the behavior of Hecke eigenvalues has been an interesting and important theme in the theory of modular forms. For example, the distribution of Hecke eigenvalues and Omega results (i.e., `sharp' lower bounds on suitable subsequences) have been studied extensively. In the case of an elliptic Hecke eigenform, the equidistribution of the eigenvalues is a consequence of the Sato--Tate conjecture, which is known from the deep results in \cite{barnet}. However, reasonable Omega results can, in many cases, be proved by less sophisticated techniques. For example, it is well known that for holomorphic cusp forms on $\mathrm{GL}(2)$, such a result follows from the fact that for $r\le 4$, the symmetric $r$-power $L$-functions have an analytic continuation up to $\mathrm{Re}(s)\ge 1/2$ (see \cite{murty1983oscillations}). In the case of our interest, namely holomorphic Siegel modular forms of degree $2$, none of the above-mentioned results are known outside of the Maa{\ss} space, even though there are some average results \cite{kim} (vertical Sato--Tate on average) and \cite{tsuzuki} (Sato--Tate on average). There are far fewer results, however, when one fixes the modular form. Namely, in the case at hand, the distribution of eigenvalues $\lambda_F(p)$ ($p$ prime, $F$ a non-Saito--Kurokawa lift) can be found in \cite{saha2011prime} and \cite{das2013natural}. In this article we study the distribution of Hecke eigenvalues $\mu_F(p^r)$ (for $r=1,2,3$, $p$ being a prime) of a Siegel Hecke eigenform of degree $2$ with full level that is not a Saito--Kurokawa (Maa{\ss}) lift. We do this with the aim of proving an Omega result for Hecke eigenvalues of such an $F$. Let us denote by $S_k^{2,*}$ the Maa{\ss} subspace and by $S_k^{2,\perp}$ the subspace of $S_k^{2}$ orthogonal to $S_k^{2,*}$.
Our main result \thmref{th:dis} implies (via \thmref{th:omega}) in particular that the Ramanujan--Petersson conjecture for eigenforms in $S_k^{2,\perp}$ is optimal, in a sense described below. First, let us describe the main result of this article. Let $F \in S_k^{2,\perp}$ be a Hecke eigenform with eigenvalues $\mu_F(n)$; so that if $T(n)$ denotes the $n$-th (similitude) Hecke operator on $S^2_k$, one has $T(n) F = \mu_F(n) F$ for all $n \ge 1$. The Ramanujan--Petersson conjecture (proved by Weissauer, \cite{weissauer2009endoscopy}) for $F$ implies that (see \cite{das-koh-sen}) for all $n \ge 1$, \begin{align}\label{rama} |\mu_F(n) | \le d_5(n) n^{k-3/2} , \end{align} where $d_5(n)$ denotes the number of ways of writing $n$ as the product of $5$ positive integers. We now normalize $\mu_F(n)$ by putting \begin{align} \label{lnorm} \lambda_F(n) =\mu_F(n) /n^{k-3/2}. \end{align} We call $\mu_F(n)$ `large' if $|\lambda_F(n)| >c$ for some $c>1$. Our main theorem then says that there exists a plethora of `large' eigenvalues if we search in the sequence $\{ p^j | p \text{ prime }, j=1,2,3\}$. By multiplicativity of the Hecke eigenvalues, this would then produce other `large' eigenvalues. \begin{thm}\label{th:dis} Let $F\in S_k^{2,\perp}$ be a Hecke eigenform. Then there exist $c>1$ and $\delta>0$ such that \begin{equation} \liminf_{x \to \infty}\frac{\#\{p\le x : \max\{|\lambda_F(p^i)|: i=1,2,3\}\ge c\}}{\pi(x)}>\delta, \end{equation} where $\pi(x)$ denotes the number of primes up to $x$. \end{thm} Note that this would mean that for every large $x$, there exists an $i=i_x\in\{1,2,3\}$ such that $\#\{p\le x: |\lambda_F(p^i)|>c\}>\delta\cdot \pi(x)$. This immediately gives us the following corollary. \begin{cor}\label{cor:dis} For at least one $j \in \{1,2,3\}$, the following statement is true: there exist constants $c>1$ and $\delta>0$ such that \begin{align} \limsup_{x \to \infty} \frac{\# \{p\le x : |\lambda_F(p^j)|\ge c\}}{\pi(x)} >\delta.
\end{align} \end{cor} The main point to note here is that $c >1$ (so that we are dealing with `large' eigenvalues); analogous assertions when $c<1$ follow already from \cite{das2013natural}. The main tools used in the proof of Theorem \ref{th:dis} are the prime number theorems for the spinor and the standard $L$-functions (denoted as $Z(F,s)$ and $Z^{st}(F,s)$ respectively) attached to $F$ and the Hecke relations among the Hecke eigenvalues. Also crucially used in the proof is the existence of a functorial transfer from $\mathrm{GSp}(4)$ to $\mathrm{GL}(4)$ from the work of \cite{pitaleetal}, which enables us to use the analytic machinery from $\mathrm{GL}(4)$ automorphic representations. Let us now discuss some applications of \thmref{th:dis} towards Omega results on the sequence of eigenvalues $\{ \lambda_F(n) \}$. In particular, we would like to know if \eqref{rama} is the best possible. This means two things: first, the exponent $k-3/2$ should be the best possible and second, the order of magnitude of the slowly growing function $d_5(n)$ should also be the best possible. Of these, the assertion about the exponent is true and follows from \cite{das2013natural}. It should also follow by considering the Rankin--Selberg convolution of $Z(F,s)$ with itself, and arguing with the location of poles (cf. \cite[remark~5.3]{das-boech}). More subtle is the slowly growing function, and we prefer to treat these functions simultaneously. To set the stage, let us recall some facts about this type of questions in the case of elliptic modular forms and Saito--Kurokawa lifts. In the case of elliptic modular forms, the answer to the question of sharpness of the Ramanujan--Petersson conjecture \[ \text{(Deligne's bound): } \quad |a_g(m)| \le d_2(n) n^{(k-1)/2}, \quad n\ge 1 \] (for a newform $g$) is that it is the best possible in terms of the exponent $(k-1)/2$; so the question boils down to understanding the behavior of the function $a_g(m) / m^{(k-1)/2}$. 
One knows the following $\Omega$-type results about this function. In \cite{rankin1973theta} Rankin proved, essentially exploiting the prime number theorem for a Hecke eigenform $g \in S_k$, that it is not bounded: \begin{equation} \label{om1} \limsup_{m \to \infty} \frac{a_g(m)}{m^{(k-1)/2}} = \infty. \end{equation} An even stronger result is due to Ram Murty (cf. \cite{murty1983oscillations}, using the holomorphy of suitable symmetric power $L$-functions): \begin{equation} \label{om2} \frac{a_g(m)}{m^{(k-1)/2}} = \Omega \left( \exp \left(\frac{\alpha \log m}{ \log \log m }\right) \right) \qquad (\alpha>0). \end{equation} It is known that the Saito--Kurokawa lifts of degree $2$ fail to satisfy (\ref{rama}). Instead they satisfy \begin{equation} \mu_F(n)\ll_\epsilon n^{k-1+\epsilon}, \quad \text{for any } \epsilon>0. \end{equation} An Omega result for such an $F\in S_k^{2}$ was obtained by Das (see \cite{das2014omega}) and was later improved by Gun et al.\ (see \cite{gun2018hecke}). Here and in the rest of the paper, for arithmetical functions $f(n),g(n)$ with $g(n)>0$ for all $n \ge 1$, we use the notation \begin{equation} \label{omegadef} f(n)= \Omega_{\pm} (g(n)) \quad \text{ if and only if } \quad \limsup_{n \to \infty} \frac{ f(n) }{g(n)} >0 \quad (\text{resp. } \liminf_{n \to \infty} \frac{ f(n) }{g(n)} <0). \end{equation} In simpler terms, this just means that $|f(n) |/g(n)$ is bounded away from zero along a subsequence of the set of natural numbers $\mathbb N$. Moreover we write $f(n)= \Omega (g(n))$ if $|f(n)|= \Omega_{+} (g(n))$. Using Corollary~\ref{cor:dis} of \thmref{th:dis}, we can easily deduce the following Omega result. \begin{thm}\label{th:omega} Let $F$ be as in \thmref{th:dis}. Then there exists a constant $c>0$ such that \begin{equation} \lambda_F(n)=\Omega_{\pm}\left(\exp\left(\frac{c\log n}{\log\log n}\right)\right).
\end{equation} \end{thm} Actually, the above Omega result is realized over a certain subset of fourth-power-free integers. It is easy to check that on this subset $\log d_5(n)$ and the function $\log n/\log\log n$ agree asymptotically up to the constant $c$. For example, if $p$ is a prime, then $d_5(p) = 5$. Therefore, putting $a_N = \prod_{p \le N} p$, multiplicativity gives $d_5(a_N)=5^{\pi(N)}$, so $\log d_5(a_N) = \pi(N)\log 5 \sim A\cdot \log a_N/\log \log a_N$. So our result can also be presented as $\lambda_F(n) = \Omega_{\pm}( d_5(n)^\omega)$ for some $\omega>0$. At any rate, this not only proves the optimality of the exponent in \eqref{rama}, but also that the slowly growing function is the same up to a suitable exponent. It is also interesting to ask for Omega results in the context of Fourier coefficients; this has recently been addressed in \cite{das-omega}. It is not immediately clear how the results of this article influence those of \cite{das-omega} and vice versa. \subsection*{Acknowledgments} {\small S.D. was supported by a Humboldt Fellowship from the Alexander von Humboldt Foundation at Universit\"{a}t Mannheim during the preparation of the paper, and thanks both for the generous support and for providing excellent working conditions. He also thanks IISc, Bangalore, DST (India) and UGC centre for advanced studies for financial support. During the preparation of this work S.D. was supported by a MATRICS grant MTR/2017/000496 from DST-SERB, India. P.A. and R.P. were supported by IISc Research Associateships during the preparation of this article and thank IISc, Bangalore for the support.} \section{Notation and Preliminaries} First, we recall some basic facts about Siegel cusp forms of degree $2$ and the classical $L$-functions attached to them. Let $F\in S_k^2$ be an eigenform for all Hecke operators $T(n)$ which is not a Saito--Kurokawa lift. Let $\{\lambda_F(n)\}$ (normalized as in \eqref{lnorm}) be the normalized eigenvalues of $F$.
We refer the reader to \cite{andrianov1974euler} for more details. \subsection*{Some L-functions attached to $F$} The degree $4$ spinor zeta function attached to $F$ is given by \begin{equation*} Z(F,s)=\prod_{p}Z_p(F,p^{-s}), \end{equation*} where the $p$-th Euler factor $Z_p(F,\cdot)$ of $Z(F, \cdot)$ is given by \begin{align}\label{spinorp} Z_p(F,t)^{-1}&=(1-\alpha_{0,p} t)(1-\alpha_{0,p}\alpha_{1,p} t)(1-\alpha_{0,p}\alpha_{2,p} t)(1-\alpha_{0,p}\alpha_{1,p}\alpha_{2,p} t)\\ &=1-\lambda_F(p) t+(\lambda_F(p)^2-\lambda_F(p^2)-p^{-1})t^2-\lambda_F(p) t^3+t^4.\nonumber \end{align} The degree $5$ standard $L$-function attached to $F$ is given by \begin{equation*} Z^{st}(F,s)=\prod_{p}Z_p^{st}(F,p^{-s}), \end{equation*} where \begin{equation}\label{standardp} Z_p^{st}(F,t)^{-1}=(1-t)(1-\alpha_{1,p}t)(1-\alpha_{2,p}t)(1-\alpha_{1,p}^{-1}t)(1-\alpha_{2,p}^{-1}t). \end{equation} Here $\alpha_{0,p},\alpha_{1,p},\alpha_{2,p}$ denote the Satake $p$-parameters attached to $F$ and satisfy \begin{equation}\label{alphaprod} \alpha_{0,p}^2\alpha_{1,p}\alpha_{2,p}=1. \end{equation} By virtue of the Ramanujan--Petersson conjecture proved by Weissauer (see \cite{weissauer2009endoscopy}) we have \begin{equation*} |\alpha_{0,p}|=|\alpha_{1,p}|=|\alpha_{2,p}|=1, \end{equation*} for all primes $p$. Moreover the Hecke eigenvalues are related to the spinor zeta function by \begin{equation} \label{lamspin} \sum_{n \ge 1} \lambda_F(n) n^{-s} = \frac{Z(F,s)}{\zeta(2s+1)}. \end{equation} Let the Dirichlet series of $Z^{st}(F,s)$ be denoted as \begin{equation*} Z^{st}(F,s)=\sum_{n\ge 1}\frac{b(n)}{n^s}. \end{equation*} Then by expanding (\ref{standardp}) we have \begin{equation}\label{bpexp} b(p)=1+\alpha_{1,p}+\alpha_{2,p}+\alpha_{1,p}^{-1}+\alpha_{2,p}^{-1}. 
\end{equation} From \cite{pitaleetal} we know that there exist cuspidal automorphic representations $\Pi_4$ of $\mathrm{ GL}_4(\mathbb{A})$ and $\Pi_5$ of $\mathrm{ GL}_5(\mathbb{A})$ such that \begin{equation*} Z(F,s)=L(\Pi_4,s)\qquad\text{and}\qquad Z^{st}(F,s)=L(\Pi_5,s). \end{equation*} Then using the prime number theorem (PNT) for the Rankin--Selberg $L$-functions $L(\Pi_4\times\Pi_4,s)$ and $L(\Pi_5\times\Pi_5,s)$ (see \cite{liu-ye}), we obtain \begin{lem}\label{lem:PNT} For all large $X$, \begin{enumerate} \item $\underset{p\le X}{\sum}\lambda_F(p)^2\log p= X+O(X\exp(-\kappa_1\sqrt{\log X})).$ \item $\underset{p\le X}{\sum}b(p)^2\log p= X+O(X\exp(-\kappa_2\sqrt{\log X})).$ \end{enumerate} Here $\kappa_1, \kappa_2>0$. \end{lem} \subsection*{Hecke relations} The eigenvalues $\lambda_F(p^n)$ of $F$ satisfy the following recursive relation (from \cite[Theorem 1.3.2]{andrianov1974euler}), valid for $n\ge 3$ with the conventions $\lambda_F(1)=1$ and $\lambda_F(p^{m})=0$ for $m<0$: \begin{equation}\label{heckerelations} \lambda_F(p^n)=\lambda_F(p)\Big(\lambda_F(p^{n-1})+\lambda_F(p^{n-3})\Big)-\lambda_F(p^{n-2})\Big(\lambda_F(p)^2-\lambda_F(p^2)-\frac{1}{p}\Big)-\lambda_F(p^{n-4}). \end{equation} We also need the relation between the eigenvalues $\lambda_F(p)$, $\lambda_F(p^2)$ and the Dirichlet coefficients $b(p)$ of the standard $L$-function $Z^{st}(F,s)$. Let $\beta_{1,p}=\alpha_{0,p}$, $\beta_{2,p}=\alpha_{0,p}\alpha_{1,p}$, $\beta_{3,p}=\alpha_{0,p}\alpha_{2,p}$ and $\beta_{4,p}=\alpha_{0,p}\alpha_{1,p}\alpha_{2,p}$. The $p$-th Euler factor of $Z(F,s)$ can be written in terms of the $\beta_{i,p}$ as follows. \begin{equation*} Z_p(F,t)^{-1}=\prod_{1\le i\le 4}(1-\beta_{i,p} t). \end{equation*} By expanding the product and using (\ref{spinorp}) we get the following identities. Namely \begin{equation} \label{lambeta} \lambda_F(p)=\sum_{1\le i\le 4}\beta_{i,p} \end{equation} and \begin{equation}\label{lambpp2} \lambda_F(p)^2-\lambda_F(p^2)-p^{-1}=\sum_{1\le i<j\le 4}\beta_{i,p}\beta_{j,p}.
\end{equation} Combining these identities we get \begin{equation}\label{lamp^2} \lambda_F(p^2)=\sum_{1\le i\le j\le 4}\beta_{i,p}\beta_{j,p}-\frac{1}{p}. \end{equation} From \eqref{lambeta}, \eqref{lamp^2} and \eqref{bpexp} one obtains the following estimates. \begin{equation}\label{lamest} |\lambda_F(p)| \le 4, \quad |\lambda_F(p^2)| \le 10+1/p, \quad |b(p)| \le 5. \end{equation} The eigenvalues $\lambda_F(p), \lambda_F(p^2)$ and the Dirichlet coefficient $b(p)$ of the standard $L$-function are related as follows; the relation is obtained by using the identities (\ref{alphaprod}), (\ref{bpexp}), \eqref{lambeta} and \eqref{lambpp2}. \begin{equation}\label{bprelation} \lambda_F(p)^2-\lambda_F(p^2)=b(p)+1+\frac{1}{p}. \end{equation} For $a<b$ and $i=1,2$ or $3$, we consider the following subsets of $\mathcal P$, the set of prime numbers. \begin{equation} V_i(a,b;x):=\{p\le x : a\le |\lambda_F(p^i)|<b\} \end{equation} and we denote the set $\{p\le x :|\lambda_F(p^i)|\ge a\}$ by $V_i(a,\bullet;x)$. Let us put \[ \eta_1=10^{-10} \text{ and } \eta_2=1/10. \] \section{Proof of Theorem \ref{th:dis}} In this section we collect various implications arising from the asymptotic formulas of the PNT for $Z(F,s)$ and $Z^{st}(F,s)$ (cf. \lemref{lem:PNT}) in combination with the Hecke relation \eqref{bprelation} and the bounds on the eigenvalues \eqref{lamest}. The results are in the form of lower bounds on the sizes of the sets $V_j(a,b;x)$ under suitable hypotheses. Note that using partial summation one can deduce (from lemma \ref{lem:PNT}) that \begin{equation}\label{PNT1} \sum_{p \le x} \lambda_F(p)^2 = \frac{x}{\log x} + o\left( \frac{x}{\log x}\right) \end{equation} and similarly \begin{equation}\label{pntstd} \sum_{p\le x}b^2(p)=\frac{x}{\log x}+o\left(\frac{x}{\log x}\right). \end{equation} \subsection{Choice of $X_0$:}\label{subX0} We choose a large $X_0$ such that the following hold (note here that $X_0$ may depend on the weight $k$).
\begin{enumerate} \item Let $M(x)$ and $E_i(x)$ ($i=1,2$) denote the main and error terms in (\ref{PNT1}) and (\ref{pntstd}) respectively. Then, for $x> X_0$, $E_i(x)\le 10^{-6}\cdot M(x)$ for $i=1,2$. \item For $x\ge X_0$, $\pi(10^4)\le 10^{-6}\cdot \pi(x)$. \item $\frac{999}{1000}\cdot\pi(x)\le\frac{x}{\log x}$ for all $x> X_0$. \end{enumerate} With this choice of $X_0$, we have the following results. \begin{prop}\label{prop:proportion1} For any $x\ge X_0$, one of the following is true. (i) \, For some $\delta_1\ge 10^{-5}$, \, $|V_1(1+\eta_1, \bullet ;x)| \ge \delta_1\cdot \pi(x)$. (ii) \, For some $\delta_2\ge 98/100$,\, $|V_1(1-\eta_2,1+\eta_1;x)| \ge \delta_2\cdot\pi(x)$. \end{prop} \begin{proof} Let $x_0\ge X_0$ be such that neither $(i)$ nor $(ii)$ holds. That is, suppose $|V_1(1+\eta_1, \bullet ;x_0)|< 10^{-5}\cdot \pi(x_0)$ and $|V_1(1-\eta_2,1+\eta_1;x_0)|<98/100 \cdot\pi(x_0)$. Now we decompose the sum on the LHS of \eqref{PNT1} into disjoint parts and bound them as follows: \begin{align*} \sum_{p\le x_0}\lambda_F(p)^2&=\sum_{p\in V_1(0,1-\eta_2;x_0) }\lambda_F(p)^2+\sum_{p\in V_1(1-\eta_2,1+\eta_1;x_0)}\lambda_F(p)^2+\sum_{p\in V_1(1+\eta_1, \bullet;x_0)}\lambda_F(p)^2\\ &< (1-\eta_2)^2 |V_1(0,1-\eta_2;x_0)| + (1+\eta_1)^2 |V_1(1-\eta_2,1+\eta_1;x_0)|\\ &\quad+16 |V_1(1+\eta_1, \bullet;x_0)|. \end{align*} For simplicity, let us put \[ A:= |V_1(0,1-\eta_2;x_0)|, B:=|V_1(1-\eta_2,1+\eta_1;x_0)|, C:=|V_1(1+\eta_1, \bullet;x_0)| ,\] so that $A+B+C = \pi(x_0)$. Then \begin{align} \sum_{p\le x_0}\lambda_F(p)^2 &< (1-\eta_2)^2 \pi (x_0) + B((1+\eta_1)^2 - (1-\eta_2)^2 ) +C(16 - (1-\eta_2)^2) \nonumber \\ &< \left( (1-\eta_2)^2 + \frac{98}{100} \left( (1+\eta_1)^2 - (1-\eta_2)^2 \right) + 10^{-5} \left( 16 - (1-\eta_2)^2 \right) \right) \pi(x_0)\nonumber \\ &< \frac{998}{1000} \cdot \pi(x_0), \label{ineqcontra} \end{align} upon a short calculation. Thus, for any $x$ such that the conditions $(i)$ and $(ii)$ both fail, the sum on the LHS of \eqref{PNT1} is bounded by $998/1000\cdot \pi(x)$.
This is clearly a contradiction in view of conditions (1) and (3) in subsection (\ref{subX0}) on the choice of $X_0$. \end{proof} If condition $(i)$ of \propref{prop:proportion1} is true for all $x\ge X_0$, then the proof of Theorem \ref{th:dis} is done. But, if for some $x_0\ge X_0$ only condition $(ii)$ of \propref{prop:proportion1} is true, then we need to look at the sets $V_2(a,b;x_0)$ and $V_3(a,b;x_0)$. To do this we look at the distribution of coefficients $b(p)$ of the standard $L$-function $Z^{st}(F,s)$. \begin{prop}\label{prop:v_2} Let $x_0\ge X_0$ be such that the condition (i) of proposition \ref{prop:proportion1} does not hold. Additionally suppose that $\#\{p\le x_0 : |b(p)|>2.1\}>10^{-3}\cdot\pi(x_0)$. Then \[|V_2(1.09,\bullet;x_0)| > 9\times 10^{-4} \cdot \pi(x_0).\] \end{prop} \begin{proof} From (\ref{bprelation}) we have \begin{equation} \label{tricky} |\lambda_F(p^2)|\ge |b(p)|-|1-\lambda_F(p)^2+p^{-1}|. \end{equation} Now for $p\not\in V_1(1+\eta_1, \bullet;x)$, note that \begin{align} |1-\lambda_F(p)^2+1/p | \le \begin{cases} 1+1/p & \quad\text{if }\qquad |\lambda_F(p)| \le 1;\\ \alpha + 1/p & \quad\text{if }\;1< |\lambda_F(p)| \le 1+\eta_1. \end{cases} \end{align} In the second inequality above we have put $\lambda_F(p)^2 = 1+\alpha$ and an easy calculation shows that $0<\alpha < 10^{-9}$. Thus it follows from \eqref{tricky} that \begin{equation} |\lambda_F(p^2)| \ge |b(p)|-1-1/p. \end{equation} Let us put $A(x):= \{p\le x : |b(p)|>2.1\}$. Moreover, if $p\in A(x)$ (and $p>10^4$) we have \begin{equation} |\lambda_F(p^2)| > 2.1-1-p^{-1}>1.09. \end{equation} These observations suffice to finish the proof as follows. From our two hypotheses in the statement of \propref{prop:v_2} it follows that \begin{equation} \label{av1} |A(x_0)| > 10^{-3} \pi(x_0) ; \quad |V_1(1+\eta_1, \bullet;x_0)| < 10^{-5} \pi(x_0).
\end{equation} From the above calculations and \eqref{av1} we then conclude (putting $B^c = \text{ `complement' of } B$) \begin{equation} A(x_0) \cap V_1(1+\eta_1, \bullet;x_0)^{c}\setminus\mathcal P(10^4) \subset V_2(1.09,\bullet;x_0), \end{equation} where $\mathcal P(10^4)$ is the set of primes $\le 10^4$. By our choice of $X_0$, we have $\pi(10^4)\le \pi(x_0)/10^6$. Therefore \begin{equation} |V_2(1.09,\bullet;x_0)| \ge |A(x_0)| - |V_1(1+\eta_1, \bullet;x_0)|-\pi(10^4) \ge (10^{-3} - 10^{-5}-10^{-6} ) \pi(x_0), \end{equation} which immediately gives the proposition. \end{proof} Now we prove a result regarding the coefficients $b(p)$ of $Z^{st}(F,s)$. \begin{prop}\label{prop:bp1/18} Let $x_0\ge X_0$ be such that $\#\{p\le x_0 : |b(p)|>2.1\}\le 10^{-3}\cdot\pi(x_0)$. Then\qquad $\#\{p\le x_0 : 6/7\le |b(p)|\le 2.1\}>\frac{1}{16}\cdot\pi(x_0)$. \end{prop} \begin{proof} We argue in the same way as in proposition \ref{prop:proportion1}. First we decompose the LHS of (\ref{pntstd}) into disjoint sums as follows: \begin{equation} \sum_{p\le x_0}b^2(p)=\underset{0\le |b(p)|<6/7}{\sum_{p\le x_0}}b^2(p)+\underset{6/7\le |b(p)|\le 2.1}{\sum_{p\le x_0}}b^2(p)+\underset{ 2.1<|b(p)|\le 5}{\sum_{p\le x_0}}b^2(p). \end{equation} As in the proof of proposition \ref{prop:proportion1}, let $A$, $B$ and $C$ denote the cardinality of the sets in the first, second and third terms of the RHS, respectively. Thus we have $A+B+C=\pi(x_0)$ and we get \begin{align*} \sum_{p\le x_0}b^2(p)&\le \frac{36}{49}(\pi(x_0)-B-C)+4.41\cdot B+25\cdot C\\ &=\frac{36}{49}\cdot \pi(x_0)+(4.41-\frac{36}{49})\cdot B+(25-\frac{36}{49})\cdot C. \end{align*} From our assumption we have $C\le 10^{-3}\cdot \pi(x_0)$. Now, if the conclusion of the proposition is not true, then $B\le \frac{1}{16}\cdot \pi(x_0)$ and we have \begin{align*} \sum_{p\le x_0}b^2(p)&\le \left(\frac{36}{49}+3.68\cdot\frac{1}{16}+24.27\cdot\frac{1}{10^3}\right)\cdot \pi(x_0)\\ &<\frac{989}{1000}\cdot\pi(x_0).
\end{align*} This is a clear contradiction to (\ref{pntstd}) by our choice of $X_0$. \end{proof} \begin{prop}\label{prop:v_3} Let $x_0\ge X_0$ be such that the condition (i) of proposition \ref{prop:proportion1} does not hold. Additionally suppose that $\#\{p\le x_0 : |b(p)|>2.1\}\le 10^{-3}\cdot\pi(x_0)$. Then \[| V_3(1.02, \bullet;x_0)|>\frac{1}{25}\cdot \pi(x_0).\] \end{prop} \begin{proof} We again make use of the following inequality from (\ref{bprelation}). \begin{equation*} |\lambda_F(p^2)|\ge |b(p)|-|1-\lambda_F(p)^2+p^{-1}|. \end{equation*} For $p\in V_1(1-\eta_2,1+\eta_1;x)$, since $\eta_2=\frac{1}{10}$, we have \begin{align} |1-\lambda_F(p)^2+1/p | \le \begin{cases} 19/100+1/p & \quad\text{if }\;(1-\eta_2)\le |\lambda_F(p)| \le 1;\\ \alpha +1/p & \quad\text{if }\quad \qquad 1< |\lambda_F(p)| < 1+\eta_1, \end{cases} \end{align} where $0<\alpha<10^{-9}$. Thus for $p\in V_1(1-\eta_2,1+\eta_1;x)\cap \{p\le x : |b(p)|\ge 6/7\}$ we have \begin{equation*} |\lambda_F(p^2)|\ge \frac{6}{7}-\frac{19}{100}-\frac{1}{p}. \end{equation*} Again choosing $p$ large enough ($p>10^4$) we get that $|\lambda_F(p^2)|\ge 0.667>2/3$ and we have \begin{equation}\label{p22/3set} V_1(1-\eta_2,1+\eta_1;x)\cap \{p\le x : |b(p)|\ge 6/7\} \setminus\mathcal P(10^4)\subseteq V_2(2/3,\bullet;x). \end{equation} Now from the Hecke relations (see (\ref{heckerelations})) we have \begin{equation} \lambda_F(p^3)=\lambda_F(p)\left(2\lambda_F(p^2)-\lambda_F(p)^2+1+\frac{1}{p}\right) \end{equation} and if $p\in V_2(2/3,\bullet;x_0)\cap V_1(1-\eta_2,1+\eta_1;x_0)$, we have \begin{align*} |\lambda_F(p^3)|&\ge (1-\eta_2)\left||2\lambda_F(p^2)|-|\lambda_F(p)^2-1-\frac{1}{p}|\right|\\ &> \frac{9}{10}\left( \frac{4}{3}-\frac{19}{100}-\frac{1}{p}\right)\\ &> 1.02. \end{align*} Combining this with (\ref{p22/3set}) gives us the following inclusions.
\begin{align} V_3(1.02, \bullet;x_0)&\supseteq V_2(2/3,\bullet;x_0)\cap V_1(1-\eta_2,1+\eta_1;x_0)\nonumber\\ &\supseteq V_1(1-\eta_2,1+\eta_1;x_0)\cap \{p\le x_0 : |b(p)|\ge 6/7\}\setminus\mathcal P(10^4). \end{align} Now since condition (i) of proposition \ref{prop:proportion1} does not hold for $x_0$, condition (ii) gives $|V_1(1-\eta_2,1+\eta_1;x_0)|\ge\frac{98}{100}\cdot \pi(x_0)$, and from proposition \ref{prop:bp1/18}, we have $\#\{p\le x_0 : |b(p)|\ge 6/7\}>\frac{1}{16}\cdot\pi(x_0)$. Thus \begin{equation}\label{p22/3} |V_3(1.02, \bullet;x_0)|\ge \left(\frac{98}{100}+\frac{1}{16}-1-\frac{1}{10^{6}}\right)\cdot \pi(x_0)>\frac{1}{25}\cdot\pi(x_0).\qedhere \end{equation} \end{proof} \textit{Proof of \thmref{th:dis}:} Fix an $x\ge X_0$. Now choose $c=1+\eta_1$ (which is the smallest among $1+\eta_1$, $1.09$ and $1.02$) and $\delta=10^{-5}$ (which is the smallest among $10^{-5}$, $9\times 10^{-4}$ and $1/25$). Note here that both $c$ and $\delta$ are independent of $x$. From propositions \ref{prop:proportion1}, \ref{prop:v_2} and \ref{prop:v_3}, for each large enough $x$, we get an $l_x\in\{1,2,3\}$ such that $|V_{l_x}(c,\bullet;x)|> \delta\cdot\pi(x)$. Also note that for any $p\in V_{l_x}(c,\bullet;x)$, \begin{equation} \max\{|\lambda_F(p^i)|: i=1,2,3\}\ge |\lambda_F(p^{l_x})|\ge c. \end{equation} Thus for any $x\ge X_0$, there exists an $l_x\in\{1,2,3\}$ such that \begin{equation} \{p\le x : \max\{|\lambda_F(p^i)|: i=1,2,3\}\ge c\}\supseteq V_{l_x}(c,\bullet;x). \end{equation} This completes the proof of \thmref{th:dis} since both $c$ and $\delta$ are independent of $x$. \begin{rmk} Note here that the numerical values used in this section are not optimized, since optimizing them would not improve the Omega result that we are after.
\end{rmk} \section{Proof of Theorem \ref{th:omega}} By corollary \ref{cor:dis} of Theorem \ref{th:dis}, there exist constants $C>1$, $\delta>0$ and an integer $1\le r\le 3$ (depending on $N$) such that \begin{equation}\label{fixr} \#\{p\le N : |\lambda_F(p^r)|\ge C\}>\delta\cdot \pi(N), \end{equation} for infinitely many integers $N$. Fix the integer $r$ from (\ref{fixr}) and denote the set on the LHS by $B_N$. We now use standard techniques (see \cite{murty1983oscillations} for similar arguments) to prove the $\Omega_{\pm}$ result. Let $B_N^{+}=\{p\le N : \lambda_F(p^r)\ge C\}\subset B_N$ and $B_N^{-}=\{p\le N : \lambda_F(p^r)\le -C\}\subset B_N$. Since $|B_N|>\delta \cdot \pi(N)$, either $|B_N^{+}|>\delta_1\cdot\pi(N)$ for some $\delta_1>0$ or $|B_N^{-}|>\delta_2\cdot\pi(N)$ for some $\delta_2>0$. If $|B_N^{+}|>\delta_1\cdot\pi(N)$, choose an integer $n$ as follows. \begin{equation}\label{eq:n} n=\prod_{p\in B_N^{+}}p^r. \end{equation} Note here that $r$ varies with $N$ and hence so does $n$. But this is not a cause of concern since $r\le 3$. With this choice of $n$, we have \begin{equation} \lambda_F(n)=\prod_{p\in B_N^+}\lambda_F(p^r). \end{equation} Thus \begin{equation*} \lambda_F(n)\ge C^{|B_N^+|}> C^{\delta_1\cdot \pi(N)}\ge C^{\delta_1 c_1\cdot \frac{N}{\log N}}=\exp\left(c_0 \frac{N}{\log N}\right), \end{equation*} where we choose constants $c_1$ and $c_2$ such that $c_1\frac{N}{\log N}\le \pi(N)\le c_2\frac{N}{\log N}$ for all large $N$. Now from (\ref{eq:n}) we have \begin{equation}\label{eq:nx} \log n=r\sum_{p\in B_N^+}\log p\le r\sum_{p\le N}\log p\sim rN. \end{equation} Also note that \begin{equation*} \log n\ge r\log 2\cdot |B_N^+|\gg \frac{N}{\log N}, \end{equation*} from which we get $\log N\ll \log\log n$. Hence for some constant $c$, \begin{equation} \lambda_F(n)\gg\exp\left(\frac{c \log n}{\log \log n}\right). \end{equation} If $|B_N^{-}|>\delta_2\cdot\pi(N)$, we take $n$ to be the product of an even number of primes in $B_N^{-}$ and proceed as above.
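As remarked at the end of the previous section, the numerical constants appearing in propositions \ref{prop:proportion1}, \ref{prop:v_2}, \ref{prop:bp1/18} and \ref{prop:v_3} were not optimized; they can at least be double-checked mechanically. The short sketch below (purely illustrative, not part of the argument; the variable names are ours) verifies each of the explicit inequalities used in their proofs, bounding $1/p$ by $10^{-4}$ for $p>10^4$.

```python
# Mechanical re-check of the explicit constants used in the proofs above.
# Throughout, 1/p <= 1e-4 for primes p > 10^4.
eta1, eta2 = 1e-10, 1 / 10

# Proposition (proportion1): if (i) and (ii) both fail, the sum of
# lambda_F(p)^2 is at most bound1 * pi(x_0), and bound1 < 998/1000.
bound1 = ((1 - eta2) ** 2
          + (98 / 100) * ((1 + eta1) ** 2 - (1 - eta2) ** 2)
          + 1e-5 * (16 - (1 - eta2) ** 2))
assert bound1 < 998 / 1000

# Proposition (v_2): |b(p)| > 2.1 and p > 10^4 force |lambda_F(p^2)| > 1.09.
assert 2.1 - 1 - 1e-4 > 1.09

# Proposition (bp1/18): the displayed upper bound is < 989/1000.
bound2 = 36 / 49 + 3.68 / 16 + 24.27 / 1000
assert bound2 < 989 / 1000

# Proposition (v_3): |b(p)| >= 6/7 forces |lambda_F(p^2)| > 2/3, then
# |lambda_F(p^3)| >= (9/10)(4/3 - 19/100 - 1/p) > 1.02, and the final
# count 98/100 + 1/16 - 1 - 10^{-6} exceeds 1/25.
assert 6 / 7 - 19 / 100 - 1e-4 > 2 / 3
assert (9 / 10) * (4 / 3 - 19 / 100 - 1e-4) > 1.02
assert 98 / 100 + 1 / 16 - 1 - 1e-6 > 1 / 25

print("all numerical constants check out")
```

In particular, the margin in the first proposition is comfortable (the computed bound is about $0.9964$), while the one in proposition \ref{prop:bp1/18} is tight (about $0.98896$ against $0.989$).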
Now we prove the $\Omega_{-}$ result. If $|B_N^{+}|>\delta_1\cdot\pi(N)$, we proceed as follows. We know that there exists an $n_0\in \mathbf Z$ such that $\lambda_F(n_0)<0$ (see \cite{kohnen2007sign}). Now let \begin{equation}\label{eq:n-} n=n_0\underset{(p,n_0)=1}{\prod_{p\in B_N^+}}p^r. \end{equation} Thus $\lambda_F(n)=\lambda_F(n_0)\underset{(p,n_0)=1}{\prod_{p\in B_N^+}}\lambda_F(p^r)$. Now proceeding as above we get \begin{equation} -\lambda_F(n)\gg \exp\left(\frac{c \log n}{\log \log n}\right). \end{equation} If $|B_N^{-}|>\delta_2\cdot\pi(N)$, we take $n$ to be the product of an odd number of primes in $B_N^{-}$ and proceed as above. This completes the proof of Theorem \ref{th:omega}.
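The Satake-parameter identities underlying both proofs can also be tested numerically. The sketch below (illustrative only; all names are ours) draws random unitarized parameters subject to \eqref{alphaprod}, checks the relation \eqref{bprelation} and the bounds \eqref{lamest}, and verifies the Hecke recursion \eqref{heckerelations} using the local expansion $\sum_{n\ge 0}\lambda_F(p^n)t^n=(1-t^2/p)\prod_i(1-\beta_{i,p}t)^{-1}$, which follows from \eqref{lamspin}.

```python
import cmath
import math
import random
from functools import reduce
from itertools import combinations

random.seed(1)
p = 11  # an arbitrary prime; it only enters through the 1/p terms

# Unitarized Satake parameters: |a1| = |a2| = 1 and a0^2 * a1 * a2 = 1.
t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
a1, a2 = cmath.exp(1j * t1), cmath.exp(1j * t2)
a0 = cmath.exp(-1j * (t1 + t2) / 2)        # a square root of 1/(a1*a2)
assert abs(a0 ** 2 * a1 * a2 - 1) < 1e-12  # the relation (alphaprod)

betas = [a0, a0 * a1, a0 * a2, a0 * a1 * a2]

def prod(xs):
    return reduce(lambda u, v: u * v, xs, 1)

# Elementary symmetric functions e_1, ..., e_4 of the beta_i
e1, e2, e3, e4 = (sum(prod(c) for c in combinations(betas, k))
                  for k in range(1, 5))

lam_p = e1                               # (lambeta)
lam_p2 = e1 ** 2 - e2 - 1 / p            # (lamp^2)
b_p = 1 + a1 + a2 + 1 / a1 + 1 / a2      # (bpexp)

# The relation (bprelation) and the bounds (lamest)
assert abs(lam_p ** 2 - lam_p2 - (b_p + 1 + 1 / p)) < 1e-9
assert abs(lam_p) <= 4 + 1e-9
assert abs(lam_p2) <= 10 + 1 / p + 1e-9
assert abs(b_p) <= 5 + 1e-9

# lambda_F(p^n): coefficients of (1 - t^2/p) / prod_i (1 - beta_i t),
# via the complete homogeneous symmetric functions h_n of the beta_i
h = [1, e1]
for n in range(2, 12):
    h.append(e1 * h[n - 1] - e2 * h[n - 2]
             + (e3 * h[n - 3] if n >= 3 else 0)
             - (e4 * h[n - 4] if n >= 4 else 0))
lam = [h[n] - (h[n - 2] / p if n >= 2 else 0) for n in range(12)]

# The Hecke recursion (heckerelations), checked for n = 4, ..., 11
c = lam[1] ** 2 - lam[2] - 1 / p
for n in range(4, 12):
    rhs = lam[1] * (lam[n - 1] + lam[n - 3]) - c * lam[n - 2] - lam[n - 4]
    assert abs(lam[n] - rhs) < 1e-8

print("Satake-parameter identities verified")
```

The self-duality of the spinor parameters ($\beta_{1,p}\beta_{4,p}=\beta_{2,p}\beta_{3,p}=1$) is what makes $e_3=e_1$ and $e_4=1$ here, so the $h_n$-recursion reproduces exactly the coefficients of \eqref{heckerelations}.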
\section{Introduction} \subsection{General setting} Let $C$ be a complex curve of genus $g$. We are interested in the different $2$-dimensional neighborhoods $S$ of $C$. More precisely, two surfaces $S,S'$ equipped with embeddings $C\hookrightarrow S$, $C\hookrightarrow S'$ define formally/analytically equivalent neighborhoods if there exist neighborhoods $U,U'$ of $C$ in $S$ and $S'$ and a formal/analytic diffeomorphism $\varphi: U \rightarrow U'$ inducing the identity on $C$. The equivalence of two neighborhoods is thus given by diagrams \begin{center} \begin{tikzpicture} \draw (0,0) node[left] {$C$}; \draw [right hook-latex] (0,0) -- (2,0); \draw (2,0) node[right] {$U\subset S$}; \draw [->] (-0.3,-0.3) -- (-0.3,-1.3); \draw (0,-1.5) node[left] {$C$}; \draw [right hook-latex] (0,-1.5) -- (2,-1.5); \draw (2,-1.5) node[right] {$U'\subset S'$}; \draw [->] (2.2,-0.3) -- (2.2,-1.3); \draw (0.1,-0.8) node[] {$id$}; \draw (2.2,-0.8) node[right] {$\varphi$}; \end{tikzpicture} \end{center} We want to understand the classification of such neighborhoods up to equivalence. The first invariants in this problem are the normal bundle $N_C$ of $C$ in $S$ and the self-intersection $C\cdot C = \mr{deg}(N_C)$ of the curve $C$. If $C\cdot C<0$, Grauert's theorem (cf. \cite{grauert} or \cite{cam}) states that if the self-intersection is sufficiently negative (more precisely, if $C\cdot C< 2(2-2g)$), then $S$ is analytically equivalent to $N_C$ (ie. a neighborhood of $C$ in $S$ is analytically equivalent to a neighborhood of the zero section in the total space of $N_C$). In the case $C\cdot C>0$, we can cite the works of Ilyashenko \cite{ilyashenko_imbedding_elliptic} on strictly positive neighborhoods of elliptic curves and of Mishustin \cite{mishustin} for neighborhoods of genus $g\geq 2$ curves with large self-intersection ($C\cdot C > 2g-2$). In both cases, the authors show that there is a huge family of non-equivalent neighborhoods (there are some functional invariants).
In the case $C\cdot C = 0$, the neighborhoods of elliptic curves have already been studied. Arnol'd showed in \cite{arnold_bifurcations} that if $S$ is a neighborhood of an elliptic curve whose normal bundle $N_C$ is not torsion, $S$ is formally equivalent to $N_C$; if moreover $N_C$ satisfies some diophantine condition, then $S$ is analytically equivalent to $N_C$. The case when $C$ is an elliptic curve and $N_C$ is torsion was studied in \cite{ltt}; in particular, it is shown that the formal moduli space (ie. with respect to formal classification) of such neighborhoods is a countable union of finite dimensional spaces. The goal of this paper is to study the neighborhoods of genus $g\geq 2$ curves with trivial normal bundle under formal equivalence. \subsection{Notations} Throughout this paper, we will write $\Df{}$ for the group of germs of analytic diffeomorphisms of $\mb{C}$ at $0$; we will write $\widehat{\mr{Diff}}(\mb{C},0)$ for the group of formal diffeomorphisms of $\mb{C}$ at $0$. A \emph{formal neighborhood} $\widehat{S}$ of $C$ is a scheme $\ms{X}=(X,\widehat{\mc{O}})$ with $C$ as a subscheme such that there is an open covering $X=\cup U_i$ of $X$ with $\widehat{\mc{O}}\vert_{U_i} = (\mc{O}_C\vert_{U_i})[[y_i]]$, some coordinates $x_i$ on $C\cap U_i$ and some holomorphic functions $u_{ji}^{(k)}$ with $y_j = \sum_{k\geq 1} u_{ji}^{(k)}(x_i) y_i^k$ and $u_{ji}^{(1)}$ not vanishing on $U_i\cap U_j$. If $S$ is an analytic neighborhood of $C$, then the completion $\widehat{\mc{O}}$ of $\mc{O}_S$ along $C$ is the structure sheaf of a formal neighborhood $\widehat{S}$ of $C$. The natural inclusion $\mc{O}_S \hookrightarrow \widehat{\mc{O}}$ gives an injection $\widehat{S}\hookrightarrow S$ and allows us to see $S$ as a formal neighborhood. We say that two analytic neighborhoods $S,S'$ are formally equivalent if $\widehat{S}$ and $\widehat{S'}$ are equivalent.
Let $S=\cup U_i$ be a covering of an analytic neighborhood $S$ and $(u_i,v_i)$ some analytic coordinates on $U_i$ with $C\cap U_i = \{v_i=0\}$. A \emph{regular analytic foliation} on $S$ having $C$ as a leaf can be seen as a collection of submersive analytic functions $y_i=\sum_{k\geq 1}{y_i^{(k)}(u_i)v_i^k}$ on each $U_i$ such that there exist some diffeomorphisms $\varphi_{ji}\in\Df{}$ with $y_j = \varphi_{ji}\circ y_i$ on $U_i\cap U_j$. By analogy, a \emph{regular formal foliation} on $S$ around $C$ (or on a formal neighborhood $\widehat{S}$ of $C$) is a collection of formal power series $y_i = \sum_{k\geq 1}{y_i^{(k)}(u_i)v_i^k}$ with $y_j=\varphi_{ji}\circ y_i$ for some $\varphi_{ji}\in \widehat{\mr{Diff}}(\mb{C},0)$ where the coefficients $y_i^{(k)}(u_i)$ are still analytic functions on $C\cap U_i$ and $y_i^{(1)}$ does not vanish on $C\cap U_i$ (in other words, the divisor $\{y_i=0\}$ is equal to $\{v_i=0\}=C\cap U_i$).
We prove the following: \begin{thm} \label{thm_constr_fol} Let $C$ be a curve of genus $g\geq 2$ and $S$ a neighborhood of $C$ with trivial normal bundle. Then there exists a unique regular formal foliation $\mc{F}$ on $S$ having $C$ as a leaf, such that the holonomy of $\mc{F}$ along $A$-loops is trivial. \end{thm} \begin{figure}[H] \begin{center} \begin{tikzpicture} \draw[line width=0.4mm] (-4.2,0) -- (4.2,0); \draw (4.2,0) node[right] {$C$}; \draw plot [domain=-4:4,smooth] (\x,{(\x+9)*(\x+9)/81}); \draw plot [domain=-4:4,smooth] (\x,{(\x+9)*(\x+9)/(2*81)}); \draw plot [domain=-4:4,smooth] (\x,{(\x+9)*(\x+9)/(4*81)}); \draw plot [domain=-4:0.5,smooth] (\x,{(\x+9)*(\x+9)/40}); \draw plot [domain=-4:4,smooth] (\x,{-(\x+9)*(\x+9)/81}); \draw plot [domain=-4:4,smooth] (\x,{-(\x+9)*(\x+9)/(2*81)}); \draw plot [domain=-4:4,smooth] (\x,{-(\x+9)*(\x+9)/(4*81)}); \draw plot [domain=-4:0.5,smooth] (\x,{-(\x+9)*(\x+9)/40}); \draw plot [domain=-4:4,smooth] (-\x,{(\x+9.5)*(\x+9.5)/81}); \draw plot [domain=-4:4,smooth] (-\x,{(\x+9.5)*(\x+9.5)/(2*81)}); \draw plot [domain=-4:4,smooth] (-\x,{(\x+9.5)*(\x+9.5)/(4*81)}); \draw plot [domain=-4:0,smooth] (-\x,{(\x+9.5)*(\x+9.5)/40}); \draw plot [domain=-4:4,smooth] (-\x,{-(\x+9.5)*(\x+9.5)/81}); \draw plot [domain=-4:4,smooth] (-\x,{-(\x+9.5)*(\x+9.5)/(2*81)}); \draw plot [domain=-4:4,smooth] (-\x,{-(\x+9.5)*(\x+9.5)/(4*81)}); \draw plot [domain=-4:0,smooth] (-\x,{-(\x+9.5)*(\x+9.5)/40}); \end{tikzpicture} \end{center} \caption{A bifoliated neighborhood of $C$} \end{figure} The second step, the classification of bifoliated neighborhoods, can be found in \cite{thom_these}. 
We will explain in section \ref{sec_bif_classification} how this classification works in the generic case and show that a bifoliated neighborhood $(S,\F,\G)$ is characterised by the order of tangency $k$ between $\F$ and $\G$ along $C$, a $1$-form $\omega$ which controls how $\F$ and $\G$ differ at order $k+1$ and an additional invariant \[ Inv(S,\F,\G)\in (\mr{Diff}(\mb{C},0))^{6g-3}/\sim \] (resp. $\widehat{Inv}(S,\F,\G)\in (\Dfhat)^{6g-3}/\sim$ for formal neighborhoods), where the relation $\sim$ is given by the action of $\mr{Diff}(\mb{C},0)$ on $(\Df{})^{6g-3}$ by conjugacy on each factor (resp. the action of $\Dfhat$ on $(\Dfhat)^{6g-3}$ by conjugacy on each factor). This invariant is given by holonomies of the foliations $\F$ and $\G$ computed on a tangency curve $T_1$, ie. an irreducible component different from $C$ of the set of points at which $\F$ and $\G$ are tangent. \begin{thmref}{Theorem}{\ref{thm_classif_bif}} Let $C$ be a curve of genus $g\geq 2$. Let $(S,\F,\G)$ and $(S',\F',\G')$ be two bifoliated neighborhoods of $C$ with same tangency order $k$ and $1$-form $\omega$. Suppose $k\geq 1$ and that $\omega$ has simple zeroes $p_1,\ldots,p_{2g-2}$. Denote $T_1, T'_1$ the tangency curves passing through $p_1$ and compute the invariants $Inv(S,\F,\G)$ and $Inv(S',\F',\G')$ on the tangency curves $T_1, T'_1$. Then $(S,\F,\G)$ and $(S',\F',\G')$ are analytically (resp. formally) diffeomorphic if and only if \[ Inv(S',\F',\G')=Inv(S,\F,\G) \] (resp. $\widehat{Inv}(S',\F',\G') = \widehat{Inv}(S,\F,\G)$). \end{thmref} Moreover, we know which invariants come from a bifoliated neighborhood: if $((\varphi_i^1)_{i=1}^{2g},(\varphi_i^2)_{i=1}^{2g},(\varphi_j^3)_{j=2}^{2g-2})$ is a representative of $Inv(S,\F,\G)$, then the $\varphi_r^s$ must be tangent to identity at order $k$.
Moreover, if we write $\varphi_r^s(t) = t+a_r^st^{k+1} \text{ (mod $t^{k+2}$)}$, then the periods of $\omega$ must be $(a_i^2-a_i^1)_{i=1,\ldots,2g}$ (equation \eqref{eq_compatibility_1} in the text); $a_j^3$ must be equal to $\int_{p_1}^{p_j}\omega$ for $j=2,\ldots,2g-2$ (equation \eqref{eq_compatibility_2}); and the $(\varphi_i^s)_{i=1}^{2g}$ must define representations of the fundamental group of $C$ for $s=1,2$, ie. $[\varphi_1^s,\varphi_{1+g}^s]\ldots[\varphi_g^s,\varphi_{2g}^{s}] = id$ (equation \eqref{eq_representation}). \begin{thmref}{Theorem}{\ref{thm_constr_bif}} Let $((\varphi_i^1)_{i=1}^{2g},(\varphi_i^2)_{i=1}^{2g},(\varphi_j^3)_{j=2}^{2g-2})$ be some analytic/formal diffeomorphisms; let $k$ be an integer and $\omega$ a $1$-form. They define a bifoliated analytic/formal neighborhood $(S,\F,\G)$ with $\F$ and $\G$ tangent at order $k$ and with $1$-form $\omega$ if and only if every $\varphi_r^s$ is tangent to identity at order (at least) $k$ and if they satisfy the relations \eqref{eq_compatibility_1}, \eqref{eq_compatibility_2} and \eqref{eq_representation}. \end{thmref} If the diffeomorphisms $\varphi_r^s$ are only formal, then the neighborhood is a priori only a formal neighborhood of $C$. Note here that the relations \eqref{eq_compatibility_1} and \eqref{eq_compatibility_2} are in fact relations between jets of order $k+1$ of the $\varphi_r^s$, so the set of bifoliated neighborhoods modulo equivalence has huge dimension. Indeed, the space of pairs of diffeomorphisms modulo common conjugacy is already infinite dimensional, even formally: if we fix one diffeomorphism $\varphi_1\neq id$ tangent to the identity, then the centralizer of $\varphi_1$ has dimension $1$ so that the set of pairs $(\varphi_1,\varphi_2)$ modulo common conjugacy has, roughly speaking, the same cardinality as the set of diffeomorphisms. Finally, the last step (the formal classification of neighborhoods) is done in section \ref{sec_formal_classification}.
For the pair of canonical foliations constructed, the tangency order $k$ will be the Ueda index of the neighborhood (introduced by Ueda in \cite{ueda} and named by Neeman in \cite{neeman_ueda}), ie. the highest order such that there is a tangential fibration on $Spec(\mc{O}_S/I^{k+1})$ where $I$ is the ideal sheaf of $C$ in $S$. Similarly, $\omega$ can be interpreted in terms of the Ueda class of $S$. We will define the space $\ms{V}(C,k,\omega)$ of neighborhoods with trivial normal bundle, fixed Ueda index equal to $k$ and fixed Ueda class given by $\omega$ in order to state the final theorem: \begin{thmref}{Theorem}{\ref{thm_formal_classification}} Let $C$ be a curve of genus $g\geq 2$, $1\leq k<\infty$ and $\omega$ a $1$-form on $C$ with simple zeroes. Then there is an injective map \[ \Phi : \ms{V}(C,k,\omega) \hookrightarrow \Dfhat{}^g\times\Dfhat{}^g\times\Dfhat{}^{2g-3}/\sim \] where the equivalence relation $\sim$ is given by the action of $\Dfhat{}$ on $\Dfhat{}^N$ by conjugacy on each factor. A tuple of diffeomorphisms $((\varphi_{i}^{(j)})_i)_{j=1}^3$ is in the image of $\Phi$ if and only if the $\varphi_{i}^{(j)}$ are tangent to the identity at order $k$ and if they satisfy the compatibility conditions \eqref{eq_compatibility_1} and \eqref{eq_compatibility_2}. \end{thmref} \section{Construction of foliations} \label{sec_construction} On the curve $C$ we can choose loops $\alpha_i,\beta_i$, $i=1,\ldots,g$ forming a symplectic basis of $H_1(C,\mb{C})$, ie. $\alpha_i\cdot \alpha_j = \beta_i\cdot \beta_j=0$ and $\alpha_i\cdot \beta_j = 1$ if $i=j$ and $0$ otherwise. We call $A$-loops the loops $\alpha_i$ and $B$-loops the $\beta_i$. Similarly, if $\omega$ is a $1$-form on $C$, we will call $A$-period (resp. $B$-period) of $\omega$ any integral $\int_{\alpha_i} \omega$ (resp. $\int_{\beta_i} \omega$).
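The inductive construction in this section manipulates cocycles of jets of diffeomorphisms tangent to the identity, for which the basic mechanism is the additivity of the leading coefficient under composition: $(t+at^{\mu+1})\circ(t+bt^{\mu+1}) = t+(a+b)t^{\mu+1} \pmod{t^{\mu+2}}$. This is what turns the order-$(\mu+1)$ obstruction into an additive cocycle. A quick sketch of this computation with truncated power series (Python; purely illustrative, all helper names are ours):

```python
def mul(a, b, N):
    """Product of power series (coefficient lists), truncated mod t^(N+1)."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a[: N + 1]):
        if ai == 0:
            continue
        for j, bj in enumerate(b[: N + 1 - i]):
            c[i + j] += ai * bj
    return c

def compose(f, g, N):
    """f(g(t)) mod t^(N+1), for f, g with zero constant term (germs fixing 0)."""
    res = [0.0] * (N + 1)
    gp = [1.0] + [0.0] * N          # g^0 = 1
    for k in range(1, len(f)):
        gp = mul(gp, g, N)          # g^k, truncated
        for idx in range(N + 1):
            res[idx] += f[k] * gp[idx]
    return res

mu = 3
N = mu + 1                          # work modulo t^(mu+2)
a, b = 0.25, 0.5
f = [0, 1, 0, 0, a]                 # t + a t^(mu+1)
g = [0, 1, 0, 0, b]                 # t + b t^(mu+1)

# composition adds the leading coefficients: f o g = t + (a+b) t^(mu+1)
assert compose(f, g, N) == [0, 1, 0, 0, a + b]
# the jets commute modulo t^(mu+2) ...
assert compose(f, g, N) == compose(g, f, N)
# ... and t - a t^(mu+1) inverts f modulo t^(mu+2)
f_inv = [0, 1, 0, 0, -a]
assert compose(f, f_inv, N) == [0, 1, 0, 0, 0]
print("jet arithmetic checks passed")
```

Modulo $t^{\mu+2}$, the jets $t+at^{\mu+1}$ thus form a group isomorphic to $(\mb{C},+)$, which is why the obstruction cocycles below take values in $\mc{O}_C$ and can be handled by the cohomology of the exact sequence of sheaves.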
\begin{figure}[H] \includegraphics{lacets.png} \caption{$A$- and $B$-loops} \end{figure} \begin{df} A foliation will be called $A$-canonical if its holonomy representation $\rho$ satisfies $\rho(\alpha_i) = id$ for all $i=1,\ldots,g$ and if the linear part of $\rho$ is trivial. We define the notion of $B$-canonicity similarly; unless otherwise stated, the term "canonical" will mean $A$-canonical. \end{df} Choose an open covering $(U_i)$ of some neighborhood of $C$; let $V_i = U_i\cap C$ and $\mc{V}=(V_i)$ the associated open covering of $C$. Denote by $\mb{C}$ the trivial rank one local system on $C$ and by $\mc{O}_C$ the trivial line bundle on $C$. First, let us give the following definition: \begin{df} Let $(a_{ij})$ be a cocycle in $Z^1(\mc{V},\mb{C})$ and let $\gamma$ be a loop on $C$. We define the period of $(a_{ij})$ along $\gamma$ to be the sum \[ \int_\gamma (a_{ij}) = \sum_{p=1}^n a_{i_p i_{p+1}} \] where the open sets $(V_{i_p})_{p=1}^n$ form a simple covering of $\gamma$ and $V_{i_p}\cap V_{i_{p+1}}\cap \gamma\neq \emptyset$. \end{df} This map only depends on the class $[\gamma]$ of $\gamma$ in the fundamental group of $C$; taking periods along the $\alpha_i$ and $\beta_i$ gives maps \[ P_A,P_B : Z^1(\mc{V},\mb{C}) \rightarrow \mb{C}^g. \] Putting these together gives a map $P: Z^1(\mc{V},\mb{C}) \rightarrow \mb{C}^{2g}$ which induces an injection $P : H^1(\mc{V},\mb{C}) \rightarrow \mb{C}^{2g}$. On the other hand, the exact sequence \[ 0 \rightarrow \mb{C} \rightarrow \mc{O}_C \rightarrow \Omega^1 \rightarrow 0 \] gives the exact sequence in cohomology \begin{equation} \label{eq_exact_sequence} 0 \rightarrow H^0(C,\Omega^1) \overset{\delta}{\rightarrow} H^1(C,\mb{C}) \rightarrow H^1(C,\mc{O}_C) \rightarrow 0. \end{equation} The fact that the arrow $H^1(C,\mb{C}) \rightarrow H^1(C,\mc{O}_C)$ is surjective is an easy consequence of (\cite{ueda}, proposition 1).
We have $\mr{dim}(H^0(C,\Omega^1)) = \mr{dim}(H^1(C,\mc{O}_C)) = g$ and $\mr{dim}(H^1(C,\mb{C}))=2g$ so that $P : H^1(C,\mb{C}) \rightarrow \mb{C}^{2g}$, being injective, is bijective. It is well-known that a $1$-form whose $A$-periods vanish is zero, so that the map $P_A\circ \delta : H^0(C,\Omega^1) \rightarrow \mb{C}^g$ is a bijection. Constructing a foliation on $S$ is equivalent to constructing functions $y_i$ on $U_i$ which are reduced equations of $C\cap U_i$ such that \[ y_j = \varphi_{ji}(y_i), \] where the $\varphi_{ji}$ are diffeomorphisms of $(\mb{C},0)$. As before, if $\gamma$ is a loop, we can define the product \[ H_{\gamma}((\varphi_{ji})) = \varphi_{i_1i_n}\circ \ldots\circ \varphi_{i_3i_2}\circ \varphi_{i_2i_1} \] which will be the holonomy of the foliation given by the $y_i$ along the loop $\gamma$. To construct such functions, we are going to proceed in steps, but first, we need another definition. \begin{df} A set of functions $(y_i)$ on the open sets $U_i$ is called \emph{$A$-normalized at order $\mu$} if the $y_i$ are regular functions on $U_i$ vanishing at order $1$ on $C$ and \begin{equation} \label{eq_mu_foliated} y_j = \varphi_{ji}^{(\mu)}(y_i) + a_{ji}^{(\mu+1)}y_i^{\mu+1}, \end{equation} on $U_i\cap U_j$, where $a_{ji}^{(\mu+1)}$ is a function on $U_i\cap U_j$, the $\varphi_{ji}^{(\mu)}$ are polynomials of degree $\mu$ which are also diffeomorphisms tangent to the identity and the holonomies $H_{\alpha_k}((\varphi_{ji}^{(\mu)}))$ are the identity modulo $y_i^{\mu+1}$ for all $k=1,\ldots,g$. \end{df} The idea of the proof is first to construct some functions $(y_i)$ which are $A$-normalized at order $1$, and then to show that every $A$-normalized at order $\mu$ set of functions $(y_i)$ can be transformed into an $A$-normalized at order $(\mu+1)$ set of functions by changes of coordinates $y_i \mapsto y_i-b_iy_i^{\mu+1}$ for some functions $b_i$ on $U_i$.
In the limit, we will thus obtain a formal foliation on $S$ with trivial holonomy along $A$-loops. \begin{lem} There exists an $A$-normalized at order $1$ set of functions and the foliations associated to two such sets of functions coincide at order $1$. \end{lem} \begin{proof} Take any reduced equations $(y_i)$ of $C$ and compute $y_j$ in the coordinate $y_i$: \[ y_j = a_{ji}^{(1)}y_i. \] The cocycle $(a_{ji}^{(1)}\vert_C)$ defines the normal bundle $N_C=\mc{O}_C$ of $C$ so is cohomologous to the trivial cocycle: there exist functions $b_i$ on $U_i$ such that \[ a_{ji}^{(1)}\vert_C = \frac{b_j\vert_C }{b_i\vert_C }. \] Put $z_i = y_i/b_i$ to obtain \[ z_j = z_i + a_{ji}^{(2)} z_i^2 \] for some functions $a_{ji}^{(2)}$. For uniqueness, consider two sets of functions $(y_i)$ and $(z_i)$ $A$-normalized at order $1$. Then $(y_i)$ and $(z_i)$ define two sections $y^1$ and $z^1$ of the normal bundle $N_C$. Necessarily, $y^1$ and $z^1$ are collinear (being non-vanishing global sections of the trivial line bundle $N_C=\mc{O}_C$ on the compact curve $C$, they are constant), hence the result. \end{proof} \begin{lem} \label{lem_step} Let $(y_i)$ be a set of functions $A$-normalized at order $\mu$. Then there exist functions $(b_i)$ on $U_i$ such that the coordinates $z_i = y_i - b_iy_i^{\mu+1}$ are $A$-normalized at order $\mu+1$. Moreover, two sets of functions $A$-normalized at order $(\mu+1)$ which coincide at order $\mu$ define the same foliation at order $\mu+1$. \end{lem} \begin{proof} Since $(y_i)$ is $A$-normalized at order $\mu$, it satisfies \[ y_j = \varphi_{ji}^{(\mu)}(y_i) + a_{ji}^{(\mu+1)} y_i^{\mu+1}. \] In the following, denote by $\mr{Diff}^{1}_{\mu}(\mb{C},0) = \mr{Diff}^1(\mb{C},0) / \mr{Diff}^{\mu+1}(\mb{C},0)$ the group of $\mu$-jets of diffeomorphisms tangent to the identity. The tuple $(\varphi_{ji}^{(\mu)})_{ji}$ is a cocycle in $H^1(C,\mr{Diff}^{1}_{\mu}(\mb{C},0))$; it is entirely determined by its holonomy representation $H((\varphi_{ji}^{(\mu)})) : \pi_1(C) \rightarrow \mr{Diff}^{1}_{\mu}(\mb{C},0)$.
We would like to extend this cocycle to some cocycle in $H^1(C,\mr{Diff}^{1}_{\mu+1}(\mb{C},0))$. Since $H_{\alpha_k}((\varphi_{ji}^{(\mu)}))$ is trivial for $k=1,\ldots,g$, extend it to $\rho_{\alpha_k}=id$. Next, extend the diffeomorphisms $H_{\beta_{k}}((\varphi_{ji}^{(\mu)}))$ to diffeomorphisms $\rho_{\beta_k}\in \mr{Diff}^{1}_{\mu+1}$ in any way. Then $\prod_{k=1}^g[\rho_{\alpha_k},\rho_{\beta_k}]=id$, so the $(\rho_\gamma)_{\gamma}$ define a representation of $\pi_1(C)$ into $\mr{Diff}^{1}_{\mu+1}$ which corresponds to a cocycle $(\psi_{ji})$ such that $H_{\alpha_k}((\psi_{ji}))=\rho_{\alpha_k}$ and $H_{\beta_k}((\psi_{ji}))=\rho_{\beta_k}$. We can then write \[ y_j = \psi_{ji}(y_i) + {a'_{ji}}^{(\mu+1)} y_{i}^{\mu+1} \] for some ${a'_{ji}}^{(\mu+1)}$. Next, \begin{align*} y_k &= \psi_{kj}(y_j) + {a'_{kj}}^{(\mu+1)} y_{j}^{\mu+1}\\ &= \psi_{kj} \left( \psi_{ji}(y_i) + {a'_{ji}}^{(\mu+1)} y_{i}^{\mu+1} \right) + {a'_{kj}}^{(\mu+1)} \left(\psi_{ji}(y_i) + {a'_{ji}}^{(\mu+1)} y_{i}^{\mu+1} \right)^{\mu+1}\\ &= \psi_{kj} ( \psi_{ji}(y_i)) + ({a'_{ji}}^{(\mu+1)}+{a'_{kj}}^{(\mu+1)} ) y_{i}^{\mu+1} + \ldots \end{align*} Since $\psi_{ki} = \psi_{kj}\psi_{ji}$, we obtain ${a'_{ki}}^{(\mu+1)}\vert_C = {a'_{kj}}^{(\mu+1)}\vert_C + {a'_{ji}}^{(\mu+1)}\vert_C$ and thus $({a'_{ji}}^{(\mu+1)}\vert_C)$ is a cocycle in $H^1(C,\mc{O}_C)$. By the exact sequence \eqref{eq_exact_sequence}, it is cohomologous to a constant cocycle $(c_{ji})\in H^1(C,\mb{C})$: there exist functions $b_i$ on $U_i$ such that ${a'_{ji}}^{(\mu+1)}\vert_C - c_{ji} = b_j\vert_C - b_i\vert_C$. Still using the exact sequence \eqref{eq_exact_sequence}, we see that two cocycles $(c_{ji}),(c'_{ji})$ cohomologous to $({a'_{ji}}^{(\mu+1)}\vert_C)$ differ only by the periods of a $1$-form. As noted before, $P_A\circ \delta: H^0(C,\Omega^1) \rightarrow \mb{C}^g$ is bijective, so we can choose $(c_{ji})$ with trivial $A$-periods, and such a $(c_{ji})$ is unique.
Put $\varphi_{ji}^{(\mu+1)}(y) = \psi_{ji}(y) + c_{ji} y^{\mu+1}$ and $z_i = y_i - b_i y_i^{\mu+1}$ to obtain \begin{align*} z_j &= \psi_{ji}(z_i) + ({a'_{ji}}^{(\mu+1)} -b_j+b_i) z_i^{\mu+1} + o(z_i^{\mu+1})\\ &= \varphi_{ji}^{(\mu+1)}(z_i) + o(z_i^{\mu+1}). \end{align*} Since the choice of $(c_{ji})\in H^1(C,\mb{C})$ is unique, if two sets of functions $(z_i), (z'_i)$ are both $A$-normalized at order $\mu+1$ and coincide at order $\mu$, then they differ at order $\mu+1$ by a coboundary $(d_i)\in H^0(C,\mb{C})$: $z'_i = z_i + d_iz_i^{\mu+1} + \ldots$ Hence, they define the same foliation at order $\mu+1$. \end{proof} Putting all this together, we obtain theorem \ref{thm_constr_fol}. \section{Classification of bifoliated neighborhoods} \label{sec_bif_classification} A \emph{bifoliated neighborhood} of $C$ is a tuple $(S,\F,\G)$ where $S$ is a neighborhood of $C$ and $\F$, $\G$ are distinct foliations on $S$ having $C$ as a common leaf. Two bifoliated neighborhoods $(S,\F,\G)$ and $(S',\F',\G')$ are said to be equivalent if there are two neighborhoods $U\subset S$, $U'\subset S'$ of $C$ and a diffeomorphism $\phi : U \rightarrow U'$ fixing $C$ such that \[ \phi_* \F = \F'\quad \text{and}\quad \phi_*\G = \G'. \] In this section, we want to study the classification of bifoliated neighborhoods under this equivalence relation. We will consider here analytic equivalence, but the formal classification can be obtained by replacing the word ``analytic'' by ``formal'' everywhere. A neighborhood will have a lot of formal foliations, and the canonical ones may diverge even though others might converge (cf. \cite{ltt}). We will thus consider here a general bifoliated neighborhood $(S,\F,\G)$, with the additional assumptions that $\F$ and $\G$ coincide at order $1$ and their holonomy representations are tangent to the identity. The study can be done without these assumptions (cf.
\cite{thom_these}), but they are true for the pair of canonical foliations and simplify the results (for example, in general an affine structure is involved, which under our assumptions is only a translation structure, i.e.\ a $1$-form). \subsection{First invariants} If $(S,\F,\G)$ is a bifoliated neighborhood, each foliation comes with the holonomy representation of the leaf $C$: \[ \rho_\F,\rho_\G : \pi_1(C) \rightarrow \Df{}. \] Fix a base point $p_0\in C$, a transversal $T_0$ passing through $C$ at $p_0$ and a coordinate $t$ on $T_0$ (i.e.\ a function $t\in \Cg{} \mapsto q(t)\in T_0$). Let $\gamma$ be a loop on $C$ based at $p_0$; choose the minimal first integral $F$ of $\F$ around $T_0$ such that $F(q(t)) = t$. The analytic continuation $F^\gamma$ of $F$ along $\gamma$ is again a first integral of $\F$, hence is of the form \[ F^\gamma = \varphi_{\gamma}^{-1}\circ F. \] We define $\rho_\F(\gamma) = \varphi_{\gamma}$. A second invariant is the order of tangency between $\F$ and $\G$ along $C$: take two $1$-forms $\alpha$ and $\beta$ on $S$ defining locally the foliations $\F$ and $\G$. The $2$-form $\alpha \wedge \beta$ vanishes on $C$, so the order of vanishing of $\alpha \wedge \beta$ along $C$ gives a global invariant $k+1$ which does not depend on the choice of $\alpha$ and $\beta$. The order of tangency between $\F$ and $\G$ is defined to be this integer $k$. Our assumption that the foliations coincide at order $1$ exactly means that $k\geq 1$. The next invariant is a $1$-form on $C$ associated to this pair of foliations. Choose as before a point $p_0\in C$, a transversal $T_0$ at $p_0$ and a coordinate $t \mapsto q(t)$ on $T_0$. Take local minimal first integrals $F$ and $G$ of $\F$ and $\G$ such that $F(q(t)) = G(q(t)) = t$. By definition of $k$, $G=F+aF^{k+1}+\ldots$ in a neighborhood of $p_0$ for a local function $a$ on $C$. Take the analytic continuations $F^\gamma, G^\gamma$ and $a^\gamma$ of $F,G$ and $a$ along a loop $\gamma$.
Then we can use the fact that the holonomy representations of $\F$ and $\G$ are tangent to the identity to get \begin{align*} G^\gamma &= F^\gamma + (a^\gamma) (F^\gamma)^{k+1} + \ldots\\ \rho_\G(\gamma)^{-1}\circ G &= \rho_\F(\gamma)^{-1}\circ F + (a^\gamma)(\rho_\F(\gamma)^{-1}\circ F)^{k+1} + \ldots\\ G &= \rho_\G(\gamma) \left(\rho_\F(\gamma)^{-1}\circ F + a^\gamma F^{k+1} + \ldots \right)\\ &= \rho_\G(\gamma)\circ \rho_\F(\gamma)^{-1}\circ F + a^\gamma F^{k+1}+\ldots\\ &= F + a F^{k+1} + \ldots \end{align*} Since $\rho_\G(\gamma)\circ \rho_\F(\gamma)^{-1}$ has constant coefficients, there exists a constant $c^\gamma$ such that $a^\gamma = a+c^\gamma$. Then the $1$-form $\omega = da$ is a well-defined $1$-form on $C$. Note that we saw in the process that \begin{equation} \label{eq_compatibility_1} \rho_\G(\gamma)\circ \rho_\F(\gamma)^{-1}(y) = y + \left(\int_{\gamma}\omega \right) y^{k+1}+\ldots, \end{equation} thus the form $\omega$ is entirely determined by $\rho_\F$ and $\rho_\G$. Note also that the holonomy representation and the form $\omega$ depend on the choice of the transversal $T_0$ and of a coordinate $t$ on it. A change of coordinate $\tilde{t} = \varphi(t)$ induces conjugacies on $\rho_\F$ and $\rho_\G$ and changes $\omega$ into some multiple of it: $\tilde{\rho}_\F(\gamma) = \varphi\circ \rho_\F(\gamma)\circ \varphi^{-1}$, $\tilde{\rho}_\G(\gamma) = \varphi\circ \rho_\G(\gamma)\circ \varphi^{-1}$ and $\tilde{\omega} = \varphi'(0)^{-k} \omega$. \subsection{Tangency set} If $F$ and $G$ are local minimal first integrals of $\F$ and $\G$, then the tangency set between $\F$ and $\G$ is defined to be \[ \mr{Tang}(\F,\G) = \{ dF \wedge dG = 0\}. \] This definition does not depend on the choice of $F$ and $G$ and gives a well-defined analytic subset of $S$. Note that if we write $G = F+aF^{k+1}+\ldots$, then we obtain $dF \wedge dG = F^{k+1} dF \wedge (da + \ldots)$. Since $\omega=da$, \[ \mr{Tang}(\F,\G)\cap C = \{\omega = 0\}.
\] In particular, the set $\mr{Tang}(\F,\G)$ intersects $C$ at $2g-2$ points counting multiplicities. In the sequel, we will suppose that we are in the generic case: $\omega$ has $2g-2$ distinct zeroes. This also means that $\mr{Tang}(\F,\G)$ is the union of $2g-2$ curves which are transverse to $C$. Denote by $p_1,\ldots,p_{2g-2}$ the zeroes of $\omega$ and by $T_i$ the tangency curve passing through $p_i$. If we fix some simple paths $\gamma_{ij}$ between $p_i$ and $p_j$, we can look at the holonomy transports \[ \varphi_{ij}^{\F},\varphi_{ij}^{\G}: T_i \rightarrow T_j \] following the leaves of $\F$ and $\G$ along $\gamma_{ij}$. To simplify, suppose that the $\gamma_{1j}$ only intersect each other at $p_1$ and that $\gamma_{ij} = \gamma_{1i}^{-1}\cdot \gamma_{1j}$. \begin{figure}[H] \includegraphics[scale=0.5]{chemins.png} \caption{The paths $\gamma_{1j}$} \end{figure} To define these transports, fix some coordinates $t_i,t_j$ on $T_i$ and $T_j$; there exists a simply connected neighborhood $U$ of the path $\gamma_{ij}$. Let $L$ be a leaf of $\F$ on $U$. It intersects $T_i$ at exactly one point (let $t_i$ be its coordinate). In the same way, let $t_j$ be the coordinate of $L\cap T_j$. We set $\varphi_{ij}^{\F}(t_i)=t_j$: this gives a germ of diffeomorphism $\varphi_{ij}^{\F}\in \Df{}$ which depends on the choices of coordinates on $T_i$ and $T_j$. The composition \[ \varphi_{ij}^{\leftrightarrow} = (\varphi_{ij}^{\G})^{-1}\circ \varphi_{ij}^{\F} \] is a diffeomorphism of $T_i$, so it only depends on the choice of a coordinate on $T_i$; a change of coordinate $t'_i=\varphi(t_i)$ acts by conjugacy $\varphi_{ij}'^{\leftrightarrow} = \varphi\circ \varphi_{ij}^{\leftrightarrow}\circ \varphi^{-1}$.
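For the reader's convenience, the expansion behind the identity $dF\wedge dG = F^{k+1}\,dF\wedge(da+\ldots)$ used above for the tangency set can be spelled out (a routine computation made explicit here, not in the original text). Differentiating $G = F+aF^{k+1}+\ldots$ gives
\begin{align*}
dG &= \bigl(1+(k+1)aF^{k}+\ldots\bigr)\,dF + F^{k+1}\,(da+\ldots),\\
dF\wedge dG &= F^{k+1}\,dF\wedge(da+\ldots),
\end{align*}
since $dF\wedge dF=0$. Along $C=\{F=0\}$ the form $dF$ does not vanish ($F$ being a submersive first integral), so besides the divisor $(k+1)C$ the tangency points on $C$ are exactly the zeroes of $da\vert_C=\omega$.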
We can show as in the previous subsection that the holonomy transports $\varphi_{ij}^{\leftrightarrow}$ are related to the $1$-form $\omega$ by the relation: \begin{equation} \label{eq_compatibility_2} \varphi_{ij}^{\leftrightarrow}(t_i) = t_i - \left(\int_{\gamma_{ij}}\omega \right) t_i^{k+1} + \ldots \end{equation} \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale = 1.7] \draw (-1.5,0) -- (5.5,0); \draw (-1,0.5) -- (5,0.5); \draw (-1,1) -- (5,1); \draw (-1,1.5) -- (5,1.5); \draw (-1,2) -- (5,2); \draw (0,-0.5) -- (0,2.5); \draw (4,-0.5) -- (4,2.5); \draw plot [domain = -1:5,smooth] (\x,{1/2*(cos(180/4*(\x))-1)+2}); \draw plot [domain = -1:5,smooth] (\x,{1.5/4*(cos(180/4*(\x))-1)+1.5}); \draw plot [domain = -1:5,smooth] (\x,{1/4*(cos(180/4*(\x))-1)+1}); \draw plot [domain = -1:5,smooth] (\x,{1/8*(cos(180/4*(\x))-1)+0.5}); \draw (0,1) node {$\bullet$}; \draw (0,1) node[below left] {$t_1$}; \draw (4,1) node {$\bullet$}; \draw (4,1) node[above right] {$\varphi_{12}^\F(t_1)$}; \draw (0,2) node {$\bullet$}; \draw (0,2) node[above right] {$\varphi_{12}^{\leftrightarrow}(t_1)$}; \draw (0,0) node[below left] {$T_1$}; \draw (4,0) node[below right] {$T_2$}; \draw (5.5,0) node[right] {$C$}; \end{tikzpicture} \end{center} \caption{Holonomy transports} \end{figure} \subsection{Classification of bifoliated neighborhoods} We say that a bifoliated neighborhood $(S,\F,\G)$ is generic if there are $2g-2$ distinct tangency curves $T_1,\ldots,T_{2g-2}$ between $\F$ and $\G$ and if they intersect $C$ transversely at some distinct points $p_1,\ldots,p_{2g-2}$. On each neighborhood, we can fix one of these points, for example $p_1$, fix a coordinate $t$ on $T_1$, fix paths $\gamma_{1j}$ between $p_1$ and $p_j$ and compute every invariant on the transversal $T_1$ with coordinate $t$. We thus have the holonomy representations $\rho_\F, \rho_\G$ and the holonomy transports $\varphi_{1j}^{\leftrightarrow}$ between $T_1$ and another tangency curve $T_j$. 
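To fix ideas, consider the smallest case $g=2$ (an illustration added here, not in the original text): the form $\omega$ then has $2g-2=2$ simple zeroes $p_1,p_2$, there is a single holonomy transport $\varphi_{12}^{\leftrightarrow}$, and the data attached to $(S,\F,\G)$ on $T_1$ consists of
\[
\rho_\F(\alpha_1),\ \rho_\F(\alpha_2),\ \rho_\F(\beta_1),\ \rho_\F(\beta_2),\quad
\rho_\G(\alpha_1),\ \rho_\G(\alpha_2),\ \rho_\G(\beta_1),\ \rho_\G(\beta_2),\quad
\varphi_{12}^{\leftrightarrow},
\]
that is, a point of $\Df{}^{4}\times \Df{}^{4}\times \Df{}$ considered up to simultaneous conjugacy.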
The holonomy representations $\rho_\F$ and $\rho_\G$ are entirely determined by the images of the basis $\alpha_1,\ldots,\alpha_g,\beta_1,\ldots,\beta_g$: these are any diffeomorphisms such that \begin{equation} \label{eq_representation} \begin{aligned} \text{ } [\rho_\F(\alpha_1),\rho_\F(\beta_1)]\ldots[\rho_\F(\alpha_g),\rho_\F(\beta_g)] &= id\\ [\rho_\G(\alpha_1),\rho_\G(\beta_1)]\ldots[\rho_\G(\alpha_g),\rho_\G(\beta_g)] &= id. \end{aligned} \end{equation} Each of the invariant diffeomorphisms $\varphi = \rho_\F(\alpha_i), \rho_\F(\beta_i), \rho_\G(\alpha_i), \rho_\G(\beta_i), \varphi_{1j}^\leftrightarrow$ depends on the choice of the coordinate $t$. A change of coordinate $t' = \psi(t)$ induces a conjugacy on $\varphi$: $\varphi' = \psi\circ \varphi\circ \psi^{-1}$. So we define the invariant of a neighborhood $(S,\F,\G)$ to be \begin{align*} Inv(S,\F,\G) &= \left[ ((\rho_\F(\alpha_i))_{i=1}^{g},(\rho_\F(\beta_i))_{i=1}^g,(\rho_\G(\alpha_i))_{i=1}^{g},(\rho_\G(\beta_i))_{i=1}^g,(\varphi_{1j}^{\leftrightarrow})_{j=2}^{2g-2}) \right]\\ &\in \Df{}^{2g}\times\Df{}^{2g}\times\Df{}^{2g-3}/\sim \end{align*} where $\sim$ is the action of $\Df{}$ by conjugacy on each factor. \begin{thm} \label{thm_classif_bif} Let $C$ be a curve of genus $g\geq 2$. Let $(S,\F,\G)$ and $(S',\F',\G')$ be two bifoliated neighborhoods of $C$ with tangency order $k$ and $1$-form $\omega$. Suppose $k\geq 1$ and that $\omega$ has simple zeroes $p_1,\ldots,p_{2g-2}$. Denote by $T_1, T'_1$ the tangency curves passing through $p_1$ and compute the invariants $Inv(S,\F,\G)$ and $Inv(S',\F',\G')$ on the tangency curves $T_1, T'_1$. Then $(S,\F,\G)$ and $(S',\F',\G')$ are diffeomorphic if and only if \[ Inv(S',\F',\G')=Inv(S,\F,\G). \] \end{thm} Before starting the proof, let us write some lemmas. \begin{lem} \label{lem_fcts_transv} Let $p$ be a point in $C$, let $F$, $G$ be two reduced equations of $C$ around $p$ and $(x,y)$ some local coordinates with $C=\{y=0\}$.
Suppose that $F$ and $G$ are tangent at order $k$ and that the zero divisor of $dF \wedge dG$ is $(k+1)C$ (i.e.\ there are no other tangencies). There exists a unique diffeomorphism $\varphi$ fixing $C$ pointwise such that \[ (F,G)\circ \varphi = (y,y+a(x)y^{k+1}). \] The function $a$ is unique and satisfies $da\vert_C= \omega$. \end{lem} The proof of this lemma can be found in \cite{ltt}. \begin{lem} \label{lem_fcts_tang} Let $p$ be a point in $C$, let $F$, $G$ be two reduced equations of $C$ around $p$ and $(x,y)$ some local coordinates with $C=\{y=0\}$. Suppose that there is a transversal $T$ to $C$ such that the zero divisor of $dF \wedge dG$ is $(k+1)C+T$. Then there exists a unique diffeomorphism $\varphi$ fixing $C$ pointwise such that \[ (F,G)\circ \varphi = (y,b(y) + a(x)y^{k+1}). \] The function $b$ is unique and $a$ is the primitive of $\omega$ which is zero at $p$. \end{lem} The function $b$ is of course entirely determined by the equation $G\vert_T = b(F\vert_T)$. \begin{proof} Put $\tilde{y}=F$, let $b$ be the function determined by $G\vert_T = b(F\vert_T)$, put $H = G-b(\tilde{y})$ and suppose $x$ is a reduced equation of $T$. Then $dF \wedge dG = (\partial_{x}H)dF \wedge dx$, so by the hypotheses on the tangency divisor, $\partial_{x}H = 2x\tilde{y}^{k+1}u$ for some invertible function $u$. Then $H = x^2\tilde{y}^{k+1}v$ with $v$ invertible, so for $\phi(x,\tilde{y}) = x\sqrt{v}$, we have $G = b(\tilde{y}) + \phi(x,\tilde{y})^2\tilde{y}^{k+1}$. If $\psi = \phi\vert_C$, then the coordinate $\tilde{x}=\psi^{-1}\circ \phi(x,\tilde{y})$ is equal to $x$ on $C$ and $(F,G) = (\tilde{y},b(\tilde{y}) + \psi(\tilde{x})^2\tilde{y}^{k+1})$. Thus the diffeomorphism $\varphi(x,y) = (\tilde{x},\tilde{y})$ is as sought. \end{proof} \begin{lem} \label{lem_hol_transport} Let $(S,\F,\G)$ be a bifoliated neighborhood whose $1$-form $\omega$ has simple zeroes, let $T_1,T_j$ be two tangency curves and $\gamma_{1j}$ a simple path between $p_1=T_1\cap C$ and $p_j=T_j\cap C$.
Suppose $F$ and $G$ are some submersive first integrals of $\F$ and $\G$ around $p_1$ such that $F\vert_{T_1} = G\vert_{T_1}$. By lemma \ref{lem_fcts_tang}, the analytic continuations of $F$ and $G$ along $\gamma_{1j}$ can be written $F=y$ and $G=b(y) + a(x)y^{k+1}$ for some coordinates $(x,y)$ around $p_j$. Then $b(y) = \varphi_{1j}^{\leftrightarrow}(y)$ if $\varphi_{1j}^{\leftrightarrow}$ is computed in the coordinate $t=y$ on $T_1$. \end{lem} \begin{proof} Indeed, $b$ is characterised by $G\vert_{T_j} = b\circ F\vert_{T_j}$, and $\varphi_{1j}^{\leftrightarrow}$ by the fact that the leaf of $\F$ passing through $T_1$ at the point of coordinate $F=y_0$ intersects (tangentially) on $T_j$ the leaf of $\G$ passing through $T_1$ at the point of coordinate $F=\varphi_{1j}^{\leftrightarrow}(y_0)$. This means that the first integral $\varphi_{1j}^{\leftrightarrow}\circ F$ of $\F$ coincides with $G$ on $T_j$, i.e.\ $\varphi_{1j}^{\leftrightarrow}\circ F\vert_{T_j} = G\vert_{T_j}$, hence the result. \end{proof} \begin{proof}[Proof of theorem \ref{thm_classif_bif}] Take two bifoliated neighborhoods $(S,\F,\G)$ and $(S',\F',\G')$ with the same tangency index $k$, the same $1$-form $\omega$ and the same invariants computed in some coordinates $t,t'$ on $T_1$ and $T'_1$. First fix simply connected neighborhoods $Y$, $Y'$ of $\cup_{j=2}^{2g-2}\gamma_{1j}$ in $S$ and $S'$. We begin by showing that $(Y,\F,\G)$ and $(Y',\F',\G')$ are diffeomorphic, and we will then show that this diffeomorphism can be extended to $S$ and $S'$. \begin{figure}[H] \includegraphics[scale=0.75]{Y.png} \caption{The neighborhood $Y$} \end{figure} Since $Y$ and $Y'$ are simply connected, the foliations on these sets have first integrals $F,G,F',G'$ and we can suppose that $F(t)=G(t)$ and $F'(t')=G'(t')$ on $T_1$ and $T'_1$.
Since $\omega=\omega'$, lemma \ref{lem_fcts_tang} tells us that there is a (unique) diffeomorphism $\psi$ between a neighborhood of $p_1$ in $S$ and a neighborhood of $p_1$ in $S'$ such that $F'\circ \psi = F$ and $G'\circ \psi=G$. We can take the analytic continuation of $F,G,F'$ and $G'$ along one of the paths $\gamma_{1j}$. For any point $p$ in this path, lemma \ref{lem_fcts_transv} tells us that the pairs $(F,G)$ and $(F',G')$ are equivalent (by a unique diffeomorphism) if and only if the number $a(p)$ is the same for both pairs. But $a(p) = \int_{p_1}^p \omega$, where the integral is taken along the path $\gamma_{1j}$, so this is indeed the case. By uniqueness, the diffeomorphism $\psi$ can be extended along the path $\gamma_{1j}$ arbitrarily near the point $p_j$. At the point $p_j$, lemmas \ref{lem_fcts_tang} and \ref{lem_hol_transport} show that the pairs $(F,G)$ and $(F',G')$ are also conjugated by a unique diffeomorphism in a neighborhood of $p_j$. Hence, we can extend $\psi$ to a diffeomorphism $\psi:Y \rightarrow Y'$ conjugating the pairs $(F,G)$ and $(F',G')$. By lemma \ref{lem_fcts_transv}, we can also extend $\psi$ along any simple path. Then we only need to show that $\psi$ can be extended along a non-trivial loop. Let $\gamma$ be a non-trivial loop on $C$ based at $p_1$, $\varphi_\F = \rho_\F(\gamma)$ and $\varphi_\G = \rho_\G(\gamma)$. The extensions of $F$ and $G$ along $\gamma$ are $\varphi_{\F}^{-1}\circ F$ and $\varphi_\G^{-1}\circ G$; we know that $F'\circ \psi=F$ and $G'\circ \psi=G$, so $\varphi_\F^{-1}\circ F'\circ \psi = \varphi_\F^{-1}\circ F$ and $\varphi_\G^{-1}\circ G'\circ \psi = \varphi_\G^{-1}\circ G$. Hence $\psi$ is the diffeomorphism conjugating $(\varphi_\F^{-1}\circ F,\varphi_\G^{-1}\circ G)$ with $(\varphi_\F^{-1}\circ F',\varphi_\G^{-1}\circ G')$ and by uniqueness this means that $\psi$ can be extended along any loop. Thus $\psi$ can be extended to a diffeomorphism between $(S,\F,\G)$ and $(S',\F',\G')$.
\end{proof} \subsection{Construction of bifoliated neighborhoods} We saw three restrictions for a set of diffeomorphisms to be an invariant of some bifoliated neighborhood: these are the compatibility relations \eqref{eq_compatibility_1}, \eqref{eq_compatibility_2} and \eqref{eq_representation}. These are the only restrictions; to obtain a simpler result, we will consider the $1$-form $\omega$ as an invariant here. \begin{thm} \label{thm_constr_bif} Let $((\varphi_i^1)_{i=1}^{2g},(\varphi_i^2)_{i=1}^{2g},(\varphi_j^3)_{j=2}^{2g-2})$ be some diffeomorphisms; let $k$ be an integer and $\omega$ a $1$-form. They define a bifoliated neighborhood $(S,\F,\G)$ with $\F$ and $\G$ tangent at order $k$ and with $1$-form $\omega$ if and only if every $\varphi_r^s$ is tangent to the identity at order (at least) $k$ and if they satisfy the relations \eqref{eq_compatibility_1}, \eqref{eq_compatibility_2} and \eqref{eq_representation}. \end{thm} \begin{proof} Denote by $\rho_1$ and $\rho_2$ the representations given by the diffeomorphisms $(\varphi_i^1)$ and $(\varphi_i^2)$. Consider $\tilde{C}=\mb{D}_x$ the universal cover of $C$, $X$ a small neighborhood of a fundamental domain, $U_i$ a small neighborhood of $p_i$ in $X$ and $\check{C} = X\setminus(U_2\cup\ldots\cup U_{2g-2})$. \begin{figure}[H] \includegraphics[scale=0.6]{Ccheck.png} \caption{The neighborhood $X$} \end{figure} Consider next the trivial bundle $\check{S} = \check{C}\times \mb{C}_y$ along with two functions $F=y$ and $G = y + a(x)y^{k+1}$ (with $a(x) = \int_{p_1}^x \omega$). We now want to glue the borders of $\check{S}$ together: for this, we need to show that there exists for each loop $\gamma$ a diffeomorphism $\psi_{\gamma}$, defined where it makes sense, such that $\psi_{\gamma}\vert_{\check{C}} = \gamma$ and \[ (\rho_1(\gamma)\circ F, \rho_2(\gamma)\circ G) = (F\circ \psi_{\gamma},G\circ \psi_{\gamma}).
\] Thanks to the compatibility condition \eqref{eq_compatibility_1} and lemma \ref{lem_fcts_transv}, the couples $(\rho_1(\gamma)\circ F,\rho_2(\gamma)\circ G)$ and $(F\circ \gamma,G\circ \gamma)$ are diffeomorphic, so we can indeed find such a $\psi_{\gamma}$. We can then glue the borders of $\check{S}$ together to obtain a surface which is a neighborhood of $C$ with holes $H_i$ around $p_i$ ($i=2,\ldots,2g-2$) and two foliations $\F$ and $\G$ transverse outside the holes. The holonomies of these foliations are $\rho_\F = \rho_1$ and $\rho_\G = \rho_2$ by construction. To fill these holes, take $C_i$ a neighborhood of $p_i$ in $X$ slightly larger than $U_i$ and consider the patch $P_i = C_i\times \mb{C}_y$. Consider on $P_i$ the couple \[ (\tilde{F},\tilde{G}) = (y,\varphi_i^3(y)+(x-p_i)^2y^{k+1}). \] By lemma \ref{lem_fcts_transv} and the compatibility condition \eqref{eq_compatibility_2}, for every point $p$ near the boundary of the hole $H_i$, there exists a unique diffeomorphism $\psi$ between a neighborhood of $p$ in $\check{S}$ and a neighborhood of $p$ in $P_i$ sending $(F,G)$ to $(\tilde{F},\tilde{G})$. By uniqueness, these diffeomorphisms glue to a diffeomorphism between neighborhoods of the boundaries of $H_i$ and $P_i$ and we can then glue the patch $P_i$ onto $H_i$ using this diffeomorphism. By lemma \ref{lem_hol_transport}, we then have $\varphi_{1i}^{\leftrightarrow} = \varphi_i^3$, which concludes the proof. \end{proof} \section{Formal classification of neighborhoods} \label{sec_formal_classification} We know how to construct two canonical foliations on any neighborhood, and we know the classification of bifoliated neighborhoods, so we only need to put this together. Denote by $\F$ and $\G$ the $A$- and $B$-canonical foliations. Note that if $\F=\G$, then they define a fibration tangent to $C$, so this case can be treated by Kodaira's deformation theory. Suppose this is not the case, so $\F\neq\G$; then their order of tangency $k$ is the Ueda index of $S$.
Moreover, let $(u_{ij})\in H^1(C,\mc{O}_C)$ be the Ueda class of the neighborhood and $(a_{ij}),(b_{ij})\in H^1(C,\mb{C})$ the cocycles defining the $(k+1)$-th order holonomy of $\F$ and $\G$. By definition, the images of $(a_{ij})$ and $(b_{ij})$ under the map $H^1(C,\mb{C}) \rightarrow H^1(C,\mc{O}_C)$ are both $(u_{ij})$. Thus by the exact sequence \eqref{eq_exact_sequence}, the cocycle $(b_{ij}-a_{ij})$ is given by a $1$-form: this $1$-form is exactly $\omega$. To sum up, we have constructed a map $H^1(C,\mc{O}_C) \rightarrow H^0(C,\Omega^1)$; this is a bijection because we can find $(a_{ij})$ (and thus $(u_{ij})$) from $\omega$ as the cocycle with trivial $A$-periods and with $B$-periods equal to those of $\omega$. By extension, we will call this form the Ueda form of the neighborhood. The Ueda class (and thus the Ueda form) is well-defined only up to a multiplicative constant, but the set of its zeroes is well-defined. The situation will be quite different depending on the tangency set between $\F$ and $\G$, so suppose that $\omega$ has only simple zeroes (so that the tangency set consists of $2g-2$ simple transversal tangency curves). Denote by $\ms{V}(C,k,\omega)$ the space of $2$-dimensional formal neighborhoods of $C$ with trivial normal bundle, Ueda index $k<\infty$ and Ueda form (a multiple of) $\omega$ modulo formal equivalence. \begin{thm} \label{thm_formal_classification} Let $C$ be a curve of genus $g\geq 2$, $1\leq k<\infty$ and $\omega$ a $1$-form on $C$ with simple zeroes. Then there is an injective map \[ \Phi : \ms{V}(C,k,\omega) \hookrightarrow \Dfhat{}^g\times\Dfhat{}^g\times\Dfhat{}^{2g-3}/\sim \] where the equivalence relation $\sim$ is given by the action of $\Dfhat{}$ on $\Dfhat{}^{4g-3}$ by conjugacy on each factor.
A tuple of diffeomorphisms $((\varphi_{i}^{(j)})_i)_{j=1}^3$ is in the image of $\Phi$ if and only if the $\varphi_{i}^{(j)}$ are tangent to the identity at order $k$ and if they satisfy the compatibility conditions \eqref{eq_compatibility_1} and \eqref{eq_compatibility_2}. \end{thm} \begin{proof} Fix a zero $p_1$ of $\omega$, fix some loops $\alpha_1,\ldots,\alpha_g,\beta_1,\ldots,\beta_g$ forming a symplectic basis of $H_1(C,\mb{C})$, and fix some paths $\gamma_{1j}$ between $p_1$ and $p_j$. Let $[S]\in \ms{V}(C,k,\omega)$ and $S$ be a representative of $[S]$. Let $\F$ and $\G$ be respectively the $A$-canonical and the $B$-canonical foliations on $S$. Let $\varphi_{\tau}^{\F}$ and $\varphi_{\tau}^{\G}$ be the holonomies of $\F$ and $\G$ along the loops $\tau=\alpha_1,\ldots,\beta_g$; let $\varphi_{1j}^{\leftrightarrow} = (\varphi_{1j}^{\G})^{-1}\circ \varphi_{1j}^{\F}$ be computed along the path $\gamma_{1j}$. We put \[ \theta(S) = ((\varphi_{\beta_i}^{\F})_{i=1}^{g},(\varphi_{\alpha_i}^{\G})_{i=1}^g,(\varphi_{1j}^{\leftrightarrow})_{j=2}^{2g-2})\quad\text{and}\quad \Phi([S]) = [\theta(S)], \] the class of $\theta(S)$ modulo common conjugacy. Since a diffeomorphism $\psi$ between two neighborhoods $S$ and $S'$ sends the $A$-canonical foliation $\F$ of $S$ to the $A$-canonical foliation $\F'$ of $S'$ (resp. the $B$-canonical foliations $\G,\G'$), $\psi$ then sends the bifoliated neighborhood $(S,\F,\G)$ to $(S',\F',\G')$. Thus $\theta(S)$ and $\theta(S')$ are conjugated, i.e.\ $\Phi([S])$ is well-defined. Conversely, if $\Phi([S])=\Phi([S'])$, then $(S,\F,\G)$ is diffeomorphic to $(S',\F',\G')$ (and therefore $S$ is diffeomorphic to $S'$). The realization part of the theorem is a direct consequence of theorem \ref{thm_constr_bif} (the relation \eqref{eq_representation} is trivial here).
\end{proof} \begin{rmq} About the realization of a tuple $((\varphi_{i}^{(j)})_i)_{j=1}^3$, remark that the conditions \eqref{eq_compatibility_1} and \eqref{eq_compatibility_2} only depend on the coefficients of $\varphi_i^{(j)}$ of order $k+1$. In this sense, we can say that the image $\Phi(\ms{V}(C,k,\omega))$ is of finite codimension. \end{rmq} \section{Concluding remarks} \subsection{About convergent foliations in $S$} In some cases, the canonical foliations do not converge even if the neighborhood is analytic. Indeed, if $C$ is an elliptic curve, Mishustin gave in \cite{mishustin_no_foliation} an example of a neighborhood $S$ of $C$ with trivial normal bundle and no analytic foliations tangent to $C$. We can use this example to build examples in higher genus: let $p_1,p_2$ be two points on $C$ and $T_1,T_2$ two transversals at $p_1$ and $p_2$. Consider the two-fold branched covering $\pi : S' \rightarrow S$ of $S$ branching at $T_1$ and $T_2$. Denote $C' = \pi^{-1}(C)$, $\alpha,\beta$ the $A$- and $B$-loops on $C$ based at $p_1$, and $\alpha_1,\alpha_2,\beta_1,\beta_2$ the preimages of $\alpha$ and $\beta$. They are the $A$- and $B$-loops on $C'$ based at $\pi^{-1}(p_1)$. If $\F,\G$ are the canonical foliations on $S$, denote $\F'$ and $\G'$ the preimages of $\F,\G$ by $\pi$. Then $S'$ is an analytic neighborhood of the genus $2$ curve $C'$, the canonical foliations of $S'$ are $\F'$ and $\G'$, and they do not converge. Even with these examples, the question of the existence of an analytic neighborhood of a genus $2$ curve without any convergent foliation is still open. \subsection{About analytic equivalence of neighborhoods} Let $S,S'\in\ms{V}(C,k,\omega)$ be two analytic neighborhoods such that the canonical foliations converge. Let $\theta=(\varphi_i),\theta'=(\varphi'_i)$, $i=1,\ldots,4g-3$ be the diffeomorphisms obtained in the construction, so that $\theta$ is a representative of $\Phi(S)$ and $\theta'$ is a representative of $\Phi(S')$. 
Consider the groups $G$, $G'$ generated by the $\varphi_i$ (resp. $\varphi'_i$). Suppose $G$ is not abelian. Then if $S$ and $S'$ are formally diffeomorphic, there is a formal diffeomorphism $\psi$ conjugating $\theta$ and $\theta'$. This $\psi$ realizes a conjugacy between $G$ and $G'$, so by Cerveau-Moussu's rigidity theorem \cite{cm}, $\psi$ is convergent. This in turn implies that $\theta$ and $\theta'$ are analytically conjugated, so that $S$ and $S'$ are analytically diffeomorphic. Note that since the diffeomorphisms $\varphi_i$ are tangent to the identity, the group $G$ is abelian only if the $\varphi_i$ are flows of the same formal vector field \cite{loray_pseudogroupe}. This argument also works for non-canonical foliations: suppose that $S$ and $S'$ are analytic neighborhoods conjugated by a formal diffeomorphism $\psi$. Suppose that there are on $S$ two convergent foliations $\F$ and $\G$ with tangency index $k\geq 1$ and $1$-form $\omega$ with simple zeroes. Suppose that $\psi$ sends $\F$ and $\G$ to convergent foliations $\F'$ and $\G'$. Suppose finally that the group $G$ generated by the diffeomorphisms composing the invariant $Inv(S,\F,\G)$ of theorem \ref{thm_classif_bif} is not abelian. Then $\psi$ converges. \subsection{About degenerate cases} If the $1$-form $\omega$ does not have simple zeroes, we can still obtain a classification of neighborhoods in $\ms{V}(C,k,\omega)$ by the same method. The problem is that in this case some non-trivial local invariants can arise. For genus $g=2$ curves, the local situations which can be involved were classified in \cite{thom_boletim}. These local classifications can then be used to obtain a classification of bifoliated neighborhoods of genus $2$ curves even in the degenerate cases (see \cite{thom_these}), which in turn could give a complete formal classification of neighborhoods of genus $2$ curves with trivial normal bundle.
\section{Introduction} \bigskip Arbitrary integer powers of a square matrix is used to solve some difference equations, differential and delay differential equations and boundary value problems. Recently, the calculations eigenvalues and integer powers of anti-tridiagonal matrices have been well studied in the literature. For instance, Rimas [1-3] obtained the integer powers of anti-tridiagonal matrices of odd and even order. \"{O}tele\c{s} and Akbulak [8-10] generalized Rimas's the some results and obtained complex factorizations. Guti\'{e}rrez [11] calculated the powers of complex persymmetric or skew-persymmetric anti-tridiagonal matrices with costant anti-diagonals. For details on the eigenvalues and the powers of tridiagonal and anti-tridiagonal matrices, see [4-7, 12]. We obtained integer powers of the tridiagonal matri \begin{equation} \widetilde{A}_{n}=\left\{ \begin{array}{l} \widetilde{a}_{11}=\widetilde{a}_{nn}=a \\ \widetilde{a}_{12}=\widetilde{a}_{n,n-1}=2b \\ \widetilde{a}_{21}=\widetilde{a}_{n-1,n}=b \\ tridiag(-b,a,-b),\ othe \end{array \right. \label{1} \end{equation} as \begin{eqnarray} \widetilde{a}_{ij}(r) &=&\frac{1}{2n-2}\left( \lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})\right. \label{2} \\ &&+2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k}));\ i=1,\ldots ,n;\ j=1,n \notag \end{eqnarray an \begin{eqnarray} \widetilde{a}_{ij}(r) &=&\frac{(-1)^{j}}{n-1}\left( \begin{array}{c} \\ (\lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})) \\ \end{array \right. \label{3} \\ &&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k})\right) ;i=1,\ldots ,n;\ j=2,\ldots ,n-1 \notag \end{eqnarray where$\ m_{k}=\frac{\lambda _{k}-a}{2b}$ for $\lambda _{k}$ is the $k$-th eigenvalue of $\widetilde{A}_{n}$ see [14, p. 
3] and $T_{j}(.)$ is the $j$-th degree Chebyshev polynomial of the first kind [11, p. 14]. \"{O}tele\c{s} et al. computed integer powers of a certain complex tridiagonal matrix
\begin{equation*}
\widetilde{B}_{n}=\left\{
\begin{array}{l}
\widetilde{b}_{11}=\widetilde{b}_{nn}=a+b \\
\widetilde{b}_{12}=\widetilde{b}_{21}=\widetilde{b}_{n-1,n}=\widetilde{b}_{n,n-1}=b \\
tridiag(b,a,b),\ \text{otherwise}
\end{array}
\right. \quad [9,\ p.67]
\end{equation*}
as
\begin{equation}
\widetilde{b}_{ij}^{r}=\dsum\limits_{k=1}^{n}f_{k}\lambda _{k}^{r}T_{\frac{2i-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) T_{\frac{2j-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) ;\ i,j=1,\ldots ,n  \label{4}
\end{equation}
where $T_{j}(.)$ is the $j$-th degree Chebyshev polynomial of the first kind [11, p. 14], $\lambda _{k}$ is the $k$-th eigenvalue of the matrix $\widetilde{B}_{n}$ and
\begin{equation*}
f_{k}=\left\{
\begin{array}{l}
\frac{2}{n},\ \text{if }k=1,\ldots ,n-1 \\
\frac{1}{n},\ \text{if }k=n.
\end{array}
\right.
\end{equation*}
Let
\begin{equation}
A_{n}:=\left(
\begin{array}{cccccc}
 &  &  &  & 2b & a \\
 &  &  & -b & a & b \\
 &  & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & a & -b &  \\
 & -b & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} &  &  \\
b & a & -b &  &  &  \\
a & 2b &  &  &  &
\end{array}
\right)  \label{5}
\end{equation}
and
\begin{equation}
B_{n}:=\left(
\begin{array}{cccccc}
 &  &  &  & b & a+b \\
 &  &  & b & a & b \\
 &  & b & a & b &  \\
 & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} &  &  \\
b & a & b &  &  &  \\
a+b & b &  &  &  &
\end{array}
\right)  \label{6}
\end{equation}
be anti-tridiagonal matrices, where $b\neq 0$ and $a,b\in \mathbb{C}$. In this paper, we obtain the integer powers of the $n\times n$ complex anti-tridiagonal matrices in (5) and (6). We also get the complex factorizations of the Fibonacci polynomials and of the Fibonacci and Pell numbers using the eigenvalues of the matrices $A_{n}$ and $B_{n}$.

\section{General Expression of $A_{n}^{r}$}

In this section, we obtain a general expression for the entries of the $r$-th powers of the $n\times n$ complex anti-tridiagonal matrices $A_{n}$ and $B_{n}$ in (5) and (6), where $r\in \mathbb{N}$ if $n$ is even and $r\in \mathbb{Z}$ if $n$ is odd.

\begin{lemma}
Let $a,b\in \mathbb{C}$ with $b\neq 0$, $n\in \mathbb{N}$ and
\begin{equation}
\widetilde{A}_{n}=\left(
\begin{array}{ccccccc}
a & 2b &  &  &  &  &  \\
b & a & -b &  &  &  &  \\
 & -b & a & -b &  &  &  \\
 &  & -b & \ddots & \ddots &  &  \\
 &  &  & \ddots & a & -b &  \\
 &  &  &  & -b & a & b \\
 &  &  &  &  & 2b & a
\end{array}
\right) ,  \label{7}
\end{equation}
\begin{equation}
\widetilde{B}_{n}=\left(
\begin{array}{ccccccc}
a+b & b &  &  &  &  &  \\
b & a & b &  &  &  &  \\
 & b & a & b &  &  &  \\
 &  & b & \ddots & \ddots &  &  \\
 &  &  & \ddots & a & b &  \\
 &  &  &  & b & a & b \\
 &  &  &  &  & b & a+b
\end{array}
\right)  \label{8}
\end{equation}
and
\begin{equation}
J_{n}=\left(
\begin{array}{ccccc}
 &  &  &  & 1 \\
 &  &  & 1 &  \\
 &  & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu\raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} &  &  \\
 & 1 &  &  &  \\
1 &  &  &  &
\end{array}
\right) .  \label{9}
\end{equation}
Then
\begin{equation*}
A_{n}=J_{n}\widetilde{A}_{n}=\widetilde{A}_{n}J_{n}
\end{equation*}
and
\begin{equation*}
B_{n}=J_{n}\widetilde{B}_{n}=\widetilde{B}_{n}J_{n}.
\end{equation*}
\end{lemma}

\begin{proof}
See [7].
\end{proof}

\begin{lemma}
Let $A_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (5).
Then
\begin{equation}
A_{n}^{r}=\left\{
\begin{array}{l}
J_{n}\widetilde{A}_{n}^{r},\ \ r\text{ is odd} \\
\widetilde{A}_{n}^{r},\ \ \ \ r\text{ is even.}
\end{array}
\right.  \label{10}
\end{equation}
\end{lemma}

\begin{proof}
We proceed by induction on $r$. The case $r=1$ is clear. Suppose that the equality (10) holds for some $r\geq 1$. Let us show that (10) then holds for $r+1$. By the induction hypothesis we have
\begin{equation*}
A_{n}^{r+1}=\left\{
\begin{array}{l}
\widetilde{A}_{n}J_{n}\widetilde{A}_{n}^{r},\ \ \ \ \ r+1\text{ is odd} \\
\widetilde{A}_{n}J_{n}\widetilde{A}_{n}^{r}J_{n},\ r+1\text{ is even.}
\end{array}
\right.
\end{equation*}
Since $A_{n}=\widetilde{A}_{n}J_{n}$, $J_{n}$ commutes with $\widetilde{A}_{n}$ and $J_{n}^{2}=I_{n}$, we obtain
\begin{equation*}
A_{n}^{r+1}=\left\{
\begin{array}{l}
J_{n}\widetilde{A}_{n}^{r+1},\ \ r+1\text{ is odd} \\
\widetilde{A}_{n}^{r+1},\ \ \ \ r+1\text{ is even.}
\end{array}
\right.
\end{equation*}
\end{proof}

The same proof works for the matrix $B_{n}$.

\begin{theorem}
Let $A_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (5). If $n$ is odd, then the $r$-th power of $A_{n}$ is given by
\begin{eqnarray}
a_{n-i+1,j}^{r} &=&\frac{1}{2n-2}\left( \lambda _{2}^{r}T_{n-i}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{n-i}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{n-i}(m_{k})T_{j-1}(m_{k})\right) ;\ i=1,\ldots ,n;\ j=1,n  \label{11}
\end{eqnarray}
and
\begin{eqnarray}
a_{n-i+1,j}^{r} &=&\frac{(-1)^{j}}{n-1}\left( \lambda _{2}^{r}T_{n-i}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{n-i}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left.
+2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{n-i}(m_{k})T_{j-1}(m_{k})\right)  \label{12}
\end{eqnarray}
for $i=1,\ldots ,n;\ j=2,\ldots ,n-1$, and if $n$ is even, then the $r$-th power of $A_{n}$ is given by
\begin{eqnarray}
a_{i,j}^{r} &=&\frac{1}{2n-2}\left( \lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k})\right) ;\ i=1,\ldots ,n;\ j=1,n  \label{13}
\end{eqnarray}
and
\begin{eqnarray}
a_{i,j}^{r} &=&\frac{(-1)^{j}}{n-1}\left( \lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k})\right)  \label{14}
\end{eqnarray}
for $i=1,\ldots ,n;\ j=2,\ldots ,n-1.$
\end{theorem}

\begin{proof}
Let $A_{n}^{r}=(a_{ij}^{r})$ and $U=\widetilde{A}_{n}^{r}$. We obtained the eigenvalues of $\widetilde{A}_{n}$ as
\begin{equation*}
\lambda _{k}=a+2b\cos \left( \frac{(k-1)\pi }{n-1}\right) ,\ k=1,\ldots ,n\ (see\ [14,\ p.3])
\end{equation*}
and the entries of the matrix $U$ as
\begin{eqnarray}
u_{ij}(r) &=&\frac{1}{2n-2}\left( \lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k})\right) ;\ i=1,\ldots ,n;\ j=1,n  \label{15}
\end{eqnarray}
and, for $i=1,\ldots ,n$ and $j=2,\ldots ,n-1$,
\begin{eqnarray}
u_{ij}(r) &=&\frac{(-1)^{j}}{n-1}\left( \lambda _{2}^{r}T_{i-1}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{i-1}(m_{3})T_{j-1}(m_{3})\right.  \notag \\
&&\left.
+2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{i-1}(m_{k})T_{j-1}(m_{k})\right)  \label{16}
\end{eqnarray}
where $m_{k}=\frac{\lambda _{k}-a}{2b}$ (see [14, p.]) and $T_{s}(.)$ is the $s$-th degree Chebyshev polynomial of the first kind [13, p. 14]. If $r$ is even, then the equalities (15) and (16) are valid by the equality (10). Let $r$ be an odd number. If we multiply the equalities (15) and (16) by $J_{n}$ from the left side, then we have
\begin{equation*}
(J_{n}\widetilde{A}_{n}^{r})_{i,k}=\dsum\limits_{s=1}^{n}(J_{n})_{i,s}u_{s,k}(r)=u_{n-i+1,k}(r);\ k=1,\ldots ,n.
\end{equation*}
Hence we obtain
\begin{eqnarray*}
a_{n-i+1,j}^{r} &=&u_{n-i+1,j}(r) \\
&=&\frac{1}{2n-2}\left( \lambda _{2}^{r}T_{n-i}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{n-i}(m_{3})T_{j-1}(m_{3})\right. \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{n-i}(m_{k})T_{j-1}(m_{k})\right) ;\ i=1,\ldots ,n;\ j=1,n
\end{eqnarray*}
and
\begin{eqnarray*}
a_{n-i+1,j}^{r} &=&u_{n-i+1,j}(r) \\
&=&\frac{(-1)^{j}}{n-1}\left( \lambda _{2}^{r}T_{n-i}(m_{2})T_{j-1}(m_{2})+\lambda _{3}^{r}T_{n-i}(m_{3})T_{j-1}(m_{3})\right. \\
&&\left. +2\dsum\limits_{\underset{k\neq 2,3}{k=1}}^{n}\lambda _{k}^{r}T_{n-i}(m_{k})T_{j-1}(m_{k})\right) ;\ i=1,\ldots ,n;\ j=2,\ldots ,n-1.
\end{eqnarray*}
\end{proof}

\begin{theorem}
Let $B_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (6), $\widetilde{V}=\widetilde{B}_{n}^{r}$ and $V=B_{n}^{r}$. Then the entries of $V$ are
\begin{equation}
v_{ij}(r)=\left\{
\begin{array}{l}
\dsum\limits_{k=1}^{n}f_{k}\lambda _{k}^{r}T_{\frac{2i-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) T_{\frac{2j-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) ,\ \text{if }r\text{ is even} \\
\dsum\limits_{k=1}^{n}f_{k}\lambda _{k}^{r}T_{\frac{2(n-i)+1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) T_{\frac{2j-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) ,\ \text{if }r\text{ is odd}
\end{array}
\right.
\label{17}
\end{equation}
where $i,j=1,\ldots ,n$ and
\begin{equation*}
\lambda _{k}=a+2b\cos \left( \frac{(k-1)\pi }{n}\right) ,\ k=1,\ldots ,n
\end{equation*}
are the eigenvalues of the matrix $\widetilde{B}_{n}$.
\end{theorem}

\begin{proof}
Since $B_{n}^{r}=\widetilde{B}_{n}^{r}$ from (10) when $r$ is even, the equality (17) is valid. Let $r$ be an odd number. If we multiply the equality (4) by $J_{n}$ from the left side, then we have
\begin{eqnarray*}
v_{i,j} &=&(J_{n}\widetilde{B}_{n}^{r})_{i,j}=\dsum\limits_{s=1}^{n}(J_{n})_{i,s}\widetilde{v}_{s,j}(r) \\
&=&\widetilde{v}_{n-i+1,j}(r) \\
&=&\dsum\limits_{k=1}^{n}f_{k}\lambda _{k}^{r}T_{\frac{2(n-i)+1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) T_{\frac{2j-1}{2}}\left( \frac{\lambda _{k}-a}{2b}\right) .
\end{eqnarray*}
\end{proof}

\begin{corollary}
Let the matrix $A_{n}$ be as in (5). Then the eigenvalues of $A_{n}$ are
\begin{equation}
\lambda _{k}=(-1)^{k}\left( a+2b\cos \left( \frac{(k-1)\pi }{n-1}\right) \right) ,\ k=1,\ldots ,n.  \label{18}
\end{equation}
\end{corollary}

\begin{proof}
See [3, p.574] and [14, p. 5].
\end{proof}

\begin{corollary}
Let the matrix $B_{n}$ be as in (6). Then the eigenvalues of $B_{n}$ are
\begin{equation}
\lambda _{k}=(-1)^{k-1}\left( a+2b\cos \left( \frac{(k-1)\pi }{n}\right) \right)  \label{19}
\end{equation}
where $k=1,\ldots ,n$.
\end{corollary}

\begin{proof}
See [5, p.574] and [14, p. ].
\end{proof}

\section{Numerical examples}

Considering Eqs.~(11)-(14), we can find arbitrary integer powers of the $n\times n$ complex anti-tridiagonal matrix $A_{n}$ in (5), where $n$ is a positive odd integer.
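Before turning to the examples, the power rule of Lemma 2 admits a quick numerical sanity check. The following sketch is our own illustration, not part of the paper; the helper names \texttt{A\_tilde} and \texttt{anti\_power} are ours. It builds $\widetilde{A}_{n}$ from (1), forms $A_{n}=J_{n}\widetilde{A}_{n}$, and compares $A_{n}^{r}$ computed via Lemma 2 against brute-force matrix powering.

```python
import numpy as np

def A_tilde(n, a, b):
    # Tridiagonal matrix of Eq. (1): tridiag(-b, a, -b) in the interior,
    # with modified entries a_{12} = a_{n,n-1} = 2b and a_{21} = a_{n-1,n} = b.
    M = a * np.eye(n) - b * np.eye(n, k=1) - b * np.eye(n, k=-1)
    M[0, 1] = 2 * b
    M[n - 1, n - 2] = 2 * b
    M[1, 0] = b
    M[n - 2, n - 1] = b
    return M

def anti_power(n, a, b, r):
    # Lemma 2: A_n^r = J_n * A_tilde^r for odd r, and A_tilde^r for even r,
    # where J_n is the exchange (anti-identity) matrix of Eq. (9).
    J = np.fliplr(np.eye(n))
    P = np.linalg.matrix_power(A_tilde(n, a, b), r)
    return J @ P if r % 2 else P

# Cross-check against brute-force powers of A_n = J_n * A_tilde(n).
for n in (3, 5, 7):
    A = np.fliplr(np.eye(n)) @ A_tilde(n, 1.0, 3.0)
    for r in range(1, 6):
        assert np.allclose(anti_power(n, 1.0, 3.0, r),
                           np.linalg.matrix_power(A, r))
```

For the data $n=3$, $r=3$, $a=1$, $b=3$ of Example 1 below, this reproduces the matrix $A_{3}^{3}$ displayed there.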
\begin{example}
Let $n=3$, $r=3$, $a=1$ and $b=3$. Since
\begin{equation*}
\widetilde{J}=diag(\lambda _{1},\lambda _{2},\lambda _{3})=diag(a,a+2b,a-2b)=diag(1,7,-5)
\end{equation*}
and
\begin{equation*}
\widetilde{A}_{3}^{3}=(u_{ij}(3))=\left(
\begin{array}{ccc}
55 & 234 & 54 \\
117 & 109 & 117 \\
54 & 234 & 55
\end{array}
\right) ,
\end{equation*}
we obtain
\begin{equation*}
A_{3}^{3}=J_{3}\widetilde{A}_{3}^{3}=\left(
\begin{array}{ccc}
54 & 234 & 55 \\
117 & 109 & 117 \\
55 & 234 & 54
\end{array}
\right) .
\end{equation*}
\end{example}

\begin{example}
If $n=5$, $r=4$, $a=1$ and $b=3$, then
\begin{eqnarray*}
\widetilde{J} &=&diag(\lambda _{1},\lambda _{2},\lambda _{3},\lambda _{4},\lambda _{5}) \\
&=&diag(1,7,-5,1+3\sqrt{2},1-3\sqrt{2})
\end{eqnarray*}
and since
\begin{equation*}
\widetilde{A}_{5}^{4}=(u_{ij}(4))=\left(
\begin{array}{rrrrr}
595 & 672 & -756 & 216 & 162 \\
336 & 973 & -444 & 540 & 108 \\
-378 & -444 & 757 & -444 & -378 \\
108 & 540 & -444 & 973 & 336 \\
162 & 216 & -756 & 672 & 595
\end{array}
\right) ,
\end{equation*}
we have
\begin{equation*}
A_{5}^{4}=J_{5}\widetilde{A}_{5}^{4}=\left(
\begin{array}{rrrrr}
162 & 216 & -756 & 672 & 595 \\
108 & 540 & -444 & 973 & 336 \\
-378 & -444 & 757 & -444 & -378 \\
336 & 973 & -444 & 540 & 108 \\
595 & 672 & -756 & 216 & 162
\end{array}
\right) .
\end{equation*}
\end{example}

\section{Complex Factorizations}

The well-known Fibonacci polynomials $F(x)=\{F_{n}(x)\}_{n=1}^{\infty }$ are defined by $F_{n}(x)=xF_{n-1}(x)+F_{n-2}(x)$ with initial conditions $F_{0}(x)=0$ and $F_{1}(x)=1$. For $x=1$ and $x=2$ we obtain the Fibonacci and Pell numbers as
\begin{equation*}
F_{n}(1)=\{0,1,1,2,3,5,8,\ldots \}
\end{equation*}
and
\begin{equation*}
F_{n}(2)=\{0,1,2,5,12,29,\ldots \},
\end{equation*}
respectively.

\begin{theorem}
Let $A_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (5).
If $a:=x$ and $b:=\mathbf{i}$, then
\begin{equation*}
\det (A_{n})=\left\{
\begin{array}{r}
\ \ \ (x^{2}+4)F_{n-1}(x),\ n\equiv 0\ or\ 1\ \func{mod}4 \\
-(x^{2}+4)F_{n-1}(x),\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.
\end{equation*}
where $\mathbf{i}=\sqrt{-1}$.
\end{theorem}

\begin{proof}
Since $\widetilde{A}_{n}=J_{n}A_{n}$,
\begin{equation*}
\det (\widetilde{A}_{n})=x^{2}D_{n-2}+4xD_{n-3}+4D_{n-4}
\end{equation*}
where $D_{n}=\det (tridiag_{n}(-\mathbf{i},x,-\mathbf{i}))$ and
\begin{equation*}
\det (tridiag_{n}(-\mathbf{i},x,-\mathbf{i}))=F_{n+1}(x)\ (see\ [14,\ p.]),
\end{equation*}
we arrive at
\begin{eqnarray*}
\det (\widetilde{A}_{n}) &=&x^{2}F_{n-1}(x)+4xF_{n-2}(x)+4F_{n-3}(x) \\
&=&x^{2}(xF_{n-2}(x)+F_{n-3}(x))+4xF_{n-2}(x)+4F_{n-3}(x) \\
&=&(x^{2}+4)(xF_{n-2}(x)+F_{n-3}(x))=(x^{2}+4)F_{n-1}(x).
\end{eqnarray*}
Since
\begin{equation*}
\det (J_{n})=\left\{
\begin{array}{r}
1,\ \ n\equiv 0\ or\ 1\ \func{mod}4 \\
-1,\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.
\end{equation*}
the proof of the theorem is complete.
\end{proof}

\begin{corollary}
Let $A_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (5). If $a:=x$ and $b:=\mathbf{i}$, then the complex factorization of the generalized Fibonacci-Pell numbers is of the following form:
\begin{equation*}
F_{n-1}(x)=\frac{\alpha }{x^{2}+4}\dprod\limits_{k=1}^{n}\left( x+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n-1}\right) \right)
\end{equation*}
where
\begin{equation*}
\alpha =\left\{
\begin{array}{l}
(-1)^{n},\ if\ n\equiv 0\ or\ 1\ \func{mod}4 \\
(-1)^{n-1},\ if\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.
\end{equation*}
\end{corollary}

\begin{proof}
Since the eigenvalues of the matrix $A_{n}$ from (18) are
\begin{equation*}
\lambda _{j}=(-1)^{j}\left( x+2\mathbf{i}\cos \left( \frac{(j-1)\pi }{n-1}\right) \right) ,\ j=1,\ldots ,n,
\end{equation*}
the determinant of the matrix $A_{n}$ can be obtained as
\begin{equation}
\det (A_{n})=(-1)^{n}\dprod\limits_{k=1}^{n}\left( x+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n-1}\right) \right) .  \label{20}
\end{equation}
By considering (20) and Theorem 9, the complex factorization of the generalized Fibonacci-Pell numbers is obtained.
\end{proof}

\begin{theorem}
Let $B_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (6). If $a:=1$ and $b:=\mathbf{i}$, then
\begin{equation}
\det (B_{n})=\left\{
\begin{array}{c}
\ \ (1+2\mathbf{i})F_{n},\ \ n\equiv 0\ or\ 1\ \func{mod}4 \\
-(1+2\mathbf{i})F_{n},\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.  \label{21}
\end{equation}
and if $a:=2$ and $b:=\mathbf{i}$, then
\begin{equation}
\det (B_{n})=\left\{
\begin{array}{c}
\ \ (2+2\mathbf{i})P_{n},\ \ n\equiv 0\ or\ 1\ \func{mod}4 \\
-(2+2\mathbf{i})P_{n},\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.  \label{22}
\end{equation}
where $\mathbf{i}=\sqrt{-1}$ and $F_{n}$ and $P_{n}$ denote the $n$-th Fibonacci and Pell numbers, respectively.
\end{theorem}

\begin{proof}
Applying the Laplace expansion along the first two and last two rows of the determinant of $\widetilde{B}_{n}$, we have
\begin{eqnarray}
\det (\widetilde{B}_{n}) &=&(a+b)^{2}\det (tridiag_{n-2}(b,a,b))  \label{23} \\
&&-2b^{2}(a+b)\det (tridiag_{n-3}(b,a,b))  \notag \\
&&+b^{4}\det (tridiag_{n-4}(b,a,b))\ (see\ [9,\ p.]).
\notag
\end{eqnarray}
If we take $a:=1$ and $b:=\mathbf{i}$ in (23), then we get
\begin{eqnarray*}
\det (\widetilde{B}_{n}) &=&(1+\mathbf{i})^{2}\det (tridiag_{n-2}(\mathbf{i},1,\mathbf{i})) \\
&&+2(1+\mathbf{i})\det (tridiag_{n-3}(\mathbf{i},1,\mathbf{i})) \\
&&+\det (tridiag_{n-4}(\mathbf{i},1,\mathbf{i}))\ (see\ [9,\ p.]) \\
&=&(1+\mathbf{i})^{2}F_{n-1}+2(1+\mathbf{i})F_{n-2}+F_{n-3} \\
&=&(1+2\mathbf{i})F_{n}.
\end{eqnarray*}
Since
\begin{equation*}
B_{n}=J_{n}\widetilde{B}_{n},
\end{equation*}
we obtain (21). The Pell case is obtained similarly.
\end{proof}

\begin{corollary}
Let $B_{n}$ be the $n\times n$ complex anti-tridiagonal matrix given by (6). If $a:=1$ and $b:=\mathbf{i}$, then the complex factorizations of the Fibonacci numbers are
\begin{equation*}
F_{n}=(-1)^{n}\left\{
\begin{array}{l}
\ \ \dprod\limits_{k=2}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ even \\
-\dprod\limits_{k=2}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ odd
\end{array}
\right.
\end{equation*}
and if $a:=2$ and $b:=\mathbf{i}$, then the complex factorizations of the Pell numbers are
\begin{equation*}
P_{n}=(-1)^{n}\left\{
\begin{array}{l}
\ \ \dprod\limits_{k=2}^{n}\left( 2+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ even \\
-\dprod\limits_{k=2}^{n}\left( 2+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ odd
\end{array}
\right.
\end{equation*}
\end{corollary}

\begin{proof}
Let $a:=1$ and $b:=\mathbf{i}$. Since the eigenvalues of the matrix $B_{n}$ are
\begin{equation*}
\lambda _{k}=(-1)^{k-1}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right)
\end{equation*}
for $k=1,\ldots ,n$, and the determinant of the matrix $B_{n}$ is equal to the product of its eigenvalues, we get
\begin{eqnarray*}
F_{n} &=&\frac{1}{1+2\mathbf{i}}\left\{
\begin{array}{c}
\ \ \det (B_{n}),\ \ n\equiv 0\ or\ 1\ \func{mod}4 \\
-\det (B_{n}),\ n\equiv 2\ or\ 3\ \func{mod}4
\end{array}
\right.
\\
&=&\frac{1}{1+2\mathbf{i}}\left\{
\begin{array}{l}
\ \ \ (-1)^{n}\dprod\limits_{k=1}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ even \\
(-1)^{n+1}\dprod\limits_{k=1}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ odd
\end{array}
\right. \\
&=&(-1)^{n}\left\{
\begin{array}{l}
\ \ \dprod\limits_{k=2}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ even \\
-\dprod\limits_{k=2}^{n}\left( 1+2\mathbf{i}\cos \left( \frac{(k-1)\pi }{n}\right) \right) ,\ n\ is\ odd.
\end{array}
\right.
\end{eqnarray*}
The Pell numbers are obtained similarly for $a:=2$ and $b:=\mathbf{i}$. Thus, the proof is completed.
\end{proof}

\textbf{Acknowledgement.} The authors are partially supported by TUBITAK and the Office of Sel\c{c}uk University Research Project (BAP).
\section{Introduction} Hydrodynamics is the natural framework for nuclear matter at finite temperature and long distance scales. A novel state of nuclear matter has been discovered in the Relativistic Heavy Ion Collision (RHIC) experiment \cite{expt1} (for a review see for example \cite{expt2}). It is widely believed that a deconfinement phase transition happens and that the corresponding deconfined state is the {\em quark gluon plasma} (QGP) \cite{Shuryak} (for early work prior to QCD, see \cite{Lee}). It has been estimated that the critical density and critical temperature are $\mu_C\sim 1~\mbox{GeV/fm}^3$, ten times the density of normal nuclear matter, and $T_C\sim 170~\mbox{MeV}$, at which $\alpha_s\sim 1$. So the reliability of a perturbative description of the QGP is highly questionable. Applying hydrodynamics in this case requires taking into account the propagation of the color charges; the corresponding liquid is therefore ``colored''. More interestingly, experiment shows that the shear viscosity of the deconfined nuclear matter is close to the lower bound set by theory based on AdS/CFT \cite{grav}. This implies that a perfect fluid is a good approximation to the real QGP. The framework for a perfect color liquid has been developed in \cite{NFM} (for a review see \cite{NFMR}). In using nonabelian fluid mechanics to explore the physics of the QGP, the natural first step is to study the classical solution space. So we ask whether there exist any topologically nontrivial solutions. In fact, within the framework of the nonabelian fluid, the fluid configuration is described by a field $g$ which takes values in the color group, say $SU(n)$. Because the QGP only exists in a finite region of space, a natural boundary condition is that $g$ should go to a constant at spatial infinity. Therefore, the fluid configurations are classified by $\Pi_3(SU(n))=\mathbb{Z}$. Moreover, in each topological class, we hope to check whether there exists a configuration minimizing the total energy.
Such solutions are referred to as skyrmions associated with the color group, or simply {\em color skyrmions}. In our previous paper \cite{DN}, the existence of a color skyrmion was shown for a particular choice of the hydrodynamic Hamiltonian. In this paper, we continue the discussion of the existence of color skyrmions in both the time-independent and the time-dependent cases. We will consider a nonabelian fluid system whose equation of state (EOS) follows the so-called {\em $\gamma$-law}, namely the pressure density is proportional to the energy density: \begin{equation}\label{GammaLaw} \wp=(\gamma-1)\varepsilon. \end{equation} From relativistic fluid mechanics, we know that the case $\gamma=4/3$ corresponds to radiation ($\wp=\varepsilon/3$) while the case $\gamma=1$ corresponds to dust ($\wp=0$). The up-to-date results on the EOS for the isentropic expansion of the QGP come from lattice simulations \cite{lat}. In the temperature region relevant to RHIC, for example, a phenomenological parametrization of the resulting equation of state is given by \begin{equation}\label{EOS} {\wp\over\varepsilon}={1\over 3}\Bigl(1-\frac{1.2}{1+0.5\varepsilon~\mbox{fm}^3/\mbox{GeV}}\Bigr). \end{equation} We see that it interpolates smoothly between radiation and dust. In addition, we will take $\gamma=2$ as a heuristic example. Consequently, we will restrict to $1\le \gamma\le 2$ in this paper. Our major results can be summarized as follows. For the time-independent case, \begin{itemize} \item $\gamma=2$: in previous work we showed the existence of a color skyrmion for this case for a particular configuration ansatz. In this paper, the existence is shown in more general cases. \item $6/5\le\gamma\le 5/3$: the topological configurations are in general unstable. The total energy is minimized by infinite dilution of the fluid, and the final value of the energy is zero. \item $\gamma=1$: the energy is minimized to a finite value when the fluid is infinitely diluted.
\item $1<\gamma<6/5$ and $5/3<\gamma<2$: unstable topological configurations exist; however, the existence of metastable solitons cannot be ruled out. \end{itemize} For the time-dependent case, both oscillating and expanding solutions are found. The organization of this paper is the following. In Sect.~\ref{Pre}, preliminary material on nonabelian fluid mechanics is reviewed. In Sects.~\ref{TIS} and \ref{DG}, static color skyrmions are thoroughly considered for the different values of $\gamma$. In Sect.~\ref{TDS}, time-dependent solutions are given. In Sect.~\ref{DIS}, conclusions are given and some open issues are addressed. \section{Preliminaries}\label{Pre} We describe the internal degrees of freedom of a single colored particle by a group element $g$ in the color group $SU(n)$. The corresponding phase-space structure is determined by the symplectic potential \begin{equation} \Theta=-i\rho_sTr(T_sg^{-1}dg) \end{equation} from which a key Poisson bracket is determined: \begin{equation} \{\rho_s, g\}=-igT_s. \end{equation} Here $T_s$ is a generator in the Cartan subalgebra and $\rho_s$ is the conjugate momentum; the index $s$ runs over all the generators $T_s$. The dynamics of the color degrees of freedom is therefore determined by the Lagrangian \begin{equation} L=-i\rho_sTr(T_sg^{-1}\dot{g})-H(\rho_s,g) \end{equation} where $H$ is the Hamiltonian. The generalization to a many-particle system gives the nonabelian fluid Lagrangian density \begin{equation}\label{A1} {\cal L}=\sum\limits_sj_s^\mu \omega_{s\mu}-F(\{n_s\}) \end{equation} where $j_s^\mu$ is the color current, $n_s^2=j_s^\mu j_{s\mu}$ and $F$ is an invariant potential function, and \begin{equation} \omega_{s\mu}=-iTr(T_sg^{-1}\partial_\mu g) \end{equation} is the generalized velocity field. The action in a general metric background is given by $S=\int d^4x\sqrt{g}{\cal L}$. In this paper, we will only consider the rank-one group $SU(2)$; therefore, the index $s$ will be omitted hereafter.
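As a quick aside, the interpolating EOS of Eq.~(\ref{EOS}) quoted in the Introduction is easy to probe numerically. The sketch below is our own illustration (the function name is ours, not from \cite{lat}): it shows that the pressure vanishes at $\varepsilon=0.4~\mbox{GeV/fm}^3$ (dust-like behavior) while $\wp/\varepsilon\rightarrow 1/3$ (radiation) at large energy density.

```python
def pressure_ratio(eps):
    # Phenomenological EOS of Eq. (2): wp / eps as a function of the
    # energy density eps, measured in GeV/fm^3.
    return (1.0 / 3.0) * (1.0 - 1.2 / (1.0 + 0.5 * eps))

# Dust-like point: wp = 0 at eps = 0.4 GeV/fm^3.
assert abs(pressure_ratio(0.4)) < 1e-12
# Radiation limit: wp/eps -> 1/3 as eps grows large.
assert abs(pressure_ratio(1e4) - 1.0 / 3.0) < 1e-3
```

The ratio increases monotonically with $\varepsilon$, which is the smooth dust-to-radiation transition described in the text.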
Recall that the energy-momentum tensor of an ideal fluid has the standard form \begin{equation}\label{A4} T^{\mu\nu}=(\varepsilon+\wp)u^\mu u^\nu-\wp g^{\mu\nu} \end{equation} where $u^\mu$ is the unit velocity field, $u^\mu u_\mu=1$. We identify $u^\mu$ in the following way: \begin{equation}\label{A5} j^\mu=nu^\mu. \end{equation} Note that an on-shell condition is \begin{equation}\label{A6} n\omega_\mu=F^\prime j_\mu. \end{equation} With the identity $\delta g = gg^{\mu\nu}\delta g_{\mu\nu}$ and the definition of the energy-momentum tensor, \begin{equation}\label{A3} T^{\mu\nu}=-{2\over\sqrt{g}}{\delta S\over\delta g_{\mu\nu}}, \end{equation} it can be verified that $T^{\mu\nu}$ for the nonabelian fluid system in Eq.~(\ref{A1}) agrees with the ideal-fluid form in Eq.~(\ref{A4}), with \begin{equation}\label{ID} \varepsilon=F,~~\wp=nF^\prime-F. \end{equation} So we claim that the Lagrangian in Eq.~(\ref{A1}) does describe an ideal fluid system. Eq.~(\ref{ID}) shows that, for the $\gamma$-law of Eq.~(\ref{GammaLaw}), we obtain \begin{equation}\label{A8} F(n)={\alpha\over\gamma} n^\gamma, \end{equation} with $\alpha$ a dimensional constant. Recall from the Introduction the physical boundary condition for $g$ as a field defined on the space $\mathbb{R}^3$. In accordance with this boundary condition, the configuration space is classified by the mapping classes $g:S^3\rightarrow SU(n)$. We do not need to specify the topological number in this paper; the assumption that the configuration carries a nonzero topological charge is enough for what follows. \section{Variational Analysis of Time-Independent Configurations}\label{TIS} In this section we discuss the existence of static soliton solutions with a given topological charge in the nonabelian fluid. \subsection{General Setting} To do so, it is convenient to use the Hamiltonian formalism.
In fact, the Lagrangian in Eq.~(\ref{A1}) can be reexpressed as ${\cal L}=\rho\omega_0-{\cal H}$ where $\rho=j^0$. The Hamiltonian density is given by \begin{equation}\label{A10} {\cal H}={\vec j}\cdot{\vec \omega}+F \end{equation} and there is an on-shell constraint: \begin{equation} {\delta {\cal H}\over \delta \vec{j}}=\vec{\omega}+{\delta F\over\delta\vec{j}}=\vec{\omega}-{F^\prime\over n}\vec{j}=0. \end{equation} With some simple algebra, we get the dynamical system \begin{equation} {\cal H}={n\over F^\prime}\omega^2+F, ~ \rho^2=n^2+{n^2\over F^{\prime 2}}\omega^2 \end{equation} where $\omega=|{\vec \omega}|$. In principle, the invariant density $n$ can be eliminated from the Hamiltonian by solving the second equation, and the Hamiltonian then becomes a function of $\rho$ and $\omega$. However, the constraint is in general not solvable algebraically; this is the hard part of the problem. Now we introduce the major physical assumptions of this paper, under which the existence of color skyrmions is considered. \begin{enumerate} \item We only consider fluid configurations with a single characteristic scale, which we write as $R$. \item $\rho$ scales like $1/R^3$. \item $\omega$ scales like $1/R$. \item $F$ follows the $\gamma$-law. \end{enumerate} \noindent We will simply call $R$ the size of the fluid configuration. Remarkably, these assumptions are general enough to hold regardless of the spin and the color of the fluid configuration. Accordingly, we introduce the following dimensionless quantities: \begin{equation} {\vec X}={\vec x}/R,~~ \mathbf{P}=R^3\rho,~~ \Omega=R\omega. \end{equation} Moreover, we parameterize the dimensional constant in the definition of the $\gamma$-law as \begin{equation} \mu=\alpha^{1/(4-3\gamma)}. \end{equation} So we have two scales in the problem: $\mu$, which is fixed, and $R$, which can be varied to minimize the energy. It is natural to make the size of the configuration dimensionless by defining \begin{equation} r=\mu R.
\end{equation} Instead of using the invariant density $n$, it is more convenient to use the ratio \begin{equation} Z={\rho\over n}. \end{equation} With these technical preparations, and substituting in the $\gamma$-law, we can recast the physical quantities in a more compact form: \begin{eqnarray}\nonumber H&=&T+P,\\ \nonumber T&=&{\mu\over r^{5-3\gamma}}\int d^3X{A\over Z^{2-\gamma}},\\ P&=&{\mu\over r^{3(\gamma-1)}}\int d^3X{B\over Z^\gamma};\label{E}\\ \label{A7Z} Z^2&=&1+cZ^{2(\gamma-1)}, \end{eqnarray} where we introduce the definitions \begin{equation}\label{C} c={C\over r^{2(4-3\gamma)}} \end{equation} and \begin{equation}\label{PARA} A=\Omega^2\mathbf{P}^{2-\gamma},~ B={\mathbf{P}^\gamma\over \gamma},~ C={\Omega^2\over \mathbf{P}^{2(\gamma-1)}}. \end{equation} We will refer to $T$ as the \underline{tension term} and to $P$ as the \underline{potential term}. We can now state the basic question of the variational procedure: given $\gamma$, can the total energy in Eq.~(\ref{E}) be minimized by varying $r$, subject to the constraint given in Eq.~(\ref{A7Z})? An affirmative answer gives rise to a stable configuration in the nonabelian fluid within the variational method. To address this question, we {\em first} consider some general implications of the constraint equation (\ref{A7Z}) and of its solution $Z$ as a function of $c$. Eq.~(\ref{A7Z}) cannot be solved algebraically for generic values of $\gamma$. In other words, Eq.~(\ref{A7Z}) is transcendental in general; even for simple rational powers like $\gamma=7/6,6/5,7/5,8/5,9/5,11/6$, the roots of this equation cannot be written down in closed form. Now we rewrite Eq.~(\ref{A7Z}) in another form: \begin{equation}\label{A7c} c={Z^2-1\over Z^{2(\gamma-1)}}. \end{equation} Since $Z$, as a ratio of two densities, is always nonnegative, we see immediately that $Z>1$ for all $c>0$. \footnote{Technically, $\rho$ can be zero at some points.
However, such points do not contribute to the energy; they can therefore be excluded from the integral.} The next step is to consider the monotonicity of $Z$ in $c$. Taking derivatives on both sides of Eq.~(\ref{A7c}), we get \begin{equation} {dc\over dZ}={2(2-\gamma)\over Z^{2\gamma-3}}+{2(\gamma-1)\over Z^{2\gamma-1}}. \end{equation} So as long as $1<\gamma<2$, $dc/dZ>0$ and $Z$ is strictly monotonic in $c$. Henceforth, we will assume \begin{equation} 1\le \gamma\le 2. \end{equation} Furthermore, we claim a very useful inequality: \begin{equation}\label{Ie} Z>(\gamma-1)cZ^{2\gamma-3}. \end{equation} In fact, taking derivatives of both sides of Eq.~(\ref{A7Z}) with respect to $c$, one can derive the relation \begin{equation} Z-(\gamma-1)cZ^{2\gamma-3}={Z^{2\gamma-2}\over 2Z^\prime}. \end{equation} The claimed inequality then follows from the fact that the right-hand side of the above equation is positive. So much for our general consideration of $Z$. \subsection{$\gamma=4/3$} In the following we discuss the existence of variational solutions for different values of $\gamma$. Recall the definition $c=Cr^{2(3\gamma-4)}$. An important observation is that $\gamma=4/3$ is a critical value, for which the density ratio $Z$ is independent of the configuration size $R$. So even though the constraint in Eq.~(\ref{A7Z}) can be solved algebraically in this case, the details of the solution are irrelevant to the existence of a stable configuration. In fact, the total energy is of the form \begin{equation} E={const.\over R} \end{equation} where the unspecified constant comes from the dimensionless integral. Physically, this simplicity follows directly from the fact that the case $\gamma=4/3$ describes a massless particle system. The immediate inference is that no stable fluid configuration with fixed topological charge exists at finite $R$. For the physics of the QGP, this is not bad news, because we expect to see the expansion process of the fireball of nuclear matter.
So we refer to this type of unstable configuration as the {\em unstable color skyrmion}. We can imagine that once an unstable configuration is generated, it can be stabilized through a process in which the size $R$ is enlarged and the energy is dissipated. Because the process is continuous, the topological charge is not changed during this process. However, the final state is a null state, in which the configuration is completely diluted and the energy is completely dissipated. \subsection{Asymptotic Analysis} We discuss the asymptotic behavior of the total energy in the limits $r\rightarrow 0$ and $\infty$ for $1<\gamma<4/3$ and $4/3<\gamma<2$. To do so, we need to solve Eq.~(\ref{A7Z}) approximately in these two limits. In fact, if $c=0$, the physical solution to Eq.~(\ref{A7Z}) is $Z=1$. So, for small $c$, the equation can be solved approximately by \begin{equation} Z\approx 1+{c\over 2}. \end{equation} For large $c$, we recast Eq.~(\ref{A7Z}) in the form \begin{equation}\label{Z} {Z^2\over c}={1\over c}+Z^{2(\gamma-1)}. \end{equation} Let $Z\sim c^\lambda$ with a positive power $\lambda$ such that the first term on the right-hand side of (\ref{Z}) can be dropped. The approximate solution is \begin{equation} Z=c^{1\over 2(2-\gamma)}. \end{equation} In the region $4/3<\gamma<2$ for small $r$, the contribution of $Z$ can be omitted and the potential term is the dominant part of the energy in Eq.~(\ref{E}), \begin{equation} H\sim {\mu\over r^{3(\gamma-1)}}. \end{equation} In the region $4/3<\gamma<2$ for large $r$, \begin{equation}\label{ASYMPT} Z\sim r^{(3\gamma-4)/(2-\gamma)},~~ T\sim {\mu\over r},~~ P\sim {\mu\over r^{(5\gamma-6)/(2-\gamma)}}; \end{equation} therefore, the tension term dominates. In the region $1<\gamma<4/3$ for large $r$, the contribution of $Z$ can be omitted and again the potential term is dominant. In the region $1<\gamma<4/3$ for small $r$, from the same equation (\ref{ASYMPT}), the tension term dominates.
The common feature of both regions of $\gamma$ is that the energy blows up in the $r\rightarrow 0$ limit and vanishes as $r\rightarrow\infty$. So the most immediate conclusion from the above asymptotic analysis is that an unstable color skyrmion always exists in each region of $\gamma$. Nevertheless, there is a subtle sub-region within each of $1<\gamma<4/3$ and $4/3<\gamma<2$, arising from the behavior of the energy in the small-$r$ limit. In the sub-region $5/3<\gamma<2$ for small $r$, the tension term behaves like \begin{equation}\label{ASYM1} T\sim \mu r^{3\gamma-5}\rightarrow 0. \end{equation} Combining the second formula in Eq.~(\ref{ASYMPT}) and Eq.~(\ref{ASYM1}), we conclude that the profile of the tension term has a maximum at some finite $r$ in the region $5/3<\gamma<2$. In the sub-region $1<\gamma<6/5$ for small $r$, the potential term behaves like \begin{equation}\label{ASYM2} P\sim \mu r^{(6-5\gamma)/(2-\gamma)}, \end{equation} which implies that the profile of the potential term has a maximum at some finite $r$ in the region $1<\gamma<6/5$. These results reveal the possibility that metastable color skyrmions may exist in these two sub-regions. By a metastable soliton, we mean that the energy has a local minimum. The implication for physics is the following. At the classical level, if the configuration is generated with a large enough size $R$, then it is unstable and can only be stabilized by expansion and dissipation until the configuration is completely diluted. For a small-sized configuration, however, there is a locally stable point with finite energy. \subsection{$6/5<\gamma<5/3$} Beyond the asymptotic analysis, we conduct a monotonicity analysis to establish the claim that {\em there does not exist a metastable color skyrmion for $6/5<\gamma<5/3$}. This conclusion is certainly true for $\gamma=4/3$ from our previous discussion. For $4/3<\gamma<5/3$, the conclusion is also easy to reach.
In fact, from the monotonicity of $Z$ in $c$ and the dependence of $c$ on $r$, it is easy to conclude that the total energy is strictly decreasing in $r$. And the two limiting cases $r\rightarrow 0, \infty$ have been considered in the asymptotic analysis. The case $6/5<\gamma<4/3$ is more subtle. The monotonicity of the total energy is determined by the two combinations \begin{equation}\label{W1} W_1={Z\over c^{3(\gamma-1)\over 2\gamma(4-3\gamma)}},~~ W_2={Z\over c^{5-3\gamma\over 2(4-3\gamma)(2-\gamma)}}, \end{equation} which are implicit in the expressions of the potential term and the tension term in the total energy. By Eq.~(\ref{A7Z}), we know $W_2$ satisfies the relation \begin{equation} W_2^2=c^{-{3\gamma-5\over (3\gamma-4)(2-\gamma)}}+c^{1\over 3\gamma-4} W_2^{2(\gamma-1)}. \end{equation} Taking derivatives on both sides, \begin{equation}\label{D} 2\Bigl(W_2-{\gamma-1\over c^{1\over 4-3\gamma}} W_2^{2\gamma-3}\Bigr){dW_2\over dc}=- \frac{Z^{2(\gamma-1)}+{5-3\gamma\over (2-\gamma)c}}{(4-3\gamma)c^{{5-3\gamma\over (4-3\gamma)(2-\gamma)}}}, \end{equation} where we have expressed the right-hand side back in terms of $Z$. The bracketed factor on the left-hand side can also be expressed in terms of $Z$ as \begin{equation} W_2-{\gamma-1\over c^{1\over 4-3\gamma}} W_2^{2\gamma-3}={Z-(\gamma-1)cZ^{2\gamma-3}\over c^{(3\gamma-5)\over 2(3\gamma-4)(2-\gamma)}}>0. \end{equation} The last inequality follows from (\ref{Ie}), so we conclude that \begin{equation} {dW_2\over dc}<0, \end{equation} namely $W_2$ is strictly decreasing in $c$, and hence increasing in $r$. Due to Eq.~(\ref{A7Z}), $W_1$ satisfies \begin{equation} W_1^2=c^{-{3(\gamma-1)\over\gamma(4-3\gamma)}}+c^{6-5\gamma\over\gamma(4-3\gamma)}W_1^{2\gamma-2}.
\end{equation} Taking derivatives on both sides, \begin{equation}\label{W1I} 2(W_1-{\gamma-1\over c^{5\gamma-6\over \gamma(4-3\gamma)}}W_1^{2\gamma-3}){dW_1\over dc}=-\frac{(5\gamma-6)Z^{2(\gamma-1)}+{3(\gamma-1)\over c}}{\gamma(4-3\gamma)c^{3(\gamma-1)\over \gamma(4-3\gamma)}}. \end{equation} Again, the sign of $dW_1/dc$ is determined by the right-hand side, because the inequality (\ref{Ie}) implies that the bracketed factor on the left-hand side is positive. So Eq.~(\ref{W1I}) implies \begin{equation} {dW_1\over dc}<0, \end{equation} namely $W_1$ is also strictly decreasing in $c$. \footnote{Certain uniformity conditions for the profile functions $\Omega$ and $\mathbf{P}$ are expected to guarantee that the conclusion does not change after the integral over the space coordinates $\int d^3X$.} To conclude this part of the monotonicity analysis, we see that the total energy is strictly decreasing in $r$ in the region $6/5<\gamma<5/3$. In addition, the monotonicity cannot be established from the right-hand sides of Eqs.~(\ref{D},\ref{W1I}) if $\gamma>5/3$ or $\gamma<6/5$. We leave the issue of possible metastable configurations to future work. \subsection{$\gamma=2$} Now we consider the boundary value $\gamma=2$. This is the case that has been worked out with a more specific ansatz in our previous paper \cite{DN}. The same result can be established on more general grounds here. For $\gamma=2$, Eq.~(\ref{A7Z}) can be solved by \begin{equation} n=\sqrt{\rho^2-{\omega^2\over\alpha^2}}. \end{equation} The Hamiltonian density in Eq.~(\ref{A10}) is then \begin{equation} {\cal H}={\omega^2\over 2\alpha}+{\alpha\rho^2\over 2}. \end{equation} The total energy is given by \begin{equation} H={\mu\over 2}\int d^3X \Bigl(r\Omega^2+{\mathbf{P}^2\over r^3}\Bigr). \end{equation} The existence of a color skyrmion that minimizes the energy at a certain finite $R$ is straightforward.
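As a numerical aside (ours), the monotonicity statements and the sub-region maxima claimed in the preceding subsections can be spot-checked directly. In the sketch below we set $C=\mu=1$ and suppress the profile integrals, so only the $r$- and $c$-scalings of the terms are probed; the sample values of $\gamma$ are representative choices of ours.

```python
# Numerical spot-checks (illustrative only; C = mu = 1, profile integrals
# suppressed) of the claims above:
#  (i) W1 and W2 are strictly decreasing in c for 6/5 < gamma < 4/3;
# (ii) the tension term T ~ r^(3g-5) Z^(g-2) has an interior maximum
#      for 5/3 < gamma < 2, and the potential term P ~ r^(3-3g) Z^(-g)
#      for 1 < gamma < 6/5, where c = r^(2(3g-4)).

def solve_Z(c, g, tol=1e-12):
    """Solve Z^2 = 1 + c*Z^(2(g-1)) for Z >= 1 by bisection."""
    f = lambda Z: Z * Z - 1.0 - c * Z ** (2.0 * (g - 1.0))
    lo, hi = 1.0, 2.0
    while f(hi) < 0.0:          # grow the bracket until the root is inside
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# (i) monotonicity of W1, W2 at a representative gamma in (6/5, 4/3)
g = 1.25
e1 = 3.0 * (g - 1.0) / (2.0 * g * (4.0 - 3.0 * g))
e2 = (5.0 - 3.0 * g) / (2.0 * (4.0 - 3.0 * g) * (2.0 - g))
cs = [10.0 ** (0.1 * i - 2.0) for i in range(41)]        # c in [1e-2, 1e2]
W1 = [solve_Z(c, g) / c ** e1 for c in cs]
W2 = [solve_Z(c, g) / c ** e2 for c in cs]
assert all(a > b for a, b in zip(W1, W1[1:]))            # dW1/dc < 0
assert all(a > b for a, b in zip(W2, W2[1:]))            # dW2/dc < 0

# (ii) interior maxima of the tension/potential profiles
def has_interior_max(vals):
    k = vals.index(max(vals))
    return 0 < k < len(vals) - 1

rs = [10.0 ** (0.05 * i - 3.0) for i in range(121)]      # r in [1e-3, 1e3]
g = 1.8                                                   # in (5/3, 2)
T = [r ** (3*g - 5) * solve_Z(r ** (2*(3*g - 4)), g) ** (g - 2) for r in rs]
assert has_interior_max(T)
g = 1.1                                                   # in (1, 6/5)
P = [r ** (3 - 3*g) * solve_Z(r ** (2*(3*g - 4)), g) ** (-g) for r in rs]
assert has_interior_max(P)
```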
\subsection{$\gamma=1$} In this case, Eq.~(\ref{A7Z}) is solved by \begin{equation} n={\rho\over\sqrt{1+{\omega^2\over \alpha^2}}} \end{equation} and the energy is \begin{equation} E=\mu\int d^3X ~\mathbf{P}\sqrt{1+{\Omega^2\over r^2}}. \end{equation} The unstable color skyrmion has a finite energy in the limit $R\rightarrow \infty$, and the final-state energy is \begin{equation} E\longrightarrow \mu\int d^3X ~\mathbf{P}. \end{equation} At this point, we have established the results for the time-independent color skyrmions listed in the introductory section. \section{$\gamma$-Dependence of Total Energy} \label{DG} In the previous section, we solved the variational problem for the nonabelian fluid system at fixed $\gamma$. In reality, however, the power $\gamma$ changes during the expansion of the QGP. So in this section, we consider the dependence of the total energy on the power $\gamma$. To do so, we will calculate $\partial H/\partial\gamma$. We assume the profiles of $\Omega$, $\mathbf{P}$ and the parameters $\mu$, $R$ are independent of $\gamma$. We claim that $\partial H/\partial\gamma<0$ in the ``physical parameter region'' $1<\gamma<4/3$, provided the soliton size $R$ is large enough. In fact, \begin{eqnarray}\nonumber {1\over\mu}{\partial H\over \partial \gamma}&=&\int d^3X\Bigl(({\Omega^2\mathbf{P}^{2-\gamma}\over r^{5-3\gamma}Z^{2-\gamma}} -{\mathbf{P}^\gamma\over \gamma r^{3(\gamma-1)}Z^\gamma})\ln{r^3 Z\over \mathbf{P}}\\\nonumber &&-\Bigl[(2-\gamma){\Omega^2\mathbf{P}^{2-\gamma}\over r^{5-3\gamma}Z^{3-\gamma}} +{\mathbf{P}^\gamma\over r^{3(\gamma-1)}Z^{\gamma+1}}\Bigr]{\partial Z\over\partial\gamma}\\ &&-{\mathbf{P}^\gamma\over \gamma^2r^{3(\gamma-1)}Z^\gamma} \Bigr).
\end{eqnarray} We can impose the following sufficient conditions to guarantee $\partial H/\partial \gamma<0$: \begin{eqnarray}\label{I1+} {Z^{2(\gamma-1)}\over r^{2(4-3\gamma)}}&<&{\mathbf{P}^{2(\gamma-1)}\over \gamma \Omega^2};\\\label{I2+} {\partial Z\over\partial\gamma}&>&0;\\\label{I3+} r^3Z&>&\mathbf{P}. \end{eqnarray} By using Eq.~(\ref{A7Z}) and the definition of $C$ in Eq.~(\ref{PARA}), one can show that (\ref{I1+}) is equivalent to \begin{equation}\label{I1+M} Z^2<{\gamma+1\over\gamma}. \end{equation} By the fact that $dZ/dc>0$ and the relation in Eq.~(\ref{C}), we conclude that for large enough $r$, the inequality (\ref{I1+M}), hence (\ref{I1+}), is satisfied if $1<\gamma<4/3$. For (\ref{I2+}), we take derivatives with respect to $\gamma$ on both sides of (\ref{A7Z}), \begin{equation} \Bigl(Z-(\gamma-1)cZ^{2\gamma-3}\Bigr){\partial Z\over\partial\gamma}= cZ^{2\gamma-2}\ln{r^3Z\over \mathbf{P}}. \end{equation} Recall the relation (\ref{Ie}). So (\ref{I2+}) is satisfied if (\ref{I3+}) is satisfied. To show (\ref{I3+}), we need to check the monotonicity of the combination \begin{equation} W_3\equiv c^{3/(6\gamma-8)}Z. \end{equation} We claim that \begin{equation}\label{CLAIM} {dW_3\over dc}<0. \end{equation} In fact, $W_3$ satisfies the equation \begin{equation} W_3^2=c^{3/(3\gamma-4)}+c^{2/(3\gamma-4)}W_3^{2(\gamma-1)}. \end{equation} Taking derivatives in $c$: \begin{equation}\label{D1+} 2(W_3-(\gamma-1)c^{2/(3\gamma-4)}W_3^{2\gamma-3}){dW_3\over dc}=-{1\over 4-3\gamma}\Bigl(3c^{(7-3\gamma)/(3\gamma-4)}+2c^{(6-3\gamma)/(3\gamma-4)}W_3^{2\gamma-2}\Bigr). \end{equation} It is easy to show that \begin{equation} W_3-(\gamma-1)c^{2/(3\gamma-4)}W_3^{2\gamma-3}=c^{3/(6\gamma-8)}\Bigl(Z-(\gamma-1)cZ^{2\gamma-3}\Bigr). \end{equation} Therefore, this factor is positive. So from (\ref{D1+}), it is obvious that $dW_3/dc<0$ for $\gamma<4/3$. We see that the conditions (\ref{I2+}) and (\ref{I3+}) hold universally, provided $\gamma$ is in the physical region.
But (\ref{I1+}) requires a large configuration size. Now we want to estimate the lower bound for $r$. From (\ref{A7Z}) and (\ref{I3+}), we derive an interesting relation: \begin{equation} Z^2>1+{\Omega^2\over r^2}. \end{equation} Combined with (\ref{I1+M}), we have \begin{equation} 1+{\Omega^2\over r^2}<Z^2<{1+\gamma\over \gamma}\Rightarrow r^2>\gamma\Omega^2. \end{equation} So, given an ansatz profile function $\Omega$, we know how to estimate the configuration size such that the energy is strictly decreasing with $\gamma$. The physical meaning of the condition $\partial H/\partial \gamma<0$ is the following. It is easy to understand from the previous section that as the soliton size $r$ expands, the energy decreases. On the other hand, the equation of state in Eq.~(\ref{EOS}) implies that, as the energy density decreases, $\gamma$ becomes smaller as well. So there is a trajectory in the $(\gamma,r)$-plane along which the total energy can be expected to remain unchanged during the expansion process. \section{Time Evolution of Color Skyrmions}\label{TDS} So far we have dealt with time-independent fluid configurations. In this section, we consider time-dependent configurations. To do so, we need to take into account the presence of $\omega_0$, which plays no role in the time-independent case. Accordingly, we need to eliminate $j^0$ as well as $\vec{j}$ by the equation of motion \begin{equation} {\delta {\cal L}\over\delta j^\mu}=\omega_\mu-\alpha n^{\gamma-2}j_\mu=0, \end{equation} where we have used the $\gamma$-law. Eliminating $j^\mu$ by $j^\mu \omega_\mu=\alpha n^\gamma$ and $\omega^\mu \omega_\mu=\alpha n^{\gamma-2}j_\mu\omega^\mu$, we get \begin{equation} \omega^\mu\omega_\mu=\alpha^2n^{2\gamma-2}. \end{equation} The Lagrangian density is therefore expressed in terms of $\omega_\mu$ as \begin{equation} {\cal L}={\gamma-1\over\gamma}\alpha n^\gamma = {\gamma-1\over\gamma}\alpha^{-{1\over\gamma-1}}\Bigl(\omega^\mu\omega_\mu\Bigr)^{\gamma\over 2(\gamma-1)}.
\end{equation} Since in this paper we concentrate on classical configurations, the pre-factor ${\gamma-1\over\gamma}\alpha^{-{1\over\gamma-1}}$ does not matter at this level. So we will deal with the following Lagrangian \begin{equation}\label{C3} L=\int d^3x \Bigl(\omega_0^2-\vec{\omega} \cdot\vec{\omega}\Bigr)^{\gamma\over 2(\gamma-1)}. \end{equation} As in the case of the time-independent configurations, we make the following scaling assumptions: \begin{equation} \omega_0^2=f_1{\dot{R}^2\over R^2},~~ \vec{\omega}\cdot\vec{\omega}={f_2\over R^2}, \end{equation} where $f_{1,2}$ are two dimensionless functions depending only on $\vec{X}=\vec{x}/R$. Then the Lagrangian in Eq.~(\ref{C3}) becomes \begin{equation}\label{C4} L(R,\dot{R})=R^{2\gamma-3\over\gamma-1}\int d^3X \Bigl( f_1\dot{R}^2-f_2\Bigr)^{\gamma\over 2(\gamma-1)}. \end{equation} Next we derive the Euler-Lagrange equation for the Lagrangian in Eq.~(\ref{C4}). First of all, \begin{equation} {\partial L\over\partial \dot{R}}={\gamma\over\gamma-1} R^{2\gamma-3\over \gamma-1}\dot{R} \int d^3X f_1\Bigl(f_1\dot{R}^2-f_2\Bigr)^{2-\gamma\over 2(\gamma-1)}, \end{equation} from which \begin{eqnarray}\nonumber {d\over dt}{\partial L\over\partial\dot{R}}&=&{\gamma\over \gamma-1}\Biggl( {2-\gamma\over \gamma-1}R^{2\gamma-3\over \gamma-1}\dot{R}^2\ddot{R}\int d^3X f_1^2\Bigl( f_1\dot{R}^2-f_2\Bigr)^{4-3\gamma\over 2(\gamma-1)}\\\nonumber && +R^{2\gamma-3\over\gamma-1}\ddot{R}\int d^3X f_1\Bigl(f_1\dot{R}^2-f_2\Bigr)^{2-\gamma\over 2(\gamma-1)}\\&& +{2\gamma-3\over \gamma-1} R^{\gamma-2\over\gamma-1}\dot{R}^2\int d^3X f_1\Bigl(f_1\dot{R}^2-f_2\Bigr)^{2-\gamma\over 2(\gamma-1)} \Biggr). \end{eqnarray} Further, \begin{equation} {\partial L\over\partial R}={2\gamma-3\over\gamma-1} R^{\gamma-2\over \gamma-1} \int d^3X \Bigl(f_1\dot{R}^2-f_2\Bigr)^{\gamma\over 2(\gamma-1)}.
\end{equation} Thus the Euler-Lagrange equation is given by \begin{eqnarray}\nonumber {2-\gamma\over\gamma-1}R\dot{R}^2\ddot{R}\int d^3X f_1^2\Bigl( f_1\dot{R}^2-f_2\Bigr)^{4-3\gamma\over 2(\gamma-1)}&&\\\nonumber +R\ddot{R}\int d^3X f_1\Bigl(f_1\dot{R}^2-f_2\Bigr)^{2-\gamma\over 2(\gamma-1)}&&\\\nonumber +{2\gamma-3\over\gamma-1}\dot{R}^2\int d^3X f_1\Bigl(f_1\dot{R}^2-f_2\Bigr)^{2-\gamma\over 2(\gamma-1)}&&\\ -{2\gamma-3\over\gamma}\int d^3X \Bigl(f_1\dot{R}^2-f_2\Bigr)^{\gamma\over 2(\gamma-1)}&=&0.\label{ELE} \end{eqnarray} In the following we investigate two particular cases, $\gamma=2$ and $\gamma=4/3$. \subsection{$\gamma=2$} In this case, a stable time-independent color skyrmion exists. So we expect the solution of the time-evolution equation to take the form of oscillations around the stable point. In fact, the Euler-Lagrange equation (\ref{ELE}) reduces to \begin{equation}\label{ELE2} R\ddot{R}+{\dot{R}^2\over 2}+\mu=0, \end{equation} where \begin{equation} \mu=\frac{\int d^3Xf_2}{2\int d^3Xf_1}. \end{equation} To solve Eq.~(\ref{ELE2}), we change its form to \begin{equation} {d^2\over dt^2}R^{3/2}+{3\mu\over 2R^{1/2}}=0. \end{equation} Let $q=R^{3/2}$ and $p=dq/dt$; then \begin{equation} {p\over\mu}{dp\over dq}+{3\over 2q^{1/3}}=0, \end{equation} which can be integrated as \begin{equation}\label{TDS2} {p^2\over 2\mu}+{9\over 4}q^{2/3}={\cal E}, \end{equation} where ${\cal E}$ is an integration constant. The solution in Eq.~(\ref{TDS2}) means that the color skyrmion for $\gamma=2$ forms a one-dimensional Hamiltonian system with a potential of the form $q^{2/3}$, and the motion is always bounded and oscillating. \subsection{$\gamma=4/3$} We know that only unstable color skyrmions exist in the time-independent case. Therefore, for the time-dependent case, we expect the solution to describe the expansion of the color skyrmion, with the size going from some initial value to infinity.
In fact, the Euler-Lagrange equation (\ref{ELE}) becomes \begin{equation}\label{ELE4/3a} \dot{R}^2(R\ddot{R}-{\dot{R}^2\over 4})-\beta_2(R\ddot{R}-{\dot{R}^2\over 2})+\beta_4=0, \end{equation} where \begin{equation} \beta_2=\frac{\int d^3Xf_1f_2}{3\int d^3Xf_1^2},~~ \beta_4=\frac{\int d^3Xf_2^2}{12\int d^3Xf_1^2}. \end{equation} We will solve Eq.~(\ref{ELE4/3a}) approximately. To do this, we first rewrite the equation in the form \begin{equation}\label{ELE4/3b} {R\ddot{R}\over \dot{R}^2}-{1\over 4}={\beta_2\over \dot{R}^2}({R\ddot{R}\over\dot{R}^2}-{1\over 2})-{\beta_4\over \dot{R}^4} \end{equation} and make the following scaling assumptions: \begin{equation}\label{Scaling} |\dot{R}|\gg 1,~ |R\ddot{R}/\dot{R}^2|\sim 1,~ \beta_{2,4}\sim 1. \end{equation} To zeroth order $R\approx R^{(0)}$, Eq.~(\ref{ELE4/3b}) can be approximated by \begin{equation} {R^{(0)}\ddot{R}^{(0)}\over \dot{R}^{(0)2}}-{1\over 4}=0, \end{equation} which is solved by \begin{equation}\label{EX} R^{(0)}=R_0({t\over \tau}+1)^{4/3}, \end{equation} with $R_0$, $\tau$ two integration constants. The scaling rule in Eq.~(\ref{Scaling}) works if the ratio \begin{equation} \epsilon={\tau\over R_0}\ll 1. \end{equation} It is very interesting to see that $R^{3/4}$ is linear in time. Eq.~(\ref{EX}) describes the expansion of the color skyrmion for $\tau>0$ and the contraction or collapse for $\tau<0$. We will only take the former case and consider the latter as unphysical. We will refer to the motion in Eq.~(\ref{EX}) as {\em linear expansion}. To check the stability of the above-mentioned expansion motion, we carry out the perturbation to order $\epsilon^2$ by writing \begin{equation} R=R^{(0)}+\delta R \end{equation} and introducing the following quantities: \begin{equation} \delta R=R_0\epsilon^2y,~ x={t\over\tau}+1,~ f^\prime={df\over dx}.
\end{equation} The perturbation satisfies the following equation: \begin{equation}\label{4/3P} y^{\prime\prime}-{2y^\prime\over 3 x}+{4y\over 9x^2}=-{\beta_2\over 4x^{4/3}}. \end{equation} Eq.~(\ref{4/3P}) has the particular solution \begin{equation} y={9\beta_2\over 8}x^{2/3}. \end{equation} It is easy to see that the linearly independent solutions to the homogeneous equation in (\ref{4/3P}) are $x^{4/3}$ and $x^{1/3}$. So the general solution is given by \begin{equation} y={9\beta_2\over 8}x^{2/3}+Ax^{4/3}+Bx^{1/3}, \end{equation} where $A$, $B$ are two integration constants. The constants $A$, $B$ are fixed by the initial conditions. We let \begin{equation} R|_{t=0}=R^{(0)}|_{t=0},~~ \dot{R}|_{t=0}=\dot{R}^{(0)}|_{t=0}; \end{equation} then \begin{equation} y|_{x=1}=0,~ y^\prime|_{x=1}=0, \end{equation} from which we get \begin{equation} A=-{3\over 8}\beta_2,~~ B=-{3\over 4}\beta_2. \end{equation} The solution to Eq.~(\ref{ELE4/3a}), to $\epsilon^2$ order, is then given by \begin{equation} R=R_0\Bigl((1-{3\over 8}\beta_2\epsilon^2)x^{4/3}+{9\over 8}\beta_2\epsilon^2x^{2/3}-{3\over 4}\beta_2\epsilon^2x^{1/3}\Bigr), \end{equation} where $x=1+t/\tau$. To see the stability of the expansion, we consider the long-time behavior. It is easy to see that the leading-order contribution in the limit $t\rightarrow\infty$ is given by the $x^{4/3}$ term, which is the linear expansion of Eq.~(\ref{EX}). \section{Conclusion and Discussion} \label{DIS} Under the assumption of a $\gamma$-law equation of state for a nonabelian fluid, we have shown in this paper that the value of $\gamma$ is crucial to the existence and the properties of color skyrmions for time-independent configurations. For $\gamma$ between $6/5$ and $5/3$, there is only an unstable color skyrmion. For $\gamma=2$, the skyrmion is stable. The case $\gamma=1$ is special among the unstable skyrmions, since it has finite energy even after it is diluted infinitely.
For $1<\gamma<6/5$ and $5/3<\gamma<2$, we cannot rule out the possibility of metastable skyrmions in addition to the unstable ones. For the two particular values $\gamma=4/3$ and $\gamma=2$, we furthermore consider time-dependent configurations: for the latter we find oscillating evolution, and for the former linear expansion. We now turn to the question of how this relates to the QGP. As mentioned in the introduction, if we try to model the lattice estimate of the equation of state as a $\gamma$-law, the value of $\gamma$ varies between $4/3$ and $1$ as the energy density decreases. Therefore, our analysis of the expanding soliton for $\gamma=4/3$ takes on a special significance. In the creation of the QGP by nuclear collision, we start at high energy densities, i.e. at values of $\gamma$ close to $4/3$. The collision process also favors the creation of small solitons during the phase transition, since establishing coherent fields or color densities over large scales is generally more difficult and less probable. Based on these two observations, and in the light of our general analysis, an expanding soliton would seem to be the generic case we can expect for the QGP. There are, of course, some caveats to our analysis. We have neglected thermal gradients as well as dissipative effects such as viscosity. (The latter quantity, although small, is not zero.) It would be interesting to incorporate some of these effects. The soliton, by virtue of its nontrivial topology, is a generic feature of nonabelian fluid dynamics. We may expect many qualitative aspects to hold true even with thermal gradients, viscosity, etc., although there can be significant changes in the details. We note that the possibility of color skyrmions arises also in the recent effective action description of Wilson lines \cite{Pisarski1,Pisarski2}. It is clear that skyrmions in the QGP merit further analysis. Other hydrodynamic approaches to the QGP can be found, for example, in \cite{MM}.
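The two time-dependent results above are easy to verify numerically. The sketch below (ours; $\mu$ and $\beta_2$ are set to arbitrary positive values, and the RK4 step size is an illustrative choice) integrates the $\gamma=2$ equation in the variables $q=R^{3/2}$, $p=\dot q$, checking conservation of ${\cal E}=p^2/2\mu+(9/4)q^{2/3}$ and the turning point it implies, and then verifies the $\gamma=4/3$ perturbation solution term by term.

```python
# Illustrative checks (ours) of the time-dependent results.
# Part 1, gamma = 2: integrate q'' = -(3 mu/2) q^(-1/3), with q = R^(3/2),
# by RK4 and verify that E = p^2/(2 mu) + (9/4) q^(2/3) is conserved,
# so the motion is bounded by the turning point q_max = (E/(9/4))^(3/2).
mu = 1.0

def accel(q):
    return -1.5 * mu * q ** (-1.0 / 3.0)

def rk4_step(q, p, dt):
    k1q, k1p = p, accel(q)
    k2q, k2p = p + 0.5 * dt * k1p, accel(q + 0.5 * dt * k1q)
    k3q, k3p = p + 0.5 * dt * k2p, accel(q + 0.5 * dt * k2q)
    k4q, k4p = p + dt * k3p, accel(q + dt * k3q)
    return (q + dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6.0,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0)

def energy(q, p):
    return p * p / (2.0 * mu) + 2.25 * q ** (2.0 / 3.0)

q, p = 1.0, 0.5
E0 = energy(q, p)
qs = [q]
for _ in range(600):                  # integrate to t = 0.6, well before q -> 0
    q, p = rk4_step(q, p, 1e-3)
    qs.append(q)
assert abs(energy(q, p) - E0) < 1e-9          # first integral conserved
assert max(qs) <= (E0 / 2.25) ** 1.5 + 1e-9   # bounded by the turning point
assert 0 < qs.index(max(qs)) < len(qs) - 1    # q turns around: oscillation

# Part 2, gamma = 4/3: the perturbation
# y = (9 b2/8) x^(2/3) - (3 b2/8) x^(4/3) - (3 b2/4) x^(1/3)
# solves y'' - 2y'/(3x) + 4y/(9x^2) = -b2/(4 x^(4/3)), y(1) = y'(1) = 0.
b2 = 0.7                                      # arbitrary positive beta_2

def y(x):
    return b2 * (9/8 * x**(2/3) - 3/8 * x**(4/3) - 3/4 * x**(1/3))

def yp(x):   # y'
    return b2 * (3/4 * x**(-1/3) - 1/2 * x**(1/3) - 1/4 * x**(-2/3))

def ypp(x):  # y''
    return b2 * (-1/4 * x**(-4/3) - 1/6 * x**(-2/3) + 1/6 * x**(-5/3))

assert abs(y(1.0)) < 1e-12 and abs(yp(1.0)) < 1e-12
for x in (0.5, 1.0, 2.0, 10.0):
    res = ypp(x) - 2*yp(x)/(3*x) + 4*y(x)/(9*x*x) + b2/(4 * x**(4/3))
    assert abs(res) < 1e-10                   # ODE satisfied identically
```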
\vskip .2in\noindent This work was supported in part by a CUNY Collaborative Research Incentive grant. The author appreciates V. P. Nair's mentoring on this project.
\section{Introduction} \vspace{-0.1cm} Cancer genomics depends upon the identification of variants that are associated with particular types of cancers. Because such variants are deleterious, they are not typically part of the ancient standing variation spread across all humans; instead, they are more recent mutations specific to particular populations. Indeed, such variants are often prominent only in particular ethnic groups due to genetic drift \cite{Foulkes:2002cc}. In addition, most associations are mapped not to causal variants, but to more common neighboring variants that are present on genotyping arrays. Since these neighboring variants are linked to the causal variant via correlation structures (linkage) that are specific to each population, the ancestry of the genomic segment in which the correlated variant is found becomes crucial. Indeed, as a result of linkage and epistatic effects, genomic variants that are associated with cancer in one ancestry may have no association \cite{Wang:2018js}, or may even have an opposite association \cite{Rajabli:2018cb}, in another ancestry. This phenomenon persists even in admixed individuals possessing multiple ancestries, such as African Americans; in such individuals the ancestry (European or African) of the specific genomic fragment containing the associated variant has been found to reverse the association \cite{shortRajabli:2018cb}. This phenomenon, dubbed ``flip-flop,'' is not an unusual case; rather, ancestry-specific effects in genetic association studies are the rule. For this reason, polygenic risk scores (PRS), increasingly important to genomic cancer prediction \cite{Mavaddat:2019ix}, have been found to be several times less accurate when used on populations of different ancestry from the one on which they were trained \cite{shortMartin:2019bm}.
As a result of these ancestry-specific effects, accurately identifying the ancestry of each segment of the genome is becoming increasingly crucial for genomic medicine. Such algorithms, known as local ancestry inference, have been developed both for historical population genetics \cite{Tang2006, Sundquist2008, ShortPrice:2009bga, sankararaman2008estimating, Durand:2014hj, maples2013rfmix, vaegan2019, lainet2020} and for recreational consumer ancestry products \cite{Durand:2015jx}, but none have been developed to date for the particular demands of clinical genomic medicine. Such an algorithm would need to provide ancestry not as a culturally defined label, but as continuous genetic coordinates that could be used as a covariate in prediction and association algorithms. This method is also important for deconvolving ancestry effects in genetic association studies. To date, most genome-wide association studies (GWAS) are conducted in populations of single ancestry (typically European) to avoid the confounding effects of ancestry, such as reversed associations. Researchers often avoid admixed populations, for instance African Americans or Hispanics, who encompass more than one ancestry, and avoid populations with too much genetic variation or too many diverse sub-populations, as is common within Africa. This has resulted in over 80\% of the individuals in GWAS to date being of European ancestry (and only 2\% of African ancestry) \cite{Sirugo:2019ie, Popejoy:2016di}. A reliable coordinate-based local ancestry algorithm would allow such studies to embrace diversity, rather than intentionally eschewing it, by providing an additional covariate along the genome (ancestry) that removes the confounding effects of ancestry-dependent genomic associations. With such a tool, medical researchers would no longer need to avoid admixed and globally diverse genetic study cohorts.
\section{Ancestry Inference} \vspace{-0.1cm} Here we present an accurate coordinate-based local ancestry inference algorithm, XGMix, that can be used for addressing ancestry-specific associations and predictions. XGMix uses modern single-ancestry reference populations to accurately predict the latitude and longitude of the closest modern source population for each segment of an individual's genome. These coordinate annotations along the genome can then be used as covariates for genome-wide association studies (GWAS) and for polygenic risk score (PRS) predictions. Estimation of an individual's ancestry, both globally and locally (i.e. assigning an ancestry estimate to each region of the chromosomal sequence), has been tackled with a wide range of methods and technologies \cite{Tang2006, Sundquist2008, ShortPrice:2009bga, sankararaman2008estimating, Durand:2014hj, maples2013rfmix, vaegan2019, lainet2020}. Local ancestry inference has traditionally been framed as a classification problem using pre-defined ancestries. Classification approaches provide discrete ancestry labels but can be highly inaccurate for neighboring populations (or population gradients) and intractable for genetically diverse populations with multiple sources. Geographical regression along the genome, although a much more challenging problem, could provide a continuous representation of ancestry capable of capturing the complexities of worldwide populations. XGMix consists of two layers of stacked gradient boosted trees (a genomic window-specific layer and a window-aggregating smoother) and can infer local ancestry with both classification probabilities and geographical coordinates along each phased chromosome. Here we demonstrate XGMix by training on whole genomes from real individuals from the five African populations included in the 1000 genomes project \cite{10002015global}.
We simulate admixed individuals of various generations using Wright-Fisher simulation \cite{maples2013rfmix} to create ground-truth labels of ancestry along the genome, and split this data for training and testing. As these reference African populations lie close to a single arc along the globe, we estimate positions along this arc, obtaining geographic assignments for each genomic segment. \begin{figure}[htp] \centering \includegraphics[width=0.85\textwidth]{plot_2.pdf} \caption{(a) The inferred coordinates for each genomic segment of an admixed Kenyan-Nigerian individual. The model was trained on all indicated African reference populations. (b-c) The inferred location of each genomic segment of a Kenyan-Nigerian (b) and Kenyan-Gambian (c) individual using the principal coordinate arc of the reference populations' locations. The bimodal distribution of Kenyan segments (green) may reflect the historical Bantu expansion from Cameroon into Kenya.} \label{fig:map} \end{figure}
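For intuition about the simulation step above, the following toy sketch (ours; a deliberate simplification of Wright-Fisher forward simulation, with uniformly placed rather than genetic-map-based crossovers, and all function and variable names hypothetical) shows how forward-in-time recombination produces admixed haplotypes whose per-SNP ancestry labels are known by construction:

```python
import random

def recombine(hap_a, hap_b, n_breaks, rng):
    """Copy alternately from two parental haplotypes at random breakpoints."""
    n = len(hap_a)
    breaks = sorted(rng.sample(range(1, n), min(n_breaks, n - 1)))
    parents = (hap_a, hap_b)
    child, prev = [], 0
    for i, bp in enumerate(breaks + [n]):
        child.extend(parents[i % 2][prev:bp])
        prev = bp
    return child

def simulate_admixed(founder_labels, n_snps, generations, rng=None):
    """Forward-simulate a pool of haplotypes; ancestry labels ride along."""
    rng = rng or random.Random(0)
    # generation 0: unadmixed founder haplotypes, one label per population
    pool = [[lab] * n_snps for lab in founder_labels]
    for _ in range(generations):
        # each child draws two random parents and 1-2 crossovers
        pool = [recombine(*rng.sample(pool, 2), 1 + rng.randrange(2), rng)
                for _ in range(len(pool))]
    return pool

pool = simulate_admixed(["LWK", "YRI", "LWK", "YRI"],
                        n_snps=1000, generations=8)
assert len(pool) == 4
assert all(len(h) == 1000 and set(h) <= {"LWK", "YRI"} for h in pool)
```

The resulting per-SNP labels serve as regression or classification targets for windowed models such as the two-layer architecture described above.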
\section{Introduction} Outflows and jets are ubiquitous structures that accompany the formation of low-mass stars \citep{Arce-2007, Lee20}, especially during the Class 0 and I evolutionary phases. As the material is accreted from the envelope or the protoplanetary disk onto the protostar, a fraction of the material is expelled as a result of angular momentum conservation.
The gas can be ejected in high-velocity collimated jets or in low-velocity disk winds \citep[and references therein]{Hartmann16-1}. In recent years, jet-like structures and molecular outflows have been reported in several young very low-mass (VLM) stars and brown dwarfs (BDs), mostly based on optical and infrared spectral and spectro-astrometric observations. Examples of well-studied VLM objects with jets are Par-Lup3-4 \citep{FC2005}, LS-R CrA 1 \citep{Whelan09-1}, 2M1207 \citep{Whelan12-1}, ISO 143 \citep{Joergens12-1}, ISO-217 \citep{Joergens12-2}, and ISO-Oph 200 \citep{Whelan18-1}. Jets and outflows in VLM stars and BDs have also been detected at centimeter (cm) and millimeter (mm) wavelengths. In the cm regime, \citet{Morata15-1} discovered compact free-free emission in four sources. In the (sub)mm regime, several VLM stars and BDs have been reported to host jets or outflows in different evolutionary phases: the Class 0/I proto-BDs L1014-IRS \citep{Bourke2005,Huard06-1} and L1148-IRS \citep{Kauffmann11-1}, and the Class II sources ISO-Oph 102 \citep{Phan-Bao08-1}, the VLM star MHO 5 \citep{Phan-Bao11-1}, and GM Tau \citep{Phan-Bao14-1}. Common and notable characteristics of the outflows from these sources include a very small physical size (600-1000 au) and a low outflow velocity ($<$5 km/s). Other characteristics, such as the ratio between the wind mass-loss and accretion rates \citep{Phan-Bao14-1}, have been studied to search for trends among VLM stars and BDs with evolution. The observational uncertainties are still large, however. \medskip VLM stars and BDs are very faint and difficult to detect because of sensitivity limitations, resulting in only a few studies of mm-wave molecular outflows. \citet{Phan-Bao14-1} detected only three of the eight VLM outflows surveyed in their (sub)mm study.
Moreover, to our knowledge, few studies in the very low-mass regime have examined the details at the base of the outflow near the driving source, and the inner cavity walls near the launch region have not been observed and described in great detail. The study of outflows from VLM stars may provide further evidence about their formation mechanism. VLM stars may be formed similarly to low-mass stars \citep{Maclow04}, or their formation may be more similar to that of BDs. The dominant formation mechanism of BDs is still under debate; for a review, see \citet{Joergens14}. Theories such as photoevaporation \citep{WhitworthandZinnecker04-1}, disk fragmentation \citep{Stamatellosandwhitworth09-1}, dynamical ejection \citep{Reipurthandclarke01-1}, or gravoturbulent fragmentation \citep{Padoan2002, Hennebelle2008} can explain the formation of VLM stars and BDs. In this work, we present new ALMA observations of Par-Lup3-4, a VLM star located in the Lupus 3 molecular cloud. We detect the source in Band 6 and 7 continuum and in three gas emission lines: CO(2-1), CO(3-2), and $^{13}$CO(3-2). These observations reveal that an outflow structure surrounds Par-Lup3-4. This outflow is consistent with being a scaled-down version of outflows detected in more massive stars. \section{Previous observations of Par-Lup3-4} \label{info} Par-Lup3-4, located in the Lupus 3 molecular cloud, is a VLM star with a mass of 0.12 M$_{\odot}$ and spectral type M5 \citep{Comeron03-1}. The name of the object is taken from \citet{Comeron03-1}. The source appears underluminous when compared to similar sources in the same star-forming region, and this is likely due to its edge-on disk orientation.
Stellar parameters of Par-Lup3-4 were estimated by fitting its visible and near-infrared spectrum, including a temperature of 3197 K and a luminosity of 0.003 L$_{\odot}$ \citep[][assuming a distance of 200 pc]{alcalaetal14-1}, as well as a mass accretion rate of $\log \dot{M}_{\mathrm{acc}}$ = -9.1 $\pm$ 0.4 M$_{\odot}$/yr. A jet from this source was first reported by \citet{FC2005}: a bright knot at 1\farcs3 was detected in H$\alpha$ and [S\,II]. \citet{Comeron2011-1} followed up the jet with narrow-band imaging with the FORS2 instrument; the knot had moved to 2\farcs55 in 7.2 years, was not detected in H$\alpha$ in the second epoch, and was fainter in [S\,II] by around 30\,\%. The velocity of the jet is 168 $\pm$ 30 km/s in the plane of the sky, giving a jet inclination of 6\fdg7\,$\pm$\,1\fdg4. The mass-loss rate was estimated as 3.2 $\times 10^{-10}$ M$_{\odot}$/yr \citep{Bacciotti2011}. A detailed study of the optical jet can be found in \citet{Whelan14-1}; the jet extends to $\pm$ 3$^{\prime\prime}$ and is in agreement with the kinematics derived in previous studies. The authors also obtained a better estimate of the ratio $\mathrm{\dot{M}_{out}/\dot{M}_{acc}}$ = 0.05$^{+0.10}_{-0.02}$, which supports theoretical predictions of jet launching \citep{Ferreira2013, Frank2014}. The early classification as Class I was revisited based on high angular-resolution infrared observations, which revealed no thick envelope around Par-Lup3-4. The spectral energy distribution (SED) was modeled using radiative transfer simulations, and several parameters were estimated. One of them is a disk inclination of 81$^{\circ}$, which is compatible with the value obtained by \citet{FC2005}. The maximum derived grain size is $>$10\,$\mu$m, which may be indicative of dust processing.
Recently, \citet{Ansdell16-1} and \citet{Ansdell18-1} reported the detection of dust continuum emission from Par-Lup3-4 with ALMA at 335.8 GHz and 225.66 GHz, but they did not report the bipolar molecular outflow cavity. Using data from Gaia DR2, we have estimated the distance to the Lupus 3 cloud to be 155 $\pm$ 10 pc (Santamar\'ia-Miranda, in prep), which agrees within uncertainties with the distance derived by \citet{Zucker20}. This distance is closer than the 200 pc that was previously adopted for this region. We also derived a distance of $\sim$155 pc to Par-Lup3-4. \section{ALMA observations} \label{observaciones} We present ALMA Cycle 3 and 5 observations of Par-Lup3-4 in Bands 6 and 7, respectively. The ALMA Band 6 (1.33 mm) observations were part of a continuum survey that studied the formation mechanisms and evolution of BDs. These observations comprised more than 60 substellar object candidates covering different stages of evolution, from the prestellar core phase to Class II objects (Santamar\'ia-Miranda et al. in prep). Par-Lup3-4 was included in the list of objects to study, and we present here the continuum emission and CO(2-1) gas emission associated with this source. These observations were performed on 31 March 2016 as part of the Cycle 3 ALMA program 2015.1.00512.S. Data were taken in single-field interferometry mode, and the time on source was 3.5 minutes. The number of antennas used was 43, with minimum and maximum baselines of 15 meters and 452 meters, respectively. The angular resolution achieved was $\sim$0.9 arcsec (see Table \ref{tabla_ppal}), and the largest angular scale was $\sim$11 arcsec. The field of view was 23 arcsec. Observations were taken with a precipitable water-vapor column of $\sim$1.17 mm. QSO J1517-2422 was used as bandpass and flux calibrator, and QSO J1610-3958 as a phase calibrator.
\begin{table*}[t] \caption{Dust emission properties derived from ALMA observations} \label{tabla_ppal} \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c c c c} \hline\hline Wavelength & Robust & \multicolumn{3}{c}{Beam size} & rms & Flux & Peak & Gauss flux & Gauss peak \\ \cmidrule(lr){3-5} & & Major axis & Minor axis & PA & & density & intensity & density & intensity \\ \noalign{\smallskip} [mm] & & [arcsec] & [arcsec] & [$^{o}$] & [mJy/beam] & [mJy] & [mJy/beam] & [mJy] & [mJy/beam] \\ \hline 1.33 & 2 & 0.93 & 0.84 & 84 & 0.051 & 0.31 & 0.31 & (0.41 $\pm$ 0.07) & 0.28 \\ 0.89 & 1.5 & 0.41 & 0.36 & 86 & 0.017 & 0.59 & 0.59 & (0.57 $\pm$ 0.02) & 0.60 \\ \end{tabular} } \end{table*} This project was carried out in dual-polarization mode, and it was designed mainly to detect the continuum and serendipitous gas emission from CO(2-1). The correlator setup included four different basebands, three of them in time-division mode centered at 233.5 GHz, 217.0 GHz, and 219.25 GHz, with a total bandwidth of 1.875 GHz and a spectral resolution of 1.94 MHz. These three spectral windows also covered the frequencies of the C$^{18}$O(2-1), SiO(5-4), and DCN(3-2) transitions with a velocity resolution of $\sim$2.5 km/s, although these molecules were not detected. A fourth baseband was split into two spectral windows with a bandwidth of 468.75 MHz and a velocity resolution of $\sim$0.32 km/s each, centered at 231.15 GHz for continuum detection and at the rest frequency of the CO(2-1) line (230.538 GHz). ALMA Band 7 (0.89 mm) observations were performed in Cycle 5 on 24 March 2018 as part of the ALMA program 2017.1.01401.S. Two individual executions were performed to achieve the requested sensitivity. The time on source was $\sim$72 min. Data were taken using 45 antennas with minimum and maximum baselines of 15 and 783 m, respectively, which provided an angular resolution of $\sim$0.25 arcsec and a largest angular scale of 7.29 arcsec.
The observations were taken with a precipitable water-vapor column of $\sim$0.60 mm. The field of view was 16 arcsec. We observed a single field in dual-polarization mode, dedicating one spectral window of 0.469 GHz bandwidth to observing CO(3-2) with a velocity resolution of 0.11 km/s. The other three spectral windows were selected to study continuum emission with a bandwidth of 1.875 GHz. These three spectral windows covered transitions of $^{13}$CO(3-2), CS(7-6), and SO$_{2}$(4(3,1)-3(2,2)) for a serendipitous detection of gas tracers with a velocity resolution of 0.44 km/s, and were centered at strategic frequencies to obtain the best atmospheric transmission. Bandpass and flux calibrations were made using QSO J1517-2422, while QSO J1610-3958 was used as the phase calibrator. Data were processed using the Common Astronomy Software Applications package (CASA, \citealt{casa}). We used the pipeline version 4.5.3 for the Cycle 3 observations, and version 5.3.0 for the Cycle 5 observations. The task CLEAN was used to produce continuum and spectral line images. We used Briggs weighting with different robust parameters in order to obtain the best compromise between spatial resolution and signal-to-noise ratio, varying from 0.5 for CO gas emission lines to 2 for the $^{13}$CO (see Table \ref{tabla_gas} for more details). Primary beam correction was applied before inferring physical parameters from any of the images. \section{Results} \label{resultados} \subsection{Continuum emission} \label{resultados:contin} ALMA continuum emission images in Band 6 and Band 7 were generated considering all spectral channels from all spectral windows, but excluding channels with spectral line emission. The gas emission lines identified were CO(v=0 2-1) in Band 6, and CO(v=0 3-2) and $^{13}$CO(v=0 3-2) in Band 7. Details of the continuum emission properties are included in Table \ref{tabla_ppal}.
\begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{Figures/chapter3/cont_B6_natural.pdf}\label{cont_b6} \includegraphics[width=0.5\textwidth]{Figures/chapter3/cont_B7_w1-5.pdf} \caption[ALMA continuum images of Par-Lup3-4 at 1.3 and 0.89 mm]{Left panel: 1.3 mm ALMA continuum image using natural weighting. The white contours are 3 and 5 $\sigma$, where $\sigma$ is the rms noise level of the map. Right panel: 0.89 mm ALMA continuum image using a robust value of 1.5. The white contours are 3, 5, 10, 20, 30, and 35 $\sigma$, where $\sigma$ is the rms noise level of the map. The beam size is represented by the yellow ellipse in the bottom left corner in both panels.\label{cont_b7}} \end{figure*} We detect dust continuum emission at 1.3 mm (225.27 GHz) at the position R.A.=\,16h08m51.426s, Dec=\,-39$^{\circ}$05'30.82" (see the left panel of Fig. \ref{cont_b7}). The ALMA Band 6 continuum image with the best sensitivity was obtained using natural weighting (Briggs weighting with robust = 2). Par-Lup3-4 was detected at a 6$\sigma$ level with a flux density of 0.31$\pm$0.05 mJy, including a flux calibration error of 10\,\%. The synthesized beam size is $0.93^{\prime\prime}\times0.84^{\prime\prime}$, and the source is spatially unresolved. Our results are compatible with previous observations reported by \citet{Ansdell18-1}, who measured a flux of 0.35$\pm$0.11 mJy. Dust continuum emission at 0.89 mm (338.15 GHz) is clearly detected at more than 50 $\sigma$ (see the right panel of Fig. \ref{cont_b7}) at the (J2000) position R.A.=\,16h08m51.424s, Dec=\,-39$^{\circ}$05'30.91". The best-quality image (prioritizing the signal-to-noise ratio while optimizing angular resolution to resolve the spatial structure) was obtained using a robust parameter value of 1.5, yielding an unresolved source with a flux density of 0.59$\pm$0.06 mJy, including a flux calibration error of 10\,\%.
\citet{Ansdell16-1} showed the first ALMA continuum image of this source in Band 7, where dust properties were constrained with a flux density of 0.91 $\pm$ 0.26 mJy. Our results are marginally compatible with theirs within the 1$\sigma$ uncertainty. In an attempt to improve the spatial resolution, we generated an image using uniform weighting, but still did not resolve the source. We therefore adopted the synthesized beam ($\sim$0.24 arcsec) of the uniformly weighted image as an upper limit on the dust disk size (60 au diameter at 155 pc). Assuming the ALMA continuum emission comes from thermal dust emission, and considering optically thin emission, we derive the total dust mass from \citet{Hildebrand1983} as \begin{equation} M = \frac{S_{\lambda} D^{2}}{B_{\lambda}(T_{dust}) \kappa_{\lambda}}, \end{equation} where $S_{\lambda}$ is the flux density from Table \ref{tabla_ppal}, $D$ is the distance to the source (155\,pc), and $B_{\lambda}(T_{dust})$ is the Planck function at a temperature T$\mathrm{_{dust}}$. The temperature was derived using T$_{\mathrm{dust}}$\,=\,25\,K\,$\times$\,(L$_{*}$/L$_{\sun}$)$^{0.25}$ \citep{Andrewsetal13-1}. We used the stellar radius to obtain the luminosity (L$_{*}$), assuming an effective temperature of 3197 K \citep{alcalaetal14-1}. We did not use the standard L$_{*}$ estimates (i.e., visual magnitude plus bolometric correction) because the extinction toward the photosphere is highly uncertain: the source is seen edge-on and therefore appears underluminous in the Hertzsprung-Russell diagram. We first used a radius of 1.1 R$_{\sun}$ \citep{Huelamo10-1}, obtaining a temperature of 15 K. Then, we used a radius of 2 R$_{\sun}$, as the result of our SED fitting (see Sect. \ref{fiteo}), obtaining a temperature of 20 K. $\kappa_{\lambda}$ is the absorption coefficient obtained from \citet{Ose94} for thin ice mantles and a density of $10^{6}$ cm$^{-3}$.
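As a numerical cross-check of this mass estimate, the computation can be sketched in a few lines (a sketch in cgs units; the flux densities are those of Table~\ref{tabla_ppal}, and the opacities are the interpolated \citet{Ose94} values quoted below; small differences with the quoted masses reflect rounding):

```python
import math

# Physical constants (cgs)
H = 6.626e-27       # Planck constant [erg s]
K_B = 1.381e-16     # Boltzmann constant [erg/K]
C = 2.998e10        # speed of light [cm/s]
PC = 3.086e18       # parsec [cm]
M_EARTH = 5.972e27  # Earth mass [g]

def planck(nu_hz, t_k):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (K_B * t_k)
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(x)

def dust_mass(flux_mjy, nu_hz, kappa, t_dust, d_pc=155.0):
    """Optically thin dust mass M = S_nu D^2 / (B_nu(T) kappa), in Earth masses."""
    s_cgs = flux_mjy * 1e-26   # mJy -> erg s^-1 cm^-2 Hz^-1
    d_cm = d_pc * PC
    return s_cgs * d_cm**2 / (planck(nu_hz, t_dust) * kappa) / M_EARTH

# Band 7: 0.59 mJy at 338.15 GHz, kappa = 1.8 cm^2/g, T_dust = 20 K
m_b7 = dust_mass(0.59, 338.15e9, 1.8, 20.0)   # ~0.28 M_Earth
# Band 6: 0.31 mJy at 225.27 GHz, kappa = 0.85 cm^2/g, T_dust = 20 K
m_b6 = dust_mass(0.31, 225.27e9, 0.85, 20.0)  # ~0.59 M_Earth
```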
We interpolated to the wavelengths of 1.3 mm and 0.89 mm and obtained values of $\kappa$ = 0.85 cm$^{2}$ g$^{-1}$ and 1.8 cm$^{2}$ g$^{-1}$, respectively. The derived dust masses at a temperature of 20 K are 0.28 $\pm$ 0.05 M$\mathrm{_{\oplus}}$ for Band 7 and 0.60 $\pm$ 0.15 M$\mathrm{_{\oplus}}$ for Band 6. Using a temperature of 15 K, we obtained a dust disk mass of 0.43 $\pm$ 0.08 M$\mathrm{_{\oplus}}$ for Band 7 and 0.88 $\pm$ 0.22 M$\mathrm{_{\oplus}}$ for Band 6. When the fluxes from \citet{Ansdell16-1} and \citet{Ansdell18-1} are rescaled to a distance of 155 pc instead of their assumed 200 pc and the opacity law from \citet{Ose94} is used, our results are compatible with theirs within the errors. \subsection{Gas emission lines} We detected three different gas emission lines toward this source for the first time with ALMA. In this section we describe structures detected at the 3$\sigma$ level or higher. The results for CO(3-2), CO(2-1), and $^{13}$CO(3-2) are summarized in Table \ref{tabla_gas}, and they are described in detail in the following subsections. \subsubsection{CO(3-2)} \label{sec:co_3-2_descripcion} CO(3-2) (top left panel in Fig. \ref{figure:zero_images_b}) emits in a velocity interval between -2.92 and 11.60 km/s (see Fig. \ref{channel_map_CO3-2}). The spectrum of Par-Lup3-4 displays a double-peak profile with blueshifted and redshifted wings and a self-absorption feature at $\sim$3.4 km/s (see Fig. \ref{espectro_central}), probably due to the cold foreground parental molecular cloud. The self-absorption feature is close to the source systemic velocity of $\sim$3.7 km/s (see Sect. \ref{13co}). The CO(3-2) blueshifted emission spans velocities between -2.92 and 2.80 km/s, and the redshifted emission lies between 3.70 and 10.72 km/s. The blueshifted arc-like structures are seen from -2.92 to 2.36 km/s in the southeast and from 0.60 to 2.80 km/s in the northwest.
The redshifted emission comes from a northwest arc-like structure that emits between 3.68 and 6.32 km/s and from a southeast structure that extends from 5.00 to 10.72 km/s. Integrated red- and blueshifted CO(3-2) contour maps are provided in Appendix \ref{red_blue_b7}. \begin{table*}[!hbt] \caption{Gas properties of the ALMA detections} \label{tabla_gas} \begin{tabular}{c c c c c c c c} \hline\hline Molecular & Robust & \multicolumn{3}{c}{Beam size} & rms & Integrated & Peak \\ \cmidrule(lr){3-5} transition & & Major axis & Minor axis & PA & & intensity$^{1}$ & intensity \\ \noalign{\smallskip} & & [arcsec] & [arcsec] & [$^{o}$] & [Jy/beam km/s] & [Jy km/s] & [Jy/beam] \\ \hline CO(3-2) & 1 & 0.38 & 0.35 & 80 & 1.57$\times$10$^{-2}$ & 3.79 & 0.99 \\ CO(2-1) & 1 & 0.79 & 0.71 & 82 & 3.12$\times$10$^{-2}$ & 1.97 & 0.89 \\ $^{13}$CO(3-2)& 2 & 0.41 & 0.37 & 87 & 4.07$\times$10$^{-3}$ & 8.16$\times$10$^{-2}$ & 5.50$\times$10$^{-2}$ \\ \end{tabular} \begin{tablenotes} \begin{footnotesize} \item[1] $^{1}$Obtained over a 3$\sigma$ contour that corresponds to an area of $\sim$3.8 arcsec$^{2}$, $\sim$5.0 arcsec$^{2}$, and $\sim$0.7 arcsec$^{2}$ for CO(3-2), CO(2-1), and $^{13}$CO(3-2), respectively. \end{footnotesize} \end{tablenotes} \end{table*} \begin{figure*} \includegraphics[width=0.5\textwidth]{Figures/chapter3/co_b7_w1.pdf} \includegraphics[width=0.5\textwidth]{Figures/chapter3/13co_w2.pdf} \begin{center} \includegraphics[width=0.5\textwidth]{Figures/chapter3/co_b6_w1.pdf} \end{center} \caption[ALMA gas images of Par-Lup3-4]{Top left panel: CO(3-2) flux-integrated ALMA map from velocity -3 to 11 km/s with a robust value of 1. The white contours are 3, 5, 7, 9, 12, 15, 20, 30, and 50 times the rms. Top right panel: $^{13}$CO flux-integrated ALMA map from velocity -0.08 to 7.44 km/s. The white contours are 3, 5, 7, 9, and 12 times the rms. Bottom panel: CO(2-1) flux-integrated ALMA map from velocity -2.62 to 10.08 km/s.
The white contours are 3, 5, 7, 9, 12, 15, and 20 times the rms. The beam size is represented by a yellow ellipse in the bottom left corner of the three panels. The three panels show a zoom-in of the main central core region. A zoom-out image can be found in Fig. \ref{channel_map_CO2-1}, Fig. \ref{channel_map_CO3-2}, and Fig. \ref{channel_map_13CO} for CO(2-1), CO(3-2), and $^{13}$CO, respectively.} \label{figure:zero_images_b} \end{figure*} \begin{figure*} \newgeometry{inner=2.5cm, outer=2.5cm} \includegraphics[height=1.05\textheight]{Figures/chapter3/channel_CO_B7_beta_low_res_bin_044_all.pdf} \caption[Zoom-out ALMA CO(3-2) channel emission map of Par-Lup3-4]{ \label{channel_map_CO3-2} Zoom-out CO(3-2) channel maps toward Par-Lup3-4 using a robust value of 1. We used a range $>$50 k$\lambda$ to eliminate the extended emission. We binned the image to a velocity resolution of 0.44 km/s. The velocity of the channels is shown in the LSR frame in km/s, centered at the frequency of CO(3-2). All maps share the same linear color scale. White contour levels are 3, 5, 9, and 17 $\sigma$, where $\sigma$ is the rms noise level of the map. The cyan star marks the position of the peak intensity in the continuum image. The beam size is represented by a yellow ellipse in the bottom left corner.} \restoregeometry \end{figure*} \begin{figure} \includegraphics[width=\hsize]{Figures/chapter3/3lineas_spectral_profile.pdf} \caption[$^{13}$CO, CO(2-1) and CO(3-2) spectra]{$^{13}$CO (solid green), CO(2-1) (dashed red), and CO(3-2) (dash-dotted blue) spectra of Par-Lup3-4, averaged over the 5\,$\sigma$ contour level of the integrated flux map of each line applied to the spectral cube. The black line represents the systemic velocity of Par-Lup3-4, i.e., the velocity average of the gas, obtained from $^{13}$CO, $\sim$3.7 km/s.
Blueshifted material lies at velocities below 3.45 km/s, and redshifted material above 3.90 km/s.} \label{espectro_central} \end{figure} CO(3-2) traces low-velocity outflowing material with an inclination near the plane of the sky, as revealed by the different arc-like quasi-symmetric structures with superimposed blue- and redshifted emission that trace the base of a compact bipolar outflow very close to the position of Par-Lup3-4 (Fig. \ref{jet}). This outflow has the same orientation as the jet and the counterjet detected by \citet{FC2005}. The CO(3-2) lobe structures clearly delineate the southeast and northwest sides of the outflow cavities that result from the interaction between the ejected material and the surrounding envelope. \begin{figure} \centering \includegraphics[width=\hsize]{Figures/chapter3/jet+cavidad+contb7.pdf} \caption[Jet and bipolar outflow cavity of Par-Lup3-4]{CO(3-2) flux-integrated map in color to show the bipolar molecular outflow cavity. Only channels that show bipolar emission, between [-3 km/s, 2.25 km/s] and [6.75 km/s, 10.25 km/s], were included; channels with possible cloud contamination were excluded. White contour levels are 3, 5, 7, 9, 12, 15, 20, and 30 times the rms with a robust value of 2. Green contours are the ALMA continuum image at 0.89 mm using 5, 7, 15, and 25 times the rms value with a robust value of 1.5. $\sigma$ is the rms noise level of the map.} \label{jet} \end{figure} Cloud emission is seen as an inhomogeneous distribution of material that is spread randomly throughout the whole map between 2.51 and 4.41 km/s and between 5.26 and 5.58 km/s. Between 5.90 km/s and 7.20 km/s, we cannot distinguish clearly between cloud emission and outflow emission.
There is an elongated and clumpy structure northeast of Par-Lup3-4 with a size of $\sim5^{\prime\prime}$ and a velocity gradient from 5.44 to 7.20 km/s whose origin is unknown (see Fig.~\ref{second_outflow}, right panel). This structure seems to originate close to the position of Par-Lup3-4, with a velocity that increases with distance. It ends in the north at the position of a more extended clump, which may be part of the surrounding parental molecular cloud. We discuss the possible nature of this feature in the next section. \subsubsection{CO(2-1)} The first molecular gas emission detection (bottom panel in Fig. \ref{figure:zero_images_b}) of Par-Lup3-4 was at the frequency of CO(2-1) as part of our ALMA Band 6 Lupus 1 and 3 dataset. The CO(2-1) emission spans a total velocity range of $\sim$13 km/s, in channels ranging from -3.2 to 10.0 km/s (see Appendix \ref{channel_map_CO2-1}), and it shows spatial and spectral characteristics similar to those of CO(3-2). Its spectrum has a double-peak profile, with a more intense red wing and a self-absorption feature between 2.3 and 4.5 km/s (see Fig. \ref{espectro_central}). The CO(3-2) emission described in the previous subsection suggests the existence of a compact bipolar outflow, which is confirmed by the CO(2-1) emission line detected in our Band 6 data. The blueshifted emission spans velocities between -3.2 and 2.8 km/s with two spatial components that show an arc-like structure in the southeast direction (from -3.2 to 1.5 km/s) and in the northwest (from 1.8 to 2.5 km/s). The redshifted emission (velocities between 3.7 and 10 km/s) shows a similar trend, tracing an arc-like shape in the northwest direction between $\sim$4 and 6.6 km/s and in the southeast direction between 7.0 and 10 km/s (see Appendix \ref{channel_map_CO2-1}).
These blue- and redshifted structures suggest for the first time that Par-Lup3-4 powers a compact low-velocity bipolar molecular outflow near the plane of the sky, with the southern lobe reaching higher velocities relative to the systemic velocity than the northern lobe. Integrated red- and blueshifted CO(2-1) contour maps are provided in Appendix \ref{red_blue_b6}. Extended emission and negative features, due to the interferometer filtering out large-scale structure, are present near 2.7, 4, and 5 km/s. In particular, emission from the parental cloud is seen in velocity channels from 3.4 to 4.3 km/s, and it might be a remnant of the envelope in which the source was originally embedded. The northern stream of clumpy material (hereafter called the possible secondary outflow) observed in CO(3-2) at velocities between 5.6 and 7.2 km/s is also detected in the CO(2-1) transition, with similar characteristics in terms of speed and location. The nature of this structure is discussed in Sect.~\ref{sec:Molecular_outflow}. \begin{figure*} \includegraphics[width=0.53\textwidth]{Figures/chapter3/channel_segundo_outflow_channel_b6_all.pdf} \includegraphics[width=0.46\textwidth]{Figures/chapter3/segundo_outflow_channel_b6.pdf} \caption{Left panel: CO(2-1) ALMA channel maps toward Par-Lup3-4 and the possible secondary outflow following the northeast direction. All the maps share the same linear color scale with a robust value of 1. Right panel: ALMA-integrated emission of CO(2-1) from a velocity of 5.63 to 6.90 km/s. Contours show 3, 5, and 7 times the rms (6.0$\times 10^{-3}$ Jy/beam). The cyan and white stars mark the continuum peak position in all the images.
The beam size is represented by a yellow ellipse in the bottom left corner.} \label{second_outflow} \end{figure*} \subsubsection{$^{13}$CO(3-2)} \label{13co} $^{13}$CO(3-2) emission is detected very close to Par-Lup3-4 in a velocity range between 0.96 and 7.12 km/s (see Appendix \ref{channel_map_13CO}). $^{13}$CO(3-2) traces a more compact and denser structure than that observed in the other two $^{12}$CO transitions, although the northwestern outflow cavity is still well discerned (top right panel in Fig. \ref{figure:zero_images_b}). To investigate the nature of the $^{13}$CO line, we calculated the optical depth and obtained a value of 0.31, which is in the optically thin regime. The $^{13}$CO spectrum is less affected by possible cloud contamination (compared to the $^{12}$CO spectra), and we used it to infer a more accurate value for the systemic velocity of the source, that is, the velocity average of the gas, which we found to be between 3.46 and 3.90 km/s. The velocity map (Fig. \ref{espectro_central_13C0}) of the most intense and compact gas close to Par-Lup3-4 suggests a rotation pattern with redshifted material in the northeast and blueshifted material in the southwest, in a flattened structure perpendicular to the direction of the molecular outflow that is clearly detected in the other CO transitions. Moreover, the velocity pattern is oriented in the expected direction of the circumstellar disk. The spatial and spectral resolution as well as the sensitivity of our data do not allow us to confirm the Keplerian nature of this rotation. Future ALMA observations are required to confirm and study this structure properly. A schematic view of the relative positions of the protoplanetary disk, the primary bipolar molecular outflow, and the possible secondary outflow is shown in Fig. \ref{esquema}.
\begin{figure}[ht] \includegraphics[width=0.5\textwidth]{Figures/chapter3/13co_velocity_w2.pdf} \caption{Velocity map of the $^{13}$CO emission line with a threshold of seven times the rms (4.07 $\times 10^{-3}$ Jy/beam). The beam size is represented by the gray ellipse in the bottom left corner.} \label{espectro_central_13C0} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{Figures/chapter3/esquema.pdf} \caption{Schematic picture of the relative positions of the primary bipolar molecular outflow, the protoplanetary disk, and the possible secondary molecular outflow.} \label{esquema} \end{figure} \subsection{Geometrical and dynamical properties of the outflow} We derived the geometrical and dynamical properties of the primary outflow of Par-Lup3-4 to contribute to the characterization of outflows surrounding VLM stars and BDs. The calculated geometrical properties include the opening angle, the average length, and the position angle of the outflow, which were measured by hand for both lobes. Our ALMA observations are consistent with the base of a molecular outflow (Sect. \ref{sec:co_3-2_descripcion}), but neither the full ellipsoidal lobe nor the shockwave front is detected. Therefore we can only obtain a lower limit for the average lobe length. The average length was measured using the 3$\sigma$ contour of the CO(3-2) flux-integrated map (Fig. \ref{jet}), and we obtained values of 2.5 arcsec for the southeast lobe and 1.1 arcsec for the northwest lobe. The average length is 1.8 arcsec. The opening angle was calculated in this map, providing an angle of 108$^{\circ}$ for the southern lobe and 116$^{\circ}$ for the northern lobe. Additionally, we measured a position angle of 116$^{\circ}$ and 307$^{\circ}$ for the southern and northern lobes, respectively.
The dynamical properties we studied comprise the outflow mass (M$\mathrm{_{outflow}}$), the dynamical time ($\tau\mathrm{_{dyn}}$), the momentum (P$\mathrm{_{outflow}}$), the kinetic energy (E$\mathrm{_{outflow}}$), the luminosity (L$\mathrm{_{outflow}}$), and the force (F$\mathrm{_{outflow}}$). The brightness temperature of the CO(3-2) line peak (20.6 K) is similar to the kinetic temperature, which indicates that at systemic velocities, the line is optically thick ($\tau \gg 1$). We therefore derived an excitation temperature (T$\mathrm{_{ex}}$) of $\sim$26.4 K from this spectrum. This value is similar to the values obtained in previous studies of similar sources \citep{Phan-Bao08-1, Phan-Bao11-1, Phan-Bao14-1}, where T$\mathrm{_{ex}}$ ranges between 20 and 35 K. \vspace{7mm} The column density and the mass of the primary CO outflow were calculated following the prescription in \citet{Scoville1986} and \citet{Palau2007} for the CO(3-2) transition. The optical depth, measured channel by channel in the four arc-like outflow structures described in Sect. \ref{sec:co_3-2_descripcion}, provides values of $\tau \ll 1$. We obtained a mean optical depth value of 0.25 for the blueshifted and redshifted southeast cavities as well as for the redshifted northwest cavity, and a value of 0.15 for the blueshifted northwest cavity. Therefore we consider that the wings of the CO(3-2) emission line are in the optically thin regime. Emission close to the position of the central object is optically thick, and it was therefore excluded from the calculations. Consequently, our estimates of the outflow mass are lower limits because the border between the central object and the base of the outflow is almost indistinguishable.
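The dynamical quantities derived below reduce to simple kinematic relations ($\tau\mathrm{_{dyn}}$ = R$\mathrm{_{lobe}}$/v$\mathrm{_{max}}$, $\dot{\mathrm{M}}$ = M$\mathrm{_{outflow}}$/$\tau\mathrm{_{dyn}}$, E = $\frac{1}{2}$Mv$^{2}$, L = E/$\tau\mathrm{_{dyn}}$). As a numerical cross-check, a minimal sketch assuming the outflow mass, lobe length, and maximum velocity quoted in this section and a distance of 155 pc (small differences with the quoted values reflect rounding):

```python
# Kinematic cross-check of the outflow dynamical properties (cgs units)
AU = 1.496e13      # astronomical unit [cm]
YR = 3.156e7       # year [s]
M_SUN = 1.989e33   # solar mass [g]
L_SUN = 3.828e33   # solar luminosity [erg/s]

d_pc = 155.0                  # adopted distance [pc]
r_lobe = 1.8 * d_pc * AU      # 1.8 arcsec at 155 pc -> ~279 au, in cm
v_max = 6.0e5                 # maximum outflow velocity, 6 km/s, in cm/s
m_out_msun = 9.5e-7           # total outflow mass [M_sun]

tau_dyn = r_lobe / v_max                     # dynamical time [s]
tau_dyn_yr = tau_dyn / YR                    # ~220 yr
mdot = m_out_msun / tau_dyn_yr               # ~4.3e-9 M_sun/yr
e_out = 0.5 * m_out_msun * M_SUN * v_max**2  # ~3.4e38 erg
l_out = e_out / tau_dyn / L_SUN              # ~1.3e-5 L_sun
```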
We obtained a total outflow mass of 9.5$\times$10$^{-7}$ M$_{\odot}$ as the sum of the masses of the four arc-like structures: 3.4$\times$10$^{-7}$ M$_{\odot}$ and 2.3$\times$10$^{-7}$ M$_{\odot}$ for the blueshifted ([-1.60 km/s, 2.36 km/s]) and redshifted ([6.76 km/s, 9.84 km/s]) components of the east lobes, plus 9.8$\times$10$^{-8}$ M$_{\odot}$ and 2.8$\times$10$^{-7}$ M$_{\odot}$ for the blueshifted ([1.48 km/s, 2.36 km/s]) and redshifted ([4.12 km/s, 5.88 km/s]) components of the west lobes. The outflow velocities extend to -2.7 km/s in the blueshifted lobe and to 9.8 km/s in the redshifted lobe, resulting in a v$\mathrm{_{max}}$ of $\sim$6 km/s. In our calculations we did not apply an outflow inclination correction because the brighter CO(3-2) emission at the borders of the cavity suggests a limb-brightening effect, and therefore that this emission is mostly dominated by material close to the plane of the sky. The remaining dynamical parameters were obtained using the formulas in Table \ref{tab:formulas}. Their values are $\tau\mathrm{_{dyn}}$ = 220 yr, P$\mathrm{_{outflow}}$ = 5.7$\times$10$^{-7}$ M$_{\odot}$ km/s, E$\mathrm{_{outflow}}$ = 3.4$\times$10$^{38}$ erg, L$\mathrm{_{outflow}}$ = 1.2$\times$10$^{-5}$ L$_{\odot}$, and F$\mathrm{_{outflow}}$ = 2.6$\times$10$^{-8}$ M$_{\odot}$ km s$^{-1}$ yr$^{-1}$. Finally, we caution that the values obtained in this section should be treated with some care. Although these measurements are useful for constraining models, it is highly probable that the outflow structure is not detected in its entirety, and molecular cloud contamination affects several emission channels near the cloud velocity. We estimate that other uncertainties, such as those in T$\mathrm{_{ex}}$, the optical depth, and the outflow extent, may result in an increase in the outflow mass of less than a factor of 2. \begin{table}[!hbt] \caption{Geometrical and dynamical properties of the outflow.
Because the outflow mass is probably underestimated (see \ref{maximum}), these values should be regarded as order-of-magnitude estimates.} \label{tab:formulas} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{l l l} \hline\hline Derived & Formula & Value \\ properties & \\ \hline\hline Length & R$\mathrm{_{lobe}}$ & 1.8 arcsec\\ Velocity & v$\mathrm{_{max}}$ & 6 km/s \\ Dynamical time & $\tau\mathrm{_{dynamical}}$ = R$\mathrm{_{lobe}}$ / v$\mathrm{_{max}}$ & 220 yr \\ Mass-loss rate & $\dot{\mathrm{M}}$ = M$\mathrm{_{outflow}}$ / $\tau\mathrm{_{dynamical}}$ & 4.3$\times$10$^{-9}$ M$_{\odot}$/yr \\ Momentum & P$\mathrm{_{outflow}}$ = M$\mathrm{_{outflow}}$ $\times$ v & 5.7$\times$10$^{-7}$ M$_{\odot}$ km/s \\ Energy & E$\mathrm{_{outflow}}$ = 1/2 M$\mathrm{_{outflow}}$ $\times$ v$^{2}$ & 3.4$\times$10$^{38}$ erg\\ Luminosity & L$\mathrm{_{outflow}}$ = E$\mathrm{_{outflow}}$ /$\tau\mathrm{_{dynamical}}$ & 1.2$\times$10$^{-5}$ L$_{\odot}$\\ Force & F$\mathrm{_{outflow}}$ = P$\mathrm{_{outflow}}$ / $\tau\mathrm{_{dynamical}}$ & 2.6$\times$10$^{-8}$ M$_{\odot}$ km s$^{-1}$ yr$^{-1}$ \\ \end{tabular} } \end{table} \label{gaseo} \subsection{Spectral index} The spectral index ($\alpha$) at millimeter frequencies is commonly used as a tool for inferring grain-growth signatures under the assumptions that the emission is in the Rayleigh-Jeans regime and is optically thin. The spectral index in the optically thin regime provides information on the dust opacity index ($\beta$), whose value depends on the dust grain size and, consequently, traces grain growth in the case of large dust particles. Typical $\alpha$ values for the interstellar medium (ISM) are close to 3.7 \citep{Draine06}, while lower values $\leq$ 3 are found in most disks, and they are interpreted as a signature of grain growth \citep{Ricci10-1, Ricci10-2, Ribas17-1}.
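In practice, $\alpha$ is obtained from the ratio of two flux densities, with standard error propagation for its uncertainty (the expressions are given below). A minimal numerical sketch, assuming the flux densities and uncertainties of Table~\ref{tabla_ppal} and the Band 7 and Band 6 continuum frequencies (the result differs from the quoted 1.6 $\pm$ 0.4 only by rounding):

```python
import math

def spectral_index(f1, f2, nu1, nu2):
    """alpha = log(F1/F2) / log(nu1/nu2), the slope of F_nu ~ nu^alpha."""
    return math.log(f1 / f2) / math.log(nu1 / nu2)

def spectral_index_err(f1, df1, f2, df2, nu1, nu2):
    """Propagate the two flux-density uncertainties into alpha."""
    ln_ratio = math.log(nu1 / nu2)
    return math.hypot(df1 / (f1 * ln_ratio), df2 / (f2 * ln_ratio))

# Flux densities in mJy and frequencies in GHz (Band 7 and Band 6)
alpha = spectral_index(0.59, 0.31, 338.15, 225.27)                    # ~1.6
dalpha = spectral_index_err(0.59, 0.06, 0.31, 0.05, 338.15, 225.27)   # ~0.5
```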
We assume that the emission is in the Rayleigh-Jeans regime and optically thin, so that the flux depends on frequency as F$_{\nu} \propto \nu^{\alpha}$, with $\beta$ = $\alpha$ - 2 and $\kappa_{\nu} \propto \nu^{\beta}$. We obtained the spectral index, $\alpha$, using the formula \begin{equation} \alpha = \frac{\mathrm{log}\,\frac{F_{\nu_{1}}}{F_{\nu_{2}}}}{\mathrm{log}\,\frac{\nu_{1}}{\nu_{2}}} ,\end{equation} where F$_{\nu}$ is the flux density at the frequencies $\nu_{1}= 328$ GHz and $\nu_{2}= 225$ GHz. The $\alpha$ uncertainty is determined as \begin{equation} \Delta \alpha =\sqrt{\bigg(\frac{\Delta F_{\nu_{1}}}{F_{\nu_{1}}\,\mathrm{ln}\big(\frac{\nu_{1}}{\nu_{2}}\big)}\bigg)^{2}+\bigg(\frac{\Delta F_{\nu_{2}}}{F_{\nu_{2}}\,\mathrm{ln}\big(\frac{\nu_{1}}{\nu_{2}}\big)}\bigg)^{2}} .\end{equation} Using the values given in Table \ref{tabla_ppal}, we obtained a spectral index of 1.6 $\pm$ 0.4 for Par-Lup3-4. An $\alpha$ value below 2 implies that the disk emission does not follow the Rayleigh-Jeans approximation \citep{Ansdell18-1}. Therefore we cannot constrain the composition and size of the dust in Par-Lup3-4 using the spectral index. \section{Discussion} \label{discusion} \subsection{Nature of the extended gas emission to the northeast} This subsection discusses the nature of the elongated and clumpy structure observed in CO(3-2) (Fig. \ref{fig:CO_momentos_0_1_outflow}) and CO(2-1). The structure is redshifted with respect to the LSR velocity of Par-Lup3-4 and propagates from the center toward the northeast side of the field of view, with less redshifted velocities toward the position of the source. Here we discuss four possible origins of this structure. \begin{figure} \includegraphics[width=0.5\textwidth]{Figures/chapter3/outflow_b7_m1_sin_filtar.pdf} \includegraphics[width=0.5\textwidth]{Figures/chapter3/outflow_b7_m0_sin_filtrar.pdf} \centering \includegraphics[width=0.5\textwidth]{Figures/chapter3/pv_outflow_sin_filtrar.pdf} \caption{Images of the CO(3-2) emission.
Top panel: ALMA velocity map considering an intensity above 5$\sigma$. Middle panel: CO(3-2) flux-integrated ALMA map with pixels above 5$\sigma$. The black line marks the path for the position-velocity diagram in the bottom panel. Bottom panel: Position-velocity diagram with a PA of 30.5 deg and a width of $\sim$1.2 arcsec. The horizontal black line shows the systemic velocity of Par-Lup3-4. The cyan or black star marks the position of the peak intensity in the continuum image. In the first two panels, the beam size is represented by the gray ellipse in the bottom left corner.} \label{fig:CO_momentos_0_1_outflow} \end{figure} First, it is possible that this structure is a second molecular outflow originating from Par-Lup3-4. This could be explained if Par-Lup3-4 is not a single object but a close binary system. Adaptive optics (AO)-assisted observations \citep{Huelamo10-1} have not revealed an additional companion down to 0.1 arcsec ($\sim$15.5 au), but closer companions cannot be excluded. Another argument in favor of this possibility is that the emission appears to originate from the location of Par-Lup3-4. The emission is detected with a velocity of $\sim$5.4 km/s at a distance of $\sim$2 arcsec (310 au) northeast of the source, and up to $\sim$6.7 km/s at a distance of $\sim$6 arcsec (930 au) from the source, as is clearly seen in the position-velocity diagram of Fig.\,\ref{fig:CO_momentos_0_1_outflow}. Bands 6 and 7 show very similar morphologies and velocity patterns. If the emission arises from an actual outflow, it appears to be monopolar, as there is no counterpart moving toward the southwest. Monopolar outflows have been reported in the literature for low-mass stars \citep{Codella14-1, Louvet18-1}. On a related note, the jet detected by \citet{FC2005} in Par-Lup3-4 shows an asymmetry, as the southeast lobe is much more prominent than the northwest lobe.
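As a brief aside before continuing with the outflow scenarios, the two-frequency spectral-index formulas of the previous subsection are simple to script. The flux densities below are placeholders chosen only to illustrate the error propagation; the measured values are those of Table \ref{tabla_ppal}:

```python
import math

# Placeholder flux densities (mJy) -- NOT the measured values of
# Table \ref{tabla_ppal}; chosen only to exercise the two formulas.
nu1, nu2 = 328.0, 225.0        # GHz
f1, df1 = 1.83, 0.18           # hypothetical F(328 GHz) and its error
f2, df2 = 1.00, 0.10           # hypothetical F(225 GHz) and its error

lnr = math.log(nu1 / nu2)
alpha = math.log(f1 / f2) / lnr                        # spectral index
dalpha = math.hypot(df1 / (f1 * lnr), df2 / (f2 * lnr))  # quadrature error
print(f"alpha = {alpha:.2f} +/- {dalpha:.2f}")
```

With $\sim$10\% flux errors at each frequency, the narrow 328/225 GHz lever arm alone already yields an uncertainty of $\sim$0.4 on $\alpha$, comparable to the value quoted above.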
Although an orientation perpendicular to the optical jet makes this second molecular outflow less likely, a similar configuration with two perpendicular outflows has previously been seen in other sources \citep{Tobin15-1}; this might be interpreted as another indication of the binary nature of the central source. A second possibility is that we are observing cloud contamination. In support of this hypothesis, we note that the velocity of the structure is very close to the velocity of the second component of the cloud. In the same low-velocity regime, we also detect extended cloud emission compatible with the cloud velocity. However, we detect very compact structures when we apply a visibility-range filtering (see Appendix \ref{fig:CO_momentos_0_1_outflow_filtrado}), and inhomogeneities with this morphology in molecular clouds, although possible, are not common. Cloud material would also be expected to be randomly distributed rather than to form clear strips in the field of view pointing toward Par-Lup3-4 along several channels, as we see at velocities higher than 5.70 km/s in Appendix \ref{channel_map_nube}. A third scenario is that the outflow comes from another source that is neither Par-Lup3-4 nor a close companion. This would be the unlikely case of a source along the line of sight of Par-Lup3-4 that is responsible for the molecular outflow. Alternatively, it could be a nearby source just outside the field of view, or even a poorly characterized source in Lupus located several tens or hundreds of arcseconds away. A fourth possibility is a stream of envelope material infalling onto Par-Lup3-4.
In the case of pure infall of foreground material (i.e., material located between the star and us), we would expect a velocity pattern with more redshifted velocities closer to the central source. We see the opposite: the pattern is more consistent with outflowing material in a Hubble-type outflow, in which the velocity increases with distance from the protostar. However, if the infalling stream is also rotating clockwise, this material might be slowed down closer to the protostar, yielding a velocity pattern more compatible with the observed one. \label{sec:Molecular_outflow} The current (sub)mm and optical observations do not allow us to distinguish between the four scenarios. Future studies using ALMA should be able to unveil the origin of the moving structure. ALMA observations with better velocity resolution and higher sensitivity may also help to distinguish between outflow and cloud emission in the second scenario. High angular resolution observations, as well as radial velocity observations in the infrared, may help to detect a close binary. \begin{table*} \centering \caption{Best-fit parameters from SED modeling.} \label{tab:modeling_results} \begin{tabular}{l c c} \hline\hline Parameter & JHK from 2MASS & JHK from \citet{Comeron03-1} \\ \hline Stellar radius ($R_\odot$)& 2.0 & 2.0 \\ Disk dust mass ($M_\odot$)& 5$\times10^{-7}$ & 1$\times10^{-7}$ \\ Maximum grain size (mm) & 10 & 0.5 \\ Scale height at 100\,au (au)& 20 & 20 \\ Flaring index & 1.2 & 1.1 \\ Surface density index & -1.5 & -1.0 \\ Inclination (deg) & 82.5 & 85 \\ Interstellar extinction (mag) & 3.5 & 2.5 \\ \hline\hline \end{tabular} \end{table*} \subsection{SED fitting} \label{fiteo} As part of our study of Par-Lup3-4, we have also tried to characterize its circumstellar disk, taking advantage of the fact that new photometric data are available that can help to populate the SED studied by \citet{Huelamo10-1}.
We therefore complemented the photometric points from \citet{Huelamo10-1} with new Herschel (PACS and SPIRE; see Table \ref{tab:fotometria}) and ALMA data. The resulting SED is displayed in Fig. \ref{fig:SED_modeling}. We then used the radiative transfer code MCFOST \citep{Pinte2006,Pinte2009} to infer the main disk properties. \begin{table} \centering \caption{PACS and SPIRE photometry} \label{tab:fotometria} \begin{tabular}{l c c} \hline\hline Wavelength & Flux & Flux error \\ $[\mu$m] & [mJy] & [mJy] \\ \hline 71.42 & 59.4 & 2.4 \\ 160 & $<$159.2 & - \\ 250 & $<$1215 &- \\ 350 & $<$1455 & - \\ 500 & $<$1101 &- \\ \hline\hline \end{tabular} \end{table} Modeling protoplanetary disks involves defining several free parameters, some of which are highly uncertain or degenerate. Such a scenario is best dealt with in a Bayesian framework using statistical tools such as Markov chain Monte Carlo methods, but this approach is computationally very demanding and has only been applied in a few cases \citep[e.g.,][]{Ribas2016, Wolff2017}. Because the amount of data available at long wavelengths for Par-Lup3-4 is limited, we chose to run a grid of models to obtain a general idea of the system parameters. Our initial attempts at fitting the SED of Par-Lup3-4 used fixed stellar parameters from the BT-Settl models \citep{Allard2012, Baraffe15-1} based on previous studies. The stellar temperature was fixed to 3200\,K \citep{Alcala17-1} because it is firmly constrained, with a narrow uncertainty margin, by the spectral classification of the star. Age estimates for the source range from 1 to 3\,Myr \citep{Comeron03-1,Alcala17-1}. Assuming 2\,Myr and using the BT-Settl models, we derived a stellar radius of 1.1\,R$_\odot$ and a mass of 0.2\,M$_\odot$. The distance to the source was set to d$\sim$155\,pc (see Section \ref{info}).
Regarding the disk parameters, we fixed the disk inner radius and the minimum grain size to 0.05\,au and 0.005\,$\mu$m, following \citet{Huelamo10-1}. For our model, we defined eight free parameters and explored them within reasonable ranges: \begin{itemize} \item the stellar radius, including the value of 1.1\,$R_\odot$ derived from isochrones, plus 1.5, 2, and 2.5\,$R_\odot$ as additional values; \item the disk dust mass, from $1\times10^{-7}$ to $1\times10^{-5}$\,M$_\odot$ in steps of 0.5 dex; \item the maximum grain size, from 500\,$\mu$m to 1\,cm in steps of 0.5 dex; \item the scale height at a radius of 100\,au, from 5 to 20\,au in steps of 5\,au; \item the flaring index, with values 1.0, 1.1, and 1.2 (the disk is assumed to flare following $H(r)\propto r^\gamma$, where $H(r)$ is the scale height as a function of radius, and $\gamma$ is the flaring index); \item the surface density index, with values -0.5, -1.0, and -1.5; \item the inclination, with values 80, 82.5, and 86\,deg, based on the inclination derived from the optical jet \citep{Comeron2011-1}; and \item the interstellar extinction, from 0 to 5\,mag in steps of 0.5\,mag. \end{itemize} This setup resulted in a significant underestimation of the optical fluxes, even assuming no extinction. \citet{Huelamo10-1} encountered a similar problem when modeling this source, but attributed it to an uncertain distance value. However, the \emph{Gaia} distance measurement and its small uncertainty clearly show that this is not the case, and the source must be intrinsically more luminous in order to reproduce the observed fluxes. For this reason, we included the stellar radius as a free parameter in the subsequent modeling process. $\chi^2$ values were then computed for each model. We note that there are two different sets of near-IR observations \citep[one from 2MASS and one from][]{Comeron03-1}, possibly reflecting the variable nature of Par-Lup3-4.
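The model grid described above can be assembled programmatically. The sketch below spells out one plausible reading of the ranges (in particular, the exact 0.5 dex sampling of the maximum grain size, truncated at 1 cm, is our assumption) and counts the resulting MCFOST runs:

```python
from itertools import product

# Parameter grids transcribed from the itemized ranges above; the grain-size
# values (~0.5 dex steps from 0.5 mm, truncated at 1 cm) are an assumption.
grid = {
    "r_star_rsun":   [1.1, 1.5, 2.0, 2.5],
    "m_dust_msun":   [10.0**e for e in (-7.0, -6.5, -6.0, -5.5, -5.0)],
    "a_max_mm":      [0.5, 1.6, 5.0, 10.0],
    "h100_au":       [5, 10, 15, 20],
    "flaring_index": [1.0, 1.1, 1.2],
    "surf_dens_idx": [-0.5, -1.0, -1.5],
    "incl_deg":      [80, 82.5, 86],
    "a_v_mag":       [0.5 * k for k in range(11)],   # 0 to 5 mag
}

# One dict of parameter values per MCFOST model in the grid.
models = [dict(zip(grid, combo)) for combo in product(*grid.values())]
print(len(models))  # 95040 models under these assumptions
```

The grid size (roughly 10$^5$ models) illustrates why a full Bayesian exploration, with many more samples per parameter, quickly becomes computationally prohibitive for a radiative transfer code.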
Thus, two different $\chi^2$ values were computed for each model. The results from the SED modeling are shown in Table~\ref{tab:modeling_results} and Fig.~\ref{fig:SED_modeling}. SED fitting is, in general, a degenerate process, and in our case, the photometric coverage from IR to mm wavelengths is rather poor. The derived values are therefore highly uncertain (especially given that Par-Lup3-4 may still be surrounded by a dusty envelope, which we did not include in our models). However, one crucial result is that a radius of 2\,$R_\odot$ is required in both cases. Different tests with varying source distance and radius showed that it is not possible to fit the optical part of the observed SED with a source of 3200\,K and a radius of 1.1\,R$_\odot$ at the distance of 155\,pc measured by \emph{Gaia} because the source is simply not bright enough. Given the small uncertainties associated with the parallax in the \emph{Gaia} DR2 catalog, and because its distance is compatible with that of other sources in the region, a plausible explanation is that the radius of Par-Lup3-4 is larger than predicted, and thus that the source is younger than the median age of Lupus; the required radius is in fact inconsistent with the evolutionary models at any age. This idea is also supported by the presence of a molecular outflow and by the fact that the source appears to be still embedded in the cloud. If this is the case, it is likely that Par-Lup3-4 is still surrounded by material from the envelope. \begin{figure} \centering \includegraphics[width=\hsize]{Figures/chapter3/ParLup3-4_SEDmodel.pdf} \caption[SED and modeling results for Par-Lup3-4.]{SED and modeling results for Par-Lup3-4. Dots are photometric observations. Arrows are upper limits. Red and blue dots are the 2MASS and the \citet[][CM03]{Comeron03-1} observations, respectively.
The best-fit models are also shown with a similar color code.} \label{fig:SED_modeling} \end{figure} A second possibility to explain the 2 R$_{\odot}$ radius, and therefore the higher luminosity of Par-Lup3-4, is that the source is a binary system with a mass ratio near unity. If this is the case, the perpendicular structure described in Sect.\,\ref{sec:Molecular_outflow} could be an outflow originating from the companion. However, as explained before, no clear signature of a companion has been found so far. If either of these scenarios applies, the disk parameters derived from our modeling process should be treated with caution because the models we used may not reflect the true nature of the source. We note that there is a difference of a factor of $\sim$2 to $\sim$27 between the dust disk mass obtained from fitting the SED and the values obtained in Section \ref{resultados:contin}. We acknowledge that the model itself may need to be refined in order to minimize this discrepancy. The SED fitting reported in \citet{Huelamo10-1} fixed the stellar parameters from BT-Settl models, but they found that it was not possible to fit the SED unless they changed the distance. Now, thanks to \emph{Gaia}, we are certain that the distance is not the main problem. While it is impossible to fit the SED assuming a single central object of plausible luminosity, the existence of a secondary outflow supports the reasonable explanation that Par-Lup3-4 is a close binary with two nearly equal-mass components. \subsection{Characterizing the molecular outflow cavity} Par-Lup3-4 is the first VLM star to date for which the base of a bipolar molecular outflow has been detected at (sub)mm wavelengths while the highly supersonic outflow (jet) has been detected at optical wavelengths. Following the low-mass star outflow model, the interaction between the jet, or the wide-angle outflow, and the envelope creates the detected cavities \citep{Li-1996}.
The expelled gas and material carve out the cavities in the envelope, and the interaction at the boundary between the outflowing gas and the envelope material creates the physical conditions for exciting the CO transitions that we detected. In this section, we characterize the VLM outflow of Par-Lup3-4 in the context of mass ejection from low-mass protostars. The first property that we use to characterize the outflow is the lobe length, which in the case of the primary outflow of Par-Lup3-4 is $\sim$1.8 arcsec ($\sim$295 au). This is a lower limit. The uncertainty on the lobe length is at least about half of the beam size ($\sim$28 au at a distance of $\sim$155 pc), and it is likely much larger because we cannot recover the entire lobe. Within the uncertainties, the length of this outflow is of the same order of magnitude as those of other VLM protostars and proto-BDs, such as ISO-Oph\,102, GM\,Tau, MHO\,5 \citep{Phan-Bao14-1}, IC348-SMM2E \citep{Palau14-1}, or L1148-IRS \citep{Kauffmann11-1}, whose outflows extend between 500 and 1800 au. The outflow length for low-mass stars is between 0.1 and 10 pc (20,000 to 2,000,000 au) \citep[and references therein]{Arce-2007}, which is one or two orders of magnitude larger than the sizes of outflows from VLM protostars and proto-BDs. Therefore the Par-Lup3-4 outflow, with a length comparable to those of VLM protostars and proto-BDs, is a scaled-down version of the outflows from low-mass protostars. The outflow velocity measured for Par-Lup3-4 (6 km/s) is slightly higher than, but of the same order as, the velocities observed in other VLM stars and BDs, which are between 1 and 4.7\,km/s \citep{Phan-Bao08-1, Kauffmann11-1, Phan-Bao14-1}. Low-mass stars have outflows with velocities in the range of 10-100 km/s \citep[and references therein]{Arce-2007}. The velocity of the Par-Lup3-4 outflow is thus closer to the VLM regime than to the low-mass regime.
Par-Lup3-4 has a molecular outflow mass of $\sim$10$^{-6}$ M$_{\odot}$, which is within the range of values observed for other VLM stars and BDs, 10$^{-4}$ to 10$^{-6}$ M$_{\odot}$ \citep{Phan-Bao14-1}. As pointed out by \citet{Phan-Bao14-1}, the VLM outflow masses are at least one order of magnitude lower than the values obtained for low-mass protostars in a similar evolutionary stage. The mass-loss rate from Par-Lup3-4 ($\mathrm{\dot{M}_{outflow}}$=M$\mathrm{_{outflow}}$/$\tau\mathrm{_{dyn}}$) is 4.3$\times$10$^{-9}$ M$_{\odot}$/yr. The outflow mass-loss rate for a typical low-mass protostar ranges from 8.9\,$\times$\,10$^{-9}$ to 10$^{-4}$\,M$_{\odot}$\,yr$^{-1}$, although the median value is 10$^{-7}$ M$_{\odot}$ yr$^{-1}$ \citep{Levreault1988-2}. The mass-loss rate for VLM stars and BDs is lower, with values lying between 2.5\,$\times$\,10$^{-9}$ and 2\,$\times$\,10$^{-7}$\,M$_{\odot}$\,yr$^{-1}$ \citep{Phan-Bao14-1}. The mass-loss rate for Par-Lup3-4 is even lower than the expected value for VLM stars and BDs, but this might be an effect of the flux we potentially miss because we do not observe the whole outflow extent. The mass-loss rate of the stellar wind was obtained as $\mathrm{\dot{M}_{wind}}$ [M$_{\odot}$/yr] = M$\mathrm{_{outflow}}$\,v$\mathrm{_{max}}$/($\tau\mathrm{_{dyn}}$\,v$\mathrm{_{wind}}$) \citep{Phan-Bao14-1}. We used a wind velocity of 168 $\pm$ 30 km/s \citep{Comeron2011-1} and assumed that the momentum from the jet is completely transferred to the molecular outflow, which may occur in Class II sources. The wind mass-loss rate derived for Par-Lup3-4 is 1.53$\times$10$^{-10}$ M$_{\odot}$/yr, which is very similar to the value of MHO\,5, another VLM star studied by \citet{Phan-Bao14-1}. We compare the outflow mass against the wind mass-loss rate for Par-Lup3-4, along with other VLM stars, BDs, and low-mass stars observed at low resolution, in Fig.
\ref{mout_mwin}, and Par-Lup3-4 appears to follow the trend of the more massive sources. \begin{figure} \centering \includegraphics[width=\hsize]{Figures/chapter3/mwind_mout.pdf} \caption[Molecular outflow versus wind mass-loss rate of Par-Lup3-4 and very low-mass sources]{Molecular outflow mass vs. wind mass-loss rate of Par-Lup3-4 (red square), very low-mass sources (arrows are lower and upper limits) from \citet{Phan-Bao14-1}, and Class II young stellar objects (blue circles; \citealt{Levreault88-1}).} \label{mout_mwin} \end{figure} Accretion and outflows, jets, or winds are deeply linked phenomena \citep{Hartigan1995, Calvet1997, Rigliaco-2013, Natta14-1}. As material falls from the envelope or disk onto the central source, a jet perpendicular to the disk is launched by conservation of angular momentum. The accretion rate, in combination with the outflow properties, can give us information about the history of the source. The accretion rate of Par-Lup3-4 has been measured several times in recent years: 1.4\,$\times$\,10$^{-9}$\,M$_{\odot}$\,yr$^{-1}$ \citep{Comeron03-1}, 7.9\,$\times$\,10$^{-10}$\,M$_{\odot}$\,yr$^{-1}$ \citep{Bacciotti2011}, 5.0\,$\times$\,10$^{-10}$\,M$_{\odot}$\,yr$^{-1}$ \citep{Whelan14-1}, and 4.3\,$\times$\,10$^{-12}$\,M$_{\odot}$\,yr$^{-1}$ \citep{alcalaetal14-1}. These studies used different extinction values and accretion tracers, and each study had its own assumptions and caveats. The ratio $\mathrm{\dot{M}_{wind}}$/$\mathrm{\dot{M}_{acc}}$ for VLM stars and BDs is between 0.05 and 100 (see Table 3 in \citealt{Phan-Bao14-1}), while for low-mass stars the expected value is in the range of ${\sim}$0.0003-0.4 \citep{Hartigan1995}. Using the derived wind mass-loss rate and the accretion rates from the literature, the ratio $\mathrm{\dot{M}_{wind}}$/$\mathrm{\dot{M}_{acc}}$ varies between 0.07 and 22, which is within the expected range for VLM stars and BDs.
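The wind mass-loss rate quoted above follows directly from the outflow quantities already derived; a short plain-Python check of the Phan-Bao et al. (2014) relation:

```python
# Mdot_wind = M_outflow * v_max / (tau_dyn * v_wind), momentum-conserving
# transfer from the jet/wind to the molecular outflow (Phan-Bao et al. 2014).
m_out, v_max = 9.5e-7, 6.0        # Msun, km/s (this work)
tau_dyn, v_wind = 220.0, 168.0    # yr, km/s (this work; Comeron et al. 2011)

mdot_wind = m_out * v_max / (tau_dyn * v_wind)
print(f"Mdot_wind = {mdot_wind:.2e} Msun/yr")  # ~1.5e-10 Msun/yr
```

The $\pm$30 km/s uncertainty on v$\mathrm{_{wind}}$ alone translates into a relative uncertainty of roughly 20\% on this rate, before the (larger) uncertainties on the outflow mass and dynamical time are considered.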
Additionally, we investigated another parameter, the opening angle, which can be used for evolutionary classification. The opening angle of the outflow, described in Sect. \ref{gaseo}, has an average value of 112$\mathrm{^\circ}$ over the two lobes. Previous studies of low-mass stars have discussed the relation between the age or evolutionary class and the opening angle, because the angle broadens with age \citep{Offner-2011}. \citet{Arce-2006} classified low-mass stars as Class 0 for opening angles $\leq$\,55$\mathrm{^\circ}$, Class\,I if the angle is $\geq$\,75$\mathrm{^\circ}$, and Class II when the outflow has no clear structure. A year later, \citet{Arce-2007} defined the boundary for Class I as $\geq$90$\mathrm{^\circ}$. Based on these studies and given its wide opening angle, Par-Lup3-4 is consistent with Class I and near the transition to Class II, although these values are still under discussion in the field and cannot be considered the only parameter for classifying the evolutionary stage of the source. Unfortunately, there are no previous measurements of outflow opening angles for VLM stars and BDs in the literature. Future studies should correlate the opening angle and the evolutionary stage in VLM stars using larger samples. \subsection{VLM stars and BDs as a scaled-down version of low-mass stars?} In the previous section, we reviewed the outflow properties of Par-Lup3-4. Properties such as the outflow length, velocity, outflow mass, and wind mass-loss rate are in the expected range for VLM protostars and proto-BDs. Additionally, other properties, such as the disk mass, are in the expected range for VLM stars and BDs. The disk mass of Par-Lup3-4 is $\sim$ 10$^{-6}$ M$_{\odot}$ under the assumptions mentioned in Section \ref{resultados:contin}.
Low-mass protostar disk masses are in the range of 10$^{-3}$ to 10$^{-1}$ M$_{\odot}$, and the theoretical values obtained for VLM star and BD disks using radiative transfer algorithms extend from 10$^{-6}$ to 10$^{-3}$ M$_{\odot}$. Recently, \citet{Sanchis20} measured the masses of several BD disks in Lupus and obtained values between 7 $\times$ 10$^{-4}$ M$_{\odot}$ and 6 $\times$ 10$^{-5}$ M$_{\odot}$. The disk mass measured for Par-Lup3-4 therefore indicates a downsized version of the low-mass protostar disks, although the mass inferred from our ALMA observations may represent a lower limit given the disk inclination and the uncertainties in the optical depth. All these properties are in agreement with previous studies of VLM stars and BDs (e.g., \citealt{Phan-Bao14-1}), which pointed out that the formation of VLM stars and BDs follows a scaled-down version of low-mass star formation. While these characteristics indicate a downsized version of star formation, several measurement uncertainties remain that may further constrain this picture. For example, the outflow mass calculation carries uncertainties, such as in the optical depth or the CO abundance relative to H$_{2}$, which can vary by a factor of three \citep[and references therein]{Dunham2014}. In the case of Par-Lup3-4, all these uncertainties are negligible compared to that of the outflow geometry, which is not fully revealed by our interferometric ALMA data. This directly affects the average length of the outflow and the dynamical time, and propagates into the other dynamical parameters; both may be larger if the whole extent and shape of both outflow lobes are revealed. Another important source of uncertainty is the missing flux and possible faint extended emission in our observations; the maximum outflow length that we detect is 2.9 arcsec, but the outflow is expected to reach larger scales because we do not detect the shock front.
Therefore, our results might be impaired by two effects: extended flux filtered out by the interferometer, and insufficient sensitivity. \label{maximum} Additionally, previous studies of VLM stars and BDs are biased by low-sensitivity observations, which probe a smaller velocity range; this affects the derived dynamical properties. Another important bias in previous studies comes from the inclusion of face-on disks or disks with inclinations up to $\sim$30$\mathrm{^{\circ}}$, which are easier to detect than edge-on disk systems. Previous observations of molecular outflows in VLM stars and BDs might be affected by the limited sensitivity, which explains the low detection rate found so far (e.g., \citealt{Phan-Bao14-1}). With this work, we have shown that the high sensitivity of ALMA allows the detection of a very low amount of expelled mass with enough resolution to observe the base of the outflow. We therefore conclude that more sensitive observations are indispensable to improve the statistics of outflows in these sources. In spite of the uncertainties and the small and biased sample of VLM stars and BDs with outflows, the main conclusion still remains: the formation of VLM stars and BDs can be understood as a downsized version of low-mass star formation. \subsection{Revealing the true nature of Par-Lup3-4} Par-Lup3-4 is a complex object that may help us understand the formation of VLM stars and BDs. Previous SED fitting \citep{Huelamo10-1} was not accurate because of the degeneracy between age and distance. The latest \emph{Gaia} data release broke this degeneracy by providing a precise distance. Our SED models underestimated the optical fluxes, which is best explained by two main possibilities: either the source is younger than expected, or it is a binary.
The presence of a bipolar molecular outflow, which is more common in Class 0 and I sources, together with the opening angle of the outflow, points to a younger nature than Class II, in agreement with one of the modeling results. We detect a stream of gas perpendicular to the detected bipolar outflow whose origin is puzzling. We discussed four scenarios that might explain it. The possibility that this is a secondary molecular outflow, which would mean that Par-Lup3-4 might be a binary, would agree with the results obtained in the SED fitting analysis. Deep ALMA observations with higher resolution that point at the base of this possible secondary outflow may help to distinguish between outflowing material and cloud contamination, and might also help resolve the continuum emission of a possible binary source. Additionally, ACA and Total Power ALMA observations are required to recover the larger spatial scales, in an attempt to detect the full extent of the outflow. \section{Conclusions} Par-Lup3-4 is a very low-mass protostar located in the Lupus 3 cloud. It has an edge-on disk with an optical jet. We observed Par-Lup3-4 using ALMA Bands 6 and 7, and we detected continuum emission and gas emission in three molecular lines (CO\,J=2-1, CO\,J=3-2, and $^{13}$CO\,J=3-2). These observations revealed for the first time the faint base of a molecular outflow and the cavity walls associated with this source, and a rotation pattern is seen in $^{13}$CO near the location of the continuum source. The main results from this work are listed below. \begin{itemize} \item The dust disk is faint and unresolved. The lower limit of the dust disk mass is 0.28 M$\mathrm{_{\oplus}}$. \item The SED of Par-Lup3-4 can only be fit with models including a stellar radius of 2 R$_{\odot}$, which is far from the 1.1 R$_{\odot}$ value derived from evolutionary models at an age of 2\,Myr.
We suggest that this radius value, and therefore a higher stellar luminosity, can be explained if Par-Lup3-4 is a close binary system. \item The average extent of the outflow is $\sim$300 au, which is a relatively short length for an outflow in the very low-mass regime. The outflow mass is found to be $\sim$10$^{-6}$ M$_{\odot}$, and the maximum outflow velocity we derive is 6 km/s. We may not be observing the full extent of the outflow, and as a consequence, a portion of the outflow mass may also be missed. Our observations therefore place lower limits on these outflow quantities. \item We detected a secondary structure that extends from the location of Par-Lup3-4 toward the northeast, perpendicular to the primary molecular outflow reported here. This might be a second outflow from Par-Lup3-4, suggesting that the source is a binary, or it may come from another nearby source. However, cloud contamination and a stream of infalling and rotating foreground material from the envelope cannot be ruled out. \end{itemize} After measuring the properties of this particular VLM star, including the outflow length, mass, and maximum velocity, we compared our results with the predictions of VLM star and BD formation theories and found that they are all consistent with the formation of Par-Lup3-4 as a scaled-down version of low-mass star formation, as expected. \begin{acknowledgements} We thank the referee for their thorough review and comments that helped improve the quality of this manuscript. This work makes use of the following ALMA data: ADS/JAO.ALMA$\#$2015.1.00512.S and ADS/JAO.ALMA$\#$2017.1.01401.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\\ A.S-M. and M.R.S. acknowledge support from the "Iniciativa Cient\'ifica Milenio" via N\'ucleo Milenio de Formaci\'on Planetaria. NH acknowledges financial support from the Spanish State Research Agency (AEI) Project No. ESP2017-87676-C5-1-R and from project No. MDM-2017-0737 Unidad de Excelencia Mar\'ia de Maeztu - Centro de Astrobiolog\'ia (CSIC-INTA). NH and FC acknowledge support from the Faculty of the European Space Astronomy Centre (ESAC). IdG is partially supported by MCIU-AEI (Spain) grant AYA2017-84390-C2-R (co-funded by FEDER). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Cancer diagnosis and treatment plans are guided by multiple streams of data acquired from several modalities, such as radiology scans, molecular profiling, histology slides, and clinical variables. Each characterizes unique aspects of tumor biology and, collectively, they help clinicians understand patient prognosis and assess therapeutic options. Advances in molecular profiling techniques have enabled the discovery of prognostic gene signatures, bringing precision medicine to the forefront of clinical practice \cite{el-deiry_current_2019}. More recently, computational techniques in the field of radiology have identified potential imaging-based phenotypes of treatment response and patient survival. Such approaches leverage large sets of explicitly designed image features (commonly known as radiomics \cite{gillies_radiomics:_2015}) or entail the novel discovery of image patterns by optimizing highly parameterized deep learning models such as convolutional neural networks (CNN) \cite{saba_present_2019} for prediction. Along similar lines, the digitization of histopathology slides has opened new avenues for tissue-based assays that can stratify patients by risk from H\&E slide images alone \cite{skrede_deep_2020}. Given the complementary nature of these various modalities in comprehensive clinical assessment, we hypothesize that their combination in a rigorous machine learning framework may predict patient outcomes more robustly than qualitative clinical assessment or unimodal strategies. Glioma is an intuitive candidate for deep learning-based multimodal biomarkers owing to the presence of well-characterized prognostic information across modalities \cite{louis_2016_2016}, as well as its severity \cite{siegel_cancer_2017}. Gliomas can be subdivided by their malignancy into histological grades II-IV \cite{louis_2016_2016}. 
Grades differ in their morphologic and molecular heterogeneity \cite{olar_using_2014}, which correspond to treatment resistance and short-term recurrence \cite{stupp_radiotherapy_2005,parker_intratumoral_2016}. Quantitative analysis of glioma \cite{bae_radiomic_2018} and its tumor habitat \cite{prasanna_radiomic_2017} on MRI has demonstrated strong prognostic potential, as well as complex interactions with genotype \cite{beig_radiogenomic-based_2020} and clinical variables \cite{beig_sexually_2021}. Most deep multimodal prediction strategies to date have focused on the fusion of biopsy-based modalities \cite{chen_pathomic_2019,mobadersany_predicting_2018,cheerla_deep_2019}. For instance, previous work integrating molecular data with pathology analysis via CNN or graph convolutional neural networks (GCN) has shown that a deep, multimodal approach improves prognosis prediction in glioma patients \cite{chen_pathomic_2019,mobadersany_predicting_2018}. Likewise, Cheerla et al. integrated histology, clinical, and sequencing data across cancer types by condensing each to a correlated prognostic feature representation \cite{cheerla_deep_2019}. Multimodal research involving radiology has been predominantly correlative in nature \cite{beig_sexually_2021,beig_radiogenomic-based_2020}. Some have explored late-stage fusion approaches combining feature-based representations from radiology with similar pathology \cite{vaidya_raptomics_2018} or genomic features \cite{subramanian_multimodal_2020} to predict recurrence. While promising, these strategies rely on hand-crafted feature sets and simple multimodal classifiers that likely limit their ability to learn complex prognostic interactions between modalities and realize the full additive benefit of integrating diverse clinical modalities. To our knowledge, no study to date has combined radiology, pathology, and genomic data within a single deep learning framework for outcome prediction or patient stratification. 
Doing so requires overcoming several challenges. First, owing to the difficulty of assembling multimodal datasets with corresponding outcome data in large quantities, fusion schemes must be highly data efficient in learning complex multimodal interactions. Second, the presence of strongly correlated prognostic signals between modalities \cite{cheerla_deep_2019} can create redundancy and hinder model performance. In this paper, we introduce a deep learning framework that combines radiologic, histologic, genomic, and clinical data into a fused prognostic risk score. Using a novel technique referred to as Deep Orthogonal Fusion (DOF), we train models using a Multimodal Orthogonalization (MMO) loss function to maximize the independent contribution of each data modality, effectively improving predictive performance. Our approach, depicted in Fig. \ref{fig1}, first trains unimodal embeddings for overall survival (OS) prediction through a Cox partial likelihood loss function. Next, these embeddings are combined through an attention-gated tensor fusion to capture all possible interactions between each data modality. Fusion models are trained simultaneously to predict OS and minimize the correlation between unimodal embeddings. We emphasize the following contributions: \textbf{Deep Fusion of Radiology, Pathology, and Omics Data}: We present a powerful, data-efficient framework for combining oncologic data across modalities. Our approach enabled a previously unexplored deep integration of radiology with tissue-based modalities and clinical variables for patient risk stratification. This fusion model significantly improved upon unimodal deep learning models. In particular, we found that integrating radiology into deep multimodal models, which is under-explored in previous prognostic studies, conferred the single greatest performance increase. 
This finding suggests the presence of independent, complementary prognostic information between radiology and biopsy-based modalities and warrants their combination in future prognostic studies. \textbf{MMO}: To mitigate the effect of inherent correlations between data modalities, we present an MMO loss function that penalizes correlation between unimodal embeddings and encourages each to provide independent prognostic information. We find that this training scheme, which we call DOF, improves prediction by learning and fusing disentangled, prognostic representations from each modality. DOF was also found to outperform a fusion scheme that enforces correlated representations between modalities \cite{cheerla_deep_2019}, emphasizing that the dissimilarity of these clinical data streams is crucial to their collective strength. \textbf{Multi-parametric Radiology FeatureNet:} A neural network architecture that can fuse CNN-extracted deep features from local tumor regions on multiple image sequences (e.g., Gd-T1w and T2w-FLAIR scans) with global hand-crafted radiomics features extracted across the full 3D region-of-interest. \textbf{Independent prognostic biomarker of OS in glioma patients:} Using 15-fold Monte Carlo cross-validation with a 20\% holdout test set, we evaluate deep fusion models to predict glioma prognosis. We compare this multimodal risk score with existing prognostic clinical subsets and biomarkers (grade, \textit{IDH} status) and investigate its prognostic value within these outcome-associated groups. \begin{figure} \centering \includegraphics[width=1\textwidth]{fusion_framework_V6.png} \caption{DOF model architecture and training.} \label{fig1} \end{figure} \section{Methodology} Let $X$ be a training minibatch of data for $N$ patients, each containing $M$ modalities such that $X = [x_1, x_2, \dots, x_M]$. For each modality $m$, $x_m$ includes data for all $N$ patients.
$\Phi_m$ denotes a trainable unimodal network, which accepts $x_m$ and generates a deep embedding $h_m = \Phi_m(x_m) \in \mathbb{R}^{l_1 \times N}$. \subsection{Multimodal Fusion} When $M>1$, we combine embeddings from each modality in a multimodal fusion network. For each $h_m$, an attention mechanism is applied to control its expressiveness based on information from the other modalities. An additional fully connected layer results in $h_m^S$ of length $l_2$. Attention weights of length $l_2$ are obtained through a bilinear transformation of $h_m$ with all other embeddings (denoted as $H_{\cancel{m}}$), then applied to $h_m^S$ to yield the attention-gated embedding: \begin{equation} h_m^* = a_m * h_m^S = \sigma (h_m^T * W_A * H_{\cancel{m}})* h_m^S. \end{equation} To capture all possible interactions between modalities, we combine attention-weighted embeddings through an outer product between modalities, known as tensor fusion \cite{zadeh_tensor_2017}. A value of 1 is also included in each vector, allowing for partial interactions between modalities and for the constituent unimodal embeddings to be retained. The output matrix \begin{equation} F = \begin{bmatrix} 1 \\ h_{1}^* \end{bmatrix} \otimes \begin{bmatrix} 1 \\ h_{2}^* \end{bmatrix} \otimes \dots \otimes \begin{bmatrix} 1 \\ h_{M}^* \end{bmatrix} \end{equation} is an $M$-dimensional hypercube of all multimodal interactions with sides of length $l_2 + 1$. Fig. \ref{fig1} depicts $F$ for the fusion of radiology, pathology, and genomic data. It contains subregions corresponding to unaltered unimodal embeddings, pairwise fusions between 2 modalities, and trilinear fusion between all three modalities. A final set of fully connected layers, denoted by $\Phi_F$, is applied to tensor fusion features for a final fused embedding $h_F = \Phi_F (F)$. \subsection{MMO Loss} To address the shortcoming of multimodal models converging to correlated predictors, we introduce MMO loss.
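As a concrete reference point, the attention gating and tensor fusion described above can be sketched in NumPy for a single patient. This is a minimal illustration with toy dimensions and random (untrained) weights, not the authors' released implementation; the weight names `W_S` and `W_A` are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
l1, l2, M = 32, 16, 3   # unimodal embedding size, scaled size, number of modalities

# Toy unimodal embeddings h_m for one patient (stand-ins for Phi_m outputs).
H = [rng.standard_normal(l1) for _ in range(M)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

gated = []
for m in range(M):
    others = np.concatenate([H[j] for j in range(M) if j != m])   # H_{not m}
    # Hypothetical random weights; in the model these are learned.
    W_S = rng.standard_normal((l2, l1)) * 0.1                     # h_m -> h_m^S
    W_A = rng.standard_normal((l2, l1, others.size)) * 0.01       # bilinear attention
    h_s = np.tanh(W_S @ H[m])                                     # compressed h_m^S
    a = sigmoid(np.einsum('i,kij,j->k', H[m], W_A, others))       # attention weights
    gated.append(a * h_s)                                         # h_m^* = a_m * h_m^S

# Tensor fusion: outer product of [1; h_m^*] across the M modalities.
vs = [np.concatenate(([1.0], g)) for g in gated]
F = np.einsum('i,j,k->ijk', *vs)
print(F.shape)                              # (l2+1)^M hypercube of interactions
print(np.allclose(F[1:, 0, 0], gated[0]))   # True: unimodal block is retained
```

The 1 prepended to each vector makes $F$ contain the original unimodal embeddings and every pairwise block alongside the full trilinear term, matching the structure described for Fig. \ref{fig1}.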
Inspired by Orthogonal Low-rank Embedding \cite{lezama_ole_2017}, we stipulate that unimodal embeddings preceding fusion should be orthogonal. This criterion enforces that each modality introduced contributes unique information to outcome prediction, rather than relying on signal redundancy between modalities. Each $\Phi_m$ is updated through MMO loss to yield embeddings that better complement other modalities. Let $H \in \mathbb{R}^{l_1 \times MN}$ be the set of embeddings from all modalities. MMO loss is computed as \begin{equation} L_{MMO} = \frac{1}{MN} \sum_{m=1}^M \max(1, ||h_m||_*) - ||H||_* \end{equation} where $|| \cdot ||_*$ denotes the matrix nuclear norm (i.e., the sum of the matrix singular values). This loss is the difference between the sum of nuclear norms per embedding and the nuclear norm of all embeddings combined. It penalizes the scenario in which the combined embeddings span less variance than the modalities do separately, and it is minimized when all unimodal embeddings are fully orthogonal. The per-modality norm is bounded to a minimum of 1 to prevent the collapse of embedding features to zero. \subsection{Cox Partial Likelihood Loss} The final layer of each network, parameterized by $\beta$, is a fully connected layer with a single unit. This output functions as a Cox proportional hazards model using the deep embedding from the previous layer, $h$, as its covariates. This final layer's output, $\theta$, is the log hazard ratio, which is used as a risk score. The log hazard ratio for patient $i$ is denoted as $\theta_i = h_i^T \beta$. We define the negative log likelihood $L_{pl}$ as our cost function \begin{equation} L_{pl} = - \sum_{i: E_i=1} \left ( \theta_i - \log \sum_{j: t_j \geq t_i} e^{\theta_j} \right ) \end{equation} where $t \in \mathbb{R}^{N \times 1}$ indicates the time to date of last follow-up. The event vector, $E\in\{0,1\}^{N \times 1}$, equals 1 if an event was observed (death) or 0 if a patient was censored (still alive) at time of last follow-up.
Each patient $i$ with an observed event is compared against all patients whose observation time was greater than or equal to $t_i$. Networks are trained using the final loss $L$, which is a linear combination of the two loss functions specified above \begin{equation} L = L_{pl} + \gamma L_{MMO} \end{equation} where $\gamma$ is a scalar weighting the contribution of MMO loss relative to Cox partial likelihood loss. When training unimodal networks, $\gamma$ is always zero. Performance for various values of $\gamma$ is reported in Table \ref{tab:tabs4}. \subsection{Modality-specific Networks for Outcome Prediction} \textbf{Radiology: }A multiple-input CNN was designed to incorporate multiparametric MRI data and global lesion measurements, shown in Fig. \ref{figS1}. The backbone of the network is a VGG-19 CNN \cite{simonyan_very_2015} with batch normalization, substituting the final max pooling layer with a $4\times4$ adaptive average pooling. Two pre-trained \cite{deng_imagenet_2009} CNN branches separately extract features from Gd-T1w and T2w-FLAIR images, which are then concatenated and passed through a fully connected layer. A third branch passes hand-crafted features (described in Section 3) through a similar fully connected layer. Concatenated embeddings from all branches are fed to 2 additional fully connected layers. All fully connected layers preceding the final embedding layer have 128 units. \textbf{Histology, genomic, and clinical data:} We reused the models proposed in \cite{chen_pathomic_2019} -- a VGG-19 CNN with pretrained convolutional layers for histology and a Self-Normalizing Neural Network (SNN) for genomic data. We also use this SNN for analysis of clinical data, which was not explored in \cite{chen_pathomic_2019}.
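Both loss terms, and their combination $L = L_{pl} + \gamma L_{MMO}$, can be sketched in NumPy. This is a minimal illustration of the formulas above (ours, not the released training code); we assume the $1/(MN)$ factor in the MMO loss scales the whole difference.

```python
import numpy as np

def mmo_loss(embeddings):
    """MMO loss: sum of per-modality nuclear norms (floored at 1) minus the
    nuclear norm of all embeddings concatenated, scaled by 1/(M*N).
    embeddings: list of M arrays of shape (l1, N)."""
    M, N = len(embeddings), embeddings[0].shape[1]
    per_mod = sum(max(1.0, np.linalg.norm(h, ord='nuc')) for h in embeddings)
    H = np.concatenate(embeddings, axis=1)            # shape (l1, M*N)
    return (per_mod - np.linalg.norm(H, ord='nuc')) / (M * N)

def cox_pl_loss(theta, t, e):
    """Negative Cox partial log-likelihood over a minibatch.
    theta: log hazard ratios; t: follow-up times; e: 1 = death, 0 = censored."""
    loss = 0.0
    for i in range(len(theta)):
        if e[i] == 1:
            at_risk = theta[t >= t[i]]   # patients still under observation at t_i
            loss -= theta[i] - np.log(np.sum(np.exp(at_risk)))
    return loss

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A, B = Q[:, :4], Q[:, 4:]                 # embeddings with orthogonal ranges
print(mmo_loss([A, A]))                   # redundant modalities: positive penalty
print(mmo_loss([A, B]))                   # orthogonal modalities: ~0

theta = np.array([2.0, 0.5, -1.0])        # higher theta = higher predicted risk
t = np.array([1.0, 2.0, 3.0])
e = np.array([1, 0, 1])
gamma = 0.5
L = cox_pl_loss(theta, t, e) + gamma * mmo_loss([A, B])
print(L)
```

With orthogonal embeddings the MMO term vanishes, so $L$ reduces to the Cox term; note also that the outer sum in `cox_pl_loss` runs only over observed events, so a fully censored batch contributes nothing.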
\begin{figure} \centering \includegraphics[width=.9\textwidth]{sampling_strategy_V3.png} \caption{Sampling multiple radiology \& pathology images for patient level risk scores.} \label{fig2} \end{figure} \section{Experimental Details} \textbf{Radiology}: 176 patients (see patient selection in Fig. \ref{figS2}) with Gd-T1w and T2w-FLAIR scans from the TCGA-GBM \cite{scarpace_radiology_2016} and TCGA-LGG \cite{pedano_radiology_2016} studies were obtained from TCIA \cite{clark_cancer_2013} and annotated by 7 radiologists to delineate the enhancing lesion and edema region. Volumes were registered to the MNI-ICBM standardized brain atlas with 1 mm isotropic resolution, processed with N4 bias correction, and intensity normalized. $96\times96\times3$ patches were generated from matching regions of Gd-T1w and T2w-FLAIR images within the enhancing lesion. For each patient, 4 samples were generated from four even quadrants of the tumor along the z-axis. Patch slice position was randomized in unimodal training and fixed to the middle of each quadrant during inference and fusion network training. Nine features including size, shape, and intensity measures were extracted separately from Gd-T1w and T2w-FLAIR images, and summarized in three fashions for a total of 56 handcrafted features, listed in Table \ref{tab:tabs1}. \textbf{Pathology and Genomics:} We obtained $1024\times1024$ normalized regions-of-interest (ROIs) and DNA sequencing data curated by \cite{mobadersany_predicting_2018}. Each patient had 1-3 ROIs from diagnostic H\&E slides, totaling 372 images. DNA data consisted of 80 features including mutational status and copy number variation (Table \ref{tab:tabs2}). \textbf{Clinical information: }14 clinical features were input to an SNN for the prediction of prognosis. The feature set included demographic and treatment details, as well as subjective histological subtype (see Table \ref{tab:tabs3}).
\textbf{Implementation Details:} The embedding size for unimodal networks, $l_1$, was set to 32. Pre-fusion scaled embedding size, $l_2$, was 32 for $M$=2, 16 for $M$=3, and 8 for $M$=4. Post-fusion fully connected layers consisted of 128 units each. The final layer of each network had a single unit with sigmoid activation, but its outputs were rescaled between -3 and 3 to function as a prognostic risk score. Unimodal networks were trained for 50 epochs with linear learning rate decay, while multimodal networks were trained for 30 epochs with learning rate decay beginning at the 10th epoch. When training multimodal networks, the unimodal embedding layers were frozen for 5 epochs to train the fusion layers only, then unfrozen for joint training of embeddings and fusion layers. \textbf{Statistical Analysis: }All models were trained via 15-fold Monte Carlo cross-validation with 20\% holdout using the patient-level splits provided in \cite{mobadersany_predicting_2018}. The primary performance metric was the median observed concordance index (C-index) across folds, a global metric of prognostic model discriminant power. We evaluated all possible combinations of a patient’s data (see sampling strategy in Fig. \ref{fig2}) and used the 75th percentile of predicted risk score as their overall prediction. C-indexes of the best-performing unimodal model and the DOF multimodal model were compared with a Mann-Whitney $U$ test \cite{mann_test_1947}. Binary low/high-risk groups were derived from the risk scores, where a risk score $>$0 corresponded to high risk. For Kaplan-Meier (KM) curves and calculation of hazard ratio (HR), patient-level risk scores were pooled across validation folds. \begin{table}[] \centering \caption{Median C-index of unimodal and fusion models with and without MMO loss. 
} \label{tab:table1} \begin{tabular}{@{}cccc@{}} \toprule \textbf{Group} & \textbf{Model} & \textbf{Cox Loss Only} & \textbf{With MMO Loss} \\ \midrule Unimodal & Rad & 0.718 ± 0.064 & -- \\ & Path & 0.715 ± 0.054 & -- \\ & Gen & 0.716 ± 0.063 & -- \\ & Clin & 0.702 ± 0.049 & -- \\ Pairwise Fusion & Path+Gen & 0.711 ± 0.055 & 0.752 ± 0.072 \\ & Gen+Clin & 0.702 ± 0.053 & 0.703 ± 0.052 \\ & Rad+Gen & 0.761 ± 0.071 & 0.766 ± 0.067 \\ & Rad+Path & 0.742 ± 0.067 & 0.752 ± 0.072 \\ & Rad+Clin & 0.746 ± 0.068 & 0.736 ± 0.067 \\ & Path+Clin & 0.696 ± 0.051 & 0.690 ± 0.043 \\ Triple Fusion & Path+Gen+Clin & 0.704 ± 0.059 & 0.720 ± 0.056 \\ & Rad+Path+Clin & 0.748 ± 0.067 & 0.741 ± 0.067 \\ & Rad+Gen+Clin & 0.754 ± 0.066 & 0.755 ± 0.067 \\ & Rad+Path+Gen & 0.764 ± 0.062 & \textbf{0.788 ± 0.067} \\ Full Fusion & Rad+Path+Gen+Clin & \textbf{0.775 ± 0.061} & 0.785 ± 0.077 \\ \bottomrule \end{tabular} \end{table} \section{Results and Discussion} Genomic- and pathology-only model performance metrics are practically similar (Table \ref{tab:table1}). However, the CNN-only (C-index=0.687 ± 0.067) and feature-only (C-index=0.653 ± 0.057) configurations of the radiology model underperform relative to the aforementioned unimodal models. Combining the radiology CNN features with the handcrafted features results in the strongest unimodal model. In contrast, clinical features are the least prognostic unimodal model. 
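For reference, the concordance index reported in these comparisons can be computed as the fraction of comparable patient pairs whose predicted risks are correctly ordered. Below is a minimal sketch of Harrell's C-index, ignoring ties (ours, not the evaluation code used in the paper):

```python
import numpy as np

def concordance_index(risk, t, e):
    """Harrell's C-index, ignoring ties.

    A pair (i, j) is comparable when patient i has an observed event and
    t_i < t_j; it is concordant when the model assigns i the higher risk."""
    num, den = 0, 0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if e[i] == 1 and t[i] < t[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
    return num / den

risk = np.array([2.1, 0.3, -0.5, 1.0])
t = np.array([1.0, 4.0, 5.0, 2.0])
e = np.array([1, 1, 0, 1])
print(concordance_index(risk, t, e))   # 1.0: risk order matches survival order
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect discrimination, which is why the fusion models' values in the 0.7-0.8 range indicate meaningful prognostic signal.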
\begin{figure} \centering \def\columnwidth{\columnwidth} \input{miccai_fig_3_cameraready_tightcrop.pdf_tex} \caption{Stratification by (a) grade, (b) IDH mutation status, and (c) DOF risk groups.} \label{fig3} \end{figure} \begin{figure} \centering \def\columnwidth{\columnwidth} \input{miccai_fig_4_cameraready_tightcrop_tighter.pdf_tex} \caption{DOF risk groups stratify patients by OS within (a,b) grade \& (c,d) IDH subsets.} \label{fig4} \end{figure} Deep fusion models integrating radiology outperform individual unimodal models, naive ensembles of unimodal models, as well as fusions of only clinical and/or biopsy-derived modalities. The full fusion model (C-index=0.775 ± 0.061) achieves the best performance when trained with Cox loss \cite{ching_cox-nnet_2018} alone, second only to the Rad+Path+Gen model trained with MMO loss. Naive late fusion ensembles (i.e., averaging unimodal risk scores) exhibit inferior performance for Rad+Path+Gen with (C-index=0.735 ± 0.063) and without (C-index=0.739 ± 0.062) clinical features, confirming the benefits of deep fusion. The addition of MMO loss when training these deep fusion models consistently improves their performance at five different weightings (Table \ref{tab:tabs4}), with best performance for both at $\gamma=.5$. When all fusion models are trained at this weighting, 8 of 11 improve in performance. DOF combining radiology, pathology, and genomic data predicts glioma survival best overall with a median C-index of 0.788 ± 0.067, a significant increase over the best unimodal model (p=0.023). An ablation study was conducted to investigate the contributions of components of the fusion module (modality attention-gating and tensor fusion). We found that a configuration including both yields the best performance, but that strong results can also be achieved with a simplified fusion module (Table \ref{tab:tabs5}). In Fig. 
\ref{fig3}, KM plots show that the stratification of patients by OS into risk groups derived from this model performs comparably to established clinical markers. In Fig. \ref{fig4}, risk groups further stratify OS within grade and \textit{IDH} status groups. In sum, these results suggest that the DOF model provides useful prognostic value beyond existing clinical subsets and/or individual biomarkers. To further benchmark our approach, we implemented the fusion scheme of \cite{cheerla_deep_2019}, which combined pathology images, DNA, miRNA, and clinical data, and which we further modified to also include radiology data. The network and learning approach are described in depth in Table \ref{tab:tabs6}. In contrast to DOF, \cite{cheerla_deep_2019} instead seeks to maximize the correlation between modality embeddings prior to prediction. A model combining radiology, pathology, and genomic data achieved C-index=0.730 ± 0.05, while a model excluding the added radiology arm stratified patients by OS with C-index=0.715 ± 0.05. \section{Conclusions} We present DOF, a data-efficient scheme for the novel fusion of radiology, histology, genomic, and clinical data for multimodal prognostic biomarker discovery. The integration of multi-dimensional data from biopsy-based modalities and radiology strongly boosts the ability to stratify glioma patients by OS. The addition of a novel MMO loss component, which forces unimodal embeddings to provide independent and complementary information to the fused prediction, further improves prognostic performance. Our DOF model incorporating radiology, histology, and genomic data significantly stratifies glioma patients by OS within outcome-associated subsets, offering additional granularity to routine clinical markers. DOF can be applied to any number of cancer domains, modality combinations, or new clinical endpoints including treatment response.
\printbibliography \pagebreak \section{Supplemental Information} \beginsupplement \begin{figure}[!hbt] \centering \includegraphics[width=.8\textwidth]{radfeaturenet_vgg.png} \caption{Radiology FeatureNet combining images and features from Gd-T1w and T2w-FLAIR scans. }\label{figS1} \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{MICCAI_patient_selection_2021.png} \caption{Patient selection flowchart.} \label{figS2} \end{figure} \begin{table}[!hbt] \caption{List of handcrafted radiology features. } \label{tab:tabs1} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lll@{}} \toprule \textbf{Feature name/number} & \textbf{Feature Description} & \textbf{Summarization of annotated regions} \\ \midrule No. regions (f1, f2) & \# annotated lesions on Gd-T1w, edema on T2w-FLAIR & N/A \\ Volume (f3-f8) & Volume of 3D ROI, measured in mm$^3$ & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ Longest axis (f9-f14) & Longest distance between a contour’s vertices & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ SA/V Ratio (f15-f20) & Ratio of the surface area to volume. 
& sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ Sphericity (f21-f26) & How closely a region’s shape resembles a sphere & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ Mean I (f27-f32) & Mean intensity in contoured region & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ 10th percentile (f33-f38) & 10th \% of intensities in contoured region & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ 90th percentile (f39-f44) & 90th \% of intensities in contoured region & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ Skewness (f45-f50) & Skewness of intensities in contoured region & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ Variance (f51-f56) & Variance of intensities in contoured region & sum, largest, \& avg on Gd-T1w and T2w-FLAIR \\ \bottomrule \end{tabular}% } \end{table} \begin{table}[] \tiny \caption{List of molecular features} \label{tab:tabs2} \resizebox{\textwidth}{!}{% \begin{tabularx}{\textwidth}{lXl} \toprule \textbf{Category} & \textbf{Variable} & \textbf{Value Type} \\ \midrule Gene-level CNV (f1-f41) & EGFR, MDM4, MYC, BRAF, EZH2, MET, SMO, KIAA1549, CREB3L2, NTRK1, PRCC, BLM, NTRK3, CRTC3, CDKN2A, CDKN2B, FGFR2, TSHR, TCL1A, TRIP11, GOLGA5, GPHN, DICER1, TCL6, EBF1, ITK, RPL22, CDKN2C, LCP1, RB1, IRF4, FGFR1OP, MLLT4, MYB, ROS1, TNFAIP3, GOPC, CARD11, JAK2, STK11, PTEN & Categorical\\ Arm-level CNV (f42-f78) & 1q, 2p, 2q, 3p, 3q, 4p, 4q, 5p, 5q, 6p, 6q, 7p, 7q, 8p, 8q, 9p, 9q, 10p, 10q, 11p, 11q, 12p, 12q, 13q, 14q, 15q, 16p, 16q, 17p, 17q, 18p, 18q, 19p, 20p, 20q, 21q, 22q & Continuous \\ Biomarkers (f79, f80) & IDH Mutation, 1p/19q Codeletion & Binary \\ \bottomrule \end{tabularx}% } \end{table} \begin{table}[!hbt] \caption{List of clinical features. } \label{tab:tabs3} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}ll@{}} \toprule \textbf{Variable} & \textbf{Value Type} \\ \midrule Age (f1) & Continuous \\ Karnofsky Performance Score (f2) & Continuous \\ Grade (f3) & Categorical \\ Sex: Male vs. 
Female (f4) & Binary \\ Treatment: any (f5), radiation (f6), chemotherapy (f7) & Binary \\ Histological diagnosis: LGG (f8), Astrocytoma (f9), Glioblastoma (f10), Oligoastrocytoma (f11), Oligodendroglioma (f12) & Binary \\ Race/ethnicity: White vs. Non-white (f13), Hispanic vs. Non-hispanic (f14) & Binary \\ \bottomrule \end{tabular}% } \end{table} \begin{table}[] \small \centering \caption{Median C-index for fusion models at various MMO loss weightings.} \label{tab:tabs4} \begin{tabular}{@{}ccc@{}} \toprule \textbf{$\gamma$} & \textbf{ Rad + Path + Gen } & \textbf{ Rad + Path + Gen + Clin } \\ \midrule 0 & 0.764 ± 0.062 & 0.775 ± 0.061 \\ .1 & 0.768 ± 0.064 & 0.745 ± 0.068 \\ .25 & 0.777 ± 0.066 & 0.782 ± 0.066 \\ .5 & 0.788 ± 0.067 & 0.785 ± 0.077 \\ 1 & 0.779 ± 0.070 & 0.776 ± 0.075 \\ 2.5 & 0.781 ± 0.073 & 0.760 ± 0.072 \\ \bottomrule \end{tabular} \end{table} \begin{table}[] \centering \small \caption{Ablation study investigating impact of components of fusion module for best-performing modality combination (Rad + Path + Gen)} \label{tab:tabs5} \resizebox{7.25cm}{!}{% \begin{tabular}{@{}lll@{}} \toprule \textbf{ Attention Gating } & \textbf{ Combination Strategy } & \textbf{ Median C-index } \\ \midrule Yes & Tensor Fusion & 0.79 ± 0.07 \\ No & Tensor Fusion & 0.77 ± 0.08 \\ Yes & Concatenation & 0.78 ± 0.07 \\ No & Concatenation & 0.76 ± 0.07 \\ \end{tabular}% } \end{table} \begin{table}[] \begin{minipage}{\linewidth} \tiny \caption{Correlation-based deep fusion framework \cite{cheerla_deep_2019}, adapted to include radiology. } \label{tab:tabs6} \resizebox{\textwidth}{!}{% \begin{tabularx}{\textwidth}{lX} \toprule \textbf{Module} & \textbf{Description} \\ \midrule DNA & Fully connected (FC) layer. Unmodified from \cite{cheerla_deep_2019}. \\ Pathology & Squeezenet applied to histology ROIs, followed by a FC layer. 
While \cite{cheerla_deep_2019} trained from scratch, we found better results when using pretrained ImageNet weights and freezing the convolutional layers \\ Radiology & Not included in \cite{cheerla_deep_2019}. We applied pre-trained, frozen squeezenet to Gd-T1w and T2w-FLAIR ROIs. FC layers were applied to CNN-extracted features and radiomics features, yielding 128 features from each. These were concatenated and processed by another FC layer \\ Fusion & The output of each unimodal arm is a feature vector of length 256, which were averaged together. The averaged feature representation is then processed by a 10-layer highway network. Unmodified from \cite{cheerla_deep_2019} \\ Loss & Combination of Cox proportional hazard-based loss for prognosis prediction and similarity loss that enforces correlated representations between modalities. Unmodified from \cite{cheerla_deep_2019}. \\ \bottomrule \end{tabularx}% } \par \tiny *All changes from \cite{cheerla_deep_2019} made by us are noted specifically above. Any further differences from the description of \cite{cheerla_deep_2019} are due to discrepancies between the paper and its codebase.\end{minipage} \end{table} \end{document}
\section{Introduction and Background} The linguist George Kingsley Zipf made the observation that the frequency of a word is proportional to the inverse of the word's rank in a text. If the most common word occurs at frequency $n$, then the second most common word occurs at frequency $n/2$, the word with rank three at frequency $n/3$, etc. Generalized, Zipf's law \cite{zipf1949} states: \begin{equation} f \propto \frac{1}{r^\alpha} \label{zipf} \end{equation} where $r$ is the word rank and $f$ the frequency in the text, and $\alpha$ is the scaling coefficient generally found to be near 1.0 for many of the texts examined \cite{ferrer_pnas, ferrer_2013, alday_2016, moreno_2016}. Ferrer i Cancho's research group formalized the least-effort principle as it applies to Zipf's law \cite{ferrer2003, ferrer_pnas, ferrer2007, ferrer2010} by employing a mutation-driven genetic algorithm. Here the listener and speaker have different and conflicting interests. The listener seeks to gain as much information as possible from a communicative exchange, and would benefit if there were no ambiguity between word-object mappings. This is the case in which the correlation between words and objects is highest; in information theory \cite{shannon}, this corresponds to a high mutual information, or $I(S,R)$ where $S$ represents the symbol and $R$ the referent or object. The speaker on the other hand looks to minimize her effort in communicating and would benefit from fewer words to choose from, assuming that the choice of words comes with an effort; in information theory, this is quantified using information entropy or $H(S)$. 
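The rank-frequency relation in Eq.\,\eqref{zipf} can be checked on any tokenized text by fitting a line in log-log space. Below is a minimal sketch on a toy word list (the corpus and the plain least-squares fit are our own illustration; careful estimates of $\alpha$ typically use maximum likelihood):

```python
from collections import Counter
import math

# Toy corpus; any tokenized text would do.
text = ("the quick brown fox jumps over the lazy dog the fox the dog "
        "the the quick fox").split()
freqs = sorted(Counter(text).values(), reverse=True)

# Least-squares slope of log f vs. log r; Zipf's law predicts slope -alpha.
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
alpha = -slope
print(round(alpha, 2))   # close to 1 even for this tiny sample
```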
To this end, Ferrer i Cancho \cite{ferrer_pnas} introduced an energy function based on information theory that models the speaker's and listener's interests: \begin{equation} \Omega(\lambda) = \lambda I(S,R) - (1-\lambda)H(S) \label{energy} \end{equation} where $\lambda$ ($0 < \lambda < 1$) controls the balance between the speaker interests, $H(S)$, and listener interests, $I(S,R)$. It is found \cite{ferrer2003, ferrer_pnas, ferrer2007} that natural languages emerge at the phase transition (Fig.\,\ref{ferrer}) near $\lambda^{*} \approx 0.5$ (i.e., when listener and speaker interests are weighted about equally). For $\lambda < \lambda^{*}$, there is little or no communication because there are few words in the lexicon $\langle L\rangle$ (Fig.\,\ref{ferrer}B) while, it is assumed, the number of objects remains constant, which produces tremendous ambiguity in word-meaning mappings -- one or a few words point to all the objects (i.e., low $I(S,R)$; Fig.\,\ref{ferrer}A). For $\lambda > \lambda^{*}$, there is extremely efficient communication involving single word-single object mappings (i.e., high $I(S,R)$) -- though this comes at a high cost for the speaker (i.e., high $H(S)$) because the lexicon abruptly rises to the number of objects. \begin{figure} \centerline{\includegraphics[width=\textwidth]{ferrer.png}} \caption{Phase transition in the mutual information $\langle I_n(S, R) \rangle$ and lexical size $\langle L\rangle$ of simulated languages as a function of the proportion of effort, i.e., bias ($\lambda$), devoted to listener interests as opposed to speaker interests. Reproduced with permission from Ferrer i Cancho and Sol\'e (2003).}\label{ferrer} \end{figure} The form of both of these phase transitions (Fig.\,\ref{ferrer}) lies somewhere between a step (Heaviside) function and a ramp function (Fig.\,\ref{afoto2}). The unit ramp function increases gradually, one unit per unit time.
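The energy function \eqref{energy} can be evaluated directly for candidate signal-object mappings. The sketch below assumes, for illustration, that objects are equiprobable, that a signal is drawn uniformly among those linked to an object, and that entropies are measured in bits; these conventions are our own choices, not prescribed by Eq.\,\eqref{energy}:

```python
import numpy as np

def omega(A, lam):
    """Energy Omega(lambda) = lam*I(S,R) - (1-lam)*H(S) for a binary
    signal-object matrix A (A[i, j] = 1 if signal i refers to object j).
    Assumes equiprobable objects and uniform signal choice per object."""
    n_objects = A.shape[1]
    cols = A.sum(axis=0)
    P = A / np.where(cols > 0, cols, 1) / n_objects   # joint p(s_i, r_j)
    ps, pr = P.sum(axis=1), P.sum(axis=0)             # marginals

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    I = H(ps) + H(pr) - H(P.ravel())                  # mutual information I(S,R)
    return lam * I - (1 - lam) * H(ps)

one_to_one = np.eye(4)                                # unambiguous lexicon
single_signal = np.zeros((4, 4))
single_signal[0, :] = 1                               # one word for everything

# When listener interests dominate (lam > 0.5), the one-to-one lexicon scores
# higher; when speaker interests dominate, the single-signal system does.
print(omega(one_to_one, 0.7), omega(single_signal, 0.7))   # ~0.8 and ~0
print(omega(one_to_one, 0.3), omega(single_signal, 0.3))   # ~-0.8 and ~0
```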
The abrupt switching between states [$x<0$, $f(x)=0$; $x>0$, $f(x)=1$] is typical of electrical circuits \cite{spiegel} and neural systems \cite{mcculloch}. Indeed, prior studies performed analytical derivations of global minima from equation \eqref{energy} to prove that this theoretical phase transition is well modeled by a step function \cite{ferrer2007, prokopenko}. These studies demonstrated that the domain $\lambda < \lambda^{*}$ is characterized by single-signal systems (i.e., one signal refers to all objects), the domain $\lambda = \lambda^{*}$ is characterized by non-synonymous systems (i.e., no two signals refer to the same object, although one signal may refer to multiple objects), and the domain $\lambda > \lambda^{*}$ is characterized by one-to-one mappings between signals and objects. In mathematics, a transform is a method used to convert an equation in one variable to an equation in a different variable \cite{korner}. Integrals are a common type of transform and have the generalized form: \begin{equation} T[f(x)] = F(z) = \int_a^b \! f(x)g(x,z) \, \mathrm{d}x \label{transform} \end{equation} where $f(x)$ is the function being transformed, $T$ is the generalized mathematical transform, and $g(x,z)$ is the kernel of the transform. When the definite integral is evaluated, the variable $x$ drops out of the equation and one is left with a function purely of $z$. For example, in a Laplace transform \cite{spiegel}, the kernel is the negative exponential $e^{-xz}$, which serves as a damping function. In the special case that $f(x)$ is the unit step function (Fig.\,\ref{afoto2}A), the Laplace transform simply yields $1/z$. For example, in electrical engineering, the Laplace transform is often used to map the behavior of functions in the time domain, $f(t)$, to the frequency domain, $F(z)$. 
\begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=\textwidth]{step.pdf}} \caption{(A) The unit step (Heaviside function) with phase transition at $\lambda=0.41$. (B) The unit ramp function on domain $[0,1]$.}\label{afoto2} \end{center} \end{figure*} \section{Results} We propose a new integral transform called the Slavi transform, $\mathcal{S}$, to map communicative bias functions to corresponding word frequencies. Consider the function to transform as $N(\lambda)$: the lexical size $\langle L\rangle$ of a language (i.e., the number of words in the language that are connected and have non-zero probability) as a function of the bias, $\lambda$, imparted to the listener over the speaker (Fig.\,\ref{ferrer}B). Because the lexicon size and word-meaning mappings abruptly change at the phase transition near $\lambda^{*}$ (Fig.\,\ref{ferrer}A,B), we can substitute the unit step function (Fig.\,\ref{afoto2}A) for $N(\lambda)$: \begin{equation} \begin{split} \mathcal{S}[N(\lambda)] = \int_0^1 \! N(\lambda)e^{-\lambda r} \, \mathrm{d}\lambda =\int_0^x \! N(\lambda)e^{-\lambda r} \, \mathrm{d}\lambda + \int_x^1 \! N(\lambda)e^{-\lambda r} \, \mathrm{d}\lambda = \\ \int_0^x \! (0)e^{-\lambda r} \, \mathrm{d}\lambda + \int_x^1 \! (1)e^{-\lambda r} \, \mathrm{d}\lambda = \frac{1}{r}(e^{-xr}-e^{-r}) = N(r, x) \end{split} \label{slavi_step} \end{equation} \noindent where $x \in [0,1]$ represents the phase transition near $\lambda^{*}$. Kernels of integral transforms of this form are called Slavi kernels, $e^{-\lambda r}$. Since $e^{-xr}-e^{-r} < 1$ for all $x \in [0,1)$, it follows that: \begin{equation} N(r, x) < \frac{1}{r} \end{equation} We will see the importance of this result in Equation \ref{inequality}.
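The closed form above is easy to verify numerically. The sketch below (our own check) integrates the Slavi kernel over $[x,1]$ with a midpoint rule and compares the result with $\frac{1}{r}(e^{-xr}-e^{-r})$, also confirming the bound $N(r,x) < 1/r$:

```python
import math

def slavi_step(r, x, steps=20000):
    """Midpoint-rule integral of exp(-lam*r) over [x, 1], i.e. the Slavi
    transform of the unit step function with phase transition at x."""
    h = (1.0 - x) / steps
    return sum(math.exp(-(x + (k + 0.5) * h) * r) for k in range(steps)) * h

r, x = 5.0, 0.41
closed_form = (math.exp(-x * r) - math.exp(-r)) / r
print(abs(slavi_step(r, x) - closed_form) < 1e-8)   # True
print(closed_form < 1 / r)                          # True: N(r, x) < 1/r
```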
For now, we emphasize four key points to give some intuition behind the utility of the proposed kernel ($e^{-\lambda r}$) used during the mapping: \begin{itemize} \item The lexical size of a language has been transformed from a function of bias ($\lambda$) to a function of rank ($r$). \item $\lambda > \lambda^{*}$ (one-to-one mappings between signals and objects) corresponds to high $r$ (rare, unique signals specific to one object). $\lambda < \lambda^{*}$ (single-signal systems where one signal refers to all objects) corresponds to low $r$ (frequent, repetitive words referring to multiple objects). \item The y-axis is preserved under the transformation: it is still the number of words in the language (i.e., frequency). \item The product $-\lambda r$ in the kernel is dimensionless, as dimensional analysis requires: $\lambda$ is a dimensionless constant in the range $[0,1]$ (Fig.\,\ref{ferrer}) and $r$ is a rank ($r \in \mathbb{N}$) corresponding to a specific word (i.e., signal) in the lexicon. \end{itemize} Investigating the other boundary (Fig.\,\ref{afoto2}B) by substituting the unit ramp function for $N(\lambda)$ and performing the Slavi transform yields a result that approaches $1/r^2$ as $r \to \infty$, a hallmark of complex languages possessing many words (where high $r$ corresponds to rare words in the lexicon): \begin{equation} \begin{split} \mathcal{S}[N(\lambda)] = \int_0^1 \! N(\lambda)e^{-\lambda r} \, \mathrm{d}\lambda =\int_0^1 \! \lambda e^{-\lambda r} \, \mathrm{d}\lambda = \frac{1}{r^2}\left(1-(1+r)e^{-r}\right) = N(r) \end{split} \label{transform2} \end{equation} Thus, depending on how abrupt the phase transition is, one should expect most words in a complex language to scale within the range: \begin{equation} \frac{1}{r^2} \leq N(r) \leq \frac{1}{r} \label{inequality} \end{equation} or, in terms of the Zipfian exponent, $1 \leq \alpha \leq 2$, which is typically found to be the case \cite{ferrer_last, moreno_2016}.
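Likewise, the ramp-function transform can be verified numerically (our own sketch; integrating $\int_0^1 \lambda e^{-\lambda r}\,\mathrm{d}\lambda$ by parts gives $(1-(1+r)e^{-r})/r^2$, which tends to $1/r^2$ as $r \to \infty$):

```python
import math

def slavi_ramp(r, n=100_000):
    """Slavi transform of the unit ramp on [0, 1]: integral of
    lam * exp(-lam * r), via a midpoint Riemann sum."""
    h = 1.0 / n
    return h * sum((i + 0.5) * h * math.exp(-(i + 0.5) * h * r) for i in range(n))

for r in (1, 5, 20, 50):
    closed = (1.0 - (1.0 + r) * math.exp(-r)) / r**2
    assert abs(slavi_ramp(r) - closed) < 1e-6
    assert closed <= 1.0 / r             # consistent with the 1/r upper bound

# for large r the exponential correction vanishes and N(r) ~ 1/r^2
assert abs(slavi_ramp(50) - 1.0 / 50**2) < 1e-6
```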
Taken together, there is a connection between the rank of the $r$\textsuperscript{th} word and its frequency in the lexicon, $N(r, x)$, provided the language is organizing around a phase transition in mutual information and lexicon size. \section{Simulation} We further demonstrate the importance of the Slavi transform mapping the lexical size of a language, originally a function of bias, to a function of rank by introducing a reinforcement learning simulation. Our goal for this simulation is to introduce a realistic scenario showing that word rank is more impactful than bias for tasks involving communication performance. Higher word rank corresponds to rarer, more unique words, while lower word rank corresponds to more frequent words and synonyms. To motivate the simulation, consider the following scenario: you are walking with your friend in a very busy park. There are numerous objects that you and your friend see, but a brown colored dog captures your attention and you want to communicate this to your friend verbally. In this case, you are the ``speaker'' and your friend is the ``listener''. You, the speaker, generate a ``label'', such as ``brown dog'', to direct the listener's attention to the dog in question. ``Brown dog'' is a very specific label, so it is easy for the listener to associate it with the specific dog the speaker intends to communicate. The phrase ``brown dog'' is reserved for a dog whose color is brown, which means that the phrase is mapped to the brown colored dog or similar objects in the environment. In this scenario, the speaker is exerting more effort than the listener, who can easily map ``brown dog'' to the brown colored dog. If the speaker instead just says ``brown thing'', the listener must exert more effort to figure out that the speaker is talking about the dog (and, therefore, to locate it in the environment).
This would mean that the listener is exerting more effort than the speaker. Recall from Eq.\,\ref{energy} that this effort measurement is represented as bias in Ferrer i Cancho's energy function. Bias (i.e., $\lambda$) controls the balance between the speaker's interests, $H(S)$, and the listener's interests, $I(S,R)$. It can be viewed as a learning rate, which will help with the formulation of the training model later on. It is also important to note that even if the listener initially has no idea that ``brown dog'' means the brown colored dog, through repetition it would learn this association in order to communicate effectively with the speaker. This can be viewed as natural language emergence, where labels are chosen (or ``generated'') and repeated until the listener and speaker reach a consensus. Our main goal is to simulate this natural language emergence and compare the effects of bias and word rank on the ability of a speaker and a listener to communicate efficiently, ultimately testing the hypothesis that the Slavi transform (i.e., transforming bias to rank) is useful for practical computer vision tasks such as image classification. \section{Data and source code} We chose a set of 10 unique image classes, provided by the CIFAR10 dataset. The image classes are ``airplane'', ``automobile'', ``bird'', ``cat'', ``deer'', ``dog'', ``frog'', ``horse'', ``ship'', and ``truck''. Source code accompanying this simulation can be found here: \href{https://github.com/Quiltomics/NLERL}{https://github.com/Quiltomics/NLERL} \section{Model Structure} We model the scenario introduced above as a two-player game between two reinforcement learning agents, a speaker and a listener, where images are the objects to be communicated. The speaker and listener start off essentially independent, unable to communicate successfully. Through a training process they create a language, or a mapping between objects and one-hot encoded labels, to communicate effectively.
We adapted the model introduced by Lazaridou, Peysakhovich, and Baroni \cite{multi_agent}. The game between the speaker, parameterized by $\theta_{s}$, and the listener, parameterized by $\theta_l$, is as follows: \begin{enumerate} \item A sample image from each of the $n$ unique classes of an image dataset is drawn and passed through a pretrained VGG19 network \cite{vgg19}, the outputs represented by image vectors $\left\{i_0,...,i_{n-1}\right\}$. One of the vectors is chosen to be the ``target image'', represented as $i_t \in \left\{i_0,...,i_{n-1}\right\}$, where $t\in\left\{0,...,n-1\right\}$. \item The speaker takes as input the target image $i_t$ and generates a label from a vocabulary of size $m$, where $m > 1$. The label is represented as a one-hot encoded vector of size $m$. This action is the speaker's policy, $\pi_{\theta_{s}}(i_t,m)$. \item The listener takes in each image vector, $\left\{i_0,...,i_{n-1}\right\}$, and the action label, $\pi_{\theta_{s}}(i_t,m)$, generated by the speaker. It tries to guess which image the speaker saw by matching the label generated by the speaker to the correct target image $i_t$. This guess is the listener's policy, $\pi_{\theta_{l}}(\left\{i_0,...,i_{n-1}\right\}, \pi_{\theta_{s}}(i_t,m))$. \item If the listener guesses the target correctly, i.e., $\pi_{\theta_{l}}(\left\{i_0,...,i_{n-1}\right\}, \pi_{\theta_{s}}(i_t,m)) = i_t$, then both the speaker and listener receive a reward of 1. If the listener guesses incorrectly, the speaker and listener receive a reward of 0. \item Update the parameters $\theta_{s}$ and $\theta_{l}$. \end{enumerate} Over time, the listener and the speaker develop a mapping to communicate the target images.
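A single round of this game can be sketched as follows (our own toy version: random feature vectors stand in for VGG19 embeddings, and the placeholder agents act at chance level, so the expected reward is $1/n$):

```python
import random

def play_round(images, speaker_policy, listener_policy, target):
    """One round: the speaker labels the target image, the listener tries
    to pick the target out of all candidate images; shared 0/1 reward."""
    label = speaker_policy(images[target])        # index into a vocabulary of size m
    guess = listener_policy(images, label)        # index of the guessed image
    return 1 if guess == target else 0

m, n = 10, 5
images = [[random.random() for _ in range(8)] for _ in range(n)]
speaker = lambda img: random.randrange(m)                  # placeholder: untrained speaker
listener = lambda imgs, lab: random.randrange(len(imgs))   # placeholder: untrained listener

rewards = [play_round(images, speaker, listener, random.randrange(n)) for _ in range(2000)]
print(sum(rewards) / len(rewards))  # near 1/5 for these chance-level agents
```

Training replaces the placeholder policies with parameterized ones updated from the shared reward.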
\section{Training and Testing} We chose the speaker and listener to be reinforcement learning agents because the way they learn to communicate effectively is similar to how humans learn: through repetition and based on whether the speaker and listener reached a consensus. The update rule we chose to optimize the speaker's and listener's parameters is based on the Monte Carlo Policy Gradient (REINFORCE) algorithm \cite{reinforce}: \begin{algorithm}[H] \caption{Monte Carlo Policy Gradient (REINFORCE) algorithm }\label{alg:reinforce} \begin{algorithmic}[0] \Procedure{REINFORCE}{} \State initialize parameters $\theta$ arbitrarily \For{\texttt{each episode $\left\{s_{0},a_{0},r_{0},...,s_{T},a_{T},r_{T}\right\} \sim \pi_{\theta}$}} \For{\texttt{$t = 0$ to $T$}} \State generate long term value $v_t$ from function $Q^{\pi}(s,a)$ \State $\theta\gets \theta + \alpha\nabla_{\theta}\log\pi_{\theta}(s_{t},a_{t})v_{t}$ \EndFor \EndFor \State \textbf{return} $\theta$ \EndProcedure \end{algorithmic} \end{algorithm} We chose the long term reward $v_t$ for our algorithm to simply be $r_t$, because trials are independent of each other. We also chose to modify the update rule by incorporating the bias, $0 < \lambda < 1$. Returning to Ferrer i Cancho's energy function (Eq.\,\ref{energy}), we can view the bias (i.e., $\lambda$) as a ``learning rate'', measuring the importance of the speaker's and listener's performances and updating the model accordingly. This lets us easily incorporate $\lambda$ into our simulation.
Since $(1-\lambda)$ scales the speaker's interests, $H(S)$, and $\lambda$ scales the listener's interests, $I(S,R)$, we can scale our normal learning rate by $(1-\lambda)$ for the speaker's update and by $\lambda$ for the listener's update, and formulate a modified Monte Carlo Policy Gradient (REINFORCE) algorithm: \begin{algorithm}[H] \caption{Modified Monte Carlo Policy Gradient (REINFORCE) algorithm } \label{alg:modreinforce} \begin{algorithmic}[0] \Procedure{Modified REINFORCE}{} \State $s_{t}$ = speaker state at time step $t$ \State $l_{t}$ = listener state at time step $t$ \State initialize speaker parameters $\theta_{s}$ and listener parameters $\theta_{l}$ arbitrarily \For{\texttt{each episode $\left\{s_{0},l_{0},\pi_{\theta_{s}}(s_{0}),\pi_{\theta_{l}}(l_{0},\pi_{\theta_{s}}(s_{0})),r_{0},...,s_{T},l_{T},\pi_{\theta_{s}}(s_{T}),\pi_{\theta_{l}}(l_{T},\pi_{\theta_{s}}(s_{T})),r_{T}\right\}$}} \For{\texttt{$t = 0$ to $T$}} \State update speaker parameters: $\theta_{s}\gets \theta_{s} + (\alpha\times(1-\lambda))\nabla_{\theta_{s}}\log\pi_{\theta_{s}}(s_{t})r_{t}$ \State update listener parameters: $\theta_{l}\gets \theta_{l} + (\alpha\times\lambda)\nabla_{\theta_{l}}\log\pi_{\theta_{l}}(l_{t},\pi_{\theta_{s}}(s_{t}))r_{t}$ \EndFor \EndFor \State \textbf{return} $\theta_{s},\theta_{l}$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Agent Architectures} Both the speaker and the listener are feedforward neural networks, implemented in Keras. The networks' weights are initialized using Glorot initialization \cite{glorot}. The speaker takes the target image as input and passes it through a pretrained VGG19 network to generate an image vector embedding. The speaker passes the embedding through its hidden layers to output a softmax probability vector of vocabulary size $m$. The speaker samples an action, or label, from the generated probability vector.
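A minimal sketch of this modified update on a toy version of the game (our own illustration: tabular softmax policies instead of the neural agents, and small placeholder sizes):

```python
import math, random

def softmax(logits):
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(theta, action, reward, lr):
    """REINFORCE on one categorical policy (logits theta):
    grad log pi(action) = onehot(action) - pi."""
    probs = softmax(theta)
    for i in range(len(theta)):
        theta[i] += lr * reward * ((1.0 if i == action else 0.0) - probs[i])

random.seed(1)
n_obj, m_vocab, alpha, lam = 3, 3, 0.5, 0.5
speaker = [[0.0] * m_vocab for _ in range(n_obj)]    # logits: object -> label
listener = [[0.0] * n_obj for _ in range(m_vocab)]   # logits: label -> object

rewards = []
for _ in range(3000):
    t = random.randrange(n_obj)                                  # target object
    a = random.choices(range(m_vocab), softmax(speaker[t]))[0]   # speaker label
    g = random.choices(range(n_obj), softmax(listener[a]))[0]    # listener guess
    r = 1.0 if g == t else 0.0
    # bias-scaled learning rates: alpha*(1-lambda) for the speaker, alpha*lambda for the listener
    reinforce_step(speaker[t], a, r, alpha * (1 - lam))
    reinforce_step(listener[a], g, r, alpha * lam)
    rewards.append(r)

print(sum(rewards[-1000:]) / 1000)  # late-stage accuracy, above the 1/3 chance level
```

Because a zero reward produces a zero gradient, only successful rounds reinforce the speaker-listener mapping, which is what drives the emergence of a shared labeling.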
The listener passes a sampled image from each class, including the target image, through a pretrained VGG19 to generate image vector embeddings. The listener takes in the speaker's generated label as an additional input. The image vector embeddings are passed through a shared hidden layer to create new embeddings. The speaker's generated label is passed through a separate hidden layer to create a label embedding. Dot products are computed between the label embedding and each new image embedding. A softmax probability vector of size $10$ (one entry per image class) is generated from these dot products. A comprehensive workflow is illustrated in Fig. 3. \begin{figure} \includegraphics[width=6cm]{speakerarchitecture.png} \includegraphics[width=11cm]{receiverarchitecture.png} \caption{(Left) Speaker model architecture: The speaker takes in a random batch of target images. After the images are passed through a pretrained VGG19 network, they are passed through a 32-neuron fully connected layer with ReLU activation to generate a target image embedding. The embedding is passed through a fully connected softmax output layer with the number of neurons equal to the vocabulary size $m$. (Right) Listener model architecture: The listener takes in two inputs -- a batch of samples of each image class, including the target image, and the softmax speaker output of size $m$. The images are passed through a pretrained VGG19 network and then one shared 32-neuron fully connected layer to create image embeddings. The softmax speaker output is passed through a separate 32-neuron fully connected layer with ReLU activation to create a label embedding.
The dot products between the label embedding and the image embeddings are computed and passed through a softmax activation.} \end{figure} \section{Simulation Results} To simulate word rank, we simply modify the vocabulary size $m$: a smaller vocabulary corresponds to more frequent labels covering multiple objects, resulting in lower word rank, while a larger vocabulary corresponds to more one-to-one mappings between labels and objects, resulting in higher word rank. This mirrors how humans interact: if there are fewer words in a speaker's vocabulary than objects, the speaker is forced to use the same word for different objects, making communication harder. We trained for 1000 episodes, with 100 samples from each image class, and used a learning rate $\alpha$ of 0.001. To reiterate, the main goal of the simulation is to test the practical utility of the Slavi transform, namely to see whether transforming bias to rank is useful for machine learning tasks. We wish to test the hypothesis that word rank is a better and more intuitive alternative to bias for effective communication. To do this, we observed the effect on the performance of our model of changing the vocabulary size $m$ while keeping the bias $\lambda$ constant, compared to the performance when changing the bias $\lambda$ while keeping the vocabulary size $m$ constant. Performance is measured as accuracy, or total reward / total number of trials, i.e., the proportion of trials in which the listener picked the right object. We then determine whether there is a stronger trend between accuracy and rank than between accuracy and bias. To test word rank, we ran the simulation with vocabulary sizes $m$ of 2, 10, 50, 100, 250, 500, 650, 800, and 1000, with a constant $\lambda$ of 0.5 (equal effort between speaker and listener). When testing for bias, we used $\lambda$ values of 0.002, 0.01, 0.05, 0.1, 0.25, 0.5, 0.65, 0.8, and 0.99, and a constant vocabulary size $m$ of 10.
The rolling averages of the 100 most recent episodes for both simulations are shown in Fig. 4. The accuracy values for rank and bias are shown in Table I and Table II, respectively. \begin{figure} \includegraphics[width=8.5cm]{rank_plot.png} \includegraphics[width=8.5cm]{bias_plot.png} \caption{(Left) Rolling average accuracy of the 100 most recent episodes, over a training period of 1000 episodes, for models with different vocabulary sizes and a constant bias ($\lambda$) of 0.5. (Right) Rolling average accuracy of the 100 most recent episodes, over a training period of 1000 episodes, for models with different bias ($\lambda$) values and a constant vocabulary size of 10.} \end{figure} \begin{table}[H] \begin{minipage}{.5\linewidth} \caption{Rolling Accuracy over 100 episodes: Word Rank} \centering \begin{tabular}[t]{|p{3.2cm}|c|} \hline {\bf Vocabulary Size}&{\bf Accuracy}\\ \hline 2&18\% \\ \hline 10&44\% \\ \hline 50&59\% \\ \hline 100&61\% \\ \hline 250&64\% \\ \hline 500&66\% \\ \hline 650&68\% \\ \hline 800&70\% \\ \hline 1000&69\% \\ \hline {\bf Linear Regression Coefficient}&{\bf 95.38} \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.5\linewidth} \caption{Rolling Accuracy over 100 episodes: Bias} \centering \begin{tabular}[t]{|p{3.2cm}|c|} \hline {\bf Bias ($\lambda$)}&{\bf Accuracy}\\ \hline 0.002&18\% \\ \hline 0.01&29\% \\ \hline 0.05&29\% \\ \hline 0.1&37\% \\ \hline 0.25&44\% \\ \hline 0.5&44\% \\ \hline 0.65&37\% \\ \hline 0.8&37\% \\ \hline 0.99&47\% \\ \hline {\bf Linear Regression Coefficient}&{\bf 59.41} \\ \hline \end{tabular} \end{minipage} \end{table} From the accuracy measurements, we can see that both bias ($\lambda$) and word rank (vocabulary size $m$) have a positive relationship with accuracy. However, it is clear that bias has a weaker relationship than word rank. For vocabulary sizes $m \leq 800$, there was a consistent increase in model performance, reaching a peak of $70\%$, and for $m > 800$ the accuracy leveled off at around $69\%$.
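As a rough quantitative check on the tabulated trends (our own sketch; correlating accuracy against $\log_{10}$ vocabulary size is an assumption on our part, and this is not the scaling behind the regression coefficients 95.38 and 59.41 reported in the tables):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vocab = [2, 10, 50, 100, 250, 500, 650, 800, 1000]           # Table I
acc_rank = [18, 44, 59, 61, 64, 66, 68, 70, 69]
bias = [0.002, 0.01, 0.05, 0.1, 0.25, 0.5, 0.65, 0.8, 0.99]  # Table II
acc_bias = [18, 29, 29, 37, 44, 44, 37, 37, 47]

r_rank = pearson([math.log10(v) for v in vocab], acc_rank)
r_bias = pearson(bias, acc_bias)
print(f"rank: {r_rank:.2f}, bias: {r_bias:.2f}")  # the rank correlation is the stronger one
```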
Bias had a weaker relationship: for $0.002 \leq \lambda \leq 0.05$ and $0.25 \leq \lambda \leq 0.5$ the performance of the model did not improve. Furthermore, for $0.65 \leq \lambda \leq 0.8$, the performance actually dipped from $44\%$ to $37\%$, before eventually rising to $47\%$. Comparing the linear regression coefficients for vocabulary size vs.\ accuracy and bias vs.\ accuracy, we see that a proportional increase in vocabulary size has about a $60\%$ larger expected increase in accuracy than the same increase in bias ($95.38 / 59.41 \approx 1.6$). Word rank is not only more intuitive to understand (i.e., vocabulary size is much easier to grasp than bias), but also has a stronger positive relationship with accuracy, implying that it has a stronger effect than bias on the communicative performance between a speaker and a listener. \section{Future directions} The peak accuracy we reached was $70\%$, so there is definite room for improvement. The model architecture can be expanded to improve performance -- one idea is to include 1D convolutional layers. In addition to model improvements, with more computational power, more image samples and training episodes can be used. It would also be interesting to substitute the CIFAR100 dataset for CIFAR10 (i.e., to have more objects to communicate). \section{Conclusions} We show that the Slavi transform maps communicative functions of speaker-listener bias directly to word rank. Specifically, we demonstrate that the lexical size of a language can be mapped from a function of bias, $N(\lambda)$, to a function of rank at any arbitrary phase transition point, $N(r, x)$. We provide a practical example in the form of a unique approach to an image classification task, where two reinforcement learning agents (a speaker and a listener) communicate images and labels with each other.
When testing the impact of bias and word rank, we observed that word rank had a much stronger positive effect on communicative performance (and accuracy) than bias. This suggests that functions of word rank are generally more useful than functions of bias for modeling communicative systems. This study highlights the importance of integral transform theory for understanding and improving information-theoretic models of communicative systems in the context of Zipfian ranks.
\section{Introduction}\label{Section introduction} In this paper we study the global properties of the period map from the Teichm\"{u}ller space of polarized and marked Calabi-Yau manifolds to the classifying space of polarized Hodge structures. Our method is based on a simple observation that there exists a natural holomorphic affine structure on the Teichm\"uller space, together with the construction of a completion space of the Teichm\"uller space carrying a compatible affine structure. Although our method works for more general cases, for simplicity we will restrict our discussion to Calabi-Yau manifolds. More precisely, a compact projective manifold $M$ of complex dimension $n$ with $n\geq 3$ is called Calabi-Yau in this paper if it has a trivial canonical bundle and satisfies $H^i(M,\mathcal{O}_M)=0$ for $0<i<n$. A polarized and marked Calabi-Yau manifold is a triple consisting of a Calabi-Yau manifold $M$, an ample line bundle $L$ over $M$, and a basis of the integral homology group modulo torsion, $H_n(M,\mathbb{Z})/Tor$. We will denote by $\mathcal{T}$ the Teichm\"{u}ller space of deformations of the complex structure on $M$. We will take $\mathcal{T}$ to be the universal cover of the smooth moduli space $\mathcal{Z}_m$ constructed by Popp, Viehweg, and Szendroi, for example in Section 2 of \cite{sz}. The versal family $\mathcal{U}\rightarrow\mathcal{T}$ of the polarized and marked Calabi-Yau manifolds is the pull-back of the versal family over $\mathcal{Z}_m$ constructed in \cite{sz}. Therefore $\mathcal{T}$ is a simply connected smooth complex manifold of complex dimension \begin{align*} \dim_{{\mathbb{C}}}{\mathcal{T}}=h^{n-1,1}(M)=N, \end{align*} where $h^{n-1,1}(M)=\dim_{\mathbb{C}} H^{n-1,1}(M)$ with $H^{n-1,1}(M)$ the $(n-1,1)$-Dolbeault cohomology group of $M$. See Section \ref{section construction of Tei} for details. Let $D$ be the classifying space of polarized Hodge structures of the weight $n$ primitive cohomology of $M$.
The period map $\Phi:\,\mathcal{T}\rightarrow D$ assigns to each point in $\mathcal{T}$ the corresponding Hodge structure of the fiber. The main result of this paper is the proof of the following global Torelli theorem: \begin{thm} \label{toerllimainintro} The period map $\Phi:\, \mathcal{T}\rightarrow D$ is injective. \end{thm} The main idea of the paper is the construction of a holomorphic affine structure on the Teichm\"{u}ller space $\mathcal{T}$ and of the completion space $\mathcal{T}^H$. A holomorphic affine structure on a complex manifold is a holomorphic coordinate cover with affine transition maps. By using the local Kuranishi deformation theory of Calabi-Yau manifolds, we introduce local holomorphic affine flat coordinate charts and define a holomorphic affine structure on $\mathcal{T}$ by verifying that the transition maps between any two holomorphic affine flat coordinate charts are holomorphic affine maps. The computation is based on the construction in \cite{tod1} of the local canonical family of holomorphic $(n, 0)$-forms on the local Kuranishi family of Calabi-Yau manifolds, which also follows from the local Torelli theorem for Calabi-Yau manifolds and the Griffiths transversality for variations of Hodge structures. We call the coordinate cover of $\mathcal{T}$ given by these local holomorphic affine flat coordinate charts the Kuranishi coordinate cover of the Teichm\"uller space. We then introduce the completion space $\mathcal{T}^H$ of the Teichm\"uller space constructed in \cite{CGL}. We mainly show that the completion space $\mathcal{T}^H$ has an affine structure compatible with the one on $\mathcal{T}$, and that it is complete with respect to the induced Hodge metric. One may refer to Section 4 in \cite{CGL} for the details of the construction of $\mathcal{T}^H$.
In \cite{CGL} it is also proved that the period map $\Phi$ extends to a holomorphic map $\Phi^H:\, {\mathcal{T}}^H\longrightarrow N_+$, where $N_+$ is identified with its unipotent orbit in $\check{D}$ by fixing a base point in $D$. Then we define $\Psi^H$ from $\mathcal{T}$ to a complex linear space as the composition of $\Phi^H$ and a projection map. Based on the completeness of $\mathcal{T}^H$, the property that the composition map $\Psi^H$ is a holomorphic affine embedding, and the local Torelli theorem for the period map $\Phi:\, \mathcal{T}\rightarrow D$, we prove the global Torelli theorem for the Teichm\"uller space of polarized and marked Calabi-Yau manifolds. This paper is organized as follows. In Section \ref{Section period map} we review the definition of the classifying space of polarized Hodge structures and briefly describe the construction of the Teichm\"{u}ller space of polarized and marked Calabi-Yau manifolds and its basic properties. In Section \ref{Section local property} we review the geometry of the local deformation theory of complex structures, especially in the case of Calabi-Yau manifolds. Since the local deformation of a Calabi-Yau manifold is local Kuranishi at each point of the Teichm\"uller space, we can construct a canonical local holomorphic affine flat coordinate chart around each point based on the local Kuranishi deformation theory. In Section \ref{Section flat structure} we review the concept of affine manifolds and construct a holomorphic affine structure on the Teichm\"{u}ller space. In Section \ref{THm}, we introduce the Hodge metric completion space of the Teichm\"uller space with level structure that we constructed in \cite{CGL}. We show that the Hodge metric completion spaces with level structure, $\mathcal{T}^H_{_m}$, are holomorphic affine manifolds.
In Section \ref{MainProof}, we show that $\mathcal{T}^H_{_m}$ is independent of the level $m$ structure, and for simplicity we denote $\mathcal{T}^H_{_m}$ by $\mathcal{T}^H$, which contains the Teichm\"uller space as an open dense submanifold. One may refer to \cite{CGL} for more details about $\mathcal{T}^H$. Based on the completeness and holomorphic affineness of $\mathcal{T}^H$ and the local Torelli theorem for the period map $\Phi: \mathcal{T}\rightarrow D$, we prove that $\Psi^H$ is an embedding, which implies the global Torelli theorem for the Teichm\"uller space of polarized and marked Calabi-Yau manifolds. In fact, $\Psi^H$ induces global holomorphic affine coordinates on $\mathcal{T}$ and $\mathcal{T}^H$. In Appendix \ref{appi} we include some facts about the structures of the period domain $D$ which are known to experts but important to our discussion. We mainly follow Section 3 in \cite{schmid1} to interpret the classifying space $D$, its compact dual $\check{D}$, the tangent space of $D$, and the tangent space of $\check{D}$ as quotients of Lie groups and Lie algebras respectively in a general way. Moreover, we describe the relation between the period domain and the unipotent Lie group $N_+=\exp(\mathfrak{n}_+)$, the Lie group corresponding to the nilpotent Lie algebra $\mathfrak{n}_+$. We also describe the matrix representation of elements of these Lie groups and Lie algebras once we fix a base point in $D$ and an adapted basis for the Hodge decomposition at the base point. Now we briefly recall the history of the Torelli problem. The idea of studying the periods of abelian varieties on Riemann surfaces goes back to Riemann. In 1914 Torelli asked whether two curves are isomorphic if they have the same periods. See \cite{Tor} for details.
In \cite{aw1} Weil reformulated the Torelli problem as follows: given two Riemann surfaces and an isomorphism of their Jacobians which preserves the canonical polarizations of the Jacobians, is it true that the two Riemann surfaces are isomorphic? Andreotti proved Weil's version of the Torelli problem in \cite{and}. Another important achievement on the Torelli problem, conjectured by Weil in \cite{W2}, was the proof of the global Torelli theorem for K3 surfaces, essentially due to Shafarevich and Piatetski-Shapiro in \cite{PS}. Andreotti's proof is based on specific geometric properties of Riemann surfaces. The approach of Shafarevich and Piatetski-Shapiro is based on the arithmeticity of the mapping class group of a K3 surface. It implies that the special K3 surfaces, the Kummer surfaces, form an everywhere dense subset in the moduli space of K3 surfaces. Shafarevich and Piatetski-Shapiro observed that the period map has degree one on the set of Kummer surfaces, which implies the global Torelli theorem. The literature on the Torelli problem is enormous. Many authors have made very substantial contributions to the general Torelli problem. We believe that it is impossible to give a complete list of all the achievements in this area and its applications. In \cite{ver} Verbitsky used an approach similar to ours in his proof of the global Torelli theorem for hyperK\"{a}hler manifolds. We would like to thank Professor S.-T. Yau for his constant interest, for sharing his ideas and encouragement during the work on this project. We had numerous useful conversations with X. Chen and X. Sun. We would also like to thank Professors F. Bogomolov, P. Deligne, Bart van den Dries, R. Friedman, V. Golyshev, Radu Laza, S. Li, B. Lian, E. Loojienga, Yu. Manin, Veeravalli S. Varadarajan and M. Verbitsky for their interest and valuable suggestions.
\section{The Period Map}\label{Section period map} In this section, we review the definitions and some basic results about the period domain, the Teichm\"{u}ller space and the period map. In Section \ref{section period domain}, we recall the definition and some basic properties of the period domain. In Section \ref{section construction of Tei}, we discuss the construction of the Teichm\"uller space of Calabi-Yau manifolds based on the works of Popp \cite{Popp}, Viehweg \cite{v1} and Szendroi \cite{sz} on the moduli spaces of Calabi-Yau manifolds. In Section \ref{section period map}, we define the period map from the Teichm\"{u}ller space to the period domain. We remark that most of the results in this section are standard and can be found in the literature on these subjects. \subsection{Classifying space of polarized Hodge structure}\label{section period domain} We first review the construction of the classifying space of polarized Hodge structures, which is also called the period domain. We refer the reader to Section 3 of \cite{schmid1} for details. This paper mainly considers polarized and marked Calabi-Yau manifolds. A pair $(M,L)$ consisting of a Calabi-Yau manifold $M$ of complex dimension $n$ and an ample line bundle $L$ over $M$ is called a {polarized Calabi-Yau manifold}. By abuse of notation the Chern class of $L$ will also be denoted by $L$, so that $L\in H^2(M,\mathbb{Z})$. We use $h^n=\dim_{\mathbb{C}}H^n(M,\mathbb{C})$ to denote the Betti number. Let $\{ \gamma_1,\cdots,\gamma_{h^n}\}$ be a basis of the integral homology group modulo torsion, $H_n(M,\mathbb{Z})/Tor$. \begin{definition} Let the pair $(M,L)$ be a polarized Calabi-Yau manifold; we call the triple $(M,L,\{\gamma_1,\cdots,\gamma_{h^n}\})$ a polarized and marked Calabi-Yau manifold. \end{definition} For a polarized and marked Calabi-Yau manifold $M$ with background smooth manifold $X$, we identify the basis of $H_n(M,\mathbb{Z})/Tor$ with a lattice $\Lambda$ as in \cite{sz}.
This gives us a canonical identification of the middle dimensional de Rham cohomology of $M$ with that of the background manifold $X$, \begin{equation*} H^n(M)\cong H^n(X) \end{equation*} where the coefficient ring can be ${\mathbb{Q}}$, ${\mathbb{R}}$ or ${\mathbb{C}}$. Since the polarization $L$ is an integral class, it defines a map \begin{equation*} L:\, H^n(X,{\mathbb{Q}})\to H^{n+2}(X,{\mathbb{Q}}) \end{equation*} given by $A\mapsto L\wedge A$ for any $A\in H^n(X,{\mathbb{Q}})$. We denote by $H_{pr}^n(X)=\ker(L)$ the primitive cohomology groups, where, again, the coefficient ring is ${\mathbb{Q}}$, ${\mathbb{R}}$ or ${\mathbb{C}}$. We let $H_{pr}^{k,n-k}(M)=H^{k,n-k}(M)\cap H_{pr}^n(M,{\mathbb{C}})$ and denote its dimension by $h^{k,n-k}$. We have the Hodge decomposition \begin{align} \label{cl10} H_{pr}^n(M,{\mathbb{C}})=H_{pr}^{n,0}(M)\oplus\cdots\oplus H_{pr}^{0,n}(M). \end{align} It is easy to see that for a polarized Calabi-Yau manifold, since $H^2(M, {\mathcal O}_M)=0$, we have $$H_{pr}^{n,0}(M)= H^{n,0}(M), \ H_{pr}^{n-1,1}(M)= H^{n-1,1}(M).$$ The Poincar\'e bilinear form $Q$ on $H_{pr}^n(X,{\mathbb{Q}})$ is defined by \begin{equation*} Q(u,v)=(-1)^{\frac{n(n-1)}{2}}\int_X u\wedge v \end{equation*} for any $d$-closed $n$-forms $u,v$ on $X$. The bilinear form $Q$ is symmetric if $n$ is even and skew-symmetric if $n$ is odd. Furthermore, $Q$ is non-degenerate and can be extended to $H_{pr}^n(X,{\mathbb{C}})$ bilinearly, and satisfies the Hodge-Riemann relations \begin{eqnarray} \label{cl30} Q\left ( H_{pr}^{k,n-k}(M), H_{pr}^{l,n-l}(M)\right )=0\ \ \text{unless}\ \ k+l=n, \quad\text{and}\quad \end{eqnarray} \begin{eqnarray} \label{cl40} \left (\sqrt{-1}\right )^{2k-n}Q\left ( v,\bar v\right )>0\ \ \text{for}\ \ v\in H_{pr}^{k,n-k}(M)\setminus\{0\}. \end{eqnarray} The above Hodge decomposition of $H_{pr}^n(M,{\mathbb{C}})$ can also be described via the Hodge filtration.
Let $f^k=\sum_{i=k}^n h^{i,n-i}$, and \begin{equation*} F^k=F^k(M)=H_{pr}^{n,0}(M)\oplus\cdots\oplus H_{pr}^{k,n-k}(M), \end{equation*} from which we have the decreasing filtration \begin{equation*} H_{pr}^n(M,{\mathbb{C}})=F^0\supset\cdots\supset F^n. \end{equation*} We know that \begin{eqnarray} \label{cl45} \dim_{\mathbb{C}} F^k=f^k, \end{eqnarray} \begin{eqnarray} \label{cl46} H^n_{pr}(X,{\mathbb{C}})=F^{k}\oplus \bar{F^{n-k+1}},\quad\text{and}\quad \end{eqnarray} \begin{eqnarray} \label{cl48} H_{pr}^{k,n-k}(M)=F^k\cap\bar{F^{n-k}}. \end{eqnarray} In terms of the Hodge filtration $F^n\subset\cdots\subset F^0=H_{pr}^n(M,{\mathbb{C}})$, the Hodge-Riemann relations can be written as \begin{eqnarray} \label{cl50} Q\left ( F^k,F^{n-k+1}\right )=0, \quad\text{and}\quad \end{eqnarray} \begin{eqnarray} \label{cl60} Q\left ( Cv,\bar v\right )>0 \ \ \text{if}\ \ v\ne 0, \end{eqnarray} where $C$ is the Weil operator given by $Cv=\left (\sqrt{-1}\right )^{2k-n}v$ for $v\in H_{pr}^{k,n-k}(M)$. The classifying space $D$ for polarized Hodge structures with data \eqref{cl45} is the space of all such Hodge filtrations \begin{equation*} D=\left \{ F^n\subset\cdots\subset F^0=H_{pr}^n(X,{\mathbb{C}})\mid \eqref{cl45}, \eqref{cl50} \text{ and } \eqref{cl60} \text{ hold} \right \}. \end{equation*} The compact dual $\check D$ of $D$ is \begin{equation*} \check D=\left \{ F^n\subset\cdots\subset F^0=H_{pr}^n(X,{\mathbb{C}})\mid \eqref{cl45} \text{ and } \eqref{cl50} \text{ hold} \right \}. \end{equation*} The classifying space $D\subset \check D$ is an open subset. We note that the conditions \eqref{cl45}, \eqref{cl50} and \eqref{cl60} imply the identity \eqref{cl46}. From the definition of the classifying space we naturally get the {Hodge bundles} on $\check{D}$ by associating to each point in $\check{D}$ the vector spaces $\{F^k\}_{k=0}^n$ in the Hodge filtration of that point. Without confusion we will also denote by $F^k$ the bundle with $F^k$ as fiber, for each $0\leq k\leq n$.
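To fix ideas, we include a brief standard example (our own illustration, not used in the sequel): in the weight-one case $n=1$ with $h^{1,0}=h^{0,1}=1$, corresponding to elliptic curves, the period domain can be described explicitly.

```latex
% Weight one, h^{1,0} = h^{0,1} = 1 (elliptic curves). Here Q(u,v) = \int_X u \wedge v
% is skew-symmetric; fix a basis e_1, e_2 of H^1(X,\mathbb{C}) with Q(e_1,e_2) = 1.
% A filtration F^1 \subset F^0 = H^1(X,\mathbb{C}) with \dim_{\mathbb{C}} F^1 = 1 is a line,
% and the first bilinear relation Q(F^1,F^1) = 0 is automatic by skew-symmetry, so
% \check{D} \cong \mathbb{P}^1. Writing F^1 = \mathbb{C}(e_1 + \tau e_2), the positivity
% condition Q(Cv,\bar v) > 0, with Cv = \sqrt{-1}\, v for v \in H^{1,0}, becomes
\sqrt{-1}\,Q(v,\bar v)
  =\sqrt{-1}\left(\bar\tau\,Q(e_1,e_2)+\tau\,Q(e_2,e_1)\right)
  =2\,\operatorname{Im}\tau>0,
% so D \cong \mathbb{H}, the upper half plane, an open subset of \check{D} \cong \mathbb{P}^1.
```

This matches the classical description of periods of elliptic curves via the upper half plane.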
\begin{remark} One may refer to Appendix \ref{appi} for more detailed descriptions of the classifying space $D$, its compact dual $\check{D}$, and the tangent space at each point of them, by viewing them as quotients of Lie groups and Lie algebras. \end{remark} \subsection{Construction of the Teichm\"uller space}\label{section construction of Tei} In this subsection we briefly describe the construction of the Teichm\"uller space of polarized and marked Calabi-Yau manifolds and discuss its basic properties. We need the concept of the Kuranishi family of compact complex manifolds; we refer to pages~8--10 in \cite{su}, page~94 in \cite{Popp} or page~19 in \cite{v1} for equivalent definitions and more details. If a complex analytic family $\pi:\,\mathcal{V}\rightarrow \mathcal{M}$ of compact complex manifolds is complete at each point of $\mathcal{M}$ and versal at the point $0\in\mathcal{M}$, then the family $\pi:\,\mathcal{V}\rightarrow\mathcal{M}$ is called the Kuranishi family of the complex manifold $V=\pi^{-1}(0)$. The base space $\mathcal{M}$ is called the Kuranishi space. If the family is complete at each point of a neighbourhood of $0\in\mathcal{M}$ and versal at $0$, then the family is called a local Kuranishi family at $0\in\mathcal{M}$. In particular, by definition, if a family is versal at each point of $\mathcal{M}$, then it is local Kuranishi at every point of $\mathcal{M}$. Let $(M,L)$ be a polarized Calabi-Yau manifold. We call a basis of the quotient space $(H_n(M,\mathbb{Z})/Tor)/m(H_n(M,\mathbb{Z})/Tor)$ a level $m$ structure on the polarized Calabi-Yau manifold. For deformations of polarized Calabi-Yau manifolds with level $m$ structure, we have the following theorem, which is a reformulation of Theorem~2.2 in \cite{sz}; we only take the statement we need in this paper. One can also look at \cite{Popp} and \cite{v1} for more details about the construction of moduli spaces of Calabi-Yau manifolds.
\begin{thm}\label{Szendroi theorem 2.2} Let $m\geq 3$ and let $M$ be a polarized Calabi-Yau manifold with level $m$ structure. Then there exists a quasi-projective complex manifold $Z_m$ with a versal family of Calabi-Yau manifolds, \begin{align}\label{szendroi versal family} \mathcal{X}_{Z_m}\rightarrow Z_m, \end{align} containing $M$ as a fiber, and polarized by an ample line bundle $\mathcal{L}_{Z_m}$ on $\mathcal{X}_{Z_m}$. \end{thm} Let $(M,L)$ be a polarized Calabi-Yau manifold. We define $\mathcal{T}_L(M)$ to be the universal cover of $Z_m$, \begin{align*} \pi_m:\, \mathcal{T}_L(M)\rightarrow Z_m, \end{align*} and the family \begin{align*} \mathcal {U}\rightarrow \mathcal{T}_L(M) \end{align*} to be the pull-back of the family (\ref{szendroi versal family}) by the projection $\pi_m$. For simplicity we will denote $\mathcal{T}_L(M)$ by ${\mathcal T}$. \begin{proposition}\label{imp} The Teichm\"uller space $\mathcal{T}$ is a simply connected smooth complex manifold, and the family \begin{align}\label{versal family over Teich} \mathcal {U}\rightarrow\mathcal{T} \end{align} containing $M$ as a fiber, is local Kuranishi at each point of $\mathcal{T}$. \end{proposition} \begin{proof} For the first part, because $Z_m$ is a smooth complex manifold, its universal cover is a simply connected smooth complex manifold. For the second part, we know that the family (\ref{szendroi versal family}) is a versal family at each point of $Z_m$, and $\pi_m$ is locally bi-holomorphic; thus the pull-back family via $\pi_m$ is also versal at each point of $\mathcal{T}$. By the definition of local Kuranishi family, we get that $\mathcal {U}\rightarrow \mathcal{T}$ is local Kuranishi at each point of $\mathcal{T}$. \end{proof} Note that the Teichm\"uller space $\mathcal{T}$ does not depend on the choice of $m$.
In fact, let $m_1$, $m_2$ be two different integers, and let $\mathcal {U}_1\rightarrow\mathcal{T}_1$ and $\mathcal {U}_2\rightarrow\mathcal{T}_2$ be the two versal families constructed via level $m_1$ and level $m_2$ structures respectively as above, both of which contain $M$ as a fiber. By using the fact that $\mathcal{T}_1$ and $\mathcal{T}_2$ are simply connected and the definition of versal family, we have a bi-holomorphic map $f:\,\mathcal{T}_1\rightarrow\mathcal{T}_2$, such that the versal family $\mathcal {U}_1\rightarrow\mathcal{T}_1$ is the pull-back of the versal family $\mathcal {U}_2\rightarrow \mathcal{T}_2$ by $f$. Thus these two families are isomorphic to each other. We remark that our method of proving the global Torelli theorem for polarized and marked Calabi-Yau manifolds applies without change to the Teichm\"{u}ller spaces of more general projective manifolds with trivial canonical line bundle, including polarized and marked hyperk\"{a}hler manifolds and K3 surfaces. In these cases the Teichm\"{u}ller spaces sit inside symmetric domains of non-compact type, so they naturally have global holomorphic affine flat coordinates. \subsection{The period map}\label{section period map} We are now ready to define the period map from the Teichm\"uller space to the period domain. For any point $p\in\mathcal{T}$, let $M_p$ be the fiber over $p$ of the family $\pi:\, \mathcal {U}\rightarrow \mathcal{T}$, which is a polarized and marked Calabi-Yau manifold. Since the Teichm\"{u}ller space is simply connected and we have fixed a basis of the middle homology group modulo torsion, we can use this to identify $H^n(M_p,\mathbb{C})$ for all fibers over $\mathcal{T}$, and thus get a canonical trivial bundle $H^n(M_p,\mathbb{C})\times \mathcal{T}$. We have similar identifications for $H^n(M_p,\mathbb{Q})$ and $H^n(M_p,\mathbb{Z})$.
The period map from $\mathcal{T}$ to $D$ is defined by assigning to each point $p\in\mathcal{T}$ the Hodge structure on $M_p$, \begin{align*} \Phi:\, \mathcal{T}\rightarrow D \end{align*} with $\Phi(p)=\{F^n(M_p)\subset\cdots\subset F^0(M_p)\}$. We denote $F^k(M_p)$ by $F^k_p$ for convenience. The period map has several good properties; we refer the reader to Chapter~10 in \cite{Voisin} for details. Among them, the most important is the following Griffiths transversality: the period map $\Phi$ is holomorphic, and for any $p\in\mathcal{T}$ and $v\in T_p^{1,0}\mathcal{T}$, the tangent map satisfies \begin{equation}\label{griffiths transversality quotient version} \Phi_*(v)\in \bigoplus_{k=1}^{n} \text{Hom}\left(F^k_p/F^{k+1}_p,F^{k-1}_p/F^{k}_p\right), \end{equation} where $F^{n+1}=0$, or equivalently \begin{equation*} \Phi_*(v)\in \bigoplus_{k=0}^{n} \text{Hom} (F^k_p,F^{k-1}_p). \end{equation*} In \cite{GS}, Griffiths and Schmid studied the so-called {Hodge metric} on the period domain $D$. We denote it by $h$. In particular, this Hodge metric is a complete homogeneous metric. Let us denote by $\Phi_{Z_m}: \,Z_{m}\rightarrow D/\Gamma$ the period map on the moduli space, where $\Gamma$ denotes the global monodromy group, which acts properly and discontinuously on the period domain $D$. Denoting by $\pi_m:\,\mathcal{T}\rightarrow Z_m$ the universal covering map, $\Phi: \,\mathcal{T}\to D$ is a lifting of $\Phi_{Z_m}\circ \pi_m$. By the local Torelli theorem for Calabi-Yau manifolds, we know that $\Phi_{Z_m}$ is locally injective, and hence $\Phi$ is also locally injective. Thus it follows from \cite{GS} that the pull-backs of $h$ by $\Phi_{Z_m}$ and $\Phi$ are well-defined K\"ahler metrics on $Z_m$ and $\mathcal{T}$ respectively. By abuse of notation, we still call these pull-back metrics the \textit{Hodge metrics}, and they are both denoted by $h$.
The Hodge bundles over $\mathcal{T}$ are the pull-backs of the Hodge bundles over $D$ through the period map. For convenience, we still denote them by $F^k$, for each $0\leq k\leq n$. We will also denote by $P_p^{k}$ the projection from $H^n(M,\mathbb{C})$ to $F^k_p$ with respect to the Hodge filtration at $M_p$, and by $P_p^{n-k,k}$ the projection from $H^n(M,\mathbb{C})$ to $H^{n-k,k}(M_p)$ with respect to the Hodge decomposition at $M_p$. With all of these preparations, we are ready to state precisely the main theorem of this paper. \begin{thm} The period map $\Phi$ constructed above is an embedding, \begin{align*} \Phi: \, \mathcal{T} \hookrightarrow D. \end{align*} \end{thm} \section{Local Geometric Structure of the Teichm\"{u}ller Space}\label{Section local property} In this section, we review the local deformation theory of polarized Calabi-Yau manifolds, which will be needed for the construction of the global holomorphic affine structure on the Teichm\"{u}ller space in Section \ref{Section flat structure}. In Section \ref{section local deformation of complex structure}, we briefly review the basic local deformation theory of complex structures. In Section \ref{section local deformation of cy mfds}, we recall the local Kuranishi deformation theory of Calabi-Yau manifolds, which depends on the Calabi-Yau metric in a substantial way. In Section \ref{section local canonical section}, we describe a local family of canonical holomorphic $(n,0)$-forms as a section of the Hodge bundle $F^n$ over the local deformation space of Calabi-Yau manifolds, from which we obtain an expansion of the family of holomorphic $(n,0)$-classes as given in Theorem \ref{expcoh}. This simple expansion is what we need for the construction of the holomorphic affine flat structure on the Teichm\"uller space. Most of the results in this section are now standard in the literature, and can be found in \cite{km1}, \cite{tod1}, and \cite{tian1}.
For the reader's convenience, we also briefly review some of the arguments. We remark that one may use a more algebraic approach to Theorem \ref{expcoh} by using the local Torelli theorem and the Griffiths transversality. \subsection{Local deformation of complex structure}\label{section local deformation of complex structure} Let $X$ be a smooth manifold of dimension $\dim_{\mathbb{R}} X=2n$ and let $J$ be an integrable complex structure on $X$. We denote by $M=(X,J)$ the corresponding complex manifold, and by $\partial$, $\bar{\partial}$ the corresponding differential operators on $M$. Let $\phi\in A^{0,1}\left (M,T^{1,0}M\right )$ be a $T^{1,0}M$-valued smooth $(0,1)$-form. For any point $x\in M$ and any local holomorphic coordinate chart $(U, z_1,\cdots,z_n)$ around $x$, let us express $\phi=\phi^i_{\bar{j}}d\bar{z}_j\otimes\frac{\partial}{\partial z_i}=\phi^i\partial_i$, where $\phi^i=\phi^i_{\bar{j}}d\bar{z}_j$ and $\partial_i=\frac{\partial}{\partial z_i}$ for simplicity. Here we use the standard convention of summing over repeated indices. We can view $\phi$ as a map \begin{align*} \phi:\, \Omega^{1,0}(M)\to \Omega^{0,1}(M) \end{align*} such that locally we have \begin{align*} \phi(dz_i)=\phi^i\ \ \ \ \ \text{for}\ \ 1\leq i\leq n. \end{align*} We use $\phi$ to describe deformations of complex structures. Let \begin{align*} \Omega_\phi^{1,0}(x)=\text{span}_\mathbb{C}\{ dz_1+\phi(dz_1), \cdots, dz_n+\phi(dz_n)\},\quad\text{and}\quad \end{align*} \begin{align*} \Omega_\phi^{0,1}(x)=\text{span}_\mathbb{C}\{ d\bar z_1+\bar\phi(d\bar z_1), \cdots, d\bar z_n+\bar\phi(d\bar z_n)\}. \end{align*} If $\Omega_\phi^{1,0}(x)\cap\Omega_\phi^{0,1}(x)=0$ for every $x$, then we can define a new almost complex structure $J_\phi$ by letting $\Omega_\phi^{1,0}(x)$ and $\Omega_\phi^{0,1}(x)$ be the eigenspaces of $J_\phi(x)$ with respect to the eigenvalues $\sqrt{-1}$ and $-\sqrt{-1}$ respectively, and we call such a $\phi$ a Beltrami differential.
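As a sanity check (our own illustration), the classical one-dimensional picture of the Beltrami equation fits this framework:

```latex
% Complex dimension n = 1: \phi = \mu(z)\, d\bar z \otimes \partial_z for a smooth function \mu.
% Then \Omega^{1,0}_\phi(x) = \mathbb{C}\,(dz + \mu\, d\bar z) and
% \Omega^{0,1}_\phi(x) = \mathbb{C}\,(d\bar z + \bar\mu\, dz), which intersect trivially iff
\det\begin{pmatrix}1&\mu\\ \bar\mu&1\end{pmatrix}=1-|\mu|^2\neq 0,
% e.g. whenever |\mu| < 1, so J_\phi is a well-defined almost complex structure.
% (The integrability condition recalled below is automatic in dimension one, since both
% sides contain the factor d\bar z \wedge d\bar z = 0.)
```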
It was proved in \cite{nn} that the almost complex structure $J_\phi$ is integrable if and only if \begin{eqnarray}\label{int} \bar\partial\phi=\frac{1}{2}[\phi,\phi]. \end{eqnarray} If (\ref{int}) holds, we will call $\phi$ an integrable Beltrami differential and denote by $M_\phi$ the corresponding complex manifold. Please see Chapter 4 in \cite{km1} for more details about the deformation of complex structures. Let us recall the notations for the contraction and the Lie bracket of Beltrami differentials. Let $(U,z_1, \cdots,z_n)$ be the local coordinate chart defined above, let $\Omega=fdz_1\wedge\cdots\wedge dz_n$ be a smooth $(n,0)$-form on $M$, and let $\phi\in A^{0,1}(M,T^{1,0}M)$ be a Beltrami differential. We define \begin{align*} \phi\lrcorner\Omega=\sum\limits_{i}(-1)^{i-1}f\phi^i\wedge dz_1\wedge\cdots\wedge\widehat{dz_i}\wedge\cdots\wedge dz_n. \end{align*} For Beltrami differentials $\phi$, $\psi\in A^{0,1}(M,T^{1,0}M)$, with $\phi=\phi^i\partial_i$ and $\psi=\psi^k\partial_k$, recall that the Lie bracket is defined as \begin{align*} [\phi,\psi]=\sum\limits_{i,k}\left(\phi^i\wedge\partial_i\psi^k+\psi^i\wedge\partial_i\phi^k\right)\otimes\partial_k, \end{align*} where $\partial_i\phi^k=\frac{\partial\phi^k_{\bar{l}}}{\partial z_i}d\bar{z}_l$ and $\partial_i\psi^k=\frac{\partial\psi^k_{\bar{l}}}{\partial z_i}d\bar{z}_l$. For $k$ Beltrami differentials $\phi_1,\cdots,\phi_k\in A^{0,1}(M,T^{1,0}M)$, with $\phi_\alpha=\phi^i_\alpha\partial_i$ and $1\leq\alpha\leq k$, we define \begin{align*} \phi_1\wedge\cdots\wedge\phi_k=\sum_{i_1<\cdots<i_k}\left ( \sum_{\sigma\in S_k}\phi_{\sigma(1)}^{i_1}\wedge\cdots\wedge\phi_{\sigma(k)}^{i_k} \right )\otimes \left (\partial_{i_1}\wedge\cdots\wedge \partial_{i_k}\right), \end{align*} where $S_k$ is the symmetric group on $k$ elements. In particular, we have \begin{align*} \wedge^k\phi=k!\sum_{i_1<\cdots<i_k}\left ( \phi^{i_1}\wedge\cdots\wedge\phi^{i_k}\right )\otimes \left (\partial_{i_1}\wedge\cdots\wedge \partial_{i_k}\right ).
\end{align*} Then we define the contraction \begin{align*} &(\phi_1\wedge\cdots\wedge\phi_k)\lrcorner\Omega= \phi_1\lrcorner(\phi_2\lrcorner(\cdots\lrcorner(\phi_k\lrcorner\Omega))) \\ =& \sum_{I=(i_1,\cdots,i_k)\in A_k}(-1)^{|I|+\frac{(k-1)(k-2)}{2}-1} f\left ( \sum_{\sigma\in S_k}\phi_{\sigma(1)}^{i_1}\wedge\cdots\wedge\phi_{\sigma(k)}^{i_k} \right )\wedge dz_{I^c}, \end{align*} where $A_k$ is the index set \begin{equation*} A_k=\{ (i_1,\cdots,i_k)\mid 1\leq i_1<\cdots<i_k\leq n\}. \end{equation*} Here for each $I=(i_1,\cdots,i_k)\in A_k$, we let $|I|=i_1+\cdots+i_k$ and $dz_{I^c}=dz_{j_1}\wedge\cdots\wedge dz_{j_{n-k}}$, where $j_1<\cdots<j_{n-k}$ and $j_\alpha\ne i_\beta$ for any $\alpha,\beta$. With the above notations, for any Beltrami differentials $\phi$, $\psi\in A^{0,1}(M,T^{1,0}M)$ one has the following identity, which was proved in \cite{tian1} and \cite{tod1}, \begin{align}\label{tian tod lemma} \partial((\phi\wedge\psi)\lrcorner\Omega)=-[\phi,\psi]\lrcorner\Omega+\phi\lrcorner\partial(\psi\lrcorner\Omega)+\psi\lrcorner\partial(\phi\lrcorner\Omega). \end{align} The following notation will be needed in the construction of the local canonical family of holomorphic $(n,0)$-classes: \begin{align}\label{family of smooth n0 form} e^{\phi}\lrcorner\Omega=\sum\limits_{k\geq 0}\frac{1}{k!}\wedge^k\phi\lrcorner\Omega. \end{align} By direct computation, we see that $e^{\phi}\lrcorner\Omega=f\left(dz_1+\phi(dz_1)\right)\wedge\cdots\wedge\left(dz_n+\phi(dz_n)\right)$ is a smooth $(n,0)$-form on $M_{\phi}$. \subsection{Local deformation of Calabi-Yau manifolds}\label{section local deformation of cy mfds} For a point $p\in\mathcal{T}$, we denote by $(M_p, L)$ the corresponding polarized and marked Calabi-Yau manifold, namely the fiber over $p$. Yau's solution of the Calabi conjecture implies that there exists a unique Calabi-Yau metric $h_p$ on $M_p$, and the imaginary part $\omega_p=\mathrm{Im}\, h_p\in L$ is the corresponding K\"{a}hler form.
First, by using the Calabi-Yau metric, we have the following lemma. \begin{lemma}\label{iso} Let $\Omega_p$ be a nowhere vanishing holomorphic $(n,0)$-form on $M_p$ such that \begin{eqnarray}\label{normalization} \left ( \frac{\sqrt{-1}}{2}\right )^n(-1)^{\frac{n(n-1)}{2}}\Omega_p\wedge\bar{\Omega_p}=\omega_p^n. \end{eqnarray} Then the map $\iota:\, A^{0,1}\left ( M, T^{1,0}M\right )\to A^{n-1,1}(M)$ given by $\iota(\phi)=\phi\lrcorner\Omega_p$ is an isometry with respect to the natural Hermitian inner products on both spaces induced by $\omega_p$. Furthermore, $\iota$ preserves the Hodge decomposition. \end{lemma} Let us briefly recall the proof. We can pick local coordinates $z_1,\cdots,z_n$ on $M$ such that $\Omega_p=dz_1\wedge\cdots\wedge dz_n$ locally and $\omega_p=\frac{\sqrt{-1}}{2} g_{i\bar j}dz_i\wedge d\bar z_j$. Then the condition \eqref{normalization} implies that $\det[g_{i\bar j}]=1$. The lemma follows from direct computations. Let $\partial_{M_p}$, $\bar{\partial}_{M_p}$, $\bar{\partial}_{M_p}^*$, $\square_{M_p}$, $G_{M_p}$, and $\mathbb{H}_{M_p}$ be the corresponding operators in the Hodge theory on $M_p$, where $\bar{\partial}_{M_p}^*$ is the adjoint operator of $\bar{\partial}_{M_p}$, $\square_{M_p}$ the Laplace operator, and $G_{M_p}$ the corresponding Green operator. We let $\mathbb{H}_{M_p}$ denote the harmonic projection onto the kernel of $\square_{M_p}$. We also denote by $\mathbb{H}^{p,q}(M_p,\, E)$ the harmonic $(p,q)$-forms with values in a holomorphic vector bundle $E$ on $M_p$. By using the Calabi-Yau metric we have a more precise description of the local deformation of a polarized Calabi-Yau manifold. First, from Hodge theory, we have the following identification \begin{align*} T_p^{1,0}\mathcal{T}\cong {\mathbb H}^{0,1}\left ( M_p,T_{M_p}^{1,0}\right ). \end{align*} From Kuranishi theory we have the following local convergent power series expansion of the Beltrami differentials, which is now well known as the Bogomolov-Tian-Todorov theorem.
\begin{thm}\label{flatcoord} Let $\phi_1,\cdots,\phi_N \in {\mathbb H}^{0,1}\left ( M_p,T_{M_p}^{1,0}\right )$ be a basis. Then there is a unique power series \begin{eqnarray}\label{10} \phi(\tau)=\sum_{i=1}^N \tau_i\phi_i +\sum_{|I|\geq 2}\tau^I\phi_I \end{eqnarray} which converges for $|\tau|<\varepsilon$ small. Here $I=(i_1,\cdots,i_N)$ is a multi-index, $\tau^I=\tau_1^{i_1}\cdots\tau_N^{i_N}$ and $\phi_I\in A^{0,1}\left ( M_p,T_{M_p}^{1,0}\right )$. Furthermore, the family of Beltrami differentials $\phi(\tau)$ satisfies the following conditions: \begin{align}\label{charflat} \begin{split} &\bar\partial_{M_p}\phi(\tau)=\frac{1}{2}[\phi(\tau),\phi(\tau)],\\ &\bar\partial_{M_p}^*\phi(\tau)=0,\\ &\phi_I\lrcorner\Omega_p =\partial_{M_p}\psi_I, \end{split} \end{align} for each $|I|\geq 2$, where the $\psi_I\in A^{n-2,1}(M_p)$ are smooth $(n-2,1)$-forms. By shrinking $\varepsilon$ we can pick each $\psi_I$ appropriately such that $\sum_{|I|\geq 2}\tau^I\psi_I$ converges for $|\tau|<\varepsilon$. \end{thm} \begin{remark}\label{flatcoordremark} The coordinates $\{\tau_1, \cdots, \tau_N\}$ depend on the choice of the basis $\phi_1,\cdots,\phi_N \in {\mathbb H}^{0,1}\left ( M_p,T_{M_p}^{1,0}\right )$. But one can also determine the coordinates by fixing bases $\{\eta_0\}$ and $\{\eta_1, \cdots, \eta_N\}$ for $H^{n,0}(M_p)$ and $H^{n-1,1}(M_p)$ respectively. In fact, Lemma \ref{iso} implies that there is a unique choice of $\phi_1, \cdots, \phi_N$ such that $\eta_k=[\phi_k\lrcorner \eta_0]$ for each $1\leq k\leq N$. \end{remark} Theorem \ref{flatcoord} was proved in \cite{tod1}, and in \cite{tian1} in a form without specifying the Kuranishi gauge, i.e.\ the second and the third conditions in (\ref{charflat}). This theorem implies that the local deformation of a Calabi-Yau manifold is unobstructed. Here we only mention two important points of its proof.
For the convergence of $\sum_{|I|\geq 2}\tau^I\psi_I$, noting that $\partial_{M_p}\psi_I=\phi_I\lrcorner\Omega_p$ and $\bar{\partial}\phi_I\lrcorner\Omega_p=0$, we can pick $\psi_I=\partial^*_{M_p}G_{M_p}(\phi_I\lrcorner\Omega_p)$. It follows that \begin{align*} \|\psi_I\|_{k,\alpha}\leq C(k,\alpha)\|\phi_I\lrcorner\Omega_p\|_{k-1,\alpha}\leq C'(k,\alpha)\|\phi_I\|_{k-1,\alpha}. \end{align*} The desired convergence follows from the estimates on $\phi_I$. We note that the convergence of (\ref{10}) follows from standard elliptic estimates. See \cite{tod1}, or Chapter 4 of \cite{km1} for details. For the third condition in (\ref{charflat}), by using the first two conditions in (\ref{charflat}), for example in the case $|I|=2$ we have \begin{align}\label{firsttwocond} \overline{\partial}_{M_p}\phi_{ij}=[\phi_i,\phi_j]\ \text{and}\ \overline{\partial}^*_{M_p}\phi_{ij}=0. \end{align} Then by using formula (\ref{tian tod lemma}) and Lemma \ref{iso}, we get that \begin{align*} [\phi_i,\phi_j]\lrcorner\Omega_p=\partial_{M_p}(\phi_i\wedge\phi_j\lrcorner\Omega_p) \end{align*} is $\partial_{M_p}$-exact. It follows that $\overline{\partial}_{M_p}(\phi_{ij}\lrcorner\Omega_p)=\left(\overline{\partial}_{M_p}\phi_{ij}\right)\lrcorner\Omega_p$ is also $\partial_{M_p}$-exact. Then by the $\partial\overline{\partial}$-lemma we have \begin{align*} \overline{\partial}_{M_p}(\phi_{ij}\lrcorner\Omega_p)=\overline{\partial}_{M_p}\partial_{M_p}\psi_{ij} \end{align*} for some $\psi_{ij}\in A^{n-2,1}$. It follows that \begin{align*} \phi_{ij}\lrcorner\Omega_p=\partial_{M_p}\psi_{ij}+\bar{\partial}_{M_p}\alpha+\beta \end{align*} for some $\alpha\in A^{n-1,0}(M_p)$ and $\beta\in \mathbb{H}^{n-1,1}(M_p)$. By using the condition $\overline{\partial}^*_{M_p}\phi_{ij}=0$ and Lemma \ref{iso}, we have \begin{align*} \phi_{ij}\lrcorner\Omega_p=\partial_{M_p}\psi_{ij}+\beta.
\end{align*} Because $\phi_{ij}$ is not uniquely determined by condition (\ref{firsttwocond}), we can choose $\phi_{ij}$ such that its harmonic projection vanishes, $\mathbb{H}(\phi_{ij})=0$. Then by using Lemma \ref{iso} again, we have \begin{align*} \phi_{ij}\lrcorner\Omega_p=\partial_{M_p}\psi_{ij}. \end{align*} Thus there exists a unique $\phi_{ij}$ which satisfies all three conditions in (\ref{charflat}). We can then proceed by induction, using the same argument as above, to show that the third condition in (\ref{charflat}) holds for all $|I|\geq 2$. See \cite{tod1} and \cite{tian1} for details. Theorem \ref{flatcoord} will be used to define the local holomorphic affine flat coordinates $\{\tau_1,\cdots,\tau_N\}$ around $p$, for a given orthonormal basis $\phi_1,\cdots,\phi_N \in {\mathbb H}^{0,1}\left ( M_p,T_{M_p}^{1,0}\right )$. Sometimes we also denote by $M_\tau$ the deformation given by the Beltrami differential $\phi(\tau)$. \subsection{Local canonical section of holomorphic $(n,0)$-classes}\label{section local canonical section} By using the local deformation theory, Todorov constructed in \cite{tod1} a canonical local holomorphic section of the line bundle $H^{n,0}=F^n$ over the local deformation space of a Calabi-Yau manifold at the differential form level. We first recall the construction of the holomorphic $(n,0)$-forms in \cite{tod1}. Let $\phi\in A^{0,1}\left ( M,T^{1,0}M\right )$ be an integrable Beltrami differential and let $M_\phi$ be the Calabi-Yau manifold defined by $\phi$. We refer the reader to Section \ref{section local deformation of complex structure} for the definition of the contraction $e^\phi\lrcorner\Omega_p$. \begin{lemma}\label{constructn0} Let $\Omega_p$ be a nowhere vanishing holomorphic $(n,0)$-form on $M_p$ and let $\{z_1,\cdots,z_n\}$ be a local holomorphic coordinate system with respect to $J$ such that \begin{align*} \Omega_p=dz_1\wedge\cdots\wedge dz_n \end{align*} locally.
Then the smooth $(n,0)$-form \begin{align*} \Omega_\phi=e^{\phi}\lrcorner\Omega_p \end{align*} is holomorphic with respect to the complex structure on $M_\phi$ if and only if $\partial_{M_p}(\phi\lrcorner\Omega_p)=0$. \end{lemma} \begin{proof} The proof in \cite{tod1} is by direct computations; here we give a simple proof. Being an $(n,0)$-form on $M_\phi$, $e^\phi\lrcorner\Omega_p$ is holomorphic on $M_\phi$ if and only if $d(e^\phi\lrcorner\Omega_p)=0$. For any smooth $(n,0)$-form $\Omega_p$ and Beltrami differential $\phi\in A^{0,1}\left(M,T^{1,0}M\right)$, we have the following formula, \begin{align*} d(e^{\phi}\lrcorner\Omega_p)=e^{\phi}\lrcorner(\bar{\partial}_{M_p}\Omega_p+\partial_{M_p}(\phi\lrcorner\Omega_p))+(\bar{\partial}_{M_p}\phi-\frac{1}{2}[\phi,\phi])\lrcorner(e^\phi\lrcorner\Omega_p), \end{align*} which can be verified by direct computations. In our case the Beltrami differential $\phi$ is integrable, i.e. $\bar{\partial}_{M_p}\phi-\frac{1}{2}[\phi,\phi]=0$, and $\Omega_p$ is holomorphic on $M_p$. Therefore we have \begin{align*} d(e^{\phi}\lrcorner\Omega_p)=e^{\phi}\lrcorner(\partial_{M_p}(\phi\lrcorner\Omega_p)), \end{align*} which implies that $e^{\phi}\lrcorner\Omega_p$ is holomorphic on $M_{\phi}$ if and only if $\partial_{M_p}(\phi\lrcorner\Omega_p)=0$. \end{proof} Now we can construct the canonical family of holomorphic $(n,0)$-forms on the local deformation space of Calabi-Yau manifolds. \begin{proposition}\label{canonicalfamily} We fix on $M_p$ a nowhere vanishing holomorphic $(n,0)$-form $\Omega_p$ and an orthonormal basis $\{\phi_i\}_{i=1}^N$ of $\mathbb{H}^{0,1}(M_p,T^{1,0}M_p)$. Let $\phi(\tau)$ be the family of Beltrami differentials given by \eqref{charflat} that defines a local deformation of $M_p$, which we denote by $M_\tau$. Let \begin{align}\label{can10} \Omega^c_p(\tau)=e^{\phi(\tau)}\lrcorner\Omega_p.
\end{align} Then $\Omega^c_p(\tau)$ is a well-defined nowhere vanishing holomorphic $(n,0)$-form on $M_\tau$ and depends on $\tau$ holomorphically. \end{proposition} \begin{proof} We call such a family the canonical family of holomorphic $(n,0)$-forms on the local deformation space of $M_p$. The fact that $\Omega^c_p(\tau)$ is a nowhere vanishing holomorphic $(n,0)$-form on the fiber $M_\tau$ follows directly from its definition and Lemma \ref{constructn0}. In fact we only need to check that $\partial_{M_p}(\phi(\tau)\lrcorner\Omega_p)=0$. By formulae \eqref{10} and \eqref{charflat} we know that \begin{align*} \phi(\tau)\lrcorner\Omega_p=\sum_{i=1}^N \tau_i(\phi_i\lrcorner\Omega_p)+\sum_{|I|\geq 2}\tau^I (\phi_I\lrcorner\Omega_p)= \sum_{i=1}^N \tau_i(\phi_i\lrcorner\Omega_p)+\partial_{M_p}\left ( \sum_{|I|\geq 2}\tau^I\psi_I\right ). \end{align*} Because each $\phi_i$ is harmonic, by Lemma \ref{iso} we know that $\phi_i\lrcorner\Omega_p$ is also harmonic and thus $\partial_{M_p}(\phi_i\lrcorner\Omega_p)=0$. Furthermore, since $\sum_{|I|\geq 2}\tau^I\psi_I$ converges when $|\tau|$ is small, we see that $\partial_{M_p}(\phi(\tau)\lrcorner\Omega_p)=0$ from formula \eqref{charflat}. The holomorphic dependence of $\Omega^c_p(\tau)$ on $\tau$ follows from formula \eqref{can10} and the fact that $\phi(\tau)$ depends on $\tau$ holomorphically. \end{proof} From Theorem \ref{flatcoord} and Proposition \ref{canonicalfamily} we get the expansion of the de Rham cohomology classes of the canonical family of holomorphic $(n,0)$-forms. This expansion will be important in the construction of the holomorphic affine structure on the Teichm\"{u}ller space. We remark that one may also directly deduce this expansion from the local Torelli theorem for Calabi-Yau manifolds and the Griffiths transversality. \begin{theorem}\label{expcoh} Let $\Omega^c_p(\tau)$ be the canonical family defined by \eqref{can10}.
Then we have the following expansion for $|\tau|<\epsilon$ small, \begin{eqnarray}\label{cohexp10} [\Omega^c_p(\tau)]=[\Omega_p]+\sum_{i=1}^N \tau_i[\phi_i\lrcorner\Omega_p]+A(\tau), \end{eqnarray} where $\{[\phi_1\lrcorner\Omega_p], \cdots, [\phi_N\lrcorner\Omega_p]\}$ give a basis of $H^{n-1,1}(M_p)$ and $A(\tau)=O(|\tau|^2)\in \bigoplus_{k=2}^n H^{n-k,k}(M_p)$ denotes terms of order at least $2$ in $\tau$. \end{theorem} \begin{proof} By Theorem \ref{flatcoord} and Proposition \ref{canonicalfamily} we have \begin{align}\label{20000} \begin{split} \Omega^c_p(\tau)=& \Omega_p+\sum_{i=1}^N \tau_i(\phi_i\lrcorner\Omega_p)+\sum_{|I|\geq 2}\tau^I(\phi_I\lrcorner\Omega_p)+\sum_{k\geq 2} \frac{1}{k!} \left ( \wedge^k\phi(\tau)\lrcorner\Omega_p \right )\\ =&\Omega_p+\sum_{i=1}^N \tau_i(\phi_i\lrcorner\Omega_p)+\partial_{M_p}\left ( \sum_{|I|\geq 2}\tau^I\psi_I\right ) +a(\tau), \end{split} \end{align} where \begin{align}\label{a(tau)} a(\tau)=\sum_{k\geq 2} \frac{1}{k!} \left ( \wedge^k\phi(\tau)\lrcorner\Omega_p \right )\in \bigoplus_{k\geq 2}A^{n-k,k}(M_p). \end{align} By Hodge theory, we have \begin{eqnarray}\label{20020} \begin{split} [\Omega^c_p(\tau)]&=[\Omega_p]+\sum_{i=1}^N \tau_i[\phi_i\lrcorner\Omega_p]+\left[\mathbb{H}(\partial_{M_p}( \sum_{|I|\geq 2}\tau^I\psi_I) )\right]+[\mathbb{H}(a(\tau))]\\ &=[\Omega_p]+\sum_{i=1}^N \tau_i[\phi_i\lrcorner\Omega_p]+\left[\mathbb{H}(a(\tau))\right]. \end{split} \end{eqnarray} Let $A(\tau)=[\mathbb{H}(a(\tau))]$; then (\ref{a(tau)}) shows that $A(\tau)\in \bigoplus_{k=2}^n H^{n-k,k}(M_p)$ and $A(\tau)=O(|\tau|^2)$, i.e.\ it consists of terms of order at least $2$ in $\tau$.
\end{proof} In fact we have the following expansion of the canonical family of $(n,0)$-classes up to order $2$ in $\tau$, \begin{align*} [\Omega^c_p(\tau)]=[\Omega_p]+\sum_{i=1}^N \tau_i[\phi_i\lrcorner\Omega_p]+\frac{1}{2}\sum_{i,j}\tau_i\tau_j \left [ {\mathbb H}(\phi_i\wedge\phi_j\lrcorner\Omega_p) \right ]+\Xi(\tau), \end{align*} where $\Xi(\tau)=O(|\tau|^3)$ denotes terms of order at least $3$ in $\tau$, and $\Xi(\tau)\in \bigoplus_{k=2}^n H^{n-k,k}(M)$. This will not be needed in this paper. \section{Holomorphic Affine Structure on the Teichm\"{u}ller Space}\label{Section flat structure} In this section, we introduce a holomorphic flat coordinate chart at each point of the Teichm\"{u}ller space of the Calabi-Yau manifold. Then we prove that the local holomorphic flat coordinate charts naturally give us a global holomorphic affine structure on the Teichm\"uller space. We call the holomorphic affine flat coordinate charts the Kuranishi coordinate cover of the Teichm\"uller space, since the construction is based on the local Kuranishi deformation theory of the Calabi-Yau manifold. In Section \ref{section def affine manifold}, for the reader's convenience we review the definitions and some basic properties of affine manifolds and affine maps. The results in this section are standard. In Section \ref{section kuranishi cover on tei}, we introduce the holomorphic flat coordinate charts and the Kuranishi coordinate cover on the Teichm\"uller space. In Section \ref{section Affine structure on T}, we prove, in Theorem \ref{affine structure}, that the holomorphic flat coordinate charts give us a holomorphic affine structure on the Teichm\"{u}ller space of polarized and marked Calabi-Yau manifolds. \subsection{Affine manifolds and affine maps}\label{section def affine manifold} We first review the definitions and some basic properties of affine manifolds and affine maps.
We refer the reader to page~215 of \cite{Mats}, page~231 of \cite{Vit} or page 141 of \cite{auslander} for more details. \begin{definition}\label{affine manifold mats and vit} Let $M$ be a differentiable manifold of real dimension $n$. If there is a coordinate cover $\{U_i,\,\phi_i;\, i\in I\}$ of $M$ such that $\phi_{ik}=\phi_i\circ\phi_k^{-1}$ is a real affine transformation on $\mathbb{R}^n$ whenever $U_i\cap U_k$ is not empty, then we say that $\{U_i,\,\phi_i;\, i\in I\}$ is a real affine coordinate cover and that it defines a real affine structure on $M$. For the definition of a holomorphic affine manifold we simply replace ``real'' by ``holomorphic'' and ``$\mathbb{R}^n$'' by ``$\mathbb{C}^n$'' in the definition of a real affine manifold. \end{definition} The notion of holomorphic affine map will be essentially used in our proof of the global Torelli theorem. For affine maps we have the following theorem, which is a special case of Theorem 6.1 in \cite{KN}. We refer the reader to pages 252--255 of \cite{KN} for the proof. \begin{thm}\label{KN} Let $M$ be a connected, simply connected real affine manifold with an analytic linear connection. Let $M'$ be an analytic manifold with a complete analytic linear connection. Then every real affine map $f_U$ of a connected open subset $U$ of $M$ into $M'$ can be uniquely extended to a real affine map $f$ of $M$ into $M'$. \end{thm} The above theorem only deals with real analytic manifolds and analytic linear connections. We have a holomorphic analog of Theorem \ref{KN}. \begin{thm}\label{holomorphic extension} Let $M$ be a connected, simply connected holomorphic affine manifold and $M'$ be a complete holomorphic affine manifold. Then every holomorphic affine map $f_U$ of a connected open subset $U$ of $M$ into $M'$ can be uniquely extended to a holomorphic affine map $f$ of $M$ into $M'$. \end{thm} \begin{proof} Because both $M$ and $M'$ are holomorphic affine manifolds, they are naturally real affine manifolds.
The map $f_U$ is automatically a real affine map. Then by using Theorem \ref{KN}, there exists a global real affine map \begin{align*} f:\, M\rightarrow M' \end{align*} which is an extension of $f_U$ to a map from $M$ to $M'$. We know that $f$ is real analytic on $M$ and holomorphic on the open set $U$; since $M$ is connected, $f$ is globally holomorphic on $M$ by analytic continuation. For uniqueness, if a holomorphic affine map $g:\ M\rightarrow M'$ is another extension of $f_U$, then $g=f$ on the open set $U$, which implies $g=f$ on $M$ by the identity theorem. \end{proof} It is also easy to give a direct proof of this theorem by using the holomorphic affine structures following the proof of Theorem 6.1 in \cite{KN}. \subsection{Kuranishi coordinate cover on the Teichm\"uller space}\label{section kuranishi cover on tei} In this subsection we introduce the Kuranishi coordinate cover to define a holomorphic affine structure on $\mathcal{T}$. We first review the definition of a Hodge basis adapted to a given Hodge decomposition $H_\mathbb{C}=H^{n,0}\oplus H^{n-1,1}\oplus\cdots\oplus H^{0,n}$. Let $e=\{e_0,\cdots,e_m\}$ be a basis of $H_\mathbb{C}$. We call $e$ a \emph{Hodge basis} adapted to this Hodge decomposition, if \begin{align*} \text{Span}_{\mathbb{C}}\{e_{m_{k-1}+1},\cdots,e_{m_k}\}=H^{n-k,k}, \end{align*} where $m_k=f^{n-k}-1$, for each $0\leq k\leq n$. Using the relation \begin{equation}\label{quotient} F^{n-k}/F^{n-k+1}=H^{n-k,k}, \end{equation} we get that the Hodge basis defined above also satisfies \begin{equation} \text{Span}_{\mathbb{C}}\{e_{m_{k-1}+1},\cdots,e_{m_k}\}=F^{n-k}/F^{n-k+1}. \end{equation} Note that this identification is valid only when we fix a point $p$ in $\mathcal{T}$. A square matrix $T =[T^{\alpha,\beta}]$, with each $T^{\alpha,\beta}$ a submatrix, is called block upper triangular if $T^{\alpha,\beta}$ is the zero matrix whenever $\alpha>\beta$. To clarify the notation, we will use $T_{ij}$ to denote the entries of the matrix $T$. We have the following lemma, which should be well known to experts.
For example, it follows easily from the unipotent orbit in the period domain as described in \cite{schmid2}. \begin{lemma}\label{use} We fix a base point $p\in\mathcal{T}$ and a Hodge basis $\{c_0(p),\cdots,c_m(p)\}$ of the Hodge decomposition of $M_p$, and write $C(p)=(c_0(p),\cdots,c_m(p))^T$ as a column vector. Then there is an open neighborhood $U_p$ of $p$, such that for any $q\in U_p$, there exists a block upper triangular matrix $\sigma(q)$ such that the basis \begin{eqnarray}\label{tr500} \begin{bmatrix} c_0(q)\\ \vdots\\ c_m(q) \end{bmatrix} =\sigma(q) \begin{bmatrix} c_0(p)\\ \vdots\\ c_m(p) \end{bmatrix} \end{eqnarray} is a Hodge basis of the Hodge decomposition of $M_q$. \end{lemma} \begin{proof} This lemma is a direct corollary of the Griffiths transversality. Let $\tau=\{\tau_1,\cdots,\tau_N\}$ be local holomorphic coordinates on a small neighborhood $U_p$ around $p$, and \begin{align*} \{F^n_\tau\subset F^{n-1}_\tau\subset\cdots\subset F^0_\tau\}=\Phi(\tau) \end{align*} be the period map. Then the Griffiths transversality (\ref{griffiths transversality quotient version}) is equivalent to \begin{align}\label{griffiths transversality} \frac{\partial}{\partial \tau}(F^k_\tau/F^{k+1}_\tau)\subset F^{k-1}_\tau/F^k_\tau, \end{align} where for each $0\leq k\leq n-1$, the quotient bundle $F^k/F^{k+1}$ is a holomorphic vector bundle on $\mathcal{T}$ and has the relation (\ref{quotient}) with the Hodge decomposition. Let $B(\tau)=\{B_l(\tau)\}_{l=0}^{(f^0-1)}$ be a holomorphic family of bases for $F^0_\tau$ over $U_p$, such that $\{B_l(\tau)\}_{l=f^{k}}^{(f^{k-1}-1)}$ is a holomorphic family of bases for $F^{k-1}_\tau/F^k_\tau$, for each $k$. In fact we may shrink $U_p$ to trivialize the holomorphic bundles $F^{k-1}_\tau/F^k_\tau$ to get the bases.
By using the Griffiths transversality (\ref{griffiths transversality}), we have for each ${f^{k}}\leq l\leq {(f^{k-1}-1)}$, $B_l(\tau)\in F_\tau^{k-1}/F_\tau^{k}$ and \begin{equation} \frac{\partial}{\partial \tau^i}B_l(\tau)\in F_\tau^{k-2}/F_\tau^{k-1}\ \ \text{for each}\ \ 1\leq i\leq N, \end{equation} and furthermore \begin{equation} \frac{\partial^{|I|}}{\partial \tau^I}B_l(\tau)\in F_\tau^{k-1-|I|}/F_\tau^{k-|I|}\ \ \text{for each}\ \ |I|\geq 1. \end{equation} Now let us look at the Taylor expansion of $B(\tau)$ at $\tau=0$. We get that there are polynomials $\sigma_{li}$ in $\tau$ such that \begin{align} B_l(\tau)&=\sum\limits_{|I|\geq 0}\frac{1}{I!}\frac{\partial^{|I|}}{\partial \tau^I}B_l(0)\tau^I\\ &=\sum\limits_{i\geq f^{k}}\sigma_{li}(\tau)B_i(0)\label{Taylor} \end{align} for $|\tau|<\epsilon$ small, $f^{k}\leq l\leq f^{k-1}-1$, and $0\leq k\leq n$. Let $q\in U_p$ be an arbitrary point; then $\{B_l(q)\}_{l=0}^{f^0-1}$ is a Hodge basis adapted to the Hodge decomposition at the point $q$. Then formula (\ref{Taylor}) shows that \begin{align*} B_l(q)=\sum\limits_{i\geq f^{k}}\sigma_{li}(q)B_i(p), \end{align*} where the matrix $[\sigma_{li}]_{0\leq l,i\leq f^0-1}$ is a block upper triangular matrix in the sense that $\sigma_{li}=0$ if $l\geq f^k$ and $i\leq f^{k}-1$ for some $0\leq k\leq n$. \end{proof} Here we remark that in \cite{CGL}, we have used row vectors for the adapted basis for the Hodge decomposition of $\Phi(p)$, while in this paper we use column vectors for the adapted basis of $\Phi(p)$. Due to these different conventions, the matrix $\sigma(q)$ in this lemma and the matrix $T(q)$ in Lemma 3.7 of \cite{CGL} are transposes of each other. From the above proof it is easy to see that the family of transition matrices $\sigma(\tau)=[\sigma^{\alpha,\beta}(\tau)]$ consists of block upper triangular matrices depending holomorphically on $\tau$. Furthermore each diagonal matrix $\sigma^{k,k}(\tau)$ can be reduced to the identity matrix.
But these properties will not be needed during our discussion in this paper. In a letter to the authors \cite{schmid2}, Schmid indicated that the block upper triangular matrix appearing in Lemma \ref{use} induces an affine flat structure in the unipotent orbit in the period domain. Now we are ready to define the Kuranishi coordinate cover. For each $p\in\mathcal{T}$, we denote by $\mathcal{C}_p$ the set consisting of all of the orthonormal bases for $\mathbb{H}^{0,1}(M_p,T^{1,0}M_p)$. Then for each pair $(p,\Psi)$, where $p\in\mathcal{T}$ and $\Psi\in\mathcal{C}_p$, we call the following coordinate chart \begin{align*} U_{p,\Psi}=(U_p,\{\tau_1,\cdots,\tau_N\})=(U_p, \, \tau) \end{align*} a holomorphic affine \emph{flat coordinate chart} around $p$, where the neighborhood $U_p$ of $p$ is chosen as in Lemma \ref{use}, and the coordinate system $\{\tau_1,\cdots,\tau_N\}$ is constructed by Theorem \ref{flatcoord}. Since the holomorphic affine flat coordinate chart can be defined around any point $p\in\mathcal{T}$, we have obtained a coordinate cover of $\mathcal{T}$ by the holomorphic affine flat coordinate charts \begin{align*} \{U_{p,\Psi}:\, p\in\mathcal{T}\ \text{and}\ \Psi\in\mathcal{C}_p\}. \end{align*} We call this coordinate cover the Kuranishi coordinate cover. Based on the existence of the local canonical section of the holomorphic $(n,0)$-forms on the local Kuranishi family of Calabi-Yau manifolds, in the next subsection we will prove that the Kuranishi coordinate cover induced by the local holomorphic affine coordinate charts gives us a global holomorphic affine structure on $\mathcal{T}$. We end this subsection with a simple remark. \begin{proposition}\label{coordinate is not depend on scalar} Let $p$ be a point in $\mathcal{T}$, and $M_p$ be the corresponding polarized and marked Calabi-Yau manifold. Let $(U_p,\{\tau_1,\cdots,\tau_N\})$ be a holomorphic affine flat coordinate chart around $p$, and $q\in U_p$.
Then for any $[\Omega_q]\in H^{n,0}(M_q)$, there exists a constant $\lambda\in\mathbb{C}$ such that \begin{align*} [\Omega_q]=\lambda\left([\Omega_p]+\sum\limits_{i=1}^N \tau_i(q)[\phi_i\lrcorner\Omega_p]+A_q\right), \end{align*} where $A_q\in \bigoplus_{k\geq 2}H^{n-k,k}(M_p)$, and $\tau_i(q)$ are the flat coordinates of $q$ in $U_p$. \end{proposition} \begin{proof} Because $\dim_{\mathbb{C}}\,H^{n,0}(M_q)=1$, and both $[\Omega^c_p(q)]$ and $[\Omega_q]$ belong to $H^{n,0}(M_q)$, we know that there exists a complex number $\lambda$ such that \begin{align*} [\Omega_q]=\lambda[\Omega^c_p(q)]. \end{align*} By using Theorem \ref{expcoh}, we get \begin{align*} [\Omega_q]=\lambda[\Omega^c_p(q)]=\lambda([\Omega_p]+\sum\limits_{i=1}^N \tau_i(q)[\phi_i\lrcorner\Omega_p]+A_q), \end{align*} where $\tau_i(q)$ are the values of the flat coordinates of $q$ in $U_p$, and $A_q\in \bigoplus_{k\geq 2} H^{n-k,k}(M_p)$. \end{proof} Note that in the above proposition, if we let $[\Omega_q]$ vary holomorphically in $q$, then $\lambda$ also depends on $q$ holomorphically. But we will not need this property in this paper. \subsection{Holomorphic affine structure on the Teichm\"uller space}\label{section Affine structure on T} In this subsection we prove that the Kuranishi coordinate cover gives us a natural holomorphic affine coordinate cover of $\mathcal{T}$, and therefore a holomorphic affine structure on $\mathcal{T}$. \begin{thm}\label{affine structure} The Kuranishi coordinate cover on $\mathcal{T}$ is a holomorphic affine coordinate cover, thus defines a global holomorphic affine structure on $\mathcal{T}$. \end{thm} The proof of Theorem \ref{affine structure} is reduced to the proofs of the following two lemmas. \begin{lemma}\label{p=q} For a point $p\in\mathcal{T}$, let $\Phi$ and $\Phi'$ be two different orthonormal bases of $\mathbb{H}^{0,1}(M_p,T^{1,0}M_p)$.
Then the transition map of these two coordinate charts $U_{p,\Phi}=(U_{p},\tau)$ and $U_{p,\Phi'}=(U_{p},t)$ is a holomorphic affine map. \end{lemma} \begin{proof} Let $A$ be an $N\times N$ matrix, such that $\Phi'=A\cdot \Phi$. By using Theorem \ref{flatcoord}, we have \begin{align*} (\tau_1,\cdots,\tau_N)^T=A(t_1,\cdots,t_N)^T \end{align*} which is clearly a holomorphic affine map. \end{proof} Lemma \ref{p=q} tells us that we can use any choice of the orthonormal basis of $\mathbb{H}^{0,1}(M_p,T^{1,0}M_p)$ for our discussion about the holomorphic affine structure on $\mathcal{T}$. From now on we will use $U_p$ instead of $U_{p,\Phi}$ for simplicity. \begin{lemma}\label{p,q close to each other} Let $p,q\in\mathcal{T}$ be two points. If $q\in U_p$, then the transition map between $(U_p,\tau)$ and $(U_q,t)$ is a holomorphic affine map. \end{lemma} \begin{proof} We take $\Phi_p=\{\phi_1,\cdots,\phi_N\}$ and $\Psi_q=\{\psi_1,\cdots,\psi_N\}$ to be the orthonormal bases of $\mathbb{H}^{0,1}(M_p, T^{1,0}M_p)$ and $\mathbb{H}^{0,1}(M_q, T^{1,0}M_q)$ respectively. Let $\Omega_p$ and $\Omega_q$ be respectively the holomorphic $(n,0)$-forms on $M_p$ and $M_q$, normalized as in Lemma \ref{iso}. Then by Lemma \ref{iso} we know that $\{\phi_1\lrcorner\Omega_p,\cdots,\phi_N\lrcorner\Omega_p\}$ and $\{\psi_1\lrcorner\Omega_q,\cdots,\psi_N\lrcorner\Omega_q\}$ are respectively the orthonormal bases for $\mathbb{H}^{n-1,1}(M_p)$ and $\mathbb{H}^{n-1,1}(M_q)$. Write \begin{eqnarray*} \eta_0=[\Omega_p],\ \ \ &\eta_i=[\phi_i\lrcorner\Omega_p]\ \ \ \ \ \text{for}\ 1\leq i\leq N;\\ \alpha_0=[\Omega_q],\ \ \ &\alpha_j=[\psi_j\lrcorner\Omega_q]\ \ \ \ \ \text{for}\ 1\leq j\leq N. \end{eqnarray*} We complete them to Hodge bases for $M_p$ and $M_q$ respectively, \begin{eqnarray*} \eta&=(\eta_0,\eta_1,\cdots,\eta_N,\cdots,\eta_m)^T\ \ \ \ \ \text{for}\ M_p;\\ \alpha&=(\alpha_0,\alpha_1,\cdots,\alpha_N,\cdots,\alpha_m)^T\ \ \ \ \text{for}\ M_q.
\end{eqnarray*} For any point $r\in U_p\cap U_q$, let us compute the transition map between the holomorphic affine flat coordinates at $r$, $(\tau_1(r),\cdots,\tau_N(r))$ and $(t_1(r),\cdots,t_N(r))$. Let $[\Omega_r]=[\Omega^c_p(\tau(r))]\in H^{n,0}(M_r)$, where $[\Omega^c_p(\tau)]$ is the canonical section of the holomorphic $(n,0)$-classes around $p$. Then by Theorem \ref{expcoh} and Proposition \ref{coordinate is not depend on scalar}, we have the following identities: \begin{align} [\Omega_r]&=\eta_0+\sum\limits_{i=1}^N\tau_i(r)\eta_i+\sum\limits_{k>N}f_k(r)\eta_k\label{omega_r_p};\\ [\Omega_r]&=\lambda\left(\alpha_0+\sum\limits_{i=1}^N\label{omega_r_q} t_i(r)\alpha_i+\sum\limits_{k>N}g_k(r)\alpha_k\right). \end{align} Here each $f_k(r)$ is the coefficient of $\eta_k$ in the decomposition of $[\Omega_r]$ according to the Hodge decomposition on $M_p$, and each $g_k(r)$ is the coefficient of $\alpha_k$ in the decomposition of $\lambda^{-1}[\Omega_r]$ according to the Hodge decomposition on $M_q$, for $N+1\leq k\leq m$. We know that there is a Hodge basis corresponding to the Hodge decomposition of $M_q$, \begin{align*} C(q)=(c_0(q),\cdots,c_N(q),\cdots,c_m(q))^T, \end{align*} such that \begin{align*} c_i(q)=\sum\limits_j\sigma_{ij}\eta_j, \end{align*} and the matrix $\sigma=[\sigma^{\alpha,\beta}]$ is block upper triangular and nonsingular. Since $C(q)$ and $\alpha$ are two Hodge bases adapted to the same Hodge decomposition on $M_q$, they are related by a nonsingular block diagonal transition matrix of the form \begin{align*} A=\left[ \begin{array} [c]{cccc}% A^{0,0} & 0 & \cdots & 0\\ 0 & A^{1,1} & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots\\ 0 & 0 &\cdots & A^{n,n} \end{array} \right] \end{align*} where each $A^{\alpha,\alpha}$ is an invertible $h^{n-\alpha,\alpha}\times h^{n-\alpha,\alpha}$ matrix, for $0\leq \alpha\leq n$.
Thus we have \begin{align*} \alpha=&A\cdot C(q)\ \ \text{and}\ \ C(q)=\sigma\cdot \eta, \end{align*} from which we get the transition matrix between the basis $\alpha$ and the basis $\eta$, \begin{align*} \alpha=A\sigma \eta. \end{align*} It is clear that $A\sigma$ is still a nonsingular block upper triangular matrix of the form \begin{align}\label{T} \left[ \begin{array} [c]{cccc}% A^{0,0}\cdot\sigma^{0,0} & *& \cdots & *\\ 0 & A^{1,1}\cdot\sigma^{1,1} & \cdots & *\\ \cdots & \cdots & \cdots & \cdots\\ 0 & 0 &\cdots & A^{n,n}\cdot\sigma^{n,n} \end{array} \right]. \end{align} Let us denote $A\sigma$ by $T=[T^{\alpha,\beta}]_{0\leq \alpha,\beta\leq n}$. Note that each $T^{\alpha,\beta}$ is an $h^{n-\alpha,\alpha}\times h^{n-\beta,\beta}$ matrix for $0\leq \alpha, \beta\leq n$, and each $T^{\alpha,\alpha}$ is invertible. In particular note that the $1\times 1$ matrix $T^{0,0}=[T_{00}]$ and the $N\times N$ matrix $T^{1,1}$ in $T$ are nonsingular; they are used in the computation of the holomorphic affine flat coordinate transformation in the following. Then we project $[\Omega_r]$ to $F^{n-1}_p=F^{n-1}(M_p)=H^{n,0}(M_p)\oplus H^{n-1,1}(M_p)$. Recall that we use $P_p^{k}$ to denote the projection from $H^n(M,\mathbb{C})$ to $F^k_p$.
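In terms of the Hodge basis $\eta$, this projection is simply a truncation: since $\text{Span}_{\mathbb{C}}\{\eta_0,\eta_1,\cdots,\eta_N\}=H^{n,0}(M_p)\oplus H^{n-1,1}(M_p)=F^{n-1}_p$, a class expanded with coefficients $v_j$ (the coefficients $v_j$ are introduced here only for illustration) projects as \begin{align*} P^{n-1}_p\left(\sum\limits_{j=0}^{m} v_j\eta_j\right)=\sum\limits_{j=0}^{N} v_j\eta_j. \end{align*} In the computation that follows, this is what annihilates all the terms $T_{ij}\eta_j$ with $j>N$.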
From (\ref{omega_r_p}) and (\ref{omega_r_q}) we see that \begin{align} \eta_0+\sum\limits_{i=1}^N\tau_i\eta_i&=\lambda P^{n-1}_p(\alpha_0+\sum\limits_{i=1}^N\alpha_it_i+\sum\limits_{k=N+1}^mg_k(r)\alpha_k)\nonumber\\ &=\lambda P^{n-1}_p(\sum\limits_{j=0}^mT_{0j}\eta_j+\sum\limits_{i=1}^N t_i\sum\limits_{j=0}^m T_{ij}\eta_j+\sum\limits_{k=N+1}^m g_k(r)\sum\limits_{j=0}^m T_{kj}\eta_j)\nonumber\\ &=\lambda(\sum\limits_{j=0}^NT_{0j}\eta_j+\sum\limits_{i=1}^N t_i\sum\limits_{j=0}^N T_{ij}\eta_j)\label{aij is upper trangular}\\ &=\lambda T_{00}\eta_0+\sum\limits_{j=1}^N(\lambda T_{0j}+\sum\limits_{i=1}^N\lambda T_{ij}t_i)\eta_j.\nonumber \end{align} Here for brevity we have dropped the notation $r$ in the coordinates $\tau_i(r)$ and $t_i(r)$ in the above formulas. By comparing the coefficients of $\eta_0$ and of the basis $\{\eta_1,\cdots,\eta_N\}$ on both sides of the above identity, we get $1=\lambda T_{00}$ and $\tau_j=\lambda T_{0j}+\sum\limits_{i=1}^N\lambda T_{ij}t_i$. Therefore for $1\leq j\leq N$, we have the identity \begin{align}\label{affine transformation on teich} \tau_j=T_{00}^{-1}T_{0j}+\sum\limits_{i=1}^NT_{00}^{-1}T_{ij}t_i. \end{align} In this identity, one notes that the transition matrix $T$, while depending on $p$ and $q$, is independent of $r$. Thus we have proved that the coordinate transformation (\ref{affine transformation on teich}) is a holomorphic affine transformation. \end{proof} Recall that here $N$ denotes the dimension of $H^{n-1,1}(M_p)\cong F^{n-1}(M_p)/F^n(M_p)\cong \mathbb{C}^N$, which we identify with $T^{1,0}_p\mathcal{T}$, the tangent space of the Teichm\"uller space at $p$. Also it is important to note that $T$, as a basis change matrix, is invertible; therefore each matrix $T^{\alpha,\alpha}$ on the diagonal is invertible. Now we are ready to prove Theorem \ref{affine structure}. \begin{proof}[Proof of Theorem \ref{affine structure}] Let $(U_p,\tau)$ and $(U_q,t)$ be two holomorphic affine flat coordinate charts in the Kuranishi coordinate cover.
We need to show that the transition map between them is a holomorphic affine map. If $p=q$, Lemma \ref{p=q} shows the transition map between these two coordinate charts is a holomorphic affine map. If $p\neq q$, then we use a smooth curve $\gamma(s)$ to connect $p=\gamma(0)$ and $q=\gamma(1)$. Then by Lemma \ref{p,q close to each other}, we can easily choose \begin{align*} 0=s_0<s_1<\cdots<s_{k-1}<s_k=1, \end{align*} such that $\gamma(s_{l+1})\in U_{\gamma(s_l)}$ and the transition maps $\phi_{l,l+1}$ between the holomorphic affine flat coordinates in $U_{\gamma(s_l)}$ and $U_{\gamma(s_{l+1})}$ are holomorphic affine maps. Then the transition map between the holomorphic affine flat coordinates in $U_p$ and $U_q$, which is the composition of holomorphic affine maps \begin{align*} \phi_{pq}=\phi_{0,1}\circ\cdots\circ\phi_{k-1,k}, \end{align*} is also a holomorphic affine map, whenever $U_p\cap U_q$ is not empty. By Definition \ref{affine manifold mats and vit}, we see that the Kuranishi coordinate cover is a holomorphic affine coordinate cover, and it defines a holomorphic affine structure on $\mathcal{T}$. \end{proof} \subsection{Holomorphic affine structure and the period map} \label{affinePhi} Let us fix a base point $p\in \mathcal{T}$. In Appendix \ref{appi}, we defined the unipotent orbit $N_+$ with the fixed base point $p$ as a submanifold of $\check{D}$. Recall that in Theorem \ref{flatcoord}, by choosing a basis $(\eta_0, \cdots, \eta_N)^T$ of $H^{n,0}_p\oplus H^{n-1,1}_p$ for any $p\in \mathcal{T}$, we defined the Kuranishi coordinate chart on a neighborhood $U_p$ of $p$. We denote the coordinate map by: $$\rho_{_{U_p}}:=(\tau_1, \cdots, \tau_N):\, U_p \to \mathbb{C}^N, \ \ \ \ q\mapsto (\tau_1(q), \cdots, \tau_N(q)).$$ Now we extend the basis $(\eta_0, \cdots, \eta_N)^T$ to an adapted basis $(\eta_0, \cdots, \eta_{m-1})^T$ for the Hodge decomposition at the base point $\Phi(p)\in D$.
Therefore, according to the discussion after Remark \ref{N+inD}, elements in $N_+$ can be represented by nonsingular block upper triangular matrices with identity blocks in the diagonal blocks. \begin{proposition} \label{Phiaffine} The image of the period map $\Phi: \, \mathcal{T}\rightarrow D$ is in $N_+\cap D$. \end{proposition} \begin{proof} Let $q$ be any point in $\mathcal{T}$. By Lemma \ref{use}, there exists a nonsingular block upper triangular matrix $T$ such that $T(\eta_0, \cdots, \eta_{m-1})^T$ gives an adapted basis for the Hodge decomposition of $\Phi(q)$. Thus the matrix $T$ represents the point $\Phi(q)\in D$. Let $A$ be the nonsingular block diagonal matrix which consists of the diagonal blocks of $T$. Then $A^{-1}T$ is a nonsingular block upper triangular matrix with identity matrices as diagonal blocks. Thus the matrix $A^{-1}T$ represents an element in $N_+$. As $A^{-1}$ is a matrix in $V$ (recall $D\cong G_{\mathbb{R}}/V$), $A^{-1}T(\eta_0,\cdots, \eta_{m-1})^T$ still represents the point $\Phi(q)\in D$. Therefore, the matrix $A^{-1}T$ represents the point $\Phi(q)$ as well as a point in $N_+$. Thus we conclude that $\Phi(q)\in N_+$. \end{proof} Let $p$ be the fixed base point in $\mathcal{T}$; then the isomorphism $\text{Hom}(H^{n,0}_p, H^{n-1,1}_p)\cong H^{n-1,1}_p\cong \mathbb{C}^N$ follows from $\dim H^{n,0}_p=1$. We now define the following projection map \begin{align}\label{projectionmap} P: N_+\cap D&\rightarrow \text{Hom}(H^{n,0}_p, H^{n-1,1}_p)\cong H^{n-1,1}_p\cong \mathbb{C}^N,\\ F&\mapsto F^{(0,1)}(\eta_1, \cdots, \eta_N)^T=F_{01}\eta_1+\cdots+F_{0N}\eta_N, \end{align} where $F^{(0,1)}$ is the $(0,1)$-block of the unipotent matrix $F$. Based on Proposition \ref{Phiaffine}, we can define the map: \begin{align*} \Psi:=P\circ \Phi: \,\mathcal{T}\rightarrow \mathbb{C}^N.
\end{align*} Therefore the map $\Psi$ can also be described as $\Psi(q)=P^{n-1,1}_p(P^{n,0}_q(A^{-1}T(\eta_0, \cdots, \eta_{m-1})^T))$, for any $q\in \mathcal{T}$, where $\Phi(q)=A^{-1}T(\eta_0, \cdots, \eta_{m-1})^T$ according to Proposition \ref{Phiaffine}, and $P^{n-1,1}_p$ and $P^{n,0}_q$ are the projections defined at the end of Section \ref{section period map}. By the definition of $\Psi$, it is easy to conclude the following lemma. \begin{lemma}\label{Psiinjective} If the holomorphic map $\Psi: \,\mathcal{T}\rightarrow\mathbb{C}^N$ is injective, then the period map $\Phi: \,\mathcal{T}\rightarrow D$ is also injective. \end{lemma} We recall that, fixing the basis $(\eta_0, \cdots, \eta_N)$ of $H^{n,0}_p\oplus H^{n-1,1}_p$ at the reference point $p$, we defined the Kuranishi coordinate map in Theorem \ref{flatcoord} as follows: $$\rho_{_{U_p}}:=(\tau_1, \cdots, \tau_N):\, U_p \to \mathbb{C}^N, \ \ \ \ q\mapsto (\tau_1(q), \cdots, \tau_N(q)).$$ Moreover, we have the Taylor expansion of the local canonical section of the Hodge bundle $F^n$ over $U_p$ as given in Theorem \ref{expcoh}, \begin{equation*} [\Omega_{p}^c](q)=[\Omega_p](q)/{a_0(q)}=\eta_0+(\tau_1(q), \cdots, \tau_N(q))(\eta_1, \cdots, \eta_N)^T+g(q), \end{equation*} where $\eta_0$ has constant coefficient $1$ and $g(q)\in \bigoplus_{k=2}^nH^{n-k,k}_{p}$. On the other hand, let us denote by $[\tilde{\Omega}_{p}^c](q)\in H^{n,0}_q$ the first element of the adapted basis $A^{-1}T(\eta_0, \cdots, \eta_{m-1})^T$. Let us set $(A^{-1}T)^{(0,1)}=((A^{-1}T)_{01}, (A^{-1}T)_{02},\cdots, (A^{-1}T)_{0N})^T$, which is the $(0,1)$-block of the matrix $A^{-1}T$, according to our convention about block matrices as in Section \ref{section kuranishi cover on tei}.
Since all the diagonal blocks of $A^{-1}T$ are identity submatrices, we have \begin{align}\label{10block} [\tilde{\Omega}_{p}^c](q)=\eta_0+((A^{-1}T)^{(0,1)})^T(\eta_1, \cdots, \eta_N)^T+f(q), \quad\text{with}\quad f(q)\in \bigoplus_{k=2}^nH^{n-k,k}_{p}. \end{align} The fact that $\dim H^{n, 0}_q=1$ implies that there exists $\lambda\in \mathbb{C}$ such that $[\Omega_{p}^c](q)=\lambda [\tilde{\Omega}_{p}^c](q)$. Then by comparing the coefficient of $\eta_0$ in the expressions of $[\Omega_{p}^c](q)$ and $[\tilde{\Omega}_{p}^c](q)$, we get $\lambda=1$, that is, \begin{align}\label{10block2}[\tilde{\Omega}_{p}^c](q)=[\Omega_{p}^c](q)=\eta_0+(\tau_1(q), \cdots, \tau_N(q))(\eta_1, \cdots, \eta_N)^T+g(q),\end{align} with $g(q)\in \bigoplus_{k=2}^nH^{n-k,k}_{p}$. Now by comparing the coefficients of $(\eta_1, \cdots, \eta_N)$ in \eqref{10block} and \eqref{10block2}, we get that \begin{align}\label{10tau}(A^{-1}T)^{(0,1)}=(\tau_1(q), \cdots, \tau_N(q))^T. \end{align} Let us denote the restriction map $\psi:=\Psi|_{U_p}:U_p\rightarrow \mathbb{C}^N\cong\text{Hom}(H^{n,0}_p, H^{n-1,1}_p)\cong H^{n-1,1}_p$. Then $\psi(q)=(\eta_1, \cdots, \eta_N)(A^{-1}T)^{(0,1)}=\tau_1(q)\eta_1+\cdots+\tau_N(q)\eta_N$. Therefore, with respect to the affine structure on $U_p$ given by the Kuranishi coordinate cover, the restriction map $\psi$ is a holomorphic affine map. In particular, the tangent map of $\psi$ at $p$, \begin{align*} (\psi_*)_p=P^{n}_p\circ \Phi_*:\, T^{1,0}_p U_p\cong H^{0,1}(M_p, T^{1,0}M_p)\to \text{Hom}(F^n_p,F^{n-1}_p/F^n_p)\cong H^{n-1,1}_p, \end{align*} is an isomorphism, since, according to Lemma \ref{iso}, $P^n_p\circ \Phi_*$ is an isomorphism. In particular, $(\psi_*)_p$ is nondegenerate. We now apply Theorem \ref{holomorphic extension} to prove the following proposition.
\begin{proposition}\label{Psi local embedding} The holomorphic map $\Psi: \,\mathcal{T}\rightarrow\mathbb{C}^N$ is an affine map with respect to the affine structure on $\mathcal{T}$ given by the Kuranishi coordinate cover. Moreover, the map $\Psi$ is a local embedding. \end{proposition} \begin{proof} As was shown above, we have that $\psi: \, U_p\rightarrow \mathbb{C}^N$ is a holomorphic affine map with respect to the holomorphic affine structure on $U_p$ given by the Kuranishi coordinate cover. Since $\mathcal{T}$ is a connected, simply connected holomorphic affine manifold and $\mathbb{C}^N$ is a complete holomorphic affine manifold, by Theorem \ref{holomorphic extension} there exists a holomorphic affine extension map $\Psi':\,\mathcal{T}\rightarrow \mathbb{C}^N$ with respect to the affine structure on $\mathcal{T}$ given by the Kuranishi coordinate cover such that $\Psi'|_{U_p}=\psi: \,U_p\rightarrow \mathbb{C}^N$. Since both $\Psi$ and $\Psi'$ are holomorphic maps from $\mathcal{T}$ to $\mathbb{C}^N$, and they agree on the open set $U_p$, they must be the same map globally on $\mathcal{T}$, that is, $\Psi=\Psi':\,\mathcal{T}\rightarrow \mathbb{C}^N$. Thus we conclude that $\Psi$ is a holomorphic affine map. We recall that the tangent map of the restriction $\psi: \, U_p\rightarrow \mathbb{C}^N$ is nondegenerate at $p\in \mathcal{T}$. Then the tangent map of $\Psi$ is also nondegenerate at $p$. Therefore, since $\Psi: \,\mathcal{T}\rightarrow \mathbb{C}^N$ is a holomorphic affine map, we can conclude that the tangent map of $\Psi$ is nondegenerate at any point of $\mathcal{T}$. This shows that $\Psi$ is a local embedding. \end{proof} \section{Hodge metric completion of the Teichm\"uller space with level structure}\label{THm} In Section \ref{defofTH}, we introduce the universal cover $\mathcal{T}^H_{_m}$ of $\mathcal{Z}^H_{_m}$, where $\mathcal{Z}^H_{_m}$ is the Hodge metric completion of the smooth moduli space $\mathcal{Z}_{m}$.
We denote the lifting maps $i_m: \,\mathcal{T}\rightarrow \mathcal{T}^H_{_m}$ and $\Phi_{_{m}}^H: \,\mathcal{T}^H_{_m}\to D$ and take $\mathcal{T}_m:=i_{m}(\mathcal{T})$ and $\Phi_{m}:=\Phi^H_{_m}|_{\mathcal{T}_m}$. We will prove that $\mathcal{T}_{m}$ is a dense and open submanifold in $\mathcal{T}^H_{_m}$ and that $\Phi^H_{_m}$ is a holomorphic map from $\mathcal{T}^H_{_m}$ to $N_+\cap D$. We then define the map $\Psi^H_{_m}$ from $\mathcal{T}^H_{_m}$ to $\mathbb{C}^N$ and its restriction $\Psi_m$ on the submanifold $\mathcal{T}_m$. In Section \ref{affineness}, we use the local injectivity of $\Psi$ to show that $\Psi_m$ is also a local embedding, and conclude that there is a holomorphic affine structure on $\mathcal{T}_m$ with $\Psi_m$ naturally being affine on $\mathcal{T}_m$. Then the affineness of $\Psi_m$ shows that the extension $\Psi^H_{_m}$ is also a local embedding. We then analogously conclude the affineness of $\mathcal{T}^H_{_m}$ and $\Psi_{_m}^H$. In Section \ref{injective}, we prove that $\Psi^H_{_m}$ is an injection by using the Hodge metric completeness and the global holomorphic affine structure on $\mathcal{T}^H_{_m}$, as well as the affineness of $\Psi^H_{_m}$. As corollaries, we show that $\mathcal{T}^H_{_m}$ can be embedded into $\mathbb{C}^N$ and that the holomorphic map $\Phi_{_m}^H$ is an injection. We remark here that most of the detailed proofs of results in this section can be found in \cite{CGL}. \subsection{Definitions and basic properties}\label{defofTH} Recall from Section \ref{section construction of Tei} that $\mathcal{Z}_m$, from \cite{sz}, is the smooth moduli space of polarized Calabi--Yau manifolds with level $m$ structure. We defined the Teichm\"uller space $\mathcal{T}$ to be the universal cover of $\mathcal{Z}_m$. In particular, we have proved that the definition of $\mathcal{T}$ does not depend on the choice of level structures.
By the work of Viehweg in \cite{v1}, we know that $\mathcal{Z}_m$ is quasi-projective and that we can find a smooth projective compactification $\bar{\mathcal{Z}}_m$ such that $\mathcal{Z}_m$ is open in $\bar{\mathcal{Z}}_m$ and the complement $\bar{\mathcal{Z}}_m\backslash\mathcal{Z}_m$ is a divisor of normal crossing. Therefore, $\mathcal{Z}_m$ is dense and open in $\bar{\mathcal{Z}}_{m}$, where the complex codimension of the complement $\bar{\mathcal{Z}}_m\backslash \mathcal{Z}_m$ is at least one. Moreover, as $\bar{\mathcal{Z}}_m$ is a compact space, it is a complete space. Recall that at the end of Section \ref{section period map} we pointed out that there are induced Hodge metrics on $\mathcal{Z}_m$. Let us now take $\mathcal{Z}^H_{_m}$ to be the Hodge metric completion of $\mathcal{Z}_m$. Then $\mathcal{Z}_m^H$ is the smallest complete space with respect to the Hodge metric that contains $\mathcal{Z}_m$. Although the compact space $\bar{\mathcal{Z}}_m$ may not be unique, the Hodge metric completion space $\mathcal{Z}^H_{_m}$ is unique up to isometry. In particular, $\mathcal{Z}^H_{_m}\subseteq\bar{\mathcal{Z}}_m$ and the complex codimension of the complement $\mathcal{Z}^H_{_m}\backslash \mathcal{Z}_m$ is at least one. Then we have the following lemma; its proof can be found in Lemma 4.1 in \cite{CGL}. \begin{lemma}\label{cidim} The Hodge metric completion $\mathcal{Z}^H_{_m}$ is a dense and open smooth submanifold in $\bar{\mathcal{Z}}_m$, and the complex codimension of $\mathcal{Z}^H_{_m}\backslash\mathcal{Z}_m$ is at least one. \end{lemma} Let $\mathcal{T}^{H}_{_m}$ be the universal cover of $\mathcal{Z}^H_{_m}$. Thus $\mathcal{T}^H_{_m}$ is a connected and simply connected complete smooth complex manifold with respect to the Hodge metric. We will call $\mathcal{T}^H_{_m}$ the \textit{Hodge metric completion space with level $m$ structure} of $\mathcal{T}$, or simply the \textit{Hodge metric completion space}.
We denote the universal covering map by $\pi_{_m}^H:\, \mathcal{T}^{H}_{_m}\rightarrow \mathcal{Z}_m^H$. Since $\mathcal{Z}_m^H$ is the Hodge metric completion of $\mathcal{Z}_m$, there exists a natural continuous extension map ${\Phi}_{_{\mathcal{Z}_m}}^H: \,\mathcal{Z}_m^H\rightarrow D/\Gamma$. Moreover, recall that the Teichm\"uller space $\mathcal{T}$ is the universal cover of the moduli space $\mathcal{Z}_m$ with the universal covering map denoted by $\pi_m:\, \mathcal{T}\to \mathcal{Z}_m$. Thus we have the following commutative diagram \begin{align}\label{cover maps} \xymatrix{\mathcal{T}\ar[r]^{i_{m}}\ar[d]^{\pi_m}&\mathcal{T}^H_{_m}\ar[d]^{\pi_{_m}^H}\ar[r]^{{\Phi}^{H}_{_m}}&D\ar[d]^{\pi_D}\\ \mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma, } \end{align} where $i$ is the inclusion map, ${i}_{_m}$ is a lifting map of $i\circ\pi_m$, $\pi_D$ is the covering map and ${\Phi}^{H}_{_m}$ is a lifting map of ${\Phi}_{_{\mathcal{Z}_m}}^H\circ \pi_{_m}^H$. In particular, $\Phi^H_{_m}$ is a continuous map from $\mathcal{T}^H_{_m}$ to $D$. One may notice that the lifting maps $i_{m}$ and ${\Phi}^H_{_m}$ are not unique, but it is not hard to show that there exists a suitable choice of $i_{m}$ and $\Phi_{_m}^H$ such that $\Phi=\Phi^H_{_m}\circ i_{m}$. We refer the reader to the appendix of \cite{CGL} for the proof of this simple fact. We will fix the choice of $i_{m}$ and $\Phi^{H}_{_m}$ such that $\Phi=\Phi^H_{_m}\circ i_m$ in the rest of the paper. We remark that, unless otherwise pointed out, when we mention a complete space, the completeness is always with respect to the Hodge metric in this paper. Let us consider $\mathcal{T}_m:=i_m(\mathcal{T})$, which is connected as $\mathcal{T}$ is. Then we have the following result, the proof of which is provided in the appendix of \cite{CGL}.
\begin{proposition}\label{opend}The image $\mathcal{T}_m:=i_m(\mathcal{T})$ equals the preimage $(\pi^H_{_m})^{-1}(\mathcal{Z}_{m})$. \end{proposition} Recall from Remark \ref{N+inD} that we fixed a base point $p\in \mathcal{T}$ and identified the affine group $N_+$ with its unipotent orbit in $\check{D}$. In the following proposition, let us still identify $N_+$ with its unipotent orbit in $\check{D}$ by fixing the base point. Let us take the restriction map $\Phi_m:=\Phi^H_{_m}|_{\mathcal{T}_m}$. It is not hard to see that $\Phi_m$ is holomorphic. Indeed, we know that $i_m:\,\mathcal{T}\rightarrow \mathcal{T}_m$ is the lifting of $i\circ \pi_m$ and $\pi^H_{_m}|_{\mathcal{T}_m}:\, \mathcal{T}_m\rightarrow \mathcal{Z}_m$ is a holomorphic covering map, thus $i_m$ is also holomorphic. Since $\Phi=\Phi_m\circ i_m$ with both $\Phi$, $i_m$ holomorphic and $i_m$ locally invertible, we can conclude that $\Phi_m:\,\mathcal{T}_m\rightarrow D$ is a holomorphic map. Moreover, we have $\Phi_m(\mathcal{T}_m)=\Phi_m(i_m(\mathcal{T}))=\Phi(\mathcal{T})\subseteq N_+\cap D$ as $\Phi=\Phi_m\circ i_m$. Furthermore, according to the above discussion, we know that the complex codimension of the complement $\mathcal{T}^H_{_m}\backslash\mathcal{T}_m$ is at least one, and $\Phi_m:\,\mathcal{T}_m\to N_+\cap D$ is a locally bounded holomorphic map. Then by applying the Riemann extension theorem to $\Phi_{m}: \,\mathcal{T}_m\rightarrow N_+$, we conclude that there exists a holomorphic map $\Phi'_m:\, \mathcal{T}^H_{_m}\rightarrow N_+\cap D$ such that $\Phi'_m|_{\mathcal{T}_m}=\Phi_m$. We know that both $\Phi^H_{_m}$ and $\Phi'_m$ are continuous maps defined on $\mathcal{T}^H_{_m}$ that agree on the dense subset $\mathcal{T}_m$. Therefore, they must agree on the whole $\mathcal{T}^H_{_m}$, that is, $\Phi^H_{_m}=\Phi'_m$ on $\mathcal{T}^H_{_m}$. Thus we conclude the following result.
\begin{proposition}\label{Riemannextension}The map $\Phi^{H}_{_m}$ is a holomorphic map from $\mathcal{T}^H_{_m}$ to $N_+\cap D$.\end{proposition} Based on Proposition \ref{Riemannextension}, we can analogously define the holomorphic map \begin{align}\label{PsiHm}\Psi_{_m}^H=P\circ\Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N,\end{align} where $P$ is the projection map given by \eqref{projectionmap} in Section \ref{affinePhi} with the fixed base point $\Phi(p)\in D$ and $p\in \mathcal{T}$ and the fixed adapted basis $(\eta_0, \cdots, \eta_{m-1})$ for the Hodge decomposition of $\Phi(p)$. Moreover, we also have $\Psi=P\circ\Phi=P\circ \Phi^H_{m}\circ i_m=\Psi^H_{_m}\circ i_m$. Let us denote the restriction map $\Psi_m=\Psi^H_{_m}|_{\mathcal{T}_m}:\,\mathcal{T}_m\rightarrow \mathbb{C}^N$ in the following context. Then $\Psi^H_{_m}$ is the continuous extension of $\Psi_m$ and $\Psi=\Psi_m\circ i_m$. By the definition of $\Psi_{_m}^H$, we can easily conclude the following lemma, which is analogous to Lemma \ref{Psiinjective}. \begin{lemma}\label{PsiHm injective} If the holomorphic map $\Psi^H_{_m}$ is injective, then $\Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is also injective. \end{lemma} \subsection{Holomorphic affine structure on the Hodge metric completion }\label{affineness} In this section, the map $\Psi^H_{_m}$ is defined as in \eqref{PsiHm}, where we fix the base point $\Phi(p)\in D$ with $p\in \mathcal{T}$, and an adapted basis for the Hodge decomposition of $\Phi(p)$. In the following lemma, we crucially use the fact that the holomorphic map $\Psi: \,\mathcal{T}\rightarrow \mathbb{C}^N\cong H_p^{n-1,1}$ is a local embedding, which is based on the holomorphic affine structure on $\mathcal{T}$, together with the relation $\Psi=\Psi_m\circ i_m$. One can find the proof in Lemma 4.6 of \cite{CGL}.
\begin{lemma}\label{affine on Tm}For any $m\geq 3$, there exists a holomorphic affine structure on $\mathcal{T}_m$. In particular, the holomorphic map $\Psi_m:\,\mathcal{T}_m\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p$ is an affine map with respect to this holomorphic affine structure on $\mathcal{T}_m$. \end{lemma} \begin{corollary}\label{Philocalembedding}The holomorphic map $\Psi_m$ is a local embedding. In particular, $\Phi_m$ is also a local embedding. \end{corollary} More importantly, Lemma \ref{affine on Tm} implies the following lemma, the proof of which can be found in Lemma 4.7 of \cite{CGL}. \begin{lemma}\label{injofPsi} The holomorphic map $\Psi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p$ is a local embedding. \end{lemma} Since $\Psi_{_m}^H:\,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ is a local embedding and $\dim\mathcal{T}^H_{_m}=N$, it is not hard to see that there exists an induced holomorphic affine structure on $\mathcal{T}^H_{_m}$ from the affine structure on $\mathbb{C}^N$ via the local embedding $\Psi^H_{_m}$. In particular, with respect to this affine structure on $\mathcal{T}_{_m}^H$, the holomorphic map $\Psi^H_{_m}$ is also an affine map. Thus we conclude the following. \begin{theorem}\label{THmaffine}There exists a holomorphic affine structure on $\mathcal{T}^H_{_m}$. Moreover, the holomorphic map $\Psi^H_{_m}:\,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ is a holomorphic affine map with respect to this holomorphic affine structure on $\mathcal{T}^H_{_m}$. \end{theorem} It is important to note that the flat connections which correspond to the global holomorphic affine structures on $\mathcal{T}$, on $\mathcal{T}_m$ or on $\mathcal{T}_{_m}^H$ are in general not compatible with the corresponding Hodge metrics on them.
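Since holomorphic affine structures are used repeatedly here, let us recall the standard definition (a textbook fact, not specific to \cite{CGL}): a holomorphic affine structure on a complex $N$-dimensional manifold is an atlas of holomorphic charts $\{(U_i,z_i)\}$ whose transition maps are restrictions of complex affine transformations,

```latex
z_i\circ z_j^{-1}(w)=A_{ij}\,w+b_{ij},
\qquad A_{ij}\in\mathrm{GL}(N,\mathbb{C}),\quad b_{ij}\in\mathbb{C}^N.
```

In the situation above, the charts may be taken to be the restrictions of the local embedding $\Psi^H_{_m}$ to open sets on which it is injective; the transition maps are then identity maps, which are trivially affine.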
\subsection{Injectivity of the period map on the Hodge metric completion space}\label{injective} \begin{theorem}\label{injectivityofPhiH}For any $m\geq 3$, the holomorphic map $\Psi^H_{_m}:\, \mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ is an injection. \end{theorem} \noindent To prove this theorem, we state the following lemma, where we mainly use the completeness and the holomorphic affine structure on $\mathcal{T}^H_{_m}$ as well as the affineness of $\Psi^H_{_m}$. The detailed proof can be found in \cite{CGL}. \begin{lemma}\label{straightline} For any two points in $\mathcal{T}^H_{_m},$ there is a straight line in $\mathcal{T}^H_{_m}$ connecting them. \end{lemma} We remark that as $\mathcal{T}^H_{_m}$ is a complex affine manifold, we have the notion of straight lines in it with respect to the affine structure. \begin{proof}[Proof of Theorem \ref{injectivityofPhiH}] Let $p, q\in \mathcal{T}^H_{_m}$ be two different points. Then Lemma \ref{straightline} implies that there is a straight line $l\subseteq \mathcal{T}^H_{_m}$ connecting $p$ and $q$. Since $\Psi^H_{_m}:\, {\mathcal T}^H_{_m}\rightarrow \mathbb{C}^N$ is affine, the restriction $\Psi^H_{_m}|_l$ is a linear map. Suppose towards a contradiction that $\Psi^H_{_m}(p)=\Psi^H_{_m}(q)\in \mathbb{C}^N$. Then the restriction of $\Psi^H_{_m}$ to the straight line $l$ is a constant map as $\Psi^H_{_m}|_l$ is linear. By Lemma \ref{injofPsi}, we know that $\Psi^H_{_m}: \, \mathcal{T}^H_{_m}\to \mathbb{C}^N$ is locally injective. Therefore, we may take $U_p$ to be a neighborhood of $p$ in $\mathcal{T}^H_{_m}$ such that $\Psi^H_{_m}: \, U_p\rightarrow\mathbb{C}^N$ is injective. However, the intersection $U_p\cap l$ contains infinitely many points, and the restriction of $\Psi^H_{_m}$ to $U_p\cap l$ is a constant map; this contradicts the injectivity of $\Psi^H_{_m}$ on $U_p$. Thus $\Psi^H_{_m}(p)\neq \Psi^H_{_m}(q)$ if $p\neq q\in \mathcal{T}^H_{_m}$.
\end{proof}Since the holomorphic affine map $\Psi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ is injective, Lemma \ref{PsiHm injective} implies the following corollary. \begin{corollary}\label{embedTHmintoCN}The completion space $\mathcal{T}^H_{_m}$ can be embedded into $\mathbb{C}^N$. \end{corollary} \begin{corollary}\label{injective PhiHm}The holomorphic map $\Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is also an injection. \end{corollary} \begin{proof}We recall that $\Psi^H_{_m}=P\circ\Phi_{_m}^H$, where $P$ is the projection map defined as in \eqref{projectionmap} in Section \ref{affinePhi}. Since $\Psi^H_{_m}$ is injective, $\Phi^H_{_m}$ must also be injective. \end{proof} \section{Proof of the global Torelli theorem}\label{MainProof} In this section, we finish the proof of the global Torelli theorem. We first define the completion space $\mathcal{T}^H$ by $\mathcal{T}^H=\mathcal{T}^H_{_m}$, and the extended period map $\Phi^H$ by $\Phi^H=\Phi^H_{_m}$ for any $m\geq 3$, after proving that $\mathcal{T}^H_{_m}$ does not depend on the choice of the level structure. Then $\mathcal{T}^H$ is a complex affine manifold and $\Phi^H$ is a holomorphic injection. We then prove Lemma \ref{injectivity of i}, which implies that $\mathcal{T}^H$ is the completion space of $\mathcal{T}$ with respect to the Hodge metric. As a consequence, we get the global Torelli theorem of the period map on the Teichm\"uller space to the period domain, which is the main result of this paper. For any two integers $m, m'\geq 3$, let $\mathcal{Z}_m$ and $\mathcal{Z}_{m'}$ be the smooth quasi-projective manifolds as in Theorem \ref{Szendroi theorem 2.2} and $\mathcal{Z}^H_{_m}$ and $\mathcal{Z}^H_{m'}$ their Hodge metric completions. Let $\mathcal{T}^H_{_m}$ and $\mathcal{T}^H_{m'}$ be the universal covers of $\mathcal{Z}^H_{_m}$ and $\mathcal{Z}_{{m'}}^H$ respectively. Then we have the following.
\begin{proposition}\label{indepofm} The complete complex manifolds $\mathcal{T}^H_m$ and $\mathcal{T}^H_{m'}$ are biholomorphic to each other. \end{proposition} Proposition \ref{indepofm} shows that $\mathcal{T}^H_{_m}$ does not depend on the choice of the level $m$ structure, and it allows us to give the following definitions. \begin{definition}We define the complete complex manifold $\mathcal{T}^H=\mathcal{T}^H_{_m}$, the holomorphic map $i_{\mathcal{T}}: \,\mathcal{T}\to \mathcal{T}^H$ by $i_{\mathcal{T}}=i_m$, and the extended period map $\Phi^H:\, \mathcal{T}^H\rightarrow D$ by $\Phi^H=\Phi^H_{_m}$ for any $m\geq 3$. In particular, with these new notations, we have the commutative diagram with $\Phi=\Phi^H\circ i_{\mathcal{T}}$,\begin{align*} \xymatrix{\mathcal{T}\ar[r]^{i_{\mathcal{T}}}\ar[d]^{\pi_m}&\mathcal{T}^H\ar[d]^{\pi^H_m}\ar[r]^{{\Phi}^{H}}&D\ar[d]^{\pi_D}\\ \mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma. } \end{align*} \end{definition} \begin{theorem}\label{main theorem} The complex manifold $\mathcal{T}^H$ is a complex affine manifold and can be embedded into $\mathbb{C}^N$. Moreover, the extended period map $\Phi^H: \,\mathcal{T}^H\rightarrow N_+\cap D$ is a holomorphic injection. \end{theorem} \begin{proof}By the definition of $\mathcal{T}^H$, Theorem \ref{THmaffine}, and Corollary \ref{embedTHmintoCN}, it is easy to see that $\mathcal{T}^H$ is a complex affine manifold, which can be embedded into $\mathbb{C}^N$. It is also not hard to see that the injectivity of $\Phi^H$ follows from Corollary \ref{injective PhiHm} by the definition of $\Phi^H$. \end{proof} Now to prove the global Torelli theorem, it is sufficient to show that $i_{\mathcal{T}}: \,\mathcal{T}\rightarrow \mathcal{T}^H$ is an embedding, which is given in the following lemma. The detailed proof is provided in Lemma 5.4 in \cite{CGL}.
\begin{lemma}\label{injectivity of i} The map $i_{\mathcal{T}}:\,\mathcal{T}\to \mathcal{T}^H$ is an embedding. \end{lemma} Since $\Phi=\Phi^H\circ i_{\mathcal{T}}$ with both $\Phi^H$ and $i_{\mathcal{T}}$ embeddings, we obtain the global Torelli theorem for the period map from the Teichm\"uller space to the period domain in the following theorem. \begin{theorem}[Global Torelli theorem]\label{Global Torelli theorem} The period map $\Phi:\, \mathcal{T}\rightarrow D$ is injective. \end{theorem}
\section{Introduction} We denote by $\mathcal{H}$ the class of holomorphic functions on the unit disk $\mathbb{D}=\{z: |z|<1\}$ of the complex plane $\mathbb{C}.$ For $a\in\mathbb{C}$ and $n\in\mathbb{N},$ let $\mathcal{H}[a,n]$ denote the subclass of $\mathcal{H}$ consisting of functions $h$ of the form $h(z)=a+c_nz^n+c_{n+1}z^{n+1}+\cdots.$ Here, $\mathbb{N}=\{1,2,3,\dots\}.$ Let also $\mathcal{A}_n$ be the set of functions $f$ of the form $f(z)=zh(z)$ for $h\in\mathcal{H}[1,n].$ A function $f\in\mathcal{A}_1$ is called {\it starlike} (resp.~{\it convex}) if $f$ is univalent on $\mathbb{D}$ and if the image $f(\mathbb{D})$ is starlike with respect to the origin (resp.~convex). It is well known (cf.~\cite{Duren:univ}) that $f\in\mathcal{A}_1$ is starlike precisely if $q_f(z)=zf'(z)/f(z)$ has positive real part on $|z|<1,$ and that $f\in\mathcal{A}_1$ is convex precisely if $\varphi_f(z)=1+zf''(z)/f'(z)$ has positive real part on $|z|<1.$ Note that the following relation holds for those quantities: $$ \varphi_f(z)=q_f(z)+\frac{zq_f'(z)}{q_f(z)}. $$ It is geometrically obvious that a convex function is starlike. This, in turn, means the implication $$ \Re\left[q(z)+\frac{zq'(z)}{q(z)}\right]>0~\text{on}~|z|<1 \quad\Rightarrow\quad \Re q(z)>0~\text{on}~|z|<1 $$ for a function $q\in\mathcal{H}[1,1].$ Interestingly, this implication looks highly nontrivial. Miller and Mocanu developed a theory (now called {\it differential subordination}) which enables us to deduce such a result systematically. See their monograph~\cite{MM:ds} for details. The set of functions $q\in\mathcal{H}[1,1]$ with $\Re q>0$ is called the Carath\'eodory class and will be denoted by $\mathcal P.$ It is well recognized that the function $q_0(z)=(1+z)/(1-z)$ (or its rotation) maps the unit disk univalently onto the right half-plane and is extremal in many problems.
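For completeness, the displayed relation between $\varphi_f$ and $q_f$ follows from logarithmic differentiation of $q_f(z)=zf'(z)/f(z)$:

```latex
\frac{q_f'(z)}{q_f(z)}=\frac1z+\frac{f''(z)}{f'(z)}-\frac{f'(z)}{f(z)},
\qquad\text{hence}\qquad
\frac{zq_f'(z)}{q_f(z)}
 =1+\frac{zf''(z)}{f'(z)}-\frac{zf'(z)}{f(z)}
 =\varphi_f(z)-q_f(z).
```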
One can observe that the function $$ \varphi_0(z)=q_0(z)+\frac{zq_0'(z)}{q_0(z)} =\frac{1+z}{1-z}+\frac{2z}{1-z^2} =\frac{1+4z+z^2}{1-z^2} $$ maps the unit disk onto the slit domain $V(-\sqrt{3},\sqrt{3}),$ where $$ V(A,B)=\mathbb{C}\setminus \{iy: y\le A~\text{or}~y\ge B\} $$ for $A,B\in\mathbb R$ with $A<B.$ Note that $V(A,B)$ contains the right half-plane and has the ``window" $(Ai, Bi)$ in the imaginary axis to the left half-plane. The Open Door Lemma of Miller and Mocanu asserts for a function $q\in\mathcal{H}[1,1]$ that, if $q(z)+zq'(z)/q(z)\in V(-\sqrt3,\sqrt3)$ for $z\in\mathbb{D},$ then $q\in\mathcal P.$ Indeed, Miller and Mocanu~\cite{MM97BB} (see also~\cite{MM:ds}) proved it in a more general form. For a complex number $c$ with $\Re c>0$ and $n\in\mathbb{N},$ we consider the positive number $$ C_{n}(c)=\frac{n}{\Re c}\left[|c|\sqrt{\frac{2\Re c}{n}+1}+\Im{c}\right]. $$ In particular, $C_n(c)=\sqrt{n(n+2c)}$ when $c$ is real. The following is a version of the Open Door Lemma modified by Kuroki and Owa~\cite{KO14}. \begin{thmA}[Open Door Lemma]\label{Thm:ODL} Let $c$ be a complex number with positive real part and $n$ be an integer with $n\ge1.$ Suppose that a function $q\in\mathcal{H}[c,n]$ satisfies the condition $$ q(z)+\frac{zq'(z)}{q(z)}\in V(-C_n(c),C_n(\bar c)),\quad z\in\mathbb{D}.
$$ Then $\Re q>0$ on $\mathbb{D}.$ \end{thmA} \begin{remark} In the original statement of the Open Door Lemma in~\cite{MM97BB}, the slit domain was erroneously described as $V(-C_n(c),C_n(c)).$ Since $C_n(\bar c)<C_n(c)$ when $\Im c>0,$ we see that $V(-C_n(\bar c),C_n(\bar c))\subset V(-C_n(c),C_n(\bar c)) \subset V(-C_n(c),C_n(c))$ for $\Im c\ge0$ and the inclusions are strict if $\Im c>0.$ As the proof will suggest, the domain $V(-C_n(c),C_n(\bar c))$ seems to be maximal for the assertion, which means that the original statement in~\cite{MM97BB} and the form of the associated open door function are incorrect for a non-real $c.$ This, however, does not much diminish the value of the original article~\cite{MM97BB} by Miller and Mocanu, because the Open Door Lemma is mostly applied when $c$ is real. We also note that the Open Door Lemma deals with the function $p=1/q\in\mathcal{H}[1/c,n]$ instead of $q.$ The present form is adopted for convenience of our aim. \end{remark} The Open Door Lemma gives a sufficient condition for $q\in\mathcal{H}[c,n]$ to have positive real part. We extend it so that $|\arg q|<\pi\alpha/2$ for a given $0<\alpha\le 1.$ First we note that the M\"obius transformation $$ g_c(z)=\frac{c+\bar cz}{1-z} $$ maps $\mathbb{D}$ onto the right half-plane in such a way that $g_c(0)=c,$ where $c$ is a complex number with $\Re c>0.$ In particular, one can take an analytic branch of $\log g_c$ so that $|\Im\log g_c|<\pi/2.$ Therefore, the function $q_0=g_c^\alpha=\exp(\alpha\log g_c)$ maps $\mathbb{D}$ univalently onto the sector $|\arg w|<\pi\alpha/2$ in such a way that $q_0(0)=c^\alpha.$ The present note is based mainly on the following result, which will be deduced from a more general result of Miller and Mocanu (see Section 2).
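As a quick sanity check (our computation, not part of~\cite{KO14}), for real $c>0$ we have $\Im c=0$ and $|c|=c=\Re c,$ so the general formula for $C_n(c)$ collapses to the simple form quoted above:

```latex
C_n(c)=\frac{n}{c}\,c\sqrt{\frac{2c}{n}+1}
      =n\sqrt{\frac{n+2c}{n}}
      =\sqrt{n(n+2c)}.
```

In particular, $C_1(1)=\sqrt{3},$ recovering the window $V(-\sqrt3,\sqrt3)$ of the convexity example in the introduction.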
\begin{theorem}\label{thm:main} Let $c$ be a complex number with $\Re c>0$ and $\alpha$ be a real number with $0<\alpha\le 1.$ Then the function $$ R_{\alpha,c,n}(z)=g_c(z)^\alpha+\frac{n\alpha zg_c'(z)}{g_c(z)} =\left(\frac{c+\bar{c}z}{1-z}\right)^{\alpha} +\frac{2n\alpha(\Re c)z}{(1-z)(c+\bar{c}z)} $$ is univalent on $|z|<1.$ If a function $q\in\mathcal{H}[c^\alpha,n]$ satisfies the condition $$ q(z)+\frac{zq'(z)}{q(z)}\in R_{\alpha,c,n}(\mathbb{D}),\quad z\in\mathbb{D}, $$ then $|\arg q|<\pi\alpha/2$ on $\mathbb{D}.$ \end{theorem} We remark that the special case when $\alpha=1$ reduces to Theorem \ref{Thm:ODL} (see the paragraph right after Lemma \ref{lem:2} below). Also, the case when $c=1$ is already proved by Mocanu~\cite{Mocanu86A} even under the weaker assumption that $0<\alpha\le 2$ (see Remark \ref{rem:Mocanu}). Since the shape of $R_{\alpha,c,n}(\mathbb{D})$ is not very clear, we will deduce more concrete results as corollaries of Theorem \ref{thm:main} in Section 3. This is our principal aim in the present note. \section{Preliminaries} We first recall the notion of subordination. A function $f\in\mathcal{H}$ is said to be {\it subordinate} to $F\in\mathcal{H}$ if there exists a function $\omega\in\mathcal{H}[0,1]$ such that $|\omega|<1$ on $\mathbb{D}$ and that $f=F\circ\omega.$ We write $f\prec F$ or $f(z)\prec F(z)$ for subordination. When $F$ is univalent, $f\prec F$ precisely if $f(0)=F(0)$ and if $f(\mathbb{D})\subset F(\mathbb{D}).$ Miller and Mocanu~\cite[Theorem 5]{MM97BB} (see also~\cite[Theorem 3.2h]{MM:ds}) proved the following general result, from which we will deduce Theorem \ref{thm:main} in the next section. \begin{lemma}[Miller and Mocanu]\label{lem:MM} Let $\mu,\nu\in \mathbb{C}$ with $\mu\neq 0$ and $n$ be a positive integer.
Let $q_0\in\mathcal{H}[c,1]$ be univalent and assume that $\mu q_0(z)+\nu\neq 0$ for $z\in\mathbb{D}$ and $\Re (\mu c+\nu)>0.$ Set $Q(z)=zq_0'(z)/(\mu q_0(z)+\nu),$ and \begin{equation}\label{eq:h} h(z)=q_0(z)+nQ(z)=q_0(z)+\frac{nzq_0'(z)}{\mu q_0(z)+\nu}. \end{equation} Suppose further that \begin{enumerate} \item[(a)] $\Re[zh'(z)/Q(z)]=\Re[h'(z)(\mu q_0(z)+\nu)/q_0'(z)]>0,$ and \item[(b)] either $h$ is convex or $Q$ is starlike. \end{enumerate} If $q\in\mathcal{H}[c,n]$ satisfies the subordination relation \begin{equation} q(z)+\frac{zq'(z)}{\mu q(z)+\nu}\prec h(z), \end{equation} then $q\prec q_0$, and $q_0$ is the best dominant. An extremal function is given by $q(z)=q_0(z^n).$ \end{lemma} In the investigation of the generalized open door function $R_{\alpha,c,n},$ we will need to study the positive solution to the equation \begin{equation}\label{eq:eq} x^2+Ax^{1+\alpha}-1=0, \end{equation} where $A>0$ and $0<\alpha\le1$ are constants. Let $F(x)=x^2+Ax^{1+\alpha}-1.$ Then $F(x)$ is increasing in $x>0$ and $F(0)=-1<0,~F(+\infty)=+\infty.$ Therefore, there is a unique positive solution $x=\xi(A,\alpha)$ to the equation. We have the following estimates for the solution. \begin{lemma}\label{lem:sol} Let $0<\alpha\le1$ and $A>0.$ The positive solution $x=\xi(A,\alpha)$ to equation \eqref{eq:eq} satisfies the inequalities $$ (1+A)^{-1/(1+\alpha)}\le\xi(A,\alpha)\le (1+A)^{-1/2}~(<1). $$ Here, both inequalities are strict when $0<\alpha<1.$ \end{lemma} \begin{proof} Set $\xi=\xi(A,\alpha).$ Since the above $F(x)$ is increasing in $x>0,$ the inequalities $F(x_1)\le 0=F(\xi)\le F(x_2)$ imply $x_1\le \xi\le x_2$ for positive numbers $x_1, x_2$ and the inequalities are strict when $x_1<\xi<x_2.$ Keeping this in mind, we now show the assertion. First we put $x_2=(1+A)^{-1/2}$ and observe $$ F(x_2)=\frac1{1+A}+\frac{A}{(1+A)^{(1+\alpha)/2}}-1 \ge\frac1{1+A}+\frac{A}{1+A}-1=0, $$ which implies the right-hand inequality in the assertion.
Next put $x_1=(1+A)^{-1/(1+\alpha)}.$ Then $$ F(x_1)=\frac1{(1+A)^{2/(1+\alpha)}}+\frac{A}{1+A}-1 \le\frac1{1+A}+\frac{A}{1+A}-1=0, $$ which implies the left-hand inequality. We note also that $F(x_1)<0<F(x_2)$ when $\alpha<1.$ The proof is now complete. \end{proof} \section{Proof and corollaries} Theorem \ref{thm:main} can be rephrased as follows. \begin{theorem} Let $c$ be a complex number with $\Re c>0$ and $\alpha$ be a real number with $0<\alpha\le 1.$ Then the function $$ R_{\alpha,c,n}(z)=g_c(z)^\alpha+\frac{n\alpha zg_c'(z)}{g_c(z)} $$ is univalent on $|z|<1.$ If a function $q\in\mathcal{H}[c^\alpha,n]$ satisfies the subordination condition $$ q(z)+\frac{zq'(z)}{q(z)}\prec R_{\alpha,c,n}(z) $$ on $\mathbb{D},$ then $q(z)\prec g_c(z)^\alpha$ on $\mathbb{D}.$ The function $g_c^\alpha$ is the best dominant. \end{theorem} \begin{proof} We first show that the function $Q(z)=\alpha zg_c'(z)/g_c(z)$ is starlike. Indeed, we compute \begin{equation*} \frac{zQ'(z)}{Q(z)}=1-\frac{\bar{c}z}{c+\bar{c}z}+\frac{z}{1-z} =\frac12\left[\frac{c-\bar cz}{c+\bar cz}+\frac{1+z}{1-z}\right]. \end{equation*} Thus we can see that $\Re[zQ'(z)/Q(z)]>0$ on $|z|<1.$ Next we check condition (a) in Lemma \ref{lem:MM} for the functions $q_0=g_c^\alpha, h=R_{\alpha,c,n}$ with the choice $\mu=1,\nu=0.$ We have the expression $$ \frac{zh'(z)}{Q(z)}=g_c(z)^\alpha+n\frac{zQ'(z)}{Q(z)}. $$ Since both terms in the right-hand side have positive real part, we obtain (a). We now apply Lemma \ref{lem:MM} to obtain the required assertion up to univalence of $h=R_{\alpha,c,n}.$ In order to show the univalence, we have only to note that condition (a) implies that $h$ is close-to-convex, since $Q$ is starlike. Since a close-to-convex function is univalent (see \cite{Duren:univ}), the proof is complete.
\end{proof} We now investigate the shape of the image domain $R_{\alpha,c,n}(\mathbb{D})$ of the generalized open door function $R_{\alpha,c,n}$ given in Theorem \ref{thm:main}. Let $z=e^{i\theta}$ and $c=r e^{it}$ for $\theta\in\mathbb R, r>0$ and $-\pi/2<t<\pi/2.$ Then we have \begin{equation*} \begin{aligned} R_{\alpha,c,n}(e^{i\theta}) &=\left(\frac{re^{it}+re^{-it}e^{i\theta}}{1-e^{i\theta}}\right)^{\alpha} +\frac{2n\alpha e^{i\theta}\cos t}{(1-e^{i\theta})(e^{it}+e^{-it}e^{i\theta})}\\ &=\left(\frac{r\cos{(t-\theta/2)}}{\sin{(\theta/2)}}i\right)^{\alpha} +\frac i2\cdot \frac{n\alpha\cos{t}}{\sin{(\theta/2)\cos{(t-\theta/2)}}}\\ &=r^\alpha e^{\pi\alpha i/2} \left(\cos{t}\cot{(\theta/2)}+\sin{t}\right)^{\alpha} +\frac i2\cdot \frac{n\alpha(1+\cot^{2}{(\theta/2)})\cos{t}}{\cos{t}\cot{(\theta/2)}+\sin{t}}. \end{aligned} \end{equation*} Let $x=\cot{(\theta/2)}\cos{t}+\sin{t}.$ When $x>0,$ we write $R_{\alpha,c,n}(e^{i\theta})=u_+(x)+iv_+(x)$ and get the expressions \begin{equation* \left\{ \begin{aligned} u_+(x)&=a(rx)^{\alpha},\\ v_+(x)&=b(rx)^{\alpha}+\frac{n\alpha}{2\cos{t}}\left(x-2\sin t+\frac1x\right), \end{aligned} \right. \end{equation*} where $$ a=\cos\frac{\alpha\pi}{2} {\quad\text{and}\quad} b=\sin\frac{\alpha\pi}{2}. $$ Taking the derivative, we get \begin{equation*} v_+'(x)=\frac{n\alpha}{2x^2\cos{t}} \left[x^2+\frac{2br^{\alpha}\cos{t}}{n}x^{\alpha+1}-1\right]. \end{equation*} Hence, the minimum of $v_+(x)$ is attained at $x=\xi(A,\alpha),$ where $A=2br^\alpha n^{-1}\cos t.$ By using the relation \eqref{eq:eq}, we obtain \begin{align*} \min_{0<x}v_+(x)&=v_+(\xi) =\frac{n}{2\cos t}\left(A\xi^\alpha+\alpha\xi+\frac{\alpha}{\xi}\right) -n\alpha\tan t \\ &=\frac{n}{2\cos t}\left((\alpha-1)\xi+\frac{\alpha+1}{\xi}\right)-n\alpha\tan t =U(\xi), \end{align*} where $$ U(x)=\frac{n}{2\cos t}\left((\alpha-1)x+\frac{\alpha+1}{x}\right)-n\alpha\tan t.
$$ Since the function $U(x)$ is decreasing in $0<x<1,$ Lemma \ref{lem:sol} yields the inequality \begin{align*} v_+(\xi)&=U(\xi)\ge U((1+A)^{-1/2}) \\ &=\frac{n}{2\cos t}\left(\frac{\alpha-1}{\sqrt{1+A}}+(\alpha+1)\sqrt{1+A}\right) -n\alpha\tan t. \end{align*} We remark here that $$ U((1+A)^{-1/2})>U(1)=\frac{n\alpha(1-\sin t)}{\cos t}>0; $$ namely, $v_+(x)>0$ for $x>0.$ When $x<0,$ letting $y=-x=-\cot{(\theta/2)}\cos{t}-\sin{t},$ we write $R_{\alpha,c,n}(e^{i\theta})=u_-(y)+iv_-(y).$ Then, with the same $a$ and $b$ as above, \begin{equation* \left\{ \begin{aligned} u_-(y)&=a(ry)^{\alpha},\\ v_-(y)&=-b(ry)^{\alpha}-\frac{n\alpha}{2\cos{t}}\left(y+2\sin t+\frac1y\right). \end{aligned} \right. \end{equation*} We observe here that $u_+=u_->0$ and, in particular, we obtain the following. \begin{lemma}\label{lem:1} The left half-plane $\Omega_1=\{w: \Re w<0\}$ is contained in $R_{\alpha,c,n}(\mathbb{D}).$ \end{lemma} We now look at $v_-(y).$ Since $$ v_-'(y)=-\frac{n\alpha}{2y^2\cos{t}} \left[y^2+\frac{2br^{\alpha}\cos{t}}{n}y^{\alpha+1}-1\right], $$ in the same way as above, we obtain \begin{align*} \max_{0<y}v_-(y)&=v_-(\xi) =-\frac{n}{2\cos t}\left((\alpha-1)\xi+\frac{\alpha+1}{\xi}\right) -n\alpha\tan t \\ &\le-\frac{n}{2\cos t}\left(\frac{\alpha-1}{\sqrt{1+A}}+(\alpha+1)\sqrt{1+A}\right) -n\alpha\tan t, \end{align*} where $\xi=\xi(A,\alpha)$ and $A=2br^\alpha n^{-1}\cos t.$ Note also that $v_-(y)<0$ for $y>0.$ Since the horizontal parallel strip $v_-(\xi)<\Im w< v_+(\xi)$ is contained in the image domain $R_{\alpha,c,n}(\mathbb{D})$ of the generalized open door function, we obtain the following.
\begin{lemma}\label{lem:2} The parallel strip $\Omega_2$ described by $$ \left|\Im w+n\alpha\tan t\right| <\frac{n}{2\cos t}\left(\frac{\alpha-1}{\sqrt{1+A}}+(\alpha+1)\sqrt{1+A}\right) $$ is contained in $R_{\alpha,c,n}(\mathbb{D}).$ Here, $t=\arg c\in(-\frac\pi2,\frac\pi2)$ and $A=\frac2n|c|^{\alpha}\sin\frac{\pi\alpha}2\cos t.$ \end{lemma} When $\alpha=1,$ we have $u_\pm=0,$ that is, the boundary is contained in the imaginary axis. Since $\xi(A,1)=(1+A)^{-1/2}$ by Lemma \ref{lem:sol}, the above computations tell us $\min v_+=(n/\cos t)(\sqrt{1+A}-\sin t)=C_n(\bar c).$ Similarly, we have $\max v_-=-(n/\cos t)(\sqrt{1+A}+\sin t)=-C_n(c).$ Therefore, we have $R_{1,c,n}(\mathbb{D})=V(-C_n(c), C_n(\bar c)).$ Note that the open door function then takes the following form \begin{align*} R_{1,c,n}(z)&=\frac{c+\bar{c}z}{1-z} +\frac{2n(\Re c)z}{(1-z)(c+\bar{c}z)} \\ &=\frac{2\Re c+n}{1+cz/\bar c}-\frac n{1-z}-\bar c, \end{align*} which is the same as the one given by Kuroki and Owa \cite[(2.2)]{KO14}. In this way, we see that Theorem \ref{thm:main} contains Theorem \ref{Thm:ODL} as a special case. \begin{remark} In~\cite{KO14}, Kuroki and Owa proposed another open door function of the form \begin{equation*} R(z)=\frac{2n|c|}{\Re c}\sqrt{\frac{2\Re c}{n}+1} \frac{(\zeta-z)(1-\bar\zeta z)}{(1-\bar\zeta z)^2-(\zeta-z)^2} -\frac{\Im c}{\Re c}i, \end{equation*} where \begin{equation*} \zeta=1-\frac{2}{\omega},\quad \omega=\frac{c}{|c|}\sqrt{\frac{2\Re c}{n}+1}+1. \end{equation*} It can be checked that $R(z)=R_{1,c,n}(-\omega z/\bar\omega).$ Hence, $R$ is just a rotation of $R_{1,c,n}.$ \end{remark} We next study the argument of the boundary curve of $R_{\alpha,c,n}(\mathbb{D}).$ We will assume that $0<\alpha<1$ since there is nothing to prove when $\alpha=1.$ As we noted above, the boundary is then contained in the right half-plane $\Re w>0.$ When $x>0$, we have \begin{equation*} \frac{v_+(x)}{u_+(x)} =\frac{b}{a}+\frac{n\alpha}{2ar^\alpha x^\alpha\cos t} \left[x+\frac{1}{x}-2\sin{t}\right].
\end{equation*} We observe now that $v_+(x)/u_+(x)\to+\infty$ as $x\to0+$ or $x\to+\infty.$ We also have \begin{equation*} \left(\frac{v_+}{u_+}\right)'(x) =\frac{n\alpha}{2ar^\alpha x^{\alpha+2}\cos{t}} \left[(1-\alpha)x^{2}+2\alpha x\sin{t}-(1+\alpha)\right]. \end{equation*} Therefore, $v_+(x)/u_+(x)$ takes its minimum at $x=\xi,$ where $$ \xi=\frac{-\alpha\sin t+\sqrt{1-\alpha^2\cos^2t}}{1-\alpha} $$ is the positive root of the equation $(1-\alpha)x^{2}+2\alpha x\sin{t}-(1+\alpha)=0.$ It is easy to see that $1<\xi$ and that \begin{align*} T_+&:=\min_{0<x}\frac{v_+(x)}{u_+(x)} =\frac{v_+(\xi)}{u_+(\xi)} =\frac{b}{a}+\frac{n\alpha}{2ar^\alpha \xi^\alpha\cos t} \left[\xi+\frac{1}{\xi}-2\sin{t}\right] \\ &=\tan\frac{\pi\alpha}2+\frac{n(\xi-\xi^{-1})}{2ar^\alpha \xi^\alpha\cos t}. \end{align*} When $x=-y<0,$ we have \begin{equation*} \frac{v_-(y)}{u_-(y)} =-\frac{b}{a}-\frac{n\alpha}{2ar^\alpha y^\alpha\cos t} \left[y+\frac{1}{y}+2\sin{t}\right] \end{equation*} and \begin{equation*} \left(\frac{v_-}{u_-}\right)'(y) =\frac{-n\alpha}{2ar^\alpha y^{\alpha+2}\cos{t}} \left[(1-\alpha)y^{2}-2\alpha y\sin{t}-(1+\alpha)\right]. \end{equation*} Hence, $v_-(y)/u_-(y)$ takes its maximum at $y=\eta,$ where $$ \eta=\frac{\alpha\sin t+\sqrt{1-\alpha^2\cos^2t}}{1-\alpha}. $$ Note that $$ T_-:=\max_{0<y}\frac{v_-(y)}{u_-(y)} =\frac{v_-(\eta)}{u_-(\eta)} =-\tan\frac{\pi\alpha}2-\frac{n(\eta-\eta^{-1})}{2ar^\alpha \eta^\alpha\cos t}. $$ Therefore, the set $\{w:\, \Re w>0,\ T_-<\Im w/\Re w<T_+\}$ is contained in the image $R_{\alpha,c,n}(\mathbb{D}).$ It is easy to check that $T_-<-\tan(\pi\alpha/2)<\tan(\pi\alpha/2)<T_+.$ In particular, $T_-<\tan(\arg c^\alpha)=\tan(\alpha t)<T_+.$ We summarize the above observations, together with Theorem \ref{thm:main}, in the following form. \begin{corollary}\label{cor:sector} Let $0<\alpha<1,$ let $c=re^{it}$ with $r>0$ and $-\pi/2<t<\pi/2,$ and let $n$ be a positive integer.
If a function $q\in\mathcal{H}[c^\alpha,n]$ satisfies the condition $$ -\Theta_-<\arg\left(q(z)+\frac{zq'(z)}{q(z)}\right)<\Theta_+ $$ on $|z|<1,$ then $|\arg q|<\pi\alpha/2$ on $\mathbb{D}.$ Here, $$ \Theta_\pm=\arctan\left[ \tan\frac{\pi\alpha}2+ \frac{n(\xi_\pm-\xi_\pm^{-1})}{2r^\alpha \xi_\pm^\alpha\cos(\pi\alpha/2)\cos t} \right], $$ and $$ \xi_\pm=\frac{\mp\alpha\sin t+\sqrt{1-\alpha^2\cos^2t}}{1-\alpha}. $$ \end{corollary} It is a simple task to check that $x^{1-\alpha}-x^{-1-\alpha}$ is increasing in $0<x.$ When $\Im c>0,$ we see that $\xi_->\xi_+$ and thus $\Theta_->\Theta_+.$ It might be useful to note the estimates $\xi_+<\sqrt{(1+\alpha)/(1-\alpha)}<\xi_-$ and $\xi_+<1/\sin t$ for $\Im c>0.$ \begin{remark}\label{rem:Mocanu} When $c=1$ and $n=1,$ we have $\xi:=\xi_\pm=\sqrt{(1+\alpha)/(1-\alpha)},~ \xi-\xi^{-1}=2\alpha/\sqrt{1-\alpha^2},$ and thus \begin{align*} \Theta_\pm&=\arctan\left[ \tan\frac{\pi\alpha}2+ \frac{\xi-\xi^{-1}}{2\xi^\alpha\cos\tfrac{\pi\alpha}2} \right] \\ &=\arctan\left[ \tan\frac{\pi\alpha}2+ \frac{\alpha}{\cos\tfrac{\pi\alpha}2(1-\alpha)^{\frac{1-\alpha}2}(1+\alpha)^{\frac{1+\alpha}2}} \right] \\ &=\frac{\pi\alpha}2+ \arctan\left[ \frac{\alpha\cos\tfrac{\pi\alpha}2}% {(1-\alpha)^{\tfrac{1-\alpha}2}(1+\alpha)^{\tfrac{1+\alpha}2}+\alpha\sin\tfrac{\pi\alpha}2} \right]. \end{align*} Therefore, the corollary gives a theorem proved by Mocanu \cite{Mocanu89}. \end{remark} Since the values $\Theta_+$ and $\Theta_-$ are not given in an explicit way, it might be convenient to have a simpler sufficient condition for $|\arg q|<\pi\alpha/2.$ \begin{corollary} Let $0<\alpha\le1,$ let $c$ be a complex number with $\Re c>0,$ and let $n$ be a positive integer.
If a function $q\in\mathcal{H}[c^\alpha,n]$ satisfies the condition $$ q(z)+\frac{zq'(z)}{q(z)}\in\Omega, $$ then $|\arg q|<\pi\alpha/2$ on $\mathbb{D}.$ Here, $\Omega=\Omega_1\cup\Omega_2\cup\Omega_3,$ where $\Omega_1$ and $\Omega_2$ are given in Lemmas \ref{lem:1} and \ref{lem:2}, respectively, and $\Omega_3=\{w\in\mathbb{C}: |\arg w|<\pi\alpha/2\}.$ \end{corollary} \begin{proof} Lemmas \ref{lem:1} and \ref{lem:2} yield that $\Omega_1\cup\Omega_2\subset R_{\alpha,c,n}(\mathbb{D}).$ Since $\Theta_\pm>\pi\alpha/2,$ we also have $\Omega_3\subset R_{\alpha,c,n}(\mathbb{D}).$ Thus $\Omega\subset R_{\alpha,c,n}(\mathbb{D}).$ Now the result follows from Theorem \ref{thm:main}. \end{proof} See Figure \ref{fig1} for the shape of the domain $\Omega$ together with $R_{\alpha,c,n}(\mathbb{D}).$ We remark that $\Omega=R_{\alpha,c,n}(\mathbb{D})$ when $\alpha=1.$ \begin{figure}[!bht] \begin{center} \scalebox{0.5}[0.46]{\includegraphics{3.pdf}} \caption{The image $R_{\alpha,c,n}(\mathbb{D})$ and $\Omega$ for $\alpha=1/2, c=4+3i, n=2.$} \label{fig1} \end{center} \end{figure}
\bibliographystyle{apalike} \usepackage[latin1]{inputenc} \usepackage{amssymb,dsfont,amsmath,amsthm,mathtools} \usepackage{hyperref} \usepackage{graphicx} \usepackage{notations, mystyle} \DeclareMathOperator{\defi}{def} \DeclareMathOperator{\GW}{GW} \DeclareMathOperator{\defeq}{\overset{\defi}{=}} \def\mathds{1}{\mathds{1}} \newenvironment{skproof}{% \renewcommand{\proofname}{Sketch of Proof}\proof}{\endproof} \usepackage{float} \usepackage{algorithm} \usepackage{algorithmic} \setlength\belowcaptionskip{0.8em} \begin{document} \runningtitle{Faster Unbalanced Optimal Transport: Translation invariant Sinkhorn and 1-D Frank-Wolfe} \twocolumn[ \aistatstitle{Faster Unbalanced Optimal Transport: \\ Translation invariant Sinkhorn and 1-D Frank-Wolfe} \aistatsauthor{Thibault Sejourne \And Francois-Xavier Vialard \And Gabriel Peyre} \aistatsaddress{DMA, ENS, PSL \And LIGM, UPEM \And CNRS, DMA, ENS, PSL} ] \input{sections/abstract} \input{sections/intro} \input{sections/invariant} \input{sections/sinkhorn} \input{sections/frank-wolfe} \input{sections/barycenter} \input{sections/conclusion} \section{Conclusion} We presented in this paper a translation invariant reformulation of $\UOT$ problems. While conceptually simple, this modification makes Sinkhorn's iterations as fast in the unbalanced case as in the balanced one. It also enables F-W steps, which turn out to be very efficient for 1-D problems. \section*{Acknowledgements} The work of Gabriel Peyr\'e was supported by the French government under management of Agence Nationale de la Recherche as part of the ``Investissements d'avenir'' program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute) and by the European Research Council (ERC project NORIA).
\section{Barycenters} \label{sec-barycenter} \paragraph{UOT barycenters.} To be able to derive an efficient F-W procedure for the computation of barycenters, we consider in this section asymmetric relaxations, where $\pi_1$ is penalized with $\rho\KL$ and we impose $\pi_2=\be$ (where $\be$ represents here the barycenter). To emphasize the role of the positions of the support of the histograms, we denote the resulting UOT value as \begin{align*} \UW( (\al,x), (\be,y) ) \triangleq \umin{\pi\geq 0, \pi_2 = \be} \dotp{\pi}{C} +\D_\phi(\pi_1 | \al), \end{align*} where the cost is $\C_{i,j} = c(x_i,y_j)$ for some ground cost~$c$. We consider in this section the barycenter problem between $K$ measures $(\al_1,\ldots,\al_K)\in\RR_+^{N_1}\times\ldots\times\RR_+^{N_K}$, each measure being supported on a set of points $x_k = (x_{(k, 1)}, \ldots, x_{(k, N_k)})$. It reads \begin{align}\label{eq-bar} \umin{\be,y} \sum_{k=1}^K \om_k \UW((\al_k,x_k),(\be,y)), \end{align} where $\om_k \geq 0$ and $\sum_k \om_k=1$. \paragraph{Multi-marginal formulation.} The main difficulty in computing such barycenters is that the support $y$ is unknown, and the problem is non-convex with respect to $y$. Proposition~\ref{prop:barycentermulti} below states that this barycenter can be equivalently computed by solving the following convex unbalanced multi-marginal problem \begin{align}\label{eq-multimarg} \umin{\ga\geq 0} \dotp{\ga}{\Cc} + \sum_{k=1}^K \om_k \D_{\phi}(\ga_k | \al_k), \end{align} where $\ga_k$ is the $k^{\text{th}}$ marginal of the tensor $\ga \in \RR^{N_1 \times \ldots \times N_K}$ obtained by summing over all indices but $k$. The cost of this multi-marginal problem is \begin{align} \Cc_{(i_1,\ldots,i_K)} &\triangleq \umin{b}\sum_k \om_k c(x_{(k,i_k)},b). \label{EqCostMultiMarginal} \end{align} For instance, when $c(x,y) = \norm{x-y}^2$, then up to additive constants, one has $\Cc_{(i_1,\ldots,i_K)} = - 2 \sum_{k \neq \ell} \dotp{x_{(k, i_k)}}{x_{(\ell, i_\ell)}}$.
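For $c(x,y)=\norm{x-y}^2$, the inner minimization over $b$ defining $\Cc$ is attained at the weighted mean of the points. The following NumPy sketch (the function name \texttt{bary\_cost} is ours, purely for illustration) evaluates one entry of $\Cc$ this way:

```python
import numpy as np

def bary_cost(xs, om):
    # One entry of the multimarginal cost for the squared Euclidean ground
    # cost: min_b sum_k om_k ||x_k - b||^2 is attained at the weighted mean.
    b = sum(w * x for w, x in zip(om, xs))                      # B_om(x_1,...,x_K)
    val = sum(w * np.sum((x - b) ** 2) for w, x in zip(om, xs))
    return val, b
```

Since the objective is a convex quadratic in $b$, any perturbation of the weighted mean can only increase it, which gives a cheap numerical check of the closed form.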
We consider this setting in our experiments. In the following, we make use of the barycentric map \begin{equation*} B_\om(z_1,\ldots,z_k) \triangleq \uargmin{b} \sum_k \om_k c(z_k,b). \end{equation*} For instance, one has $B_\om(z_1,\ldots,z_k) = \sum_k \om_k z_k$ for $c(x,y) = \norm{x-y}^2$. \begin{prop}\label{prop:barycentermulti} Problems~\eqref{eq-bar} and~\eqref{eq-multimarg} have equal value. % Furthermore, for any optimal multimarginal plan $\ga^\star$, with support $I = \{i : \ga_i^\star \neq 0\}$, an optimal barycenter for problem~\eqref{eq-bar} is supported on the set of points $y = ( B_\om(x_{(1, i_1)},\ldots,x_{(K, i_K)}) )_{i \in I}$ with associated weights $\be = (\ga^\star_i)_{i \in I} \in \RR^{|I|}$. % \end{prop} \begin{proof} We can parameterize Program~\eqref{eq-bar} with variables $(\ga_1,\ldots,\ga_K)$ such that $\UW((\al_k,x_k),(\be,y))$ amounts to solving $\OT(\ga_k,\be)$ for an optimal choice of $(\ga_1,\ldots,\ga_K)$. Thus $\be$ is the solution of a balanced barycenter problem with inputs $(\ga_1,\ldots,\ga_K)$. We have from~\cite{agueh-2011} the equivalence with the multimarginal problem, hence the result. \end{proof} While convex, Problem~\eqref{eq-multimarg} is in general intractable because its size grows exponentially with $K$. A notable exception, which we detail below, is the balanced case in 1-D, when $\D_{\phi} = \iota_{\{=\}}$ imposes $\ga_k = \al_k$, in which case it can be solved in linear time $O(\sum_k N_k)$, with an extension of the 1-D OT algorithm detailed in the previous section. This is the core of our F-W method to solve the initial barycenter problem. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{sections/figures/plot_bench_fw_sink_wfr.pdf} \caption{\textit{ Comparison of $\norm{\f_t-\f^\star}_\infty$ depending on time for $5000$ iterations of Sinkhorn or FW without linesearch.
The value of $\log_{10}(\epsilon)$ is reported in the legend.} } \label{fig-sinkhorn-vs-fw} \end{figure} \begin{figure*} \centering \begin{tabular}{c@{}c@{}c@{}c@{}c} {\includegraphics[width=0.3\linewidth]{sections/figures/plot_inputs_barycenter.pdf}} & $\quad$ & {\includegraphics[width=0.3\linewidth]{sections/figures/plot_unbalanced_barycenter.pdf}} & $\quad$ & {\includegraphics[width=0.3\linewidth]{sections/figures/score_unbalanced_barycenter.pdf}} \end{tabular} \caption{\textit{ Left: plot of 8 random mixtures supported in $[0,1]$ used to compute their isobarycenter. Center: Barycenter for $\OT$ and $\UOT$. Right: Value of dual objective $\Hh((\Bf^\star)_k) - \Hh((\Bf_t)_k)$.} } \label{fig-bar-fw} \end{figure*} \paragraph{Solving the 1-D balanced multimarginal problem.} Algorithm \ref{algo-multimarg-ot} solves the multi-marginal problem~\eqref{eq-multimarg} in the balanced case $\D_{\phi} = \iota_{\{=\}}$. It is valid in the 1-D case, when $c(x,y)=|x-y|^p$ for $p \geq 1$, and more generally when $\Cc$ satisfies some submodularity condition as defined in \cite{bach2019submodular, carlier2003class}. We show the correctness of this algorithm in the Appendix. Note that Algorithm~\ref{algo-multimarg-ot} differs from~\cite{cohen2021sliced, bach2019submodular}, which solve 1-D multimarginal OT by computing $\ell_2$ norms of inverse cumulative distribution functions in the barycenter setting. While both approaches can be used to backpropagate through the multimarginal loss, Algorithm~\ref{algo-multimarg-ot} holds for more general costs, and allows computing an explicit plan and thus the barycenter.
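The sweep underlying Algorithm~\ref{algo-multimarg-ot} can be sketched in a few lines of NumPy. The sketch below is our own simplification for illustration: it computes the primal plan only, assumes all inputs have equal total mass, and advances every exhausted marginal at once, which merely merges the zero-mass steps of the original loop.

```python
import numpy as np

def solve_mot_plan(alphas):
    """Greedy sweep over sorted 1-D supports: at each step, move the smallest
    residual mass, then advance the exhausted marginals (the north-west corner
    rule generalized to K marginals). Returns a dict multi-index -> mass."""
    K, N = len(alphas), [len(a) for a in alphas]
    idx = [0] * K                                 # current atom of each marginal
    res = [float(a[0]) for a in alphas]           # residual mass of these atoms
    plan = {}
    while True:
        m = min(res)
        if m > 0:
            key = tuple(idx)
            plan[key] = plan.get(key, 0.0) + m
            res = [r - m for r in res]
        moved = False
        for k in range(K):
            if res[k] <= 1e-12 and idx[k] + 1 < N[k]:
                idx[k] += 1
                res[k] = float(alphas[k][idx[k]])
                moved = True
        if not moved:
            break
    return plan
```

For $K=2$ this reduces to the classical north-west corner construction of the 1-D optimal plan between sorted supports.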
\begin{algorithm}[t] \caption{-- \textbf{SolveMOT($(x_k)_k$, $(\al_k)_k$, $\om$, $\Cc$)} }\label{algo-multimarg-ot} \small{ \textbf{Input:} $K$ measures $(x_k,\al_k,N_k)$, $K$ weights $\om_k$ and multimarginal cost $\Cc$\\ \textbf{Output:}~primal-dual solutions $(\ga,\, \{\f_k\}_k)$\\ \begin{algorithmic}[1] \STATE Set $\ga,\, \f_{k} \leftarrow 0,\, 0$ \STATE Set $ \{a_k\}_k ,\, \{i_k\}_k \, \leftarrow \,\{\al_{(k, 1)}\}_k ,\, \{1\}_k$ \STATE Set $\f_{(1, 1)} \leftarrow \Cc(x_{(1, 1)},\dots,x_{(K, 1)})$ \WHILE{$\exists k,\, i_k < N_k$} \STATE $p\leftarrow\arg\min\{k\mapsto a_k \,\, \textrm{s.t.}\,\, i_k<N_k\}$ \STATE $\ga_{(i_1,\ldots,i_K)},\,\{a_k\}_k\leftarrow a_p,\, \{a_k - a_p\}_k$ \STATE $i_p\leftarrow i_p + 1$ \STATE $\f_{(p, i_p)} \leftarrow \Cc(x_{(1, i_1)},\dots,x_{(K, i_K)}) - \sum_{k\neq p}\f_{(k,i_k)}$ \STATE $ a_p \leftarrow \al_{(p, i_p)}$ \ENDWHILE \STATE Return $(\ga,\, \{\f_k\}_k)$. \end{algorithmic} } \end{algorithm} \paragraph{Translation invariant multi-marginal.} The dual of the multi-marginal problem~\eqref{eq-multimarg} reads, for $\f = (\f_1,\ldots,\f_K)$, \begin{align} \umax{\f_1\oplus\ldots\oplus\f_K\leq\Cc} \Ff(\f) \triangleq \sum_{k=1}^K \dotp{\al_k}{-\om_k\phi^*(-\tfrac{\f_k}{\om_k}) }. \label{eq-dual-multimarg} \end{align} Similarly to Section~\ref{sec-trans-inv}, we define a translation invariant functional for $\Bf = (\Bf_1,\ldots,\Bf_K)$ \begin{align*} \Hh(\Bf) \triangleq \sup_{\sum_k \la_k=0}\Ff(\Bf_1+\la_1,\ldots,\Bf_K+\la_K). \end{align*} The following proposition generalizes Proposition~\ref{prop-equiv-mass-trans}. \begin{prop}\label{prop-optim-trans-multimarg} Assume that $\phi^*$ is smooth and strictly convex. Then there exists a unique $\la^\star(\Bf) = (\la_1,\ldots,\la_K)$ with $\sum_k \la_k=0$ achieving the supremum in the definition of $\Hh$. % Moreover, $\nabla \Hh(\Bf) = \tal$ where $\tal_k \triangleq \nabla\phi^*(-\Bf_k-\la_k)\al_k$ is such that for any $(i,j)$ one has $m(\tal_i)=m(\tal_j)$.
\end{prop} \begin{proof} For this proof we reparameterize $(\la_1,\ldots,\la_K)$ as $(\la_1-\La,\ldots,\la_K-\La)$ where $\La =\tfrac{1}{K}\sum_k \la_k$, such that the constraint $\sum_k \la_k=0$ can be dropped. % If any $\la_i\rightarrow-\infty$ then $\Ff(\ldots)\rightarrow-\infty$. % If any $\la_i\rightarrow+\infty$ then $\La\rightarrow+\infty$ and again $\Ff(\ldots)\rightarrow-\infty$. % Thus we are in a coercive setting, and the supremum is attained. % Uniqueness is given by the strict convexity of $\phi^*$. % Finally, the first order optimality condition w.r.t. $\la_i$ reads $\dotp{\al_i}{\nabla\phi^*(-\f_i -\la_i + \La)} = \sum_k \dotp{\al_k}{\nabla\phi^*(-\f_k - \la_k + \La)}$. % The r.h.s. term is the same for any $i$, thus for any $(i,j)$ one has $\dotp{\al_i}{\nabla\phi^*(-\f_i -\la_i + \La)} = \dotp{\al_j}{\nabla\phi^*(-\f_j -\la_j + \La)}$, which reads $m(\tal_i)=m(\tal_j)$. \end{proof} The following proposition shows that for $\rho \KL$ divergences, one can compute the optimal translation in closed form. \begin{prop}[$\KL$ setting] % When $\D_{\phi} = \rho \KL$, then, denoting $q_k \triangleq \log\dotp{\al_k}{e^{-\f_k / (\om_k \rho)}}$, \eql{\label{eq-optim-const-multimarg} \la^\star(\Bf)_i = \om_i \rho q_i - \frac{\om_i}{\sum_k \om_k} \sum_{k=1}^K \om_k \rho q_k. } \end{prop} Note that Equation~\eqref{eq-optim-const-multimarg} can be computed in $O(\sum_k N_k)$ time, and could be used in the multi-marginal Sinkhorn algorithm to potentially improve its convergence. \paragraph{Unbalanced multi-marginal F-W.} Similarly to Sections~\ref{sec-trans-inv} and~\ref{sec-fw}, we propose to optimize the multimarginal problem~\eqref{eq-multimarg} via F-W applied to the problem \begin{equation*} \umax{\Bf_1\oplus\ldots\oplus\Bf_K\leq\Cc} \Hh(\Bf).
\end{equation*} Each F-W step has the form $\Bf^{(t+1)} = (1-\tau_t) \Bf^{(t)} + \tau_t r$ where, thanks to Proposition~\ref{prop-optim-trans-multimarg}, $r=(r_1,\ldots,r_K)$ solves the following LMO \eq{ \umax{r_1 \oplus\ldots\oplus r_K\leq\Cc} \sum_{k=1}^K \dotp{\tilde\al_k^{(t)}}{r_k} \text{ with } \tilde\al_k^{(t)} \triangleq \nabla\phi_k^*(-\Bf_k^{(t)} - \la_k). } Proposition~\ref{prop-optim-trans-multimarg} guarantees that all the $\tilde\al_k^{(t)}$ have the same mass, so that the F-W iterations are well defined. In 1-D, this LMO can thus be solved in linear time using the function SolveMOT($(x_k)_k, (\tilde\al_k^{(t)})_k, \om, \Cc$). \paragraph{Numerical experiments.} Figure~\ref{fig-bar-fw} displays an example of computation of the balanced OT (corresponding to using $\rho=+\infty$) and UW barycenters of Gaussian mixtures. We consider the isobarycenter, $\om_k=\tfrac{1}{K}$. The input measures are $K=8$ Gaussian mixtures $a\cdot\Nn(\mu_1,\sigma) + b\cdot\Nn(\mu_2,\sigma)$ where $\sigma=0.03$, $\mu_1\sim\Uu([0.1,\,0.4])$, $\mu_2\sim\Uu([0.6,\,0.9])$ and $(a,b)\sim\Uu([0.8,\,0.1])$. The input measures and the barycenter are not densities, but $5000$ samples, smoothed in Figure~\ref{fig-bar-fw} using Gaussian kernel density estimation. One can observe that both $\OT$ and $\UW$ retrieve two modes. Note however that the $\OT$ barycenter displays multiple undesirable minor modes between the two main modes. This highlights the ability of the unbalanced barycenter to cope with mass variations in the modes of the input distribution, which create undesirable artifacts in the balanced barycenter. \section{Experiments} \label{sec-xp} \subsection{Convergence of Sinkhorn} In this section we track the convergence of the dual potentials of the Sinkhorn algorithm compared to our proposal where an optimal translation is computed at each loop using Equations~\eqref{eq-sink-with-trans-1} and~\eqref{eq-sink-with-trans-2}.
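As a sanity check of the closed-form translation in Equation~\eqref{eq-optim-const-multimarg}, the following NumPy sketch (the helper name \texttt{lam\_multi} is ours) computes $\la^\star(\Bf)$ for random inputs and verifies that the translations sum to zero and equalize the masses $m(\tal_k)$, assuming $\sum_k \om_k = 1$:

```python
import numpy as np

def lam_multi(fs, alphas, om, rho):
    # Optimal translation in the KL multimarginal setting:
    # q_k = log <alpha_k, exp(-f_k / (om_k * rho))>,
    # lam_i = om_i * rho * q_i - om_i / sum(om) * sum_k om_k * rho * q_k.
    om = np.asarray(om, dtype=float)
    q = np.array([np.log(np.dot(a, np.exp(-f / (w * rho))))
                  for f, a, w in zip(fs, alphas, om)])
    return om * rho * q - om / om.sum() * np.sum(om * rho * q)
```

After translation, each $\tal_k = e^{-(\f_k+\la_k)/(\om_k\rho)}\al_k$ should have mass $\exp(\sum_j \om_j q_j)$, independent of $k$.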
Recall that the classical Sinkhorn algorithm has a geometric convergence rate of $(1 + \tfrac{\epsilon}{\rho})^{-1}$. For this reason we run a first experiment where the ratio $\epsilon / \rho$ is fixed, and we update $\epsilon_t = t\epsilon$ and $\rho_t = t\rho$ for $t\in\{10, 1, 0.1, 0.01\}$. We display the experiment on Figure~\ref{fig-sinkhorn-ratio-fixed}. Note that all dotted lines are parallel, showing that a fixed ratio $\epsilon / \rho$ yields the same convergence for the classic Sinkhorn algorithm, independently of the scaling. Compared to this, the adjunction of translations provides a convergence at least as fast as the standard algorithm, which accelerates as the scaling increases, thus highlighting the benefits of translating the potentials in the iterations. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{sections/figures/plot_sinkhorn_ratio_fixed.png} \caption{Convergence of the dual potential $\f$ towards the optimal $\f^*$ w.r.t $\norm{\cdot}_\infty$. Each color corresponds to a scaling $t$ of $(\epsilon_t, \rho_t)$. The convergence curves for Sinkhorn are plotted as dotted lines, and those of our proposal as solid ones.} \label{fig-sinkhorn-ratio-fixed} \end{figure} We display in Figures~\ref{fig-sinkhorn-eps-fixed} and~\ref{fig-sinkhorn-rho-fixed} the plots of convergence when respectively $\epsilon$ and $\rho$ are fixed. In those cases the ratio $\epsilon / \rho$ is not fixed, hence the change of slopes for the dotted lines. We see that in both settings the homogeneous algorithm yields a convergence which is at least as fast as its standard counterpart, thus highlighting its benefits. What remains an open question is the rate of convergence of the homogeneous algorithm, and why the first iterations of both algorithms seem very close w.r.t. $\norm{\cdot-\f^*}_\infty$.
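For completeness, the two operators driving these iterations, the softmin $\Smin{\al}{\epsilon}$ and the contraction $\aprox_\rho$, can be sketched in a few lines. The NumPy code below is our own minimal reimplementation (not the paper's code), shown in the symmetric case $\rho_1=\rho_2=\rho$; alternating the two half-updates converges geometrically to a fixed point:

```python
import numpy as np

def softmin(eps, a, z):
    # Smin_a^eps(z) = -eps * log <a, exp(-z/eps)>, stabilized by a shift
    m = z.min()
    return m - eps * np.log(np.dot(a, np.exp(-(z - m) / eps)))

def sinkhorn_sweep(f, g, a, b, C, rho, eps):
    # one unbalanced Sinkhorn sweep in the KL setting: each half-update
    # composes the softmin with the aprox contraction f -> rho/(rho+eps) * f
    scale = rho / (rho + eps)
    g = scale * np.array([softmin(eps, a, C[:, j] - f) for j in range(len(b))])
    f = scale * np.array([softmin(eps, b, C[i, :] - g) for i in range(len(a))])
    return f, g
```

Since the softmin is $1$-contractive and $\aprox_\rho$ is $(1+\epsilon/\rho)^{-1}$-contractive for $\norm{\cdot}_\infty$, iterating this sweep reaches the fixed point up to machine precision.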
\begin{figure}[!htb] \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=.9\linewidth]{sections/figures/plot_sinkhorn_eps_fixed.png} \caption{Value of $\norm{\f_t - \f^*}_\infty$ for $\epsilon=5\cdot10^{-3}$ and $\rho\in\{0.1, 1, 10, 100\}$ for the standard Sinkhorn algorithm and the homogeneous version.}\label{fig-sinkhorn-eps-fixed} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=.9\linewidth]{sections/figures/plot_sinkhorn_rho_fixed.png} \caption{Value of $\norm{\f_t - \f^*}_\infty$ for $\rho=5$ and $\epsilon\in\{0.001, 0.01, 0.1, 1\}$ for the standard Sinkhorn algorithm and the homogeneous version.}\label{fig-sinkhorn-rho-fixed} \end{minipage} \end{figure} Lastly, we compute the Anderson acceleration~\cite{anderson1965iterative} for the standard and homogeneous Sinkhorn algorithms. The goal is to verify whether the acceleration benefits from the proposed algorithm, and whether the accelerated Sinkhorn is faster than the homogeneous algorithm. As displayed by Figure~\ref{fig-sinkhorn-anderson}, we see that Sinkhorn accelerated with Anderson extrapolation is slower than the homogeneous algorithm, and that accelerating the homogeneous algorithm yields an even faster convergence. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{sections/figures/plot_sinkhorn_with_anderson.png} \caption{Convergence of the dual potential $\f$ towards the optimal $\f^*$ w.r.t $\norm{\cdot}_\infty$ for $\epsilon=0.01$ and $\rho=10$.} \label{fig-sinkhorn-anderson} \end{figure} \subsection{Benchmark of 1D Frank-Wolfe} \subsection{Unbalanced Barycenters in 1D} \section{Consequences of the dual invariance to translations} \label{sec-transl} \subsection{Accelerating Sinkhorn} Optimizing the translation constant is an ingredient which, as we show in this section, significantly accelerates the Sinkhorn algorithm. We focus in this section on the KL setting, and introduce notations from~\cite{bibid}.
Recall that the Sinkhorn algorithm optimizes the dual of entropically regularized UOT which reads \begin{align}\label{eq-entropic-uot} \UOT_\epsilon(\al,\be) \triangleq \umax{(\f,\g)} &\dotp{\al}{-\rho_1(e^{-\f / \rho_1} - 1)}\\ + &\dotp{\be}{-\rho_2(e^{-\g / \rho_2} - 1)}\\ - &\epsilon\dotp{\al\otimes\be}{e^{\tfrac{\f\oplus\g-\C}{\epsilon}} - 1}. \end{align} The Sinkhorn algorithm updates $\f$ and $\g$ alternately until convergence. In the KL setting, it is proved in~\cite{bibid} that the Sinkhorn algorithm reads for any initialization $\f_0$ \begin{align*} \g_{t+1}(y) &= \aprox_{\rho_1}(\Smin{\al}{\epsilon}(\C(\cdot,y) - \f_{t})),\\ \f_{t+1}(x) &= \aprox_{\rho_2}(\Smin{\be}{\epsilon}(\C(x,\cdot) - \g_{t+1})), \end{align*} where the softmin $\Smin{}{}$ and the anisotropic proximity operator $\aprox$ read $\Smin{\al}{\epsilon}(\f) \triangleq -\epsilon\log\dotp{\al}{e^{-\f / \epsilon}}$, and $\aprox_\rho(\f) \triangleq \tfrac{\rho}{\rho + \epsilon} \f$. Both operators are respectively $1$-contractive and $(1+\tfrac{\epsilon}{\rho})^{-1}$-contractive for the uniform norm $\norm{\cdot}_\infty$. We introduce operators which apply the optimal translation from Proposition~\ref{prop-kl-opt-trans}, denoted $\la^*(\f,\g)$. They read \begin{align} &T_1(\f,\g)\triangleq \f + \la^*(\f,\g), \\ &T_2(\f,\g)\triangleq \g - \la^*(\f,\g).
\end{align} We propose to add an extra step in the Sinkhorn algorithm by translating after applying the operator $\aprox\circ\Smin{}{}$, which now reads \begin{gather} \g_{t+1}(y) = \aprox_{\rho_1}\bigg(\Smin{\al}{\epsilon}\big(\C(\cdot,y) - \f_{t} - \la^*(\f_{t}, \g_{t})\big)\bigg),\label{eq-sink-with-trans-1}\\ \f_{t+1}(x) = \aprox_{\rho_2}\bigg(\Smin{\be}{\epsilon}\big(\C(x,\cdot) - \g_{t+1} + \la^*(\f_{t}, \g_{t+1})\big)\bigg).\label{eq-sink-with-trans-2} \end{gather} This sequence of updates operates an alternate dual ascent on the following formulation \begin{align} \sup_{\f,\g,\la} \dotp{\al}{-\rho_1(e^{-\tfrac{\f+\la}{\rho_1}} - 1)} + \dotp{\be}{-\rho_2(e^{-\tfrac{\g-\la}{\rho_2}} - 1)} -\epsilon\dotp{\al\otimes\be}{e^{\tfrac{\f\oplus\g - \C}{\epsilon}} - 1}. \label{eq-dual-form-with-trans} \end{align} Note that an alternate descent on the above formulation is not equivalent to operating an alternate descent on Formulation~\eqref{eq-dual-form-hell}. Indeed, when $\rho_1=\rho_2=\rho$, the first order condition of Formulation~\eqref{eq-dual-form-hell} w.r.t.
$\f$ for fixed $\g_t$ reads \begin{align} e^{\f / \epsilon}\cdot\dotp{\be}{e^{\tfrac{\g_t - \C}{\epsilon}}} = e^{-\f / \rho}\cdot\sqrt{\frac{ \dotp{\be}{e^{-\g_t / \rho}} }{ \dotp{\al}{e^{-\f / \rho}} }}.\label{eq-optim-sink-v1} \end{align} Concerning Formulation~\eqref{eq-dual-form-with-trans}, the optimality on $\f$ for fixed $(\g_t,\la_t)$ where $\la_t=\la^*(\f_t,\g_t)$ reads \begin{align} e^{\f / \epsilon}\cdot\dotp{\be}{e^{\tfrac{\g_t - \C}{\epsilon}}} = e^{\tfrac{-\f-\la_t}{\rho}} = e^{-\f / \rho}\cdot\sqrt{\frac{ \dotp{\be}{e^{-\g_t / \rho}} }{ \dotp{\al}{e^{-\f_t / \rho}} }}.\label{eq-optim-sink-v2} \end{align} Note that while the optimality Equation~\eqref{eq-optim-sink-v1} does not admit a closed form due to the dependence in $\f$ of the integral $\dotp{\al}{e^{-\f / \rho}}$, Equation~\eqref{eq-optim-sink-v2} admits a closed form for $\f$ because it uses the previously computed potential $\f_t$. We provide experiments in Section~\ref{sec-xp}. At the moment we have no proof that the rate of convergence is better using this algorithm, but we observe experimentally that it converges at least as fast as the standard Sinkhorn algorithm. \section{Properties of $\Gg_\epsilon$ and $\Hh_\epsilon$} \label{sec-trans-inv} We first give some important properties of the optimal translation parameter, which links $\Gg_\epsilon$ and $\Hh_\epsilon$. We recall that $m(\al) \triangleq \sum_i\al_i$. \begin{prop} \label{prop-equiv-mass-trans} Assume that $\phi_1^*$, $\phi_2^*$ are smooth and strictly convex. Then there exists a unique maximizer $\la^\star(\Bf,\Bg)$ of $\Gg_\epsilon(\Bf,\Bg,\cdot)$. Furthermore, $(\tilde\al,\tilde\be) = \nabla \Hh_0(\Bf,\Bg)$ satisfy $\tal = \nabla\phi_1^*(-\Bf-\la^*(\Bf,\Bg))\al$, $\tbe=\nabla\phi_2^*(-\Bg+\la^*(\Bf,\Bg))\be$ and $m(\tal)=m(\tbe)$. \end{prop} \begin{proof} From~\cite{liero2015optimal} we have that $\lim_{x\rightarrow\infty} \phi^*(x) = +\infty$.
% Thus for any $(\Bf,\Bg)$, $\Gg_\epsilon(\Bf,\Bg,\la)\rightarrow-\infty$ when $\la\rightarrow\pm\infty$, i.e. $\Gg_\epsilon$ is coercive in $\la$. % It means that we have compactness, and the maximum is attained in $\RR$. % Uniqueness is given by the strict convexity of $\phi_i^*$. % The expression of $(\tal,\tbe)$ follows by applying the envelope theorem since $\phi_i^*$ are smooth. % Concerning the mass equality, the first order optimality condition of $\Gg_\epsilon$ in $\la$ reads $\dotp{\al}{\nabla\phi_1^*(-\Bf-\la)} = \dotp{\be}{\nabla\phi_2^*(-\Bg+\la)}$. % Thus by definition of $(\tal,\tbe)$, this condition is rewritten as $\dotp{\tal}{1} = \dotp{\tbe}{1}$, meaning that $m(\tal) = m(\tbe)$. \end{proof} \paragraph{Closed forms for KL.} The case $\phi_i(x)=\rho_i(x\log x -x+1)$ and $\phi_i^*(x)=\rho_i(e^{-x/\rho_i} - 1)$ (corresponding to penalties $\rho_i\KL$, where one can have $\rho_1\neq\rho_2$) enjoys simple closed-form expressions. We start with the property that if we fix $(\Bf,\Bg)$, then $\la^\star$ can be computed explicitly. \begin{prop} \label{prop-kl-opt-trans} One has \eql{\label{eq-opt-trans-kl} \la^\star(\Bf,\Bg) = \tfrac{\rho_1\rho_2}{\rho_1 + \rho_2} \log\Big[\frac{\dotp{\al}{e^{-\Bf / \rho_1}}}{ \dotp{\be}{e^{-\Bg / \rho_2}}}\Big]. } \end{prop} \begin{proof} The optimality condition of Equation~\eqref{eq-def-g-func} in $\la$ reads $\dotp{\al}{\nabla\phi_1^*(-\Bf-\la)} = \dotp{\be}{\nabla\phi_2^*(-\Bg+\la)}$, which for $\rho_i\KL$ is $\dotp{\al}{e^{-\tfrac{\Bf+\la}{\rho_1}}} = \dotp{\be}{e^{-\tfrac{\Bg-\la}{\rho_2}}}$. % Solving this equation in $\la$ yields Equation~\eqref{eq-opt-trans-kl}. \end{proof} Note that Equation~\eqref{eq-opt-trans-kl} can be computed in $O(N)$ time, and stabilized via a logsumexp reduction. Equation~\eqref{eq-opt-trans-kl} is useful to rewrite $\Hh_\epsilon$ explicitly. It yields a formulation which, to the best of our knowledge, is new.
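Equation~\eqref{eq-opt-trans-kl} can also be checked numerically: after translating $(\Bf,\Bg)$ by $(\la^\star,-\la^\star)$, the two translated masses coincide, which is exactly the first order condition used in the proof. A minimal NumPy sketch (the function name \texttt{lam\_star} is ours):

```python
import numpy as np

def lam_star(f, g, alpha, beta, rho1, rho2):
    # Closed-form optimal translation for rho_i * KL penalties:
    # lam* = rho1*rho2/(rho1+rho2) * log(<alpha, e^{-f/rho1}> / <beta, e^{-g/rho2}>)
    A = np.log(np.dot(alpha, np.exp(-f / rho1)))
    B = np.log(np.dot(beta, np.exp(-g / rho2)))
    return rho1 * rho2 / (rho1 + rho2) * (A - B)
```

In a practical implementation the two logarithms would be computed with a logsumexp reduction, as noted above.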
\begin{prop} Setting $\tau_1=\tfrac{\rho_1}{\rho_1 + \rho_2}$ and $\tau_2=\tfrac{\rho_2}{\rho_1 + \rho_2}$, one has \begin{align} \Hh_\epsilon&(\Bf,\Bg) = \rho_1 m(\al) + \rho_2 m(\be) -\epsilon\dotp{\al\otimes\be}{e^{\tfrac{\Bf\oplus\Bg - \C}{\epsilon}} - 1}\nonumber\\ &- (\rho_1 + \rho_2)\Big(\dotp{\al}{ e^{-\Bf / \rho_1} }\Big)^{\tau_1} \Big(\dotp{\be}{ e^{-\Bg / \rho_2} }\Big)^{\tau_2}.\label{eq-dual-form-hell} \end{align} % In particular when $\rho_1 = \rho_2=\rho$ and $\epsilon=0$, \begin{align*} \Hh_0(\Bf,\Bg) = \rho \Big[m(\al) + m(\be)- 2\sqrt{ \dotp{\al}{ e^{-\Bf/\rho} } \dotp{\be}{ e^{-\Bg/\rho} } }\Big]. \end{align*} \end{prop} \section{Appendix of Section 5 - Barycenters} \label{seq-supp-barycenter} \subsection{Proof of Proposition 6} We provide in this section a different proof than that of the paper. It consists in completely rederiving the proof of~\cite{agueh-2011} for our functional $\UW$, which has an extra $\KL$ penalty term. \begin{proof} First note that both problems have minima which are attained. Indeed, both problems admit finite values since respectively $\be=\al_1$ and $\ga=\al_1\otimes\ldots\otimes\al_K$ are feasible. % We can assume that the optimal plans have bounded mass. % Assume for instance that it is not the case for the barycenter problem. % Then there exists a sequence $\be_t$ approaching the infimum and such that $m(\be_t)\rightarrow\infty$, which would contradict the finiteness of the functional value. % Thus, we consider without loss of generality that $m(\be)<M_1$ and $m(\ga)< M_2$. % By the Banach-Alaoglu theorem, $\be$ and $\ga$ are in compact sets. Taking a sequence approaching the infimum, one can extract a converging subsequence which attains the minimum, hence the existence of minimizers. We now prove that the multimarginal problem upper-bounds the barycenter problem. % Take $\ga$ optimal for the multimarginal problem.
% Define the canonical projection $p_k$ such that $p_k(x_1,\ldots,x_K) \triangleq x_k$, and $\ga^{(k)} \triangleq (p_k, B_\la)_\sharp\ga$. % Note that all $\ga^{(k)}$ have the same second marginal $\tbe\triangleq B_{\la\sharp}\ga$, thus they are feasible for $\UW(\al_k,\tbe)$, and we get \begin{align*} \UW(\al_k,\tbe)&\leq \dotp{\ga^{(k)}}{\C} + \rho\KL(\ga^{(k)}_1 | \al_k)\\ &= \dotp{\ga}{\C[x_k, B_\la(x_1,\ldots,x_K)]} + \rho\KL(\ga_k | \al_k) \end{align*} Thus by summing over $k$, we get \begin{align*} \sum_k \la_k \UW(\al_k,\tbe) &\leq \sum_k \la_k\Bigg[ \dotp{\ga}{\C[x_k, B_\la(x_1,\ldots,x_K)]} + \rho\KL(\ga_k | \al_k) \Bigg]\\ &= \dotp{\ga}{\sum_k \la_k\C[x_k, B_\la(x_1,\ldots,x_K)]} + \sum_k \la_k\rho\KL(\ga_k | \al_k)\\ &= \dotp{\ga}{\Cc} + \sum_k \la_k\rho\KL(\ga_k | \al_k). \end{align*} % Hence the upper-bound on the barycenter problem. We now prove the converse inequality. Consider the optimal barycenter $\be^*$, and write $\pi^{(k)}(x_k, z)$ for the optimal plan of $\UW(\al_k,\be^*)$. % Note that all $(\pi^{(k)})$ have the same second marginal $\be^*$. % This allows defining the gluing of all plans along $\be^*$, which is a $(K+1)$-dimensional tensor, noted $\eta(x_1,\ldots,x_K,z)$. % Write $\teta$ for its marginal obtained by summation over the variable $z$. The plan $\teta(x_1,\ldots,x_K)$ is feasible for the multimarginal problem. % It yields \begin{align*} (2) &\leq \dotp{\teta}{\Cc} + \sum_k \la_k \rho\KL(\teta_k | \al_k)\\ &= \dotp{\eta}{\Cc} + \sum_k \la_k \rho\KL(\eta_k | \al_k)\\ &\leq \dotp{\eta(x_1,\ldots,x_K,z)}{\sum_k\la_k\C(x_k, z)} + \sum_k \la_k \rho\KL(\eta_k | \al_k)\\ &= \sum_k \la_k \Bigg[ \dotp{\pi^{(k)}(x_k, z)}{\C(x_k, z)} + \rho\KL(\pi^{(k)}_1 | \al_k) \Bigg]. \end{align*} The last equality holds by construction of the gluing plan $\eta$, whose marginal on the variables $(x_k,z)$ is $\pi^{(k)}$, which implies $\teta_k=\eta_k=\pi^{(k)}_1$.
% The last line is exactly the value of the barycenter problem for the barycenter $\be^*$, which shows that it upper-bounds the multimarginal problem. Eventually, we have that both formulations yield the same value. % Reusing the first part of the proof, we have that the measure $\tbe = B_{\la\sharp}\ga$ yields the same value for both problems, thus it is an optimizer for the barycenter problem, which ends the proof. \end{proof} \subsection{Correctness of the OT multimarginal algorithm} As explained, solving the 1D barycenter problem is equivalent to solving a balanced, 1D multimarginal transport problem with respect to the barycentric cost. For the squared Euclidean distance, it is well-known in the case of $K=2$ marginals that the barycenter is characterized by its generalized inverse cumulative distribution function (icdf) being equal to the mean of the icdfs of the corresponding marginals. This property can be extended in 1D to multimarginal costs satisfying a submodularity condition as defined in \cite{bach2019submodular, carlier2003class}. Under this assumption, it is shown in these papers that the optimal plan $\gamma$, as defined in the corresponding multimarginal problem, is given by the distribution of $(F_{\alpha_1}^{-1}(U),\ldots,F_{\alpha_k}^{-1}(U))$ where $U$ is a uniform random variable on $[0,1]$ and $F^{-1}_{\mu}$ denotes the icdf of $\mu$ (see \cite{bach2019submodular} for the definition). Thus, the optimal primal variable $\gamma$ is explicitly parametrized by $t \in [0,1]$. It is straightforward to prove that the algorithm computes the optimal plan. The optimal dual variables are obtained by applying the primal-dual constraint, which reads \begin{equation} \sum_{i = 1}^K f_i(x_{k_i}^i) = \Cc(x_{k_1}^1,\ldots,x_{k_K}^K)\, ,\label{EqPrimalDual} \end{equation} for every $(x_{k_i}^i)_{i =1,\ldots,K}$ in the support of the optimal plan.
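The icdf characterization can be turned into a direct construction of the optimal plan: merging the cumulative masses of all marginals yields the breakpoints of $t\mapsto(F_{\alpha_1}^{-1}(t),\ldots,F_{\alpha_K}^{-1}(t))$, and each segment between consecutive breakpoints contributes one atom of the plan. A minimal NumPy sketch (our own illustration, assuming each weight vector sums to one, i.e. the balanced case):

```python
import numpy as np

def mot_plan_icdf(alphas):
    """Optimal 1-D multimarginal plan as the law of
    (F^{-1}_{alpha_1}(U), ..., F^{-1}_{alpha_K}(U)), U uniform on [0,1].
    Assumes each input weight vector sums to one (balanced case)."""
    cdfs = [np.cumsum(a) for a in alphas]
    cuts = np.unique(np.concatenate([[0.0]] + cdfs))   # merged breakpoints
    plan = {}
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        t = 0.5 * (lo + hi)                            # any point in the segment
        idx = tuple(int(np.searchsorted(c, t)) for c in cdfs)
        plan[idx] = plan.get(idx, 0.0) + (hi - lo)
    return plan
```

On sorted supports this produces the same plan as the greedy sweep of Algorithm~\ref{algo-multimarg-ot}, up to zero-mass entries.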
To initialize the dual variables, one has to remark that the multimarginal OT problem is invariant under the translations $f_i \to f_i + \lambda_i$ with $\sum_{i = 1}^K \lambda_i = 0$. This implies that one can set the value of the last $K-1$ potentials to $0$ and initialize $f_1(x_1^1)$ with the primal-dual constraint~\eqref{EqPrimalDual}, which gives $f_1(x_1^1) = \Cc(x_1^1,\ldots,x_1^K)$. The standard iteration of the algorithm consists in updating the current point in the optimal plan (indexed by $t$) to the next point in the support of the plan $\ga$ and updating the corresponding dual potential according to the primal-dual constraint. Note that the primal-dual equality is satisfied on the support of the plan $\ga$, while on the support of $\al_1\otimes\ldots\otimes\al_K$ the inequality constraint \begin{equation} \sum_{i = 1}^K f_i(x_i) \leq \Cc(x_1,\ldots,x_K)\, \end{equation} is satisfied; in other words, the potentials $(\f_1,\ldots,\f_K)$ are dual-feasible. This fact is guaranteed by a submodularity condition on the cost $\Cc$, and it is not satisfied for a general cost (see~\cite[Proposition 4]{bach2019submodular}). \subsection{Proof of Proposition 8} We provide here a more general proof where each marginal $\al_k$ is penalized by $\rho_k\KL$. We retrieve the result of the paper for the particular setting $\rho_k = \omega_k\rho$ and $\sum_k \omega_k = 1$. We define $\rho_{tot}\triangleq \sum_k \rho_k$. \begin{proof} First note that $\sum_k \la_k = 0$ implies that $(\f_1+\la_1)\oplus\ldots\oplus(\f_K + \la_K)\leq\Cc$. % In what follows we replace the parameterization $(\la_1,\ldots,\la_K)$ by $(\la_1 -\La,\ldots,\la_K - \La)$ where $\La(\la_1,\ldots,\la_K) \triangleq \tfrac{1}{K}\sum_k \la_k$, such that the constraint $\sum_k \la_k=0$ is always satisfied.
% The first order optimality condition on $\Ff(\f_1 + \la_1 - \La,\ldots,\f_K + \la_K -\La)$ w.r.t. the coordinate $\la_i$ reads \begin{align*} &\dotp{\al_i}{e^{-(\f_i + \la_i -\La) / \rho_i}} - \sum_{k=1}^K \tfrac{1}{K}\dotp{\al_k}{e^{-(\f_k +\la_k - \La)/ \rho_k}} = 0\\ \Leftrightarrow & \dotp{\al_i}{e^{-(\f_i + \la_i -\La) / \rho_i}} = \sum_{k=1}^K \tfrac{1}{K}\dotp{\al_k}{e^{-(\f_k +\la_k - \La)/ \rho_k}}\\ \Leftrightarrow & \frac{-\la_i + \La}{\rho_i} + \log\dotp{\al_i}{e^{-\f_i / \rho_i}} = \log\Bigg[\sum_{k=1}^K \tfrac{1}{K}\dotp{\al_k}{e^{-(\f_k +\la_k - \La)/ \rho_k}}\Bigg]\triangleq v\\ \Leftrightarrow &-\la_i + \La + \rho_i \log\dotp{\al_i}{e^{-\f_i / \rho_i}} = \rho_i\cdot v. \end{align*} Summing those optimality equations for all $i$, one has $\sum_k (\la_k - \La) = 0$, thus yielding \eq{ \rho_{tot}\cdot v = \sum_k \rho_k \log\dotp{\al_k}{e^{-\f_k / \rho_k}}. } Hence we get \eq{ v = \frac{1}{\rho_{tot}} \sum_k \rho_k \log\dotp{\al_k}{e^{-\f_k / \rho_k}}. } Reusing the optimality condition, we set \eq{ \la_i = \rho_i\log\dotp{\al_i}{e^{-\f_i / \rho_i}} - \frac{\rho_i}{\rho_{tot}} \sum_{k=1}^K \rho_k\log\dotp{\al_k}{e^{-\f_k / \rho_k}}. } Note that this formula verifies $K\La = \sum_k \la_k = 0$. Setting $\tal_k = e^{-(f_k + \la_k) / \rho_k}\al_k$, one has \eq{ m(\tal_k) = \exp\Big(\frac{1}{\rho_{tot}} \sum_k \rho_k \log\dotp{\al_k}{e^{-\f_k / \rho_k}}\Big). } Hence the equality of masses $m(\tal_i) = m(\tal_j)$ for any $(i,j)$. \end{proof} \section{Appendix of Section 3 - Translation Invariant Sinkhorn algorithm} \label{sec-supp-sinkhorn} We focus in this appendix on detailing the properties of the $\Hh$-Sinkhorn algorithm. We recall below some notations: \begin{align*} &\Psi_1: \Bf\mapsto \argmax_{\Bg} \Hh_\epsilon(\Bf,\Bg),\\ &\Psi_2: \Bg\mapsto \argmax_{\Bf} \Hh_\epsilon(\Bf,\Bg),\\ &\Phi: (\Bf,\Bg)\mapsto (\Bf + \la^\star(\Bf,\Bg), \Bg - \la^\star(\Bf,\Bg)),\\ &\Upsilon_1: \Bf\mapsto (\Bf, \Psi_1(\Bf)),\\ &\Upsilon_2: \Bg\mapsto (\Psi_2(\Bg), \Bg).
\end{align*} In this section we focus on the properties of the map $\Psi_1$ (and $\Upsilon_1$) which represents the $\Hh$-Sinkhorn update of $\Bf$. By analogy, those results hold for the map $\Psi_2$ which updates $\Bg$. This section involves the use of several norms, namely the sup-norm $\norm{\cdot}_\infty$ and the Hilbert pseudo-norm $\norm{\f}_\star = \inf_{\la\in\RR} \norm{\f + \la}_\infty$ which is involved in the convergence study of the balanced Sinkhorn algorithm~\cite{knight2008sinkhorn}. The pseudo-norm $\norm{\f - \g}_\star$ is zero iff the functions $\f$ and $\g$ are equal up to a constant. We also define a variant of the Hilbert norm for a pair of functions $(\Bf,\Bg)$, which is definite up to the dual invariance $(\Bf + \la,\Bg-\la)$ of $\Hh$. It reads $\norm{(\Bf,\Bg)}_{\star\star} \triangleq \min_{\la\in\RR} \norm{\Bf + \la}_\infty + \norm{\Bg-\la}_\infty$. \subsection{Generic properties of $\Hh$-Sinkhorn updates} \begin{prop}\label{prop-aprox-eq-psi} The map $\Psi_1(\Bf)$ satisfies the implicit equation % \begin{align*} \Psi_1(\Bf) = -\aprox_{\phi^*_1}\Big(\; -\Smin{\al}{\epsilon}\big(\; \C - \Bf - \la^\star(\Bf, \Psi_1(\Bf)) \;\big) \;\Big) + \la^\star(\Bf, \Psi_1(\Bf)). \end{align*} \end{prop} \begin{proof} Recall the optimality Equation~\eqref{eq-optim-h-sink} transposed to optimality w.r.t. $\Bg$ % \begin{align} e^{\Bg/\epsilon}\dotp{\al}{e^{(\Bf-\C)/\epsilon}}=\nabla\phi_1^*(-\Bg+\la^\star(\Bf,\Bg)). \end{align} % Perform the change of variable $\hg = \Bg - \la^\star(\Bf,\Bg)$, such that the above equation reads % \begin{align} e^{\hg/\epsilon}\dotp{\al}{e^{(\Bf+\la^\star(\Bf,\Bg)-\C)/\epsilon}}=\nabla\phi_1^*(-\hg). \end{align} % One can recognize the optimality condition of $\Ff$-Sinkhorn, thus one has % \begin{align*} \hg = \Bg - \la^\star(\Bf,\Bg) = -\aprox_{\phi^*_1}\big(-\Smin{\al}{\epsilon}\big(\C - \Bf - \la^\star(\Bf,\Bg)\big)\big). \end{align*} Writing $\Bg = \Psi_1(\Bf)$ and adding $\la^\star(\Bf,\Bg)$ on both sides yields the result.
\end{proof} \begin{prop} Assume $(\phi^*_1, \phi^*_2)$ are strictly convex. One has for any $\tau\in\RR$, $\la^\star(\Bf+\tau, \Bg)= \la^\star(\Bf,\Bg+\tau) - \tau$. \end{prop} \begin{proof} The strict convexity yields the uniqueness of $\la^\star$. Writing $\tilde{\la}=\la + \tau$, one has \begin{align*} \arg\max_{\la}\Ff_\epsilon(\Bf+\tau+\la, \Bg - \la)= \arg\max_{\tilde{\la}}\Ff_\epsilon(\Bf+\tilde{\la}, \Bg - \tilde{\la} + \tau) - \tau, \end{align*} % hence the desired relation. \end{proof} \begin{prop}\label{prop-aditivity-psi} One has $\Psi_1(\Bf + \tau) = \Psi_1(\Bf) - \tau$. \end{prop} \begin{proof} It is a combination of the previous two propositions, and of the fact that the $\Hh$-update is uniquely defined. \end{proof} \subsection{Proof of Proposition 4 - Derivation of $\Hh$-Sinkhorn updates in the KL setting} \label{app-proof-formula-h-sink} \begin{proof} We derive the $\Hh_\epsilon$-Sinkhorn optimality condition for $\Bf$ given $\Bg_t$; the equations for $\Bg$ are obtained by swapping the roles of $(\al,\Bf, \rho_1)$ and $(\be,\Bg, \rho_2)$. Recall that thanks to Proposition 1, the optimality condition reads \begin{align*} e^{\Bf/\epsilon}\dotp{\be}{e^{(\Bg_t-\C)/\epsilon}}=\nabla\phi_1^*(-\Bf-\la^\star(\Bf,\Bg_t)) = e^{-(\Bf+\la^\star(\Bf,\Bg_t)) / \rho_1}. \end{align*} In the $\KL$ setting, thanks to Proposition 2, taking the log yields \begin{align*} &\frac{\Bf}{\epsilon} + \log\dotp{\be}{e^{(\Bg_t-\C)/\epsilon}} = -\frac{\Bf}{\rho_1} - \frac{\rho_2}{\rho_1 + \rho_2} \log\Big[\frac{\dotp{\al}{e^{-\Bf / \rho_1}}}{ \dotp{\be}{e^{-\Bg_t / \rho_2}}}\Big],\\ &\Leftrightarrow \frac{\epsilon + \rho_1}{\epsilon\rho_1}\Bf + \frac{\rho_2}{\rho_1 + \rho_2} \log\dotp{\al}{e^{-\Bf / \rho_1}} = -\log\dotp{\be}{e^{(\Bg_t-\C)/\epsilon}} + \frac{\rho_2}{\rho_1 + \rho_2} \log \dotp{\be}{e^{-\Bg_t / \rho_2}}. \end{align*} We recall the definition of the Softmin $\Smin{\al}{\epsilon}(\f) \triangleq -\epsilon\log\dotp{\al}{e^{-\f / \epsilon}}$.
An important property used here is that for any $\tau\in\RR$, one has $\Smin{\al}{\epsilon}(\f + \tau) = \Smin{\al}{\epsilon}(\f) + \tau$. The calculation then reads \begin{align*} &\frac{\epsilon + \rho_1}{\epsilon\rho_1}\Bf - \frac{\rho_2}{\rho_1(\rho_1 + \rho_2)}\Smin{\al}{\rho_1}(\Bf) = \frac{1}{\epsilon}\Smin{\be}{\epsilon}(\C-\Bg_t) - \frac{1}{\rho_1 + \rho_2}\Smin{\be}{\rho_2}(\Bg_t),\\ &\Leftrightarrow \Bf - \frac{\epsilon}{\epsilon+\rho_1}\cdot\frac{\rho_2}{\rho_1 + \rho_2}\Smin{\al}{\rho_1}(\Bf) = \frac{\rho_1}{\rho_1 + \epsilon}\Smin{\be}{\epsilon}(\C-\Bg_t) - \frac{\epsilon}{\epsilon+\rho_1}\cdot\frac{\rho_1}{\rho_1 + \rho_2}\Smin{\be}{\rho_2}(\Bg_t). \end{align*} We now define the function $\hf_{t+1}$ as \begin{align*} \hf_{t+1} \triangleq \frac{\rho_1}{\rho_1 + \epsilon}\Smin{\be}{\epsilon}(\C-\Bg_t) - \frac{\epsilon}{\epsilon+\rho_1}\cdot\frac{\rho_1}{\rho_1 + \rho_2}\Smin{\be}{\rho_2}(\Bg_t), \end{align*} such that the optimality equation now reads \begin{align*} \Bf - \frac{\epsilon}{\epsilon+\rho_1}\cdot\frac{\rho_2}{\rho_1 + \rho_2}\Smin{\al}{\rho_1}(\Bf) = \hf_{t+1}. \end{align*} Define $k \triangleq \tfrac{\epsilon}{\epsilon+\rho_1}\cdot\tfrac{\rho_2}{\rho_1 + \rho_2}$. Note that $\frac{\epsilon}{\epsilon+\rho_1}\cdot\frac{\rho_2}{\rho_1 + \rho_2}\Smin{\al}{\rho_1}(\Bf)$ is in $\RR$, thus there exists some $\tau\in\RR$ such that $\Bf = \hf_{t+1} + \tau$. Using this property in the above equation yields \begin{align*} &\hf_{t+1} + \tau - k\Smin{\al}{\rho_1}(\hf_{t+1} + \tau) = \hf_{t+1},\\ &\Leftrightarrow \tau - k\big(\Smin{\al}{\rho_1}(\hf_{t+1}) + \tau\big) = 0,\\ &\Leftrightarrow \tau(1 - k) = k\Smin{\al}{\rho_1}(\hf_{t+1}),\\ & \Leftrightarrow \tau = \frac{k}{1 - k}\Smin{\al}{\rho_1}(\hf_{t+1}).
\end{align*} We can conclude that \begin{align}\label{eq-general-iter-h-sink} \Bf = \hf_{t+1} + \tau = \hf_{t+1} + \frac{k}{1 - k}\Smin{\al}{\rho_1}(\hf_{t+1}). \end{align} Note that we retrieve the map of Proposition 4 in the case $\rho_1=\rho_2=\rho$. Indeed one has the simplification \begin{align*} \frac{k}{1-k} &= \frac{\epsilon\rho}{2\rho(\epsilon + \rho) - \epsilon\rho}\\ & = \frac{\epsilon}{2(\epsilon + \rho) - \epsilon}\\ &= \frac{\epsilon}{\epsilon + 2\rho}. \end{align*} Thus when $\rho_1=\rho_2=\rho$ the full iteration from $\Bg_t$ to $\Bf_{t+1}$ reads \begin{align*} \hf_{t+1} &= \frac{\rho}{\rho + \epsilon}\Smin{\be}{\epsilon}(\C-\Bg_t) - \frac{1}{2}\frac{\epsilon}{\epsilon+\rho}\Smin{\be}{\rho}(\Bg_t),\\ \Bf_{t+1} &= \hf_{t+1} + \frac{\epsilon}{\epsilon + 2\rho}\Smin{\al}{\rho}(\hf_{t+1}). \end{align*} \end{proof} We reformulate the full update $\Psi_1$ with a single formula instead of the above two formulas. While the two formulas describe the most convenient way to implement the update (because we only store one vector of length $N$ at any time), the following result will be more convenient to derive a convergence analysis. \begin{prop}\label{prop-full-formula-psi} Assume $\rho_1 = \rho_2 = \rho$. One has \begin{align*} \Psi_1(\Bf) = \tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf) + \tfrac{\epsilon}{\epsilon + 2\rho} \Big( \Smin{\be}{\rho}(\tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf)) - \Smin{\al}{\rho}(\Bf) \Big). \end{align*} % Furthermore one has $\Psi_1(\Bf + \la) = \Psi_1(\Bf) - \la$ for any $\la\in\RR$. \end{prop} \subsection{Properties of $\Hh$-Sinkhorn updates in the KL setting} We now focus on the setting of $\KL$ penalties to derive sharper results on the convergence of $\Hh$-Sinkhorn.
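The closed formula of Proposition~\ref{prop-full-formula-psi}, together with the translation equivariance of Proposition~\ref{prop-aditivity-psi}, can be sanity-checked numerically. Below is a minimal NumPy sketch (the helper names \texttt{softmin} and \texttt{psi1} are ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmin(a, eps, H):
    # Smin_{a,eps}(H) = -eps * log <a, exp(-H/eps)>, contracting the first axis of H.
    return -eps * np.log(np.tensordot(a, np.exp(-H / eps), axes=1))

def psi1(f, al, be, C, eps, rho):
    # Closed-form H-Sinkhorn update of the proposition above (rho_1 = rho_2 = rho).
    t, k = rho / (rho + eps), eps / (eps + 2 * rho)
    S = softmin(al, eps, C - f[:, None])  # Smin_{al,eps}(C - f), a function of y
    return t * S + k * (softmin(be, rho, t * S) - softmin(al, rho, f))

n, m, eps, rho = 5, 7, 0.5, 1.0
al, be = rng.random(n) + 0.1, rng.random(m) + 0.1
C = rng.random((n, m))
f, f2, lam = rng.random(n), rng.random(n), 0.7

# Translation equivariance: Psi_1(f + lam) = Psi_1(f) - lam.
assert np.allclose(psi1(f + lam, al, be, C, eps, rho),
                   psi1(f, al, be, C, eps, rho) - lam)
# Non-expansiveness in the sup-norm (checked below in the KL setting).
gap = np.abs(psi1(f, al, be, C, eps, rho) - psi1(f2, al, be, C, eps, rho)).max()
assert gap <= np.abs(f - f2).max() + 1e-12
```

The same script can be reused to probe the contraction in the Hilbert pseudo-norm by centering the outputs before comparing them.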
\begin{prop}\label{prop-psi-nonexp} In the $\KL$ setting with parameters $(\rho_1,\rho_2)$, the operator $\Psi_1$ is non-expansive for the sup-norm $\norm{\cdot}_\infty$, i.e.\ for any $(\Bf,\Bg)$, one has % \begin{align*} \norm{\Psi_1(\Bf) - \Psi_1(\Bg)}_\infty \leq \norm{\Bf - \Bg}_\infty. \end{align*} \end{prop} \begin{proof} We use the formulas given in the proof of Proposition~\ref{prop-conv-h-sink}, in Appendix~\ref{app-proof-formula-h-sink}, and reuse the same notations. Recall from~\cite{sejourne2019sinkhorn} that one has, for any measure $\al$, cost $\C$ and parameters $\epsilon,\rho >0$, % \begin{align*} \norm{\Smin{\al}{\epsilon}(\C-\Bf) - \Smin{\al}{\epsilon}(\C-\Bg)}_\infty &\leq \norm{\Bf - \Bg}_\infty,\\ \norm{\Smin{\al}{\rho}(\Bf) - \Smin{\al}{\rho}(\Bg)}_\infty &\leq \norm{\Bf - \Bg}_\infty. \end{align*} By chaining those inequalities in the quantity $\norm{\Psi_1(\Bf) - \Psi_1(\Bg)}_\infty$ (using Equation~\eqref{eq-general-iter-h-sink}), and because $\norm{\kappa\Bf}_\infty=\kappa\norm{\Bf}_\infty$ for any $\kappa\in\RR_+$, one gets % \begin{align*} \norm{\Psi_1(\Bf) - \Psi_1(\Bg)}_\infty \leq \Xi \norm{\Bf - \Bg}_\infty, \end{align*} % where % % \begin{align*} \Xi &= \Bigg[ \frac{\rho_1}{\epsilon + \rho_1} + \frac{\epsilon}{\epsilon + \rho_1}\frac{\rho_1}{\rho_1 + \rho_2} \Bigg] + \frac{k}{1 - k} \Bigg[ \frac{\rho_1}{\epsilon + \rho_1} + \frac{\epsilon}{\epsilon + \rho_1}\frac{\rho_1}{\rho_1 + \rho_2} \Bigg]\\ &=\Bigg[ \frac{\rho_1}{\epsilon + \rho_1} + \frac{\epsilon}{\epsilon + \rho_1}\frac{\rho_1}{\rho_1 + \rho_2} \Bigg]\Big(1 + \frac{k}{1 - k}\Big)\\ &=\Bigg[ \frac{\rho_1}{\epsilon + \rho_1} + \frac{\epsilon}{\epsilon + \rho_1}\frac{\rho_1}{\rho_1 + \rho_2} \Bigg]\Big(\frac{1}{1 - k}\Big).
\end{align*} % A calculation yields % \begin{align*} \frac{\rho_1}{\epsilon + \rho_1} + \frac{\epsilon}{\epsilon + \rho_1}\frac{\rho_1}{\rho_1 + \rho_2} &= \frac{\rho_1(\epsilon + \rho_1 + \rho_2)}{(\epsilon + \rho_1)(\rho_1 + \rho_2)},\\ \frac{1}{1 - k} &= \frac{(\epsilon + \rho_1)(\rho_1 + \rho_2)}{\rho_1(\epsilon + \rho_1 + \rho_2)}. \end{align*} % Thus $\Xi=1$, hence the non-expansive property. \end{proof} We provide below additional details on the non-expansiveness of the map $\Phi$. \begin{prop}\label{prop-Phi-nonexpansive} The map $\Phi:(\Bf,\Bg)\mapsto(\Bf + \la^\star(\Bf,\Bg), \Bg - \la^\star(\Bf,\Bg))$ is $1$-Lipschitz from the norm $\norm{(\Bf,\Bg)}_{\star\star}$ to the norm $\norm{(\f,\g)}_\infty$, i.e. one has % \begin{align*} \norm{\Bf_1 + \la^\star(\Bf_1,\Bg_1) - \Bf_2 - \la^\star(\Bf_2,\Bg_2)}_\infty + \norm{\Bg_1 - \la^\star(\Bf_1,\Bg_1) - \Bg_2 + \la^\star(\Bf_2,\Bg_2)}_\infty \leq \norm{(\Bf_1,\Bg_1) - (\Bf_2,\Bg_2)}_{\star\star}, \end{align*} % where $\norm{(\Bf,\Bg)}_{\star\star} \triangleq \min_{\la\in\RR} \norm{\Bf + \la}_\infty + \norm{\Bg-\la}_\infty.$ \end{prop} \begin{proof} To prove this statement, first note that we can rewrite $\la^\star$ as \begin{align*} \la^\star(\Bf, \Bg) = \tau_1\Smin{\be}{\rho_2}(\Bg) - \tau_2\Smin{\al}{\rho_1}(\Bf), \end{align*} where $\tau_1=\tfrac{\rho_1}{\rho_1 + \rho_2}$ and $\tau_2=\tfrac{\rho_2}{\rho_1 + \rho_2}$. We consider now that $(\Bf,\Bg)$ are discrete and concatenated to form a vector of size $N+M$. Note that the gradient of $\Smin{\al}{\rho_1}(\Bf)$ reads \begin{align*} \nabla\Smin{\al}{\rho_1}(\Bf)_i = \frac{e^{-\Bf_i / \rho_1}\al_i}{\sum_k e^{-\Bf_k / \rho_1}\al_k}. \end{align*} Using this formula we can compute the Jacobian of $\Phi$, denoted $J\Phi(\Bf,\Bg)$.
It reads \begin{align*} J\Phi(\Bf,\Bg) = \begin{pmatrix} I_N - \tau_2\nabla\Smin{\al}{\rho_1}(\Bf)\mathds{1}_N^\top & \tau_1\nabla\Smin{\be}{\rho_2}(\Bg)\mathds{1}_M^\top\\ \tau_2\nabla\Smin{\al}{\rho_1}(\Bf)\mathds{1}_N^\top & I_M - \tau_1\nabla\Smin{\be}{\rho_2}(\Bg)\mathds{1}_M^\top \end{pmatrix}. \end{align*} A key property of the Jacobian is that for any $\la\in\RR$, one has $J\Phi(\Bf,\Bg)^\top(\la\mathds{1}_N,-\la\mathds{1}_M)=0$. We now derive the Lipschitz bound. We define $(\Bf_t,\Bg_t) = (\Bf_1 + t(\Bf_2 - \Bf_1), \Bg_1 + t(\Bg_2 - \Bg_1))$. The computation reads \begin{align*} \norm{\Phi(\Bf_1,\Bg_1) - \Phi(\Bf_2, \Bg_2)}_\infty &= \norm{ \int_0^1 \frac{\d\Phi(\Bf_t,\Bg_t)}{\d t}\d t }_\infty\\ &=\norm{J\Phi(\Bf_t,\Bg_t)^\top\big((\Bf_2,\Bg_2) - (\Bf_1,\Bg_1)\big)}_\infty\\ &=\norm{J\Phi(\Bf_t,\Bg_t)^\top\big((\Bf_2,\Bg_2) - (\Bf_1,\Bg_1) + (\la\mathds{1}_N,-\la\mathds{1}_M) \big) }_\infty\\ &\leq\norm{J\Phi(\Bf_t,\Bg_t)}_\infty \Big(\min_{\la\in\RR} \norm{\Bf_2 - \Bf_1 + \la}_\infty + \norm{\Bg_2 - \Bg_1-\la}_\infty\Big)\\ &\leq\norm{J\Phi(\Bf_t,\Bg_t)}_\infty \norm{(\Bf_2,\Bg_2) - (\Bf_1,\Bg_1)}_{\star\star}. \end{align*} Since $\norm{J\Phi(\Bf_t,\Bg_t)}_\infty\leq 1$, we get the desired Lipschitz property. \end{proof} We define $\Upsilon_1(\Bf) \triangleq (\Bf, \Psi_1(\Bf))$ where $\Psi_1$ is detailed in Proposition~\ref{prop-conv-h-sink}. Similarly, one can define $\Upsilon_2(\Bg) \triangleq (\Psi_2(\Bg), \Bg)$. We present properties for $\Upsilon_1$, but they analogously hold for $\Upsilon_2$. \begin{prop}\label{prop-conj-redisual} Consider any functions $(\Bf,\Bg)$. 
One has % \begin{align*} \norm{\Upsilon_1(\Bf) - \Upsilon_1(\Bg)}_{\star\star} \leq 2\norm{\Bf - \Bg}_\star. \end{align*} \end{prop} \begin{proof} By definition of $\norm{\cdot}_{\star\star}$, one has % \begin{align*} \norm{\Upsilon_1(\Bf) - \Upsilon_1(\Bg)}_{\star\star} &=\norm{(\Bf, \Psi_1(\Bf)) - (\Bg, \Psi_1(\Bg))}_{\star\star}\\ &= \inf_{\la\in\RR} \norm{\Bf - \Bg + \la}_\infty + \norm{\Psi_1(\Bf) - \Psi_1(\Bg) - \la}_\infty\\ &= \inf_{\la\in\RR} \norm{(\Bf + \la) - \Bg}_\infty + \norm{\Psi_1(\Bf + \la) - \Psi_1(\Bg)}_\infty\\ &\leq 2 \inf_{\la\in\RR} \norm{(\Bf + \la) - \Bg}_\infty\\ &= 2\norm{\Bf - \Bg}_\star, \end{align*} where we use the relation $\Psi_1(\Bf + \la) = \Psi_1(\Bf) - \la$ from Proposition~\ref{prop-aditivity-psi}, and where the inequality is given by Proposition~\ref{prop-psi-nonexp}. % Hence we get the desired bound, which ends the proof. \end{proof} Before providing a convergence result, we detail the contraction properties of the map $\Psi_1$ w.r.t.\ the Hilbert norm $\norm{\cdot}_\star$. \begin{prop}\label{prop-psi-contractive-hilbert} Consider two functions $(\Bf,\Bg)$; one has % \begin{align*} \norm{\Psi_1(\Bf) - \Psi_1(\Bg)}_\star \leq \frac{\rho}{\epsilon + \rho} \kappa_\epsilon(\al)\norm{\Bf - \Bg}_\star, \end{align*} % where $\kappa_\epsilon(\al)<1$ is the contraction constant of the Softmin~\cite{knight2008sinkhorn}, which satisfies % \begin{align*} \norm{\Smin{\al}{\epsilon}(\Bf) - \Smin{\al}{\epsilon}(\Bg)}_\star \leq \kappa_\epsilon(\al)\norm{\Bf - \Bg}_\star.
\end{align*} \end{prop} \begin{proof} Thanks to Proposition~\ref{prop-full-formula-psi}, we have % \begin{align*} \Psi_1(\Bf) &= \tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf) + \tfrac{\epsilon}{\epsilon + 2\rho} \Big( \Smin{\be}{\rho}(\tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf)) - \Smin{\al}{\rho}(\Bf) \Big)\\ &= \tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf) + T(\Bf), \end{align*} % where $T(\Bf)\in\RR$ is a constant translation (note that the only term outputting a function is the one involving $\C(x,y)$). % Thus, because the Hilbert norm is invariant to translations, one has % \begin{align*} \norm{\Psi_1(\Bf) - \Psi_1(\Bg)}_\star = \norm{\tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf) - \tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bg)}_\star. \end{align*} % Thanks to the results of~\cite{chizat2016scaling, knight2008sinkhorn}, one has % \begin{align*} \norm{\tfrac{\rho}{\rho+\epsilon}\Bf - \tfrac{\rho}{\rho+\epsilon}\Bg}_\star &= \tfrac{\rho}{\rho+\epsilon}\norm{\Bf - \Bg}_\star,\\ \norm{\Smin{\al}{\epsilon}(\Bf) - \Smin{\al}{\epsilon}(\Bg)}_\star &\leq \kappa_\epsilon(\al)\norm{\Bf - \Bg}_\star, \end{align*} % which ends the proof of the contraction property of $\Psi_1$. \end{proof} We now prove a convergence result for the $\Hh$-Sinkhorn algorithm, before providing a quantitative rate. \begin{thm} The iterates of the map $\Psi_2\circ\Psi_1$ converge to a fixed point $\Bf^\star$ such that $\Bf^\star=\Psi_2\circ\Psi_1(\Bf^\star)$, where $\Bf^\star$ is defined up to a translation. % The function $\Bg^\star=\Psi_1(\Bf^\star)$ also satisfies $\Bg^\star=\Psi_1\circ\Psi_2(\Bg^\star)$. % Furthermore the functions $(\f^\star,\g^\star)=\Phi(\Bf^\star,\Bg^\star)$ are fixed points of the $\Ff$-Sinkhorn updates, and are thus optimizers of the functional $\Ff$.
\end{thm} \begin{proof} Thanks to Proposition~\ref{prop-psi-contractive-hilbert}, we know that the map $\Psi_2\circ\Psi_1$ is contractive for the Hilbert norm, thus there is uniqueness of the fixed point. % Because the map satisfies $\Psi_2\circ\Psi_1(\Bf + \la) = \Psi_2\circ\Psi_1(\Bf) + \la$, one can assume without loss of generality that all iterates $\Bf_t$ satisfy $\Bf_t(x_0)=0$ for some $x_0$ in the support of $\al$. % Thus, under similar assumptions as in~\cite{sejourne2019sinkhorn}, we get that the iterates lie in a compact set, which yields existence of a fixed point $\Bf^\star$ satisfying $\Psi_2\circ\Psi_1(\Bf^\star)=\Bf^\star$. % Defining $\Bg^\star=\Psi_1(\Bf^\star)$ and composing the previous relation with $\Psi_1$, we get $\Bg^\star=\Psi_1\circ\Psi_2(\Bg^\star)$. Recall from Proposition~\ref{prop-aprox-eq-psi} that $\Bf^\star$ satisfies the relation % \begin{align*} \Bg^\star = \Psi_1(\Bf^\star) = \frac{\rho}{\epsilon+\rho}\Big(\; \Smin{\al}{\epsilon}\big(\; \C - \Bf^\star - \la^\star(\Bf^\star, \Bg^\star) \;\big) \;\Big) + \la^\star(\Bf^\star, \Bg^\star). \end{align*} % Thus, defining $(\f^\star,\g^\star) = \Phi(\Bf^\star,\Bg^\star) = (\Bf^\star + \la^\star(\Bf^\star, \Bg^\star),\Bg^\star - \la^\star(\Bf^\star, \Bg^\star))$, the above equation can be rephrased as % \begin{align*} \g^\star = \frac{\rho}{\epsilon+\rho}\Big(\; \Smin{\al}{\epsilon}\big(\; \C - \f^\star \;\big) \;\Big), \end{align*} % which is exactly the fixed point equation of $\Ff$-Sinkhorn, thus $(\f^\star,\g^\star)$ are optimal dual potentials for $\Ff$. \end{proof} Based on the above results, we can prove the following convergence rate. \begin{thm} Write $\Bf^\star$ for the fixed point of the map $\Psi_2\circ\Psi_1$. Take $\Bf_{t}$ obtained by $t$ iterations of the map $\Psi_2\circ\Psi_1$, starting from the function $\f_0$.
One has % \begin{align*} \norm{\Phi(\Upsilon_1(\Bf_t)) - \Phi(\Upsilon_1(\Bf^\star))}_\infty \leq \norm{\Upsilon_1(\Bf_t) - \Upsilon_1(\Bf^\star)}_{\star\star} \leq 2\norm{\Bf_t - \Bf^\star}_\star \leq 2\bar{\kappa}^{t}\norm{\Bf_0 - \Bf^\star}_\star, \end{align*} where $\bar{\kappa}\triangleq (1 + \tfrac{\epsilon}{\rho_1})^{-1}\kappa_\epsilon(\al)(1 + \tfrac{\epsilon}{\rho_2})^{-1}\kappa_\epsilon(\be)$. \end{thm} \begin{proof} \begin{itemize} \item The first inequality is proved in Proposition~\ref{prop-Phi-nonexpansive}. \item The second inequality is given by Proposition~\ref{prop-conj-redisual}. \item The last inequality is obtained by applying Proposition~\ref{prop-psi-contractive-hilbert} to both $\Psi_1$ and $\Psi_2$ consecutively to get the contraction rate $\bar{\kappa}$ for any $(\rho_1,\rho_2)$, i.e. we have % \begin{align*} \norm{\Bf_t - \Bf^\star}_\star &=\norm{\Psi_2\circ\Psi_1(\Bf_{t-1}) - \Psi_2\circ\Psi_1(\Bf^\star)}_\star\\ &\leq \bar{\kappa}\norm{\Bf_{t-1} - \Bf^\star}_\star. \end{align*} % Iterating this bound by induction over all iterations, we get the last bound of the statement, which ends the proof. \end{itemize} \end{proof} \subsection{Interesting norms and closed formulas} We study now properties of the norm $\norm{(\Bf,\Bg)}_{\star\star} \triangleq \min_{\la\in\RR} \norm{\Bf + \la}_\infty + \norm{\Bg-\la}_\infty$. It shares connections with the Hilbert norm $\norm{\Bf}_\star \triangleq \min_{\la\in\RR} \norm{\Bf + \la}_\infty$. We first prove the following lemma on the Hilbert norm.% \begin{lem}\label{lem-hilbert-closed-form} One has $\norm{\Bf}_\star=\tfrac{1}{2}(\max\Bf - \min\Bf)$.
\end{lem} \begin{proof} We will use the relations $\norm{\Bf}_\infty = \max(\max\Bf, -\min\Bf)$ and $\max(x,y)=\tfrac{1}{2}(x + y + |x-y|)$.% % Applying those relations to the setting of the Hilbert norm yields, for any $\la\in\RR$, % \begin{align*} \norm{\Bf + \la}_\infty &= \max(\max\Bf + \la, -\min\Bf - \la)\\ &= \tfrac{1}{2}(\max\Bf - \min\Bf + |\max\Bf + \min\Bf + 2\la|). \end{align*} % Since the Hilbert norm is obtained by minimizing over $\la\in\RR$, we see that the minimum is attained at the unique value $\la^\star=-\tfrac{1}{2}(\max\Bf + \min\Bf)$. % Thus the absolute value cancels out and it yields $\norm{\Bf}_\star=\tfrac{1}{2}(\max\Bf - \min\Bf)$. \end{proof} We focus now on $\norm{(\Bf,\Bg)}_{\star\star}$. \begin{lem}\label{lem-starstar-closed-form} One has $\norm{(\Bf,\Bg)}_{\star\star} = \norm{\Bf}_\star + \norm{\Bg}_\star + \tfrac{1}{2}|\max\Bf + \min\Bf + \max\Bg + \min\Bg| = \norm{\Bf\oplus\Bg}_\infty$. \end{lem} \begin{proof} The proof reuses the relations used in the previous lemma. One has for any $\la\in\RR$ % \begin{align*} \norm{\Bf + \la}_\infty + \norm{\Bg-\la}_\infty &= \max(\max\Bf + \la,-\min\Bf - \la) + \max(\max\Bg-\la,-\min\Bg + \la)\\ &= \tfrac{1}{2}(\max\Bf - \min\Bf + |\max\Bf + \min\Bf + 2 \la|) + \tfrac{1}{2}(\max\Bg - \min\Bg + |\max\Bg + \min\Bg - 2 \la|). \end{align*} % Thanks to Lemma~\ref{lem-scalar-solve-starstar}, the minimization in $\la$ attains the value % \begin{align*} \norm{(\Bf,\Bg)}_{\star\star} &= \tfrac{1}{2}(\max\Bf - \min\Bf) + \tfrac{1}{2}(\max\Bg - \min\Bg) + \tfrac{1}{2}|\max\Bf + \min\Bf + \max\Bg + \min\Bg|. \end{align*} % Reusing Lemma~\ref{lem-hilbert-closed-form} allows us to rewrite the first two terms as Hilbert norms. % To get the last equality with $\norm{\Bf\oplus\Bg}_\infty$, note that $\max(\Bf\oplus\Bg) = \max\Bf + \max\Bg$, and that the above expression reads $\max(x,y)$ with $x= \max\Bf + \max\Bg$ and $y=-\min\Bf-\min\Bg$.
\end{proof} We end with the proof of the lemma used in the previous demonstration. \begin{lem}\label{lem-scalar-solve-starstar} For any $(a,b)\in\RR^2$, one has $\min_{\la\in\RR} |a + \la| + |b - \la| = |a + b|$, which is attained for any $\la\in[\min(-a,b), \max(-a,b)]$. \end{lem} \begin{proof} We study all cases depending on the signs inside the absolute values. % We define $V(\la)\triangleq |a + \la| + |b - \la|$. % \paragraph{Case 1: $(a+\la\geq 0)$ and $(b-\la\geq 0)$.} % It is equivalent to $\la\in[-a, b]$, which is non-empty when $\min(-a,b) = -a$ and $\max(-a,b) = b$. % One has $V(\la) = a + \la + b - \la = a + b = |a + b|$ because in this case $-a\leq b$. % \paragraph{Case 2: $(a+\la\leq 0)$ and $(b-\la\leq 0)$.} % It is equivalent to $\la\in[b, -a]$, which is non-empty when $\min(-a,b) = b$ and $\max(-a,b) = -a$. % One has $V(\la) = -a - \la - b + \la = -(a + b)= |a + b|$ because in this case $-a\geq b$. % \paragraph{Case 3: $(a+\la\geq 0)$ and $(b-\la\leq 0)$.} % It is equivalent to $\la\in[\max(-a, b), +\infty)$. % One has $V(\la) = a + \la - b + \la = a - b + 2\la$, which is minimized for $\la=\max(-a, b)$. It yields $$V(\la)=a - b + 2\max(-a, b) = a - b + (b - a + |a + b|) = |a + b|.$$ % \paragraph{Case 4: $(a+\la\leq 0)$ and $(b-\la\geq 0)$.} % It is equivalent to $\la\in(-\infty, \min(-a, b)]$. % One has $V(\la) = -a - \la + b - \la = b - a - 2\la$, which is minimized at $\la=\min(-a,b)$. % It yields $$V(\la)=b - a - 2\min(-a, b) = b - a - (b - a - |a + b|) = |a + b|.$$ \paragraph{Conclusion.} % Altogether, the four cases show that $\min_{\la\in\RR}V(\la) = |a + b|$, and that any $\la\in[\min(-a,b), \max(-a,b)]$ attains this minimum. \end{proof} \subsection{Experiments - Combining Sinkhorn with Anderson acceleration} Anderson acceleration~\cite{anderson1965iterative} applies to any iterative map of the form $x_{t+1} = T(x_t)$ for $x_t\in\RR^d$.
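The extrapolation scheme described in this subsection can be sketched in a few lines of NumPy on a toy linear contraction $T(x)=Ax+b$; the map, the seed and the function name below are illustrative and are not the Sinkhorn setting of the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def anderson_extrapolate(X, TX, r=1e-7):
    # One Anderson step from K stored iterates (columns of X) and their images TX.
    U = TX - X                                  # residuals u_k = T(x_k) - x_k
    K = U.shape[1]
    w = np.linalg.solve(U.T @ U + r * np.eye(K), np.ones(K))
    c = w / w.sum()                             # regularized closed form c*_r
    return X @ c                                # extrapolate: sum_k c_k x_k

# Toy contraction T(x) = A x + b, with fixed point xstar = (I - A)^{-1} b.
d, K = 8, 4
A = np.diag(np.linspace(0.1, 0.95, d))
b = rng.standard_normal(d)
xstar = np.linalg.solve(np.eye(d) - A, b)

X = np.empty((d, K))
X[:, 0] = rng.standard_normal(d)
for k in range(1, K):                           # K plain fixed-point iterations
    X[:, k] = A @ X[:, k - 1] + b
TX = A @ X + b[:, None]
xa = anderson_extrapolate(X, TX)

# For a linear map the residual of the extrapolate is exactly U c*, so (up to
# the small regularization r) it cannot exceed the last plain residual.
res_anderson = np.linalg.norm(A @ xa + b - xa)
res_plain = np.linalg.norm(TX[:, -1] - X[:, -1])
assert res_anderson**2 <= res_plain**2 + 1e-6
```

On such toy examples the extrapolated point is also typically much closer to the fixed point than the last plain iterate, which is the behavior observed on the Sinkhorn maps in the figure below.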
At a given iterate $x_t$, it consists in storing $K$ residuals of the form $u_k=T(x_{t+k}) - x_{t+k}$ for $k=0,\ldots,K-1$ as a matrix $U \triangleq [u_0, \ldots, u_{K-1}]$, and in finding the combination of the residuals which satisfies the following criterion \begin{align*} c^\star\in\arg\min_{\mathds{1}^\top c = 1} \norm{Uc}_2, \end{align*} where $c\in\RR^K$. Then one defines the next iterate as $x_{t+1} = \sum_{k=0}^{K-1} c_k x_{t+k}$. Such a procedure is known to converge faster to a fixed point than the standard iterations~\cite{scieur2016regularized}. To ensure the convergence and the well-posedness of $c^\star$, it is common to regularize the problem as \begin{align*} c^\star\in\arg\min_{\mathds{1}^\top c = 1} c^\top(U^\top U + r I)c, \end{align*} where $r\geq 0$ is the regularization parameter. In this case we have the closed form \begin{align*} c^\star_r \triangleq \frac{(U^\top U + r I)^{-1}\mathds{1}}{\mathds{1}^\top(U^\top U + r I)^{-1}\mathds{1}}. \end{align*} The interest of Anderson acceleration is that the above extrapolation only amounts to inverting a small matrix of size $K\times K$. We provide below an experiment on the estimation of the contraction rate similar to Figure 1. We take $K=4$ and $r=10^{-7}$. We test the acceleration on each version of Sinkhorn, and we observe that it yields a faster convergence for all three versions of $(\Ff_\epsilon,\Gg_\epsilon,\Hh_\epsilon)$-Sinkhorn. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{sections/figures/plot_log_contraction_rate_anderson.pdf} \caption{\textit{ Estimation of the contraction rate for the Anderson acceleration applied to $(\Ff_\epsilon,\Gg_\epsilon,\Hh_\epsilon)$-Sinkhorn, compared with the contraction rate of $(\Ff_\epsilon,\Gg_\epsilon,\Hh_\epsilon)$-Sinkhorn. Dashed lines represent the accelerated version and dotted lines the 'standard' algorithm.
We take $K=4$ iterations for Anderson extrapolation, and we regularize with $r=10^{-7}$.} } \end{figure} \section{Frank-Wolfe solver in 1-D} \label{sec-fw} In this section, we derive an efficient 1-D solver using Frank-Wolfe's algorithm in the unregularized setting $\epsilon=0$. Frank-Wolfe or conditional gradient method~\cite{frank1956algorithm} minimizes a smooth convex function on a compact convex set by linearizing the function at each iteration. It is tempting to apply F-W's algorithm to solve the UOT problem since the resulting linearized problem is a balanced OT problem, which itself can be solved efficiently in 1-D.
One cannot however directly apply F-W to $\Ff_0$ because the associated constraint set $f \oplus g \leq \C$ (i.e. $\f_i+\g_j\leq\C_{i,j}$) is a priori unbounded, since it is left unchanged by the translation $(f+\la,g-\la)$. We thus propose to rather apply it on the translation invariant functional $\Hh_0$, which results in an efficient numerical scheme, which we now detail. \paragraph{F-W for UOT.} We apply F-W to the problem $\sup_{\Bf\oplus\Bg\leq\C} \Hh_0(\Bf,\Bg)$. While the constraint set is a priori unbounded, we will show that the iterates remain well defined nevertheless. The iterations of F-W read $$ (\Bf_{t+1},\Bg_{t+1}) = (1-\ga_t) (\Bf_t,\Bg_t) + \ga_t (r_{t},s_{t}) $$ for some step size $\ga_t>0$ where $(f_{t},g_{t})$ are solutions of a Linear Minmization Oracle (LMO), $$ (r_{t},s_{t}) \in \uargmin{r \oplus s \leq C} \dotp{ (r,s) }{ \nabla \Hh_0(\Bf_t,\Bg_t) }. $$ Thanks to Proposition~\ref{prop-equiv-mass-trans}, this LMO thus reads \begin{align} &(r_{t},s_{t}) \in \uargmin{r \oplus s \leq C} \dotp{r}{\tilde\al_t} + \dotp{s}{\tilde\be_t}, \\ \qwhereq &\choice{ \tal_t \triangleq \nabla\phi_1^*(-\Bf_t-\la^\star(\Bf_t,\Bg_t))\al, \\ \tbe_t \triangleq \nabla\phi_2^*(-\Bg_t+\la^\star(\Bf_t,\Bg_t))\be. }\label{eq:fw-updated-histo} \end{align} It is thus the solution of a balanced OT problem between two histograms with equal masses. Hence the iteration of this F-W are well-defined. Recall that $\la^\star$ is computable in closed form for $\KL$ (Proposition~\ref{prop-kl-opt-trans}) or via a Newton scheme for smooth $\phi^*$. Note that this approach holds for any measures defined on any space. Thus one could use any algorithm such as network simplex or Sinkhorn. While the computational gain of this approach is not clear in general, we propose to focus on the setting of 1-D data, where the LMO is particularly fast to solve. \if 0 Fix a current iterate $(\f_0,\g_0)$ such that $\f_0\oplus\g_0\leq\C$. 
Since the envelope theorem applies for $\Hh$ (Proposition~\ref{prop-equiv-mass-trans}), the LMO of UOT reads \begin{align} \sup_{\f\oplus\g\leq\C} &\dotp{\al}{\nabla\phi_1^*(-\f_0-\la^\star(\f_0,\g_0))\cdot\f}\nonumber \\ &+ \dotp{\be}{\nabla\phi_2^*(-\g_0+\la^\star(\f_0,\g_0))\cdot\g}. \end{align} It is exactly $\OT(\tal, \tbe)$, where $\tal=\nabla\phi_1^*(-\f_0-\la^\star(\f_0,\g_0))\al$ and $\tbe=\nabla\phi_2^*(-\g_0+\la^\star(\f_0,\g_0))\be$. Note that one must have $m(\tal) = m(\tbe)$ in order to have a finite value for $\OT(\tal, \tbe)$, otherwise it is $+\infty$. Such a property is guaranteed by Proposition~\ref{prop-equiv-mass-trans}. \fi \begin{algorithm}[t] \caption{-- \textbf{SolveOT1D($x$, $\al$, $y$, $\be$, $\C$)} }\label{alg-ot-lmo} \small{ \textbf{Input:} measures $(x,\al,N)$ and $(y,\be,M)$, cost $\C$\\ \textbf{Output:} primal-dual solutions $(\pi, \f,\g)$\\ \begin{algorithmic}[1] \STATE Set $\pi,\, \f,\, \g \leftarrow 0,\, 0,\, 0$ \STATE Set $\g_1,\, a,\, b,\, i,\, j \leftarrow \C(x_1,y_1),\, \al_1,\, \be_1,\, 1,\, 1$ \WHILE{$i<N$ or $j<M$} \IF{($a\geq b$ and $i<N$) or ($j=M$)} \STATE $\pi_{i,j},\, b\leftarrow a,\, b - a$ \STATE $i\leftarrow i + 1$ \STATE $\f_i,\, a\leftarrow \C(x_i, y_j) - \g_j,\, \al_i$ \ELSIF{($a>b$ and $j<M$) or ($i=N$)} \STATE $\pi_{i,j},\, a\leftarrow b,\, a - b$ \STATE $j\leftarrow j + 1$ \STATE $\g_j,\, b\leftarrow \C(x_i, y_j) - \f_i,\, \be_j$ \ENDIF \ENDWHILE \STATE Return $(\pi,\f,\g)$. \end{algorithmic} } \end{algorithm} \paragraph{The 1-D case.} Algorithm~\ref{alg-ot-lmo} details a fast and exact $O(N+M)$ time solver for 1-D optimal transport when $C_{i,j}=|x_i-y_j|^p$ ($p\geq 1$) and the points are already sorted. It can thus be used to compute the LMO for 1-D UOT problems. Algorithm~\ref{algo-fw} details the resulting F-W algorithm for 1-D UOT.
It uses either the standard step size $\gamma_t=\tfrac{2}{2+t}$ or a line search optimizing $$ \ga \in [0,1] \mapsto \Hh_0( (1-\ga) \Bf_t + \ga r_t, (1-\ga) \Bg_t + \ga s_t ). $$ For a KL divergence, the computation of $(\tal_t,\tbe_t)$ is in closed form and requires $O(N+M)$ operations, while for other divergences, it can be obtained using a few iterations of a Newton solver. The induced cost is in any case comparable with that of the SolveOT1D sub-routine. We also test the Pairwise FW (PFW) variant~\cite{lacoste2015global}, which we detail in the Appendix. This variant requires storing all the iterates of the algorithm (thus being memory-intensive at high precision), but ensures linear convergence under weaker conditions than FW. \begin{figure*}[h!] \centering \begin{tabular}{c@{}c@{}c} {\includegraphics[width=0.3\linewidth]{sections/figures/sequence_marginals_fw_iter_0.pdf}} & {\includegraphics[width=0.3\linewidth]{sections/figures/sequence_marginals_fw_iter_1.pdf}} & {\includegraphics[width=0.3\linewidth]{sections/figures/sequence_marginals_fw_iter_2.pdf}} \end{tabular} \caption{ \textit{ Evolution of the plan's marginals at the first iterations. The inputs $(\al,\be)$ are the dashed lines, the optimal marginals $(\pi_1^\star,\pi_2^\star)$ are dotted in cyan/magenta. The initialization is $(\f_0,\g_0)=(0,0)$, thus $(\tal_0,\tbe_0)=(\al,\be)$. The filled area is the error between $(\tal_t,\tbe_t)$ and $(\pi_1^\star,\pi_2^\star)$. } } \label{fig-iter_fw} \end{figure*} We illustrate Algorithm~\ref{algo-fw} in Figure~\ref{fig-iter_fw} from the primal point of view. At optimality, in the $\KL$ setting, one has $\pi_1^\star=e^{-\f^\star / \rho}\al$ and $\pi_2^\star=e^{-\g^\star / \rho}\be$. Thus we can estimate suboptimal marginals as $\pi_{1,t}=e^{-\f_t / \rho}\al$ and $\pi_{2,t}=e^{-\g_t / \rho}\be$, where $(\f_t,\g_t)$ are the FW iterates.
We observe that the term $e^{-\f / \rho}$ acts as a normalization on the marginals. We also observe that on these examples, the marginals are close to $(\pi_1^\star,\pi_2^\star)$ after only 2 iterations (Iteration zero is the initialization $(\pi_{1,0},\pi_{2,0})=(\al,\be)$). \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{sections/figures/plot_fw_comparison.pdf} \caption{\textit{ Comparison of FW with and without line-search and PFW during $10{,}000$ iterations. The computation time per iteration is averaged and reported. We display the error $\norm{\f_t-\f^\star}_\infty$.} } \label{fig-comparison-fw-ver} \end{figure} \begin{algorithm}[t] \caption{-- \textbf{SolveUOT($x$, $\al$, $y$, $\be$, $\C$, $\rho_1$, $\rho_2$)} }\label{algo-fw} \small{ \textbf{Input:} sorted $(x,y)$, histograms $(\al,\be)$, cost $\C$. \\ \textbf{Output:}~dual potentials $(\f_t,\g_t)$ \newline \vspace{-4mm} \begin{algorithmic}[1] \STATE Initialize $(\Bf_0,\Bg_0)$, $t=0$ \WHILE{$(\Bf_t,\Bg_t)$ has not converged} \STATE Compute $(\tal_t,\tbe_t)$ using~\eqref{eq:fw-updated-histo}. \STATE $(r_t,s_t) \leftarrow$ SolveOT1D($x$, $\tal_t$, $y$, $\tbe_t$, $\C$) \STATE $\gamma_t = \text{LineSearch}(\Bf_t, \Bg_t, r_t, s_t) \text{ or } \gamma_t=\tfrac{2}{2+t}$ \STATE $(\Bf_{t+1},\Bg_{t+1}) = (1-\ga_t) (\Bf_t,\Bg_t) + \ga_t (r_{t},s_{t})$, $t \leftarrow t+1$ \ENDWHILE \STATE Return $(\f_t,\g_t) \triangleq (\Bf_t+\la^\star(\Bf_t,\Bg_t), \Bg_t-\la^\star(\Bf_t,\Bg_t))$. \end{algorithmic} } \end{algorithm} We showcase a comparison of FW (with and without line-search on $\Hh_0$) and PFW on Figure~\ref{fig-comparison-fw-ver}. We solve the $\UOT$ problem for $\rho=10^{-1}$ between the 1-D measures displayed in Figure~\ref{fig-iter_fw}, each one having $5{,}000$ samples. We run $10{,}000$ iterations and compare the potential $\f_t$ to a precomputed $\f^\star$. We report the computation time to see whether the gain of a line-search is worth the extra computation time.
In this example we observe that FW with line-search outperforms the other variants. We also observe that the three variants have linear convergence. \paragraph{Comparison of performance.} We now compare our implementation with the Sinkhorn algorithm, which is the reference algorithm, and is especially tailored for GPU architectures. We consider two histograms of size $N=M=200$, set $\rho=1$, compute the optimal potentials $(\f^\star,\g^\star)$ with CVXPY~\cite{diamond2016cvxpy}, run $5000$ iterations of FW without line-search and $\Hh$-Sinkhorn and report $\norm{\f_t-\f^\star}_\infty$. We consider Sinkhorn only on GPU and use stabilized soft-max operations since we wish to approximate the unregularized problem where $\epsilon$ should be small. We also perform log-stable updates in FW when we compute the translation $\la^\star$. We display the result on Figure~\ref{fig-sinkhorn-vs-fw}, where the horizontal axis refers to computation time. Note that even for problems of such a small size, the $O(N^2)$ cost per iteration of Sinkhorn dominates the $O(N)$ cost of FW. FW provides a better estimate of $\f^\star$ during all the iterations for a wide range of $\epsilon$. Additional results in the Appendix show that this behaviour is similar for other values of $\rho$. \section{Appendix of Section 4 - Frank-Wolfe solver in 1-D} \label{seq-supp-fw} \subsection{Details on the Pairwise Frank-Wolfe algorithm} Frank-Wolfe or conditional gradient methods~\cite{frank1956algorithm} aim at solving $\min_{x\in conv(\Aa)} \Ff(x)$, where $\Aa$ is called the set of atoms. To do so, one solves a linear minimization oracle (LMO) at the current iterate $x_t$, which reads $v_t\in\arg\min_{v\in\Aa} \dotp{\nabla\Ff(x_t)}{v}$. The next iterate $x_{t+1}$ is updated as a convex combination of $(x_t,v_t)$ via line-search $\gamma_t\in\arg\min_{\gamma\in[0,1]} \Ff(x_t + \gamma d_t)$, where $d_t=v_t-x_t$ is the descent direction.
It is also possible to skip the line-search and set $\gamma_t=\tfrac{2}{2+t}$, which gives a $O(\tfrac{1}{t})$ approximation rate of the optimizer for gradient-Lipschitz functions. We also consider in this paper the Pairwise FW (PFW) variant~\cite{lacoste2015global}. We store at each iterate the atom $v_t$ and its weight $w_t\geq 0$ in the convex combination as a dictionary $\Vv_t$, i.e. at time $t$ one has $x_t = \sum_{k=1}^t w_k v_k$, and $\sum_{k=1}^t w_k = 1$. What changes is the descent direction in the line-search, $d_t=v_t-s_{t^\star}$ where $s_{t^\star}\in\arg\max_{s\in\Vv_t}\dotp{\nabla\Ff(x_t)}{s}$. The line-search seeks $\gamma\in[0,w_{t^\star}]$ instead of $[0,1]$ to ensure that $x_{t+1}=x_t + \gamma d_t$ remains a convex combination. One can interpret this variant as removing previous atoms which became irrelevant and replacing them with more optimal ones. There is an affine-invariant analysis of this variant in~\cite{lacoste2015global} which ensures linear convergence under milder assumptions than the conditions of FW with step $\gamma_t=\tfrac{2}{2+t}$. \subsection{Additional figures on the comparison FW v.s. Sinkhorn} We provide below a figure in the same setting as Figure 4, except that $\rho$ is set to $10^{-1}$ and $10$. \begin{figure*}[h!] \centering \begin{tabular}{c@{}c@{}c} {\includegraphics[width=0.3\linewidth]{sections/figures/plot_bench_fw_sink_wfr-1.pdf}} & {\includegraphics[width=0.3\linewidth]{sections/figures/plot_bench_fw_sink_wfr+0.pdf}} & {\includegraphics[width=0.3\linewidth]{sections/figures/plot_bench_fw_sink_wfr+1.pdf}} \end{tabular} \caption{ \textit{ Same experiments as in Figure 4. The center plot is Figure 4 when $\rho=1$, while the left figure sets $\rho=10^{-1}$ and the right one sets $\rho=10$.
} } \end{figure*} \section{Appendix of Section 2 - Translation Invariance} \subsection{Detailed proof of Proposition 2} \begin{proof} Recall the optimality condition in the $\KL$ setting which reads \begin{align*} &\dotp{\al}{e^{-\frac{\Bf +\la^\star}{\rho_1}}} = \dotp{\be}{e^{-\frac{\Bg -\la^\star}{\rho_2}}},\\ &\Leftrightarrow e^{-\frac{\la^\star}{\rho_1}}\dotp{\al}{e^{-\frac{\Bf}{\rho_1}}} = e^{+\frac{\la^\star}{\rho_2}}\dotp{\be}{e^{-\frac{\Bg}{\rho_2}}},\\ &\Leftrightarrow -\frac{\la^\star}{\rho_1} + \log\dotp{\al}{e^{-\frac{\Bf}{\rho_1}}} = \frac{\la^\star}{\rho_2} + \log\dotp{\be}{e^{-\frac{\Bg}{\rho_2}}},\\ &\Leftrightarrow \la^\star\big(\frac{1}{\rho_1} + \frac{1}{\rho_2}\big) = \log\Bigg[\frac{\dotp{\al}{e^{-\frac{\Bf}{\rho_1}}}}{\dotp{\be}{e^{-\frac{\Bg}{\rho_2}}}}\Bigg],\\ &\Leftrightarrow \la^\star(\Bf,\Bg) = \frac{\rho_1\rho_2}{\rho_1 + \rho_2} \log\Bigg[\frac{\dotp{\al}{e^{-\frac{\Bf}{\rho_1}}}}{\dotp{\be}{e^{-\frac{\Bg}{\rho_2}}}}\Bigg]. \end{align*} Hence the result of Proposition 2. \end{proof} \subsection{Proof of Proposition 3} \begin{proof} Recall the expression of $\Ff_\epsilon$ which reads in the $\KL$ setting \begin{align*} \Ff_\epsilon(\f,\g) &= \dotp{\al}{-\rho_1(e^{-\f / \rho_1} - 1)} + \dotp{\be}{-\rho_2(e^{-\g / \rho_2} - 1)}\\ &= \rho_1 m(\al) + \rho_2 m(\be) - \rho_1\dotp{\al}{e^{-\f / \rho_1}} - \rho_2\dotp{\be}{e^{-\g / \rho_2}}. \end{align*} We have that $\Hh_\epsilon(\Bf,\Bg) = \Ff_\epsilon(\Bf + \la^*(\Bf,\Bg), \Bg - \la^*(\Bf,\Bg))$.
Applying Proposition 2, we have that \begin{align*} \dotp{\al}{e^{-\frac{\Bf + \la^\star(\Bf,\Bg)}{\rho_1}}} &= \dotp{\al}{e^{-\Bf / \rho_1}}\cdot\exp\Bigg(-\frac{\rho_2}{\rho_1 + \rho_2} \log\Bigg[\frac{\dotp{\al}{e^{-\frac{\Bf}{\rho_1}}}}{\dotp{\be}{e^{-\frac{\Bg}{\rho_2}}}}\Bigg]\Bigg)\\ &= \dotp{\al}{e^{-\Bf / \rho_1}}\cdot\dotp{\al}{e^{-\Bf / \rho_1}}^{-\frac{\rho_2}{\rho_1 + \rho_2}}\cdot\dotp{\be}{e^{-\Bg / \rho_2}}^{\frac{\rho_2}{\rho_1 + \rho_2}}\\ &=\dotp{\al}{e^{-\Bf / \rho_1}}^{\frac{\rho_1}{\rho_1 + \rho_2}}\cdot\dotp{\be}{e^{-\Bg / \rho_2}}^{\frac{\rho_2}{\rho_1 + \rho_2}}. \end{align*} A similar calculation for $\dotp{\be}{e^{-(\Bg - \la^\star) / \rho_2}}$ yields \begin{align*} \dotp{\be}{e^{-\frac{\Bg - \la^\star(\Bf,\Bg)}{\rho_2}}} = \dotp{\al}{e^{-\frac{\Bf + \la^\star(\Bf,\Bg)}{\rho_1}}} =\dotp{\al}{e^{-\Bf / \rho_1}}^{\frac{\rho_1}{\rho_1 + \rho_2}}\cdot\dotp{\be}{e^{-\Bg / \rho_2}}^{\frac{\rho_2}{\rho_1 + \rho_2}}. \end{align*} Applying the above result in the definition of $\Hh_\epsilon$ yields the result of Proposition 3. \end{proof} \section{Introduction} \label{sec-intro} \paragraph{Optimal transport in ML.} Optimal Transport (OT) is now used extensively to solve various ML problems. For probability vectors $(\al,\be) \in \RR_+^N \times\RR_+^M$, $\sum_i \al_i = \sum_j \be_j=1$, and a cost matrix $\C \in \RR^{N\times M}$, it computes a coupling matrix $\pi\in\RR^{N\times M}$ solving \begin{align*} \OT(\al,\be) \triangleq \inf_{\pi\geq 0,\, \pi_1=\al,\, \pi_2=\be} \dotp{\pi}{\C} = \sum_{i,j} \pi_{i,j}\C_{i,j} \end{align*} where $(\pi_1,\pi_2) \triangleq (\pi\mathds{1},\pi^\top\mathds{1})$ are the marginals of the coupling $\pi$. The optimal transport matrix $\pi$ can be used to perform, for instance, domain adaptation~\cite{courty2014domain} and differentiable sorting~\cite{cuturi2019differentiable}.
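For small instances, the linear program above can be solved directly with a generic LP solver; the following sketch is our own illustration (the SciPy-based formulation and all names are ours, not the solvers used in this paper):

```python
# Our own small illustration: solve the OT linear program with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

def ot_lp(alpha, beta, C):
    """Solve min_{pi >= 0, pi 1 = alpha, pi^T 1 = beta} <pi, C>."""
    N, M = C.shape
    # Equality constraints on the flattened plan pi (row-major).
    A_eq = np.zeros((N + M, N * M))
    for i in range(N):
        A_eq[i, i * M:(i + 1) * M] = 1.0   # row sums equal alpha
    for j in range(M):
        A_eq[N + j, j::M] = 1.0            # column sums equal beta
    b_eq = np.concatenate([alpha, beta])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(N, M)

alpha = np.array([0.5, 0.5])
beta = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
cost, pi = ot_lp(alpha, beta, C)   # optimal coupling is diag(0.5, 0.5), cost 0
```

Dedicated solvers (network simplex, Sinkhorn) scale far better; the LP form only makes the marginal constraints concrete.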
If the cost is of the form $\C_{i,j}=d(x_i,y_j)^p$ where $d$ is some distance, then $\OT(\al,\be)^{1/p}$ is itself a distance between probability vectors with many favorable geometrical properties~\cite{peyre2019computational}. This distance is used for supervised learning over histograms~\cite{2015-Frogner} or unsupervised learning of generative models~\cite{WassersteinGAN}. \paragraph{Csiszar divergences.} The simplest formulation of UOT penalizes the discrepancy between $(\pi_1,\pi_2)$ and $(\al,\be)$ using Csiszar divergences~\cite{csiszar1967information}. We consider an ``entropy'' function $\phi:\RR_+\rightarrow\RR_+$ which is positive, convex, lower semi-continuous and such that $\phi(1)=0$. We define $\phi^\prime_\infty\triangleq\lim_{x\rightarrow\infty} \tfrac{\phi(x)}{x}$. Its associated Csiszar divergence reads, for $(\mu,\nu)\in\RR_+^N\times\RR_+^N$, \begin{align} \D_\phi(\mu|\nu)\triangleq \sum_{\nu_i>0} \phi(\tfrac{\mu_i}{\nu_i})\nu_i + \phi^\prime_\infty\sum_{\nu_i=0}\mu_i. \end{align} A popular instance is the Kullback-Leibler divergence ($\KL$) where $\phi(x)=x\log x - x + 1$ and $\phi^\prime_\infty=+\infty$, such that $\KL(\mu|\nu)\triangleq \sum_i \log(\tfrac{\mu_i}{\nu_i})\mu_i - m(\mu) + m(\nu)$ when $(\nu_i=0)\Rightarrow(\mu_i=0)$, and $\KL(\mu|\nu)=+\infty$ otherwise. \paragraph{Unbalanced optimal transport.} Unbalanced optimal transport (UOT) is a generalization of OT which relaxes the constraint that $(\al,\be)$ must be probability vectors. Defining $m(\al)\triangleq\sum_i\al_i$ the mass of a measure, we can have $m(\al)\neq m(\be)$. This generalization is crucial to cope with outliers and perform robust learning~\cite{mukherjee2021outlier, balaji2020robust}, and to avoid a priori normalization of datasets~\cite{lee2019parallel}. Unbalanced OT enables mass creation and destruction, which is important for instance to model growth in cell populations~\cite{schiebinger2017reconstruction}.
We refer to~\cite{liero2015optimal} for a thorough presentation of UOT. To derive efficient algorithms, following~\cite{chizat2016scaling}, we consider an entropic-regularized problem \begin{align}\label{eq-primal-uot-kl} \UOT(\al,\be) \triangleq \inf_{\pi\geq 0} &\dotp{\pi}{\C} + \epsilon\KL(\pi|\al\otimes\be) \\ &+ \D_{\phi_1}(\pi_1 | \al) + \D_{\phi_2}(\pi_2 | \be)\,.\nonumber \end{align} Here $\KL(\pi|\al\otimes\be)$ is the Kullback-Leibler divergence between $\pi$ and $\al \otimes \be= (\al_i \be_j)_{i,j}$. The original (unregularized) formulation of UOT~\cite{liero2015optimal} corresponds to the special case $\epsilon=0$. A popular case uses $\D_\phi=\rho\KL$ where $\rho>0$ controls the tradeoff between mass transportation and mass creation/destruction. Balanced OT is retrieved in the limit $\rho\rightarrow +\infty$, while when $\rho \rightarrow 0$, $\UOT(\al,\be)/\rho$ converges to the Hellinger distance (no transport). The dual problem to~\eqref{eq-primal-uot-kl} reads $\UOT(\al,\be) = \sup_{(\f,\g)} \Ff_\epsilon(\f,\g)$, where \begin{align} \Ff_\epsilon(\f,\g) \triangleq &\dotp{\al}{-\phi_1^*(-\f)} + \dotp{\be}{-\phi_2^*(-\g)}\nonumber\\ &-\epsilon\dotp{\al\otimes\be}{e^{\tfrac{\f\oplus\g - \C}{\epsilon}} - 1}. \label{eq-dual-uot-kl} \end{align} Here $(\f,\g)$ are vectors in $\RR^N\times\RR^M$, $\phi^*$ is the Legendre transform of $\phi$ and we used the shorthand notations $\f\oplus\g - \C \triangleq (\f_i+\g_j-\C_{i,j})_{i,j} \in \RR^{N \times M}, \phi_1^*(-\f) \triangleq (\phi_1^*(-\f_i))_i \in \RR^N$. When $\epsilon=0$ the last term in~\eqref{eq-dual-uot-kl} becomes the constraint $\f\oplus\g\leq\C$.
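In the $\rho\KL$ case, where $\phi^*(x)=\rho(e^{x/\rho}-1)$, this dual objective is straightforward to evaluate; a minimal NumPy sketch (function and variable names are ours):

```python
# Minimal sketch (our own) of the dual objective F_eps when phi_i = rho_i * KL,
# i.e. phi_i*(x) = rho_i * (exp(x / rho_i) - 1).
import numpy as np

def dual_objective(f, g, alpha, beta, C, eps, rho1, rho2):
    """Evaluate F_eps(f, g) for the entropic UOT dual, KL marginal penalties."""
    # <alpha, -phi1*(-f)> = <alpha, -rho1 * (exp(-f / rho1) - 1)>
    term_f = np.dot(alpha, -rho1 * (np.exp(-f / rho1) - 1.0))
    term_g = np.dot(beta, -rho2 * (np.exp(-g / rho2) - 1.0))
    # -eps * <alpha x beta, exp((f (+) g - C) / eps) - 1>
    plan = np.exp((f[:, None] + g[None, :] - C) / eps)
    smooth = -eps * np.sum(alpha[:, None] * beta[None, :] * (plan - 1.0))
    return term_f + term_g + smooth

# Sanity check: with f = g = 0 and C = 0, every term vanishes.
val = dual_objective(np.zeros(3), np.zeros(3), np.ones(3) / 3,
                     np.ones(3) / 3, np.zeros((3, 3)), 0.1, 1.0, 2.0)  # = 0.0
```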
\paragraph{Sinkhorn's algorithm and its limitation for UOT.} Problem~\eqref{eq-primal-uot-kl} can be solved using a generalization of Sinkhorn's algorithm, which is the method of choice to solve large-scale ML problems with balanced OT~\cite{CuturiSinkhorn}, since it enjoys easily parallelizable computation which streams well on GPUs~\cite{pham2020unbalanced}. Following~\cite{chizat2016scaling, sejourne2019sinkhorn}, the Sinkhorn algorithm maximizes the dual problem~\eqref{eq-dual-uot-kl} by an alternate maximization on the two variables. In sharp contrast with balanced OT, the UOT Sinkhorn algorithm might converge slowly, even if $\epsilon$ is large. For instance when $\D_\phi=\rho\KL$, Sinkhorn converges linearly at a rate $(1+\tfrac{\epsilon}{\rho})^{-1}$~\cite{chizat2016scaling}, which is close to $1$ and thus slow when $\epsilon\ll\rho$. One of the main goals of this paper is to alleviate this issue by introducing a \emph{translation invariant} formulation of the dual problem together with variants of the initial Sinkhorn's iterations which enjoy better convergence rates. \paragraph{Translation invariant formulations.} The balanced OT problem corresponds to using $\phi_1(x)=\phi_2(x)=\iota_{\{1\}}(x)$ (i.e. $\phi(1)=0$ and $\phi(x)=+\infty$ otherwise), such that $\phi^*(x)=x$. In this case $\Ff_\epsilon$ reads \begin{align*} \Ff_\epsilon(\f,\g) = \dotp{\al}{\f} + \dotp{\be}{\g} -\epsilon\dotp{\al\otimes\be}{e^{\tfrac{\f\oplus\g - \C}{\epsilon}} - 1}. \end{align*} A key property is that for any constant translation $\la\in\RR$, $\Ff_\epsilon(\f+\la,\g-\la)=\Ff_\epsilon(\f,\g)$ (because $\phi^*$ is linear), while this does not hold in general for UOT. In particular, optimal $(\f^\star,\g^\star)$ for balanced OT are only unique up to such translation, while for $\UOT$ with strictly convex $\phi^*$ (Hessian $\phi^{*\prime\prime}>0$), the dual problem has a unique pair of maximizers.
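This invariance, and its failure for UOT, is easy to check numerically; the following self-contained sketch (our own, on arbitrary synthetic data) compares the balanced objective, where $\phi^*(x)=x$, with the $\rho\KL$ one, where $\phi^*(x)=\rho(e^{x/\rho}-1)$:

```python
# Our own numeric sanity check: the balanced dual objective is invariant under
# (f, g) -> (f + lam, g - lam), while the rho*KL (UOT) objective is not.
import numpy as np

rng = np.random.default_rng(0)
N = 4
alpha = np.full(N, 1.0 / N)
beta = np.full(N, 1.0 / N)
C = rng.random((N, N))
f = rng.random(N)
g = rng.random(N)
eps, rho, lam = 0.1, 1.0, 0.5

def smooth_term(f, g):
    # -eps * <alpha x beta, exp((f (+) g - C) / eps) - 1>; invariant since
    # (f + lam) (+) (g - lam) = f (+) g.
    P = np.exp((f[:, None] + g[None, :] - C) / eps)
    return -eps * np.sum(alpha[:, None] * beta[None, :] * (P - 1.0))

def F_balanced(f, g):   # phi*(x) = x
    return alpha @ f + beta @ g + smooth_term(f, g)

def F_kl(f, g):         # phi*(x) = rho * (exp(x / rho) - 1)
    return (alpha @ (-rho * (np.exp(-f / rho) - 1.0))
            + beta @ (-rho * (np.exp(-g / rho) - 1.0)) + smooth_term(f, g))

inv = abs(F_balanced(f + lam, g - lam) - F_balanced(f, g))   # ~ 0
diff = abs(F_kl(f + lam, g - lam) - F_kl(f, g))              # clearly nonzero
```

The linear terms of the balanced objective only pick up $\la(m(\al)-m(\be))=0$, which is exactly why invariance holds there and breaks once $\phi^*$ is nonlinear.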
The following example emphasizes that this lack of translation invariance is what makes UOT Sinkhorn slow. Let $\f^\star$ maximize $\Ff_\epsilon$. If one initializes Sinkhorn with $\f^\star + \tau$ for some $\tau\in\RR$, $\UOT$-$\KL$ Sinkhorn iterates read $\f_t=\f^\star + (\tfrac{\rho}{\epsilon + \rho})^{2t}\tau$. Thus iterates are sensitive to translations, and the error $(\tfrac{\rho}{\epsilon + \rho})^{2t}\tau$ decays slowly when $\epsilon\ll\rho$. We solve this issue by explicitly dealing with the translation parameter using an \emph{overparameterized} dual functional $\Gg_\epsilon$ and its associated invariant functional~$\Hh_\epsilon$ \begin{align} \Gg_\epsilon(\Bf,\Bg,\la)&\triangleq \Ff_\epsilon(\Bf+\la,\, \Bg-\la),\label{eq-def-g-func}\\ \Hh_\epsilon(\Bf,\Bg)&\triangleq\sup_{\la\in\RR}\Gg_\epsilon(\Bf,\Bg,\la).\label{eq-def-h-func} \end{align} Note that maximizing $\Ff_\epsilon$, $\Gg_\epsilon$ or $\Hh_\epsilon$ yields the same value $\UOT(\al,\be)$ and one can switch between the maximizers using \begin{align} (\f,\g) = (\Bf + \la^\star(\Bf,\Bg),\Bg - \la^\star(\Bf,\Bg)) \\ \qwhereq \la^\star(\Bf,\Bg) \triangleq \argmax \Gg_\epsilon(\Bf,\Bg,\cdot). \label{eq:lambdastar} \end{align} By construction, one has $\Hh_\epsilon(\Bf+\la,\Bg-\la)=\Hh_\epsilon(\Bf,\Bg)$, making the functional \emph{translation invariant}. When $\epsilon=0$ we will write $(\Gg_0,\Hh_0)$. All our contributions design efficient UOT solvers by operating on the functionals $\Gg_\epsilon$ and $\Hh_\epsilon$ instead of $\Ff_\epsilon$. In particular, in Section~\ref{sec-sinkhorn} we define the associated $\Gg_\epsilon$-Sinkhorn and $\Hh_\epsilon$-Sinkhorn which perform alternate maximization on these functionals. \paragraph{Solving Unregularized UOT.} Some previous works directly address the unregularized case $\epsilon=0$ and some specific entropy functionals.
An alternate minimization scheme is proposed in~\cite{bauer2021square} for the special case of KL divergences, but without quantitative rates of convergence. In the case of a quadratic divergence $\phi(x)=\rho (x-1)^2$, it is possible to compute the whole path of solutions for varying $\rho$ using the LARS algorithm~\cite{chapel2021unbalanced}. Primal-dual approaches are possible for the Wasserstein-1 case by leveraging a generalized Beckmann flow formulation~\cite{schmitzer2019framework, lee2019parallel}. In this specific case ($\epsilon=0$) and for 1-D problems, we take a different route in Section~\ref{sec-fw} by applying the Frank-Wolfe algorithm to $\Hh_0$, which leads to an $O(N)$ approximate algorithm. \paragraph{Solving 1-D (U)OT problems.} 1-D balanced OT is straightforward to solve because the optimal transport plan is monotonic, and is thus solvable in $O(N)$ operations once the input points supporting the distributions have been sorted (which requires $O(N \log N)$ operations). This is also useful to define losses in higher dimension using the sliced Wasserstein distance~\cite{bonneel2015sliced,rabin2011wasserstein}, which integrates 1-D OT problems. In the specific case where $\D_\phi$ is the total variation, $\phi(x)=|x-1|$, it is possible to develop an $O(N\log(N)^2)$ network flow solver when $\C$ represents a tree metric (so this applies in particular to 1-D problems). In~\cite{bonneel2019spot} an approximation is performed by considering only transport plans defining injections between the two distributions, and an $O(NM)$ algorithm is detailed. To the best of our knowledge, there is no general method to address 1-D UOT, and we detail in Section~\ref{sec-fw} an efficient linear-time solver which applies in smooth settings.
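The monotonicity argument above translates into a simple two-pointer sweep over sorted supports; a minimal sketch of such a solver (our own simplified version, returning only the transport cost and assuming equal total masses):

```python
# Our own minimal sketch of the O(N + M) monotone sweep for sorted 1-D
# balanced OT with cost |x_i - y_j|^p; assumes sum(a) == sum(b).
def ot_1d_sorted(x, a, y, b, p=2):
    """Return the OT cost between sum_i a_i delta_{x_i} and sum_j b_j delta_{y_j}."""
    a, b = list(a), list(b)      # local copies, consumed during the sweep
    i = j = 0
    cost = 0.0
    while i < len(a) and j < len(b):
        m = min(a[i], b[j])      # mass transported from x_i to y_j
        cost += m * abs(x[i] - y[j]) ** p
        a[i] -= m
        b[j] -= m
        if a[i] <= 1e-15:        # source atom exhausted
            i += 1
        if j < len(b) and b[j] <= 1e-15:   # target atom exhausted
            j += 1
    return cost

w1 = ot_1d_sorted([0.0, 2.0], [0.5, 0.5], [1.0], [1.0], p=1)   # = 1.0
```

Each iteration exhausts at least one atom, hence the linear complexity; the version in this paper additionally maintains the dual potentials.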
\paragraph{Wasserstein barycenters.} Balanced OT barycenters, as defined in~\cite{agueh-2011}, enable geometric averaging of probability distributions, and find numerous applications in ML and imaging~\cite{rabin2011wasserstein,solomon2015convolutional}. The problem is however challenging to solve because the support of the barycenter is a priori unknown. Its exact computation requires the solution of a multimarginal $\OT$ problem~\cite{carlier2003class, gangbo1998optimal}; it has polynomial complexity~\cite{altschuler2021wasserstein} but does not scale to large input distributions. In low dimension, one can discretize the barycenter support and use standard solvers such as Frank-Wolfe methods~\cite{luise2019sinkhorn}, entropic regularization~\cite{cuturi2014fast,janati2020debiased}, interior point methods~\cite{ge2019interior} and stochastic gradient descent~\cite{li2015generative}. These approaches could be generalized to compute UOT barycenters. In 1-D, computing balanced OT barycenters can be done in $O(N)$ operations once the support of the distributions is sorted \cite{carlier2003class, bach2019submodular}. This approach however does not generalize to UOT, and we detail in Section~\ref{sec-barycenter} an extension of our F-W solver to compute in $O(N)$ operations an approximation of the barycenter. \paragraph{Contributions.} Our first main contribution is the derivation in Section~\ref{sec-sinkhorn} of the $\Gg_\epsilon$-Sinkhorn algorithm (which can be applied for any divergence provided that $\phi^*$ is smooth) and $\Hh_\epsilon$-Sinkhorn algorithm (which is restricted to the KL divergence). We provide empirical evidence that they converge faster than the standard $\Ff_\epsilon$-Sinkhorn algorithm: when $\epsilon\leq\rho$ for $\Gg_\epsilon$, and for any $(\epsilon,\rho)$ for $\Hh_\epsilon$. We prove that $\Hh_\epsilon$ iterates converge faster than those of $\Ff_\epsilon$ in Theorem~\ref{thm-rate-cv-h-sink}.
Section~\ref{sec-fw} details our second contribution, which is an efficient linear-time approximate 1-D UOT solver applying Frank-Wolfe iterations to $\Hh_0$. To the best of our knowledge, it is the first proposal which applies FW to the UOT dual, because the properties of $\Hh_0$ allow us to overcome the issue of the constraint set being unbounded. Numerical experiments show that it compares favorably against the Sinkhorn algorithm when the goal is to approximate unregularized UOT. In Section~\ref{sec-barycenter} we extend the Frank-Wolfe approach to compute 1-D barycenters. All these contributions are implemented in Python, and available at \url{https://github.com/thibsej/fast\_uot}. \section{Translation invariant Sinkhorn} \label{sec-sinkhorn} We propose in this section two variants of the Sinkhorn algorithm based on an alternate maximization on $\Gg_\epsilon$ and $\Hh_\epsilon$. \paragraph{$\Ff$-Sinkhorn (the original one).} Sinkhorn's algorithm reads, for any initialization $\f_0$, \begin{align*} \g_{t+1}(y) &= -\aprox_{\phi^*_2}(-\Smin{\al}{\epsilon}(\C(\cdot,y) - \f_{t})),\\ \f_{t+1}(x) &= -\aprox_{\phi^*_1}(-\Smin{\be}{\epsilon}(\C(x,\cdot) - \g_{t+1})), \end{align*} where the softmin is $\Smin{\al}{\epsilon}(\f) \triangleq -\epsilon\log\dotp{\al}{e^{-\f / \epsilon}}$, and the anisotropic prox reads \begin{align} \aprox_{\phi^*}(x)\triangleq\arg\min_{y\in\RR}\epsilon e^{\tfrac{x-y}{\epsilon}} + \phi^*(y). \end{align} We refer to \cite{sejourne2019sinkhorn} for more details. For $\rho\KL$ we have $\aprox_{\phi^*}(x) = \tfrac{\rho}{\epsilon + \rho}x$. The softmin and $\aprox_{\phi^*}$ are respectively $1$-contractive and $(1+\tfrac{\epsilon}{\rho})^{-1}$-contractive for the sup-norm $\norm{\cdot}_\infty$.
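For $\rho\KL$, where $\aprox_{\phi^*}$ is linear with slope $\tfrac{\rho}{\epsilon+\rho}$, these updates take a simple log-domain form; a minimal sketch (our own, using SciPy's logsumexp for a stable softmin; all names are ours):

```python
# Our own log-domain sketch of the original F-Sinkhorn updates when both
# marginal penalties are rho * KL, so aprox(x) = rho / (rho + eps) * x.
import numpy as np
from scipy.special import logsumexp

def softmin(weights, H, eps, axis):
    """Smin_{w, eps}(H) = -eps * log <w, exp(-H / eps)>, stably, along `axis`."""
    return -eps * logsumexp(-H / eps, b=weights, axis=axis)

def f_sinkhorn(alpha, beta, C, eps, rho, n_iter=500):
    N, M = C.shape
    f = np.zeros(N)
    scale = rho / (rho + eps)   # linear aprox for the rho*KL penalty
    for _ in range(n_iter):
        # g_{t+1}(y) = scale * Smin_{alpha, eps}(C(., y) - f_t)
        g = scale * softmin(alpha[:, None], C - f[:, None], eps, axis=0)
        # f_{t+1}(x) = scale * Smin_{beta, eps}(C(x, .) - g_{t+1})
        f = scale * softmin(beta[None, :], C - g[None, :], eps, axis=1)
    return f, g

alpha = np.ones(3) / 3
beta = np.ones(4) / 4
C = np.abs(np.linspace(0, 1, 3)[:, None] - np.linspace(0, 1, 4)[None, :]) ** 2
f, g = f_sinkhorn(alpha, beta, C, eps=0.5, rho=1.0)
```

At a fixed point, re-applying either update leaves the potentials unchanged, which gives a cheap convergence check.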
\paragraph{$\Gg$-Sinkhorn.} Alternate maximization on $\Gg_\epsilon$ reads \begin{align*} \Bg_{t+1}(y) &= -\aprox_{\phi^*_2}\big(-\Smin{\al}{\epsilon}\big(\C(\cdot,y) - \Bf_{t} - \la_{t}\big)\big), \\ \Bf_{t+1}(x) &= -\aprox_{\phi^*_1}\big(-\Smin{\be}{\epsilon}\big(\C(x,\cdot) - \Bg_{t+1} + \la_{t}\big)\big),\\ \la_{t+1} &= \la^\star(\Bf_{t+1},\Bg_{t+1}), \end{align*} and the associated dual iterates are retrieved as $(\f_t,\g_t) \triangleq (\Bf_t+\la_t, \Bg_t-\la_t)$. For smooth $\phi^*_i$, standard results on alternate convex optimization ensure its convergence~\cite{tseng2001convergence}. Note that the extra step to compute $\la_{t+1}$ has $O(N)$ complexity for $\KL$ (Equation~\eqref{eq-opt-trans-kl}). For smooth $\phi^*$, computing $\la_{t+1}$ is a 1-D optimization problem whose gradient and Hessian have $O(N)$ cost, and which converges in a few iterations of a Newton method. \paragraph{$\Hh$-Sinkhorn.} In the following, we denote $\Psi_1(\Bg) \triangleq \argmax \Hh_\epsilon(\cdot,\Bg)$ and $\Psi_2(\Bf) \triangleq \argmax \Hh_\epsilon(\Bf,\cdot)$. The $\Hh$-Sinkhorn algorithm is the alternate maximization on $\Hh_\epsilon$; it thus reads $$ \bar g_{t+1}=\Psi_{2}(\bar f_t), \: \bar f_{t+1}=\Psi_1(\bar g_{t+1}), $$ and the associated dual iterates are retrieved as $(\f_t,\g_t) \triangleq (\Bf_t+\la^\star(\Bf_t,\Bg_t), \Bg_t-\la^\star(\Bf_t,\Bg_t))$. Contrary to $\Gg$-Sinkhorn, $\Hh$-Sinkhorn inherits the invariance of $\Hh_\epsilon$: one has $\Psi_2(\Bf_t + \mu)=\Psi_2(\Bf_t) - \mu$, i.e. $\bar\g_{t+1}\rightarrow\bar\g_{t+1}-\mu$. Computing $\Bf=\Psi_1(\Bg)$ for some fixed $\Bg$ requires solving the equation in $\Bf$ \begin{align}\label{eq-optim-h-sink} e^{\Bf/\epsilon}\dotp{\be}{e^{(\Bg-\C)/\epsilon}}=\nabla\phi_1^*(-\Bf-\la^\star(\Bf,\Bg)). \end{align} For a generic divergence, without an explicit expression of $\la^\star$, there is a priori no closed form expression for $\Bf$, and one would need to resort to sub-iterations.
However, thanks to the closed form~\eqref{eq-opt-trans-kl} for $\KL$, the following proposition proved in the appendices shows that it can be computed in closed form. \begin{prop}\label{prop-conv-h-sink} For fixed $(\Bf,\Bg)$, assuming for simplicity $\rho_1=\rho_2=\rho$, denoting $\xi \triangleq \tfrac{\epsilon}{\epsilon+2\rho}$, one has \begin{align*} \Psi_2(\Bf) = \hg + \xi \Smin{\be}{\rho}(\hg), \: \Psi_1(\Bg) = \hf + \xi \Smin{\al}{\rho}(\hf) \\ \text{where}\: \choice{ \hg \triangleq \tfrac{\rho}{\rho+\epsilon}\Smin{\al}{\epsilon}(\C-\Bf)-\tfrac{1}{2}\tfrac{\epsilon}{\rho+\epsilon} \Smin{\al}{\rho}(\Bf),\\ \hf \triangleq \tfrac{\rho}{\rho+\epsilon}\Smin{\be}{\epsilon}(\C-\Bg)-\tfrac{1}{2}\tfrac{\epsilon}{\rho+\epsilon} \Smin{\be}{\rho}(\Bg). } \end{align*} \end{prop} The following theorem shows that this algorithm enjoys a better convergence rate than $\Ff$-Sinkhorn. It involves the Hilbert pseudo-norm $\norm{\f}_\star\triangleq\inf_{t\in\RR}\norm{\f+t}_\infty$ which is relevant here due to the translation invariance of the map $\Psi_1$. A key property that $\Hh$-Sinkhorn inherits is the contraction property of $\Smin{\al}{\epsilon}$ for $\norm{\cdot}_\star$, which we write $\norm{\Smin{\al}{\epsilon}(\C-\f) - \Smin{\al}{\epsilon}(\C-\g)}_\star \leq \kappa_\epsilon(\al)\norm{\f-\g}_\star$, where $\kappa_\epsilon(\al)$ denotes the contraction rate. Local and global estimates of $\kappa_\epsilon(\al)$ are detailed respectively in~\cite{knight2008sinkhorn} and~\cite{birkhoff1957extensions, franklin1989scaling}. The latter estimate reads $\kappa_\epsilon(\al)\leq 1 - \tfrac{2}{1 + \eta}$, where $\eta=\exp(-\tfrac{1}{2\epsilon}\max_{i,j,k,l}(\C_{j,k} + \C_{i,l} - \C_{j,l} - \C_{i,k}))$. Note that $\eta$ depends on $\al$ via its support. \begin{thm}\label{thm-rate-cv-h-sink} Write $\Bf_0$ the initialization, $(\f_t,\g_t)$ the iterates of $\Hh$-Sinkhorn after the translation $\la^\star(\Bf_t,\Bg_t)$, and $(f^\star,\g^\star)$ the optimal dual solutions for $\Ff$.
Defining $\bar\kappa \triangleq \kappa_\epsilon(\al)\kappa_\epsilon(\be) (1+\tfrac{\epsilon}{\rho})^{-2} < 1$, one has \begin{align*} \norm{\f_t - \f^\star}_\infty + \norm{\g_t - \g^\star}_\infty \leq 2 \bar\kappa^t \norm{\Bf_0 - \f^\star}_\star. \end{align*} \end{thm} \begin{proof} The proof is deferred to Appendix~\ref{sec-supp-sinkhorn}. \end{proof} The rate of $\Hh$-Sinkhorn is improved compared to its $\Ff$ counterpart by a factor $\kappa_\epsilon(\al)\kappa_\epsilon(\be)$, hence the speed-up illustrated in Figures~\ref{fig-sinkhorn-cv-rate} and~\ref{fig-sinkhorn-cv-rate-wot}. Note that we leave a study of the overall complexity of $\Hh$-Sinkhorn for future works. \if 0 One can thus study the convergence with the Hilbert semi-norm $\norm{\f}_\star\triangleq\inf_{\mu\in\RR}\norm{\f+\mu}_\infty$. Compared to $\Ff$-Sinkhorn, $\Hh$-Sinkhorn performs additional uniform translations, which preserves the Hilbert norm. For this norm $\Hh$-Sinkhorn is $\kappa_\epsilon(1+\tfrac{\epsilon}{\rho})^{-1}$-contractive, where $\kappa_\epsilon$ is the contraction rate of $\Smin{\al}{\epsilon}$~\cite{knight2008sinkhorn} \todo{give an upper bound for this rate}. \todo{is this obvious??}  This rate on $(\bar\f_t,\bar\g_t)$ translates into a convergence for $\norm{\cdot}_\infty$ on $(f_t,g_t)$ at the rate $\kappa_\epsilon(1+\tfrac{\epsilon}{\rho})^{-1}$. \fi \begin{figure} \centering \includegraphics[width=0.4\textwidth]{sections/figures/plot_log_contraction_rate_kl_berg.pdf} \caption{\textit{ Estimation of the contraction rate of $\Ff$, $\Gg$ and $\Hh$-Sinkhorn as a function of $\rho$ for a fixed $\epsilon$.
Performed on the measures of Figure~\ref{fig-iter_fw}.} } \label{fig-sinkhorn-cv-rate} \end{figure} \paragraph{Empirical convergence study.} Figures~\ref{fig-sinkhorn-cv-rate} and~\ref{fig-sinkhorn-cv-rate-wot} show a numerical evaluation of the convergence rate for $\Ff$, $\Gg$ and $\Hh$-Sinkhorn, respectively performed on synthetic 1-D data (displayed in Figure~\ref{fig-iter_fw}) and on the single-cell biology dataset of the WOT package\footnote{\url{https://broadinstitute.github.io/wot}}~\cite{schiebinger2017reconstruction}. We compute the versions $(\Ff,\Gg,\Hh)$ of Sinkhorn for the $\KL$ setting. To emphasize the generality of $\Gg$-Sinkhorn, we compute it for the Berg entropy $\phi(x) = \rho(x - 1 - \log x)$ (and $\phi^*(x) = -\rho\log(1-\tfrac{x}{\rho})$). We observe empirically that all versions converge linearly to fixed points $(\f^\star,\g^\star)$. Thus we focus on estimating the convergence rate $\kappa$. Given iterates $\f_t$, it is estimated as $\kappa=e^c$ where $c$ is the median over $t$ of $\log\norm{\f_{t+1}-\f^\star}_\infty - \log\norm{\f_{t}-\f^\star}_\infty$. We report those estimates as curves where $\rho$ varies while $\epsilon$ is fixed. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{sections/figures/plot_log_contraction_rate_kl_fast_wot} \caption{\textit{ Estimation of the contraction rate of $\Ff$, $\Gg$ and $\Hh$-Sinkhorn as a function of $\rho$ for a fixed $\epsilon$. Performed on WOT single-cell data.} } \label{fig-sinkhorn-cv-rate-wot} \end{figure} We observe that $\Hh$-Sinkhorn outperforms both $\Gg$ and $\Ff$-Sinkhorn, and its convergence curve appears as a translation of the $\Ff$-Sinkhorn curve (of approximately $\log(\kappa_\epsilon(\al)\kappa_\epsilon(\be))<0$). Note also that the overall complexity of $\Hh$-Sinkhorn remains $O(N^2)$ because $\Smin{}{\rho}$ translations cost $O(N)$.
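The rate-estimation procedure described above (median of successive log-error differences) can be sketched as follows (our own minimal version, checked on a synthetic linearly converging sequence):

```python
# Our own sketch of the empirical contraction-rate estimator:
# kappa = exp(median_t[ log||f_{t+1} - f*||_inf - log||f_t - f*||_inf ]).
import numpy as np

def estimate_rate(iterates, f_star):
    errs = [np.max(np.abs(f - f_star)) for f in iterates]
    logs = np.log(errs)
    return float(np.exp(np.median(np.diff(logs))))

# Synthetic check: a sequence converging linearly at rate 0.5.
f_star = np.array([1.0, 2.0])
iterates = [f_star + 0.5 ** t * np.array([1.0, -1.0]) for t in range(10)]
kappa = estimate_rate(iterates, f_star)   # ~ 0.5
```

The median makes the estimate robust to the noisy first and last iterations, where the error is respectively far from the asymptotic regime and close to machine precision.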
Concerning $\Gg$-Sinkhorn, it outperforms $\Ff$-Sinkhorn in the slow regime $\epsilon\leq\rho$, but is slower when $\rho\leq\epsilon$. This behaviour is consistent across the $\KL$ and Berg entropies. Thus the criterion $\epsilon\leq\rho$ seems a sensible rule of thumb to decide whether $\Gg$- or $\Ff$-Sinkhorn is preferable. \paragraph{Extensions.} It is also possible to accelerate the convergence of Sinkhorn using Anderson extrapolation~\cite{anderson1965iterative}; see the Appendix for details.
\section{Introduction} \label{sec:intro} Extremely low mass white dwarfs (ELM WDs) are helium-core WDs with masses below $0.3\,M_\odot$ \citep{Li2019}, different from most WDs, which have C/O cores with masses around $0.6\,M_\odot$ \citep{Kepler2015}. ELM WDs are thought to be born in interacting binaries and to have lost most of their mass to their companions through either stable Roche-lobe overflow or common-envelope (CE) evolution, since the formation time of a WD with mass less than 0.3\,$M_\odot$ produced from a single star exceeds the Hubble timescale. \citet{chen2017,sun2018,Li2019} theoretically studied the formation of ELMs and showed that the progenitors of ELMs fill their Roche lobes at the end of the main sequence (MS) or near the base of the red giant branch (RGB). When the mass transfer ceases, possibly because the magnetic-braking-driven orbital contraction stops \citep[see][]{sun2018,Li2019}, a pre-ELM with a helium core and a hydrogen envelope is formed. In this paper, we use pre-ELMs to refer to all progenitors of ELMs. After the detachment of the binary, the envelope continues burning at a nearly constant luminosity until the available hydrogen is exhausted, while the radius of the envelope gradually shrinks. The hydrogen-exhausted pre-ELM then enters the WD cooling track. Research on ELMs/pre-ELMs has become increasingly active in recent years. Many pre-ELMs or ELMs show pulsations \citep{Maxted2011,Maxted2013,Maxted2014a,Gianninas2016,zhang2017}, which provide unprecedented opportunities to explore their interiors. The high accretion rate in the early stages of ELM formation may contribute enough mass to the C/O WD companion in the binary to make it a progenitor of a Type Ia supernova \citep{Han2004}. The compact ELMs, such as J0651+2844, which has a period of 765\,s \citep{brown2011,Amaro2012}, could be resolved by future space-based gravitational-wave detectors \citep{Amaro2012,luo2016b}.
More than 100 ELM WDs and their progenitors have been reported by several surveys, e.g., the Kepler project \citep{van2010,Carter2011,Breton2012,Rappaport2015}, the WASP project \citep{Maxted2011,Maxted2013,Maxted2014a,Maxted2014b}, the ELM survey \citep{brown2010,brown2012,brown2013,brown2016,brown2020,Kilic2011,Kilic2012,Kilic2015,Gianninas2014,Gianninas2015}, and the ELM survey South \citep{Kosakowski2020}. Most of the objects reported by previous works are ELMs in double degenerate (DD) binaries, whose companions are WDs \citep{Li2019} or neutron stars \citep{Istrate2014b,Istrate2014}. Some works reported pre-ELMs in EL CVn-type binaries, which are post-mass-transfer eclipsing binaries composed of an A/F-type main-sequence star and an ELM progenitor in the shrinking stage \citep{Maxted2011,Maxted2014a,wangkun2018,wangkun2020}. All of these sources have ended their mass transfer. Pre-ELMs that are still transferring mass, or that terminated mass transfer only recently, have temperatures similar to those of main-sequence A and F stars and cannot be selected by their colors. By inspecting light curves with large-amplitude ellipsoidal variability and luminosities below the main sequence, \citet{elbadry2021b,Badry2021} reported a sample of pre-ELMs in DDs with periods less than 6 hours, which they name proto-ELMs. The objects of \citet{elbadry2021b,Badry2021} have lower temperatures than ELMs and the pre-ELMs in EL CVns. Moreover, their objects with temperatures below 6500\,K show emission lines while the hotter ones do not, indicating that this sample is in transition from mass transfer to detachment. Pre-ELM WDs with stable mass transfer behave as cataclysmic variables (CVs). Unlike the donors in normal CVs, however, pre-ELMs are evolved stars with helium cores.
They have much lower mass transfer rates than normal CVs and do not show the typical CV characteristics in their light curves, such as random short-timescale variability or outburst events. Normal CVs generally have low-mass main-sequence donors and orbital periods of several hours \citep{Knigge2006,Knigge2011}. The stellar parameters, such as mass, radius, spectral type, and luminosity, are closely related to the orbital period, a relation called the ``donor sequence'' \citep{Patterson1984,Beuermann1998,Smith1998,Knigge2006,Knigge2011}. With bloated radii and helium cores, evolved donors deviate significantly from the donor sequence: they have lower masses and higher temperatures than the donors in normal CVs \citep{Podsiadlowski2003,van2005,Kalomeni2016}. The evolutionary track of an ELM depends mainly on the initial period and initial mass \citep{Li2019}. Donors with longer initial periods are more evolved before the mass transfer begins, resulting in pre-ELMs with more bloated radii and longer periods. These long-period pre-ELMs lie close to the main sequence and therefore cannot be selected using the HR diagram. Meanwhile, some long-period pre-ELMs may have periods close to the bifurcation period. Theoretically, for systems with orbital periods longer than the bifurcation period (16--22 h), the donors ascend the giant branch as the mass transfer begins, and the systems evolve toward long orbital periods with mass loss \citep{Podsiadlowski2003}. For systems with periods shorter than the bifurcation period, the orbits contract rather than expand because of magnetic braking. Pre-ELMs with periods close to the bifurcation period are intermediate cases between these two situations and are vital for our understanding of the evolution of ELM systems. In this work, we report the discovery of a pre-ELM with a period of 14.6 hours, much longer than that of typical pre-ELMs.
The orbital period of this source is about three times that of the sample of \citet{elbadry2021b}, so its surface gravity is lower than that of all known ELMs or pre-ELMs. Because of its larger radius and higher luminosity, this object falls almost on the main sequence, making it inefficient to select this type of object using the HR diagram. Thanks to time-domain spectroscopic (e.g., the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST; see \citealt{cui2012,zhao2012}) and photometric surveys, we are able to identify such a peculiar pre-ELM. The paper is organized as follows. In Section \ref{sec:data}, we describe the data, which include spectroscopic data from several telescopes or instruments and photometric data from publicly available photometric surveys. In Section \ref{sec:data_analysis}, we present the data measurement and analysis, including the determination of the orbital period, radial-velocity (RV) measurements, SED fitting, and spectral matching. The discussion and summary are presented in Sections 4 and 5. \section{Data} \label{sec:data} J0419 (R.A. = $04^h19^m20^s.07$, Decl. = $07\degree25'45''.4$, J2000) is selected from the LAMOST medium-resolution surveys \citep[MRS;][]{liuchao2020}; it has a stellar type of G8 and a magnitude of 14.70~mag in the \gaia $G$-band. The RV measurements of LAMOST DR8 MRS show that this source has an RV variation of about 212\,\ensuremath{\mathrm{km\,s^{-1}}}\ across six exposures on Nov 8, 2019. Since the LAMOST spectra are single-lined, we speculate that J0419 is a binary composed of a visible star and a compact object. We applied for additional spectroscopic observations to constrain the RV amplitude of J0419 using the 2.16-meter telescope in Xinglong and the Lijiang 2.4-meter telescope. We also requested several LAMOST follow-up observations of this source. The observation information is summarized in Table \ref{tab:spec_stat}.
In addition to spectroscopic data, we collected photometric data from several publicly available sky surveys, which include the Transiting Exoplanet Survey Satellite \citep[TESS;][]{Ricker2015}, the Catalina Real-time Transient Survey \citep[CRTS;][]{Drake2009,Drake2014}, the All-Sky Automated Survey for Supernovae \citep[ASAS-SN][]{Shappee2014,Kochanek2017} and the Zwicky Transient Facility \citep[ZTF;][]{Masci2019}. The data are described below. \begin{deluxetable*}{llccccrrrr} \tablenum{1} \tablecaption{Statistics of the observed spectra of J0419} \label{tab:spec_stat} \tablewidth{0pt} \tablehead{ \colhead{Num} & \colhead{Telescope} & \colhead{HMJD} & \colhead{Obs. Date} & \colhead{Exp. time (s)} & \colhead{Phase} & \colhead{SNR} & \colhead{Resolution} & \colhead{RV (\ensuremath{\mathrm{km\,s^{-1}}})} & \colhead{$\rm EW_{H\alpha}$ (\AA)} } \startdata 1 & LAMOST MRS & 58795.69 & 2019-11-08 16:36:34 & 1200 & 0.93 & 10.1 & 7500 & $166.3_{-5.8}^{+5.8}$ & $4.03\pm 0.17$ \\ 2 & LAMOST MRS & 58795.71 & 2019-11-08 16:59:34 & 1200 & 0.96 & 9.2 & 7500 & $129.7_{-4.8}^{+6.8}$ & $3.41\pm 0.18$ \\ 3 & LAMOST MRS & 58795.72 & 2019-11-08 17:22:34 & 1200 & 0.99 & 8.4 & 7500 & $97.2_{-7.5}^{+5.8}$ & $4.12\pm 0.20$ \\ 4 & LAMOST MRS & 58795.76 & 2019-11-08 18:12:34 & 1200 & 0.04 & 10.7 & 7500 & $17.5_{-5.8}^{+5.5}$ & $4.36\pm 0.15$ \\ 5 & LAMOST MRS & 58795.78 & 2019-11-08 18:36:34 & 1200 & 0.07 & 10.1 & 7500 & $-13.3_{-4.0}^{+4.0}$ & $4.22\pm 0.17$ \\ 6 & LAMOST MRS & 58795.79 & 2019-11-08 18:59:34 & 1200 & 0.10 & 10.3 & 7500 & $-45.8_{-4.8}^{+5.5}$ & $3.94\pm 0.17$ \\ 7 & LAMOST LRS & 58837.61 & 2019-12-20 14:42:16 & 600 & 0.97 & 20.6 & 1800 & $104.5_{-7.0}^{+6.0}$ & $10.10\pm 0.17$ \\ 8 & LAMOST LRS & 58837.62 & 2019-12-20 14:56:16 & 600 & 0.99 & 20.2 & 1800 & $90.5_{-8.0}^{+7.0}$ & $9.05\pm 0.17$ \\ 9 & LAMOST LRS & 58837.63 & 2019-12-20 15:09:16 & 600 & 0.00 & 21.5 & 1800 & $65.8_{-7.0}^{+7.0}$ & $8.43\pm 0.16$ \\ 10 & LAMOST LRS & 58837.65 & 2019-12-20 15:31:16 & 600 & 0.03 
& 23.5 & 1800 & $30.0_{-6.0}^{+6.0}$ & $9.59\pm 0.16$ \\ 11 & LAMOST LRS & 58837.66 & 2019-12-20 15:45:16 & 600 & 0.05 & 25.2 & 1800 & $9.3_{-6.0}^{+6.0}$ & $9.14\pm 0.15$ \\ 12 & LAMOST LRS & 58837.67 & 2019-12-20 15:58:16 & 600 & 0.06 & 26.3 & 1800 & $-12.8_{-6.8}^{+6.0}$ & $9.73\pm 0.14$ \\ 13 & 2.16-meter & 59140.74 & 2020-10-18 17:39:32 & 1800 & 0.20 & 119.0 & 300 & - & $15.69\pm 0.12$ \\ 14 & 2.16-meter & 59140.76 & 2020-10-18 18:09:37 & 1800 & 0.23 & 107.7 & 300 & - & $15.64\pm 0.11$ \\ 15 & 2.16-meter & 59140.79 & 2020-10-18 18:56:15 & 1800 & 0.28 & 107.2 & 300 & - & $16.03\pm 0.11$ \\ 16 & 2.16-meter & 59140.81 & 2020-10-18 19:26:20 & 1800 & 0.32 & 102.4 & 300 & - & $16.65\pm 0.11$ \\ 17 & 2.16-meter & 59140.83 & 2020-10-18 19:57:45 & 1200 & 0.35 & 80.3 & 300 & - & $16.22\pm 0.13$ \\ 18 & 2.16-meter & 59140.85 & 2020-10-18 20:17:50 & 1200 & 0.38 & 73.4 & 300 & - & $16.45\pm 0.14$ \\ 19 & 2.16-meter & 59140.86 & 2020-10-18 20:37:55 & 1200 & 0.40 & 70.9 & 300 & - & $17.03\pm 0.14$ \\ 20 & 2.16-meter & 59141.75 & 2020-10-19 17:56:03 & 1200 & 0.86 & 68.7 & 300 & - & $14.09\pm 0.17$ \\ 21 & 2.16-meter & 59141.76 & 2020-10-19 18:16:08 & 1200 & 0.88 & 67.7 & 300 & - & $13.36\pm 0.18$ \\ 22 & 2.16-meter & 59141.78 & 2020-10-19 18:36:13 & 1200 & 0.91 & 64.9 & 300 & - & $13.05\pm 0.18$ \\ 23 & 2.16-meter & 59141.80 & 2020-10-19 19:09:25 & 1200 & 0.95 & 64.1 & 300 & - & $12.07\pm 0.18$ \\ 24 & 2.16-meter & 59141.81 & 2020-10-19 19:29:30 & 1200 & 0.97 & 65.3 & 300 & - & $12.39\pm 0.17$ \\ 25 & 2.16-meter & 59141.83 & 2020-10-19 19:49:35 & 1200 & 0.99 & 63.3 & 300 & - & $14.59\pm 0.17$ \\ 26 & 2.16-meter & 59141.84 & 2020-10-19 20:10:01 & 1200 & 0.02 & 61.1 & 300 & - & $14.35\pm 0.19$ \\ 27 & 2.16-meter & 59141.85 & 2020-10-19 20:30:06 & 1200 & 0.04 & 61.2 & 300 & - & $15.92\pm 0.18$ \\ 28 & 2.16-meter & 59141.87 & 2020-10-19 20:50:11 & 1200 & 0.06 & 57.7 & 300 & - & $14.97\pm 0.17$ \\ 29 & 2.16-meter & 59192.62 & 2020-12-09 14:46:53 & 1800 & 0.64 & 47.4 & 620 & 
$232.5_{-24.0}^{+12.8}$ & $8.27\pm 0.20$ \\ 30 & 2.16-meter & 59192.64 & 2020-12-09 15:16:59 & 1800 & 0.67 & 48.9 & 620 & $284.2_{-24.0}^{+13.0}$ & $9.31\pm 0.21$ \\ 31 & 2.16-meter & 59192.66 & 2020-12-09 15:57:02 & 1800 & 0.72 & 56.1 & 620 & $318.7_{-19.8}^{+5.9}$ & $9.63\pm 0.17$ \\ 32 & 2.16-meter & 59192.69 & 2020-12-09 16:27:08 & 1800 & 0.75 & 56.3 & 620 & $303.0_{-21.2}^{+6.5}$ & $9.85\pm 0.17$ \\ 33 & 2.16-meter & 59192.71 & 2020-12-09 17:06:44 & 1800 & 0.80 & 55.7 & 620 & $278.8_{-21.9}^{+6.1}$ & $8.57\pm 0.17$ \\ 34 & 2.16-meter & 59192.74 & 2020-12-09 17:40:42 & 1800 & 0.84 & 53.2 & 620 & $265.1_{-22.3}^{+6.4}$ & $7.94\pm 0.19$ \\ 35 & LAMOST MRS & 59213.57 & 2020-12-30 13:37:59 & 1200 & 0.15 & 3.4 & 7500 & $-74.7_{-23.2}^{+21.2}$ & $4.83\pm 0.76$ \\ 36 & LAMOST MRS & 59213.58 & 2020-12-30 14:01:23 & 1200 & 0.17 & 3.5 & 7500 & $-66.6_{-21.2}^{+21.2}$ & $8.03\pm 1.10$ \\ 37 & LAMOST MRS & 59213.60 & 2020-12-30 14:24:45 & 1200 & 0.20 & 3.6 & 7500 & $-111.0_{-14.2}^{+14.1}$ & $11.95\pm 0.92$ \\ 38 & LAMOST MRS & 59213.62 & 2020-12-30 14:48:09 & 1200 & 0.23 & 3.7 & 7500 & $-118.1_{-18.1}^{+13.2}$ & $12.73\pm 0.75$ \\ 39 & LAMOST MRS & 59213.64 & 2020-12-30 15:18:54 & 1200 & 0.26 & 3.2 & 7500 & $-108.9_{-20.3}^{+27.2}$ & $13.20\pm 0.88$ \\ 40 & LAMOST MRS & 59213.65 & 2020-12-30 15:42:17 & 1200 & 0.29 & 3.1 & 7500 & $-112.0_{-67.6}^{+29.3}$ & - \\ 41 & LAMOST MRS & 59240.47 & 2021-01-26 11:13:44 & 1200 & 0.45 & 2.0 & 7500 & $7.0_{-18.3}^{+21.8}$ & $-0.19\pm 0.96$ \\ 42 & LAMOST MRS & 59240.48 & 2021-01-26 11:37:44 & 1200 & 0.48 & 2.6 & 7500 & $27.0_{-20.0}^{+24.8}$ & $1.40\pm 0.83$ \\ 43 & LAMOST MRS & 59240.50 & 2021-01-26 12:00:44 & 1200 & 0.50 & 2.6 & 7500 & $98.7_{-56.1}^{+23.5}$ & $-0.22\pm 0.83$ \\ 44 & LAMOST MRS & 59242.47 & 2021-01-28 11:11:18 & 1200 & 0.74 & 5.8 & 7500 & $307.5_{-9.1}^{+10.0}$ & $9.03\pm 0.60$ \\ 45 & LAMOST MRS & 59242.48 & 2021-01-28 11:34:40 & 1200 & 0.77 & 4.8 & 7500 & $305.4_{-24.1}^{+23.2}$ & $8.13\pm 0.59$ \\ 46 & LAMOST MRS 
& 59242.50 & 2021-01-28 11:58:02 & 1200 & 0.79 & 5.0 & 7500 & $301.4_{-16.1}^{+13.1}$ & $5.88\pm 0.58$ \\ 47 & Lijiang 2.4-meter & 59248.55& 2021-02-03 13:09:16& 1801& 0.76& 40.5 & 850 & $299.7_{-12.0}^{+13.0}$ & $2.67\pm 0.18$ \\ \enddata \tablecomments{ The HMJD is the mid-exposure time. The heliocentric corrections have been applied to the RVs. We did not measure the RVs of the spectra observed with the G4 grism because of their low resolution. The spectrum of line 40 has no red-arm data and therefore no information about the \ensuremath{\rm H\alpha}\ emission line. } \end{deluxetable*} \subsection{Spectroscopic Data} \subsubsection{LAMOST spectra} LAMOST is a uniquely designed 4-meter reflecting Schmidt telescope that can obtain 4000 spectra simultaneously in a field of view of $5^{\circ}$ \citep{cui2012,zhao2012}. The wavelength coverage of the LAMOST low-resolution ($R \sim 1800$) spectra ranges from 3690\,\AA\ to 9100\,\AA\ \citep{luo2016}. The LAMOST medium-resolution ($R \sim 7500$; see \citealt{liuchao2020}) spectra have two arms: the blue arm covers the wavelength range of 4950\,\AA\ to 5350\,\AA, and the red arm covers 6300\,\AA\ to 6800\,\AA\ \citep{zong2018}. For both low- and medium-resolution spectra, LAMOST's observation strategy is to perform 2--4 consecutive short exposures of 10--20 minutes each (see Table \ref{tab:spec_stat}). In studies of close binaries with periods of less than one day, the RV changes significantly between two consecutive LAMOST exposures. Hence, the RVs of the short-exposure LAMOST spectra are crucial in our study \citep{Mu2022}. LAMOST MRS conducted the first observation of J0419 on Nov 8, 2019, with six consecutive exposures. The RVs (see Section \ref{sec:rv}) span from 166\,\ensuremath{\mathrm{km\,s^{-1}}}\ to -46\,\ensuremath{\mathrm{km\,s^{-1}}}\ in 2.4 hours.
LAMOST LRS made another six consecutive exposures on Dec 20, 2019, each with an exposure time of 600\,s; the resulting RVs range from 105\,\ensuremath{\mathrm{km\,s^{-1}}}\ to -13\,\ensuremath{\mathrm{km\,s^{-1}}}. On 2020-12-30, 2021-01-26, and 2021-01-28, LAMOST MRS performed follow-up observations of J0419 and obtained a total of 12 single-exposure spectra. Because of the bright moon on those nights, the LAMOST follow-up spectra have very low SNRs. Nevertheless, we still use them to measure the corresponding RVs. The LAMOST spectroscopic observations are summarized in Table \ref{tab:spec_stat}. We combine the LAMOST spectra observed on the same night after correcting the wavelength of each spectrum to the rest frame and plot them in Figure \ref{fig:spec_all}. The spectra show evident Balmer and $\mathrm{He~I}$ emission lines with significant double-peaked profiles in most of the LAMOST observations, suggesting that the emission lines are not produced by the visible star. We discuss the emission lines in Section \ref{sec:dis_emission}. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{fig_spec_all.pdf} \caption{Normalized average spectra of J0419. Each average spectrum is generated by combining the spectra observed on the same night. Prior to the combination, the wavelengths were shifted to the rest frame. The observation information is marked next to each average spectrum.} \label{fig:spec_all} \end{figure*} \subsubsection{The 2.16-meter telescope spectra} We carried out two spectroscopic observing runs on J0419 using the 2.16-meter telescope \citep{Fan2016} at the Xinglong Observatory\footnote{\url{http://www.xinglong-naoc.org/}}. The first run was performed from 2020-10-18 to 2020-10-19, with 16 exposures. We chose the combination of the G4 grism and the 1\arcsec.8 slit, which yielded an instrumental broadening (FWHM) of 18.4\,\AA\ measured from skylines. The resolution is too low to measure the RVs.
These spectra show strong Balmer and He~I emission lines (see Figure \ref{fig:spec_all}). The second run was performed on Dec 9, 2020, with the grism changed to G7 to improve the spectral resolution, yielding an FWHM of 9.0\,\AA\ measured from skylines. These spectra also show evident emission lines, although the equivalent widths (EWs) of the emission lines are smaller than in the first run. We used \texttt{IRAF v2.16} to reduce the spectra following the standard procedure. The heliocentric correction was made using the \texttt{helcorr} function in the python package \texttt{PyAstronomy}. \subsubsection{The Lijiang 2.4-meter telescope spectra} On February 3, 2021, we used the Yunnan Faint Object Spectrograph and Camera (YFOSC), mounted on the Lijiang 2.4-meter telescope\footnote{\url{http://gmg.org.cn/v2/}} at the Yunnan Observatories of the Chinese Academy of Sciences, to observe J0419. YFOSC is a multifunctional instrument for both photometry and spectroscopy, with a $2k \times 4k$ back-illuminated CCD detector. More information about YFOSC can be found in \citet{lu2019}. The G14 grism and a 1\arcsec.0 slit were used, resulting in a wavelength coverage of 3800\,\AA\ -- 7200\,\AA\ with a spectral resolution of 6.5\,\AA\ measured from skylines. The Lijiang spectrum shows weak Balmer emission lines, and most of the He~I emission lines cannot even be seen in the spectrum. The data reduction process for the Lijiang data is similar to that for the Xinglong 2.16-meter spectra. \subsection{Photometric data} We collect the light curves of J0419 from several publicly available photometric surveys. The light curves are used to determine the orbital period and to analyze the variability (Figure \ref{fig:lc_all}). We introduce the photometric data below. \begin{figure*} \centering \includegraphics[width=\textwidth]{fig_all_lc.pdf} \caption{The light curves of J0419. Colors represent different surveys.
No outburst events were captured in the light curves.} \label{fig:lc_all} \end{figure*} \subsubsection{TESS} TESS observed J0419 in two sectors, in 2018 and 2020, using the full-frame image (FFI) mode. The first sector was taken from 2018-11-15 to 2018-12-11 and the second from 2020-11-20 to 2020-12-16, with exposure times of 1426\,s and 475\,s, respectively. Totals of 1176 and 3589 points were obtained in the two sectors. We use the python package \texttt{lightkurve}\footnote{\url{https://docs.lightkurve.org/}} \citep{Lightkurve2018} to reduce the data and extract the TESS light curves of J0419. Images with background counts higher than 150 were eliminated before the light-curve extraction, because reliable fluxes cannot be obtained from such seriously contaminated images. After visual inspection, we retain 1111 and 3347 points for the first and second observations, respectively. We use a Pixel Level Decorrelation (PLD, see \citealt{Deming2015,Luger2016,Luger2018}) method to remove systematic instrumental trends. The TESS light curves are shown in Figure \ref{fig:tess_lc}. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{tess_J0419_0.pdf}\\ \includegraphics[width=0.47\textwidth]{tess_J0419_1.pdf} \caption{The TESS light curves of J0419. The top panel was observed from 2018-11-15 to 2018-12-11, and the bottom panel from 2020-11-20 to 2020-12-16. The light curve in the top panel shows obvious evolution of the peaks and valleys with time. } \label{fig:tess_lc} \end{figure} \subsubsection{ASAS-SN} ASAS-SN is an automated program that surveys the entire visible sky every night down to about 18th magnitude \citep{Shappee2014,Kochanek2017}. For J0419, ASAS-SN obtained the $V$-band light curve from 2012-02-16 to 2018-11-29 and the $G$-band light curve from 2017-09-22 to 2021-07-14.
The $V$-band light curve contains 1000 points with a typical uncertainty of 0.051~mag, and the $G$-band light curve contains 1943 points with a typical uncertainty of 0.047~mag. We only include data points with uncertainties less than 0.1~mag, i.e., 895 data points for the $V$-band light curve and 1656 data points for the $G$-band light curve. During the ASAS-SN observations, the $V$-band light curve shows a long-term trend of increasing flux from 2014 to 2020 with an amplitude of about 0.2~mag (see Figure \ref{fig:lc_all}). In addition, three points of the ASAS-SN $G$-band light curve near 2019 in Figure \ref{fig:lc_all} lie far above the mean flux range, raising the suspicion of an outburst event. However, the ZTF points observed on the same night have normal fluxes, and there is no sign of an outburst in the ASAS-SN points observed on adjacent nights. We suspect that the three points are outliers, possibly caused by unknown instrumental or data-processing problems. \subsubsection{CRTS} J0419 is in the CRTS catalog, with observations from 2005-10-01 to 2013-10-27. In the eight years of monitoring, CRTS obtained 394 points. The CRTS monitoring began about 7 years earlier than the ASAS-SN survey. During the CRTS monitoring, the light curve was stable and did not show a long-term trend or short-term outbursts. \subsubsection{ZTF} We also collected the optical light curve of J0419 from the public DR7 of the ZTF program. The ZTF $g$-band light curve of J0419 has 340 points with a median flux uncertainty of 0.013~mag during the observation from 2018-03-27 to 2021-03-23. The $r$-band has 349 points with a median flux uncertainty of 0.012~mag during the observation from 2018-03-28 to 2021-03-28. The ZTF data almost overlap the $G$-band light curve of ASAS-SN in time coverage but have a higher flux precision. \section{Data analysis} \label{sec:data_analysis} \subsection{\gaia information} The \gaia DR3 ID of J0419 is 3298897073626626048.
We collect the astrometric information of J0419 from \gaia early data release 3 (EDR3; see \citealt{gaia2021}), which provides a parallax of $\parallax = 1.45 \pm 0.03$~mas with proper motions of $\mu_\alpha = 2.17 \pm 0.03$~mas\,yr$^{-1}$ and $\mu_\delta = -0.68 \pm 0.02$~mas\,yr$^{-1}$. Based on a parallax zero-point correction $zpt = -0.043908$~mas from \citet{Lindegren2021}, we obtain a distance to J0419 of $d = 671.3 \pm 12.5 \pc$. \begin{deluxetable*}{llrl} \tablenum{2} \tablecaption{Orbital and stellar parameters of J0419. The astrometric parameters are from \gaia EDR3. The stellar parameters from SED fitting and spectral fitting are all listed in this table.} \label{tab:orbpar} \tablewidth{0pt} \tablehead{ \colhead{\textbf{Parameter}} & \colhead{\textbf{Unit}} & \colhead{\textbf{Value}} & \colhead{\textbf{Note}} } \startdata \multicolumn{4}{l}{\textbf{Astrometric parameters}} \\ R.A. & h:m:s (J2000) & 04:19:20.07 & Right Ascension\\ Decl. & d:m:s (J2000) & +07:25:45.4 & Declination\\ \gaia parallax & mas & $1.45\pm 0.03$ & The parallax measured by \gaia EDR3\\ $d (\gaia)$ & pc & $671.3 \pm 12.5$ & Distance derived from \gaia EDR3\\ $\mu_\alpha$ & mas\,yr$^{-1}$ & $2.17 \pm 0.03$ & Proper motion in the right ascension direction\\ $\mu_\delta$ & mas\,yr$^{-1}$ & $-0.68 \pm 0.02$ & Proper motion in the declination direction\\ $G$-band magnitude & mag & $14.70\pm 0.01$ & The $G$-band magnitude measured by \gaia EDR3\\ \hline \multicolumn{4}{l}{\textbf{Orbital parameters}} \\ $P_\mathrm{orb}$ & days & 0.6071890(3) & Orbital period\\ $T_0$ & HJD & 2453644.8439(5) & Ephemeris zero-point\\ $K_1$ & \ensuremath{\mathrm{km\,s^{-1}}} & $216\pm3$ & RV semi-amplitude of the visible star\\ $\gamma$ & \ensuremath{\mathrm{km\,s^{-1}}} & $86\pm3$ & The systemic RV of J0419 \\ $f(M_2)$ & $M_\odot$ & $0.63 \pm 0.03$ & Mass function of the compact star\\ \hline \multicolumn{4}{l}{\textbf{Parameters of the pre-ELM}} \\ $T_\mathrm{eff}$ & K & $5793_{-133}^{+124}$ &
Effective temperature derived from SED fitting \\ $T_\mathrm{eff}$ (spectral fit) & K & $5776\pm168$ & Effective temperature derived from spectral fitting\\ $\log g$ (spectral fit) & dex & $3.95\pm0.45$ & Surface gravity from spectral fitting\\ $\log g$ & dex & $3.90\pm0.01$ & Surface gravity from SED fitting\\ Metallicity & $\mathrm{[M/H]}$ & $-0.86\pm0.24$ & Metallicity from spectral fitting\\ $M_1$ & $M_\odot$ & $0.176\pm 0.014$ & Mass of the visible star\\ $R_1$ & $R_\odot$ & $0.782_{-0.019}^{+0.021}$ & Effective radius of the visible star \\ $L_\mathrm{bol}$ & $L_\odot$ & $0.62_{-0.10}^{+0.11}$ & Bolometric luminosity of the visible star\\ $A(V)_{\rm SED}$ & mag & $0.34_{-0.10}^{+0.07}$ & The extinction value obtained from the SED fitting\\ \enddata \end{deluxetable*} \subsection{Orbital period} \label{sec:orb_period} We use the Lomb–Scargle periodogram \citep{Lomb1976,Scargle1982} to determine the photometric period of J0419. To improve the accuracy of the period, we use the longest available time series, from 2005 to 2021, to calculate the Lomb–Scargle power spectrum. We exclude the ASAS-SN data out of concern that the long-term trend might bias the measurement. The light curves of CRTS, TESS, and ZTF are combined after flux normalization. We estimate the uncertainty of the period with a bootstrap method \citep{Efron1979}, repeating the measurement 10000 times and randomly removing a subset of the points in each measurement. The Lomb–Scargle periodogram gives $P_\mathrm{orb} = 0.6071890(3)$ days. Note that for ellipsoidal variation, the real orbital period is twice the peak period of the Lomb–Scargle power spectrum.
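The procedure above (locating the periodogram peak, doubling the period because ellipsoidal variability has two maxima per orbit, and bootstrapping the uncertainty) can be sketched with a classical periodogram on synthetic data. This is a minimal numpy stand-in for the actual Lomb–Scargle analysis; all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
P_orb = 0.6071890                         # true orbital period [days]
t = np.sort(rng.uniform(0.0, 100.0, 400))  # irregular sampling
# Ellipsoidal variability has two maxima per orbit, so the dominant
# photometric signal has period P_orb / 2.
y = 0.15 * np.cos(4.0 * np.pi * t / P_orb) + rng.normal(0.0, 0.02, t.size)

def peak_frequency(t, y, freqs):
    """Classical periodogram; returns the trial frequency of maximum power."""
    yc = y - y.mean()
    arg = 2.0 * np.pi * freqs[:, None] * t[None, :]
    power = (np.cos(arg) @ yc) ** 2 + (np.sin(arg) @ yc) ** 2
    return freqs[np.argmax(power)]

freqs = np.linspace(2.0, 4.5, 10000)      # trial frequencies [1/day]
P_est = 2.0 / peak_frequency(t, y, freqs)  # double the peak period
print(P_est)                               # ≈ 0.60719 days

# Bootstrap uncertainty: repeat the measurement on random subsets.
boot = []
for _ in range(20):
    idx = rng.choice(t.size, 320, replace=False)
    boot.append(2.0 / peak_frequency(t[idx], y[idx], freqs))
P_err = np.std(boot)
```

In practice the grid spacing, frequency range, and number of bootstrap repetitions (10000 in the text) control the final precision of the period.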
In order to determine the ephemeris zero point $T_0$, we use a three-term Fourier model \citep{Morris1993}, \begin{equation} \begin{aligned} f(t) =\ & a_0 \cos [\omega (t-T_0)] + a_1 \cos[2\omega (t-T_0)]\\ & + a_2 \cos [3\omega (t-T_0)], \end{aligned} \end{equation} to fit the normalized light curve, where $\omega = 2 \pi/ P_\mathrm{orb}$ and $a_0$, $a_1$, $a_2$ are coefficients describing the light-curve profile. We find the best-fitting parameters by minimizing the $\chi^2$ statistic, which yields an ephemeris zero point of $T_0 = 2453644.8439(5)$, where $T_0$ corresponds to the superior conjunction. We list $P_\mathrm{orb}$ and $T_0$ in Table \ref{tab:orbpar}. The light curves from different surveys or filters, folded using $P_\mathrm{orb}$ and $T_0$, are shown in Figure \ref{fig:folded_lc}. \subsection{Photometric variability} \label{sec:phot_var} The folded light curves (Figure \ref{fig:folded_lc}) show ellipsoidal variability with amplitudes of about 0.3~mag, which, together with the evidence of mass transfer (the obvious emission lines in the spectra), indicates that the visible star already fills its Roche lobe. We did not find any outburst event of this source in the 15 years of photometric monitoring, suggesting that the mass transfer rate is very low. This behavior is similar to that of the sources of \citet{elbadry2021b,Badry2021} and different from that of normal CVs. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{fig_all_foldlc.pdf} \caption{The folded light curves of J0419. The light curves show ellipsoidal variability with a full amplitude of about 0.3~mag. The two sectors of TESS data are shown in two separate panels.} \label{fig:folded_lc} \end{figure*} The high-cadence TESS observations can be used to examine the light-curve profile in each cycle.
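The three-term Fourier fit for $T_0$ described above can be illustrated on synthetic data: at a fixed trial $T_0$ the coefficients $a_0$, $a_1$, $a_2$ enter linearly, so a grid search over $T_0$ combined with linear least squares suffices. This is a minimal sketch with a hypothetical true $T_0$; an arbitrary sign convention on $a_0$ is used here to break the $T_0 \to T_0 + P/2$ degeneracy of the dominant second harmonic.

```python
import numpy as np

rng = np.random.default_rng(2)
P = 0.6071890
omega = 2.0 * np.pi / P
T0_true = 0.25                          # hypothetical zero point [days]
t = rng.uniform(0.0, 30.0, 600)
y = (0.02 * np.cos(omega * (t - T0_true))
     + 0.15 * np.cos(2.0 * omega * (t - T0_true))
     + 0.01 * np.cos(3.0 * omega * (t - T0_true))
     + rng.normal(0.0, 0.01, t.size))

def fit_at(T0):
    """Linear least squares for (a0, a1, a2) at a fixed trial T0."""
    X = np.column_stack([np.cos(k * omega * (t - T0)) for k in (1, 2, 3)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2), coef

# Scan T0 over one period; keep the chi^2 minimum on the a0 > 0 branch,
# since the second harmonic alone is invariant under T0 -> T0 + P/2.
grid = np.linspace(0.0, P, 2001)
chi2_best, T0_best = np.inf, None
for T0 in grid:
    c2, coef = fit_at(T0)
    if coef[0] > 0 and c2 < chi2_best:
        chi2_best, T0_best = c2, T0
print(T0_best)  # ≈ 0.25
```

The grid step sets the raw resolution of $T_0$; a parabolic refinement around the minimum (not shown) would recover the sub-grid precision quoted in the text.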
The light curve observed in 2020 (bottom panel of Figure \ref{fig:tess_lc}) exhibits a typical ellipsoidal variation, while the light curve observed in 2018 (top panel of Figure \ref{fig:tess_lc}) shows a long-term evolution of the peaks and valleys beyond the orbital period. The timescale of this evolution appears to be several tens of days, consistent with the timescale of spot activity \citep{Hussain2002,Reinhold2013}. We suspect that the long-term evolution of the light curve observed in 2018 may result from spot activity. The folded light curves (except for the TESS data observed in 2020) show larger scatter than the measurement errors (see the ZTF light curves in Figure \ref{fig:folded_lc}). The extra scatter could have multiple causes. The spot activity mentioned above may introduce additional scatter. The flux from the accretion disk may also contribute to the dispersion of the light curves, although the mass transfer rate of J0419 is very low. Moreover, the temperature and $\log g$ (see Section \ref{sec:sed_fit}) suggest that J0419 falls in the pre-ELM WD instability strip \citep{corsico2016,wangkun2020}, in which pulsations can be driven by the $\kappa - \gamma$ mechanism \citep{Unno1989} and the ``convective driving'' mechanism \citep{Brickhill1991} acting at the H- and He-ionization zones; the scatter may therefore be partly due to pulsation. \subsection{Radial velocities} \label{sec:rv} We obtain the template used to measure the RV of each single-epoch spectrum of J0419 with the python package \texttt{PyHammer}\footnote{\url{https://github.com/BU-hammerTeam/PyHammer}}; the best-fitting stellar type is G0. The RVs are then measured using the cross-correlation function (CCF). The uncertainties of the RVs are estimated using the ``flux randomization random subset sampling (FR/RSS)'' method \citep{peterson1998}.
Only the wavelength range from 4910\,\AA\ to 5375\,\AA\ is used to measure the RVs, to avoid contamination from the Balmer and $\rm He~I$ emission lines and from telluric lines (see Figure \ref{fig:spec_all}). Because of their low resolution, the spectra observed with the 2.16-meter telescope using the G4 grism are excluded from the RV measurements. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{fig_RV_all.pdf} \caption{Radial velocities of the visible star. The period used to fold the RVs is $P_\mathrm{orb} = 0.607189$~days. The points observed with different telescopes or on different nights are plotted in different colors, as labeled at the top left of the panel. A sinusoidal function is used to fit the RVs. The dashed line represents the systemic RV of $\rm \gamma = 86$\,\ensuremath{\mathrm{km\,s^{-1}}}, and the black solid line is the best-fit RV curve with a semi-amplitude of $K_1 = 216$\,\ensuremath{\mathrm{km\,s^{-1}}}.} \label{fig:RV_all} \end{figure*} We fold the RVs in phase using the period $P_\mathrm{orb} = 0.607189$~days and $T_0 = 2453644.8439$ derived in Section \ref{sec:orb_period} and display the result in Figure \ref{fig:RV_all}. Since the visible star fills its Roche lobe, orbital circularization is efficient, i.e., the binary moves on a circular orbit \citep{Zahn1977}. Therefore, we fit the RVs with a circular orbit model following the equation: \begin{equation} V(t) = -K_1 \sin\left[\omega (t + \Delta t)\right] + \gamma, \end{equation} where $K_1$ is the semi-amplitude of the RVs of the visible star, $\omega = 2\pi/P_\mathrm{orb}$, $\gamma$ is the systemic velocity of J0419 with respect to the Sun, and $\Delta t$ represents a possible zero-point shift caused by the limited period accuracy when folding the RVs. The fitting results are $K_1 = 216\pm 3$\,\ensuremath{\mathrm{km\,s^{-1}}}, $\gamma = 86 \pm 3$\,\ensuremath{\mathrm{km\,s^{-1}}}, and $\Delta t = 12 \pm 3$~minutes. We display the RV model curve in Figure \ref{fig:RV_all}.
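Because $-K_1 \sin[\omega(t+\Delta t)] + \gamma$ expands into terms linear in three coefficients, the circular-orbit fit reduces to linear least squares. The sketch below recovers the quoted values from synthetic RVs (simulated data, not the actual measurements) and then evaluates the standard mass function $f(M_2) = K_1^3 P_{\rm orb}/(2\pi G)$.

```python
import numpy as np

rng = np.random.default_rng(3)
P = 0.6071890                          # orbital period [days]
omega = 2.0 * np.pi / P
K1, gamma, dt = 216.0, 86.0, 12.0 / 1440.0   # km/s, km/s, days (12 min)
t = rng.uniform(0.0, 60.0, 40)
rv = -K1 * np.sin(omega * (t + dt)) + gamma + rng.normal(0.0, 8.0, t.size)

# -K1 sin(omega(t + dt)) + gamma = A sin(omega t) + B cos(omega t) + gamma
# with A = -K1 cos(omega dt) and B = -K1 sin(omega dt): a linear problem.
X = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
(A, B, gamma_fit), *_ = np.linalg.lstsq(X, rv, rcond=None)
K1_fit = np.hypot(A, B)
dt_fit = np.arctan2(-B, -A) / omega    # recover the zero-point shift
print(K1_fit, gamma_fit)               # ≈ 216, 86

# Mass function from the fitted K1 and the orbital period (SI units).
G, Msun = 6.674e-11, 1.989e30
f_M2 = (K1_fit * 1e3) ** 3 * (P * 86400.0) / (2.0 * np.pi * G) / Msun
print(round(f_M2, 2))                  # ≈ 0.63
```

The linearization avoids any nonlinear optimizer; $K_1$ and $\Delta t$ follow from the amplitude and phase of the fitted sine and cosine terms.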
The best-fitting RV model matches the measured RVs well. The mass function of a binary is defined as \begin{equation} f(M_2) = \frac{M_2^3 \sin^3 i}{(M_1 + M_2)^2} = \frac{K_1^3 P_{\rm orb}}{2\pi G}, \label{eq:fm} \end{equation} where $M_1$ and $M_2$ are the masses of the visible star and the compact star, respectively, $K_1$ is the semi-amplitude of the RVs of the visible star, $P_\mathrm{orb}$ is the orbital period, and $i$ is the inclination angle of the binary orbit with respect to the observer. The mass function gives the minimum possible mass of the compact star. Using $P_\mathrm{orb} = 0.6071890(3)$~days and $K_1 = 216\pm 3$\,\ensuremath{\mathrm{km\,s^{-1}}}, we obtain a mass function of the compact star of $f(M_2) = 0.63\pm0.03\,M_\odot$. \subsection{Spectroscopic stellar parameters}\label{sec:spec_fit} Our spectra were obtained with different telescopes and instruments with very different wavelength coverage and resolution, making it difficult to generate a mean spectrum from all the spectra. Another problem is that most of the spectra have obvious emission lines that may interfere with the measurement of stellar parameters. Therefore, we use only the Lijiang spectrum, which has a good SNR (40.5) and the weakest emission lines. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{fig_J0419_spec_fit.pdf} \caption{The stellar spectrum fitting result. The gray spectrum in the upper panel is the observed spectrum, and the red spectrum is the model spectrum. The gray curve in the bottom panel is the residual spectrum. Wavelength regions of the observed spectrum contaminated by emission lines or telluric lines were masked before the fitting and are shaded in gray in the top panel.}\label{fig:spec_fit} \end{figure*} We use the Python package \texttt{The\_Payne}\footnote{\url{https://github.com/tingyuansen/The_Payne}} to interpolate the model spectra.
\texttt{The\_Payne} is a spectral interpolation tool that returns a template spectrum for a given set of stellar parameters. Based on a neural-network spectral interpolation algorithm \citep{Ting2019}, \texttt{The\_Payne} can interpolate the spectral grid with flexible labels efficiently. We adopt the BOSZ grid of Kurucz model spectra \citep{Bohlin2017} with five labels ($T_\mathrm{eff}$, $\log g$, $\rm [M/H]$, $\rm [C/M]$, $\rm [\alpha/M]$) to train the template model. The stellar label intervals provided by BOSZ are $\Delta T_\mathrm{eff} = 250\,\mathrm{K}$, $\Delta \log g = 0.5$\,dex, and $\Delta \mathrm{[M/H]} = 0.25$\,dex. Before the training, we reduced the resolution of the templates to match that of the observed spectrum, and the fluxes of the template spectra were normalized using pseudo-continua generated by convolving the template spectra with a Gaussian kernel ($\sigma_\mathrm{width} = 50\,\mathrm{\AA}$). We construct a likelihood function considering both $\chi^2$ and a systematic error, \begin{equation} \begin{aligned} \ln & p(f|{\rm \lambda, pms, \sigma_{sys}}) = \\ & -\frac{1}{2}\sum_{\lambda = \lambda_0}^{\lambda_n}\left[ \frac{(f_\lambda - {\rm model}_\lambda)^2}{s_\lambda^2} + \ln (2\pi s_\lambda^2) \right], \end{aligned} \end{equation} where \begin{equation} s_\lambda^2 = \sigma_\lambda^2 + \sigma_\mathrm{sys}^2, \end{equation} $f$ represents the normalized spectrum, $\rm pms$ denotes the stellar labels of the template spectrum, and $\sigma_{\rm sys}$ is the systematic error. The posteriors of $\rm pms$ and $\sigma_{\rm sys}$ are sampled with the software \texttt{emcee} \citep{Foreman2013} based on the Markov chain Monte Carlo (MCMC) method. The \texttt{emcee} sampling yields $T_{\rm eff} = 5776$\,K, $\log g = 3.95$\,dex, and $\rm [M/H] = -0.86$\,dex.
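The likelihood above can be written compactly in code. Below is a minimal sketch; the \texttt{model\_fn} callable is a hypothetical stand-in for a spectral interpolator such as \texttt{The\_Payne}, not its real API:

```python
import numpy as np

def log_likelihood(theta, flux, flux_err, wavelength, model_fn):
    """Gaussian log-likelihood with an extra systematic ('jitter') term,
    s^2 = sigma^2 + sigma_sys^2, following the equations above.

    theta    : stellar labels followed by ln(sigma_sys)
    model_fn : callable(labels, wavelength) -> template flux
               (hypothetical stand-in for a spectral interpolator)
    """
    *labels, ln_sigma_sys = theta
    model = model_fn(labels, wavelength)
    s2 = flux_err**2 + np.exp(2.0 * ln_sigma_sys)
    return -0.5 * np.sum((flux - model)**2 / s2 + np.log(2.0 * np.pi * s2))
```

After adding priors, a function like this can be passed to \texttt{emcee}'s \texttt{EnsembleSampler} as the log-probability.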
Similar to \citet{xiang2019} and \citet{Badry2021}, our fitting also underestimates the uncertainties of the stellar parameters, giving small values of $\Delta T_\mathrm{eff} = 56$\,K, $\Delta\log g = 0.19$~dex, and $\Delta \mathrm{[M/H]} = 0.08$~dex. For simplicity, we directly use $3\sigma$ of the posterior sample as our fitting uncertainties. The fitting results are listed in Table \ref{tab:orbpar}. Figure \ref{fig:spec_fit} shows the fitting result for the Lijiang spectrum. The gray spectrum in the top panel is the observed data, and the red spectrum is the best-fitting template. The residual spectrum, $f_{\rm obs} - f_{\rm model}$, is shown in the bottom panel. The shaded region in Figure \ref{fig:spec_fit} was masked when we performed the fit. We find that the model spectrum agrees well with the observed spectrum, except for the $\rm H\beta$, $\rm H\alpha$, and several weak $\rm He~I$ emission lines clearly visible in the residual spectrum. These emission lines are common in CV spectra \citep[e.g.][]{Sheets2007}. The LAMOST spectra with higher resolution show clear double-peaked emission lines, which suggests that the emission lines are produced by the accretion disk rather than by the visible star. The equivalent widths (EWs) of the emission lines vary greatly among the different observations. If the EWs of the emission lines reflect the mass transfer rate in the binary, the EW variations indicate that the mass transfer process is intermittent. We discuss the emission lines further in Section \ref{sec:dis_emission}. Our spectra show strong sodium ``D'' absorption lines, in excess of the template spectrum, at wavelengths of 5890\,\AA\ and 5896\,\AA, similar to the result of \citet{Badry2021}. The sodium is thought to originate from CNO processing, whose products can only reach the surface of a star after most of its envelope has been stripped off. Therefore, sodium enhancement is generally observed in evolved CV donors.
The presence of strong sodium absorption lines suggests that the visible star of J0419 is an evolved star that has lost most of its hydrogen envelope. \subsection{The broad-band spectral energy distribution fitting}\label{sec:sed_fit} \begin{table*} \centering \tablenum{3} \begin{tabular}{ccccccc} \toprule Survey & Filter & $N_\mathrm{obs}$ & $\lambda_\mathrm{effective}$ & AB mag & Vega mag & $\log \lambda f_\lambda$ \\ & & & ($\mu m$) & (mag) & (mag) & $\log(\mathrm{erg\ s^{-1}\ cm^{-2}})$ \\ \hline \multirow{5}{*}{APASS} & Johnson~$B$ & 10 & 0.435 & & $15.55 \pm 0.07$ & $-10.787 \pm 0.026$ \\ & SDSS~$g$ & 10 & 0.472 & $15.13 \pm 0.05$ & & $-10.689 \pm 0.020$ \\ & Johnson~$V$ & 10 & 0.550 & & $14.83 \pm 0.07$ & $-10.633 \pm 0.027$ \\ & SDSS~$r$ & 10 & 0.619 & $14.61 \pm 0.07$ & & $-10.597 \pm 0.026$ \\ & SDSS~$i$ & 10 & 0.750 & $14.42 \pm 0.07$ & & $-10.606 \pm 0.028$ \\ \hline \multirow{5}{*}{Pan-STARRS} & PS1~$g$ & 8 & 0.487 & $14.98 \pm 0.03$ & & $-10.643 \pm 0.014$ \\ & PS1~$r$ & 10 & 0.621 & $14.69 \pm 0.03$ & & $-10.633 \pm 0.012$ \\ & PS1~$i$ & 22 & 0.754 & $14.49 \pm 0.02$ & & $-10.637 \pm 0.008$ \\ & PS1~$z$ & 16 & 0.868 & $14.35 \pm 0.02$ & & $-10.641 \pm 0.009$ \\ & PS1~$y$ & 13 & 0.963 & $14.30 \pm 0.03$ & & $-10.669 \pm 0.010$ \\ \hline \multirow{3}{*}{2MASS} & 2MASS~$J$ & 1 & 1.241 & & $13.43 \pm 0.1$ & $-10.787 \pm 0.040$ \\ & 2MASS~$H$ & 1 & 1.651 & & $13.10 \pm 0.1$ & $-10.968 \pm 0.040$ \\ & 2MASS~$K_\mathrm{s}$ & 1 & 2.166 & & $13.03 \pm 0.1$ & $-11.243 \pm 0.040$ \\ \hline WISE & $\mathrm{W_1}$ & 26 & 3.379 & & $12.97 \pm 0.02$ & $-11.745 \pm 0.010$\\ & $\mathrm{W_2}$ & 26 & 4.629 & & $12.91 \pm 0.03$ & $-12.115 \pm 0.011$\\ \hline \end{tabular} \caption{The SED of J0419. The number of observations is shown in the third column; the APASS data are a combination of DR9 and DR10. Both the APASS and Pan-STARRS data include additional systematic errors caused by sampling.
The magnitudes of 2MASS have been increased by 0.1~mag to correct the phase offset, and the errors have been increased to 0.1~mag. The AB and Vega magnitudes are displayed in two separate columns.} \label{tab:mag} \end{table*} We use the broad-band spectral energy distribution (SED) to constrain the stellar parameters and the extinction of J0419. In the SED fitting, the peak wavelength from the UV to the optical can be used to constrain the effective temperature, $T_\mathrm{eff}$. The deviation of the SED slope from the Rayleigh-Jeans law in the mid-infrared band is generally considered to be caused by extinction and can be used to estimate the extinction value \citep{Majewski2011}. The colors between the $u$ band and the other bands trace the metal abundance, $\mathrm{[Fe/H]}$ \citep{huang2021}. If we know the distance of a source, we can obtain the effective radius from the SED fitting. We use the Python package \texttt{astroARIADNE}\footnote{ \url{https://github.com/jvines/astroARIADNE}} to fit the SED of J0419. \texttt{astroARIADNE} is designed to fit broadband photometry automatically, based on a set of stellar atmosphere models, using the nested sampling algorithm. The fitting parameters in \texttt{astroARIADNE} include $T_\mathrm{eff}$, $\log g$, [Fe/H], distance, stellar radius, and the extinction parameter $A(V)$. \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{fig_J0419_SED.pdf} \caption{The SED fitting of J0419. The multiband photometric points are plotted in the top panel, with the filter information displayed near the data points. The model photometry is also shown in the top panel with orange open squares, and the model template spectrum is plotted in gray. The residuals, $f_{\rm obs} - f_{\rm model}$, are plotted in the bottom panel.} \label{fig:sed} \end{figure*} We collect multi-band photometric data of J0419. Because GALEX \citep{Martin2005} and $Swift$ \citep{Gehrels2004} did not observe J0419, we lack UV photometry.
The photometric data flags of J0419 in the SDSS survey \citep{York2000} suggest that its magnitudes may be unreliable. We therefore use the APASS \citep[$B$, $g$, $V$, $r$, and $i$ bands;][]{Henden2015} data instead. Our photometric data also include Pan-STARRS \citep[$g$, $r$, $i$, $z$, and $y$ bands;][]{Chambers2016}, 2MASS \citep[$J$, $H$, and $K_\mathrm{s}$ bands;][]{Skrutskie2006}, and WISE \citep[$\rm W_1$ and $\rm W_2$ bands;][]{Wright2010}. For the APASS survey, the DR9 data include six detections, and DR10 includes four detections. We merge these two datasets using the inverse of the error as the weight. For the Pan-STARRS survey, the numbers of single-epoch detections in each band are 8, 10, 22, 16, and 13, respectively. Considering that J0419 shows significant variations, we add systematic uncertainties caused by random sampling of the light curves to the errors of the photometric data. The systematic uncertainty in each band is estimated as $\sigma_\mathrm{sys} = \mathrm{std} / \sqrt{N}$, where $\mathrm{std}$ is the mean standard deviation of the light curves of J0419 and $N$ is the number of observations in the band. The 2MASS survey observed J0419 only once, on 1999-12-07 08:27:34.51, at the corresponding phase of 0.28. Assuming that the IR-band light curve has the same variation amplitude as the optical band, we calculate the deviation between the observed magnitude and the mean magnitude and add 0.1~mag to the 2MASS data to correct for it. Considering the time interval between 2MASS and the other surveys (10 to 20 years) and a possible light curve trend, we increase the magnitude uncertainties of 2MASS to 0.1~mag. All the photometric data are summarized in Table \ref{tab:mag}. The visible star of J0419 fills its Roche lobe (Section \ref{sec:phot_var}).
In this case, the mean density is given by \begin{equation} \bar{\rho} = \frac{3 M_1}{4 \pi R_1^3} \cong 110 P_\mathrm{hr}^{-2}\,\mathrm{g\,cm^{-3}}, \label{eq:rourobe} \end{equation} where $M_1$ is the mass, $R_1$ is the equivalent radius of the Roche-filling star, and the period, $P_\mathrm{hr}$, is in units of hours \citep{Frank2002}. Equation \ref{eq:rourobe} shows that the mean density of the Roche lobe depends only on the orbital period. Combining the radius derived from the \texttt{astroARIADNE} fitting with the mean density of the Roche lobe, we can calculate the mass and \ensuremath{\log g~} of the visible star. We fit the SED in the following way. First, we use the distance measured by \gaia EDR3 as the prior on the distance parameter, with the other parameters set to their default values, and use the radius from the fitting result to calculate the mass and $\log g$ of the visible star (Equation \ref{eq:rourobe}). Then, we update the \ensuremath{\log g~} prior with this calculation and refit the SED. Since \ensuremath{\log g~} and $\rm [Fe/H]$ have little effect on the SED, the fitting parameters converge quickly. The radius obtained from the SED fitting is $R_1 = 0.782_{-0.019}^{+0.021} \,R_\odot$, and the corresponding visible star mass is $M_1 = 0.176\pm0.014\, M_\odot$. The effective temperature of the visible star is $T_\mathrm{eff} = 5793_{-133}^{+124}\, \mathrm{K}$. The bolometric luminosity derived from the SED fitting is $L_\mathrm{bol} = 4\pi R_1^2\,\sigma T_\mathrm{eff}^4 = 0.62_{-0.10}^{+0.11}\, L_\odot$. We summarize the SED fitting results in Table \ref{tab:orbpar} and show the best-fit model in Figure \ref{fig:sed}. From the mass function, $f(M_2) = 0.63\,M_\odot$, and the visible star mass, $M_1 = 0.176 \,M_\odot$, we obtain the minimum mass of the compact star, $M_2 \geq 0.9\,M_\odot$.
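The arithmetic above can be verified with a short script. This is a sanity-check sketch using the rounded values quoted in the text, chaining the mass function, the Roche-lobe density, and the minimum companion mass:

```python
import math
from scipy.optimize import brentq

G = 6.674e-11        # gravitational constant (SI)
M_SUN_KG = 1.989e30
M_SUN_G = 1.989e33
R_SUN_CM = 6.957e10

# Mass function: f(M2) = K1^3 P_orb / (2 pi G)
K1 = 216e3                          # RV semi-amplitude in m/s
P_orb_s = 0.6071890 * 86400.0       # orbital period in seconds
f_m2 = K1**3 * P_orb_s / (2.0 * math.pi * G) / M_SUN_KG   # ~0.63 M_sun

# Mean Roche-lobe density, rho ~ 110 P_hr^-2 g/cm^3
P_hr = 0.6071890 * 24.0
rho = 110.0 / P_hr**2               # ~0.52 g/cm^3

# Donor mass from the SED-fit radius and the Roche-lobe mean density
R1 = 0.782 * R_SUN_CM
M1 = (4.0 / 3.0) * math.pi * R1**3 * rho / M_SUN_G        # ~0.176 M_sun

# Minimum compact-star mass: solve M2^3 / (M1 + M2)^2 = f(M2) with sin i = 1
m2_min = brentq(lambda m2: m2**3 / (M1 + m2)**2 - f_m2, 0.1, 10.0)
```

The root solve reproduces the quoted lower limit of roughly $0.9\,M_\odot$ for the compact star.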
If we assume the compact star is a WD and estimate its radius using the mass-radius relation of \citet{bedard2020}, the corresponding WD radius is $R_\mathrm{2,max} < 0.01 R_\odot$. Even if the temperature of the compact star is 20,000\,K, its contribution to the total luminosity is less than 3\% and is negligible. The flux contribution from the accretion disk is more complex and will be discussed in Section \ref{sec:dis_poscontrib}. Our analysis in that section demonstrates that the flux in the optical band is dominated by the visible star. \subsection{The light curve fitting} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{J0419_lc_rv_best_fit_result.pdf} \caption{ Best-fit light and RV curves. The top panel shows the rebinned TESS light curve (filled circles). The dashed blue curve represents the best-fitting ellipsoidal variability model, which does not match the observations well. Hence, we add a star spot to the model and refit the TESS light curve; the best-fitting result is shown as the solid red curve. Indeed, adding a spot improves the fit greatly. The second panel shows the residuals between the observed and model fluxes. Our model with a star spot matches the observed data well, and the reduced $\chi^2$ is 1.1 (close to 1.0). The third panel displays the RV curve, in which the circles are the observed data and the solid curve is the best-fitting RV model (with a star spot). The bottom panel presents the residuals between the observed and model RV curves.} \label{fig:lcfit} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{J0419_MCMC_corner.pdf} \caption{ Parameter distributions from the joint fitting of light and RV curves. We only illustrate a fraction of the parameters; the others are summarized in Table \ref{tab:lc_fit}.} \label{fig:lcfit_corner} \end{figure*} J0419 exhibits ellipsoidal variability.
The main parameters modulating the ellipsoidal light curves are the inclination angle $i$, the mass ratio $q = M_2 / M_1$, the filling factor $f_\mathrm{fill}$, the limb darkening factors, and the gravity darkening exponent $\beta_1$ \citep{von1924}. To estimate the inclination of the binary, we use \texttt{Phoebe 2.3}\footnote{\url{http://phoebe-project.org/}} \citep{prsa2005,prsa2016,Conroy2020} to model the light curve and RV curve of J0419 and solve for the orbital parameters. \texttt{Phoebe} is an open-source software package, based on the Wilson-Devinney code \citep{wd1971}, for computing light and RV curves of binaries using an improved surface discretization algorithm. The main physical effects in a binary system are taken into account, including eclipses, the distortion of the stellar shape by the Roche potential, radiative properties (atmosphere intensities, gravity darkening, limb darkening, mutual irradiation), and spots. In the light curve fitting, we set the temperature of the donor to 5793\,K, derived from the SED fitting (Section \ref{sec:sed_fit}), with the \texttt{Phoenix} atmosphere model. We adopt the logarithmic limb darkening law and obtain the coefficient values self-consistently from the \texttt{Phoebe} atmosphere model. The gravity darkening law states that $T_\mathrm{eff}^4 \propto g_\mathrm{eff}^{\beta_1}$, where $\beta_1$ is the gravity darkening exponent. It is generally assumed that, for stars in hydrostatic and radiative equilibrium ($T_\mathrm{eff} \gtrsim 8000$\,K), $\beta_1 = 1$ \citep{von1924}, and for stars with convective envelopes ($T_\mathrm{eff} \lesssim 6300$\,K), $\beta_1 = 0.32$ \citep{Lucy1967}. The theoretical dependence of $\beta_1$ upon $T_{\mathrm{eff}}$ was obtained by \citet[][see their Figure 2]{Claret2011}.
J0419 appears to be in the temperature range where the transition from convective to radiative envelopes occurs; we therefore set $\beta_1$ as a free parameter and adopt a normal distribution, $\beta_1 \sim \mathcal{N}(0.32, 0.1)$, as its prior, similar to \citet{Badry2021}. Because the visible star of J0419 fills its Roche lobe, we set the model to be semi-detached ($f_\mathrm{fill} = 1$). We use the equivalent radius of $R_1 \sim \mathcal{N}(0.78, 0.02)\,R_\odot$ obtained from the SED fitting as the prior on the radius parameter. The free parameters in our fit are $i$, $q$, $\beta_1$, $\gamma$, and sma, where sma is the semi-major axis of the binary orbit. Except for the second TESS observation, the folded light curves show larger scatter than the measurement errors. We therefore fit only the second TESS dataset. To reduce the computational cost of the model, we rebin the TESS light curve to 40 points. The errors of the rebinned light curve include both the measurement errors and a systematic error estimated using a median filter method \citep{zhang2019}. We find that a pure ellipsoidal model cannot explain the observed data well. Indeed, the residuals between the observed and model fluxes depend on the phase (see Figure \ref{fig:lcfit}). Thus, we add a spot to the model to compensate for the phase-dependent residuals. The spot component is defined by four parameters: relteff, radius, colatitude, and longitude. The relteff parameter is the ratio of the spot temperature to the local intrinsic temperature of the star, and the radius parameter represents the spot angular radius. The remaining two parameters, colatitude and longitude, indicate the colatitude and longitude of the spot on the stellar surface, respectively. We set only relteff, radius, and longitude to be free parameters; the colatitude is fixed at 90 degrees.
We use the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) to compare the pure ellipsoidal model with the one including a spot. For the model without a spot, the AIC and BIC of the best-fitting result are -105 and -85, respectively. For the model with a spot, the AIC and BIC of the best-fitting result are -282 and -255, respectively. Hence, adding a spot improves the fit greatly. Figure \ref{fig:lcfit} illustrates the best-fit model with a spot, which is in good agreement with the observed light curve. The residuals between the TESS light curve and the model with a spot are much smaller than the variability amplitude. The light curve residuals in Figure \ref{fig:lcfit} show a weak periodic structure; considering that the reduced $\chi^2$ is 1.1 (close to 1.0), this structure is statistically insignificant. The fitting yields an inclination angle of $i = {66.5}_{-1.7}^{+1.4}$ degrees and a compact object mass of $M_2 = {1.09}_{-0.05}^{+0.05}\,M_\odot$. Figure \ref{fig:lcfit_corner} illustrates the distributions of the model parameters (see also Table \ref{tab:lc_fit}). We stress that the inferred inclination angle would not change significantly if we omitted the spot component.
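The model comparison above follows the standard definitions $\mathrm{AIC} = 2k - 2\ln L$ and $\mathrm{BIC} = k\ln n - 2\ln L$ for $k$ free parameters and $n$ data points. The sketch below is illustrative only; the $\ln L$ values and parameter counts are hypothetical placeholders, not the values behind the numbers quoted above:

```python
import math

def aic(ln_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2.0 * k - 2.0 * ln_likelihood

def bic(ln_likelihood, k, n):
    """Bayesian Information Criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2.0 * ln_likelihood

# Hypothetical comparison: the spot model adds free parameters, so it is
# preferred only if ln L rises enough to offset the complexity penalty.
n = 40                                # rebinned TESS points
lnL_plain, k_plain = 60.0, 5          # placeholder values
lnL_spot, k_spot = 150.0, 8           # placeholder values
prefer_spot = (aic(lnL_spot, k_spot) < aic(lnL_plain, k_plain)
               and bic(lnL_spot, k_spot, n) < bic(lnL_plain, k_plain, n))
```

Both criteria penalize the extra spot parameters, so a lower AIC and BIC for the spot model indicates a genuine improvement rather than overfitting.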
\begin{deluxetable}{lrlr} \tablenum{4} \tablecaption{Parameters from the joint fit of light and RV curves.} \label{tab:lc_fit} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Value} & \colhead{Parameter} & \colhead{Value} } \startdata $i$ (deg) & ${66.5}_{-1.7}^{+1.4}$ & $q$ ($M_2 / M_1$) & ${6.2}_{-0.5}^{+0.6}$ \\ sma ($R_\odot$) & ${3.27}_{-0.05}^{+0.05}$ & $\gamma$ (\ensuremath{\mathrm{km\,s^{-1}}}) & ${87.5}_{-3.6}^{+3.6}$ \\ $\beta_1$ & ${0.22}_{-0.04}^{+0.04}$ & spot relteff & ${1.018}_{-0.001}^{+0.001}$ \\ spot radius (deg) & ${78}_{-9}^{+8}$ & spot long (deg) & ${186}_{-1}^{+1}$ \\ t0\_rv (minutes) & ${11.7}_{-2.8}^{+2.7}$ & $R_1\,(R_\odot)$ & ${0.771}_{-0.022}^{+0.020}$ \\ $M_1\,(M_\odot)$ & ${0.177}_{-0.015}^{+0.014}$ & $M_2\,(M_\odot)$ & ${1.094}_{-0.049}^{+0.053}$ \\ $K_1$ (\ensuremath{\mathrm{km\,s^{-1}}}) & ${214.8}_{-3.4}^{+3.4}$ & & \\ \enddata \tablecomments{ The spot long is the longitude of the spot, where 0 means that the spot points towards the companion of the binary. The t0\_rv parameter accounts for the small phase offset.} \end{deluxetable} \section{Discussion} \label{sec:discussion} \subsection{The properties of the emission lines} \label{sec:dis_emission} J0419 has nine nights of observations, and multiple spectra were taken on most nights. We normalize each single-exposure spectrum and measure the EW of the $\rm H\alpha$ emission line after subtracting the continuum component, where the continuum template is generated using \texttt{The\_Payne} with the stellar parameters obtained in Section \ref{sec:spec_fit}. We obtain the EW of the \ensuremath{\rm H\alpha}\ emission line by integrating each residual spectrum from 6520\,\AA\ to 6610\,\AA. The EWs of \ensuremath{\rm H\alpha}\ are listed in Table \ref{tab:spec_stat}. As can be seen from Figure \ref{fig:spec_all}, the EWs of the \ensuremath{\rm H\alpha}\ emission line change greatly from night to night.
However, within the same night, or between two adjacent observing nights, the EWs change little. This indicates that the timescale of the emission line variations ranges from several days to tens of days. Some works found that in CVs the flickering timescale of the emission lines is similar to that of the continuum \citep[e.g.][]{Ribeiro2009}, ranging from minutes to hours. The mechanisms of the flickering could be condensations in the matter stream \citep{Stockman1979}, non-uniform mass accretion, or turbulence in the accretion disk \citep{Elsworth1982}. The longer variability timescale of the emission lines of J0419 may be due to the low mass transfer rate. According to the mass function (Equation \ref{eq:fm}) and the visible star mass, the mass ratio is $q = M_2 / M_1 > 5.3$. If we assume that the emission lines originate from the accretion disk, the RV semi-amplitude of the emission lines will be $K_\mathrm{em} < 41.3$\,\ensuremath{\mathrm{km\,s^{-1}}}. The resolution and SNR of the J0419 spectra are not sufficient to measure the RVs of the emission lines. However, we find that the wavelength shift of the emission lines is much smaller than that of the continuum component, which disfavors a stellar origin of the emission lines. \begin{figure*}[htp] \centering \includegraphics[width=0.8\textwidth]{fig_m1_r1.pdf} \caption{The mass-radius distribution of CVs and pre-ELMs. The red and black points are normal CVs from \citet{Patterson2005}. The orange diamonds are pre-ELMs from \citet{elbadry2021b}, and the red star is J0419. The solid line is the mass-radius relation of CVs adopted from \citet{Knigge2011}. } \label{fig:m1r1} \end{figure*} \subsection{Possible flux from the compact star or disk} \label{sec:dis_poscontrib} In an accreting binary system, radiation from the companion star or disk will lead to an overestimation of the radius and mass of the donor.
For J0419, as mentioned in Section \ref{sec:sed_fit}, the flux contribution of the compact star to the total luminosity is negligible because of its large mass. The radiation from the accretion disk is more complicated. Most normal CVs have strong radiation from their disks in the optical band. However, for the evolved donors with higher temperatures, the mass transfer rate is very low \citep{elbadry2021b,Badry2021} and the donors dominate the luminosities. The spectra of J0419 are also clearly dominated by the donor, for the following reasons: \begin{enumerate} \item[(1)] The SED is well fitted by a pure stellar model; \item[(2)] The template spectrum matches the observed spectra well; \item[(3)] The light curves of J0419 show ellipsoidal variability, and no CV characteristics, such as violent short-timescale variability or outburst events, were found. \end{enumerate} \subsection{Comparison to CVs} Most CVs have low-mass donors and orbital periods of several hours (Section \ref{sec:intro}). The donors show characteristics similar to main-sequence stars of the same mass, as they have the same chemical composition and structure. Unlike normal CVs, the pre-ELMs in the CV stage are evolved and do not follow the donor sequence. Evolved donors generally have higher temperatures and possibly more bloated radii than normal CV donors of similar mass. We collect the mass and radius data of normal CVs from \citet{Patterson2005} and of pre-ELMs from \citet{elbadry2021b}, and plot them in Figure \ref{fig:m1r1}. The red and black points in Figure \ref{fig:m1r1} are normal CVs; the orange diamonds are pre-ELMs. The solid line is the empirical mass-radius relation of normal CVs from \citet{Knigge2011}. J0419 is marked by the red star for comparison. We can see that J0419 completely deviates from the empirical mass-radius relation of normal CVs.
The objects of \citet{elbadry2021b} either fall on the mass-radius relation or deviate from it slightly, although their temperatures are significantly higher than those of normal CV donors of the same mass. This shows that the visible star of J0419 is very bloated and more evolved than the objects of \citet{elbadry2021b}. Compared with normal CVs, the SED of J0419 is dominated by the donor star, and the emission lines in the spectra are weaker. No outburst events were detected in 15 years of monitoring. We do not find random variability on short timescales from hours to days in the high-cadence TESS light curves, which is different from the light curves of most normal CVs \citep{Bruch2021}. The objects of \citet{elbadry2021b} exhibit similar properties. These indicate that the mass transfer rates of pre-ELMs are very low compared with normal CVs. \subsection{Comparison to other pre-ELMs} \label{sec:cmp_pre_ELMs} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_J0419_Teff_logg.pdf} \caption{ $T_\mathrm{eff}-\log g$ diagram of ELMs and pre-ELMs. The meaning of the different points is labeled in the top left of the panel.} \label{fig:teff_logg} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{fig_HR.pdf} \caption{HR diagram. The green points are the ELMs from \citet{brown2020}. The orange diamonds are pre-ELMs from \citet{elbadry2021b}. The gray points are CVs from \citet{Knigge2006}. The solid line is the main sequence obtained from \texttt{isochrones} \citep{isochrones2015}. The red star is J0419.} \label{fig:HR} \end{figure*} Several works have reported pre-ELMs. These objects are stripped stars with hydrogen-burning envelopes that are more bloated and cooler than ELMs. Most of the reported pre-ELMs are found in EL CVn systems, whose companions are main-sequence A or F stars. We collect ELMs/pre-ELMs to compare their properties.
The EL CVn-type stars are from \citet{Maxted2013,Maxted2014a,corsico2016,Corti2016,Gianninas2016,zhang2017,wangkun2020,lee2020}, the pre-ELMs in DDs are from \citet{elbadry2021b}, and the ELMs are from \citet{brown2020}. The pre-ELMs in EL CVns are hotter than the pre-ELMs in DDs (see Figure \ref{fig:teff_logg}), which should be a selection effect. Most of the reported EL CVns are detached binaries selected from eclipsing systems. Their radii have been shrinking since detachment, accompanied by increasing surface temperatures. The reported pre-ELMs in DDs (including J0419 and the sources of \citealt{elbadry2021b}) all show distinct ellipsoidal variability. They are filling, or close to filling, their Roche lobes and have lower temperatures. Hence, the pre-ELMs in EL CVns are at a later evolutionary stage than the reported pre-ELMs in DDs. The mass and temperature of J0419 are similar to those of the sources of \citet{elbadry2021b}; they are all in the transition from mass transfer to detachment. Compared with the sources in \citet{elbadry2021b}, J0419 has the smallest surface gravity (see Figure \ref{fig:teff_logg}), owing to its far longer orbital period. According to the evolution model of ELMs \citep{sun2018,Li2019}, pre-ELMs with longer initial periods will be more evolved before the mass transfer begins, resulting in smaller $\log g$ and longer periods. Long-period pre-ELMs like J0419 are rarely reported in previous works. Similar to \citet{elbadry2021b,Badry2021}, we show the position of J0419 on the HR diagram in Figure \ref{fig:HR}. For comparison, we plot the ELMs obtained from \citet{brown2020} and the pre-ELM objects obtained from \citet{elbadry2021b}. We also show the CV sample using the data from \citet{Ritter2003,Knigge2006}. The solid line in Figure \ref{fig:HR} is the main sequence obtained from \texttt{isochrones} \citep{isochrones2015} with an age of $\log(\mathrm{age}) = 8.3$. Most CVs in Figure \ref{fig:HR} fall on the main sequence.
Both the ELMs and pre-ELMs lie well below the main sequence, showing that they are evolved stripped stars. \begin{figure*}[htp] \centering \includegraphics[width=0.7\textwidth]{fig_period_mass.pdf} \caption{$M_\mathrm{WD}-P_\mathrm{orb}$ relation. The black squares are helium WDs orbiting pulsars or pre-WDs orbiting an A-type MS star \citep[see][]{Tauris2014}, with data obtained from \citet{Antoniadis2012,Antoniadis2013,van2000,van2005b,van2010,Maxted2013,Corongiu2012,Jacoby2005,Ransom2014,Breton2012,Verbiest2008,Splaver2005,Pietrzy2012}. The blue triangles are pre-ELMs in EL CVns (see Section \ref{sec:cmp_pre_ELMs}). The green circles are ELMs reported in \citet{brown2020}. The orange diamonds are pre-ELMs from \citet{elbadry2021b}. The red star is J0419. The shaded region is the $M_\mathrm{WD}-P_\mathrm{orb}$ relation calculated from the analytical formula of \citet{Tauris1999}; its upper and lower limits correspond to metallicities of $\rm Z = 0.001-0.02$. } \label{fig:mass_period} \end{figure*} While the pre-ELMs of \citet{elbadry2021b} deviate significantly from the main sequence, J0419 almost falls on the main sequence, which makes selecting J0419 analogs via the HR diagram ineffective. For J0419 analogs, we can only search for these objects by combining the variability features with the SED fitting and spectroscopic information, which significantly limits the sample size. The multiple-exposure strategy of LAMOST is beneficial for searching for such long-period pre-ELMs and binary systems consisting of a visible star and a compact star \citep{yi2019}. \subsection{The evolutionary properties} \label{subsec:evo_prop} According to \citet{sun2018,Li2019}, the orbit of J0419 will continue to shrink through angular momentum loss due to magnetic braking until the convective envelope becomes too thin.
After the orbital contraction ends, the accretion process will stop, and the radius of the visible star will begin to shrink with increasing temperature and $\log g$; the visible star will gradually evolve into an ELM WD. The typical temperature of the transition from mass-transferring CVs to detached ELMs is about 6500\,K \citep{sun2018}, which is related to the \textit{Kraft} break \citep{Kraft1967}. When the temperature is higher than this value, stars lack the convective envelopes needed to generate magnetic fields, so that magnetic braking is no longer effective. The sample of \citet{elbadry2021b} supports this statement well: in their sample, emission lines occur only in the sources with $T_\mathrm{eff} < 6600$\,K, and no emission lines are found in the sources with higher temperatures. J0419, with a donor temperature of $T_\mathrm{eff} = 5793$\,K, also seems to obey this rule. The evolution of ELMs mainly depends on the initial mass of the visible star and the initial orbital period. The stars in WD + MS binaries with sufficiently long initial periods will ascend the red giant branch before the mass transfer begins. The orbits of such systems will expand with the mass transfer (above the bifurcation period; see Figure 6 of \citealt{Li2019}). For donors that have just left the main sequence when mass transfer begins, the orbits will shrink due to magnetic braking. Both the mass and period of J0419 appear to lie between these two cases. For WDs in binaries beyond the bifurcation period, there is a tight relationship between the core mass of a low-mass giant and the radius of its envelope, resulting in a good correlation between the period and the core mass at the termination of mass transfer \citep{Rappaport1995}. For systems below the bifurcation period ($P_\mathrm{orb} \lesssim 16-22$\,h, $M_\mathrm{core} \lesssim 0.18 M_\odot$), the correlation between the radius and mass of the donors is unclear.
Figure \ref{fig:mass_period} shows the mass--period distribution of helium stars. The long-period systems are radio pulsar binaries or EL CVn-type systems, and most of the short-period objects are ELMs/pre-ELMs reported in recent years. J0419 lies just at the junction of the upper and lower groups. With a mass of $M_1 = 0.176\pm0.014 M_\odot$ and a period of $P_\mathrm{orb} = 0.607189$~days, J0419 appears to follow the $M_\mathrm{WD} - P_\mathrm{orb}$ relation. Its proximity to the bifurcation period and its ongoing mass transfer make J0419 a unique source connecting ELM/pre-ELM systems, wide binaries, and CVs. Systems with periods longer than 14 hours follow the $M_\mathrm{WD} - P_\mathrm{orb}$ relation well, but for short-period targets the correlation becomes diffuse. The sources with short periods and relatively large masses at the bottom of Figure \ref{fig:mass_period} are thought to be produced through the common-envelope (CE) channel \citep{Li2019}. \section{Summary} \label{sec:summary} We report a pre-ELM, J0419, consisting of a visible star and a compact star, selected from the LAMOST medium-resolution survey with a period of $P_\mathrm{orb} = 0.607189$~days. Follow-up spectroscopic observations provide an RV semi-amplitude of the visible star, $K_1 = 216\pm 3$\,\ensuremath{\mathrm{km\,s^{-1}}}, yielding a mass function of $f(M_2) = 0.63 \pm 0.03 M_\odot$. Both the large-amplitude ellipsoidal variability and the emission lines in the spectra indicate that the visible star has filled its Roche lobe. We use the mean density of the Roche lobe (which depends only on the orbital period) and the radius of the visible star obtained from the SED fitting to calculate the mass of the visible star, which yields $M_1 = 0.176\pm0.014\,M_\odot$. This mass is significantly lower than expected for a star with a G-type spectrum, indicating that the donor of J0419 is an evolved star, i.e., a pre-ELM.
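As a consistency check (ours, not part of the paper's analysis), the quoted numbers can be reproduced from standard formulas: the mass function $f(M_2)=K_1^3 P_\mathrm{orb}/(2\pi G)$, the donor mass from the Roche-lobe mean density (Paczy\'nski's lobe approximation combined with Kepler's third law), and the companion mass from the inclination. The donor radius $R_1 \approx 0.78\,R_\odot$ used below is an illustrative value chosen to reproduce $M_1$; the SED-fit radius itself is not quoted in this section.

```python
import math

# Standard cgs constants and solar units (not from the paper).
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
R_SUN = 6.957e10      # solar radius [cm]

def mass_function(K1_kms, P_days):
    """Spectroscopic mass function f(M2) = K1^3 * P / (2*pi*G), in M_sun."""
    K1 = K1_kms * 1e5            # km/s -> cm/s
    P = P_days * 86400.0         # days -> s
    return K1**3 * P / (2.0 * math.pi * G) / M_SUN

def roche_mean_density(P_days):
    """Mean density [g/cm^3] of a Roche-lobe-filling star, combining
    Paczynski's approximation R_L ~ 0.462*a*(M1/M_tot)^(1/3) with Kepler's
    third law; the orbital period is the only input."""
    P = P_days * 86400.0
    return 3.0 * math.pi / (0.462**3 * G * P**2)

def donor_mass(P_days, R1_rsun):
    """Donor mass [M_sun] from its radius and the Roche-lobe mean density."""
    R1 = R1_rsun * R_SUN
    return 4.0 / 3.0 * math.pi * R1**3 * roche_mean_density(P_days) / M_SUN

def companion_mass(fM, M1, incl_deg):
    """Solve f(M2) = (M2*sin i)^3 / (M1+M2)^2 for M2 [M_sun] by bisection."""
    s = math.sin(math.radians(incl_deg))
    g = lambda M2: (M2 * s)**3 / (M1 + M2)**2 - fM
    lo, hi = 1e-3, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

P_orb = 0.607189                       # days (from the paper)
f_M2 = mass_function(216.0, P_orb)     # ~0.63 M_sun, matching the quoted value
M1 = donor_mass(P_orb, 0.78)           # ~0.176 M_sun for the assumed R1
M2 = companion_mass(f_M2, M1, 66.5)    # ~1.1 M_sun, consistent with 1.09 +/- 0.05
```

With these inputs the mass function evaluates to about $0.63\,M_\odot$ and the Roche-density mass to about $0.18\,M_\odot$, matching the paper's values; the companion mass additionally uses the \texttt{Phoebe}-fitted inclination quoted in the text.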
By fitting both the light and RV curves with \texttt{Phoebe}, we obtain an inclination angle of $i = 66.5_{-1.7}^{+1.4}$ degrees, corresponding to a compact-object mass of $M_2 = 1.09\pm 0.05\,M_\odot$. We find that J0419 shares many features with the pre-ELM sample of \citet{elbadry2021b}, such as the visible-star mass, the temperature, and the low mass-transfer rate. However, the orbital period of J0419 is about three times the mean period of that sample. We list the main properties of J0419 below: \begin{itemize} \item J0419 has a more evolved donor than the objects of \citet{elbadry2021b}. The surface gravity of the visible star, $\log g = 3.9$, is smaller than in known pre-ELM systems, showing that the visible star of J0419 is very bloated (see Section \ref{sec:cmp_pre_ELMs} and Figure \ref{fig:teff_logg}). \item J0419 shows clear signatures of mass transfer. With a temperature of $T_\mathrm{eff} = 5793_{-133}^{+124}$\,K, J0419 appears to obey the empirical relation of \citet{elbadry2021b} that low-temperature pre-ELMs ($T_\mathrm{eff} < 6500{-}7000$\,K) undergo mass transfer, while hotter pre-ELMs do not. This phenomenon is generally attributed to magnetic braking becoming too inefficient to carry away angular momentum, so that the donors shrink inside their Roche lobes (see Section \ref{subsec:evo_prop}). \item According to the evolutionary model of \citet{Li2019}, we suspect that J0419 may be a rare source close to the bifurcation period of orbital evolution, and therefore has neither shrunk nor expanded its orbit significantly (see Section \ref{subsec:evo_prop} and Figure \ref{fig:mass_period}). \item J0419 is close to the main sequence, which makes HR-diagram-based selection of long-period pre-ELMs like J0419 inefficient. Our work demonstrates a unique way to select such pre-ELMs by combining time-domain photometric and spectroscopic observations (see Figure \ref{fig:HR}).
\end{itemize} \section{Acknowledgements} We thank Fan Yang, Honggang Yang, and Weikai Zong for beneficial discussions, and thank the anonymous referee for constructive suggestions that improved the paper. This work was supported by the National Key R\&D Program of China under grant 2021YFA1600401, and the National Natural Science Foundation of China under grants 11925301, 12103041, 12033006, 11973002, 11988101, 11933004, 12090044, 11833006, U1831205, and U1938105. This paper uses the LAMOST spectra. We also acknowledge the support of the staff of the Xinglong 2.16-meter telescope and Lijiang 2.4-meter telescope. \software{IRAF \citep{Tody1986,Tody1993}, PyAstronomy \citep{Czesla2019}, lightkurve \citep{Lightkurve2018}, PyHammer \citep{Kesseli2017,Roulston2020}, The\_Payne \citep{Ting2019}, astroARIADNE \citep{Vines2022}, isochrones \citep{isochrones2015}, Phoebe \citep{prsa2005,prsa2016,Conroy2020}}
\section*{Acknowledgments} It is a pleasure to thank Frank Close and Z.P. Li for our stimulating and productive collaboration.
\section{Introduction} \label{intro} Multiwavelength observations of radio pulsars are an important tool for the study of the not yet clearly understood radiative mechanisms and spectral evolution of rotation-powered isolated neutron stars (NSs). Optical observations are an essential part of these studies. Among a dozen optically identified NSs, only seven pulsars have parallax-based distances and, therefore, minimal uncertainties ($\le$10\%) in luminosities. Their optical spectra are shown in Fig.~1. A reliable optical spectrum has been obtained only for the young and bright Crab pulsar \citep{Sollerman}. \begin{figure} \centering \includegraphics[width=7cm,clip=]{zharikovfig1.eps} \caption{Optical spectra of seven pulsars of different characteristic ages, indicated in years in the bottom-left corner of each panel. The youngest, the Crab pulsar, is at the top and the oldest, PSR B0950-08, is at the bottom. For PSR B0656+14 dashed lines show the low-energy extensions of the blackbody (BB, T=0.84 MK) and power law (PL, $\alpha =0.45$) X-ray spectral components and their sum \citep{Koptsevich}. } \label{fig:1} \end{figure} Other pulsars are fainter and are represented mainly by broadband photometric points. Published optical spectra of PSR B0540-69 \citep{Hill, Serafim} are strongly contaminated by a bright pulsar nebula \citep{Serafim}, while a tentative spectrum of Geminga \citep{Martin} is much noisier than the available photometric fluxes. \cite{Mignani2} reported on spectral observations of the Vela pulsar, but these data have not been published yet. In Figure 2 we show the evolution of the luminosity L and the radiation efficiency ${\rm\eta=L/L_{sd}}$ (${\rm L_{sd}}$ is the spindown luminosity) demonstrated by these seven pulsars, using the data of Table~1 from \cite{Zhar2}.
We note significantly non-monotonic dependencies of ${\rm\eta_{Opt}}$ and ${\rm\eta_{X}}$ on pulsar age, with a pronounced minimum at the beginning of the middle-age epoch (${\rm \simeq10^4}$~yr) and comparably higher efficiencies of younger and older pulsars. \begin{figure}[t] \centering \includegraphics[width=6.5cm,bb=0 0 560 955,clip=]{zharikovfig2.eps} \caption{{\sl From top to bottom:} Evolution of the spindown, radio, optical, X-ray, and $\gamma$-ray luminosities and respective efficiencies demonstrated by the 7 optical pulsars from Fig.~1.} \label{fig:2} \end{figure} Owing to its relative brightness and proximity, the middle-aged PSR B0656+14 is one of the most intensively studied isolated NSs at different wavelengths. It was discovered in the radio by \cite{Manchester}, then identified in X-rays with Einstein, and observed in detail with ROSAT, ASCA, Chandra and XMM (for references see \cite{Shib1, Shib2}). The X-ray emission can be described as a combination of thermal radiation from the entire surface of a cooling NS and from hotter polar caps heated by relativistic particles of magnetospheric origin. An excess over the hot thermal component at energies $\ge$2 keV was interpreted as nonthermal radiation from the pulsar magnetosphere \citep{Greiv}. The pulsar has also been marginally detected in $\gamma$-rays ($\ge$50 MeV) by \cite{Raman}. In the optical, PSR B0656+14 was identified by \cite{caraveo} with the ESO/NTT telescopes in the V band. It was then studied in the UV with the HST/FOC in the F130LP, F430W, F342W and F195W bands \citep{Pavlov1} and with the HST/WFPC in the F555W band \citep{Mignani1}. Detailed photometric studies in the optical-NIR were performed by \cite{Kurt}, \cite{Koptsevich}, \cite{Komarova}, and \cite{Shib2}. These studies showed that the bulk of the optical radiation is of nonthermal origin.
This was confirmed by the detection of coherent optical pulsations at the radio pulsar period in the B band \citep{Shearer}, in a wide 400-600 nm passband \citep{Kern}, and in the NUV \citep{Shib1}. The pulse profiles are rather sharp, with a high pulsed fraction, as expected for nonthermal emission mechanisms. \begin{figure} \centering \includegraphics[width=8cm,bb=0 0 760 403,clip=]{zharikovfig3.eps} \caption{Relationship between the B-band optical and 2-10 keV X-ray efficiencies for the same 7 pulsars as in Figs.~1,2.} \label{fig:3} \end{figure} The phase-integrated multiwavelength spectrum (Fig.~4) shows \citep{Koptsevich, Shib2} that the NIR-optical-UV spectral energy distribution is, to a first approximation, compatible with the low-energy extension of the sum of the X-ray thermal blackbody (BB) spectral component from the whole NS surface and the power law (PL) component dominating the high-energy tail. The BB extension does not contribute at longer wavelengths, where the optical-NIR fluxes are in good agreement with the PL alone (Fig.~1). This indicates a common origin of the nonthermal optical and X-ray emission, which is strongly supported by the good coincidence in phase and shape of the pulse profiles in the optical and in the X-ray tail \citep{Shib1}. The same origin of the nonthermal optical and X-ray photons is likely a general property of other pulsars detected in both ranges, as follows from the strong correlation between the respective efficiencies (Fig.~3) found by \cite{Zhar1,Zhar2}. In the optical emission of PSR B0656+14 there is an apparent, $\simeq$(3-5)$\sigma$, flux excess over the PL ``continuum'' at Log$(\nu)\simeq$14.7 (Fig.~1). This could reveal an additional, third, spectral component beyond the BB+PL discussed above. Here we present first results of optical spectroscopy of the pulsar, partially motivated by more detailed studies of this excess.
In a broader sense, these results also allow us to consider, for the first time, the optical properties of a middle-aged ordinary pulsar at the spectroscopic level, as has so far been achieved only for the Crab pulsar. \begin{figure*} \centering \includegraphics[width=15.cm,clip=]{zharikovfig4.eps} \caption{{\sl Left:} Unabsorbed multiwavelength spectrum of PSR B0656+14 from the radio through $\gamma$-rays. The box marks the range zoomed in the right panel. {\sl Right:} The spectral and photometric fluxes of PSR B0656+14 in the NIR-optical-NUV range obtained with different telescopes and instruments, as indicated in the plot. Black dashed lines show low-energy extensions of the soft blackbody (BB) and power law (PL) spectral fits and their sum (BB+PL) obtained in X-rays. The black solid lines show the same but with the PL normalization shifted up by a factor of 1.4 (PL1) to fit the upper edge of its 1$\sigma$ error bar shown at the left side of the plot. These are possibly a better match to the optical and NUV spectra than the dashed lines. The symbol $\oplus$ marks the Earth atmospheric absorption band near 7600\AA. } \label{fig:4} \end{figure*} \section{Observations and data analysis } \label{sec:1} The spectrum of PSR B0656+14 was obtained in November--December 2004 and February 2005 during several observational runs of the ESO program 074.D-0512A, using the VLT/UT1 telescope in service mode. The FORS2\footnote{www.eso.org/instruments/fors} instrument was used in a long-slit spectroscopic setup with the grating GRIS\_300V and the filter GG435, which cover a wavelength interval of about 4300-9600\AA\ and provide a medium spectral resolution of 3.35\AA/pixel. The slit width was 1$''$, and its position angle was selected in such a way as to also obtain spectra of several nearby stars for reliable pulsar astrometric referencing and wavelength/flux calibration.
Eighteen 1400~s science spectroscopic exposures were taken, with a total exposure time of 25200~s at a mean seeing of 0.6$''$. Standard reference frames (biases, darks, flatfields, lamps) were obtained in each observational run, while slit and slitless observations of spectrophotometric standards (Feige110, LTT3218 and LTT1788) for the flux calibration were carried out in separate runs on the same nights. A combination of the MIDAS and IRAF packages was used for standard CCD data reduction, cosmic-ray track removal, spectrum extraction, and subsequent data analysis. At R=24.65, the faint pulsar is at the limit of the spectroscopic capabilities of the VLT. Nevertheless, excellent seeing conditions allowed us to resolve its spectrum even in each individual exposure, albeit with a low signal-to-noise ratio S/N. These exposures were co-added. The spectrum was then extracted with a 3 pixel wide extraction slit (0.2$''$/pix) centered on the pulsar. The backgrounds were extracted with 6 pixel wide slits centered above and below the center of the pulsar spectrum. The correction factor for the PSF and the sensitivity function were obtained from the Feige110 standard observations. The S/N of the resulting spectrum was about 4 (per pixel) in the 4450-5500\AA\ range and declined to $\sim 1$ near/above 8000\AA, due to higher sky backgrounds and a drop in sensitivity towards longer wavelengths. We binned the spectral flux in 20 pixel bins (67\AA) to get S/N near/above 15 and 4, respectively, making the flux accuracy comparable with that of the available photometric data. \section{Results and discussion} \label{sec:3} The spectrum of the pulsar, binned and dereddened with ${\rm A_V}$=0.093, is shown in Figure~4 (red curve in the right panel). We also show the available multiwavelength data (see \cite{Shib2} for a description of the data and ${\rm A_V}$). The spectrum is in good agreement with the broadband VRI fluxes, while it is somewhat higher than the B band flux.
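As an aside (a sketch of ours, not part of the original text), the binned S/N values quoted in the previous section are consistent with the standard $\sqrt{N}$ scaling obtained when averaging $N$ independent pixels:

```python
import math

def binned_snr(snr_per_pixel, n_pix):
    """Averaging n_pix independent pixels improves S/N by sqrt(n_pix)."""
    return snr_per_pixel * math.sqrt(n_pix)

# Per-pixel S/N of ~4 (blue end) and ~1 (red end), combined in 20-pixel bins:
snr_blue = binned_snr(4.0, 20)   # ~17.9, i.e. "near/above 15"
snr_red = binned_snr(1.0, 20)    # ~4.5, i.e. "near/above 4"
```

Both binned values land just above the thresholds stated above, as expected.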
It does not show any strong nebular emission lines that could be responsible for the apparent VR excess mentioned above as a third component, although the presence of weak features cannot be completely ruled out. The continuum of the dereddened spectrum in the 4600-7000\AA\ range has a power law shape $\propto\nu^{-0.2\pm0.2}$ (green line), in agreement with the nonthermal nature of the bulk of the optical emission. Its slope is close to the value expected in this range from the BB+PL extended from X-rays. The slope likely changes to a positive value between the R and I bands, as also follows from the photometric data. Within uncertainties, the bluest end of the optical and the reddest end of the NUV \citep{Shib1} spectra are compatible with each other, suggesting a smooth connection between the two. However, the F430W photometric flux in the gap between them drops below this connection with a significance of about 2$\sigma$ of the flux confidence level. Unless this is a result of some unknown systematics in calibration, it suggests a spectral dip in the pulsar emission centered near $\simeq$4300\AA\ ($\simeq$14.83 in Log($\nu$)). The optical and NUV spectra and two NIR photometric points, F160W and F187W, almost perfectly match the BB+PL extension from X-rays if the PL normalization is taken to be a factor of 1.4 higher (solid lines) than its best X-ray fit value (dashed lines). The change is within the 1$\sigma$ uncertainty of the fit. Considering the solid-line version as a new optical ``continuum level'', we find an additional, more significant flux depression in the red part of the spectrum, overlapping the I and F110W bands and centered near 9000-10000\AA\ (Log($\nu)$$\simeq$14.5). Additional spectral studies are necessary to confirm the suggested ``red'' and ``blue'' features and to measure their shapes and wavelengths more accurately. Nevertheless, if they are real, the approximate blue-to-red frequency ratio is $\simeq$2.
This indicates that they can be the 1st and 2nd harmonics of electron/positron cyclotron absorption formed in the upper magnetosphere of the pulsar, at an effective altitude where the magnetic field is B$\simeq$10$^{8}$~G. This corresponds to $\simeq$360 km, assuming a dipole NS field with a surface value of 4.66$\times$10$^{12}$ G, as derived from spindown measurements. The absorbing $e^{\pm}$ have to be cooled sufficiently and must provide a sufficient optical depth above the source of the nonthermal continuum. The source altitude is, therefore, $<$360 km and much below the light cylinder radius of $\simeq$18$\times$10$^{3}$ km, likely suggesting a polar cap origin. The features are broad, as expected from the inhomogeneity of the magnetospheric field. Tentative ($\leq$2$\sigma$) absorption features in the NUV spectrum at Log($\nu)\simeq$15 and 15.1 \citep{Shib1} may be the 3rd and 4th harmonics, respectively, which are fainter since the cyclotron harmonic intensity decreases with harmonic number ${n}$. Similar, albeit less significant, spectral features are likely seen in the photometric and spectral data of another middle-aged pulsar, Geminga (Fig.~1). The absence of strong nebular lines suggests that these features are also of NS magnetospheric origin. To explain the Geminga spectrum, \cite{Martin} applied a toy model of ion cyclotron absorption at B$\simeq$10$^{11}$~G in the inner magnetosphere of the NS, combined with the BB and PL components. At this field, low-${n}$ ion cyclotron frequencies indeed fall in the optical range. However, the nonthermal optical emission is likely generated in the upper magnetosphere, where the magnetic field is orders of magnitude weaker and any ion cyclotron absorption of the respective optical continuum is negligible. In this case, electron/positron cyclotron absorption or scattering appears to be a more plausible interpretation. The optical spectrum of the young Crab pulsar is featureless and has a different (positive) slope (Fig.~1).
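The field strength and altitude invoked in the cyclotron interpretation above can be reproduced with a short numerical estimate (a sketch of ours, using standard cgs constants; none of the code is from the paper):

```python
import math

# Standard cgs constants (not from the paper).
E_ESU = 4.803e-10    # electron charge [esu]
M_E = 9.109e-28      # electron mass [g]
C = 2.998e10         # speed of light [cm/s]

def cyclotron_freq(B_gauss, n=1):
    """n-th harmonic of the electron cyclotron frequency,
    nu = n * e * B / (2 * pi * m_e * c), in Hz."""
    return n * E_ESU * B_gauss / (2.0 * math.pi * M_E * C)

def dipole_radius(B_target, B_surface=4.66e12, R_ns_cm=1e6):
    """Radius [cm] at which a dipole field B = B_s * (R_ns / r)^3 falls to
    B_target; a 10 km NS radius is assumed."""
    return R_ns_cm * (B_surface / B_target) ** (1.0 / 3.0)

nu1 = cyclotron_freq(1e8)         # fundamental for B = 1e8 G
nu2 = cyclotron_freq(1e8, n=2)    # 2nd harmonic, exactly twice nu1
r_km = dipole_radius(1e8) / 1e5   # altitude where the dipole field reaches 1e8 G
```

The fundamental and second harmonic land near Log$(\nu)\simeq$14.45 and 14.75, close to the suggested red and blue features, and the dipole altitude reproduces the $\simeq$360 km quoted above.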
The ten times older Vela pulsar has a flat and also featureless spectrum (Mignani, private communication). The spectroscopy of PSR B0656+14 demonstrates that optical spectra of middle-aged pulsars can be distinct from those of younger ones through the presence of unusual spectral features or slope changes. To study this, new spectral observations of PSR B0656+14 in the NIR and 3000-5000\AA\ ranges are needed. The question of whether the observed difference in pulsar spectra is simply caused by different pulsar geometry or by a change of the physical conditions in the emission region with age demands quantitative modeling of the physical processes in pulsar magnetospheres. \begin{acknowledgements} The work was partially supported by DGAPA/PAPIIT project IN101506, CONACYT 48493, RFBR (grants 05-02-16245, 05-02-22003) and Nsh 9879.2006.2. REM was supported by Fondecyt 1030707. \end{acknowledgements}
\section*{Introduction} Given a compact complex manifold $X$, the \emph{Bott-Chern cohomology}, $H^{\bullet,\bullet}_{BC}(X)$, \cite{bott-chern}, and the \emph{Aeppli cohomology}, $H^{\bullet,\bullet}_{A}(X)$, \cite{aeppli}, provide useful invariants and have been studied by several authors in different contexts, see, e.g., \cite{aeppli, bott-chern, bigolin, deligne-griffiths-morgan-sullivan, varouchas, alessandrini-bassanelli, schweitzer, kooistra, bismut, tseng-yau-3, angella-1, angella-tomassini-3}. In the case of compact K\"ahler manifolds, or, more generally, of compact complex manifolds satisfying the \emph{$\partial\overline{\del}$-Lemma}, the Bott-Chern and the Aeppli cohomology groups are naturally isomorphic to the Dolbeault cohomology groups. The $\partial\overline{\del}$-Lemma for compact complex manifolds has been studied by P. Deligne, Ph.~A. Griffiths, J. Morgan, and D.~P. Sullivan in \cite{deligne-griffiths-morgan-sullivan}, where it is proven that the validity of the $\partial\overline{\del}$-Lemma on a compact complex manifold $X$ yields the formality of the differential graded algebra $\left( \wedge^\bullet X \otimes_\mathbb{R} \mathbb{C} ,\, \de \right)$, \cite[Main Theorem]{deligne-griffiths-morgan-sullivan}; in particular, a topological obstruction to the existence of K\"ahler structures on compact differentiable manifolds follows, \cite[Lemma 5.11]{deligne-griffiths-morgan-sullivan}. Furthermore, they showed that any compact manifold admitting a proper modification from a K\"ahler manifold (namely, a manifold in class $\mathcal{C}$ of Fujiki, \cite{fujiki}) satisfies the $\partial\overline{\del}$-Lemma, \cite[Corollary 5.23]{deligne-griffiths-morgan-sullivan}. An adapted version of the $\partial\overline{\del}$-Lemma for differential graded Lie algebras has also been considered in \cite{goldman-millson} by W.~M. Goldman and J.~J.
Millson, where they used a ``principle of two types'', see \cite[Proposition 7.3(ii)]{goldman-millson}, as a key tool to prove the formality of certain differential graded Lie algebras in the context of deformation theory, \cite[Corollary page 84]{goldman-millson}. An algebraic approach to the $\partial\overline{\del}$-Lemma has also been developed by Y.~I. Manin in \cite{manin} in the context of differential Gerstenhaber-Batalin-Vilkovisky algebras, in order to study Frobenius manifolds arising by means of solutions of Maurer-Cartan type equations. A generalized complex version of the $\partial\overline{\del}$-Lemma has been introduced and studied by G.~R. Cavalcanti in \cite{cavalcanti, cavalcanti-jgp}. \medskip Since the Bott-Chern and Aeppli cohomologies on compact K\"ahler manifolds coincide with the Dolbeault cohomology, in \cite{angella-tomassini-3} we were concerned with studying the Bott-Chern cohomology of compact complex (possibly non-K\"ahler) manifolds $X$, showing the following \emph{inequality {\itshape à la} Fr\"olicher}, which relates the dimensions of the Bott-Chern and Aeppli cohomologies to the Betti numbers, \cite[Theorem A]{angella-tomassini-3}: $$ \text{for any }k\in\mathbb{Z}\;, \qquad \sum_{p+q=k} \left( \dim_\mathbb{C} H^{p,q}_{BC}(X) + \dim_\mathbb{C} H^{p,q}_{A}(X) \right) \;\geq\; 2\,\dim_\mathbb{C} H^k_{dR}(X;\mathbb{C}) \;; $$ furthermore, we showed that the equality in the above inequality holds for every $k\in\mathbb{Z}$ if and only if $X$ satisfies the $\partial\overline{\del}$-Lemma, \cite[Theorem B]{angella-tomassini-3}. It turns out that such results actually depend on the structure of the double complex $\left(\wedge^{\bullet,\bullet}X ,\, \partial ,\, \overline{\del}\right)$. In this paper, we are concerned with a generalization of the inequality {\itshape à la} Fr\"olicher in a more algebraic framework, so as to highlight the algebraic aspects.
As an application, we recover the above results on the cohomology of compact complex manifolds, and we obtain results on the cohomology of compact symplectic manifolds and compact generalized complex manifolds: more precisely, we provide characterizations of compact symplectic manifolds satisfying the Hard Lefschetz Condition and of compact generalized complex manifolds satisfying the $\de\de^{\mathcal{J}}$-Lemma. \medskip In detail, consider a double complex $\left(B^{\bullet,\bullet}, \, \partial,\, \overline{\del}\right)$ of $\mathbb{K}$-vector spaces (namely, a $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space $B^{\bullet,\bullet}$ endowed with $\partial\in\End^{1,0}(B^{\bullet,\bullet})$ and $\overline{\del}\in\End^{0,1}(B^{\bullet,\bullet})$ such that $\partial^2=\overline{\del}^2=\partial\overline{\del}+\overline{\del}\partial=0$). Several cohomologies can be studied: besides the \emph{Dolbeault cohomologies} $$ H^{\bullet,\bullet}_{\left( \partial ; \partial \right)}\left(B^{\bullet,\bullet}\right) \;:=\; \frac{\ker\partial}{\imm\partial} \qquad \text{ and } \qquad H^{\bullet,\bullet}_{\left( \overline{\del} ; \overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \;:=\; \frac{\ker\overline{\del}}{\imm\overline{\del}} \;, $$ and the cohomology of the associated total complex, $\left(\Tot^\bullet B^{\bullet,\bullet} := \bigoplus_{p+q=\bullet} B^{p,q},\, \de:=\partial+\overline{\del}\right)$, $$ H^\bullet_{\left( \de ; \de \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;:=\; \frac{\ker\de}{\imm\de} \;,$$ one can also consider the \emph{Bott-Chern cohomology} and the \emph{Aeppli cohomology}, that is, $$ H^{\bullet,\bullet}_{\left( \partial , \overline{\del} ; \partial\overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \;:=\; \frac{\ker\partial\cap\ker\overline{\del}}{\imm\partial\overline{\del}} \qquad \text{ and } \qquad H^{\bullet,\bullet}_{\left( \partial\overline{\del} ; \partial , \overline{\del}
\right)}\left(B^{\bullet,\bullet}\right) \;:=\; \frac{\ker\partial\overline{\del}}{\imm\partial+\imm\overline{\del}} \;. $$ The identity induces natural morphisms of (possibly $\mathbb{Z}$-graded, possibly $\mathbb{Z}^2$-graded) $\mathbb{K}$-vector spaces: $$ \xymatrix{ & H^{\bullet,\bullet}_{\left( \partial , \overline{\del} ; \partial\overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \ar[ld] \ar[rd] \ar[d] & \\ H^{\bullet,\bullet}_{\left( \partial ; \partial \right)}\left(B^{\bullet,\bullet}\right) \ar[rd] & H^\bullet_{\left( \de ; \de \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \ar[d] & H^{\bullet,\bullet}_{\left( \overline{\del} ; \overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \ar[ld] \\ & H^{\bullet,\bullet}_{\left( \partial\overline{\del} ; \partial , \overline{\del} \right)}\left(B^{\bullet,\bullet}\right) & } $$ In general, the above maps are neither injective nor surjective; actually, the map $H^{\bullet,\bullet}_{\left( \partial , \overline{\del} ; \partial\overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \to H^{\bullet,\bullet}_{\left( \partial\overline{\del} ; \partial , \overline{\del} \right)}\left(B^{\bullet,\bullet}\right)$ being injective is equivalent to all the above maps being isomorphisms, \cite[Lemma 5.15, Remark 5.16, 5.21]{deligne-griffiths-morgan-sullivan}. In such a case, one says that $\left( B^{\bullet,\bullet},\, \partial,\, \overline{\del}\right)$ \emph{satisfies the $\partial\overline{\del}$-Lemma}. 
By considering the spectral sequence associated to the double complex structure of $\left(B^{\bullet,\bullet},\, \partial,\, \overline{\del}\right)$, one gets the \emph{Fr\"olicher inequality}, \cite[Theorem 2]{frolicher}, $$ \min \left\{ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \partial ; \partial \right)}\left(B^{\bullet,\bullet}\right) ,\, \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \overline{\del} ; \overline{\del} \right)}\left(B^{\bullet,\bullet}\right) \right\} \;\geq\; \dim_\mathbb{K} H^\bullet_{\left( \de ; \de \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;.$$ We prove an \emph{inequality {\itshape à la} Fr\"olicher} also for the Bott-Chern and Aeppli cohomologies. More precisely, we prove the following result. \smallskip \noindent {\bfseries Theorem 1 (see Theorem \ref{thm:disug-frol} and Corollary \ref{cor:frolicher-like-double-complexes}).\ } {\itshape Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Suppose that $$ \dim_\mathbb{K} H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \;<\; +\infty \;.$$ Then $$ \dim_\mathbb{K} H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) + \dim_\mathbb{K} H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) \;\geq\; \dim_\mathbb{K} H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) + \dim_\mathbb{K} H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \;.
$$ In particular, given a bounded double complex $\left( B^{\bullet,\bullet},\, \delta_1,\, \delta_2 \right)$, and supposing that $$ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \;,$$ then, for $\pm\in\{+,-\}$, $$ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\geq\; 2\, \dim_\mathbb{K} H^\bullet_{\left( \delta_1 \pm \delta_2 ; \delta_1 \pm \delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;. $$ } \smallskip Furthermore, we provide a characterization of the equality in the above inequality {\itshape à la} Fr\"olicher in terms of the validity of the $\delta_1\delta_2$-Lemma. \smallskip \noindent {\bfseries Theorem 2 (see Theorem \ref{thm:caratt-deldelbar-lemma-double}).\ } {\itshape Let $\left( B^{\bullet,\bullet},\, \delta_1,\, \delta_2 \right)$ be a bounded double complex.
Suppose that $$ \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \;.$$ The following conditions are equivalent: \begin{enumerate} \item[{\itshape (\ref{item:caratt-bi-1})}] $B^{\bullet,\bullet}$ satisfies the $\delta_1\delta_2$-Lemma; \item[{\itshape (\ref{item:caratt-bi-2})}] the equality \begin{eqnarray*} \lefteqn{ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) } \\[5pt] &=& 2\, \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \end{eqnarray*} holds. \end{enumerate} } \smallskip \medskip Given a compact complex manifold $X$, one can apply Corollary \ref{cor:frolicher-like-double-complexes} and Theorem \ref{thm:caratt-deldelbar-lemma-double} to the double complex $\left(\wedge^{\bullet,\bullet}X ,\, \partial ,\, \overline{\del}\right)$. More precisely, one recovers \cite[Theorem A]{angella-tomassini-3}, getting that, on every compact complex manifold, $$ \dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{BC}(X) + \dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{A}(X) \;\geq\; 2\, \dim_\mathbb{C} H^\bullet_{dR}(X;\mathbb{C}) \;, $$ and the characterization of the $\partial\overline{\del}$-Lemma in terms of the Bott-Chern cohomology given in \cite[Theorem B]{angella-tomassini-3}, namely, that the equality holds if and only if the $\partial\overline{\del}$-Lemma holds.
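As a toy illustration (ours, not taken from the text) of how the inequality can be strict when the $\partial\overline{\del}$-Lemma fails, consider the length-two zigzag $B^{0,0} \to B^{1,0}$ with both spaces one-dimensional, $\delta_1$ the identity arrow, and $\delta_2 = 0$. A short exact-arithmetic computation, using the rank formulas $\dim H_{BC} = \dim\left(\ker\delta_1 \cap \ker\delta_2\right) - \mathrm{rk}\left(\delta_1\delta_2\right)$, $\dim H_{A} = \dim\ker\left(\delta_1\delta_2\right) - \dim\left(\imm\delta_1 + \imm\delta_2\right)$, and $\dim H_{dR} = n - 2\,\mathrm{rk}\left(\delta_1+\delta_2\right)$, confirms $\dim H_{BC} = \dim H_{A} = 1$ while the total cohomology vanishes:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix over Q by Gaussian elimination (rows = lists)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def dims(d1, d2, n):
    """(dim H_BC, dim H_A, dim H_dR) for anticommuting differentials on K^n."""
    d12 = matmul(d1, d2)
    stacked = d1 + d2                              # ker d1 ∩ ker d2 = ker [d1; d2]
    side = [r1 + r2 for r1, r2 in zip(d1, d2)]     # im d1 + im d2 = im [d1 | d2]
    h_bc = (n - rank(stacked)) - rank(d12)
    h_a = (n - rank(d12)) - rank(side)
    d_tot = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(d1, d2)]
    h_dr = n - 2 * rank(d_tot)
    return h_bc, h_a, h_dr

# Length-two zigzag: basis (e_{0,0}, e_{1,0}); d1 sends e_{0,0} to e_{1,0}, d2 = 0.
d1 = [[0, 0], [1, 0]]
d2 = [[0, 0], [0, 0]]
h_bc, h_a, h_dr = dims(d1, d2, 2)
```

The check returns $\dim H_{BC} = \dim H_{A} = 1$ and $\dim H_{dR} = 0$, so the inequality of Theorem 1 reads $1+1 > 2\cdot 0$ and is strict, consistent with the failure of the $\delta_1\delta_2$-Lemma for this even-length zigzag.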
\medskip Furthermore, Corollary \ref{cor:frolicher-like-double-complexes} and Theorem \ref{thm:caratt-deldelbar-lemma-double} also allow us to study the cohomology of compact manifolds $X$ endowed with symplectic forms $\omega$. In this case, one considers the $\mathbb{Z}$-graded algebra $\wedge^\bullet X$ endowed with $\de \in \End^{1}\left(\wedge^\bullet X\right)$ and $\de^\Lambda := \left[\de,\, -\iota_{\omega^{-1}}\right] \;\in\; \End^{-1}\left(\wedge^\bullet X \right)$, which satisfy $\de^2=\left(\de^\Lambda\right)^2=\de\de^\Lambda+\de^\Lambda\de=0$. The symplectic Bott-Chern and Aeppli cohomologies have been introduced and studied by L.-S. Tseng and S.-T. Yau in \cite{tseng-yau-1, tseng-yau-2, tseng-yau-3}. In particular, we get the following result. \smallskip \noindent {\bfseries Theorem 3 (see Theorem \ref{thm:sympl}).\ } {\itshape Let $X$ be a compact manifold endowed with a symplectic structure $\omega$. The inequality \begin{equation} \tag{\ref{eq:sympl}} \dim_\mathbb{R} H^{\bullet}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{\bullet}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) \;\geq\; 2\, \dim_\mathbb{R} H^{\bullet}_{dR}(X;\mathbb{R}) \end{equation} holds. Furthermore, the equality in \eqref{eq:sympl} holds if and only if $X$ satisfies the Hard Lefschetz Condition. } \smallskip We recall that a compact $2n$-dimensional manifold $X$ endowed with a symplectic form $\omega$ is said to satisfy the \emph{Hard Lefschetz Condition} if $\left[\omega\right]^k\smile \cdot \colon H^{n-k}_{dR}(X;\mathbb{R}) \to H^{n+k}_{dR}(X;\mathbb{R})$ is an isomorphism for every $k\in\mathbb{Z}$. \medskip Finally, Corollary \ref{cor:frolicher-like-double-complexes} and Theorem \ref{thm:caratt-deldelbar-lemma-double} can also be applied to the study of the cohomology of generalized-complex manifolds. Generalized-complex geometry has been introduced by N. Hitchin in \cite{hitchin}, and studied, among others, by M.
Gualtieri, \cite{gualtieri-phdthesis, gualtieri, gualtieri-kahler}, and G.~R. Cavalcanti, \cite{cavalcanti}. It provides a way to generalize both complex and symplectic geometry, since complex structures and symplectic structures appear as special cases of generalized-complex structures. See, e.g., \cite{hitchin-introduction} for an introduction to generalized-complex geometry; the cohomology of generalized-complex manifolds has been studied especially by G.~R. Cavalcanti, \cite{cavalcanti, cavalcanti-jgp, cavalcanti-computations}. On a manifold $X$ endowed with an $H$-twisted generalized complex structure $\mathcal{J}$ (see \S\ref{subsec:gen-cplx} for the definitions), one can consider the $\mathbb{Z}$-graduation $\Tot \wedge^\bullet X\otimes_\mathbb{R}\mathbb{C} = \bigoplus_{k\in\mathbb{Z}} U^k_{\mathcal{J}}$, and the endomorphisms $\partial_{\mathcal{J},H}\in\End^1\left(U^\bullet_{\mathcal{J}}\right)$ and $\overline{\del}_{\mathcal{J},H}\in\End^{-1}\left(U^\bullet_{\mathcal{J}}\right)$, which satisfy $\partial_{\mathcal{J},H}^2=\overline{\del}_{\mathcal{J},H}^2=\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}+\overline{\del}_{\mathcal{J},H}\partial_{\mathcal{J},H}=0$; then, let $$ GH^{\bullet}_{\partial_{\mathcal{J},H}}(X) \;:=\; \frac{\ker \partial_{\mathcal{J},H}}{\imm \partial_{\mathcal{J},H}} \;, \qquad GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X) \;:=\; \frac{\ker \overline{\del}_{\mathcal{J},H}}{\imm \overline{\del}_{\mathcal{J},H}} \;, $$ and $$ GH^{\bullet}_{BC_{\mathcal{J},H}}(X) \;:=\; \frac{\ker \partial_{\mathcal{J},H} \cap \ker \overline{\del}_{\mathcal{J},H}}{\imm \partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}} \;, \qquad GH^{\bullet}_{A_{\mathcal{J},H}}(X) \;:=\; \frac{\ker \partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}}{\imm \partial_{\mathcal{J},H} + \imm \overline{\del}_{\mathcal{J},H}} \;. $$ The above general results yield the following.
\smallskip \noindent {\bfseries Theorem 4 (see Theorem \ref{thm:gen-frol-ineq} and Theorem \ref{thm:gen-charact}).\ } {\itshape Let $X$ be a compact differentiable manifold endowed with an $H$-twisted generalized complex structure $\mathcal{J}$. Then \begin{equation}\tag{\ref{eq:ineq-frol-cplx-gen}} \dim_\mathbb{C} GH^{\bullet}_{BC_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{A_{\mathcal{J},H}}(X) \;\geq\; \dim_\mathbb{C} GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{\partial_{\mathcal{J},H}}(X) \;. \end{equation} Furthermore, $X$ satisfies the $\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}$-Lemma if and only if the Hodge and Fr\"olicher spectral sequences associated to the canonical double complex $\left(U^{\bullet_1-\bullet_2}_{\mathcal{J}}\otimes\beta^{\bullet_2},\, \partial_{\mathcal{J},H} \otimes_\mathbb{C} \id,\, \overline{\del}_{\mathcal{J},H} \otimes_\mathbb{C} \beta\right)$ degenerate at the first level and the equality in \eqref{eq:ineq-frol-cplx-gen} holds. } \smallskip \section{Preliminaries and notation} Fix $\mathbb{K}\in\{\mathbb{R},\,\mathbb{C}\}$. In this section, we summarize some notation and results concerning graded $\mathbb{K}$-vector spaces endowed with two anti-commuting differentials. \subsection{(Bi-)graded vector spaces} We set the notation by constructing two functors that allow one to pass between the $\mathbb{Z}$-graduation and the $\mathbb{Z}^2$-graduation of a $\mathbb{K}$-vector space. \medskip Consider a $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space $A^{\bullet,\bullet}$ endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_{1,1}, \hat\delta_{1,2}}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{\hat\delta_{2,1}, \hat\delta_{2,2}}\left(A^{\bullet,\bullet}\right)$ such that $\delta_1^2=\delta_2^2=\delta_1\delta_2+\delta_2\delta_1=0$.
Define the $\mathbb{Z}$-graded $\mathbb{K}$-vector space $$ \Tot^\bullet\left(A^{\bullet,\bullet}\right) \;:=\; \bigoplus_{p+q=\bullet} A^{p,q} \;, $$ endowed with the endomorphisms $$ \delta_1 \in \End^{\hat\delta_{1,1}+\hat\delta_{1,2}}\left(\Tot^\bullet\left(A^{\bullet,\bullet}\right)\right) \qquad \text{ and } \qquad \delta_2 \in \End^{\hat\delta_{2,1}+\hat\delta_{2,2}}\left(\Tot^\bullet\left(A^{\bullet,\bullet}\right)\right) $$ such that $\delta_1^2=\delta_2^2=\delta_1\delta_2+\delta_2\delta_1=0$. \medskip Conversely, consider a $\mathbb{Z}$-graded $\mathbb{K}$-vector space $A^\bullet$ endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2=\delta_2^2=\delta_1\delta_2+\delta_2\delta_1=0$. Following \cite[\S1.3]{brylinski}, \cite[\S4.2]{cavalcanti}, see \cite[\S II.2]{goodwillie}, \cite[\S II]{connes}, take an infinite cyclic multiplicative group $\left\{ \beta^m \;:\; m \in \mathbb{Z} \right\}$ generated by some $\beta$, and consider the $\mathbb{Z}$-graded $\mathbb{K}$-vector space $\bigoplus_{\bullet\in\mathbb{Z}}\mathbb{K}\, \beta^\bullet$. 
Define the $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space $$ \Doub^{\bullet_1,\bullet_2}\left(A^{\bullet}\right) \;:=\; A^{\hat\delta_1\,\bullet_1 + \hat\delta_2\,\bullet_2} \otimes_\mathbb{K} \mathbb{K}\,\beta^{\bullet_2} \;, $$ endowed with the endomorphisms $$ \delta_1\otimes_\mathbb{K} \id \in \End^{1,0}\left(\Doub^{\bullet,\bullet}\left(A^\bullet\right)\right) \qquad \text{ and } \qquad \delta_2\otimes_\mathbb{K}\beta \in \End^{0,1}\left(\Doub^{\bullet,\bullet}\left(A^\bullet\right)\right) \;, $$ which satisfy $\left(\delta_1\otimes_\mathbb{K}\id\right)^2=\left(\delta_2\otimes_\mathbb{K}\beta\right)^2=\left(\delta_1\otimes_\mathbb{K}\id\right)\left(\delta_2\otimes_\mathbb{K}\beta\right)+\left(\delta_2\otimes_\mathbb{K}\beta\right)\left(\delta_1\otimes_\mathbb{K}\id\right)=0$; following \cite[\S1.3]{brylinski}, \cite[\S4.2]{cavalcanti}, the double complex $\left(\Doub^{\bullet,\bullet}\left(A^{\bullet}\right),\, \delta_1\otimes_\mathbb{K}\id,\, \delta_2\otimes_\mathbb{K}\beta\right)$ is called the \emph{canonical double complex} associated to $A^{\bullet}$. \subsection{Cohomologies}\label{subsec:cohom-complexes} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $$ \delta_1^2 \;=\; \delta_2^2 \;=\; \delta_1\delta_2+\delta_2\delta_1 \;=\; 0 \;.
$$ Since one has the $\mathbb{Z}$-graded $\mathbb{K}$-vector sub-spaces $\imm\delta_1\delta_2 \subseteq \ker\delta_1 \cap \ker\delta_2$, and $\imm\delta_1 \subseteq \ker\delta_1$, and $\imm\delta_2 \subseteq \ker\delta_2$, and $\imm\delta_1+\imm\delta_2 \subseteq \ker\delta_1\delta_2$, one can define the $\mathbb{Z}$-graded $\mathbb{K}$-vector spaces \begin{eqnarray*} & H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \;:=\; \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \;, & \\[5pt] H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) \;:=\; \frac{\ker\delta_1}{\imm\delta_1} \;, & & H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \;:=\; \frac{\ker\delta_2}{\imm\delta_2} \;, \\[5pt] & H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) \;:=\; \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} \;, & \end{eqnarray*} and, since one has the $\mathbb{K}$-vector sub-space $\imm\left(\delta_1+\delta_2\right) \subseteq \ker\left(\delta_1+\delta_2\right)$, one can define the $\mathbb{K}$-vector space $$ H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right) \;:=\; \frac{\ker\left(\delta_1+\delta_2\right)}{\imm\left(\delta_1+\delta_2\right)} \;; $$ we follow notation in \cite[Remark 5.16]{deligne-griffiths-morgan-sullivan}: more precisely, if maps $f_j \colon C_j\to A$ for $j\in\{1,\ldots,r\}$ and $g_k \colon A \to B_k$ for $k\in\{1,\ldots,s\}$ of $\mathbb{K}$-vector spaces are given, then $H_{ \left( f_1 , \ldots, f_r ; g_1 , \ldots, g_s \right) }$ denotes the quotient $\frac{\bigcap_{j=1}^{r} \ker f_j}{ \sum_{k=1}^{s} \imm g_k }$. 
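As a guiding example, take $A^\bullet = \Tot^\bullet \wedge^{\bullet,\bullet}X$ for a compact complex manifold $X$, with $\delta_1 = \partial$ and $\delta_2 = \overline{\del}$ (so that $\delta_1+\delta_2=\de$): the above quotients then recover, respectively, the Bott-Chern, Aeppli, Dolbeault, and de Rham cohomologies,

```latex
$$ H^{\bullet}_{\left( \partial , \overline{\del} ; \partial\overline{\del} \right)} \;=\; \Tot^\bullet H^{\bullet,\bullet}_{BC}(X) \;, \qquad H^{\bullet}_{\left( \partial\overline{\del} ; \partial , \overline{\del} \right)} \;=\; \Tot^\bullet H^{\bullet,\bullet}_{A}(X) \;, $$
$$ H^{\bullet}_{\left( \overline{\del} ; \overline{\del} \right)} \;=\; \Tot^\bullet H^{\bullet,\bullet}_{\overline{\del}}(X) \;, \qquad H_{\left( \partial+\overline{\del} ; \partial+\overline{\del} \right)}\left(\Tot \wedge^{\bullet,\bullet}X\right) \;=\; H^{\bullet}_{dR}(X;\mathbb{C}) \;. $$
```

Similarly, $H^{\bullet}_{\left( \partial ; \partial \right)}$ yields the conjugate Dolbeault cohomology.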
(Note that, up to considering $-\delta_2\in\End^{\hat\delta_2}\left(A^\bullet\right)$ instead of $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$, one has the $\mathbb{K}$-vector sub-space $\imm\left(\delta_1-\delta_2\right) \subseteq \ker\left(\delta_1-\delta_2\right)$, and hence one can also consider the $\mathbb{K}$-vector space $H_{\left( \delta_1-\delta_2 ; \delta_1-\delta_2 \right)}\left(\Tot A^\bullet\right) := \frac{\ker\left(\delta_1-\delta_2\right)}{\imm\left(\delta_1-\delta_2\right)}$; note that, for $\sharp_{\delta_1,\delta_2}\in\left\{\left(\delta_1 ; \delta_1\right),\, \left(\delta_2 ; \delta_2\right),\, \left(\delta_1 , \delta_2 ; \delta_1\delta_2\right),\, \left(\delta_1\delta_2 ; \delta_1 , \delta_2\right)\right\}$, one has $H^\bullet_{\sharp_{\delta_1,\delta_2}}\left(A^\bullet\right)=H^\bullet_{\sharp_{\delta_1,-\delta_2}}\left(A^\bullet\right)$.) \begin{rem}\label{rem:z-grad} Note that $H_{\left( \delta_1 + \delta_2 ; \delta_1 + \delta_2 \right)}\left(A^\bullet\right)$ admits a $\left(\left. \mathbb{Z} \middle\slash \left(\hat\delta_1-\hat\delta_2\right)\mathbb{Z}\right.\right)$-graduation; in particular, if $\hat\delta_1 = \hat\delta_2$, then $H^\bullet_{\left( \delta_1 + \delta_2 ; \delta_1 + \delta_2 \right)}\left(A^\bullet\right)$ is actually a $\mathbb{Z}$-graded $\mathbb{K}$-vector space. \end{rem} \begin{rem}\label{rem:z2-grad} Note that, for $\sharp \in \left\{ \left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right) ,\, \left( \delta_1 ; \delta_1 \right) ,\, \left( \delta_2 ; \delta_2 \right) ,\, \left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right) \right\}$, if $A^{\bullet,\bullet}$ is actually $\mathbb{Z}^2$-graded, then $H^\bullet_{\sharp}\left(A^\bullet\right)$ admits a $\mathbb{Z}^2$-graduation such that $\Tot^\bullet H^{\bullet,\bullet}_{\sharp}\left(A^{\bullet,\bullet}\right)=H^\bullet_{\sharp}\left(\Tot^\bullet A^{\bullet,\bullet}\right)$.
Furthermore, for $\delta_1 \in \End^{\hat\delta_{1,1}, \hat\delta_{1,2}}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{\hat\delta_{2,1}, \hat\delta_{2,2}}\left(A^{\bullet,\bullet}\right)$, one has that $H_{\left( \delta_1 + \delta_2 ; \delta_1 + \delta_2 \right)}\left(\Tot A^\bullet\right)$ admits a $\left(\left(\left. \mathbb{Z} \middle\slash \left(\hat\delta_{1,1}-\hat\delta_{2,1}\right)\mathbb{Z}\right.\right) \times \left(\left. \mathbb{Z} \middle\slash \left(\hat\delta_{1,2}-\hat\delta_{2,2}\right)\mathbb{Z}\right.\right)\right)$-graduation; in particular, if $\hat\delta_{1,1}=\hat\delta_{2,1}$ and $\hat\delta_{1,2}=\hat\delta_{2,2}$, then $H_{\left( \delta_1 + \delta_2 ; \delta_1 + \delta_2 \right)}\left(\Tot A^\bullet\right)$ is actually $\mathbb{Z}^2$-graded. \end{rem} Since $\ker\delta_1\cap\ker\delta_2 \subseteq \ker\left(\delta_1\pm\delta_2\right)$ and $\imm\delta_1\delta_2\subseteq \imm\left(\delta_1\pm\delta_2\right)$ for $\pm\in\{+,-\}$, and $\ker\delta_1\cap\ker\delta_2 \subseteq \ker\delta_1$ and $\imm\delta_1\delta_2\subseteq \imm\delta_1$, and $\ker\delta_1\cap\ker\delta_2 \subseteq \ker\delta_2$ and $\imm\delta_1\delta_2\subseteq \imm\delta_2$, and $\ker\left(\delta_1\pm\delta_2\right)\subseteq\ker\delta_1\delta_2$ and $\imm\left(\delta_1\pm\delta_2\right)\subseteq\imm\delta_1+\imm\delta_2$ for $\pm\in\{+,-\}$, and $\ker\delta_1\subseteq\ker\delta_1\delta_2$ and $\imm\delta_1\subseteq\imm\delta_1+\imm\delta_2$, and $\ker\delta_2\subseteq\ker\delta_1\delta_2$ and $\imm\delta_2\subseteq\imm\delta_1+\imm\delta_2$, the identity map induces natural morphisms of (possibly $\mathbb{Z}$-graded, possibly $\mathbb{Z}^2$-graded) $\mathbb{K}$-vector spaces \begin{footnotesize} $$ \xymatrix{ && H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \ar@/_1.5pc/[lld] \ar@/_1pc/[ld] \ar@/^1pc/[rd] \ar@/^1.5pc/[rrd] \ar[dd] && \\ H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right)
\ar@/_1.5pc/[rrd] & H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right) \ar@/_1pc/[rd] & & H_{\left( \delta_1-\delta_2 ; \delta_1-\delta_2 \right)}\left(\Tot A^\bullet\right) \ar@/^1pc/[ld] & H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \ar@/^1.5pc/[lld] \\ && H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) && } $$ \end{footnotesize} (As a matter of notation, by writing, for example, $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right)$, we mean $\Tot H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right)$.) \subsection{$\delta_1\delta_2$-Lemma} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$, and consider the cohomologies introduced in \S\ref{subsec:cohom-complexes}. In general, the natural maps induced by the identity between such cohomologies are neither injective nor surjective: the following definition, \cite{deligne-griffiths-morgan-sullivan}, points out when they are actually isomorphisms. 
\begin{defi}[{\cite{deligne-griffiths-morgan-sullivan}}] A $\mathbb{Z}$-graded $\mathbb{K}$-vector space $A^\bullet$ endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$ is said to satisfy the \emph{$\delta_1\delta_2$-Lemma} if and only if $$ \ker\delta_1 \cap \ker \delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right) \;=\; \imm\delta_1\delta_2 \;,$$ namely, if and only if the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity is injective. A $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space $A^{\bullet,\bullet}$ endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_{1,1}, \hat\delta_{1,2}}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{\hat\delta_{2,1}, \hat\delta_{2,2}}\left(A^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$ is said to satisfy the \emph{$\delta_1\delta_2$-Lemma} if and only if $\Tot^\bullet\left(A^{\bullet,\bullet}\right)$ satisfies the $\delta_1\delta_2$-Lemma. \end{defi} We recall the following result, which provides further characterizations of the validity of the $\delta_1\delta_2$-Lemma. (Note that, according to Remark \ref{rem:z-grad} and Remark \ref{rem:z2-grad}, the natural maps induced by the identity in Lemma \ref{lemma:equiv} are maps of possibly $\mathbb{Z}$-graded, possibly $\mathbb{Z}^2$-graded $\mathbb{K}$-vector spaces.) 
\begin{lemma}[{see \cite[Lemma 5.15]{deligne-griffiths-morgan-sullivan}}]\label{lemma:equiv} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. The following conditions are equivalent: \begin{enumerate} \item\label{item:equiv-1} $A^\bullet$ satisfies the $\delta_1\delta_2$-Lemma, namely, the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity is injective; \item\label{item:equiv-2} the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity is surjective; \item\label{item:equiv-3} both the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right)$ induced by the identity and the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity are injective; \item\label{item:equiv-4} both the natural map $H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity and the natural map $H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity are surjective. 
\end{enumerate} Furthermore, suppose that the $\mathbb{K}$-vector space $\ker\delta_1\delta_2$ admits a $\mathbb{Z}$-graduation $$ \ker\delta_1\delta_2 \;=\; \bigoplus_{\ell\in\mathbb{Z}} \left(\ker\delta_1\delta_2 \cap \tilde A^\ell\right) $$ with respect to which $\ker\left(\delta_1 \pm \delta_2\right) \cap \tilde A^\bullet = \left(\ker\delta_1\cap\ker\delta_2\right) \cap \tilde A^\bullet$. (For example, if $\hat\delta_1\neq\hat\delta_2$, then take the $\mathbb{Z}$-graduation given by $A^\bullet$. For example, if $A^{\bullet,\bullet}$ is actually $\mathbb{Z}^2$-graded and $\delta_1\in\End^{\hat\delta_{1,1},\hat\delta_{1,2}}\left(A^{\bullet,\bullet}\right)$ and $\delta_2\in\End^{\hat\delta_{2,1},\hat\delta_{2,2}}\left(A^{\bullet,\bullet}\right)$ with $\left(\hat\delta_{1,1},\hat\delta_{1,2}\right)\neq \left(\hat\delta_{2,1},\hat\delta_{2,2}\right)$, then take the $\mathbb{Z}$-graduation induced by the $\mathbb{Z}^2$-graduation of $A^{\bullet,\bullet}$ by means of a chosen bijection $\mathbb{Z}\stackrel{\simeq}{\to}\mathbb{Z}^2$.) 
Then the previous conditions are equivalent to each of the following: \begin{enumerate}\setcounter{enumi}{4} \item\label{item:equiv-5} the natural map $\Tot H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right)$ induced by the identity is injective; \item\label{item:equiv-6} the natural map $H_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot A^\bullet\right) \to \Tot H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity is surjective; \item\label{item:equiv-7} the natural map $\Tot H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H_{\left( \delta_1-\delta_2 ; \delta_1-\delta_2 \right)}\left(\Tot A^\bullet\right)$ induced by the identity is injective; \item\label{item:equiv-8} the natural map $H_{\left( \delta_1-\delta_2 ; \delta_1-\delta_2 \right)}\left(\Tot A^\bullet\right) \to \Tot H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity is surjective. \end{enumerate} \end{lemma} \begin{proof} For the sake of completeness, we recall here the proof in \cite{deligne-griffiths-morgan-sullivan}. \paragrafoo{\eqref{item:equiv-1} $\Rightarrow$ \eqref{item:equiv-3}} By the hypothesis, $\ker\delta_1 \cap \ker\delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right) = \imm\delta_1\delta_2$, and we have to prove that $\ker\delta_2 \cap \imm\delta_1 \subseteq \imm\delta_1\delta_2$ and $\ker\delta_1 \cap \imm\delta_2 \subseteq \imm\delta_1\delta_2$. 
Since $\imm\delta_1 \subseteq \imm\delta_1 + \imm\delta_2$ and $\imm\delta_2 \subseteq \imm\delta_1 + \imm\delta_2$, one gets immediately that the natural maps $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right)$ and $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right)$ are injective. \paragrafoo{\eqref{item:equiv-3} $\Rightarrow$ \eqref{item:equiv-4}} By the hypotheses, we have that $\ker\delta_2 \cap \imm\delta_1 = \imm\delta_1\delta_2$ and $\ker\delta_1 \cap \imm\delta_2 = \imm\delta_1\delta_2$, and we have to prove that $\ker\delta_1 + \imm\delta_2 \supseteq \ker\delta_1\delta_2$ and $\ker\delta_2 + \imm\delta_1 \supseteq \ker\delta_1\delta_2$. Let $x\in\ker\delta_1\delta_2$. Then $\delta_1(x) \in \ker\delta_2 \cap \imm\delta_1 = \imm\delta_1\delta_2$: let $y \in A^{\bullet}$ be such that $\delta_1(x) = \delta_1\delta_2(y)$. Then $x = \left(x - \delta_2(y)\right) + \delta_2(y) \in \ker\delta_1 + \imm\delta_2$, since $\delta_1\left(x - \delta_2(y)\right) = 0$; it follows that the natural map $H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is surjective. Analogously, $\delta_2(x) \in \ker\delta_1 \cap \imm\delta_2 = \imm\delta_1\delta_2$: let $z$ be such that $\delta_2(x) = \delta_1\delta_2(z)$. Then $x = \left(x + \delta_1(z)\right) - \delta_1(z) \in \ker\delta_2 + \imm\delta_1$, since $\delta_2\left(x + \delta_1(z)\right) = 0$; it follows that the natural map $H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is surjective.
\paragrafoo{\eqref{item:equiv-4} $\Rightarrow$ \eqref{item:equiv-2}} By the hypothesis, $\ker\delta_1 + \imm\delta_2 = \ker\delta_1\delta_2$ and $\ker\delta_2 + \imm\delta_1 = \ker\delta_1\delta_2$, and we have to prove that $\left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2 \supseteq \ker\delta_1\delta_2$. Since $\ker\delta_1\delta_2 = \left(\ker\delta_1 + \imm\delta_2\right) \cap \left(\ker\delta_2 + \imm\delta_1\right) \subseteq \left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2$, one gets that the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is surjective. \paragrafoo{\eqref{item:equiv-2} $\Rightarrow$ \eqref{item:equiv-1}} By the hypothesis, $\left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2 = \ker\delta_1\delta_2$, and we have to prove that $\ker \delta_1 \cap \ker\delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right) \subseteq \imm\delta_1\delta_2$. Let $x :=: \delta_1(y) + \delta_2(z) \in \ker\delta_1 \cap \ker\delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right)$. Therefore $y \in \ker\delta_1\delta_2 = \left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2$ and $z \in \ker\delta_1\delta_2 = \left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2$. It follows that $\delta_1(y) \in \imm\delta_1\delta_2$ and $\delta_2(z) \in \imm\delta_1\delta_2$, and hence $x = \delta_1(y) + \delta_2(z) \in \imm\delta_1\delta_2$, proving that the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is injective.
\paragrafoo{\eqref{item:equiv-1} $\Rightarrow$ \eqref{item:equiv-5}, and \eqref{item:equiv-1} $\Rightarrow$ \eqref{item:equiv-7}} By the hypothesis, $\ker\delta_1 \cap \ker\delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right) = \imm\delta_1\delta_2$, and we have to prove that $\ker\delta_1 \cap \ker\delta_2 \cap \imm\left(\delta_1 \pm \delta_2\right) \subseteq \imm\delta_1\delta_2$ for $\pm\in\{+,-\}$. Since $\ker\delta_1 \cap \ker\delta_2 \cap \imm\left(\delta_1 \pm \delta_2\right) \subseteq \ker\delta_1 \cap \ker\delta_2 \cap \left(\imm\delta_1 + \imm\delta_2\right)$, one gets immediately that the natural map $\Tot H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H_{\left( \delta_1 \pm \delta_2 ; \delta_1 \pm \delta_2 \right)}\left(\Tot A^\bullet\right)$ is injective. \paragrafoo{\eqref{item:equiv-5} $\Rightarrow$ \eqref{item:equiv-6}, and \eqref{item:equiv-7} $\Rightarrow$ \eqref{item:equiv-8}} Fix $\pm\in\{+,-\}$. By the hypothesis, $\ker\delta_1 \cap \ker\delta_2 \cap \imm \left(\delta_1 \pm \delta_2\right) = \imm\delta_1\delta_2$, and we have to prove that $\ker\left( \delta_1 \pm \delta_2 \right) + \imm\delta_1 + \imm\delta_2 \supseteq \ker\delta_1\delta_2$. Let $x\in \ker\delta_1\delta_2$. Then $\left(\delta_1 \pm \delta_2\right)(x) \in \ker\delta_1 \cap \ker\delta_2 \cap \imm\left(\delta_1 \pm \delta_2\right) = \imm\delta_1\delta_2$; let $z\in\Tot A^{\bullet}$ be such that $\left(\delta_1 \pm \delta_2\right)(x) = \delta_1\delta_2(z)$.
Since $\left(\delta_1 \pm \delta_2\right) \left(x \pm \frac{1}{2}\, \delta_1(z) - \frac{1}{2}\, \delta_2(z)\right) = 0$, one gets that $x = \left(x \pm \frac{1}{2}\, \delta_1(z) - \frac{1}{2}\, \delta_2(z)\right) - \left(\pm \frac{1}{2}\, \delta_1(z)\right) + \frac{1}{2}\, \delta_2(z) \in \ker\left( \delta_1 \pm \delta_2 \right) + \imm\delta_1 + \imm\delta_2$, proving that the natural map $H_{\left( \delta_1 \pm \delta_2 ; \delta_1 \pm \delta_2 \right)}\left(\Tot A^\bullet\right) \to \Tot H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is surjective. To conclude the equivalences, we assume the additional hypothesis given in the statement. \paragrafoo{\eqref{item:equiv-6} $\Rightarrow$ \eqref{item:equiv-2}, and \eqref{item:equiv-8} $\Rightarrow$ \eqref{item:equiv-2}} Fix $\pm\in\{+,-\}$. By the hypothesis, $\ker\left( \delta_1 \pm \delta_2 \right) + \imm\delta_1 + \imm\delta_2 = \ker\delta_1\delta_2$, and we have to prove that $\left(\ker\delta_1 \cap \ker\delta_2\right) + \imm\delta_1 + \imm\delta_2 \supseteq \ker\delta_1\delta_2$. By the additional hypothesis, we have that $\ker\delta_1\delta_2$ admits a $\mathbb{Z}$-graduation $\ker\delta_1\delta_2 = \bigoplus_{\ell\in\mathbb{Z}} \left(\ker\delta_1\delta_2 \cap \tilde A^\ell\right)$ with respect to which $\ker\left(\delta_1 \pm \delta_2\right) \cap \tilde A^\bullet = \left(\ker\delta_1\cap\ker\delta_2\right) \cap \tilde A^\bullet$. 
Then one has that \begin{eqnarray*} \ker\delta_1\delta_2 &=& \bigoplus_{\ell\in\mathbb{Z}} \left(\ker\delta_1\delta_2 \cap \tilde A^\ell\right) \;=\; \bigoplus_{\ell\in\mathbb{Z}} \left( \left(\ker\left( \delta_1 \pm \delta_2 \right) + \imm\delta_1 + \imm\delta_2\right) \cap \tilde A^\ell\right) \\[5pt] &\subseteq& \bigoplus_{\ell\in\mathbb{Z}} \left( \left(\ker\left( \delta_1 \pm \delta_2 \right) \cap \tilde A^\ell\right) + \imm\delta_1 + \imm\delta_2\right) \;=\; \bigoplus_{\ell\in\mathbb{Z}} \left( \left(\left(\ker \delta_1 \cap \ker \delta_2 \right) \cap \tilde A^\ell\right) + \imm\delta_1 + \imm\delta_2\right) \\[5pt] &\subseteq& \left(\ker \delta_1 \cap \ker \delta_2 \right) + \imm\delta_1 + \imm\delta_2 \;, \end{eqnarray*} proving that the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is surjective. \end{proof} \medskip By noting that, for $\sharp_{\delta_1,\delta_2}\in\left\{\delta_1,\, \delta_2,\, \delta_1\delta_2,\, \delta_1+\delta_2,\, \delta_1-\delta_2\right\}$, $$ \left(\ker \sharp_{\delta_1\otimes_\mathbb{K}\id, \delta_2\otimes_\mathbb{K}\beta}\right)^{\bullet_1,\bullet_2} \;=\; \left(\ker \sharp_{\delta_1,\delta_2}\right)^{\hat\delta_1\, \bullet_1 + \hat\delta_2\, \bullet_2} \otimes_\mathbb{K} \mathbb{K}\, \beta^{\bullet_2} $$ and $$ \left(\imm \sharp_{\delta_1\otimes_\mathbb{K}\id, \delta_2\otimes_\mathbb{K}\beta}\right)^{\bullet_1,\bullet_2} \;=\; \left(\imm \sharp_{\delta_1,\delta_2}\right)^{\hat\delta_1\, \bullet_1 + \hat\delta_2\, \bullet_2} \otimes_\mathbb{K} \mathbb{K}\, \beta^{\bullet_2} \;, $$ we get the following lemmata.
\begin{lemma}\label{lemma:cohom-a-doub} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Then, there are natural isomorphisms of $\mathbb{K}$-vector spaces $$ H^{\bullet_1,\bullet_2}_{\sharp_{\delta_1\otimes_\mathbb{K}\id, \delta_2\otimes_\mathbb{K}\beta}} \left(\Doub^{\bullet,\bullet}A^{\bullet}\right) \;\simeq\; \Doub^{\bullet_1,\bullet_2} H_{\sharp_{\delta_1,\delta_2}}^{\bullet}\left(A^\bullet\right) \;, $$ where $\sharp_{\delta_1,\delta_2} \in \left\{ (\delta_1 , \delta_2 ; \delta_1\delta_2) ,\, (\delta_1 ; \delta_1) ,\, (\delta_2 ; \delta_2) ,\, (\delta_1\delta_2 ; \delta_1 , \delta_2) \right\}$. \end{lemma} \begin{lemma}\label{lemma:deldelbar-lemma-a-doub} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Denote the greatest common divisor of $\hat\delta_1$ and $\hat\delta_2$ by $\GCD{\hat\delta_1}{\hat\delta_2}$. The following conditions are equivalent: \begin{enumerate} \item\label{item:deldelbar-lemma-ab-1} $A^{\GCD{\hat\delta_1}{\hat\delta_2} \, \bullet}$ satisfies the $\delta_1\delta_2$-Lemma; \item\label{item:deldelbar-lemma-ab-2} $\Doub^{\bullet,\bullet}\left(A^{\bullet}\right)$ satisfies the $\left(\delta_1\otimes_\mathbb{K}\id\right)\left(\delta_2\otimes_\mathbb{K}\beta\right)$-Lemma. 
\end{enumerate} \end{lemma} \begin{proof} Indeed, $$ \left( \ker\left(\delta_1\otimes_\mathbb{K}\id\right) \cap \imm\left(\delta_2\otimes_\mathbb{K}\beta\right) \right)^{\bullet_1,\bullet_2} \;=\; \left(\ker\delta_1 \cap \imm\delta_2 \cap A^{\hat\delta_1\, \bullet_1 + \hat\delta_2\, \bullet_2}\right) \otimes_\mathbb{K} \mathbb{K}\, \beta^{\bullet_2} $$ and $$ \left(\imm\left(\delta_1\otimes_\mathbb{K}\id\right)\left(\delta_2\otimes_\mathbb{K}\beta\right)\right)^{\bullet_1,\bullet_2} \;=\; \left(\imm\delta_1\delta_2 \cap A^{\hat\delta_1\, \bullet_1 + \hat\delta_2\, \bullet_2}\right) \otimes_\mathbb{K} \mathbb{K}\, \beta^{\bullet_2} \;,$$ completing the proof. \end{proof} \section{An inequality {\itshape à la} Fr\"olicher} Let $A^{\bullet,\bullet}$ be a bounded $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(A^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. The bi-grading induces two natural bounded filtrations of the $\mathbb{Z}$-graded $\mathbb{K}$-vector space $\Tot^\bullet \left( A^{\bullet,\bullet} \right)$ endowed with the endomorphism $\delta_1+\delta_2\in \End^{1}\left(\Tot^\bullet \left( A^{\bullet,\bullet} \right)\right)$, namely, $$ \left\{ {'F}^{p} \Tot^\bullet \left( A^{\bullet,\bullet} \right) := \bigoplus_{\substack{r+s=\bullet\\r\geq p}}A^{r,s} \hookrightarrow \Tot^\bullet\left(A^{\bullet,\bullet}\right) \right\}_{p\in\mathbb{Z}} $$ and $$ \left\{ {''F}^{q} \Tot^\bullet \left( A^{\bullet,\bullet} \right) := \bigoplus_{\substack{r+s=\bullet\\s\geq q}}A^{r,s} \hookrightarrow \Tot^\bullet\left(A^{\bullet,\bullet}\right) \right\}_{q\in\mathbb{Z}} \;. 
$$ Such filtrations induce naturally two spectral sequences, respectively, $$ \left\{{'E}_r^{\bullet,\bullet}\left(A^{\bullet,\bullet},\, \delta_1,\, \delta_2\right)\right\}_{r\in\mathbb{Z}} \qquad \text{ and } \qquad \left\{{''E}_r^{\bullet,\bullet}\left(A^{\bullet,\bullet},\, \delta_1,\, \delta_2\right)\right\}_{r\in\mathbb{Z}} \;, $$ such that $$ {'E}_1^{\bullet_1,\bullet_2}\left(A^{\bullet,\bullet},\, \delta_1,\, \delta_2\right) \;\simeq\; H^{\bullet_1,\bullet_2}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet,\bullet}\right) \;\Rightarrow\; H^{\bullet_1+\bullet_2}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet \left( A^{\bullet,\bullet} \right)\right) \;, $$ and $$ {''E}_1^{\bullet_1,\bullet_2}\left(A^{\bullet,\bullet},\, \delta_1,\, \delta_2\right) \;\simeq\; H^{\bullet_1,\bullet_2}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet,\bullet}\right) \;\Rightarrow\; H^{\bullet_1+\bullet_2}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet\left(A^{\bullet,\bullet}\right)\right) \;, $$ see, e.g., \cite[\S2.4]{mccleary}, see also \cite[\S3.5]{griffiths-harris}. \medskip By using these spectral sequences (and up to considering $-\delta_2$ instead of $\delta_2$), one gets the classical \emph{A. Fr\"olicher inequality}. \begin{notation} Given two $\mathbb{Z}$-graded $\mathbb{K}$-vector spaces $A^\bullet$ and $B^\bullet$, writing, for example, $\dim_\mathbb{K} A^\bullet \geq \dim_\mathbb{K} B^\bullet$, we mean that, for any $k\in\mathbb{Z}$, the inequality $\dim_\mathbb{K} A^k \geq \dim_\mathbb{K} B^k$ holds. \end{notation} \begin{prop}[{\cite[Theorem 2]{frolicher}}]\label{prop:frolicher-classico} Let $A^{\bullet,\bullet}$ be a bounded $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(A^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$.
Then, for $\pm\in\{+,-\}$, $$ \min\left\{ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet,\bullet}\right) ,\; \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet,\bullet}\right) \right\} \;\geq\; \dim_\mathbb{K} H^\bullet_{\left( \delta_1 \pm \delta_2 ; \delta_1 \pm \delta_2 \right)}\left(\Tot^\bullet A^{\bullet,\bullet}\right) \;. $$ \end{prop} As a straightforward consequence, the following result holds in the $\mathbb{Z}$-graded case. \begin{cor} Let $A^{\bullet}$ be a bounded $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^{\bullet}\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^{\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Then, for $\pm\in\{+,-\}$, \begin{eqnarray*} \lefteqn{\min\left\{ \sum_{p+q=\bullet}\dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet}\right) ,\; \sum_{p+q=\bullet} \dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet}\right) \right\}} \\[5pt] &\geq& \dim_\mathbb{K} H^\bullet_{\left( \left(\delta_1\otimes_\mathbb{K}\id\right) \pm \left(\delta_2\otimes_\mathbb{K}\beta\right) ; \left(\delta_1\otimes_\mathbb{K}\id\right) \pm \left(\delta_2\otimes_\mathbb{K}\beta\right) \right)}\left(\Tot^\bullet \Doub^{\bullet,\bullet}A^\bullet\right) \;. 
\end{eqnarray*} \end{cor} \begin{proof} By Lemma \ref{lemma:cohom-a-doub}, one has that, for $\sharp_{\delta_1,\delta_2} \in \left\{ \left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right) ,\, \left( \delta_1 ; \delta_1 \right) ,\, \left( \delta_2 ; \delta_2 \right) ,\, \left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right) \right\}$, $$ \dim_\mathbb{K} H^{\bullet_1,\bullet_2}_{\sharp_{\delta_1\otimes_\mathbb{K}\id, \delta_2\otimes_\mathbb{K}\beta}} \left(\Doub^{\bullet,\bullet}A^{\bullet}\right) \;=\; \dim_\mathbb{K} H_{\sharp_{\delta_1,\delta_2}}^{\hat\delta_1 \, \bullet_1 + \hat\delta_2 \, \bullet_2}\left(A^\bullet\right) \;. $$ Hence, by applying the classical Fr\"olicher inequality, Proposition \ref{prop:frolicher-classico}, to $\Doub^{\bullet,\bullet}$ endowed with $\delta_1\otimes_\mathbb{K}\id$ and $\delta_2\otimes_\mathbb{K}\beta$, one gets, for $\pm\in\{+,-\}$, \begin{eqnarray*} \lefteqn{\min\left\{ \sum_{p+q=\bullet}\dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet}\right) ,\; \sum_{p+q=\bullet} \dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet}\right) \right\}} \\[5pt] &=& \min\left\{ \sum_{p+q=\bullet} \dim_\mathbb{K} H^{p,q}_{\left( \delta_1\otimes_\mathbb{K}\id ; \delta_1\otimes_\mathbb{K}\id \right)}\left(\Doub^{\bullet,\bullet} A^{\bullet}\right) ,\; \sum_{p+q=\bullet} \dim_\mathbb{K} H^{p,q}_{\left( \delta_2\otimes_\mathbb{K}\beta ; \delta_2\otimes_\mathbb{K}\beta \right)}\left(\Doub^{\bullet,\bullet} A^{\bullet}\right) \right\} \\[5pt] &=& \min\left\{ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\otimes_\mathbb{K}\id ; \delta_1\otimes_\mathbb{K}\id \right)}\left(\Doub^{\bullet,\bullet} A^{\bullet}\right) ,\; \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_2\otimes_\mathbb{K}\beta ; \delta_2\otimes_\mathbb{K}\beta \right)}\left(\Doub^{\bullet,\bullet} A^{\bullet}\right) \right\} \\[5pt] &\geq& 
\dim_\mathbb{K} H^\bullet_{\left( \left(\delta_1\otimes_\mathbb{K}\id\right) \pm \left(\delta_2\otimes_\mathbb{K}\beta\right) ; \left(\delta_1\otimes_\mathbb{K}\id\right) \pm \left(\delta_2\otimes_\mathbb{K}\beta\right) \right)}\left(\Tot^\bullet \Doub^{\bullet,\bullet} A^\bullet\right) \;, \end{eqnarray*} completing the proof. \end{proof} We prove the following inequality {\itshape à la} Fr\"olicher involving the cohomologies $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right)$ and $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$, in addition to $H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right)$ and $H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right)$. \begin{thm}\label{thm:disug-frol} Let $A^\bullet$ be a $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Suppose that $$ \dim_\mathbb{K} H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \;<\; +\infty \;.$$ Then \begin{equation}\label{eq:disug-frol} \dim_\mathbb{K} H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) + \dim_\mathbb{K} H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) \;\geq\; \dim_\mathbb{K} H^\bullet_{\left( \delta_1 ; \delta_1 \right)}\left(A^\bullet\right) + \dim_\mathbb{K} H^\bullet_{\left( \delta_2 ; \delta_2 \right)}\left(A^\bullet\right) \;. 
\end{equation} \end{thm} \begin{proof} If either $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right)$ or $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ is not finite-dimensional, then the inequality holds trivially; hence we may assume that also $$ \dim_\mathbb{K} H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) < +\infty \;.$$ Following J. Varouchas, \cite[\S3.1]{varouchas}, consider the exact sequences \begin{eqnarray*} 0 \to \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_1}{\imm\delta_1} \to \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} \to \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} \to 0 \;, \\[5pt] 0 \to \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_2 \cap \imm\delta_1}{\imm\delta_1\delta_2} \to \frac{\ker\delta_2}{\imm\delta_2} \to \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} \to \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} \to 0 \;, \\[5pt] 0 \to \frac{\imm\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_1}{\imm\delta_1} \to \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} \to \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \to 0 \;, \\[5pt] 0 \to \frac{\imm\delta_2 \cap \ker\delta_1}{\imm\delta_1\delta_2} \to \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \to \frac{\ker\delta_2}{\imm\delta_2} \to \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} \to \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \to 0 \end{eqnarray*} of $\mathbb{Z}$-graded $\mathbb{K}$-vector spaces. 
Note that all the $\mathbb{K}$-vector spaces appearing in the exact sequences have finite dimension. Indeed, since $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$ has finite dimension, then $$ \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} \;<\; +\infty \;;$$ since $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1 \delta_2 \right)}\left(A^\bullet\right)$ has finite dimension, then $$ \dim_\mathbb{K} \frac{\imm\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \frac{\imm\delta_2 \cap \ker\delta_1}{\imm\delta_1\delta_2} \;<\; +\infty \;.$$ Furthermore, note that the natural maps $\frac{\ker\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \to H^{\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1 \delta_2 \right)}\left(A^\bullet\right)$ and $\frac{\ker\delta_2 \cap \imm\delta_1}{\imm\delta_1\delta_2} \to H^{\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1 \delta_2 \right)}\left(A^\bullet\right)$ induced by the identity are injective, and hence $$ \dim_\mathbb{K} \frac{\ker\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \frac{\ker\delta_2 \cap \imm\delta_1}{\imm\delta_1\delta_2} \;<\; +\infty \;;$$ it follows also that $$ \dim_\mathbb{K} \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \;<\; +\infty \;. 
$$ Analogously, since the natural maps $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) \to \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1}$ and $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right) \to \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2}$ induced by the identity are surjective, then $$ \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} \;<\; +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} \;<\; +\infty \;,$$ and hence also $$ \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \;<\; +\infty \;. $$ By using the above exact sequences, it follows that \begin{eqnarray*} \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} &=& \dim_\mathbb{K} \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} - \dim_\mathbb{K} \frac{\ker\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1}{\imm\delta_1} + \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} \;, \\[5pt] \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} &=& \dim_\mathbb{K} \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} - \dim_\mathbb{K} \frac{\ker\delta_2 \cap \imm\delta_1}{\imm\delta_1\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_2}{\imm\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} \;, \\[5pt] \dim_\mathbb{K} \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} &=& \dim_\mathbb{K} \frac{\imm\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1}{\imm\delta_1} - \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_2 + \imm\delta_1} + \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \;, \\[5pt] \dim_\mathbb{K} \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} &=& \dim_\mathbb{K} 
\frac{\imm\delta_2 \cap \ker\delta_1}{\imm\delta_1\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_2}{\imm\delta_2} - \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \imm\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \;, \end{eqnarray*} from which, by summing up, one gets \begin{eqnarray} \lefteqn{ 2\, \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} + 2\, \dim_\mathbb{K} \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} } \nonumber\\[5pt] &=& 2\, \dim_\mathbb{K} \frac{\ker\delta_1}{\imm\delta_1} + 2\, \dim_\mathbb{K} \frac{\ker\delta_2}{\imm\delta_2} + 2\, \dim_\mathbb{K} \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} + 2\, \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \label{eq:BC-A-delta1-delta2}\\[5pt] &\geq& 2\, \dim_\mathbb{K} \frac{\ker\delta_1}{\imm\delta_1} + 2\, \dim_\mathbb{K} \frac{\ker\delta_2}{\imm\delta_2} \;, \nonumber \end{eqnarray} yielding $$ \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\imm\delta_1 + \imm\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1 \cap \ker\delta_2}{\imm\delta_1\delta_2} \; \geq \; \dim_\mathbb{K} \frac{\ker\delta_1}{\imm\delta_1} + \dim_\mathbb{K} \frac{\ker\delta_2}{\imm\delta_2} \;, $$ and hence the theorem. \end{proof} \begin{rem} Note that the proof of Theorem \ref{thm:disug-frol} works also for $\mathbb{Z}^2$-graded $\mathbb{K}$-vector spaces, since in this case J. Varouchas' exact sequences are in fact exact sequences of $\mathbb{Z}^2$-graded $\mathbb{K}$-vector spaces. 
More precisely, one gets that, given a $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space $A^{\bullet,\bullet}$ endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_{1,1}, \hat\delta_{1,2}}\left(A^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{\hat\delta_{2,1}, \hat\delta_{2,2}}\left(A^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$, and supposing that $$ \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet,\bullet}\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet,\bullet}\right) \;<\; +\infty \;,$$ then \begin{equation*} \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^{\bullet,\bullet}\right) + \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^{\bullet,\bullet}\right) \;\geq\; \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet,\bullet}\right) + \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet,\bullet}\right) \;. \end{equation*} \end{rem} As a consequence of Theorem \ref{thm:disug-frol} and Proposition \ref{prop:frolicher-classico}, one gets the following inequality {\itshape à la} Fr\"olicher for double complexes, namely, $\mathbb{Z}^2$-graded $\mathbb{K}$-vector spaces $B^{\bullet,\bullet}$ endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(B^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(B^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. 
\begin{cor}\label{cor:frolicher-like-double-complexes} Let $B^{\bullet,\bullet}$ be a bounded $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(B^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(B^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Suppose that $$ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(B^{\bullet,\bullet}\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \;.$$ Then, for $\pm\in\{+,-\}$, \begin{equation}\label{eq:disug-frol-double-complexes} \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\geq\; 2\, \dim_\mathbb{K} H^\bullet_{\left( \delta_1 \pm \delta_2 ; \delta_1 \pm \delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;. \end{equation} \end{cor} \section{A characterization of $\delta_1\delta_2$-Lemma by means of the inequality {\itshape à la} Fr\"olicher} With the aim to characterize the validity of the $\delta_1\delta_2$-Lemma in terms of the dimensions of the cohomologies $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^\bullet\right)$ and $H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^\bullet\right)$, we need the following lemmata. 
\begin{lemma}\label{lemma:BC-dr-surj} Let $B^{\bullet,\bullet}$ be a $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(B^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(B^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. If $$ \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \;=\; \{0\} \;, $$ then the natural map $\iota\colon \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) \to H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$ induced by the identity is surjective. \end{lemma} \begin{proof} Let $\mathfrak{a} :=: \left[x\right] \in H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$. Since $\delta_1(x)+\delta_2(x)=0$ and $\imm\delta_1 \cap \imm\delta_2 = \imm\delta_1\delta_2$, then we have $\delta_1(x) = -\delta_2(x) \in \imm\delta_1\cap\imm\delta_2 \;=\; \imm\delta_1\delta_2$; let $y \in \Tot^{\bullet-1} B^{\bullet,\bullet}$ be such that $\delta_1(x) = \delta_1\delta_2(y) \;=\; -\delta_2(x)$. Hence, consider $\mathfrak{a} = \left[x\right] = \left[x-\left(\delta_1+\delta_2\right)(y)\right] \in H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$, and note that $\mathfrak{a} = \iota\left(\left[x-\left(\delta_1+\delta_2\right)(y)\right]\right)$ where $\left[x-\left(\delta_1+\delta_2\right)(y)\right] \in \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right)$, since $\delta_1\left(x-\left(\delta_1+\delta_2\right)(y)\right)=0$ and $\delta_2\left(x-\left(\delta_1+\delta_2\right)(y)\right)=0$. 
\end{proof} \begin{lemma}\label{lemma:dR-A-inj} Let $B^{\bullet,\bullet}$ be a $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(B^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(B^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. If $$ \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \;=\; \{0\} \;, $$ then the natural map $\iota\colon H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \to \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right)$ induced by the identity is injective. \end{lemma} \begin{proof} Let $\mathfrak{a} :=: \left[x\right] \in H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$. Suppose that $\iota(\mathfrak{a}) = \left[0\right] \in \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right)$, that is, there exist $y \in \Tot^{\bullet-1}B^{\bullet,\bullet}$ and $z \in \Tot^{\bullet-1}B^{\bullet,\bullet}$ such that $x = \delta_1(y) + \delta_2(z)$. Since $\left(\delta_1+\delta_2\right)(x)=0$ and $\ker\delta_1\delta_2 = \ker\delta_1 + \ker\delta_2$, it follows that $\delta_1\delta_2\left(z-y\right) = 0$, that is, $z-y \in \ker\delta_1\delta_2 = \ker\delta_1 + \ker\delta_2$. Let $u \in \ker\delta_1$ and $v \in \ker\delta_2$ be such that $z-y = u+v$. Then, one has that $x = \delta_1(y) + \delta_2(z) = \delta_1(y) + \delta_2(y+u+v) = \left(\delta_1+\delta_2\right)(y+u) \in \imm\left(\delta_1+\delta_2\right)$, proving that $\mathfrak{a} = \left[0\right] \in H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$. 
\end{proof} We can now prove the following characterization of the $\delta_1\delta_2$-Lemma for double complexes in terms of the equality in \eqref{eq:disug-frol-double-complexes}. \begin{thm}\label{thm:caratt-deldelbar-lemma-double} Let $B^{\bullet,\bullet}$ be a bounded $\mathbb{Z}^2$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{1,0}\left(B^{\bullet,\bullet}\right)$ and $\delta_2 \in \End^{0,1}\left(B^{\bullet,\bullet}\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Suppose that $$ \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(B^{\bullet,\bullet}\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;<\; +\infty \;.$$ The following conditions are equivalent: \begin{enumerate} \item\label{item:caratt-bi-1} $B^{\bullet,\bullet}$ satisfies the $\delta_1\delta_2$-Lemma; \item\label{item:caratt-bi-2} the equality \begin{eqnarray*} \lefteqn{ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) } \\[5pt] &=& 2\, \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;. \end{eqnarray*} holds. 
\end{enumerate} \end{thm} \begin{proof} \paragrafoo{\eqref{item:caratt-bi-1} $\Rightarrow$ \eqref{item:caratt-bi-2}} By Lemma \ref{lemma:equiv}, it follows that $$ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\leq\; \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) $$ and $$ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\leq\; \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;.$$ By Corollary \ref{cor:frolicher-like-double-complexes}, it follows that $$ \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\geq\; 2\, \dim_\mathbb{K} H^\bullet_{\left( \delta_1 + \delta_2 ; \delta_1 + \delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;.$$ Hence actually the equality holds. 
\paragrafoo{\eqref{item:caratt-bi-2} $\Rightarrow$ \eqref{item:caratt-bi-1}} Since, by \eqref{eq:BC-A-delta1-delta2} and Proposition \ref{prop:frolicher-classico}, it holds \begin{eqnarray*} \lefteqn{\dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right)} \\[5pt] &=& \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(B^{\bullet,\bullet}\right) + \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \\[5pt] && + \dim_\mathbb{K} \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} + \dim_\mathbb{K} \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \\[5pt] &\geq& 2\, \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;, \end{eqnarray*} then, by the hypothesis, it follows that $$ \frac{\imm\delta_1 \cap \imm\delta_2}{\imm\delta_1\delta_2} \;=\; \left\{0\right\} \qquad \text{ and } \qquad \frac{\ker\delta_1\delta_2}{\ker\delta_1 + \ker\delta_2} \;=\; \left\{0\right\} \;.$$ By Lemma \ref{lemma:BC-dr-surj}, one gets that the natural map $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \to H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$ induced by the identity is surjective; by Lemma \ref{lemma:dR-A-inj}, one gets that the natural map $H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$ induced by the identity is injective. 
In particular, one has that $$ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\geq\; \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) $$ and that $$ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;\geq\; \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;.$$ Hence, by the hypothesis, it holds in fact that $$ \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;=\; \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \;=\; \dim_\mathbb{K} \Tot^{\bullet} H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right) \;.$$ Since $H^{\bullet,\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(B^{\bullet,\bullet}\right)$ and $H^{\bullet,\bullet}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(B^{\bullet,\bullet}\right)$ are finite-dimensional by hypothesis, it follows that the natural maps $H^\bullet_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \to H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$ and $H^\bullet_{\left( \delta_1+\delta_2 ; \delta_1+\delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right) \to H^\bullet_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(\Tot^\bullet B^{\bullet,\bullet}\right)$ induced by the identity are in fact isomorphisms. By Lemma \ref{lemma:equiv}, one gets the theorem. 
\end{proof} \medskip In order to apply Theorem \ref{thm:caratt-deldelbar-lemma-double} to $\mathbb{Z}$-graded $\mathbb{K}$-vector spaces to get geometric applications, e.g., for compact symplectic manifolds, we need to record the following corollaries. \begin{cor}\label{cor:charact-hodge-frolicher-double-1} Let $A^\bullet$ be a bounded $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Denote the greatest common divisor of $\hat\delta_1$ and $\hat\delta_2$ by $\GCD{\hat\delta_1}{\hat\delta_2}$. Suppose that $$ \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet}\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^{\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet}\right) \;<\; +\infty \;.$$ The following conditions are equivalent: \begin{enumerate} \item $A^{\GCD{\hat\delta_1}{\hat\delta_2}\,\bullet}$ satisfies the $\delta_1\delta_2$-Lemma; \item the equality \begin{eqnarray*} \lefteqn{\sum_{p+q=\bullet} \left( \dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^{\bullet}\right) + \dim_\mathbb{K} H^{\hat\delta_1 \, p + \hat\delta_2 \, q}_{\left( \delta_1\delta_2 ; \delta_1 , \delta_2 \right)}\left(A^{\bullet}\right) \right)} \\[5pt] &=& 2\, \dim_\mathbb{K} H^\bullet_{\left( \delta_1\otimes_\mathbb{K}\id + \delta_2\otimes_\mathbb{K}\beta ; \delta_1\otimes_\mathbb{K}\id + \delta_2\otimes_\mathbb{K}\beta \right)}\left(\Tot^\bullet\Doub^{\bullet,\bullet}A^\bullet\right) \;. \end{eqnarray*} holds. \end{enumerate} \end{cor} \begin{proof} The Corollary follows from Lemma \ref{lemma:deldelbar-lemma-a-doub}, Theorem \ref{thm:caratt-deldelbar-lemma-double}, and Lemma \ref{lemma:cohom-a-doub}. 
\end{proof} \begin{cor}\label{cor:charact-hodge-frolicher-double} Let $A^\bullet$ be a bounded $\mathbb{Z}$-graded $\mathbb{K}$-vector space endowed with two endomorphisms $\delta_1 \in \End^{\hat\delta_1}\left(A^\bullet\right)$ and $\delta_2 \in \End^{\hat\delta_2}\left(A^\bullet\right)$ such that $\delta_1^2 = \delta_2^2 = \delta_1\delta_2+\delta_2\delta_1 = 0$. Suppose that the greatest common divisor of $\hat\delta_1$ and $\hat\delta_2$ is $\GCD{\hat\delta_1}{\hat\delta_2}=1$, and that $$ \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet}\right) < +\infty \qquad \text{ and } \qquad \dim_\mathbb{K} H^{\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet}\right) \;<\; +\infty \;.$$ The following conditions are equivalent: \begin{enumerate} \item $A^{\bullet}$ satisfies the $\delta_1\delta_2$-Lemma; \item \begin{enumerate}[\itshape (a)] \item both the Hodge and Fr\"olicher spectral sequences of $\left(\Doub^{\bullet,\bullet}A^{\bullet} ,\, \delta_1\otimes_\mathbb{K}\id ,\, \delta_2\otimes_\mathbb{K}\beta\right)$ degenerate at the first level, equivalently, the equalities \begin{eqnarray*} \lefteqn{\dim_\mathbb{K} H^\bullet_{\left( \delta_1\otimes_\mathbb{K}\id + \delta_2\otimes_\mathbb{K}\beta ; \delta_1\otimes_\mathbb{K}\id + \delta_2\otimes_\mathbb{K}\beta \right)} \left(\Tot^\bullet\Doub^{\bullet,\bullet}A^\bullet\right)} \\[5pt] &=& \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_2\otimes_\mathbb{K}\beta ; \delta_2\otimes_\mathbb{K}\beta \right)}\left(\Doub^{\bullet,\bullet}A^{\bullet}\right) \\[5pt] &=& \dim_\mathbb{K} \Tot^\bullet H^{\bullet,\bullet}_{\left( \delta_1\otimes_\mathbb{K}\id ; \delta_1\otimes_\mathbb{K}\id \right)}\left(\Doub^{\bullet,\bullet}A^{\bullet}\right) \end{eqnarray*} hold; \item the equality \begin{eqnarray*} \lefteqn{\dim_\mathbb{K} H^{\bullet}_{\left( \delta_1 , \delta_2 ; \delta_1\delta_2 \right)}\left(A^{\bullet}\right) + \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1\delta_2 ; 
\delta_1 , \delta_2 \right)}\left(A^{\bullet}\right)} \\[5pt] &=& \dim_\mathbb{K} H^{\bullet}_{\left( \delta_1 ; \delta_1 \right)}\left(A^{\bullet}\right) + \dim_\mathbb{K} H^{\bullet}_{\left( \delta_2 ; \delta_2 \right)}\left(A^{\bullet}\right) \end{eqnarray*} holds. \end{enumerate} \end{enumerate} \end{cor} \begin{proof} The Corollary follows from Corollary \ref{cor:charact-hodge-frolicher-double-1}, Proposition \ref{prop:frolicher-classico}, Theorem \ref{thm:disug-frol}, and Lemma \ref{lemma:cohom-a-doub}. \end{proof} \section{Applications} In this section, we prove or recover applications of the inequality {\itshape à la} Fr\"olicher, Theorem \ref{thm:disug-frol} and Theorem \ref{thm:caratt-deldelbar-lemma-double}, to the complex, symplectic, and generalized complex cases. \subsection{Complex structures} Let $X$ be a compact complex manifold. Consider the $\mathbb{Z}^2$-graded $\mathbb{C}$-vector space $\wedge^{\bullet,\bullet}X$ of bi-graded complex differential forms endowed with the endomorphisms $\partial\in\End^{1,0}\left(\wedge^{\bullet,\bullet}X\right)$ and $\overline{\del}\in\End^{0,1}\left(\wedge^{\bullet,\bullet}X\right)$, which satisfy $\partial^2=\overline{\del}^2=\partial\overline{\del}+\overline{\del}\partial=0$. 
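As an illustration (this local description is classical, and it is recalled here only for the reader's convenience), in local holomorphic coordinates $\left(z^1, \ldots, z^n\right)$, writing $\de z^I := \de z^{i_1}\wedge\cdots\wedge\de z^{i_p}$ and $\de \bar z^J := \de \bar z^{j_1}\wedge\cdots\wedge\de \bar z^{j_q}$, one has, for any smooth function $f$,
$$ \partial\left( f \, \de z^I \wedge \de\bar z^J \right) \;=\; \sum_{k=1}^{n} \frac{\partial f}{\partial z^k}\, \de z^k \wedge \de z^I \wedge \de\bar z^J \qquad \text{ and } \qquad \overline{\del}\left( f \, \de z^I \wedge \de\bar z^J \right) \;=\; \sum_{k=1}^{n} \frac{\partial f}{\partial \bar z^k}\, \de\bar z^k \wedge \de z^I \wedge \de\bar z^J \;, $$
so that $\de = \partial + \overline{\del}$; the relations $\partial^2=\overline{\del}^2=\partial\overline{\del}+\overline{\del}\partial=0$ then follow from $\de^2=0$ by comparing bi-degrees.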
As usual, define the Dolbeault cohomologies as $$ H^{\bullet,\bullet}_{\partial}(X) \;:=\; H^{\bullet,\bullet}_{\left( \partial ; \partial \right)}\left(\wedge^{\bullet,\bullet}X\right) \;, \qquad H^{\bullet,\bullet}_{\overline{\del}}(X) \;:=\; H^{\bullet,\bullet}_{\left( \overline{\del} ; \overline{\del} \right)}\left(\wedge^{\bullet,\bullet}X\right) \;, $$ and the \emph{Bott-Chern cohomology} and the \emph{Aeppli cohomology} as, respectively, \cite{bott-chern, aeppli}, $$ H^{\bullet,\bullet}_{BC}(X) \;:=\; H^{\bullet,\bullet}_{\left( \partial , \overline{\del} ; \partial\overline{\del} \right)}\left(\wedge^{\bullet,\bullet}X\right) \;, \qquad H^{\bullet,\bullet}_{A}(X) \;:=\; H^{\bullet,\bullet}_{\left( \partial\overline{\del} ; \partial , \overline{\del} \right)}\left(\wedge^{\bullet,\bullet}X\right) \;. $$ Note that, since $X$ is a compact manifold, $\dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{\overline{\del}}(X)<+\infty$: indeed, for any Hermitian metric $g$ with $\mathbb{C}$-linear Hodge-$*$-operator $*_g\colon \wedge^{\bullet_1,\bullet_2}X\to \wedge^{\dim_\mathbb{C} X - \bullet_2, \dim_\mathbb{C} X - \bullet_1}X$, one has an isomorphism $\ker \left[\overline{\del},\, \overline{\del}^*\right] \stackrel{\simeq}{\to} H^{\bullet,\bullet}_{\overline{\del}}(X)$, where $\overline{\del}^*$ is the adjoint operator of $\overline{\del}$ with respect to the inner product induced on $\wedge^{\bullet,\bullet}X$ by $g$, and the $2^{\text{nd}}$-order self-adjoint differential operator $\left[\overline{\del},\, \overline{\del}^*\right]$ is elliptic. Furthermore, $\dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{\partial}(X)=\dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{\overline{\del}}(X)<+\infty$, since conjugation induces the ($\mathbb{C}$-anti-linear) isomorphism $H^{\bullet_1,\bullet_2}_{\partial}(X) \simeq H^{\bullet_2,\bullet_1}_{\overline{\del}}(X)$ of $\mathbb{R}$-vector spaces. 
Note also that $\dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{BC}(X) = \dim_\mathbb{C} \Tot^{2\,\dim_\mathbb{C} X - \bullet} H^{\bullet,\bullet}_{A}(X) < +\infty$, \cite[Corollaire 2.3, \S2.c]{schweitzer}: indeed, for any Hermitian metric $g$ on $X$, the $\mathbb{C}$-linear Hodge-$*$-operator $*_g\colon \wedge^{\bullet_1,\bullet_2}X\to \wedge^{\dim_\mathbb{C} X - \bullet_2, \dim_\mathbb{C} X - \bullet_1}X$ induces the isomorphism $*_g\colon H^{\bullet_1,\bullet_2}_{BC}(X) \stackrel{\simeq}{\to} H^{\dim_\mathbb{C} X - \bullet_2, \dim_\mathbb{C} X - \bullet_1}_{A}(X)$, \cite[\S2.c]{schweitzer}, and $\ker\tilde\Delta_{BC} \stackrel{\simeq}{\to} H^{\bullet,\bullet}_{BC}(X)$, \cite[Théorème 2.2]{schweitzer}, where $\tilde\Delta_{BC}:=\left(\partial\overline{\del}\right)\left(\partial\overline{\del}\right)^*+\left(\partial\overline{\del}\right)^*\left(\partial\overline{\del}\right)+\left(\overline{\del}^*\partial\right)\left(\overline{\del}^*\partial\right)^*+\left(\overline{\del}^*\partial\right)^*\left(\overline{\del}^*\partial\right)+\overline{\del}^*\overline{\del}+\partial^*\partial$ is a $4^{\text{th}}$-order self-adjoint elliptic differential operator, \cite[Proposition 5]{kodaira-spencer-3}, see also \cite[\S2.b]{schweitzer}. By abuse of notation, one says that $X$ \emph{satisfies the $\partial\overline{\del}$-Lemma} if the double complex $\left(\wedge^{\bullet,\bullet}X,\, \partial,\, \overline{\del}\right)$ satisfies the $\partial\overline{\del}$-Lemma, and one says that $X$ \emph{satisfies the $\de\de^{c}$-Lemma} if the $\mathbb{Z}$-graded $\mathbb{C}$-vector space $\wedge^\bullet X \otimes_\mathbb{R}\mathbb{C}$ endowed with the endomorphisms $\de\in\End^1\left(\wedge^\bullet X\otimes_\mathbb{R}\mathbb{C}\right)$ and $\de^c:=-\im\left(\partial-\overline{\del}\right)\in\End^1\left(\wedge^\bullet X\otimes_\mathbb{R}\mathbb{C}\right)$ such that $\de^2=\left(\de^c\right)^2=\left[\de,\, \de^c\right]=0$ satisfies the $\de\de^c$-Lemma. 
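For the reader's convenience, one can verify directly, using only the relations $\partial^2=\overline{\del}^2=\partial\overline{\del}+\overline{\del}\partial=0$ recalled above, that $\de$ and $\de^c$ satisfy the stated relations:

```latex
% Direct check of (d^c)^2 = 0 and [d, d^c] = 0, starting from
% d = \partial + \bar\partial and d^c = -i (\partial - \bar\partial).
\begin{eqnarray*}
\left(\de^c\right)^2 &=& - \left(\partial - \overline{\del}\right)^2
\;=\; - \left(\partial^2 - \partial\overline{\del} - \overline{\del}\partial + \overline{\del}^2\right) \;=\; 0 \;, \\[5pt]
\left[\de,\, \de^c\right] &=& -\im\, \left( \left(\partial+\overline{\del}\right)\left(\partial-\overline{\del}\right) + \left(\partial-\overline{\del}\right)\left(\partial+\overline{\del}\right) \right)
\;=\; -2\,\im\, \left(\partial^2 - \overline{\del}^2\right) \;=\; 0 \;.
\end{eqnarray*}
```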
Actually, it turns out that $X$ satisfies the $\de\de^{c}$-Lemma if and only if $X$ satisfies the $\partial\overline{\del}$-Lemma, \cite[Remark 5.14]{deligne-griffiths-morgan-sullivan}: indeed, note that $\partial = \frac{1}{2} \, \left(\de+\im\de^c\right)$ and $\overline{\del} = \frac{1}{2}\, \left(\de-\im\de^c\right)$, and $\partial\overline{\del} = -\frac{\im}{2}\, \de\de^c$. From Corollary \ref{cor:frolicher-like-double-complexes} and Theorem \ref{thm:caratt-deldelbar-lemma-double}, one gets straightforwardly the following inequality {\itshape à la} Fr\"olicher for the Bott-Chern cohomology of a compact complex manifold and the corresponding characterization of the $\partial\overline{\del}$-Lemma by means of the Bott-Chern cohomology, first proved by the authors in \cite{angella-tomassini-3}. \begin{cor}[{\cite[Theorem A, Theorem B]{angella-tomassini-3}}]\label{cor:cplx} Let $X$ be a compact complex manifold. The inequality \begin{equation}\label{eq:cplx} \dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{BC}(X) + \dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{A}(X) \;\geq\; 2\, \dim_\mathbb{C} H^\bullet_{dR}(X;\mathbb{C}) \end{equation} holds. Furthermore, the equality in \eqref{eq:cplx} holds if and only if $X$ satisfies the $\partial\overline{\del}$-Lemma. \end{cor} \subsection{Symplectic structures} Let $X$ be a $2n$-dimensional compact manifold endowed with a \emph{symplectic structure} $\omega$, namely, a non-degenerate $\de$-closed $2$-form on $X$. The symplectic form $\omega$ induces a natural isomorphism $I \colon TX \stackrel{\simeq}{\to} T^*X$; more precisely, $I(\cdot)(\cdot\cdot) := \omega(\cdot,\cdot\cdot)$.
Set $\Pi :=: \omega^{-1} := \omega\left(I^{-1}\cdot,I^{-1}\cdot\cdot\right) \in \wedge^2 TX$, the \emph{canonical Poisson bi-vector} associated to $\omega$; namely, in a Darboux chart with local coordinates $\left\{x^1,\ldots,x^n,y^1,\ldots,y^n\right\}$ such that $\omega\stackrel{\text{loc}}{=}\sum_{j=1}^{n}\de x^j\wedge \de y^j$, one has $\omega^{-1}\stackrel{\text{loc}}{=}\sum_{j=1}^{n}\frac{\partial}{\partial x^j}\wedge\frac{\partial}{\partial y^j}$. One gets a bi-$\mathbb{R}$-linear form on $\wedge^k X$, denoted by $\left(\omega^{-1}\right)^k$, by defining it on the simple elements $\alpha^1\wedge\cdots \wedge \alpha^k \in \wedge^kX$ and $\beta^1\wedge\cdots \wedge \beta^k\in\wedge^kX$ as $$ \left(\omega^{-1}\right)^k\left(\alpha^1\wedge\cdots \wedge \alpha^k,\, \beta^1\wedge\cdots \wedge \beta^k\right) \;:=\; \det\left(\omega^{-1}\left(\alpha^\ell,\, \beta^m\right)\right)_{\ell,m\in\{1,\ldots,k\}} \;; $$ note that $\left(\omega^{-1}\right)^k$ is skew-symmetric, respectively symmetric, according to whether $k$ is odd, respectively even. We recall that the operators \begin{eqnarray*} L \;\in\; \End^2\left(\wedge^\bullet X\right) \;, \quad && L(\alpha) \;:=\; \omega\wedge\alpha \;,\\[5pt] \Lambda \;\in\; \End^{-2}\left(\wedge^\bullet X\right) \;, \quad && \Lambda(\alpha) \;:=\; -\iota_\Pi\alpha \;,\\[5pt] H \;\in\; \End^0\left(\wedge^\bullet X\right) \;, \quad && H(\alpha) \;:=\; \sum_{k\in\mathbb{Z}} \left(n-k\right)\,\pi_{\wedge^kX}\alpha \;, \end{eqnarray*} yield an $\mathfrak{sl}(2;\mathbb{R})$-representation on $\wedge^\bullet X$ (where $\iota_{\xi}\colon\wedge^{\bullet}X\to\wedge^{\bullet-2}X$ denotes the interior product with $\xi\in\wedge^2\left(TX\right)$, and $\pi_{\wedge^kX}\colon\wedge^\bullet X\to\wedge^kX$ denotes the natural projection onto $\wedge^kX$, for $k\in\mathbb{Z}$).
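We note, for later use, that the $\mathfrak{sl}(2;\mathbb{R})$-representation above is encoded by the standard commutation relations, which can be checked degree by degree on $\wedge^\bullet X$:

```latex
% Commutation relations of the sl(2;R)-triple (L, Lambda, H);
% e.g., [H, L] = -2 L follows since L raises the degree of forms by 2.
$$ \left[\Lambda,\, L\right] \;=\; H \;, \qquad \left[H,\, L\right] \;=\; -2\,L \;, \qquad \left[H,\, \Lambda\right] \;=\; 2\,\Lambda \;. $$
```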
Define the \emph{symplectic co-differential operator} as $$ \de^\Lambda \;:=\; \left[\de,\, \Lambda\right] \;\in\; \End^{-1}\left(\wedge^\bullet X \right) \;; $$ one has that $\left(\de^\Lambda\right)^2 = \left[\de,\, \de^\Lambda\right] = 0$, see \cite[page 266, page 265]{koszul}, \cite[Proposition 1.2.3, Theorem 1.3.1]{brylinski}. \medskip As a matter of notation, for $\sharp \in \left\{ \left( \de, \de^\Lambda ; \de\de^\Lambda \right) ,\, \left( \de ; \de \right) ,\, \left( \de^\Lambda ; \de^\Lambda \right) ,\, \left( \de\de^\Lambda ; \de , \de^\Lambda \right) \right\}$, we shorten $H^\bullet_{\sharp}(X):=H^\bullet_{\sharp}\left(\wedge^\bullet X\right)$. Note that $H^\bullet_{\left( \de ; \de \right)}(X)=H^\bullet_{dR}(X;\mathbb{R})$. As regards the notation introduced by L.-S. Tseng and S.-T. Yau in \cite[\S3]{tseng-yau-1}, note that $H^\bullet_{\left( \de^\Lambda ; \de^\Lambda\right)}(X) = H^\bullet_{\de^\Lambda}(X)$, and that $H^\bullet_{\left( \de , \de^\Lambda ; \de\de^\Lambda\right)}(X) = H^\bullet_{\de+\de^\Lambda}(X)$, and that $H^\bullet_{\left( \de\de^\Lambda ; \de , \de^\Lambda\right)}(X) = H^\bullet_{\de\de^\Lambda}(X)$. Note also that, as a consequence of the Hodge theory developed by L.-S. Tseng and S.-T. Yau in \cite[Proposition 3.3, Theorem 3.5, Theorem 3.16]{tseng-yau-1}, one has that, \cite[Corollary 3.6, Corollary 3.17]{tseng-yau-1}, $X$ being compact, for $\sharp \in \left\{ \left( \de, \de^\Lambda ; \de\de^\Lambda \right) ,\, \left( \de ; \de \right) ,\, \left( \de^\Lambda ; \de^\Lambda \right) ,\, \left( \de\de^\Lambda ; \de , \de^\Lambda \right) \right\}$, $$ \dim_\mathbb{R} H^\bullet_{\sharp}(X) \;<\; +\infty \;.$$ \medskip With the aim of developing a symplectic counterpart of Riemannian Hodge theory for compact symplectic manifolds, J.-L.
Brylinski defined the \emph{symplectic-$\star$-operator}, \cite[\S2]{brylinski}, $$ \star_\omega\colon \wedge^\bullet X\to \wedge^{2n-\bullet}X $$ requiring that, for every $\alpha,\,\beta\in\wedge^k X$, $$ \alpha\wedge\star_\omega \beta \;=\; \left(\omega^{-1}\right)^k\left(\alpha,\beta\right)\,\omega^n \;.$$ Since $\de^\Lambda\lfloor_{\wedge^kX}=(-1)^{k+1}\,\star_\omega\,\de\,\star_\omega$, \cite[Theorem 2.2.1]{brylinski}, and $\star_\omega^2=\id$, \cite[Lemma 2.1.2]{brylinski}, then one gets that $\star_\omega$ induces the isomorphism $$ \star_\omega \colon H^\bullet_{\left( \de ; \de \right)}(X) \stackrel{\simeq}{\to} H^{2n-\bullet}_{\left( \de^\Lambda ; \de^\Lambda \right)}(X) \;.$$ In particular, by the Poincaré duality, it follows that $$ \dim_\mathbb{R} H^\bullet_{\left( \de ; \de \right)}(X) \;=\; \dim_\mathbb{R} H^\bullet_{\left( \de^\Lambda ; \de^\Lambda \right)}(X) \;<\; +\infty \;.$$ Furthermore, by choosing an almost-complex structure $J$ compatible with $\omega$ (namely, such that $\omega(\cdot, \, J\cdot)$ is positive definite and $\omega(J\cdot,\, J\cdot\cdot)=\omega$), and by considering the $J$-Hermitian metric $g:=\omega(\cdot,\, J\cdot\cdot)$, one gets that, \cite[Corollary 3.25]{tseng-yau-1}, the Hodge-$*$-operator $*_g\colon \wedge^\bullet X \to \wedge^{2n-\bullet}X$ associated to $g$ induces the isomorphism, \cite[Corollary 2.2.2]{brylinski}, $$ *_g \colon H^\bullet_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}(X) \stackrel{\simeq}{\to} H^{2n-\bullet}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}(X) \;.$$ In particular, it follows that $$ \dim_\mathbb{R} H^\bullet_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}(X) \;=\; \dim_\mathbb{R} H^{2n-\bullet}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}(X) \;<\; +\infty \;.$$ \medskip Recall that one says that the \emph{Hard Lefschetz Condition} holds on $X$ if \begin{equation}\label{eq:hlc} \tag{HLC} \text{for every } k\in\mathbb{N}\;, \qquad L^k\colon 
H^{n-k}_{dR}(X;\mathbb{R}) \stackrel{\simeq}{\to} H^{n+k}_{dR}(X;\mathbb{R}) \;. \end{equation} \medskip As in \cite{angella-tomassini-4}, and mimicking \cite{li-zhang} in the almost-complex case, define, for $r,s\in\mathbb{N}$, $$ H^{(r,s)}_\omega(X;\mathbb{R}) \;:=\; \left\{\left[L^r\,\gamma^{(s)}\right]\in H^{2r+s}_{dR}(X;\mathbb{R}) \;:\; \Lambda \gamma^{(s)} = 0 \right\} \;\subseteq\; H^{2r+s}_{dR}(X;\mathbb{R}) \;; $$ one has that $$ \sum_{2r+s=\bullet} H^{(r,s)}_\omega(X;\mathbb{R}) \;\subseteq\; H^{\bullet}_{dR}(X;\mathbb{R}) \;,$$ but, in general, the sum need not be direct, nor the inclusion an equality. As proved by Y. Lin in \cite[Proposition A.5]{lin}, if the Hard Lefschetz Condition holds on $X$, then $$ H^{(0,\bullet)}_\omega(X;\mathbb{R}) \;=\; \mathrm{P}H^\bullet_{\de}(X;\mathbb{R}) \;, $$ where $$ \mathrm{P}H^\bullet_{\de}(X;\mathbb{R}) \;:=\; \frac{\ker\de\cap\ker\de^\Lambda\cap\ker\Lambda}{\imm\de\lfloor_{\ker\de^\Lambda\cap\ker\Lambda}} $$ is the \emph{primitive cohomology} introduced by L.-S. Tseng and S.-T. Yau in \cite[\S4.1]{tseng-yau-1}. \medskip We recall the following result. \begin{thm}[{\cite[Corollary 2]{mathieu}, \cite[Theorem 0.1]{yan}, \cite[Proposition 1.4]{merkulov}, \cite{guillemin}, \cite[Proposition 3.13]{tseng-yau-1}, \cite[Theorem 5.4]{cavalcanti}, \cite[Remark 2.3]{angella-tomassini-4}}] Let $X$ be a compact manifold endowed with a symplectic structure $\omega$.
The following conditions are equivalent: \begin{enumerate} \item every de Rham cohomology class of $X$ admits a representative being both $\de$-closed and $\de^\Lambda$-closed, namely, Brylinski's conjecture \cite[Conjecture 2.2.7]{brylinski} holds on $X$; \item the Hard Lefschetz Condition holds on $X$; \item the natural map $H_{\left(\de , \de^\Lambda ; \de\de^\Lambda\right)}^\bullet\left(\wedge^\bullet X\right) \to H_{dR}^{\bullet}(X;\mathbb{R})$ induced by the identity is surjective; \item the natural map $H_{\left(\de , \de^\Lambda ; \de\de^\Lambda\right)}^\bullet\left(\wedge^\bullet X\right) \to H_{dR}^{\bullet}(X;\mathbb{R})$ induced by the identity is an isomorphism; \item the bounded $\mathbb{Z}$-graded $\mathbb{R}$-vector space $\wedge^\bullet X$ endowed with the endomorphisms $\de\in\End^1\left(\wedge^\bullet X\right)$ and $\de^\Lambda \in \End^{-1}\left(\wedge^\bullet X \right)$ satisfies the $\de\de^\Lambda$-Lemma; \item the decomposition $$ H^\bullet_{dR}(X;\mathbb{R}) \;=\; \bigoplus_{r\in\mathbb{N}} L^r\, H^{(0,\bullet-2r)}_\omega(X;\mathbb{R}) \;, $$ holds. \end{enumerate} \end{thm} \medskip In order to apply Corollary \ref{cor:charact-hodge-frolicher-double} to the $\mathbb{Z}$-graded $\mathbb{R}$-vector space $\wedge^\bullet X$ endowed with the endomorphisms $\de \in \End^1\left(\wedge^\bullet X\right)$ and $\de^\Lambda \in \End^{-1}\left(\wedge^\bullet X \right)$, satisfying $\de^2 = \left(\de^\Lambda\right)^2 = \left[\de,\, \de^\Lambda\right] = 0$, we need the following result. \begin{lemma}[{\cite[Theorem 2.3.1]{brylinski}, \cite[Theorem 2.5]{fernandez-ibanez-deleon-IsrJMath}; see also \cite[Theorem 2.9]{fernandez-ibanez-deleon-IsrJMath}, \cite[Theorem 5.2]{cavalcanti-jgp}}] Let $X$ be a compact manifold endowed with a symplectic structure $\omega$. 
Consider the $\mathbb{Z}$-graded $\mathbb{R}$-vector space $\wedge^{\bullet}X$ endowed with the endomorphisms $\de \in \End^1\left(\wedge^\bullet X\right)$ and $\de^\Lambda \in \End^{-1}\left(\wedge^\bullet X\right)$. Both the spectral sequences associated to the canonical double complex $\left(\Doub^{\bullet,\bullet}\wedge^{\bullet}X ,\, \de\otimes_\mathbb{R}\id ,\, \de^\Lambda\otimes_\mathbb{R}\beta\right)$ degenerate at the first level. \end{lemma} Hence, by applying Theorem \ref{thm:disug-frol} and Corollary \ref{cor:charact-hodge-frolicher-double} to the $\mathbb{Z}$-graded $\mathbb{R}$-vector space $\wedge^\bullet X$ endowed with $\de\in\End^{1}\left(\wedge^\bullet X\right)$ and $\de^\Lambda\in\End^{-1}\left(\wedge^\bullet X\right)$, we get the following result. \begin{thm}\label{thm:sympl} Let $X$ be a compact manifold endowed with a symplectic structure $\omega$. The inequality \begin{equation}\label{eq:sympl} \dim_\mathbb{R} H^{\bullet}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{\bullet}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) \;\geq\; 2\, \dim_\mathbb{R} H^{\bullet}_{dR}(X;\mathbb{R}) \end{equation} holds. Furthermore, the equality in \eqref{eq:sympl} holds if and only if $X$ satisfies the Hard Lefschetz Condition. \end{thm} \medskip Consider $X = \left.
\Gamma \middle\backslash G \right.$ a solvmanifold endowed with a $G$-left-invariant symplectic structure $\omega$; in particular, $\omega$ induces a linear symplectic structure on $\mathfrak{g}$; therefore the endomorphisms $\de \in \End^1\left(\wedge^\bullet X\right)$ and $\de^\Lambda \in \End^{-1}\left(\wedge^\bullet X \right)$ yield endomorphisms $\de \in \End^1\left(\wedge^\bullet \duale{\mathfrak{g}}\right)$ and $\de^\Lambda \in \End^{-1}\left(\wedge^\bullet \duale{\mathfrak{g}} \right)$ on the $\mathbb{Z}$-graded $\mathbb{R}$-vector sub-space $\wedge^\bullet\duale{\mathfrak{g}} \hookrightarrow \wedge^\bullet X$, where we identify objects on $\mathfrak{g}$ with $G$-left-invariant objects on $X$ by means of left-translations. For $\sharp \in \left\{ \left( \de, \de^\Lambda ; \de\de^\Lambda \right) ,\, \left( \de ; \de \right) ,\, \left( \de^\Lambda ; \de^\Lambda \right) ,\, \left( \de\de^\Lambda ; \de , \de^\Lambda \right) \right\}$, one has the natural map $\iota\colon H^\bullet_{\sharp}\left(\wedge^\bullet\mathfrak{g}^*\right) \to H^\bullet_{\sharp}\left(X\right)$. We recall the following result, which allows one to compute the cohomologies of a completely-solvable solvmanifold by using just left-invariant forms; recall, e.g., that, by A. Hattori's theorem \cite[Corollary 4.2]{hattori}, if $G$ is \emph{completely-solvable} (that is, for any $g\in G$, all the eigenvalues of $\mathrm{Ad}_g:=\de\left(\psi_g\right)_e\in\mathrm{Aut}(\mathfrak{g})$ are real, equivalently, if, for any $X\in\mathfrak{g}$, all the eigenvalues of $\mathrm{ad}_X:=\left[X,\cdot\right]\in\End(\mathfrak{g})$ are real, where $\psi\colon G \ni g \mapsto \left( \psi_g \colon h \mapsto g\,h\,g^{-1} \right) \in \mathrm{Aut}(G)$ and $e$ is the identity element of $G$), then the natural map $H^\bullet_{dR}\left(\wedge^\bullet\duale{\mathfrak{g}}\right) \to H^\bullet_{dR}\left(X;\mathbb{R}\right)$ is an isomorphism.
\begin{thm}[{\cite[Theorem 3, Remark 4]{macri}, see also \cite{angella-kasuya}}] Let $X = \left. \Gamma \middle\backslash G \right.$ be a completely-solvable solvmanifold endowed with a $G$-left-invariant symplectic structure $\omega$. Then, for $\sharp \in \left\{ \left( \de, \de^\Lambda ; \de\de^\Lambda \right) ,\, \left( \de ; \de \right) ,\, \left( \de^\Lambda ; \de^\Lambda \right) ,\, \left( \de\de^\Lambda ; \de , \de^\Lambda \right) \right\}$, the natural map $$ \iota\colon H^\bullet_{\sharp}\left(\wedge^\bullet\mathfrak{g}^*\right) \to H^\bullet_{\sharp}\left(X\right) $$ is an isomorphism. \end{thm} \begin{ex} Let $\mathbb{I}_3:=\left. \mathbb{Z}\left[\im\right]^3 \middle\backslash \left(\mathbb{C}^3,\,*\right) \right.$ be the {\em Iwasawa manifold}, where the group structure $*$ on $\mathbb{C}^3$ is defined by $$ \left(z_1,\,z_2,\,z_3\right) * \left(w_1,\,w_2,\,w_3\right) := \left(z_1+w_1 ,\, z_2+w_2 ,\, z_3+z_1w_2+w_3\right) \;.$$ There exists a $\left(\mathbb{C}^3,\,*\right)$-left-invariant co-frame $\left\{e^j\right\}_{j\in\{1,\ldots,6\}}$ of $T^*X$ such that $$ \de e^1 \;=\; \de e^2 \;=\; \de e^3 \;=\; \de e^4 \;=\; 0 \;, \qquad \de e^5 \;=\; -e^{13}+e^{24} \;, \qquad \de e^6 \;=\; -e^{14}-e^{23} $$ (in order to simplify notation, we shorten, e.g., $e^{12} := e^1\wedge e^2$). Consider the $\left(\mathbb{C}^3,\,*\right)$-left-invariant almost-K\"ahler structure $\left(J,\, \omega,\, g\right)$ on $\mathbb{I}_3$ defined by $$ Je^1 \;:=\; -e^6\;,\quad Je^2 \;:=\; -e^5\;,\quad Je^3 \;:=\; -e^4 \;, \qquad \omega \;:=\; e^{16}+e^{25}+e^{34}\;, \qquad g \;:=\; \omega\left(\cdot,\, J\cdot\cdot\right) \;; $$ it has been studied in \cite[\S4]{angella-tomassini-zhang} as an example of an almost-K\"ahler structure non-inducing a decomposition in cohomology according to the almost-complex structure, \cite[Proposition 4.1]{angella-tomassini-zhang}. 
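As a consistency check, the structure equations above show that $\omega$ is in fact $\de$-closed:

```latex
% d(omega) = d(e^{16} + e^{25} + e^{34}) = 0, since e^1, ..., e^4 are d-closed:
$$ \de\omega \;=\; - e^1 \wedge \de e^6 - e^2 \wedge \de e^5 - e^3 \wedge \de e^4 \;=\; e^1 \wedge \left( e^{14} + e^{23} \right) + e^2 \wedge \left( e^{13} - e^{24} \right) \;=\; e^{123} - e^{123} \;=\; 0 \;. $$
```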
The symplectic cohomologies of the Iwasawa manifold $\mathbb{I}_3$ endowed with the $\left(\mathbb{C}^3,\,*\right)$-left-invariant symplectic structure $\omega$ can be computed using just $\left(\mathbb{C}^3,\,*\right)$-left-invariant forms, and their real dimensions are summarized in Table \ref{table:iwasawa-alm-kahler}. \begin{center} \begin{table}[h] \centering \begin{tabular}{>{$\mathbf\bgroup}c<{\mathbf\egroup$} || >{$}c<{$} | >{$}c<{$} | >{$}c<{$} | >{$}c<{$}} \toprule \dim_\mathbb{R} H_{\sharp}^{\bullet}\left(\mathbb{I}_3\right) & \left( \de ; \de \right) & \left( \de^\Lambda ; \de^\Lambda \right) & \left( \de , \de^\Lambda ; \de\de^\Lambda \right) & \left( \de\de^\Lambda ; \de , \de^\Lambda \right) \\ \toprule 0 & 1 & 1 & 1 & 1 \\ \midrule[0.02em] 1 & 4 & 4 & 4 & 4 \\ \midrule[0.02em] 2 & 8 & 8 & 9 & 10 \\ \midrule[0.02em] 3 & 10 & 10 & 11 & 11 \\ \midrule[0.02em] 4 & 8 & 8 & 10 & 9 \\ \midrule[0.02em] 5 & 4 & 4 & 4 & 4 \\ \midrule[0.02em] 6 & 1 & 1 & 1 & 1 \\ \bottomrule \end{tabular} \caption{The symplectic cohomologies of the Iwasawa manifold $\mathbb{I}_3:=\left.
\mathbb{Z}\left[\im\right]^3 \middle\backslash \left(\mathbb{C}^3,\,*\right) \right.$ endowed with the symplectic structure $\omega := e^{1} \wedge e^{6} + e^{2} \wedge e^{5} + e^{3} \wedge e^{4}$.} \label{table:iwasawa-alm-kahler} \end{table} \end{center} In particular, note that \begin{eqnarray*} \dim_\mathbb{R} H^{1}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{1}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) - 2\, \dim_\mathbb{R} H^{1}_{dR}(X;\mathbb{R}) &=& 0 \;, \\[5pt] \dim_\mathbb{R} H^{2}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{2}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) - 2\, \dim_\mathbb{R} H^{2}_{dR}(X;\mathbb{R}) &=& 3 \;, \\[5pt] \dim_\mathbb{R} H^{3}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{3}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) - 2\, \dim_\mathbb{R} H^{3}_{dR}(X;\mathbb{R}) &=& 2 \;. \end{eqnarray*} \end{ex} \begin{rem} More generally, let $X$ be a compact manifold endowed with a Poisson bracket $\left\{\cdot, \cdot\cdot\right\}$, and denote by $G$ the Poisson tensor associated to $\left\{\cdot, \cdot\cdot\right\}$. By following J.-L. Koszul, \cite{koszul}, one defines $\delta:=\left[\iota_G,\, \de\right]\in\End^{-1}\left(\wedge^\bullet X\right)$. One has that $\delta^2=0$ and $\left[\de,\delta\right]=0$, \cite[page 266, page 265]{koszul}, see also \cite[Proposition 1.2.3, Theorem 1.3.1]{brylinski}. One has that, on any compact Poisson manifold, the first spectral sequence ${'E}_r^{\bullet,\bullet}$ associated to the canonical double complex $\left(\Doub^{\bullet,\bullet}\wedge^\bullet X,\, \de\otimes_\mathbb{R}\id,\, \delta\otimes_\mathbb{R}\beta\right)$ degenerates at the first level, \cite[Theorem 2.5]{fernandez-ibanez-deleon-IsrJMath}. On the other hand, M. Fernández, R. Ibáñez, and M.
de León provided an example of a compact Poisson manifold (more precisely, of a nilmanifold endowed with a co-symplectic structure) such that the second spectral sequence ${''E}_r^{\bullet,\bullet}\left(\Doub^{\bullet,\bullet}\wedge^\bullet X,\, \de\otimes_\mathbb{R}\id,\, \delta\otimes_\mathbb{R}\beta\right)$ does not degenerate at the first level, \cite[Theorem 5.1]{fernandez-ibanez-deleon-IsrJMath}. In fact, on a compact $2n$-dimensional manifold $X$ endowed with a symplectic structure $\omega$, the symplectic-$\star$-operator $\star_\omega\colon \wedge^\bullet X \to \wedge^{2n-\bullet}X$ induces the isomorphism $\star_\omega\colon {'E}^{\bullet_1,\bullet_2}_r \stackrel{\simeq}{\to} {''E}^{\bullet_2,2n+\bullet_1}_r$, \cite[Theorem 2.9]{fernandez-ibanez-deleon-IsrJMath}; it follows that, on a compact symplectic manifold, also the second spectral sequence ${''E}_r^{\bullet,\bullet}\left(\Doub^{\bullet,\bullet}\wedge^\bullet X,\, \de\otimes_\mathbb{R}\id,\, \delta\otimes_\mathbb{R}\beta\right)$ actually degenerates at the first level, \cite[Theorem 2.3.1]{brylinski}, see also \cite[Theorem 2.8]{fernandez-ibanez-deleon-IsrJMath}. \end{rem} \subsection{Generalized complex structures}\label{subsec:gen-cplx} Let $X$ be a compact differentiable manifold of dimension $2n$. Consider the bundle $TX\oplus T^*X$ endowed with the natural symmetric pairing $$ \scalar{\cdot}{\cdot\cdot} \colon \left(TX\oplus T^*X\right) \times \left(TX\oplus T^*X\right) \to \mathbb{R} \;, \qquad \scalar{X+\xi}{Y+\eta}\;:=\; \frac{1}{2}\,\left(\xi(Y)+\eta(X)\right) \;. $$ Fix a $\de$-closed $3$-form $H$ on $X$. 
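We note that the closedness assumption on $H$ is precisely what makes the twisted operator $\de_H := \de + H\wedge\cdot$, which appears below, square to zero: for $\varphi\in\wedge^\bullet X$, since $H$ has odd degree,

```latex
% d_H^2 = 0 follows from dH = 0 and H ^ H = 0 (H being a 3-form):
$$ \de_H^2\, \varphi \;=\; \de^2\varphi + \de H \wedge \varphi - H \wedge \de\varphi + H \wedge \de\varphi + H \wedge H \wedge \varphi \;=\; \de H \wedge \varphi \;=\; 0 \;. $$
```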
On the space $\mathcal{C}^{\infty}\left(X; \, TX\oplus T^*X\right)$ of smooth sections of $TX\oplus T^*X$ over $X$, define the \emph{$H$-twisted Courant bracket} as \begin{eqnarray*} && \left[\cdot ,\, \cdot\cdot\right]_H \colon \mathcal{C}^{\infty}\left(X; \, TX\oplus T^*X\right) \times \mathcal{C}^{\infty}\left(X; \, TX\oplus T^*X\right) \to \mathcal{C}^{\infty}\left(X; \, TX\oplus T^*X\right) \;, \\[5pt] && \left[X+\xi,\, Y+\eta\right]_H \;:=\; \left[X,\, Y\right] + \mathcal{L}_X\eta - \mathcal{L}_Y\xi - \frac{1}{2}\, \de \left(\iota_X\eta-\iota_Y\xi\right) + \iota_Y\iota_X H \end{eqnarray*} (where $\iota_{X}\in \End^{-1}\left(\wedge^\bullet X\right)$ denotes the interior product with $X\in \mathcal{C}^\infty(X;TX)$ and $\mathcal{L}_X:=\left[\iota_X,\, \de\right]\in \End^0\left(\wedge^\bullet X\right)$ denotes the Lie derivative along $X\in \mathcal{C}^\infty(X;TX)$); the $H$-twisted Courant bracket can be seen also as a derived bracket induced by the $H$-twisted differential $\de_H:=\de+H\wedge\cdot$, see \cite[\S3.2]{gualtieri-phdthesis}, \cite[\S2]{gualtieri}. 
Furthermore, consider the \emph{Clifford action} of $TX\oplus T^*X$ on the space of differential forms with respect to $\scalar{\cdot}{\cdot\cdot}$, $$ \Cliff\left(TX\oplus T^*X\right) \times \wedge^\bullet X \to \wedge^{\bullet-1} X \oplus \wedge^{\bullet+1} X \;, \qquad (X+\xi) \cdot \varphi \;:=\; \iota_X\varphi + \xi\wedge\varphi \;, $$ and its bi-$\mathbb{C}$-linear extension $\Cliff\left(\left(TX\oplus T^*X\right)\otimes_\mathbb{R}\mathbb{C}\right) \times \left(\wedge^\bullet X\otimes_\mathbb{R}\mathbb{C}\right) \to \left(\wedge^{\bullet-1} X\otimes_\mathbb{R}\mathbb{C}\right) \oplus \left(\wedge^{\bullet+1} X\otimes_\mathbb{R}\mathbb{C}\right)$, where $\Cliff\left(TX\oplus T^*X\right) := \left.\left(\bigoplus_{k\in\mathbb{Z}}\bigotimes_{j=1}^{k}\left(TX\oplus T^*X\right)\right)\middle\slash\left\{v\otimes_\mathbb{R} v - \scalar{v}{v} \;:\; v\in TX\oplus T^*X\right\} \right.$ is the Clifford algebra associated to $TX\oplus T^*X$ and $\scalar{\cdot}{\cdot\cdot}$. \medskip Recall that an \emph{$H$-twisted generalized complex structure} on $X$, \cite[Definition 4.14, Definition 4.18]{gualtieri-phdthesis}, \cite[Definition 3.1]{gualtieri} is an endomorphism $\mathcal{J}\in\End\left(TX\oplus T^*X\right)$ such that \begin{inparaenum}[\itshape (i)] \item $\mathcal{J}^2 = -\id_{TX\oplus T^*X}$, and \item $\mathcal{J}$ is orthogonal with respect to $\scalar{\cdot}{\cdot\cdot}$, and \item the Nijenhuis tensor $$ \mathrm{Nij}_{\mathcal{J},H} \;:=\; -\left[ \mathcal{J}\,\cdot ,\, \mathcal{J}\,\cdot\cdot\right]_H + \mathcal{J} \left[ \mathcal{J}\,\cdot ,\, \cdot\cdot \right]_H + \mathcal{J} \left[ \cdot ,\, \mathcal{J}\,\cdot\cdot \right]_H + \mathcal{J} \left[ \cdot ,\, \cdot\cdot \right]_H \;\in\; \left(TX\oplus T^*X\right) \otimes_\mathbb{R} \left(TX\oplus T^*X\right) \otimes_\mathbb{R} \left(TX\oplus T^*X\right)^* $$ of $\mathcal{J}$ with respect to the $H$-twisted Courant bracket vanishes identically. 
\end{inparaenum} Equivalently, \cite[Proposition 4.3]{gualtieri-phdthesis}, (by setting $L:=:L_{\mathcal{J}}$ the $\im$-eigen-bundle of the $\mathbb{C}$-linear extension of $\mathcal{J}$ to $\left(TX \oplus T^*X\right) \otimes_\mathbb{R} \mathbb{C}$), a generalized complex structure on $X$ is identified by a sub-bundle $L$ of $\left(TX \oplus T^*X\right) \otimes_\mathbb{R} \mathbb{C}$ such that \begin{inparaenum}[\itshape (i)] \item $L$ is maximal isotropic with respect to $\scalar{\cdot}{\cdot\cdot}$, and \item $L$ is involutive with respect to the $H$-twisted Courant bracket, and \item $L \cap \bar{L} = \{0\}$. \end{inparaenum} Equivalently, \cite[Theorem 4.8]{gualtieri-phdthesis}, (by choosing a complex form $\rho$ whose Clifford annihilator $$ L_\rho \;:=\; \left\{v\in \left(TX \oplus T^*X\right) \otimes_\mathbb{R} \mathbb{C} \;:\; v\cdot \rho=0\right\} $$ is the $\im$-eigen-bundle $L_\mathcal{J}$ of the $\mathbb{C}$-linear extension of $\mathcal{J}$ to $\left(TX \oplus T^*X\right) \otimes_\mathbb{R} \mathbb{C}$), a generalized complex structure on $X$ is identified by a sub-bundle $U:=:U_{\mathcal{J}}$ (which is called the \emph{canonical bundle}, \cite[\S4.1]{gualtieri-phdthesis}, \cite[Definition 3.7]{gualtieri}) of complex rank $1$ of $\wedge^\bullet X \otimes_\mathbb{R} \mathbb{C}$ being locally generated by a form $\rho = \exp{\left( B+\im\omega \right)}\wedge\Omega$, where $B\in\wedge^2X$, and $\omega\in\wedge^2X$, and $\Omega=\theta^1\wedge\cdots\wedge\theta^k\in\wedge^kX\otimes_\mathbb{R}\mathbb{C}$ with $\left\{\theta^1,\ldots,\theta^k\right\}$ a set of linearly independent complex $1$-forms, such that \begin{inparaenum}[\itshape (i)] \item $\Omega \wedge \bar\Omega \wedge \omega^{n-k} \neq 0$, and \item there exists $v\in\left(TX \oplus T^*X\right) \otimes_\mathbb{R} \mathbb{C}$ such that $\de_H\rho = v \cdot \rho$, where $\de_H:=\de+H\wedge\cdot$. 
\end{inparaenum} By definition, the \emph{type} of a generalized complex structure $\mathcal{J}$ on $X$, \cite[\S4.3]{gualtieri-phdthesis}, \cite[Definition 3.5]{gualtieri}, is the upper-semi-continuous function $$ \mathrm{type}\left(\mathcal{J}\right) \;:=\; \frac{1}{2}\, \dim_\mathbb{R} \left( T^*X \cap \mathcal{J}T^*X \right) $$ on $X$, equivalently, \cite[Definition 1.1]{gualtieri}, the degree of the form $\Omega$. \medskip A generalized complex structure $\mathcal{J}$ on $X$ induces a $\mathbb{Z}$-graduation on the space of complex differential forms on $X$, \cite[\S4.4]{gualtieri-phdthesis}, \cite[Proposition 3.8]{gualtieri}. Namely, define, for $k\in\mathbb{Z}$, $$ U^k \;:=:\; U^k_{\mathcal{J}} \;:=\; \wedge^{n-k}\bar L_{\mathcal{J}} \cdot U_{\mathcal{J}} \;\subseteq\; \wedge^\bullet X \otimes_\mathbb{R} \mathbb{C} \;, $$ where $L_{\mathcal{J}}$ is the $\im$-eigenspace of the $\mathbb{C}$-linear extension of $\mathcal{J}$ to $\left(TX \oplus T^*X\right)\otimes_\mathbb{R} \mathbb{C}$ and $U^n_{\mathcal{J}}:=U_{\mathcal{J}}$ is the canonical bundle of $\mathcal{J}$. \medskip For a $\scalar{\cdot}{\cdot\cdot}$-orthogonal endomorphism $\mathcal{J}\in\End\left(TX\oplus T^*X\right)$ satisfying $\mathcal{J}^2 = -\id_{TX\oplus T^*X}$, the $\mathbb{Z}$-graduation $U^\bullet_{\mathcal{J}}$ still makes sense, and the condition that $\mathrm{Nij}_{\mathcal{J},H}=0$ turns out to be equivalent, \cite[Theorem 4.3]{gualtieri-phdthesis}, \cite[Theorem 3.14]{gualtieri}, to $$ \de_H\colon U^\bullet_{\mathcal{J}} \to U^{\bullet+1}_{\mathcal{J}} \oplus U^{\bullet-1}_{\mathcal{J}} \;. 
$$ Therefore, on a compact differentiable manifold endowed with a generalized complex structure $\mathcal{J}$, one has, \cite[\S4.4]{gualtieri-phdthesis}, \cite[\S3]{gualtieri}, $$ \de_H \;=\; \partial_{\mathcal{J},H} + \overline{\del}_{\mathcal{J},H} \qquad \text{ where } \qquad \partial_{\mathcal{J},H} \colon U^\bullet_{\mathcal{J}} \to U^{\bullet+1}_{\mathcal{J}} \quad \text{ and } \quad \overline{\del}_{\mathcal{J},H} \colon U^\bullet_{\mathcal{J}} \to U^{\bullet-1}_{\mathcal{J}} \;.$$ Define also, \cite[page 52]{gualtieri-phdthesis}, \cite[Remark at page 97]{gualtieri}, $$ \de_H^{\mathcal{J}} \;:=\; -\im\left(\partial_{\mathcal{J},H} - \overline{\del}_{\mathcal{J},H}\right) \colon U^\bullet_{\mathcal{J}} \to U^{\bullet+1}_{\mathcal{J}} \oplus U^{\bullet-1}_{\mathcal{J}} \;.$$ \medskip By abuse of notation, one says that $X$ \emph{satisfies the $\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}$-Lemma} if $\left(U^\bullet,\, \partial_{\mathcal{J},H},\, \overline{\del}_{\mathcal{J},H}\right)$ satisfies the $\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}$-Lemma, and one says that $X$ \emph{satisfies the $\de_H\de_H^{\mathcal{J}}$-Lemma} if $\left(U^\bullet,\, \de_H,\, \de_H^{\mathcal{J}}\right)$ satisfies the $\de_H\de_H^{\mathcal{J}}$-Lemma. Actually, it turns out that $X$ satisfies the $\de_H\de_H^{\mathcal{J}}$-Lemma if and only if $X$ satisfies the $\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}$-Lemma, \cite[Remark at page 129]{cavalcanti-jgp}: indeed, note that $\ker\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H} = \ker \de_H\de_H^{\mathcal{J}}$, and $\ker\partial_{\mathcal{J},H} \cap \ker\overline{\del}_{\mathcal{J},H} = \ker\de_H \cap \ker\de_H^{\mathcal{J}}$, and $\imm\partial_{\mathcal{J},H} + \imm\overline{\del}_{\mathcal{J},H} = \imm\de_H + \imm\de_H^{\mathcal{J}}$. Moreover, the following result by G.~R. Cavalcanti holds. 
\begin{thm}[{\cite[Theorem 4.2]{cavalcanti}, \cite[Theorem 4.1, Corollary 2]{cavalcanti-jgp}}] A manifold $X$ endowed with an $H$-twisted generalized complex structure $\mathcal{J}$ satisfies the $\de_H\de_H^{\mathcal{J}}$-Lemma if and only if $\left(\ker\de_H^{\mathcal{J}},\, \de_H\right)\hookrightarrow\left(U^\bullet,\, \de_H\right)$ is a quasi-isomorphism of differential $\mathbb{Z}$-graded $\mathbb{C}$-vector spaces. In this case, it follows that the splitting $\wedge^\bullet X \otimes_\mathbb{R} \mathbb{C} = \bigoplus_{k\in\mathbb{Z}}U^k$ gives rise to a decomposition in cohomology. \end{thm} An application of \cite[Proposition 5.17, 5.21]{deligne-griffiths-morgan-sullivan} yields the following result. \begin{thm}[{\cite[Theorem 4.4]{cavalcanti}, \cite[Theorem 5.1]{cavalcanti-jgp}}] A manifold $X$ endowed with an $H$-twisted generalized complex structure $\mathcal{J}$ satisfies the $\de_H\de_H^{\mathcal{J}}$-Lemma if and only if the canonical spectral sequence degenerates at the first level and the decomposition of complex forms into sub-bundles $U^k$ induces a decomposition in cohomology.
\end{thm} \medskip Given a compact differentiable manifold $X$ endowed with an $H$-twisted generalized complex structure, consider the following cohomologies: $$ GH_{dR_{H}}(X) \;:=\; H_{\left( \de_H ; \de_H \right)}\left(\Tot U^\bullet_{\mathcal{J}}\right) \;, $$ and $$ GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X) \;:=\; H^\bullet_{\left( \overline{\del}_{\mathcal{J},H} ; \overline{\del}_{\mathcal{J},H} \right)}\left(U^\bullet_{\mathcal{J}}\right) \;, \quad GH^{\bullet}_{\partial_{\mathcal{J},H}}(X) \;:=\; H^\bullet_{\left( \partial_{\mathcal{J},H} ; \partial_{\mathcal{J},H} \right)}\left(U^\bullet_{\mathcal{J}}\right) \;, $$ and $$ GH^{\bullet}_{BC_{\mathcal{J},H}}(X) \;:=\; H^\bullet_{\left( \partial_{\mathcal{J},H} , \overline{\del}_{\mathcal{J},H} ; \partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H} \right)}\left(U^\bullet_{\mathcal{J}}\right) \;, \qquad GH^{\bullet}_{A_{\mathcal{J},H}}(X) \;:=\; H^\bullet_{\left( \partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H} ; \partial_{\mathcal{J},H} , \overline{\del}_{\mathcal{J},H} \right)}\left(U^\bullet_{\mathcal{J}}\right) \;. $$ Note that, for $H=0$, one has $GH_{dR_{0}}(X)=\Tot H^\bullet_{dR}(X;\mathbb{C})$. By \cite[Proposition 5.1]{gualtieri-phdthesis}, \cite[Proposition 3.15]{gualtieri}, it follows that $\dim_\mathbb{C} GH^{\bullet}_{\partial_{\mathcal{J},H}}(X)<+\infty$ and $\dim_\mathbb{C} GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X)<+\infty$. As an application of Theorem \ref{thm:disug-frol}, we get the following result. \begin{thm}\label{thm:gen-frol-ineq} Let $X$ be a compact differentiable manifold endowed with an $H$-twisted generalized complex structure $\mathcal{J}$. Then \begin{equation}\label{eq:ineq-frol-cplx-gen} \dim_\mathbb{C} GH^{\bullet}_{BC_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{A_{\mathcal{J},H}}(X) \;\geq\; \dim_\mathbb{C} GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{\partial_{\mathcal{J},H}}(X) \;.
\end{equation} \end{thm} As an application of Corollary \ref{cor:charact-hodge-frolicher-double}, we get the following result; compare it also with \cite[Theorem 4.4]{cavalcanti}. \begin{thm}\label{thm:gen-charact} Let $X$ be a compact differentiable manifold endowed with an $H$-twisted generalized complex structure $\mathcal{J}$. The following conditions are equivalent: \begin{itemize} \item $X$ satisfies the $\partial_{\mathcal{J},H}\overline{\del}_{\mathcal{J},H}$-Lemma; \item the Hodge and Fr\"olicher spectral sequences associated to the canonical double complex $\left(\Doub^{\bullet,\bullet}U^\bullet_{\mathcal{J}},\, \partial_{\mathcal{J},H} \otimes_\mathbb{C} \id,\, \overline{\del}_{\mathcal{J},H} \otimes_\mathbb{C} \beta\right)$ degenerate at the first level and the equality in \eqref{eq:ineq-frol-cplx-gen}, $$ \dim_\mathbb{C} GH^{\bullet}_{BC_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{A_{\mathcal{J},H}}(X) \;=\; \dim_\mathbb{C} GH^{\bullet}_{\overline{\del}_{\mathcal{J},H}}(X) + \dim_\mathbb{C} GH^{\bullet}_{\partial_{\mathcal{J},H}}(X) \;, $$ holds. \end{itemize} \end{thm} \medskip Symplectic structures and complex structures provide the fundamental examples of generalized complex structures; in fact, the following generalized Darboux theorem by M. Gualtieri holds. (Recall that a \emph{regular point} of a generalized complex manifold is a point at which the type of the generalized complex structure is locally constant.) \begin{thm}[{\cite[Theorem 4.35]{gualtieri-phdthesis}, \cite[Theorem 3.6]{gualtieri}}] For any regular point of a $2n$-dimensional generalized complex manifold with type equal to $k$, there is an open neighbourhood endowed with a set of local coordinates such that the generalized complex structure is a $B$-field transform of the standard generalized complex structure of $\mathbb{C}^{k}\times\mathbb{R}^{2n-2k}$. 
\end{thm} The standard generalized complex structure of constant type $n$ (that is, locally equivalent to the standard complex structure of $\mathbb{C}^n$), the generalized complex structure of constant type $0$ (that is, locally equivalent to the standard symplectic structure of $\mathbb{R}^{2n}$), and the $B$-field transform of a generalized complex structure are recalled in the following examples. See also \cite[Example 4.12]{gualtieri-phdthesis}. \begin{ex}[{Generalized complex structures of type $n$, \cite[Example 4.11, Example 4.25]{gualtieri-phdthesis}}] Let $X$ be a compact $2n$-dimensional manifold endowed with a complex structure $J\in\End(TX)$. Consider the ($0$-twisted) generalized complex structure $$ \mathcal{J}_J \;:=\; \left( \begin{array}{c|c} -J & 0 \\ \hline 0 & J^* \end{array} \right) \;\in\; \End\left(TX\oplus T^*X\right) \;,$$ where $J^*\in\End(T^*X)$ denotes the dual endomorphism of $J\in\End(TX)$. Note that the $\im$-eigenspace of the $\mathbb{C}$-linear extension of $\mathcal{J}_J$ to $\left(TX \oplus T^*X \right)\otimes_\mathbb{R} \mathbb{C}$ is $$ L_{\mathcal{J}_J} \;=\; T^{0,1}_JX \oplus \left(T^{1,0}_JX\right)^* \;,$$ and the canonical bundle is $$ U^n_{\mathcal{J}_J} \;=\; \wedge^{n,0}_JX \;.$$ Hence, one gets that, \cite[Example 4.25]{gualtieri-phdthesis}, $$ U^\bullet_{\mathcal{J}_J} \;=\; \bigoplus_{p-q=\bullet}\wedge^{p,q}_JX \;,$$ and that $$ \partial_{\mathcal{J}_J} \;=\; \partial_J \qquad \text{ and } \qquad \overline{\del}_{\mathcal{J}_J} \;=\; \overline{\del}_J \;; $$ note that $\de^{\mathcal{J}_J}$ is the operator $\de^c_J:=-\im(\partial-\overline{\del})$, \cite[Remark 4.26]{gualtieri-phdthesis}.
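As a quick consistency check (a direct computation, not taken from the cited sources), using $\partial_J^2=\overline{\del}_J^2=0$ and $\partial_J\overline{\del}_J=-\overline{\del}_J\partial_J$, one gets $$ \de\,\de^c_J \;=\; \left(\partial_J+\overline{\del}_J\right)\left(-\im\left(\partial_J-\overline{\del}_J\right)\right) \;=\; 2\im\,\partial_J\overline{\del}_J \;, $$ so that $\de\de^c_J$-exactness agrees, up to a constant, with $\partial_J\overline{\del}_J$-exactness.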
Note also that $X$ satisfies the $\de\de^{\mathcal{J}_J}$-Lemma if and only if $X$ satisfies the $\de\de^c_J$-Lemma, and that the Hodge and Fr\"olicher spectral sequence associated to the canonical double complex $\left(\Doub^{\bullet,\bullet}U^\bullet_{\mathcal{J}_J},\, \partial_{\mathcal{J}_J} \otimes_\mathbb{C} \id,\, \overline{\del}_{\mathcal{J}_J} \otimes_\mathbb{C}\beta\right)$ degenerates at the first level if and only if the Hodge and Fr\"olicher spectral sequence associated to the double complex $\left(\wedge^{\bullet,\bullet}_JX,\, \partial_J,\, \overline{\del}_J\right)$ does, \cite[Remark at page 76]{cavalcanti}. In particular, it follows that, for $\sharp\in\left\{\overline{\del},\, \partial,\, BC,\, A\right\}$, $$ GH_{\sharp_{\mathcal{J}_J}}^\bullet(X) \;=\; \Tot^\bullet H_{\sharp_J}^{\bullet,-\bullet}(X) \;=\; \bigoplus_{p-q=\bullet} H_{\sharp_J}^{p,q}(X) \;. $$ Therefore, by Theorem \ref{thm:gen-frol-ineq} and Theorem \ref{thm:gen-charact}, and by using the equalities $\dim_\mathbb{C} H^{\bullet_1,\bullet_2}_{{BC}_J}(X) = \dim_\mathbb{C} H^{n-\bullet_2,n-\bullet_1}_{A_J}(X)$ and $\dim_\mathbb{C} H^{\bullet_1,\bullet_2}_{\overline{\del}_J}(X) = \dim_\mathbb{C} H^{n-\bullet_2,n-\bullet_1}_{\partial_J}(X)$, one gets the following result; compare Corollary \ref{cor:cplx} and \cite[Theorem A, Theorem B]{angella-tomassini-3}. \begin{cor} Let $X$ be a compact complex manifold. Then the inequality $$ \sum_{p-q=\bullet} \dim_\mathbb{C} H_{{BC}_J}^{p,q}(X) \;\geq\; \sum_{p-q=\bullet} \dim_\mathbb{C} H^{p,q}_{\overline{\del}_J}(X) $$ holds.
Furthermore, $X$ satisfies the $\partial_J\overline{\del}_J$-Lemma if and only if \begin{inparaenum}[\itshape (i)] \item the Hodge and Fr\"olicher spectral sequence of $X$ degenerates at the first level, namely, $$ \dim_\mathbb{C} H^\bullet_{dR}(X;\mathbb{C}) \;=\; \dim_\mathbb{C} \Tot^\bullet H^{\bullet,\bullet}_{\overline{\del}_J}(X) \;,$$ and \item the equality $$ \sum_{p-q=\bullet} \dim_\mathbb{C} H^{p,q}_{{BC}_J}(X) \;=\; \sum_{p-q=\bullet} \dim_\mathbb{C} H^{p,q}_{\overline{\del}_J}(X) $$ holds. \end{inparaenum} \end{cor} \begin{landscape} \smallskip \begin{footnotesize} \begin{center} \begin{table} \begin{tabular}{c||*{4}{c}|*{4}{c}|*{4}{c}|*{4}{c}|*{4}{c}|*{4}{c}|*{4}{c}||} \toprule $\mathbb{I}_3:=\left.\mathbb{Z}[\im]^3\middle\backslash\mathbb{C}^3\right.$ & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{-3} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{-2} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{-1} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{0} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{1} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c|}{$\dim_\mathbb{C} \Tot^{2} H^{\bullet,-\bullet}_{\sharp}(X)$} & \multicolumn{4}{c||}{$\dim_\mathbb{C} \Tot^{3} H^{\bullet,-\bullet}_{\sharp}(X)$}\\ {\bfseries classes} & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$ & $\overline{\del}$ & $\partial$ & $BC$ & $A$\\ \midrule[0.02em]\midrule[0.02em] {\itshape (i)} & 1 & 1 & 1 & 1 & 5 & 5 & 5 & 5 & 11 & 11 & 11 & 11 & 12 & 12 & 12 & 12 & 11 & 11 & 11 & 11 & 5 & 5 & 5 & 5 & 1 & 1 & 1 & 1 \\ \midrule[0.02em] {\itshape (ii.a)} & 1 & 1 & 1 & 1 & 4 & 4 & 4 & 4 & 9 & 9 & 11 & 11 & 10 & 10 & 11 & 11 & 9 & 9 & 11 
& 11 & 4 & 4 & 4 & 4 & 1 & 1 & 1 & 1 \\ {\itshape (ii.b)} & 1 & 1 & 1 & 1 & 4 & 4 & 4 & 4 & 9 & 9 & 11 & 11 & 10 & 10 & 10 & 10 & 9 & 9 & 11 & 11 & 4 & 4 & 4 & 4 & 1 & 1 & 1 & 1 \\ \midrule[0.02em] {\itshape (iii.a)} & 1 & 1 & 1 & 1 & 3 & 3 & 3 & 3 & 8 & 8 & 11 & 11 & 10 & 10 & 11 & 11 & 8 & 8 & 11 & 11 & 3 & 3 & 3 & 3 & 1 & 1 & 1 & 1 \\ {\itshape (iii.b)} & 1 & 1 & 1 & 1 & 3 & 3 & 3 & 3 & 8 & 8 & 11 & 11 & 10 & 10 & 10 & 10 & 8 & 8 & 11 & 11 & 3 & 3 & 3 & 3 & 1 & 1 & 1 & 1 \\ \midrule[0.02em]\midrule[0.02em] & \multicolumn{4}{c|}{$\mathbf{b_0=1}$} & \multicolumn{4}{c|}{$\mathbf{b_1=4}$} & \multicolumn{4}{c|}{$\mathbf{b_2=8}$} & \multicolumn{4}{c|}{$\mathbf{b_3=10}$} & \multicolumn{4}{c|}{$\mathbf{b_4=8}$} & \multicolumn{4}{c|}{$\mathbf{b_5=4}$} & \multicolumn{4}{c||}{$\mathbf{b_6=1}$} \\ \bottomrule \end{tabular} \caption{Generalized complex cohomologies of the Iwasawa manifold.} \label{table:iwasawa} \end{table} \end{center} \end{footnotesize} \smallskip \end{landscape} \end{ex} \begin{ex}[{Generalized complex structures of type $0$, \cite[Example 4.10]{gualtieri-phdthesis}}] Let $X$ be a compact $2n$-dimensional manifold endowed with a symplectic structure $\omega \in \wedge^2 X \simeq \Hom\left(TX; T^*X\right)$. Consider the ($0$-twisted) generalized complex structure $$ \mathcal{J}_\omega \;:=\; \left( \begin{array}{c|c} 0 & -\omega^{-1} \\ \hline \omega & 0 \end{array} \right) \;,$$ where $\omega^{-1}\in\Hom\left(T^*X; TX\right)$ denotes the inverse of $\omega \in \Hom\left(TX; T^*X\right)$. 
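As a direct check (a one-line computation, not taken from the references), $\mathcal{J}_\omega$ does square to $-\id$, as required for a generalized complex structure: $$ \mathcal{J}_\omega^2 \;=\; \left( \begin{array}{c|c} -\omega^{-1}\,\omega & 0 \\ \hline 0 & -\omega\,\omega^{-1} \end{array} \right) \;=\; -\id_{TX\oplus T^*X} \;. $$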
Note that the $\im$-eigenspace of the $\mathbb{C}$-linear extension of $\mathcal{J}_\omega$ to $\left(TX \otimes_\mathbb{R} \mathbb{C}\right) \oplus \left(T^*X \otimes_\mathbb{R} \mathbb{C}\right)$ is $$ L_{\mathcal{J}_\omega} \;=\; \left\{ X -\im \, \omega\left(X,\,\cdot\right) \;:\; X \in TX\otimes_\mathbb{R}\mathbb{C} \right\} \;, $$ which has Clifford annihilator $\exp(\im\,\omega)$, and the canonical bundle is $$ U_{\mathcal{J}_\omega}^n \;=\; \mathbb{C}\left\langle \exp\left(\im\,\omega\right) \right\rangle \;.$$ In particular, one gets that, \cite[Theorem 2.2]{cavalcanti-jgp}, $$ U_{\mathcal{J}_\omega}^{n-\bullet} \;=\; \exp{\left(\im\omega\right)}\, \left(\exp{\left(\frac{\Lambda}{2\im}\right)} \left(\wedge^\bullet X \otimes_\mathbb{R} \mathbb{C}\right)\right) \;, $$ where $\Lambda := -\iota_{\omega^{-1}}$. Note that, \cite[\S2.2]{cavalcanti-jgp}, $$ \de^{\mathcal{J}_\omega} \;=\; \de^\Lambda \;. $$ By considering the natural isomorphism $$ \varphi\colon \wedge^\bullet X \otimes_\mathbb{R} \mathbb{C} \to \wedge^\bullet X \otimes_\mathbb{R} \mathbb{C} \;, \qquad \varphi(\alpha) \;:=\; \exp{\left(\im\omega\right)}\, \left(\exp{\left(\frac{\Lambda}{2\im}\right)}\, \alpha\right) \;,$$ one gets that, \cite[Corollary 1]{cavalcanti-jgp}, $$ \varphi\left(\wedge^\bullet X\otimes_\mathbb{R}\mathbb{C}\right) \simeq U^{n-\bullet} \;, \qquad \text{ and }\qquad \varphi\de \;=\; \overline{\del}_{\mathcal{J}_\omega}\varphi \quad\text{ and }\quad \varphi\de^{\mathcal{J}_\omega} \;=\; -2\im\partial_{\mathcal{J}_\omega}\varphi \;; $$ in particular, $$ GH^\bullet_{\overline{\del}_{\mathcal{J}_\omega}}(X) \;\simeq\; H^{n-\bullet}_{dR}(X;\mathbb{C}) \;.
$$ In particular, one recovers Theorem \ref{thm:sympl}, namely, $$ \dim_\mathbb{R} H^{\bullet}_{\left( \de , \de^\Lambda ; \de\de^\Lambda \right)}\left(X\right) + \dim_\mathbb{R} H^{\bullet}_{\left( \de\de^\Lambda ; \de , \de^\Lambda \right)}\left(X\right) \;\geq\; 2\, \dim_\mathbb{R} H^{\bullet}_{dR}(X;\mathbb{R}) \;, $$ and the equality holds if and only if $X$ satisfies the Hard Lefschetz Condition. \end{ex} \begin{ex}[{$B$-transform, \cite[\S3.3]{gualtieri-phdthesis}}] Let $X$ be a compact $2n$-dimensional manifold endowed with an $H$-twisted generalized complex structure $\mathcal{J}$, and let $B$ be a $\de$-closed $2$-form. Consider the $H$-twisted generalized complex structure $$ \mathcal{J}^B \;:=\; \exp \left(-B\right) \, \mathcal{J} \, \exp B \qquad \text{ where } \qquad \exp B \;=\; \left( \begin{array}{c|c} \id_{TX} & 0 \\ \hline B & \id_{T^*X} \end{array} \right) \;.$$ Note that the $\im$-eigenspace of the $\mathbb{C}$-linear extension of $\mathcal{J}^B$ to $\left(TX \oplus T^*X \right)\otimes_\mathbb{R} \mathbb{C}$ is, \cite[Example 2.3]{cavalcanti}, $$ L_{\mathcal{J}^B} \;=\; \left\{X+\xi-\iota_XB \;:\; X+\xi \in L_{\mathcal{J}}\right\} \;, $$ and the canonical bundle is, \cite[Example 2.6]{cavalcanti}, $$ U^n_{\mathcal{J}^B} \;=\; \exp B \wedge U^n_{\mathcal{J}} \;.$$ Hence, one gets that, \cite[\S2.3]{cavalcanti-jgp}, $$ U^{\bullet}_{\mathcal{J}^B} \;=\; \exp B \wedge U^\bullet_{\mathcal{J}} \;, $$ and that, \cite[\S2.3]{cavalcanti-jgp}, $$ \partial_{\mathcal{J}^B} \;=\; \exp \left(-B\right) \, \partial_{\mathcal{J}} \, \exp B \qquad \text{ and } \qquad \overline{\del}_{\mathcal{J}^B} \;=\; \exp \left(-B\right) \, \overline{\del}_{\mathcal{J}} \, \exp B \;. $$ In particular, one gets that $\mathcal{J}$ satisfies the $\partial_{\mathcal{J}}\overline{\del}_\mathcal{J}$-Lemma if and only if $\mathcal{J}^B$ satisfies the $\partial_{\mathcal{J}^B}\overline{\del}_{\mathcal{J}^B}$-Lemma.
\end{ex} \begin{rem} We recall that, given a $\de$-closed $3$-form $H$ on a manifold $X$, an \emph{$H$-twisted generalized K\"ahler structure} on $X$ is a pair $\left( \mathcal{J}_1,\, \mathcal{J}_2\right)$ of $H$-twisted generalized complex structures on $X$ such that \begin{inparaenum}[\itshape (i)] \item $\mathcal{J}_1$ and $\mathcal{J}_2$ commute, and \item the symmetric pairing $\left\langle \mathcal{J}_1 \cdot,\, \mathcal{J}_2\cdot\cdot\right\rangle$ is positive definite. \end{inparaenum} Generalized K\"ahler geometry is equivalent to a bi-Hermitian geometry with torsion, \cite[Theorem 2.18]{gualtieri-kahler}. We recall that a compact manifold $X$ endowed with an $H$-twisted generalized K\"ahler structure $\left( \mathcal{J}_1,\, \mathcal{J}_2\right)$ satisfies both the $\de_H\de_H^{\mathcal{J}_1}$-Lemma and the $\de_H\de_H^{\mathcal{J}_2}$-Lemma, \cite[Corollary 4.2]{gualtieri-kahler}. Any K\"ahler structure provides an example of a $0$-twisted generalized K\"ahler structure. A left-invariant non-trivial twisted generalized K\"ahler structure on a (non-completely solvable) solvmanifold (which is the total space of a $\mathbb{T}^2$-bundle over the Inoue surface, \cite[Proposition 3.2]{fino-tomassini-jsg}) has been constructed by A. Fino and the second author, \cite[Theorem 3.5]{fino-tomassini-jsg}. \end{rem} \begin{rem} Note that A. Tomasiello proved in \cite[\S B]{tomasiello} that satisfying the $\de\de^\mathcal{J}$-Lemma is a stable property under small deformations. \end{rem}
\section{Introduction} The purpose of this paper is to discuss the conditions under which a light scalar, identifiable with the dilaton, can naturally arise in a field theory \cite{cpr}. This question is non-trivial because dilatation invariance is a spacetime symmetry, and Goldstone's theorem does not apply straightforwardly. To put the problem into focus, let us then review the basic facts. In the case of a non-linearly realized ordinary global symmetry, the Goldstone field transforms by a simple constant shift: \begin{equation} \tau(x)\to \tau(x)+c \end{equation} so that the only scalar potential consistent with the symmetry vanishes identically, $V(\tau)\equiv 0$. Then, not only the mass but all interactions vanish at zero external momentum. In the case of dilatation invariance, the associated Goldstone scalar instead transforms as: \begin{equation} \tau(x)\to \tau(kx)+\ln k \end{equation} with $k\in {\mathbb R}^+$. Consistent with dilatation invariance, the most general scalar potential is then: \begin{equation} \label{eq1:PotentialForPi} V=V_0e^{4\tau} \end{equation} with $V_0$ a generically non-vanishing constant with dimension $[E]^4$. This state of affairs implies that the pattern of symmetry breaking depends on the parameter $V_0$, as we shall now illustrate. Aside from the potential, the most general dilatation-invariant Lagrangian for $\tau$ will include higher derivative terms with the schematic form \begin{equation} e^{4\tau} \left (\partial\right )^m \left (e^{-\tau}\right )^m \end{equation} and with the $m$ partial derivatives spread over the $e^{-\tau}$ factors in all possible ways. Notice that, while the potential (\ref{eq1:PotentialForPi}) is also invariant under the full conformal group $O(4,2)$, only very specific combinations of the higher derivative terms are invariant under special conformal transformations.
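As a quick check of this statement (a short computation we spell out for clarity), under $\tau(x)\to \tau(kx)+\ln k$ the potential term in the action is invariant: \begin{equation} \int d^4x \; V_0\, e^{4\left(\tau(kx)+\ln k\right)} \;=\; k^4 \int d^4x \; V_0\, e^{4\tau(kx)} \;=\; \int d^4x' \; V_0\, e^{4\tau(x')} \, , \end{equation} after the change of variables $x'=kx$, $d^4x=k^{-4}\, d^4x'$. A term $e^{n\tau}$ with $n\neq 4$ would fail to compensate the Jacobian, which is why (\ref{eq1:PotentialForPi}) is the most general invariant potential.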
To be specific, the most general conformally invariant action $\mathcal{S}_{CI}[\tau]$ can be constructed as the most general diffeomorphism-invariant action involving the metric \cite{Salam:1970qk}: \begin{equation} \hat{g}_{\mu\nu} = e^{2\tau} \eta_{\mu\nu} \end{equation} plus a single ``Wess-Zumino'' term that cannot be written in this form \cite{Tomboulis:1988gw,Nicolis:2008in}: \begin{equation} \label{eq:CILagrangian} \mathcal{S}_{CI}[\tau] = \mathcal{S}[\hat{g}] + \mathcal{S}_{WZ}[\tau] \, . \end{equation} For simplicity, from now on we assume the case (\ref{eq:CILagrangian}), with invariance under the full conformal group. It would perhaps be interesting to study whether there can be substantial changes in our discussion in the case of scale-without-conformal invariance. The presence of an explicit dimensionful parameter, $V_0$, is just due to our use of a dimensionless field $\tau$ and is obviously consistent with a non-linearly realized dilatation invariance. To make the symmetry more evident, we will also work with a canonical dilaton field $\varphi\equiv f_De^\tau$, in terms of which the most general effective Lagrangian truncated at two derivatives is: \begin{equation} \label{eq:withKappa} {\cal L}=\frac{1}{2}\partial_\mu\varphi\partial^\mu\varphi -\kappa\varphi^4 \end{equation} with $\kappa=V_0/f_D^4$ a dimensionless coupling. Focussing on this simplest Lagrangian, as already mentioned, the pattern of symmetry breaking depends on the parameter $\kappa$. This can be studied by considering the maximally symmetric solutions, as first done in ref. \cite{fubini}.
One finds: \begin{eqnarray}\label{pattern} \kappa&>&0 \quad\to\quad \varphi=\frac{1}{\sqrt{2\kappa}}\frac{1}{z}\qquad SO(3,2)\equiv {\rm AdS4}\nonumber \\ \kappa&=&0 \quad\to\quad \varphi={\rm const}\qquad ISO(3,1)\equiv {\rm Poincar\acute{e}\, 4}\\ \kappa&<&0 \quad\to\quad \varphi=\frac{1}{\sqrt{-2\kappa}}\frac{1}{t}\qquad SO(4,1)\equiv {\rm dS4} \, .\nonumber \end{eqnarray} As a matter of fact, the result does not qualitatively change when considering the most general conformally invariant derivative action \cite{Nicolis:2008in,Nicolis:2009qm}. It then follows that the spontaneous breakdown of $O(4,2)$ to Poincar\'e, with a resulting massless dilaton, does not arise for a generic choice of parameters, but requires the tuning $\kappa=0$. As far as we know, the only case in which the choice $\kappa=0$ is technically natural is in the context of supersymmetry. There, in particular in $N=4$ Super Yang-Mills, there are plenty of flat directions that can play the role of the dilaton. Notice also that eq.~(\ref{pattern}) corresponds precisely to the situation in general relativity: depending on the sign of the cosmological constant $\Lambda$ there are either {\it dS} or {\it AdS} solutions, while only for the special choice $\Lambda =0$ is the solution Poincar\'e invariant. This is not surprising given that the action for the conformal mode of the metric and that of the dilaton share invariance under $O(4,2)$. In this respect the tuning associated with a massless dilaton is completely analogous to the tuning associated with a vanishing cosmological constant in gravity \cite{Sundrum:2003yt}. A solution of the former problem may hopefully shed light on the latter. The breaking pattern (\ref{pattern}) resembles that of the Lorentz group $SO(3,1)$ when considering a vector field $A_\mu$ with a potential \begin{equation} V \propto (A_\mu A^\mu -m^2)^2 \, .
\end{equation} Depending on $m^2$, the minimum is in fact at $A_\mu = \hat{A}_\mu$, where $\hat{A}$ can be chosen to be: \begin{eqnarray}\label{patternLorentz} m^2&<&0 \quad\to\quad \hat{A} = (0,|m|,0,0) \qquad SO(2,1) \nonumber \\ m^2&=&0 \quad\to\quad \hat{A} = (p,p,0,0) \qquad ISO(2)\\ m^2&>&0 \quad\to\quad \hat{A} = (m,0,0,0) \qquad SO(3) \, .\nonumber \end{eqnarray} Notice also the analogy with the theory of representations of the Lorentz group, where the residual symmetry group in (\ref{patternLorentz}) is the little group and $m^2$ is the squared momentum of a one-particle state. Then, in the case of representations with spin $1/2$ or $1$, the massless case can be selected respectively by chiral or gauge symmetry, or, more generally, by multiplet shortening. On the contrary, for spin $0$ it is unnatural to have $m^2=0$, which is also related to the absence of multiplet shortening at $m=0$. This is the source of the well-known hierarchy problem. These simple examples illustrate that the non-compact nature of the group ($O(4,2)$ or $SO(3,1)$) plays a central role in producing a ``phase diagram'' where some specific breaking pattern (to $ISO$ groups) can arise only on a subspace of zero measure, that is, by tuning. Indeed, if we considered the same vector $A_\mu$ but with compact symmetry group $SO(4)$, the breaking pattern would more simply be \begin{eqnarray}\label{patternEuclid} m^2&\leq&0 \quad\to\quad \hat{A} = (0,0,0,0) \qquad SO(4) \nonumber \\ m^2&>&0 \quad\to\quad \hat{A} = (m,0,0,0) \qquad SO(3) \, ,\nonumber \end{eqnarray} so that the breaking pattern presents only two generic options. In phenomenological applications we are often interested in pseudo-Goldstone bosons, whose mass results from the explicit breaking of the global symmetry by a small parameter. In the case of internal compact symmetries, the possible symmetry breaking patterns are robust and generic, as seen in the $SO(4)$ example mentioned above.
It is thus straightforward to apply an explicit symmetry-breaking perturbation, the pion in QCD being a perfect example. On the other hand, our discussion shows that for dilatations the very starting point is non-generic and seemingly implausible. Further elaboration is thus needed to identify a naturally light dilaton. The discussion so far concerned the case of exact conformal symmetry. The next obvious step is to ask what happens in the presence of a (small) explicit source of breaking. Consider now the case of explicit breakdown of conformal invariance, where couplings $\lambda_i$ that take the system away from the fixed point are turned on: \begin{equation} \mu\frac{\partial \lambda_i}{\partial \mu}=\beta_i(\lambda)\not = 0\, . \end{equation} The simplest and perhaps most interesting case is that of just one relevant or marginally relevant coupling $\lambda$ associated with: \begin{equation} \Delta {\cal L} = \lambda {\cal O}_{d} \end{equation} where ${\cal O}_{d}$ has dimension $d\leq 4$ in the limit $\lambda=0$. In this situation, by starting at some UV scale $\mu_0$ with $\lambda(\mu_0)\equiv \lambda_0\ll 1$, the system is driven further away from the fixed point by the Renormalization Group (RG) flow towards the IR, until at some scale $\Lambda$ one has $\lambda(\Lambda)\sim 1$, corresponding to an $O(1)$ perturbation\footnote{We apply Naive Dimensional Analysis (NDA) normalizing the couplings so that the perturbation expansion parameter is $\lambda$ without extra powers of $4 \pi$.} away from conformality. The resulting physics is strongly coupled and generically characterized by just one scale $\Lambda$, like in QCD, with masses scaling in units of $\Lambda$. Massless, or light, degrees of freedom will be associated with broken global symmetries (Goldstone bosons) or with unbroken chiral and gauge symmetries (respectively fermions and vector bosons).
However, since conformal invariance is no longer an approximate symmetry at the relevant energy scale, as witnessed by the fact that the coupling $\lambda$ runs `fast', there is no reason to expect a light dilaton-like CP-even scalar. More explicitly, this is because the non-conservation of the scale current $S_\mu$ is controlled by the beta function: \begin{equation} \partial^\mu S_\mu = T_\mu^\mu \propto \beta(\lambda) \, . \end{equation} The situation we outlined is indeed the one realized in UV-free gauge theories like QCD, with the NDA-normalized gauge coupling $g/4\pi$ playing the role of $\lambda$. As expected, there is no candidate for a light and narrow dilaton in the observed hadron spectrum. Similarly, no light dilaton was to be expected in ordinary technicolor models. Moreover, in conformal technicolor models like the one proposed in \cite{Evans:2010bp}, where the role of $\lambda$ is played by a very relevant coupling such as a fermion mass, we do not expect a light dilaton-like state. Again, this is because, at the relevant IR scale $\Lambda$, conformal invariance is no longer an approximate symmetry. Notice that this situation does not change at large $N$. In the following Sections we shall illustrate under what conditions this generic expectation fails and a naturally light dilaton-like scalar emerges. More precisely, in Section \ref{sec:4Dpicture} we illustrate the requirements from a purely 4-dimensional point of view. In Section \ref{sec:hologrealiz} we discuss a 5D model representing an explicit holographic realization. This construction allows one to better evaluate the plausibility of the requirements sketched in the purely 4-dimensional discussion. In Section \ref{sec:conclusions} we briefly draw our conclusions. Various additional aspects of the 5D model, such as the stability of the solution, are discussed in the appendices.
\section{The 4-dimensional picture} \label{sec:4Dpicture} First of all, we should make clear that underlying all our discussion is the assumption that we are dealing with a CFT where some non-trivial operators acquire a non-vanishing expectation value, thus spontaneously breaking dilatations. Under this assumption, we will from now on focus on the effective theory of the resulting dilaton, addressing the problem that was outlined in the Introduction. The discussion in the Introduction, in spite of being negative, does suggest the features that are necessary in order to obtain a naturally light dilaton. As we shall now elaborate, these are: \begin{enumerate} \item The CFT should somehow be able to sample a direction with $\kappa=0$ in (\ref{eq:withKappa}). \item It should be endowed with a coupling that stays `naturally' close to marginality throughout the RG evolution. \end{enumerate} The first request can be satisfied by postulating that the theory possesses a line (or more generally a surface) of fixed points. This corresponds to the existence of a coupling $\lambda$ (or a set of them) that remains exactly marginal over a finite range. The corresponding marginality line (or surface) can be viewed as a continuous family of CFTs that are deformed into one another by turning on the exactly marginal coupling. Now, the parameter $\kappa$ will vary continuously over this family, $\kappa\to \kappa(\lambda)$, and generically there will exist a point $\lambda_*$, or a discrete set, such that $\kappa(\lambda_*)=0$. To satisfy the second request, imagine now modifying the theory by endowing $\lambda$ with a small beta function over the whole marginality line: \begin{equation} \beta(\lambda)=\epsilon \bar\beta(\lambda)\qquad \epsilon \ll 1\, , \quad \bar\beta(\lambda)=O(1)\, .
\end{equation} By RG invariance, the dilaton potential will simply be\footnote{We imagine that $\epsilon$ smoothly describes a one-parameter family of theories and work in series expansion in $\epsilon$ around $\epsilon = 0$. Eq. (\ref{eq:potentialKappaPhi4}) represents the potential at zeroth order in $\epsilon$. The holographic example we shall present later supports this picture. Higher order effects will modify the function $\kappa$, but its relevant properties, zeroes and slope, will qualitatively remain the same over a finite range of $\epsilon$. So we can neglect this detail in the discussion.}: \begin{equation} \label{eq:potentialKappaPhi4} V(\varphi)=\kappa(\lambda(\varphi)) \varphi^4\, . \end{equation} This basically corresponds to a quartic potential modulated by a slow evolution with $\varphi$ of its coefficient $\kappa$, the slow dependence arising from the near marginality of $\lambda$. Now, by a generic choice of parameters, one that does not require any particular tuning, we can imagine $\kappa(\lambda(\varphi))$ to be positive at $\varphi\to \infty$ and to cross zero at $\varphi=\varphi_*$ such that $\lambda(\varphi_*)=\lambda_*$. In such a situation the minimum of the potential will clearly be at $\varphi=O(\varphi_*)$, close to the point where the quartic coefficient vanishes. The resulting mass of the dilaton will thus be suppressed by $\epsilon$, the small parameter in the game. This is precisely what happens in dimensional transmutation \`a la Coleman-Weinberg \cite{Coleman:1973jx}. To make the discussion more quantitative, we can study the vacuum dynamics in an expansion in $\epsilon$ around $\varphi_*$ ($\lambda(\varphi_*)=\lambda_*$ and $\kappa(\lambda_*)=0$).
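In the expansion below, the only technical input is the chain rule obtained by identifying the renormalization scale with $\varphi$ itself, $\varphi\,\partial_\varphi \lambda(\varphi)=\beta(\lambda(\varphi))$. As a worked step we spell out for clarity, differentiating (\ref{eq:potentialKappaPhi4}) twice gives \begin{equation} \frac{\partial^2 V(\varphi)}{\partial\varphi^2}=\left[12\,\kappa+7\,\beta\kappa^\prime+\beta\beta^\prime\kappa^\prime+\beta^2\kappa^{\prime\prime}\right]\varphi^2\, , \end{equation} with all couplings evaluated at $\lambda(\varphi)$; at a stationary point, where $4\kappa=-\beta\kappa^\prime$, this reduces to $\left[4\,\beta\kappa^\prime+O(\epsilon^2)\right]\varphi^2$, which is the origin of the $\epsilon$-suppressed dilaton mass.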
The condition of stationarity \begin{equation} \frac{\partial V(\varphi)}{\partial\varphi}=\left [4\kappa(\lambda(\varphi))+\beta(\lambda(\varphi))\kappa^\prime(\lambda(\varphi))\right ]\varphi^3=\left [4\kappa(\lambda(\varphi))+\epsilon \bar\beta(\lambda(\varphi))\kappa^\prime(\lambda(\varphi))\right ]\varphi^3\end{equation} implies the minimum is at a $\varphi_{min}$ satisfying: \begin{equation} \lambda(\varphi_{min})\equiv \lambda_{min}=\lambda_*-\frac{\epsilon}{4} \bar\beta(\lambda_*) +O(\epsilon^2) \end{equation} implying \begin{equation} \varphi_{min}=\varphi_*e^{-\frac{1}{4}+O(\epsilon)}\, . \end{equation} Assuming, without loss of generality, a canonically normalized kinetic term, we find for the dilaton mass: \begin{equation} \label{eq:mphiGenericDiscussion} m^2_\varphi=4\epsilon \varphi_{min}^{2} \bar\beta(\lambda_*) \kappa^\prime(\lambda_*)=O(\epsilon)\varphi_{min}^2 \end{equation} suppressed with respect to the characteristic mass scale of the system $\varphi_{min}$. \section{Holographic realization} \label{sec:hologrealiz} To better appreciate how plausible the scenario of the previous Section is, we outline here a holographic realization \cite{Maldacena:1997re}-\cite{Verlinde:1999fy}, in the context of RSI \cite{Randall:1999ee}. Our mechanism is a variant of the one proposed in \cite{Goldberger:1999uk} by Goldberger and Wise (GW). We do not want to claim particular originality here: see \cite{Goldberger:1999uk}-\cite{Eshel:2011wz} for related studies and \cite{Chacko:2012sy}\cite{Bellazzini:2012vz} for recent discussions\footnote{During the completion of this work, another paper appeared \cite{Bellazzini:2013fga} in which the idea of \cite{cpr} is elaborated.}. We just want to elucidate in the holographic context the necessary conditions for a naturally light dilaton, that here coincides with the so-called radion. In that respect our remarks complement the discussion in the Appendix A of ref. \cite{Rattazzi:2000hs}. 
We want to translate into an AdS5 model the properties of the CFT we previously identified. A naturally marginal deformation will correspond to a naturally massless scalar in 5D, a Goldstone boson $\pi$ living in the bulk: \begin{equation} \label{eq:duality0} \pi\quad \leftrightarrow \quad\lambda \, . \end{equation} The marginality surface in the CFT will correspond to the coset manifold in AdS5. An almost marginal deformation, like the one we want, will then just correspond to a bulk pseudo-Goldstone. We will parametrize with a small dimensionless quantity $\epsilon$ the effects that explicitly break the Goldstone symmetry. In particular the bulk scalar potential $V(\pi)$ will be $O(\epsilon)$. The tension $\tau_{IR}$ of the IR brane contributes additively to the dilaton quartic $\kappa$ \cite{Rattazzi:2000hs}. Then the request 1) in the previous Section amounts to assuming a $\pi$-dependent tension. We choose units where the AdS5 radius is $L=1/k$, and the bulk cosmological constant is $\Lambda_5=-3/L^2$. For the infrared (IR) brane tension we shall assume (see eq.~(\ref{eq:fullaction0}) for the units) : \begin{equation} \label{eq:IRtension} \tau_{IR}(\pi)=-\frac{3}{L} +\frac{f(\pi)}{L} =\tau_{RS}+\frac{f(\pi)}{L} \end{equation} where $\tau_{RS}$ is the tuned value corresponding to an exactly vanishing dilaton potential, that is $f=0$ corresponds to $\kappa=0$. So we basically have: \begin{equation} \kappa\equiv \kappa(\lambda) \quad \leftrightarrow \quad {f(\pi)} \, . \end{equation} In our study we shall elucidate the relation between $\kappa(\lambda)$ and $f(\pi)$. While the relation will become conceptually clear, we shall only be able to present simple analytic expressions under some approximations: a first order Taylor expansion in the case of large back-reaction from the field $\pi$ in Sections \ref{subsec:radion}-\ref{subsec:radion2}, to all orders in $\lambda$ for the case of small back-reaction in Section \ref{subsec:radion0}. 
We now look for a solution with a 5D metric \begin{equation} \label{eq:metric} ds^2 = g_{NM} dx^N dx^M= e^{2 A(z)} \eta_{\mu\nu} dx^\mu dx^\nu - dz^2 \end{equation} where $\eta_{\mu\nu}=\mbox{diag}(1,-1,-1,-1)$. Notice that (\ref{eq:metric}) is the most general Poincar\'e invariant solution, after the gauge choice $g_{\mu 5}=0$ and $g_{55}=-1$. To allow for a holographic interpretation we focus on asymptotically AdS solutions, i.e.\ we impose $A(z) \rightarrow -z/L$ at $z\to -\infty$. We introduce an IR brane at $z=z_{IR}$, whose presence is associated with the spontaneous breakdown of 4D conformal invariance \cite{ArkaniHamed:2000ds}\cite{Rattazzi:2000hs}, and for simplicity we do not introduce any ultraviolet (UV) brane. This corresponds to the limit of zero Newton constant in 4D, which is legitimate since our considerations are intrinsically decoupled from 4D gravity. Introducing the Planck brane one would slightly complicate the discussion and find the usual issue of the fine-tuning related to the 4D Cosmological Constant. Adopting the conventions of \cite{DeWolfe:1999cp}, our 5D action is thus given by: \begin{equation} \label{eq:fullaction0} \frac{S}{(M_5)^3}=\int d^4 x \int_{-\infty}^0 dz \sqrt{|g|} \left[ -\frac{1}{4} R + \frac{1}{2}(\partial \pi)^2 - V(\pi)\right ] -\frac{1}{2} \int_{z=z_{IR}} d^4 x \sqrt{|h|} \left[ \tau_{IR}(\pi) + K \right] \end{equation} where $h$ is the 4D metric induced on the brane, and $K$ is the extrinsic curvature\footnote{Notice that performing a 4+1 split and using ADM variables \cite{Arnowitt}, as done for example in \cite{Luty:2003vm}, the ``Gibbons-Hawking'' term involving $K$ is automatically canceled and only first derivatives in $z$ appear (see also \cite{Gibbons:1976ue} \cite{Wald:1984rg}).} of the boundary (brane). Notice that the 5D Planck scale $M_5$ is factored out and thus will not enter our considerations.
We will parametrize our potential by \begin{equation} \label{eq:PhiPotential} V(\pi) = -\frac{3}{L^2} +\frac{\epsilon}{L^2} P(\pi ) \, , \end{equation} where $\epsilon$ is a small parameter controlling the explicit breaking of the Goldstone symmetry $\pi \to \pi + c$. Notice that, while the shift symmetry is broken by the small parameter $\epsilon$ in the bulk, it is instead maximally broken by the tension at the IR boundary, see eq.~(\ref{eq:IRtension}). This situation is technically natural because of the locality of the UV divergent corrections to the $\pi$ potential: the breakdown of the Goldstone symmetry at the boundary cannot affect the bulk potential \footnote{We expect finite quantum effects to asymptotically vanish away from the IR brane as $z\to -\infty$.}. According to the AdS/CFT dictionary (see for instance ref.~\cite{Girardello:1998pd}), the dual running coupling $\lambda$ can be identified with $\pi $, while the corresponding $\beta$ function is \begin{equation} \beta(\lambda) =\frac{\epsilon}{4}\partial_\lambda P(\lambda)\equiv \epsilon \bar\beta(\lambda)\, .\label{betadual} \end{equation} By a direct inspection of the equations of motion, the condition to have an asymptotic AdS space at $z\to -\infty$ (that is, for the $\pi$ field back-reaction on the metric to vanish asymptotically) is \begin{equation} \epsilon P''(0) < 0\, .\label{UVfreedom} \end{equation} This condition precisely corresponds to the UV stability of the unperturbed $\lambda = 0$ fixed point in the dual CFT description. In what follows we shall assume $\epsilon >0$, $P''(0)<0$ without loss of generality. We shall also present some more details on the simple case of a quadratic tachyonic potential $P=-2\pi ^2$, for which $\beta = -\epsilon \lambda$, corresponding to a perturbation with fixed scaling dimension $\epsilon$ (i.e.\ a coupling of dimension $\epsilon$, dual to an operator of dimension $4-\epsilon$). However, our discussion applies to a generic flat potential ($\epsilon \ll 1$).
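As an illustration (this step is our addition, but follows directly from eq.~(\ref{betadual})), the dual RG flow in the quadratic example integrates trivially:
\begin{equation}
\mu\frac{d\lambda}{d\mu}=-\epsilon\,\lambda\qquad\Longrightarrow\qquad
\lambda(\mu)=\lambda(\mu_0)\left(\frac{\mu_0}{\mu}\right)^{\epsilon}\, ,
\end{equation}
so the coupling flows back to the UV-stable fixed point $\lambda=0$ as $\mu\to\infty$ and grows slowly towards the IR, consistently with eq.~(\ref{UVfreedom}).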
Consider now the equations of motion (EOM) that come from the variation of (\ref{eq:fullaction0}) when both $A$ and $\pi$ are functions of $z$ only, which corresponds to the most general solution with Poincar\'e symmetry. In the bulk the EOM read: \begin{eqnarray} \pi''+4 A' \pi' - \frac{\partial V}{\partial \pi} &=& 0 \label{eq:EOM1}\\ A''+ \frac{2}{3}(\pi')^2&=&0 \label{eq:EOM2} \\ (A')^2+ \frac{1}{3}V(\pi) -\frac{1}{6}(\pi')^2&=&0 \, , \label{eq:EOM3} \end{eqnarray} where here and below the primes denote the derivatives with respect to $z$, supplemented by the matching conditions on the brane: \begin{eqnarray} A' (z=z_{IR}) &=& \frac{1}{3} \tau_{IR}(\pi(z_{IR})) \label{eq:Matching1} \\ \pi' (z=z_{IR}) &=& -\frac{1}{2} \frac{\partial \tau_{IR} (\pi(z_{IR}))}{\partial \pi} \label{eq:Matching2} \, . \end{eqnarray} For $\epsilon=0$ these equations can be solved exactly and the solution is given by: \begin{eqnarray} A_0(z) = \frac{1}{4}\log \sinh \frac{4(z_*-z)}{L} - \frac{z_*}{L} + \frac{\log 2}{4} && [\epsilon=0] \label{eq:ASolutionZero} \\ \pi_0(z) = \pm \frac{\sqrt{6}}{4} \log \tanh \frac{2(z_*-z)}{L} + \pi_* && [\epsilon=0]\, . \label{eq:PhiSolutionZero} \end{eqnarray} The additive constant in $A(z)$ is fixed by the boundary condition $A(z)\to -z/L$ at $z\to -\infty$. The integration constants $z_*$ and $\pi_*$ are instead determined by the matching conditions (\ref{eq:Matching1})-(\ref{eq:Matching2}) once the tension is specified as a function of $\pi$. More precisely one has \begin{equation} z_*=z_{IR}+ c_* L\label{zstar}\, . \end{equation} where, assuming $\tau_{IR}(\pi)$ is a generic $O(1)$ function, we expect $c_*$ to be of order 1. Moreover, $c_*$ must be positive, otherwise the solution has a singularity at $z<z_{IR}$. In general there is only a discrete set of solutions and thus, up to a discrete ambiguity that is not important for our discussion, the parameters $c_*$ and $\pi_*$ are fixed by the dynamics, that is by $\tau_{IR}(\pi)$. 
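As a sanity check of eqs.~(\ref{eq:ASolutionZero})-(\ref{eq:PhiSolutionZero}), one can verify numerically that they solve the bulk equations of motion (\ref{eq:EOM1})-(\ref{eq:EOM3}). The following snippet is our addition, not part of the derivation; the values $L=1$, $z_*=0$, $\pi_*=0.3$ are arbitrary test choices, and the matching conditions (\ref{eq:Matching1})-(\ref{eq:Matching2}) are not checked since they depend on the choice of $\tau_{IR}(\pi)$:

```python
import math

# arbitrary test parameters: L = 1, z_star = 0, pi_star = 0.3
L, z_star, pi_star = 1.0, 0.0, 0.3

def A0(z):
    # eq. (ASolutionZero)
    return 0.25*math.log(math.sinh(4*(z_star - z)/L)) - z_star/L + math.log(2)/4

def pi0(z):
    # eq. (PhiSolutionZero), upper sign
    return (math.sqrt(6)/4)*math.log(math.tanh(2*(z_star - z)/L)) + pi_star

def d(f, z, h=1e-5):
    # central first derivative
    return (f(z + h) - f(z - h))/(2*h)

def d2(f, z, h=1e-4):
    # central second derivative
    return (f(z + h) - 2*f(z) + f(z - h))/h**2

V = -3/L**2  # epsilon = 0 bulk potential, so dV/dpi = 0

for z in (-0.5, -1.0, -2.0):
    eom1 = d2(pi0, z) + 4*d(A0, z)*d(pi0, z)              # (EOM1)
    eom2 = d2(A0, z) + (2/3)*d(pi0, z)**2                 # (EOM2)
    eom3 = d(A0, z)**2 + V/3 - (1/6)*d(pi0, z)**2         # (EOM3)
    assert max(abs(eom1), abs(eom2), abs(eom3)) < 1e-5, (z, eom1, eom2, eom3)
print("epsilon=0 background solves EOM1-EOM3 numerically")
```

The residuals are limited only by finite-difference accuracy, confirming the analytic background.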
Notice in particular that $c_*$ and $\pi_*$ do not depend on $z_{IR}$: by varying $z_{IR}$, and $z_*$ according to eq.~(\ref{zstar}), we obtain a family of solutions, satisfying the same boundary conditions. We conclude that $z_{IR}$ is a modulus and that the associated 4D scalar field in the Kaluza-Klein decomposition, the radion, must be massless. The presence of this modulus suggests we must have made a tuning. Where? Notice indeed that we did not fix a priori the boundary condition of $\pi$ at $z\to -\infty$, but rather determined it from the very existence of a Poincar\'e invariant solution. From eq.~(\ref{eq:PhiSolutionZero}) one finds $\lim_{z\to -\infty}\pi(z)=\pi_*$, a parameter purely fixed by the IR boundary condition. For any other choice of the asymptotic value of $\pi$ there would not exist a solution with Poincar\'e invariance. For these other choices we should find solutions with either dS or AdS residual isometry. The holographic dual of the above state of affairs is precisely what we described in Section \ref{sec:4Dpicture}: only for the specific choice $\lambda = \lambda_*$ of the marginal coupling do we have a vanishing dilaton potential allowing the breaking of $O(4,2)$ to the Poincar\'e group. For all other choices the breaking is either to dS or to AdS. As already said, this fine-tuning, corresponding to $\kappa\rightarrow 0$ in (\ref{eq:withKappa}), is an analogue of the Cosmological Constant problem in the scalar version of gravity \cite{Sundrum:2003yt}. We will see in the following how our construction can be considered as a dynamical solution to this problem. As we already remarked, $z_{IR}$ is a modulus. The corresponding family of solutions can simply be obtained by performing the (global) change of coordinates \begin{equation} z\to \tilde z= z-z_1\qquad\qquad x^\mu \to \tilde x^\mu =x^\mu e^{-z_1/L}\, .
\label{coordinateshift} \end{equation} which leaves the UV boundary conditions $A(z) \to -z/L$, $\pi \to \pi _*$ at $z\to -\infty$ unaffected, and which in practice just amounts to the shift \begin{eqnarray} z_{IR} &\rightarrow & z_{IR} - z_1 \equiv \tilde z_{IR} \nonumber\\ z_* &\rightarrow & z_* - z_1 \equiv \tilde z_*\, .\label{eq:shiftTransformation} \end{eqnarray} By this result the action is stationary under variations of $z_{IR}$, consistent with it being a modulus. Notice that eq.~(\ref{coordinateshift}) precisely corresponds to a 4D dilatation in the dual picture. Under this change of coordinates, the warp factor at the IR boundary changes as \begin{equation} e^{A_{IR}}\equiv e^{A(z_{IR})} \to e^{\tilde A_{IR}}= e^{A_{IR}} e^{z_1/L}\, . \end{equation} That is precisely how the dilaton $\varphi\propto e^\tau$ transforms. This is consistent with the familiar result from RS phenomenology, where the warp factor at the IR boundary can be interpreted, up to an overall normalization, as the interpolating field for the canonical dilaton $\varphi$. In Section \ref{subsec:radion} we shall discuss the dilaton mode in more detail. Moreover, in order to assess the validity of the solution we just found, we should also ensure that it is stable, i.e.\ that no Kaluza-Klein mode around it is a ghost or a tachyon. It can easily be checked that there are no ghosts, while in Appendix \ref{appendix:KK} we prove that tachyons are avoided by a mild and generic request on the IR brane tension: ${\partial^2 \tau_{IR} (\pi(z_{IR}))}/{\partial \pi^2}>0$. \subsection{The 5D solution at $\epsilon\not = 0$} \label{sec:5DsolutionEpdZero} Let us consider now the case where $\epsilon \not = 0$. In general the equations of motion cannot be solved exactly.
In principle we could proceed by treating $\epsilon $ as a small perturbation and by expanding the solution in a power series \begin{eqnarray} \pi&=&\pi_0+\epsilon \pi_1+\epsilon^2 \pi_2 +\dots\label{piseries}\\ A'&=& A'_0 +\epsilon A'_1+\epsilon^2 A'_2 +\dots\label{Aseries} \end{eqnarray} where $\pi_0$ and $A_0'$ are the zeroth order solutions in eqs.~(\ref{eq:ASolutionZero})-(\ref{eq:PhiSolutionZero}). However that only works for finite $z$. To see more explicitly what happens, let us fix, without loss of generality, $z_*=0$ in the unperturbed solution. We thus have $z_{IR}=-c_* L= O(L)$. We can then solve the equations of motion, order by order in $\epsilon$ starting from the IR brane: eqs.~(\ref{eq:EOM3})-(\ref{eq:Matching2}) fix the initial conditions for $\pi$, $\pi'$ and $A'$ and the solution ($\pi,A'$) is unique. The warp factor $A$ is then obtained by performing a further integration: the overall additive constant can for instance be fixed by the request $\lim_{z\to -\infty}A(z) = -z/L$. Now, notice that the unperturbed solution quickly enters its asymptotic behaviour at $-z/L > O(1)$ \begin{equation} \pi_0(z) =\pi_*+ O(e^{-4|z|/L})\qquad\qquad A'_0(z)= -\frac{1}{L} +O(e^{-8|z|/L})\, .\label{asymptotes} \end{equation} Using this result, by studying the linearized second order differential equation for $\pi_1$, in the region $-z/L\gg 1$ one finds \begin{equation} \epsilon \pi_1 = - \frac{\epsilon z}{4L} P'(\pi_*)+O(e^{-4|z|/L})\, .\label{firstorder} \end{equation} We conclude that for generic $P$ (in particular for quadratic $P$) we can treat the potential as a perturbation only as long as $\epsilon|z|/L \ll 1$. Moreover, in the region $ \ln 1/\epsilon \ll |z|/L \ll 1/\epsilon$, the exponentially decaying part in $\pi_0$ and $A'_0$ is subdominant to the $O(\epsilon)$ perturbation.
In that region, to first non-trivial order, the solution is then \begin{equation} \pi=\pi_*- \frac{\epsilon z}{4L} P'(\pi_*)+O(\epsilon^2)\qquad\qquad A'(z)= -\frac{1}{L} +\frac{\epsilon}{6L}P(\pi_*)+O(\epsilon^2)\label{nontrivial} \end{equation} The above equations provide the initial matching conditions in the region $ \ln 1/\epsilon \ll |z|/L \ll 1/\epsilon$ for the solution at large $z$. Indeed, by inspecting the equations of motion, one readily concludes that the solution matching eq.~(\ref{nontrivial}) satisfies, to leading order in $\epsilon$, a first-order differential equation \begin{equation} \pi' = \frac{-\epsilon }{4L} P'(\pi) \qquad \qquad A'(z)= -\frac{1}{L} +\frac{\epsilon}{6L}P(\pi) \label{ansatz} \end{equation} with (at leading order in $\epsilon$) boundary condition $\pi=\pi_*$ at $|z|/L=O(1)$. Indeed eq.~(\ref{ansatz}) is consistent with eq.~(\ref{nontrivial}) in the matching region, and when substituted into the equations of motion solves them up to $O(\epsilon^2)$ terms. In particular the term $\pi''$ in eq.~(\ref{eq:EOM1}) is of order $\epsilon^2$ according to eq.~(\ref{ansatz}), and thus subleading. The evolution of $\pi$ towards the conformal boundary thus follows a first-order differential equation, whose CFT interpretation is the RG equation for the dual coupling. The solution of eq.~(\ref{ansatz}) amounts to a resummation of all powers of $\epsilon z /L$ as $z\to -\infty$, while the neglected terms correspond to next-to-leading order powers $\epsilon (\epsilon z/L)^n$. The analogy with the RG resummation of leading logs is obvious. Notice that we worked under the assumption of asymptotic AdS geometry at $z \to - \infty$. It is therefore essential, for our whole picture to make sense, that the contribution of $\pi$ to the energy-momentum tensor vanish towards the boundary.
A sufficient condition for this to happen is that\footnote{Notice that $P(0)=0$ can always be achieved by redefining the bulk cosmological constant, while a stationary point $P'=0$ can always be set at $\pi=0$ by redefining $\pi$ via a constant shift.} $P(0)=0, P'(0)=0$ and $P''(0)<0$, in which case $\lim_{z\to -\infty}\pi = 0$ for some finite range of $\pi_*$. This situation corresponds to a UV stable fixed point in the dual theory. An example satisfying this criterion is given by the quadratic potential $P= -2\pi^2$. In this case the form of the solution in the asymptotic region is \begin{equation} \label{eq:AsymptPhiSolution} \pi(z \ll -L/\epsilon) = \pi_* e^{\epsilon z/L}\, \left( 1 + O(\epsilon e^{2\epsilon z/L}) \right) + \hat{\pi}_{*} e^{(4-\epsilon)z/L} \, \left( 1 + O(\epsilon e^{2\epsilon z/L})\right) \, , \end{equation} where $\hat{\pi}_*$ is an ``integration constant'' determined by the matching conditions including subleading terms, which we disregarded in the above general discussion. The leading term, scaling like $e^{\epsilon z/L}$, precisely corresponds to the solution of eq.~(\ref{ansatz}). Integrating $A'$ from eq.~(\ref{eq:EOM3}) using (\ref{eq:AsymptPhiSolution}), we find the leading correction to the AdS behaviour of the metric \begin{equation} \label{eq:AsymptASolution} A(z \ll -L/\epsilon) = -\frac{z}{L} +\frac{\pi_*^2 }{6} \left( 1-e^{2\epsilon z/L}+ O(\epsilon e^{2\epsilon z/L}) \right) \, . \end{equation} Notice that the additive constant $\frac{\pi_*^2 }{6}$ can in principle be removed in order to satisfy the boundary condition $\lim_{z\to -\infty}A(z) = -z/L$. However, if that is done, then in the region near the IR brane $A-A_0=O(1)$, while $A'-A_0'=O(\epsilon)$ everywhere. Our solution of the $\epsilon \not = 0$ case was obtained by perturbing around a given choice of the IR brane coordinate $z_{IR}=-c_* L$, corresponding to $z_*=0$ in the unperturbed solution.
It is pretty obvious that the asymptotic behaviour of $\pi$, which is now not constant, will depend on this choice. This is seen clearly by performing the coordinate shift in eq.~(\ref{coordinateshift}) which does not affect the asymptotic behaviour of the warp factor $A(z)$ but does change the asymptotic behaviour of $\pi$ \begin{equation} z_{IR}\to z_{IR}-z_1\qquad\qquad \pi(z)\to \pi(z+z_1)\, .\label{RG} \end{equation} According to this equation the position of the IR brane is in one-to-one correspondence with the value of the ``running'' field at any given test scale $z$. This is seen explicitly in the case of a quadratic potential, for which the shift in the solution can be translated into a change of its overall coefficient in eq.~(\ref{eq:AsymptPhiSolution}) \begin{equation} \label{eq:shiftTransformation2} \pi_* \rightarrow \pi_* e^{ \epsilon z_1/L} \, . \end{equation} Notice the change with respect to the $\epsilon =0$ case. In that case, $\pi$ evolves to an undetermined constant at $z\to -\infty$: in order to obtain a Poincar\'e invariant solution, we must tune the constant to be equal to $\pi_*$. Moreover the leading behaviour at infinity is not affected by a shift of the IR boundary, so we expect the radion to be exactly massless. In the case $\epsilon \not =0$, the Poincar\'e invariant solution is generic. Now the field $\pi$ is automatically driven to a fixed point $\pi=0$ at $z\to -\infty$, so that now the boundary condition on $\pi$ must specify the rate at which it approaches $0$. One convenient prescription is to pick a fixed value $\pi_{UV}$ near zero and define the boundary condition in terms of the value $z_{UV}$ of $z$ such that $\pi(z_{UV}) = \pi_{UV}$. Keeping $\pi_{UV}$ fixed and shifting $z_{UV}$, by eq.~(\ref{RG}) $z_{IR}$ shifts by the same amount. This correlation between the location of the IR brane and the choice of boundary condition implies that the radion is stabilized.
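The size of the resulting hierarchy can be made explicit in the quadratic example. The following estimate is our addition and assumes, as in Goldberger-Wise, that condensation occurs when the slowly running coupling reaches the critical value $\lambda_*$; using the RG solution $\lambda(\mu)=\lambda(\mu_{UV})(\mu_{UV}/\mu)^{\epsilon}$ one finds
\begin{equation}
\lambda(\langle\varphi\rangle)=\lambda_*\qquad\Longrightarrow\qquad
\langle\varphi\rangle=\mu_{UV}\left(\frac{\lambda(\mu_{UV})}{\lambda_*}\right)^{1/\epsilon}\, ,
\end{equation}
so a moderately small UV coupling $\lambda(\mu_{UV})<\lambda_*$ generates an exponentially large separation between $\mu_{UV}$ and the condensation scale when $\epsilon\ll 1$, in close analogy with dimensional transmutation.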
Moreover, in the limit where $\epsilon$ is small the radion mass will obviously be small. By translating to the 4D dual picture via the dictionary \begin{equation} \pi(z) \to \lambda(\mu) \qquad\qquad \frac{e^{-z/L}}{L} \to \mu \qquad\qquad \frac{e^{-z_{IR}/L}}{L}\to \langle \varphi\rangle \end{equation} one finds agreement with the discussion in Section \ref{sec:4Dpicture}. Our 5D model can thus be considered a dynamical solution to the Cosmological Constant problem of scalar gravity \cite{Sundrum:2003yt}. Our logic implies, for $\epsilon \ll 1$, a radion with mass $m_\varphi$ parametrically smaller than the Kaluza-Klein gap $m_{KK}\sim e^{-z_{IR}/L}/L$, corresponding to an effectively small quartic $\kappa$ around the minimum. In the remaining Sections of the paper we shall compute the dilaton effective action working at the first non-trivial order in $\epsilon$. We shall proceed in three steps, as follows. \noindent{\bf 1.} We shall first find the radion/dilaton effective Lagrangian at $\epsilon =0$ and truncated to two derivative terms. In practice we shall find a mode that acts as a good interpolating field for the dilaton. In particular, at zero momentum it is diffeomorphic to the unperturbed solution, implying that its action involves at least two derivatives. As shown in Appendix A, that property also holds true when KK excitations are turned on. The two derivative effective action is then simply obtained by substituting the mode into the 5D action, while the effect of massive KK exchange affects the action starting at four derivatives. This is because the mixing between the radion and any massive KK mode starts at $O(\partial^2)$: integrating out the KK at tree level one obtains a $O(\partial^4)$ correction. \noindent{\bf 2.} Still focussing on the $\epsilon = 0$ case, we shall consider the leading correction to the effective action that arises when the boundary condition $\lim_{z\to -\infty} \pi = \pi_*$ is relaxed.
We shall consider $\lim_{z\to -\infty} \pi = \pi_*+\Delta \pi_\infty$ and, treating $\Delta\pi_\infty$ as a small quantity, we shall compute the correction to the dilaton potential at leading linear order in $\Delta \pi_\infty$. Notice that with this modified boundary condition eqs.~(\ref{eq:ASolutionZero})-(\ref{eq:PhiSolutionZero}) will no longer be a stationary point, but that does not matter provided that we derive the resulting effective action keeping all effects. Indicating by $\Phi_0$ the $\epsilon=0$ solution and by $\psi_0$ and $\psi_{0KK}$ respectively the radion and the most general massive KK fluctuation around it, we can expand the action \begin{equation} S(\Phi_0+\psi_0+\psi_{0KK}, \pi_*+\Delta\pi_\infty)\label{totalaction} \end{equation} in $\Delta \pi_\infty$ and in KK modes. As in the previous case, keeping $\psi_0$ as our low energy field and integrating out the massive KK modes, we find that the effect of the latter integration only starts at order $(\Delta \pi_\infty)^2$ and at order $\Delta \pi_\infty \times \partial^2$. The leading $O(\partial^2)$ and $O(\Delta\pi_\infty)$ action is then simply \begin{equation} S(\Phi_0+\psi_0, \pi_*)+\Delta\pi_\infty \partial_{\pi_*}S(\Phi_0+\psi_0, \pi_*)\label{deltapi} \end{equation} The low energy effective theory described by the above Lagrangian will not possess Poincar\'e invariant solutions for $\Delta\pi_\infty\not = 0$. But as long as $\Delta\pi_\infty$ is small, the resulting solutions with non-trivial spacetime-dependent dilaton profile will give a valid effective 4D description of the corresponding 5D exact solutions. In Section \ref{subsec:radion1}, we shall directly compute the action of eq.~(\ref{deltapi}), while in Section \ref{subsec:radion2} we shall deduce it indirectly by matching 5D solutions with AdS4 symmetry to the corresponding solutions in the 4D effective dilaton theory.
\noindent{\bf 3.} According to the dual 4D picture discussed in Section \ref{sec:4Dpicture} the potential at order $O(\Delta\pi_\infty)$ corresponds to the term $(\lambda-\lambda_*) \kappa'(\lambda_*)\varphi^4$. According to the discussion in that Section, once $\kappa'(\lambda_*)$ is known, RG considerations are sufficient to compute the dilaton mass at $O(\epsilon)$. This is the route we shall follow here, keeping in mind that in the 5D language RG invariance corresponds to the global dilatation diffeomorphism. We should also keep in mind that we can view this third step as the addition of the $O(\epsilon)$ perturbation in eq. (\ref{totalaction}). As we did before one may worry about the effects arising from integrating out the massive KK's. Again, by taking into account that the massive KK's do not linearly mix with the dilaton in the unperturbed $\epsilon = \Delta\pi_\infty=0$ case, we conclude that these effects give terms that are at most $O(\epsilon \Delta\pi_\infty)$ and $O(\epsilon \partial^2)$. They therefore affect the radion squared mass only at order $\epsilon^2$. On the other hand, according to the discussion in Section \ref{sec:4Dpicture}, the $O(\Delta\pi_\infty)$ correction in eq.~(\ref{deltapi}) gives $m_\varphi =O(\epsilon)$. \subsection{The dilaton mode at $\epsilon=0$} \label{subsec:radion} Let us start from the case $\epsilon=0$ in which, as already said, we should find a vanishing radion-dilaton potential. This is the step {\bf 1} we outlined above.
A convenient parametrization of the radion mode is given by the metric \begin{eqnarray} ds^2&=&e^{2 \hat{A}(x,z)}\eta_{\mu\nu}dx^\mu dx^\nu-\hat{B}(x,z)^2 dz^2 \label{metricdilaton}\\ \hat{A}(x,z)&=&A_0(z+c(z)r(x))-r(x)/L, \label{dilaton0}\\ \hat{B}(x,z)&=&1+ c'(z) r(x), \label{dilaton1} \end{eqnarray} and scalar field \begin{equation} \hat\pi(x,z)=\pi_0(z+c(z)r(x)) \label{dilaton2} \end{equation} where $A_0$ and $\pi_0$ are the solutions in eqs.~(\ref{eq:ASolutionZero})-(\ref{eq:PhiSolutionZero}) for the choice $z_*=0$, and $c(z)$ is a function such that $c(z_{IR})=0$ and $c(-\infty)=-1$. Notice that, given the behaviour of $\hat{A_0}(x,z)$ and $\hat\pi_0(x,z)$ at $z\to -\infty$, the above mode has a finite action, i.e.\ it is normalizable. Moreover when $r$ is constant over spacetime the mode can be eliminated by the change of coordinates $\tilde z = z+c(z) r$, $\tilde x^\mu = e^{-r/L} x_\mu$, which does not affect the coordinate of the IR boundary and the asymptotic behaviour of the fields. We conclude that $r(x)$ has vanishing potential, and as such is a good interpolating field for the massless radion. Notice that there remains some degree of arbitrariness in the choice of the radion wavefunction. All functions $c(z)$ satisfying the same boundary conditions should be equally good. Different choices of $c(z)$ will affect the mixing between massive KK's and the radion, and will be reflected in the $O(\partial^4)$ effective action, which we do not care about\footnote{There should however exist a specific choice of $c(z)$ such that the quadratic mixing with the KK's vanishes \cite{Charmousis:1999rg}.}. On the other hand we expect the leading $O(\partial^2)$ action to be unaffected by the freedom in the choice of $c$, as we shall now verify\footnote{In Appendix \ref{app:gaugefixing} we shall discuss in more detail the realization of 4D dilations in the presence of a spacetime dependent $r$ and of KK excitations as well.}.
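To make fully explicit the statement that a constant $r$ is pure gauge (a short check we add here; it is implicit in the discussion above), take $r(x)=r$ constant in eqs.~(\ref{metricdilaton})-(\ref{dilaton2}) and change coordinates to $\tilde z=z+c(z) r$, $\tilde x^\mu=e^{-r/L}x^\mu$. Then
\begin{equation}
\hat{B}^2 dz^2=\left(1+c'(z)r\right)^2dz^2=d\tilde z^2\, ,\qquad
e^{2\hat{A}}\eta_{\mu\nu}dx^\mu dx^\nu=e^{2A_0(\tilde z)}e^{-2r/L}\eta_{\mu\nu}dx^\mu dx^\nu=e^{2A_0(\tilde z)}\eta_{\mu\nu}d\tilde x^\mu d\tilde x^\nu\, ,
\end{equation}
while $\hat\pi=\pi_0(\tilde z)$: the configuration is exactly the $r=0$ background in the new coordinates, so the potential for constant $r$ vanishes identically.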
To compute the radion effective action, we simply plug eqs.~(\ref{metricdilaton})-(\ref{dilaton2}) into the action (\ref{eq:fullaction0}). The resulting expression is \begin{eqnarray} \frac{S}{M_5^3} &=& \int d^4 x \int_{-\infty}^{z_{IR}} dz \, \left\{ e^{2\hat{A}} \left[ -\frac{3}{2}\eta_{\mu\nu}(\partial_\mu \hat{A})(\partial_\nu \hat{B}) -\frac{3}{2}\eta_{\mu\nu}\hat{B}(\partial_\mu \hat{A})(\partial_\nu \hat{A}) + \frac{1}{2}\hat{B}\eta_{\mu\nu}(\partial_\mu \hat{\pi})(\partial_\nu \hat{\pi}) \right] \right. \nonumber \\ && \left. + e^{4\hat{A}}\left[ 2\frac{ {\hat{A}'} {\hat{B}'}}{\hat{B}^2} - 5\frac{{(\hat{A}')}^2}{\hat{B}} - 2 \frac{ {\hat{A}''}}{\hat{B}} - \frac{ {(\hat{\pi}')}^2}{2\hat{B}} - \hat{B}V(\hat{\pi}) \right] \right\} -\int d^4 x \, e^{4\hat{A}} \left. \left[ \frac{\tau_{IR}(\hat{\pi})}{2}-2 {\hat{A}'} \right]\right|_{z=z_{IR}} \, . \label{eq:radionAction} \end{eqnarray} By making use of the explicit expressions (\ref{dilaton0})-(\ref{dilaton2}) one finds the kinetic term: \begin{eqnarray} S_{kinetic} &=&(M_5 L)^3 \int d^4 x \frac{(\partial_\mu r)^2}{2L^4} e^{-2(z_{IR}+r(x))/L}\, \times\, Z(z_{IR} ) \label{eq:kineticterm} \\ Z(z_{IR} ) &=& \left( \frac{3}{2}+ 3e^{2z_{IR}/L} \int_{-\infty}^{z_{IR}} \frac{dz}{L} (e^{-2 z/L} - e^{2A_0(z)}) \right) \, , \nonumber \end{eqnarray} which does not depend on the form of $c$ as expected (we recall that $z_*$ has been set to zero without loss of generality). On the other hand by looking at the non-derivative interactions we can derive the radion potential, which can be written as a boundary term: \begin{equation} \label{eq:vanishingpotential} S_{IR} = -\frac{(M_5 L)^3}{2}\int \frac{d^4 x}{L^4}\, e^{4(A_0(z_{IR})-r/L)} \left( \tau_{IR}(\pi_0(z_{IR})) - 3 A_0'(z_{IR}) \right) \, . \end{equation} This has precisely the expected form of a quartic term, as in (\ref{eq:withKappa}). The coefficient is however exactly zero thanks to the matching condition (\ref{eq:Matching1}).
The coefficient $Z$ for the kinetic term corresponds to the result in the RS model $Z=\frac{3}{2}$ up to a correction that measures the effect of the backreaction of $\pi$ on the metric. Notice that $Z$ is always positive. It is natural to identify the dilaton with \begin{equation} \varphi =\frac{\sqrt{Z}}{L} e^{-(z_{IR}+r)/L}\label{dilatoncanonical} \end{equation} and to make explicit the ``large $N$ factor'' $N^2\equiv (M_5 L)^3$ in the kinetic term \begin{equation} {\cal L}_{kin}=\frac{N^2}{2} \partial_\mu \varphi\partial^\mu \varphi \end{equation} Moreover, under a dilation diffeomorphism $\tilde z = z+c(z) z_1$, $\tilde x^\mu = e^{-z_1/L} x_\mu$ the field $\varphi(x)$ does indeed transform as expected: \begin{equation} \varphi(x)\to \tilde \varphi(x)= e^{z_1/L} \varphi(e^{z_1/L} x)\, .\label{dilatonscale} \end{equation} It is interesting to ask how things change when $\epsilon\not = 0$. Naively it seems that by replacing $A_0$ and $\pi_0$ with the $\epsilon\not =0$ solution $(A,\pi)$ in eqs.~(\ref{metricdilaton})-(\ref{dilaton2}), we can construct a mode that reduces to a change of coordinates at zero momentum. The problem with such a mode is that it is not normalizable. The reason for that is the slow approach to the asymptote, now $\pi =0$, at the conformal boundary. At $\epsilon\not = 0$ there is no normalizable mode behaving like a pure change of coordinates at zero momentum, and thus we conclude that all modes are expected to have a potential. Pure changes of coordinate, however, still constrain the form of the resulting potential.
Indicating by $A=A_0+\Delta A$ the warp factor in the $\epsilon \not = 0$ case, a normalizable mode interpolating for the dilaton could now be written as in eq.~(\ref{metricdilaton}) with \begin{eqnarray} \hat{A}(x,z)&=&A_0(z+c(z)r(x))+\Delta A(z+b(z) r(x))-r(x)/L, \label{dilaton0b}\\ \hat{B}(x,z)&=&1+ c'(z) r(x) \label{dilaton1b}\\ \hat\pi(x,z)&=&\pi(z+b(z)r(x)) \label{dilaton2b} \end{eqnarray} where $c$ satisfies the same boundary conditions as before, while $b$ coincides with $c$ at finite $z$, in particular $b(z_{IR})=0$, but goes to zero at $z\to -\infty$ fast enough to ensure normalizability. Since $b\not = c$ the diffeomorphism $\tilde z = z+c(z) z_1$, $\tilde x^\mu = e^{-z_1/L} x_\mu$ now changes the functional form of the asymptotic behaviour of the terms associated with $\Delta A$ and $\pi$. At lowest order in $z_1$ and $r$ we have \begin{equation} \pi(z+ b(z) r)\to \pi(z-(c(z)-b(z))z_1 + b(z) (r-z_1)) \end{equation} and similarly for $\Delta A$. Notice that in the asymptotic region $\Delta A$ can be expanded in a power series in $\pi$. Therefore asymptotically the above equation amounts to changing \begin{equation} \pi(z) \to \pi (z+z_1)\label{piRG} \end{equation} We conclude that in the $\epsilon \not = 0$ case, eq.~(\ref{dilatonscale}) must be supplemented with the spurious transformation eq.~(\ref{piRG}) to leave the action invariant, with the obvious dual RG interpretation. In particular, in order to respect the spurious scale invariance the potential must have the form \begin{equation} \kappa (\pi(r))e^{-4r/L}\label{colemanweinberg} \end{equation} where $\pi(r)$ is invariant under the combined action of eqs.~(\ref{piRG}) and (\ref{dilatonscale}) (that is $r\to r-z_1$). \subsection{Dilaton quartic: first approach} \label{subsec:radion1} We now carry out step {\bf 2} outlined in Section \ref{sec:5DsolutionEpdZero}. Still working at $\epsilon=0$ we compute the dilaton quartic at lowest order in the detuning parameter $\Delta \pi_\infty$.
In order to do that we simply have to compute the dilaton action over a shifted $\pi$ background: in practice this amounts to taking $\hat \pi = \pi_0(z+c r) + \Delta \pi_\infty$ in eq.~(\ref{dilaton2}). Notice that such a shift has no effect in the bulk, as the action there only depends on $\partial \pi$. In particular the shifted fields are still a solution of the bulk equations of motion. The only contribution comes from the boundary tension, which at linear order in $\Delta \pi_\infty$ gives a dilaton potential \begin{equation} V= N^2 \Delta \pi_\infty \frac{\partial\tau_{IR}}{\partial \pi}\Big |_{\pi = \pi_{IR}}\frac{e^{4 (A(z_{IR})+z_{IR}/L)}}{2Z^2} \, \varphi^4 \end{equation} In terms of a general dilaton potential of the form $V = N^2 \kappa(\pi_\infty) \varphi^4$, this corresponds to \begin{equation} \frac{\partial \kappa}{\partial \pi_\infty}\Big |_{\pi_\infty = \pi_*} = \frac{\partial\tau_{IR}}{\partial \pi}\Big |_{\pi = \pi_{IR}}\frac{e^{4 (A(z_{IR})+z_{IR}/L)}}{2Z^2} \, .\label{linearquartic} \end{equation} Using the expected general form (\ref{colemanweinberg}) for the potential in the presence of a slowly evolving $\pi$, and carrying through precisely the same reasoning that led to eq.~(\ref{eq:mphiGenericDiscussion}), we find the dilaton squared mass at leading $O(\epsilon)$: \begin{equation} \label{eq:final:approach2} m_\varphi^2 = \epsilon \, P'(\pi_*) \, \tau'_{IR}(\pi(z_{IR}))\, \frac{e^{4 A(z_{IR})+2z_{IR}/L}}{2\, L^2\, Z(z_{IR}) } \end{equation} where we used also eq.~(\ref{betadual}) and eq.~(\ref{dilatoncanonical}). \subsection{Dilaton quartic: second approach} \label{subsec:radion2} From the discussion in Sections \ref{subsec:radion} and \ref{subsec:radion1}, interpreted from a purely 4D point of view, we deduce the dilaton Lagrangian: \begin{eqnarray} \mathcal{L} &=& N^2 \left( \frac{1}{2} \, \partial_\mu \varphi\partial^\mu \varphi - \kappa \, \varphi^4 \right) \, .
\\ \kappa &=& \Delta \pi_\infty \frac{\partial\tau}{\partial \pi}\Big |_{\pi = \pi_{IR}}\frac{e^{4 (A(z_{IR})+z_{IR}/L)}}{2Z^2} \label{eq:kappaSecondApproach} \end{eqnarray} Following \cite{fubini}, this corresponds to a dilaton VEV with dS4 or AdS4 symmetry, of the type (\ref{pattern}). It should then be possible to deduce the quartic coupling by solving the EOM in AdS5 with detuned asymptotic condition $\pi_\infty = \pi_* + \Delta\pi_\infty$, and then look at the curvature of the 4D sections. We present this alternative approach in this Section. We start from the metric: \begin{eqnarray} \label{eq:metricWithCC} ds^2 &=& e^{2 A(z)} g_{\mu\nu}(\bar\Lambda) dx^\mu dx^\nu - dz^2 \end{eqnarray} where: \begin{equation} \label{eq:dSorAdS} g_{\mu\nu}(\bar\Lambda) dx^\mu dx^\nu = \left\{ \begin{aligned} & \frac{1}{(\sqrt{\bar\Lambda} t)^2} \eta_{\mu\nu} dx^\mu dx^\nu & \text{ (dS4) if } \bar\Lambda>0 \\ & \frac{1}{(\sqrt{-\bar\Lambda} x_3)^2} \eta_{\mu\nu} dx^\mu dx^\nu & \text{ (AdS4) if } \bar\Lambda<0 \end{aligned} \right. \, , \end{equation} while the bulk field is $\pi = \pi(z)$. The EOM now read \cite{Kaloper:1999sm}\cite{DeWolfe:1999cp}: \begin{eqnarray} \pi''+4 A' \pi' - \frac{\partial V}{\partial \pi} &=& 0 \label{eq:EOM1L}\\ A'' + \bar\Lambda e^{-2A}+ \frac{2}{3}(\pi')^2&=&0 \label{eq:EOM2L} \\ (A')^2 -\bar\Lambda e^{-2A}+ \frac{1}{3}V(\pi) -\frac{1}{6}(\pi')^2&=&0 \, , \label{eq:EOM3L} \end{eqnarray} with the same matching conditions on the brane (\ref{eq:Matching1})-(\ref{eq:Matching2}). To connect $\bar\Lambda$ with the quartic coupling $\kappa$ (\ref{eq:withKappa}) of the dilaton potential, we compare the curvature of the 4D sections.
To be specific, if one starts from the dilaton field $\varphi$ in the 4D theory, with potential given by (\ref{eq:withKappa}), then the metric that is seen by matter is: \begin{equation} ds^2 = \frac{L^2 \, \varphi^2(x)}{Z}\, e^{2(A(z_{IR}) + z_{IR}/L)} \, \eta_{\mu\nu} \, dx^\mu \, dx^\nu \end{equation} where $\varphi(x)$ is given by (\ref{pattern}). Up to a change of coordinates, this is equivalent to (\ref{eq:dSorAdS}) with the identification: \begin{equation} \label{eq:identificationQuarticLambda} \bar\Lambda = -\frac{2\kappa \, Z \, e^{-2z_{IR}/L}}{L^2} \, . \end{equation} We now want to derive an expression for $\bar\Lambda $ by solving the EOM with a detuned asymptotic value for the bulk scalar, $\pi_\infty = \pi_* + \Delta\pi_\infty$, and then check that we recover (\ref{eq:kappaSecondApproach}) through (\ref{eq:identificationQuarticLambda}). This computation is detailed in Appendix~\ref{appendix}. Since we are interested in the solution close to the minimum of the radion potential, it is enough to solve the EOM at linear order in $\bar \Lambda$. In principle, however, to fully compute the potential one has to find $\bar\Lambda$ by solving the EOM (\ref{eq:EOM1L})-(\ref{eq:EOM3L}). In our case we define: \begin{eqnarray} A(z) &\equiv& A_0(z)+\bar \Lambda \, \bar A_1(z) \\ \pi(z) &\equiv& \pi_0(z)+\bar \Lambda \, \bar \pi_1(z) \end{eqnarray} and analogously for the derivatives. We then impose the matching conditions (\ref{eq:Matching1})-(\ref{eq:Matching2}), which uniquely fix the values $\pi_*$ and $z_{IR}$ in the case $\bar \Lambda=0$. Changing the asymptotic value of the field profile from $\pi_*$ to $\pi_\infty = \pi_* + \Delta\pi_\infty$ will require a non-vanishing $\bar \Lambda$ in order for the IR matching conditions to be satisfied again, together with a shift $\Delta z$ in the position $z_{IR}$ of the IR brane.
At linear order in $\bar \Lambda$ one finds: \begin{eqnarray} \Delta z \, \pi''_0+\bar \Lambda\, \bar \pi'_1&=&-\frac{1}{2}\frac{\partial^2 \tau_{IR}}{\partial \pi^2}\, \tilde\Delta\pi \nonumber \\ \Delta z\, A''_0+\bar \Lambda\, \bar A'_1&=&-\frac{2}{3}\pi'_0\, \tilde\Delta\pi \, , \label{matchingIR} \end{eqnarray} where $\tilde\Delta\pi=\Delta\pi_\infty+\Delta z\, \pi'_0+\bar \Lambda \bar \pi_1$. Solving the system (\ref{matchingIR}) for the two unknowns $\bar \Lambda$ and $\Delta z $ we find: \begin{equation} \label{eq:resultLambdabar0} {L^2} \frac{ \bar\Lambda}{\Delta \pi_\infty} = \frac{-2 \frac{\partial\tau_{IR}}{\partial\pi}\pi_0'' -3\frac{\partial^2 \tau_{IR}}{\partial \pi^2} A_0''} {\bar \pi_1 (2 \frac{\partial\tau_{IR}}{\partial\pi}\pi_0'' +3\frac{\partial^2 \tau_{IR}}{\partial \pi^2} A_0'') +\bar \pi_1' (6 A_0'' -2 \pi_0' \frac{\partial\tau_{IR}}{\partial\pi}) + \bar A'_1 (-6 \pi''_0 -3 \pi'_0 \frac{\partial^2\tau_{IR}}{\partial\pi^2})} \end{equation} where all the functions are evaluated on the IR brane. Using the zeroth order matching conditions (\ref{eq:Matching1})-(\ref{eq:Matching2}), eq.~(\ref{eq:resultLambdabar0}) simplifies to \begin{equation} \label{eq:finalFormula} \frac{\bar\Lambda}{\Delta \pi_\infty} = -\frac{1}{L^2}\times \left. \frac{\frac{\partial\tau_{IR}}{\partial\pi}}{\frac{\partial\tau_{IR}}{\partial\pi} \bar \pi_1 - 3\bar A_1'} \right|_{z_{IR}} \, . \end{equation} Using the results in Appendix \ref{appendix}, in particular (\ref{intformula}), one can straightforwardly show that this result is consistent with eq.~(\ref{eq:kappaSecondApproach}), through eq.~(\ref{eq:identificationQuarticLambda}). \subsection{The limit of negligible backreaction} \label{subsec:radion0} As a final check of our results, we consider the limit of negligible backreaction, where the computation is much simpler.
Working with $z_*=0$, this limit is obtained when $\tau_{IR}$ is such that the brane stabilizes at a position where: \begin{equation} \left| \frac{z_{IR}}{L}\right| \gg 1 \end{equation} so that the local geometry is well approximated by AdS5 everywhere, that is $A(z)\simeq -z/L$. More precisely, starting from a zeroth order solution with $\pi=\pi_{IR}$ and $A(z)= -z/L$, corresponding to $f=0$, $f'=0$ and $\epsilon=0$, we can solve the EOM in an order-by-order expansion in the latter three quantities, treated as small parameters. Indeed, by considering the corrections $\Delta T$ to the energy momentum tensor that come from $\pi$ and from the IR brane tension, and demanding that they be subdominant to the bulk cosmological constant, one obtains \begin{equation} f(\pi_{IR})\ll 1 \qquad ( f^\prime(\pi_{IR}) )^2\ll1 \qquad \epsilon \ll 1 \end{equation} where the second condition comes from the $(\pi^\prime)^2$ contribution to $\Delta T$ after having taken the IR brane matching condition into account. Notice that, strictly speaking, provided a point where both $f$ and $f^\prime$ vanish exists, eqs.~(\ref{eq:EOM1})-(\ref{eq:Matching2}) admit the unperturbed slice-of-AdS solution. In this respect we do not need to impose that higher derivatives of $f$ be small. In particular $f''$ could be $O(1)$. Obviously the existence of a value of $\pi$ such that $f=f^\prime=0$ requires tuning: a tuning that only lends us some computational edge, but which is conceptually not needed. Let us focus for definiteness on the case of a quadratic bulk potential $P(\pi)=-2\pi^2$. Once $\epsilon \not = 0$, the leading order $\pi$ solution is given over all space by (see eq.~(\ref{eq:AsymptPhiSolution})) \begin{equation} \label{GWpi} \pi = \pi_* e^{\epsilon z/L}\, + \hat{\pi}_{*} e^{(4-\epsilon)z/L}\, . \end{equation} In order to make contact with eq.~(\ref{dilatoncanonical}) we parametrize the radion by considering a displaced brane at the position $z_{IR}+r$.
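As a quick cross-check of the profile above, note that both exponents solve the same linearized bulk equation. Assuming this equation takes the form $\pi'' - (4/L)\pi' = m^2\pi$ with $m^2 L^2 = \epsilon(\epsilon-4)$ (the standard identification for a nearly marginal scalar; this normalization is our assumption, not spelled out in the text), a minimal symbolic verification reads:

```python
# Symbolic check (sympy) that both exponents in the GW-like profile solve
# the same linearized bulk equation pi'' - (4/L) pi' = m^2 pi, with
# m^2 L^2 = epsilon (epsilon - 4). Illustrative sketch only.
import sympy as sp

z, L, eps = sp.symbols('z L epsilon', positive=True)
m2 = eps*(eps - 4)/L**2          # assumed bulk mass of the nearly marginal scalar

for Delta in (eps, 4 - eps):     # the two exponents appearing in the profile
    pi = sp.exp(Delta*z/L)
    residual = sp.diff(pi, z, 2) - 4/L*sp.diff(pi, z) - m2*pi
    assert sp.simplify(residual) == 0
```

Both exponents give the same residual because $\Delta(\Delta-4)$ is invariant under $\Delta \to 4-\Delta$.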
The matching condition (\ref{eq:Matching2}) then reads \begin{equation} \label{eq:matching:noback} \pi(z_{IR}+r) = -\frac{1}{2(4-\epsilon)} \frac{\partial f}{\partial \pi} (\pi(z_{IR}+r)) + \frac{4-2\epsilon}{4-\epsilon} \, \pi_* \, e^{\epsilon (z_{IR}+r)/L} \, \equiv g(\pi_* e^{\epsilon (z_{IR}+r)/L}) \, . \end{equation} Notice that in the limit $\epsilon =0$, $f'(\pi_{IR})=0$, this condition implies $\pi(z)=\pi_{IR}=const$. Indicating the canonical dilaton as in eq.~(\ref{dilatoncanonical}), with now $Z\simeq \frac{3}{2}$, and substituting the solution of the EOM into the action, one obtains: \begin{equation} \label{eq:effActionSmallBackreac} S = N^2 \, \int d^4 x \, \left( \frac{1}{2}(\partial \varphi)^2 -\varphi^4 \kappa(\pi(\varphi)) \right) \end{equation} where, keeping only the terms that are not suppressed by powers of $\epsilon$: \begin{equation} \label{eq:dilatonpotential:noback} \kappa(\pi(\varphi)) = \frac{1}{2 Z^2}\left( f(g(\pi(\varphi))) -\frac{1}{2}\left( g(\pi(\varphi)) - \pi(\varphi) \right) \, \frac{\partial f}{\partial \pi}(g(\pi(\varphi))) \right) \, , \end{equation} and we defined the running coupling: \begin{equation} \pi(\varphi) = \pi_* \left( \frac{L}{ Z^{\frac{1}{2}} } \varphi \right)^{-\epsilon} \, . \end{equation} Notice that the term $\pi(\varphi)$ subtracted from $g(\pi(\varphi))$ in the second term arises from the integration in the region $z\to -\infty$. The reason for this contribution is that in the computation \`a la GW we did not parametrize the radion with a localized mode: the profile of $\pi$ towards the conformal boundary depends on $r$.
Now, since: \begin{equation} g(\pi(\varphi)) - \pi(\varphi) = -\frac{\epsilon}{4}\pi(\varphi)-\frac{1}{2(4-\epsilon)} \frac{\partial f}{\partial \pi} (g(\pi(\varphi))) = -\frac{1}{8} \frac{\partial f}{\partial \pi} (g(\pi(\varphi)))+ O(\epsilon) \end{equation} we conclude that, at leading order in $\epsilon$, the above result coincides with: \begin{equation} \kappa(\pi(\varphi)) =\frac{1}{2Z^2}\left (f(g(\pi(\varphi)))+\frac{1}{16}( \, f'(g(\pi(\varphi))) \, )^2\right ) \end{equation} We can quickly compare this result with eq.~(\ref{linearquartic}) by considering the limit $\epsilon =0$ and taking the first derivative with respect to $\pi_*$. In agreement with eq.~(\ref{linearquartic}) we find: \begin{equation} \frac{\partial \kappa}{\partial \pi_*}=\frac{1}{2Z^2}\left (f'(g(\pi_*))+\frac{1}{8}f''(g(\pi_*))f'(g(\pi_*))\right )\frac{\partial g}{\partial \pi_*}=\frac{1}{2Z^2}f'(g(\pi_*))\equiv \frac{2}{9}f'(\pi_{IR})\label{quarticnobac} \end{equation} where we used eq.~(\ref{eq:matching:noback}) to derive: \begin{equation} \frac{\partial g}{\partial \pi_*}=\frac{1}{1+\frac{1}{8}f''(g(\pi_*))}\, . \end{equation} The consistency with our previous results for the mass of the dilaton at leading order in $\epsilon$ follows straightforwardly. We should notice that, by considering eqs.~(\ref{eq:EOM3})-(\ref{eq:Matching2}) in the limit $\epsilon =0$, one obtains the following condition on $\pi_{IR}$: \begin{equation} f(\pi_{IR})+\frac{1}{16}(f'(\pi_{IR}))^2-\frac{1}{6}(f(\pi_{IR}))^2=0 \end{equation} which would coincide with eq.~(\ref{quarticnobac}) were it not for the $f^2$ term. This is consistent with our leading approximation, where $f$ and $(f')^2$ are independent corrections to the energy momentum tensor, and should be considered as equally important. However, $f^2$ is subdominant to $f$. In order to consistently take those higher orders into account we would have to consider the backreaction of the metric.
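The chain of identities leading to eq.~(\ref{quarticnobac}) can also be verified symbolically. The sketch below is illustrative: it takes as input $Z=3/2$, the $\epsilon\to 0$ matching relation $g = -f'(g)/8 + \pi_*$, and the leading-order quartic $\kappa = (f(g) + f'(g)^2/16)/(2Z^2)$, and checks by implicit differentiation both $\partial g/\partial \pi_* = 1/(1+f''(g)/8)$ and $\partial \kappa/\partial \pi_* = \frac{2}{9} f'(g)$:

```python
# Illustrative sympy check of the derivative chain: g is treated as an
# independent symbol tied to pi_star by the implicit matching relation
# F(g, pi_star) = g + f'(g)/8 - pi_star = 0 (the eps -> 0 limit).
import sympy as sp

ps, g = sp.symbols('pi_star g')
f = sp.Function('f')
Z = sp.Rational(3, 2)

kappa = (f(g) + f(g).diff(g)**2/16) / (2*Z**2)
F = g + f(g).diff(g)/8 - ps

dg_dps = -F.diff(ps)/F.diff(g)       # implicit differentiation
assert sp.simplify(dg_dps - 1/(1 + f(g).diff(g, 2)/8)) == 0

dkappa_dps = sp.simplify(kappa.diff(g)*dg_dps)
# the (1 + f''(g)/8) factors cancel, leaving f'(g)/(2 Z^2) = (2/9) f'(g)
assert sp.simplify(dkappa_dps - sp.Rational(2, 9)*f(g).diff(g)) == 0
```
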
\section{Conclusions} \label{sec:conclusions} The issue of spontaneous breaking of the conformal group $O(N,2)$ in N-dimensional quantum field theory resembles very closely the cosmological constant problem in general relativity. In the former case, the symmetry of the system does not forbid the associated Goldstone boson (dilaton) from having a quartic potential. The resulting pattern of symmetry breaking to either de Sitter, anti-de Sitter or Poincar\'e subgroups of $O(N,2)$ then follows respectively from the choice $\kappa<0$, $\kappa>0$ and $\kappa=0$ for the quartic potential. In particular, the Poincar\'e subgroup is selected only in a subset of measure zero of the parameter space. It thus appears rather non-generic. In the case of gravity, a similar pattern emerges because the relevant symmetry, diffeomorphism invariance, does not prevent the presence of a potential term, the cosmological constant $\Lambda$, for the associated gauge field. The (maximally symmetric) solutions are then either de Sitter, anti-de Sitter or Poincar\'e, depending on $\Lambda>0$, $\Lambda<0$ or $\Lambda =0$. The cosmological constant problem lies in the apparent non-genericity of the choice $\Lambda=0$. As nicely elucidated by Sundrum \cite{Sundrum:2003yt}, the analogy between the two problems should not come as a surprise. Indeed, the non-linear realization of $O(N,2)$ through a dilaton represents a possible relativistic extension of Newtonian gravity, the so-called theory of scalar gravity. In this paper we presented a scenario, based on effective field theory, that produces a pseudo-dilaton with a naturally small potential. The two key features to achieve that are \begin{itemize} \item The existence of a ``landscape'' of values for the quartic coupling $\kappa$ of the effective dilaton potential, containing the point $\kappa=0$.
\item The explicit breakdown of conformal invariance by a naturally small parameter, associated with a nearly marginal coupling. \end{itemize} The combination of the above two key features gives rise to a specific vacuum dynamics according to which the minimum robustly sits near the point $\kappa=0$. We discussed in detail a 5D holographic realization that explicitly illustrates that our solution is {\it natural} according to the standard naturalness criterion \cite{'tHooft:1979bh}. The smallness of the dilaton potential around its minimum directly follows from the presence in the 5D bulk of the pseudo-Goldstone boson of an internal symmetry. Our model represents a variant of the Goldberger-Wise (GW) mechanism \cite{Goldberger:1999uk} of radion stabilization in the Randall-Sundrum model \cite{Randall:1999ee}. The novelty, with respect to previous implementations of the GW mechanism, is that our construction makes clear that, in order to obtain a light radion, there is no need to tune, even approximately, the tension of the IR brane. As long as the bulk potential for the GW scalar is flat, the minimum of the radion potential arises at a point where the overall potential is small, as if the IR brane tension were practically tuned. Aside from this result, we checked that our model is sensible in that there are no ghosts or tachyons. It is now clearly interesting to ask what our example could teach us about the true cosmological constant problem, the one concerning (quantum) gravity. Even before trying to think of an explicit 4D analogue, one interesting step would be to understand how our mechanism can be embedded in a thermal history of a scalar gravity toy universe. Some basic questions would be: Under what initial conditions is the dilaton driven at late times to the region where its potential is small? What would such a cosmology look like?
Would there be the analogue epochs of radiation domination, matter domination and structure formation, then followed by a period of accelerated expansion? In order for our example to be interesting, one would like to exclude the need for extreme tuning of initial conditions in order to achieve such a cosmology. We understand that Sundrum, in a forthcoming paper, has gone a long way towards addressing these issues \cite{SundrumInPreparation}. Encouraged by that result, one may speculate about a real gravity translation of our scenario. Even allowing for some loss in translation, what could our two key features look like? A landscape of values for the cosmological constant could for instance be provided by a 3-form field $A$ and its 4-form field strength $F=dA$ (see for instance \cite{Weinberg:1988cp}). Indeed, $F$ has a continuum of constant vacuum solutions. The effective cosmological constant is just a function $\Lambda(F)$ and we generically expect that an $F_*$ exists such that $\Lambda(F_*)=0$. Notice, in passing, that $F\not = 0$ provides the {\it smallest} ``higgsing'' of gravity, where the group of diffeomorphisms is spontaneously broken to the subgroup of volume preserving diffeomorphisms. The freedom in the choice of $F$, which can be associated with the choice of boundary conditions, directly corresponds to the freedom in the choice of $\Lambda$ in unimodular gravity. In view of these analogies, the field strength $F$ encouragingly looks like the real gravity analogue of the marginal coupling $\lambda$ of our toy gravity construction. How could we then mimic the second key ingredient? Like in the toy example, we need to somehow lift the degeneracy over the landscape. One possibility to achieve that was indeed pointed out some time ago by Brown and Teitelboim (BT) in a visionary paper \cite{Brown:1988kg}. BT have shown that, when there exist 2-branes that couple to $A$, brane nucleation by quantum tunnelling can relax the value of $F$ by discrete jumps.
If the brane tension is small enough, the successive jumps could then relax the effective cosmological constant down to its observed value. However, in the range of parameters where this can happen, the rate of bubble nucleation is so small that it seems difficult to implement the mechanism in a realistic cosmology (see however ref.~\cite{Feng:2000if} for a broader perspective). Perhaps, leaving quantum tunnelling aside, another perspective to eliminate the degeneracy of the $F=const$ solutions and realize a surrogate of our second key feature would be to ``higgs'' the gauge symmetry associated with the 3-form and give it a mass. That is done by adding a 2-form $B$ that shifts under the gauge transformation $B\to B+\alpha$, whereas $A$ transforms like $A\to A+d\alpha$. The presence of a mass term would give rise to a slow evolution of the field strength $F$ that could, in principle, relax towards the value $F_*$ where the cosmological constant vanishes. In a sense, the addition of a Goldstone 2-form $B$ classically screens the field strength $F$, whereas BT brane nucleation did so by quantum tunnelling. The analogy with the quantum screening mechanism suggests that a massive 3-form could be a promising direction. On the other hand, from another perspective, a massive 3-form is dual to a massive scalar, so that in the limit of small mass the system should just correspond to a version of quintessence (see for instance \cite{Koivisto:2009fb}). From the latter perspective it would seem that tuning is still needed to achieve a cosmology with a small effective cosmological constant at late times, but perhaps an explicit investigation is warranted. To conclude: a 4-form field strength could indeed play the role of our marginal coupling; however, we have yet to identify a successful analogue of our second key feature. In scalar gravity that role was played by an explicit small breaking of the relevant global symmetry, conformal invariance.
In the case of real gravity the relevant symmetry is a gauged one, for which there is no analogue of explicit breaking: breaking diffeomorphism invariance invariably brings in new degrees of freedom, thus entering the minefield of {\it modified gravity}. Is there a way out? \section*{Acknowledgments} RR thanks Roberto Contino and Alex Pomarol for developing the original idea \cite{cpr} that led to this paper. DP and RR also thank Raman Sundrum for engaging discussions on the scalar guise of the cosmological constant problem. This research is supported by the Swiss National Science Foundation under contract 200021-125237. The work of D.P. is supported by the NSF Grant PHY-0855653.
\section{Introduction} \label{chapter:introduction} With eye tracking dating back to the XVIII century \cite{javal_1879}, many have been the applications for this technology, such as medicine \cite{holzman_proctor_hughes_1973}, robotics \cite{mcmullen_hotson_2014,gomes2016gaze}, advertising \cite{krugman_fox_fletcher_fischer_rojas_1994} and, more recently, computer~games~\cite{smith_graham_2006}.~In the past decade, research on computer games tried to compare traditional input (e.g., mouse, keyboard, and gamepad) with eye tracking input, in terms of action accuracy and \mbox{responsiveness \cite{leyba_malcolm_2004,isokoski_martin_2006,smith_graham_2006,isokoski_joos_spakov_martin_2009,dorr_pomarjanschi_barth_2009}.} These comparisons were often made after asking players to compete against each other or asking users to complete a given task, using the different input methods. Overall, these comparisons provided mixed results, with some studies claiming that the use of eye tracking contributed to better task completion \cite{dorr_pomarjanschi_barth_2009}, and others claiming that traditional input devices provided better overall results \cite{leyba_malcolm_2004}. Hence, these previous studies have provided contradictory results regarding the effectiveness of the use of eye tracking in computer games as a direct control input, which may be an indication that eye tracking is not best suited for direct input control. Bearing the limitations of eye tracking as a simple direct control input in mind, in this paper, we~propose to use it to control the attention of the player's avatar and the game's procedural content generation. This use of eye tracking focuses on mapping the player's and avatar's attention processes, which we believe to be much more natural and useful than allowing the player to directly control a pointer with the eyes, which has no actual mapping to real life.
That is, instead of replacing the traditional input with an eye tracker, we are more interested in studying meaningful ways in which eye tracking can improve gameplay, which includes co-existence with traditional inputs. This~also means that, instead of analysing objective performance-based data, as in previous studies, we are more interested in a subjective analysis of how eye tracking improves enjoyability and how well the player adapts to the technology. To demonstrate the value of the herein proposed alternative uses of eye tracking in computer games, we developed and tested our own endless runner First-Person Shooter (FPS), Zombie Runner (see Figure~\ref{fig:zombiekilled}). In this game, shot accuracy, automatic obstacle avoidance, and procedural obstacle spawn probability are controlled as a function of the avatar's attention model, which, in turn, operates~according to the player's gaze estimated with an affordable eye tracker. The goal is to better represent the player's actions in the game, thus contributing to a more immersive experience. Based on a set of testing sessions, we conclude that the use of eye tracking provides a more challenging and immersive experience to the player. Participants reported better levels of satisfaction while playing the game with the gaze tracking turned on. However, a strong correlation between eye tracker calibration problems and the player's overall experience was found. This means that eye tracking technology still needs to evolve, but it also means that, once the technology gets mature enough, players are expected to benefit greatly from the inclusion of eye tracking in their gaming experience. \begin{figure}[H] \centering \includegraphics[width=12cm]{zombiekilled.png} \caption{A zombie being killed in Zombie Runner.\vspace{-6pt}} \label{fig:zombiekilled} \end{figure} This article is an extended and improved version of a poster paper \cite{antunes2017} and it is organised as follows.
Section~\ref{chapter:leteraturesurvey} presents an overview of previous and related work. Then, in Section~\ref{chapter:developmentimplementation}, Zombie~Runner is described and its implementation detailed. Section~\ref{chapter:evaluationdiscussion} describes the experimental setup and analyses the obtained results. Finally, Section~\ref{chapter:conclusionsfuturework} presents some conclusions and provides some future work directions. \section{Related Work} \label{chapter:leteraturesurvey} Eye tracking is the process of estimating one's gaze direction, identifying the object on which the subject is focused \cite{lukander_2004,arai_mardiyanto_2011}. Eye tracking dates back to the XVIII century, when persistent images were used to describe human eye movements \cite{wells_1792}. In the XX century, the first non-intrusive eye movement measurements, using photographs and light reflections, were made \cite{dodge_cline_1901}. In~the 1980s, with the evolution of computing capacity, it became possible to perform real-time eye tracking with access to video, which opened the possibility of human--machine interaction \cite{singh_singh_2012}. With~increasingly accessible prices \cite{shell_vertegaal_cheng_skaburskis_sohn_stewart_aoudeh_dickie_2004,smith_vertegaal_sohn_2005}, the use of eye trackers increased in several areas, such as marketing \cite{krugman_fox_fletcher_fischer_rojas_1994}, psychology \cite{holzman_proctor_hughes_1973} and, more recently, computer games \cite{smith_graham_2006}. Eye tracking has been explored in computer games as an alternative to traditional input methods, such as mouse or keyboard \cite{smith_graham_2006,isokoski_joos_spakov_martin_2009}. By testing gaze input versus mouse input in three different computer games, Smith and Graham \cite{smith_graham_2006} concluded that the use of eye tracking can provide a more immersive experience to the player.
Isokoski and Martin \cite{isokoski_martin_2006} conducted a preliminary study on the use of an eye tracker in First Person Shooters. Each participant in the test was asked to play the same game using three different input method schemes: (1) mouse, keyboard, and eye tracker; (2) only mouse and keyboard; or (3) a console gamepad. The conclusions were not exactly encouraging, suggesting that the performance with the eye tracker was quite inferior to that with the other two. However, Isokoski and Martin attributed these results to the players' greater knowledge of, and contact with, the traditional input methods, suggesting that this scenario could change with more training. Other studies reached similar conclusions. Leyba and Malcolm \cite{leyba_malcolm_2004} created a simple test in which the player was asked to eliminate twenty-five balls that moved around the screen at different velocities. The player would move the pointer using the mouse or the eye tracker and would eliminate the balls by clicking on them with the mouse. Two conditions were tested: with and without a time limit to complete the task. The results showed that, without a time limit, precision and time to complete the task were worse while using the eye tracker than when using a mouse. The same results were obtained for the time-limited condition, in which performance was based on the percentage of balls eliminated by the player. Dorr et al. \cite{dorr_pomarjanschi_barth_2009} obtained the opposite results. After creating and adapting a clone of the classic game Breakout, twenty players were asked to participate in a tournament. Players were separated in pairs. The two players of each pair played against each other, one using the mouse and the other using an eye tracker. The control inputs were swapped between rounds. The results showed that the players who used the eye tracker achieved higher scores and won more rounds. The players also stated that using the eye tracker was highly enjoyable.
These discrepant results between studies suggest that the type of game and the way the game is developed are key elements in achieving a satisfying final result. Bearing the limitations of using eye tracking as a simple direct control input in mind, we propose to use it to control the attention of the player's avatar and the game's procedural content generation. This use of eye tracking focuses on mapping the mental state of the player and her/his avatar, which~we believe to be much more natural and useful than controlling a pointer with the eyes, which~has no mapping to real life. Another alternative and interesting use of eye tracking, not tackled in this paper, is to know when to actively redirect the player's attention \cite{perreira_2007}. Procedural Content Generation (PCG) concerns the creation of game content (e.g., sounds, levels, objects, characters, and textures) with algorithms, with limited or indirect stimuli from the user \cite{shaker_togelius_nelson_2014}. PCG is a way of coping with the daunting task of manually creating and populating massive open worlds. Besides this more traditional use of PCG, some researchers proposed that, through the analysis of the interaction between the player and the game, PCG could be used to create a playing experience that adapts itself to the player \cite{browne_yannakakis_colton_2012,yannakakis_2012,gow_baumgarten_cairns_colton_miller_2012}, improving the game's replay value. This new approach is known as Experience-Driven Procedural Content Generation (EDPCG) \cite{yannakakis_togelius_2011}. Within the EDPCG framework, it is possible to generate levels that are adapted to the strengths and limitations of the player, in an attempt to maximize the fun factor.
Many studies propose that the fun and challenge factors are directly linked \cite{iida_takeshita_yoshimura_2003,spronck_sprinkhuizen-kuyper_postma_2004,andrade_ramalho_santana_corruble_2005,yannakakis_hallam_2007,olesen_yannakakis_hallam_2008,lankveld_spronck_herik_rauterberg_2010,sorenson_pasquier_2010}, which means that, to tune the fun factor, one~often has to tune the challenge factor. In this line, in this paper, we propose the integration of PCG with gaze tracking so as to adapt the challenge the player has to face according to his/her way of playing and, \mbox{as a result}, to improve the fun factor. \section{Zombie Runner} \label{chapter:developmentimplementation} The game herein presented, Zombie Runner, was purposely designed to integrate gaze tracking into its core mechanic as a way of estimating the player's attention and using those estimates to control the avatar's attention, that is, as a way of implementing a mapping between the player's and the avatar's attention processes. This is in contrast to the typical use of eye tracking to control the avatar's motor actions (e.g., to aim the weapon). In Zombie Runner, motor actions are controlled via a traditional~input, a gamepad. This way, gaze and hand orthogonally and, thus, more naturally, control their virtual counterparts. To better analyse the advantages and disadvantages of the use of eye tracking in games, focus should be given to game genres where the player's visual attention has to be shared between different elements in the scene under tight temporal restrictions, where the eye tracking use is more central and challenging. For this reason, Zombie Runner has been designed and implemented as an instance of the popular and high-impact FPS genre. The character's running action is fully automated to reduce the number of actions the player has to memorize and control and, hence, reduce variability in the game evaluation phase.
\subsection{System Configuration} \label{sec:system-config} The game, developed in Unreal Engine, has been designed with the following hardware configuration in mind (see Figure~\ref{fig:testroomsetup}): a low-cost Gazepoint GP3 eye tracker; a Microsoft Xbox gamepad; and~a 32~inch computer screen. The player sits in front of the computer screen, with the gamepad in hand. The~eye tracker sits below the computer screen. Behind it, a laptop running the game is available for the research team during the evaluation sessions. \begin{figure}[H] \centering \includegraphics[height=6cm]{hardwaresetup.png}\hspace{0.2cm}\includegraphics[height=6cm]{testroomsetup.png} \caption{The hardware configuration and a test subject during an evaluation session.\vspace{-6pt}} \label{fig:testroomsetup} \end{figure} In Zombie Runner, eye tracking is ensured by the Gazepoint control software \cite{gazepoint_control}, which takes control of the mouse cursor to direct it towards the gaze point on the screen. Thus, Zombie Runner only needs to be sensitive to the mouse's cursor to determine the player's gaze in screen coordinates. The~eye tracker's vendor claims a visual angle accuracy between $0.5^{\circ}$ and $1.0^{\circ}$ at 60 Hz, which we consider to be sufficient to assess which in-game elements are being attended by the player. A recent study confirms the vendor's claimed accuracy \cite{zugal2014low}, provided that the user does not wear glasses, the~testing environment offers proper lighting conditions, and a correct calibration procedure is carried~out. To attain a correct calibration, Zombie Runner uses a nine-point calibration procedure shipped with the Gazepoint control software. Before playing the game, the user must confirm that the calibration is correct by running a calibration validation procedure. This procedure consists of gazing at the centre of several circles displayed on the screen (see Figure~\ref{fig:calibration-screen}).
This calibration is assumed to be successful ({good enough}) if the gaze never lands outside the circles being attended by the user. \begin{figure}[H] \centering \includegraphics[height=7cm]{calibration.png} \caption{The Gazepoint Control software's screen used to test the calibration results. The user is asked to look at the centre of each circle. The calibration is assumed to be successful if the gaze, represented~in green, never lands outside the circle being attended by the user.\vspace{-6pt}} \label{fig:calibration-screen} \end{figure} \subsection{Game Rules and Mechanics} Zombie Runner is governed by the following set of rules. The main objective of the game is to ensure that the avatar survives for as long as possible while running along a corridor with a non-controllable constant forward movement. The player can achieve this by killing enemies and by \textit{noticing} elements in the scene (enemies and obstacles). The player can kill enemies by aiming and shooting the avatar's gun via the gamepad. The player can \textit{notice} the various elements in the scene by actively looking at them on the screen for a sufficient amount of time. The gaze of the player is estimated with the eye tracker. An element noticed by the player is also noticed by the avatar. If the avatar approaches a previously \textit{noticed} obstacle, it will automatically avoid that obstacle (i.e.,~jump over a rock or dodge a hanging tree branch). If the obstacle was not noticed early enough, it~will not be seen by the avatar, which will result in a collision and a subsequent decrement of the avatar's health. A shot enemy that was noticed dies instantly, whereas it will only be hurt if unnoticed. This is intended to simulate the lack of shot accuracy resulting from an insufficient focus on the enemy. If~a noticed enemy approaches the avatar, it will attack, causing the avatar to lose health. If the enemy was not noticed early enough, its attack will cause the avatar's instant death.
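The \textit{notice} mechanic just described can be sketched as a simple dwell-time accumulator. The Python sketch below is illustrative only (the actual logic lives in Unreal Engine); the class name, the $0.3$~s threshold, and the reset-on-look-away policy are our assumptions, not values taken from the game:

```python
# Illustrative dwell-time model of the "notice" mechanic: an element is
# noticed once the player's gaze has rested on it long enough. The 0.3 s
# threshold and the reset-on-look-away policy are assumptions.
DWELL_THRESHOLD = 0.3  # seconds of sustained gaze required

class Noticeable:
    def __init__(self, name):
        self.name = name
        self.gaze_time = 0.0
        self.noticed = False

    def update(self, gazed_at, dt):
        """Call once per frame with whether the gaze point hits the element."""
        if self.noticed:
            return
        self.gaze_time = self.gaze_time + dt if gazed_at else 0.0
        if self.gaze_time >= DWELL_THRESHOLD:
            self.noticed = True  # from now on the avatar reacts to this element

zombie = Noticeable("walker")
for _ in range(20):              # twenty frames of sustained gaze at 60 Hz
    zombie.update(True, 1/60)    # 20/60 s > 0.3 s, so the zombie is noticed
```

Once `noticed` flips to true, the state is latched, matching the rule that a noticed element stays noticed by the avatar.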
The idea is that the avatar is able to dodge the attack provided the attacker had been noticed. The avatar dies when its health reaches zero, leading to the game being over. Figure~\ref{fig:gameflowinteractions} depicts the state diagrams associated with these game rules. \begin{figure}[H] \centering \includegraphics[width=9.5cm]{gameflowinteractions.png} \caption{State diagrams for: when the avatar shoots an enemy (\textbf{A}); when the avatar approaches an~obstacle (\textbf{B}); and when an enemy approaches the avatar (\textbf{C}). The arrows indicate state transitions and the text on them states the conditions necessary to trigger another state of the interaction.\vspace{-6pt}} \label{fig:gameflowinteractions} \end{figure} \subsection{Game Implementation} \label{sec:devunrealengine} Zombie Runner was developed in Unreal Engine. The programming of Zombie Runner was split between C++ and Blueprints, Unreal's visual scripting language. C++ was used for the core algorithms, while Blueprints was used to program actor behaviour, such as that of enemies and obstacles. The~game is based on Unreal's FPS template, which already implements the expected behaviour for a~generic game of the genre, including shooting, walking, and aiming mechanisms. The corridor in which the game takes place is made of a one-dimensional array of 3D tiles (see~Figure~\ref{fig:tilegeneration}), which are procedurally generated and removed from the scene once the avatar passes by them. Tiles are composed of one plane for the floor, two planes for the side walls, an array of marker points distributed randomly on the sides of the floor (not in the region in which the avatar will be crossing), and an array of tree models. Every time a tile is spawned, the array of marker points is traversed. For each marker, there is a $66\%$ probability of spawning a tree (non-obstacle) in its position, plus a small random offset. This tree is spawned with a random rotation.
The tree's height is also randomly set between reasonable values. The result is a set of sufficiently different tiles that, when placed in succession, give the feeling of a dense and varied forest on each side of the player's sight. Figure~\ref{fig:tilegeneration} shows three different results of this algorithm applied to the tile generation. \begin{figure}[H] \centering \includegraphics[width=14cm]{treepgc.png} \caption{Tiles populated with procedurally placed obstacles. Note that the tree branches in the central region of the tiles are implemented as small rotated trees.\vspace{-6pt}} \label{fig:tilegeneration} \end{figure} Fifteen tiles are initially generated. After the eighth tile is spawned, obstacles or enemies can start being spawned with them. This value was reached by trial and error and produces an initial tile generation that gives the player enough time to settle in and prepare for the incoming obstacles and enemies. After this initial generation, all subsequent tile generation is recursively and procedurally handled by the tiles themselves. An obstacle (i.e., a rock or a hanging tree branch) or an enemy has a $33\%$ probability of being procedurally spawned in each new tile. If a spawn does occur, there is a $55\%$ probability of it being an obstacle rather than an enemy. A spawned enemy has a $20\%$ probability of being spawned as a \textit{runner}, which has double the speed of a \textit{walker}, the default behaviour of an enemy. These percentages were tuned according to a set of informal tests to ensure that the game was playable without training. An obstacle or an enemy can be spawned in three different regions of the tile (see Figure~\ref{fig:screensplit}): the central~region, the left region, or the right region. If the previous obstacle or enemy was spawned in either the left or the right region, the new one will be spawned in the central region. This adds variety to the game and prevents objects of the same type from being spawned close to each other.
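These spawn probabilities can be sketched as a small decision function. The C++ fragment below is a hypothetical illustration (the actual logic lives in Blueprints and is not reproduced here); it assumes the $55\%$ figure refers to obstacles, with enemies taking the remaining $45\%$, and it takes pre-drawn uniform $[0,1)$ samples so that the branching is deterministic:

```cpp
// Hypothetical sketch of the per-tile spawn decision; assumes the 55%
// probability refers to spawning an obstacle (45% an enemy). The
// uniform [0,1) samples uSpawn/uKind/uRunner are drawn by the caller.
enum class TileSpawn { None, Obstacle, Walker, Runner };

TileSpawn decideTileSpawn(double uSpawn, double uKind, double uRunner) {
    if (uSpawn >= 0.33) return TileSpawn::None;     // 33% chance of any spawn
    if (uKind < 0.55)   return TileSpawn::Obstacle; // assumed 55% obstacle split
    return (uRunner < 0.20) ? TileSpawn::Runner     // 20% of enemies are runners
                            : TileSpawn::Walker;    // walker: default enemy
}

// Each marker point on a tile has a 66% chance of receiving a tree.
bool spawnTreeAtMarker(double uTree) {
    return uTree < 0.66;
}
```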
If the previous obstacle or enemy was spawned in the central region, the new one will be spawned in either the left or the right region, depending on the player's gaze. Concretely, the obstacle or enemy is spawned in the region of the tile least gazed at by the player during the time spent running over the two previous tiles. This~forces the attention of the player to alternate frequently between regions, thus increasing the challenge. When the obstacle is spawned in the central region, its asset is a rock; when spawned in one of the side regions, it is a tree branch. Tree branches are implemented as small trees laid horizontally (see~Figure~\ref{fig:tilegeneration}). \begin{figure}[H] \centering \includegraphics[width=7cm]{screensplit.png} \caption{Tile/screen regions used to determine where to spawn an obstacle or enemy.} \label{fig:screensplit} \end{figure} Zombie Runner relies on ray casting to determine which \textit{object} (term hereafter used to generally refer to obstacles and enemies) present in the virtual scene is being attended by the player, given~the gaze position in the screen. Assuming that the avatar's camera (the one rendering the images presented to the player) is located at world coordinates $\mathbf{o}$ and the player is gazing towards the screen's local coordinates $\mathbf{s}$, the system determines which object is being attended by the player by casting into the scene a parametric ray $\mathbf{r}(t)=\mathbf{o}+t(\Phi(\mathbf{s})-\mathbf{o})$, where the function $\Phi(\cdot)$ transforms a point from screen coordinates to world coordinates, given the virtual camera's field of view and pose. The closest intersected object is taken as the one being attended by the player.
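A minimal sketch of the gaze ray $\mathbf{r}(t)=\mathbf{o}+t(\Phi(\mathbf{s})-\mathbf{o})$ follows. Since $\Phi(\cdot)$ is engine-specific (in Unreal it corresponds to a screen-to-world deprojection through the active camera), the fragment below takes the already-transformed world point as an argument; the names are hypothetical:

```cpp
// Hypothetical sketch of the gaze ray r(t) = o + t * (Phi(s) - o).
// gazeWorld is assumed to be Phi(s): the gaze point already
// deprojected from screen to world coordinates by the engine.
struct Vec3 { double x, y, z; };

Vec3 gazeRayPoint(const Vec3& o, const Vec3& gazeWorld, double t) {
    return { o.x + t * (gazeWorld.x - o.x),
             o.y + t * (gazeWorld.y - o.y),
             o.z + t * (gazeWorld.z - o.z) };
}
```

The ray is then intersected against each candidate bounding box, and the closest hit is taken as the attended object.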
Due to the rapid nature of FPS games (which induces frequent gaze shifts) and the limited accuracy of low-cost eye tracking technology, demanding that gaze fixations occur with high certainty (i.e., that the gaze locks in a given object for a significant amount of time) before accepting that a given object was attended by the player could result in many false negatives. Here, a false negative means penalizing the player because a given object was deemed not attended when, in practice, the player feels that the object was attended (even if only momentarily). These events are more harmful to the player's engagement level than the other way around, that is, erroneously assuming that the player has attended to an object that was not actually attended. Hence, the approach followed in Zombie Runner is to label a given object as noticed if the gaze of the player intersects that same object over an \textit{accumulated} (i.e., time is not reset when the gaze leaves the object) period of 0.5 s. This period was obtained from a set of informal tests. The~approach of considering accumulated time ensures that an object is marked as attended/noticed even if the player frequently gazes across several objects or the eye tracker's estimate jitters around the player's gaze, frequently landing off the attended object. Thus, this approach implements a~filtering process that trades off false positives and false negatives in a way that suits the game's~needs. To speed up computation, ray--object intersections (for determining which object is being attended by the player) are tested using pre-computed accessory intersection bounding boxes, rather than using the objects' triangular meshes directly. That is, instead of testing ray intersections against the several triangles present in an object's mesh, intersections are tested against a bounding box properly placed in front of the object.
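The accumulated-time filter described above can be sketched as follows (the class name is hypothetical; the game's actual implementation is not shown in this paper):

```cpp
// Hypothetical sketch of the accumulated-gaze filter: time spent
// gazing at an object accumulates across visits (it is NOT reset when
// the gaze leaves the object), and the object becomes "noticed" once
// the total reaches the 0.5 s threshold.
class NoticeFilter {
public:
    explicit NoticeFilter(double thresholdSeconds = 0.5)
        : threshold_(thresholdSeconds) {}

    // Called once per frame with the frame duration and whether the
    // gaze ray intersects this object's bounding box in this frame.
    // Returns true once the object counts as noticed.
    bool update(double dtSeconds, bool gazedThisFrame) {
        if (gazedThisFrame) accumulated_ += dtSeconds;
        return accumulated_ >= threshold_;
    }

private:
    double threshold_;
    double accumulated_ = 0.0;
};
```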
Additional bounding boxes are also associated with all objects present in the scene to speed up avatar--object intersection tests. Again, instead of testing intersections between the avatar's and the objects' meshes, intersections are tested between the avatar and two bounding boxes, one~placed in front of the object and another placed behind the object. The former is used to detect the moment the avatar is close enough to the object to trigger a proper interaction (e.g.,~an enemy~attack or obstacle dodging) and the latter to detect the moment the avatar passes beyond the object, allowing the object to be removed from the scene. Figures~\ref{fig:boxesobstacles} and \ref{fig:boxesenemy} depict the intersection bounding boxes employed in obstacles and enemies, respectively. The sizes of the intersection bounding boxes had to be enlarged so as to accommodate the inaccuracy of the eye tracker. These adjustments were carried out during a~set of informal tests. \begin{figure}[H] \centering \includegraphics[width=10cm]{boxes.png} \caption{A lateral view of the meshes composing the two types of obstacles and their associated three intersection bounding boxes, assuming that the avatar approaches the obstacles from the left. Boxes~labelled as (\textbf{A}) are used for detecting ray--obstacle intersections (to determine which object is being attended by the player). In the case of the tree, box (\textbf{A}) surrounds the tree's horizontal hanging branch, that is, the actual obstacle to the avatar (see Figure~\ref{fig:tilegeneration}). Boxes labelled as (\textbf{B}) are used to detect the moment the avatar is about to collide with the obstacle.
Boxes labelled as (\textbf{C}) allow the system to detect the moment the avatar passes beyond the obstacle.} \label{fig:boxesobstacles} \end{figure}\unskip \begin{figure}[H] \centering \includegraphics[width=9cm]{boxesenemy.png} \caption{A lateral view of the mesh composing the enemy and its associated three intersection bounding boxes, assuming that the avatar approaches the enemy from the left. The box labelled (\textbf{A}) is used to detect the moment the avatar is close enough to the enemy to trigger an enemy attack. The box labelled (\textbf{B}) is used for detecting ray--enemy intersections (to determine which object is being attended by the player) as well as bullet--enemy intersections. The box labelled (\textbf{C}) allows the system to detect the moment the avatar passes beyond the enemy.} \label{fig:boxesenemy} \end{figure} All assets (e.g., obstacles) were freely obtained from Unreal's dedicated store. The rigged 3D model and basic animations for the enemies were obtained from Adobe Mixamo. An Unreal Animation Blueprint was built as a state machine defining the different animation states and the transitions between them (see Figure~\ref{fig:animblueprintzombie}). To provide the player with compelling visual feedback, the~appearance of the objects changes when they are first gazed at by the player and again later on when they become labelled as \textit{noticed}. The first change consists of rendering the object in wireframe with shades of purple. The second change, more dramatic, consists of turning the object light blue and adding a particle effect representing a blue shock wave. Figure~\ref{fig:rocknoticed} shows the evolution of the materials and effects used on a rock obstacle through the process of being noticed. \begin{figure}[H] \centering \includegraphics[width=12cm]{animblueprintzombie.png} \caption{The enemy's animation blueprint.
The death animation that is triggered from the Walking state is randomly selected to bring variety to the game. The arrows indicate state transitions and the text on them states the conditions necessary to trigger another animation state.} \label{fig:animblueprintzombie} \end{figure}\vspace{-40pt} \begin{figure}[H] \centering \includegraphics[width=9cm]{rocknoticeda.png} \includegraphics[width=9cm]{rocknoticedb.png} \caption{The material evolution of a rock obstacle being noticed.} \label{fig:rocknoticed} \end{figure} \section{Evaluation and Discussion} \label{chapter:evaluationdiscussion} As mentioned, several informal tests were run to guide the development and tune the game's parameters. Then, the whole game was played by a set of ten people in formal game evaluation sessions to systematically assess what eye tracking technology brings to the game in terms of overall enjoyability. \subsection{Evaluation Method} Test sessions were carried out privately in a room without the presence of anyone but the participant and the research team. An email-based call for participants who considered themselves gamers was issued to a universe of 170 people. The first ten who responded to the email and complied with the requirements were selected for the test sessions. The participants had no previous knowledge of the game or the experiment. The ages of the ten participants spanned from 25 to 49~years old (see~Figure~\ref{fig:age-distribution}), with different occupations such as software developer, quality assurance tester, and~student. All participants were male, with no female subjects volunteering for the experience.
\begin{figure}[H] \centering \includegraphics[width=11cm]{graph_ages.png} \caption{Distribution of participants according to their age group.\vspace{-6pt}} \label{fig:age-distribution} \end{figure} In addition to bottom-line data stored during the test sessions, we also surveyed the participants with the Game Experience Questionnaire (GEQ) \cite{ijsselsteijn_de_kort_poels_2013}, which has been widely applied in previous studies related to control inputs in computer games \cite{drachen_nacke_yannakakis_pedersen_2010,gerling_klauser_niesenhaus_2011}. By using the GEQ, we intended to evaluate whether the use of the eye tracker is enjoyable and comfortable for the player. We also intended to pinpoint possible advantages and disadvantages of the technology in the way it impacts the overall experience of the players and their relationship with the game, comparing the player experience with and without eye~tracking. \subsubsection{Test Sessions} Figure~\ref{fig:flowchart} presents the flowchart followed in each test session (per participant). Each test session started with a brief questionnaire the participant had to fill in regarding personal details, such as age and occupation, whether the player had some degree of visual impairment, and their experience as video game players, with the use of eye trackers, and with the use of gamepads in FPS games. As Figure~\ref{fig:participants_distribution} shows, the average participant had a considerable amount of gaming experience, little to no previous exposure to eye tracking technology, and was moderately experienced using gamepads in FPS games. \begin{figure}[H] \centering \includegraphics[width=15.5cm]{fluxogram.png} \caption{Flowchart applied in each test session (per participant).
Each box represents a step in the test session and the accompanying off-box italicised text summarises the purpose of applying each~step.} \label{fig:flowchart} \end{figure}\unskip \begin{figure}[H] \centering \includegraphics[width=13cm]{graph_experience.png} \caption{Distribution of participants according to their experience with video games (white bars), eye tracking technology (gray bars), and use of gamepads in FPS games (black bars).\vspace{-6pt}} \label{fig:participants_distribution} \end{figure} To reduce any {pleasing biases} the participants might have, we conveyed the message that we are agnostic regarding the use of eye tracking in games by means of a brief paragraph at the top of the questionnaire stating that ``with the advent of eye tracking technology some developers are including this technology in games'' and that ``this study aims at providing scientific validation of the advantages and disadvantages of such inclusion in terms of gameplay and comfort to the user''. Then, a brief explanation of the game's rules and objectives was given, followed by a briefing on the eye tracker calibration process. This process was run as many times as deemed necessary until the calibration was successful (as described in Section~\ref{sec:system-config}). When a successful calibration was achieved, the participants were asked to state their feelings towards the process and whether they could imagine themselves doing it at home before playing a game. To get the participant acquainted with the game, two separate runs were carried out, each with only one input form enabled (gamepad or eye tracker). These runs had no time limit and did not count towards the evaluation. On the first run, only the eye tracker was enabled and no enemies were spawned. The result was a play session with only obstacles being spawned. This allowed the participants to freely use their eyes to notice the obstacles in front of them and get used to this mechanic, without having to worry about the gamepad.
The participants were also asked to state any false positives, as well as any obstacles that they had noticed but that were not tagged as such by the game. On the second run, no obstacles or enemies were spawned. This allowed the participants to get acquainted with aiming and shooting with the gamepad and to adapt to its sensitivity and button scheme. The main objective was for the participant to feel comfortable with the different inputs, and the test session would only advance when the participants confirmed they felt acquainted with the controls. The participant was then asked to play the game in its original form, as described in Section~\ref{chapter:developmentimplementation}, for~three sessions of 2 min each. For each session, the~ratio between enemies killed and spawned, the~ratio between the overall (enemies and obstacles) noticed count and the overall spawned count, as well as the number of deaths experienced by the player, were registered. After these three sessions, the player was asked to fill in the core and post-game modules of the Game Experience Questionnaire (GEQ)~\cite{ijsselsteijn_de_kort_poels_2013}. This~questionnaire is filled in by answering a set of questions on a Likert-type scale scored from 0 to 4. Afterwards, the participant was asked to play another set of three sessions of 2 min, but this time with eye tracking disabled, which means that all obstacles and enemies were automatically labelled as noticed, with the participant only being required to shoot the enemies. With~these three sessions over, another GEQ was filled in by the player. Then, the calibration process was performed again and the player was asked to play one 2 min session of the game, but this time with the visual effects that occur when an obstacle or enemy is noticed disabled. Finally, an end questionnaire was handed to the participant.
This questionnaire was more subjective and consisted of three questions: {(1)} ``What do you think of the visual effects used on the first sessions, comparing with the last session you played?'' (2) ``How was the overall experience of playing the game?'' (3) ``Would you consider an eye tracker as part of your gaming setup and why?'' \subsection{Experimental Results} All test sessions were concluded with success, in the sense that all test subjects were able to perform the tasks required of them. \subsubsection{Eye Tracker Calibration Process} The calibration process proved to be the most challenging step of the testing session. A~calibration was deemed good enough if it satisfied the criterion defined in Section~\ref{sec:system-config} in the areas the player mostly interacts with in the game, i.e., the whole screen except the top corners (see Figure~\ref{fig:eyetrackerpattern}). Two participants had to remove their glasses so that their gaze could be properly detected. Figure~\ref{fig:triescalibration} shows the distribution of participants according to the number of calibration tries each participant required to obtain a good enough calibration. As the figure shows, a single calibration process was enough for half the participants. \begin{figure}[H] \centering \includegraphics[width=9cm]{eyetrackerpattern.png} \caption{Gaze movement of a participant during a typical play-through of Zombie Runner.
It can be observed that the player spends most of the time gazing at the central region of the screen.} \label{fig:eyetrackerpattern} \end{figure}\unskip \begin{figure}[H] \centering \includegraphics[width=9cm]{graph_tries.png} \caption{Distribution of participants according to the number of eye tracking calibration tries required to achieve a good enough calibration.\vspace{-6pt}} \label{fig:triescalibration} \end{figure} After achieving a calibration deemed successful, the participants were asked about their feelings towards it and whether they could see themselves repeating this process at home before playing a game. Of all the participants, eight said that they could see themselves doing it, stating that a calibration process also had to be performed with other controllers. Three of these participants reported that the process was easy and fast, while the others stated that it should be easier, but was still acceptable. The~other two participants did not see themselves calibrating an eye tracker each time they wanted to~play, expressing that they wished the process was easier. \subsubsection{Play Sessions} The three 2 min play sessions produced results that suggest an approximation to the ideal game flow, with the participants progressively learning how to play more competently. This~can be observed in Table~\ref{tab:results_play_session}, which shows the evolution along the three play sessions of: (1) the ratio between the number of enemies killed (by the player) and the total number of spawned enemies; (2)~the ratio between the number of elements (obstacles and enemies) noticed (by the player) and the total number of spawned elements; and (3) the number of avatar (player) deaths. The~two ratios evolved positively, with the players improving in the tasks of killing enemies and noticing them, along with obstacles.
The number of deaths went down abruptly from the first to the second session and then went up slightly on the third one, but to a value close to the lowest one obtained in the second session. We suspect this slight increase in deaths can be a consequence of the better results achieved by the participants in terms of killing enemies and noticing objects. To excel in these tasks, participants had to better coordinate the two input forms, which led to a greater risk of being killed. \begin{table}[H] \centering \caption{Results per play session (ratios represented as percentages), provided as mean $\pm$ STE, where~STE stands for the standard error of the mean.} \label{tab:results_play_session} \begin{tabular}{>{\centering\arraybackslash}p{1.5cm}>{\centering\arraybackslash}p{2.5cm}>{\centering\arraybackslash}p{2.9cm}>{\centering\arraybackslash}p{2.5cm}} \toprule \textbf{Play Session} & \textbf{Ratio [\%] of Killed Enemies} & \textbf{Ratio [\%] of Noticed Elements} & \textbf{Number of Deaths}\\ \midrule 1st & $65.6\, \pm 4.4\,$ & $49.7\, \pm 6.3\,$ & $2.3\, \pm 0.2\,$ \\\midrule 2nd & $69.5\, \pm 7.2\,$ & $49.8\, \pm 6.1\,$ & $1.3\, \pm 0.4\,$ \\\midrule 3rd & $79.0\, \pm 5.0\,$ & $52.7\, \pm 5.4\,$ & $1.5\, \pm 0.2\,$ \\\bottomrule \end{tabular} \end{table} The GEQ was filled out by the participants after these three play sessions. As mentioned, after~these sessions, the participants were asked to play an additional set of three play sessions, this~time without using the eye tracker, i.e., with all obstacles and enemies automatically labelled as noticed. Table~\ref{tab:averagecomponentcore} compares, for each of the components of the GEQ core module (competence, sensory and imaginative immersion, flow, tension/annoyance, challenge, negative affect, and positive affect), the sessions with and without eye tracking.
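As a side note on how the $\pm$ entries in the tables are obtained, the sketch below computes the mean, the sample standard deviation, and the standard error of the mean from a vector of per-participant scores (the data used in the test is hypothetical, not the study's):

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: summary statistics over per-participant scores.
double mean(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s / static_cast<double>(v.size());
}

// Sample standard deviation (n - 1 in the denominator).
double sampleStd(const std::vector<double>& v) {
    const double m = mean(v);
    double s = 0.0;
    for (double x : v) s += (x - m) * (x - m);
    return std::sqrt(s / static_cast<double>(v.size() - 1));
}

// Standard error of the mean: STD / sqrt(n).
double stdError(const std::vector<double>& v) {
    return sampleStd(v) / std::sqrt(static_cast<double>(v.size()));
}
```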
In the Competence component, the participants felt more competent and able to complete tasks without the eye tracker than with it, which can be explained by the higher degree of difficulty of the game when eye tracking is on. In the Sensory and imaginative immersion component, the participants reported higher scores with eye tracking, which means the feeling of immersion was felt at a greater level with the full game experience. The Flow component was also higher with eye tracking on, revealing that the participants felt a stronger sense of game flow and a better balance with the game's Challenge, another component graded higher with eye tracking~on. Participants reported higher levels in the Tension/annoyance component with the eye tracker in~use, which~reveals that the full game experience can lead to greater levels of frustration. This may be the result of the game being more challenging with eye tracking or of the issues related to the accuracy of the eye tracker (more on this below). Both positive and negative affects were slightly higher when eye tracking was~on, suggesting that players were more emotionally invested in that condition. In both conditions, the~positive affect is considerably higher than the negative one, suggesting that the game provides an~overall positive~experience. Although GEQ scores are used in this study to compare two designs, we can also analyse the absolute meaning of the obtained scores. In this regard, although the GEQ scores are low when compared to the maximum of the scale, they are not substantially different from the ones obtained for full-fledged commercial FPS games, reported in a previous study \cite{drachen_nacke_yannakakis_pedersen_2010}. The lower scoring can be explained by the awareness players have of the current state of the art in commercial games.
\bgroup \def\arraystretch{1} \begin{table}[H] \centering \caption{Average values for the different components of the GEQ core module for the play sessions with and without Eye Tracking (ET). Results provided as mean $\pm$ STE, where STE stands for the standard error of the mean.} \begin{tabular}{lcc} \toprule & \multicolumn{2}{c}{ \textbf{GEQ Average Score}} \\ \midrule \textbf{Component} & \textbf{With ET} & \textbf{Without ET} \\ \midrule Competence & $2.22 \pm 0.27$ & $2.8 \pm 0.25$ \\ \midrule Immersion & $1.33 \pm 0.3$ & $0.93 \pm 0.3$ \\ \midrule Flow & $2.08 \pm 0.24$ & $1.68 \pm 0.26$ \\ \midrule Tension & $1.06 \pm 0.28$ & $0.3 \pm 0.14$ \\ \midrule Challenge & $1.56 \pm 0.2$ & $0.58 \pm 0.19$ \\ \midrule Negative affect & $0.68 \pm 0.2$ & $0.45 \pm 0.21$ \\ \midrule Positive affect & $2.48 \pm 0.26$ & $2.26 \pm 0.22$ \\ \bottomrule \end{tabular} \label{tab:averagecomponentcore} \end{table} \egroup The results were treated in the same way for the GEQ post-game module, whose results are summarised in Table~\ref{tab:averagecomponentpost}. The differences in the levels of positive and negative experience are too small ($\approx$0.05) to support any analysis. The more significant differences in the Tiredness and Returning to reality components provide additional support to the idea that eye tracking renders the experience more immersive and engaging. The similar values obtained with and without the eye tracker suggest that the participants' perception of the overall experience was shaped more by the game itself than by the use or lack of the eye tracker. However, these results become more meaningful and easier to understand when compared with the answers the test subjects gave to the last set of informal questions (see below). The main goal of these questions was to extract from the participants more subjective perceptions of the game, which could help us understand the GEQ scores.
\bgroup \def\arraystretch{1} \begin{table}[H] \centering \caption{Average values for the different components of the GEQ post-game module for the play sessions with and without Eye Tracking (ET). Results provided as mean $\pm$ STE, where STE stands for the standard error of the mean.} \begin{tabular}{lcc} \toprule & \multicolumn{2}{c}{ \textbf{GEQ Average Score}} \\ \midrule \textbf{Component} & \textbf{With ET} & \textbf{Without ET} \\ \midrule Positive experience & $1.27 \pm 0.31$ & $1.33 \pm 0.28$ \\ \midrule Negative experience & $0.13 \pm 0.06$ & $0.15 \pm 0.1$ \\ \midrule Tiredness & $0.5 \pm 0.35$ & $0.23 \pm 0.12$ \\ \midrule Returning to reality & $0.7 \pm 0.26$ & $0.5 \pm 0.18$ \\ \bottomrule \end{tabular} \label{tab:averagecomponentpost} \end{table} \egroup \subsubsection{Informal Questions} The first question asked participants for their opinion about the visual effects used to signal which objects were labelled as noticed, that is, as an attention feedback mechanism. Nine of the ten participants stated \textit{the importance of the effects as a means to give feedback to the player}, with some pointing out that \textit{without effects the player may be forced to look more than needed, as the player never knows if the obstacle or enemy was actually tagged as noticed}. The use of effects was also pointed out as \textit{more rewarding to the player's actions}. Of all the participants, seven reported that \textit{without the visual effects, the game is more immersive and the experience more realistic, making the way the player looks at things more natural}. This~may suggest that being more immersive does not mean that a game is necessarily more rewarding. For it to be both rewarding and immersive, a different, more subtle attention feedback mechanism should be implemented so as to avoid breaking the suspension of disbelief.
The~development of such a~feedback mechanism (more subtle than the tested visual effects) still demands additional research. It is also worth studying the value of such an attention feedback mechanism when using highly accurate eye tracking technology. On the one hand, there is the possibility that users stop feeling the need for attention feedback as soon as they feel that the system is accurately tracking their attention. On the other hand, highly accurate eye trackers are expensive and, even then, unable to fully predict attention deployment, as humans also exploit peripheral vision to scan the environment. Some~participants, for whom the eye tracker calibration was not fully successful, stated that \textit{the visual effects used as an attention feedback mechanism may induce frustration in them, as they could see the discrepancy between where the eye tracker thought they were looking and where they were actually looking}. This~issue calls for more robust eye tracking and calibration techniques if we wish to provide universal access to this~technology. In response to the second question, about the overall experience of playing the game, three out of the ten participants complained about \textit{having to be as static as possible to avoid eye tracking de-calibration}. Although only two of the participants reported \textit{problems with the game registering when they look at enemies and rock obstacles}, some participants complained about \textit{hardware problems and the frustration of going through the process of calibration and then the technology still not working right}. A participant who had problems getting the eye tracker to work while wearing glasses \textit{wished that the technology was more prepared for people with glasses}.
These issues highlight the biggest practical caveats in the application of eye tracking to video games (and to other related domains): the time-consuming calibration process and the brittleness of the system when used in non-ideal scenarios. We expect that further research in eye tracking technology will produce more robust solutions, fostering their use in the wild. Regarding~the game, a participant stated that \textit{it is well designed}, also expressing \textit{appreciation for the feedback given when the player loses life}. Another participant said that \textit{the mechanic of noticing things is fun, but that the game lacks progression, having nothing new after the first minute}. This might suggest that a more complex game could have been a better fit for this test session, as it would avoid frustration resulting from lack of novelty. In fact, although not so relevant in the context of this study, in which the game sessions were set up to last only a couple of minutes, maintaining player engagement in long testing sessions may demand more compelling game mechanics. One participant felt that \textit{the time to set an obstacle or enemy as noticed is too long}, which leads us to consider that in future studies these timings should be learned for each player, taking into account the player-dependent uncertainty of the eye tracker. The~experience was \textit{classified as immersive} by three participants, with one of them stating enthusiastically that \textit{eye tracking is amazing}. With the goal of emphasising potential points of improvement, Table~\ref{tab:negative_comments} provides a compilation of the complaints described in the two previous paragraphs, alongside the number of participants supporting each complaint. As the table highlights, most of the complaints are related to the limitations of current affordable eye tracking technology.
The most frequent complaint concerns the lower level of immersion and realism induced by the presence of visual effects used to highlight noticed elements. However, as aforementioned, these visual effects had a positive impact on the overall experience, rendering it more rewarding. Hence, future studies are required to better analyse this (at least apparent) trade-off between rewarding experience and game immersion/realism in the context of gaze-directed gameplay. \bgroup \def\arraystretch{1} \begin{table}[H] \centering \caption{Number of participants agreeing with a given complaint.} \begin{tabular}{lc} \toprule \textbf{Complaint} & \textbf{Nr. of Participants} \\ \midrule Impact of noticed-related visual effects on feeling of immersion/realism & 7 \\ \midrule Need for being as static as possible for proper eye tracking operation & 3 \\ \midrule Failures in registering enemies/obstacles as noticed & 2 \\ \midrule Time required for obstacles/enemies to be considered as noticed & 1 \\ \midrule Problems in tracking the eyes of people wearing glasses & 1 \\ \midrule Game lacks progression & 1 \\ \bottomrule \end{tabular} \label{tab:negative_comments} \end{table} \egroup To the question about whether the participants would consider the inclusion of an eye tracking camera in their gaming setup, the responses were mixed (see Table~\ref{tab:in_the_future}). From the entire group, two of the participants stated \textit{they would not do it}, with reasons such as \textit{it being another piece of hardware that has little application and would be quickly abandoned after the novelty effect wore off}. Fortunately, we expect eye tracking technology to become embedded in computing devices, reducing the limitations raised by these participants.
Three other participants stated that \textit{they would not in the current state but that they could try it in the future}, stating that \textit{depending on the game it could facilitate precision tasks such as aiming in FPS games or passing the ball to another player in a football game}. This shows that participants see the value of eye tracking as a means to map the player's and the avatar's attention processes. The reasons these participants presented for holding off on adopting the technology regarded its poor performance while wearing glasses and the fact that it was not stable or precise, which, allied with the calibration process, ruined the experience. These issues were already raised by participants in the responses to the two previous questions (please refer to the three previous paragraphs). The other five participants said \textit{they would adopt the technology}, stating that \textit{it was a new form of interaction, that opened new possibilities and created more immersive experiences}. \bgroup \def\arraystretch{1} \begin{table}[H] \centering \caption{Distribution of responses to the question ``Would you consider the inclusion of an eye tracking camera in your gaming setup?''. } \begin{tabular}{lc} \toprule \textbf{Response} & \textbf{Nr. of Participants} \\ \midrule No & 2 \\ \midrule Maybe, when eye tracking becomes more reliable & 3 \\ \midrule Yes & 5 \\ \bottomrule \end{tabular} \label{tab:in_the_future} \end{table} \egroup These answers, along with the previous questionnaires, showed that the use of eye tracking in games has both pros and cons. On the positive side, gaze-oriented gameplay delivered a more immersive and richer experience, providing better game flow. On the negative side, the~technology's limitations raised undesired feelings in the participants.
The calibration process and its results, along with some disbelief that eye tracking could have an important role in a computer game, are the main reasons for the participants being hesitant to adopt the technology. \section{Conclusions and Future Work} \label{chapter:conclusionsfuturework} This paper presented Zombie Runner, an endless runner FPS game, whose core mechanics and procedurally generated content are modulated by the player's gaze estimated with an affordable eye tracker. A set of testing sessions was carried out to assess the impact of eye tracking on the player's satisfaction when playing Zombie Runner. The results obtained from the testing sessions show that the use of eye tracking provides a more challenging and immersive experience to the player. These~results complement previous studies, which were mostly focused on performance-based metrics when using eye tracking as a direct control input. Conversely, Zombie Runner exploits eye tracking as a mechanism to match the avatar's attention model with that of the player. Moreover, Zombie Runner also exploits eye tracking to guide the procedural content generation of the game's environment, contributing in an original way to the emerging field of Experience-Driven Procedural Content Generation (EDPCG) \cite{yannakakis_togelius_2011}. During the evaluation, a strong correlation between problems that surfaced with the eye tracker calibration and participants' overall experience was observed. Participants for whom the hardware worked without major flaws reported better levels of satisfaction compared with participants for whom the calibration process was not perfect or took a longer time. Among the participants' complaints about the eye tracking technology, many were related to the need to keep the head mostly static during the play session, the calibration process requiring repetitions to meet the required accuracy, and the eye tracker's overall lack of precision.
These complaints show that affordable eye tracking technology still has to grow and develop until it reaches a state that can be accepted by the overall gaming community. However, the positive player experience when calibration was easily attained is a sign that the method will be a valuable asset for game designers as soon as the eye tracking technology~matures. As future work, we intend to validate the use of eye tracking in other types of video games and allow for free avatar movement. We also intend to extend this testing framework, including the philosophy behind the way the player's attention is integrated in the core gameplay, to games with other types of camera perspective, such as third-person games. Finally, we also intend to perform a~more intensive set of tests, enlarging the participant population, to obtain more robust statistics, in~particular to allow an in-depth correlation analysis between gaze patterns, game~progression data, player profiles, and GEQ scores. \vspace{6pt} \authorcontributions{J.A. participated in the design of the study, developed the software, participated in the analysis of the results, and participated in the preparation of the manuscript. P.S. participated in the design of the study, participated in the design of the developed software, participated in the analysis of the results, participated~in the preparation of the manuscript, and coordinated all activities.} \conflictofinterests{The authors declare no conflict of interest.} \bibliographystyle{mdpi} \reftitle{References}
\section{Membership Inference Attacks Evaluated} \label{sec:adaptiveattack} To (empirically) demonstrate the privacy of our system, we evaluate it against three main classes of MIAs. First, we evaluate our defense against the single-query and label-only MIA attacks introduced earlier. Specifically, we evaluate against the direct single-query attacks in Section~\ref{subsec: attack}, and for label-only attacks, we use the boundary attack for all three datasets and the data augmentation attack for CIFAR100. We describe these attacks in detail in Appendix~\ref{appendix:label}. Additionally, we evaluate our system against \emph{adaptive membership inference attacks}, introduced in the following. Song and Mittal~\cite{song2020systematic} emphasize the importance of placing the attacker in the last step of the arms race between attacks and defenses: the defender should consider adaptive attackers with knowledge of the defense to rigorously evaluate the performance of the defenses. Therefore, here we consider attacks that are tailored to our defense. As our defense leverages soft labels from the {Split-AI}\xspace ensemble to train a new model $F_{\theta_{\mathrm{II}}}$ in Self-Distillation, we need to analyze whether and how an attacker can also leverage the information about soft labels. We first note that an attacker is unable to interact directly with our {Split-AI}\xspace to estimate soft labels, since the prediction API executes queries on the model produced by the Self-Distillation component. Second, we expect that when the model provider finishes training the protected model $F_{\theta_{\mathrm{II}}}$ with soft labels obtained from {Split-AI}\xspace, it can safely delete the sub-models and soft labels of the training set to avoid inadvertently leaking information about the soft labels. However, an attacker can still aim to indirectly \emph{estimate} soft labels.
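The direct single-query attacks referenced above are, at their core, metric thresholding rules on a single prediction vector. As an illustrative sketch only (the function name and threshold value are hypothetical, not the paper's exact attack), a confidence-thresholding membership test looks like:

```python
import numpy as np

def confidence_attack(confidences, labels, threshold=0.9):
    """Toy direct single-query MIA: predict 'member' when the model's
    softmax confidence on the true label exceeds a fixed threshold.
    `confidences` is an (n, num_classes) array of model outputs."""
    conf_on_label = confidences[np.arange(len(labels)), labels]
    return conf_on_label >= threshold

# Overconfident predictions (typical of memorized members) vs. uncertain ones.
member_like = np.array([[0.98, 0.01, 0.01]])
nonmember_like = np.array([[0.40, 0.35, 0.25]])
print(confidence_attack(member_like, np.array([0])))     # [ True]
print(confidence_attack(nonmember_like, np.array([0])))  # [False]
```

In practice the threshold is tuned per class on shadow data; this sketch only conveys the decision rule's shape.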
As we assume that the attacker knows partial membership of the exact training set when evaluating membership privacy risks~(specifically, half of the whole training set), but cannot access the defender's non-member model indices ${Id}_{non}({\textbf{x}})$ for the training set, the attacker will generate new non-member model indices ${Id}_{non}({\textbf{x}})'$ for these known member samples, train a new shadow {Split-AI}\xspace ensemble, and use the shadow {Split-AI}\xspace to estimate soft labels of the target samples. The attacker can then use such soft labels as an additional feature to learn the difference in the target model's behavior on members and non-members, and launch MIAs on $F_{\theta_{\mathrm{II}}}$. The shadow {Split-AI}\xspace discussed in our paper is stronger than the original shadow models~\cite{shokri2017membership} since it is trained with exact knowledge of part of the training dataset. We design four adaptive direct single-query attacks\footnote{Our Table~\ref{tab:allattacks} shows that label-only attacks are weaker than direct single-query attacks on the undefended model. We have also designed adaptive multi-query label-only attacks against {SELENA}\xspace and evaluated them on the Purchase100 dataset; they perform better than the original label-only attacks but are weaker than the adaptive direct single-query attacks.}, including two NN-based attacks and two metric-based attacks, to leverage the information in the estimated soft labels. To clarify, $F_{\theta_{\mathrm{II}}}$ denotes the protected target model which answers the attacker's queries and $F_{\theta_{\mathrm{I}}}'$ denotes the attacker's shadow {Split-AI}\xspace. \textbf{MIAs based on NN and soft labels}: The first NN-based attack concatenates the soft labels obtained from $F_{\theta_{\mathrm{I}}}'$, the predicted confidence from $F_{\theta_{\mathrm{II}}}$ and the one-hot encoded class labels as features to train a neural network attack model~(denoted as $I_{\text{NN1}}$).
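The feature construction for $I_{\text{NN1}}$ can be sketched as follows. This is a minimal illustration of building the concatenated feature vector; the function name is ours, and the actual attack then trains a neural network classifier on such vectors:

```python
import numpy as np

def build_inn1_features(shadow_soft_label, target_confidence, label, num_classes):
    """Feature vector for the I_NN1-style adaptive attack: concatenation
    of (1) the soft label estimated by the shadow Split-AI ensemble,
    (2) the target model's prediction confidence vector, and
    (3) the one-hot encoded class label."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([shadow_soft_label, target_confidence, one_hot])

feat = build_inn1_features(
    shadow_soft_label=np.array([0.7, 0.2, 0.1]),   # from shadow Split-AI
    target_confidence=np.array([0.6, 0.3, 0.1]),   # from protected model
    label=0, num_classes=3)
assert feat.shape == (9,)  # 3 + 3 + 3 features per sample
```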
The second attack utilizes the {difference between} the estimated soft labels from $F_{\theta_{\mathrm{I}}}'$ and outputs from $F_{\theta_{\mathrm{II}}}$, and uses this difference as an input to the neural network architecture used by Nasr et al.~\cite{nasr2018machine}~(denoted as $I_{\text{NN2}}$). \textbf{MIAs based on distance between soft labels and predicted confidence}: Similar to previous metric-based attacks~\cite{song2020systematic}, an attacker may try to distinguish between members and non-members by leveraging the distance between the estimated soft labels from $F_{\theta_{\mathrm{I}}}'$ and the prediction confidence vectors from $F_{\theta_{\mathrm{II}}}$. We have: $$I_{\text{dist}}(F_{\theta_{\mathrm{II} }}(\textbf{x}), F_{\theta_\mathrm{I}}'(\textbf{x}),y ) = \mathds{1}\{ Dist(F_{\theta_{\mathrm{II}}}(\textbf{x}), F_{\theta_{\mathrm{I}}}'(\textbf{x})) \leq \tau_{(y)}\}$$ $$\text{or, } I_{\text{dist}}(F_{\theta_{\mathrm{II} }}(\textbf{x}), F_{\theta_\mathrm{I}}'(\textbf{x}),y ) = \mathds{1}\{ Dist(F_{\theta_{\mathrm{II}}}(\textbf{x}), F_{\theta_{\mathrm{I}}}'(\textbf{x})) \geq \tau_{(y)}\}$$ where we apply both a class-dependent threshold $\tau_{(y)}$ and a class-independent threshold $\tau$, and we report the highest MIA accuracy. In this work we consider the $L_2$ distance $I_{\text{L}_\text{2}\text{-}dist}$ and the cross-entropy loss $I_{\text{CE-}dist}$~(since the cross-entropy loss function is used for training our defense models). \section{Proof for {Split-AI}\xspace and {SELENA}\xspace against Direct, Single-Query Membership Inference Attack} \label{appendix:securityproof} \paragraph{Notation.} In this section, we use $x\gets X$ to denote that $x$ is sampled from a distribution $X$. We use $\mathrm{Supp}(X)$ to denote the support set of a random variable $X$.
By $TV(X,X')$ we denote the total variation distance between $X$ and $X'$, that is, $TV(X,X') = \sup_{S\subset \mathrm{Supp}(X)\cup\mathrm{Supp}(X')} \Pr[X\in S] - \Pr[X'\in S]$. \subsection{{Split-AI}\xspace's Privacy under Direct Single-query Attacks} \label{appendix:splitaidsq} \begin{definition}[Direct, Single-Query Membership Inference]\label{secGame} The single-query membership inference game is defined between an attacker $A$ and a learner $C$ and is parameterized by a number $n$, which is the number of training examples. \begin{enumerate} \item The attacker selects a dataset $X=\set{x_1,\dots,x_{2n}}$ and sends it to the learner. \item The learner selects a uniformly random Boolean vector $b=b_1,\dots, b_{2n}$ such that the Hamming weight of $b$ is exactly $n$. \item The learner constructs a dataset $S=\set{x_i; \forall i\in [2n], b_i=1}$ and learns a model $F_{\theta_I}$ using $S$ as the training set. \item The learner selects a random $i\in [2n]$ and sends $(x_i, F_{\theta_I}(x_i))$ to the adversary. \item The adversary outputs a bit $b'_i$. \label{def:securitygame} \end{enumerate} The advantage of $A$ in breaking the security game above is $\mathsf{SQMI}(A,C,n)=\mathbf{E}[1-|b_i-b_i'|]$, where the expectation is taken over the randomness of the adversary and the learner. \end{definition} \begin{remark} We can define a variant of the security game of Definition \ref{secGame} for a fixed dataset $X$. That is, instead of $X$ being chosen by the adversary, we define the game for a given $X$. We use $\mathsf{SQMI}(A,C,X)$ to denote the success of the adversary in the security game with the dataset fixed to $X$. \end{remark} \begin{theorem}\label{thm:single_query_split_train} Consider a learner $C_{ST}$ that uses Algorithm \ref{alg:A1}.
For any direct, single-query membership inference adversary $A$ we have $$\mathsf{SQMI}(A,C_{ST},n) = 50\%.$$ \end{theorem} \begin{proof}[Proof] We show that for any adversary's choice of $i\in [2n]$ in step 4 of the security game, the views of the adversary in the two cases $b_i=0$ and $b_i=1$ are statistically identical. Note that the only information that the adversary receives is $r_i=F_{\theta_I}(x_i)$. We show that the distributions of the two random variables $r_i \mid b_i=0$ and $r_i \mid b_i=1$ are identical. Let $U_i$ be a random variable corresponding to the subset of trained models that do not contain $x_i$ in their training set (in particular, $|U_i|=L$ if $b_i=1$ and $|U_i|=K$ when $b_i=0$). Also, let $U$ denote a random variable corresponding to a subset of $L$ models that do not contain a random $x_k$ in their training data, where $k$ is selected from $\set{j\in[2n]; b_j=1}$ uniformly at random. We first note that $U\mid b_i=0$ and $U_i\mid b_i=1$ are identically distributed random variables. Specifically, they are both an ensemble of $L$ models trained on a uniformly random subset of a dataset $T\subset\set{x_1,\dots,x_{i-1},x_{i+1},\dots,x_{2n}}$ where $|T|=n-1$. Now let us calculate the distribution of the response when $b_i=1$ and when $b_i=0$. For $b_i=1$ we have \begin{align*} (r_i \mid b_i=1) \equiv (\frac{1}{L}\cdot \sum_{F \in U_i} F(x_i) \mid b_i=1) \end{align*} For $b_i=0$ we have \begin{align*} (r_i \mid b_i=0)\equiv (\frac{1}{L}\cdot \sum_{F \in U} F(x_i) \mid b_i=0) \end{align*} Now, since $U_i\mid b_i=1$ and $U\mid b_i=0$ are distributed identically, the averaged responses on the query point are also identically distributed. Therefore, $r_i \mid b_i=0$ and $r_i \mid b_i=1$ are identically distributed.
Note that it is crucial that the adversary only queries the point $x_i$, as otherwise we would have to take the summation over $U\mid b_i=1$ and $U\mid b_i=0$, which are not identically distributed (the case of $b_i=1$ could have $x_i$ in the training set of the $L$ models). Since we prove that $r_i\mid b_i=1$ and $r_i\mid b_i=0$ are identical, the adversary cannot distinguish them and the success probability of the adversary is exactly $0.5$. The intuitive explanation for this proof is that the distribution of the output of this algorithm on a given point $x$ is independent of the presence of $x$ in the training set, as we will not use models that are trained with $x$ to answer queries, even if $x$ is in the training set. \end{proof} \begin{remark}[A stronger security game and theorem] Note that there is a worst-case variant of Definition \ref{secGame} where in step 4, instead of the challenger, the adversary selects $i\in [2n]$. This is a stronger security game as the adversary can select the worst example in the dataset. However, Theorem \ref{thm:single_query_split_train} remains unchanged in this game. This is because the proof applies to any $i\in[2n]$ and does not require $i$ to be chosen at random. As we will see below, we have another theorem (Theorem \ref{thm:distill}) that considers the privacy of end-to-end {SELENA}\xspace, for which the guarantee only holds for the weaker definition. \label{remark:strongergame} \end{remark} \subsection{{SELENA}\xspace's Privacy under Direct Single-query Attacks} \label{appendix:selenadsq} \begin{definition}[stable distillation] A distillation algorithm $Q\colon M_s\times \mathrm{AUX}\to M_o$ is a potentially randomized algorithm with access to a source model $m_s\in M_s\subseteq Y^X$ and some auxiliary information, and returns an output model $m_o\in M_o \subset Y^X$.
We define the notion of stability for a distillation algorithm on a point $x\in X$ and a joint distribution $\mathcal{M}$ on $M_s\times \mathrm{AUX}$ as follows: $$\mathsf{stability}(Q,\mathcal{M},x)=1-TV(Q(\mathcal{M})[x],\mathcal{M}[x]).$$ Moreover, we say the algorithm $Q$ has $(\alpha,\beta)$-stability on a distribution $\mathcal{M}$ and a dataset $X$ iff $$\Pr_{x\gets X}[\mathsf{stability}(Q,\mathcal{M},x)\leq 1-\alpha ]\leq \beta.$$ \label{def:stabledistillation} \end{definition} \paragraph{Example.} If the distillation algorithm $Q$ ensures that for a specific point $x$ and for all $m_s\in M_s$ we have $Q(m_s)[x] = m_s[x]$, then $Q$ has stability $1$ on point $x$ for all distributions $\mathcal{M}$ defined on $M_s$. \begin{remark} The distillation algorithm $Q$ could also depend on an additional dataset that is correlated with $m_s$ as the auxiliary information. For instance, in our self-distillation algorithm, the distillation is done through the same training set that was used to train $m_s$. In this case, we are interested in the joint distribution $\mathcal{M}$ that consists of a model $m_s$ as the first element and a dataset $D$ as the second element, so that $m_s$ is a model trained on dataset $D$. \end{remark} Now we state a corollary of our Theorem \ref{thm:single_query_split_train} about the privacy of the distilled models obtained from the output of the {Split-AI}\xspace operation. \newtheorem{corollary}[theorem]{Corollary} \paragraph{Notation.} For a learner $C$ and a dataset $X$, we use $\mathcal{M}_{C,X}$ to denote the distribution of models obtained from the following process: first select a random subset $S$ of size $|X|/2$, then train a model $m$ on that subset using learner $C$, and output $(m,S)$. For a learner $C$ and a distillation algorithm $Q$, we use $QoC$ to denote a learner that first uses $C$ to train a model, then uses the distillation algorithm $Q$ to distill that model, and returns the distilled model.
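For discrete output distributions, the total variation distance in the stability definition reduces to half the $L_1$ distance between probability vectors. A minimal numeric sketch (function name is ours) illustrating the quantity $1-TV$ that the definition calls stability:

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions:
    TV(P, Q) = sup_S P(S) - Q(S) = 0.5 * sum_x |P(x) - Q(x)|."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# If distillation leaves the output distribution on a query unchanged,
# TV = 0 and stability = 1 - TV = 1 (as in the Example above).
p = [0.5, 0.3, 0.2]
assert total_variation(p, p) == 0.0

# A small shift in probability mass gives a proportionally small TV.
q = [0.6, 0.3, 0.1]
assert abs(total_variation(p, q) - 0.1) < 1e-12
```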
\begin{theorem}\label{thm:distill} Let $C$ be an arbitrary learner. Assume that for a set of samples $X$ the distillation algorithm $Q$ has $(\alpha,\beta)$-stability on the distribution $\mathcal{M}_{C,X}$ and dataset $X$. Then, for any adversary $A$ we have $$\mathsf{SQMI}(A,{QoC},X) \leq \mathsf{SQMI}(A,{C},X) + \alpha + \beta.$$ \end{theorem} \begin{proof} Consider an adversary $A$ that, given a response $QoC[x_i]$ on query $x_i\in X$, outputs a bit $b'_i=A(QoC[x_i])$. Let $E$ be an event defined on $X$ such that $E(x)=1$ iff $$\mathsf{stability}(Q,\mathcal{M}_{C,X},x)\geq 1-\alpha.$$ For a point $x_i$ such that $E(x_i)=1$ we have \begin{align*}&\Pr\big[A(QoC[x_i])=b_i\big]\leq \Pr\big[QoC[x_i]\neq C[x_i]\big]\\ &~~ +\Pr\big[A(C[x_i]) =b_i \mid C[x_i]=QoC[x_i]\big]\cdot \Pr\big[QoC[x_i]=C[x_i]\big]\\ &\leq \alpha + \Pr\big[A(C[x_i])=b_i \big] \end{align*} Therefore, we have \begin{align*}&\Pr_{x_i\gets X}\big[A(QoC[x_i])=b_i\big]\\ &\leq \Pr_{x_i\gets X}\big[A(QoC[x_i])=b_i \mid E(x_i)\big]\cdot \Pr_{x_i\gets X}[E(x_i)] + \Pr_{x_i\gets X}[\bar{E}(x_i)]\\ &\leq \Pr_{x_i\gets X}\big[A(QoC[x_i])=b_i \mid E(x_i)\big]\cdot \Pr_{x_i\gets X}[E(x_i)] + \beta\\ &\leq \Big(\Pr_{x_i\gets X}\big[A(C[x_i])=b_i \mid E(x_i)\big] + \alpha\Big) \cdot \Pr_{x_i\gets X}[E(x_i)] + \beta\\ & \leq \Pr_{x_i\gets X}\big[A(C[x_i])=b_i\big] + \alpha + \beta\\ &=\mathsf{SQMI}(A,{C},X) + \alpha + \beta. \end{align*} \end{proof} Now we are ready to state a corollary of Theorems~\ref{thm:distill} and \ref{thm:single_query_split_train} for the full pipeline of {Split-AI}\xspace followed by Self-Distillation; the following corollary directly follows from these two theorems. \begin{corollary}\label{cor:last} Let $C_{ST}$ be a learner that uses the {Split-AI}\xspace algorithm \ref{alg:A1}. Also, let $Q_{SD}$ be a distiller that uses the self-distillation algorithm.
If $Q_{SD}$ is $(\alpha,\beta)$-stable for a dataset $X$ and the distribution $\mathcal{M}_{C_{ST},X}$, then, for any adversary $A$ we have $$\mathsf{SQMI}(A,Q_{SD}oC_{ST},X) \leq 0.5 + \alpha + \beta.$$ \end{corollary} \subsection{Discussion of {Split-AI}\xspace and {SELENA}\xspace for Correlated Points} \label{appendix:correlationdiscuss} \begin{remark}[How private is {SELENA}\xspace against multi-query attacks?] The above theoretical analysis of {SELENA}\xspace is only valid for single-query direct attacks. One might wonder whether we can prove a similar result for the privacy of {SELENA}\xspace against multi-query attacks. Unfortunately, we cannot prove a result as general as Corollary \ref{cor:last} for multi-query attacks. In fact, there exist datasets for which {SELENA}\xspace cannot obtain provable privacy. For instance, imagine a dataset that contains two points $(x,0)$ and $(x',1)$ such that $x$ and $x'$ are almost the same points, i.e., $x\approx x'$, yet they are labeled differently in the training set ($x$ is labeled as 0 and $x'$ as 1). In this scenario, the adversary can obtain information about the membership of $x$ and $x'$ by querying both points. In particular, if only one of $x$ and $x'$ is selected as a member, then we expect the results of the queries on $x$ and $x'$ to be the same and equal to the label of the one selected as a member. However, we argue that this lack of privacy for certain datasets will not manifest in real-world examples, as such high correlation does not frequently appear in real-world datasets. Our empirical analysis of {SELENA}\xspace is consistent with this claim. We defer the theoretical analysis of {SELENA}\xspace for multi-query attacks on datasets that satisfy certain assumptions to future work.
\label{remark} \end{remark} \paragraph{Specific study of possible leakage in Remark 7.} To study the possible leakage described in Remark~\ref{remark} on {Split-AI}\xspace, we investigate the effect of querying correlated points. In particular, we consider pairs $(x, x')$, where $x$ is a member and $x'$ is a close non-member. Then, we measure the difference between the outputs of the $L$ sub-models in $Id_{non}(x)$ and of $L$ random sub-models for a non-member sample $x'$. This way, we obtain an attack which shows the magnitude of the privacy loss due to the leakage described in Remark~\ref{remark}. \paragraph{Experiment setup.} We design the following experiment on the CIFAR100 dataset. We use the $L_2$ distance to measure the correlation between member samples and non-member samples. For each training sample $x$, we find the sample $x'$ in the test set which has the least $L_2$ distance to $x$ but is labeled differently. For each correlated pair $(x, x')$, we query {Split-AI}\xspace on $x'$ twice: the first query uses the $L$ sub-model indices defined by $Id_{non}(x)$ and the second query uses $L$ random sub-models. We denote these two queries by $F_{\theta_\mathrm{I}}(x', Id_{non}(x))$ and $F_{\theta_\mathrm{I}}(x', rnd)$, respectively. Now we can leverage the MIAs evaluated in Section~\ref{sec: eval}: treating $F_{\theta_\mathrm{I}}(x', Id_{non}(x))$ as a member and $F_{\theta_\mathrm{I}}(x', rnd)$ as a non-member, we use these predictions along with the label of $x'$ as input to the direct single-query attacks~(due to their strong performance on undefended models). \begin{figure}[ht] \centering \includegraphics[width=3in]{imgs/correlated.pdf} \caption{Given the $L_2$ distance threshold for correlated pairs $(x, x')$ as the x-axis, we plot the fraction of pairs with distance less than that threshold.
We also plot the average attack accuracy among paired queries for that distance.} \label{fig:ratioandattack} \end{figure} \paragraph{Result.} We present the result of the correlated point attack as a function of how close these correlated pairs are, i.e., the distance $L_2(x, x')$. For $L_2$ distances from 1 to 15,\footnote{Image pixels are in the range [0,1].} we evaluate the ratio of member samples that satisfy this $L_2$ restriction and the attack success rate, and plot the results in Figure~\ref{fig:ratioandattack}. We can see that for $L_2$ distances larger than 6, the attack performance is close to a random guess. For $L_2$ distances less than 6, we can see that as the $L_2$ distance restriction becomes smaller, the attack accuracy tends to increase. This is consistent with what we discuss in Remark~\ref{remark}. However, we should also note that the ratio of pairs that satisfy the restriction is close to 0. Specifically, for $L_2=1$, there are only 6 member samples out of 50000 member samples that satisfy this restriction, which is consistent with our discussion in Remark~\ref{remark} that such highly correlated pairs are rare in real-world datasets. \paragraph{Can our NN-based attacks (in Section~\ref{sec:attackdefense} and Section~\ref{sec:adaptiveattack}) leverage the correlation leakage?} We emphasize that our NN-based attacks described in Section \ref{sec:attackdefense} and Section~\ref{sec:adaptiveattack} have all the required information for leveraging the correlation leakage described in this subsection. Our attacks have access to a large fraction of the dataset together with the membership information and the prediction vector on the target model. Therefore, in principle, the NN-based attack could learn to perform the following: 1) on a given point $x$, find the most correlated point $x'$ in the provided dataset; 2) calculate the expected prediction vector for querying $x'$ on models trained with and without $x$.
3) Run the attack described above. We cannot prove that the neural network performs all these steps, but it has all the power to do so. \section{Evaluation of Our Architecture's Two Components} \label{appendix:components} In Section~\ref{sec: eval}, we evaluated the end-to-end performance of our framework. Since our framework is a composition of two components, we next evaluate these components individually to study their properties. We first analyze the utility and defense performance of {Split-AI}\xspace in isolation, showing that the component achieves test accuracy similar to that of conventional models while mitigating direct single-query attacks. Second, we demonstrate the necessity of adaptive inference in the {Split-AI}\xspace design, and show that replacing our design choices with a baseline approach results in sub-optimal performance. Third, we demonstrate the necessity of the Self-Distillation component, by showing that the {Split-AI}\xspace component alone is vulnerable to indirect attacks while the combined framework provides a strong defense. \subsection{Performance of {Split-AI}\xspace} \label{appendix:splitai} \textbf{{Split-AI}\xspace $F_{\theta_\mathrm{I}}$ has similar test accuracy compared with the undefended model and reduces the accuracy of the direct single-query attack close to a random guess}. As we discussed via a proof of privacy based on a security game in Appendix~\ref{appendix:securityproof}, {Split-AI}\xspace alone should mitigate direct single-query attacks (reducing the attack accuracy close to a random guess). Here we experimentally evaluate this property of $F_{\theta_{\mathrm{I}}}$. Table~\ref{tab:SplitEnsembleF} shows that the {Split-AI}\xspace mechanism successfully mitigates the direct single-query attacks discussed in Section~\ref{subsec: attack}, with approximately $50\%$ attack accuracy, which is close to a random guess.
From a utility perspective, in the worst case, i.e., Purchase100, the test accuracy of $F_{\theta_{\mathrm{I}}}$ is just 0.2\% lower than that of the undefended model, which is negligible. For Texas100, the test accuracy of $F_{\theta_{\mathrm{I}}}$ is even $3.5\%$ higher, as the ensemble benefits from the average of $L$ models. \begin{table}[ht] \caption{Comparison of {Split-AI}\xspace and undefended model against direct single-query attacks.} \label{tab:SplitEnsembleF} \centering \begin{tabular}{ccccc} \toprule dataset &defense &\tabincell{c}{acc on \\training\\ set} &\tabincell{c}{acc on\\test set} & \tabincell{c}{best\\attack}\\ \midrule \multirow{2}{*}{\tabincell{c}{Purchase100}} &No &99.98\% &83.2\% &67.3\% \\ &$F_{\theta_\mathrm{I}}$ &82.6\% &83.0\% &{50.3\%}\\ \midrule \multirow{2}{*}{Texas100} &No &79.3\% &52.3\% &66.0\%\\ &$F_{\theta_\mathrm{I}}$ &56.1\% & 55.9\% &50.7\%\\ \midrule \multirow{2}{*}{\tabincell{c}{CIFAR100}} &No &99.98\% &77.0\% &74.8\%\\ &$F_{\theta_\mathrm{I}}$&77.9\% &77.7\% &50.8\% \\ \bottomrule \end{tabular} \end{table} \subsection{Necessity of Adaptive Inference in {Split-AI}\xspace} \label{subsub: adapen} Next, we evaluate the necessity of our adaptive inference strategy in {Split-AI}\xspace. To do so, we make a design change in the ensemble that represents a baseline approach for comparison: we apply the strategy of averaging the outputs of all $K$ models for all input samples. We evaluate this choice on the three datasets against the direct single-query attack with $K=25$, $L=10$.
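The contrast between the two inference strategies can be sketched as follows. This is an illustrative toy, not the paper's implementation: the "sub-models" are hypothetical stand-ins that return fixed probability vectors, and the function names are ours. Adaptive inference averages only the $L$ sub-models whose training split excluded the queried member, while the baseline averages all $K$ sub-models:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 4, 2

# Toy sub-models: each returns a fixed softmax-like vector for any query
# (stand-ins for the K models trained on different data splits).
models = [lambda x, v=v: v for v in rng.dirichlet(np.ones(3), size=K)]

def split_ai_predict(x, non_member_ids=None):
    """Adaptive inference (AI): for a training member, average only the L
    sub-models whose training split excluded x (given by Id_non(x));
    for a non-member query, average L randomly chosen sub-models."""
    if non_member_ids is None:  # non-member query: pick L models at random
        non_member_ids = rng.choice(K, size=L, replace=False)
    return np.mean([models[i](x) for i in non_member_ids], axis=0)

def average_all_predict(x):
    """Baseline (AOAO): average all K sub-models, including those trained
    on x -- their memorized outputs reopen the generalization gap."""
    return np.mean([m(x) for m in models], axis=0)

# Both return valid probability vectors; the privacy difference comes
# from WHICH sub-models contribute, not from the averaging itself.
p = split_ai_predict(None, non_member_ids=[0, 1])
assert p.shape == (3,) and abs(p.sum() - 1) < 1e-9
```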
\begin{table}[ht] \caption{Comparison of adaptive inference~(AI) and average of all outputs~(AOAO) strategy against direct single-query attack.} \label{tab:adaptiveensemble} \centering \begin{tabular}{ccccc} \toprule dataset&ensemble & \tabincell{c}{acc on \\training \\set} & \tabincell{c}{acc on \\test set}& \tabincell{c}{best \\attack}\\ \midrule \multirow{2}{*}{\tabincell{c}{Purchase\\100}} & AI &82.6\% & 83.0\% &50.3\%\\ & AOAO &99.9\% &83.4\% &62.0\%\\ \midrule \multirow{2}{*}{Texas100}&AI &56.1\% &55.9\% &50.7\%\\ &AOAO &81.9\% &56.6\% &67.7\%\\ \midrule \multirow{2}{*}{CIFAR100} &AI &77.9\% &77.7\% &50.8\%\\ &AOAO &99.98\% &78.1\% &69.2\%\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:adaptiveensemble} presents the comparison of adaptive inference with the baseline strategy of averaging all sub-model outputs. \textbf{While the adaptive inference approach reduces the direct single-query attack to a random guess, the behavior of the average-of-all-outputs strategy on members and non-members is very different.} For example, a generalization gap still exists with the average-of-all-outputs strategy: 16.5\% on Purchase100, 25.3\% on Texas100 and 21.88\% on CIFAR100. The best attack accuracy against the average-of-all-outputs strategy is higher than 60.0\% on all three datasets, which still indicates a severe membership inference threat. Adaptive inference is thus needed in {Split-AI}\xspace to achieve a good defense against direct single-query attacks. Since Self-Distillation needs to transfer knowledge from a source model with good defensive abilities, {Split-AI}\xspace is a key component of our system. \subsection{Necessity of Self-Distillation} \label{appendix_subsec:necessityofdistill} We have stated the need to introduce Self-Distillation as a second component to overcome the weaknesses of {Split-AI}\xspace $F_{\theta_\mathrm{I}}$ in Section~\ref{subsec:sd}.
We now demonstrate this by showing that the potential membership inference risks in $F_{\theta_\mathrm{I}}$ are mitigated by our final protected model from Self-Distillation, $F_{\theta_{\mathrm{II}}}$. Towards this end, we focus on the indirect single-query attack, in which the attacker adds noise to the target sample and queries the noisy sample. We generate the noisy sample by randomly flipping one feature for binary inputs and randomly increasing or decreasing one pixel by 1 for images. Results are presented in Table~\ref{tab:noiseindirect}. \begin{table}[ht] \caption{Comparison for {Split-AI}\xspace $F_{\theta_\mathrm{I}}$ and {SELENA}\xspace $F_{\theta_{\mathrm{II}}}$ against indirect single-query attack.} \centering \label{tab:noiseindirect} \begin{tabular}{cccccc} \toprule dataset &model &\tabincell{c}{noisy\\ data?} &\tabincell{c}{acc on \\training\\ set} &\tabincell{c}{acc on \\test set} & \tabincell{c}{best \\ attack}\\ \midrule \multirow{4}{*}{\tabincell{c}{Purchase\\100}} &\multirow{2}{*}{$F_{\theta_\mathrm{I}}$} &no &82.6\% &83.0\% & 50.3\%\\ & &yes &99.1\% & 83.0\% & 60.8\% \\ &\multirow{2}{*}{$F_{\theta_{\mathrm{II}}}$}&no &82.7\% &79.3\% & {53.3\%}\\ & &yes &82.2\% & 79.3\% & 53.1\% \\ \midrule \multirow{4}{*}{\tabincell{c}{Texas100}} &\multirow{2}{*}{$F_{\theta_\mathrm{I}}$} &no &56.1\% & 55.9\% &{50.7\%}\\ & & yes & 79.3\% & 55.9\% &64.1\% \\ &\multirow{2}{*}{$F_{\theta_{\mathrm{II}}}$} &no &58.8\% & 52.6\% & {54.8\%} \\ & & yes & 57.9\% & 52.6\% &53.2\%\\ \midrule \multirow{4}{*}{\tabincell{c}{CIFAR100}} &\multirow{2}{*}{$F_{\theta_\mathrm{I}}$} &no &77.9\% &77.7\% &50.8\% \\ & & yes &99.6\% &78.3\% &66.0\%\\ &\multirow{2}{*}{$F_{\theta_{\mathrm{II}}}$} &no &78.1\% &74.6\% &55.1\% \\ & & yes & 78.1\% &74.6\% &55.0\% \\ \bottomrule \end{tabular} \end{table} While the indirect single-query attack success against $F_{\theta_{\mathrm{I}}}$ is high (60.8\% for Purchase100, 64.1\% for Texas100 and 66.0\% for CIFAR100), the membership privacy for
$F_{\theta_{\mathrm{II}}}$ does not degrade under such indirect attacks. Recall that Self-Distillation is done by using only one exact query for each training sample and applying conventional training with the resulting soft labels to train the protected model $F_{\theta_{\mathrm{II}}}$. Thus \textbf{the MIA success against the protected model $F_{\theta_{\mathrm{II}}}$ (from Self-Distillation) using noisy data is not higher than using clean data}. Therefore, Self-Distillation solves a key privacy challenge faced by {Split-AI}\xspace and is a necessary component in our framework. Furthermore, the Self-Distillation approach also solves the issues of computational overhead in the inference stage and the replay attack, as discussed in Section~\ref{subsec:sd}. \section{Label-Only Attacks} \label{appendix:label} We analyze the boundary attack for all datasets and the data augmentation attack for CIFAR100. \emph{Boundary attack:} we use the CW white-box attack~\cite{carlini2017towards} for the computer vision dataset (CIFAR100 in our paper), as other black-box attacks need to make multiple queries on the target model and cannot achieve better performance than CW white-box attacks, as shown by Choo et al.~\cite{choo2020label}. We have $$I_{\text{CW}}(F, \textbf{x}, y)= \mathds{1}\{\textit{adv-dist}_{\mathrm{CW}}(\textbf{x}) \geq {\tau_{(y)}}\}$$ For the binary datasets considered in our work, the CW attack is not very successful due to the binary features. The only possible feature values are $0/1$; thus a successful adversarial example requires pushing a feature value of 1 below 0.5 or a feature value of 0 above 0.5, which is a big jump; otherwise the rounded noisy sample is likely to be identical to the target sample. Instead, we introduce noise in the target sample by randomly flipping a threshold number of features~\cite{choo2020label, li2020label}.
Given a threshold on the number of flipped features, we generate hundreds of noisy samples for each target sample to query the model. We then perform an attack based on the percentage of correct predictions on the noisy samples to estimate the boundary distance. This is based on the intuition that for samples which are far from the classification boundary, the samples around them are likely to be correctly classified. Hence the metric of correctness percentage on noisy samples can be used to estimate the boundary distance. We vary the number of flipped features from 1 to 30 for Purchase100 and from 1 to 100 for Texas100~(we find that these ranges already provide a large enough search space for the optimal threshold, because continuing to increase the threshold lowers attack accuracy as the member samples become too noisy). We report the best attack accuracy among these numbers. $$I_{\text{random-noise}}(F, \textbf{x}, y)= \mathds{1}\left\{\frac{\sum_{\textbf{x}' \in N(\textbf{x})} corr(\textbf{x}')}{|N(\textbf{x})|} \geq {\tau_{(y)}}\right\}$$ where $N(\textbf{x})$ denotes the set of noisy samples generated around $\textbf{x}$ and $corr(\textbf{x}')$ indicates a correct prediction on $\textbf{x}'$. \emph{Data Augmentation Attack:} The data augmentation attack is based on the augmentation technique that we use to train the model. During training, we first perform image padding and cropping, and then perform horizontal flipping with a $0.5$ probability to augment the training set. An attacker will similarly query all possible augmented results of a target image sample. For example, if the padding size is 4 for left/right and up/down, and the size of the cropped image is the same as the original image: considering left/right after cropping, there are $(4+4+1)$ possible choices; considering up/down after cropping, there are $(4+4+1)$ possible choices; considering horizontal flipping, there are 2 possible choices. Therefore, the total number of queries for a target image is $9\times9\times2 = 162$.
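The correctness-percentage computation for the random-noise boundary attack and the augmentation-query enumeration just described can be sketched as follows. The prediction function, the noise budget, and the number of noisy copies are illustrative placeholders, not our exact attack configuration:

```python
import numpy as np
from itertools import product

def flip_score(predict, x, y, n_flips, n_noisy=100, rng=None):
    """Random-noise boundary estimate for binary features: flip n_flips
    randomly chosen features in each of n_noisy copies of x and return the
    fraction still classified as y (members are expected to score high)."""
    rng = rng or np.random.default_rng(0)
    noisy = np.tile(x, (n_noisy, 1))
    for row in noisy:
        idx = rng.choice(x.size, size=n_flips, replace=False)
        row[idx] = 1 - row[idx]          # flip the selected binary features
    return np.mean([predict(z) == y for z in noisy])

# Augmentation enumeration for images: with 4-pixel padding, the crop offset
# ranges over 9 horizontal x 9 vertical positions, times 2 for the flip bit.
pad = 4
augmentations = list(product(range(2 * pad + 1), range(2 * pad + 1), (0, 1)))
assert len(augmentations) == 162  # 9 * 9 * 2 queries per target image
```

The per-class threshold $\tau_{(y)}$ is then tuned on the attacker's known members and non-members, exactly as for the other metric-based attacks.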
As the target model is more likely to correctly classify the augmented samples of members than those of non-members, only target samples with sufficiently many correctly classified queries will be identified as members. This attack is important as Choo et al.~\cite{choo2020label} show that data augmentation attacks~(specifically those based on image translation) can achieve higher performance than CW attacks. We have $$I_{\text{data-augmentation}}(F, \textbf{x}, y)= \mathds{1}\left\{\frac{\sum_{\textbf{x}' \in A(\textbf{x})} corr(\textbf{x}')}{|A(\textbf{x})|} \geq {\tau_{(y)}}\right\}$$ where $A(\textbf{x})$ denotes the set of augmented versions of $\textbf{x}$. \begin{table*}[ht] \caption{Ablation study on architectures for Purchase100. The first column describes the model architecture in the format of (activation function, width, depth). AdvReg refers to adversarial regularization. The last column is the highest attack accuracy for each row, i.e. for a specific defense on one dataset, the highest attack accuracy that MIAs can achieve, which gives an overview of comparison: the lower the best attack accuracy, the lower the membership inference threat.
For each dataset, the defense which has the lowest corresponding attack accuracy is bold in the column of best direct single-query attack, best label-only and best attack.} \label{tab:ablationpurchase} \centering \begin{tabular}{cccccccc} \toprule \tabincell{c}{architectures\\ (activation function, \\width, depth)}&defense &\tabincell{c}{acc on \\training set} &\tabincell{c}{acc on \\test set} &\tabincell{c}{best direct \\single-query\\ attack}&\tabincell{c}{best\\label-only \\attack}&\tabincell{c}{best adaptive \\attack} &\tabincell{c}{best attack}\\ \midrule \multirow{4}{*}{Tanh, 1, 4} &None &99.98\% &83.2\% &{67.3\%} &65.8\% &N/A &67.3\%\\ &MemGuard &99.98\% &83.2\% &58.7\% &{65.8\%} &N/A &65.8\%\\ &{AdvReg} &91.9\% &78.5\% &57.3\% &{57.4\%} &N/A &57.4\%\\ &\textbf{{SELENA}\xspace} &82.7\% &79.3\% &\textbf{53.3\%} &\textbf{53.2\%} &{54.3\%} &\textbf{54.3\%}\\ \midrule \multirow{4}{*}{Tanh, 1, 3} &None &100.0\% &84.9\% &{68.2\%} &65.9\% &N/A &68.2\%\\ &MemGuard &100.0\% &84.9\% &57.6\% &{65.9\%} &N/A &65.9\%\\ &{AdvReg} &89.2\% &78.2\% &{56.6\%} &56.5\% &N/A &56.6\%\\ &\textbf{{SELENA}\xspace} &83.9\% &81.1\% &\textbf{52.5\%} &\textbf{52.6\%} &{53.4\%} &\textbf{53.4\%}\\ \midrule \multirow{4}{*}{Tanh, 1, 5} &None &99.8\% &81.4\% &{66.7\%} &65.7\% &N/A &66.7\%\\ &MemGuard &99.8\% &81.4\% &59.5\% &{65.7\%} &N/A &65.7\% \\ &{AdvReg} &91.7\% &77.3\% &58.2\% &{58.4\%} &N/A &58.4\%\\ &\textbf{{SELENA}\xspace} &82.6\% &78.8\% &\textbf{54.5\%} &\textbf{54.9\%} & {56.2\%} &\textbf{56.2\%}\\ \midrule \multirow{4}{*}{Tanh, 0.5, 4} &None &99.9\% &79.9\% &{67.9\%} &66.7\% &N/A &67.9\%\\ &MemGuard &99.9\% &79.9\% &60.2\% &{66.7\%} &N/A &66.7\%\\ &{AdvReg} &92.8\% &77.6\% &58.8\% &{58.9\%}&N/A &58.9\%\\ &\textbf{{SELENA}\xspace} &82.5\% &77.8\% &\textbf{53.7\%} &\textbf{53.6\%} &{55.0\%} &\textbf{55.0\%}\\ \midrule \multirow{4}{*}{Tanh, 2, 4} &None &100.0\% &84.4\% &{70.7\%} &67.6\% &N/A &{70.7\%}\\ &MemGuard &100.0\% &84.4\% &58.7\% &{67.6\%} &N/A &67.6\%\\ &{AdvReg} &90.7\% 
&77.6\% &57.1\% &{57.2\%}&N/A &57.2\%\\ &\textbf{{SELENA}\xspace} &83.6\% &80.5\% &\textbf{54.5\%} &\textbf{54.9\%} &{56.0\%} &\textbf{56.0\%}\\ \midrule \multirow{4}{*}{ReLU, 1, 4} &None &99.2\% &79.7\% &{63.6\%} &63.3\% &N/A &63.6\%\\ &MemGuard &99.2\% &79.7\% &59.4\% &{63.3\%} &N/A &63.3\%\\ &{AdvReg} &92.4\% &76.8\% &58.4\% &{58.6\%} &N/A &58.6\%\\ &\textbf{{SELENA}\xspace} &82.5\% &77.7\% &\textbf{{54.3\%}} &\textbf{53.9\%} &53.7\% &\textbf{54.3\%}\\ \bottomrule \end{tabular} \end{table*} \section{Experiment Setup} Here we introduce the datasets, the model architectures, and the hyper-parameter settings in more detail. \label{appendix:setup} \subsection{Dataset} We use three benchmark datasets widely used in prior works on MIAs: \textbf{CIFAR100}: This is a benchmark dataset used to evaluate image classification algorithms~\cite{krizhevsky2009learning}. CIFAR100 is composed of $32\times 32$ color images in 100 classes, with 600 images per class. For each class label, 500 images are used as training samples, and the remaining 100 images are used as test samples. \textbf{Purchase100}: This dataset is based on Kaggle's Acquire Valued Shopper Challenge,\footnote{\href{https://www.kaggle.com/c/acquire-valued-shoppers-challenge}{https://www.kaggle.com/c/acquire-valued-shoppers-challenge}.} which contains shopping records of several thousand individuals. We obtained a preprocessed and simplified version provided by Shokri et al.~\cite{shokri2017membership}. This dataset is composed of 197,324 data samples with 600 binary features. Each feature corresponds to a product and represents whether the individual has purchased it or not. This dataset is clustered into 100 classes corresponding to purchase styles.
\textbf{Texas100}: This dataset is based on the Hospital Discharge Data public use files with information about inpatient stays in several health facilities released by the Texas Department of State Health Services from 2006 to 2009.\footnote{\href{https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm}{https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm}.} Each data record contains external causes of injury, the diagnosis, the procedures the patient underwent, and some generic information. We obtain a preprocessed and simplified version of this dataset provided by Shokri et al.~\cite{shokri2017membership}, which is composed of 67,330 data samples with 6,170 binary features. This dataset is used to classify the 100 most frequently used procedures. \subsection{Target Models} \label{sub: target} For CIFAR100, we use ResNet-18~\cite{he2016deep}, which is a benchmark machine learning model widely used in computer vision tasks. We adopt the cross-entropy loss function and use Stochastic Gradient Descent~(SGD) to learn the model parameters. We train the model for 200 epochs with a batch size of 256, an initial learning rate of 0.1, weight decay of 0.0005, and Nesterov momentum of 0.9, dividing the learning rate by 5 at epochs 60, 120, and 160.\footnote{\href{https://github.com/weiaicunzai/pytorch-cifar100}{https://github.com/weiaicunzai/pytorch-cifar100}.} For Purchase100 and Texas100, we follow previous work~\cite{nasr2018machine} to use a 4-layer fully connected neural network with layer sizes $[1024,512,$ $256,100]$ and Tanh as the activation function. We use the cross-entropy loss function and the Adam~\cite{kingma2014adam} optimizer to train the model on Purchase100 for 30 epochs and on Texas100 for 20 epochs with a learning rate of 0.001. The batch size is 512 for Purchase100 and 128 for Texas100. These hyper-parameter settings are also used for our ablation study in Appendix~\ref{appendix:architectures}, where we vary the model architecture.
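As a shape-level illustration of this fully connected classifier (not our training code), a forward pass with randomly initialized weights can be sketched in NumPy; the initialization scale and the random binary input are placeholders:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass matching the 4-layer classifier's shape:
    Tanh on the hidden layers, softmax on the 100-way output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    logits = h @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Hidden sizes [1024, 512, 256] and a 100-way output on top of
# Purchase100's 600 binary features.
rng = np.random.default_rng(0)
sizes = [600, 1024, 512, 256, 100]
weights = [0.01 * rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
probs = mlp_forward(rng.integers(0, 2, 600).astype(float), weights, biases)
```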
\begin{table*}[ht] \caption{Ablation study on architectures for Texas100. The first column describes model architecture in format of (activation function, width, depth). AdvReg refers to adversarial regularization. The last column is the highest attack accuracy for each row, i.e. for a specific defense on one dataset, the highest attack accuracy that MIAs can achieve, which gives an overview of comparison: the lower the best attack accuracy, lower the membership inference threat. For each dataset, the defense which has the lowest corresponding attack accuracy is bold in the column of best direct single-query attack, best label-only and best attack.} \label{tab:ablationtexas} \centering \begin{tabular}{cccccccc} \toprule \tabincell{c}{architectures\\ (activation function, \\width, depth)}&defense &\tabincell{c}{acc on\\ training set} &\tabincell{c}{acc on\\ test set} &\tabincell{c}{best direct \\single-query\\ attack}&\tabincell{c}{best\\label-only \\attack}&\tabincell{c}{best adaptive \\attack} &\tabincell{c}{best attack}\\ \midrule \multirow{4}{*}{Tanh, 1, 4} &None &79.3\% &52.3\% &{66.0\%} &64.7\% &N/A &66.0\% \\ &MemGuard &79.3\% &52.3\% &63.0\% &{64.7\%} &N/A &64.7\% \\ &{AdvReg} &55.8\% &45.6\% &{60.5\%} &56.6\% &N/A &60.5\%\\ &\textbf{{SELENA}\xspace} &58.8\% &52.6\% &\textbf{54.8\%} &\textbf{55.1\%}&54.9\% &\textbf{55.1\%}\\ \midrule \multirow{4}{*}{Tanh, 1, 3} &None &82.1\% &55.5\% &{66.2\%} &65.5\% &N/A &66.2\%\\ &MemGuard &82.1\% &55.5\% &63.4\% &{65.5\%}&N/A &65.5\%\\ &{AdvReg} &54.9\% &47.0\% &{58.6\%} &55.5\% &N/A &58.6\%\\ &\textbf{{SELENA}\xspace} &61.1\% &55.4\% &\textbf{54.5\%} &\textbf{54.6\%} &{55.6\%} &\textbf{55.6\%}\\ \midrule \multirow{4}{*}{Tanh, 1, 5} &None &76.5\% &49.0\% &{66.4\%} &65.2\% &N/A &66.4\%\\ &MemGuard &76.5\% &49.0\% &63.9\% &{65.2\%} &N/A &65.2\%\\ &{AdvReg} &54.4\% &43.4\% &{61.5\%} &56.9\% &N/A &61.5\%\\ &\textbf{{SELENA}\xspace} &56.7\% &51.3\% &\textbf{54.3\%} &\textbf{53.9\%} &52.9\% &\textbf{54.3\%}\\ \midrule 
\multirow{4}{*}{Tanh, 0.5, 4} &None &76.5\% &52.1\% &{65.9\%}&63.2\% &N/A &65.9\%\\ &MemGuard &76.5\% &52.1\% &63.0\% &{63.2\%} &N/A &63.2\%\\ &{AdvReg} &57.1\% &45.4\% &{62.2\%}&57.1\% &N/A &62.2\%\\ &\textbf{{SELENA}\xspace} &57.8\% &53.1\% &\textbf{53.7\%} &\textbf{54.0\%}&{54.7\%} &\textbf{54.7\%}\\ \midrule \multirow{4}{*}{Tanh, 2, 4} &None &81.7\% &51.9\% &{67.7\%} &67.0\% &N/A &67.7\%\\ &MemGuard &81.7\% &51.9\% &64.7\% &{67.0\%} &N/A &67.0\%\\ &{AdvReg} &52.6\% &44.9\% &{57.9\%}&54.7\% &N/A &57.9\%\\ &\textbf{{SELENA}\xspace} & 59.7\% &53.6\% &\textbf{54.4\%} &\textbf{54.7\%} &53.6\% &\textbf{54.7\%}\\ \midrule \multirow{4}{*}{ReLU, 1, 4} &None &98.8\% &47.0\% &{81.7\%}&80.7\% &N/A &81.7\%\\ &MemGuard &98.8\% &47.0\% &75.8\% &{80.7\%} &N/A &80.7\%\\ &{AdvReg} &55.0\% &43.0\% &{59.0\%} &57.0\% &N/A &59.0\%\\ &\textbf{{SELENA}\xspace} &54.6\% &51.4\% &\textbf{57.6\%}&\textbf{54.4\%} &55.7\% &\textbf{57.6\%}\\ \bottomrule \end{tabular} \end{table*} \section{Ablation Studies} \label{appendix:ablation} In this section, we first report on ablation studies that vary the model architecture to demonstrate that the benefits of {SELENA}\xspace hold across architectures. Second, we report on ablation studies that vary our parameters $K$ and $L$ and discuss parameter trade-offs and selection. \subsection{Ablation Study on Different Model Architectures} \label{appendix:architectures} For Purchase100 and Texas100, the target classifier is a 4-layer fully connected neural network. We test two additional neural network depths by deleting the last hidden layer (depth=3) or adding one more hidden layer with 2048 neurons (depth=5). We test two additional neural network widths by halving the number of hidden neurons in each layer (width=0.5) or doubling it (width=2.0). We also test both ReLU and Tanh as activation functions. For CIFAR100, we apply {SELENA}\xspace on two different architectures: ResNet-18~\cite{he2016deep} and VGG-16~\cite{simonyan2014very}.
Note that we will optimize the choice of $K$ and $L$ for different model architectures to achieve the best trade-off between test accuracy and membership privacy. \begin{table*}[ht] \caption{Ablation study on architectures for CIFAR100. AdvReg refers to adversarial regularization. The last column is the highest attack accuracy for each row, i.e. for a specific defense on one dataset, the highest attack accuracy that MIAs can achieve, which gives an overview of comparison: the lower the best attack accuracy, lower the membership inference threat. For each dataset, the defense which has the lowest corresponding attack accuracy is bold in the column of best direct single-query attack, best label-only and best attack.} \label{tab:ablationcifar} \centering \begin{tabular}{cccccccc} \toprule architectures&defense &\tabincell{c}{acc on \\training set} &\tabincell{c}{acc on\\ test set} &\tabincell{c}{best direct \\single-query\\ attack}&\tabincell{c}{best\\label-only \\attack}&\tabincell{c}{best adaptive \\attack} &best attack\\ \midrule \multirow{4}{*}{ResNet-18} &None &99.98\% &77.0\% &{74.8\%} &69.9\% &N/A &74.8\% \\ &MemGuard &99.98\% &77.0\% &68.7\% &{69.9\%} &N/A &69.9\%\\ &{AdvReg} &86.9\% &71.5\% &58.6\% &{59.0\%}&N/A &59.0\%\\ &\textbf{{SELENA}\xspace} &78.1\% &74.6\% &\textbf{55.1\%}&\textbf{54.0\%} &{58.3\%} &\textbf{58.3\%}\\ \midrule \multirow{5}{*}{VGG-16} &None &99.97\% &74.3\% &71.1\% &{73.6\%}&N/A &73.6\%\\ &MemGuard &99.97\% &74.3\% &64.8\% &{73.6\%} &N/A &73.6\%\\ &{AdvReg} &95.4\% &70.3\% &64.9\% &{67.2\%} &N/A &67.2\%\\ &\textbf{{SELENA}\xspace} &75.3\% &71.1\% &\textbf{54.7\%}&\textbf{55.5\%} & {57.3\%} &\textbf{57.3\%}\\ \bottomrule \end{tabular} \end{table*} \begin{table*}[ht] \caption{{Split-AI}\xspace and {SELENA}\xspace against direct single-query attack for $K$ = 25 on Purchase100.} \label{varyL} \centering \begin{tabular}{ccccccccccccc} \toprule $L$ & 5 & 6 &7 &8 &9 &10 &11 &12 &13 & 14 &15\\ \midrule \tabincell{c}{{Split-AI}\xspace 
$F_{\theta_{\mathrm{I}}}$ single \\model accuracy \\on test set} &81.3\% &80.8\% &80.1\% &79.5\% &78.7\% &77.9\% &77.1\% &75.7\% &75.2\% &73.8\% &72.6\%\\ \midrule \tabincell{c}{{Split-AI}\xspace $F_{\theta_{\mathrm{I}}}$ acc \\on test set} &83.8\% &83.9\% &83.5\% &83.5\% &83.1\% &83.0\% &82.7\% &82.3\% &82.0\% &81.5\% &81.0\%\\ \midrule \tabincell{c}{best direct \\single-query \\attack against $F_{\theta_{\mathrm{I}}}$} &50.4\% &50.4\% &50.6\% &50.5\% &50.5\% &50.3\% &50.7\% &50.7\% &50.3\% &50.7\% &50.6\%\\ \midrule \tabincell{c}{{SELENA}\xspace $F_{\theta_{\mathrm{II}}}$ acc \\on test set}&79.8\% &79.9\% &79.9\% &79.5\% &79.3\% &79.3\% &78.8\% &78.5\% &78.1\% &77.8\% &77.3\%\\ \midrule \tabincell{c}{best direct \\single-query \\attack against $F_{\theta_{\mathrm{II}}}$}&55.7\% &55.0\% &54.6\% &54.5\% &53.9\% &53.3\% &52.9\% &52.0\% &52.3\% &51.8\% &51.6\%\\ \bottomrule \end{tabular} \end{table*} \begin{table}[ht] \caption{{Split-AI}\xspace and {SELENA}\xspace against direct single-query attack for $K/L$ = 5/2 on Purchase100.} \label{fixedKLfull} \centering \begin{tabular}{ccccc} \toprule model&$K$, $L$ &\tabincell{c}{single model\\ acc on test set} &\tabincell{c}{acc on \\test set} &\tabincell{c}{best \\attack}\\ \midrule \multirow{3}{*}{{Split-AI}\xspace}&5, 2 &78.1\% & 80.9 \% &50.5\% \\ &25, 10 & 77.9\% &83.0\%& 50.3\%\\ &50, 20 &77.9\% &83.5\% &50.4\%\\ \midrule \multirow{3}{*}{{SELENA}\xspace} &5, 2 &N/A &77.7\% &55.3\%\\ &25, 10 &N/A &79.3\% &53.3\%\\ &50, 20 &N/A &79.2\% &53.2\%\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:ablationpurchase} presents the ablation study on Purchase100. Compared with undefended model, {SELENA}\xspace only incurs 2.0\%$\sim$3.9\% loss in (test) classification accuracy. 
As for MemGuard, though it has the same classification accuracy as the undefended model, the best attack accuracy across different architectures is higher than 63.0\%~(MemGuard cannot defend against label-only attacks), while {SELENA}\xspace limits the attack accuracy to no more than 56.2\%. Compared with adversarial regularization, {SELENA}\xspace achieves 0.2\%$\sim$2.9\% higher classification accuracy and reduces the additional attack accuracy over a random guess~(50\%) by a factor of 1.2 $\sim$ 2. Table~\ref{tab:ablationtexas} presents the ablation study on Texas100. Note that the classification accuracy of {SELENA}\xspace is only 0.1\% lower than the undefended model for the model with an architecture of width=1, depth=3, and Tanh as the activation function. For other architectures, {SELENA}\xspace even increases the classification accuracy a little~(0.3\% $\sim$ 4.4\%). As for MemGuard, the best attack accuracy across different architectures is higher than 63.0\%, and as high as 80.7\% for the model with width=1, depth=4, and the ReLU activation function~(MemGuard cannot defend against label-only attacks), while {SELENA}\xspace limits the attack accuracy to no more than $57.6\%$. Compared with adversarial regularization, {SELENA}\xspace achieves higher classification accuracy~(7.0\% $\sim$ 8.7\%) and lower MIA accuracy~(1.4\% $\sim$ 7.5\%). Table~\ref{tab:ablationcifar} presents the ablation study on CIFAR100. Compared with the undefended model, the classification accuracy for {SELENA}\xspace only decreases by 2.4\% $\sim$ 3.2\%. As for MemGuard, the best attack accuracy across different architectures is 69.9\% $\sim$ 73.6\%, while {SELENA}\xspace limits the attack accuracy to no more than $58.3\%$. In comparison with adversarial regularization, {SELENA}\xspace achieves 0.8\% $\sim$ 3.1\% higher classification accuracy and 0.7\% $\sim$ 9.9\% lower MIA accuracy.
\subsection{Ablation Study on K and L} \label{appendix:KL} Here, we discuss the setting of parameters in {Split-AI}\xspace and {SELENA}\xspace, i.e., the choice of $K$ and $L$. We first vary $L$ while keeping $K$ fixed~(Table~\ref{varyL}) and then vary $K$ and $L$ while keeping the ratio $K/L$ fixed~(Table~\ref{fixedKLfull}). We evaluate these two experiments on Purchase100. We first discuss the performance of {Split-AI}\xspace. From a privacy perspective, we find that for direct single-query attacks, all settings of $K$ and $L$ in {Split-AI}\xspace limit the attack accuracy to around a random guess. From a utility perspective, when $L$ is smaller with fixed $K$, each sub-model is trained with more data, and the accuracy of a single sub-model on the test set is higher. For the overall performance of {Split-AI}\xspace, which is the average of $L$ outputs, when $L$ is smaller, fewer models are aggregated. For example, for $L=1$, the test accuracy is lower than that of the model trained with the whole dataset. When $L$ increases close to $K$, the test accuracy of each single sub-model is low and the overall accuracy will be lower than the test accuracy of the undefended model. Therefore, there is a trade-off in the choice of the parameter $L$. When the ratio $K/L$ is fixed, the test accuracy will be lower than the undefended model for small values of $L$, as the ensemble of $L$ models performs poorly. As $L$ increases, the test accuracy will increase because of the ensemble of $L$ models, but the improvement is limited: the test accuracy is nearly the same for $K=25$, $L=10$ and $K=50$, $L=20$. We next discuss the performance of {SELENA}\xspace. As the final protected model in {SELENA}\xspace learns the knowledge transferred from {Split-AI}\xspace, it also has similar performance for $L$ ranging from 5 to 7, and with further increases in $L$, the test accuracy of {SELENA}\xspace also drops. For MIA accuracy, when $K$ is kept fixed, MIA accuracy decreases as $L$ increases.
When the ratio $K/L$ is fixed, for $K=25$, $L=10$ and $K=50$, $L=20$, the MIA accuracy is similar. This is because when $L$ is relatively large, it is easier for Self-Distillation to train a model which mimics {Split-AI}\xspace: when a single sub-model is trained with an appropriate portion of data, it will have good test accuracy, which serves as an enabler for Self-Distillation. Keeping computational overhead, utility and privacy in mind, \textbf{we need $L$ to be large enough and a proper proportion of $K$ to ensure that each sub-model is trained with enough data and benefits from averaging while enabling Self-Distillation to mimic the performance of {Split-AI}\xspace.} However, we should not simply increase $L$ without limit while keeping $K/L$ fixed, given the computational overhead discussed in Section~\ref{subsec: efficiency}. In our experiments, we use $K=25$, $L = 10$ for all three datasets. \section{An Optional Parameter for Soft Labels in Self-Distillation for Trade-off between Utility and Membership Privacy} \label{appendix:tradeoff} The (test) classification accuracy of {SELENA}\xspace is 2.0\% $\sim$ 3.9\% lower than the undefended model on Purchase100 and 2.4\% $\sim$ 3.2\% lower than the undefended model on CIFAR100. We now consider an alternative design choice for the soft labels of the training set in Self-Distillation: instead of only using the output from {Split-AI}\xspace to optimize for privacy, we now combine the outputs from {Split-AI}\xspace and the ground truth label together as soft labels in Self-Distillation. This results in a trade-off which can help achieve higher classification accuracy than {SELENA}\xspace at the cost of membership privacy: \begin{equation} \label{eqa:alphaparameter} y_{\text{soft}} = (1-\lambda) F_{\theta_{\mathrm{I}}}(\textbf{x})+\lambda y \end{equation} Here $\lambda$ is a hyper-parameter ranging from 0 to 1, which controls the mixing ratio between the outputs from {Split-AI}\xspace and the ground truth labels for the training set.
When $\lambda =0$, this is equivalent to {SELENA}\xspace; when $\lambda = 1$, this is equivalent to the undefended model. \begin{figure}[ht] \centering \includegraphics[width=3in]{imgs/tradeoff.pdf} \caption{Classification accuracy and best MIA accuracy as functions of $\lambda$. When $\lambda$ = 0, this is equivalent to SELENA; when $\lambda$ = 1, this is equivalent to the undefended model.} \label{fig:alphaparamter} \end{figure} \begin{figure*}[ht] \centering \subfigure[Purchase100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/vary_know_members/estimated_purchase.pdf} \end{minipage} }% \subfigure[Texas100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/vary_know_members/estimated_texas.pdf} \end{minipage} }% \subfigure[CIFAR100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/vary_know_members/estimated_cifar100.pdf} \end{minipage} }% \centering \caption{Impact of the training set size used by the attacker to train the shadow {Split-AI}\xspace on adaptive attacks. $\alpha$ in Figure~\ref{fig:adaptive_attack} is the ratio of training samples used by the attacker to train the shadow {Split-AI}\xspace.} \label{fig:adaptive_attack} \end{figure*} We evaluate this alternative soft-label approach on Purchase100 and present the classification accuracy as well as the best MIA attack accuracy (across multiple attack types) as a function of $\lambda$ in Figure~\ref{fig:alphaparamter}. The best attack accuracy is the highest accuracy among the direct single-query attack, label-only attacks and adaptive attacks~(the attacker estimates soft labels according to Equation~(\ref{eqa:alphaparameter})).
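A minimal sketch of the soft-label construction in Equation~(\ref{eqa:alphaparameter}); the {Split-AI}\xspace output vector and the value of $\lambda$ are hypothetical examples:

```python
import numpy as np

def soft_labels(split_ai_output, y_onehot, lam):
    """y_soft = (1 - lambda) * F_I(x) + lambda * y.
    lam=0 recovers SELENA's pure Self-Distillation targets;
    lam=1 recovers ordinary hard-label training (the undefended model)."""
    return (1.0 - lam) * split_ai_output + lam * y_onehot

f_out = np.array([0.6, 0.3, 0.1])   # hypothetical Split-AI output for x
y = np.array([1.0, 0.0, 0.0])       # ground-truth one-hot label
mixed = soft_labels(f_out, y, lam=0.2)  # -> [0.68, 0.24, 0.08]
```

Since both inputs are probability vectors, the mixture remains a valid probability vector for any $\lambda \in [0,1]$.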
We can now better control the trade-off between utility and membership privacy: if the desired MIA accuracy is no more than 56\%~(which is still lower than that of adversarial regularization; recall that the best MIA accuracy against adversarial regularization on Purchase100 is 57.4\%), then $\lambda = 0, 0.1, 0.2$ all satisfy the requirement. We observe that $\lambda=0.1$ increases classification accuracy by about 1\% and $\lambda=0.2$ increases classification accuracy by about 2\% compared to {SELENA}\xspace. \section{Detailed Analysis of Adaptive Attacks} \label{appen:adaptiveattack} We note that the number of training samples known by the attacker will impact the quality of outputs generated by the attacker's shadow {Split-AI}\xspace. For example, if the attacker knows all member samples, though not practical for the membership inference attack problem, the confidence of the shadow {Split-AI}\xspace's output is similar to that of the defender's {Split-AI}\xspace's output. In contrast, knowing only part of the member samples results in the shadow {Split-AI}\xspace's output having lower confidence than the defender's {Split-AI}\xspace's output. To understand adaptive attacks thoroughly, we vary the number of member samples known by the attacker. The attacker uses all known member samples to train the shadow {Split-AI}\xspace; the attacker's goal is still to identify the remaining unknown member samples, and the baseline random guess is $50\%$ under the setting that the number of members and non-members used to train and evaluate the attack model is the same. Figure~\ref{fig:adaptive_attack} presents the performance of adaptive attacks as a function of the number of training samples used by the attacker to train the shadow {Split-AI}\xspace, including the best attack accuracy across multiple attacks~(direct single-query/label-only/adaptive attack for {SELENA}\xspace and direct single-query/label-only attack for adversarial regularization) as well as the adaptive attack for {SELENA}\xspace.
$\alpha$ in Figure~\ref{fig:adaptive_attack} is the ratio of training samples used by the attacker to train the shadow {Split-AI}\xspace. Figure~\ref{fig:adaptive_attack} shows that, for all three datasets, the adaptive attack accuracy increases with the number of training samples used to train the attacker's shadow {Split-AI}\xspace. Specifically, for the Texas100 dataset, when $\alpha \leq 0.5$, the adaptive attack accuracy is lower than that of the other two MIAs. Our {SELENA}\xspace performs well across different $\alpha$ settings: for Purchase100 and Texas100, the adaptive attack accuracy is lower than the best MIA accuracy against adversarial regularization for all values of $\alpha$. For CIFAR100, we can see that the adaptive attack accuracy against {SELENA}\xspace is lower than the best attack accuracy against adversarial regularization for $\alpha\leq 0.6$ and slightly higher (around 2\% at $\alpha=0.9$) than adversarial regularization for $\alpha\geq 0.7$. \section{Comparison with Model Stacking} \label{appendix:modelstacking} \begin{table}[ht] \caption{Comparison of {SELENA}\xspace and Model Stacking~(MS) against direct single-query attack.} \label{tab:modelstacking} \begin{tabular}{ccccc} \toprule dataset&defense & \tabincell{c}{acc on \\training \\set} & \tabincell{c}{acc on \\test set}& \tabincell{c}{best \\attack}\\ \midrule \multirow{2}{*}{Purchase100} &{SELENA}\xspace &82.7\% & 79.3\% &53.3\%\\ &\tabincell{c}{MS} &84.3\% &73.6\% &62.4\%\\ \midrule \multirow{2}{*}{Texas100} &{SELENA}\xspace &58.8\% &52.6\% &54.8\%\\ &\tabincell{c}{MS}&60.5\% &46.3\% &63.8\%\\ \midrule \multirow{2}{*}{CIFAR100} &{SELENA}\xspace &78.1\% &74.6\% &55.1\%\\ &\tabincell{c}{MS} &80.6\% &66.5\% &63.2\%\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:modelstacking} presents the comparison between our defense and model stacking against the direct single-query attack.
The classification accuracy of model stacking is lower than that of {SELENA}\xspace: 5.7\% lower on Purchase100, 6.3\% lower on Texas100 and 8.1\% lower on CIFAR100, while the direct single-query MIA accuracy against model stacking is higher than that against our defense: 9.1\% higher on Purchase100, 9.0\% higher on Texas100 and 8.1\% higher on CIFAR100. This experiment supports our statements in Section~\ref{subsec:modelstacking}:~(1) Model stacking suffers a drop in test accuracy as it requires a disjoint subset of data for each module.~(2) Disjoint dataset partitioning alone is not enough to protect membership privacy if the outputs of the first layer are directly combined as inputs to the second layer. \section{Existing Attacks and Defenses} \label{sec:attackdefense} Next, we overview prior MI attacks and MI defenses. \subsection{Membership Inference Attacks (MIAs)} \label{subsec: attack} MIAs can utilize the prediction vector as a feature for a neural-network-based model, called \emph{NN-based attacks}, or can compute a range of custom metrics (such as correctness, confidence, or entropy) over the prediction vector to infer membership, called \emph{metric-based attacks}. These attacks can be mounted either by knowing a subset of the training set~\cite{nasr2018machine} or by knowing a dataset from the same distribution as the training set and constructing shadow models~\cite{shokri2017membership}. Let us denote $D_{tr}$ as the training set for the target model, i.e., the members, and $D_{te}$ as the test set, i.e., the non-members. $D_{tr}^A$ and $D_{te}^A$ are, respectively, the sets of members and non-members that the attacker knows. $I(\textbf{x}, y, F(\textbf{x}))$ is the binary membership inference classifier taking values in $\{0, 1\}$, which codes members as 1 and non-members as 0.
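Under this notation, the balanced attack-accuracy metric (formalized in the expression that follows) can be computed as in the sketch below; the membership classifier and the sample triples are placeholders:

```python
def attack_accuracy(I, members, non_members):
    """Balanced MIA accuracy: fraction of evaluation members classified as 1
    plus evaluation non-members classified as 0, over all evaluation samples.
    I(x, y, p) is the binary membership classifier; members and non_members
    are lists of (x, y, p) triples excluded from the attacker's known sets
    (i.e., D_tr \ D_tr^A and D_te \ D_te^A)."""
    correct = sum(I(x, y, p) for x, y, p in members)
    correct += sum(1 - I(x, y, p) for x, y, p in non_members)
    return correct / (len(members) + len(non_members))
```

A trivial classifier that always outputs 1 scores exactly 0.5 on a balanced evaluation set, which is why 50\% corresponds to a random guess.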
The literature typically measures MIA efficacy as the attack accuracy: $$\frac{\sum_{(\textbf{x},y)\in D_{tr}\backslash D_{tr}^A}I(\textbf{x}, y , F(\textbf{x}))+\sum_{(\textbf{x},y)\in D_{te}\backslash D_{te}^A}(1-I(\textbf{x}, y , F(\textbf{x})))}{|D_{tr}\backslash D_{tr}^A| + |D_{te}\backslash D_{te}^A|}$$ In most previous attacks~\cite{shokri2017membership, nasr2018machine, yeom2020overfitting, song2020systematic}, the numbers of members and non-members used to train and evaluate the attack model are the same. With this approach, the prior probability of a sample being either a member or a non-member is 50\% (corresponding to a random guess). Next, we summarize black-box MIAs in the following two categories: \textbf{direct} attacks and \textbf{indirect} attacks. \textbf{Direct single-query attacks:} Most existing MIAs directly query the target sample and utilize the resulting prediction vector. Since ML models typically produce only one output for each queried sample, a single query is sufficient. \emph{NN-based attack~\cite{shokri2017membership, nasr2018machine}:} The attacker can use the prediction vectors from the target model along with the one-hot encoded ground truth labels as inputs and build a NN model~\cite{nasr2018machine} $I_{\text{NN}}$ for the membership inference task. \emph{Correctness-based attack~\cite{yeom2020overfitting}:} The generalization gap~(i.e., the difference between training accuracy and test accuracy) yields a simple baseline MIA, as samples with correct predictions are more likely to be training members. $$I_{\text{corr}}(F(\textbf{x}),y) = \mathds{1}\{\mathop{\argmax}_i F(\textbf{x})_i =y\}$$ \emph{Confidence-based attack~\cite{yeom2018privacy, song2019privacy, song2020systematic}:} The prediction confidence on the true class $F(\textbf{x})_{y}$ is typically higher for training samples than for test samples.
Therefore, the confidence-based attack regards the queried sample as a member only when the prediction confidence is larger than either a class-dependent threshold $\tau_y$ or a class-independent threshold $\tau$. $$I_{\text{conf}}(F(\textbf{x}),y)= \mathds{1}\{ F(\textbf{x})_{y} \geq {\tau_{(y)}}\}$$ \emph{Entropy-based attack~\cite{shokri2017membership, song2020systematic}}: The prediction entropy of a training sample is typically lower than that of a test sample. Therefore, the entropy-based attack regards the queried sample as a member only when the prediction entropy is lower than a class-dependent threshold $\tau_y$ or a class-independent threshold $\tau$. $$I_{\text{entr}}(F(\textbf{x}),y) = \mathds{1}\{-\sum_i F(\textbf{x})_i \log(F(\textbf{x})_i) \leq \tau_{(y)}\}$$ \emph{Modified entropy-based attack~\cite{song2020systematic}:} Song et al.~\cite{song2020systematic} proposed the modified prediction entropy metric, which combines the information in the entropy metric and the ground truth label: \begin{equation*} \begin{split} \text{Mentr}(F(\textbf{x}),y) &= -(1-F(\textbf{x})_y)\log (F(\textbf{x})_y)\\ &- \sum_{i\neq y} F(\textbf{x})_i \log(1-F(\textbf{x})_i) \end{split} \end{equation*} Training samples typically have lower values of the modified entropy metric than test samples, and a threshold attack with either a class-dependent threshold $\tau_y$ or a class-independent threshold $\tau$ is applied to infer membership: $$I_{\text{Mentr}}(F(\textbf{x}),y) = \mathds{1}\{\text{Mentr}(F(\textbf{x}),y) \leq \tau_{(y)}\}$$ \textbf{Indirect multi-query attacks~(label-only attacks):} Long et al.~\cite{long2018understanding} observed that indirect attacks can make queries related to the target sample \textbf{x} to extract additional membership information, as a training sample influences the model's predictions both on itself and on other samples in its neighborhood.
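The metric-based attacks above reduce to simple thresholding over the prediction vector; the sketch below is our own minimal illustration (hypothetical function names, not an implementation from prior work), assuming \texttt{probs} is an $n \times c$ array of prediction vectors and \texttt{y} the ground-truth labels.

```python
import numpy as np

def confidence_attack(probs, y, tau):
    """Predict member (1) iff the confidence on the true class exceeds tau."""
    return (probs[np.arange(len(y)), y] >= tau).astype(int)

def entropy_attack(probs, tau, eps=1e-12):
    """Predict member iff the prediction entropy falls below tau."""
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return (ent <= tau).astype(int)

def modified_entropy(probs, y, eps=1e-12):
    """Mentr metric of Song et al.: lower values suggest membership."""
    n = len(y)
    p_y = probs[np.arange(n), y]
    mentr = -(1 - p_y) * np.log(p_y + eps)
    mask = np.ones_like(probs, dtype=bool)
    mask[np.arange(n), y] = False                # drop the true-class column
    p_other = probs[mask].reshape(n, -1)
    mentr -= np.sum(p_other * np.log(1 - p_other + eps), axis=1)
    return mentr
```

A threshold attack on Mentr then simply tests \texttt{modified\_entropy(probs, y) <= tau}.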
These indirect attacks usually make multiple queries for a single target sample~\cite{long2018understanding,li2020label, choo2020label}. For example, multi-query \emph{label-only attacks} leverage the predicted labels of the queried data as features, and are thus immune to defenses that only obfuscate prediction confidences, e.g., MemGuard~\cite{jia2019memguard}. The key idea in label-only attacks is that the model is more likely to correctly classify samples around the training data than samples around the test data, i.e., members are more likely to exhibit high robustness than non-members~\cite{li2020label, choo2020label}. Simply obfuscating a model's confidence scores cannot hide label information to defend against such label-only attacks. \emph{Boundary estimation attacks~\cite{li2020label,choo2020label}:} As the target model is more likely to correctly classify the samples around training samples than those around test samples, the distance to the classification boundary for a training sample should be larger than that for a test sample. An attacker can either leverage techniques for finding adversarial examples under the black-box assumption~\cite{chen2020hopskipjumpattack,brendel2017decision} or add noise to find the adversarial examples that change the predicted label with minimum perturbation. Such attacks should not achieve higher attack accuracy than white-box adversarial example attacks such as the Carlini-Wagner attack~\cite{carlini2017towards}, which has full access to the model parameters and can find the adversarial example with the least distance for each target sample. \emph{Data augmentation attacks~\cite{choo2020label}:} In computer vision tasks, data augmentation techniques based on translation, rotation, and flipping help improve test accuracy. However, such data augmentation techniques pose a privacy threat: the target model is more likely to correctly classify the augmented data of training samples.
An attacker can query the augmented data of the target record and use the percentage of correct predictions to identify the membership of the target record. \subsection{Existing Defenses} \label{subsec:defense} Multiple defenses have been proposed to mitigate MIAs. We summarize them below; Section~\ref{sec:related} gives a more comprehensive summary of prior defenses. \textbf{Adversarial Regularization~\cite{nasr2018machine}:} Nasr et al.~\cite{nasr2018machine} include the estimation of the membership threat in the training process of the ML model. They optimize a min-max game to train a privacy-preserving target classifier, which aims to reduce the prediction loss while also minimizing the MIA accuracy. \textbf{Early Stopping~\cite{caruana2001overfitting,song2020systematic}:} During the training process, the model may learn too much information about the training samples; the difference between its behavior on members and non-members then grows, and the model becomes more vulnerable to MIAs. Therefore, early stopping, a general technique that prevents model overfitting by stopping model training before the whole training process ends, can mitigate MIA accuracy at a sacrifice of model utility. Song et al.~\cite{song2020systematic} find that adversarial regularization is not better than early stopping~\cite{caruana2001overfitting} when evaluated by a suite of attacks including both NN-based attacks and metric-based attacks. They recommend that any defense that trades off a reduction in MIA accuracy at the cost of a reduction in utility be compared with early stopping as a baseline. \textbf{MemGuard~\cite{jia2019memguard}:} Jia et al.~\cite{jia2019memguard} obfuscate the prediction vector with a well-designed noise vector, using the perspective of adversarial examples to confuse the membership inference classifier.
Since MemGuard does not change prediction results and only obfuscates confidence information, it maintains the original classification accuracy of the undefended model. Song et al.~\cite{song2020systematic} show that MemGuard~\cite{jia2019memguard} lacks consideration of strategic adversaries; though resistant to NN-based attacks, MemGuard underestimates the threat of metric-based attacks. \textbf{DP-based defenses:} Differential privacy~\cite{dwork2008differential} is a formal framework that provides a rigorous privacy guarantee. In machine learning, DP-based defenses add noise to the training process of a classifier, as in DP-SGD~\cite{abadi2016deep}. However, it is challenging to perform machine learning with differential privacy while achieving acceptable utility loss and privacy guarantees~\cite{jayaraman2019evaluating, rahman2018membership}~(see Section~\ref{subsec:dpsgd} and Section~\ref{sec:related}). \section{Conclusions} In this paper we introduce a new practical membership inference defense using {Split-AI}\xspace and Self-Distillation. We first split the training set into $K$ subsets to train $K$ sub-models. We ensure that each training sample is not used to train $L$ sub-models, and apply an adaptive inference strategy for members and non-members. {Split-AI}\xspace only uses the average of the particular subset of $L$ sub-models which are not trained with the queried sample. Hence {Split-AI}\xspace can defend against direct single-query attacks. We apply Self-Distillation on {Split-AI}\xspace to defend against stronger attacks and avoid additional computing resources during inference. We perform rigorous MIAs, including direct single-query attacks, label-only attacks and adaptive attacks, to show that our defense outperforms previous defenses and achieves a better trade-off between utility and practical membership privacy.
Future work includes understanding the adaptation of {Split-AI}\xspace to other privacy tasks such as provably private mechanisms, analyzing the defense performance against white-box MIAs, and extending our defense from classification models to generative models. \section{Our Defense Architecture} \label{sec: ourdefense} \begin{figure*}[ht] \centering \includegraphics[width = 6 in]{imgs/system_compact.pdf} \caption{Our end-to-end defense framework with the {Split-AI}\xspace and Self-Distillation components.} \label{fig:system} \end{figure*} \label{sec:methodology} In this section, we first present an overview of our defense framework and then describe its two key components: {Split-AI}\xspace and Self-Distillation. \subsection{Overview} \label{subsec:overview} MIAs aim to distinguish members and non-members of the private training data of a model. These attacks use the fact that the trained model behaves differently on member and non-member data. This difference in behavior can appear in different forms: for example, the model's accuracy might differ between members and non-members~\cite{salem2018ml}, or its confidence might be higher on member inputs~\cite{yeom2018privacy,song2019privacy, song2020systematic}. Similarly, the model might be more likely to correctly classify the samples around member examples compared to those around non-member examples~\cite{choo2020label, li2020label}. MIAs leverage these differences to obtain an attack advantage that is better than a random guess, even in the black-box setting. Current defenses typically add specific constraints during the optimization process, either in the training phase~\cite{nasr2018machine} or in the inference phase~\cite{jia2019memguard}, to reduce the mismatch of model behavior on members and non-members. However, optimization under multiple constraints for machine learning, which is usually non-convex, is a computationally hard task.
We instead propose a framework that defends against MIAs by training multiple sub-models on subsets of the whole training set and introducing a specific adaptive inference technique that exploits the following intuition: \emph{if a training sample is not used to train a sub-model, that sub-model will have similar behavior on that training sample and non-members.} Section~\ref{subsec:results} shows the advantage of our defense, which is based on this intuition, in improving the trade-off between membership privacy and utility over existing membership inference defenses~(MemGuard~\cite{jia2019memguard} in Section~\ref{subsubsec:memguard} and adversarial regularization~\cite{nasr2018machine} in Section~\ref{subsubsec:advreg}). Based on this intuition, we propose a framework, {SELENA}\xspace, composed of two components to defend against MIAs. The first component, which we call \emph{{Split-AI}\xspace}, trains an ensemble of $K$ sub-models with overlapping subsets of the training dataset. The constraint for each subset is as follows: for a given training sample, there are $L$ sub-models which are not trained with that training sample, and therefore, they will behave similarly on that training sample and on non-member samples. {Split-AI}\xspace applies adaptive inference for members and non-members: for a member sample, {Split-AI}\xspace computes the $L$ predictions of the $L$ sub-models which are not trained with the member sample, and outputs the average of the $L$ predictions as the final prediction. For a non-member sample, the adaptive inference randomly samples $L$ sub-models from the $K$ total sub-models, subject to a specific distribution, and returns the average of the $L$ predictions on the non-member as the final prediction. We detail our algorithm and explain why it preserves membership privacy in Section~\ref{subsec:se}.
The second component, which we call \emph{Self-Distillation}, addresses the two weaknesses of {Split-AI}\xspace: its potential privacy risks under multi-query/adaptive attacks and its high inference-time computation overhead. Specifically, the Self-Distillation component transfers the knowledge of the model obtained by {Split-AI}\xspace into a new model by using the soft labels of the training set from {Split-AI}\xspace. We call this Self-Distillation because it does not require any public dataset for distillation. As we will demonstrate, the protected model from Self-Distillation has similar classification performance to {Split-AI}\xspace with significantly lower computation overhead, and can protect against advanced multi-query and adaptive MIAs. As we study black-box MIAs, only the final prediction vectors or predicted labels of the protected model from Self-Distillation are available to the attacker. Figure~\ref{fig:system} gives an overview of our defense, where we denote {Split-AI}\xspace as $F_{\theta_{\mathrm{I}}}$ and the protected model from Self-Distillation as $F_{\theta_{\mathrm{II}}}$.\footnote{PATE~\cite{papernot2016semi, papernot2018scalable} also trains multiple sub-models to provide privacy, but requires a public dataset. We detail the differences between our {SELENA}\xspace and PATE in Section~\ref{subsec:PATE}.} Next, we detail {Split-AI}\xspace and Self-Distillation. \subsection{Our {Split-AI}\xspace Ensemble Architecture} \label{subsec:se} Here we describe {Split-AI}\xspace, the first component of our system. \paragraph{{Split-AI}\xspace's training:} Following the intuition in Section~\ref{subsec:overview}, we train $K$ sub-models and ensure that each training sample is not used to train $L$ sub-models, such that these $L$ sub-models will have similar behavior on this training sample and other non-members.
We accomplish this via a specific data partitioning strategy: \emph{For each data point $\textbf{x}$ in the training set, we randomly generate $L$ non-model indices from \{$1$, $2$, ..., $K$\} to denote the $L$ sub-models that are not trained with the data point, and record these $L$ non-model indices~(denoted as ${Id}_{{non}}(\textbf{x})$\footnote{${Id}_{{non}}(\textbf{x})$ records the $L$ sub-model indices which are not trained with $\textbf{x}$.}). We then generate the dataset partition based on these non-model indices. For each subset $D_i$, we only use those training samples which do not include $i$ in their non-model indices.} \begin{figure}[htbp] \centering \subfigure[{Split-AI}\xspace's training]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.5in]{imgs/splitai_train.pdf} \label{fig:3a} \end{minipage} }% \subfigure[{Split-AI}\xspace's inference]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.5in]{imgs/splitai_inference.pdf} \label{fig:3b} \end{minipage} }% \caption{Illustration of {Split-AI}\xspace's data partition for training and adaptive inference with $K=5, L=2$ for member samples $A,B,C$ and non-member sample $D$~(red color for members and blue color for non-members). ${Id}_{{non}}(A)=(4,5), {Id}_{{non}}(B)=(2,3), {Id}_{{non}}(C)=(1,2)$. } \label{fig:splitai} \end{figure} Figure~\ref{fig:3a} illustrates this partition strategy for three training samples~($A$, $B$, $C$) under the setting of $K=5,L=2$. We randomly generate the non-model indices: ${Id}_{{non}}(A)=(4,5)$, ${Id}_{{non}}(B)=(2,3)$, ${Id}_{{non}}(C)=(1,2)$. Therefore, $A$ is used to train sub-models 1, 2, 3; $B$ is used to train sub-models 1, 4, 5; and $C$ is used to train sub-models 3, 4, 5. This specific data partition strategy ensures that for each data point, we have $L$ sub-models which are not trained with it.
This facilitates our key intuition in {Split-AI}\xspace: we use models that are not trained with a data point to estimate its soft label while protecting the membership information. $K$ and $L$ are parameters of our framework. The approximate size of each subset is $((K-L)/K) \times |D_{tr}|$. We then train $K$ sub-models $F_{i}$, one for each subset of the training data $D_i$, which have the same architecture and hyper-parameter settings. \paragraph{{Split-AI}\xspace's inference:} We now describe the adaptive inference based ensemble strategy for members and non-members. For each queried sample $\textbf{x}$, the ensemble checks whether there is an exact match of $\textbf{x}$ in the training set: \begin{compactitem} \item If so, which indicates that $\textbf{x}$ is a member, the defender averages the prediction vectors on $\textbf{x}$ from the $L$ models which are not trained with $\textbf{x}$ as the output; \item If not, the defender randomly selects a member sample $\textbf{x}'$ and averages the prediction vectors on $\textbf{x}$ from the $L$ models in ${Id}_{non}(\textbf{x}')$ as the output. \end{compactitem} Figure~\ref{fig:3b} illustrates the adaptive inference for three member samples~($A$, $B$, $C$) and one non-member sample $D$, following the setting in Figure~\ref{fig:3a}. $A$ is a non-member for sub-models 4, 5; $B$ is a non-member for sub-models 2, 3; $C$ is a non-member for sub-models 1, 2; and $D$ is a non-member for all sub-models. Adaptive inference averages over the non-member sub-model indices for $A$, $B$, $C$, and randomly selects one member sample's non-member indices for the non-member sample $D$. \begin{algorithm}[t] \caption{{Split-AI}\xspace Model ${F_{\theta_{\mathrm{I}}}}$} \label{alg:A1} \begin{algorithmic} \STATE {Initialize:\\ $K$: total number of sub-models $F_{1}, F_{2}, ..., F_{K}$\\ $L$: for each training sample, the number of sub-models which are not trained with it.
\\$(X_{train}, Y_{train})$: training data and labels} \STATE{\textbf{Training Phase:} } \STATE{Randomly generate the $L$ non-model indices for each training sample ${Id}_{non}(\textbf{x})$.} \FOR {$i = 1$ to $K$} \STATE{Construct subset $(X_{train}^i, Y_{train}^i)$ for model $F_{i}$ based on the recorded ${Id}_{non}$s: \{$(\textbf{x},y)$: $(\textbf{x},y) \in (X_{train}, Y_{train})$, $i$ not in ${Id}_{non}(\textbf{x})$\}} \FOR{\text{number of the training epochs} } \STATE{Update $F_{i}$ by descending its stochastic gradients over $l(F_{i}(X_{train}^i ),Y_{train}^i ) $.} \ENDFOR \ENDFOR \STATE{\textbf{Inference Phase:} $F_{\theta_\mathrm{I}}(\textbf{x})$} \STATE{Given $\textbf{x}$} \IF{$\textbf{x}$ in $X_{train}$} \STATE{$$F_{\theta_{\mathrm{I}}}(\textbf{x}) = \frac{1}{L}{\sum_{i\in {Id}_{non}(\textbf{x})}}F_{{i}}(\textbf{x})$$} \ELSE \STATE{Randomly select $\textbf{x}'$ in the training set, $$F_{\theta_{\mathrm{I}}}(\textbf{x}) = \frac{1}{L}{\sum_{i\in {Id}_{non}(\textbf{x}')}}F_{{i}}(\textbf{x})$$} \ENDIF \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:A1} presents the entire procedure for {Split-AI}\xspace. We formally prove that the {Split-AI}\xspace strategy is resilient to \emph{direct} single-query MIAs~(discussed in Section~\ref{subsec: attack}) and can reduce the accuracy of such attacks to that of a random guess~(Theorem~\ref{thm:single_query_split_train} in Appendix~\ref{appendix:splitaidsq}), which provides the theoretical foundation of our defense capability. The intuitive explanation for this proof is that, for each data point $\textbf{x}$, the distribution of the output of this algorithm on $\textbf{x}$ is independent of the presence of $\textbf{x}$ in the training set. This is because we will not use models that are trained with $\textbf{x}$ to answer queries, even if $\textbf{x}$ is in the training set.
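As a concrete complement to Algorithm~\ref{alg:A1}, the partition and adaptive inference can be sketched in Python as follows. This is a minimal illustration with hypothetical helper names~(e.g., \texttt{assign\_non\_model\_ids}), not the implementation used in our experiments.

```python
import random

def assign_non_model_ids(n_samples, K, L, seed=0):
    """For every training sample, draw the L sub-model indices (Id_non)
    that will never be trained with it."""
    rng = random.Random(seed)
    return [set(rng.sample(range(K), L)) for _ in range(n_samples)]

def build_subsets(id_non, K):
    """Subset D_i keeps exactly the samples whose Id_non excludes i."""
    return [[j for j, ids in enumerate(id_non) if i not in ids]
            for i in range(K)]

def split_ai_predict(x, member_index, id_non, sub_models, rng=random):
    """Adaptive inference: average the L sub-models not trained with x.
    member_index is x's position in the training set, or None for a
    non-member (which borrows the Id_non of a randomly chosen member)."""
    ids = id_non[member_index] if member_index is not None else rng.choice(id_non)
    preds = [sub_models[i](x) for i in ids]          # one vector per sub-model
    return [sum(vals) / len(ids) for vals in zip(*preds)]
```

Note that each sample lands in exactly $K-L$ subsets, so the expected subset size matches the $((K-L)/K) \times |D_{tr}|$ figure above.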
Our evaluation of {Split-AI}\xspace in Appendix~\ref{appendix:splitai} is consistent with Theorem~\ref{thm:single_query_split_train}: {Split-AI}\xspace maintains a good classification accuracy by leveraging flexible overlapping subsets and an ensemble of $L$ sub-models. \subsection{Our Self-Distillation Mechanism} \label{subsec:sd} \paragraph{Limitations of {Split-AI}\xspace.} While {Split-AI}\xspace is resilient to direct single-query MIAs, an adversary can leverage more advanced attacks. For example, instead of a direct query, an attacker may make an indirect query~\cite{long2018understanding} for the target sample (see Section~\ref{subsec: attack} for definitions) or may make multiple queries for one target sample to identify membership information, as suggested in recent work~\cite{choo2020label}. {Split-AI}\xspace suffers from severe privacy risks under such aggressive attacks that exploit the matching process for training samples: (1) An adaptive attacker can make a single \emph{indirect} query by adding small noise to the target sample. Such adaptive attacks can fool the matching process used in the inference strategy that checks whether the input is a training member or a non-member. {Split-AI}\xspace will recognize noisy training samples as non-members and may end up using sub-models trained with the target sample, thus leaking membership information. (2) An attacker can perform replay attacks by making multiple queries for the same target sample: {Split-AI}\xspace has only one possible prediction vector for members, but approximately $C_K^L$ possible prediction vectors for non-members. Furthermore, {Split-AI}\xspace incurs a computational overhead during inference: for each queried sample, {Split-AI}\xspace first needs to identify whether it is in the training set, thus incurring overhead for this matching process.
Second, {Split-AI}\xspace needs to perform inference with $L$ models for each queried sample, while conventional approaches perform inference on a single model. \paragraph{Self-Distillation.} To overcome the above limitations, we need a more sophisticated defense mechanism, and we correspondingly introduce the second component of our framework. We leverage distillation, which was proposed by Hinton et al.~\cite{hinton2015distilling} to reduce the size of NN architectures or ensembles of NN architectures. To be more specific, we use a method which we call \emph{Self-Distillation}: we first apply {Split-AI}\xspace to get the prediction vectors for the training samples. We then use the same training set along with these prediction vectors (obtained from {Split-AI}\xspace) as soft labels to train a new model using conventional training. The new protected model benefits from distillation to maintain a good classification accuracy. For queried samples, the defender now just needs to perform inference on the new protected model $F_{\theta_{\mathrm{II}}}$ distilled from {Split-AI}\xspace. For defense capability, we prove that this new model largely preserves {Split-AI}\xspace's defense ability against the direct single-query attack in Theorem~\ref{thm:distill} and Corollary~\ref{cor:last}, under mild stability assumptions~(Definition~\ref{def:stabledistillation}), in Appendix~\ref{appendix:selenadsq}. Note that our theoretical analysis of {SELENA}\xspace is only valid for single-query direct attacks. In fact, there exist some datasets for which {SELENA}\xspace cannot obtain provable privacy under multi-query attacks. This includes settings with similar data points that have different labels~(see Appendix~\ref{appendix:correlationdiscuss}).
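A minimal sketch of the Self-Distillation step, shown below on a linear softmax model with hypothetical names such as \texttt{self\_distill}~(our experiments train neural networks, not this toy model): the new model is fit with cross-entropy against {Split-AI}\xspace's soft labels, querying each training sample exactly once.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def soft_label_loss(logits, soft_labels, eps=1e-12):
    """Cross-entropy against Split-AI's soft labels (the distillation loss)."""
    return -np.mean(np.sum(soft_labels * np.log(softmax(logits) + eps), axis=1))

def self_distill(X, soft_labels, lr=0.5, epochs=300):
    """Fit a fresh linear-softmax model to the soft labels via gradient descent."""
    n, d = X.shape
    W = np.zeros((d, soft_labels.shape[1]))
    for _ in range(epochs):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - soft_labels) / n   # gradient of the CE loss
    return W
```

The trained model's predictions approach the soft labels, so its behavior on training samples mimics the non-member behavior that {Split-AI}\xspace produced.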
\textbf{Self-Distillation overcomes the privacy limitations of {Split-AI}\xspace and mitigates advanced MIAs.} The defender controls the Self-Distillation component and ensures that Self-Distillation only queries each exact training sample once. The attacker only has black-box access to the protected output model of Self-Distillation, and cannot access the {Split-AI}\xspace model. Hence, the attacker cannot exploit the soft-label computation of {Split-AI}\xspace as discussed before. The final protected model from Self-Distillation thus effectively mitigates replay and multi-query indirect attacks: (1) For replay attacks: each sample is only queried once during the Self-Distillation process, while a replay attack requires at least two queries of each sample to obtain an advantage over a random guess. In addition, the final protected model has a deterministic behavior, with only one possible prediction vector for each queried sample. (2) For single-query indirect attacks: only each exact sample is queried during the Self-Distillation process, and noisy samples around the training sample are not queried. In addition, the attacker only has black-box access to the protected model from Self-Distillation~(and no access to the defender's {Split-AI}\xspace): indirect query attacks are thus limited in obtaining additional membership information~(see Appendix~\ref{appendix_subsec:necessityofdistill} for more details). Self-Distillation also solves the computational overhead of {Split-AI}\xspace in inference: the defender no longer needs to check whether the queried sample is a training sample, and only needs to perform inference on a single Self-Distilled model. In Section~\ref{sec: eval}, we evaluate the effectiveness of our whole framework via rigorous experimental analysis including direct single-query attacks, label-only attacks and adaptive attacks.
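To make the replay threat concrete, the toy simulation below~(our own sketch, with made-up constant ``sub-models'') shows how a replay attacker distinguishes members from non-members when querying {Split-AI}\xspace directly; the deterministic Self-Distilled model removes exactly this signal.

```python
import random

def replay_attack(query, n_queries=4):
    """Guess 'member' (1) iff repeated queries always return the same output:
    a member is always answered by its fixed L sub-models, while a non-member
    is answered by a freshly sampled L-subset each time (~C(K, L) outcomes)."""
    return 1 if len({query() for _ in range(n_queries)}) == 1 else 0

# Toy Split-AI with K=5, L=2: each "sub-model" outputs a distinct constant.
subs = [0.1, 0.3, 0.5, 0.7, 0.9]
member_ids = (3, 4)  # the fixed Id_non of some member sample

def member_query():
    return round(sum(subs[i] for i in member_ids) / 2, 6)

rng = random.Random(7)
def nonmember_query():
    ids = rng.sample(range(5), 2)  # fresh random subset per query
    return round(sum(subs[i] for i in ids) / 2, 6)
```

Against the Self-Distilled model, both queries would return a fixed vector, so the attacker's set of observed outputs always has size one and the attack degenerates to a random guess.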
\section{Discussions} In this section, we discuss the computation overhead of our defense and compare it with PATE~\cite{papernot2016semi,papernot2018scalable}~(which uses a disjoint training set partition for sub-models and differential privacy to protect privacy), model stacking~\cite{salem2018ml}~(which uses a disjoint training set partition to train a hierarchical architecture, inspired by dropout~\cite{srivastava2014dropout}) and DP-SGD~\cite{abadi2016deep}, which provides a provable privacy guarantee for neural networks. \subsection{Efficiency} \label{subsec: efficiency} One cost our framework pays is the additional computing resources in the training process, as we train multiple sub-models for {Split-AI}\xspace. Table~\ref{tab:traintime} and Table~\ref{tab:inferencetime} present the comparison of the training time cost and inference time cost of our {SELENA}\xspace with previous defenses. As MemGuard~\cite{jia2019memguard} focuses on post-processing techniques for the prediction vectors of undefended models in the inference phase, we omit MemGuard in Table~\ref{tab:traintime} for training time and compare our {SELENA}\xspace with MemGuard in Table~\ref{tab:inferencetime} for inference time. All timing experiments are run on a single NVIDIA Tesla-P100 GPU. We set the batch size to 512, 128, and 256 during training for Purchase100, Texas100 and CIFAR100, respectively~(note that the batch size may also impact the running time; we keep the batch size for each dataset the same across different defenses). For the undefended model, adversarial regularization, and our {Split-AI}\xspace, we train 30, 20, and 200 epochs for Purchase100, Texas100, and CIFAR100, respectively. For Self-Distillation, we train 60, 30, and 200 epochs for Purchase100, Texas100, and CIFAR100 to ensure convergence. All running times are measured three times and we report the average.
\begin{table}[ht] \caption{Comparison of training time.} \label{tab:traintime} \centering \begin{tabular}{ccccc} \toprule Dataset &None&\tabincell{c}{AdvReg}&\tabincell{c}{{SELENA}\xspace\\sequential} &\tabincell{c}{{SELENA}\xspace \\parallel}\\ \midrule \tabincell{c}{Purchase\\100}&9.5s &55.7s &359.4s &73.5s \\ \midrule \tabincell{c}{Texas100}&10.7s &111.6s &343.0s &68.0s\\ \midrule \tabincell{c}{CIFAR100} &1.78h &23.5h &29.6h &3.0h\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:traintime} compares training times: our defense~({SELENA}\xspace sequential, which trains each sub-model sequentially on a single GPU) costs up to 6.1h more compute time than adversarial regularization (on CIFAR100). However, we can easily accelerate {Split-AI}\xspace by training several sub-models in parallel. For example, if we train all $K$ sub-models simultaneously~({SELENA}\xspace parallel), the training time for {SELENA}\xspace is 73.5s for Purchase100, 68.0s for Texas100 and 3.0h for CIFAR100. In contrast, adversarial regularization cannot benefit from parallel training: there is only one model during training. \begin{table}[ht] \caption{Comparison of inference time. Test on 1000 samples: 500 members and 500 non-members. Batch size is 1.} \label{tab:inferencetime} \centering \begin{tabular}{ccc} \toprule Dataset & MemGuard&{SELENA}\xspace \\ \midrule \tabincell{c}{Purchase100} &702.7s &0.7s\\ \midrule \tabincell{c}{Texas100} &668.6s &0.7s\\ \midrule \tabincell{c}{CIFAR100} &768.5s &8.6s\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:inferencetime} compares inference times: MemGuard costs three orders of magnitude more time per inference than {SELENA}\xspace, since it has to solve a complex optimization problem to obfuscate the prediction vector for every query, while {SELENA}\xspace only needs to perform computation on a single model.
In conclusion, we argue that the cost of computing resources in the training phase, with no additional computation in the inference phase, is acceptable, as improvements in GPU technology are making computing resources cheaper while the privacy threat remains severe. If multiple GPUs are available, our approach can easily benefit from parallelization by training the $K$ sub-models in parallel. Finally, we can also tune the system parameters $K$ and $L$ to control the trade-off between training time, model utility and privacy. \subsection{Comparison with PATE} \label{subsec:PATE} PATE~\cite{papernot2016semi,papernot2018scalable} is a framework composed of teacher-student distillation that leverages public data to achieve a better privacy-utility trade-off for differential privacy. PATE uses a disjoint training set partition for the sub-models in the teacher component. To get private labels for the public dataset to train the student model, PATE applies a noisy count among the sub-models. There are three major differences between our work and PATE: (1) PATE requires a \emph{public dataset} to provide the provable end-to-end privacy guarantee, which is not available in certain practical scenarios such as healthcare. Our defense does not need public datasets and provides a strong empirical defense against MIAs. (2) We apply a novel \emph{adaptive inference strategy} to defend against MIAs: for each training sample, we only use the predictions of sub-models in {Split-AI}\xspace that are not trained with it, as these sub-models will not leak membership information about it. PATE does not use adaptive inference and relies on majority voting over all sub-models. (3) We use \emph{overlapping} subsets to train the sub-models. This allows our approach to obtain high accuracy for each sub-model with a sufficient subset size. PATE faces the limitation that each sub-model is trained with a much reduced subset size due to disjoint subsets.
In addition, PATE incurs a 0.7\% $\sim$ 6.7\% drop in test accuracy~\cite{papernot2018scalable}, while the test accuracy drop of our defense is no more than 3.9\%. \subsection{Comparison with Model Stacking} \label{subsec:modelstacking} Model stacking~\cite{salem2018ml} uses a two-layer architecture: the first layer contains a NN and a Random Forest, and the combination of their two outputs is forwarded to a Logistic Regression model. Here we briefly discuss the differences between model stacking and our defense. Model stacking requires a disjoint subset of data for each model (the NN and Random Forest in the first layer and the Logistic Regression in the second layer) to help improve membership privacy. This may decrease the test accuracy of the overall ensemble. Also, directly combining the two outputs from the first layer as input to the second layer may still leak information even when the Logistic Regression is trained on another subset of data: the membership inference risks of the NN or Random Forest may be directly forwarded to the Logistic Regression module. Appendix~\ref{appendix:modelstacking} presents experiments comparing our defense and model stacking, which support the above two statements. We do not include model stacking in Section~\ref{sec: eval} as it does not achieve state-of-the-art performance~\cite{jia2019memguard,nasr2018machine}. \subsection{Comparison with DP-SGD} \label{subsec:dpsgd} In this work, we use the canonical implementation of DP-SGD and its associated analysis from the TensorFlow Privacy library.\footnote{https://github.com/tensorflow/privacy.} We varied the parameter $noise\_multiplier$ in the range of [1, 3] on Purchase100 and [1, 2] on Texas100 with a step size of 0.2. We set the privacy budget $\epsilon = 4$ and report the best classification accuracy for these two datasets. The test accuracy on Purchase100 is 56.0\% and the best direct single-query MIA accuracy is 52.8\%.
The test accuracy on Texas100 is 39.1\%, and the best direct single-query MIA accuracy is 53.8\%. Note that although DP-SGD provides a differential privacy guarantee and the best direct single-query MIA accuracy is 0.5\% $\sim$ 1\% lower than that against our {SELENA}\xspace, DP-SGD suffers a significant loss in utility: compared to the undefended model, DP-SGD incurs a 13.2\% $\sim$ 27.5\% drop in classification accuracy, while our defense incurs no more than a 3.9\% drop in test accuracy. \section{Evaluations} \label{sec: eval} In this section, we first briefly introduce the datasets and model architectures used to train the classification models in Section~\ref{sec: eval-setup}. More details can be found in Appendix~\ref{appendix:setup}. Next, in Section~\ref{subsec:results}, we systematically evaluate our end-to-end defense framework, including its efficacy against (1) direct single-query attacks, (2) indirect label-only attacks, and (3) adaptive attacks, and compare with the undefended model, MemGuard~\cite{jia2019memguard}, adversarial regularization~\cite{nasr2018machine} and early stopping~\cite{song2020systematic}, considering both utility and membership privacy risks. \subsection{Experimental Setup} \label{sec: eval-setup} We use three benchmark datasets and target models which are widely used in prior works on MI attacks and defenses. \textbf{Datasets.} Purchase100, Texas100 and CIFAR100. We follow Nasr et al.~\cite{nasr2018machine} to determine the partition between training data and test data, and the subset of the training and test data that constitutes the attacker's prior knowledge. Specifically, the attacker's knowledge corresponds to half of the training and test data, and MIA success is evaluated over the remaining half. \textbf{Target Models.} For CIFAR100, we use ResNet-18~\cite{he2016deep}, a benchmark machine learning model widely used in computer vision tasks.
For Purchase100 and Texas100, we follow previous work~\cite{nasr2018machine} and use a 4-layer fully connected neural network with layer sizes $[1024,512,$ $256,100]$. In our defense, we set $K=25$ and $L=10$ for all three datasets. To show that our defense is effective across multiple model architectures and settings of $K$ and $L$, we vary the activation functions, width, and depth of the target models, as well as the choices of $K$ and $L$, and present the results in Appendix~\ref{appendix:ablation}. We will release code to reproduce all our experiments. \begin{table*}[ht] \caption{Comparison of membership privacy and accuracy on training/test set of the undefended model, previous defenses and {SELENA}\xspace on three different datasets. AdvReg refers to adversarial regularization. The last column is the highest attack accuracy for each row, i.e., for a specific defense on one dataset, the highest attack accuracy that MIAs can achieve. The last column gives an overview of the comparison: the lower the best attack accuracy, the lower the membership inference threat.
For each dataset, the defense with the lowest attack accuracy is shown in bold in the columns for best direct single-query attack, best label-only attack, and best attack.} \label{tab:allattacks} \centering \begin{tabular}{cccccccc} \toprule dataset&defense &\tabincell{c}{acc on \\training set} &\tabincell{c}{acc on \\test set}&\tabincell{c}{best direct \\single-query\\ attack}&\tabincell{c}{best\\label-only \\attack}&\tabincell{c}{best adaptive \\attack} &\tabincell{c}{best attack}\\ \midrule \multirow{4}{*}{Purchase100} &None &99.98\% &83.2\% &{{67.3\%}} &65.8\% &N/A &67.3\%\\ &MemGuard&99.98\% &83.2\% &58.7\% &{65.8\%} &N/A &65.8\%\\ &AdvReg &91.9\% &78.5\% &57.3\% &{57.4\%} &N/A &57.4\%\\ &\textbf{{SELENA}\xspace} &82.7\% &79.3\% &\textbf{53.3\%} &\textbf{53.2\%} & {54.3\%} &\textbf{54.3\%}\\ \midrule \multirow{4}{*}{Texas100} &None &79.3\% &52.3\% &{66.0\%} &64.7\% &N/A &66.0\% \\ &MemGuard &79.3\% &52.3\% &63.0\% &{64.7\%} &N/A &64.7\%\\ &AdvReg &55.8\% &45.6\% &{60.5\%} &56.6\% &N/A &60.5\%\\ &\textbf{{SELENA}\xspace} &58.8\% &52.6\% &\textbf{54.8\%} &{\textbf{55.1\%}} &{54.9\%} &\textbf{55.1\%}\\ \midrule \multirow{5}{*}{CIFAR100} &None &99.98\% &77.0\% &{74.8\%} &69.9\% &N/A &74.8\%\\ &MemGuard &99.98\% &77.0\% &68.7\% &{69.9\%} &N/A &69.9\%\\ &AdvReg &86.9\% &71.5\% &58.6\% &{59.0\%} &N/A &59.0\%\\ &\textbf{{SELENA}\xspace} &78.1\% &74.6\% &\textbf{55.1\%} &\textbf{54.0\%} & {58.3\%} &\textbf{58.3\%}\\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[htbp] \centering \subfigure[Purchase100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/purchase.pdf} \end{minipage} } \subfigure[Texas100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/texas.pdf} \end{minipage} } \subfigure[CIFAR100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.1in]{imgs/cifar100.pdf} \end{minipage} } \centering \caption{Detailed comparison of {SELENA}\xspace with early stopping.
From left to right are the results for Purchase100, Texas100 and CIFAR100. The solid curves are the test accuracy and MIA accuracy at the corresponding training epochs. ER denotes early stopping. The dashed lines are the test accuracy and MIA accuracy of {SELENA}\xspace, as reported in Table~\ref{tab:allattacks}. Our defense achieves a better privacy-utility trade-off than every epoch of conventional training.} \label{fig:earlystopping} \end{figure*} \subsection{Results} \label{subsec:results} Table~\ref{tab:allattacks} summarizes the classification accuracy and best attack accuracy for each attack type, including comparisons with both undefended models~(Section~\ref{subsubsec:undefended}) and previous defenses~(MemGuard in Section~\ref{subsubsec:memguard} and adversarial regularization in Section~\ref{subsubsec:advreg}). In addition, we compare our {SELENA}\xspace with early stopping in Section~\ref{subsubsec:earlystop}. \subsubsection{Comparison with Undefended Model}\label{subsubsec:undefended} We first compare our {SELENA}\xspace with the undefended model in terms of both membership privacy threats and classification accuracy. \textbf{{SELENA}\xspace significantly reduces membership inference risks.} From Table~\ref{tab:allattacks}, we can see that our defense leads to a significant reduction in privacy risks. Across the three types of attacks, the MIA accuracy against our defense is no higher than 54.3\% on Purchase100, 55.1\% on Texas100 and 58.3\% on CIFAR100. In contrast, MIA accuracy against undefended models is much higher: the MIA advantage over a random guess is a factor of $3.0\sim 4.0$ higher than against our defense. \textbf{{SELENA}\xspace achieves its privacy benefits at the cost of a small drop in utility.} Compared with undefended models, our defense has only a small utility loss (while providing substantial privacy benefits).
Compared to undefended models, the classification (test) accuracy of our defense incurs at most a $3.9\%$ drop~(on Purchase100), and no drop at all on Texas100. We also discuss a more flexible trade-off between utility and membership privacy, obtained by combining the outputs of {Split-AI}\xspace with the ground truth labels as the soft labels in Self-Distillation, in Appendix~\ref{appendix:tradeoff}. We also note that even though our approach has a small loss in utility, it achieves a better utility-privacy trade-off than prior defenses such as MemGuard, adversarial regularization and early stopping, which we discuss next. \subsubsection{Comparison with MemGuard} \label{subsubsec:memguard} \textbf{While the test accuracy of our defense is slightly lower than MemGuard's (MemGuard has the same test accuracy as the undefended model), the MIA accuracy against MemGuard is much higher than against our defense.} Compared to a random guess, which achieves 50\% attack accuracy, the best attacks on MemGuard achieve a $14.7\% \sim 19.9\%$ advantage over a random guess, a factor of $2.4\sim 3.7$ higher than against our defense. In general, MemGuard offers no defense against MIAs that do not rely on confidence information: an attacker can use label-only attacks as adaptive attacks, since MemGuard only obfuscates confidences. \subsubsection{Comparison with Adversarial Regularization}\label{subsubsec:advreg} \textbf{Our defense achieves higher classification accuracy and lower MIA accuracy than adversarial regularization.} The classification accuracy of our defense is higher than that of adversarial regularization across all three datasets, by as much as 7.0\% on the Texas100 dataset. For MIAs, our defense achieves significantly lower attack accuracy than adversarial regularization. MIA accuracy against adversarial regularization is higher than against our defense across all three datasets, and its advantage over a random guess is up to a factor of 2.1 higher than ours~(on Texas100).
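As a concrete check of the "advantage over a random guess" arithmetic used in these comparisons (the numbers below are the Texas100 best-attack accuracies from Table~\ref{tab:allattacks}):

```python
def mia_advantage(attack_acc_pct):
    """Advantage of an MIA over a 50% random guess, in percentage points."""
    return attack_acc_pct - 50.0


# Best attack accuracy on Texas100: AdvReg 60.5%, SELENA 55.1%.
adv_reg_advantage = mia_advantage(60.5)  # 10.5 points
selena_advantage = mia_advantage(55.1)   # 5.1 points

# 10.5 / 5.1 is about 2.06, reported in the text as "a factor of 2.1".
factor = adv_reg_advantage / selena_advantage
```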
Besides, adversarial regularization is much harder to tune and can take more training time (by a factor of up to 7.8) than our defense when multiple GPUs are used in parallel~(see Section \ref{subsec: efficiency}). \subsubsection{Comparison with early stopping}\label{subsubsec:earlystop} We further compare our defense with early stopping, which can also help minimize the difference in model behavior on members and non-members~\cite{song2020systematic}. Specifically, we compare the performance of an undefended model at each epoch of the training process with our final protected model $F_{\theta_{\mathrm{II}}}$. For early stopping, we only consider direct single-query attacks~(due to their strong performance on undefended models). Figure~\ref{fig:earlystopping} shows a detailed comparison between our defense $F_{\theta_{\mathrm{II}}}$ and early stopping. The dashed lines are the classification accuracy on the test set and the best MIA accuracy of our defense, as reported in Table~\ref{tab:allattacks}. The solid lines show the classification accuracy on the test set and the MIA accuracy of the undefended model as a function of the training epochs. As we can see from Figure~\ref{fig:earlystopping}, \emph{our defense significantly outperforms early stopping.} \textbf{Comparison at similar attack accuracy.} The undefended model only reaches the same level of MIA accuracy as the dashed line of our defense at the very beginning of the training process. However, the test accuracy of the undefended model at that point is far lower than that of our defense.
For example, approximately, \emph{for Texas100, when the MIA accuracy against the conventionally trained model is 55.1\%, the test accuracy of the undefended model is 13.4\% lower than that of our defense.} For the other two datasets, when the MIA accuracy against the undefended model reaches a similar level as against our defense, the test accuracy is 8.0\% lower on Purchase100 and 11.0\% lower on CIFAR100 than that of our defense. \textbf{Comparison at similar classification accuracy.} When the undefended model achieves the same classification accuracy on the test set as {SELENA}\xspace, the MIA accuracy against the undefended model is significantly higher than against our defense. For example, \emph{when the test accuracy of the conventional model reaches 74.6\% on CIFAR100 (similar to our defense), the attack accuracy is 63.6\%, compared to the best attack accuracy of 58.3\% for our defense (5.3\% lower).} We see similar results on the other datasets: when the undefended model reaches a similar test accuracy as our defense on Purchase100 and Texas100, the attack accuracy is 58.1\% on Purchase100 and 66.0\% on Texas100, which is 3.8\% and 10.9\% higher than against {SELENA}\xspace, respectively. We also highlight the following two points from Table~\ref{tab:allattacks}: 1. Our {SELENA}\xspace effectively induces similar behavior, including generalization, confidence and robustness, on member and non-member samples, and therefore the MIA attack accuracy is significantly reduced. Let us take the generalization gap $g$ as an example: for undefended models/MemGuard, $g$ is 16.78\% on Purchase100, 27.0\% on Texas100 and 22.98\% on CIFAR100; for adversarial regularization, $g$ is 13.4\% on Purchase100, 10.2\% on Texas100 and 15.4\% on CIFAR100.
In contrast, for our defense, $g$ is 3.4\% on Purchase100, 6.2\% on Texas100 and 3.5\% on CIFAR100: our mechanism reduces the total generalization gap by a factor of up to 6.6 compared to undefended models/MemGuard, and a factor of up to 4.4 compared to adversarial regularization. 2. The additional estimation of soft labels provided by the shadow {Split-AI}\xspace (using the entirety of the attacker's knowledge) gives the attacker additional information which enhances the accuracy of our adaptive attacks: the adaptive attack has a larger advantage over a random guess than the direct single-query and label-only attacks. However, even against these strong adaptive attacks, {SELENA}\xspace still achieves lower attack accuracy than previous defenses, which validates the effectiveness of our {SELENA}\xspace. In addition, Appendix~\ref{appen:adaptiveattack} further analyzes the membership privacy risks when the attacker knows different fractions of the training set. In conclusion, using direct single-query attacks, label-only attacks, as well as adaptive attacks with estimated soft labels, we show that our approach outperforms previous defenses and achieves a better trade-off between utility and practical membership privacy. We also discuss how {Split-AI}\xspace can defend against direct single-query attacks while maintaining good classification accuracy, and the necessity of the adaptive ensemble in {Split-AI}\xspace and of Self-Distillation, in Appendix~\ref{appendix:components}. \section{Preliminaries and Problem Formulation} \label{sec:prelim} In this section, we introduce the machine learning concepts and notation relevant to our work, as well as our threat model and design goals. \subsection{ML Preliminaries and Notation} \label{pre} In this paper, we consider supervised machine learning for classification. Let $F_{\theta}:\mathbb{R}^d \mapsto \mathbb{R}^k$ be a classification model with $d$ input features and $k$ classes, parameterized by $\theta$.
For a given example $\textbf{z} = (\textbf{x}, y)$, $F_{\theta}(\textbf{x})$ is the classifier's confidence vector over the $k$ classes, and the predicted label is the class with the largest confidence score, i.e., $\hat{y} = \mathop{\argmax}_i F_{\theta}(\textbf{x})_i$. The goal of supervised machine learning is to learn the relationship between training data and labels and to generalize this ability to unseen data. The model learns this relationship by minimizing the prediction loss across the training set $D_{tr}$: $$\min_{\theta}\frac{1}{|D_{tr}|}\sum_{\textbf{z}\in D_{tr}}l(F_{\theta}, \textbf{z})$$ Here $|D_{tr}|$ is the size of the training set and $l(F_{\theta}, \textbf{z})$ is the loss function. When clear from the context, we use $F$, instead of $F_\theta$, to denote the target model. \subsection{Threat Model} \textbf{Black-box attack:} In this paper, we follow previous defenses~\cite{nasr2018machine, jia2019memguard} and assume the attacker has black-box access to the target model, i.e., the attacker can only make queries to the model provider and obtain the corresponding prediction vectors or predicted labels, without access to the target model's parameters. Therefore, the adversary can perform standard black-box attacks, in particular \emph{direct single-query} attacks, which directly query the target sample \emph{a single time} and are the typical benchmarking technique, and \emph{label-only attacks}, which \emph{make multiple queries} per target sample and exploit the predicted label information. {We also introduce a third type of black-box attack, an adaptive attack tailored to our system.
See Section~\ref{sec:attackdefense} for a detailed explanation of direct single-query attacks and label-only attacks, and Section~\ref{sec:adaptiveattack} for the adaptive attacks.} \textbf{Partial knowledge of membership:} Like previous defenses~\cite{nasr2018machine,song2020systematic}, we assume the adversary knows a small fraction of the samples in the training set, i.e., it knows some members. The goal of the adversary is to identify any other member sample. \subsection{Design Goals} \label{subsec:goal} In this paper, we aim to overcome the limitations of existing membership inference defenses~\cite{nasr2018machine, jia2019memguard, shejwalkar2019reconciling}, which estimate membership risk through practical MIAs: none of these defenses is able to provide sufficient MIA protection and high utility simultaneously in the absence of public datasets. \textbf{Low MIA accuracy:} Our goal is to design practical defenses against MIAs. We evaluate our defense in a systematic and rigorous manner to ensure that it achieves low MIA accuracy (i.e., high membership privacy) across a broad class of attacks, instead of only one specific family of attacks. \textbf{High classification accuracy:} We aim to protect membership privacy without significantly decreasing classification accuracy (model utility). \textbf{No additional public data required for defense:} Some prior works~\cite{papernot2016semi, shejwalkar2019reconciling} propose to preserve membership privacy via knowledge distillation using publicly available datasets. However, this is a limiting assumption, since public datasets may not be available in many real-world ML training scenarios such as healthcare. In this paper, we consider a more realistic scenario, where the model provider does not have access to an external public dataset. \section{Introduction} \label{sec:intro} Machine learning has achieved tremendous success in many areas, but it requires access to data that may be sensitive.
Recent work has shown that machine learning models are prone to memorizing sensitive information of their training data, incurring serious privacy risks~\cite{shokri2017membership, carlini2019secret, carlini2020extracting, fredrikson2015model, salem2020updates,song2017machine,ganju2018property}. Even if the model provider is trusted and only provides query services via an API, i.e., black-box model access in which only prediction vectors are available, private information can still be obtained by attackers. The \emph{membership inference attack (MIA)} is one such threat, in which an adversary tries to identify whether a target sample was used to train the target machine learning model based on the model's behavior~\cite{shokri2017membership}. MIAs pose a severe privacy threat by revealing private information about the training data. For example, knowing the victim's presence in a hospital's health-analytics training set reveals that the victim was once a patient in that hospital. Shokri et al.~\cite{shokri2017membership} conducted MIAs against machine learning in a black-box manner. They formalize the attack as a binary classification task and utilize a neural network~(NN) model along with the shadow training technique to distinguish members of the training set from non-members. Following this work, many MIAs have been proposed, which can be divided into two categories: direct attacks~\cite{yeom2020overfitting, song2020systematic, yeom2018privacy, song2019privacy,nasr2018machine,nasr2019comprehensive}, which directly query the target sample and typically utilize only a single query; and indirect attacks~\cite{long2018understanding, li2020label, choo2020label}, which query samples in the neighborhood of the target sample to infer membership and typically utilize multiple queries. The research community has further extended MIAs to federated settings~\cite{melis2019exploiting, nasr2019comprehensive} and generative models~\cite{hayes2019logan}.
MIAs have also provided a foundation for more advanced data extraction attacks~\cite{carlini2020extracting} and for benchmarking privacy-preserving mechanisms~\cite{jayaraman2019evaluating,nasr2021adversary}. The effectiveness of MIAs and the resulting privacy threat have motivated the research community to design several defense mechanisms against these attacks~\cite{abadi2016deep, nasr2018machine, jia2019memguard, shejwalkar2019reconciling}. As MIAs distinguish members from non-members based on the difference in the target model's behavior on the two, defense mechanisms need to enforce similar model behavior on members and non-members. There exist two main categories of membership inference defenses, as shown in Table~\ref{tab:prov_empi}: techniques that offer \emph{provable privacy}, and defenses that offer \emph{empirical membership privacy}. The first category mainly uses differential privacy mechanisms~\cite{abadi2016deep,mcmahan2017learning, wang2019subsampled} and is able to provide a \emph{provable privacy guarantee} for all inputs. However, the use of DP (e.g., in DP-SGD~\cite{abadi2016deep}) has been shown to significantly reduce the utility of the underlying models in many machine learning tasks~(see Section~\ref{subsec:dpsgd}). This has motivated the second category of membership inference defenses, where privacy is empirically evaluated through practical MIAs with the aim of preserving model utility. Our work falls in the second category, and as we will show, our technique offers a superior trade-off between MIA protection and model utility compared to the state-of-the-art empirical privacy defenses~\cite{nasr2018machine,jia2019memguard,shejwalkar2019reconciling} (see Section~\ref{sec: eval} for more details).
\begin{table}[ht] \aboverulesep=0ex \belowrulesep=0ex \centering \begin{tabular}{c|c|c} \toprule &Low utility &High utility \\ \midrule \rule{0pt}{1.1EM} \tabincell{c}{Provable\\ privacy}& \tabincell{c}{ DP-based:\\DP-SGD~\cite{abadi2016deep}} &\tabincell{c}{Desired~(No method\\ achieves this goal so far)}\\ \midrule \tabincell{c}{Empirical \\membership \\privacy} &\tabincell{c}{Not \\considered} &\tabincell{c}{ Adversarial \\Regularization~\cite{nasr2018machine},\\MemGuard~\cite{jia2019memguard},\\SELENA~(Our work)}\\ \bottomrule \end{tabular} \caption{Two categories of membership inference defenses: provable privacy with low utility vs. empirical membership privacy with high utility.} \label{tab:prov_empi} \end{table} \noindent\textbf{Our Framework.} In this paper, we introduce a novel empirical MIA defense framework, called {SELENA}\xspace,\footnote{SELf ENsemble Architecture.} whose goal is to protect against practical black-box MIAs while also achieving high classification accuracy. Our framework consists of two core components: \emph{{Split-AI}\xspace} and \emph{Self-Distillation}. \textbf{Split Adaptive Inference Ensemble ({Split-AI}\xspace):} Our first component, {Split-AI}\xspace, enables the model to have similar behavior on members and non-members. We achieve this goal by training multiple models~(called sub-models) on random subsets of the training set. While such ensemble architectures have been considered in different ML contexts, our framework's novelty lies in the particular adaptive approach it uses to respond to queries. The key intuition is that if a sub-model is not trained on a given training sample, it will behave similarly on that sample as on other non-members. We use this intuition in our adaptive inference strategy. When the queried sample is in the training set, the adaptive inference procedure only calls the sub-models that did not use the query in their training set.
When the queried sample is not in the training set, we query a particular subset of sub-models (as explained in Section \ref{sec: ourdefense}). Our approach provides an intuitive foundation for membership privacy: whether the queried sample is a member or a non-member, the adaptive inference always uses only those sub-models which have not used that sample for their training; this ensures membership privacy, which we demonstrate through a formal analysis. \textbf{Self-Distillation:} Our {Split-AI}\xspace shows promising performance against the traditional type of MIA, i.e., the direct single-query attack~\cite{song2019privacy, song2020systematic,yeom2020overfitting, yeom2018privacy}. However, it falls short in protecting against recent adaptive MIAs, which work by crafting multiple, carefully fabricated queries~\cite{li2020label, choo2020label}. Moreover, {Split-AI}\xspace has a high computational overhead, as it needs to search for each queried sample within the training set and perform inference on multiple sub-models. To protect {Split-AI}\xspace against adaptive attacks and to reduce its computational overhead, we use the second component of our framework, which we call Self-Distillation. Our Self-Distillation component performs a novel form of knowledge transfer on the model created by {Split-AI}\xspace to produce a final \emph{protected model}. Specifically, it first queries {Split-AI}\xspace with its exact training samples to get their corresponding prediction vectors. Then, it uses these prediction vectors as the soft labels of the training set to train the protected model, which is used for inference. During the inference stage, the protected model only needs to perform a single computation for each queried sample, and therefore has a much lower overhead than {Split-AI}\xspace's model.
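The two steps above can be sketched as follows. This is a minimal sketch under our own (hypothetical) naming, not the authors' implementation: the sub-models are stand-in functions, and `non_model_ids[i]` lists the sub-models that were not trained on training sample `i`.

```python
import numpy as np


def split_ai_predict(train_id, x, submodels, non_model_ids, default_ids):
    """Adaptive inference: for a training sample, average only the
    sub-models that never saw it; for a non-member query, average a
    fixed subset of sub-models of the same size."""
    ids = non_model_ids[train_id] if train_id is not None else default_ids
    return np.mean([submodels[k](x) for k in ids], axis=0)


def self_distill_soft_labels(train_xs, submodels, non_model_ids):
    """Self-Distillation, step 1: query Split-AI on the exact training
    samples to obtain soft labels for training the protected model."""
    return [split_ai_predict(i, x, submodels, non_model_ids, None)
            for i, x in enumerate(train_xs)]


# Toy demo: 5 sub-models, each emitting a constant 3-class vector.
submodels = [lambda x, k=k: np.full(3, float(k)) for k in range(5)]
non_model_ids = [[0, 1]]  # sub-models 0 and 1 never saw training sample 0
soft = split_ai_predict(0, None, submodels, non_model_ids, None)
# soft averages only sub-models 0 and 1, ignoring the three that
# trained on the sample -- the source of its membership privacy.
```

The protected model would then be trained on `(train_xs, soft labels)` pairs with an ordinary training loop, after which Split-AI is no longer needed at inference time.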
Furthermore, the protected model protects not only against traditional single-query MIA attacks, but also against adaptive MIA attacks, as shown in our analysis in Section~\ref{sec: eval}. Note that, unlike conventional uses of distillation for membership privacy~\cite{shejwalkar2019reconciling}, our Self-Distillation component does not need a public dataset for knowledge transfer, as it uses its own training dataset for (self-)distillation.\footnote{Note that our usage of the term self-distillation is different from what Zhang et al.~\cite{zhang2019your} refer to as self-distillation.} \noindent\textbf{Evaluation.} We evaluate our {SELENA}\xspace on three benchmark datasets (CIFAR100, Purchase100, Texas100) and compare with existing defenses~\cite{nasr2018machine, jia2019memguard, song2020systematic} in a rigorous manner using two types of existing attacks and one type of adaptive attack. (1) We first analyze our defense with \emph{direct single-query attacks}, which are typically used in most previous MI attacks and defenses. (2) We next evaluate our framework with \emph{label-only attacks}, which infer membership information based only on labels; hence, simply obfuscating the prediction confidence vector cannot protect against such attacks. (3) We finally study \emph{adaptive attacks}, which are tailored to our defense mechanism. Overall, {SELENA}\xspace achieves a better trade-off between utility, i.e., classification accuracy, and practical membership privacy, without requiring additional public data. For utility, {SELENA}\xspace incurs only a small drop in classification accuracy compared to the undefended model~(no more than 3.9\%), and outperforms adversarial regularization~\cite{nasr2018machine} by up to 7.0\%~(on Texas100).
For membership privacy risks, {SELENA}\xspace reduces the MIA advantage over a random guess by a factor of up to 4.0 compared to the undefended model, a factor of up to 3.7 compared to MemGuard~\cite{jia2019memguard} and a factor of up to 2.1 compared to adversarial regularization~\cite{nasr2018machine}. Unlike DP-SGD, which offers a provable privacy guarantee, our approach only provides an empirical membership inference defense (similar to MemGuard and adversarial regularization). However, our evaluation shows that {SELENA}\xspace achieves much better utility than DP-SGD~(see Figure~\ref{fig:compare}). \begin{figure} \centering \includegraphics[width=3.0in]{imgs/fig_intro.pdf} \caption{Comparison of our method with the undefended model, DP-SGD~\cite{abadi2016deep}~($\epsilon=4$), MemGuard~\cite{jia2019memguard} and adversarial regularization~\cite{nasr2018machine} with respect to classification accuracy and MIA accuracy on the Purchase100 dataset. Our SELENA outperforms adversarial regularization in both classification and MIA accuracy. Our SELENA significantly reduces MIA accuracy compared to the undefended model and MemGuard while incurring little classification accuracy drop. Our SELENA achieves much higher classification accuracy than DP-SGD while incurring only a small amount of additional practical membership risk.} \label{fig:compare} \end{figure} In summary, we propose a membership inference defense that achieves high classification accuracy and strongly mitigates practical MIAs. Our key contributions are as follows: \begin{compactitem} \item We propose {Split-AI}\xspace as the first component of our framework, which enforces similar model behavior on members and non-members while maintaining good classification accuracy, using sub-models trained on overlapping subsets of data and an adaptive inference strategy. We further prove that the direct single-query attack cannot achieve higher attack accuracy than a random guess against this component.
\item We introduce Self-Distillation of the training set as the second component of our framework to overcome the limitations of {Split-AI}\xspace while largely preserving its defensive abilities, without relying on an additional public dataset. \item We systematically evaluate our framework on three benchmark datasets, Purchase100, Texas100 and CIFAR100, against a suite of MIAs including direct single-query attacks, label-only (indirect multi-query) attacks and adaptive attacks, and show that our framework outperforms prior defenses. \end{compactitem} \section{Related Work} \label{sec:related} \textbf{Membership inference attacks against machine learning.} MIAs are usually studied in a black-box manner~\cite{shokri2017membership, salem2018ml, nasr2018machine}: an attacker either leverages the shadow training technique or utilizes knowledge of partial membership information of the training set. Most MIAs are direct single-query attacks~\cite{song2019privacy, song2020systematic,yeom2020overfitting, yeom2018privacy}. A more recent line of MIA research has considered indirect multi-query attacks, which leverage multiple queries around the target sample to extract additional information~\cite{long2018understanding, choo2020label, li2020label, jayaraman2020revisiting}. Jayaraman et al.~\cite{jayaraman2020revisiting} analyze MIAs under more realistic assumptions by relaxing the ratio of training-set size to test-set size in the MIA setup to be any positive value instead of $1$. Hui et al.~\cite{hui2021practical} study MIAs in a practical scenario, assuming no true labels of the target samples are known, and utilize differential comparison for MIAs. Another threat model for MIAs is the white-box setting, i.e., the attacker has full access to the model~\cite{nasr2019comprehensive, leino2020stolen} and can exploit model parameters to infer membership information.
\noindent \textbf{Membership inference defenses for machine learning.} Membership inference defenses can be divided into two main categories. One category of defenses is specifically designed to mitigate such attacks. It has been shown that techniques to improve a model's generalization ability, including regularization~\cite{krogh1992simple} and dropout~\cite{srivastava2014dropout}, can decrease MIA success only to a limited extent~\cite{shokri2017membership, salem2018ml}. Several defenses~\cite{nasr2018machine, li2020membership} propose to add a specific constraint during training to mitigate the difference in model behavior on members and non-members. These constrained optimization problems are usually computationally hard to solve during training. Post-processing techniques on prediction vectors have also been applied as membership inference defenses~\cite{jia2019memguard, yang2020defending}. Note that these defenses, which obfuscate prediction vectors, cannot defend against label-only attacks~\cite{li2020label, choo2020label}. Moreover, Song et al.~\cite{song2020systematic} re-evaluate two state-of-the-art defenses~(adversarial regularization~\cite{nasr2018machine} and MemGuard~\cite{jia2019memguard}) and find that both underestimate metric-based attacks. Shejwalkar et al.~\cite{shejwalkar2019reconciling} propose distillation on public data to protect membership privacy. However, a public dataset is not available in many practical scenarios. The other category of defenses uses differential privacy mechanisms~\cite{dwork2006calibrating,dwork2008differential, dwork2014algorithmic}, which provide a provable privacy guarantee for users. A general framework combining deep learning and differential privacy is DP-SGD~\cite{abadi2016deep, mcmahan2017learning, wang2019subsampled}.
However, machine learning with differential privacy suffers from the challenge of achieving an acceptable trade-off between utility loss and the privacy guarantee~\cite{jayaraman2019evaluating, rahman2018membership}. Several methods have been proposed to improve test accuracy under an acceptable $\epsilon$ guarantee, and this is still an active area of research. Current state-of-the-art approaches still incur a significant drop in test accuracy~(around 25\%) on benchmark datasets with an acceptable $\epsilon \leq 3$~\cite{tramer2020differentially, papernot2020tempered, nasr2020improving}. \noindent \textbf{Other Attacks Against Machine Learning Privacy.} Fredrikson et al.~\cite{fredrikson2015model} propose model inversion attacks, which can infer missing values of an input feature from the classifier's prediction. Ganju et al.~\cite{ganju2018property} study property inference attacks, which aim to infer properties of the target model's training set. Salem et al.~\cite{salem2020updates} propose a dataset reconstruction attack in the online learning setting. {Another line of work studies model extraction attacks~\cite{tramer2016stealing, he2020stealing, krishna2019thieves}, i.e., stealing the ML model's learned parameters through the prediction API. Besides model parameters, other works focus on stealing the target model's hyperparameters~\cite{wang2018stealing}.} Recently, Carlini et al.~\cite{carlini2019secret, carlini2020extracting} studied memorization and data extraction attacks on natural language processing models, showing that machine learning models suffer from severe privacy threats.
\section{Introduction} \label{sec:intro} Low-dose computed tomography (CT) denoising aims at reconstructing high-quality volumetric images from CT acquisitions with reduced patient dose. In recent years, conventional CT denoising algorithms have been outperformed by methods using neural networks that allow data-driven optimization. Many of these approaches are trained in a supervised fashion, which requires paired low- and high-dose CT data. Recently, multiple self-supervised methods were proposed that can be trained without ground-truth high-dose target data, which greatly simplifies their applicability. One work demonstrates that learning the mapping between reconstructions of two independent sets of projections can be used to train a CT denoising model \cite{hendriksen2020noise2inverse}. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{fig_overview.pdf} \caption{Overview of the proposed trainable dual-domain and self-supervised CT denoising pipeline.} \label{fig:overview} \end{figure} Other works employ the similarity of features in neighboring CT slices to learn a mapping to noise-free images \cite{wu2020self}, use the Noise2Noise principle \cite{lehtinen2018noise2noise}, or estimate an underlying noise model to optimize a denoising network \cite{kim2022noise}. All these methods have in common that they perform denoising as a pure post-processing step on reconstructed CT images. However, noise in CT data already originates in the detection process through limited photon statistics and the properties of the detector itself \cite{yu2012development}. The CT reconstruction algorithms then distribute projection image noise over the entire reconstructed volume, which complicates the noise pattern and the denoising task for post-processing algorithms. We believe that there are mainly two reasons why most research focuses on denoising in the image domain.
First, projection data can be difficult to handle as it is often acquired on helical trajectories in medical CT scanners. Second, denoising projection images requires a running CT reconstruction algorithm. We selected all $36$ works that reference the most popular public low-dose CT data set (reference \cite{moen2021low}, Google Scholar, Oct 2022) and perform CT data processing. We found that only four of them use the provided raw projection data. All other works only perform experiments starting from the reconstructed images.\\%\cite{wu2021low,kandarpa2022lrr,xu2022extent,yang2022low} \begin{figure*}[htb] \centering \includegraphics[width=\linewidth]{fig_self-supervised_reco_w_g.pdf} \caption{Illustration of the proposed end-to-end self-supervised denoising pipeline. Projections are split in two stacks and reconstructed separately. The loss calculated between denoised prediction and reconstructed target stack is backpropagated through $\mathbf{D}_{\text{img}}^w$ and $\mathbf{R}$ to $\mathbf{D}_{\text{proj}}^w$ to optimize all denoising operators. The gradient flow from Eq.~\ref{eq:gradient_flow} is indicated by dashed lines.} \label{fig:reco_pipeline} \end{figure*}\noindent Only a few other works propose denoising CT projection data alone or in combination with the reconstructed images \cite{kim2020unsupervised,wagner2022ultra,ge2022ddpnet,patwari2022limited}. However, these methods can only train their denoising models in the projection and image domain separately with independent loss functions or require paired data.\\ In this work, we present a self-supervised denoising pipeline that can be trained end-to-end starting from the raw projection data and predicting a denoised reconstruction. Our method allows integrating any trainable denoising model in the projection and the image domain (dual-domain) as illustrated in Fig.~\ref{fig:overview}. 
All operators are optimized simultaneously by backpropagating a gradient through the entire pipeline, including the reconstruction operator, without requiring high-dose target data. Together with our proposed denoising pipeline, we make our Python framework for loading, rebinning, and reconstructing all projection data directly from the DICOM-CT-PD format \cite{chen2015development} publicly available to facilitate the usage of helical CT data. Our contributions are threefold. \begin{itemize} \item We present a dual-domain, end-to-end trainable, and entirely self-supervised CT denoising and reconstruction pipeline. \item We demonstrate the effectiveness of our dual-domain approach on both medical CT and pre-clinical X-ray Microscope (XRM) data starting from the raw acquired projection images. \item We make our helical projection rebinning and reconstruction framework publicly available to simplify using helical CT data and provide an open-source differentiable reconstruction pipeline for medical CT. \end{itemize} \section{Methods} \label{sec:methods} \subsection{End-to-end CT denoising} \label{ssec:denoising} During a CT acquisition, the image $\mathbf{y}$ is measured using the forward projection operator $\mathbf{A}$. Projection images $\mathbf{x}$ are generated that are affected by noise $\mathbf{n}$ through photon statistics and detector physics \begin{align} \mathbf{x} = \mathbf{Ay} + \mathbf{n}\enspace. \end{align} A linear CT reconstruction operator $\mathbf{R}$, e.g., filtered back projection (FBP), can be used to reconstruct a noise-affected version of the measured image \begin{align} \mathbf{y'} = \mathbf{Rx} = \mathbf{RAy} + \mathbf{Rn}\enspace. \end{align} In the measurement domain, noise is a mixture of Poisson and Gaussian distributions defined through the acquired photon statistics and electronic noise on the detector \cite{yu2012development}.
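As a toy numerical illustration of these two relations (not the paper's operators): below, $\mathbf{A}$ is a stand-in random matrix and $\mathbf{R}$ its pseudo-inverse, playing the role of a linear reconstruction operator such as FBP. The example makes explicit that the reconstruction error is exactly the distributed projection noise $\mathbf{Rn}$.

```python
import numpy as np

# Toy illustration of x = A y + n and y' = R x = R A y + R n.
# A is a stand-in random "forward projector"; R = pinv(A) is a linear
# left-inverse acting as the reconstruction operator.
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 8))          # forward projection operator
R = np.linalg.pinv(A)                 # linear reconstruction operator
y = rng.normal(size=8)                # underlying image
n = rng.normal(0, 0.1, size=32)       # projection-domain noise

x = A @ y + n                         # noisy projections
y_prime = R @ x                       # noise-affected reconstruction

# Since R A = I for the pseudo-inverse, the error in the reconstruction
# is exactly the spread-out projection noise R n.
print(np.allclose(y_prime - y, R @ n))  # True
```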
The reconstruction operator acts on $\mathbf{x}$ and thereby distributes noise over the entire reconstructed image, leading to a complex noise pattern. Therefore, denoising already in the measurement domain can be advantageous due to the comparatively simple noise distribution present there. Denoising operators $\mathbf{D}_{\text{proj}}^w$ and $\mathbf{D}_{\text{img}}^w$, dependent on trainable parameters $w$, can be used to remove noise at different stages of the reconstruction pipeline to predict a denoised image representation $\mathbf{\hat{y}}$ \begin{align} \mathbf{\hat{y}} = \mathbf{D}_{\text{img}}^w \mathbf{R} \mathbf{D}_{\text{proj}}^w \mathbf{x}\enspace. \end{align} Subsequently, a loss $L$ can be calculated as a quality measure of the prediction. To allow optimizing the set of trainable parameters $w_\text{proj}$ of $\mathbf{D}_{\text{proj}}^w$, the gradient \begin{align} \frac{\partial L}{\partial w_\text{proj}} = \frac{\partial L}{\partial \mathbf{\hat{y}}}\frac{\partial \mathbf{\hat{y}}}{\partial w_\text{proj}} \label{eq:gradient_flow} \end{align} must be derived, which requires a differentiable reconstruction operator $\mathbf{R}$. In this work, we employ differentiable fan-beam \cite{ronchetti2020torchradon} and cone-beam \cite{syben2019pyro} FBP operators that can backpropagate a loss into the measurement domain. Our combined end-to-end trainable pipeline with denoising operators in both domains is illustrated in Fig. \ref{fig:reco_pipeline}. \subsection{Self-supervised training} \label{ssec:noise2inverse} The Noise2Inverse approach presents an image noise quality metric that does not require high-dose target data \cite{hendriksen2020noise2inverse}. The idea is to, first, split the data into multiple element-wise independent sets, second, denoise a subsection of the sets, and third, calculate the distance of the prediction to the remaining sets.
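The gradient flow of Eq.~\ref{eq:gradient_flow} combined with this splitting idea can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: the reconstruction operator is a random linear map, $\mathbf{D}_{\text{img}}^w$ is the identity, and the projection denoiser is a hypothetical one-parameter gain $D_{\text{proj}}(x; w) = w\,x$, so the chain rule through the linear $\mathbf{R}$ can be written in closed form.

```python
import numpy as np

# Toy sketch of the end-to-end gradient flow: one projection split is
# denoised and compared against the plain reconstruction of the other,
# and the image-domain loss is differentiated through the linear R down
# to the projection-domain parameter w.
rng = np.random.default_rng(1)
R = rng.normal(size=(8, 16)) / 16     # stand-in linear reconstruction operator
x1 = rng.normal(size=16)              # projection split 1 (to be denoised)
x2 = rng.normal(size=16)              # projection split 2 (target)

def loss_and_grad(w):
    y_hat = R @ (w * x1)              # D_img is the identity here
    target = R @ x2
    r = y_hat - target
    # chain rule: dL/dw = (dL/dy_hat) (dy_hat/dw) = 2 r^T (R x1)
    return float(r @ r), float(2.0 * r @ (R @ x1))

w = 1.0
l0, g = loss_and_grad(w)
w -= 0.1 * g                          # one gradient-descent step
l1, _ = loss_and_grad(w)
print(l1 < l0)  # True
```

With autograd frameworks the same closed-form step is replaced by backpropagation through the differentiable FBP operator, but the structure of the update is identical.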
In CT applications, data must be split in the measurement domain to preserve element-wise independence, as the reconstruction operator $\mathbf{R}$ distributes each projected view over the entire image. In practice, we split our projection data into two independent sets $\mathbf{x_1}$ and $\mathbf{x_2}$ containing the projections with odd and even indices, respectively. Subsequently, the projection sets are processed independently to obtain element-wise independent reconstructions $\mathbf{\hat{y}}_i$ and $\mathbf{\hat{y}}_j$ \begin{align} \begin{split} \mathbf{\hat{y}}_i &= \mathbf{D}_{\text{img}}^w \mathbf{R} \mathbf{D}_{\text{proj}}^w \mathbf{x}_i\\ \mathbf{\hat{y}}_j &= \mathbf{R} \mathbf{x}_j \end{split} \end{align} with $i,j \in \{ 1,2 \}$ and $i\neq j$. Hendriksen~\textit{et al.} \cite{hendriksen2020noise2inverse} proved that by minimizing the mean-squared error (MSE) between $\mathbf{\hat{y}}_i$ and $\mathbf{\hat{y}}_j$, denoising models learn to predict the underlying noise-free image. The final denoised prediction during inference is derived from all projections by averaging both denoised reconstructions \begin{align} \mathbf{\hat{y}} = \frac{1}{2} \left(\mathbf{\hat{y}}_{i=1} + \mathbf{\hat{y}}_{i=2}\right)\enspace. \end{align} In our proposed dual-domain denoising pipeline, we propagate this self-supervised loss back through the reconstruction operator to the projection denoising operator following the setting described in Sec.~\ref{ssec:denoising} and Fig.~\ref{fig:reco_pipeline}. \subsection{Projection rebinning} \label{ssec:rebinning} Most medical CT scanners acquire projections on helical trajectories to reduce scan times and patient dose. However, to the best of our knowledge, there is no differentiable reconstruction operator available that supports helical acquisition geometries.
Therefore, we rebinned helical projection data to fan-beam geometry following the algorithm of Noo~\textit{et al.} \cite{noo1999single} to enable backprojecting with differentiable operators as described in Sec.~\ref{ssec:denoising}. We have made our repository publicly available; it loads projection and geometry data from the raw DICOM-CT-PD format \cite{chen2015development} used for all projections in the largest public low-dose CT data set \cite{moen2021low}, rebins the projections to fan-beam geometry, and reconstructs them using a differentiable fan-beam FBP \cite{ronchetti2020torchradon}. We believe that our open-source Python framework can remove barriers for other researchers when developing algorithms for medical CT data\footnote{\url{https://github.com/faebstn96/helix2fan}}. \section{Experiments} \label{sec:experiments} In this work, we perform multiple experiments on two distinct CT data sets to demonstrate the effectiveness of our proposed dual-domain, end-to-end trainable, self-supervised denoising pipeline. First, we perform experiments on rebinned helical abdomen CT scans ($25\,\%$ dose) to show applicability in a clinical setting. Second, we show that dose and acquisition speed can be improved in pre-clinical cone-beam X-ray microscope (XRM) scans of mouse bone samples ($10\,\%$ dose). Future in vivo XRM acquisitions of the bone-remodeling process on the micrometer scale can help to understand and develop treatments for bone-related diseases \cite{gruneboom2019next}. We used the differentiable cone-beam reconstruction pipeline by Thies~\textit{et al.} \cite{thies2022calibration} in our XRM experiments.\\ Three different denoising settings were investigated: (a) self-supervised denoising following Sec.~\ref{ssec:noise2inverse} and (b) supervised denoising, both using denoising operators in the projection and the image domain.
In addition, we performed (c) self-supervised denoising with only one denoising operator as reconstruction post-processing, as is done in many related works including Noise2Inverse \cite{hendriksen2020noise2inverse}. We investigated the compatibility of two different denoising operators with our pipeline: first, standard U-Net architectures \cite{ronneberger2015u}, which can be regarded as representative of most CNN-based methods; second, single trainable bilateral filters (BFs) \cite{wagner2022ultra}, which are conventional/hybrid ultralow-parameter (four trainable parameters) filters that have been shown to achieve competitive and robust denoising performance compared to deep neural networks \cite{wagner2022trainable}. Whereas the BFs were directly employed to predict denoised images, the U-Nets were used to predict the residual noise from the network input, which was subsequently subtracted from that input. This setting turned out to converge more stably after the random weight initialization.\\ Both data sets were split into four training, one validation, and five test scans, with each scan reconstructed to either $100$ (abdomen CT) or $30$ (XRM) slices. We trained on single CT slices due to limited GPU memory (Nvidia RTX A6000) but tested on the entire scans. The training and validation data are only used during supervised training. We used the Adam optimizer with lr $= 5 \cdot 10^{-5}$ (U-Net) and $5 \cdot 10^{-3}$ (BFs) in all our experiments and trained until convergence of the self-supervised loss (experiments (a) and (c)) or validation loss (experiment (b)).
Quantitative quality measures peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are calculated across all test scans (mean $\pm$ std) and listed in Tab.~\ref{tab:abdomen} and Tab.~\ref{tab:xrm} for the investigated abdomen and bone scans. \begin{table}[ht] \caption{Quantitative results on abdomen CT data. Dual-domain self-supervised (a) and supervised (b) as well as solely post-processing (c) denoising is investigated. (1) indicates the U-Net and (2) the BFs.} \begin{tabular}{lcc} \hline & PSNR & SSIM \\ \hline Low-dose & $41.7 \pm 1.4$ & $0.941 \pm 0.017$ \\ \hline (1a) Dual U-Nets (self-sup) & $44.8 \pm 1.2$ & $0.975 \pm 0.006$ \\ (1b) Dual U-Nets (sup) & $45.6 \pm 1.2$ & $0.978 \pm 0.005$ \\ (1c) Reco U-Net (self-sup) & $43.4 \pm 1.0$ & $0.965 \pm 0.007$ \\ \hline (2a) Dual BFs (self-sup) & $45.0 \pm 1.4$ & $0.977 \pm 0.006$ \\ (2b) Dual BFs (sup) & $45.3 \pm 1.4$ & $0.976 \pm 0.008$ \\ (2c) Reco BF (self-sup) & $44.1 \pm 1.1$ & $0.973 \pm 0.006$ \\ \hline \end{tabular} \label{tab:abdomen} \end{table}\noindent In general, across both data sets and both model types (U-Nets, BFs), supervised training using the ground-truth high-dose reconstructions to train the network outperformed self-supervised methods quantitatively by a small but distinct amount. In addition, all self-supervised pipelines using denoising operators in both projection and image domain outperformed the respective self-supervised model that only performs image post-processing by $82.4\text{--}94.1\,\%$/$12.5\text{--}41.7\,\%$ (PSNR/SSIM) on abdomen data and by $1.5\text{--}2.9\,\%$/$0.4\text{--}0.5\,\%$ (PSNR/SSIM) on XRM scans relative to the low-dose baseline. Therefore, we conclude that dual-domain CT denoising is beneficial over single-domain denoising.\\ \begin{table}[t] \caption{Quantitative results on XRM data. 
Abbreviations as in Tab.~\ref{tab:abdomen}.} \begin{tabular}{lcc} \hline & PSNR & SSIM \\ \hline Low-dose & $18.6 \pm 0.1$ & $0.141 \pm 0.009$ \\ \hline (1a) Dual U-Nets (self-sup) & $32.5 \pm 0.1$ & $0.671 \pm 0.008$ \\ (1b) Dual U-Nets (sup) & $32.7 \pm 0.1$ & $0.682 \pm 0.008$ \\ (1c) Reco U-Net (self-sup) & $32.3 \pm 0.1$ & $0.668 \pm 0.008$ \\ \hline (2a) Dual BFs (self-sup) & $32.6 \pm 0.1$ & $0.686 \pm 0.008$ \\ (2b) Dual BFs (sup) & $32.9 \pm 0.1$ & $0.690 \pm 0.008$ \\ (2c) Reco BF (self-sup) & $32.2 \pm 0.2$ & $0.684 \pm 0.009$ \\ \hline \end{tabular} \label{tab:xrm} \end{table}\noindent In general, the experiments on medical data show a stronger relative improvement with respect to the low-dose baseline. However, different image content, data ranges, and noise levels in the two investigated data sets make quality metrics and improvements difficult to compare. In addition, we believe that due to the high angular sampling in XRM scans, the data inherently contain fewer reconstruction artifacts, which can simplify denoising during post-processing.\\ Magnified ROIs of model predictions on both data sets are presented in Fig.~\ref{fig:vis_res}. A liver lesion is highlighted for the abdomen CT data (red arrow). In line with the quantitative results, the supervisedly trained models (1b, 2b) predict reconstructions closest to the high-dose ground-truth images. The dual-domain models employing denoising operators in both the projection and image domain simultaneously (1a, 2a) reduce noise compared to the noisy low-dose image and outperform the respective models only using a post-processing denoising operator (1c, 2c).\\ In general, our experiments show that our presented dual-domain and self-supervised CT denoising pipeline improves denoising compared to pure reconstruction post-processing.
The benefit of projection denoising can be explained by the distinct noise distribution in the projection data, which constitutes a considerably easier denoising task compared to the complex noise removal on the reconstruction. We hope that our open-source projection rebinning and differentiable reconstruction framework can facilitate more research on methods intervening in the different data domains of CT reconstruction pipelines. \begin{figure}[h!tb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{fig_hd.pdf}} \vspace{0.05cm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{fig_rois.pdf}} \end{minipage} \caption{Exemplary predictions of an abdomen CT slice (left) and cross sections of a mouse tibia bone (XRM) close to the knee area (right). Below the high-dose overview images, ROIs (red squares) of (LD) low-dose, (HD) high-dose, (1a) dual U-Nets (self-supervised), (1b) dual U-Nets (supervised), (1c) reco U-Net (self-supervised), (2a) dual BFs (self-supervised), (2b) dual BFs (supervised), and (2c) reco BF (self-supervised) predictions are presented. Windows are $[-150, 250]\,\text{HU}$ (abdomen) and $[0.05, 0.32]\,\text{arb.}\,\text{unit}$ (XRM).} \label{fig:vis_res} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we presented an end-to-end trainable and self-supervised CT reconstruction pipeline that performs denoising in two domains, namely the projection and the image domain. Our experiments on medical and pre-clinical CT data demonstrate quantitatively and qualitatively that dual-domain denoising is beneficial over denoising only the reconstructed image, as conducted in many recent works. We believe that our released open-source helical CT rebinning and differentiable reconstruction framework can enable further research on self-supervised and dual-domain CT pipelines.
\newpage \section{Compliance with ethical standards} \label{sec:ethics} The abdomen CT study was conducted retrospectively using human subject data made available in open access by Moen~\textit{et al.} \cite{moen2021low}. Ethical approval was not required as confirmed by the license attached with the open-access data. The bone XRM study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of FAU Erlangen-Nürnberg (license TS-10/2017). \section{Acknowledgments} \label{sec:acknowledgments} This work was supported by the European Research Council (ERC Grant No. 810316) and a GPU donation through the NVIDIA Hardware Grant Program. F.W. conceived and conducted the experiments. M.T., L.P., N.M., M.R., M.G., J.U., and F.D. provided valuable technical feedback during development. O.A., S.P., and D.W. prepared and scanned the bone samples. A.M. supervised the project. All authors reviewed the manuscript. L.P., N.M., M.R., and F.D. are employees of Siemens Healthcare GmbH. \bibliographystyle{IEEEbib}
\section{Introduction} Glottochronology uses the percentage of shared ``cognates'' between languages to calculate their distances. These ``genetic'' distances are logarithmically proportional to divergence times if a constant rate of lexical replacement is assumed. Cognates are words inferred to have a common historical origin; their identification is often a matter of sensibility and personal knowledge, so subjectivity plays a relevant role. Furthermore, results are often biased, since it is easier for European or American scholars to identify cognates belonging to Western languages. For instance, the Spanish word {\it leche} and the Greek word {\it gala} are cognates. In fact, {\it leche} comes from the Latin {\it lac}, with genitive form {\it lactis}, while the genitive form of {\it gala} is {\it galactos}. This identification is possible because of our historical records; it would hardly have been possible for languages, say, of Central Africa. Our aim is to avoid this subjectivity and construct a languages tree which can be easily replicated by other scholars. To reach this goal, we compare words with the same meaning belonging to different languages, considering only orthographical differences. More precisely, we use a modification of the Levenshtein distance (or edit distance) to measure the distance between pairs of words in different languages. The edit distance is defined as the minimum number of operations needed to transform one word into the other, where an operation is an insertion, deletion, or substitution of a single character. Our definition of the genetic distance between two words is the edit distance divided by the number of characters of the longer of the two. With this definition, the distance can take any value between 0 and 1.
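The normalized edit distance just defined can be implemented in a few lines. The sketch below uses the classic dynamic-programming Levenshtein algorithm; the word pairs are purely illustrative (note that {\it leche} and {\it gala}, although true cognates, come out as maximally distant under this purely orthographic measure).

```python
# Normalized Levenshtein distance: edit distance divided by the length of
# the longer word, so the result always lies in [0, 1].

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def normalized_distance(a, b):
    return edit_distance(a, b) / max(len(a), len(b))

print(normalized_distance("night", "nacht"))  # 0.4 (two substitutions)
print(normalized_distance("leche", "gala"))   # 1.0
print(normalized_distance("milk", "milk"))    # 0.0
```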
To understand why we renormalize, consider the case of one substitution between two words: if the compared words are long, a single substitution still leaves them very similar, while if the words are short, say two characters, one substitution is enough to make them completely different. Without renormalization, the distance between the words compared in the two examples would be the same, no matter their length. Instead, introducing the normalization factor, in the first case the genetic distance is much smaller than in the second one. We use the distance between word pairs, as defined above, to construct a distance between pairs of languages. The first step is to find lists of words with the same meaning for all the languages for which we intend to construct the distance. Then, we compute the genetic distance for each pair of words with the same meaning in a given language pair. Finally, the distance between the two languages is defined as the average of the distances over all these word pairs. The result is a number between 0 and 1 which we claim to be the genetic distance between the two languages. \section{Languages database} The database we use for the present analysis \cite{footnote1} consists of 50 languages with 200 words for each of them. The words are chosen according to the Swadesh list. All the languages considered belong to the Indo-European group. The database is a selection/modification of the one used in \cite{D}, where some errors have been corrected and many missing words have been added. In the database only the English alphabet is used (26 characters plus space); those languages written in a different alphabet (e.g., Greek) were already transliterated into the English one in \cite{D}. For some of the languages in our lists \cite{footnote1} there are still a few missing words, for a total of 43 in a database of 9957 entries.
When a language has one or more missing words, these are simply not considered in the average that leads to the definition of the distance. This implies that for some pairs of languages the number of compared words is not 200, but it is always at least 187. There is no bias in this procedure; the only effect is that the statistics are slightly reduced. The result of the analysis described above is a $50 \times 50$ upper triangular matrix which contains the 1225 distances among all language pairs. Indeed, our method for computing distances is a very simple operation that does not need any specific linguistic knowledge and requires a minimum of computing time. \section{Time distance between languages} A phylogenetic tree can already be built from this matrix, but this would only give the topology of the tree, whereas the absolute time scale would be missing. In order to have this quantitative information, some hypotheses on the time evolution of genetic distances are necessary. We assume that the genetic distance among words on one side tends to grow due to random mutations, and on the other side may reduce since different words may become more similar by accident or, more likely, by language borrowings. Therefore, the distance $D$ between two given languages can be thought to evolve according to the simple differential equation \begin{equation} \label{diffeq} \dot{D}=\alpha \,(1-D) -\beta D \end{equation} where $\dot{D}$ is the time derivative of $D$. The parameter $\alpha$ accounts for the increase of $D$ due to random permutations, deletions or substitutions of characters (random mutations), while the parameter $\beta$ accounts for the possibility that two words become more similar by a ``lucky'' random mutation or by word borrowings from one language to the other, or by both from a third one. Since $\alpha$ and $\beta$ are constant, it is implicitly assumed that mutations and borrowings occur at a constant rate.
At time $T=0$ the two languages begin to separate and the genetic distance $D$ is zero. With this initial condition the above equation can be solved and the solution can be inverted. The result is a relation which gives the separation time $T$ (time distance) between two languages in terms of their genetic distance $D$ \begin{equation} T= -\epsilon \,\ln(1 - \gamma D) \label{time} \end{equation} The values of the parameters $\epsilon = 1/(\alpha + \beta)$ and $\gamma = (\alpha + \beta ) / \alpha$ can be fixed experimentally by considering two pairs of languages whose separation time (time distance) is known. We have chosen a distance of 1600 years between Italian and French and a distance of 1100 years between Icelandic and Norwegian. The resulting values of the parameters are $\epsilon =1750$ and $\gamma=1.09$, which correspond to $\alpha \cong 5*10^{-4}$ and $\beta\cong 6*10^{-5}$. This means that similar words may become more different at a rate that is about ten times the rate at which different words may become more similar. It should be noticed that (\ref{time}) closely resembles the fundamental formula of glottochronology. The time distance $T$ is then computed for all pairs of languages in the database, obtaining a $50 \times 50 $ upper triangular matrix with $1225$ non-trivial entries. This matrix preserves the topology of the genetic distance matrix but contains all the information concerning absolute time scales. The phylogenetic tree in Fig. \ref{fig1} is constructed from this matrix using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA). We use UPGMA for its coherence with the trees associated with a coalescence process of Kingman type \cite{K}. In fact, the process of language separation and extinction closely resembles the population dynamics associated with haploid reproduction, which holds for simple organisms or for the mitochondrial DNA of complex ones such as humans.
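The relation (\ref{time}) and its inverse are straightforward to evaluate numerically. The sketch below uses the fitted values $\epsilon = 1750$ and $\gamma = 1.09$ quoted above; the genetic distance passed in is purely illustrative and is not a value from our database.

```python
import math

EPS, GAMMA = 1750.0, 1.09  # fitted parameters quoted above

def time_distance(d):
    """Separation time T in years from a genetic distance d in [0, 1/GAMMA)."""
    return -EPS * math.log(1.0 - GAMMA * d)

def genetic_distance(t):
    """Inverse relation: genetic distance after t years of separation."""
    return (1.0 - math.exp(-t / EPS)) / GAMMA

# the two functions are inverses of each other, e.g. for T = 1100 years
print(round(time_distance(genetic_distance(1100.0))))  # 1100
print(round(time_distance(0.3)))  # separation time for an illustrative d = 0.3
```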
This dynamics, introduced by Kingman, has been extensively studied and described, see for example \cite{S,SD}. In particular, in these two papers the distribution of distances is found and plotted, and can be usefully compared with the one obtained here and plotted in Fig. \ref{fig2}. It should be considered that in the model of Kingman time distances have the objective meaning of measuring the time from separation, while in our realistic case time distances are reconstructed from genetic distances. In this reconstruction we assume that lexical mutations and borrowings happen at a constant rate. This is true only on average, since there is an inherent randomness in this process which is not taken into account by the deterministic differential equation (\ref{diffeq}). Furthermore, the parameters $\alpha$ and $\beta$ may vary from one pair of languages to another, and they may also vary in time according to historical conditions. Therefore, the distribution in Fig. \ref{fig2} is not exactly the distribution in \cite{S,SD}, but it could be obtained from them after a random shift of all distances. In this paper we do not consider dead languages; we suspect, in fact, that results would be biased due to the different times in which languages existed. For example, comparison of Latin with its offspring would be a meaningless operation in the context of this research. We think that, eventually, Latin should be compared with its contemporary languages and their genealogical tree constructed. \section{Methods} \subsection{Database} The database used here to construct the phylogenetic tree is composed of 50 languages of the Indo-European group. The main source for the database is the file prepared by Dyen et al. in \cite{D}, which contains the Swadesh list of 200 words for 96 languages. This list consists of items of basic vocabulary, like body parts, pronouns, and numbers, which are known to be resistant to borrowings.
Many words are missing in \cite{D}, but we have filled most of the gaps by finding the words on Swadesh lists and on dictionaries freely available on the web. The file \cite{D} also contains information on ``cognates'' among languages that we do not use in this work. Our selection of 50 languages is based on \cite{D}, but we avoid considering more than one version of the same language. For example, we do not consider both Irish A and Irish B, but we choose only one of them, and we do not include Brazilian but only Portuguese. Our choice among similar languages is based on keeping the language with the fewest gaps. Our database is available at \cite{footnote1}. \subsection{Tree construction} In this work the normalized Levenshtein distance is used to build up an upper triangular $50 \times 50$ matrix with 1225 entries representing the pairwise distances between 50 languages. These genetic distances are translated into time distances between language pairs, and a new matrix of the same size comes out. Then, the simple phylogenetic algorithm UPGMA \cite{UPGMA} for tree construction is used. The reason why we choose the unweighted pair-group method using arithmetic average (UPGMA) is that it is the most coherent with the hypothesis that the languages tree is generated by a coalescence process of Kingman type \cite{K}. Let us describe briefly how this algorithm works. It first identifies the two languages with the shortest time distance and then treats this pair as a new single object whose distance from the other languages is the average of the distances of its two components. Subsequently, among the new group of objects it identifies the pair with the shortest distance, and so on. At the end, one is left with only two objects (language clusters) which represent the two main branches at the root of the tree.
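A minimal sketch of this clustering procedure (illustrative Python, not the authors' code; cluster-to-cluster distance is the unweighted average over all leaf pairs, and the toy three-language matrix below is invented for the example):

```python
def upgma(dist, labels):
    """dist[i][j]: symmetric leaf-to-leaf distances; returns a nested-tuple tree."""
    clusters = [[i] for i in range(len(labels))]  # each cluster is a list of leaf indices
    trees = list(labels)
    while len(clusters) > 1:
        # distance between clusters = average over all pairs of member leaves
        x, y = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda p: sum(dist[i][j] for i in clusters[p[0]] for j in clusters[p[1]])
                          / (len(clusters[p[0]]) * len(clusters[p[1]])),
        )
        clusters[x] = clusters[x] + clusters[y]  # merge the closest pair ...
        trees[x] = (trees[x], trees[y])          # ... into a single new object
        del clusters[y], trees[y]
    return trees[0]

# Toy matrix: A and B are close, C is distant
D = [[0.0, 0.1, 0.5],
     [0.1, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
print(upgma(D, ["A", "B", "C"]))  # -> (('A', 'B'), 'C')
```

A and B are paired first, and the resulting cluster is then joined with C, mirroring the step-by-step description above.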
We remark that, as a consequence of the construction rule, the distance between two branches is the average of the distances between all pairs of languages belonging to the two branches. \section{Conclusions} We would like to compare now our results with those published in \cite{GA}. The tree in Fig. \ref{fig1} is similar to the one in \cite{GA}, but there are some important differences. First of all, the first separation concerns Armenian, which forms a separate branch close to the root, while the other branch contains all the remaining Indo-European languages. The second separation is that of Greek, and only afterwards does the European branch separate from the Indo-Iranian one. This is the main difference from the tree in \cite{GA}, since therein the separation at the root gives origin to two branches, one with the Indo-Iranian languages plus Armenian and Greek, the other with the European languages. The position of Albanian is also different: in our case it is linked to the European languages, while in \cite{GA} it goes with the Indo-Iranian ones. Finally, the Romani language is correctly located together with the Indian languages, but it is not as close to Singhalese as reported in \cite{GA}. In spite of these differences, our tree seems to confirm the conclusions reported in \cite{GA} about the Anatolian origin of the Indo-European languages; in fact, in our research, the first separations concern the languages geographically closer to Anatolia, that is to say Armenian and Greek. We want to stress that the method used here is very simple and does not require any previous knowledge of language origins. Also, it can be applied directly to all those language pairs for which a translation of a small group of words exists. The results could be improved if more words are added to the database and if translations and transliterations are made more accurate.
Since our method is very easy to use, the only difficulty being the collection of words, we plan to extend our study to other language families and eventually test competing hypotheses concerning superfamilies, or test controversial classifications such as, for example, the case of Japanese. \section*{Acknowledgments} We thank Julien Raboanary for many discussions and examples from Malagasy dialects concerning the applicability of the Levenshtein distance to linguistics. Critical comments on many aspects of the paper by M. Ausloos are also gratefully acknowledged. The work by FP has been supported by European Commission Project E2C2 FP6-2003-NEST-Path-012975 Extreme Events: Causes and Consequences. \section*{References}
\section{INTRODUCTION} A solar flare is a sudden explosion in the solar atmosphere during which the magnetic energy (stored in the twisted and sheared magnetic fields as well as in the current layers between interacting fields) is released in the form of kinetic energy of rapidly moving plasma, accelerated particles and thermal energy that heats up the ambient plasma. This primary release of energy takes place in the corona and is accompanied by fast directed ejections (e.g., jets) of plasma, powerful flows of heat, and accelerated particles. They interact with the chromosphere and photosphere, thereby creating an extremely rich scenario of secondary physical processes observed as a solar flare. It is generally believed, and well supported by observations, that magnetic reconnection is the key effect which plays the crucial role in annihilating the complex magnetic field structures and in the corresponding energy release. Solar flares are mainly distinguished into two categories, the confined and the eruptive flares, which are usually triggered in the closed and open morphology of the overlying magnetic fields, respectively. The instabilities generated in the complex magnetic fields may be one of the most probable causes driving/triggering solar flares after the reconnection of unstable flux tubes with the neighbouring field configuration. The emergence of unstable and helical twisted structures can trigger flares followed by an eruption (\citealt{liu2008}, 2007; and references cited there). However, the activation of twisted helical magnetic structures may also play a crucial role in the flare energy build-up and its initiation with a failed eruption, depending upon the surrounding magnetic field environment (\citealt{kumar2010b}, \citealt{sri2010} and references cited there). Solar coronal loops may be considered as current ($ \, \stackrel{<}{_\sim} \, $10$^{12}$ A) carrying conductors.
Two current carrying conductors experience a net attractive force if both have resultant currents in the same direction or resultant magnetic fields in the opposite direction, depending upon their orientation with respect to each other. Collisions between current carrying loops are considered as a cause of some solar flares \citep{sakai1996}. Based on the loop orientations and the size of the interaction region, current carrying loop interactions are classified into three categories: (a) 1-D coalescence (I-type), (b) 2-D coalescence (Y-type), and (c) 3-D coalescence (X-type). The theoretical model of \citet{gold1960} was the first to explain flare triggering by interacting current carrying loops. However, it is not necessary that the field lines be anti-parallel for the interaction of two current carrying conductors. There may be other mechanisms, e.g., footpoint shear motion and rotation, which can also destabilize the loop-system to trigger the flare and eruption. Stronger shear gives a higher probability for the initiation of solar flares and related eruptions (e.g., \citealt{tan2009} and references cited there). Yohkoh has also observed some flaring events which show the three types of loop interaction (I, Y and X-type). Among the above mentioned interactions, the 3-D X-type reconnection due to coalescence is the most realistic scenario in active regions. The necessary condition for 3-D X-type interaction is that the length of the interaction region (L) should be comparable to the loop diameter (R) \citep{sakai1989}. \citet{hana1996} has found evidence of the emergence of a small loop near one of the footpoints of a pre-existing large coronal loop using observations from various instruments including Yohkoh. The interaction of this loop with the larger loop causes flares, micro\-flares and jets. \citet{liu1998} have also observed flare triggering by the I-type interaction of loop-systems.
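The attraction between two such loops can be estimated with the standard Amp\`ere force law for parallel currents, $F/L = \mu_0 I_1 I_2 / (2\pi d)$; the currents and separation used below are purely illustrative values of the order quoted above, not measurements from this event:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T m / A

def force_per_length(i1, i2, d):
    """Attractive force per unit length (N/m) between parallel currents i1, i2 (A)
    separated by distance d (m); positive when the currents are parallel."""
    return MU_0 * i1 * i2 / (2 * math.pi * d)

# Two 1e12 A loop currents separated by 10^4 km (illustrative numbers only)
f = force_per_length(1e12, 1e12, 1e7)
print(f"{f:.1e} N/m")  # -> 2.0e+10 N/m
```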
\citet{fale1999} have shown the flare energy release caused by two successive X-type interactions of an expanding loop with two high-lying and nearly parallel loop-systems. Furthermore, \citet{poh2003} has also studied a series of flares from AR 8996 on 18-20 May, 2000 and provided evidence of flare triggering due to loop-loop interaction with the observation of moving magnetic features around the sunspot region. Several authors have reported the loop-loop interaction as a cause of solar flares. However, further multiwavelength studies are needed to understand the flare triggering mechanism due to loop-loop interaction and its responses in the various layers of the solar atmosphere. Apart from loop-loop interaction, flare triggering followed by solar eruptions (e.g., coronal mass ejections) can also be caused by the interaction of filament systems due to sunspot rotation (e.g., \citealt{kumar2010a} and references cited there). We know that the interacting current loops are not located in vacuum or in an insulating medium, but lie in the highly-conducting plasma penetrated by frozen-in magnetic fields in the solar corona. From the beginning of the evolution of a current carrying loop-system, every change in it generates currents in the surrounding plasma and magnetic field. Therefore, we have to take into account an interaction not only between the loops but also with these new currents, in particular with the screening current layers between the loops. Moreover, the frozen-in magnetic fields of an active region or an activity complex are typically strong in the corona and have a specific topology determined by the photospheric sources. \citet{hen1987} were the first to show that these effects are essential and must be considered in terms of magnetic reconnection of field-aligned electric currents (see Section~\ref{sub:topology}).
On the other hand, even if there were no current loops related to a twist of magnetic flux tubes at all, three-dimensional reconnection between interacting magnetic fluxes gives such a distribution of reconnected magnetic fluxes in the corona that two soft X-ray loops look as if they are interacting with each other \citep{gor1989,gor1990}. That is why observations demonstrating such structures are usually considered as direct evidence for the hypothesis of two interacting currents. In this paper, we present a multiwavelength study of the M7.9/1N solar flare on 27 April, 2006 in AR NOAA 10875, which shows rare observational evidence of the coalescence and interaction of two current carrying loops. We report a most likely multiwavelength signature of X-type interaction and coalescence instability in the active region which triggers the solar flare. In Section~\ref{sub:observations}, we present multiwavelength observations of the event. We discuss our results and conclusions in the last section. \section{OBSERVATIONS AND DATA} \label{sub:observations} The active region NOAA 10875 was located at S10\,E20 on 27 April, 2006, showing a $\beta$$\gamma$/$\beta$$\gamma$$\delta$ magnetic configuration, and produced an M7.9/1N class solar flare. According to the GOES soft X-ray flux profile, the flare started at 15:45~UT, reached its maximum at 15:52~UT and ended at 15:58~UT. Figure \ref{fluxes} displays the flux profiles in the soft X-ray, soft X-ray derivative, hard X-ray and radio wavelengths. The flux derivative of the soft X-ray matches well the rise of the hard X-ray flux profile. This implies that the accelerated electrons that produce the hard X-rays also heat the plasma that produces the soft X-rays, obeying the Neupert effect \citep{neupert1968}.
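The Neupert relation just invoked — the soft X-ray flux tracking the time integral of the hard X-ray flux, so that the SXR derivative reproduces the HXR profile — can be illustrated with a toy model; the Gaussian burst below is synthetic, not the observed profile:

```python
import numpy as np

# Synthetic impulsive hard X-ray burst (Gaussian in time)
t = np.linspace(0, 600, 601)          # seconds
dt = t[1] - t[0]
hxr = np.exp(-((t - 120) / 30) ** 2)  # nonthermal emission, arbitrary units

# Neupert effect: soft X-ray flux ~ cumulative energy deposited by the electrons
sxr = np.cumsum(hxr) * dt

# The SXR time derivative then reproduces the HXR profile
assert np.allclose(np.diff(sxr) / dt, hxr[1:])
print("SXR derivative peaks at t =", t[1:][np.argmax(np.diff(sxr))], "s")  # -> 120.0 s
```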
More exactly, this means that the impulsive heating of the solar atmosphere by accelerated electrons can dominate its heating by thermal fluxes from the high-temperature source of flare energy (see Chapter 2 in Somov, 1992). So there is a causal connection between the thermal and nonthermal flare emissions. Further, the radio flux profile shows a sharp rise with a double peak structure, mostly at 4.9 and 8.8 GHz at 15:47 UT, which indicates gyro\-synchrotron emission generated by the electrons accelerated at the reconnection (i.e. loop-interaction) site. \subsection{GOES SXI AND TRACE Observations} \label{sub:GOES} We have used GOES-SXI observations of the event \citep{hill2005,pizzo2005}. It is a broadband imager in the 6--60~\AA \ bandpass that produces full-disk solar images with $\sim$1 minute cadence. The images consist of 512 pixel$\times$512 pixel with 5$^{\prime\prime}$ \ resolution. The FWHM of the telescope point-spread function is $\sim$10$^{\prime\prime}$. A set of selectable thin-film entrance filters allows plasma temperature discrimination, i.e., open, three polyimide (thin, medium, and thick), and three beryllium (thin, medium, and thick). The open and polyimide filters are sensitive to the plasma below 2 MK. It is especially suitable for continuous tracking of coronal loops. Figure \ref{sxi} displays selected images of GOES SXI before and during the flare activity. Two loop-systems are observed before the flare initiation: a lower loop-system (indicated by a red line) underlying a higher loop-system (blue). Initially, brightening starts in the lower loop during flare initiation at 15:43~UT. This loop becomes brighter as the flare progresses. The four foot\-points of both loop-systems become evident at 15:47~UT, mainly due to the precipitation of the accelerated electrons from the interaction or reconnection site.
The corresponding footpoints of both interacting loops are indicated by FP1 (L1) and FP2 (L1) for loop 1 and FP1 (L2) and FP2 (L2) for loop 2, respectively. As the plasma is heated by the dissipation of the kinetic energy of the accelerated electrons from the reconnection site, chromospheric evaporation takes place and fills the interacting loop-system in the corona, and these loops look as if they are crossing each other. The X-type configuration becomes evident at 15:49 UT. The flare maximum takes place at 15:52~UT. After the interaction between the loops, the orientation of the lower loop has changed into a more relaxed state. The SXI image taken during the decay phase of the flare (at 16:31~UT) evidently shows the orientation change of the lower loop-system. In this Figure, the loop shown by the red line is marked in the upper-left panel as rooted somewhere close to X$\approx$-445$^{\prime\prime}$, Y$\approx$-50$^{\prime\prime}$. However, in the middle-left panel the left footpoint of this loop (marked by FP1(L1)) has co-ordinates X$\approx$-440$^{\prime\prime}$, Y$\approx$-70$^{\prime\prime}$. Therefore, the shift in the footpoint during the dynamical flare event is $\Delta$X = 5$^{\prime\prime}$, $\Delta$Y = 20$^{\prime\prime}$. Presumably, this apparent displacement of the footpoint FP1(L1) may be due to two reasons: (a) a displacement directed away from the photospheric neutral line, and therefore related to the motion of the flare ribbons in opposite directions; such behavior is typical for two-ribbon flares; (b) a displacement directed parallel to the photospheric neutral line, which is related to the magnetic shear relaxation. These two processes can jointly cause an increasing or decreasing distance between the footpoints. Investigations in the frame of a more detailed model should be done to interpret this feature.
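For scale, the footpoint shift quoted above can be converted from arcseconds to kilometers with the usual small-angle scale of roughly 725 km per arcsecond near disk center as seen from 1 AU (an approximation that ignores projection effects):

```python
import math

KM_PER_ARCSEC = 725.0  # approximate plane-of-sky scale near Sun center from 1 AU

def shift_km(dx_arcsec, dy_arcsec):
    """Magnitude of a plane-of-sky displacement given in arcseconds."""
    return math.hypot(dx_arcsec, dy_arcsec) * KM_PER_ARCSEC

# Footpoint FP1(L1): dX = 5", dY = 20"
print(round(shift_km(5, 20)))  # -> 14946 (km, i.e. ~15000 km)
```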
It is necessary to compare the kernel displacements observed during the flare with the motions and evolution of magnetic fields in the photosphere before the flare (see Somov et al., 2002). TRACE (Transition Region and Coronal Explorer) provides the opportunity to observe the Sun from the chromosphere to the corona \citep{handy1999}. We have used TRACE 195~\AA \ (Fe XII, T$\sim$1.5 MK) and 1600~\AA \ (T$\sim$4000-10000 K) data. The field of view for each image is 1024$\times$1024 pixels with 0.5$^{\prime\prime}$ \ pixel$^{-1}$ resolution. The typical cadence for TRACE images is $\sim$20-60 sec. Figure \ref{tr_195} displays selected TRACE 195~\AA \ images during the flare activity. TRACE data have been calibrated and analyzed using standard routines in the solarsoft library \footnote [2] {http://hesperia.gsfc.nasa.gov/ssw/trace/}. During the flare initiation, brightening was observed along both sides of the photospheric neutral line. Two bright sheared structures are observed at 15:46 UT. The image at 15:48 UT shows the loop-loop interaction and the formation of an `X' point in between the interacting loop-system. Many interacting small flux threads/tubes may be seen in this image. After the X-type interaction during the impulsive phase of the flare, it seems that the loop threads are changing their footpoint connectivities. This is the signature of an ongoing reconnection process in the same global configuration of the active region. During 15:42--15:46 UT, the two interacting loops are visible in the soft X-ray GOES/SXI images; however, they are not visible in the TRACE images of the same duration. The GOES/SXI images represent the high temperature and high coronal part of the loop-systems, while the TRACE images show the lower part of the loop-systems joining the two brightened ribbons.
In the pre-flare state, the GOES/SXI images show the loop segments visible due to the emission of soft X-rays during the loop-loop interaction, while at the same time the plasma in the EUV temperature band has not yet filled the lower segments of the two loops to make them as visible as in the GOES/SXI images. However, near the flare maximum and even after the flare, the interacting loop-systems are clearly evident both in X-rays and in EUV, implying the presence of plasma at various temperatures. Since we see different segments of the interacting loop-systems in the GOES/SXI and TRACE images, they appear with different orientations, as the apex part may be more tilted compared to the lower segments. We can identify the four foot\-points of the two associated interacting loop-systems. The thickness of the interaction region (indicated by arrows) decreases during the impulsive phase of the flare, and it seems that the orientation of the loops has changed by the flare maximum (refer to the image at 15:50~UT and onwards). During the sharp impulsive phase, the foot\-points of the loop-systems do not show significant changes (see TRACE movie). This means that the reconnection point, i.e., the loop interaction site, is mostly fixed. The loop-system morphology becomes simple and relaxed during the decay phase of the flare as observed in the SXI images (see SXI image at 16:31:01 UT). The thickness of the interaction region is plotted against the GOES soft X-ray flux profile (refer to Figure \ref{thick_xray}). This plot reveals that the X-ray flux rises as the thickness of the interaction region decreases. This may be the most likely signature of ongoing reconnection at the loop interaction site. From the linear fit, the typical converging speed is estimated as $\sim$30 km s$^{-1}$. This speed may be related to the typical inflow speed as observed in other flares \citep{tsun1997,yok2001}.
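The converging speed quoted above comes from a linear fit to thickness-versus-time measurements; a sketch of that estimate on synthetic data (the numbers below are invented, chosen only to mimic a $\sim$30 km s$^{-1}$ convergence with measurement noise):

```python
import numpy as np

# Synthetic thickness of the interaction region (km), sampled every 20 s,
# shrinking at 30 km/s with some measurement noise
rng = np.random.default_rng(0)
t = np.arange(0, 400, 20.0)  # seconds from an arbitrary start time
thickness = 20000.0 - 30.0 * t + rng.normal(0, 100, t.size)

# Converging speed = minus the slope of a linear least-squares fit
slope, intercept = np.polyfit(t, thickness, 1)
speed = -slope
print(f"converging speed ~ {speed:.0f} km/s")
```

The fitted slope recovers the input convergence rate to within the noise level.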
We have overplotted MDI contours over the TRACE 195 \AA \ image and vice versa (refer to Figure \ref{tr_mdi}). The left foot\-points [FP1(L1) and FP2(L2)] of the associated loop-systems are anchored in positive polarity field regions, whereas the right foot\-points [FP1(L2) and FP2(L1)] are anchored in negative polarity regions. For investigating the overlying magnetic field environment of this active region, we have used the potential field source surface (PFSS) extrapolation \citep{alt1969,sch1969} before the flare event at 00:05 UT (see left panel of Figure \ref{extr_ha}). The right panel of Figure \ref{extr_ha} displays the H$\alpha$ image observed at Meudon, which shows the flare ribbons during the decay phase (at 16:16~UT) of the flare. It shows mainly four bright kernels, which are the regions where most of the energy flux is concentrated, i.e. the sites of particle precipitation. These are the footpoints of the corresponding reconnecting loop-systems. These observations are in favour of the loop-loop interaction mechanism. For comparison, the locations of the flare ribbon polarities are denoted by the corresponding `+' (red) and `-' (blue) signs in the SOHO/MDI image of the active region (AR10875) with its coronal field extrapolation. The coronal magnetic field topology is on average also in agreement with the TRACE and SXI observations. Figure \ref{tr_rib} displays the TRACE 1600~\AA \ images during the flare event. Two ribbons, located on both sides of the neutral line, are observed at 15:44 UT. The left ribbon shows a sheared `S'-shaped structure, whereas the right ribbon shows a simple structure. \subsection{Radio and RHESSI Observations} \label{sub:radio} We have used Ondrejov dynamic radio spectrum data (2--4.5 GHz) during the flare \citep{jir1993,jir2008}. This radiospectrograph uses a 3-m dish and a wide band horn antenna as primary feed.
The time resolution is 10 ms and the frequency band is divided into 256 channels, which means a frequency resolution of about 10 MHz. Figure \ref{radio} (upper panel) displays the Ondrejov dynamic radio spectrum on 27 April, 2006 showing the intense DCIM radio burst during flare initiation. Moreover, there was no Type III burst during this time period (checked with the Wind/WAVES spectrum). This means that the opening of field lines did not take place during the flare energy release (i.e. during reconnection). The DCIM burst starts at $\sim$2.5--3 GHz and continues up to 4.5 GHz. This frequency range covers the typical range of heights corresponding to the reconnection site. The burst starts at 15:46 UT and continues up to 15:49 UT, for a duration of $\sim$3 minutes. The observed DCIM bursts reveal the signature of particle acceleration from the reconnection site during loop-loop interaction/coalescence. The US Air Force operates four solar radio observatories at various locations around the world. These are collectively known as the Radio Solar Telescope Network or RSTN. Each observatory monitors solar radio emissions on 8 discrete fixed frequencies (245, 410, 610, 1415, 2695, 4995, 8800 and 15400~MHz) as well as low frequency spectral emissions in the VHF band. We have used the radio flux data (1 sec cadence) from Saga\-more Hill. We have selected four radio frequency bands, 2695, 4995, 8800 and 15400~MHz, which show significant variations in the flux profiles. The radio burst is observed during $\sim$15:46--15:49 UT (Figure \ref{radio}, lower panels). The radio flux profiles at 4995 and 8800 MHz show double peak structures associated with the coalescence of the loop-systems. It may be noted that the second double peak structure is stronger than the first one, which shows that the last double peak was generated by superthermal electrons accelerated from a larger population of pre-accelerated electrons \citep{karl2003}.
After this burst, we observe quasi-periodic oscillations, especially at 4995, 8800 and 15400~MHz, during $\sim$15:48--15:51 UT for a duration of $\sim$3 minutes, which may be attributed to modulations by MHD oscillations or nonlinear relaxational oscillations of wave-particle interactions. Therefore, MHD waves can modulate the emissions from the trapped electrons \citep{asc2004}. The absence of a Type III radio burst suggests the absence of field line opening during the reconnection process. Further, we do not see a plasmoid ejection from the reconnection site in the soft X-ray images. Therefore, the DCIM radio burst cannot be interpreted as a plasmoid ejected from the reconnection site. It should be noted that the burst starting frequency is $\sim$ 2.5-3 GHz, which corresponds to the typical height of post-flare loops and originates in magnetic reconnection regions (i.e. plasma density of $\sim$ 10$^{10}$-10$^{11}$ cm$^{-3}$) \citep{asc2004}. As this burst can be seen up to 4.5 GHz in the radio spectrum, and further in the single-frequency radio flux profiles (i.e. in 2.6, 4.9, 8.8 and 15 GHz), we interpret these emissions as due to nonthermal electrons accelerated from the reconnection site along the soft X-ray loop-systems. This may be confirmed by the soft X-ray image at 15:47:02 UT, which shows the four footpoints due to precipitated electrons during the time of the radio burst. The evolution of hard X-ray sources in two selected energy bands (12-25 and 25-50~keV) of the RHESSI instrument is shown in Figures \ref{hessi1} and \ref{hessi2}. These images have been reconstructed using the PIXON method. In both energy bands, the two separated loop-top sources are visible at 15:49 and 15:50~UT; their coalescence then results in a single source (at 15:54 and 15:56~UT). These images also provide evidence of the coalescence of the two loops.
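The density range quoted for the burst heights follows from the standard plasma-frequency relation $f_{pe} \approx 8980\sqrt{n_e}$ Hz (with $n_e$ in cm$^{-3}$), assuming plasma emission at the fundamental; for second-harmonic emission ($2f_{pe}$) the inferred densities are four times lower:

```python
def density_from_freq(f_hz, harmonic=1):
    """Electron density (cm^-3) for plasma emission observed at f_hz,
    using f_pe ~ 8980*sqrt(n_e) Hz; harmonic=2 for second-harmonic emission."""
    return (f_hz / (harmonic * 8980.0)) ** 2

# DCIM burst starting range 2.5--3 GHz, fundamental emission
for f_ghz in (2.5, 3.0):
    print(f"{f_ghz} GHz -> n_e ~ {density_from_freq(f_ghz * 1e9):.1e} cm^-3")
```

For 2.5-3 GHz this gives $n_e \sim 0.8$-$1.1\times10^{11}$ cm$^{-3}$ at the fundamental (a few $\times10^{10}$ at the harmonic), consistent with the $10^{10}$-$10^{11}$ cm$^{-3}$ range cited above.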
\subsection{Evolution of Active Region} Figure \ref{tr_wl} displays selected TRACE white-light images of the active region on 27 April, 2006. FP1 (red) and FP2 (blue) in the first image show the `+ve' and `--ve' footpoints (indicated by arrows) of the lower loop-system, respectively. Careful investigation of the TRACE movie reveals the linear/shear motion of a small sunspot of negative polarity (indicated by blue contours) across the neutral line. We have made a time-distance plot to quantify the linear translational motion of the sunspot. From the linear fit to the data points, the speed of this motion is estimated as $\sim$0.2 km s$^{-1}$ (662 km~h$^{-1}$) (see Figure \ref{shear}). To identify the foot\-point of the related loop-system anchored in this spot, we overlaid MDI and TRACE 195 \AA \ contours on the white-light image (refer to Figure \ref{tr_wl_flow}, left). This image reveals that one foot\-point of the loop-system is anchored in this spot. In order to view the photospheric horizontal flow pattern in and around the active region, we use the Fourier Local Correlation Tracking (FLCT) technique on SOHO/MDI images. The FLCT method is described by \citet{fisher2008}. The main input parameters for this technique are two images f1 and f2, the pixel separation scale ($\Delta$s), the time separation ($\Delta$t), and a Gaussian window size scale ($\sigma$). This routine calculates the 2D velocity by maximizing the cross-correlation of the two images when weighted by a Gaussian window centered on each pixel location. In our study, we use two SOHO/MDI frames at different times before the flare. After a careful investigation, a Gaussian window with a standard deviation of 15$^{\prime\prime}$ was chosen. The right panel of Figure \ref{tr_wl_flow} displays the photospheric velocity map obtained with the FLCT technique using SOHO/MDI magnetograms. The longest arrow corresponds to a velocity of 0.291 km s$^{-1}$.
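The core of such correlation tracking — finding the displacement that maximizes the cross-correlation between two frames — can be sketched with an FFT-based correlation on synthetic images (integer-pixel shifts only; the actual FLCT code adds the per-pixel Gaussian windowing and subpixel accuracy described above):

```python
import numpy as np

def cc_shift(f1, f2):
    """Integer (dy, dx) shift that best maps f1 onto f2, via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(f2) * np.conj(np.fft.fft2(f1))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular shifts into the range [-N/2, N/2)
    ny, nx = f1.shape
    return (int((dy + ny // 2) % ny - ny // 2),
            int((dx + nx // 2) % nx - nx // 2))

# Synthetic granulation-like frame and a copy shifted by (3, -2) pixels
rng = np.random.default_rng(1)
f1 = rng.random((64, 64))
f2 = np.roll(f1, (3, -2), axis=(0, 1))
print(cc_shift(f1, f2))  # -> (3, -2)
```

Dividing the recovered pixel shift by the frame separation $\Delta$t, and scaling by the pixel size, gives the local velocity, as in the FLCT maps.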
It may be noted from the flow map that the small negative polarity spot shows a clockwise shear flow motion, whereas the positive polarity region (in which the other footpoint of the lower loop-system was anchored) shows a counter-clockwise flow motion. The linear translational motion evident in the TRACE white-light images, together with the velocity shear flows evident in the FLCT maps near the spots, most likely indicates the build-up of shear at these locations. This physical mechanism most likely plays a role in the energy build-up for the flare and generates the coalescence instability in the lower loop-system. \subsection{Magnetic Topology of the Interacting Loop-Systems} \label{sub:topology} In this Section, we discuss the large-scale structure of the magnetic field responsible for the solar flare. The soft X-ray image of the flare clearly reveals the two large solar loops (L1 and L2) crossing each other and exhibiting X-type interaction. The chromospheric images (H$\alpha$ and TRACE 1600~\AA) show the two-ribbon morphology with four kernels, i.e. the four foot\-points of the reconnected loops. We illustrate these features of the interacting loop-systems in terms of {\em topological\/} models (see ch.~3 in \citet{somov2007}). Figure \ref{topology} displays the field lines that connect the H$ \alpha $ kernels: FP1 (L1) with FP2 (L1), and FP1 (L2) with FP2 (L2). The shadowed regions FR1 and FR2 indicate the flare ribbons. They are located on both sides of the photospheric neutral line NL. Chromospheric evaporation along the reconnected field lines creates the SXR loops that look as if they are crossing or touching each other somewhere near the top of a magnetic-field separator X. The loop and ribbon morphology shown in the observations qualitatively matches this cartoon.
It is very likely that, in addition to what is shown in Figure \ref{topology}, electric currents and twisted magnetic fields can be created inside the interacting loops by some sub-photospheric or photospheric mechanism observed in the photosphere as shear motions or rotations. Such currents certainly must exist in complex active regions with sunspot rotation and large-scale photospheric shear flows. If the currents are mostly parallel, they attract each other and can give energy to a flare \citep{gold1960}. On the other hand, according to the simplified topological model presented in Figure \ref{topology}, the flare energy comes from an interaction of magnetic fluxes that can be mostly potential. If this were the case, the flare energy should be stored before a flare mainly in a slowly-reconnecting current layer at the separator of the coronal magnetic field. This possibility seems to be in agreement with the quad\-ru\-pole reconnection model of solar flares. The morphology of the loops is also in agreement with the PFSS extrapolation of photospheric magnetic fields into the corona. Therefore, we consider both models, first from the viewpoint of the global magnetic configuration of a quad\-ru\-pole-type active region, taking into account the interacting electric currents. Figure~\ref{currents} illustrates the {possible configuration of two large-scale} coronal currents~$ J_{1} $ and $ J_{2} $ distributed inside two different magnetic cells, i.e. the two magnetic fluxes of different linkage that interact and reconnect at the separator~$ X $. The two field lines~$ B_{1} $ and $ B_{2} $ belong to the magnetic cells that connect the kernel FP2 (L2) with FP1 (L2) and the kernel FP2 (L1) with FP1 (L1), respectively. The coronal currents are distributed somehow inside the two different magnetic cells; the total currents~$ J_{1} $ and $ J_{2} $ are shown schematically along the field lines~$ B_{1} $ and $ B_{2} $.
If the field lines~$ B_{1} $ and $ B_{2} $ near the current layer along the separator have oppositely directed components, then they can reconnect. If the two current systems~$ J_{1} $ and $ J_{2} $ flow more or less in the same direction, then they also attract each other according to \citet{gold1960}. The components of the magnetic field transversal to the separator reconnect together with the electric currents flowing along them \citep{hen1987,somov1992}. In this way, with a perpendicular magnetic field inside the place of interruption, magnetic reconnection can create local interruptions of the electric currents in the solar atmosphere. If these currents are highly concentrated, their interruption can give rise to strong electric fields that accelerate energetic particles and can contribute significantly to the flare energetics. What factors determine the rate of magnetic reconnection in the current layer at the separator? Let us consider the magnetic fields created by the currents~$ J_{1} $ and $ J_{2} $. These additional or secondary fields play the role of the longitudinal magnetic field near the reconnecting current layer. Superimposed on the large-scale potential field, they create two types of field line spirals, i.e., left-handed and right-handed. When looking along the positive direction of the field lines~$ B_{1} $ and $ B_{2} $, we see two opposite orientations of the spirals, namely to the right for the {\em dextral\/} structure and to the left for the {\em sinistral\/} one. Depending on this handedness property, known as {\em chirality\/}, which also depends on the angle between the currents~$ J_{1} $ and $ J_{2} $, magnetic reconnection of electric currents will proceed faster or slower \citep{hen1987}. As evident in the observations as well as in the theoretical picture, X-type reconnection may produce plasma jets. However, we have no observational signature of such jets in our observations.
In the flare under consideration, the reconnected fast outflows from a current layer relax quickly because they interact with (i) closed field lines of the quadru\-pole-type active region (recall that there was no type III radio\-burst, so the opening of field lines did not take place during the flare energy release, i.e. reconnection); (ii) chromospheric evaporation upflows (the energy released in the closed magnetic configuration goes into impulsive heating of the upper chromosphere to high temperatures, which is why the soft X-ray images brighten so quickly). \section{SOME THEORETICAL ESTIMATIONS} The RHESSI temporal images (12-25 and 25-50 keV) reveal the coalescence of the loop-top sources of the interacting loop system. The two loop-top sources merge approximately vertically in the RHESSI field of view. Therefore, the lower bound on the change of the distance between the two approaching loops is \begin{equation} \Delta l_{coal} \approx 22000\, \, \, {\rm km} \; \end{equation} and the elapsed time is \begin{equation} \Delta\tau_{coal} \approx 420 \, \, {\rm s} \; \end{equation} The coalescence instability, which merges two isolated magnetic islands into a single one \citep{har2001b,har2001a,asc2004}, may be activated in the observed interacting loop system. This type of instability evolves in two phases: the first is the pairing of the current filaments/loops as an ideal MHD process, while the second is the resistive phase of pairwise reconnection between the approaching current-carrying flux tubes. Numerical MHD simulations reveal the different phases of the coalescence instability in ideal/resistive solar plasma \citep{sch1997}. The characteristic time scale of the ideal phase of the coalescence instability is a multiple of the Alfv\'enic transit time \citep{asc2004}: \begin{equation} \tau_{coal}=\frac{1}{q_{coal}} \cdot
\frac{l_{coal}}{v_A}, \, \, \end{equation} where \begin{equation} q_{coal}=\frac{u_{coal}}{v_A}. \end{equation} Here $l_{coal}$, $u_{coal}$ and $v_A$ are, respectively, the distance between the approaching loops, the approaching velocity and the local Alfv\'enic speed. Using equations (3) and (4), the differential coalescence speed is \begin{equation} \Delta u_{coal}=\frac {\Delta l_{coal}}{\Delta \tau_{coal}}. \end{equation} Therefore, using the observationally estimated values given in equations (1) and (2), we obtain an average coalescence speed of $\sim$52 km~s$^{-1}$. TRACE 195~\AA \ images also show the interacting and paired loops. Using these images, the projected distance-time profile of the interaction region (i.e. the converging motion at the interaction site) is presented in Figure 4. The average converging speed of the interaction region is estimated as $\sim$30 km s$^{-1}$. The approximate approaching velocity of one magnetic island of a loop is $\sim$26 km s$^{-1}$. The similarity of these two speeds is consistent with loop coalescence. Assuming a typical Alfv\'enic speed at the interaction region of $\sim$1000 km s$^{-1}$ and the projected distance between the approaching loops ($\Delta l_{coal} \approx 22000$ km), the estimated Alfv\'enic transit time of the region is $\sim$22 s. Therefore, the coalescence occurs over $\sim$20~$\tau_A$ in our observation, which is rather longer than predicted by various simulation results, as explained by \citet{sakai1996} as well as \citet{tajima1982} under various assumptions about the model atmosphere. However, for $L \sim 62800$ km, $\tau_A = 16$ s, Reynolds number $S = R = 500$, $n_e = 10^{10}$ cm$^{-3}$ and $B_Z = 90$ G, \citet{milano1999} found that two loops coalesce at $t = 11\tau_A$, with the magnetic energy and even its dissipation enhanced.
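The numerical estimates above reduce to a few lines of arithmetic. A minimal sketch (the separation, elapsed time and Alfv\'en speed are the observational and assumed values quoted in the text):

```python
# Back-of-envelope check of the coalescence estimates in equations (1)-(5).
dl_coal = 22000.0    # km, lower bound on the change in loop separation, eq. (1)
dtau_coal = 420.0    # s, elapsed time of the coalescence, eq. (2)

# Equation (5): average differential coalescence speed
du_coal = dl_coal / dtau_coal
print(round(du_coal))             # -> 52 km/s, as quoted in the text

# Alfvenic transit time for the assumed typical coronal Alfven speed
v_A = 1000.0                      # km/s (assumed value, as in the text)
tau_A = dl_coal / v_A
print(tau_A)                      # -> 22.0 s

# Number of Alfven transit times taken by the coalescence
print(round(dtau_coal / tau_A))   # -> 19, i.e. roughly 20 tau_A
```

The last figure is what is compared against the $t = 11\tau_A$ coalescence time found in the simulations of \citet{milano1999}.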
The loop coalescence time depends on various atmospheric parameters; therefore, further simulations will be of interest for studying the dynamics and energetics of the observed coalescing loops. We can estimate the amount of energy ($ {\cal E}_c $) available due to the coalescence instability \citep{tajima1982,smartt1993} by: \begin{equation} {\cal E}_c \approx \frac{LB^2a^2}{2} \, \ln \frac{L}{a} \end{equation} where $ L , B $ and $ a $ are the length of the reconnecting region, the loop magnetic field and the radius of the current loop, respectively. We take $ B \approx 100 $~G, $ L \approx 22000 $~km and $ a \approx 11000 $~km, which gives \begin{equation} {\cal E}_c \approx 1.0\times10^{31} \, \, {\rm ergs} \, . \end{equation} This value is comparable with the energy released during an M-class flare. In general, the total magnetic energy of the currents generated by photospheric vortex flows, sunspot rotation or shear flows in the photosphere can exceed the energy of even the largest flares. However, in contrast to the thin current layer at the separator, these currents are typically dispersed over a large volume of magnetic flux tubes in the corona. The dissipation rate of currents so distributed in the coronal plasma of very high conductivity is vanishingly small. However, their interaction with each other and with the current layers at the separator is not small and must be treated within the framework of the global electro\-dynamical coupling of a flaring active region or complex. As we saw in Section~\ref{sub:topology}, a distinctive feature of this interaction is that the separator is orthogonal (in the sense of the magnetic field topology) to both systems of electric currents~$ J_{1} $ and $ J_{2} $. For this reason, not only the magnetic field components associated with the current layer, but also the longitudinal (guiding) components with respect to the separator are reconnected.
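The energy estimate of equation (6) with the quoted parameters can be verified directly; a minimal sketch in CGS units, using the formula exactly as written in equation (6):

```python
import math

# Equation (6): E_c ~ (L * B^2 * a^2 / 2) * ln(L/a), evaluated in CGS units
B = 100.0        # G,  loop magnetic field
L = 22000e5      # cm  (22000 km), length of the reconnecting region
a = 11000e5      # cm  (11000 km), radius of the current loop

E_c = 0.5 * L * B**2 * a**2 * math.log(L / a)
print(f"{E_c:.1e} erg")   # -> 9.2e+30 erg, i.e. ~1e31 erg as in equation (7)
```

The result rounds to the $\sim 1.0\times10^{31}$ ergs quoted in equation (7), the energy scale of an M-class flare.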
Therefore, not only the energy associated with the current layer at the separator, but also a part of the energy of the currents generated by photospheric vortex flows, sunspot rotation and shear flows is released in flares \citep{hen1987}, see also \citet{somov2002}. All of the above has concerned the large-scale structure of magnetic fields and electric currents in large solar flares, whose main features can be qualitatively described by simplified topological models. However, in actual flares there are many different structures on different scales, including the smallest ones. In the flare under consideration, we see many interacting small flux threads/tubes (e.g., Figure~\ref{tr_195}). Moreover, the image at 15:48 UT in this figure shows the loop-loop interaction and the formation of an `X' point between the interacting loop systems. So, it is likely that the observed flare was caused by interactions not of two loops but of a multitude of loops, forming more-or-less parallel systems that are visible in low-resolution images as single wide loops. From a theoretical point of view, this presumably means that the distributed currents $ J_{1} $ and $ J_{2} $ are deeply pinched into many thin current filaments. Therefore, we observe some average picture of reconnection with some average reconnection rate. \section{DISCUSSION AND CONCLUSIONS} We present rare observational evidence of an X-type loop-loop interaction associated with an M7.9/1N flare. The coronal images observed by GOES SXI and TRACE 195~\AA \, clearly show the interacting loop system. TRACE white-light images reveal the shear motion of sunspots (negative polarity) across the neutral line. This shear motion probably destabilized the associated loop system and caused the loop interaction, followed by the flare. On the basis of multi\-wavelength observations, we draw a schematic cartoon to explain the whole event scenario (see Figure \ref{cartoon}).
Before the flare there were two loop systems visible in SXI images: one higher loop in the N-S direction and a smaller loop system in the E-W direction lying below it. Due to the shear motion of the right foot\-point (anchored in negative polarity) of the smaller loop system, the loop becomes unstable, rises, and reconnects with the overlying higher loop system, resulting in an X-type interaction in association with the flare event. After the flare event, the connectivity of the smaller loop system changed into a relaxed state. The regular variation of the 4.9 and 8.8 GHz radio flux and the accompanying flare effect observed on 27 April 2006 are interpreted using the X-type loop-interaction model. We find oscillatory behavior with a double-peak structure. The double peak in the radio flux supports the loop-interaction model \citep{sakai1986}. According to the theoretical model, the double-peak structure is more pronounced when the currents in the two loops are sufficient for explosive coalescence. Each peak corresponds to the electric-field variation at the reconnection site. This electric field accelerates the electrons which generate the radio emission. The cause of the quasi\-periodic oscillation is as follows: after explosive reconnection of the poloidal magnetic fields at the `X' point between the approaching current loops, the two plasma blobs pass through each other and overshoot (an approach that fails and gives way to another attempt), resulting in a repetition of the process. \citet{kliem2000} also proposed a model in which the pulsations of the radio flux are caused by quasi-periodic particle acceleration episodes that result from a dynamic phase of magnetic reconnection in a large-scale current sheet. The reconnection is dominated by the repeated formation and subsequent coalescence of magnetic islands, while a continuously growing plasmoid is fed by newly coalescing islands.
In our case, the coalescence speed of 52 km~s$^{-1}$ is much smaller than the Alfv\'en velocity of $\sim$1000 km~s$^{-1}$. In the pre\-flare stage, a multiple-current-filament structure might be generated by the photospheric shear motion across the neutral line. The photospheric shear motion can give rise to plasma currents along the potential magnetic field produced by the sunspots near the active region. As the shear motion proceeds, the current density may increase and the current loop may move up, associated with a relaxation of magnetic tension \citep{sakai1986}. The absence of a type III burst during the flare energy release confirms the connectivity change and the absence of field-line opening. In addition, the coalescence of hard X-ray sources also confirms the loop-loop interaction. \citet{sakai1986} presented the physical characteristics of the explosive coalescence of current loops through computer simulation and theory, and identified the canonical characteristics of explosive coalescence as: (i) an impulsive increase of the kinetic energy of electrons and ions; (ii) simultaneous heating and acceleration of particles in the high- and low-energy spectra (i.e. the Neupert effect); (iii) quasi-periodic amplitude oscillations in field and particle quantities; (iv) a double-peak (or triple-peak) structure in these profiles. Our observations clearly match all of the above-mentioned characteristics of explosive coalescence and provide unique evidence of an X-type loop-loop interaction, in agreement with theories and simulations. The interaction of large-scale current-carrying loops should be considered as a part of the global electrodynamic coupling in flare-productive active regions and active complexes, as discussed in Section~\ref{sub:topology}.
On the one hand, the potential magnetic field in the corona determines the large-scale structure of active regions, while the reconnecting current layers at separators in the corona, together with other non-potential components (see Section 14.5 in Somov, 2007) of the magnetic field, determine the energetics and dynamics of large flares. On the other hand, two large-scale current-carrying loops emerging from under the photosphere have sufficient energy to power a large flare as well, through their interaction and the coalescence instability, as considered in this paper. Moreover, these two currents could be incorporated in the large-scale structure with a reconnecting current layer. The principal question is the relative role of two distinct sources of free magnetic energy: the interaction of magnetic fluxes, and the interaction of electric currents as demonstrated in this paper. Clearly, the answer depends on the relation between: (a) the photospheric flows which create the pre\-flare current layers at the separators, (b) the photospheric shear flows which induce the current layers extending along the separatrices \citep{somov2002}, and (c) the other photospheric flows, like sunspot rotations, which twist the magnetic flux tubes. In any case, the separator is a special place where a fast conversion of free magnetic energy into bulk plasma motions, heat flows and the energy of accelerated particles can take place. In conclusion, we find a rare multiwavelength observational signature of the loop-loop interaction and the triggering of the M-class flare, which is consistent with earlier developed theories and simulations. However, further detailed multiwavelength studies of such events should be carried out statistically to shed more light on the dynamics and energetics of flare and eruptive phenomena related to loop-loop interactions. \acknowledgments We express our gratitude to the referee for his/her valuable suggestions, which improved the manuscript considerably.
We acknowledge the space missions GOES, SOHO/MDI, TRACE and RHESSI for providing the data used in this study. SOHO is a project of international cooperation between ESA and NASA. We are thankful for the radio data obtained from the RSTN network (Sagamore Hill) and the radiospectrograph at Ondrejov, Czech Republic. We are also thankful for the Meudon H$\alpha$ data used in this study. The Global High Resolution H$\alpha$ Network is operated by the Space Weather Research Lab, New Jersey Institute of Technology. AKS thanks SP2RC, Department of Applied Mathematics, The University of Sheffield for the support of a collaborative visit, during which part of the present research work was carried out. AKS also acknowledges the joint DST-RFBR (INT/RFBR/P-38) project grant for the financial support of this work. BVS thanks the Russian Foundation for Fundamental Research (grant no. 08-02-01033). RE acknowledges M. K\'eray for patient encouragement and is also grateful to NSF, Hungary (OTKA, Ref. No. K67746) for financial support received. We also thank Dr. Marc DeRosa for his valuable suggestions and discussions regarding the use of the PFSS technique. \bibliographystyle{apj}
\section{Introduction} In this article we analyze a continuous-time optimal stopping problem with a series of inequality-type and equality-type expectation constraints in a general non-Markovian framework. Let $(\cQ, \cF, \fp)$ be a generic probability space equipped with a Brownian motion $B$. Given a historical path $\bx|_{[0,t]}$, the state process $\big\{X^{t,\bx}_s\big\}_{s \in [t,\infty)}$ evolves along the following dynamics: \bea \label{SDE0} X_s \= \bx(t) + \int_t^s b (r, X_{r \land \cd}) dr \+ \int_t^s \si (r, X_{r \land \cd}) d B_r , \q \fa s \in [t,\infty) , \eea where the drift coefficient $b$ and the diffusion coefficient $\si$ depend on the past trajectory of the solution. The player decides a relative exercise time $\tau $ after $t$ to exit the game, at which she receives a running reward $\int_t^{t+\tau} \n (f\+g) \big(r, X^{t,\bx}_{r \land \cd} \big) \, dr $ plus a terminal reward $ \pi \big( t\+\tau , X^{t,\bx}_{(t+\tau) \land \cd} \big) $, with a running cost $\int_t^{t+\tau} g \big( r, X^{t,\bx}_{r \land \cd} \big) dr $. Subject to a series of expectation constraints \beas E_\fp \Big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \Big] \ls y_i, \q E_\fp \Big[ \int_t^{t+\tau} h_i ( r,X^{t,\bx}_{r \land \cd} ) dr \Big] \= z_i, \q \fa i \ins \hN, ~ \fa (y_i,z_i) \ins (-\infty,\infty] \ti [-\infty,\infty] , \eeas the player wants to maximize her expected profit by choosing an appropriate $\tau$. Set $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big)$.
The value of our optimal stopping problem with expectation constraints is \beas V (t,\bx,y,z) \df \Sup{\tau \in \cS_{t,\bx}(y,z) } E_\fp \bigg[ \int_t^{t+\tau} f \big( r, X^{t,\bx}_{r \land \cd} \big) dr \+ \pi \big( t\+\tau , X^{t,\bx}_{(t+\tau) \land \cd} \big) \bigg] , \eeas as long as $ \cS_{t,\bx}(y,z) \df \big\{ \tau \ins \cS^t \n : E_\fp \big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \ls y_i, \, E_\fp \big[ \int_t^{t+\tau} h_i ( r, X^{t,\bx}_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\} $ is not empty. Such constrained optimal stopping problems have applications in various economic, engineering and financial areas, such as the travel problem with a fuel constraint, the pricing of American options, the quickest detection problem, etc. Since a large number of traded assets and contingent claims are not continuous in time and state, we aim to study the measurability of the value function $V(t,\bx,y,z)$ and establish a form of dynamic programming principle for $V$ without imposing any continuity condition on the reward/cost functions $f,\pi,g_i,h_i $. Inspired by the idea of \cite{EHJ_1987}, we transfer the optimal stopping problem with expectation constraints to an enlarged canonical space $ \oO \df \O_0 \ti \OmX \ti [0,\infty] $ using the map $ \cQ \ni \o \mto (B_\cd(\o),X_\cd(\o),\tau(\o)) \ins \oO$ and regarding the joint laws of $(B_\cd,X_\cd,\tau)$ as a form of new controls on $\oO$. In this weak formulation, the optimal stopping problem with expectation constraints $ E_\oP \Big[ \int_t^\oT g_i ( r,\oX_{r \land \cd} ) dr \Big] \ls y_i$, $ E_\oP \Big[ \int_t^\oT h_i ( r,\oX_{r \land \cd} ) dr \Big] \= z_i$, $ \fa i \ins \hN $ has the value \beas \oV (t,\bx,y,z) \df \underset{\oP \in \ocP_{t,\bx}(y,z)}{\sup} E_\oP \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \pi \big(\oT, \oX_{\oT \land \cd} \big) \bigg] .
\eeas Here $ \ocP_{t,\bx}(y,z) \df \big\{\oP \ins \ocP_{t,\bx} \n : E_\oP \big[ \int_t^\oT g_i ( r,\oX_{r \land \cd} ) dr \big] \ls y_i, \, E_\oP \big[ \int_t^\oT h_i ( r,\oX_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\} $, and $\ocP_{t,\bx}$ denotes a class of probabilities $\oP$ on $\oO$ under which the first canonical coordinate $\oW $ is a Brownian motion, the second coordinate $\oX$ satisfies an SDE similar to \eqref{SDE0} driven by $\oW$, and the third coordinate $\oT$ terminates the game. One of our achievements is to show that the value $V$ of the optimal stopping problem with expectation constraints on $(\cQ, \cF, \fp)$ is equal to its value $\oV$ in the weak formulation if $\cF$ supports an $\eta \sim ${\it unif}\;$(0,1)$ independent of $B$. Via the induced probability $ \oP \df \fp \circ (B_\cd,X_\cd,\tau)^{-1} $ it is relatively easy to obtain $V (t,\bx,y,z) \ls \oV (t,\bx,y,z) $, while the reverse inequality is more technically involved: Given a $\oP \ins \ocP_{t,\bx}(y,z)$, we consider the process $ \vth_s \big( \oW^t_\cd \big) \df E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \Big] $, $s \ins [0,\infty)$, where $\oW^t $ denotes the increments of $\oW $ after $t$ and $\{\vth_s\}_{s \in [0,\infty)}$ is a process on $\O_0$. Setting $B^t_s \df B_{t+s} \- B_t$, $s \ins [0,\infty)$, we take the hitting time of the process $ \vth_s (B^t_\cd) \- \eta $ above level $0$ as the corresponding stopping strategy $\tau$ on $\cQ$. By exploiting the equality $ E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \Big] \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_\infty \Big] $, $\oP-$a.s. (known as Property (K), see \cite{Szpirglas_Mazziotto_1977}), applying the ``change-of-variable" formula \big(see \eqref{Ju02_01}\big) and using some other techniques of stochastic analysis, we deduce that $ \oV (t,\bx,y,z) \ls V (t,\bx,y,z) $.
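The randomization device just described (stopping at the hitting time of $\vth_s(B^t_\cd) \- \eta$ above level $0$) can be illustrated in a toy setting. A minimal sketch, assuming for simplicity a deterministic conditional-distribution process $\vth_s = 1 - e^{-s}$ (a hypothetical choice, not from the paper), in which case the hitting time is available in closed form and reproduces an Exp(1) stopping time:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy randomization device: take the conditional-distribution process
# theta_s = P(T <= t+s | F_s) to be the deterministic function 1 - exp(-s)
# (hypothetical, for illustration only).  With eta ~ unif(0,1) independent
# of the Brownian motion, the stopping rule
#     tau = inf{ s : theta_s - eta >= 0 }
# satisfies P(tau <= s) = theta_s.
n = 200_000
eta = rng.uniform(size=n)
tau = -np.log1p(-eta)   # inf{s : 1 - exp(-s) >= eta}, an Exp(1) sample

# Empirical check that the randomized stopping time has the prescribed law
for s in (0.5, 1.0, 2.0):
    print(round(abs((tau <= s).mean() - (1 - np.exp(-s))), 3))
```

The printed discrepancies between the empirical and target distribution functions are at the Monte Carlo noise level, illustrating why an independent uniform variable $\eta$ suffices to realize any prescribed conditional distribution of the stopping time.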
This equivalence result indicates that the value of the optimal stopping problem with expectation constraints is actually a robust value, independent of the specific probability model. A dynamic programming principle for a stochastic control problem allows one to optimize the problem stage by stage in a backward recursive way. The well-posedness of the DPP requires the value of the problem to be a measurable function, so that one can first optimize at an intermediate horizon. To show the measurability of the value of the optimal stopping problem with expectation constraints, we utilize the martingale-problem formulation of \cite{Stroock_Varadhan} to describe the probability class $\ocP_{t,\bx}(y,z)$ as a series of probabilistic tests on the stochastic performance of some quadratic forms of the canonical coordinates. By such a characterization of $\ocP_{t,\bx}(y,z)$, we demonstrate that the value function $V \= \oV$ is upper semi-analytic and thus establish a DPP for $\oV$ in the weak formulation of our constrained optimal stopping problem: \beas \oV(t,\bx,y,z) & \tn \= & \tn \Sup{\oP \in \ocP_{t,\bx}(y,z)} \n E_\oP \bigg[ \b1_{\{\oT < t+\otau \}} \bigg( \n \int_t^\oT \n f(r,\oX_{r \land \cd}) dr \+ \pi \big(\oT, \oX_{\oT \land \cd}\big) \bigg)\\ & \tn & \tn \hspace{1.7cm} + \b1_{\{\oT \ge t+\otau \}} \bigg( \n \int_t^{t+\otau} \n f(r,\oX_{r \land \cd}) dr \+ \oV \Big( t \+\otau ,\oX_{(t+\otau) \land \cd} , \oY_\oP ( \otau ) , \oZ_\oP ( \otau ) \Big) \bigg) \bigg] . \eeas Here the additional states $\big(\oY_\oP ( \otau ), \oZ_\oP ( \otau ) \big) \= \Big( \big\{\oY^i_\oP (\otau)\big\}_{i \in \hN}, \big\{\oZ^i_\oP (\otau)\big\}_{i \in \hN} \Big)$ are defined by $\big(\oY^i_\oP (\otau),\oZ^i_\oP (\otau)\big) \df E_\oP \Big[ \int_{\oT \land (t+\otau)}^\oT (g_i,h_i)(r,\oX_{r \land \cd} ) dr \big| \cF(\otau) \Big] $.
For the subsolution side of this DPP, we use the regular conditional probability distribution and a flow property of SDEs to show that the probability classes $ \ocP_{t,\bx}(y,z) $, $ \fa (t,\bx,y,z) $, are stable under conditioning. For the supersolution side of the DPP, we take advantage of a measurable selection theorem from descriptive set theory (see Proposition 7.50 of \cite{Bertsekas_Shreve_1978}) to paste a class of locally $\e-$optimal probabilities and show that the probability classes $ \ocP_{t,\bx}(y,z) $ are also stable under such pasting. \ss \no {\bf Relevant Literature.} Since Arrow et al. \cite{ABG_1949} and Snell \cite{Snell_1952}, the theory of (unconstrained) optimal stopping has been extensively developed over the decades. Expositions of this theory are presented in the monographs \cite{CRS_1971,Shiryayev_1978,El_Karoui_1981,Kara_Shr_MF}, among others. For the recent development of optimal stopping under model uncertainty/non-linear expectations and the closely related controller-stopper games, see \cite{Karatzas_Sudderth_2001,Kara_Zam_2005,CDK-2006, Delbaen_2006, Kara_Zam_2008, Riedel_2009,OSNE1,OSNE2,OS_CRM,riedel2012,Bayraktar_Huang_2013,ETZ_2014,ROSVU,NZ_2015,RDOSRT,RDG} among others. Kennedy \cite{Kennedy_1982} initiated the study of optimal stopping problems with expectation constraints. The author used a {\it Lagrange multiplier} method to reformulate a discrete-time optimal stopping problem with a first-moment constraint as a minimax problem and showed that the optimal value of the dual problem is equal to that of the primal problem. Since then, the Lagrangian technique has prevailed in research on optimal stopping problems with expectation constraints (see e.g.
\cite{Pontier_Szpirglas_1984,LSMS_1995,Balzer_Jansen_2002,Urusov_2005,Makasu_2009,Tanaka_2019}), and has been applied to various economic and financial problems such as Markov decision processes with constrained stopping times \cite{Horiguchi_2001c,Horiguchi_2001b}, mean-variance optimal control/stopping problems \cite{Pedersen_Peskir_2016,Pedersen_Peskir_2017}, the quickest detection problem \cite{Peskir_2012}, etc. Recently, Ankirchner et al. \cite{AKK_2015} and Miller \cite{Miller_C_2017a} took a different approach to optimal stopping problems for diffusion processes with expectation constraints by transforming them into stochastic optimization problems with martingale controls. The former characterizes the value function in terms of a Hamilton-Jacobi-Bellman equation and obtains a verification theorem, while the latter analyzes the optimal stopping problem with a first-moment constraint that is embedded in a time-inconsistent (unconstrained) stopping problem. However, they only postulate a dynamic programming principle for optimal stopping with expectation constraints. In contrast, the main contribution of this paper is to establish a dynamic programming principle, with rigorous proof, for optimal stopping with expectation constraints in a general non-Markovian setting. A closely related topic to our research is optimal stopping with a constraint on the distribution of the stopping time. Bayraktar and Miller \cite{Bayraktar_Miller_2016} studied the problem of optimally stopping a Brownian motion under the restriction that the distribution of the stopping time must equal a given measure with finitely many atoms, and obtained a dynamic programming result which relates each of the sequential optimal control problems. K\"allblad \cite{Kallblad_2017} used measure-valued martingales to transform the distribution-constrained optimal stopping problem into a stochastic control problem and derived a dynamic programming principle by measurable selection arguments.
From the perspective of optimal transport, Beiglb\"ock et al. \cite{BEES_2016} gave a geometric description of optimal stopping times of a Brownian motion with a distribution constraint. In their study of a continuous-time stochastic control problem for diffusion processes, El Karoui, Huu Nguyen and Jeanblanc-Picqu\'e \cite{EHJ_1987} used a martingale-problem formulation and regarded the joint laws of control and state processes as relaxed controls on a canonical path space. The authors then employed the measurable selection theorem from descriptive set theory to establish a DPP without assuming any regularity conditions on the coefficients. This approach was later generalized by \cite{Elk_Tan_2013a,Elk_Tan_2013b,Gordan_Zitkovic_2014} in abstract frameworks, and developed by \cite{Neufeld_Nutz_2013,HN_2012,PTZ_2018} for the tower property of sub-linear expectations. As for stochastic control theory with expectation constraints, Yu et al. \cite{CYZ_2020} used the measurable selection argument to obtain a DPP result and applied it to quantitative finance problems with various expectation constraints. Pfeiffer et al. \cite{PTZ_2020} took a Lagrange relaxation approach to study a continuous-time stochastic control problem with both inequality-type and equality-type expectation constraints and obtained a duality result by means of convex analysis. Moreover, for stochastic control problems with state constraints, stochastic target problems with controlled losses and the related geometric dynamic programming principle, see \cite{BEI_2009,BET_2009,Bouchard_Nutz_2012,Soner_Touzi_2002_GDP,Soner_Touzi_2002_SDV, Soner_Touzi_2009,Bouchard_Vu_2010,Bouchard_Dang_2013,BMN_2014,BDK_2017}, among others. The rest of the paper is organized as follows: Section \ref{sec_genprob} sets up the optimal stopping problem with expectation constraints in a generic probabilistic setting.
In Section \ref{sec_weak_form}, we show that the constrained optimal stopping problem can be equivalently embedded into an enlarged canonical space, i.e., the optimal stopping problem with expectation constraints in the strong formulation has the same value as that in the weak formulation. In Section \ref{sec_Mart_prob}, we use a martingale-problem formulation to characterize the probability class in the weak formulation and thus show that the value function of the constrained optimal stopping problem is upper semi-analytic. Then in Section \ref{sec_DPP}, we utilize a measurable selection argument to establish a dynamic programming principle in the weak formulation for the value of the optimal stopping problem with expectation constraints. The appendix contains some technical lemmata necessary for the proofs of the main results. We close this section by introducing our notation and standing assumptions on the drift/diffusion coefficients and reward/constraint functions. \subsection{Notation and Preliminaries} \label{subsec:preliminary} For any $s \ins [0,\infty)$, set $\hQ_s \df \big(\hQ \Cp [0,s)\big) \cp \{s\} $. For a generic topological space $\big( \hX,\fT(\hX) \big)$, we denote its Borel sigma-field by $\sB(\hX)$ and let $\fP(\hX)$ denote the collection of all probabilities on $\big(\hX,\sB(\hX)\big)$. Given $n \ins \hN$, let $ \big\{\wh{O}_i\big\}_{i \in \hN}$ be a countable subbase of $\fT (\hR^n) $. The collection $\sO (\hR^n)$ of all finite intersections in $\big\{\wh{O}_i\big\}_{i \in \hN}$ together with $\es,\hR^n$ \Big(i.e. $ \sO (\hR^n) \df \ccup{m \in \hN}{} \Big\{ \ccap{i=1}{m} \wh{O}_{k_i} \n : \{ k_i \}^m_{i=1} \sb \hN \Big\} \cp \{\es,\hR^n\}$\Big) forms a countable base of $\fT (\hR^n) $ and thus $\sB(\hR^n) \= \si\big(\sO (\hR^n)\big)$. We also set $\wh{\sO} (\hR^n) \df \ccup{k \in \hN}{} \big( \big(\hQ \cap [0,\infty)\big) \ti \sO (\hR^n) \ti \sO (\hR^n) \big)^k $. Fix $d, l \ins \hN$.
Let $ \O_0 \= \big\{ \o \ins C ( [0,\infty) ; \hR^d ) \n : \o(0) \= 0 \big\} $ be the space of all $\hR^d-$valued continuous paths on $[0,\infty)$ that start from $0$. It is a Polish space under the topology of locally uniform convergence. \if{0} \beas \Rho{\O_0} (\o_0,\o'_0) \df \sum_{n \in \hN} \Big( 2^{-n} \ld \Sup{t \in [0,n]} |\o_0(t)\- \o'_0(t)| \Big) , \q \fa \o_0 , \o'_0 \ins \O_0 , \eeas $\O_0$ is a complete separable space and thus a Polish space. \beas \Rho{\OmX} \big(\omX,\omX'\big) \df \sum_{n \in \hN} \Big( 2^{-n} \ld \Sup{s \in [0,n]} |\omX(s)\- \omX'(s)| \Big) , \q \fa \omX , \omX' \ins \OmX . \eeas \fi Let $ P_0 $ be the Wiener measure of $\big(\O_0, \sB(\O_0) \big)$, under which the canonical process $ W \=\{W_t\}_\tz $ of $\O_0$ is a $d-$dimensional standard Brownian motion. Also, let $\OmX \= C ( [0,\infty) ; \hR^l )$ be the space of all $\hR^l-$valued continuous paths on $[0,\infty)$ endowed with the topology of locally uniform convergence. Let $ b \n : (0,\infty) \ti \OmX \mto \hR^l $ and $ \si \n : (0,\infty) \ti \OmX \mto \hR^{l \times d} $ be two Borel-measurable functions such that for any $t \ins (0,\infty)$ \bea \q \big|b(t,\bx)\-b(t,\bx') \big| \+ \big|\si(t,\bx)\-\si(t,\bx') \big| \ls \k(t) \|\bx \-\bx'\| , ~ \; \fa \bx, \bx' \ins \OmX \aand \int_0^t \big( |b (r,\bz)|^2 \+ |\si (r,\bz)|^2 \big) dr \< \infty, \qq \label{coeff_cond1} \eea where $\k \n : (0,\infty) \mto (0,\infty)$ is some non-decreasing function and $\| \bx \-\bx' \| \df \Sup{s \in [0,\infty)} \big| \bx(s) \-\bx'(s) \big|$. Also, let $f , g_i, h_i \n : (0,\infty) \ti \OmX \mto [-\infty,\infty]$ be Borel-measurable functions for all $i \ins \hN$ and let $\pi \n : [0,\infty) \ti \OmX \mto \hR$ be a Borel-measurable function. 
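Condition \eqref{coeff_cond1} is what makes the path-dependent SDE well-posed and amenable to a standard Euler-type discretization. A minimal simulation sketch with hypothetical coefficients (a drift given by the sine of the running maximum of the path and a bounded diffusion; both are Lipschitz in the sup-norm with $\k \equiv 1$, and neither is taken from the paper), in the scalar case $d \= l \= 1$:

```python
import math
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical path-dependent coefficients (illustration only): both are
# 1-Lipschitz with respect to the sup-norm of the path, so the Lipschitz
# condition (on b and sigma) holds with kappa identically 1.
def b(t, path):        # drift: sine of the running maximum of the path
    return math.sin(max(path))

def sigma(t, path):    # diffusion: bounded Lipschitz functional of the endpoint
    return 1.0 + 0.5 * math.cos(path[-1])

# Euler-Maruyama scheme for X_s = x(t) + int_t^s b dr + int_t^s sigma dB
def euler_maruyama(x0=0.0, t=0.0, T=1.0, n=1000):
    dt = (T - t) / n
    path = [x0]
    for _ in range(n):
        dB = rng.normal(scale=math.sqrt(dt))
        path.append(path[-1] + b(t, path) * dt + sigma(t, path) * dB)
    return np.array(path)

X = euler_maruyama()
print(len(X), bool(np.isfinite(X).all()))
```

The scheme passes the whole simulated history to the coefficients at each step, mirroring the dependence of $b(r, X_{r \land \cd})$ and $\si(r, X_{r \land \cd})$ on the stopped path rather than on the current state alone.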
We introduce some notation for a general measurable space $(\cQ,\cF )$: \bul For any measure $\fm $ on $(\cQ,\cF) $ and for any $[-\infty,\infty]-$valued $\cF-$measurable random variable $\xi$ on $\cQ$, define the integral $\int_\cQ \xi (\o) \fm(d \o) \df \int_\cQ \xi^+ (\o) \fm(d \o) \- \int_\cQ \xi^- (\o) \fm(d \o) $ with the convention $(+\infty)\+(-\infty) \= (-\infty)\+(+\infty) \= - \infty$. \if{0} Under such a convention, monotone convergence theorem and the tower property still hold. \beas \int_{\o \in \O} \xi (\o) \fm(d \o) \df \left\{ \ba{ll} - \infty, & \hb{if } \int_{\o \in \O} \xi^- (\o) \fm(d \o) \= \infty, \\ \infty, & \hb{if } \int_{\o \in \O} \xi^- (\o) \fm(d \o) \< \infty \hb{ and } \int_{\o \in \O} \xi^+ (\o) \fm(d \o) \= \infty, \ea \right. \eeas under which one still has $ \int_{\o \in \O} \xi_1 (\o) \fm(d \o) \ls \int_{\o \in \O} \xi_2 (\o) \fm(d \o) $ for any two real$-$valued measurable random variables $\xi_1$ and $\xi_2$ with $\xi_1 \ls \xi_2$, $\fm-$a.s. \fi \bul \if{0} For any filtration $\{\cF_t\}_\tz$ on $(\cQ,\cF)$, we set $\cF_\infty \df \si\Big(\underset{\tz}{\cup}\cF_t\Big)$. \fi For any process $X\= \{X_t\}_{\tz}$ on $(\cQ,\cF) $, let $\bF^X \= \big\{ \cF^X_t \df \si (X_s ; s \in [0,t]) \big\}_\tz$ be the raw filtration of $X$. \bul Let $ \fp$ be a probability on $(\cQ,\cF) $. For any sub-sigma-field $\cG $ of $\cF$, we set $\sN_\fp(\cG) \df \big\{ \cN \sb \cQ \n : \cN \sb A \hb{ for some } A \ins \cG \hb{ with } \fp (A) \=0 \big\}$. For any filtration $\bF \= \{\cF_t\}_\tz$ on $(\cQ,\cF,\fp)$, let $\bF^\fp \= \big\{\cF^\fp_t \df \si \big( \cF_t \cp \sN_\fp ( \cF_\infty ) \big) \big\}_\tz $ be the $\fp-$augmentation of $ \bF $ with $\cF_\infty \df \si\Big(\underset{\tz}{\cup}\cF_t\Big)$. Moreover, we set $\cR \df (-\infty,\infty]^\infty \ti [-\infty,\infty]^\infty $ and define $\phi(t,\cE) \df \big(2 \pi t\big)^{-d/2} \int_{z \in \cE} e^{ - \frac{z^2}{2 t}} dz $, $\fa (t,\cE) \ins (0,\infty) \ti \sB(\hR^d)$.
\section{A General Optimal Stopping with Expectation Constraint} \label{sec_genprob} Let $(\cQ, \cF, \fp)$ be a generic probability space equipped with a $d-$dimensional standard Brownian motion $B \= \{B_t\}_{t \in [0,\infty)}$ and an $\cF-$measurable {\it unif}\;$(0,1)$ random variable $\eta$ that is independent of $B$ under $\fp$. Let $t \ins [0,\infty)$. The evolution of $B$ after time $t$, $B^t_s \= B_{t+s} \- B_t $, $s \ins [0,\infty)$, is also a standard Brownian motion on $\cQ$ under $\fp$. We can view $s$ as the relative time after $t$. Given $\bx \in \OmX $, \eqref{coeff_cond1} ensures that the following SDE on $(\cQ, \cF, \fp)$ \bea \label{FSDE1} X_s \= \bx(t) + \int_t^s b (r, X_{r \land \cd}) dr \+ \int_t^s \si (r, X_{r \land \cd}) d B_r , \q \fa s \in [t,\infty) \eea with initial condition $X_s \= \bx(s)$, $\fa s \ins [0,t] $, admits a unique solution $ \big\{ X^{t,\bx}_s\big\}_{s \in [0,\infty)}$ \big(In particular, $ \big\{ X^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)} $ is an $\bF^{B^t,\fp}-$adapted process with all continuous paths satisfying $\fp \big\{X^{t,\bx}_{t+s} \= \bx(t) \+ \int_0^s b (t\+r, X^{t,\bx}_{(t+r) \land \cd}) dr \+ \int_0^s \si (t\+r, X^{t,\bx}_{(t+r) \land \cd}) d B^t_r , \fa s \ins [0,\infty)\big\} \= 1$\big). We define a filtration $\bF^{B^t,\eta}\=\big\{\cF^{B^t,\eta}_s \df \si \big( \cF^{B^t}_s \cup \si(\eta) \big)\big\}_{s \in [0,\infty)}$ and denote by $\cS^t $ the set of all $[0,\infty]-$valued $\bF^{B^t,\eta,\fp}-$stopping times. Given a historical path $\bx|_{[0,t]}$, the state then evolves along the process $X^{t,\bx}$. The player chooses a relative exercise time $\tau \ins \cS^t$ after $t$, at which she ceases the game and receives a running reward $\int_t^{t+\tau} \n f \big(r, X^{t,\bx}_{r \land \cd} \big) \, dr $ plus a terminal reward $ \pi \big( t\+\tau , X^{t,\bx}_{(t+\tau) \land \cd} \big) $.
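As a simple illustration of this setup (a standard special case, stated here only for orientation and not used in the sequel): take $l \= d \= 1$, $b \equiv 0$, $\si \equiv 1$, $f \equiv 0$ and $\pi(s,\bx) \= e^{-\rho s} \big( K \- \bx(s) \big)^+$ for some constants $\rho \ins (0,\infty)$ and $K \ins \hR$. Then the state process is a Brownian motion and, in the absence of constraints, maximizing
\beas E_\fp \Big[ \b1_{\{\tau < \infty\}} e^{-\rho (t+\tau)} \big( K \- X^{t,\bx}_{t+\tau} \big)^+ \Big] \eeas
over $\tau \ins \cS^t$ is the classical perpetual American put problem.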
The player would like to maximize the expectation of her total wealth, but her choice of $\tau$ is subject to a series of expectation constraints \bea \label{111920_11} E_\fp \Big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \Big] \ls y_i, \q E_\fp \Big[ \int_t^{t+\tau} h_i ( r,X^{t,\bx}_{r \land \cd} ) dr \Big] \= z_i, \q \fa i \ins \hN, ~ \fa (y_i,z_i) \ins (-\infty,\infty] \ti [-\infty,\infty] . \eea So for any $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $ \cS_{t,\bx}(y,z) \df \big\{ \tau \ins \cS^t \n : E_\fp \big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \ls y_i, \, E_\fp \big[ \int_t^{t+\tau} h_i ( r, \\ X^{t,\bx}_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\} $ is not empty, the value of the optimal stopping problem with expectation constraints \eqref{111920_11} is \bea \label{081820_11} V (t,\bx,y,z) \df \Sup{\tau \in \cS_{t,\bx}(y,z) } E_\fp \Big[ \int_t^{t+\tau} f \big( r, X^{t,\bx}_{r \land \cd} \big) dr \+ \b1_{\{\tau < \infty\}} \pi \big( t\+\tau , X^{t,\bx}_{(t+\tau) \land \cd} \big) \Big] . \eea \begin{rem} \label{rem_112220} 1\) \(finitely many constraints\) For some $i \ins \hN$, the constraint $E_\fp \big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \ls y_i$ is trivial if $ y_i \= \infty $, while the constraint $E_\fp \big[ \int_t^{t+\tau} h_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \= z_i$ is trivial if $(h_i,z_i) \= (0,0)$. \no 1a\) If we take $(y_i,h_i,z_i) \= (\infty,0,0)$, $\fa i \ins \hN$, there is no constraint at all. \no 1b\) If one takes $y_i \= \infty$, $\fa i \gs 2$ and $(h_i,z_i) \= (0,0)$ $\fa i \ins \hN$, \eqref{111920_11} degenerates to a single constraint $ E_\fp \big[ \int_t^{t+\tau} g_1 ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \\ \ls y_1 $. In addition, if $g_1 \gs 0$ and $y_1 \gs 0$, then $0 \ins \cS_{t,\bx}(y,z) $ and $ \cS_{t,\bx}(y,z) $ is thus non-empty. 
\no 1c\) If we take $y_i \= \infty$, $\fa i \ins \hN $ and $(h_i,z_i) \= (0,0)$ $\fa i \gs 2$, \eqref{111920_11} reduces to a single constraint $ E_\fp \big[ \int_t^{t+\tau} h_1 ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \= z_1 $. \no 1d\) If one takes $(y_i,h_i,z_i) \= (\infty,0,0)$, $\fa i \gs 2$, \eqref{111920_11} becomes the pair of constraints $ E_\fp \big[ \int_t^{t+\tau} g_1 ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \ls y_1 \; \& \; E_\fp \big[ \int_t^{t+\tau} h_1 ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \= z_1$. \no 2\) \(moment constraints\) Let $i \ins \hN$, $ a \ins (0,\infty)$, $q \ins (1,\infty) $ and $ \l \ins [0,\infty)$. If $g_i(t,\bx) \= a q t^{q-1} \+ \l $, $\fa (t,\bx) \ins (0,\infty) \ti \OmX $ \big(or $h_i(t,\bx) \= a q t^{q-1} \+ \l $, $\fa (t,\bx) \ins (0,\infty) \ti \OmX $\big), then the expectation constraint $E_\fp \big[ \int_t^{t+\tau} g_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \ls y_i$ \big(or $E_\fp \big[ \int_t^{t+\tau} h_i ( r,X^{t,\bx}_{r \land \cd} ) dr \big] \= z_i$\big) specializes to the moment constraint $E_\fp \big[ a \big((t\+\tau)^q \- t^q\big) \+ \l \tau \big] \ls y_i$ \big(or $E_\fp \big[ a \big((t\+\tau)^q \- t^q\big) \+ \l \tau \big] \= z_i$\big). \end{rem} To study the measurability of the value function $V$ and to derive a dynamic programming principle for $V$ without imposing any continuity condition on the reward/constraint functions $f,\pi,g_i,h_i$, we will embed the stopping rules together with the Brownian/state information into an enlarged canonical space and consider their joint distribution as a new form of control. \section{Weak Formulation} \label{sec_weak_form} In this section, we study the optimal stopping problem with expectation constraints in a weak formulation, i.e. over an enlarged canonical space \beas \oO \df \O_0 \ti \OmX \ti [0,\infty] . \eeas Clearly, $\oO$ is a Borel space under the product topology.
Let $\fP(\oO)$ be the space of all probabilities on $\big(\oO, \sB(\oO) \big)$ equipped with the topology of weak convergence, which is also a Borel space (see e.g. Corollary 7.25.1 of \cite{Bertsekas_Shreve_1978}). Define the canonical coordinates on $\oO$ by \beas \oW_s (\oo) \df \o_0(s) , \q \oX_s(\oo) \df \omX(s) , \q \fa s \ins [0,\infty) \q \hb{and} \q \oT(\oo) \df \ft , \q \fa \oo \= \big(\o_0,\omX,\ft\big) \ins \oO , \eeas in which one can regard $\oW$ as a canonical coordinate for the Brownian motion, $\oX$ as a canonical coordinate for the state process, and $\oT$ as a canonical coordinate for the stopping rules. Let $t \ins [0,\infty)$. We also define shifted canonical processes $\oXi^t \= (\oW^t ,\oX^t )$ on $\oO$ by \beas \big(\oW^t_s,\oX^t_s\big) (\oo) \df \big( \oW_{t+s} (\oo) \- \oW_t (\oo), \oX_{t+s} (\oo) \big) \q \fa (s,\oo) \ins [0,\infty) \ti \oO , \eeas and set the filtrations $\obF^t \= \Big\{\ocF^t_s \df \si \Big(\oW^t_r, \big\{\oT \ins [t,t\+r] \big\}; r \ins [0,s]\Big)\Big\}_{s \in [0,\infty)} $, $ \obG^t \= \Big\{ \ocG^t_s \df \si \Big(\oXi^t_r, \big\{\oT \ins [t,t\+r] \big\}; r \ins [0,s]\Big)\Big\}_{s \in [0,\infty)} $.
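Let us note an elementary consequence of these definitions, which motivates putting the events $\{\oT \ins [t,t\+r]\}$ (rather than $\oT$ itself) into the filtrations: by construction
\beas \big\{ \oT \ins [t,t\+s] \big\} \ins \ocF^t_s \sb \ocG^t_s , \q \fa s \ins [0,\infty) , \eeas
i.e. the indicator process $s \mto \b1_{\{\oT \ins [t,t+s]\}}$ is $\obF^t-$adapted. In particular, for any $\oP \ins \fP(\oO)$ with $\oP\{\oT \gs t\} \= 1$, the random variable $\oT \- t$ is a stopping time with respect to the $\oP-$augmented filtration of $\obF^t$, since $\{\oT \- t \ls s\}$ differs from $\{\oT \ins [t,t\+s]\}$ only by the $\oP-$null set $\{\oT \< t\}$.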
For any $s \ins [0,\infty)$, $\ocF^t_s $ can be countably generated by the Pi system \bea \label{090520_21} \hspace{-0.7cm} \ol{\cC}^t_s \df \bigg\{ \underset{i=1}{\overset{k}{\cap}} \Big( \big[ (\oW^t_{s_i \land s})^{-1}(\cO_i) \Cp \big\{\oT \ins [t,t\+ s_i \ld s]\big\} \big] \cp \big[ (\oW^t_{s_i \land s})^{-1}(\cO'_i) \Cp \big\{\oT \ins [t,t\+ s_i \ld s]^c \big\} \big] \Big) \n : \fa \big\{(s_i,\cO_i,\cO'_i)\big\}^k_{i=1} \ins \wh{\sO} (\hR^d) \bigg\} , \q \eea and $\ocG^t_s $ can be countably generated by the Pi system \bea \label{090520_23} \hspace{-0.7cm} \ol{\sC}^t_s \df \bigg\{ \underset{i=1}{\overset{k}{\cap}} \Big( \big[ (\oXi^t_{s_i \land s})^{-1}(\cO_i) \Cp \big\{\oT \ins [t,t\+ s_i \ld s]\big\} \big] \cp \big[ (\oXi^t_{s_i \land s})^{-1}(\cO'_i) \Cp \big\{\oT \ins [t,t\+ s_i \ld s]^c \big\} \big] \Big) \n : \fa \big\{(s_i,\cO_i,\cO'_i)\big\}^k_{i=1} \ins \wh{\sO} (\hR^{d+l}) \bigg\} . \q \eea For any $\oP \ins \fP(\oO)$, we set $\sB_\oP(\oO) \df \si \Big( \sB(\oO) \cp \sN_\oP \big(\sB(\oO)\big) \Big)$. The $\oP-$augmentation of $\obF^t$ is $\obF^{t,\oP} \= \Big\{ \ocF^{t,\oP}_s \df \si \Big( \ocF^t_s \cp \sN_\oP\big(\ocF^t_\infty\big) \Big) \Big\}_{s \in [0,\infty)} $ and the $\oP-$augmentation of $\obG^t$ is $\obG^{t,\oP} \= \Big\{ \ocG^{t,\oP}_s \df \si \Big( \ocG^t_s \cp \sN_\oP\big(\ocG^t_\infty\big) \Big) \Big\}_{s \in [0,\infty)} $. The weak formulation of our optimal stopping problem with expectation constraints relies on the following probability classes of $\fP(\oO)$: \begin{deff} \label{def_ocP} For any $(t,\bx) \ins [0,\infty) \ti \OmX$, let $\ocP_{t,\bx}$ be the collection of all probabilities $ \oP \ins \fP(\oO) $ satisfying: \no i\) On $\big(\oO , \sB(\oO) , \oP\big)$, the process $\oW^t$ is a $d-$dimensional standard Brownian motion with respect to filtration $\obF^t $. 
\no ii\) $ \oP\big\{ \oX_s \= \sX^{t,\bx}_s, ~ \fa s \ins [0,\infty) \big\} \= 1$, where $ \big\{\sX^{t,\bx}_s\big\}_{s \in [0,\infty)}$ uniquely solves the following SDE on $\big(\oO , \sB\big(\oO\big) , \oP\big) \n : $ \bea \label{Ju01_01} \sX_s = \bx(t) + \int_t^s b \big( r, \sX_{r \land \cd} \big)dr \+ \int_t^s \si \big( r, \sX_{r \land \cd} \big) d \oW_r, \q s \ins [t,\infty) \eea with initial condition $\sX_s \= \bx(s)$, $\fa s \ins [0,t] $ \big(In particular, $ \big\{ \sX^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)} $ is an $\bF^{\oW^t,\oP}-$adapted process with all continuous paths satisfying $\oP \big\{\sX^{t,\bx}_{t+s} \= \bx(t) \+ \int_0^s b (t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) dr \+ \int_0^s \si (t\+r, \sX^{t,\bx}_{(t+r) \land \cd}) d \oW^t_r , \fa s \ins [0,\infty)\big\} \= 1$\big). \ss \no iii\) $\oP\{\oT \gs t\} \= 1 $. \end{deff} Given a historical path of the state $\bx|_{[0,t]}$, for any $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $ \ocP_{t,\bx}(y,z) \df \big\{\oP \ins \ocP_{t,\bx} \n : E_\oP \big[ \int_t^\oT g_i ( r,\oX_{r \land \cd} ) dr \big] \ls y_i, \, E_\oP \big[ \int_t^\oT h_i ( r,\oX_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\} $ is not empty, \beas \oV (t,\bx,y,z) \df \underset{\oP \in \ocP_{t,\bx}(y,z)}{\sup} E_\oP \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big(\oT, \oX_{\oT \land \cd} \big) \bigg] \eeas is the value of the optimal stopping problem with expectation constraints \bea \label{111920_14} E_\oP \Big[ \int_t^\oT g_i ( r,\oX_{r \land \cd} ) dr \Big] \ls y_i, \q E_\oP \Big[ \int_t^\oT h_i ( r,\oX_{r \land \cd} ) dr \Big] \= z_i, \q \fa i \ins \hN \eea in the weak formulation. 
We can consider another value of the constrained optimal stopping problem in the weak formulation: Let $(t,\bw,\bx) \ins [0,\infty) \ti \O_0 \ti \OmX $ and define $ \ocP_{t,\bw,\bx} \df \big\{\oP \ins \ocP_{t,\bx} \n : \oP \big\{ \oW_s \= \bw(s) , \fa s \ins [0,t] \big\} \= 1 \big\}$ as the probability class given the Brownian and state history $(\bw , \bx) \big|_{[0,t]}$. For any $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $ \ocP_{t,\bw,\bx}(y,z) \df \big\{\oP \ins \ocP_{t,\bw,\bx} \n : E_\oP \big[ \int_t^\oT g_i ( r,\oX_{r \land \cd} ) dr \big] \ls y_i, \, E_\oP \big[ \int_t^\oT h_i ( r,\oX_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\}$ is not empty, the value of the optimal stopping problem with expectation constraints \eqref{111920_14} given $(\bw , \bx) \big|_{[0,t]}$ is \beas \ocV (t,\bw,\bx,y,z) \df \underset{\oP \in \ocP_{t,\bw,\bx}(y,z)}{\sup} E_\oP \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big(\oT, \oX_{\oT \land \cd} \big) \bigg] \eeas in the weak formulation. Set $ \oD \df \{(t,\bx,y,z) \ins [0,\infty) \ti \OmX \ti \cR \n : \ocP_{t,\bx}(y,z) \nne \es \} $ and $ \ocD \df \{(t,\bw,\bx,y,z) \ins [0,\infty) \ti \O_0 \ti \OmX \ti \cR \n : \ocP_{t,\bw,\bx}(y,z) \nne \es \} $. One of our main results in the next theorem shows that the value $V(t,\bx,y,z)$ in \eqref{081820_11} coincides with the value $\oV(t,\bx,y,z)$ in the weak formulation, and is even equal to the value $ \ocV (t,\bw,\bx,y,z) $. To wit, the value of the optimal stopping problem with expectation constraints is independent of a specific probabilistic setup and is also indifferent to the Brownian history. \begin{thm} \label{thm_V=oV} For any $(t,\bw,\bx,y,z) \ins [0,\infty) \ti \O_0 \ti \OmX \ti \cR $, $\cS_{t,\bx}(y,z) \nne \es \Leftrightarrow \ocP_{t,\bx}(y,z) \nne \es \Leftrightarrow \ocP_{t,\bw,\bx}(y,z) \nne \es$, and one has $ V (t,\bx,y,z ) \= \oV (t,\bx,y,z ) \= \ocV (t, \bw,\bx, y,z ) $ in this situation.
\end{thm} By Remark \ref{rem_112220} (1b) and Theorem \ref{thm_V=oV}, $\hb{Proj} \big(\oD\big) \df \{(t,\bx) \ins [0,\infty) \ti \OmX \n : (t,\bx,y,z) \ins \oD \hb{ for some } (y,z) \ins \cR \} \= [0,\infty) \ti \OmX$ and $\hb{Proj} \big(\ocD\big) \df \{(t,\bw,\bx) \ins [0,\infty) \ti \O_0 \ti \OmX \n : (t,\bw,\bx,y,z) \ins \ocD \hb{ for some } (y,z) \ins \cR \} \= [0,\infty) \ti \O_0 \ti \OmX$. \begin{rem} \label{rem_robust} Theorem \ref{thm_V=oV} indicates that our optimal stopping problem with expectation constraints is actually independent of a particular probabilistic setting. It even allows us to deal with the robust case. Let $ \big\{(\cQ_\a, \cF_\a, \fp_\a)\big\}_{\a \in \fA} $ be a family of probability spaces, where $\fA$ is a countable or uncountable index set \(e.g. it can be a non-dominated class of probabilities\). Given $\a \ins \fA$, let $B^\a$ be a $d-$dimensional standard Brownian motion on $(\cQ_\a, \cF_\a, \fp_\a)$ and let $\eta_\a$ be an $\cF_\a-$measurable {\it unif}\;$(0,1)$ random variable that is independent of $B^\a$ under $\fp_\a$. For any $ (t,\bx) \ins [0,\infty) \ti \OmX $, let $X^{\a,t,\bx}$ be the unique solution of the SDE on $(\cQ_\a, \cF_\a, \fp_\a)$ \beas X^\a_s \= \bx(t) + \int_t^s b ( r, X^\a_{r \land \cd}) dr \+ \int_t^s \si ( r, X^\a_{r \land \cd}) d B^\a_r , \q s \in [t,\infty) \eeas with initial condition $ X^\a_s \= \bx(s) $, $\fa s \ins [0,t]$. Also, let $ \cS^t_\a $ denote the set of all $[0,\infty]-$valued, $\bF^{B^{\a,t},\eta_\a,\fp_\a}-$stopping times, where $ B^{\a,t}_s \df B^\a_{t+s} \- B^\a_t $, $s \ins [0,\infty)$ and $\bF^{B^{\a,t},\eta_\a}\=\big\{\cF^{B^{\a,t},\eta_\a}_s \df \si \big( \cF^{B^{\a,t}}_s \cup \si(\eta_\a) \big)\big\}_{s \in [0,\infty)}$. 
According to Theorem \ref{thm_V=oV}, for any $(t,\bx) \ins [0,\infty) \ti \OmX $ and $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $ \ocP_{t,\bx}(y,z) \nne \es $, \beas \;\; \oV (t,\bx,y,z) \= \Sup{\a \in \fA} \, \Sup{ \tau_\a \in \cS^\a_{t,\bx}(y,z) } E_{\fp_\a} \bigg[ \int_t^{t+\tau_\a} f \big( r,X^{\a,t,\bx}_{r \land \cd} \big) dr \+ \b1_{\{\tau_\a < \infty\}} \pi \big( t\+\tau_\a, X^{\a,t,\bx}_{(t+\tau_\a) \land \cd} \big) \bigg] , \eeas where $ \cS^\a_{t,\bx}(y,z) \df \big\{ \tau_\a \ins \cS^t_\a \n : E_{\fp_\a} \big[ \int_t^{t+\tau_\a} g_i ( r,X^{\a,t,\bx}_{r \land \cd} ) dr \big] \ls y_i, \, E_{\fp_\a} \big[ \int_t^{t+\tau_\a} h_i ( r,X^{\a,t,\bx}_{r \land \cd} ) dr \big] \= z_i, \, \fa i \ins \hN \big\} $ is not empty for all $ \a \in \fA $. \end{rem} \no {\bf Proof of Theorem \ref{thm_V=oV} (Part I):} Fix $(t,\bw,\bx) \ins [0,\infty) \ti \O_0 \ti \OmX $ and $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $\cS_{t,\bx}(y,z) \nne \es$. We show in this part that $ \ocP_{t,\bw,\bx}(y,z) \nne \es $ and $V(t,\bx,y,z) \ls \ocV (t,\bw,\bx,y,z) \ls \oV (t,\bx,y,z)$. Fix $ \tau \in \cS_{t,\bx}(y,z)$. We define processes $ \breve{B}_s (\o) \df \bw(s \land t) \+ B_{s \vee t} (\o) \- B_t (\o) $, $ \fa (s,\o) \ins [0,\infty) \ti \cQ $ and define a mapping $\Phi \n : \cQ \mto \oO$ by $ \Phi (\o) \df \big( \breve{B}_\cd (\o), X^{t,\bx}_\cd (\o) , t\+\tau(\o) \big) \ins \oO $, $ \fa \o \ins \cQ $. So it holds for any $\o \ins \cQ$ that \bea \label{082120_11} (\oW_s,\oX_s)\big(\Phi (\o)\big) \= \big(\bw(s),\bx(s)\big), ~ \fa s \ins [0,t] \aand \oXi^t_\cd \big(\Phi (\o)\big) \= \big(B^t_\cd (\o) , X^{t,\bx}_{t+\cd} (\o)\big),~ \oT\big(\Phi(\o)\big) \= t\+\tau(\o) . \eea The mapping $\Phi$ induces a probability $\oP \ins \fP(\oO) $ by $ \oP\big(\oA\big) \df \fp \big( \Phi^{-1}\big(\oA\big) \big) $, $ \fa \oA \ins \sB(\oO) $. 
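In other words, $\oP \= \fp \circ \Phi^{-1}$ is the pushforward of $\fp$ under $\Phi$. We will repeatedly use the corresponding change-of-variables identity: for any $[-\infty,\infty]-$valued, $\sB(\oO)-$measurable random variable $\psi$ on $\oO$,
\beas E_\oP \big[ \psi \big] \= E_\fp \big[ \psi (\Phi) \big] , \eeas
which holds for indicators by the definition of $\oP$ and extends to the general case by linearity, monotone convergence and the integral convention fixed in the introduction.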
For $s \ins [0,\infty] $, since it holds for any $r \ins [0,s] \Cp \hR $ and $\cE \ins \sB(\hR^d)$ that $ \Phi^{-1} \big( (\oW^t_r)^{-1} ( \cE ) \big) \= \big\{ \oW^t_r ( \Phi ) \ins \cE \big\} \= \big\{ B^t_r \ins \cE \big\} \ins \cF^{B^t}_s $, one can deduce that \bea \label{Oct12_07} \Phi^{-1} \big(\cF^{\oW^t,\oP}_s\big) \sb \cF^{B^t,\fp}_s, \q \fa s \ins [0,\infty] . \eea Let $ 0 \ls s \< r \< \infty $. For any $\cE \ins \sB(\hR^d)$, one clearly has $ \oP\big( (\oW^t_r \- \oW^t_s)^{-1} (\cE) \big) \= \fp \big\{ (\oW^t_r \- \oW^t_s) (\Phi) \ins \cE \big\} \= \fp \big\{ B^t_r \- B^t_s \ins \cE \big\} \= \phi(r\-s,\cE) $. Also, let $ \big\{(s_i,\cE_i,\cE'_i)\big\}^n_{i=1} \sb [0,s] \ti \sB(\hR^{d+l}) \ti \sB(\hR^{d+l})$. Since the Brownian motion $B^t$ is independent of $\eta$ under $\fp$ and since $\{\tau \ls s_i \} \ins \cF^{B^t,\eta,\fp}_{s_i} $ for $i \= 1,\cds \n , n$, \beas && \hspace{-1.2cm} \oP\Big\{ (\oW^t_r \- \oW^t_s)^{-1} (\cE) \Cp \Big( \underset{i=1}{\overset{n}{\cap}} \Big( \big[ (\oW^t_{s_i})^{-1}(\cE_i) \Cp \big\{\oT \ins [t,t\+ s_i]\big\} \big] \cp \big[ (\oW^t_{s_i})^{-1}(\cE'_i) \Cp \big\{\oT \ins [t,t\+ s_i]^c \big\} \big] \Big) \Big) \Big\} \\ && \hspace{-0.7cm} \= \fp \Big\{ \big\{ (\oW^t_r \- \oW^t_s) ( \Phi ) \ins \cE \big\} \Cp \Big( \underset{i=1}{\overset{n}{\cap}} \Big( \Big\{ \oW^t_{s_i} ( \Phi ) \ins \cE_i , \, \oT ( \Phi ) \ins [t,t\+ s_i] \Big\} \cp\Big\{ \oW^t_{s_i} ( \Phi ) \ins \cE'_i , \, \oT ( \Phi ) \ins [t,t\+ s_i]^c \Big\} \Big) \Big) \Big\} \\ && \hspace{-0.7cm} \= \fp \big\{ B^t_r \- B^t_s \ins \cE \big\} \ti \fp \Big\{ \underset{i=1}{\overset{n}{\cap}} \Big( \Big\{ B^t_{s_i} \ins \cE_i , \, \tau \ins [0, s_i] \Big\} \cp\Big\{ B^t_{s_i} \ins \cE'_i , \, \tau \ins (s_i,\infty) \Big\} \Big) \Big\} \\ && \hspace{-0.7cm} \= \oP\big\{ (\oW^t_r \- \oW^t_s)^{-1} (\cE) \big\} \ti \oP \Big\{ \underset{i=1}{\overset{n}{\cap}} \Big( \big[ (\oW^t_{s_i})^{-1}(\cE_i) \Cp \big\{\oT \ins [t,t\+ s_i]\big\} \big] \cp \big[ 
(\oW^t_{s_i})^{-1}(\cE'_i) \Cp \big\{\oT \ins [t,t\+ s_i]^c \big\} \big] \Big) \Big\} . \eeas An application of Dynkin's Theorem and \eqref{090520_21} yield that $ \oP\big( (\oW^t_r \- \oW^t_s)^{-1} (\cE) \Cp \oA \big) \= \oP\big( (\oW^t_r \- \oW^t_s)^{-1} (\cE) \big) \oP\big( \oA \big)$, $ \fa \oA \ins \ocF^t_s $. So $\oW^t$ is a $d-$dimensional standard Brownian motion with respect to the filtration $ \obF^t $ on $ \big(\oO,\sB(\oO),\oP\big) $, or $\oP$ satisfies Definition \ref{def_ocP} (i). As $ \big\{ \sX^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)} $ is an $\bF^{\oW^t,\oP}-$adapted process, $\oM_s \df \int_0^s \n \si(t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) d\oW^t_r$, $s \ins [0,\infty)$ defines an $\big( \bF^{\oW^t,\oP}, \oP\big)- $ martingale. There is a sequence of $ \hR^{l \times d}-$valued, $ \bF^{\oW^t,\oP} -$simple processes $ \Big\{\ol{\cH}^n_s \= \sum_{i = 1}^{\ell_n} \oxi^n_i \, \b1_{ \{s \in (s^n_i, s^n_{i+1}] \} } , \, s \ins [0,\infty) \Big\}_{n \in \hN}$ \big(with $0 \= s^n_1 \< \cds \< s^n_{\ell_n+1} \< \infty $ and $\oxi^n_i \ins \cF^{\oW^t,\oP}_{ s^n_i}$, $i \=1 ,\cds \n, \ell_n $\big) such that \bea \label{090520_31} \oP \n - \n \lmt{n \to \infty} \, \int_0^\infty \big| \ol{\cH}^n_r \- \si \big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} \big) \big|^2 dr \= 0 \aand \oP \n - \n \lmt{n \to \infty} \; \underset{s \in [0,\infty)}{\sup} \big| \oM^n_s - \oM_s \big| \= 0 , \eea where $ \oM^n_s \df \int_0^s \n \ol{\cH}^n_r d\oW^t_r $. It follows that \bea \fp \n - \n \lmt{n \to \infty} \int_0^\infty \big| \ol{\cH}^n_r (\Phi) \- \si\big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) \big|^2 dr \= 0 \aand \fp \n - \n \lmt{n \to \infty} \, \underset{s \in [0,\infty)}{\sup} \big| (\oM^n_s \- \oM_s ) (\Phi) \big| =0 . 
\qq \label{S24_00} \eea For $n \ins \hN$, since $ \oxi^n_i (\Phi) \ins \cF^{B^t,\fp}_{ s^n_i} $ for $i \= 1,\cds \n ,\ell_n$ by \eqref{Oct12_07}, applying Proposition 3.2.26 of \cite{Kara_Shr_BMSC} and using the first limit in \eqref{S24_00} yield that $ \fp \n - \n \lmt{n \to \infty} \, \underset{s \in [0,\infty)}{\sup} \Big| \int_0^s \ol{\cH}^n_r (\Phi) dB^t_r - \int_0^s \si\big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) dB^t_r \Big| \= 0 $, which together with the second limit in \eqref{S24_00} renders that \bea \label{Oct12_63} \fp \bigg\{ \Big( \int_0^s \n \si(t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) d\oW^t_r \Big) (\Phi) \= \int_0^s \si\big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) dB^t_r , ~ s \ins [0,\infty) \bigg\} \= 1 . \eea Then we can deduce that \fpas, \beas \sX^{t,\bx}_s (\Phi) & \tn \= & \tn \bx(t) \+ \int_0^{s-t} \n b\big(t\+r, \sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) dr \+ \Big( \int_0^{s-t} \n \si(t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) d\oW^t_r \Big) (\Phi) \\ & \tn \= & \tn \bx(t) \+ \int_0^{s-t} \n b\big(t\+r, \sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) dr \+ \int_0^{s-t} \n \si\big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} (\Phi)\big) dB^t_r \\ & \tn \= & \tn \bx(t) \+ \int_t^s \n b\big(r, \sX^{t,\bx}_{r \land \cd} (\Phi)\big) dr \+ \int_t^s \n \si\big(r,\sX^{t,\bx}_{r \land \cd} (\Phi)\big) dB_r , \q \fa s \in [t,\infty). \eeas So $\big\{\sX^{t,\bx}_s (\Phi)\big\}_{s \in [0,\infty)}$ also solves \eqref{FSDE1}. Consequently, $ \oP \big\{ \oX_s \= \sX^{t,\bx}_s , \; \fa s \ins [0,\infty) \big\} \= \fp \big\{ \oX_s (\Phi) \= \sX^{t,\bx}_s (\Phi) , \; \fa s \ins [0,\infty) \big\} \= \fp \big\{ X^{t,\bx}_s \= \sX^{t,\bx}_s (\Phi) , \; \fa s \ins [0,\infty) \big\} \= 1 $. Namely, $\oP$ satisfies Definition \ref{def_ocP} (ii). Since $ \oP \{ \oT \gs t\} \= \fp \big\{ \oT(\Phi) \gs t \big\} \= \fp \big\{ \tau \gs 0 \big\} \= 1 $, we see from \eqref{082120_11} that $\oP \ins \ocP_{t,\bw,\bx}$.
For any $i \ins \hN$, one has \bea E_\oP \bigg[ \int_t^\oT g_i \big(r,\oX_{r \land \cd} \big) dr \bigg] \= E_\fp \Big[ \int_t^{\oT(\Phi)} g_i \big(r,\oX_{r \land \cd}(\Phi) \big) dr \Big] \= E_\fp \Big[ \int_t^{t+\tau} g_i (r,X^{t,\bx}_{r \land \cd} ) dr \Big] \ls y_i \label{Feb12_01} \eea and similarly $ E_\oP \big[ \int_t^\oT h_i \big(r,\oX_{r \land \cd} \big) dr \big] \= E_\fp \big[ \int_t^{t+\tau} h_i (r,X^{t,\bx}_{r \land \cd} ) dr \big] \= z_i $, which shows that $ \oP \ins \ocP_{t,\bw,\bx}(y,z) \sb \ocP_{t,\bx} (y,z) $. By an analogy to \eqref{Feb12_01}, \beas E_\fp \Big[ \int_t^{t+\tau} f (r,X^{t,\bx}_{r \land \cd} ) dr \+ \b1_{\{\tau < \infty\}} \pi\big(t\+\tau,X^{t,\bx}_{(t+\tau) \land \cd}\big) \Big] \= E_\oP \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big( \oT, \oX_{\oT \land \cd} \big) \bigg] \ls \ocV (t,\bw,\bx,y,z) . \eeas Letting $ \tau $ run through $ \cS_{t,\bx}(y,z) $ yields that $V(t,\bx,y,z) \ls \ocV (t,\bw,\bx,y,z) \ls \oV (t,\bx,y,z)$. \qed To demonstrate the inequality $\oV(t,\bx,y,z) \ls V(t,\bx,y,z)$ in Theorem \ref{thm_V=oV}, we need to introduce an auxiliary value $\wV (t,\bx,y,z)$ over another enlarged canonical space \beas \wO \df \O_0 \ti \OmX \ti [0,1] . \eeas Analogous to $\fP(\oO)$, the space $\fP\big(\wO\big)$ of all probabilities on $\big(\wO, \sB\big(\wO\big) \big)$ equipped with the topology of weak convergence is a Borel space. Define the canonical coordinates on $\wO$ by \beas \wW_s (\wo) \df \o_0 (s) , \q \wX_s(\wo) \df \omX (s) , \q \fa s \ins [0,\infty) \aand \weta (\wo) \df \l , \q \fa \wo \= \big(\o_0, \omX, \l \big) \ins \wO . 
\eeas Also, for $t \ins [0,\infty)$ we define shifted canonical processes $\wXi^t \= (\wW^t ,\wX^t )$ on $\wO$ by \beas \big(\wW^t_s,\wX^t_s\big) (\wo) \df \big( \wW_{t+s} (\wo) \- \wW_t (\wo), \wX_{t+s} (\wo) \big) , \q \fa (s,\wo) \ins [0,\infty) \ti \wO , \eeas and set the filtration $\bF^{\wW^t,\weta} \= \Big\{\cF^{\wW^t,\weta}_s \df \si \Big(\cF^{\wW^t}_s \cp \si\big(\weta \big) \Big) \Big\}_{s \in [0,\infty)}$. For any $\wP \ins \fP\big(\wO\big)$, let $\cT^\wP_t$ denote the set of all $[0,\infty]-$valued, $\bF^{\wW^t,\weta,\wP}-$stopping times. We have the following counterpart of the classes $\ocP_{t,\bx}$ in $\fP\big(\wO\big)$: \begin{deff} \label{def_wcP} For any $(t,\bx) \ins [0,\infty) \ti \OmX$, let $\wcP_{t,\bx}$ be the collection of all probabilities $ \wP \ins \fP \big(\wO\big) $ satisfying: \no i\) On $\big(\wO , \sB\big(\wO\big) , \wP\big)$, the process $\wW^t$ is a $d-$dimensional standard Brownian motion and $ \weta $ is a {\it unif}\,$(0,1)$ random variable independent of $\wW^t$. \no ii\) $ \wP\big\{ \wX_s \= \cX^{t,\bx}_s, ~ \fa s \ins [0,\infty) \big\} \= 1$, where $ \big\{\cX^{t,\bx}_s\big\}_{s \in [0,\infty)}$ uniquely solves the following SDE on $\big(\wO , \sB\big(\wO\big) , \wP\big) \n : $ \beas \cX_s = \bx(t) + \int_t^s b \big( r, \cX_{r \land \cd} \big)dr \+ \int_t^s \si \big( r, \cX_{r \land \cd}\big) d \wW_r, \q s \ins [t,\infty) \eeas with initial condition $\cX_s \= \bx(s)$, $\fa s \ins [0,t] $ \big(In particular, $ \big\{ \cX^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)} $ is an $\bF^{\wW^t,\wP}-$adapted process with all continuous paths satisfying $\wP \big\{\cX^{t,\bx}_{t+s} \= \bx(t) \+ \int_0^s b (t\+r,\cX^{t,\bx}_{(t+r) \land \cd}) dr \+ \int_0^s \si (t\+r, \cX^{t,\bx}_{(t+r) \land \cd}) d \wW^t_r , \fa s \ins [0,\infty)\big\} \= 1$\big).
\end{deff} For any $(t,\bx) \ins [0,\infty) \ti \OmX $ and any $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $ \cT^\wP_{t,\bx}(y,z) \df \big\{ \wtau \ins \cT^\wP_t \n : E_\wP \big[ \int_t^{t+\wtau} g_i (r, \\ \wX_{r \land \cd} ) dr \big] \ls y_i,\, E_\wP \big[ \int_t^{t+\wtau} h_i (r, \wX_{r \land \cd} ) dr \big] \= z_i, \,\fa i \ins \hN\big\} $ is not empty for some $ \wP \in \wcP_{t,\bx} $, we define the auxiliary value \beas \wV (t,\bx,y,z) \df \Sup{\wP \in \wcP_{t,\bx}(y,z)} \Sup{ \wtau \in \cT^\wP_{t,\bx}(y,z) } E_\wP \bigg[ \int_t^{t+\wtau} \n f \big( r, \wX_{r \land \cd} \big) dr \+ \b1_{\{\wtau < \infty\}} \pi \big( t\+\wtau , \wX_{(t+\wtau) \land \cd} \big) \bigg] , \eeas where $\wcP_{t,\bx}(y,z) \df \big\{ \wP \ins \wcP_{t,\bx} \n : \cT^\wP_{t,\bx}(y,z) \nne \es \big\}$. \no {\bf Proof of Theorem \ref{thm_V=oV} (Part II):} Fix $(t,\bx ) \ins [0,\infty) \ti \OmX $ and $(y,z) \= \big(\{y_i\}_{i \in \hN}, \{z_i\}_{i \in \hN}\big) \ins \cR$ such that $\ocP_{t,\bx}(y,z) \nne \es$. In this part, we demonstrate that $\cS_{t,\bx}(y,z) \nne \es$ and $\oV(t,\bx,y,z) \ls V(t,\bx,y,z)$. \no {\bf (II.a)} To show that $\wcP_{t,\bx}(y,z)\nne \es $ and $\oV(t,\bx,y,z) \ls \wV(t,\bx,y,z)$, we fix $\oP \ins \ocP_{t,\bx}(y,z) $. Define a probability $ \wP $ of $\fP\big(\wO\big)$ by $ \wP \df \oP|_{\O_0 \times \OmX} \oti d\l $ \big(i.e., $\wP$ is the product measure between the projection of $\oP$ on $ \O_0 \ti \OmX $ and the Lebesgue measure on $[0,1]$\big). \if{0} i.e. \beas \wP (A \ti \cE) \= \oP|_{\O_0 \times \OmX} (A) \ti \int_0^1 \b1_{\{\l \in \cE\}} d\l \= \oP\big(A \ti [0,\infty]\big) \ti \int_0^1 \b1_{\{\l \in \cE\}} d\l , \q \fa A \ins \sB(\O_0) \oti \sB(\OmX) , ~ \fa \cE \ins \sB[0,1] . \eeas $ \oP|_{\O_0 \times \OmX} $ is projection of $\oP$ on $ \O_0 \ti \OmX $ and $\wP$ is the product measure between $ \oP|_{\O_0 \times \OmX} $ and the Lebesgue measure on $[0,1]$. 
\fi Clearly, $\weta$ is a {\it unif}\,$(0,1)$ random variable independent of $(\wW,\wX)$ under $\wP$ and the joint $\wP-$distribution of $(\wW,\wX)$ is equal to the joint $\oP-$distribution of $(\oW,\oX)$. As conditions (i),(ii) of Definition \ref{def_ocP} hold for $\oP$, the probability $\wP$ correspondingly satisfies Definition \ref{def_wcP}, namely, $\wP \ins \wcP_{t,\bx}$. \no {\bf (II.a.1)} We next construct a $\wt{\tau} \ins \cT^\wP_{t,\bx} (y,z)$. For any $s \ins [0,\infty)$, there is $ \cF^W_s -$measurable random variable $\vth_s$ on $\O_0$ such that \bea \label{J30_03} \vth_s \big( \oW^t_\cd (\oo) \big) \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \Big] ( \oo ), \q \fa \oo \ins \oO . \eea Since the filtration $\bF^{W,P_0}$ is right-continuous and since $E_{P_0} [\vth_s] \= E_\oP \big[ \vth_s (\oW^t_\cd) \big] \= E_\oP \big[ \b1_{\{\oT \in [t,t+s]\}} \big] $ is right-continuous in $s \ins [0,\infty)$, the process $ \{\vth_s\}_{s \in [0,\infty)}$ admits a \cad modification $ \big\{\wh{\vth}_s\big\}_{s \in [0,\infty)} $ on $\O_0$. Let $s \ins [0,\infty)$ and $\oA \ins \cF^{\oW^t}_\infty$. Since $\oW^t$ is a Brownian motion with respect to filtration $\obF^t $ \big(and thus with respect to $\bF^{\oW^t} $\big) under $\oP$, the Markov property implies that $E_\oP \big[ \b1_A \big| \ocF^t_s \big] \= E_\oP \big[ \b1_A \big| \cF^{\oW^t}_s \big]$, $\oP-$a.s. 
By the {\it tower property}, \beas && \hspace{-1.2cm} E_\oP \big[\b1_\oA \b1_{\{\oT \in [t,t+s]\}} \big] \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} E_\oP \big[\b1_\oA \big| \ocF^t_s \big] \Big] \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} E_\oP \big[\b1_\oA \big| \cF^{\oW^t}_s \big] \Big] \= E_\oP \Big[ E_\oP \big[ \b1_{\{\oT \in [t,t+s]\}} E_\oP [\b1_\oA | \cF^{\oW^t}_s ] \big| \cF^{\oW^t}_s \big] \Big] \\ & & \hspace{-0.5cm} \= E_\oP \Big[ E_\oP \big[\b1_\oA \big| \cF^{\oW^t}_s \big] E_\oP \big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \big] \Big] \= E_\oP \Big[ E_\oP \Big[\b1_\oA E_\oP \big[ \b1_{\{\oT \in [t,t+s]\}} | \cF^{\oW^t}_s ] \big| \cF^{\oW^t}_s \Big] \Big] \= E_\oP \Big[ \b1_\oA E_\oP \big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \big] \Big] . \eeas Letting the set $\oA$ run through $ \cF^{\oW^t}_\infty$, we obtain \bea \label{J30_02} E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_\infty \Big] \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \Big] , \q \hb{$\oP-$a.s.} \eea \if{0} If $\big\{\oP_\oo\big\}_{\oo \in \oO}$ denotes the family of regular conditional probability distributions (r.c.p.d.) of $\oP$ with respect to $\cF^{\oW^t}_\infty$, then $ \oP_\oo \{\oT \ls t\+s\} \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} | \cF^{\oW^t}_\infty \Big] (\oo) $ for $\oP-$a.s. $\oo \ins \oO$. \fi For $0 \ls s \< r \< \infty$, \eqref{J30_03} and \eqref{J30_02} show that \beas \q 0 \ls \vth_s \big( \oW^t_\cd (\oo) \big) \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_\infty \Big] ( \oo ) \ls E_\oP \Big[ \b1_{\{\oT \in [t,t+r]\}} \big| \cF^{\oW^t}_\infty \Big] ( \oo ) \= \vth_r \big( \oW^t_\cd (\oo) \big) \ls 1 ~ \hb{ for $\oP-$a.s. $\oo \ins \oO$,} \eeas which implies that $ 0 \ls \wh{\vth}_s \= \vth_s \ls \vth_r \= \wh{\vth}_r \ls 1 $, $P_0-$a.s. So $ \wh{\vth} $ is a $[0,1]-$valued, $\bF^{W,P_0}-$adapted \cad increasing process on $\O_0$. 
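Before inverting $\wh{\vth}$, let us illustrate it with a trivial special case (a sanity check only, not needed for the proof): if $\oT \= t \+ s_0$, $\oP-$a.s. for some constant $s_0 \ins [0,\infty)$, then \eqref{J30_03} gives
\beas \vth_s \= \b1_{\{s \gs s_0\}} , \q \hb{$P_0-$a.s. for each $s \ins [0,\infty)$} , \eeas
so $\wh{\vth}_\cd$ is $P_0-$a.s. the unit jump at $s_0$ and the generalized inverse constructed below satisfies $\vr(\o_0,\l) \= s_0$ for all $\l \ins [0,1)$. In this degenerate case the randomization via $\weta$ is superfluous and one recovers the deterministic relative exercise time $s_0$.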
Define a mapping $ \vr \n : \O_0 \ti [0,1] \mto [0,\infty] $ by \beas \vr (\o_0,\l) \df \inf\big\{s \ins [0,\infty) \n : \wh{\vth}_s ( \o_0) \> \l \big\} , \q \fa (\o_0,\l) \ins \O_0 \ti [0,1] . \eeas In particular, $ \wt{\tau} \df \vr \big(\wW^t_\cd,\weta\big) $ can be viewed as the hitting time of the $\bF^{\wW^t,\weta,\wP}-$adapted \cad increasing process $\big\{\wh{\vth}_s ( \wW^t_\cd) \- \weta \big\}_{s \in [0,\infty)}$ above level $0$. Since $\wW^t$ is a Brownian motion and since $\weta \sim$ {\it unif}\,$(0,1)$ is independent of $\wW^t$ under $\wP$, the $\wP-$augmentation $\bF^{\wW^t,\weta,\wP}$ of $\bF^{\wW^t,\weta}$ is a right-continuous filtration (see e.g. Proposition 2.7.7 of \cite{Kara_Shr_BMSC}). Then the {\it d\'ebut} Theorem renders that $ \wt{\tau} \= \vr \big(\wW^t_\cd,\weta\big) $ is an $\bF^{\wW^t,\weta,\wP}-$stopping time or $\wt{\tau} \ins \cT^\wP_t $. \no {\bf (II.a.2)} Let $\Phi \n : [0,\infty) \ti \OmX \mto [-\infty,\infty] $ and $\U \n : \OmX \mto [-\infty,\infty] $ be two Borel-measurable functions. We claim that \bea \label{Ju01_02} E_\oP \Big[ \b1_{\{\oT < \infty \}} \Phi \big(\oT , \oX_\cd \big) \+ \b1_{\{\oT = \infty \}} \U \big( \oX_\cd \big) \Big] \= E_\wP \Big[ \b1_{\{\wt{\tau} < \infty\}} \Phi \big( t\+\wt{\tau} , \wX_\cd \big) \+ \b1_{\{\wt{\tau} = \infty\}} \U \big( \wX_\cd \big) \Big] , \eea To see this, we first assume that $\U \equiv 0$ and $\Phi$ is a bounded nonnegative function. For any $s \ins [0,\infty) $ and $A \ins \sB(\OmX)$, Definition \ref{def_ocP} (ii), \eqref{J30_02} and \eqref{J30_03} imply that $\oP-$a.s. 
\beas && \hspace{-1.2cm} E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \b1_{\{\oX_\cd \in A\}} \big| \cF^{\oW^t,\oP}_\infty \Big] \= E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \b1_{\{\sX^{t,\bx}_\cd \in A\}} \big| \cF^{\oW^t,\oP}_\infty \Big] \= \b1_{\{\sX^{t,\bx}_\cd \in A\}} E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t,\oP}_\infty \Big] \\ && \hspace{-0.5cm} \= \b1_{\{\oX_\cd \in A\}} E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_\infty \Big] \= \b1_{\{\oX_\cd \in A\}} E_\oP \Big[ \b1_{\{\oT \in [t,t+s]\}} \big| \cF^{\oW^t}_s \Big] \= \b1_{\{\oX_\cd \in A\}} \wh{\vth}_s ( \oW^t_\cd) \= \int_{[0,\infty)} \b1_{\{r \le s\}} \b1_{\{\oX_\cd \in A\}} \wh{\vth} (dr ,\oW^t_\cd) , \eeas where $\wh{\vth} (ds ,\o_0)$ denotes the measure on $[0,\infty)$ with cumulative distribution function $ \wh{\vth} ([0,s] ,\o_0) \= \wh{\vth}_s (\o_0) $, $\fa (s,\o_0) \ins [0,\infty) \ti \O_0 $. So all measurable rectangles $ [t,t\+s] \ti A $ with $s \ins [0,\infty)$ and $A \ins \sB(\OmX)$, which form a Pi system generating $ \sB[t,\infty) \oti \sB(\OmX ) $, are included in the Lambda system \beas \L \df \Big\{ \cD \ins \sB[t,\infty) \oti \sB(\OmX ) \n : E_\oP \Big[ \b1_{\{(\oT , \oX_\cd) \in \cD \}} \big| \cF^{\oW^t,\oP}_\infty \Big] \= \int_{[0,\infty)} \b1_{\{(t+r, \oX_\cd) \in \cD \}} \wh{\vth} (dr ,\oW^t_\cd) , \; \hb{$\oP-$a.s.} \Big\} . \eeas Then Dynkin's Theorem shows that $ \L \= \sB[t,\infty) \oti \sB(\OmX ) $. Using the standard approximation argument and the ``change-of-variable'' formula (see e.g. Proposition 0.4.9 of \cite{revuz_yor}) yields that $\oP-$a.s. \bea \label{Ju02_01} E_\oP \Big[ \b1_{\{\oT < \infty \}}\Phi (\oT , \oX_\cd) \big| \cF^{\oW^t,\oP}_\infty \Big] \= \int_{[0,\infty)} \Phi (t \+ r, \oX_\cd) \, \wh{\vth} (dr ,\oW^t_\cd) \= \int_0^1 \b1_{\{\vr(\oW^t_\cd,\l) < \infty\}} \Phi ( t \+ \vr(\oW^t_\cd,\l) , \oX_\cd) d \l .
\eea Since the joint $\oP-$distribution of $ (\oW_\cd,\oX_\cd)$ is equal to the joint $\wP-$distribution of $\big(\wW,\wX\big)$ and since $\weta \sim$ {\it unif}\,$(0,1)$ is independent of $\cF^{\wW^t,\wP}_\infty$ under $\wP$, taking the expectation $E_\oP[\cd]$ in \eqref{Ju02_01}, we see from Fubini Theorem that \bea \q && \hspace{-1.5cm} E_\oP \Big[ \b1_{\{\oT < \infty \}} \Phi \big(\oT , \oX_\cd \big) \Big] \= \int_0^1 E_\oP \Big[ \b1_{\{\vr(\oW^t_\cd,\l) < \infty\}} \Phi \big( t \+ \vr(\oW^t_\cd,\l) , \oX_\cd \big) \Big] d \l \= \int_0^1 E_\wP \Big[ \b1_{\{\vr(\wW^t_\cd,\l) < \infty\}} \Phi \big( t \+ \vr(\wW^t_\cd,\l) , \wX_\cd \big) \Big] d \l \nonumber \\ && \= \int_0^1 E_\wP \Big[ \b1_{\big\{\vr(\wW^t_\cd,\weta) < \infty\big\}} \Phi \big( t \+ \vr(\wW^t_\cd,\weta), \wX_\cd \big) \Big|\weta \= \l \Big] d \l \= E_\wP \Big[ \b1_{\big\{\vr(\wW^t_\cd,\weta) < \infty\big\}} \Phi \big( t \+ \vr(\wW^t_\cd,\weta), \wX_\cd \big) \Big] . \label{Ju02_03} \eea Next, let $\Phi $ be a general $[-\infty,\infty]-$valued, Borel-measurable function on $[0,\infty) \ti \OmX$ and let $\U $ be a general $[-\infty,\infty]-$valued, Borel-measurable function on $ \OmX$. We set $\oxi \df \b1_{\{\oT < \infty \}} \Phi \big(\oT , \oX_\cd \big) \+ \b1_{\{\oT = \infty \}} \U \big( \oX_\cd \big) $ and $\wt{\xi} \df \b1_{\{\wt{\tau} < \infty\}} \Phi \big(t \+ \wt{\tau} , \wX_\cd \big) \+\b1_{\{\wt{\tau} = \infty\}} \U \big( \wX_\cd \big)$. Given $n \ins \hN$, applying \eqref{Ju02_03} to $ \Phi^\pm \ld n $ and to $ \U^\pm \ld n $ respectively yields that \beas E_\oP \Big[ \b1_{\{\oT < \infty \}} n \ld \Phi^\pm \big(\oT , \oX_\cd \big) \Big] \= E_\wP \Big[ \b1_{\{\wt{\tau} < \infty\}} n \ld \Phi^\pm \big(t\+\wt{\tau} , \wX_\cd \big)\Big] \; \hb{ and } \; E_\oP \Big[ \b1_{\{\oT < \infty \}} n \ld \U^\pm \big( \oX_\cd \big) \Big] \= E_\wP \Big[ \b1_{\{\wt{\tau} < \infty\}} n \ld \U^\pm \big( \wX_\cd \big)\Big] . 
\eeas Subtracting the latter from $ E_\oP \big[ n \ld \U^\pm ( \oX_\cd ) \big] \= E_\wP \big[ n \ld \U^\pm ( \wX_\cd ) \big] $ and then adding to the former render that $ E_\oP \big[ \, \oxi^\pm \land n \big] \= E_\wP \big[ \, \wt{\xi}^\pm \ld n \big] $. As $n \nto \infty$, the monotone convergence theorem gives that $ E_\oP \big[ \, \oxi^\pm \big] \= E_\wP \big[ \, \wt{\xi}^\pm \big] $ and \eqref{Ju01_02} thus holds. \no {\bf (II.a.3)} Let $i \ins \hN$. Since the function $\fl (s,\omX) \df \omX(s \ld \cd) $ is continuous in $ (s,\omX) \ins [0,\infty) \ti \OmX $, the measurability of the functions $f, \pi, g_i ,h_i $ implies that $(\ff,\fg_i,\fh_i) (r,\omX) \df \b1_{\{r \ge t\}} (f,g_i,h_i) \big(r, \fl (r,\omX) \big) \= \b1_{\{r \ge t\}} (f,g_i,h_i) \big( r, \omX(r \ld \cd) \big) $ are $[-\infty,\infty]-$valued Borel-measurable functions and $ \varpi (r,\omX) \df \pi \big( r, \omX(r \ld \cd) \big) $, $\fa (r,\omX) \ins [0,\infty) \ti \OmX$ is a $\hR-$valued Borel-measurable function. Then $ \big(\wh{\ff},\wh{\fg}_i,\wh{\fh}_i\big) (s,r,\omX) \df \b1_{\{ r \le s \}} (\ff,\fg_i,\fh_i) (r,\omX) $, $ \fa (s,r,\omX) \ins [0,\infty) \ti [0,\infty) \ti \OmX$ are also $[-\infty,\infty]-$valued Borel-measurable functions and it follows that \beas && \hspace{0.5cm} \big(\U_f, \U_{g_i}, \U_{h_i} \big) ( \omX) \df \int_t^\infty ( f,g_i,h_i) \big( r,\omX(r \ld \cd) \big) dr , \q \fa \omX \ins \OmX , \\ && \hspace{-0.5cm} \Phi_{f,\pi} (s,\omX) \df \int_t^{s \vee t} \n f \big( r,\omX(r \ld \cd) \big) dr \+ \pi \big( s,\omX(s \ld \cd)\big) , \q \fa (s,\omX) \ins [0,\infty) \ti \OmX , \\ && \big( \Phi_{g_i}, \Phi_{h_i} \big) (s,\omX) \df \int_t^{s \vee t} \n ( g_i , h_i ) \big( r,\omX(r \ld \cd) \big) dr , \q \fa (s,\omX) \ins [0,\infty) \ti \OmX \eeas are all $[-\infty,\infty]-$valued Borel-measurable functions.
Now, taking $(\Phi,\U) \= (\Phi_{g_i},\U_{g_i}) $ and $(\Phi,\U) \= (\Phi_{h_i},\U_{h_i}) $ respectively in \eqref{Ju01_02} and using the integration convention yield that \beas \q E_\wP \Big[ \int_t^{t+\wt{\tau}} g_i ( r,\wX_{r \land \cd} ) dr \Big] \= E_\oP \Big[ \int_t^\oT g_i (r,\oX_{r \land \cd} ) dr \Big] \ls y_i , \q E_\wP \Big[ \int_t^{t+\wt{\tau}} h_i ( r,\wX_{r \land \cd} ) dr \Big] \= E_\oP \Big[ \int_t^\oT h_i (r,\oX_{r \land \cd} ) dr \Big] \= z_i , \eeas which means $ \wt{\tau} \ins \cT^{\wP}_{t,\bx}(y,z) $. To wit, $ \wP \ins \wcP_{t,\bx}(y,z) $. Similarly, applying \eqref{Ju01_02} to $(\Phi,\U) \= (\Phi_{f,\pi},\U_f) $, we obtain \beas E_\oP \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big( \oT, \oX_{\oT \land \cd} \big) \bigg] \= E_\wP \bigg[ \int_t^{t+\wt{\tau}} f \big( r,\wX_{r \land \cd} \big) dr \+ \b1_{\{\wt{\tau} < \infty\}} \pi \big( t\+\wt{\tau},\wX_{(t+\wt{\tau}) \land \cd} \big) \bigg] \ls \wt{V}(t,\bx,y,z) . \eeas Taking supremum over $\oP \ins \ocP_{t,\bx}(y,z)$ eventually leads to $ \oV (t,\bx,y,z) \ls \wt{V}(t,\bx,y,z)$. \no {\bf (II.b)} It remains to show that $\wV(t,\bx,y,z) \ls V(t,\bx,y,z)$: Fix $\wP \ins \wcP_{t,\bx}(y,z) $ and $ \wtau \ins \cT^\wP_{t,\bx}(y,z) $. Define a mapping $\Psi \n : \cQ \mto \wO$ by $ \Psi (\o) \df \big( B_{t \vee \cd} (\o) \- B_t(\o), X^{t,\bx}_\cd (\o) ,\eta(\o) \big) \ins \wO $, $ \fa \o \ins \cQ $. So for any $\o \ins \cQ$, \beas \wX_s \big(\Psi (\o)\big) \= \bx(s), ~ \fa s \ins [0,t] \aand \wXi^t_\cd \big(\Psi (\o)\big) \= \big( B^t_\cd (\o) , X^{t,\bx}_{t+\cd} (\o) \big) ,~ \weta \big(\Psi(\o)\big) \= \eta(\o) . 
\eeas For $s \ins [0,\infty] $, since it holds for any $r \ins [0,s] \Cp \hR $, $\cE \ins \sB(\hR^d)$ and $\cE' \ins \sB[0,1]$ that $ \Psi^{-1} \big( (\wW^t_r)^{-1} ( \cE ) \Cp \weta^{-1} ( \cE' ) \big) \= \big\{ \wW^t_r (\Psi) \ins \cE \big\} \Cp \big\{ \weta (\Psi) \ins \cE' \big\} \= \big\{ B^t_r \ins \cE \big\} \Cp \big\{ \eta \ins \cE' \big\} \ins \cF^{B^t,\eta}_s $, one can deduce that $ \Psi^{-1} \big( \cF^{\wW^t}_s \big) \sb \cF^{B^t}_s $ and $ \Psi^{-1} \big( \cF^{\wW^t,\weta}_s \big) \sb \cF^{B^t,\eta}_s $. As the $\fp-$joint distribution of $(B^t,\eta)$ is equal to the $\wP-$joint distribution of $(\wW^t,\weta)$, we see that for $\big\{(s_i,\cE_i)\big\}^n_{i=1} \sb [0,\infty) \ti \sB(\hR^d)$ and $\cE' \ins \sB[0,1]$ \beas \hspace{-0.5cm} \big(\fp \nci \Psi^{-1} \big) \big( (\wW^t_{s_1})^{-1} ( \cE_1 ) \Cp \cds \Cp (\wW^t_{s_n})^{-1} ( \cE_n ) \Cp \weta^{-1} ( \cE' ) \big) \= \fp \big\{ B^t_{s_1} \ins \cE_1 , \cds \n , B^t_{s_n} \ins \cE_n , \eta \ins \cE' \big\} \= \wP \Big\{ \wW^t_{s_1} \ins \cE_1 , \cds \n , \wW^t_{s_n} \ins \cE_n , \weta \ins \cE' \Big\} . \eeas An application of Dynkin's Theorem yields that $ \big(\fp \nci \Psi^{-1} \big) \big(\wA\big) \= \wP\big(\wA\big) $, $\fa \wA \ins \cF^{\wW^t,\weta}_\infty$, then one can further deduce that \bea \label{Oct01_07} \Psi^{-1} \big(\cF^{\wW^t,\wP}_s\big) \sb \cF^{B^t,\fp}_s, ~ \, \Psi^{-1} \big(\cF^{\wW^t,\weta,\wP}_s\big) \sb \cF^{B^t,\eta,\fp}_s , ~ \fa s \ins [0,\infty] \aand \big(\fp \nci \Psi^{-1}\big) \big(\wA\big) \= \wP\big(\wA\big) , ~ \; \fa \wA \ins \cF^{\wW^t,\weta,\wP}_\infty . \qq \eea It follows that $\tau (\o) \df \wtau\big(\Psi (\o)\big)$, $\o \ins \cQ$ is a $[0,\infty]-$valued $\bF^{B^t,\eta,\fp}-$stopping time or $\tau \ins \cS^t$.
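The stopping-time property of $\tau$ can be verified in one line from the pullback relations (a side remark, with notation as above): since $\wtau$ is an $\bF^{\wW^t,\weta,\wP}-$stopping time, \eqref{Oct01_07} shows that for each $s \ins [0,\infty)$ \beas \{\tau \ls s\} \= \Psi^{-1} \big( \big\{ \wtau \ls s \big\} \big) \ins \Psi^{-1} \big( \cF^{\wW^t,\weta,\wP}_s \big) \sb \cF^{B^t,\eta,\fp}_s . \eeas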
Using similar arguments to those leading to \eqref{Oct12_63}, we can derive from \eqref{Oct01_07} that \if{0} Since $\wM_s \df \int_0^s \n \si(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} ) d\wW^t_r$, $s \ins [0,\infty)$ is a martingale with respect to $ \big( \bF^{\wW^t,\wP} , \wP \big) $, there is a sequence of $ \hR^{l \times d}-$valued, $ \bF^{\wW^t,\wP} -$simple processes $ \Big\{\cH^n_s \= \sum_{i = 1}^{\ell_n} \xi^n_i \, \b1_{ \{s \in (s^n_i, s^n_{i+1}] \} } , \, s \ins [0,\infty) \Big\}_{n \in \hN}$ \big(with $t \= s^n_1 \< \cds \< s^n_{\ell_n+1} \< \infty $ and $\xi^n_i \ins \cF^{\wW^t,\wP}_{ s^n_i}$ for $i \=1 ,\cds \n , \ell_n $\big) such that \beas \wP \n - \n \lmt{n \to \infty} \int_0^\infty \big| \cH^n_r \- \si\big(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} \big) \big|^2 dr \= 0 \; \hb{ and } \; \wP \n - \n \lmt{n \to \infty} \, \underset{s \in [0,\infty)}{\sup} \big| \wM^n_s - \wM_s \big| \= 0 , \eeas where $ \wM^n_s \df \int_0^s \n \cH^n_r d\wW^t_r $. \eqref{Oct01_07} shows that \bea && \fp \n - \n \lmt{n \to \infty} \int_0^\infty \Big| \cH^n_r (\Psi) \- \si\big(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) \Big|^2 dr \= 0 , \qq \label{M24_00a} \\ && \hb{and} \hspace{1cm} \fp \n - \n \lmt{n \to \infty} \, \underset{s \in [0,\infty)}{\sup} \big| (\wM^n_s \- \wM_s ) \big(\Psi \big) \big| =0 , \label{M24_00b} \eea where $ \wM^n_s (\Psi) \= \sum^{\ell_n}_{i = 1 } \xi^n_i (\Psi) \big( \wW^t_{s \land s^n_{i+1}} (\Psi) \- \wW^t_{s \land s^n_i} (\Psi)\big) \= \int_0^s \cH^n_r (\Psi) dB^t_r$. 
For $n \ins \hN$, as $ \xi^n_i (\Psi) \ins \cF^{B^t,\fp}_{ s^n_i} $ for $i \= 1,\cds \n ,\ell_n$ by \eqref{Oct01_07}, applying Proposition 3.2.26 of \cite{Kara_Shr_BMSC} and using \eqref{M24_00a} yield \beas 0 = \fp \n - \n \lmt{n \to \infty} \, \underset{s \in [s,\infty)}{\sup} \bigg| \int_0^s \cH^n_r (\Psi) dB^t_r - \int_0^s \si\big(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) dB^t_r \bigg| \, , \eeas which together with \eqref{M24_00b} gives that \fi $ \fp \big\{ \big( \int_0^s \n \si(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} ) d\wW^t_r \big) (\Psi) \= \int_0^s \si\big(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) dB^t_r , ~ s \ins [0,\infty) \big\} \= 1 $ and thus that \fpas ~ \beas \cX^{t,\bx}_s (\Psi) & \tn \= & \tn \bx(t) \+ \int_0^{s-t} \n b\big(t\+r, \cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) dr \+ \Big( \int_0^{s-t} \n \si(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} ) d\wW^t_r \Big) (\Psi) \\ & \tn \= & \tn \bx(t) \+ \int_0^{s-t} \n b\big(t\+r, \cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) dr \+ \int_0^{s-t} \n \si\big(t\+r,\cX^{t,\bx}_{(t+r) \land \cd} (\Psi)\big) dB^t_r \\ & \tn \= & \tn \bx(t) \+ \int_t^s \n b\big(r, \cX^{t,\bx}_{r \land \cd} (\Psi)\big) dr \+ \int_t^s \n \si\big(r,\cX^{t,\bx}_{r \land \cd} (\Psi)\big) dB_r , \q \fa s \in [t,\infty) . \eeas So $ \big\{ \cX^{t,\bx}_s (\Psi) \big\}_{s \in [0,\infty)}$ is the unique solution of \eqref{FSDE1} or $ \fp \big\{ X^{t,\bx}_s \= \cX^{t,\bx}_s (\Psi) , ~ \fa s \ins [0,\infty) \big\} \= 1 $. 
For any $i \ins \hN$, \eqref{Oct01_07} implies that \bea \hspace{-1.2cm} E_\fp \Big[ \int_t^{t+\tau} g_i (r,X^{t,\bx}_{r \land \cd} ) dr \Big] \= E_\fp \Big[ \int_t^{t+\wtau (\Psi )} g_i \big( r,\cX^{t,\bx}_{r \land \cd} (\Psi) \big) dr\Big] \= E_\wP \Big[ \int_t^{t+\wtau} g_i (r,\cX^{t,\bx}_{r \land \cd} ) dr \Big] \= E_\wP \Big[ \int_t^{t+\wtau} g_i (r,\wX_{r \land \cd} ) dr \Big] \ls y_i \label{111920_17} \eea and similarly $ E_\fp \big[ \int_t^{t+\tau} h_i (r,X^{t,\bx}_{r \land \cd} ) dr \big] \= E_\wP \big[ \int_t^{t+\wtau} h_i (r,\wX_{r \land \cd} ) dr \big] \= z_i$, which shows $ \tau \ins \cS_{t,\bx}(y,z)$. By analogy with \eqref{111920_17}, \beas E_\wP \bigg[ \int_t^{t+\wtau} f \big(r,\wX_{r \land \cd} \big) dr \+ \b1_{\{\wtau < \infty\}} \pi \big( t\+\wtau,\wX_{(t+\wtau) \land \cd} \big) \bigg] \= E_\fp \Big[ \int_t^{t+\tau} f \big( r,X^{t,\bx}_{r \land \cd} \big) dr \+ \b1_{\{\tau < \infty\}} \pi \big( t\+\tau,X^{t,\bx}_{(t+\tau) \land \cd} \big) \Big] \ls V(t,\bx,y,z). \eeas Letting $ \wtau $ vary over $ \cT^\wP_{t,\bx}(y,z) $ and then letting $\wP$ run through $\wcP_{t,\bx}(y,z)$ yield $ \wV(t,\bx,y,z) \ls V(t,\bx,y,z)$. \qed \section{Martingale-problem Formulation and Measurability of $V$} \label{sec_Mart_prob} In this section, using the martingale-problem formulation of stochastic differential equations (see Stroock \& Varadhan \cite{Stroock_Varadhan}), we first characterize the probability class $ \ocP_{t,\bx} $ via the stochastic behaviors of the canonical coordinates $(\oW,\oX,\oT)$. This will enable us to analyze the measurability of the value function $\oV$ of our optimal stopping problem with expectation constraints and thus to study the dynamic programming principle of $\oV$ in the next section. Set $ \hQ^{2,<} \df \big\{ (s,r) \ins \hQ^2 \n : 0 \ls s \< r \big\} $ and denote by $\sP(\hR^{d+l})$ the set of all polynomial functions on $\hR^{d+l}$ with $\hQ-$valued coefficients. Let $t \ins [0,\infty)$.
For any $\vf \ins \sP (\hR^{d+l})$, we define the process \beas \oM^t_s(\vf) \df \vf \big(\oXi^t_s \big) \- \n \int_0^s \ol{b} \big(t\+r,\oX_{(t+r) \land \cd} \big) \n \cd \n D \vf \big( \oXi^t_r \big) dr \- \frac12 \int_0^s \ol{\si} \, \ol{\si}^T \big(t\+r, \oX_{(t+r) \land \cd} \big) \n : \n D^2 \vf \big( \oXi^t_r \big) dr , \q s \ins [0,\infty) , \eeas where $ \dis \ol{b} (r,\bx) \df \binom{0}{b(r,\bx)} \ins \hR^{d+l} $ and $ \dis \ol{\si} (r,\bx) \df \binom{ I_{d \times d}}{ \si(r,\bx)} \ins \hR^{ (d + l) \times d} $, $ \fa (r,\bx) \ins (0,\infty) \ti \OmX $. For any $n \ins \hN$, let us also set $\otau^t_n \df \inf\big\{s \ins [0,\infty) \n : |\oXi^t_s| \gs n \big\} \ld n $, which is an $\bF^{\oXi^t}-$stopping time. We can characterize the probability class $\ocP_{t,\bx}$ by a martingale-problem formulation as follows. \begin{prop} \label{prop_Ptx_char} For any $(t,\bx) \ins [0,\infty) \ti \OmX $, the probability class $\ocP_{t,\bx}$ is the intersection of the following two subsets of $\fP(\oO)$: \no 1\) $ \ocP^1_t \df \Big\{ \oP \ins \fP(\oO) \n : E_\oP \Big[ \Big( \oM^t_{\otau^t_n \land r} (\vf ) \- \oM^t_{\otau^t_n \land s} (\vf ) \Big) \underset{i=1}{\overset{k}{\prod}} \big( \b1_{ \{\oXi^t_{s_i } \in \cO_i\} \cap \{\oT \le t + s_i \} } \+ \b1_{ \{\oXi^t_{s_i } \in \cO'_i\} \cap \{\oT > t + s_i \} } \big) \Big] \= 0 , ~ \fa \vf \ins \sP(\hR^{d+l}) ; \,\fa n \ins \hN ; \,\fa (s,r) \ins \hQ^{2,<} ; \,\fa \{(s_i,\cO_i,\cO'_i)\}^k_{i=1} \sb \big(\hQ \Cp [0,s]\big) \ti \sO (\hR^{d+l}) \ti \sO (\hR^{d+l}) \Big\}$. \no 2\) $ \ocP^2_{t,\bx} \df \big\{ \oP \ins \fP(\oO) \n : \oP \{ \oT \gs t ; \oX_s \= \bx(s), \; \fa s \ins [0,t] \} \= 1 \big\}$. \end{prop} \no {\bf Proof of Proposition \ref{prop_Ptx_char}:} Fix $(t,\bx) \ins [0,\infty) \ti \OmX$. \no {\bf 1)} Let $\oP \ins \ocP_{t,\bx}$, which is clearly of $\ocP^2_{t,\bx}$.
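For orientation, let us record where the martingale property behind $\ocP^1_t$ comes from (a standard consequence of It\^o's formula; the precise statement invoked below is Lemma \ref{lem_071220}): for any $\oP \ins \ocP_{t,\bx}$, Definition \ref{def_ocP} (i), (ii) give $ d \oXi^t_s \= \ol{b} \big(t\+s, \oX_{(t+s) \land \cd} \big) ds \+ \ol{\si} \big(t\+s, \oX_{(t+s) \land \cd} \big) d \oW^t_s $, so It\^o's formula yields \beas \oM^t_s (\vf) \= \vf \big( \oXi^t_0 \big) \+ \int_0^s \Big( \ol{\si}^T \big(t\+r, \oX_{(t+r) \land \cd} \big) D \vf \big( \oXi^t_r \big) \Big) \n \cd \n d \oW^t_r , \q s \ins [0,\infty) , \eeas a local martingale; the stopped process $ \oM^t_{\otau^t_n \land \cd} (\vf) $ is then a true martingale, its integrand being bounded on $[0,\otau^t_n]$ under the standing growth conditions on $b$ and $\si$.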
For any $\vf \ins \sP(\hR^{d+l}) $, $n \ins \hN$, $(s,r) \ins \hQ^{2,<} $ and $ \{(s_i,\cO_i,\cO'_i)\}^k_{i=1} \sb \big(\hQ \cap [0,s]\big) \ti \sO (\hR^{d+l}) \ti \sO (\hR^{d+l})$, an application of Lemma \ref{lem_071220} shows that $ E_\oP \Big[ \big( \oM^t_{\otau^t_n \land r} (\vf ) \- \oM^t_{\otau^t_n \land s} (\vf ) \big) \underset{i=1}{\overset{k}{\prod}} \big( \b1_{ \{ \oXi^t_{s_i } \in \cO_i \} \cap \{\oT \le t+s_i \} } \+ \b1_{ \{ \oXi^t_{s_i } \in \cO'_i \} \cap \{\oT > t+s_i \} } \big) \Big] \= 0 $. Namely, $\oP $ is also of $ \ocP^1_t $. \no {\bf 2)} Next, let $\oP \ins \ocP^1_t \Cp \ocP^2_{t,\bx} $. It is clear that $ \oP \{ \oT \gs t \} \= 1 $ or $\oP$ satisfies Definition \ref{def_ocP} (iii). Set $ \ocN^1_{\n X} \df \{ \oo \ins \oO \n : \oX_s (\oo) \nne \bx(s) \hb{ for some } s \ins [0,t] \} \ins \sN_\oP\big(\cF^\oX_t\big) $. \no {\bf 2a)} Let $\vf \ins \sP(\hR^{d+l})$ and $n \ins \hN$. We define $\oX^{t,\bx}_s \df \b1_{\{s \in [0,t]\}} \bx(s) \+ \b1_{\{s \in (t,\infty)\}} \big(\oX_s \- \oX_t \+ \bx(t) \big)$, $s \ins [0,\infty)$. It is a continuous process such that $\big\{\oX^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)}$ is $\bF^{\oX^t}-$adapted. Then $ \oXi^{t,\bx}_s \df \big(\oW^t_s , \oX^{t,\bx}_{t+s}\big) $, $s \ins [0,\infty)$ and \bea \label{091920_15} \oM^{t,\bx}_s(\vf) \df \vf \big(\oXi^{t,\bx}_s \big) \- \n \int_0^s \ol{b} \big(t\+r,\oX^{t,\bx}_{(t+r) \land \cd} \big) \n \cd \n D \vf \big( \oXi^{t,\bx}_r \big) dr \- \frac12 \int_0^s \ol{\si} \, \ol{\si}^T \big(t\+r, \oX^{t,\bx}_{(t+r) \land \cd} \big) \n : \n D^2 \vf \big( \oXi^{t,\bx}_r \big) dr , \q s \ins [0,\infty) \eea are $\bF^{\oXi^t}-$adapted continuous processes, and $\otau^{t,\bx}_n \df \inf\big\{s \ins [0,\infty) \n : \big|\oXi^{t,\bx}_s\big| \gs n \big\} \ld n $ is an $\bF^{\oXi^t}-$stopping time. 
Clearly, \bea \label{091920_11} \oX^{t,\bx}_s(\oo) \= \oX_s(\oo), \q \oM^{t,\bx}_s (\vf) (\oo) \= \oM^t_s (\vf) (\oo) \aand \otau^{t,\bx}_n(\oo) \= \otau^t_n(\oo) , \q \fa (s,\oo) \ins [0,\infty) \ti \big(\ocN^1_{\n X}\big)^c . \eea For any $ (s,r) \ins \hQ^{2,<} $ and $ \{(s_i,\cO_i,\cO'_i)\}^k_{i=1} \sb \big(\hQ \Cp [0,s]\big) \ti \sO (\hR^{d+l}) \ti \sO (\hR^{d+l}) $, we see from $\oP \ins \ocP^1_t $ and \eqref{091920_11} that \beas && \hspace{-1.5cm} E_\oP \Big[ \big( \oM^{t,\bx}_{\otau^{t,\bx}_n \land r} (\vf ) \- \oM^{t,\bx}_{\otau^{t,\bx}_n \land s} (\vf ) \big) \underset{i=1}{\overset{k}{\prod}} \big( \b1_{ \{\oXi^t_{s_i } \in \cO_i\} \cap \{\oT \in [t, t + s_i ] \} } \+ \b1_{ \{\oXi^t_{s_i } \in \cO'_i\} \cap \{\oT \in [t, t + s_i ]^c \} } \big) \Big] \\ && \= E_\oP \Big[ \big( \oM^t_{\otau^t_n \land r} (\vf ) \- \oM^t_{\otau^t_n \land s} (\vf ) \big) \underset{i=1}{\overset{k}{\prod}} \big( \b1_{ \{\oXi^t_{s_i } \in \cO_i\} \cap \{\oT \in [t, t + s_i ] \} } \+ \b1_{ \{\oXi^t_{s_i } \in \cO'_i\} \cap \{\oT \in [t, t + s_i ]^c \} } \big) \Big] \= 0 . \eeas Dynkin's Theorem and \eqref{090520_23} render that $ E_\oP \Big[ \Big( \oM^{t,\bx}_{\otau^{t,\bx}_n \land r} (\vf ) \- \oM^{t,\bx}_{\otau^{t,\bx}_n \land s} (\vf ) \Big) \b1_{\oA} \Big] \= 0$, $ \fa \oA \ins \ocG^t_{s} $, which further implies that $\big\{\oM^{t,\bx}_{\otau^{t,\bx}_n \land s} (\vf ) \big\}_{s \in [0,\infty)}$ is an $\big(\obG^{t,\oP},\oP\big)-$martingale. Since $\lmtu{n \to \infty} \otau^{t,\bx}_n \= \infty$, $\big\{\oM^{t,\bx}_s (\vf ) \big\}_{s \in [0,\infty)}$ is actually an $\big(\obG^{t,\oP},\oP\big)-$local martingale. \no {\bf 2b)} Given $i,j \= 1,\cds \n ,d$, set $ \phi_i(w,x) \df w_i $ and $\phi_{ij}(w,x) \df w_i w_j $ for any $w \= (w_1,\cds \n ,w_d) \ins \hR^d$ and $ x \ins \hR^l$. 
By Part (2a), the processes $ \oM^{t,\bx}_s(\phi_i) \= (\oW^t_s)^{(i)} $, $s \ins [0,\infty)$ and $ \oM^{t,\bx}_s(\phi_{ij}) \= (\oW^t_s)^{(i)} (\oW^t_s)^{(j)} \- \d_{ij} s $, $s \ins [0,\infty)$ are all $ \big(\obG^{t,\oP},\oP\big)-$local martingales for $i,j \= 1,\cds \n ,d$, where $\oW^t_s \= \big( (\oW^t_s)^{(1)},\cds \n , (\oW^t_s)^{(d)} \big)$. L\'evy's characterization theorem shows that $\oW^t$ is a $d-$dimensional standard Brownian motion with respect to the filtration $ \obG^{t,\oP} $ and is thus a Brownian motion with respect to the filtration $ \obF^t $. So $\oP$ satisfies Definition \ref{def_ocP} (i). On the other hand, we simply denote $\ol{b}^{t,\bx}_s \df \ol{b} \big(t\+s, \oX^{t,\bx}_{(t+s) \land \cd} \big)$ and $ \ol{\a}^{t,\bx}_s \df \ol{\si} \, \ol{\si}^T \big(t\+s, \oX^{t,\bx}_{(t+s) \land \cd} \big) $, $s \ins [0,\infty)$. Given $i,j \= 1,\cds \n ,d\+l$, we set $ \psi_i(\nxi) \df \nxi^{(i)} $ and $\psi_{ij}(\nxi) \df \nxi^{(i)} \nxi^{(j)} $ for any $ \nxi \= (\nxi_1 ,\cds \n , \nxi_{d + l} ) \ins \hR^{d+l}$. By Part (2a), the processes $ \oM^{t,\bx}_s(\psi_i) \= \big( \oXi^{t,\bx}_s \big)^{(i)} \- \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr$, $s \ins [0,\infty)$ and $ \oM^{t,\bx}_s(\psi_{ij}) \= \big( \oXi^{t,\bx}_s \big)^{(i)} \big( \oXi^{t,\bx}_s \big)^{(j)} \- \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(i)} \big( \oXi^{t,\bx}_r \big)^{(j)} dr \- \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(j)} \big( \oXi^{t,\bx}_r \big)^{(i)} dr \- \int_0^s \big( \ol{\a}^{t,\bx}_r \big)_{ij} dr $, $ s \ins [0,\infty)$ are $\big(\obG^{t,\oP},\oP\big)-$local martingales and thus $ \big(\bF^{\oXi^t,\oP},\oP\big)-$local martingales. Since the {\it integration by part} formula renders that $\oP-$a.s. 
\beas && \hspace{-1.5cm} \big( \oXi^{t,\bx}_s \big)^{(i)} \big( \oXi^{t,\bx}_s \big)^{(j)} \- \oM^{t,\bx}_s(\psi_i) \oM^{t,\bx}_s(\psi_j) \= \oM^{t,\bx}_s(\psi_i) \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(j)} dr \+\oM^{t,\bx}_s(\psi_j) \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr \+ \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(j)} dr \\ && \hspace{-0.7cm} \= \int_0^s \oM^{t,\bx}_r(\psi_i) \big( \ol{b}^{t,\bx}_r \big)^{(j)} dr \+ \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(j)} dr' \Big) d \oM^{t,\bx}_r(\psi_i) \+ \int_0^s \oM^{t,\bx}_r(\psi_j) \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr \+ \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(i)} dr' \Big) d \oM^{t,\bx}_r(\psi_j) \\ && \hspace{-0.7cm} \q + \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(i)} dr' \Big) \big( \ol{b}^{t,\bx}_r \big)^{(j)} dr \+ \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(j)} dr' \Big) \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr \\ && \hspace{-0.7cm} \= \int_0^s \Big[ \big( \oXi^{t,\bx}_r \big)^{(i)} \big( \ol{b}^{t,\bx}_r \big)^{(j)} \+ \big( \oXi^{t,\bx}_r \big)^{(j)} \big( \ol{b}^{t,\bx}_r \big)^{(i)} \Big] dr \+ \int_0^s \n \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(j)} dr' \Big) d \oM^{t,\bx}_r(\psi_i) \+ \int_0^s \n \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(i)} dr' \Big) d \oM^{t,\bx}_r(\psi_j) , \eeas \if{0} Note \beas \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr \int_0^s \big( \ol{b}^{t,\bx}_r \big)^{(j)} \= \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(i)} dr' \Big) \big( \ol{b}^{t,\bx}_r \big)^{(j)} dr \+ \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(j)} dr' \Big) \big( \ol{b}^{t,\bx}_r \big)^{(i)} dr , \q s \ins [0,\infty) .
\eeas \fi $s \ins [0,\infty) $, we obtain that \beas \oM^{t,\bx}_s(\psi_i) \oM^{t,\bx}_s(\psi_j) \- \int_0^s \big( \ol{\a}^{t,\bx}_r \big)_{ij} dr \= \oM^{t,\bx}_s(\psi_{ij}) \- \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(j)} dr' \Big) d \oM^{t,\bx}_r(\psi_i) \- \int_0^s \Big( \int_0^r \big( \ol{b}^{t,\bx}_{r'} \big)^{(i)} dr' \Big) d \oM^{t,\bx}_r(\psi_j) , \eeas $s \ins [0,\infty)$ is also an $ \big(\bF^{\oXi^t,\oP},\oP\big)-$local martingale. Thus the quadratic variation of the $ \big(\bF^{\oXi^t,\oP},\oP\big)-$local martingale $ \osM^{t,\bx}_s \df \big( \oM^{t,\bx}_s(\psi_1), \cds \n , \oM^{t,\bx}_s(\psi_{d+l})\big) \= \oXi^{t,\bx}_s \- \int_0^s \ol{b}^{t,\bx}_r dr $, $ s \ins [0,\infty)$ is $ \big\lan \osM^{t,\bx}_s , \osM^{t,\bx}_s \big\ran \= \int_0^s \ol{\a}^{t,\bx}_r dr , \q s \ins [0,\infty) $. Let $ n \ins \hN$, $a \ins \hR^l$ and set $ \dis \sH_s \df \binom{- \b1_{\{s \le \otau^{t,\bx}_n\}}\si^T \big(t\+s, \oX^{t,\bx}_{(t+s) \land \cd} \big) a }{a}$, $s \ins [0,\infty)$. The stochastic exponential of the $ \big(\bF^{\oXi^t,\oP},\oP\big)-$martingale $\big\{\int_0^{\otau^{t,\bx}_n \land s} \sH_r \n \cd \n d \osM^{t,\bx}_r\big\}_{s \in [0,\infty)}$ is \beas && \hspace{-1.5cm} \exp\bigg\{\int_0^{\otau^{t,\bx}_n \land s} \sH_r \n \cd \n d \osM^{t,\bx}_r \- \frac12 \int_0^{\otau^{t,\bx}_n \land s} \sH^T_r \ol{\a}^{t,\bx}_r \sH_r dr \bigg\} \= \exp\bigg\{\int_0^{\otau^{t,\bx}_n \land s} \sH_r \n \cd \n d \oXi^{t,\bx}_r \- \int_0^{\otau^{t,\bx}_n \land s} \sH_r \n \cd \n \ol{b}^{t,\bx}_r dr \bigg\} \\ && \= \exp\bigg\{ a \n \cd \n \bigg( \oX^{t,\bx}_{t+\otau^{t,\bx}_n \land s} \- \oX^{t,\bx}_t \- \int_0^{\otau^{t,\bx}_n \land s} b \big(t\+r, \oX^{t,\bx}_{(t+r) \land \cd} \big) dr \- \int_0^{\otau^{t,\bx}_n \land s} \si \big(t\+r, \oX^{t,\bx}_{(t+r) \land \cd} \big) d \oW^t_r \bigg) \bigg\} , \q s \ins [0,\infty) . 
\eeas \if{0} We can deduce from \eqref{coeff_cond1} that $\big\{\int_0^{\otau^{t,\bx}_n \land s} \sH_r \n \cd \n d \osM^{t,\bx}_r\big\}_{s \in [0,\infty)}$ is a $ \big(\bF^{\oXi^t,\oP},\oP\big)-$BMO martingale and thus its stochastic exponential is a $ \big(\bF^{\oXi^t,\oP},\oP\big)-$uniformly integrable martingale. \fi Each of these stochastic exponentials is a nonnegative local martingale and hence a supermartingale with expectation at most $1$; applying this to both $a$ and $-a$ and noting that $\cosh \gs 1$, we see that the exponent vanishes $\oP-$a.s. Letting $a$ vary over $\hR^l$ thus yields that $\oP-$a.s., $ \oX^{t,\bx}_{t+\otau^{t,\bx}_n \land s} \= \bx(t) \+ \int_0^{\otau^{t,\bx}_n \land s} b \big(t\+r, \oX^{t,\bx}_{(t+r) \land \cd} \big) dr \+ \int_0^{\otau^{t,\bx}_n \land s} \si \big(t\+r, \oX^{t,\bx}_{(t+r) \land \cd} \big) d \oW^t_r $, $ s \ins [0,\infty) $. Sending $n \nto \infty$ then shows that $ \big\{ \oX^{t,\bx}_s \big\}_{s \in [0,\infty)} $ solves the SDE \eqref{Ju01_01}. As $ \oX^{t,\bx} $ coincides with $\oX$ on $\big(\ocN^1_{\n X}\big)^c$ by \eqref{091920_11}, we see that $\oP$ satisfies Definition \ref{def_ocP} (ii). \qed Based on the countable decomposition of the probability class $\ocP_{t,\bx}$ by Proposition \ref{prop_Ptx_char}, the next result shows that the graph of the probability class $ \{\ocP_{t,\bx}\}_{(t,\bx) \in [0,\infty) \times \OmX} $ is a Borel subset of $ [0,\infty) \ti \OmX \ti \fP(\oO) $, which is crucial for the measurability of the value function $\oV$. \begin{prop} \label{prop_graph_ocP} The graph $\gP \df \big\{ \big(t,\bx,y,z, \oP\big) \ins \oD \ti \fP(\oO) \n : \oP \ins \ocP_{t,\bx}(y,z) \big\}$ is a Borel subset of $ \oD \ti \fP(\oO)$ and the graph $\gcP \df \big\{ \big(t,\bw,\bx,y,z, \oP\big) \ins \ocD \ti \fP(\oO) \n : \oP \ins \ocP_{t,\bw,\bx}(y,z) \big\}$ is a Borel subset of $\ocD \ti \fP(\oO)$. \end{prop} \no {\bf Proof of Proposition \ref{prop_graph_ocP}: 1)} We first show that the graph $\big\lan\n\big\lan \ocP \big\ran\n\big\ran \df \big\{\big(t,\bx,\oP \big) \ins [0,\infty) \ti \OmX \ti \fP(\oO) \n : \oP \ins \ocP_{t,\bx} \big\}$ is a Borel subset of $[0,\infty) \ti \OmX \ti \fP(\oO)$.
According to Proposition \ref{prop_Ptx_char}, $\big\lan\n\big\lan \ocP \big\ran\n\big\ran $ is the intersection of $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran_1 \df \big\{\big(t,\bx,\oP \big) \ins [0,\infty) \ti \OmX \ti \fP(\oO) \n : \oP \ins \ocP^1_t \big\}$ and $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran_2 \df \big\{\big(t,\bx,\oP \big) \ins [0,\infty) \ti \OmX \ti \fP(\oO) \n : \oP \ins \ocP^2_{t,\bx} \big\} $. \no {\bf 1a)} Since the function $W(s,\o_0) \df \o_0(s)$ is continuous in $ (s,\o_0) \ins [0,\infty) \ti \O_0 $, and the function $W^X(s,\omX) \df \omX(s)$ is continuous in $ (s,\omX) \ins [0,\infty) \ti \OmX $, $\Xi(t,r,\o_0,\omX) \df \big( W(t\+r,\o_0) \- W(t,\o_0), W^X(t\+r,\omX) \big) \ins \hR^{d+l}$, $ \fa (t,r,\o_0,\omX) \ins [0,\infty) \ti (0,\infty) \ti \O_0 \ti \OmX $ is Borel-measurable. Let $ \vf \ins \sP(\hR^{d+l}) $ and $n \ins \hN$. As the function $\fl (s,\omX) \df \omX(s \ld \cd) $ is continuous in $ (s,\omX) \ins [0,\infty) \ti \OmX $, the measurability of the functions $b,\si$ and the mappings $\fl,\Xi$ imply that the mapping \beas \cH_\vf (t,s,r,\o_0,\omX) \df \b1_{\{r \le s\}} \Big\{ \ol{b} \big(t\+r, \fl(t\+r,\omX)\big) \n \cd \n D \vf \big( \Xi(t,r,\o_0,\omX) \big) \+ \frac12 \ol{\si} \ol{\si}^T \big(t\+r, \fl(t\+r,\omX)\big) \n : \n D^2 \vf \big( \Xi(t,r,\o_0,\omX) \big) \Big\} , \eeas $\fa (t,s,r,\o_0,\omX) \ins [0,\infty) \ti [0,\infty) \ti (0,\infty) \ti \O_0 \ti \OmX$ is Borel-measurable, and $\cI_\vf (t,s,\o_0,\omX) \df \int_0^\infty \cH_\vf (t,s,r,\o_0,\omX) dr $, $ \fa (t,s,\o_0,\omX) \ins [0,\infty) \ti [0,\infty) \ti \O_0 \ti \OmX$ is thus Borel-measurable. It follows that \bea \label{091520_23} \cM_\vf(t,s,\oo) \df \big( \oM^t (\vf) \big) (s ,\oo ) \= \vf \Big( \Xi \big( t,s,\oW(\oo),\oX(\oo) \big) \Big) \- \cI_\vf \big(t,s,\oW(\oo),\oX(\oo)\big) , ~ \fa (t,s,\oo) \ins [0,\infty) \ti [0,\infty) \ti \oO \q \eea is also $\sB[0,\infty) \oti \sB[0,\infty) \oti \sB(\oO) -$measurable. 
For any $s \ins [0,\infty)$, since $ D^n_s \df \big\{(t,\oo) \ins [0,\infty) \ti \oO \n : \otau^t_n (\oo) \> s \big\} \= \big\{(t,\oo) \ins [0,\infty) \ti \oO \n : |\oXi^t_r (\oo)| \< n, \; \fa r \ins [0,s] \big\} $, we can deduce from the continuity of processes $(\oW,\oX)$ and the topology of locally uniform convergence on $(\O_0,\OmX)$ that $ D^n_s $ is an open subset of $[0,\infty) \ti \oO $ and the process $\sT_n (t,\oo) \df \otau^t_n (\oo)$, $ \fa (t,\oo) \ins [0,\infty) \ti \oO $ is thus $\sB[0,\infty) \oti \sB(\oO) -$measurable. This measurability together with \eqref{091520_23} further shows that for any $s \ins [0,\infty)$ \bea \oM^{\vf,n}_s (t,\oo) \df \big( \oM^t (\vf) \big) \big(\otau^t_n (\oo) \ld s ,\oo\big) \= \cM_\vf \big( t, \sT_n (t,\oo) \ld s ,\oo\big) , \q \fa (t,\oo) \ins [0,\infty) \ti \oO \label{Sep21_02} \eea is also a $\sB[0,\infty) \oti \sB(\oO) -$measurable process. \no {\bf 1b)} Let $\th \df \big( \vf, n, (s,r) , \{(s_i,\cO_i,\cO'_i)\}^k_{i=1} \big) \ins \sP(\hR^{d+l}) \ti \hN \ti \hQ^{2,<} \ti \wh{\sO} (\hR^{d+l}) $. As the process $ (t,\oo) \n \to \n \oT(\oo) \-t $ is $ \sB[0,\infty) \oti \sB(\oO) - $measurable, we see from \eqref{Sep21_02} that \beas \ol{\ff}_\th (t,\oo) \df \big( \oM^{\vf,n}_r (t,\oo) \- \oM^{\vf,n}_s (t,\oo) \big) \underset{i=1}{\overset{k}{\prod}} \Big( \b1_{\{ \Xi ( t,s_i \land s,\oW(\oo),\oX(\oo) ) \in \cO_i \} \cap \{\oT (\oo) \le t+s_i \land s \}} \+ \b1_{\{ \Xi ( t,s_i \land s,\oW(\oo),\oX(\oo) ) \in \cO'_i \} \cap \{\oT (\oo) > t+s_i \land s \}} \Big) , \eeas $\fa (t,\oo) \ins [0,\infty) \ti \oO$ is $\sB[0,\infty) \oti \sB(\oO) -$measurable. 
Applying Lemma \ref{lem_A1} yields that the mapping $ (t,\oP) \mto \int_{\oo \in \oO} \, \ol{\ff}_\th (t,\oo) \oP(d \, \oo) $ is $ \sB[0,\infty) \oti \sB\big(\fP(\oO)\big) - $measurable and the set \beas \bigg\{ (t,\bx,\oP) \ins [0,\infty) \ti \OmX \ti \fP(\oO) \n : E_\oP \Big[ \Big( \oM^t_{\otau^t_n \land r} (\vf ) \- \oM^t_{\otau^t_n \land s} (\vf ) \Big) \underset{i=1}{\overset{k}{\prod}} \big( \b1_{ \{\oXi^t_{s_i \land s } \in \cO_i \} \cap \{\oT \le t + s_i \land s \}} \+ \b1_{ \{\oXi^t_{s_i \land s } \in \cO'_i \} \cap \{\oT > t + s_i \land s \}} \big) \Big] \= 0 \bigg\} \eeas is thus Borel-measurable. Letting $\th$ run through $\sP(\hR^{d+l}) \ti \hN \ti \hQ^{2,<} \ti \wh{\sO} (\hR^{d+l})$ shows that $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran_1 $ is a Borel subset of $[0,\infty) \ti \OmX \ti \fP(\oO)$. Since the mapping $ (t, \bx,\oo) \mto \prod_{r \in \hQ \cap [0,\infty)} \b1_{\{ \oX (r \land t,\oo ) = \bx (r \land t) \}}$ is $\sB[0,\infty) \oti \sB(\OmX) \oti \sB(\oO) -$measurable, Lemma \ref{lem_A1} again renders that the mapping $ (t,\bx,\oP) \mto \int_{\oo \in \oO} \b1_{\{\oT(\oo) - t \ge 0\}} \Big( \prod_{r \in \hQ \cap [0,\infty)} \b1_{\{ \oX (r \land t,\oo ) = \bx (r \land t) \}} \Big) \oP(d \, \oo) \= \oP \big\{ \oT \gs t; \oX_s \= \bx(s), \; \fa s \ins [0,t] \big\} $ is $ \sB[0,\infty) \oti \sB(\OmX) \oti \sB\big(\fP(\oO)\big) - $measurable and thus $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran_2 \= \big\{ (t,\bx,\oP) \ins [0,\infty) \ti \OmX \ti \fP(\oO) \n : \oP \{\oT \gs t ; \oX_s \= \bx(s), \; \fa s \ins [0,t] \} \= 1 \big\}$ is a Borel subset of $[0,\infty) \ti \OmX \ti \fP(\oO)$. Altogether, $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran \= \big\lan\n\big\lan \ocP \big\ran\n\big\ran_1 \Cp \big\lan\n\big\lan \ocP \big\ran\n\big\ran_2 $ is also a Borel subset of $[0,\infty) \ti \OmX \ti \fP(\oO)$. \no {\bf 2a)} Let $i \ins \hN$.
By the measurability of the functions $\fl$, $g_i$ and $h_i$, the mapping $ \big(\fg_i,\fh_i \big) (t,s,r,\omX) \df \b1_{\{t \le r \le s\}} (g_i,h_i)\big(r, \fl(r,\omX)\big) $, $ \fa (t,s,r,\omX) \ins [0,\infty) \ti [0,\infty) \ti (0,\infty) \ti \OmX $ is Borel-measurable. It follows that $ \big(\cI_{g_i},\cI_{h_i}\big) (t,s,\omX) \df \int_0^\infty \big(\fg_i,\fh_i\big) (t,s,r,\omX) dr = \int_t^{s \vee t} (g_i,h_i)\big(r, \fl(r,\omX)\big) dr $, $\fa (t,s, \omX) \ins [0,\infty) \ti [0,\infty) \ti \OmX $ is Borel-measurable and thus the process \bea \label{Sep21_04} \big(\wh{\cI}_{g_i}, \wh{\cI}_{h_i} \big) (t, \oo) \df \big(\cI_{g_i},\cI_{h_i}\big) \big( t, \oT(\oo), \oX(\oo) \big) \= \int_t^{\oT(\oo) \vee t } (g_i,h_i)\big(r,\oX (r \ld \cd,\oo)\big) dr , \q \fa (t, \oo) \ins [0,\infty) \ti \oO \eea is $\sB[0,\infty) \oti \sB(\oO) -$measurable. An application of Lemma \ref{lem_A1} yields that the mapping $ \big(\Phi_{g_i},\Phi_{h_i}\big) (t, \oP) \df \int_{\oo \in \oO} \, \big(\wh{\cI}_{g_i},\wh{\cI}_{h_i} \big) (t, \oo) \oP(d \, \oo) \= E_\oP \big[ \int_t^{\oT } (g_i,h_i) \big(r,\oX_{r \land \cd} \big) dr \big] $, $ \fa (t, \oP) \ins [0,\infty) \ti \fP(\oO) $ is $ \sB[0,\infty) \oti \sB\big(\fP(\oO)\big) - $measurable. Then \beas \sD \df \big\{ (t,\bx,y,z,\oP) \ins [0,\infty) \ti \OmX \ti \cR \ti \fP(\oO) \n : \Phi_{g_i} (t, \oP) \ls y_i ,\, \Phi_{h_i} (t, \oP) \= z_i,\,\fa i \ins \hN \big\} \eeas is a Borel subset of $ [0,\infty) \ti \OmX \ti \cR \ti \fP(\oO) $ and thus a Borel subset of $ \oD \ti \fP(\oO) $. It follows that $\gP \= \Big\{ (t,\bx, y,z, \oP ) \ins [0,\infty) \ti \OmX \ti \cR \ti \fP(\oO) \n : \oP \ins \ocP_{t,\bx} ; \; E_\oP \big[ \int_t^{\oT } g_i\big(r,\oX_{r \land \cd} \big) dr \big] \ls y_i ,\, E_\oP \big[ \int_t^{\oT } h_i\big(r, \oX_{r \land \cd} \big) dr \big] \= z_i , \, \fa i \ins \hN \Big\} \= \big( \big\lan\n\big\lan \ocP \big\ran\n\big\ran \ti \cR \big) \Cp \sD$ is a Borel subset of $\oD \ti \fP(\oO)$.
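For completeness, we record the elementary reason why $\sD$ is Borel: with $\Phi_{g_i}, \Phi_{h_i}$ as above, $\sD$ is the countable intersection \beas \sD \= \bigcap_{i \in \hN} \Big( \big\{ (t,\bx,y,z,\oP) \n : \Phi_{g_i} (t, \oP) \ls y_i \big\} \Cp \big\{ (t,\bx,y,z,\oP) \n : \Phi_{h_i} (t, \oP) \= z_i \big\} \Big) , \eeas and each set on the right-hand side is Borel, being the preimage of a Borel subset of $[-\infty,\infty] \ti \hR$ under the Borel-measurable mapping $(t,\bx,y,z,\oP) \mto \big( \Phi_{g_i} (t, \oP) , y_i \big)$ resp. $(t,\bx,y,z,\oP) \mto \big( \Phi_{h_i} (t, \oP) , z_i \big)$, the coordinate projections on $\cR$ being Borel-measurable.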
\no {\bf 2b)} Since the mapping $ \phi_W (t,\bw, \oo) \df \prod_{r \in \hQ \cap [0,\infty)} \b1_{\{ \oW (r \land t,\oo ) = \bw (r \land t) \}}$, $ (t,\bw, \oo) \ins [0,\infty) \ti \O_0 \ti \oO $ is $\sB[0,\infty) \oti \sB(\O_0) \oti \sB(\oO) -$measurable, applying Lemma \ref{lem_A1} again shows that the mapping $ \Phi(t,\bw, \oP) \df E_\oP \big[ \phi_W (t,\bw, \oo) \big] \= \oP \big\{ \oW_s \= \bw(s) $, $\fa s \ins [0,t] \big\} $, $\fa (t,\bw, \oP) \ins [0,\infty) \ti \O_0 \ti \fP(\oO)$ is $\sB[0,\infty) \oti \sB(\O_0) \oti \sB\big(\fP(\oO)\big) -$measurable. By the projections \beas \ol{\Pi}_1 (t,\bw,\bx,y,z,\oP) \df \big(t,\bx ,\oP\big), \q \ol{\Pi}_2 (t,\bw,\bx,y,z,\oP) \df \big(t,\bx,y,z,\oP\big), \q \ol{\Pi}_3 (t,\bw,\bx,y,z,\oP) \df (t,\bw,\bx, \oP) , \eeas we can derive that \beas && \hspace{-1.7cm} \gcP \= \Big\{ (t, \bw,\bx,y,z, \oP ) \ins [0,\infty) \ti \O_0 \ti \OmX \ti \cR \ti \fP(\oO) \n : \oP \ins \ocP_{t, \bx} \Big\} \\ && \hspace{-1cm} \q \cap \, \Big\{ (t, \bw,\bx,y,z, \oP ) \ins [0,\infty) \ti \O_0 \ti \OmX \ti \cR \ti \fP(\oO) \n : \oP \big\{ \oW_s \= \bw(s) , \fa s \ins [0,t] \big\} \= 1 \Big\} \\ && \hspace{-1cm} \q \cap \, \bigg\{ (t, \bw,\bx,y,z, \oP ) \ins [0,\infty) \ti \O_0 \ti \OmX \ti \cR \ti \fP(\oO) \n : E_\oP \Big[ \int_t^{\oT } g_i\big(r,\oX_{r \land \cd} \big) dr \Big] \ls y_i ,\, E_\oP \Big[ \int_t^{\oT } h_i\big(r,\oX_{r \land \cd} \big) dr \Big] \= z_i , \, \fa i \ins \hN \bigg\} \\ && \hspace{-1cm} \q \= \ol{\Pi}^{-1}_1 \big( \big\lan\n\big\lan \ocP \big\ran\n\big\ran \big) \Cp \ol{\Pi}^{-1}_3 \big( \Phi^{-1}(1) \big) \Cp \ol{\Pi}^{-1}_2 (\sD) \eeas is a Borel subset of $ \ocD \ti \fP(\oO) $. \qed By Proposition \ref{prop_graph_ocP}, the value function $\oV$ is \usa ~ and thus universally measurable. \begin{thm} \label{thm_V_usa} The value function $\oV$ is \usa ~ in $(t,\bx,y,z) \ins \oD$ and the value function $\ocV$ is \usa ~ in $(t,\bw,\bx,y,z) \ins \ocD$. 
\end{thm} \no {\bf Proof of Theorem \ref{thm_V_usa}:} Similar to \eqref{Sep21_04}, $ \wh{\cI}_f (t, \oo) \df \int_t^{\oT(\oo) \vee t } f \big(r,\oX (r \ld \cd,\oo)\big) dr $ is $\sB[0,\infty) \oti \sB(\oO) -$measurable. Since the measurability of functions $\fl$ and $\pi$ implies that $ (s,\oo) \mto \pi \big(s,\fl(s,\oX(\oo))\big) \= \pi \big(s,\oX(s \ld \cd,\oo)\big)$ is $\sB(0,\infty) \oti \sB(\oO)-$measurable, the mapping $ \phi_\pi (\oo) \df \b1_{\{\oT (\oo) < \infty\}} \pi \big(\oT (\oo),\oX (\oT (\oo) \ld \cd , \oo) \big) $, $\oo \ins \oO$ is $\sB(\oO)/\sB(\hR)-$measurable. According to Lemma \ref{lem_A1}, \beas \ol{\sV} (t,\oP) \df \int_{\oo \in \oO} \, \big( \wh{\cI}_f (t, \oo) \+ \phi_\pi (\oo) \big) \oP(d \, \oo) \= E_\oP \bigg[ \int_t^{\oT } f\big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big(\oT ,\oX_{\oT \land \cd} \big) \bigg] , \eeas $ (t,\oP) \ins [0,\infty) \ti \fP(\oO)$ is $ \sB[0,\infty) \oti \sB\big(\fP(\oO)\big) - $measurable. Then Proposition \ref{prop_graph_ocP} and Proposition 7.47 of \cite{Bertsekas_Shreve_1978} yield that $ \oV (t,\bx,y,z) \= \Sup{\oP \in \ocP_{t,\bx}(y,z)} \ol{\sV} (t,\oP) \= \Sup{(t,\bx,y,z,\oP) \in [[\ocP]]} \ol{\sV} (t,\oP) $ is \usa ~ in $(t,\bx,y,z) \ins \oD$ and $\ocV(t,\bw,\bx,y,z) \= \Sup{\oP \in \ocP_{t,\bw,\bx}(y,z)} \ol{\sV} (t,\oP) \= \Sup{(t,\bw,\bx,y,z,\oP) \in \{\n\{\ocP\}\n\}} \ol{\sV} (t,\oP)$ is \usa ~ in $(t,\bw,\bx,y,z) \ins \ocD$. \qed \section{Dynamic Programming Principle} \label{sec_DPP} In this section, we explore a dynamic programming principle (DPP) for the value $\oV$ in the weak formulation, which takes the conditional expected integrals of constraint functions as additional states. Given $t \ins [0,\infty)$, let $\otau$ be a $[0,\infty)-$valued $\bF^{\oW^t}-$stopping time and set $\oAt \df \big\{\oT \ins [t,t\+\otau)\big\} \ins \ocF^t_\otau$. 
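Let us note in passing an elementary identification that will be used repeatedly below: since $\oAt \= \big\{\oT \ins [t,t\+\otau)\big\}$, its complement is $\oAtc \= \big\{ \oT \< t \big\} \cp \big\{ \oT \gs t\+\otau \big\}$, and any $\oP$ with $\oP \big\{ \oT \gs t \big\} \= 1$ (in particular, any $\oP \ins \ocP_{t,\bx}$, cf. the set $ \big\lan\n\big\lan \ocP \big\ran\n\big\ran_2 $) satisfies \beas \b1_{\oAtc} \= \b1_{\{ \oT \ge t + \otau \}} , \q \oP-\hb{a.s.} \eeas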
We denote by $\cF(\otau)$ the sigma field of $\oO$ generated by $\cF^{\oW^t}_\otau $ and the set $\oAt$, which consists of sets $\big( \oA_1 \Cp \oAt \big) \cp \big( \oA_2 \Cp\oAtc\big) $, $ \fa \oA_1, \oA_2 \ins \cF^{\oW^t}_\otau $. Since $ \cF^{\oW^t}_\otau $ is countably generated (see e.g. Lemma 1.3.3 of \cite{Stroock_Varadhan}), $\cF(\otau)$ is also countably generated. Let $\oP \ins \fP(\oO)$. According to Theorem 1.1.8 of \cite{Stroock_Varadhan}, there is a family $\big\{ \oP^t_{\otau,\oo} \big\}_{\oo \in \oO}$ of probabilities in $\fP(\oO)$, called the {\it regular conditional probability distribution} (r.c.p.d.) of $\oP$ with respect to $\cF(\otau)$, such that \no (R1) for any $\oA \ins \sB(\oO)$, the mapping $\oo \mto \oP^t_{\otau,\oo} \big(\oA\big)$ is $\cF(\otau)-$measurable; \no (R2) for any $[-\infty,\infty]-$valued, $\sB_\oP(\oO)-$measurable random variable $ \oxi $, it holds for all $\oo \ins \oO$ except on a $ \ocN_\oxi \ins \sN_\oP\big(\cF(\otau)\big) $ that $ \oxi$ is $ \sB_{\oP^t_{\otau,\oo}}(\oO) -$measurable and $ E_{\oP^t_{\otau,\oo}} \big[ \, \oxi \, \big] \= E_\oP \big[ \, \oxi \, \big| \cF(\otau) \big] (\oo) $; \no (R3) for some $ \ocN_0 \ins \sN_\oP \big(\cF(\otau)\big) $, $ \oP^t_{\otau,\oo} \big(\oA\big) \= \b1_{\{\oo \in \oA\}} $, $\fa \big(\oo,\oA\big) \ins \ocN^c_0 \ti \cF(\otau) $. For any $\oo \ins \oO $, $ \Wtzo \df \big\{\oo' \ins \oO \n : \oW^t_r (\oo') \= \oW^t_r (\oo), ~ \fa r \ins [0,\otau(\oo)] \big\} $ is an $ \cF^{\oW^t}_\otau -$measurable set including $\oo$. We know from Galmarino's test that \bea \label{090520_11} \otau(\Wtzo) \= \otau(\oo) , \q \fa \oo \ins \oO , \eea and (R3) shows that \bea \label{Jan11_03} \oP^t_{\otau,\oo} \big(\Wtzo \big) \= \b1_{\big\{\oo \in \Wtzo\big\}} \= 1 , \q \fa \oo \ins \ocN^c_0. \eea In terms of the r.c.p.d.
$\big\{ \oP^t_{\otau,\oo} \big\}_{\oo \in \oO}$, we have the following flow property for SDE \eqref{Ju01_01}: Simply speaking, if process $\oX $ satisfies the SDE under $\oP$, then for $\oP-$a.s. $\oo \ins \oO$, it satisfies the SDE with initial condition $\big(t\+\otau(\oo), \oX_{(t+\otau) \land \cd } (\oo)\big)$ under $\oP^t_{\otau,\oo}$. \begin{prop} \label{prop_flow} Given $ (t,\bx ) \ins [0,\infty) \ti \OmX $, let $\otau $ be a $(0,\infty)-$valued $\bF^{\oW^t}-$stopping time. If $\oP \ins \fP(\oO)$ satisfies Definition \ref{def_ocP} \(i\) and \(ii\), there is a $\oP-$null set $\ocN $ such that for any $\oo \ins \ocN^c $, \no \(1\) $ \oW^{t_\oo} $ is a standard Brownian motion with respect to filtration $ \obF^{t_\oo} $ under $\oP^t_{\otau,\oo}$, where $t_\oo \df t \+ \otau(\oo)$; \no \(2\) $\oP^t_{\otau,\oo} \big\{ \oX_s \= \sX^\oo_s, ~ \fa s \ins [0,\infty) \big\} \= 1$, where $ \big\{\sX^\oo_s\big\}_{s \in [0,\infty)}$ uniquely solves the following SDE on $\big(\oO , \sB\big(\oO\big) , \oP^t_{\otau,\oo}\big) \n : $ \bea \label{Ju01_01c} \sX_s = \oX (t_\oo,\oo) + \int_{t_\oo}^s b \big( r, \sX_{r \land \cd} \big)dr \+ \int_{t_\oo}^s \si \big( r, \sX_{r \land \cd} \big) d \oW_r, \q s \ins [t_\oo,\infty) \eea with initial condition $ \sX_s \= \oX_s(\oo) $, $\fa s \ins [0,t_\oo]$ \big(In particular, $ \big\{\sX^\oo_s\big\}_{s \in [0,\infty)}$ is an $ \bF^{\oW^{t_\oo},\oP^t_{\otau,\oo}}-$adapted process with all continuous paths satisfying $\oP^t_{\otau,\oo} \big\{\sX^\oo_{t_\oo+s} \= \oX (t_\oo,\oo) \+ \int_0^s b (t_\oo\+r,\sX^\oo_{(t_\oo+r) \land \cd}) dr \+ \int_0^s \si (t_\oo\+r, \sX^\oo_{(t_\oo+r) \land \cd}) d \oW^{t_\oo}_r , \fa s \ins [0,\infty)\big\} \= 1$\big). \end{prop} \no {\bf Proof of Proposition \ref{prop_flow}: 1a)} Let $\th \= \big( (s,r), \cO, \{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) \ins \hQ^{2,<} \ti \sO (\hR^d) \ti \wh{\sO} (\hR^d) $. 
Set $\oA_s \big(\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) \df \ccap{i=1}{n} \Big(\big[ \big( \oW^t_{\otau+s_i \land s} \- \oW^t_{\otau } \big)^{-1} (\cO_i)\Cp \{ \oT \ins [t \+\otau ,t \+\otau \+ s_i \ld s]\} \big] \cp \big[ \big( \oW^t_{\otau+s_i \land s} \- \oW^t_{\otau } \big)^{-1} (\cO'_i)\Cp \{ \oT \ins [t \+\otau ,t \+\otau \+ s_i \ld s]^c\} \big] \Big) \ins \ocF^t_{ \otau + s} $. Since $ \oW^t_{\otau + r} \- \oW^t_{\otau + s}\sim \hb{\it Normal}\,(0,r\-s)$ is independent of $ \ocF^t_{ \otau + s} $ under $\oP$ and since $\cF(\otau) \sb \ocF^t_\otau \sb \ocF^t_{\otau+s} $, we can deduce from (R2) that for any $\oo \ins \oO$ except on a $ \ocN_\th \ins \sN_\oP \big(\cF(\otau)\big) $ \bea && \hspace{-1.2cm} E_{\oP^t_{\otau,\oo}} \Big[ \b1_{ (\oW^t_{\otau + r} - \oW^t_{\otau + s} )^{-1} (\cO)} \b1_{\oA_s (\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} )}\Big] \= E_\oP \Big[ \b1_{ (\oW^t_{\otau + r} - \oW^t_{\otau + s} )^{-1} (\cO)} \b1_{\oA_s (\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} )} \Big| \cF(\otau)\Big](\oo) \nonumber \\ && \= E_\oP \Big[ \b1_{ (\oW^t_{\otau + r} - \oW^t_{\otau + s} )^{-1} (\cO)} \Big] E_\oP \Big[ \b1_{\oA_s (\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} )} \Big| \cF(\otau) \Big](\oo) \= \phi(r\-s,\cO) E_{\oP^t_{\otau,\oo}} \Big[ \b1_{\oA_s (\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} )} \Big] . \qq \label{Jan13_01} \eea As $ \hQ^{2,<} \ti \sO (\hR^d) \ti \wh{\sO} (\hR^d)$ is a countable set, $ \ocN_1 \df \cup \big\{\ocN_\th \n : \th \ins \hQ^{2,<} \ti \sO (\hR^d) \ti \wh{\sO} (\hR^d) \big\} \hb{ still belongs to } \sN_\oP \big(\cF(\otau)\big) $. Fix $\oo \ins \ocN^c_0 \Cp \ocN^c_1 $. 
Given $\big( (s,r), \cO, \{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) \ins \hQ^{2,<} \ti \sO (\hR^d) \ti \wh{\sO} (\hR^d)$, since \eqref{090520_11} shows \beas && \hspace{-1.2cm} \Wtzo \Cp \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cO) \Cp \bigg\{ \underset{i=1}{\overset{n}{\cap}} \Big( \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]\big\} \big] \cp \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO'_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]^c\big\} \big] \Big) \bigg\} \\ && \hspace{-0.75cm} \= \Wtzo \Cp \big\{\oo' \ins \oO \n : \oW (t\+\otau(\oo')\+r,\oo') \- \oW (t\+\otau(\oo')\+s,\oo') \ins \cO \big\} \Cp \\ && \hspace{-0.5cm} \bigg( \underset{i=1}{\overset{n}{\cap}} \Big( \big\{\oo' \ins \oO \n : \big( \oW (t\+\otau(\oo')\+s_i \ld s,\oo') \- \oW (t\+\otau(\oo') ,\oo') \big) \ins \cO_i, \oT (\oo') \ins [t\+\otau(\oo'),t\+\otau(\oo')\+ s_i \ld s]\big\} \cp \\ && \hspace{-0.25cm} \big\{\oo' \ins \oO \n : \big( \oW (t\+\otau(\oo')\+s_i \ld s,\oo') \- \oW (t\+\otau(\oo') ,\oo') \big) \ins \cO'_i, \oT (\oo') \ins [t\+\otau(\oo'),t\+\otau(\oo')\+ s_i \ld s]^c\big\} \Big) \bigg) \\ && \hspace{-0.75cm} \= \Wtzo \Cp \big(\oW^t_{\otau + r} \- \oW^t_{\otau + s}\big)^{-1} (\cO) \Cp \oA_s \big(\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) , \eeas \eqref{Jan11_03} and \eqref{Jan13_01} imply that \beas && \hspace{-1.7cm} \oP^t_{\otau,\oo} \bigg\{ \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cO) \Cp \bigg( \underset{i=1}{\overset{n}{\cap}} \Big( \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]\big\} \big] \cp \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO'_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]^c\big\} \big] \Big) \bigg) \bigg\} \\ && \hspace{-0.7cm} \= \oP^t_{\otau,\oo} \Big\{ \big(\oW^t_{\otau + r} \- \oW^t_{\otau + s}\big)^{-1} (\cO) \Cp \oA_s \big(\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) \Big\} \= \phi(r\-s,\cO) \oP^t_{\otau,\oo} \Big\{ \oA_s \big(\{(s_i,\cO_i,\cO'_i)\}^n_{i=1} \big) \Big\} \\ && 
\hspace{-0.7cm} \= \phi(r\-s,\cO) \oP^t_{\otau,\oo} \bigg\{ \n \underset{i=1}{\overset{n}{\cap}} \Big( \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]\big\} \big] \cp \big[ (\oW^{t_\oo}_{s_i \land s})^{-1}(\cO'_i) \Cp \big\{\oT \ins [t_\oo,t_\oo\+ s_i \ld s]^c\big\} \big] \Big) \bigg\} . \eeas Then we can deduce from Dynkin's Theorem and \eqref{090520_21} that \bea \label{Jan14_01} \oP^t_{\otau,\oo} \big\{ \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s\big)^{-1} (\cE) \Cp \oA \big\} \= \phi(r\-s,\cE) \oP^t_{\otau,\oo} \big( \oA \big), \q \fa (s,r) \ins \hQ^{2,<} , \; \cE \ins \sB(\hR^d) , \; \oA \ins \ocF^{t_\oo}_s . \eea \no {\bf 1b)} Let $0 \ls s \< r \< \infty$, $\oA \ins \ocF^{t_\oo}_s $ and let $\cO$ be an open subset of $\hR^d$. For any $n \ins \hN$ with $n \> n_0 \df - \log_2(r\-s)$, we set $ \dis (s_n,r_n) \df \Big( \frac{\lceil 2^n s \rceil}{2^n}, \frac{\lceil 2^n r \rceil}{2^n} \Big) \ins \hQ^{2,<} $. By the continuity of $\oW^{t_\oo}$, $ \big\{ \oW^{t_\oo}_r \- \oW^{t_\oo}_s \ins \cO \big\} \sb \ccup{n = n_0 + 1}{\infty} \ccap{k \ge n}{} \big\{ \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \ins \cO \big\} $. So \eqref{Jan14_01} implies that \bea && \hspace{-1.5cm} \oP^t_{\otau,\oo} \Big\{ \big( \oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} ( \cO ) \Cp \oA \Big\} \ls \oP^t_{\otau,\oo} \Big\{ \ccup{n = n_0 + 1}{\infty} \ccap{k \ge n}{} \Big( \big( \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \big)^{-1} ( \cO ) \Cp \oA \Big) \Big\} \= \lmtu{n \to \infty} \oP^t_{\otau,\oo} \Big\{ \ccap{k \ge n}{} \Big( \big( \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \big)^{-1} ( \cO ) \Cp \oA \Big) \Big\} \nonumber \\ & & \ls \lmt{n \to \infty} \oP^t_{\otau,\oo} \Big\{ \big( \oW^{t_\oo}_{r_n} \- \oW^{t_\oo}_{s_n} \big)^{-1} ( \cO ) \Cp \oA \Big\} \= \oP^t_{\otau,\oo} \big( \oA \big) \lmt{n \to \infty} \phi(r_n\-s_n,\cO) \= \oP^t_{\otau,\oo} \big( \oA \big) \phi(r\-s,\cO) . 
\label{Jan12_47} \eea On the other hand, we let $\ell \ins \hN$ and define a closed set $ \cE_\ell \= \{\fx \in \hR^d: \hb{dist}(\fx,\cO^c) \gs 1/\ell \} $. Since the continuity of $\oW^{t_\oo}$ also shows that $ \big( \oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cE_\ell) \supset \ccap{n = n_0 + 1}{\infty} \ccup{k \ge n}{} \big( \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \big)^{-1} (\cE_\ell) $, \eqref{Jan14_01} again renders that \beas && \hspace{-1.2cm} \oP^t_{\otau,\oo} \Big\{ \big( \oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cE_\ell) \Cp \oA \Big\} \gs \oP^t_{\otau,\oo} \Big\{ \ccap{n = n_0 + 1}{\infty} \ccup{k \ge n}{} \Big( \big( \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \big)^{-1} (\cE_\ell) \Cp \oA \Big) \Big\} \= \lmtd{n \to \infty} \oP^t_{\otau,\oo} \Big\{ \ccup{k \ge n}{} \Big( \big( \oW^{t_\oo}_{r_k} \- \oW^{t_\oo}_{s_k} \big)^{-1} (\cE_\ell) \Cp \oA \Big) \Big\} \\ & & \gs \lmt{n \to \infty} \oP^t_{\otau,\oo} \Big\{ \big( \oW^{t_\oo}_{r_n} \- \oW^{t_\oo}_{s_n} \big)^{-1} (\cE_\ell) \Cp \oA \Big\} \= \oP^t_{\otau,\oo} \big( \oA \big) \lmt{n \to \infty} \phi(r_n\-s_n,\cE_\ell) \= \oP^t_{\otau,\oo} \big( \oA \big) \phi(r\-s,\cE_\ell) . \eeas As $ \cO \= \ccup{\ell \in \hN}{} \cE_\ell$, it follows that $ \oP^t_{\otau,\oo} \Big\{ \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cO) \Cp \oA \Big\} \= \lmtu{\ell \to \infty } \oP^t_{\otau,\oo} \Big\{ \big( \oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cE_\ell) \Cp \oA \Big\} \gs \oP^t_{\otau,\oo} \big( \oA \big) \lmtu{\ell \to \infty } \phi(r\-s,\cE_\ell) \= \oP^t_{\otau,\oo} \big( \oA \big) \phi(r\-s,\cO) $, which together with \eqref{Jan12_47} means that the Lambda system $ \Big\{ \cE \ins \sB(\hR^d) \n : \oP^t_{\otau,\oo} \big\{ \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cE) \Cp \oA \big\} \= \oP^t_{\otau,\oo} \big( \oA \big) \phi(r\-s,\cE) \Big\}$ contains all open sets and is thus equal to $\sB(\hR^d)$. 
To wit, \beas \oP^t_{\otau,\oo} \big\{ \big(\oW^{t_\oo}_r \- \oW^{t_\oo}_s \big)^{-1} (\cE) \Cp \oA \big\} \= \oP^t_{\otau,\oo} \big( \oA \big) \phi(r\-s,\cE) , \q \fa 0 \ls s \< r \< \infty, \; \cE \ins \sB(\hR^d) , \; \oA \ins \ocF^{t_\oo}_s . \eeas Thus for any $ \oo \ins \ocN^c_0 \Cp \ocN^c_1 $, $ \oW^{t_\oo} $ is a standard Brownian motion with respect to the filtration $ \obF^{t_\oo} $ under $\oP^t_{\otau,\oo}$. \no {\bf 2a)} Set $ \ocN^1_{\n X} \df \big\{ \oo \ins \oO \n : \oX_s (\oo) \nne \bx(s) \hb{ for some } s \ins [0,t] \big\} \ins \sN_\oP\big(\cF^\oX_t\big)$ and $ \ocN^2_{\n X} \df \big\{ \oo \ins \oO \n : \oX^t_s (\oo) \nne \sX^{t,\bx}_{t+s} (\oo) \hb{ for some } s \ins [0,\infty) \big\} \ins \sN_\oP\big(\cF^{\oXi^t}_\infty\big) $. As $\big\{\sX^{t,\bx}_{t+s} \big\}_{s \in [0,\infty)} $ is an $ \bF^{\oW^t,\oP}-$adapted continuous process, there is an $\bF^{\oW^t}-$predictable process $ \big\{ \oK^t_s \big\}_{s \in [0,\infty)}$ such that $ \ocN_{\n K} \df \big\{ \oo \ins \oO \n : \oK^t_s (\oo) \nne \sX^{t,\bx}_{t+s} (\oo) $ for some $s \ins [0,\infty) \big\} \ins \sN_\oP \big(\cF^{\oW^t}_\infty\big)$ (see e.g. Lemma 2.4 of \cite{STZ_2011a}). Since $\bK_\oo \df \ccap{r \in \hQ \cap [0,\infty)}{} \big\{\oo' \ins \oO \n : \oK^t_{\otau \land r} (\oo') \= \oX^t_{\otau \land r} (\oo) \big\} $ is an $ \cF^{\oW^t}_\otau -$measurable set including $\oo$, (R3) shows that $ \oP^t_{\otau,\oo} (\bK_\oo) \= 1 $, $ \fa \oo \ins \ocN^c_0$ \big(cf. \eqref{Jan11_03}\big).
We see from \eqref{090520_11} that \bea \Wtzo \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c \Cp \bK_\oo & \tn \= & \tn \Wtzo \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c \Cp \big\{\oo' \ins \oO \n: \oK^t (\otau(\oo) \ld r, \oo') \= \oX^t (\otau(\oo) \ld r, \oo), \; \fa r \ins \hQ \Cp [0,\infty) \big\} \nonumber \\ & \tn \sb & \tn \Wtzo \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c \Cp \big\{\oo' \ins \oO \n: \oX^t ( \otau(\oo) \ld r, \oo') \= \oX^t (\otau(\oo) \ld r, \oo), \; \fa r \ins \hQ \Cp [0,\infty) \big\} \nonumber \\ & \tn \= & \tn \Wtzo \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c \Cp \big\{\oo' \ins \oO \n: \sX^{t,\bx}_r ( \oo') \= \oX_r ( \oo') \= \oX_r ( \oo), \; \fa r \ins \big[t,t_\oo\big] \big\} . \label{091620_21} \eea Define $\oM_s \df \int_0^s \n \si(t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) d\oW^t_r$, $s \ins [0,\infty)$. There is a sequence of $ \hR^{l \times d}-$valued, $ \bF^{\oW^t,\oP} -$simple processes $ \Big\{\ol{\cH}^n_s \= \sum_{i = 1}^{\ell_n} \oxi^n_i \, \b1_{ \{s \in (s^n_i, s^n_{i+1}] \} } , \, s \ins [0,\infty) \Big\}_{n \in \hN}$ \big(with $0 \= s^n_1 \< \cds \< s^n_{\ell_n+1} \< \infty $ and $\oxi^n_i \ins \cF^{\oW^t,\oP}_{ s^n_i}$, $i \=1 ,\cds \n, \ell_n $\big) such that \eqref{090520_31} holds. For any $n \ins \hN$ and $i \= 1, \cds \n , \ell_n$, one can find a $ \hR^{l \times d}-$valued $ \wh{\xi}^n_i \ins \cF^{\oW^t}_{s^n_i} $ such that $ \ocN^{n,i}_\xi \df \big\{ \oo \ins \oO \n : \wh{\xi}^n_i (\oo) \nne \oxi^n_i (\oo) \big\} \ins \sN_\oP \big(\cF^{\oW^t}_\infty\big)$. Set $\ocN_\xi \df \ccup{n \in \hN}{} \ccup{i=1}{\ell_n} \ocN^{n,i}_\xi \ins \sN_\oP \big(\cF^{\oW^t}_\infty\big) $. We know from (R2) that for some $ \ocN_2 \ins \sN_\oP \big(\cF(\otau)\big) $ \bea \label{Jan19_03} \oP^t_{\otau,\oo} \big(\ocN^1_{\n X} \cp \ocN^2_{\n X} \cp \ocN_{\n K} \cp \ocN_\xi \big) \= 0 , \q \fa \oo \ins \ocN^c_2 . 
\eea Since $ \ol{\beta}_n \df \int_0^\infty \big| \ol{\cH}^n_r \- \si \big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} \big) \big|^2 dr \+ \underset{s \in [0,\infty)}{\sup} \big| \oM^n_s \- \oM_s \big| \ins \cF^{\oW^t,\oP}_\infty $ converges to 0 in probability $\oP$ by \eqref{090520_31}, Lemma \ref{lemm_090620_11} shows that for some subsequence $ \{n_j\}_{j \in \hN} $ of $\hN$ and some $\ocN_\beta \ins \sN_\oP \big(\cF(\otau)\big) $ \bea \label{Jan22_09} \oP^t_{\otau,\oo} \n - \n \lmt{j \to \infty} \ol{\beta}_{n_j} \= 0 , \q \fa \oo \ins \ocN^c_\beta . \eea \no {\bf 2b)} Define $\ocN_\sharp \df \Big( \ccup{i=0}{2} \ocN_i \Big) \cp \ocN_\beta \cp \ocN^1_{\n X} $ and fix $\oo \ins \ocN^c_\sharp $. For any $ r \ins [0,\infty)$, applying Lemma \ref{lem_Jan10_01b} with $ s \= \otau(\oo) $ yields that $\oK^t_{\otau(\oo)+r} \ins \cF^{\oW^t}_{\otau(\oo)+r}$ coincides with some $\hR^l-$valued $ \cX^\oo_r \ins \cF^{\oW^{t_\oo}}_r $ on $ \Wtzo $. So it holds for any $(r,\oo') \ins [0,\infty) \ti \Big( \Wtzo \Cp \ocN^c_{\n K} \Big) $ that \bea \label{Jan22_07} \cX^\oo_r (\oo') \= \oK^t \big(\otau(\oo)\+r,\oo'\big) \= \sX^{t,\bx} \big(t\+\otau(\oo)\+r,\oo'\big) . \eea Since \eqref{090520_11} implies that \bea \label{091820_21} \cX^\oo_0 (\oo') \= \oK^t \big(\otau(\oo),\oo'\big) \= \oK^t \big( \otau(\oo') ,\oo' \big) \= \oX^t \big(\otau(\oo) ,\oo \big) \= \oX (t_\oo ,\oo ) , \q \fa \oo' \ins \Wtzo \Cp \bK_\oo , \eea we see from \eqref{Jan22_07}, \eqref{Jan11_03} and \eqref{Jan19_03} that \beas \sX^\oo (s,\oo') \df \b1_{\{s \in [0,t_\oo]\}} \oX_s (\oo) \+ \b1_{\{s \in (t_\oo,\infty)\}} \big[ \b1_{\Wtzo \cap \bK_\oo \cap \ocN^c_{\n K} } \cX^\oo \big(s \- t_\oo, \oo' \big) \+ \b1_{\Wtzo^c \cup \bK^c_\oo \cup \ocN_{\n K} } \oX (t_\oo,\oo) \big] , ~ \fa (s,\oo') \ins [0,\infty) \ti \oO \eeas is a process with all continuous paths such that $\big\{\sX^\oo (t_\oo\+s,\oo') \big\}_{(s,\oo') \in [0,\infty) \times \oO} $ is $ \bF^{\oW^{t_\oo},\oP^t_{\otau,\oo}} -$adapted.
Given $\oo' \ins \Wtzo \Cp \bK_\oo \Cp \big(\ocN^1_{\n X} \cp \ocN^2_{\n X} \cp \ocN_{\n K}\big)^c $, as $\oo \ins \big( \ocN^1_{\n X} \big)^c$, \eqref{091620_21} renders that for any $r \ins [0,t_\oo]$, $ \oX_r(\oo) \= \b1_{\{r \in [0,t)\}} \bx(r) \+ \b1_{\{r \in [t,t_\oo]\}} \sX^{t,\bx}_r (\oo') \= \sX^{t,\bx}_r (\oo') $. For any $ s \ins [0,\infty)$, it follows from \eqref{091620_21} and \eqref{Jan22_07} that \bea \sX^\oo_s (\oo') & \tn \= & \tn \b1_{\{s \in [0,t)\}} \bx(s) \+ \b1_{\{s \in [t,t_\oo]\}} \oX_s (\oo') \+ \b1_{\{s \in (t_\oo,\infty)\}} \sX^{t,\bx}_s ( \oo' ) \= \oX_s ( \oo' ) , \aand \label{091720_21} \\ \sX^\oo \big((t_\oo\+s) \ld \cd, \oo' \big) & \tn \= & \tn \Big( \, \Sup{r \in [0,t_\oo]} \sX^{t,\bx}_r(\oo') \Big) \ve \Big( \, \Sup{r \in (t_\oo, t_\oo+s]} \sX^{t,\bx}_r ( \oo') \Big) \= \sX^{t,\bx} \big( (t_\oo \+ s) \ld \cd , \oo' \big) . \label{091620_25} \eea Let $n \ins \hN$ and set $\fs^n_i(\oo) \df \otau(\oo) \ve s^n_i \- \otau(\oo) $, $ i \= 1, \cds \n, \ell_n \+ 1 $ \big(in particular, $ \fs^n_1(\oo) \= 0 $\big). For any $ i \= 1, \cds \n, \ell_n$, as $ \wh{\xi}^n_i \ins \cF^{\oW^t}_{s^n_i} \sb \cF^{\oW^t}_{\otau(\oo)+\fs^n_i(\oo)} $, Lemma \ref{lem_Jan10_01b} shows that $\wh{\xi}^n_i$ coincides with some $\hR^{l \times d}-$valued $ \wh{\xi}^{\,\oo}_{n,i} \ins \cF^{\oW^{t_\oo}}_{\fs^n_i(\oo)} $ on $ \Wtzo $. For the $ \hR^{l \times d}-$valued, $ \bF^{\oW^{t_\oo}} -$simple process $ \ol{\cH}^{\oo,n}_r \df \sum_{i = 1}^{\ell_n} \wh{\xi}^{\,\oo}_{n,i} \, \b1_{ \{r \in (\fs^n_i(\oo), \fs^n_{i+1}(\oo)] \} } , \, r \ins [0,\infty) $, set $ \oM^{\oo,n}_s \df \int_0^s \n \ol{\cH}^{\oo,n}_r d\oW^{t_\oo}_r \= \sum_{i = 1}^{\ell_n} \wh{\xi}^{\,\oo}_{n,i} \Big( \oW^{t_\oo}_{s \land \fs^n_{i+1}(\oo)} \- \oW^{t_\oo}_{s \land \fs^n_i(\oo)} \Big) $, $ \fa s \ins [0, \infty) $. Next, let $\oo' \ins \oO_\oo \df \Wtzo \Cp \bK_\oo \Cp \big(\ocN^1_{\n X} \cp \ocN^2_{\n X} \cp \ocN_{\n K} \cp \ocN_\xi \big)^c $. 
Since \beas \ol{\cH}^{\oo,n}_r (\oo') & \tn \= & \tn \sum_{i = 1}^{\ell_n} \wh{\xi}^n_i(\oo') \, \b1_{ \{\otau(\oo)+r \in (\otau(\oo) \vee s^n_i, \otau(\oo) \vee s^n_{i+1}] \} } \= \sum_{i = 1}^{\ell_n} \oxi^n_i(\oo') \, \b1_{ \{\otau(\oo)+r \in ( s^n_i, s^n_{i+1}] \} } \= \ol{\cH}^n \big( \otau(\oo)\+r , \oo' \big) , \aand \nonumber \\ \oM^{\oo,n}_s (\oo') & \tn \= & \tn \sum_{i = 1}^{\ell_n} \wh{\xi}^n_i (\oo') \Big( \oW \big(t_\oo \+ s \ld \fs^n_{i+1}(\oo), \oo'\big) \- \oW \big(t_\oo \+ s \ld \fs^n_i(\oo), \oo'\big) \Big) \= \sum_{i = 1}^{\ell_n} \oxi^n_i (\oo') \Big( \oW^t \big( (\otau(\oo)\+s) \ld ( s^n_{i+1} \ve \otau(\oo)) , \oo'\big) \nonumber \\ & \tn & - \oW^t \big( (\otau(\oo)\+s) \ld ( s^n_i \ve \otau(\oo) ) , \oo'\big) \Big) \= \oM^n \big(\otau(\oo)\+s,\oo'\big) \- \oM^n \big(\otau(\oo),\oo'\big) , \q \fa s \ins [0,\infty) , \eeas we see from \eqref{091620_25} that \beas && \hspace{-1.2cm} \int_0^{\infty} \Big| \ol{\cH}^{\oo,n}_r( \oo') \- \si \big(t_\oo\+r,\sX^\oo \big((t_\oo\+r) \ld \cd, \oo' \big) \big) \Big|^2 dr \+ \underset{s \in [0,\infty)}{\sup} \Big| \oM^{\oo,n}_s (\oo') \- \oM \big(\otau(\oo)\+s,\oo'\big) \+ \oM \big(\otau(\oo) ,\oo'\big) \Big| \\ && \ls \int_{\otau(\oo)}^\infty \Big| \ol{\cH}^n (r',\oo') \- \si \big(t\+r',\sX^{t,\bx} \big((t\+r') \ld \cd,\oo'\big) \big) \Big|^2 dr' \+ 2 \underset{s \in [0,\infty)}{\sup} \big| \oM^n_s (\oo') - \oM_s (\oo') \big| \ls 2 \ol{\beta}_n (\oo') . \eeas Then \eqref{Jan11_03}, \eqref{Jan19_03} and \eqref{Jan22_09} imply that \bea \oP^t_{\otau,\oo} \n - \n \lmt{j \to \infty} \int_0^{\infty} \Big| \ol{\cH}^{\oo,n_j}_r \- \si \big(t_\oo\+r,\sX^\oo_{(t_\oo+r) \land \cd} \big) \Big|^2 dr \= 0 \; \hb{ and } \; \oP^t_{\otau,\oo} \n - \n \lmt{j \to \infty} \underset{s \in [0,\infty]}{\sup} \Big| \oM^{\oo,n_j}_s \- \oM_{\otau(\oo)+s} \+ \oM_{\otau(\oo)} \Big| \= 0 .
\q \label{Jan22_14} \eea As $ \oW^{t_\oo} $ is a Brownian motion with respect to the filtration $ \obF^{t_\oo} $ under $\oP^t_{\otau,\oo}$ by Part (1), applying Proposition 3.2.26 of \cite{Kara_Shr_BMSC} and using the first limit of \eqref{Jan22_14} yield that $ 0 = \oP^t_{\otau,\oo} \n - \n \lmt{j \to \infty} \, \underset{s \in [0,\infty]}{\sup} \, \Big| \int_0^s \ol{\cH}^{\oo,n_j}_r d \oW^{t_\oo}_r - \int_0^s \si \big(t_\oo\+r,\sX^\oo_{(t_\oo+r) \land \cd} \big) d \oW^{t_\oo}_r \Big| $, which together with the second limit of \eqref{Jan22_14} indicates that except on a $\oP^t_{\otau,\oo}-$null set $\ocN_\oo$ \bea \label{Jan22_17} \int_0^s \si \big(t_\oo\+r,\sX^\oo_{(t_\oo+r) \land \cd} \big) d \oW^{t_\oo}_r \= \oM_{\otau(\oo)+s} \- \oM_{\otau(\oo)} \= \int_{\otau(\oo)}^{\otau(\oo)+s} \n \si(t\+r,\sX^{t,\bx}_{(t+r) \land \cd}) d\oW^t_r , \q \fa s \ins [0,\infty) . \eea Let $\oo' \ins \oO_\oo \Cp \ocN^c_\oo $. We can deduce from \eqref{Jan22_17}, \eqref{Jan22_07}, \eqref{091820_21} and \eqref{091620_25} that for any $ s \ins [0,\infty) $ \beas && \hspace{-1.2cm} \Big( \int_0^s \si \big(t_\oo\+r,\sX^\oo_{(t_\oo+r) \land \cd} \big) d \oW^{t_\oo}_r \Big) (\oo') \= \sX^{t,\bx} \big(t\+\otau(\oo) \+ s,\oo'\big) \- \sX^{t,\bx} \big(t\+\otau(\oo),\oo'\big) \- \int_{\otau(\oo)}^{\otau(\oo)+s} \n b \big(t\+r,\sX^{t,\bx}_{(t+r) \land \cd} (\oo') \big) d r \\ && \= \cX^\oo_s (\oo') \- \oK^t \big( \otau(\oo) ,\oo'\big) \- \int_0^s \n b \big(t\+\otau(\oo)\+r',\sX^{t,\bx} \big((t\+\otau(\oo)\+r') \ld \cd ,\oo'\big) \big) d r' \\ && \= \sX^\oo (t_\oo \+ s, \oo') \- \oX^t \big( \otau(\oo) ,\oo \big) \- \int_0^s \n b \big(t_\oo\+r,\sX^\oo \big((t_\oo\+r) \ld \cd, \oo' \big) \big) d r . \eeas As $ \oP^t_{\otau,\oo} \big(\oO_\oo \Cp \ocN^c_\oo \big) \= 1$ by \eqref{Jan11_03} and \eqref{Jan19_03}, it holds equivalently that $ \oP^t_{\otau,\oo} - $a.s.
\beas \sX^\oo_s \= \oX ( t_\oo ,\oo ) \+ \int_{t_\oo}^s \n b \big(r,\sX^\oo_{r \land \cd} \big) d r \+ \int_{t_\oo}^s \si \big( r,\sX^\oo_{r \land \cd} \big) d \oW_r , \q \fa s \ins [t_\oo,\infty). \eeas Namely, $ \big\{\sX^\oo_s\big\}_{s \in [0,\infty)} $ solves the SDE \eqref{Ju01_01c}. We also see from \eqref{091720_21} that $ \oP^t_{\otau,\oo} \big\{ \sX^\oo_s \= \oX_s , \fa s \ins [0,\infty)\big\} \= 1$. Hence, for any $\oo \ins \ocN^c_\sharp $, $ \oP^t_{\otau,\oo} $ satisfies the second statement of this Proposition. \qed Let $t \ins [0,\infty)$, $\oP \ins \fP(\oO)$ and let $\otau$ be a $[0,\infty)-$valued $\bF^{\oW^t}-$stopping time. We define \beas \oY^i_\oP (\otau) \df E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT g_i(r,\oX_{r \land \cd} ) dr \Big| \cF(\otau) \bigg] , \q \oZ^i_\oP (\otau) \df E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT h_i(r,\oX_{r \land \cd} ) dr \Big| \cF(\otau) \bigg], \q \fa i \ins \hN , \eeas and set $ \big(\oY_\oP (\otau), \oZ_\oP (\otau) \big) \df \Big( \big\{\oY^i_\oP (\otau)\big\}_{i \in \hN}, \big\{\oZ^i_\oP (\otau)\big\}_{i \in \hN} \Big) $. \begin{cor} \label{cor_ocP_invariant} Given $ (t,\bx ) \ins [0,\infty) \ti \OmX $, let $\otau $ be a $(0,\infty)-$valued $\bF^{\oW^t}-$stopping time. If $\oP \ins \fP(\oO)$ satisfies Definition \ref{def_ocP} \(i\) and \(ii\), there is a $\oP-$null set $\ocN $ such that \beas \oP^t_{\otau,\oo} \ins \ocP_{t+\otau(\oo),\oX_{(t+\otau) \land \cd} (\oo)} \big( \oY_\oP (\otau ) (\oo), \oZ_\oP (\otau ) (\oo) \big), \q \fa \oo \ins \oAtc \Cp \ocN^c . \eeas \end{cor} \no {\bf Proof of Corollary \ref{cor_ocP_invariant}:} We have seen from the proof of Proposition \ref{prop_flow} that there is a $\oP-$null set $\ocN_\sharp $ such that for any $\oo \ins \ocN^c_\sharp$, $\oP^t_{\otau,\oo}$ satisfies Definition \ref{def_ocP} (i), (ii) with $(t,\bx) \= \big(t\+\otau(\oo),\oX_{(t+\otau) \land \cd} (\oo)\big)$.
By (R2), there is a $\ocN_{\oT,g,h} \ins \sN_\oP \big( \cF(\otau) \big) $ such that for any $ \oo \ins \ocN^c_{\oT,g,h}$ \beas && \q \oP^t_{\otau,\oo}\{ \oT \gs t \+\otau \} \= E_{\oP^t_{\otau,\oo}} \big[\b1_{\{ \oT \ge t +\otau \}} \big] \= E_\oP \big[\b1_{\{ \oT \ge t +\otau \}} |\cF(\otau) \big](\oo) \= E_\oP \big[\b1_\oAtc |\cF(\otau) \big](\oo) \= \b1_{\{ \oo \in \oAtc \}} \q \hb{and} \\ && \hspace{-0.7cm} E_{\oP^t_{\otau,\oo}} \Big[ \int_{\oT \land (t+\otau)}^\oT (g_i,h_i) (r, \oX_{r \land \cd}) dr \Big] \= E_\oP \Big[ \int_{\oT \land (t+\otau)}^\oT (g_i,h_i) (r,\oX_{r \land \cd} ) dr \big| \cF(\otau) \Big] (\oo) \= \big( \oY^i_\oP (\otau) (\oo), \oZ^i_\oP (\otau) (\oo)\big) , ~ \fa i \ins \hN . \eeas For any $ \oo \ins \oAtc \Cp \Big( \ocN_0 \cp \ocN_{\oT,g,h} \Big)^c $, \eqref{Jan11_03} and \eqref{090520_11} imply that $ 1 \= \oP^t_{\otau,\oo}\{ \oT \gs t \+\otau \} \= \oP^t_{\otau,\oo}\big( \Wtzo \Cp \{ \oT \gs t \+\otau \}\big) \= \oP^t_{\otau,\oo}\big( \Wtzo \Cp \{ \oT \gs t \+\otau (\oo) \}\big) \= \oP^t_{\otau,\oo}\{ \oT \gs t \+ \otau(\oo) \} $ and \bea \big( \oY^i_\oP (\otau) (\oo), \oZ^i_\oP (\otau) (\oo)\big) & \tn \= & \tn E_{\oP^t_{\otau,\oo}} \Big[ \int_{\oT \land (t+\otau)}^\oT (g_i,h_i) (r, \oX_{r \land \cd}) dr \Big] \= E_{\oP^t_{\otau,\oo}} \bigg[ \b1_{\Wtzo} \int_{\oT \land (t+\otau(\oo))}^\oT (g_i,h_i)(r, \oX_{r \land \cd}) dr \bigg] \nonumber \\ & \tn \= & \tn E_{\oP^t_{\otau,\oo}} \bigg[ \int_{ t+\otau (\oo) }^{\oT } (g_i,h_i)(r, \oX_{r \land \cd}) dr \bigg] , \q \fa i \ins \hN. \label{090620_21} \eea Hence, $\oP^t_{\otau,\oo} \ins \ocP_{t+\otau(\oo),\oX_{(t+\otau) \land \cd}(\oo)} \big( \oY_\oP (\otau ) (\oo) , \oZ_\oP (\otau ) (\oo) \big)$ for any $\oo \ins \oAtc \Cp \Big( \ocN_\sharp \cp \ocN_{\oT,g,h} \Big)^c$. \qed Corollary \ref{cor_ocP_invariant} implies that the probability class $ \big\{ \ocP_{t,\bx}(y,z) \n : (t,\bx,y,z) \ins \oD \big\}$ is stable under {\it conditioning}. 
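To make this stability more transparent, note the following straightforward consequence of the tower property and the definition of $\ocP_{t,\bx}(y,z)$ (using that $\oP \big\{ \oT \gs t \big\} \= 1$): for any $\oP \ins \ocP_{t,\bx}(y,z)$ and $i \ins \hN$, \beas E_\oP \bigg[ \int_t^{\oT \land (t+\otau)} g_i(r,\oX_{r \land \cd} ) dr \+ \oY^i_\oP (\otau) \bigg] \= E_\oP \bigg[ \int_t^{\oT} g_i(r,\oX_{r \land \cd} ) dr \bigg] \ls y_i \q \hb{and} \q E_\oP \bigg[ \int_t^{\oT \land (t+\otau)} h_i(r,\oX_{r \land \cd} ) dr \+ \oZ^i_\oP (\otau) \bigg] \= z_i . \eeas In this sense, $ \big(\oY_\oP (\otau), \oZ_\oP (\otau) \big) $ records the remaining budgets of the inequality and equality constraints at the intermediate horizon $t \+ \otau$.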
It will play an important role in deriving the sub-solution part of the DPP for $\oV$. Now, we are ready to present a dynamic programming principle in the weak formulation for the value of the optimal stopping with expectation constraints, in which $ \big(\oY_\oP (\otau),\oZ_\oP (\otau)\big) $ acts as additional states at the intermediate horizon $\otau$. \begin{thm} \label{thm_DPP} Let $ (t,\bx) \ins [0,\infty) \ti \OmX $ and let $(y,z) \ins \cR$ such that $ \ocP_{t,\bx} (y,z) \nne \es $. Let $\big\{\otauP \big\}_{\oP \in \ocP_{t,\bx}(y,z)}$ be a family of $(0,\infty)-$valued $\bF^{\oW^t} - $stopping times. Then \beas &&\hspace{-1.5cm} \oV(t,\bx,y,z) \= \Sup{\oP \in \ocP_{t,\bx}(y,z)} \n E_\oP \bigg[ \b1_{\{\oT < t+\otauP \}} \bigg( \n \int_t^\oT \n f(r,\oX_{r \land \cd}) dr \+ \pi \big(\oT, \oX_{\oT \land \cd}\big) \bigg) \\ &&\hspace{2.3cm} + \b1_{\{\oT \ge t+\otauP \}} \bigg( \n \int_t^{t+\otauP} \n f(r,\oX_{r \land \cd}) dr \+ \oV \Big( t \+\otauP ,\oX_{(t+\otauP) \land \cd} , \oY_\oP \big( \otauP \big) , \oZ_\oP \big( \otauP \big) \Big) \bigg) \bigg] . \eeas \end{thm} \no {\bf Proof of Theorem \ref{thm_DPP}:} We set $\oz \df \b1_{\{\oT < \infty\}} \pi \big(\oT, \oX_{\oT \land \cd}\big) $. \no {\bf (I) (sub-solution side)} Fix $\oP \ins \ocP_{t,\bx}(y,z)$ and simply denote $\otauP$ by $\otau$. According to Corollary \ref{cor_ocP_invariant}, there is a $\oP-$null set $\ocN_* $ such that \bea \label{Feb01_07} \oP^t_{\otau,\oo} \ins \ocP_{t+\otau(\oo),\oX_{(t+\otau) \land \cd}(\oo)} \big( \oY_\oP (\otau ) (\oo), \oZ_\oP (\otau ) (\oo) \big) , \q \fa \oo \ins \oAtc \Cp \ocN^c_* . \eea And (R2) shows that for some $\ocN_{f,\pi} \ins \sN_\oP \big(\cF(\otau)\big)$ \bea E_{\oP^t_{\otau,\oo}} \bigg[ \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \, \bigg] \= E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \, \Big| \cF(\otau) \bigg] (\oo) , \q \fa \oo \ins \ocN^c_{f,\pi} . 
\label{Feb01_03} \eea Fix $\oo \ins \oAtc \Cp \big( \ocN_* \cp \ocN_{f,\pi} \big)^c$. Since $ E_{\oP^t_{\otau,\oo}} \big[ \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \big] \= E_{\oP^t_{\otau,\oo}} \big[ \int_{ t+\otau (\oo) }^{\oT } f (r, \oX_{r \land \cd}) dr \+ \oz \big] $ by analogy with \eqref{090620_21}, we see from \eqref{Feb01_03} and \eqref{Feb01_07} that \beas \hspace{-0.5cm} E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \Big| \cF(\otau) \bigg] (\oo) \= E_{\oP^t_{\otau,\oo}} \bigg[ \int_{ t+\otau (\oo) }^{\oT } f (r,\oX_{r \land \cd} ) dr \+ \oz \bigg] \ls \oV \Big( t\+\otau(\oo),\oX_{(t+\otau) \land \cd}(\oo), \oY_\oP (\otau ) (\oo) , \oZ_\oP (\otau ) (\oo) \Big) . \eeas It follows that \beas && \hspace{-1.2cm} E_\oP \Big[ \b1_{\{ \oT \ge t + \otau \}} \oV \big( t\+\otau , \oX_{(t+\otau) \land \cd}, \oY_\oP (\otau ) , \oZ_\oP (\otau ) \big) \Big] \= E_\oP \Big[ \b1_\oAtc \oV \big( t\+\otau , \oX_{(t+\otau) \land \cd}, \oY_\oP (\otau ) , \oZ_\oP (\otau ) \big) \Big] \\ & & \gs E_\oP \bigg[ \b1_{\oAtc} E_\oP \Big[ \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \Big| \cF(\otau) \Big] \bigg] \= E_\oP \bigg[ E_\oP \Big[ \b1_{\oAtc} \Big( \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \Big) \Big| \cF(\otau) \Big] \bigg] \\ & & \= E_\oP \bigg[ \b1_{\oAtc} \Big( \int_{\oT \land (t+\otau)}^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \Big) \bigg] \= E_\oP \bigg[ \b1_{\{\oT \ge t + \otau \}} \Big( \int_{ t+\otau }^\oT f (r,\oX_{r \land \cd} ) dr \+ \oz \Big) \bigg] \eeas and thus \beas && \hspace{-1.8cm} E_\oP \bigg[ \int_t^\oT \n f (r,\oX_{r \land \cd} ) dr \+ \oz \bigg] \= E_\oP \bigg[ \b1_{\{\oT < t+\otau \}} \bigg( \int_t^\oT \n f (r,\oX_{r \land \cd} ) dr \+ \oz \bigg) \+ \b1_{\{\oT \ge t+\otau \}} \bigg( \int_t^{ t+\otau } \n f (r,\oX_{r \land \cd} ) dr \+ \int_{ t+\otau }^\oT \n f (r,\oX_{r \land \cd} ) dr \+ \oz \bigg)\bigg] \\ && \hspace{-1.3cm} \ls E_\oP \Bigg[ \b1_{\{\oT < t+\otauP \}} \bigg( \int_t^\oT \n f(r,\oX_{r \land \cd}) dr \+ \pi \big(\oT, \oX_{\oT \land \cd}\big) \bigg) \+ \b1_{\{\oT \ge t+\otauP \}} \bigg( \int_t^{t+\otauP} \n f(r,\oX_{r \land \cd}) dr \+ \oV \Big( t \+\otauP ,\oX_{(t+\otauP) \land \cd} , \oY_\oP ( \otauP ) , \oZ_\oP (\otauP ) \Big) \bigg) \Bigg] . \eeas Taking the supremum over $\oP \ins \ocP_{t,\bx}(y,z)$ yields the sub-solution part of the DPP for $\oV$.
\no {\bf (II) (super-solution side)} Given $\e \ins (0,1)$, according to Proposition 7.50 of \cite{Bertsekas_Shreve_1978}, Proposition \ref{prop_graph_ocP} and Theorem \ref{thm_V_usa}, there is an analytically measurable function $ \ol{\bQ}_\e \n : \ocD \mto \fP(\oO) $ such that for any $(\ft,\bw,\fx,\fy,\fz) \ins \ocD $, $ \ol{\bQ}_\e (\ft,\bw,\fx,\fy,\fz) $ belongs to $ \ocP_{\ft,\bw,\fx}(\fy,\fz) $ and satisfies \bea \label{081620_19} \hspace{-1cm} E_{\ol{\bQ}_\e (\ft,\bw,\fx,\fy,\fz)} \bigg[ \int_\ft^{\oT } f\big(r,\oX_{r \land \cd} \big) dr \+ \b1_{\{\oT < \infty\}} \pi \big(\oT ,\oX_{\oT \land \cd} \big) \bigg] \gs \left\{ \ba{ll} \ocV(\ft,\bw,\fx,\fy,\fz) \- \e , & \hb{ if } \ocV(\ft,\bw,\fx,\fy,\fz) \< \infty ; \\ 1/\e , & \hb{ if } \ocV(\ft,\bw,\fx,\fy,\fz) \= \infty . \ea \right. \eea Fix $\oP \ins \ocP_{t,\bx}(y,z)$. We simply denote $\otau_\oP$ by $\otau$ and let $\cG(\otau)$ be the sigma field of $\oO$ generated by $ \cF^{\oXi^t}_\otau $ and the set $\oAt$, which consists of all sets of the form $\big( \oA_1 \Cp \oAt \big) \cp \big( \oA_2 \Cp \oAtc\big) $ with $ \oA_1, \oA_2 \ins \cF^{\oXi^t}_\otau $. It is clear that $\cF(\otau) \sb \cG(\otau) \sb \ocG^t_\otau $. For any $\oo \ins \oO$, set $t_\oo \df t \+ \otau(\oo)$. 
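The claim about the structure of $\cG(\otau)$ is a routine verification, which we record for completeness: since $\oAt$ and $\oAtc$ partition $\oO$, the collection of sets of the above form is closed under complements and countable unions, namely for any $\oA_1, \oA_2 \ins \cF^{\oXi^t}_\otau$ and $\big\{\oA^n_1\big\}_{n \in \hN}, \big\{\oA^n_2\big\}_{n \in \hN} \sb \cF^{\oXi^t}_\otau$, \beas \Big[ \big( \oA_1 \Cp \oAt \big) \cp \big( \oA_2 \Cp \oAtc \big) \Big]^c \= \big( \oA^c_1 \Cp \oAt \big) \cp \big( \oA^c_2 \Cp \oAtc \big) \aand \ccup{n=1}{\infty} \Big[ \big( \oA^n_1 \Cp \oAt \big) \cp \big( \oA^n_2 \Cp \oAtc \big) \Big] \= \Big[ \Big( \ccup{n=1}{\infty} \oA^n_1 \Big) \Cp \oAt \Big] \cp \Big[ \Big( \ccup{n=1}{\infty} \oA^n_2 \Big) \Cp \oAtc \Big] . \eeas Moreover, taking $\oA_1 \= \oA_2$ recovers every set of $\cF^{\oXi^t}_\otau$ and taking $(\oA_1,\oA_2) \= (\oO,\es)$ recovers $\oAt$, so this collection is exactly the sigma field generated by $\cF^{\oXi^t}_\otau$ and $\oAt$.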
To show \bea &&\hspace{-1.5cm} E_\oP \Bigg[ \b1_{\{\oT < t+\otau \}} \bigg( \int_t^\oT f(r,\oX_{r \land \cd}) dr \+ \oz \bigg) \nonumber \\ && + \b1_{\{\oT \ge t+\otau \}} \bigg( \int_t^{t+\otau} f(r,\oX_{r \land \cd}) dr \+ \oV \Big( t \+\otau ,\oX_{(t+\otau) \land \cd} , \oY_\oP \big(\otau\big) , \oZ_\oP \big(\otau\big) \Big) \bigg) \Bigg] \ls \oV (t,\bx,y,z) , \q \label{081720_15} \eea we can assume without loss of generality that \bea \hspace{-1cm} E_\oP \bigg[ \b1_{\{\oT < t + \otau\}} \Big( \int_t^\oT f^- \big(r,\oX_{r \land \cd} \big) dr \+ \oz^- \Big) \+ \b1_{\{\oT \ge t + \otau \}} \bigg( \int_t^{t + \otau} f^- \big(r,\oX_{r \land \cd} \big) dr \+ \oV^- \Big( t \+ \otau , \oX_{(t+\otau) \land \cd} , \oY_\oP \big(\otau\big) , \oZ_\oP \big(\otau\big) \Big) \bigg) \bigg] \< \infty . \q \label{081620_23} \eea \no {\bf II.a)} As in Part (2a) of the Proof of Proposition \ref{prop_flow}, we still define the sets $ \ocN^1_{\n X} \df \big\{ \oo \ins \oO \n : \oX_s (\oo) \nne \bx(s) \hb{ for some } s \ins [0,t] \big\} \ins \sN_\oP\big(\cF^\oX_t\big)$ and $ \ocN^2_{\n X} \df \big\{ \oo \ins \oO \n : \oX^t_s (\oo) \nne \sX^{t,\bx}_{t+s} (\oo) \hb{ for some } s \ins [0,\infty) \big\} \ins \sN_\oP \big( \cF^{\oXi^t}_\infty\big) $. There is an $\bF^{\oW^t}-$predictable process $ \big\{ \oK^t_s \big\}_{s \in [0,\infty)}$ such that $ \ocN_{\n K} \df \big\{ \oo \ins \oO \n : \oK^t_s (\oo) \nne \sX^{t,\bx}_{t+s} (\oo) $ for some $s \ins [0,\infty) \big\} \ins \sN_\oP \big(\cF^{\oW^t}_\infty\big)$. Define the path random variables $ \big( \oW^{t,\otau}, \oX^{t,\otau} \big) \n : \oO \mto \O_0 \ti \OmX$ by \beas \big( \oW^{t,\otau}_r, \oX^{t,\otau}_r\big) (\oo) \df \big( \oW \big( (r \ve t) \ld ( t \+ \otau(\oo)) , \oo \big) \- \oW(t,\oo), \oX \big( r \ld (t\+ \otau(\oo)) , \oo \big) \big) , \q \fa (r,\oo) \ins [0,\infty) \ti \oO . \eeas Clearly, $ \oW^{t,\otau} $ is $ \cF^{\oW^t}_\otau \big/ \sB(\O_0) -$measurable. 
As $ \oX^t_\cd $ coincides with $\oK^t_\cd $ on $ \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c $, one can deduce that $ \oX^{t,\otau} $ is $ \si \Big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oXi^t}_\infty\big) \Big) \Big/ \sB(\OmX)-$measurable. For any $i \ins \hN$, since $\big(\oY^i_\oP ( \otau ),\oZ^i_\oP ( \otau ) \big) $ is $\cF(\otau)-$measurable, there are $(-\infty,\infty] \ti [-\infty,\infty]-$valued $\cF^{\oW^t}_\otau -$measurable random variables $\big(\ol{\cY}^i_1,\ol{\cZ}^i_1\big)$, $\big(\ol{\cY}^i_2,\ol{\cZ}^i_2\big)$ such that \bea \label{062720_11} \big( \oY^i_\oP ( \otau ) , \oZ^i_\oP ( \otau ) \big) (\oo) \= \big( \ol{\cY}^i_1(\oo),\ol{\cZ}^i_1(\oo)\big) \b1_{\{\oo \in \oAt \}} \+ \big( \ol{\cY}^i_2(\oo),\ol{\cZ}^i_2(\oo)\big) \b1_{\{\oo \in \oAtc\}} , \q \fa \oo \ins \oO . \eea We set $\big(\ol{\cY}_1,\ol{\cZ}_1 \big) \df \big( \big\{\ol{\cY}^i_1\}_{i \in \hN},\big\{\ol{\cZ}^i_1\}_{i \in \hN} \big) $ and $\big(\ol{\cY}_2,\ol{\cZ}_2 \big) \df \big( \big\{\ol{\cY}^i_2\}_{i \in \hN},\big\{\ol{\cZ}^i_2\}_{i \in \hN} \big) $. Let $\ddot{\O} \df [0,\infty) \ti \O_0 \ti \OmX \ti [0,\infty] $. Putting the above measurability together shows that the mapping \beas \ol{\Psi}_\tau (\oo) \df \Big( t \+ \otau (\oo) , \oW^{t,\otau} (\oo) , \oX^{t,\otau} (\oo) , \ol{\cY}_2 (\oo) , \ol{\cZ}_2 (\oo) \Big) \ins \ddot{\O} , \q \fa \oo \ins \oO \eeas is $ \si \Big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oXi^t}_\infty\big) \Big) \Big/ \sB \big( \ddot{\O} \big) -$measurable, which induces a probability $ \ddot{P}_\otau \df \oP \circ \ol{\Psi}^{-1}_\otau $ on $ \big( \ddot{\O},\sB ( \ddot{\O} )\big)$. Then $\ol{\Psi}_\tau$ is further $ \si \Big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oW^t}_\infty \ve \cF^\oX_\infty\big) \Big) \Big/ \si \Big( \sB ( \ddot{\O} ) \cp \sN_{\ddot{P}_\otau} \big( \sB ( \ddot{\O} ) \big) \Big) -$measurable. 
Since the universally measurable function $\ocV$ is $ \si \Big( \sB ( \ddot{\O} ) \cp \sN_{\ddot{P}_\otau} \big( \sB ( \ddot{\O} ) \big) \Big) \Big/ \sB (-\infty, \infty] -$measurable by Theorem \ref{thm_V_usa}, we see that \bea \ocV \big( \ol{\Psi}_\tau (\oo) \big) \= \ocV \Big( t \+ \otau (\oo) , \oW^{t,\otau} (\oo) , \oX^{t,\otau} (\oo) , \ol{\cY}_2(\oo), \ol{\cZ}_2 (\oo) \Big) \ins (-\infty, \infty], \q \fa \oo \ins \oO \label{081620_45} \eea is $ \si \Big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oW^t}_\infty \ve \cF^\oX_\infty\big) \Big) \Big/ \sB (-\infty, \infty] -$measurable. \no {\bf II.b)} Fix $\e \ins (0,1)$ through Part (II.e). The analytically measurable function $ \ol{\bQ}_\e$ is also universally measurable. In particular, it is $ \si \big( \sB ( \ddot{\O} ) \cup \sN_{\ddot{P}_\otau} \big( \sB ( \ddot{\O} ) \big) \big) \big/ \\ \sB \big( \fP(\oO) \big) -$measurable and \bea \hspace{-0.8cm} \oQ^\oo_\e \df \ol{\bQ}_\e \big( \ol{\Psi}_\tau (\oo) \big) \= \ol{\bQ}_\e \Big( t \+ \otau (\oo) , \oW^{t,\otau} (\oo) , \oX^{t,\otau} (\oo) , \ol{\cY}_2(\oo), \ol{\cZ}_2 (\oo) \Big) \ins \ocP_{t + \otau (\oo) , \oW^{t,\otau} (\oo) , \oX^{t,\otau} (\oo)} \big( \ol{\cY}_2(\oo), \ol{\cZ}_2 (\oo) \big) , ~ \fa \oo \ins \oO \q \label{April07_11} \eea is thus $ \si \Big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oW^t}_\infty \ve \cF^\oX_\infty\big) \Big) \Big/ \sB\big(\fP(\oO)\big) -$measurable. Let $\oo \ins \oO$ and set $ \oO^t_{\otau,\oo} \df \Big\{ \oo' \ins \oO \n: (\oW_s,\oX_s) (\oo') \= \big( \oW^{t,\otau}_s, \oX^{t,\otau}_s \big) (\oo) , \fa s \ins \big[0,t\+\otau(\oo)\big] \Big\} $. By \eqref{April07_11}, \bea \label{072820_15} \oQ^\oo_\e \big( \oO^t_{\otau,\oo} \big) \= 1 \aand \oQ^\oo_\e \big\{ \oT \gs t \+ \otau(\oo) \big\} \= \oQ^\oo_\e \big\{\oo' \ins \oO \n : \oT(\oo') \gs t \+ \otau(\oo) \big\} \= 1 . 
\eea Since the set $ \oXi^t_{\otau,\oo} \df \big\{ \oo' \ins \oO \n: \oXi^t_r (\oo') \= \oXi^t_r (\oo) , \fa r \ins \big[0, \otau(\oo)\big] \big\} $ satisfies \beas \oO^t_{\otau,\oo} & \tn \sb & \tn \big\{ \oo' \ins \oO \n: \oW_r (\oo') \= 0 , \fa r \ins [0,t] ; \oW_r (\oo') \= \oW_r (\oo) \- \oW_t (\oo) , \fa r \ins (t,t\+\otau(\oo)] ; \oX_r(\oo') \= \oX_r(\oo) , \fa r \ins [0,t\+\otau(\oo)] \big\} \\ & \tn \sb & \tn \big\{ \oo' \ins \oO \n: \oW_r (\oo') \- \oW_t (\oo') \= \oW_r (\oo) \- \oW_t (\oo), \fa r \ins [t,t\+\otau(\oo)] ; \, \oX^t_r(\oo') \= \oX^t_r(\oo), \fa r \ins [0, \otau(\oo)] \big\} \= \oXi^t_{\otau,\oo} \sb \Wtzo , \eeas we further have \bea \label{062820_14} \oQ^\oo_\e \big( \Wtzo \big) \= \oQ^\oo_\e \big( \oXi^t_{\otau,\oo} \big) \= 1 , \q \fa \oo \ins \oO . \eea Then \eqref{090520_11} and \eqref{072820_15} render that for any $ \oo \ins \oO$, $ \oQ^\oo_\e \{ \oT \gs t \+ \otau \} \= \oQ^\oo_\e \big( \Wtzo \Cp \{ \oT \gs t \+ \otau(\oo) \} \big) \= 1 $. So \bea \label{081320_27} \oQ^\oo_\e \big(\oAt\big) \= 0 \aand \oQ^\oo_\e \big(\oAtc\big) \=1 , \q \fa \oo \ins \oO . \eea The following SDE on $\big(\oO , \sB(\oO) , \oQ^\oo_\e\big) $ \beas \hspace{-1cm} \sX_s \= \oX(t_\oo,\oo) \+ \n \int_{t_\oo}^s \n b ( r, \sX_{r \land \cd})dr \+ \n \int_{t_\oo}^s \n \si ( r, \sX_{r \land \cd}) d \oW_r, \q s \ins [t_\oo,\infty) \eeas with initial condition $ \sX_s \= \oX_s(\oo) $, $\fa s \ins [0,t_\oo]$ admits a unique solution $ \Big\{ \fX^\oo_s \= \sX^{t_\oo,\oX_{(t+\otau) \land \cd}(\oo)}_s \Big\}_{s \in [0,\infty)}$ \big(In particular, $ \big\{\fX^\oo_s\big\}_{s \in [0,\infty)}$ is an $ \bF^{\oW^{t_\oo},\oQ^\oo_\e}-$adapted process with all continuous paths satisfying $ \oQ^\oo_\e \big\{\fX^\oo_{t_\oo+s} \= \oX (t_\oo,\oo) \+ \int_0^s b (t_\oo\+r,\fX^\oo_{(t_\oo+r) \land \cd}) dr \+ \int_0^s \si (t_\oo\+r, \fX^\oo_{(t_\oo+r) \land \cd}) d \oW^{t_\oo}_r , \fa s \ins [0,\infty)\big\} \= 1$\big). 
By \eqref{April07_11} and Definition \ref{def_ocP} (i), $ \ocN^{\,\oo}_{\n X,2} \df \big\{ \oo' \ins \oO \n : \oX^{t_\oo}_s (\oo') \nne \fX^\oo (t_\oo\+s,\oo') \hb{ for some } s \ins [0,\infty) \big\} \ins \sN_{\oQ^\oo_\e} \Big(\cF^{\oXi^{t_\oo}}_\infty \Big) $. Also, there is an $\bF^{\oW^{t_\oo}}-$predictable process $ \big\{ K^\oo_s \big\}_{s \in [0,\infty)}$ such that $ \ocN^{\,\oo}_K \df \big\{ \oo' \ins \oO \n : K^\oo_s (\oo') \nne \fX^\oo (t_\oo\+s,\oo') \hb{ for some } s \ins [0,\infty) \big\} \ins \sN_{\oQ^\oo_\e} \Big(\cF^{\oW^{t_\oo}}_\infty\Big) $. Given $\ocA \ins \sB (\oO)$, we claim that \bea \label{May06_25} \oQ^\oo_\e (\oA \Cp \ocA) \= \b1_{ \{\oo \in \oA \}} \oQ^\oo_\e (\ocA) , \q \fa \oA \ins \cG(\otau) , ~ \fa \oo \ins \oAtc . \eea To see this, we let $ \oA_1, \oA_2 \ins \cF^{\oXi^t}_\otau $. When $\oo \ins \oA_2 $, set $\fs \df \otau(\oo)$. Since $ \oA_2 \Cp \{\otau \ls \fs\} $ is a $ \cF^{\oXi^t}_\fs-$measurable set including $\oo $, \bea \label{090920_15} \hb{$ \oXi^t_{\otau,\oo} \= \big\{ \oo' \ins \oO \n: \oXi^t_r (\oo') \= \oXi^t_r (\oo) , \fa r \ins [0, \fs] \big\} $ is also contained in $ \oA_2 \Cp \{\otau \ls \fs\} $.} \eea By \eqref{062820_14}, $ \oQ^\oo_\e \big(\oA_2 \big) \= 1 $ and thus $\oQ^\oo_\e \big( \oA_2 \Cp \ocA \big) \= \oQ^\oo_\e \big( \ocA \big) $. Similarly, for any $\oo \ins \oA^c_2 $ one has $ \oQ^\oo_\e (\oA^c_2 ) \= 1 $ and thus $\oQ^\oo_\e \big( \oA_2 \Cp \ocA \big) \= 0 $. Then \eqref{081320_27} implies that for $\oA \= \big( \oA_1 \Cp \oAt \big) \cp \big( \oA_2 \Cp\oAtc\big) \ins \cG(\otau) $ \beas \oQ^\oo_\e \big( \oA \Cp \ocA \big) \= \oQ^\oo_\e \big( \oA_1 \Cp \oAt \Cp \ocA \big) \+ \oQ^\oo_\e \big( \oA_2 \Cp\oAtc \Cp \ocA \big) \= \oQ^\oo_\e \big( \oA_2 \Cp \ocA \big) \= \b1_{\{\oo \in \oA_2\}} \oQ^\oo_\e \big( \ocA \big) \= \b1_{\{\oo \in \oA \}} \oQ^\oo_\e \big( \ocA \big) , ~ \fa \oo \ins \oAtc . 
\eeas Let us consider a pasted probability $\oP_\e \ins \fP(\oO)$: \bea \label{092520_11} \oP_\e (\oA) \df \oP \big(\oAt \Cp \oA \big) \+ \int_{\oo \in \oAtc} \oQ^\oo_\e (\oA ) \oP(d\oo) , \q \fa \oA \ins \sB (\oO) . \eea In particular, taking $\ocA \= \oO$ in \eqref{May06_25} renders that \bea \oP_\e (\oA) \= \oP \big(\oAt \Cp \oA \big) \+ \int_{\oo \in \oAtc} \b1_{ \{\oo \in \oA\} } \oP(d\oo) \= \oP (\oA) , \q \fa \oA \ins \cG(\otau) . \label{052420_21} \eea We shall verify in the next steps that the pasted probability $\oP_\e $ also belongs to $\ocP_{t,\bx}(y,z)$; that is, the probability class $\ocP_{t,\bx}(y,z)$ is stable under the pasting \eqref{092520_11}. \no {\bf II.c)} In this step, we demonstrate that $ \oW^t $ is a $d-$dimensional standard Brownian motion with respect to filtration $ \obG^t $ under $\oP_\e$. Let $ 0 \ls s \< r \< \infty $ and $ \cE \ins \sB(\hR^d) $. We need to show that \bea \label{May08_01} \oP_\e \big\{ \big(\oW^t_r \- \oW^t_s \big)^{-1} (\cE) \Cp \oA \big\} \= \phi(r\-s,\cE) \oP_\e \big( \oA \big) , \q \fa \oA \ins \ocG^t_s . \eea To verify \eqref{May08_01}, we let $\big\{(s_i,\cO_i,\cO'_i)\big\}^n_{i=1} \sb \hQ_s \ti \sO(\hR^{d+l}) \ti \sO(\hR^{d+l})$ and set \beas \oA \df \ccap{i=1}{n} \Big(\big[ \big( \oXi^t_{ s_i } \big)^{-1} (\cO_i) \Cp \{ \oT \ins [t ,t \+ s_i ]\} \big] \cp \big[ \big( \oXi^t_{ s_i } \big)^{-1} (\cO'_i) \Cp \{ \oT \ins [t ,t \+ s_i ]^c\} \big] \Big) \ins \ol{\sC}^t_s . \eeas \no {\bf II.c.1)} Let $ \oo \ins \{\otau \> s \} \Cp \oA \Cp \{\oT \gs t \+ \otau\} $. For $i \= 1,\cds \n ,n$, since $ \oT (\oo) \gs t \+ \otau (\oo) \> t \+ s \gs t\+s_i $ and since $\oo \ins \oA $, we see that $ \oXi^t_{ s_i } (\oo ) \ins \cO'_i $. 
It follows that $ \oXi^t_{\otau,\oo} \sb \big\{ \oo' \ins \oO \n: \oXi^t_{s_i} (\oo') \= \oXi^t_{s_i} (\oo) \ins \cO'_i , \, i \= 1,\cds \n ,n \big\} \sb \ccap{i=1}{n} \big(\oXi^t_{s_i}\big)^{-1} (\cO'_i) $, and thus $ \oXi^t_{\otau,\oo} \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \sb \ccap{i=1}{n} \Big( \big(\oXi^t_{s_i}\big)^{-1} (\cO'_i) \Cp \{\oT \ins [t,t \+ s_i]^c\} \Big) \sb \oA $. By \eqref{072820_15} and \eqref{062820_14}, \bea \label{May09_41} \oQ^\oo_\e ( \oA ) \= 1 , \q \fa \oo \ins \{\otau \> s \} \Cp \oA \Cp \{\oT \gs t \+ \otau\} . \eea Next, let $ \oo \ins \{\otau \> s \} \Cp \oA^c \Cp \{\oT \gs t \+ \otau\} $. As $ \oo \ins \oA^c $, there is $j \ins \{1,\cds \n ,n\}$ such that $ \oo \ins \big[ \big( \oXi^t_{ s_j } \big)^{-1} (\cO^c_j)\Cp \{ \oT \ins [t ,t \+ s_j ]\} \big] \cp \\ \big[ \big( \oXi^t_{ s_j } \big)^{-1} \big((\cO'_j)^c\big) \Cp \{ \oT \ins [t ,t \+ s_j ]^c\} \big] $. In particular, $ \oXi^t_{ s_j }(\oo) \ins (\cO'_j)^c $. Then $ \oXi^t_{\otau,\oo} \sb \big\{ \oo' \ins \oO \n: \oXi^t_{s_j} (\oo') \= \oXi^t_{s_j} (\oo) \ins (\cO'_j)^c \big\} \sb \big(\oXi^t_{s_j}\big)^{-1} \big((\cO'_j)^c\big) $ and thus $ \oXi^t_{\otau,\oo} \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \sb \big(\oXi^t_{s_j}\big)^{-1} \big((\cO'_j)^c\big) \Cp \{\oT \ins [t,t \+ s_j]^c\} \sb \oA^c $. By \eqref{072820_15} and \eqref{062820_14} again, \bea \label{May09_43} \oQ^\oo_\e \big( \oA^c \big) \= 1 \q \hb{or} \q \oQ^\oo_\e ( \oA ) \= 0 , \q \fa \oo \ins \{\otau \> s \} \Cp \oA^c \Cp \{\oT \gs t \+ \otau\} . 
\eea Since it holds for any $ \oo \ins \{\otau \gs r \} $ that $ \Wtzo \sb \{\otau \gs r \}$ by \eqref{090520_11}, \eqref{062820_14} shows that \beas \q \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \ge r\} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \= \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \ge r\} } \oQ^\oo_\e \big( \{\otau \gs r \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo). \eeas As $\{\otau \gs r \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \= \{\otau \gs r \} \Cp (\oW^t_{\otau \land r} \- \oW^t_{\otau \land s} )^{-1} (\cE) \ins \cF^{\oW^t}_{\otau \land r} \sb \cF^{\oXi^t}_\otau $, \eqref{May06_25}, \eqref{May09_41} and \eqref{May09_43} imply that \bea && \hspace{-1.5cm} \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \ge r\} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \= \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \ge r\} \cap \{ (\oW^t_r \- \oW^t_s ) (\oo) \in \cE \} } \oQ^\oo_\e \big( \oA \big) \oP(d\oo) \nonumber \\ && \hspace{-1cm} \= \int_{\oo \in \oO} \b1_{\{ \otau(\oo) \ge r\} \cap \{ (\oW^t_r \- \oW^t_s ) (\oo) \in \cE \} \cap \{\oT (\oo) \ge t + \otau (\oo)\}} \oQ^\oo_\e ( \oA ) \oP(d\oo) \= \oP \big( \{\otau \gs r \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \Cp \{\oT \gs t \+ \otau\} \big) . \qq \label{051120_27} \eea \no {\bf II.c.2)} Define $ \ocA \df \ccap{i=1}{n} \big( \oXi^t_{\otau \land s_i } \big)^{-1} (\cO'_i) \ins \cF^{\oXi^t}_\otau $ and let $ \oo \ins \{s \< \otau \< r \} $. By \eqref{090520_11}, \bea \oA \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Cp \Wtzo & \tn \= & \tn \oA \Cp \big\{ \oT \gs t \+ \otau \> t\+ s\big\} \Cp \Wtzo \= \Big( \ccap{i=1}{n} \big\{ \oXi^t_{ s_i } \ins \cO'_i \big\} \Big) \Cp \big\{ \oT \gs t \+ \otau \> t\+ s\big\} \Cp \Wtzo \nonumber \\ & \tn \= & \tn \Big( \ccap{i=1}{n} \big\{ \oXi^t_{ \otau \land s_i } \ins \cO'_i \big\} \Big) \Cp \big\{ \oT \gs t \+ \otau \> t\+ s\big\} \Cp \Wtzo \= \ocA \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Cp \Wtzo . 
\q \qq \label{051120_21} \eea Set $\cE_\oo \df \big\{\fx \- \oW^t_\otau (\oo) \+ \oW^t_s (\oo) \n : \fx \ins \cE \big\} \ins \sB(\hR^d)$. Given $\oo' \ins \Wtzo $, $ (\oW^t_r \- \oW^t_s ) (\oo') \ins \cE $ if and only if $ \oW^{t_\oo} \big(r \- \otau(\oo) , \oo'\big) \= \oW (t\+r,\oo') \- \oW \big(t\+\otau(\oo) ,\oo'\big) \= \oW^t_r (\oo') \- \oW^t \big(\otau(\oo) ,\oo'\big) \= \oW^t_r (\oo') \- \oW^t_s (\oo') \- \oW^t \big(\otau(\oo) ,\oo\big) \+ \oW^t_s (\oo) \ins \cE_\oo $. So \bea \label{051120_14} (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \Wtzo \= \Big(\oW^{t_\oo}_{r - \otau(\oo)}\Big)^{-1} \big( \cE_\oo \big) \Cp \Wtzo . \eea As $ \oW^{t_\oo} $ is a $\big(\obF^{t_\oo}, \oQ^\oo_\e\big)-$Brownian motion by \eqref{April07_11}, we can deduce from \eqref{051120_21}, \eqref{072820_15}, \eqref{062820_14} and \eqref{May06_25} that \bea && \hspace{-1cm} \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \ocA \big) \= \b1_{\{\oo \in \ocA \}} \ \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \big) \= \b1_{\{\oo \in \ocA \}} \oQ^\oo_\e \Big\{ \Big(\oW^{t_\oo}_{r - \otau(\oo)}\Big)^{-1} \big( \cE_\oo \big) \Big\} \nonumber \\ && \hspace{3cm} \= \b1_{\{\oo \in \ocA \}} \phi \big( r\- \otau(\oo) ,\cE_\oo \big) , \q \fa \oo \ins \{s \< \otau \< r \} \Cp \oAtc , \q \hb{thus} \nonumber \\ && \hspace{-1cm} \int_{\oo \in \oAtc} \b1_{\{s < \otau(\oo)< r \} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \= \int_{\oo \in \oAtc} \b1_{\{s < \otau(\oo)< r \} } \b1_{\{\oo \in \ocA \}} \phi( r\- \otau(\oo) ,\cE_\oo) \oP(d\oo) . \label{051120_23} \eea We also define $\ocA' \df \ccap{i=1}{n} \big( \oW^t_{\otau \land s_i }, \oK^t_{\otau \land s_i } \big)^{-1} (\cO'_i) \ins \cF^{\oW^t}_{\otau \land s} $, clearly, $ \ocA' \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c \= \ocA \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K}\big)^c $. 
By (R2), there is $ \ocN_{s,r} \ins \sN_\oP \big(\cF(\otau)\big) $ such that for any $\oo \ins \ocN^c_{s,r} $ \bea && \hspace{-1cm} E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) } \big| \cF(\otau) \big] (\oo) \= E_{\oP^t_{\otau,\oo}} \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) } \big] , ~ E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \oA} \big| \cF(\otau) \big] (\oo) \= E_{\oP^t_{\otau,\oo}} \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \oA} \big] , \nonumber \\ && \hspace{-1cm} \hb{and} \q \b1_{\{\oo \in \ocA'\}} E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) } \big| \cF(\otau) \big] (\oo) \= E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \ocA'} \big| \cF(\otau) \big] (\oo) \= E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \ocA} \big| \cF(\otau) \big] (\oo) \nonumber \\ && \hspace{2cm} \= E_{\oP^t_{\otau,\oo}} \Big[ \b1_{ (\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \ocA} \Big] . \label{051120_17} \eea Let $ \oo \ins \{s \< \otau \< r \} \Cp \oAtc \Cp (\ocN_* \cp \ocN_{s,r} \cp \ocN^2_{\n X} \cp \ocN_{\n K})^c $. As $ \oW^{t_\oo} $ is a standard Brownian motion with respect to filtration $ \obF^{t_\oo} $ under $\oP^t_{\otau,\oo}$ by \eqref{Feb01_07}, we see from \eqref{051120_14} and \eqref{Jan11_03} that $ \oP^t_{\otau,\oo} \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \big) \= \oP^t_{\otau,\oo} \Big\{ \Big(\oW^{t_\oo}_{r - \otau(\oo)}\Big)^{-1} \big( \cE_\oo \big) \Big\} \= \phi \big( r\- \otau(\oo) ,\cE_\oo \big) $. 
Since \eqref{Jan11_03} and \eqref{Feb01_07} show that $ \oP^t_{\otau,\oo} \big( \big\{ \oT \gs t \+ \otau(\oo)\big\} \Cp \Wtzo \big) \= 1 $, \eqref{051120_17} and \eqref{051120_21} further yield that \beas && \hspace{-1.2cm} E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \oA} \big| \cF(\otau) \big] (\oo) \= \oP^t_{\otau,\oo} \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \oP^t_{\otau,\oo} \Big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \ocA \Big) \\ && \hspace{-0.7cm} \= \b1_{\{\oo \in \ocA'\}} E_\oP \big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) } \big| \cF(\otau) \big] (\oo) \= \b1_{\{\oo \in \ocA'\}} \oP^t_{\otau,\oo} \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \big) \= \b1_{\{\oo \in \ocA'\}} \phi \big( r\- \otau(\oo) ,\cE_\oo \big) \= \b1_{\{\oo \in \ocA\}} \phi \big( r\- \otau(\oo) ,\cE_\oo \big) . \eeas As $\{s \< \otau \< r \} \ins \cF^{\oW^t}_\otau$, one has $ \{s \< \otau \< r \} \Cp \oAtc \ins \cF(\otau) $ and it follows from \eqref{051120_23} that \beas && \hspace{-1.2cm} \oP \Big( \{s \< \otau \< r \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \Cp \oAtc \Big) \= E_\oP \bigg[ \b1_{\{s < \otau < r \} \cap \oAtc} E_\oP \Big[ \b1_{(\oW^t_r - \oW^t_s )^{-1} (\cE) \cap \oA} \Big| \cF(\otau) \Big] \bigg] \nonumber \\ && \hspace{-0.7cm} \= \int_{\oo \in \oAtc} \b1_{\{s < \otau (\oo) < r \} } \b1_{\{\oo \in \ocA\}} \phi \big( r\- \otau(\oo) ,\cE_\oo \big) \oP(d \oo) \= \int_{\oo \in \oAtc} \b1_{\{s < \otau(\oo)< r \} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) . \qq \eeas Adding it to \eqref{051120_27}, we obtain \bea \oP \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \Cp \oAtc \big) \= \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) > s \} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) . 
\q \label{051820_21} \eea Define $\wh{\cA} \df \ccap{i=1}{n} \Big(\big[ \big( \oW^t_{s_i}, \oK^t_{s_i} \big)^{-1} (\cO_i)\Cp \{ \oT \ins [t ,t \+ s_i ]\} \big] \cp \big[ \big( \oW^t_{s_i}, \oK^t_{s_i} \big)^{-1} (\cO'_i)\Cp \{ \oT \ins [t ,t \+ s_i ]^c\} \big] \Big) \ins \ocF^t_s $, so $ \wh{\cA} \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K} \big)^c \= \oA \Cp \big(\ocN^2_{\n X} \cp \ocN_{\n K} \big)^c$. Since $\{ \otau \> s \} \ins \cF^{\oW^t}_\otau $ and since $\oW^t$ is a standard Brownian motion with respect to filtration $\obF^t $ under $\oP$, \eqref{May06_25} and \eqref{051820_21} imply that \beas && \hspace{-1.5cm} \oP_\e \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \oP \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \Cp \oAt \big) \+ \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) > s \} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \nonumber \\ && \= \oP \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \oP \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \wh{\cA} \, \big) \= \oP \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \big) \ti \oP \big( \{ \otau \> s \} \Cp \wh{\cA} \, \big) \nonumber \\ && \= \phi(r\-s,\cE) \oP \big( \{ \otau \> s \} \Cp \wh{\cA} \, \big) \= \phi(r\-s,\cE) \oP \big( \{ \otau \> s \} \Cp \oA \big) . \eeas Taking $\cE \= \hR^d$ gives that $ \oP_\e \big( \{ \otau \> s \} \Cp \oA \big) \= \oP \big( \{ \otau \> s \} \Cp \oA \big) $, then \bea \label{051120_33} \oP_\e \big( \{ \otau \> s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \phi(r\-s,\cE) \oP_\e \big( \{ \otau \> s \} \Cp \oA \big) . \eea If $s \= 0$, as $\{\otau \> 0\} \= \oO$, \eqref{051120_33} reduces to \bea \label{082920_41} \oP_\e \big( (\oW^t_r )^{-1} (\cE) \Cp \oA \big) \= \oP_\e \big( \{ \otau \> 0 \} \Cp (\oW^t_r \- \oW^t_0 )^{-1} (\cE) \Cp \oA \big) \= \phi(r,\cE) \oP_\e \big( \{ \otau \> 0 \} \Cp \oA \big) \= \phi(r ,\cE) \oP_\e \big( \oA \big) . 
\eea \no {\bf II.c.3)} Next, we suppose $s \> 0$ and assume without loss of generality that $ 0 \= s_1 \< s_2 \< \cds \< s_{n-1} \< s_n \= s$ with $n \gs 2$. Let $i \= 1,\cds \n ,n\-1$ and let $\oo \ins \{s_i \< \otau \ls s_{i+1}\} $. As $\otau(\oo) \> s_j$ for $j \= 1,\cds \n,i$, one can deduce that \bea \oA \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} & \tn \= & \tn \bigg\{ \ccap{j=1}{i} \Big( \big( \oXi^t_{ s_j } \big)^{-1} (\cO'_j)\Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Big) \bigg\} \nonumber \\ & \tn & \cap \, \bigg\{ \ccap{j=i+1}{n} \Big(\big[ \big( \oXi^t_{ s_j } \big)^{-1} (\cO_j)\Cp \{ \oT \ins [t \+ \otau(\oo) ,t \+ s_j ]\} \big] \cp \big[ \big( \oXi^t_{ s_j } \big)^{-1} (\cO'_j) \Cp \{ \oT \> t \+ s_j \} \big] \Big) \bigg\} . \qq \label{051020_31} \eea Define $\ocA_i \df \ccap{j=1}{i} \big( \oXi^t_{ \otau \land s_j } \big)^{-1} (\cO'_j) \ins \cF^{\oXi^t}_\otau $. Since $ \Wtzo \sb \{s_i \< \otau\} $ by \eqref{090520_11}, \bea \Wtzo \Cp \Big( \ccap{j=1}{i} \big( \oXi^t_{ s_j } \big)^{-1} (\cO'_j) \Big) \= \Wtzo \Cp \Big( \ccap{j=1}{i} \big\{ \oXi^t_{ \otau \land s_j } \ins \cO'_j \big\} \Big) \= \Wtzo \Cp \ocA_i . \label{070120_11} \eea Set $\fra_{\overset{}{\oo}} \df \big( \oW^t_\otau( \oo ) , \bz \big) \ins \hR^{d+l}$ and define $\oA^\oo_i \df \ccap{j=i+1}{n} \bigg(\Big[ \Big( \oW^{t_\oo}_{s_j - \otau(\oo)}, K^\oo_{s_j - \otau(\oo)} \Big)^{-1} \big( \cO_{j,\oo} \big) \Cp \big\{ \oT \ins [t_\oo ,t \+ s_j ] \big\} \Big] \cp \Big[ \Big( \oW^{t_\oo}_{s_j - \otau(\oo)}, \\ K^\oo_{s_j - \otau(\oo)} \Big)^{-1} \big( \cO'_{j,\oo} \big) \Cp \big\{ \oT \ins [t_\oo ,t \+ s_j ]^c \big\} \Big] \bigg) \ins \ocF^{t_\oo}_{s - \otau(\oo)} $, where $\cO_{j,\oo} \df \big\{\fx \- \fra_{\overset{}{\oo}} \n : \fx \ins \cO_j \big\} \ins \sB(\hR^{d+l})$ and $\cO'_{j,\oo} \df \big\{\fx \- \fra_{\overset{}{\oo}} \n : \fx \ins \cO'_j \big\} \ins \sB(\hR^{d+l})$. 
Given $j \= i\+1,\cds \n ,n$ and $\oo' \ins \Wtzo \Cp \big( \ocN^{\,\oo}_{\n X,2} \cp \ocN^{\,\oo}_K \big)^c$, we can derive that $ \oXi^t_{ s_j } (\oo') \ins \cO_j $ if and only if $ \big( \oW^{t_\oo},K^\oo\big) \big(s_j \- \otau(\oo) , \oo'\big) \= \big( \oW (t\+s_j,\oo') \- \oW \big(t\+\otau(\oo) ,\oo'\big), \fX^\oo \big(t \+ s_j , \oo'\big) \big) \= \big( \oW^t_{ s_j } (\oo') \- \oW^t \big(\otau(\oo) ,\oo'\big), \oX^{t_\oo} \big(s_j \- \otau(\oo) , \oo'\big) \big) \= \oXi^t_{ s_j } (\oo') \- \big( \oW^t_\otau( \oo ) , \bz \big) \ins \cO_{j,\oo} $. Similarly, $ \oXi^t_{ s_j } (\oo') \ins \cO'_j $ if and only if $ \big( \oW^{t_\oo},K^\oo\big) \big(s_j \- \otau(\oo) , \oo'\big) \ins \cO'_{j,\oo} $. Putting them with \eqref{051020_31} and \eqref{070120_11} renders that \beas && \hspace{-1.2cm} \oA \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Cp \Wtzo \Cp \big( \ocN^{\,\oo}_{\n X,2} \cp \ocN^{\,\oo}_{\n K} \big)^c \= \Wtzo \Cp \big( \ocN^{\,\oo}_{\n X,2} \cp \ocN^{\,\oo}_{\n K} \big)^c \Cp \Big( \ocA_i \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Big) \nonumber \\ && \hspace{-0.8cm} \cap \, \bigg\{ \ccap{j=i+1}{n} \bigg(\Big[ \Big( \oW^{t_\oo}_{s_j - \otau(\oo)},K^\oo_{s_j - \otau(\oo)} \Big)^{-1} \big( \cO_{j,\oo} \big) \Cp \big\{ \oT \ins [t_\oo , t \+ s_j ]\big\} \Big] \cp \Big[ \Big( \oW^{t_\oo}_{s_j - \otau(\oo)},K^\oo_{s_j - \otau(\oo)} \Big)^{-1} \big( \cO'_{j,\oo} \big) \Cp \big\{ \oT \> t \+ s_j \big\} \Big] \bigg) \bigg\} \nonumber \\ && \hspace{-0.8cm} \= \ocA_i \Cp \oA^\oo_i \Cp \big\{ \oT \gs t \+ \otau(\oo)\big\} \Cp \Wtzo \Cp \big( \ocN^{\,\oo}_{\n X,2} \cp \ocN^{\,\oo}_{\n K} \big)^c . 
\eeas As $ \oW^{t_\oo} $ is a standard Brownian motion with respect to filtration $ \obF^{t_\oo} $ under $\oQ^\oo_\e$ by \eqref{April07_11}, we then see from \eqref{072820_15}, \eqref{062820_14} and \eqref{May06_25} that for any $\oo \ins \{s_i \< \otau \ls s_{i+1}\} \Cp \oAtc$ \beas && \hspace{-1.2cm} \oQ^\oo_\e \Big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \Big) \= \oQ^\oo_\e \Big\{ \ocA_i \Cp \oA^\oo_i \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Big\} \= \b1_{\{\oo \in \ocA_i\}} \oQ^\oo_\e \Big\{ \oA^\oo_i \Cp \Big( \oW^{t_\oo}_{r - \otau(\oo)} \- \oW^{t_\oo}_{s - \otau(\oo)} \Big)^{-1} (\cE) \Big\} \\ && \= \b1_{\{\oo \in \ocA_i\}} \oQ^\oo_\e \big( \oA^\oo_i \big) \ti \oQ^\oo_\e \Big\{ \Big( \oW^{t_\oo}_{r - \otau(\oo)} \- \oW^{t_\oo}_{s - \otau(\oo)} \Big)^{-1} (\cE) \Big\} \= \oQ^\oo_\e \big( \ocA_i \Cp \oA^\oo_i \big) \phi(r\-s,\cE) \= \oQ^\oo_\e \big( \oA \big) \phi(r\-s,\cE) , \eeas and thus $ \int_{\oo \in \oAtc} \b1_{ \{s_i < \otau(\oo) \le s_{i+1}\} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \= \phi(r\-s,\cE) \ti \int_{\oo \in \oAtc} \b1_{ \{s_i < \otau(\oo) \le s_{i+1}\} } \oQ^\oo_\e \big( \oA \big) \oP(d\oo) $. Taking summation of this equality from $i\=1$ through $i\=n \- 1$ and using \eqref{May06_25} yield that \beas && \hspace{-1.2cm} \int_{\oo \in \oAtc} \oQ^\oo_\e \big( \{ \otau \ls s\} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \= \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \le s\} } \oQ^\oo_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \oP(d\oo) \nonumber \\ && \hspace{-0.7cm} \= \phi(r\-s,\cE) \n \int_{\oo \in \oAtc} \b1_{\{ \otau(\oo) \le s\} } \oQ^\oo_\e \big( \oA \big) \oP(d\oo) \= \phi(r\-s,\cE) \n \int_{\oo \in \oAtc} \oQ^\oo_\e \big( \{ \otau \ls s\} \Cp \oA \big) \oP(d\oo) . 
\eeas Since $ \oAt \ins \ocF^t_\otau $ and since $\oW^t$ is a standard Brownian motion with respect to filtration $\obF^t $ under $\oP$, \beas && \hspace{-0.7cm} \oP_\e \big( \{ \otau \ls s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \oP \big( \{ \otau \ls s \} \Cp (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \wh{\cA} \Cp \oAt \big) \+ \phi(r\-s,\cE) \int_{\oo \in \oAtc} \oQ^\oo_\e \big( \{ \otau \ls s\} \Cp \oA \big) \oP(d\oo) \\ && \= \oP \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \big) \ti \oP \big( \{ \otau \ls s \} \Cp \wh{\cA} \Cp \oAt \big) \+ \phi(r\-s,\cE) \int_{\oo \in \oAtc} \oQ^\oo_\e \big( \{ \otau \ls s\} \Cp \oA \big) \oP(d\oo) \\ && \= \phi(r\-s,\cE) \Big\{ \oP \big( \{ \otau \ls s \} \Cp \oA \Cp \oAt \big) \+ \int_{\oo \in \oAtc} \oQ^\oo_\e \big( \{ \otau \ls s\} \Cp \oA \big) \oP(d\oo) \Big\} \= \phi(r\-s,\cE) \oP_\e \big( \{ \otau \ls s\} \Cp \oA \big) , \eeas which together with \eqref{051120_33} shows that $ \oP_\e \big( (\oW^t_r \- \oW^t_s )^{-1} (\cE) \Cp \oA \big) \= \phi(r\-s,\cE) \oP_\e \big( \oA \big) $. Combining it with \eqref{082920_41}, we can derive \eqref{May08_01} from the Dynkin Theorem and \eqref{090520_23}. Hence, $ \oP_\e $ satisfies Definition \ref{def_ocP} (i). \no {\bf II.d)} In this step, we demonstrate that Definition \ref{def_ocP} (ii) holds for $ \oP_\e $. Let $\vf \ins C^2 (\hR^{d+l})$ and $n \ins \hN$. We still define $\oXi^{t,\bx}$, $\oM^{t,\bx} (\vf) $, $ \otau^{t,\bx}_n$ as in \eqref{091920_15} and set $ \oM^{t,\bx,n}_s \df \oM^{t,\bx}_{\otau^{t,\bx}_n \land s} $, $ s \ins [0,\infty)$. We also let $ 0 \ls \fs \< \fr \< \infty$. \no {\bf II.d.1)} Let $ \big\{(s_i,\cE_i )\big\}^k_{i=1} \sb [0,\fs] \ti \sB(\hR^{d+l}) $. For any $\oo \ins \oO$, set $\fr_\loo \df \big(\fr \- \otau(\oo)\big)^+$, $\fs_\loo \df \big(\fs \- \otau(\oo)\big)^+$ and $s_{i,\loo} \df \big(s_i \- \otau(\oo)\big)^+$ for $i \= 1, \cds \n , k$. 
We claim that \bea \label{071720_31} E_{\oP_\e} \bigg[ \b1_{\{\otau > \fs\}} \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_i} \in \cE_i\big\} } \bigg] \= 0 . \eea Since $\{\otau \> \fs\} \ins \cF^{\oW^t}_\otau $ and since $\ccap{i=1}{k} \big\{ \oXi^t_{\otau \land s_i} \ins \cE_i\big\} \ins \cF^{\oXi^t}_\otau $, \eqref{May06_25} implies that \bea && \hspace{-1.2cm} E_{\oP_\e} \bigg[ \b1_{\{\otau > \fs\}} \Big( \oM^{t,\bx,n}_{ \fr} (\vf ) \- \oM^{t,\bx,n}_{ \otau \land \fr} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_i} \in \cE_i\big\} } \bigg] \= E_{\oP_\e} \bigg[ \b1_{\{\otau > \fs\}} \Big( \oM^{t,\bx,n}_{ \otau \vee \fr } (\vf ) \- \oM^{t,\bx,n}_{ \otau} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \bigg] \nonumber \\ && \= E_\oP \bigg[ \b1_{\oAt \cap \{\otau > \fs\}} \Big( \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \bigg] \nonumber \\ && \q + \int_{\oo \in \oAtc} \b1_{ \{\otau (\oo) > \fs \}} \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} (\oo) \in \cE_i\big\} } E_{\oQ^\oo_\e} \Big[ \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big] \oP(d\oo) . 
\q \label{071720_27} \eea Since $ \oM^{t,\bx,n}_{\otau \land \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau \land \fs} (\vf ) \ins \cF^{\oXi^t}_{\otau^{t,\bx}_n \land \otau \land \fr} \sb \cF^{\oXi^t}_\otau$ and since $\{\otau \> \fs\} \Cp \Big( \ccap{i=1}{k} \big\{ \oXi^t_{\otau \land s_i} \ins \cE_i\big\} \Big) \ins \cF^{\oXi^t}_{\otau \land \fs} \sb \ocG^t_{\otau \land \fs}$, applying Lemma \ref{lem_071220} with $ \big(\oz_1,\oz_2 \big) \= \big( \otau \ld \fs, \otau \ld \fr \big)$, we see from \eqref{052420_21} and \eqref{091920_11} that \bea && \hspace{-1.5cm} E_{\oP_\e} \Big[ \b1_{\{\otau > \fs\}} \big( \oM^{t,\bx,n}_{\otau \land \fr} (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_i} \in \cE_i\big\} } \Big] \= E_\oP \Big[ \b1_{\{\otau > \fs\}} \big( \oM^{t,\bx,n}_{\otau \land \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau \land \fs} (\vf ) \big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \Big] \nonumber \\ && \= E_\oP \Big[ \b1_{\{\otau > \fs\}} \big( \oM^t_{\otau^t_n \land \otau \land \fr} (\vf ) \- \oM^t_{\otau^t_n \land \otau \land \fs} (\vf ) \big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \Big] \= 0 . \label{071420_21} \eea Also, taking $ \big(\oz_1,\oz_2 \big) \= \big( \otau, \otau \ve \fr \big)$ in Lemma \ref{lem_071220} and using \eqref{091920_11} yield that \bea && \hspace{-1.5cm} E_\oP \bigg[ \b1_{\oAt \cap\{\otau > \fs\}} \Big( \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \bigg] \nonumber \\ && \= E_\oP \bigg[ \b1_{\oAt \cap\{\otau > \fs\}} \Big( \oM^t_{\otau^t_n \land (\otau \vee \fr)} (\vf ) \- \oM^t_{\otau^t_n \land \otau} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} \in \cE_i\big\} } \bigg] \= 0 . 
\q \label{071720_25} \eea Fix $ \oo \ins \oAtc \Cp \big(\ocN^1_{\n X}\big)^c $. As $ \oX_r (\oo) \= \bx(r) $, $\fa r \ins [0,t] $, one has $\oO^t_{\otau,\oo} \sb \big\{ \oo' \ins \oO \n: \oX_r(\oo') \= \oX_r(\oo) , \fa r \ins [0,t ] \big\} \= \big\{ \oo' \ins \oO \n: \oX_r(\oo') \= \bx(r) , \fa r \ins [0,t ] \big\} \= \big(\ocN^1_{\n X}\big)^c$. By \eqref{072820_15}, $ \oQ^\oo_\e \Big( \big(\ocN^1_{\n X} \big)^c \Big) \= 1 $ and we thus see from \eqref{091920_11} that \bea \label{091920_19} E_{\oQ^\oo_\e} \Big[ \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big] \= E_{\oQ^\oo_\e} \Big[ \oM^t_{\otau^t_n \land (\otau \vee \fr)} (\vf ) \- \oM^t_{\otau^t_n \land \otau} (\vf ) \Big] . \eea Set $n_\loo \df n \- \otau(\oo) \> 0 $. We define a $C^2 $ function $\vf_\loo(w,x) \df \vf \Big(w \+ \oW^t_\otau ( \oo ), x \Big) $, $(w,x) \ins \hR^{d+l}$. For $i=1,2,3$, since $ D^i \vf_\loo (w,x) \= D^i \vf \big(w \+ \oW^t \big(\otau(\oo),\oo\big) , x \big) $, $ \fa (w,x) \ins \hR^{d+l} $ (with $D^0 \vf \df \vf$), \bea \label{091920_17} D^i\vf \Big( \oXi^t(r,\oo')\Big) & \tn \= & \tn D^i\vf \Big( \oW^t(r,\oo') \- \oW^t \big(\otau(\oo),\oo'\big) \+ \oW^t \big(\otau(\oo),\oo\big) , \oX^t(r,\oo') \Big) \= D^i \vf_\loo \Big( \oW^{t_\oo}\big(r\-\otau(\oo),\oo'\big) ,\oX^{t_\oo}\big(r\-\otau(\oo),\oo'\big)\Big) \nonumber \\ & \tn \= & \tn D^i \vf_\loo \Big( \oXi^{t_\oo}\big(r\-\otau(\oo),\oo'\big) \Big) , \q \fa r \ins \big[\otau(\oo),\infty\big) . \eea We also define an $\bF^{\oXi^{t_\oo}}-$stopping time by $ \oga^n_\loo (\oo') \df \inf\big\{s \ins [0,\infty) \n : \big|\oXi^{t_\oo}_s (\oo') \+ \big(\oW^t_\otau (\oo),\bz\big) \big| \gs n \big\} $, $\oo' \ins \oO$. 
By an analogy to Lemma \ref{lem_071220}, \bea \label{071720_23} E_{\oQ^\oo_\e} \bigg[ \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo } \big(\vf_\loo \big) \- \oM^{t_\oo}_0 \big(\vf_\loo \big) \bigg] \= E_{\oQ^\oo_\e} \bigg[ \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo } \big(\vf_\loo \big) \- \oM^{t_\oo}_{\oga^n_\loo \land n_\loo \land 0} \big(\vf_\loo \big) \bigg] \= 0 . \eea As $\oo \ins \big\{ \oo' \ins \oO \n : \otau^t_n(\oo') \> \otau (\oo) \big\} \ins \cF^{\oXi^t}_{\otau (\oo)} $, an analogy to \eqref{090920_15} shows that $\oXi^t_{\otau,\oo} \sb \big\{ \oo' \ins \oO \n : \otau^t_n(\oo') \> \otau (\oo) \big\}$. Let $\oo' \ins \oXi^t_{\otau,\oo} $. Since $\inf\big\{s \ins [0,\infty) \n : |\oXi^t_s (\oo')| \gs n \big\} \gs \otau^t_n(\oo') \> \otau(\oo)$, one has $|\oXi^t_s (\oo')|\< n $, $\fa s \ins [0,\otau(\oo)] $ and thus \beas & & \hspace{-1.5cm} \inf\big\{s \ins [0,\infty) \n : |\oXi^t_s (\oo')| \gs n \big\} \= \inf\big\{s \ins [\otau(\oo),\infty) \n : |\oXi^t_s (\oo')| \gs n \big\} \= \inf\Big\{s \ins [\otau(\oo),\infty) \n : \big|\oXi^t_s (\oo') \-\big(\oW^t (\otau(\oo),\oo'),\bz\big) \+ \big(\oW^t_\otau (\oo),\bz\big) \big| \gs n \Big\} \nonumber \\ & \tn \= & \tn \inf\Big\{s \ins [\otau(\oo),\infty) \n : \big|\oXi^{t_\oo} (s\-\otau(\oo),\oo') \+ \big(\oW^t_\otau ( \oo),\bz\big) \big| \gs n \Big\} \= \oga^n_\loo (\oo') \+ \otau(\oo) . \nonumber \eeas It follows that $ \otau^t_n(\oo') \= \big(\oga^n_\loo (\oo') \+ \otau(\oo) \big) \ld n \= \oga^n_\loo (\oo') \ld n_\loo \+ \otau(\oo) $. 
We can then deduce from \eqref{090520_11} and \eqref{091920_17} that \bea && \hspace{-1.7cm} \big(\oM^t (\vf)\big) \big( \otau^t_n(\oo') \ld \big(\otau (\oo') \ve \fr\big) ,\oo' \big) \- \big(\oM^t (\vf ) \big) \big( \otau (\oo'), \oo' \big) \= \big(\oM^t (\vf)\big) \big( \otau^t_n(\oo') \ld \big(\otau (\oo) \ve \fr\big) ,\oo' \big) \- \big(\oM^t (\vf ) \big) \big( \otau (\oo), \oo' \big) \nonumber \\ && \hspace{-1cm} \= \vf \Big(\oXi^t \big( \otau^t_n(\oo') \ld \big(\otau (\oo) \ve \fr\big) ,\oo'\big) \Big) \- \vf \big( \oXi^t \big(\otau(\oo),\oo'\big) \big) \- \n \int_{\otau(\oo)}^{\otau^t_n(\oo') \land (\otau (\oo) \vee \fr )} \ol{b} \big(t\+r,\oX \big((t\+r) \ld \cd,\oo'\big) \big) \n \cd \n D \vf \big( \oXi^t (r,\oo') \big) dr \nonumber \\ & & \hspace{-1cm} \q - \frac12 \int_{\otau(\oo)}^{\otau^t_n(\oo') \land (\otau (\oo) \vee \fr )} \ol{\si} \, \ol{\si}^T \big(t\+r,\oX \big((t\+r) \ld \cd,\oo'\big) \big) \n : \n D^2 \vf \big( \oXi^t (r,\oo') \big) dr \nonumber \\ && \hspace{-1cm} \= \vf_\loo \Big(\oXi^{t_\oo} \big( \oga^n_\loo (\oo') \ld n_\loo \ld \fr_\loo ,\oo'\big) \Big) \- \vf_\loo \big( \oXi^{t_\oo} (0,\oo') \big) \- \n \int_0^{\oga^n_\loo (\oo') \land n_\loo \land \fr_\loo } \ol{b} \Big(t_\oo\+r',\oX \big((t_\oo\+r') \ld \cd, \oo' \big) \Big) \n \cd \n D \vf_\loo \Big( \oXi^{t_\oo} (r',\oo') \Big) dr' \nonumber \\ && \hspace{-1cm} \q - \frac12 \int_0^{\oga^n_\loo (\oo') \land n_\loo \land \fr_\loo } \ol{\si} \, \ol{\si}^T \Big(t_\oo\+r',\oX \big((t_\oo\+r') \ld \cd, \oo' \big) \Big) \n : \n D^2 \vf_\loo \Big( \oXi^{t_\oo} (r',\oo') \Big) dr' \q \big(\hb{by setting } r' \= r \- \otau(\oo) \big) \nonumber \\ && \hspace{-1cm} \= \big( \oM^{t_\oo} (\vf_\loo) \big) \Big( \oga^n_\loo (\oo') \ld n_\loo \ld \fr_\loo, \oo' \Big) \- \big( \oM^{t_\oo} (\vf_\loo) \big) \big( 0 , \oo' \big) .
\label{071820_17} \eea As $\{ \otau^t_n \> \otau \} \ins \cF^{\oXi^t}_{\otau^t_n \land \otau} \sb \cF^{\oXi^t}_\otau$, \eqref{091920_19}, \eqref{062820_14}, \eqref{071820_17}, \eqref{May06_25} and \eqref{071720_23} imply that \beas && \hspace{-1.2cm} E_{\oQ^\oo_\e} \Big[ \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big] \= E_{\oQ^\oo_\e} \Big[ \b1_{ \{ \otau^t_n > \otau \}} \big( \oM^t_{ \otau^t_n \land (\otau \vee \fr)} (\vf ) \- \oM^t_\otau (\vf ) \big) \Big] \= E_{\oQ^\oo_\e} \Big[ \b1_{ \{ \otau^t_n > \otau \}} \Big( \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo } (\vf_\loo) \- \oM^{t_\oo}_0 (\vf_\loo) \Big) \Big] \\ && \= \b1_{\big\{ \otau^t_n(\oo) > \otau (\oo) \big\}} E_{\oQ^\oo_\e} \Big[ \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo } (\vf_\loo) \- \oM^{t_\oo}_0 (\vf_\loo) \Big] \= 0 , \q \fa \oo \ins \oAtc \Cp \big(\ocN^1_{\n X}\big)^c , \eeas and thus $ \int_{\oo \in \oAtc} \b1_{ \{\otau (\oo) > \fs \}} \b1_{ \underset{i=1}{\overset{k}{\cap}} \big\{ \oXi^t_{\otau \land s_i} (\oo) \in \cE_i\big\} } E_{\oQ^\oo_\e} \Big[ \oM^{t,\bx,n}_{\otau \vee \fr} (\vf ) \- \oM^{t,\bx,n}_{\otau} (\vf ) \Big] \oP(d\oo) \= 0 $, which together with \eqref{071720_27}$-$\eqref{071720_25} leads to \eqref{071720_31}. \no {\bf II.d.2)} If $0 \= \fs \< \fr \< \infty$, as $\{\otau \> 0\} \= \oO$, we have seen from \eqref{071720_31} that \bea \label{071820_29} E_{\oP_\e} \bigg[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx}_0 (\vf ) \Big) \b1_{ \big\{ \oXi^t_0 \in \cE \big\} } \bigg] \= 0 , \q \fa \cE \ins \sB(\hR^{d+l}) . \eea Assume next that $ 0 \< \fs \< \fr \< \infty$. Let $ 0 \= s_0 \< s_1 \< \cds \< s_{k-1} \< s_k \= \fs $ with $k \gs 2 $ and let $ \{ \cE_i \}^k_{i=1} \sb \sB(\hR^{d+l}) $. As $ \oAt \ins \ocF^t_\otau \sb \ocG^t_\otau$, one has $ \oAt \Cp \{\otau \ls \fs \} \ins \ocG^t_\fs $. 
Lemma \ref{lem_071220} and \eqref{091920_11} yield that \bea \hspace{-0.5cm} E_\oP \bigg[ \b1_{\oAt \cap \{\otau \le \fs\}} \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \{ \oXi^t_{s_i} \in \cE_i\}} \bigg] \= E_\oP \bigg[ \b1_{\oAt \cap \{\otau \le \fs\}} \Big( \oM^t_{\otau^t_n \land \fr} (\vf ) \- \oM^t_{\otau^t_n \land \fs} (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \{ \oXi^t_{s_i} \in \cE_i\}} \bigg] \= 0 . \q \label{071820_27} \eea Let $i \ins \{ 1, \cds \n , k \- 1\}$ and set $\breve{A}_i \df \ccap{j=1}{i} \big( \oXi^t_{ \otau \land s_j } \big)^{-1} (\cE_j) \ins \cF^{\oXi^t}_\otau $. We fix $\oo \ins \big\{s_i \< \otau \ls s_{i+1}\big\} \Cp \oAtc $. Analogous to \eqref{070120_11}, \bea \label{071820_19} \Wtzo \Cp \Big( \ccap{j=1}{i} \big( \oXi^t_{ s_j } \big)^{-1} (\cE_j) \Big) \= \Wtzo \Cp \breve{A}_i . \eea Define $\breve{A}^\oo_i \df \ccap{j=i+1}{k} \Big( \oXi^{t_\oo}_{s_{j,\loo}} \Big)^{-1} \big( \breve{\cE}_{j,\oo} \big) \ins \cF^{\oXi^{t_\oo}}_{\fs_\loo} \sb \ocG^{t_\oo}_{\fs_\loo} $, with $ \breve{\cE}_{j,\oo} \df \big\{\fx \- ( \oW^t_\otau( \oo ) , \bz ) \n : \fx \ins \cE_j \big\} $. Similar to Lemma \ref{lem_071220}, \bea \label{071820_25} E_{\oQ^\oo_\e} \bigg[ \Big( \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo} \big( \vf_\loo\big) \- \oMoo_{\oga^n_\loo \land n_\loo \land \fs_\loo} \big( \vf_\loo\big) \Big) \b1_{\breve{A}^\oo_i } \bigg] \= 0 . \eea For any $j \= i\+1,\cds \n ,k$ and $\oo' \ins \Wtzo$, $ \oXi^t_{ s_j } (\oo') \ins \cE_j $ if and only if $ \oXi^{t_\oo} \big(s_j \- \otau(\oo) , \oo'\big) \= \Big( \oW (t\+s_j,\oo') \- \oW\big(t\+\otau(\oo),\oo'\big) , \oX (t\+s_j,\oo') \Big) \= \Big( \oW^t_{ s_j } (\oo') \- \oW^t \big(\otau(\oo) ,\oo'\big), \oX^t_{s_j} (\oo') \Big) \= \oXi^t_{ s_j } (\oo') \- \big( \oW^t_\otau( \oo ) , \bz \big) \ins \breve{\cE}_{j,\oo} $. 
So \bea \label{071820_21} \Wtzo \Cp \Big( \ccap{j=i+1}{k} \big( \oXi^t_{ s_j } \big)^{-1} (\cE_j) \Big) \= \Wtzo \Cp \breve{A}^\oo_i . \eea Let $\oo' \ins \oXi^t_{\otau,\oo} $. As $ \otau(\oo) \ls s_{i+1} \ls \fs $, following arguments similar to those in \eqref{071820_17} yields that $ \big(\oM^t (\vf )\big) \big(\otau^t_n(\oo') \ld \fr,\oo'\big) \- \big(\oM^t (\vf )\big) \big(\otau^t_n(\oo') \ld \fs,\oo'\big) \= \big( \oM^{t_\oo} (\vf_\loo) \big) \big( \oga^n_\loo (\oo') \ld n_\loo \ld \fr_\loo , \oo' \big) \- \big( \oM^{t_\oo} (\vf_\loo) \big) \big( \oga^n_\loo (\oo') \ld n_\loo \ld \fs_\loo , \oo' \big) $. By an analogy to \eqref{091920_19}, one can then deduce from \eqref{062820_14}, \eqref{071820_19}, \eqref{071820_21}, \eqref{May06_25} and \eqref{071820_25} that \beas && \hspace{-2cm} E_{\oQ^\oo_\e} \bigg[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{j=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_j} \in \cE_j\big\}} \bigg] \= E_{\oQ^\oo_\e} \bigg[ \b1_{\breve{A}_i \cap \breve{A}^\oo_i} \Big( \oM^t_{\otau^t_n \land \fr} (\vf ) \- \oM^t_{\otau^t_n \land \fs} (\vf ) \Big) \bigg] \= \b1_{\{\oo \in \breve{A}_i\}} E_{\oQ^\oo_\e} \bigg[ \b1_{ \breve{A}^\oo_i} \Big( \oM^t_{\otau^t_n \land \fr} (\vf ) \- \oM^t_{\otau^t_n \land \fs} (\vf ) \Big) \bigg] \\ && \= \b1_{\{\oo \in \breve{A}_i\}} E_{\oQ^\oo_\e} \bigg[ \Big( \oMoo_{\oga^n_\loo \land n_\loo \land \fr_\loo} \big( \vf_\loo\big) \- \oMoo_{\oga^n_\loo \land n_\loo \land \fs_\loo} \big( \vf_\loo\big) \Big) \b1_{ \breve{A}^\oo_i} \bigg] \= 0 ,\q \fa \oo \ins \{s_i \< \otau \ls s_{i+1}\} \Cp \oAtc \Cp \big(\ocN^1_{\n X}\big)^c , \eeas and thus $ \int_{\oo \in \oAtc} \b1_{ \{ s_i < \otau(\oo) \le s_{i+1} \}} E_{\oQ^\oo_\e} \bigg[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{j=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_j} \in
\cE_j\big\}} \bigg] \oP(d\oo) \= 0 $. Taking summation from $i\=1$ through $i\=k \- 1$, we obtain from \eqref{071820_27} and \eqref{May06_25} that \beas && \hspace{-1.2cm} E_{\oP_\e} \bigg[ \b1_{\{ \otau \le \fs \}} \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \{ \oXi^t_{s_i} \in \cE_i\}} \bigg] \= E_\oP \bigg[ \b1_{\oAt \cap \{ \otau \le \fs\}} \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \{ \oXi^t_{s_i} \in \cE_i\}} \bigg] \nonumber \\ && \q + \int_{\oo \in \oAtc} \b1_{ \{ \otau(\oo) \le \fs \}} E_{\oQ^\oo_\e} \bigg[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{j=1}{\overset{k}{\cap}} \big\{ \oXi^t_{s_j} \in \cE_j\big\}} \bigg] \oP(d\oo) \= 0 . \eeas Adding it to \eqref{071720_31} shows that $ E_{\oP_\e} \Big[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_{ \underset{i=1}{\overset{k}{\cap}} \{ \oXi^t_{s_i} \in \cE_i\}} \Big] \= 0 $, which together with \eqref{071820_29} and Dynkin's Theorem implies that for any $0 \ls \fs \< \fr \< \infty$, $ E_{\oP_\e} \Big[ \Big( \oM^{t,\bx,n}_\fr (\vf ) \- \oM^{t,\bx,n}_\fs (\vf ) \Big) \b1_\oA \Big] \= 0 $, $ \fa \oA \ins \cF^{\oXi^t}_\fs $. So $\big\{\oM^{t,\bx}_{ \otau^{t,\bx}_n \land s } (\vf ) \big\}_{s \in [0,\infty)}$ is an $\big(\bF^{\oXi^t,\oP},\oP_\e\big)-$martingale. As $ \lmtu{n \to \infty} \otau^{t,\bx}_n \= \infty$, $\big\{\oM^{t,\bx}_s (\vf ) \big\}_{s \in [0,\infty)}$ is an $\big(\bF^{\oXi^t,\oP},\oP_\e\big)-$local martingale for any $\vf \ins C^2(\hR^{d+l})$. Then using similar arguments to those in Part (2b) of the proof of Proposition \ref{prop_Ptx_char}, we can derive that $\big\{\oX^{t,\bx}_s\big\}_{s \in [0,\infty)}$ is a solution of SDE \eqref{Ju01_01} under $\oP_\e$.
As $ \oX^{t,\bx} $ coincides with $\oX$ on $\big(\ocN^1_{\n X}\big)^c$ by \eqref{091920_11}, \beas \oP_\e \big\{ \oX_s \= \oX^{t,\bx}_s , ~ \fa s \ins [0,\infty) \big\} \gs \oP_\e \Big( \big(\ocN^1_{\n X}\big)^c \Big) \= \oP \Big( \oAt \Cp \big(\ocN^1_{\n X}\big)^c \Big) \+ \int_{\oo \in \oAtc} \oQ^\oo_\e \Big( \big(\ocN^1_{\n X}\big)^c \Big) \oP(d\oo) \= \oP \big( \oAt \big) \+ \oP \big( \oAtc \big) \= 1 . \eeas Hence, $\oP_\e$ satisfies Definition \ref{def_ocP} (ii). \no {\bf II.e)} As $ \oQ^\oo_\e \{\oT \gs t\} \gs \oQ^\oo_\e \{\oT \gs t \+ \otau(\oo)\} \= 1 $ for any $\oo \ins \oAtc$ by \eqref{072820_15}, one has $ \oP_\e\{\oT \gs t \} \= \oP \big(\oAt \Cp \{\oT \gs t \} \big) \+ \int_{\oo \in \oO} \b1_{\{\oo \in \oAtc\}} \oQ^\oo_\e \{\oT \gs t \} \oP(d\oo) \= \oP\big(\oAt\big) \+ \oP \big(\oAtc\big) \= 1 $. So $ \oP_\e \ins \ocP_{t,\bx}$. Set $\oK^{t,\bx}_s \df \b1_{\{s \in [0,t]\}} \bx(s) \+ \b1_{\{s \in (t,\infty)\}} \big(\oK^t_{s-t} \- \oK^t_0 \+ \bx(t) \big)$, $s \ins [0,\infty)$. It is a continuous process such that $\big\{\oK^{t,\bx}_{t+s}\big\}_{s \in [0,\infty)}$ is $\bF^{\oW^t}-$predictable and that \bea \label{092020_15} \oK^{t,\bx}_s (\oo) \= \oX^{t,\bx}_s (\oo) \= \oX_s (\oo) , \q \fa (s,\oo) \ins [0,\infty) \ti \big( \ocN^1_{\n X} \cp \ocN^2_{\n X} \cp \ocN_{\n K} \big)^c . \eea Let $\oo \ins \oAtc \Cp \big( \ocN^1_{\n X} \cp \ocN^2_{\n X} \cp \ocN_{\n K} \big)^c $ and $i \ins \hN$. By \eqref{April07_11} and \eqref{062720_11}, one has $E_{\oQ^\oo_\e} \big[ \int_{t+\otau(\oo)}^\oT \n g_i \big( r, \oX_{r \land \cd} \big) dr \big] \ls \ol{\cY}^i_2(\oo) \= \big( \oY^i_\oP ( \otau ) \big) (\oo) $ and $E_{\oQ^\oo_\e} \big[ \int_{t+\otau(\oo)}^\oT \n h_i \big( r, \oX_{r \land \cd} \big) dr \big] \= \ol{\cZ}^i_2(\oo) \= \big( \oZ^i_\oP ( \otau ) \big) (\oo) $.
Since $ \oO^t_{\otau,\oo} \sb \big\{ \oo' \ins \oO \n: \oX_s (\oo') \= \oX_s (\oo) , \fa s \ins \big[0,t\+\otau(\oo)\big] \big\} $ we see from \eqref{072820_15} and \eqref{092020_15} that \bea E_{\oQ^\oo_\e} \Big[ \int_t^\oT \n g_i \big( r, \oX_{r \land \cd} \big) dr \Big] & \tn \= & \tn \int_t^{t+\otau(\oo)} \n g_i \big( r, \oX_{r \land \cd} (\oo) \big) dr \+ E_{\oQ^\oo_\e} \Big[ \int_{t+\otau(\oo)}^\oT \n g_i \big( r, \oX_{r \land \cd} \big) dr \Big] \label{091020_15} \\ & \tn \ls & \tn \int_0^{\otau(\oo)} \n g_i \big( t\+ r, \oK^{t,\bx}_{(t+r) \land \cd} (\oo) \big) dr \+ E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT g_i (r,\oX_{r \land \cd} ) dr \Big| \cF(\otau) \bigg] (\oo) , \nonumber \\ \hb{and similarly} \q E_{\oQ^\oo_\e} \Big[ \int_t^\oT \n h_i \big( r, \oX_{r \land \cd} \big) dr \Big] & \tn \= & \tn \int_0^{\otau(\oo)} \n h_i \big( t\+ r, \oK^{t,\bx}_{(t+r) \land \cd} (\oo) \big) dr \+ E_\oP \bigg[ \int_{\oT \land (t+\otau)}^\oT h_i (r,\oX_{r \land \cd} ) dr \Big| \cF(\otau) \bigg] (\oo) . 
\nonumber \eea As $ \int_0^\otau (g_i,h_i) \big(t\+r,\oK^{t,\bx}_{(t+r) \land \cd} \big) dr \ins \cF^{\oW^t}_\otau \sb \cF(\otau) $, one can further deduce from \eqref{092020_15} that \beas && \hspace{-1.2cm} \int_{\oo \in \oAtc} E_{\oQ^\oo_\e} \bigg[ \int_t^\oT g_i \big(r,\oX_{r \land \cd} \big) dr \bigg] \oP(d\oo) \ls E_\oP \bigg[ \b1_{\oAtc} \bigg( \int_0^\otau \n g_i \big( t\+ r, \oK^{t,\bx}_{(t+r) \land \cd} \big) dr \+ E_\oP \Big[ \int_{\oT \land (t+\otau)}^\oT g_i(r,\oX_{r \land \cd} ) dr \Big| \cF(\otau) \Big] \bigg) \bigg] \\ && \= E_\oP \bigg[ E_\oP \bigg[ \b1_{\oAtc} \Big( \int_0^\otau \n g_i \big( t\+ r, \oK^{t,\bx}_{(t+r) \land \cd} \big) dr \+ \int_{\oT \land (t+\otau)}^\oT g_i(r,\oX_{r \land \cd} ) dr \Big) \bigg| \cF(\otau) \bigg] \bigg] \\ && \= E_\oP \bigg[ \b1_{\{\oT \ge t + \otau\}} \Big( \int_0^\otau \n g_i \big( t\+ r, \oX_{(t+r) \land \cd} \big) dr \+ \int_{ t+\otau }^\oT g_i(r,\oX_{r \land \cd} ) dr \Big) \bigg] \= E_\oP \bigg[ \b1_{\oAtc} \int_t^\oT \n g_i \big( r, \oX_{r \land \cd} \big) dr \bigg] . \eeas It follows that $ E_{\oP_\e} \big[ \int_t^\oT g_i \big(r,\oX_{r \land \cd} \big) dr \big] \ls E_\oP \big[ \int_t^\oT g_i \big(r,\oX_{r \land \cd} \big) dr \big] \ls y_i $. Similarly, one has $ E_{\oP_\e} \big[ \int_t^\oT h_i \big(r,\oX_{r \land \cd} \big) dr \big] \= E_\oP \big[ \int_t^\oT h_i \big(r,\oX_{r \land \cd} \big) dr \big] \= z_i $. Hence, $\oP_\e $ belongs to $ \ocP_{t,\bx}(y,z)$. \no {\bf II.f)} By \eqref{081620_45}, $\ocD_\infty \df \big\{\oo \ins \oO \n : \ocV \big( \ol{\Psi}_\tau (\oo) \big) \= \infty \big\} $ is a $ \si \big( \cF^{\oW^t}_\otau \cp \cF^\oX_t \cp \sN_\oP \big(\cF^{\oW^t}_\infty \ve \cF^\oX_\infty\big) \big) -$measurable set. Let $\e \ins (0,1)$.
By an analogy to \eqref{091020_15}, \eqref{April07_11} and \eqref{081620_19} imply that for any $\oo \ins \oAtc $ \beas E_{\oQ^\oo_\e} \bigg[ \int_t^\oT \n f \big( r, \oX_{r \land \cd} \big) dr \+ \oz \bigg] & \tn \= & \tn \int_t^{t+\otau(\oo)} \n f \big( r, \oX_{r \land \cd} (\oo) \big) dr \+ E_{\ol{\bQ}_\e ( \ol{\Psi}_\tau (\oo) )} \bigg[ \int_{t+\otau(\oo)}^\oT \n f \big( r, \oX_{r \land \cd} \big) dr \+ \oz \bigg] \\ & \tn \gs & \tn \int_t^{t+\otau(\oo)} \n f \big( r, \oX_{r \land \cd} (\oo) \big) dr \+ \b1_{\{\oo \in \ocD^c_\infty \}} \Big( \ocV \big( \ol{\Psi}_\tau (\oo) \big) \- \e \Big) \+ \b1_{\{\oo \in \ocD_\infty \}} \frac{1}{\e} . \eeas As $\oP_\e \ins \ocP_{t,\bx}(y,z)$, we then have \bea \hspace{-1.5cm} \oV (t,\bx,y,z) & \tn \gs & \tn E_{\oP_\e} \bigg[ \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \oz \bigg] \gs E_\oP \bigg[ \b1_\oAt \Big( \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \oz \Big) \bigg] \nonumber \\ & \tn & + E_\oP \bigg[ \b1_\oAtc \Big( \int_t^{t+\otau } \n f \big( r, \oX_{r \land \cd} \big) dr \+ \b1_{\{\oo \in \ocD^c_\infty \}} \Big( \ocV \big( \ol{\Psi}_\tau \big) \- \e \Big) \+ \b1_{\{\oo \in \ocD_\infty \}} \frac{1}{\e} \Big) \bigg] .
\label{081720_17} \eea If $\oP \big(\oAtc \Cp \ocD_\infty \big) \= 0 $, this inequality together with Theorem \ref{thm_V=oV} and \eqref{062720_11} shows that for any $ \e \ins (0,1) $ \beas \hspace{-0.8cm} \oV (t,\bx,y,z) & \tn \gs & \tn E_\oP \bigg[ \b1_\oAt \Big( \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \oz \Big) \+ \b1_{\oAtc} \bigg( \int_t^{t + \otau} f \big(r,\oX_{r \land \cd} \big) dr \+ \oV \Big( t \+ \otau , \oX_{(t+\otau) \land \cd} , \ol{\cY}_2 , \ol{\cZ}_2 \Big) \bigg) \bigg] \- \e \\ & \tn \= & \tn E_\oP \bigg[ \b1_{\{\oT < t + \otau\}} \Big( \int_t^\oT f \big(r,\oX_{r \land \cd} \big) dr \+ \oz \Big) \+ \b1_{\{\oT \ge t + \otau \}} \bigg( \int_t^{t + \otau} f \big(r,\oX_{r \land \cd} \big) dr \+ \oV \Big( t \+ \otau , \oX_{(t+\otau) \land \cd} , \oY_\oP ( \otau ) , \oZ_\oP ( \otau ) \Big) \bigg) \bigg] \- \e . \eeas Letting $\e \nto 0$ gives \eqref{081720_15}. Suppose next that $\oP \big(\oAtc \Cp \ocD_\infty \big) \> 0 $. For any $\e \ins (0,1)$, \eqref{081720_17}, Theorem \ref{thm_V=oV} and \eqref{062720_11} imply that \beas \hspace{-0.5cm} \oV (t,\bx,y,z) & \tn \gs & \tn E_\oP \bigg[ \b1_\oAt \Big( -\int_t^\oT f^- \big(r,\oX_{r \land \cd} \big) dr \- \oz^- \Big) \+ \b1_\oAtc \Big(- \n \int_t^{t+\otau} \n f^- \big( r, \oX_{r \land \cd} \big) dr - \oV^- \big( t \+ \otau , \oX_{(t+\otau) \land \cd} , \oY_\oP (\otau) , \oZ_\oP ( \otau ) \big) \Big) \bigg] \\ & \tn & - \e \+ \frac{1}{\e} \oP \big( \oAtc \cap \ocD_\infty \big) . \eeas Sending $\e \nto 0$ and using \eqref{081620_23} yield $ \oV (t,\bx,y,z) \= \infty $. Hence \eqref{081720_15} still holds. Taking supremum over $\oP \ins \ocP_{t,\bx}(y,z)$ establishes the super-solution part of the DPP for $\oV$. \qed
\section{Introduction} Recent years have seen much progress in the experimental determination of the $g$-factors of hydrogen-like ions \cite{haeffner:00:prl,verdu:04,sturm:11,sturm:11:b}. Measurements accurate to a few parts in $10^{11}$ \cite{sturm:11:b} were performed by studying a single ion confined in a Penning trap. These experiments provided stringent tests of bound-state quantum electrodynamics (QED) theory and yielded the best determination of the electron mass \cite{mohr:08:rmp}. In order to match the experimental accuracy, many sophisticated calculations have been performed in recent years, in particular those of the one-loop self-energy \cite{blundell:97:pra,persson:97:g,yerokhin:02:prl}, the one-loop vacuum-polarization \cite{karshenboim:02:plb}, the nuclear recoil \cite{shabaev:02:recprl}, and the two-loop QED effects \cite{pachucki:04:prl,pachucki:05:gfact}. These calculations made it possible to determine the electron mass from the experimental values of the bound-electron $g$ factor \cite{beier:02:prl}. The theoretical accuracy of the bound-electron $g$ factor in hydrogen-like ions is presently at the $10^{-11}$ level \cite{pachucki:05:gfact} for light elements up to carbon, but it deteriorates quickly for heavier elements because of the unknown higher-order two-loop QED effects, which scale with the nuclear charge number $Z$ as $Z^5$. So far, experimental investigations of the $g$-factors of hydrogen-like ions have been performed only for ions with spinless nuclei. However, when applied to ions with a non-zero nuclear spin, such investigations can serve as a new method for the determination of the magnetic dipole moments of the nuclei. This method has important advantages over the more traditional approaches, such as nuclear magnetic resonance (NMR), atomic-beam magnetic resonance, collinear laser spectroscopy, and optical pumping (OP).
These advantages are that (i) the simplicity of the system under investigation (a hydrogen-like ion) allows for an {\em ab initio} theoretical description with a reliable estimation of uncalculated effects, and (ii) the influence of the nuclear-structure effects (which are the main limiting factors for theory) is relatively weak. This is in contrast to the existing methods, in which the experimental data should be corrected for several physical effects, which are difficult to calculate. Among such effects is the diamagnetic shielding of the external magnetic field by the electrons in the atom. The NMR results should also be corrected for the paramagnetic chemical shift caused by the chemical environment \cite{ramsey:50:dia}, and the OP data are sensitive to the hyperfine mixing of the energy levels \cite{lahaye:70}. Significant (and generally unknown) uncertainties in the calculations of these effects often lead to ambiguities in the published values of nuclear magnetic moments \cite{gustavsson:98:pra}. The goal of the present investigation is to perform an {\it ab initio} calculation of the $g$-factor of a hydrogen-like ion with a non-zero nuclear spin. It can be demonstrated that the nuclear-spin dependent part of the atomic $g$-factor can be parameterized in terms of the nuclear magnetic shielding constant $\sigma$, which describes the effective reduction, caused by the shell electron(s), of the coupling of the nuclear magnetic moment $\vec{\mu}$ to an external magnetic field $\vec{B}$, \begin{eqnarray} -\vec{\mu} \cdot \vec{B} \to -\vec{\mu} \cdot \vec{B}\, (1-\sigma) \,. \end{eqnarray} The relativistic theory of the $g$-factor of a hydrogen-like ion with a non-zero nuclear spin (and, thus, the theory of the nuclear magnetic shielding) was examined in detail in Ref.~\cite{moskovkin:04}.
In the present work, we go beyond the relativistic description of the nuclear magnetic shielding and calculate the dominant corrections to it, namely, the self-energy, the vacuum-polarization, and the nuclear magnetization distribution corrections. As a result, we bring the theory to the point where the uncertainty due to nuclear-structure effects impedes further progress. The main challenge of the present work is the calculation of the self-energy correction. To the best of our knowledge, the only previous attempt to address it was made in Ref.~\cite{rudzinski:09}. In that work, the self-energy contribution to the shielding constant was estimated by the leading logarithm of its ${Z\alpha}$ expansion (where $\alpha$ is the fine-structure constant). In our work, we calculate the self-energy correction rigorously to all orders in the nuclear binding strength parameter ${Z\alpha}$ and, independently, perform an analytical calculation to the leading order in the expansion in this parameter (including both the logarithmic and constant terms). First results of this work were reported in Ref.~\cite{yerokhin:11:prl}. The rest of the paper is organized as follows. In Sec.~\ref{sec:1} we summarize the relativistic theory of the $g$-factor of an ion with a non-zero nuclear spin and the theory of the nuclear magnetic shielding. Our calculation of the self-energy and vacuum-polarization corrections to all orders in ${Z\alpha}$ is described in Sec.~\ref{sec:2}. In Sec.~\ref{sec:Zaexp} we report the calculation of the QED correction to the nuclear magnetic shielding to the leading order in the ${Z\alpha}$ expansion. Sec.~\ref{sec:3} deals with the other effects, namely, the nuclear magnetization distribution, the nuclear recoil, and the quadrupole interaction. Numerical results and discussion are given in Sec.~\ref{sec:4}. The paper ends with conclusions in Sec.~\ref{sec:5}.
The relativistic units ($m = \hbar = c = 1$) and the charge units $\alpha = e^2/(4\pi)$ are used throughout this paper. \section{Leading-order magnetic shielding} \label{sec:1} We consider a hydrogen-like ion with a nucleus of non-zero spin, placed in a weak homogeneous magnetic field $\vec{B}$ directed along the $z$ axis. Assuming that the energy shift due to the interaction with the $\vec{B}$ field (the Zeeman shift) is much smaller than the hyperfine-structure (hfs) splitting, the energy shift can be expressed in terms of the $g$ factor of the atomic system $g_F$, \begin{eqnarray} \label{eq1} \Delta E = g_F\,\mu_0\,B\,M_F\,, \end{eqnarray} where $B = |\vec{B}|$, $\mu_0 = |e|/(2m)$ is the Bohr magneton, $e$ and $m$ are the elementary charge and the electron mass, respectively, and $M_F$ is the $z$ projection of the total angular momentum of the system $F$. To the leading order, the energy shift is given by \begin{eqnarray} \label{eq2} \Delta E = \langle FM_F|\left[-\frac{e}{2}\,(\vec{r}\times\vec{\alpha})\cdot\vec{B}-\vec{\mu}\cdot\vec{B}\right]|FM_F\rangle\,, \end{eqnarray} where $|FM_F\rangle \equiv |jIFM_F\rangle$ is the wave function of the ion, $j$ and $I$ are the angular momentum quantum numbers of the electron and nucleus, respectively; $\vec{\mu}$ is the operator of the magnetic moment of the nucleus.
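Equation~(\ref{eq1}) is linear in $B$ and $M_F$, so adjacent Zeeman sublevels are split by $g_F\,\mu_0\,B$. As a rough numerical sketch (an editorial illustration: the constants below are approximate CODATA values, and $g_F = 2$ is only a placeholder, not a value computed in this paper), the corresponding frequency scale can be evaluated directly:

```python
# Linear Zeeman shift of Eq. (eq1): Delta E = g_F * mu_0 * B * M_F,
# so adjacent M_F sublevels are separated by g_F * mu_0 * B.
# Approximate CODATA values; g_F = 2.0 below is a placeholder only.
MU_B = 9.2740100783e-24    # Bohr magneton mu_0, in J/T
H_PLANCK = 6.62607015e-34  # Planck constant, in J*s

def zeeman_splitting_hz(g_F, B_tesla):
    """Frequency spacing between adjacent M_F sublevels (Delta M_F = 1)."""
    return abs(g_F) * MU_B * B_tesla / H_PLANCK

# A g_F of order 2 in a 1 T field gives a splitting of roughly 28 GHz.
print(zeeman_splitting_hz(2.0, 1.0) / 1e9)
```

For $g_F$ of order unity, the splitting is tens of GHz per tesla, which sets the frequency scale probed in the Penning-trap experiments mentioned in the Introduction.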
The matrix element (\ref{eq2}) can easily be evaluated, yielding the well-known leading-order relativistic result \cite{bethesalpeter}, \begin{eqnarray} \label{eq3} g^{(0)}_F = g_j^{(0)}\, \frac{\langle \vec{j}\cdot\vec{F} \rangle}{F(F+1)} - \frac{m}{m_p}\,g_I\, \frac{\langle \vec{I}\cdot\vec{F} \rangle}{F(F+1)}\,, \end{eqnarray} where $g_j^{(0)}$ is the Dirac bound-electron $g$ factor \cite{breit:28}, $g_I = \mu/(\mu_NI)$ is the nuclear $g$ factor, $\mu = \langle II | \vec{\mu}|II \rangle$ is the nuclear magnetic moment, $\mu_N = |e|/(2m_p)$ is the nuclear magneton, $m_p$ is the proton mass, and $\langle \vec{j}\cdot\vec{F} \rangle$ and $\langle \vec{I}\cdot\vec{F} \rangle$ are the angular-momentum recoupling coefficients, \begin{eqnarray} \langle \vec{j}\cdot\vec{F} \rangle &=& [F(F+1)-I(I+1)+j(j+1)]/2\,,\\ \langle \vec{I}\cdot\vec{F} \rangle &=& [F(F+1)+I(I+1)-j(j+1)]/2\,. \end{eqnarray} Generalizing the leading-order result to include higher-order corrections, we write the atomic $g$ factor $g_F$ as \begin{eqnarray} \label{eq4a} g_F = g_j\, \frac{\langle \vec{j}\cdot\vec{F} \rangle}{F(F+1)} - \frac{m}{m_p}\,g_I\, (1-\sigma)\,\frac{\langle \vec{I}\cdot\vec{F} \rangle}{F(F+1)}\,. \end{eqnarray} In the above equation, the bound-electron $g$ factor $g_j = g_j^{(0)} + \alpha/(2\pi)+ \ldots$ incorporates all corrections that do not depend on the nuclear spin, whereas the nuclear shielding constant $\sigma$ parameterizes the nuclear-spin dependent corrections. The bound-electron $g$ factor $g_j$ has been extensively studied in recent years, both theoretically \cite{yerokhin:02:prl,karshenboim:02:plb,shabaev:02:recprl,pachucki:04:prl,pachucki:05:gfact} and experimentally \cite{haeffner:00:prl,verdu:04,sturm:11}.
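The angular-momentum algebra in Eqs.~(\ref{eq3})--(\ref{eq4a}) can be verified with exact rational arithmetic. The sketch below (added for illustration; the function names are ours) evaluates the prefactors of $g_j$ and $g_I$ for a $j=\nicefrac12$ electron and checks that, upon summing the two hyperfine components $F = I \pm \nicefrac12$, the electronic ($g_j$) part cancels exactly while the nuclear ($g_I$) part sums to $2$ — the cancellation behind the combination $\overline{g}$ introduced in Eq.~(\ref{eq4b}) below.

```python
from fractions import Fraction as Fr

# Recoupling coefficients of Eq. (eq3), evaluated exactly.
def j_dot_F(j, I, F):
    return (F*(F + 1) - I*(I + 1) + j*(j + 1)) / 2

def I_dot_F(j, I, F):
    return (F*(F + 1) + I*(I + 1) - j*(j + 1)) / 2

j = Fr(1, 2)                       # a j = 1/2 electronic state
for I in (Fr(1), Fr(3, 2), Fr(9, 2)):
    cj = ci = Fr(0)
    for F in (I - j, I + j):
        denom = F * (F + 1)
        # For j = 1/2 these prefactors reduce to -+1/(2I+1) for g_j and
        # 2(I+1)/(2I+1), 2I/(2I+1) for g_I on F = I -/+ 1/2, respectively.
        cj += j_dot_F(j, I, F) / denom   # multiplies g_j in Eq. (eq4a)
        ci += I_dot_F(j, I, F) / denom   # multiplies g_I in Eq. (eq4a)
    # Summed over F = I -/+ 1/2: the g_j part cancels and the g_I part
    # sums to 2, reproducing the structure of Eq. (eq4b).
    print(I, cj, ci)   # cj = 0, ci = 2 for every I
```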
The goal of the present work is the {\em ab initio} theoretical description of the nuclear shielding parameter $\sigma$. It can be seen from Eq.~(\ref{eq4a}) that the nuclear-spin dependent contribution to the atomic $g$ factor $g_F$ is suppressed by the electron-to-proton mass ratio and is thus about three orders of magnitude smaller than the nuclear-spin independent part proportional to $g_j$. It is, however, possible to form a combination of the atomic $g$ factors that is free of the nuclear-spin independent contributions. For ions with nuclear spin $I>\nicefrac12$, we therefore introduce the sum of the atomic $g$ factors $\overline{g}$ that is directly proportional to the nuclear magnetic moment, \begin{eqnarray} \label{eq4b} \overline{g} \equiv g_{F = I+\nicefrac12}+ g_{F = I-\nicefrac12} = -2\frac{m}{m_p}\frac{\mu}{\mu_NI}\,(1-\sigma)\,. \end{eqnarray} This combination of the $g$ factors is particularly convenient for the determination of nuclear magnetic dipole moments from experiment. Indeed, if both $g_{F = I+\nicefrac12}$ and $g_{F = I-\nicefrac12}$ are measured and $\sigma$ is known from theory, Eq.~(\ref{eq4b}) determines the nuclear magnetic moment $\mu$. For ions with nuclear spin $I=\nicefrac12$, Eq.~(\ref{eq4b}) is not applicable and the nuclear magnetic moment should be determined from Eq.~(\ref{eq4a}). Contributions to the nuclear magnetic shielding are described by Feynman diagrams with two external interactions, one with the external magnetic field (the Zeeman interaction), \begin{eqnarray} V_{\rm zee}(r) = \frac{|e|}{2}\,\vec{B}\cdot (\vec{r}\times\vec{\alpha}) \,, \end{eqnarray} and the other with the magnetic dipole field of the nucleus (the hfs interaction), \begin{eqnarray} V_{\rm hfs}(r) = \frac{|e|}{4\pi}\, \vec{\mu}\cdot \frac{\vec{r}\times \vec{\alpha}}{r^3}\,.
\end{eqnarray} The leading-order contribution to the magnetic shielding comes from the following energy shift \begin{eqnarray} \label{eq5} \Delta E = 2\,\sum_{n\ne a} \frac1{\varepsilon_a-\varepsilon_n} \langle a|V_{\rm zee}|n\rangle\langle n|V_{\rm hfs}|a\rangle\,, \end{eqnarray} where the summation runs over the whole Dirac spectrum with the reference state excluded. As follows from Eqs.~(\ref{eq1}) and (\ref{eq4a}), contributions to the shielding constant $\delta \sigma$ are obtained from the corresponding energy shifts $\delta E$ by \begin{eqnarray} \delta \sigma = \frac{\delta E}{\mu\, B\, M_F\, \frac{\langle \vec{I}\cdot\vec{F} \rangle}{IF(F+1)}}\,. \end{eqnarray} For the electronic states with $j_a=1/2$, all nuclear quantum numbers in Eq.~(\ref{eq5}) can be factorized out. The expression for the shielding constant is then obtained from that formula by using the following substitutions \begin{eqnarray} \label{eqsubst} & V_{\rm zee} \to \widetilde{V}_{\rm zee} \equiv (\vec{r}\times\vec{\alpha})_0\,, \ \ \ V_{\rm hfs} \to \widetilde{V}_{\rm hfs} \equiv \frac{\displaystyle (\vec{r}\times\vec{\alpha})_0}{\displaystyle r^3}\,, \ \ \ \nonumber \\ & | a \rangle \to | a_{\nicefrac12} \rangle \,, \ \ \ | n \rangle \to | n_{\nicefrac12} \rangle \,, \nonumber \\ & 2 \to \alpha \ \ \mbox{\rm (prefactor)}\,, \end{eqnarray} where the zero subscript refers to the zeroth spherical component of the vector. Here and in what follows, $|n_{\nicefrac12}\rangle \equiv |\kappa_n,\mu_n=\nicefrac12\rangle$ denotes the Dirac state with the relativistic angular quantum number $\kappa_n$ and the fixed momentum projection $\mu_n=1/2$.
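For the point nucleus, the spectral sum admits the closed form of Eq.~(\ref{eq5a}) below, which can be checked numerically against its $Z\alpha$ expansion. A minimal sketch (the CODATA value of $\alpha$ is assumed; for hydrogen it reproduces the familiar nonrelativistic limit $\sigma \approx \alpha^2/3 \approx 17.75$ ppm):

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant (CODATA value, truncated)

def sigma0_closed(Z):
    # point-nucleus closed form, first line of Eq. (eq5a)
    za = Z * ALPHA
    g = math.sqrt(1 - za**2)  # gamma = sqrt(1 - (Z alpha)^2)
    return -4 * ALPHA * za / 9 * (1/3 - 1/(6*(1 + g)) + 2/g - 3/(2*g - 1))

def sigma0_series(Z):
    # first two terms of the Z*alpha expansion, second line of Eq. (eq5a)
    za = Z * ALPHA
    return ALPHA * za * (1/3 + (97/108) * za**2)

for Z in (1, 10, 20):
    c, s = sigma0_closed(Z), sigma0_series(Z)
    # the truncation error of the series is O((Z alpha)^4) relative to sigma
    assert abs(c - s) / c < 10 * (Z * ALPHA)**4

# hydrogen: nonrelativistic limit alpha^2/3, i.e. about 17.75 ppm
assert abs(sigma0_closed(1) - ALPHA**2 / 3) / (ALPHA**2 / 3) < 1e-3
```

Such a check is also a convenient benchmark for finite-basis implementations of the spectral sum in Eq.~(\ref{eq:leading}).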
So, for $j_a=1/2$, the leading contribution to the magnetic shielding is given by a simple expression \begin{align} \sigma^{(0)} = &\, \alpha \sum_{n\ne a}\frac1{\varepsilon_a-\varepsilon_n}\, \langle a_{\nicefrac12}| \widetilde{V}_{\rm zee} |n_{\nicefrac12}\rangle \langle n_{\nicefrac12}| \widetilde{V}_{\rm hfs} |a_{\nicefrac12}\rangle\,. \end{align} Performing the angular integrations with the help of Eq.~(\ref{app:eq3}), one gets for the $ns$ reference state \begin{eqnarray} \label{eq:leading} \sigma^{(0)} = \alpha \sum_{\kappa_n = -1,2} x_{\kappa_n}^2\, \sum_{n\ne a} \frac{R_{an}^{(1)}\,R_{na}^{(-2)}}{\varepsilon_a-\varepsilon_n}\,, \end{eqnarray} where $R^{(\alpha)}$ are the radial integrals of the form \begin{eqnarray} \label{eqrad} R^{(\alpha)}_{ab} = \int_0^{\infty}dr\,r^{2+\alpha}\bigl[g_a(r)f_b(r)+f_a(r)g_b(r) \bigr]\,, \end{eqnarray} $g(r)$ and $f(r)$ are the upper and lower radial components of the Dirac wave function, respectively, and the angular prefactors $x_{\kappa_n}$ are given by $x_{\kappa_n=-1}=-2/3$, $x_{\kappa_n=2}=-\sqrt{2}/3$. For the point nuclear model, the sum over the Dirac spectrum in the above expression can be evaluated analytically \cite{moore:99,pyper:99:a,pyper:99:b}, see also the more recent studies \cite{moskovkin:04,ivanov:09}, \begin{eqnarray} \label{eq5a} \sigma^{(0)} &=& -\frac{4\alpha{Z\alpha}}{9}\left(\frac13-\frac1{6(1+\gamma)} +\frac{2}{\gamma}-\frac3{2\gamma-1} \right) \nonumber \\ &=& \alpha{Z\alpha}\left(\frac13 + \frac{97}{108}({Z\alpha})^2+\ldots \right)\,, \end{eqnarray} where $\gamma = \sqrt{1-({Z\alpha})^2}$. For the extended nucleus, the calculation is easily performed numerically \cite{moskovkin:04}. \section{QED correction} \label{sec:2} \begin{figure*} \centerline{\includegraphics[width=0.9\textwidth]{sepnc2.eps}} \caption{Self-energy correction to the nuclear magnetic shielding. The double line represents the electron in the binding nuclear field.
The wave line terminated by a triangle represents the dipole hyperfine interaction with the nucleus, and the wave line terminated by a cross represents the interaction with the external magnetic field. \label{fig:1} } \end{figure*} \subsection{Self-energy: General formulas} The self-energy correction in the presence of the external magnetic field and the magnetic dipole field of the nucleus is graphically represented by the six topologically non-equivalent Feynman diagrams shown in Fig.~\ref{fig:1}. General expressions for these diagrams are conveniently obtained by using the two-time Green's function method \cite{shabaev:02:rep}. The resulting formulas are summarized below. \subsubsection{Perturbed-orbital contribution} The irreducible contributions of the diagrams in Fig.~\ref{fig:1}(a)-(c) can be represented in terms of the one-loop self-energy operator $\Sigma$. This part will be termed the perturbed-orbital contribution. The corresponding energy shift is \begin{eqnarray} \label{po1} \Delta E_{\rm po} &=& 2\, \langle \Sigma_R \,\frac1{(\varepsilon_a-H)^{\prime}}\, V_{\rm zee}\, \frac1{(\varepsilon_a-H)^{\prime}}\, V_{\rm hfs} \rangle \nonumber \\ && + 2\, \langle \Sigma_R \,\frac1{(\varepsilon_a-H)^{\prime}}\, V_{\rm hfs}\, \frac1{(\varepsilon_a-H)^{\prime}}\, V_{\rm zee} \rangle \nonumber \\ && -2\, \langle \Sigma_R \,\frac1{(\varepsilon_a-H)^{2^{\prime}}}\, V_{\rm zee}\rangle \langle V_{\rm hfs} \rangle \nonumber \\ && -2\, \langle \Sigma_R \,\frac1{(\varepsilon_a-H)^{2^{\prime}}}\, V_{\rm hfs}\rangle \langle V_{\rm zee} \rangle \nonumber \\ && -2\, \langle \Sigma_R \rangle \langle V_{\rm zee} \,\frac1{(\varepsilon_a-H)^{2^{\prime}}}\, V_{\rm hfs}\rangle \nonumber \\ && +2\, \langle V_{\rm zee} \,\frac1{(\varepsilon_a-H)^{\prime}}\, \Sigma_R\,
\frac1{(\varepsilon_a-H)^{\prime}}\, V_{\rm hfs} \rangle\,, \nonumber \\ && \end{eqnarray} where we used the short-hand notations for the reduced Green's function, \begin{equation} \frac1{(\varepsilon_a-H)^{{\prime}}} = \sum_{n\ne a} \frac{|n\rangle\langle n|}{\varepsilon_a-\varepsilon_n}\,, \end{equation} and its derivative \begin{equation} \frac1{(\varepsilon_a-H)^{2^{\prime}}} = \sum_{n\ne a} \frac{|n\rangle\langle n|}{(\varepsilon_a-\varepsilon_n)^2}\,. \end{equation} The (unrenormalized) self-energy operator $\Sigma(\varepsilon)$ is defined by its matrix elements as follows \begin{eqnarray} \langle i | \Sigma(\varepsilon)|k\rangle &=& \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \sum_{n} \frac{\langle in|I(\omega)|nk\rangle}{\varepsilon-\omega-u\,\varepsilon_n} \,, \end{eqnarray} where $I(\omega)$ is the operator of the electron-electron interaction, $I(\omega) = e^2\alpha_{\mu}\alpha_{\nu}D^{\mu\nu}(\omega)$, $D^{\mu\nu}(\omega)$ is the photon propagator, $\alpha_{\mu}$ are the Dirac matrices, and $u \equiv 1-i0$, where $i0$ is a small imaginary addition that defines the positions of the poles of the electron propagators with respect to the integration contour. The summation over $n$ runs over the complete Dirac spectrum. Renormalization of the one-loop self-energy operator is well described in the literature, see, e.g., Ref.~\cite{snyderman:91}.
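The defining property of the reduced Green's function, a spectral sum over all states except the reference one, is easy to exercise in a finite-dimensional toy model: for a Hermitian $H$ with a nondegenerate eigenvalue $\varepsilon_a$, the reduced resolvent is simply the Moore-Penrose pseudoinverse of $\varepsilon_a-H$. A hypothetical sketch (a random model matrix, not the actual Dirac spectrum or the basis used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian "Hamiltonian" in a finite basis, nondegenerate spectrum
N = 8
A = rng.standard_normal((N, N))
H = (A + A.T) / 2

evals, evecs = np.linalg.eigh(H)
a = 3                        # index of the "reference state" |a>
eps_a = evals[a]

# Spectral definition: sum over all states but the reference one
G_red = sum(np.outer(evecs[:, n], evecs[:, n]) / (eps_a - evals[n])
            for n in range(N) if n != a)

# Equivalent closed form: the pseudoinverse of (eps_a - H); its single
# (numerically near-)zero mode |a><a| is projected out by the rcond cutoff
G_pinv = np.linalg.pinv(eps_a * np.eye(N) - H, rcond=1e-10)

assert np.allclose(G_red, G_pinv)
# acting with (eps_a - H) from the right restores the projector 1 - |a><a|
P = np.eye(N) - np.outer(evecs[:, a], evecs[:, a])
assert np.allclose(G_red @ (eps_a * np.eye(N) - H), P)
```

The same two identities serve as consistency checks for basis-set implementations of the perturbed wave functions introduced later.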
In this work, the renormalized part of the self-energy operator is defined as \begin{align} \Sigma_R(\varepsilon) = \Sigma(\varepsilon)-\beta\,\delta m - (\varepsilon-\vec{\alpha}\cdot\vec{p} - \,V_C -\beta m)\,B^{(1)}\,, \end{align} where $\delta m$ is the mass counterterm, $\beta$ is the Dirac $\beta$ matrix, $B^{(1)}$ is the one-loop renormalization constant, $V_C$ is the binding Coulomb potential of the nucleus, and the renormalization is to be performed in momentum space with a covariant regularization of ultraviolet (UV) divergences. Details on the renormalization procedure and explicit formulas for $\Sigma_R(\varepsilon)$ can be found in Refs.~\cite{yerokhin:99:pra,yerokhin:03:epjd}. \subsubsection{Single-vertex contributions} The irreducible contribution of the diagram shown in Fig.~\ref{fig:1}(d), together with the corresponding derivative term, is referred to as the {\em hfs-vertex} contribution. It is given by \begin{eqnarray} \label{vrhfs} \Delta E_{\rm vr,hfs} &=& 2\, \langle \Gamma_{{\rm hfs},R} \,\frac1{(\varepsilon_a-H)^{{\prime}}}\, V_{\rm zee}\rangle \nonumber \\ && + 2\,\langle \Sigma^{\prime}_R \,\frac1{(\varepsilon_a-H)^{{\prime}}}\, V_{\rm zee}\rangle\,\langle V_{\rm hfs}\rangle\, , \end{eqnarray} where $\Gamma_{{\rm hfs},R}\equiv \Gamma_{{\rm hfs},R}(\varepsilon_a)$ is the renormalized part of the 3-point vertex representing the interaction with the hfs field and $\Sigma^{\prime}_R\equiv \Sigma^{\prime}_R(\varepsilon_a)$ is the derivative of the renormalized self-energy operator with respect to the energy argument, $\Sigma^{\prime}_R(\varepsilon_a) = \left. d/(d\varepsilon)\,\Sigma_R(\varepsilon)\right|_{\varepsilon= \varepsilon_a}$.
The unrenormalized 3-point hfs vertex operator is defined by its matrix elements as \begin{align} \langle i | \Gamma_{\rm hfs}(\varepsilon)|k\rangle &\ = \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \nonumber \\ & \times \sum_{n_1n_2} \frac{\langle in_2|I(\omega)|n_1k\rangle \langle n_1|V_{\rm hfs}|n_2\rangle } {(\varepsilon-\omega-u\varepsilon_{n_1})(\varepsilon-\omega-u\varepsilon_{n_2})} \,. \end{align} The renormalized part of the operator is obtained as \begin{eqnarray} \Gamma_{{\rm hfs},R}(\varepsilon) = \Gamma_{{\rm hfs}}(\varepsilon) - V_{\rm hfs} \, L^{(1)} \,, \end{eqnarray} where $L^{(1)}$ is the one-loop renormalization constant and the renormalization is to be performed in momentum space with a covariant regularization of UV divergences, see Ref.~\cite{yerokhin:03:epjd} for details. The irreducible contribution of the diagram shown in Fig.~\ref{fig:1}(e), together with the corresponding derivative term, is referred to as the {\em Zeeman-vertex} contribution. It is given by \begin{eqnarray} \label{vrzee} \Delta E_{\rm vr,zee} &=& 2\, \langle \Gamma_{{\rm zee},R} \,\frac1{(\varepsilon_a-H)^{{\prime}}}\, V_{\rm hfs}\rangle \nonumber \\ && + 2\,\langle \Sigma^{\prime}_R \,\frac1{(\varepsilon_a-H)^{{\prime}}}\, V_{\rm hfs}\rangle\,\langle V_{\rm zee}\rangle\,, \end{eqnarray} where the 3-point Zeeman vertex is defined analogously to the hfs one. \subsubsection{Double-vertex contribution} The contribution of the diagram shown in Fig.~\ref{fig:1}(f), together with the corresponding derivative terms, will be termed the {\em double-vertex} contribution.
It is defined by \begin{align} \label{dvr} &\Delta E_{\rm d.vr} = 2\,\langle \Lambda(\varepsilon_a)\rangle \nonumber \\ & + \langle \Sigma^{\prime\prime}\rangle \langle V_{\rm zee}\rangle \langle V_{\rm hfs}\rangle + \langle \Gamma_{\rm hfs}^{\prime}\rangle \langle V_{\rm zee}\rangle + \langle \Gamma_{\rm zee}^{\prime}\rangle\langle V_{\rm hfs} \rangle \nonumber \\ & - 2\, \langle V_{\rm zee} \frac1{(\varepsilon_a-H)^{\prime}} V_{\rm hfs}\rangle\, \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \sum_{a'} \frac{\langle aa'|I(\omega)|a'a\rangle}{(-\omega+i0)^2} \,, \end{align} where $\Lambda$ denotes the 4-point vertex containing both the Zeeman and hfs interactions, \begin{align} &\langle i | \Lambda(\varepsilon)|k\rangle = \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \nonumber \\ &\times \sum_{n_1n_2n_3} \frac{\langle in_3|I(\omega)|n_1k\rangle \langle n_1|V_{\rm zee}|n_2\rangle \langle n_2|V_{\rm hfs}|n_3\rangle } {(\varepsilon-\omega-u\varepsilon_{n_1})(\varepsilon-\omega-u\varepsilon_{n_2})(\varepsilon-\omega-u\varepsilon_{n_3})} \,, \end{align} $\Sigma^{\prime\prime}\equiv\Sigma^{\prime\prime}(\varepsilon_a)$ denotes the second derivative of the self-energy operator with respect to the energy argument, $\Sigma^{\prime\prime}(\varepsilon_a) = \left. d^2/(d\varepsilon^2)\,\Sigma(\varepsilon)\right|_{\varepsilon= \varepsilon_a}$, $\Gamma^{\prime}\equiv \Gamma^{\prime}(\varepsilon_a)$ denotes the derivative of the vertex operator with respect to the energy argument, and $a'$ denotes the intermediate electron states with the energy $\varepsilon_{a'} = \varepsilon_{a}$.
The last term in Eq.~(\ref{dvr}) is added artificially, in order to make the whole expression for $\Delta E_{\rm d.vr}$ infrared (IR) finite. The same term will be subtracted from the derivative contribution defined below, see Eq.~(\ref{der}). We note that all terms in Eq.~(\ref{dvr}) are UV finite, so that there is no need for any UV regularization. There are, however, IR divergences, which appear when the energy of the intermediate electron states in the electron propagators coincides with the energy of the reference state. The divergences cancel out in the sum of the individual terms in Eq.~(\ref{dvr}). \subsubsection{Derivative contribution} Finally, the remaining contribution will be termed the {\em derivative} term. It is given by \begin{eqnarray} \label{der} \Delta E_{\rm der} &=& 2\, \biggl[\langle \Sigma^{\prime}_R \rangle +\frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \sum_{a'} \frac{\langle aa'|I(\omega)|a'a\rangle}{(-\omega+i0)^2} \biggr] \nonumber \\ && \times \langle V_{\rm zee} \,\frac1{(\varepsilon_a-H)^{{\prime}}}\, V_{\rm hfs}\rangle \,. \end{eqnarray} The second term in the brackets is added artificially, in order to compensate the IR reference-state divergence present in the first term, making the total expression for $\Delta E_{\rm der}$ IR finite. Note that this term is exactly the same as the one added to Eq.~(\ref{dvr}) but has the opposite sign. Finally, the total self-energy correction is given by the sum of the contributions discussed above, \begin{align} \label{eq7} \Delta E_{\rm SE} = \Delta E_{\rm po}+ \Delta E_{\rm vr, hfs}+ \Delta E_{\rm vr, zee} + \Delta E_{\rm d.vr} + \Delta E_{\rm der}\,, \end{align} which are given by Eqs.~(\ref{po1}), (\ref{vrhfs}), (\ref{vrzee}), (\ref{dvr}), and (\ref{der}), respectively. \subsection{Self-energy: Calculation} The general formulas reported so far represent contributions to the energy shift.
We now have to separate out the nuclear degrees of freedom and convert the corrections to the energy into corrections to the shielding constant. In most cases, this is achieved simply by using the substitutions (\ref{eqsubst}). The double-vertex contribution, however, requires an explicit angular-momentum algebra calculation for the separation of the nuclear variables. We will see that most of the corrections to the shielding constant can be regarded as generalizations of corrections already discussed in the literature. Our present calculation is therefore largely based on the previous investigations of the self-energy correction to the Lamb shift \cite{yerokhin:99:pra,yerokhin:05:se}, to the hyperfine structure \cite{yerokhin:05:hfs,yerokhin:08:prl,yerokhin:10:sehfs}, and to the $g$ factor \cite{yerokhin:04,yerokhin:08:prl,yerokhin:10:sehfs}. A double-vertex correction similar to the one encountered here appeared in the evaluation of the self-energy correction to parity-nonconserving transitions in Refs.~\cite{shabaev:05:prl,shabaev:05:pra}. In the present work, however, we develop a different scheme for evaluating the double-vertex contribution, based on the analytical representation of the Dirac-Coulomb Green function. \subsubsection{Perturbed-orbital contributions} The matrix elements of the self-energy operator are diagonal in the relativistic angular momentum quantum number $\kappa$ and the momentum projection $\mu$. Because of this, the angular reduction of the perturbed-orbital contribution is achieved by the same set of substitutions (\ref{eqsubst}) as for the leading-order magnetic shielding.
The resulting contribution to the shielding constant is conveniently represented as \begin{eqnarray} \Delta \sigma_{\rm po} &=& \alpha\, \langle a_{\nicefrac12}|\Sigma_R(\varepsilon_a) |\delta^{(2)}a_{\nicefrac12}\rangle \nonumber \\ && + \alpha\, \langle \delta^{(1)}_{\rm hfs}a_{\nicefrac12}|\Sigma_R(\varepsilon_a) |\delta^{(1)}_{\rm zee}a_{\nicefrac12}\rangle\,, \end{eqnarray} where the first-order perturbations of the reference-state wave function are given by \begin{align} |\delta^{(1)}_{\rm hfs} a_{\nicefrac12}\rangle = \sum_{n\ne a} |n_{\nicefrac12}\rangle \frac1{\varepsilon_a-\varepsilon_{n}}\, \langle n_{\nicefrac12}|\widetilde{V}_{\rm hfs} |a_{\nicefrac12}\rangle \,, \end{align} and \begin{align} |\delta^{(1)}_{\rm zee} a_{\nicefrac12}\rangle = \sum_{n\ne a} |n_{\nicefrac12}\rangle \frac1{\varepsilon_a-\varepsilon_{n}}\, \langle n_{\nicefrac12}|\widetilde{V}_{\rm zee}|a_{\nicefrac12}\rangle \,, \end{align} and $|\delta^{(2)}a\rangle$ is the standard second-order perturbation \cite{landau:III} of the reference-state wave function induced by {\em both} interactions, $\widetilde{V}_{\rm hfs}$ and $\widetilde{V}_{\rm zee}$. Note that only the part of the perturbed wave functions diagonal in $\kappa$ contributes to $\Delta \sigma_{\rm po}$. The calculation of the non-diagonal matrix elements of the self-energy operator is performed by a straightforward generalization of the method developed in Ref.~\cite{yerokhin:05:se} for the first-order self-energy correction to the Lamb shift. \subsubsection{Hfs-vertex contributions} The hfs-vertex correction to the energy shift (\ref{vrhfs}) for the reference state with $j_a=\nicefrac12$ can be converted to the correction to the magnetic shielding by the substitution (\ref{eqsubst}).
The result is \begin{align} \label{eqhfs} & \Delta \sigma_{\rm vr,hfs} = \frac{i\alpha}{2\pi}\int_{-\infty}^{\infty} d\omega \nonumber \\ & \times \sum_{n_1n_2} \frac{\langle a_{\nicefrac12}n_2|I(\omega)|n_1\,\delta^{(1)}_{\rm zee} a_{\nicefrac12}\rangle \langle n_1|\widetilde{V}_{\rm hfs}|n_2\rangle } {(\varepsilon_a-\omega-u\,\varepsilon_{n_1})(\varepsilon_a-\omega-u\,\varepsilon_{n_2})} \nonumber \\ & - \langle\widetilde{V}_{\rm hfs}\rangle \, \frac{i\alpha}{2\pi}\int_{-\infty}^{\infty} d\omega \sum_{n} \frac{\langle a_{\nicefrac12}n|I(\omega)|n\,\delta^{(1)}_{\rm zee} a_{\nicefrac12}\rangle } {(\varepsilon_a-\omega-u\,\varepsilon_{n})^2} \,, \end{align} where the covariant regularization of the UV divergences is implicitly assumed. The right-hand side of Eq.~(\ref{eqhfs}) differs from the vertex and reducible parts of the self-energy correction to the hyperfine structure only by the perturbed wave function $|\delta^{(1)}_{\rm zee} a\rangle$ in place of one of the reference-state wave functions $|a\rangle$. The main complication brought by this difference is that the perturbed wave function contains components with different values of the relativistic angular quantum number $\kappa$. So, for the reference state with $\kappa_a = -1$, the perturbed wave function has components with $\kappa = -1$ and $\kappa = 2$, both of which contribute to the first term in Eq.~(\ref{eqhfs}), denoted in the following as $\Delta \sigma_{\rm ver,hfs}$. The second (reducible) term contains only the $\kappa = \kappa_a$ component of the perturbed wave function and its calculation is done exactly as described in Ref.~\cite{yerokhin:05:hfs}. Below, we present the generalization of the formulas derived in Refs.~\cite{yerokhin:05:hfs,yerokhin:10:sehfs} needed for the evaluation of $\Delta \sigma_{\rm ver,hfs}$.
As explained in Ref.~\cite{yerokhin:05:hfs}, the covariant separation of the UV divergences is conveniently performed by dividing the vertex contribution into the zero- and many-potential parts, according to the number of interactions with the binding Coulomb field in the electron propagator, \begin{align} \Delta \sigma_{\rm ver,hfs} = \Delta \sigma_{\rm ver,hfs}^{(0)} + \Delta \sigma_{\rm ver,hfs}^{(1+)}\,. \end{align} The zero-potential part is calculated in momentum space with the dimensional regularization of the UV divergences. The Fourier transform of $\widetilde{V}_{\rm hfs}$ reads \begin{eqnarray} \frac{(\vec{r} \times \vec{\alpha})_0}{r^3} \to (-4\pi i)\, \frac{(\vec{q} \times \vec{\alpha})_0}{\vec{q}^2}\,, \end{eqnarray} where $\vec{q} = \vec{p}_1-\vec{p}_2$ is the transferred momentum. The contribution of the zero-potential hfs vertex part to the shielding constant is \begin{align} \label{e6} \Delta \sigma^{(0)}_{\rm ver,hfs} &\ = -4\pi i \alpha \int \frac{d\vec{p}_1}{(2\pi)^3}\, \frac{d\vec{p}_2}{(2\pi)^3}\, \nonumber \\ & \times \overline{\psi}_{a_{\nicefrac12}}(\vec{p}_1)\, \frac{\left[ \vec{q} \times {\vec \Gamma}_R(p_1,p_2)\right]_0}{\vec{q}^2}\, \psi_{\delta a_{\nicefrac12}}(\vec{p}_2)\,, \end{align} where $p_1$ and $p_2$ are 4-vectors with the fixed time component, $p_1 = (\varepsilon_a,\vec{p}_1)$, $p_2 = (\varepsilon_a,\vec{p}_2)$, $\psi_a$ and $\psi_{\delta a}$ are the reference-state and the perturbed wave functions, respectively, $\overline{\psi} = \psi^{\dag}\gamma^0$ is the Dirac conjugate, and ${\vec \Gamma}_R$ is the renormalized one-loop vertex operator \cite{yerokhin:99:pra}.
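The momentum-space form of $\widetilde{V}_{\rm hfs}$ employed above is the standard Coulomb-type transform; for completeness we sketch its derivation, assuming the convention $\tilde{f}(\vec{q}) = \int d^3r\, e^{-i\vec{q}\cdot\vec{r}}\, f(\vec{r})$. From $\int d^3r\, e^{-i\vec{q}\cdot\vec{r}}/r = 4\pi/\vec{q}^2$ and $\vec{r}/r^3 = -\vec{\nabla}(1/r)$, integration by parts gives \begin{align} \int d^3r\, e^{-i\vec{q}\cdot\vec{r}}\, \frac{\vec{r}}{r^3} = -\,i\vec{q}\, \frac{4\pi}{\vec{q}^2}\,, \end{align} and, since $\vec{\alpha}$ is unaffected by the transform, $(\vec{r}\times\vec{\alpha})_0/r^3 \to (-4\pi i)\,(\vec{q}\times\vec{\alpha})_0/\vec{q}^2$, as quoted above.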
For evaluating the integrals over the angular variables, it is convenient to use the following representation of the vertex operator sandwiched between the Dirac wave functions \begin{align} \label{e7} \overline{\psi}_a(\vec{p}_1)&\ {\vec\Gamma}_R(p_1,p_2)\, \psi_b(\vec{p}_2) = \frac{\alpha}{4\pi} \left[ {\cal R}_1 \chi^{\dag}_{\kappa_a \mu_a}(\hat{\vec{p}}_1) \, {\vec{\sigma}}\, \chi_{-\kappa_b\mu_b}(\hat{\vec{p}}_2) \right. \nonumber \\ & +{\cal R}_2 \chi^{\dag}_{-\kappa_a \mu_a}(\hat{\vec{p}}_1) \,{\vec{\sigma}}\, \chi_{\kappa_b\mu_b}(\hat{\vec{p}}_2) \nonumber \\ & + ({\cal R}_3 \,\vec{p}_1+ {\cal R}_4\, \vec{p}_2) \chi^{\dag}_{\kappa_a \mu_a}(\hat{\vec{p}}_1) \chi_{\kappa_b\mu_b}(\hat{\vec{p}}_2) \nonumber \\& + \left. ({\cal R}_5 \,\vec{p}_1 +{\cal R}_6\, \vec{p}_2) \chi^{\dag}_{-\kappa_a \mu_a}(\hat{\vec{p}}_1) \chi_{-\kappa_b\mu_b}(\hat{\vec{p}}_2) \right] \,, \end{align} where $\hat{\vec{p}} \equiv \vec{p}/|\vec{p}|$, $\chi_{\kappa\mu}(\hat{\vec{p}})$ are the spin-angular Dirac spinors \cite{rose:61}, and the scalar functions ${\cal R}_i$ are given by Eqs.~(A7)--(A12) of Ref.~\cite{yerokhin:99:sescr}.
Integration over the angular variables yields (cf.~Eq.~(30) of Ref.~\cite{yerokhin:10:sehfs}) \begin{widetext} \begin{align} \label{e8} \Delta \sigma^{(0)}_{\rm ver,hfs} &\ = -\frac{\alpha^2}{48\pi^5}\, \sum_{\kappa_{\delta a}} x_{\kappa_{\delta a}}\, i^{l_a-l_{\delta a}}\, \int_0^{\infty}dp_{1r}\,dp_{2r}\int_{|p_{1r}-p_{2r}|}^{p_{1r}+p_{2r}}dq_r\, \frac{p_{1r}p_{2r}}{q_r}\, \nonumber \\ & \times \Bigl\{ {\cal R}_1 \, [p_{1r}K_1(\kappa_a,-\kappa_{\delta a})-p_{2r}K_1^{\prime}(\kappa_a,-\kappa_{\delta a})] +{\cal R}_2 \, [p_{1r}K_1(-\kappa_a,\kappa_{\delta a})-p_{2r}K_1^{\prime}(-\kappa_a,\kappa_{\delta a})] \nonumber \\ & + p_{1r}p_{2r}({\cal R}_3+{\cal R}_4)\, K_2(\kappa_a,\kappa_{\delta a})\, + p_{1r}p_{2r}({\cal R}_5+{\cal R}_6)\, K_2(-\kappa_a,-\kappa_{\delta a}) \Bigr\}\,, \end{align} \end{widetext} where $p_{i_r} = |\vec{p}_i|$, $q_r = |\vec{q}|$, $\kappa_{\delta a}$ is the relativistic angular quantum number of the perturbed wave function, $l_n = |\kappa_{n}+\nicefrac12|-\nicefrac12$, $x_{\kappa=-1}=-2/3$, $x_{\kappa=2}=-\sqrt{2}/3$, and the basic angular integrals $K_i(\kappa,\kappa')$ are defined and evaluated in Appendix~\ref{app:angular}. The many-potential vertex contribution is free from UV divergences and thus can be calculated in coordinate space. 
The result after the integration over the angular variables is \begin{align} \label{eq00} \Delta \sigma_{\rm ver, hfs}^{(1+)} &\ = \sum_{\kappa_{\delta a} = -1,2} x_{\kappa_{\delta a}}\, \frac{i\alpha^2}{2\pi} \int_{-\infty}^{\infty} d\omega \nonumber \\& \times \sum_{n_1n_2} \frac{R^{(-2)}_{n_1n_2} \, \sum_J X_J(\kappa_1,\kappa_2)\, R_J(\omega,an_2n_1\delta a)} {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})} \nonumber \\& - \ \mbox{\rm subtraction}\,, \end{align} where $R^{(-2)}_{n_1n_2}$ is the radial integral of the hfs type given by Eq.~(\ref{eqrad}), $X_J$ is the angular coefficient, \begin{align} X_J(\kappa_1,\kappa_2) =&\ \frac{(-1)^{j_{\delta a}-1/2}}{\sqrt{2}} \SixJ{j_1}{j_2}{1}{j_{\delta a}}{j_a}{J} \nonumber \\ & \times \frac{-\kappa_1-\kappa_2}{\sqrt{3}} C_1(-\kappa_2,\kappa_1)\,\,, \end{align} $C_1(\kappa_a,\kappa_b)$ is the reduced matrix element of the normalized spherical harmonics given by Eq.~(C10) of Ref.~\cite{yerokhin:99:pra}, $R_J$ is the relativistic generalization of the Slater radial integral given by Eqs.~(C1)--(C9) of Ref.~\cite{yerokhin:99:pra}, and the subtraction in the last line of Eq.~(\ref{eq00}) means that the contribution of the free propagators (already accounted for by the zero-potential term) needs to be subtracted. \subsubsection{Zeeman-vertex contribution} The Zeeman-vertex correction to the energy shift (\ref{vrzee}) for the reference state with $j_a=\nicefrac12$ can be converted to the correction to the magnetic shielding by the substitution (\ref{eqsubst}). 
The result has the form analogous to that for the hfs-vertex contribution, \begin{align} & \Delta \sigma_{\rm vr,zee} = \frac{i\alpha}{2\pi}\int_{-\infty}^{\infty} d\omega \nonumber \\ & \times \sum_{n_1n_2} \frac{\langle a_{\nicefrac12}n_2|I(\omega)|n_1\,\delta^{(1)}_{\rm hfs} a_{\nicefrac12}\rangle \langle n_1|\widetilde{V}_{\rm zee}|n_2\rangle } {(\varepsilon_a-\omega-u\,\varepsilon_{n_1})(\varepsilon_a-\omega-u\,\varepsilon_{n_2})} \nonumber \\ & - \langle \widetilde{V}_{\rm zee}\rangle \, \frac{i\alpha}{2\pi}\int_{-\infty}^{\infty} d\omega \sum_{n} \frac{\langle a_{\nicefrac12}n|I(\omega)|n\,\delta^{(1)}_{\rm hfs} a_{\nicefrac12}\rangle } {(\varepsilon_a-\omega-u\,\varepsilon_{n})^2} \,, \end{align} where the covariant regularization of the UV divergences is implicitly assumed. The above expression looks very similar to Eq.~(\ref{eqhfs}) and can be evaluated almost in the same way, except for the zero-potential vertex contribution. The expression for the zero-potential Zeeman vertex contribution is different from the hfs case because the momentum representation of the interaction with a constant magnetic field involves a $\delta$-function. The diagonal matrix element of the Zeeman vertex operator was evaluated previously in Ref.~\cite{yerokhin:04}; here we present the generalization of the formulas required for the non-diagonal case. The Fourier transform of the interaction with the external magnetic field is given by \begin{eqnarray} \vec{B}\times\vec{r} \to -i(2\pi)^3\, \vec{B} \times \vec{\nabla}_{\vec{p}^{\prime}} \delta^3(\vec{p}-\vec{p}^{\prime})\,,
\end{eqnarray} The contribution of the zero-potential vertex to the shielding constant is \begin{eqnarray} \Delta \sigma_{\rm ver,zee}^{(0)} &=& -i\alpha \int \frac{d\vec{p}\, d\vec{p}^{\prime}}{(2\pi)^3}\, \overline{\psi}_{a_{\nicefrac12}}(\vec{p}) \nonumber \\ && \times \left[\vec{\nabla}_{\vec{p}^{\prime}} \delta^3(\vec{p}-\vec{p}^{\prime}) \times \vec{\Gamma}_R(p,p^{\prime}) \right]_0 \psi_{\delta a_{\nicefrac12}}(\vec{p}^{\prime})\,, \nonumber \\ \end{eqnarray} where the gradient $\vec{\nabla}_{\vec{p}^{\prime}}$ acts on the $\delta$-function only. This expression is transformed by integrating by parts and carrying out the integration with the $\delta$-function analytically. The result after the angular integration consists of two parts (cf. Eqs.~(27) and (36) of Ref.~\cite{yerokhin:04}), \begin{equation} \Delta \sigma_{\rm ver,zee}^{(0)} = \Delta \sigma_{\rm ver,zee,1}^{(0)} + \Delta \sigma_{\rm ver,zee,2}^{(0)} \,, \end{equation} where \begin{align} \Delta \sigma_{\rm ver,zee,1}^{(0)} &\ = \frac{\alpha^2}{\pi} \sum_{\kappa_{\delta a}} x_{\kappa_{\delta a}}\,i^{l_a-l_{\delta a}}\, \int_0^{\infty} \frac{p^2dp}{8\pi^3}\, A(\rho)\, \nonumber \\ & \times \biggl[ g_a \tilde{g}_{\delta a}\,A_1(\kappa_a,\kappa_{\delta a}) +f_a \tilde{f}_{\delta a}\,A_1(-\kappa_a,-\kappa_{\delta a}) \nonumber \\ & -p\, g_af_{\delta a}\,A_2(\kappa_a,-\kappa_{\delta a}) -p\, f_ag_{\delta a}\,A_2(-\kappa_a,\kappa_{\delta a})\biggr]\,, \end{align} and \begin{align} &\ \Delta \sigma_{\rm ver,zee,2}^{(0)} = -\frac{\alpha^2}{4\pi} \sum_{\kappa_{\delta a}} x_{\kappa_{\delta a}}\, i^{l_a-l_{\delta a}}\, \int_0^{\infty} \frac{p^2dp}{8\pi^3}\, \nonumber \\ & \times \biggl\{ b_1(\rho)\, \bigl[ g_af^{\prime}_{\delta a} A_2(\kappa_a,-\kappa_{\delta a}) + f_ag^{\prime}_{\delta a} A_2(-\kappa_a,\kappa_{\delta a})\bigr] \nonumber \\ & + \frac1p\, b_1(\rho)\, \bigl[ g_af_{\delta a} A_3(\kappa_a,-\kappa_{\delta a}) + f_ag_{\delta a} A_3(-\kappa_a,\kappa_{\delta a})\bigr] \nonumber \\ & + b_2(\rho)\, 
\bigl[ \tilde{g}_ag_{\delta a} A_4(\kappa_a,\kappa_{\delta a}) + \tilde{f}_af_{\delta a} A_4(-\kappa_a,-\kappa_{\delta a})\bigr] \nonumber \\ & + b_3(\rho)\, \bigl[ g_ag_{\delta a} A_4(\kappa_a,\kappa_{\delta a}) - f_af_{\delta a} A_4(-\kappa_a,-\kappa_{\delta a})\bigr]\biggr\}\,, \end{align} where $\tilde{g}_{\delta a} = \varepsilon_a g_{\delta a}+p f_{\delta a}$, $\tilde{f}_{\delta a} = \varepsilon_a f_{\delta a}+p g_{\delta a}$, $\tilde{g}_{a} = \varepsilon_a g_{a}+p f_{a}$, $\tilde{f}_{a} = \varepsilon_a f_{a}+p g_{a}$, $g^{\prime} = dg(p)/dp$ and $f^{\prime} = df(p)/dp$, the scalar functions $A(\rho)$, $b_i(\rho)$ are given by Eqs.~(24), (30)-(32) of Ref.~\cite{yerokhin:04}, and $A_i(\kappa_1,\kappa_2)$ are the basic angular integrals defined and evaluated in Appendix~\ref{app:angular}. Note that the expression for $\Delta \sigma_{\rm ver,zee,2}^{(0)}$ is non-symmetric with respect to $a$ and $\delta a$, but the result does not change when the two wave functions are interchanged. \subsubsection{Double-vertex contribution} The double-vertex correction is defined by Eq.~(\ref{dvr}). All parts of it are UV finite and thus can be evaluated in coordinate space. We denote the individual terms in the right-hand side of Eq.~(\ref{dvr}) by $\delta_iE$ and consider each of them separately, \begin{eqnarray} \Delta E_{\rm d.vr} = \sum_{i=1}^5 \delta_iE\,. \end{eqnarray} The first term is \begin{align} & \delta_1E \equiv 2\,\langle \Lambda \rangle = 2\, \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega \nonumber \\ & \times \sum_{n_1n_2n_3} \frac{\langle a n_3|I(\omega)|n_1 a\rangle \langle n_1|V_{\rm zee}|n_2\rangle \langle n_2|V_{\rm hfs}|n_3\rangle } {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})(\varepsilon-\omega-u\,\varepsilon_{n_3})} \,.
\end{align} After separating the nuclear variables and integrating over the angles, the contribution to the magnetic shielding is (for $j_a = 1/2$) \begin{align}\label{delta1} \delta_1 \sigma &\ = \frac{i\alpha^2}{2\pi} \int_{-\infty}^{\infty} d\omega \sum_{n_1n_2n_3} \sum_J X_J^{\rm d.ver}(\kappa_1,\kappa_2,\kappa_3)\, \nonumber \\ & \times \frac{ R_J(\omega,an_3n_1a)\,R^{(1)}_{n_1n_2} \, R^{(-2)}_{n_2n_3} } {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})(\varepsilon-\omega-u\,\varepsilon_{n_3})} \,, \end{align} where \begin{align} & X_J^{\rm d.ver}(\kappa_1,\kappa_2,\kappa_3) = \frac{(-1)^{J+j_a-j_2}}{2}\, \nonumber \\ & \times \sum_{j_n=1/2,3/2}(2j_n+1) \SixJ{1}{j_n}{j_a}{J}{j_1}{j_2} \SixJ{1}{j_n}{j_a}{J}{j_3}{j_2} \nonumber \\ & \times \frac{\kappa_1+\kappa_2}{\sqrt{3}}\, C_1(-\kappa_2,\kappa_1)\, \frac{\kappa_2+\kappa_3}{\sqrt{3}}\, C_1(-\kappa_3,\kappa_2)\,. \end{align} Note that the expression for $\delta_1 \sigma$ is IR divergent when any two or all three of the intermediate states have the same energy as the reference state: $\varepsilon_{n_1} = \varepsilon_{n_2} = \varepsilon_a$, $\varepsilon_{n_2} = \varepsilon_{n_3} = \varepsilon_a$, $\varepsilon_{n_1} = \varepsilon_{n_3} = \varepsilon_a$, $\varepsilon_{n_1} = \varepsilon_{n_2} = \varepsilon_{n_3} = \varepsilon_a$. The IR divergence cancels when all parts of Eq.~(\ref{dvr}) are added together. 
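The origin of these divergences can be made explicit (a side remark to guide the reader, not part of the numerical procedure): if, for instance, $\varepsilon_{n_1} = \varepsilon_{n_2} = \varepsilon_a$, two of the denominators in Eq.~(\ref{delta1}) reduce to $(-\omega+i0)$, so that near $\omega = 0$ the integrand behaves as
\begin{equation}
\frac{1}{(-\omega+i0)^2}\, \frac{R_J(\omega,an_3n_1a)\,R^{(1)}_{n_1n_2}\,R^{(-2)}_{n_2n_3}}{\varepsilon-\omega-u\,\varepsilon_{n_3}}\,,
\end{equation}
which is not integrable at $\omega \to 0$ term by term. The same singular structure $1/(-\omega+i0)^2$ appears in the reference-state term $\delta_5\sigma$, which is what makes the cancellation in the sum possible.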
The contribution of the second term to the shielding is \begin{align} \label{delta2} \delta_2\sigma &\ = \frac{\alpha}{2}\, \langle \widetilde{V}_{\rm zee}\rangle\, \langle \widetilde{V}_{\rm hfs}\rangle\, \langle \Sigma^{\prime\prime}\rangle = \frac{4}{9}\, R^{(1)}_{aa}\,R^{(-2)}_{aa}\, \nonumber \\ & \times \frac{i\alpha^2}{2\pi} \int_{-\infty}^{\infty} d\omega \sum_{n_1n_2n_3} \sum_J \frac{(-1)^{J+j_a-j_1}}{2j_a+1}\,\delta_{\kappa_1,\kappa_2}\,\delta_{\kappa_1,\kappa_3}\, \nonumber \\ & \times \frac{ R_J(\omega,an_3n_1a)\,N_{n_1n_2} \, N_{n_2n_3} } {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})(\varepsilon-\omega-u\,\varepsilon_{n_3})} \,, \end{align} where $N_{ab}$ is the normalization integral, \begin{align} N_{ab} = \int_0^{\infty}dx\, x^2\,(g_ag_b+f_af_b)\,. \end{align} The expression for $\delta_2\sigma$ is IR divergent when $\varepsilon_{n_1} = \varepsilon_{n_2} = \varepsilon_{n_3} = \varepsilon_a$. The third term is given by \begin{align} \label{delta3} \delta_3 \sigma &\ = \frac13\, R_{aa}^{(1)}\, \frac{i\alpha^2}{2\pi} \int_{-\infty}^{\infty} d\omega \sum_{n_1n_2n_3} \nonumber \\ & \times \biggl[ \frac{\sum_J X_{J}(\kappa_1,\kappa_2)\,\delta_{\kappa_2,\kappa_3}\, R_J(\omega,an_3n_1a)\,R^{(-2)}_{n_1n_2} \, N_{n_2n_3} } {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})(\varepsilon-\omega-u\,\varepsilon_{n_3})} \nonumber \\ & + \frac{\sum_J X_{J}(\kappa_2,\kappa_3)\,\delta_{\kappa_1,\kappa_2}\, R_J(\omega,an_3n_1a)\, N_{n_1n_2}\,R^{(-2)}_{n_2n_3} } {(\varepsilon-\omega-u\,\varepsilon_{n_1})(\varepsilon-\omega-u\,\varepsilon_{n_2})(\varepsilon-\omega-u\,\varepsilon_{n_3})} \biggr] \,, \end{align} where \begin{align} X_J(\kappa_1,\kappa_2) = \frac{1}{\sqrt{2}} \SixJ{j_1}{j_2}{1}{j_{a}}{j_a}{J} \frac{-\kappa_1-\kappa_2}{\sqrt{3}} C_1(-\kappa_2,\kappa_1)\,.
\end{align} The fourth term $\delta_4\sigma$ is obtained from $\delta_3\sigma$ by the obvious substitution $R^{(1)}\leftrightarrow R^{(-2)}$. The fifth term is given by \begin{align} \label{delta5} \delta_5\sigma = -\sigma^{(0)}\, \frac{i\alpha}{2\pi} \int_{-\infty}^{\infty} d\omega \frac{\sum_J \frac{(-1)^{J}}{2j_a+1}\, R_J(\omega,aaaa)}{(-\omega+i0)^2}\,. \end{align} Finally, the total double-vertex contribution is given by the sum of the five terms discussed above, \begin{align} \label{sum} \Delta \sigma_{\rm d.vr} = \sum_{i = 1}^5\delta_i\sigma\,. \end{align} Although the individual terms $\delta_i\sigma$ are IR divergent, their sum is finite and can be evaluated without any explicit regularization, provided that (i) the integration over the frequency $\omega$ of the virtual photon in the self-energy loop is performed after all five terms are added together and (ii) the contour of the $\omega$ integration is suitably chosen. One can show that if the $\omega$ integration is performed along the contour consisting of the low-energy and high-energy parts, as, e.g., in Refs.~\cite{yerokhin:05:se,yerokhin:05:hfs}, the integrand becomes regular at $\omega\to 0$ and, therefore, can be directly evaluated numerically. The numerical evaluation of the double-vertex correction $\Delta \sigma_{\rm d.vr}$ was the most time-consuming part of our calculation, due to the large number of partial waves involved and the four-dimensional radial integration in Eq.~(\ref{delta1}). The radial integration was carried out with the help of the numerical approach developed in our calculation of the two-loop self-energy and described in detail in Ref.~\cite{yerokhin:03:epjd}. Because of numerical cancellations between the five terms in Eq.~(\ref{sum}), especially in the region of small values of $\omega$, we took care to treat all five terms in exactly the same way.
In particular, the normalization integrals $N_{ab}$ in Eqs.~(\ref{delta2}) and (\ref{delta3}) were evaluated numerically, in order to be consistent with the evaluation of Eq.~(\ref{delta1}), whereas the corresponding contributions could have been evaluated more easily by taking the derivative of the Dirac-Coulomb Green function. \subsection{Vacuum polarization correction} \begin{figure*} \centerline{\includegraphics[width=0.8\textwidth]{vppnca.eps}} \caption{Vacuum-polarization corrections to the nuclear magnetic shielding calculated in the present work. The double line represents an electron propagating in the binding nuclear field, the single line represents the free-electron propagator, the dashed line terminated by a cross represents the interaction with the binding Coulomb field, the wavy line terminated by a triangle represents the dipole hyperfine interaction with the nucleus, and the wavy line terminated by a cross represents the interaction with the external magnetic field. \label{fig:2} } \end{figure*} Vacuum-polarization corrections to the magnetic shielding calculated in the present work are shown in Fig.~\ref{fig:2}. The diagrams in Fig.~\ref{fig:2}(a)-(c) come from the insertion of the Uehling potential into the electron line of the leading-order magnetic shielding, whereas the diagram in Fig.~\ref{fig:2}(d) represents the Uehling-potential insertion into the hfs interaction. We note that the diagram with the vacuum-polarization insertion into the Zeeman interaction vanishes in the Uehling-potential approximation. In our treatment, we neglect contributions with additional Coulomb interactions in the vacuum-polarization loop (which correspond to the Wichmann-Kroll part of the one-loop vacuum polarization) and the additional diagram of the Wichmann-Kroll type with both the hfs and Zeeman interactions attached to the vacuum-polarization loop.
We expect that the part accounted for in the present work yields the dominant contribution to the vacuum-polarization correction. The contribution of the diagrams in Fig.~\ref{fig:2}~(a)-(c) is an analogue of the perturbed-orbital self-energy contribution and is given by a similar expression, \begin{eqnarray} \Delta \sigma_{\rm VP,po} &=& \alpha\, \langle a_{\nicefrac12}|V_{\rm Ueh} |\delta^{(2)}a_{\nicefrac12}\rangle \nonumber \\ && + \alpha\, \langle \delta^{(1)}_{\rm hfs}a_{\nicefrac12}|V_{\rm Ueh} |\delta^{(1)}_{\rm zee}a_{\nicefrac12}\rangle\,, \end{eqnarray} where $V_{\rm Ueh}$ is the Uehling potential, \begin{align}\label{c2} V_{\rm Ueh}(r) &\ = -\frac{2\alpha^2 Z}{3 m r}\, \int_0^{\infty} dr^{\prime} r^{\prime} \rho(r^{\prime})\, \nonumber \\ & \times \left[ K_0(2m|r-r^{\prime}|)-K_0(2m|r+r^{\prime}|)\right]\,, \end{align} and \begin{equation}\label{c3} K_0(x) = \int_1^{\infty}dt\, e^{-xt} \left(\frac1{t^3}+\frac1{2t^5}\right)\, \sqrt{t^2-1}\,, \end{equation} and the nuclear-charge density $\rho$ is normalized by the condition $\int d\vec{r}\,\rho(r) = 1$. We note that this contribution can also be calculated by incorporating the Uehling potential into the Dirac equation and re-evaluating the leading-order magnetic shielding, in this way accounting for the Uehling potential to all orders. We performed calculations in both ways, which ensured that the perturbations of the reference-state wave function $|\delta^{(1)}a\rangle$ and $|\delta^{(2)}a\rangle$ are computed correctly.
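As a consistency check of Eq.~(\ref{c2}) (our remark; it is not needed for the numerical procedure), one may take the point-nucleus limit $\rho(r^{\prime}) \to \delta(r^{\prime})/(4\pi r^{\prime\,2})$. Expanding the difference of the $K_0$ functions to first order in $r^{\prime}$ then recovers the familiar point-nucleus Uehling potential,
\begin{equation}
V_{\rm Ueh}(r) = -\frac{2\alpha({Z\alpha})}{3\pi r}\, \int_1^{\infty}dt\, e^{-2mrt} \left(\frac1{t^2}+\frac1{2t^4}\right)\, \sqrt{t^2-1}\,.
\end{equation}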
The contribution of the diagram in Fig.~\ref{fig:2}(d) to the magnetic shielding can be expressed as \begin{align} \Delta \sigma_{\rm VP,mag} = &\, \alpha \sum_{n\ne a}\frac1{\varepsilon_a-\varepsilon_n}\, \nonumber \\ & \times \langle a_{\nicefrac12}| \widetilde{V}_{\rm zee} |n_{\nicefrac12}\rangle \langle n_{\nicefrac12}| \widetilde{V}_{\rm VP,mag} |a_{\nicefrac12}\rangle\,, \end{align} where \begin{align} \widetilde{V}_{\rm VP,mag}(r) = &\ \widetilde{V}_{\rm hfs}(r)\,\, \frac{2\alpha}{3\pi}\,\int_1^{\infty}dt\, \frac{\sqrt{t^2-1}}{t^2}\, \nonumber \\ & \times \left( 1+ \frac1{2t^2}\right) \, (1+2mrt) \, e^{-2mrt}\, \end{align} is the hfs interaction modified by the vacuum-polarization insertion. \section{QED correction for small nuclear charges} \label{sec:Zaexp} In the previous section, we calculated the QED corrections to the magnetic shielding without any expansion in the nuclear binding strength parameter ${Z\alpha}$. Now we turn to the evaluation of these corrections within an expansion in this parameter. We will derive the complete expression for the leading term of the ${Z\alpha}$ expansion, which enters in the relative order $\delta \sigma/\sigma \sim \alpha({Z\alpha})^2$. The results obtained in this section will be applicable to light hydrogen-like ions, where no all-order calculations were possible because of large numerical cancellations. They will also provide an important cross-check with the all-order calculations described in the previous section. For the derivation, it is convenient to use the formalism of nonrelativistic quantum electrodynamics (NRQED). In this approach, all QED effects are calculated as an expansion in powers of $\alpha$ and ${Z\alpha}$ and are represented as expectation values of various effective operators on the nonrelativistic reference-state wave function.
Let us start with the effective Hamiltonian $H_{\rm NRQED}$, which includes leading one-loop radiative corrections: \begin{eqnarray} H_{\rm NRQED} &=& \frac{\vec \pi^2}{2\,m}+e\,A^0 -\frac{e}{6}\,\biggl(\frac{3}{4\,m^2}+r_E^2\biggr)\,\vec\nabla\cdot\vec E \nonumber \\ && +\frac{e}{12\,m}\biggl(r_E^2-\frac{3\,\kappa}{4\,m^2}\biggr) \bigl\{\vec \pi\,,\,\vec\nabla\times\vec B\bigr\} \nonumber \\ && -\frac{e^2}{2}\,\biggl(\frac{1}{4\,m^3}+\alpha_M\biggr)\,\vec B^2\,, \label{t02} \end{eqnarray} where $\{\,.\,,\,.\,\}$ denotes the anticommutator and $\vec \pi = \vec p - e\,\vec A$. The QED effects in the above Hamiltonian are parameterized in terms of the constants $\kappa$, $r_E$, and $\alpha_M$, which are interpreted as the electron magnetic moment anomaly, the charge radius, and the magnetic polarizability, respectively, \begin{eqnarray} \kappa &=& \frac{\alpha}{2\,\pi}\,,\\ r_E^2 &=& \frac{3\,\kappa}{2\,m^2} + \frac{2\,\alpha}{\pi\,m^2}\, \biggl(\ln\frac{m}{2\,\epsilon} +\frac{11}{24}\biggr)\,,\label{t03}\\ \alpha_M &=& \frac{4\,\alpha}{3\,\pi\,m^3}\, \biggl(\ln\frac{m}{2\,\epsilon} +\frac{13}{24}\biggr)\,, \end{eqnarray} where $\epsilon$ is the photon momentum cutoff. In the non-QED limit, all QED constants vanish, $r^2_E=\alpha_M=\kappa=0$, and the effective Hamiltonian $H_{\rm NRQED}$ turns into the Schr{\"o}dinger-Pauli Hamiltonian. The QED constants $r_E^2$ and $\alpha_M$ depend explicitly on the photon momentum cutoff parameter $\epsilon$. This dependence cancels out with contributions coming from the emission and absorption of virtual photons with energies smaller than $\epsilon$. The complete expression for any physical quantity, e.g., the Lamb shift or the shielding constant, does not depend on the artificial photon cutoff parameter. The above formula for $r^2_E$ is obtained from the well-known expressions for the one-loop radiative corrections to the electromagnetic form factors $F_1$ and $F_2$ \cite{itzykson:80}.
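For orientation (a numerical aside), the anomaly entering Eq.~(\ref{t02}) is the Schwinger value,
\begin{equation}
\kappa = \frac{\alpha}{2\pi} \approx 1.161\,410\times 10^{-3}\,,
\end{equation}
whereas $r_E^2$ and $\alpha_M$ are not physical constants by themselves: they carry the $\ln(m/2\epsilon)$ dependence that is compensated by the low-energy contributions.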
The formula for $\alpha_M$ has not appeared in the literature previously. We have derived it by a method similar to that used in Ref.~\cite{jentschura:05:sese} for the electric polarizability, denoted by $\chi$ in that work. We start by rederiving the known nonrelativistic expression for the shielding constant within our approach. It comes from the $\vec A^2$ term in the electron kinetic energy in Eq. (\ref{t02}). The electromagnetic potential $\vec A$ is the sum of the external magnetic potential $\vec A_E$, \begin{equation} \vec A_E = \frac{1}{2}\,\vec B\times\vec r\,, \end{equation} and the potential induced by the nuclear magnetic moment, \begin{equation} \vec A_I = \frac{1}{4\,\pi}\,\vec\mu\times\frac{\vec r}{r^3}\,. \end{equation} The corresponding energy shift is \begin{align} \delta E &\ = \frac{e^2}{2\,m}\langle {\vec A\,}^2\rangle =\frac{e^2}{m}\langle \vec A_E \cdot \vec A_I\rangle \nonumber \\ & =\frac{\alpha}{2\,m}\,\biggl\langle \left(\vec B\times\vec r\right) \cdot \left(\vec\mu\times\frac{\vec r}{r^3}\right)\biggr\rangle, \end{align} where the matrix elements are calculated with the nonrelativistic wave function. The shielding constant $\sigma$ is obtained from the energy shift $\delta E$ by $\delta E = \sigma\,\vec\mu\cdot\vec B$. For the ground $(L=0)$ hydrogenic state, the nonrelativistic result is \begin{equation} \sigma = \frac{\alpha}{3\,m}\,\biggl\langle \frac{1}{r}\biggr\rangle. \end{equation} Before considering the QED corrections to the shielding constant, it is convenient to first recalculate the leading QED correction to the energy levels. The total contribution is split into two parts induced by the virtual photons of low ($L$) and high ($H$) energy, \begin{equation} \delta E_{\rm Lamb} = \delta E_L + \delta E_H\,. \end{equation} The high-energy part is the expectation value of the $\vec\nabla\cdot\vec E$ term in the effective Hamiltonian in Eq.
(\ref{t02}) \begin{eqnarray} \delta E_H &=& \biggl\langle -\frac{e}{6}\,r_E^2\,\vec\nabla\cdot\vec E\biggr\rangle =\frac{2}{3}\,\frac{(Z\,\alpha)^4}{n^3}\,r_E^2\,\delta_{l0} \nonumber \\ && =\frac{\alpha}{\pi}\,\frac{(Z\,\alpha)^4}{n^3}\, \biggl(\frac{4}{3}\,\ln\frac{m}{2\,\epsilon}+\frac{10}{9}\biggr)\,\delta_{l0}\,, \end{eqnarray} where $\vec E=-\vec \nabla A^0$ and $A^0 = -Z\,e/(4\,\pi\,r)$. The vacuum polarization can be incorporated in the above expression by adding $-2\,\alpha/(5\,\pi\,m^2)$ to $r^2_E$ in Eq. (\ref{t03}). The low-energy part $\delta E_L$ is induced by the emission and the absorption of the virtual photons of low $(k<\epsilon)$ energy, \begin{align} &\delta E_L = e^2\int_0^\epsilon \frac{d^3k}{(2\,\pi)^3\,2\,k}\, \biggl(\delta^{ij}-\frac{k^i\,k^j}{k^2}\biggr)\, \biggl\langle\frac{p^i}{m}\, \frac{1}{E-H-k} \,\frac{p^j}{m}\biggr\rangle \nonumber \\ & = \frac{2\,\alpha}{3\,\pi}\, \biggl\langle\frac{\vec p}{m}\,(H-E)\,\biggl\{ \ln\biggl[\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\biggr] -\ln\biggl[\frac{2\,(H-E)}{m\,(Z\,\alpha)^2}\biggr]\biggr\} \frac{\vec p}{m}\biggr\rangle\,. \end{align} The terms containing $\ln\epsilon$ cancel out in the sum of the low- and high-energy parts, as expected. The total leading-order Lamb shift contribution is \begin{eqnarray} \delta E_{\rm Lamb} &=& \frac{\alpha}{\pi}\,\frac{(Z\,\alpha)^4}{n^3}\, \biggl\{\frac{4}{3}\ln\bigl[(Z\,\alpha)^{-2}\bigr]\,\delta_{l0} \nonumber \\&& +\biggl(\frac{10}{9}-\frac{4}{15}\biggr)\,\delta_{l0} -\frac{4}{3}\,\ln k_0(n,l)\biggr\}\,, \end{eqnarray} where $\ln k_0(n,l)$ is the Bethe logarithm, \begin{equation} \ln k_0(n,l) = \frac{n^3}{2\,m^3\,(Z\,\alpha)^4}\, \Bigl\langle\vec p\,(H-E) \ln\biggl[\frac{2\,(H-E)}{m\,(Z\,\alpha)^2}\biggr] \vec p\,\Bigr\rangle\,. \end{equation} We now turn to the calculation of the QED correction to the magnetic shielding, which is performed similarly to that for the Lamb shift. 
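As a numerical check of the Lamb-shift formula above (our estimate; we assume $Z=1$, $n=1$, $l=0$ and use the value $\ln k_0(1s) = 2.984\,128\,556$ quoted below), the curly braces evaluate to
\begin{equation}
\frac{4}{3}\ln\alpha^{-2}+\frac{10}{9}-\frac{4}{15}-\frac{4}{3}\ln k_0(1s) \approx 13.121 + 0.844 - 3.979 = 9.986\,,
\end{equation}
so that $\delta E_{\rm Lamb}(1s) \approx 9.986\,(\alpha/\pi)\,\alpha^4\, mc^2 \approx 8.1$~GHz, close to the accepted $1s$ Lamb shift of about $8.17$~GHz; the residual difference comes from the higher-order terms omitted here.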
The total contribution is split into the low and the high-energy parts: \begin{equation} \delta E = \delta E_L +\delta E_H\,, \end{equation} where \begin{align} \delta E_L &\ = e^2\,\int_0^\epsilon \frac{d^3 k}{(2\,\pi)^3\,2\,k}\, \biggl(\delta^{ij}-\frac{k^i\,k^j}{k^2}\biggr)\, \biggl\langle\frac{\pi^i}{m}\, \frac{1}{E-H-k} \,\frac{\pi^j}{m}\biggr\rangle\,, \end{align} and \begin{align} \delta E_H &\ = -2\, \biggl\langle \frac{e}{6}\,r^2_E\,\vec\nabla\cdot\vec E\, \frac{1}{(E-H)'}\,\frac{{\vec A\,}^2}{2\,m}\biggr\rangle \nonumber \\ & +\biggl\langle\frac{e}{12\,m}\,\biggl( r^2_E-\frac{3\,\kappa}{4\,m^2}\biggr)\, \bigl\{\vec\pi\,,\,\vec\nabla\times\vec B\bigr\}\biggr\rangle -\biggl\langle \frac{e^2}{2}\,\alpha_M\,{\vec B\,}^2\biggr\rangle \,. \end{align} The high-energy part can be conveniently rewritten in the form \begin{eqnarray} \delta E_H &=& \frac{1}{9}\,Z\,\alpha^2\,r^2_E\, \biggl\langle\frac{1}{r}\,\frac{1}{(E-H)'}\,4\,\pi\,\delta^3(r)\biggr\rangle \,\vec\mu\cdot\vec B \nonumber \\ &&-\frac{2\,\pi\,\alpha}{3\,m}\,\biggl(r^2_E-\frac{3\,\kappa}{4\,m^2}\biggr) \,\bigl\langle\vec A_E\cdot\vec\nabla\times\vec B_I\bigr\rangle \nonumber \\&& -\,4\,\pi\,\alpha\,\alpha_M\,\bigl\langle\vec B\cdot\vec B_I\bigr\rangle \,. \end{eqnarray} In the low-energy part, we separate out $\ln\epsilon$ and then perform an expansion in the magnetic fields, \begin{eqnarray} \delta E_L &=& \delta E_{LA} + \delta E_{LB},\\ \delta E_{LA} &=& -\frac{2\,\alpha}{3\,\pi}\, \biggl\langle\frac{\vec\pi}{m}\,(H-E)\,\ln\frac{2\,(H-E)}{m\,(Z\,\alpha)^2}\, \frac{\vec\pi}{m}\biggr\rangle,\\ \delta E_{LB} &=&\frac{2\,\alpha}{3\,\pi}\, \biggl\langle\frac{\vec\pi}{m}\,(H-E)\, \frac{\vec\pi}{m}\biggr\rangle\, \ln\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\,. 
\end{eqnarray} Using the identity \begin{align} &2\,\biggl\langle\frac{\vec\pi}{m}\,(H-E)\,\frac{\vec\pi}{m}\biggr\rangle = \biggl\langle\biggl[\frac{\vec\pi}{m}\,, \biggl[H-E\,,\,\frac{\vec\pi}{m}\biggr]\biggr]\biggr\rangle \nonumber \\ &=\biggl\langle4\,\pi\,Z\,\alpha\,\delta^3(r) + \frac{e}{2\,m^3}\Bigl\{\vec \pi\,,\,\vec\nabla\times\vec B\Bigr\} + \frac{2\,e^2}{m^3}\,B^2 \biggr\rangle\,, \end{align} we transform $\delta E_{LB}$ to the form \begin{eqnarray} \delta E_{LB} &=& \frac{\alpha}{3\,\pi}\, \ln\biggl[\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\biggr] \nonumber \\&& \biggl[\,2\,\biggl\langle \frac{\alpha}{3\,r}\,\frac{1}{(E-H)'} \,4\,\pi\,Z\,\alpha\,\delta^3(r)\biggr\rangle\,\vec\mu\cdot\vec B\\\nonumber &&-4\,\pi\,\alpha\,\bigl\langle\vec A_E\cdot\vec\nabla\times\vec B_I\bigr\rangle +16\,\pi\,\alpha\,\bigl\langle\vec B\cdot\vec B_I\bigr\rangle\biggr]\,. \end{eqnarray} The artificial parameter $\ln\epsilon$ cancels out in the sum $\delta E_{LB}+ \delta E_{H}$ separately for each type of matrix element, \begin{align} &\delta E_{LB}+ \delta E_{H} = \frac{2}{9}\,\frac{\alpha^2}{\pi}\,Z\,\alpha\, \nonumber \\ &\times \biggl[\ln(Z\,\alpha)^{-2}+\frac{5}{6}-\frac{1}{5}\biggr]\, \vec\mu\cdot\vec B\, \biggl\langle 4\,\pi\,\delta^3(r)\,\frac{1}{(E-H)'}\,\frac{1}{r}\biggr\rangle \nonumber \\ & -\frac{4}{3}\,\alpha^2\,\biggl[ \ln(Z\,\alpha)^{-2}+\frac{31}{48}-\frac{1}{5}\biggr]\, \bigl\langle\vec A_E\cdot\vec\nabla\times\vec B_I\bigr\rangle \nonumber \\ & +\frac{16}{3}\,\alpha^2\, \biggl[\ln(Z\,\alpha)^{-2}-\frac{13}{24}\biggr]\,\bigl\langle\vec B\cdot\vec B_I\bigr\rangle.
\end{align} Using the following results for the matrix elements with $nS$ states \begin{eqnarray} \biggl\langle 4\,\pi\,\delta^3(r)\,\frac{1}{(E-H)'}\,\frac{1}{r}\biggr\rangle &=& -\frac{6\,(Z\,\alpha)^2}{n^3},\\ \bigl\langle\vec A_E\cdot\vec\nabla\times\vec B_I\bigr\rangle &=& \frac{(Z\,\alpha)^3}{\pi\,n^3}\,\vec\mu\cdot\vec B,\\ \bigl\langle\vec B\cdot\vec B_I\bigr\rangle &=& \frac{2}{3\,\pi}\, \frac{(Z\,\alpha)^3}{n^3}\,\vec\mu\cdot\vec B\,, \end{eqnarray} we obtain \begin{equation} \delta E_{LB}+\delta E_H = \frac{8\,\alpha^2}{9\,\pi}\, \frac{(Z\,\alpha)^3}{n^3}\, \biggl[ \ln(Z\,\alpha)^{-2}-\frac{421}{96}+\frac{3}{5}\biggr]\,\vec\mu\cdot\vec B\,. \end{equation} The calculation of the remaining low-energy contribution $\delta E_{LA}$ is slightly more complicated. We first return to the integral form of $\delta E_L$, derive an expression for $\delta E_{LA}$, and then drop all terms with $\ln\epsilon$, as they are already accounted for by $\delta E_{LB}$. The integral form of $\delta E_L$ is \begin{equation} \delta E_L = \frac{2\,\alpha}{3\,\pi}\,\int_0^\epsilon k\,dk\, \biggl\langle\frac{\vec\pi}{m}\,\frac{1}{E-H-k}\,\frac{\vec \pi}{m}\biggr\rangle. \end{equation} It can be rewritten, using the identity $\vec\pi = -i\,m\,[\vec r\,,\,H-E]$, in the form \begin{equation} \delta E_L = \frac{2\,\alpha}{3\,\pi}\,\int_0^\epsilon k^3\,dk\, \biggl\langle\vec r\,\frac{1}{E-H-k}\,\vec r\biggr\rangle. \label{t34} \end{equation} All terms with positive powers of $\epsilon$ are discarded, since one assumes that the limit $\epsilon\rightarrow 0$ is taken after the expansion in $\alpha$ is done. We shall now expand the integrand in Eq. (\ref{t34}) in the magnetic fields. The Hamiltonian $H$ is \begin{equation} H = H_0 +\frac{\alpha}{3\,r}\,\vec\mu\cdot\vec B -\frac{e}{2\,m}\,\vec L\cdot\vec B -\frac{e}{4\,\pi\,m\,r^3}\,\vec L\cdot\vec\mu \label{t35}\,, \end{equation} where $H_0 = p^2/(2\,m)-Z\,\alpha/r$. The first term with $\vec B$ in Eq.
(\ref{t35}) can be absorbed into $Z' = Z-\vec\mu\cdot\vec B/3$. So the correction due to this term is \begin{eqnarray} \delta E_{L1} &=& \frac{4\,\alpha}{3\,\pi}\,\frac{(Z'\,\alpha)^4}{n^3}\, \biggl\{\ln\biggl[\frac{2\,\epsilon}{m\,(Z'\,\alpha)^2}\biggr]-\ln k_0\biggr\} \\ &=& \frac{16\,\alpha^2}{9\,\pi}\,\frac{(Z\,\alpha)^3}{n^3}\,\vec\mu\cdot\vec B\, \biggl\{\ln k_0 +\frac{1}{2}-\ln\biggl[\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\biggr]\biggr\}\,. \nonumber \end{eqnarray} The correction due to the last two terms in Eq. (\ref{t35}) is \begin{widetext} \begin{align} \delta E_{L2} &\ = \frac{2\,\alpha^2}{3\,\pi\,m^2}\, \int_0^\epsilon dk\,k^3\,\biggl\langle\vec r \frac{1}{E-H-k}\,\vec L\cdot\vec B \frac{1}{E-H-k}\, \frac{\vec\mu\cdot\vec L}{r^3}\,\frac{1}{E-H-k}\,\vec r\biggr\rangle \nonumber\\ &\ =\frac{2\,\alpha^2}{9\,\pi}\,\vec\mu\cdot\vec B\, \int_0^\epsilon dk\,k^3 \frac{d}{dk}\,\biggl\langle\vec r\, \frac{1}{E-H-k}\,\frac{1}{r^3}\,\frac{1}{E-H-k}\,\vec r\biggr\rangle\,. \end{align} \end{widetext} Integrating by parts and using the results and the notation for $\ln k_3$ from Ref. \cite{pachucki:05:gfact} \begin{align} &\int_0^\epsilon dk\,k^2 \biggl\langle\vec r\, \frac{1}{E-H-k}\,\frac{1}{r^3}\,\frac{1}{E-H-k}\,\vec r\biggr\rangle \nonumber \\ &= \epsilon\,\biggl\langle\frac{1}{r}\biggr\rangle-\frac{4\,(Z\,\alpha)^3}{n^3}\, \biggl\{\ln\biggl[\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\biggr]-\ln k_3\biggr\}\,, \end{align} we obtain \begin{equation} \delta E_{L2} = \frac{8\,\alpha^2}{3\,\pi\,m^2}\,\frac{(Z\,\alpha)^3}{n^3}\, \vec\mu\cdot\vec B\,\biggl\{ \ln\biggl[\frac{2\,\epsilon}{m\,(Z\,\alpha)^2}\biggr]-\ln k_3-\frac{1}{3}\biggr\}\,. \end{equation} The sum of these two contributions $\delta E_{LA} = \delta E_{L1}+\delta E_{L2}$, after dropping the $\ln\epsilon$ terms, is \begin{equation} \delta E_{LA} =\frac{8\,\alpha^2}{9\,\pi}\,\frac{(Z\,\alpha)^3}{n^3}\, \vec\mu\cdot\vec B\,\bigl(2\,\ln k_0-3\,\ln k_3\bigr).
\end{equation} Finally, the total correction to the energy is \begin{equation} \delta E = \delta E_{LA} + \delta E_{LB} +\delta E_{H}\,. \end{equation} Expressing the energy shift in terms of the shielding constant ($\delta E = \delta\sigma\,\vec\mu\cdot\vec B$), we obtain the complete result for the leading QED correction to the nuclear magnetic shielding in hydrogen-like ions, valid for the $nS$ states, \begin{align} \delta\sigma &\ = \frac{8\,\alpha^2}{9\,\pi}\,\frac{(Z\,\alpha)^3}{n^3} \label{t42} \\ & \times \biggl\{ \ln\bigl[(Z\,\alpha)^{-2}\bigr]+2\,\ln k_0(n)-3\,\ln k_3(n)-\frac{421}{96}+\frac{3}{5}\biggr\}\,. \label{eq:Za} \end{align} The term $3/5$ in the brackets is the contribution of the vacuum polarization. The numerical results for the Bethe logarithm $\ln k_0$ and the $1/r^3$ Bethe-logarithm-type correction $\ln k_3$ \cite{pachucki:05:gfact} for the $1s$ state are \begin{eqnarray} \ln k_0(1s) &=& 2.984\,128\,556,\\ \ln k_3(1s) &=& 3.272\,806\,545\,. \end{eqnarray} We note that the numerical value of the constant term in Eq.~(\ref{eq:Za}), $-7.635\,58$, is comparable in magnitude but of the opposite sign to the logarithmic term at $Z=1$, $\ln\alpha^{-2} = 9.840\,49$. This entails a significant numerical cancellation between these two terms for hydrogen and light hydrogen-like ions. As a result, the total QED correction turns out to be much smaller in magnitude than could be anticipated from the leading logarithm alone. \section{Other corrections} \label{sec:3} \subsection{Bohr-Weisskopf correction} We now turn to the effect induced by the spatial distribution of the nuclear magnetic moment, also known as the Bohr-Weisskopf (BW) correction. Our treatment of the BW effect is based on the effective single-particle (SP) model of the nuclear magnetic moment.
Within this model, the magnetic moment is assumed to be induced by the odd nucleon (a proton when $Z$ and $A$ are odd, and a neutron when $Z$ is even and $A$ is odd) with an effective $g$-factor, which is adjusted to yield the experimental value of the nuclear magnetic moment. The treatment of the magnetization distribution effect on hfs within the SP model was originally developed in Refs.~\cite{bohr:50,bohr:51} and later in Ref.~\cite{shabaev:94:hfs}. Our present treatment closely follows the procedure described in Refs.~\cite{shabaev:94:hfs,shabaev:97:pra,zherebtsov:00:cjp}. The wave function of the odd nucleon is assumed to satisfy the Schr\"odinger equation with a central potential of the Woods-Saxon form and the spin-orbit term included (see, e.g., Ref.~\cite{elton:67}) \begin{equation}\label{5eq10} V(\vec{r}) = -V_0\,{\cal F}(r)+ \frac1{m_p}\phi_{so}(r)\,\vec{l}\cdot\vec{\sigma}+ V_C(r)\,, \end{equation} where \begin{equation}\label{5eq10a} \phi_{so}(r) = \frac{V_{so}}{4m_pr}\, \frac{d{\cal F}(r)}{dr}\,, \end{equation} \begin{equation}\label{5eq10b} {\cal F}(r) = \left[1+\exp\left(\frac{r-R}{a} \right)\right]^{-1}\,, \end{equation} and $V_C$ is the Coulomb part of the interaction (absent for the neutron), with the charge $(Z-1)$ uniformly distributed over the nuclear sphere. The parameters $V_0$, $V_{so}$, $R$, and $a$ were taken from Refs.~\cite{elton:67,rost:68}. The nuclear magnetic moment can be evaluated within the SP model to yield \cite{shabaev:97:pra} \begin{align}\label{5eq11} \frac{\mu}{\mu_N} = \left\{ \begin{aligned} \displaystyle \frac12\, g_S+ \left[I-\frac12+\frac{2I+1}{4(I+1)}\langle\phi_{so}r^2\rangle \right] g_L\,, \ \ \ \ \ \ \ \ \ \ \ & \\ \ \mbox{for} \ \ I = L+\frac12\,, & \\ \displaystyle -\frac{I}{2(I+1)}\, g_S+ \left[\frac{I(2I+3)}{2(I+1)} -\frac{2I+1}{4(I+1)}\langle\phi_{so}r^2\rangle \right]& \ g_L\,,\ \\ \ \mbox{for} \ \ I = L-\frac12\,, & \\ \end{aligned} \right.
\end{align} where $I$ and $L$ are the total and the orbital angular momentum of the nucleus, respectively, $g_L$ is the $g$ factor associated with the orbital motion of the nucleon ($g_L = 1$ for the proton and $g_L = 0$ for the neutron), and $g_S$ is the effective nucleon $g$ factor, determined by the condition that Eq.~(\ref{5eq11}) yields the experimental value of the magnetic moment. It was demonstrated in Ref.~\cite{shabaev:97:pra} that, within the SP model, the BW effect can be accounted for by multiplying the standard point-dipole hfs interaction by a magnetization-distribution function $F(r)$. The distribution function is given by \cite{zherebtsov:00:cjp} \begin{widetext} \begin{eqnarray} \label{5eq12} F(r) &=& \frac{\mu_N}{\mu} \int_0^{r}dr^{\prime}\, {r^{\prime}}^2 |u(r^{\prime})|^2\, \left[ \frac12\, g_S+ \left(I-\frac12+\frac{2I+1}{4(I+1)}\,r^2\phi_{so}(r) \right) g_L \right] \nonumber \\ && + \frac{\mu_N}{\mu} \int_r^{\infty}dr^{\prime}\, {r^{\prime}}^2 |u(r^{\prime})|^2\,\frac{{r}^3}{{r^{\prime}}^3}\, \left[ -\frac{2I-1}{8(I+1)}\, g_S+ \left(I-\frac12+\frac{2I+1}{4(I+1)}\,r^2\phi_{so}(r) \right) g_L \right]\,, \end{eqnarray} for $I = L+1/2$ and \begin{eqnarray} \label{5eq13} F(r) &=& \frac{\mu_N}{\mu} \int_0^{r}dr^{\prime}\, {r^{\prime}}^2 |u(r^{\prime})|^2\, \left[ -\frac{I}{2(I+1)}\, g_S+ \left(\frac{I(2I+3)}{2(I+1)}-\frac{2I+1}{4(I+1)}\,r^2\phi_{so}(r) \right) g_L \right] \nonumber \\ && + \frac{\mu_N}{\mu} \int_r^{\infty}dr^{\prime}\, {r^{\prime}}^2 |u(r^{\prime})|^2\,\frac{{r}^3}{{r^{\prime}}^3}\, \left[ \frac{2I+3}{8(I+1)}\, g_S+ \left(\frac{I(2I+3)}{2(I+1)}-\frac{2I+1}{4(I+1)}\,r^2\phi_{so}(r) \right) g_L \right]\,, \end{eqnarray} \end{widetext} for $I = L-1/2$. In the above formulas, $u(r)$ is the wave function of the odd nucleon. It can easily be seen that $F(r) = 1$ outside the nucleus.
\subsection{Recoil and quadrupole corrections} The recoil correction to the magnetic shielding was obtained in Ref.~\cite{rudzinski:09} in the nonrelativistic approximation, \begin{eqnarray} \label{eqrec} \delta \sigma_{\rm rec} = -\frac{\alpha {Z\alpha}}{3}\,\frac{m}{M}\, \left( 1+ \frac{g_N-1}{g_N}\right)\,, \end{eqnarray} where $M$ is the nuclear mass and \begin{eqnarray} g_N = \frac{M}{Zm_p}\,\frac{\mu}{\mu_N \,I}\,. \end{eqnarray} The electric-quadrupole correction to the magnetic shielding is \begin{eqnarray} \label{eqQ} \delta \sigma_Q = -\frac{3Q}{2I(2I-1)}\,\frac{\delta g_Q}{(m/m_p)g_I}\,, \end{eqnarray} where $Q$ is the quadrupole moment of the nucleus and $\delta g_Q$ is the quadrupole correction to the $g$ factor calculated in Ref.~\cite{moskovkin:04}, \begin{align} \delta g_Q &\ = \alpha\,({Z\alpha})^3 \,\frac{12 \left[35+20\gamma-32({Z\alpha})^2\right]} {135\,\gamma (1+\gamma)^2\left[15-16({Z\alpha})^2\right]} \nonumber \\ & = \alpha\,({Z\alpha})^3 \,\left[\frac{11}{135}+ \frac{43}{405}({Z\alpha})^2+\ldots\right]\,. \end{align} \section{Results and discussion} \label{sec:4} Numerical results for the self-energy correction to the nuclear magnetic shielding can be conveniently parameterized in terms of the dimensionless function $D_{\rm SE}({Z\alpha})$ defined as \begin{align} \label{eqDZa} \Delta \sigma_{\rm SE} = \alpha^2\,({Z\alpha})^3\,D_{\rm SE}({Z\alpha})\,. \end{align} Our numerical all-order (in ${Z\alpha}$) results for the self-energy correction to the magnetic shielding are summarized in Table~\ref{tab:se}. We note significant numerical cancellation between the individual contributions, which is particularly strong for low values of $Z$. The fact that the resulting sum is a smooth function of $Z$ and demonstrates the expected $Z$ scaling serves as a consistency check of our calculations. 
Besides the numerical cancellation, an additional complication in the low-$Z$ region is that the convergence of the partial-wave expansions becomes slower as $Z$ decreases. Because of these two complications, the accuracy of our results worsens for smaller values of $Z$, and no all-order results were obtained for $Z<10$. The all-order numerical results can be compared with the ${Z\alpha}$-expansion results obtained in Sec.~\ref{sec:Zaexp}. As follows from Eq.~(\ref{eq:Za}), the ${Z\alpha}$ expansion of the function $D_{\rm SE}$ for the $1s$ state reads \begin{align} \label{seZa} D_{\rm SE}({Z\alpha}) = &\ \frac{8}{9\pi}\, \biggl[ \ln({Z\alpha})^{-2} -8.235\,579 +O({Z\alpha}) \biggr]\,, \end{align} where $O({Z\alpha})$ denotes the higher-order terms. In Fig.~\ref{fig:se}, the numerical all-order results for the function $D_{\rm SE}$ are plotted together with the contribution of the leading logarithmic term in Eq.~(\ref{seZa}) (dashed line, red) and the contribution of both terms in Eq.~(\ref{seZa}) (dash-dotted line, blue). We observe that the leading logarithm alone gives a large contribution that disagrees strongly with the all-order results. However, when the constant term is added, the total ${Z\alpha}$-expansion contribution shrinks significantly and even changes its sign for $Z>3$. Only after the constant term is accounted for do we observe reasonable agreement between the all-order and ${Z\alpha}$-expansion results. \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{dse3.eps}} \caption{(Color online) Self-energy correction to the nuclear magnetic shielding. \label{fig:se} } \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{dvp.eps}} \caption{(Color online) Vacuum-polarization correction to the nuclear magnetic shielding. \label{fig:vp} } \end{figure} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{shielding3.eps}} \caption{(Color online) Individual contributions to the nuclear shielding.
``NR'' is the nonrelativistic contribution, ``REL'' is the relativistic point-nucleus contribution, ``FNS'' is the finite nuclear size correction, ``QED'' is the QED correction, ``BW'' is the Bohr-Weisskopf correction, ``REC'' is the recoil correction, and ``QUAD'' is the electric quadrupole correction. Note that the QED correction changes its sign between $Z=4$ and 5. \label{fig:total} } \end{figure} The vacuum-polarization correction to the nuclear magnetic shielding is parameterized as \begin{align} \label{eq:vp} \Delta \sigma_{\rm VP} = \alpha^2\,({Z\alpha})^3\,D_{\rm VP}({Z\alpha})\,. \end{align} Our numerical results for the vacuum-polarization correction are presented in Table~\ref{tab:vp}. The calculation was performed for the extended nucleus and the Uehling potential was included to all orders. Note that our present treatment is not complete, as the Wichmann-Kroll part of the correction is still missing. We estimate the uncertainty due to omitted terms to be within 30\% of the calculated contribution. The ${Z\alpha}$ expansion of the function $D_{\rm VP}$ reads \begin{align} D_{\rm VP}({Z\alpha}) = \frac{8}{9\pi n^3}\, \frac35\, \delta_{l,0} + O({Z\alpha})\,, \end{align} where the Kronecker symbol $\delta_{l,0}$ indicates that the correction vanishes for the reference states with $l>0$. Comparison of the all-order numerical results with the leading term of the ${Z\alpha}$ expansion is given in Fig.~\ref{fig:vp}. It is remarkable that the all-order results grow fast when $Z$ is increased and eventually become more than an order of magnitude larger than the leading-order result. In the high-$Z$ region, the self-energy and vacuum-polarization corrections are (as usual) of the opposite sign, the self energy being about twice as large as the vacuum polarization. In the low-$Z$ region, however, the self-energy correction changes its sign and the total QED contribution becomes positive for very light ions.
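The low-$Z$ behaviour of the QED contribution can be traced directly in the two-term ${Z\alpha}$-expansion form of $D_{\rm SE}$ in Eq.~(\ref{seZa}); the sketch below (Python; the value of $\alpha$ is an assumed CODATA-style input, not taken from the text) evaluates $\tfrac{8}{9\pi}[\ln({Z\alpha})^{-2}-8.235\,579]$ for the lightest ions.

```python
import math

ALPHA = 1.0 / 137.035999                 # assumed fine-structure constant

def d_se_two_term(Z):
    """Leading log plus constant term of D_SE from Eq. (seZa) above."""
    za = Z * ALPHA
    return 8.0 / (9.0 * math.pi) * (math.log(za**-2) - 8.235579)

# two-term expansion evaluated for the lightest hydrogen-like ions
values = {Z: d_se_two_term(Z) for Z in range(1, 6)}
```

The expression is positive for the very lightest ions and becomes negative as $Z$ grows, in agreement with the sign discussion above.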
The summary of our calculations of the nuclear magnetic shielding constant $\sigma$ for several hydrogen-like ions is given in Table~\ref{tab:total}. The first line of the table presents results for the leading-order nuclear shielding, including the finite nuclear size effect. The leading-order contribution was calculated using the Fermi model for the nuclear charge distribution and the point-dipole approximation for the interaction with the nuclear magnetic moment. The nuclear charge radii were taken from Ref.~\cite{angeli:04}. The results are in good agreement with those reported in Ref.~\cite{moskovkin:04}. The QED correction presented in the second line of Table~\ref{tab:total} is the sum of the all-order results for the self energy and the vacuum polarization. Its error comes from the numerical uncertainty of the self-energy part and the estimate of uncalculated vacuum-polarization diagrams. Since there were no all-order calculations performed for oxygen, we used an extrapolation of our all-order results (taking into account the derived values of the ${Z\alpha}$ expansion coefficients). The Bohr-Weisskopf correction, presented in the third line of Table~\ref{tab:total}, was calculated by reevaluating the leading-order contribution with the point-dipole hfs interaction modified by the extended-distribution function $F(r)$ given by Eqs.~(\ref{5eq12}) and (\ref{5eq13}). Because the effective single-particle model of the nuclear magnetic moment is (of course) a rather crude approximation, we estimate the uncertainty of the Bohr-Weisskopf correction to be 30\%, which is consistent with previous estimates of the uncertainty of this effect \cite{shabaev:97:pra}. This uncertainty also includes the error due to the nuclear polarization effect, which is not considered in the present work. The quadrupole and the recoil corrections, given in the fourth and fifth lines of Table~\ref{tab:total}, respectively, were evaluated according to Eqs.~(\ref{eqQ}) and (\ref{eqrec}).
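As an illustration of Eq.~(\ref{eqrec}), the sketch below (Python) evaluates the nonrelativistic recoil correction for $^{17}$O$^{7+}$. The physical constants, the nuclear mass, and the magnetic moment $\mu(^{17}{\rm O})\approx-1.894\,\mu_N$ with $I=5/2$ are assumed input values of ours (CODATA/compilation-style numbers), not quantities quoted in the text.

```python
import math

ALPHA = 1.0 / 137.035999      # assumed fine-structure constant
ME_U = 5.48579909e-4          # electron mass in atomic mass units (assumed)
MP_U = 1.00727646             # proton mass in atomic mass units (assumed)

# 17-O: nuclear mass in u, magnetic moment in units of mu_N, nuclear spin
Z, M_U, MU, I = 8, 16.9947, -1.89379, 2.5

# nuclear g factor g_N = (M / (Z m_p)) * (mu / (mu_N I)), as defined above
g_N = (M_U / (Z * MP_U)) * (MU / I)

# Eq. (eqrec): delta sigma_rec = -(alpha Z alpha / 3)(m/M)(1 + (g_N - 1)/g_N)
dsigma_rec = -(ALPHA * Z * ALPHA / 3.0) * (ME_U / M_U) * (1.0 + (g_N - 1.0) / g_N)
```

With these inputs the result is $\approx-0.0120\times 10^{-6}$, reproducing the recoil entry for $^{17}$O$^{7+}$ in Table~\ref{tab:total}.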
The error of the quadrupole correction comes from the nuclear quadrupole moments. The magnetic dipole and electric quadrupole moments of nuclei were taken from Refs.~\cite{raghavan:89,stone:05}. The $Z$ dependence of individual contributions to the magnetic shielding constant $\sigma$ is shown in Fig.~\ref{fig:total}. The leading-order contribution is separated into three parts, the point-nucleus nonrelativistic part (solid line, black), the point-nucleus relativistic part (dashed line, green), and the finite nuclear size correction (dashed-dotted line, blue). We observe that the finite nuclear size correction, as well as the other effects calculated in this work, become increasingly important in the region of large nuclear charges. The only exception is the nuclear recoil correction. It is practically independent of the nuclear charge number $Z$, since the linear $Z$ scaling in Eq.~(\ref{eqrec}) is compensated by the increase of the nuclear mass $M$ with $Z$. As a consequence, the recoil effect is completely negligible for high- and medium-$Z$ ions, but turns into one of the dominant corrections for $Z<10$. For most ions, the uncertainty of the theoretical prediction is roughly 30\% of the Bohr-Weisskopf effect; it can be immediately estimated from Fig.~\ref{fig:total}. For very light ions, however, there is an additional uncertainty due to the unknown relativistic recoil effect. Its contribution can be estimated by multiplying the nonrelativistic recoil correction plotted in Fig.~\ref{fig:total} by a factor of $({Z\alpha})^2$. We observe that for very light ions, the relativistic recoil is the dominant source of error in theoretical predictions. \section{Conclusion} \label{sec:5} In this work we have performed an {\em ab initio} calculation of the nuclear magnetic shielding in hydrogen-like ions with inclusion of relativistic, nuclear, and QED effects.
The uncertainty of our theoretical predictions for the nuclear magnetic shielding constant defines [according to Eq.~(\ref{eq4b})] the precision to which the nuclear magnetic dipole moments can be determined from experiments on the $g$-factors of hydrogen-like ions. It can be concluded from Table~\ref{tab:total} and Fig.~\ref{fig:total} that the present theory permits determination of nuclear magnetic moments with fractional accuracy ranging from $10^{-9}$ in the case of $^{17}$O$^{7+}$ to $10^{-5}$ for $^{209}$Bi$^{82+}$. For most hydrogen-like ions, the dominant source of error in the theoretical predictions is the Bohr-Weisskopf effect. Since this effect cannot be accurately calculated at present, we conclude that the theory of the nuclear magnetic shielding has reached the point where the uncertainty due to nuclear-structure effects impedes further progress. For very light ions, however, the dominant theoretical error comes from the unknown relativistic recoil effect, whose calculation might be a subject of future work. \section*{Acknowledgement} Stimulating discussions with K.~Blaum and G.~Werth are gratefully acknowledged. V.~A.~Y.~and Z.~H.~were supported by the Alliance Program of the Helmholtz Association (HA216/EMMI). K.P.~acknowledges support by NIST Precision Measurement Grant PMG 60NANB7D6153. \begin{table*}[htb] \caption{Individual contributions to the self-energy correction to the nuclear magnetic shielding, for the point nucleus, in terms of the function $D_{\rm SE}$ defined by Eq.~(\ref{eqDZa}). 
\label{tab:se}} \begin{ruledtabular} \begin{tabular}{c......} $Z$ &\multicolumn{1}{c}{po} & \multicolumn{1}{c}{vr,hfs} & \multicolumn{1}{c}{vr,zee} & \multicolumn{1}{c}{d.ver} & \multicolumn{1}{c}{der} & \multicolumn{1}{c}{total} \\ \hline\\[-7pt] 10 & -9.x584 & 6.x095 & 9.x180 & -49.x722 & 43.x523 & -0.x508\,(100) \\ 14 & -4.x741 & 1.x711 & 4.x321 & -21.x048 & 19.x047 & -0.x710\,(15) \\ 16 & -3.x571 & 0.x705 & 3.x140 & -14.x718 & 13.x655 & -0.x789\,(9) \\ 20 & -2.x217 & -0.x405 & 1.x761 & -7.x848 & 7.x782 & -0.x927\,(4) \\ 26 & -1.x277 & -1.x115 & 0.x772 & -3.x473 & 3.x983 & -1.x110\,(2) \\ 32 & -0.x858 & -1.x407 & 0.x292 & -1.x644 & 2.x333 & -1.x283\,(1) \\ 40 & -0.x624 & -1.x580 & -0.x043 & -0.x586 & 1.x315 & -1.x519\,(1) \\ 45 & -0.x575 & -1.x643 & -0.x171 & -0.x267 & 0.x975 & -1.x681\, \\ 54 & -0.x595 & -1.x744 & -0.x335 & 0.x023 & 0.x622 & -2.x029\, \\ 60 & -0.x672 & -1.x825 & -0.x423 & 0.x112 & 0.x487 & -2.x321\, \\ 70 & -0.x929 & -2.x028 & -0.x571 & 0.x175 & 0.x355 & -2.x999\, \\ 82 & -1.x612 & -2.x508 & -0.x812 & 0.x191 & 0.x284 & -4.x457\,(2) \\ 83 & -1.x701 & -2.x569 & -0.x839 & 0.x192 & 0.x281 & -4.x636\,(1) \\ 92 & -3.x011 & -3.x400 & -1.x174 & 0.x198 & 0.x280 & -7.x107\,(2) \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table}[htb] \caption{Vacuum-polarization correction to the magnetic shielding, for the extended nucleus, in terms of the function $D_{\rm VP}$ defined by Eq.~(\ref{eq:vp}). 
\label{tab:vp}} \begin{ruledtabular} \begin{tabular}{cccc} $Z$ &\multicolumn{1}{c}{po} & \multicolumn{1}{c}{mag} & \multicolumn{1}{c}{total} \\ \hline\\[-7pt] 10 & 0.118 & 0.110 & 0.228 \\ 14 & 0.135 & 0.121 & 0.256 \\ 16 & 0.144 & 0.126 & 0.271 \\ 20 & 0.164 & 0.137 & 0.302 \\ 26 & 0.200 & 0.155 & 0.355 \\ 32 & 0.242 & 0.175 & 0.417 \\ 40 & 0.314 & 0.206 & 0.520 \\ 45 & 0.369 & 0.228 & 0.597 \\ 54 & 0.500 & 0.275 & 0.775 \\ 60 & 0.618 & 0.315 & 0.933 \\ 70 & 0.891 & 0.398 & 1.289 \\ 82 & 1.449 & 0.546 & 1.996 \\ 83 & 1.512 & 0.562 & 2.074 \\ 92 & 2.227 & 0.727 & 2.954 \end{tabular} \end{ruledtabular} \end{table} \begin{table*} \caption{Individual contributions to the shielding constant $\sigma \times 10^6$ for selected hydrogen-like ions. \label{tab:total}} \begin{ruledtabular} \begin{tabular}{l.....} & \multicolumn{1}{c}{$^{17}$O$^{7+}$} & \multicolumn{1}{c}{$^{43}$Ca$^{19+}$} &\multicolumn{1}{c}{$^{73}$Ge$^{31+}$} &\multicolumn{1}{c}{$^{131}$Xe$^{53+}$} &\multicolumn{1}{c}{$^{209}$Bi$^{82+}$} \\ \hline\\[-5pt] Leading & 143.3x127 & 375.9x60 & 657.x93 & 1461.x6 & 4112x \\ QED & -0.0x026\,(2) & -0.1x03\,(15)& -0.x59\,(8) & -4.x1\,(0.8)& -30x\,(7) \\ Bohr-Weisskopf & -0.0x013\,(4) & -0.0x61\,(18)& -0.x54\,(16)& -8.x2\,(2.5)& -42x\,(13)\\ Quadrupole & -0.0x007\,(1) & -0.0x18 & -0.x42 & 6.x9\,(0.1)& 7x \\ Recoil & -0.0x120 & -0.0x15 & -0.x02 & 0.x0 & 0x \\ Total & 143.2x960\,(5) & 375.7x63\,(24)& 656.x36\,(18)& 1456.x3\,(2.6)& 4046x\,(15)\\ \end{tabular} \end{ruledtabular} \end{table*}
\section{Introduction}\label{intro}\noindent In this paper we investigate some qualitative properties of the skew-product semiflows generated by the solutions of non-autonomous parabolic partial functional differential equations (PFDEs for short) with finite delay and boundary conditions of Neumann, Robin or Dirichlet type. In this non-autonomous framework the phase space is a product space $\Omega \times C$, where the base $\Omega$ is a compact metric space under the action of a continuous flow $\sigma: \R \times \Omega \to \Omega$, $(t,\omega) \mapsto \omega {\cdot} t$ and the state space $C$ is an infinite dimensional Banach space of continuous functions. The skew-product semiflow $\tau: \R_+\times \Omega \times C \to \Omega \times C$, $(t,\omega,\varphi) \mapsto (\omega{\cdot} t,u(t,\omega, \varphi))$ is built upon the mild solutions of the associated abstract Cauchy problems (ACPs for short) with delay. We assume that the flow in the base is minimal. \par This formalism makes it possible to carry out a dynamical study of the solutions of non-autonomous differential equations in which the temporal variation of the coefficients is almost periodic, almost automorphic or, more generally, recurrent. Frequently, $\Omega$ is obtained as the hull of the non-autonomous function defining the differential equations, although the approach considered here is more general. The references Ellis~\cite{elli}, Johnson et al.~\cite{jonnf}, Sacker and Sell~\cite{sase, sase94}, and Shen and Yi~\cite{shyi}, and references therein, contain ingredients of the theory of non-autonomous dynamical systems which will be used throughout this work. \par The main issue in the paper is the persistence of the systems of parabolic PFDEs. Persistence is a dynamical property which is of great interest in mathematical modelling, in areas such as biological population dynamics, epidemiology, ecology or neural networks.
In the field of monotone dynamical systems, different notions of persistence have been introduced, with the general meaning that in the long run the trajectories place themselves above a prescribed region of the phase space, which we take to be a minimal set $K\subset \W\times C$. In many applications this minimal set is $\Omega \times \{0\}$, so that, roughly speaking, uniform or strict persistence means that the solutions eventually become uniformly strongly or strictly positive, respectively. \par In~\cite{obsa18} Obaya and Sanz showed that in the general non-autonomous setting, in order that persistence can be detected experimentally, this notion should be considered as a collective property of the complete family of systems over $\Om$. We follow this collective approach to develop dynamical properties of persistence with important practical implications. Our study extends the theory of persistence developed in Novo et al.~\cite{noos7} and Obaya and Sanz~\cite{obsa} for non-autonomous ODEs, FDEs with delay and parabolic PDEs to parabolic PFDEs, considering also the cases of Robin or Dirichlet boundary conditions. \par We briefly explain the structure and contents of the paper. Some basic concepts in the theory of non-autonomous dynamical systems are included in Section~\ref{sec-prelim}. Section~\ref{sec-skew-product} is devoted to describing the dynamical scenario in which the parabolic problems are immersed, distinguishing the case of Neumann or Robin boundary conditions, and the Dirichlet case. We analyze the regularity properties and the long-term behaviour of the solutions that determine the topological structure of omega-limit sets and minimal sets. We follow arguments in the line of Martin and Smith~\cite{masm0,masm} and Wu~\cite{wu} to extend previous results given in Novo et al.~\cite{nonuobsa} for Neumann boundary conditions, to the case of Robin and Dirichlet boundary conditions.
We also study the consequences of the so-called quasimonotone condition in the problems. \par In Section~\ref{sec-linearized sem}, under regularity conditions in the reaction terms in the equations, we build the variational problems along the semiorbits of $\tau$, whose mild solutions induce the linearized skew-product semiflow. Then, in the linear and monotone setting, we consider continuous separations of type~II and the associated principal spectra, which can be determined by Lyapunov exponents. The classical concept of continuous separation given by Pol\'{a}\v{c}ik and Tere\v{s}\v{c}\'{a}k~\cite{pote} in a discrete dynamical setting, and then extended to a continuous setting by Shen and Yi~\cite{shyi}, has proved to be widely applicable in non-autonomous ODEs and parabolic PDEs, but not in equations with delay. Later, Novo et al.~\cite{noos6} introduced a variation of this notion, and called it continuous separation of type~II, in order to make it applicable to delay equations. The results in Novo et al.~\cite{noos7} and Obaya and Sanz~\cite{obsa,obsa18} show its importance in the dynamical description of non-autonomous FDEs with finite delay, and now it becomes relevant in reaction-diffusion systems with delay. \par Finally, in Section~\ref{sec-uniform persistence} we consider regular and quasimonotone parabolic PFDEs. Assuming the existence of a minimal set $K$ for $\tau$ with a flow extension, we first establish an easy criterion for the existence of a continuous separation of type~II over $K$ in terms of the irreducibility of a constant matrix calculated from the partial derivatives of the reaction term in the equations, with respect to the non-delayed and delayed state components. A key fact is that in the general case, after a convenient permutation of the variables in the system, the constant matrix mentioned before has a block lower triangular structure, with irreducible diagonal blocks.
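In matrix terms, this block lower triangular structure is the usual reducible normal form: the strongly connected components of the directed graph of the nonzero pattern give the irreducible diagonal blocks, and a topological ordering of the components gives the permutation. A minimal sketch (Python; the $4\times 4$ matrix is a made-up example of ours, not one appearing in the paper):

```python
def sccs(adj):
    """Kosaraju's algorithm: strongly connected components of a digraph
    given as adjacency lists; returns (component labels, #components)."""
    n = len(adj)
    order, seen = [], [False] * n
    def dfs1(v):                         # first pass: post-order on adj
        seen[v] = True
        for w in adj[v]:
            if not seen[w]:
                dfs1(w)
        order.append(v)
    for v in range(n):
        if not seen[v]:
            dfs1(v)
    radj = [[] for _ in range(n)]        # reversed graph
    for v in range(n):
        for w in adj[v]:
            radj[w].append(v)
    comp, c = [-1] * n, 0
    for v in reversed(order):            # second pass on reversed graph
        if comp[v] == -1:
            stack, comp[v] = [v], c
            while stack:
                u = stack.pop()
                for w in radj[u]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1
    return comp, c

# nonzero pattern of a hypothetical 4x4 matrix B; B[i][j] != 0 means
# "variable j enters the equation for variable i"
B = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [1, 0, 1, 1],
     [0, 0, 1, 1]]
adj = [[j for j in range(4) if B[i][j]] for i in range(4)]
comp, ncomp = sccs(adj)
```

Here the components are $\{0,1\}$ and $\{2,3\}$; listing them in topological order exhibits $B$ as block lower triangular with irreducible diagonal blocks.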
This permits us to consider a family of lower dimensional linear systems with a continuous separation, for which the property of persistence depends upon the positivity of its principal spectrum. In this situation, a sufficient condition for the presence of uniform or strict persistence in the area above $K$ is given in terms of the principal spectra of an adequate subset of such systems in each case. \section{Some preliminaries}\label{sec-prelim}\noindent In this section we include some basic notions in topological dynamics for non-autonomous dynamical systems. \par Let $(\W,d)$ be a compact metric space. A real {\em continuous flow \/} $(\W,\sigma,\R)$ is defined by a continuous map $\sigma: \R\times \W \to \W,\; (t,\w)\mapsto \sigma(t,\w)$ satisfying $\sigma_0=\text{Id}$, and $\sigma_{t+s}=\sigma_t\circ\sigma_s$ for each $t, s\in\R$, where $\sigma_t(\w)=\sigma(t,\w)$ for all $\w \in \W$ and $t\in \R$. The set $\{ \sigma_t(\w) \mid t\in\R\}$ is called the {\em orbit\/} of the point $\w$. A subset $\W_1\subset \W$ is {\em $\sigma$-invariant\/} if $\sigma_t(\W_1)=\W_1$ for every $t\in\R$, and it is {\em minimal \/} if it is compact, $\sigma$-invariant and it does not contain properly any other compact $\sigma$-invariant set. Every compact and $\sigma$-invariant set contains a minimal subset. Furthermore, a compact $\sigma$-invariant subset is minimal if and only if every orbit is dense. We say that the continuous flow $(\W,\sigma,\R)$ is {\em recurrent\/} or {\em minimal\/} if $\W$ is minimal. \par A finite regular measure defined on the Borel sets of $\W$ is called a Borel measure on $\W$. Given $\mu$ a normalized Borel measure on $\W$, it is {\em $\sigma$-invariant\/} if $\mu(\sigma_t(\W_1))=\mu(\W_1)$ for every Borel subset $\W_1\subset \W$ and every $t\in \R$. It is {\em ergodic\/} if, in addition, $\mu(\W_1)=0$ or $\mu(\W_1)=1$ for every $\sigma$-invariant Borel subset $\W_1\subset \W$. \par Let $\R_+=\{t\in\R\,|\,t\geq 0\}$.
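A standard concrete example of a minimal flow is the irrational rotation of the circle; the toy check below (Python; the example is ours, not from the text) illustrates numerically that a finite orbit segment is already nearly dense, in line with the characterization of minimality through dense orbits.

```python
import math

# orbit of the rotation w -> w + gamma (mod 1) with gamma irrational;
# minimality of this flow means every orbit is dense in the circle
gamma = (math.sqrt(5.0) - 1.0) / 2.0     # golden-ratio rotation number
pts = sorted((n * gamma) % 1.0 for n in range(1000))
gaps = [b - a for a, b in zip(pts, pts[1:])]
gaps.append(1.0 - pts[-1] + pts[0])      # wrap-around gap on the circle
max_gap = max(gaps)                      # small: 1000 points nearly fill [0,1)
```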
Given a continuous compact flow $(\W,\sigma,\R)$ and a complete metric space $(C,\di)$, a continuous {\em skew-product semiflow\/} $(\W\times C,\tau,\,\R_+)$ on the product space $\W\times C$ is determined by a continuous map \begin{equation*} \begin{array}{cccl} \tau \colon &\R_+\times\W\times C& \longrightarrow & \W\times C \\ & (t,\w,\varphi) & \mapsto &(\w{\cdot}t,u(t,\w,\varphi)) \end{array} \end{equation*} which preserves the flow on $\W$, denoted by $\w{\cdot}t=\sigma(t,\w)$ and referred to as the {\em base flow\/}. The semiflow property means that $\tau_0=\text{Id}$, and $\tau_{t+s}=\tau_t \circ \tau_s$ for all $t, s\geq 0$, where again $\tau_t(\w,\varphi)=\tau(t,\w,\varphi)$ for each $(\w,\varphi) \in \W\times C$ and $t\in \R_+$. This leads to the so-called semicocycle property: \begin{equation}\label{semicocycle} u(t+s,\w,\varphi)=u(t,\w{\cdot}s,u(s,\w,\varphi))\quad\text{for }\; t,s\ge 0\;\;\text{and }\; (\w,\varphi)\in \W\times C\,. \end{equation} \par The set $\{ \tau(t,\w,\varphi)\mid t\geq 0\}$ is the {\em semiorbit\/} of the point $(\w,\varphi)$. A subset $K$ of $\W\times C$ is {\em positively invariant\/} if $\tau_t(K)\subseteq K$ for all $t\geq 0$ and it is $\tau$-{\em invariant\/} if $\tau_t(K)= K$ for all $t\geq 0$. A compact positively invariant set $K$ for the semiflow is {\em minimal\/} if it does not contain any nonempty compact positively invariant set other than itself. The restricted semiflow on a compact and $\tau$-invariant set $K$ admits a {\em flow extension\/} if there exists a continuous flow $(K,\wit\tau,\R)$ such that $\wit \tau(t,\w,\varphi)=\tau(t,\w,\varphi)$ for all $(\w,\varphi)\in K$ and $t\in\R_+$. \par Whenever a semiorbit $\{\tau(t,\w_0,\varphi_0)\mid t\ge 0\}$ is relatively compact, one can consider the {\em omega-limit set\/} of $(\w_0,\varphi_0)$, formed by the limit points of the semiorbit as $t\to\infty$. The omega-limit set is then a nonempty compact connected and $\tau$-invariant set. 
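The semicocycle identity~\eqref{semicocycle} can be verified in closed form on a toy scalar equation $x'=\sin(\w+t)\,x$ driven by the translation flow $\w{\cdot}t=\w+t$, whose cocycle is explicit; the check below (Python; the example is an illustrative assumption of ours, not taken from the paper) is exact up to rounding.

```python
import math

def u(t, w, x):
    """Explicit cocycle of x' = sin(w + t) x over the translation flow:
    u(t, w, x) = x * exp(cos w - cos(w + t))."""
    return x * math.exp(math.cos(w) - math.cos(w + t))

def semicocycle_defect(t, s, w, x):
    # u(t+s, w, x) should equal u(t, w.s, u(s, w, x)) with w.s = w + s
    return abs(u(t + s, w, x) - u(t, w + s, u(s, w, x)))
```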
An important property of an omega-limit set is that semiorbits admit backward extensions inside it. Therefore, a sufficient condition for such a set to have a flow extension is the uniqueness of backward orbits (see~\cite{shyi} for more details). \par In this paper we will sometimes work under some differentiability assumptions. More precisely, when $C$ is a Banach space, the skew-product semiflow $\tau$ is said to be of class $C^1$ when $u$ is assumed to be of class $C^1$ in $\varphi$, meaning that $D_{\!\varphi}u(t,\w,\varphi)$ exists for any $t>0$ and any $(\w,\varphi)\in\W\times C$ and for each fixed $t>0$, the map $(\w,\varphi)\mapsto D_{\!\varphi}u(t,\w,\varphi)\in \mathcal L(C)$ is continuous in a neighborhood of any compact set $K\subset \W\times C$, for the norm topology on $\mathcal L(C)$; moreover, for any $\phi\in C$, $\lim_{\,t\to 0^+}D_{\!\varphi}u(t,\w,\varphi)\,\phi=\phi $ uniformly for $(\w,\varphi)$ in compact sets of $\W\times C$. \par In that case, whenever $K\subset \W\times C$ is a compact positively invariant set, we can define a continuous linear skew-product semiflow called the {\em linearized skew-product semiflow\/} of $\tau$ over $K$, \begin{equation*} \begin{array}{cccl} L: & \R_+\times K \times C& \longrightarrow & K \times C\\ & (t,(\w,\varphi),\phi) & \mapsto &(\tau(t,\w,\varphi),D_{\!\varphi}u(t,\w,\varphi)\,\phi)\,. \end{array} \end{equation*} We note that $D_{\!\varphi}u$ satisfies the linear semicocycle property: \begin{equation}\label{linear semicocycle} D_{\!\varphi}u(t+s,\w,\varphi)=D_{\!\varphi}u(t,\tau(s,\w,\varphi))\,D_{\!\varphi}u(s,\w,\varphi)\quad\text{for }\; t,s\ge 0\,,\; (\w,\varphi)\in K. \end{equation} \par Finally, we include the definition of monotone skew-product semiflow. A Banach space $X$ is {\em ordered} if there is a closed convex cone, i.e., a nonempty closed subset $X_+\subset X$ satisfying $X_+\!+X_+\subset X_+$, $\R_+ X_+\!\subset X_+$ and $X_+\cap(-X_+)=\{0\}$.
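For the model case $X=\R^n$ with $X_+$ the nonnegative orthant, the three cone axioms just listed can be spot-checked numerically; the sketch below (Python, a toy illustration of ours on sample vectors) does so.

```python
def in_cone(v):
    """Membership in the nonnegative orthant, the model positive cone."""
    return all(vi >= 0 for vi in v)

def add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def scale(c, v):
    return [c * vi for vi in v]

u, v = [1.0, 0.0, 2.0], [0.5, 3.0, 0.0]
closed_under_sum = in_cone(add(u, v))           # X+ + X+ included in X+
closed_under_scaling = in_cone(scale(2.5, u))   # R+ X+ included in X+
# pointedness: a nonzero u in X+ whose opposite -u leaves the cone
pointed_witness = in_cone(u) and not in_cone(scale(-1.0, u))
```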
If, in addition, the positive cone has a nonempty interior, $\Int X_+\not=\emptyset$, then $X$ is {\em strongly ordered}. The (partial) {\em strong order relation\/} in $X$ is then defined by \begin{equation}\label{order} \begin{split} v_1\le v_2 \quad &\Longleftrightarrow \quad v_2-v_1\in X_+\,;\\ v_1< v_2 \quad &\Longleftrightarrow \quad v_2-v_1\in X_+\;\text{ and }\;v_1\ne v_2\,; \\ v_1\ll v_2 \quad &\Longleftrightarrow \quad v_2-v_1\in \Int X_+\,.\qquad\quad\quad~ \end{split} \end{equation} The relations $\ge,\,>$ and $\gg$ are defined in the obvious way. If $C$ is an ordered Banach space, the skew-product semiflow $(\W\times C,\tau,\R_+)$ is {\em monotone\/} if \begin{equation*} u(t,\w,\varphi)\le u(t,\w,\phi)\,\quad \text{for\, $t\ge 0$, $\w\in\W$ \,and\, $\varphi, \phi\in C$ \,with\, $\varphi\le \phi$}\,. \end{equation*} Note that monotone semiflows are forward dynamical systems on ordered Banach spaces which preserve the order of initial states along the semiorbits. \section{Skew-product semiflows induced by parabolic PFDEs with delay}\label{sec-skew-product}\noindent In this section we consider time-dependent families of initial boundary value (IBV for short) problems given by systems of parabolic PFDEs with a fixed delay (just taken to be $1$) over a minimal flow $(\W,\sigma,\R)$, with Dirichlet, Neumann or Robin boundary conditions. More precisely, for each $\w\in\W$ we consider the IBV problem \begin{equation*} \left\{\begin{array}{l} \des\frac{\partial y_i}{\partial t}(t,x)= d_i\Delta y_i(t,x)+f_i(\w{\cdot}t,x,y(t,x),y(t-1,x))\,,\quad t>0\,,\;x\in \bar U,\\[.2cm] \alpha_i(x)\,y_i(t,x)+\delta_i\,\des\frac{\partial y_i}{\partial n}(t,x) =0\,,\quad t>0\,,\;\,x\in \partial U,\\[.2cm] y_i(s,x)=\varphi_i(s,x)\,,\quad s\in [-1,0]\,,\;\,x\in \bar U, \end{array}\right.
\end{equation*} for $i=1,\ldots,n$, where $\w{\cdot}t$ denotes the flow on $\W$; $U$, the spatial domain, is a bounded, open and connected subset of $\R^m$ ($m\geq 1$) with a sufficiently smooth boundary $\partial U$; $\Delta$ is the Laplacian operator on $\R^m$ and $d_1,\ldots,d_n$ are positive constants called the diffusion coefficients; the map $f:\W\times \bar U\times\R^n\times\R^n\to \R^n$, called the reaction term, with components $f=(f_1,\ldots,f_n)$ satisfies the following condition: \begin{itemize} \item[(C)] $f(\w,x,y,\wit y)$ is continuous, and it is Lipschitz in $(y,\wit y)$ in bounded sets, uniformly for $\w\in \W$ and $x\in\bar U$, that is, given any $\rho>0$ there exists an $L_\rho>0$ such that \[ \|f(\w,x,y_2,\wit y_2)-f(\w,x,y_1,\wit y_1)\|\leq L_\rho\,(\|y_2-y_1\|+\|\wit y_2-\wit y_1\|) \] for any $\w\in \W$, $x\in\bar U$ and $y_i,\,\wit y_i\in\R^n$ with $\|y_i\|,\,\|\wit y_i\|\leq \rho\,,\; i=1,2$; \end{itemize} $\partial/\partial n$ denotes the outward normal derivative at the boundary; and the boundary conditions are called Dirichlet boundary conditions if $\delta_i=0$ and $\alpha_i\equiv 1$, Neumann boundary conditions if $\delta_i=1$ and $\alpha_i\equiv 0$, or Robin boundary conditions if $\delta_i=1$ and $\alpha_i\geq 0$ is sufficiently regular on $\partial U\!$, for $i=1,\ldots,n$. \par Let $C(\bar U)$ be the space of continuous real maps on the closure of $U$, endowed with the sup-norm, which we just denote by $\|\,{\cdot}\,\|$. If every $\delta_i=1$, that is, with Neumann or Robin boundary conditions, the initial value $\varphi_i$ lies in the space $C([-1,0]\times \bar U)\equiv C([-1,0],C(\bar U))$ of the continuous maps on $[-1,0]$ taking values in $C(\bar U)$, whereas with Dirichlet boundary conditions $\varphi_i$ should in addition satisfy the compatibility condition $\varphi_i(0)\in C_0(\bar U)$, the subspace of $C(\bar U)$ of functions vanishing on $\partial U$. 
\par The family above can be written compactly, with $y(t,x)=(y_1(t,x),\ldots,y_n(t,x))$, as \begin{equation}\label{family} \left\{\begin{array}{l} \des\frac{\partial y}{\partial t}(t,x)= D\Delta y(t,x)+f(\w{\cdot}t,x,y(t,x),y(t-1,x))\,,\quad t>0\,,\;\,x\in \bar U,\\[.2cm] \bar\alpha(x)\,y(t,x)+\delta\,\des\frac{\partial y}{\partial n}(t,x) =0\,,\quad t>0\,,\;\,x\in \partial U,\\[.2cm] y(s,x)=\varphi(s,x)\,,\quad s\in [-1,0]\,,\;\,x\in \bar U, \end{array}\right. \end{equation} for each $\w\in\W$, where $D$ and $\bar\alpha(x)$ respectively stand for the $n\times n$ diagonal matrices with entries $d_1,\ldots,d_n$ and $\alpha_1(x),\ldots,\alpha_n(x)$; $\delta=1$ for Neumann or Robin boundary conditions and $\delta=0$ for Dirichlet boundary conditions; and $\varphi$ is a given map in the space $C([-1,0],C(\bar U,\R^n))$, which can be identified with $C([-1,0]\times\bar U,\R^n)$. \par Using results by Martin and Smith~\cite{masm0,masm} and Travis and Webb~\cite{trwe}, the construction of a locally defined continuous skew-product semiflow linked to time-dependent families of IBV problems given by systems of parabolic PFDEs with (possibly variable) finite delay has been explained in Novo et al.~\cite{nonuobsa} in the case of Neumann boundary conditions. In fact, the problem with Robin boundary conditions admits a common treatment. Notwithstanding, the problem with Dirichlet boundary conditions is more delicate. \par In any case, the main idea is to immerse the family of problems with delay~\eqref{family} into a family of retarded abstract equations in an appropriate Banach space $B$, \begin{equation}\label{ACPdelay} \left\{\begin{array}{l} z'(t) = A z(t)+ F(\w{\cdot}t,z_t)\,,\quad t>0\,,\\ z_0=\varphi\in C([-1,0],B)\,, \end{array}\right. \end{equation} where for each $t\geq 0$, $z_t$ is the map defined by $z_t(s)=z(t+s)$ for $s\in [-1,0]$, and then use the semigroup theory approach.
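Before turning to the functional-analytic setting, it may help to see the objects of~\eqref{family} concretely. The sketch below (Python) integrates a single scalar equation of this form on $U=(0,1)$ with homogeneous Neumann conditions by an explicit method of lines. Everything in it is a made-up toy of ours: the reaction term $f(y,\wit y)=-y+\tfrac12\wit y$, the grid, and the constant initial history, chosen so that the solution stays spatially uniform and can be compared with the exact delay-ODE value $z(1)=\tfrac12+\tfrac12 e^{-1}$.

```python
import math

N, dt, K = 21, 1.0e-3, 1000        # grid points; time step; K * dt = delay 1
dx = 1.0 / (N - 1)                 # note dt < dx^2 / 2, so explicit Euler is stable

def lap(u):
    """Second difference with reflecting (Neumann) endpoints."""
    out = []
    for i in range(N):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < N - 1 else u[N - 2]
        out.append((left - 2.0 * u[i] + right) / dx**2)
    return out

def f(y, y_delayed):               # toy reaction term f(y, y~) = -y + y~/2
    return -y + 0.5 * y_delayed

hist = [[1.0] * N for _ in range(K + 1)]   # history phi(s, x) = 1 on [-1, 0]
for step in range(2 * K):                  # integrate the PDE up to t = 2
    u, u_del = hist[-1], hist[-(K + 1)]    # y(t, .) and y(t - 1, .)
    L = lap(u)
    hist.append([u[i] + dt * (L[i] + f(u[i], u_del[i])) for i in range(N)])

y1 = hist[2 * K]                           # numerical solution at t = 1
```

Since the history is spatially constant and the boundary conditions are Neumann, the diffusion term vanishes and the profile remains uniform, so `y1` matches the delay-ODE solution to Euler accuracy.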
On the space of continuous functions $C([-1,0],B)$ the sup-norm will be used and it will be denoted by $\n{\cdot}_C$. \par When choosing a Banach space, it is important to bear in mind the kind of results that one wants to obtain. On the one hand, we want a strongly continuous semigroup of operators, so that the induced skew-product semiflow is continuous. On the other hand, in Section~\ref{sec-uniform persistence} we will be working with some strong monotonicity conditions, so that we need a cone of positive elements in the Banach space with a nonempty interior. For this reason, in the Dirichlet case we avoid working in $C_0(\bar U)$, since the natural cone of positive elements has an empty interior, and instead choose an intermediate space; more precisely, a domain of fractional powers associated to the realization of the Dirichlet Laplacian in $L^p(U)$. Detailed accounts of these spaces can be found in Henry~\cite{henr}, Lunardi~\cite{luna} or Pazy~\cite{pazy}. \par At this point it seems convenient to present the Neumann and Robin cases, and the Dirichlet case separately. \subsection{The case of Neumann or Robin boundary conditions}\label{sect-Neumann} In this case for each component $i=1,\ldots,n$ we consider on the space $C(\bar U)$ the differential operator $A_i^0 z_i = d_i\Delta z_i$ with domain $D(A_i^0)$ given by \[ \left\{z_i\in C^2(U)\cap C^1(\bar U)\;\Big|\; A_i^0z_i\in C(\bar U)\,,\; \alpha_i(x)\,z_i(x)+\des\frac{\partial z_i}{\partial n}(x)=0\, \;\forall \;x\in \partial U \right\}. \] Then, the closure $A_i$ of $A_i^0$ in $C(\bar U)$ is a sectorial operator and it generates an analytic semigroup of bounded linear operators $(T_i(t))_{t\geq 0}$, which is usually just written down as $(e^{tA_i})_{t\geq 0}$, and $e^{tA_i}$ is compact for any $t>0$ (for instance, see Smith~\cite{smit}). Besides, the semigroup is strongly continuous, that is, $A_i$ is densely defined.
\par On the product Banach space $E=C(\bar U)^n\equiv C(\bar U,\R^n)$ endowed with the norm $\|(z_1,\ldots,z_n)\|=\sum_{i=1}^n\|z_i\|$, we consider the operator $A=\Pi_{i=1}^n A_i$ with domain $D(A)=\Pi_{i=1}^n D(A_i)$, which is sectorial and generates an analytic semigroup of operators $(e^{tA})_{t\geq 0}$, with $e^{tA}=\Pi_{i=1}^n e^{tA_i}$, and $e^{tA}$ is compact for any $t>0$. \par Let us define $F:\W\times C([-1,0],E)\to E$, $(\w,\varphi)\mapsto F(\w,\varphi)$ by \begin{equation}\label{F} F(\w,\varphi)(x)=f(\w,x,\varphi(0,x),\varphi(-1,x))\,,\quad x\in \bar U \end{equation} and consider the retarded abstract problems on $E$ given in~\eqref{ACPdelay}. As explained in Novo et al.~\cite{nonuobsa}, with condition (C) on $f$, mild solutions of these ACPs with delay, that is, continuous solutions of the integral equations \begin{equation}\label{variation constants} z(t)=e^{tA}\,z(0) +\int_0^t e^{(t-s)A}\,F(\w{\cdot}s,z_s) \,ds\,,\quad t\geq 0\,, \end{equation} permit us to set up a locally defined continuous skew-product semiflow \begin{equation*} \begin{array}{cccc} \tau: &\mathcal{U} \subseteq\R_+\times \W\times C([-1,0],E) & \longrightarrow & \W\times C([-1,0],E)\\ & (t,\w,\varphi) & \mapsto &(\w{\cdot}t,z_t(\w,\varphi))\,, \end{array} \end{equation*} for an appropriate open set $\mathcal{U}$, where as usual $z_t(\w,\varphi)(s)=z(t+s,\w,\varphi)$ for every $s\in [-1,0]$, for $t\geq 0$. Besides, for any $t>1$ the section map $\tau_t$ is compact, meaning that it takes bounded sets in $\W\times C([-1,0],E)$ into relatively compact sets (see Proposition~2.4 in Travis and Webb~\cite{trwe}), and if a solution $z(t,\w,\varphi)$ remains bounded, then it is defined on the whole positive real line and the semiorbit of $(\w,\varphi)$ is relatively compact. \par It is well-known that we have to impose some extra conditions on the map $f$ in~\eqref{family} in order to gain regularity in the solutions of the associated ACPs~\eqref{ACPdelay}.
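For a scalar caricature of the integral equation~\eqref{variation constants} ($A=a\in\R$ and a constant history, so that the delayed term is known on the first delay interval), the mild solution can be checked against the closed-form solution obtained by the method of steps; the sketch below (Python) does this by simple quadrature. The coefficients $a=-1$, $b=1/2$ are made up for the illustration.

```python
import math

a, b = -1.0, 0.5      # z'(t) = a z(t) + b z(t-1), with history phi = 1 on [-1,0]

def mild(t, M=2000):
    """Scalar variation-of-constants formula on [0, 1], where z(s-1) = 1,
    with the integral evaluated by the composite trapezoid rule."""
    h = t / M
    vals = [math.exp(a * (t - k * h)) * b for k in range(M + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return math.exp(a * t) * 1.0 + integral

def exact(t):
    """Method of steps on [0, 1]: solve z' = a z + b with z(0) = 1."""
    return (1.0 + b / a) * math.exp(a * t) - b / a
```

The two expressions agree to quadrature accuracy on the whole first delay interval.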
For completeness, we include the definition of what we call a classical solution. Different names for the same concept are sometimes found in the literature. \begin{defi}\label{defi-clasica} A map $z\in C^1((0,T],E)\cap C((0,T],D(A))\cap C([0,T],E)$ which satisfies~\eqref{ACPdelay} for $0<t\leq T$ is a {\em classical solution\/} on $[0,T]$. \end{defi} The following condition is often referred to as a {\em time regularity\/} condition. \begin{itemize} \item[($C^\theta(t)$)] $f(\w{\cdot}t,x,y,\wit y)$ is $\theta$-H\"{o}lder continuous in $t$ (for some $\theta\in(0,1)$) in bounded sets of $\R^n\times \R^n$ uniformly for $\w\in \W$ and $x\in\bar U$; that is, given any $r>0$ there exists an $l_r>0$ such that \[ \|f(\w{\cdot}t,x,y,\wit y)-f(\w{\cdot}s,x,y,\wit y)\|\leq l_r\,|t-s|^\theta\,,\quad t,\,s\geq 0\,, \] for any $\w\in \W$, $x\in\bar U$ and $y,\wit y\in\R^n$ with $\|y\|, \|\wit y\|\leq r$\,. \end{itemize} \par We include a short proof of the following result, which follows from Theorem~4.1 in Novo et al.~\cite{nonuobsa}. \begin{teor}\label{teor-time regularity-Neumann} Assume conditions $(C)$ and $(C^\theta(t))$, for some $\theta\in(0,1/2)$, on the map $f$ in~\eqref{family}. Then, for fixed $\w\in\W$ and $\varphi\in C([-1,0],E)$: \begin{itemize} \item[(i)] The mild solution of~\eqref{ACPdelay} is classical for $t\geq 1$, provided that it is defined. \item[(ii)] If $\varphi:[-1,0]\to E$ is $\theta$-H\"{o}lder continuous and besides $\varphi(0)\in C^{2\theta}(\bar U,\R^n)$, then the mild solution of~\eqref{ACPdelay} is a classical solution on intervals $[0,T]$ as long as it is defined. \end{itemize} \end{teor} \begin{proof} (i) Assume that the mild solution $z(t)=z(t,\w,\varphi)$ of~\eqref{ACPdelay} is defined for $t\in [0,T]$, and let $g(t)=F(\w{\cdot}t,z_t)$ for $t\in [0,T]$. 
Then, if $T>1$, for any $\varepsilon>0$, it is well-known that $z\in C^\theta([\varepsilon,T],E)$, meaning that it is $\theta$-H\"{o}lder continuous in $t$ (see Lunardi~\cite{luna}), so that under conditions (C) and $(C^\theta(t))$, $g$ is $\theta$-H\"{o}lder continuous on $[1+\varepsilon,T]$. The classical theory for the nonhomogeneous equation $z'(t)=Az(t)+g(t)$ then says that $z(t)$ is a classical solution on $[1,T]$ (see Henry~\cite{henr} or Lunardi~\cite{luna}). \par For (ii), note that with Neumann or Robin boundary conditions, $z\in C^\theta([0,\varepsilon],E)$ if and only if $\varphi(0)\in C^{2\theta}(\bar U,\R^n)$, provided that $\theta<1/2$ (for instance, see Lunardi~\cite{luna}), and then just argue as before. \end{proof} Still, an additional condition has to be imposed on $f$ in order to have classical solutions $y(t,x)$ of the IBV problems with delay~\eqref{family}: \begin{itemize} \item[$(C^\theta(x))$] $f(\w,x,y,\wit y)$ is $\theta$-H\"{o}lder continuous in $x$ (for some $\theta\in(0,1)$) in bounded sets of $\R^n\times\R^n$ uniformly for $\w\in \W$; that is, given any $r>0$ there exists an $l_r>0$ such that for any $\w\in \W$ and $y,\wit y\in\R^n$ with $\|y\|, \|\wit y\|\leq r$, \[ \|f(\w,x_2,y,\wit y)-f(\w,x_1,y,\wit y)\|\leq l_r\,\|x_2-x_1\|^\theta\,,\quad x_1,\,x_2\in \bar U. \] \end{itemize} \par Note that the classical space where one looks for solutions is $C^{1,2}([a,b]\times \bar U,\R^n)$, for appropriate time intervals $[a,b]$. We are going to use some optimal regularity results for solutions of IBV problems contained in Lunardi~\cite{luna}. Nevertheless, since we are just interested in the $C^{1,2}$ regularity of solutions, we will not pay particular attention to the optimal regularity proved there. Some classical references for regularity results are Friedman~\cite{frie} and Ladyzhenskaja et al.~\cite{lasu}.
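\par As an illustration, and with a hypothetical scalar nonlinearity chosen only for this purpose (it is not one of the problems treated in this paper), conditions (C), $(C^\theta(t))$ and $(C^\theta(x))$ hold simultaneously, for instance, for a delayed reaction term of the form \[ f(\w,x,y,\wit y)=-a(\w)\,y+b(x)\,g(\wit y)\,, \] with $g\in C^1(\R)$, $b\in C^\theta(\bar U)$ and $a\in C(\W)$ such that $t\mapsto a(\w{\cdot}t)$ is $\theta$-H\"{o}lder continuous uniformly in $\w\in\W$. Indeed, the $C^1$ character of $f$ in $(y,\wit y)$ yields the Lipschitz condition in (C) on bounded sets; the identity $|f(\w{\cdot}t,x,y,\wit y)-f(\w{\cdot}s,x,y,\wit y)|=|a(\w{\cdot}t)-a(\w{\cdot}s)|\,|y|$ yields $(C^\theta(t))$ for $|y|\leq r$; and the identity $|f(\w,x_2,y,\wit y)-f(\w,x_1,y,\wit y)|=|b(x_2)-b(x_1)|\,|g(\wit y)|$ yields $(C^\theta(x))$, since $g$ is bounded on $|\wit y|\leq r$.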
\begin{teor}\label{teor-space regularity-Neumann} Assume conditions $(C)$, $(C^{\theta}(t))$ and $(C^{2\theta}(x))$ on the map $f$ in~\eqref{family}, for some $\theta\in(0,1/2)$, with Neumann or Robin boundary conditions. For fixed $\w\in\W$ and $\varphi\in C([-1,0],E)$, assume that the mild solution $z(t)=z(t,\w,\varphi)$ is defined on a time interval $[0,T]$ and set $y(t,x)=z(t)(x)$ for $t\in [0,T]$ and $x\in \bar U$, as well as $y(s,x)=\varphi(s,x)$ for $s\in [-1,0]$ and $x\in \bar U$. Then: \begin{itemize} \item[(i)] If $T>1$, for any $\varepsilon>0$ the map $y\in C^{1,2}([1+\varepsilon,T]\times \bar U,\R^n)$ is a solution of the IBV problem~\eqref{family} for $1+\varepsilon<t\leq T$. \item[(ii)] If $\varphi\in C^{\theta,2\theta}([-1,0]\times \bar U, \R^n)$, then, for any $\varepsilon>0$, $y\in C^{1,2}([\varepsilon,T]\times \bar U,\R^n)$ is a solution of the IBV problem~\eqref{family} for $\varepsilon<t\leq T$. \end{itemize} \end{teor} \begin{proof} For the continuous map $h(t,x)=f(\w{\cdot}t,x,y(t,x),y(t-1,x))$, $(t,x)\in [0,T]\times\bar U$, we consider the IBV problem on $[0,T]\times\bar U$, \begin{equation}\label{auxiliar-robin} \left\{\begin{array}{l} \des\frac{\partial y}{\partial t}(t,x)= D\Delta y(t,x)+h(t,x)\,,\quad 0<t\leq T,\;\,x\in \bar U,\\[.2cm] \bar\alpha(x)\,y(t,x)+\des\frac{\partial y}{\partial n}(t,x) =0\,,\quad 0<t\leq T,\;\,x\in \partial U,\\[.2cm] y(0,x)=\varphi(0,x)\,, \quad x\in \bar U. \end{array}\right. \end{equation} \par Note that in both items $y(t,x)=z(t)(x)$ is $C^1$ in $t$ because $z(t)$ is a classical solution, by Theorem~\ref{teor-time regularity-Neumann}. Therefore, it remains to check the $C^2$ regularity in $x$. \par By Theorem~5.1.17 in~\cite{luna}, $y(t,x)\in C^{\theta,2\theta}([\delta,T]\times \bar U)$ for any $\delta>0$ (condition (C) on $f$ is enough for this). 
Then, for fixed $\varepsilon>0$, $h(t,x)\in C^{\theta,2\theta}([1+\frac{\varepsilon}{2},T]\times \bar U)$, and from Theorem~\ref{teor-time regularity-Neumann}, $z(1+\frac{\varepsilon}{2})\in D(A)$, so that the boundary condition is fulfilled at $t=1+\frac{\varepsilon}{2}$. Then, we can apply Proposition~7.3.3~(iii) in~\cite{luna} to the IBV problem~\eqref{auxiliar-robin} for $1+\frac{\varepsilon}{2}<t\leq T$ with initial condition $y(1+\frac{\varepsilon}{2},x)=z(1+\frac{\varepsilon}{2})(x)$ for $x\in \bar U$, to deduce that $y\in C^{1,2}([1+\varepsilon,T]\times \bar U,\R^n)$. The proof of (i) is finished. \par Recall that $\varphi(0)\in C^{2\theta}(\bar U,\R^n)$ is the necessary and sufficient condition to guarantee that $y(t,x)\in C^{\theta,2\theta}([0,T]\times \bar U)$. With the assumption $\varphi\in C^{\theta,2\theta}([-1,0]\times \bar U, \R^n)$ in~(ii), $h(t,x)\in C^{\theta,2\theta}([0,T]\times \bar U)$. Arguing as in the previous paragraph, we can deduce that $y\in C^{1,2}([\varepsilon,T]\times \bar U,\R^n)$ for any $\varepsilon>0$. The proof is finished. \end{proof} \subsection{The case of Dirichlet boundary conditions}\label{sect-Dirichlet} This time, for each component $i=1,\ldots,n$ we consider on the space $C(\bar U)$ the differential operator $A_i^0 z_i = d_i\Delta z_i$ with domain $D(A_i^0)=\{z_i\in C^2(U)\cap C_0(\bar U)\;|\; A_i^0z_i\in C_0(\bar U)\}$. The closure $A_i$ of $A_i^0$ in $C(\bar U)$ is a sectorial operator which generates an analytic semigroup of bounded linear operators $(e^{tA_i})_{t\geq 0}$, with $e^{tA_i}$ compact for any $t>0$ (see Lunardi~\cite{luna} or Smith~\cite{smit}), but now the semigroup is not strongly continuous, since $\overline{ D(A_i)}=C_0(\bar U)$.
\par In this case, we also consider for each component $i$ the realization of the Dirichlet $d_i$-Laplacian on the Banach space $L^p(U)$ for a fixed $m<p<\infty$, that is, the operator $A_{i,p}:D(A_{i,p})\subset L^p(U)\to L^p(U)$ defined by $A_{i,p}z_i=d_i\Delta z_i$ (in a weak sense) for $z_i\in D(A_{i,p})$. This operator is sectorial, densely defined and $0\in\rho(A_{i,p})$. Then, for $\alpha\in (1/2+m/(2p),1)$, let $E_i^\alpha:=D((-A_{i,p})^\alpha)=\rg (-A_{i,p})^{-\alpha}$ be the domain of the fractional power of order $\alpha$ of $-A_{i,p}$, which is a Banach space with norm $\|z_i\|_\alpha = \|(-A_{i,p})^{\alpha}\,z_i\|_p$ and satisfies $E_i^\alpha\hookrightarrow C^1(\bar U)$ (see Theorem~1.6.1 in Henry~\cite{henr}). Besides, $E_i^\alpha$ is an intermediate space in the class $J_\alpha$ between $L^p(U)$ and $D(A_{i,p})$, that is, we have continuous embeddings $D(A_{i,p}) \hookrightarrow E_i^\alpha \hookrightarrow L^p(U)$ and there exists a constant $c_i>0$ such that $ \|z_i\|_\alpha\leq c_i\,\|A_{i,p}z_i\|_p^\alpha\,\|z_i\|_p^{1-\alpha}$ for any $z_i\in D(A_{i,p})$. Also, the following estimate holds, which will be used later on: \begin{equation}\label{estimate} \| (-A_{i,p})^\alpha\,e^{t A_{i,p}}\|_{\mathcal{L}(L^p(U))}\leq M_\alpha\,t^{-\alpha}\,e^{-w t}\,,\quad t>0\, , \end{equation} for some $w>0$ and $M_\alpha>0$ (see Theorem~6.13 in Pazy~\cite{pazy}). \par In all, $D(A_i)\hookrightarrow D(A_{i,p})\hookrightarrow E_i^\alpha \hookrightarrow C^1(\bar U) \hookrightarrow C(\bar U) \hookrightarrow L^p(U)$ and $E_i^\alpha$ is an intermediate space in the class $J_\alpha$ between $C(\bar U)$ and $D(A_i)$. \par This time, we consider on the product Banach space $L^p(U)^n\equiv L^p( U,\R^n)$ with norm $\|(z_1,\ldots,z_n)\|_p=\sum_{i=1}^n \|z_i\|_p$, the operator $A_p=\Pi_{i=1}^n A_{i,p}$ with domain $D(A_p)=\Pi_{i=1}^n D(A_{i,p})$ and the bounded linear operator $(-A_p)^{-\alpha}=\Pi_{i=1}^n (-A_{i,p})^{-\alpha}$.
We also consider the product Banach space $E^\alpha=\Pi_{i=1}^n E_i^\alpha$ endowed with the norm $\|(z_1,\ldots,z_n)\|_\alpha=\sum_{i=1}^n\|z_i\|_\alpha$. Thanks to H\"{o}lder's inequality, it is immediate to check that $E^\alpha$ is an intermediate Banach space between $E$ and $D(A)$ in the class $J_\alpha$. In fact, because of the continuous embeddings, $(e^{t A})_{t\geq 0}$ is an analytic semigroup of bounded linear operators on $E^\alpha$ and besides in this case: \begin{equation}\label{sup finito} \limsup_{t\to 0^+} \|e^{tA}\|_{\mathcal{L}(E^\alpha)}<\infty\,. \end{equation} Furthermore, it is easy to check that the semigroup of operators $(e^{t A})_{t\geq 0}$ is strongly continuous on $E^\alpha$, that is, $D(A)$ is dense in $E^\alpha$: just take any $z\in E^\alpha$, that is, $z=(-A_{p})^{-\alpha}\,y$ for some $y\in L^p(U,\R^n)$ and since $(-A_{p})^{-\alpha}$ commutes with $e^{tA_{p}}$, $\|e^{t A}z-z\|_\alpha= \|e^{t A_p}(-A_{p})^{-\alpha}\,y- (-A_{p})^{-\alpha}\,y\|_\alpha=\|e^{t A_p}y-y\|_p\to 0$ as $t\to 0^+$, since $(e^{t A_p})_{t\geq 0}$ is strongly continuous in $L^p(U,\R^n)$. In particular, $E^\alpha\hookrightarrow\overline{ D(A)}=C_0(\bar U,\R^n)$. Finally, $e^{t A}:E^\alpha \to E^\alpha$ is compact for any $t>0$. This follows from $E^\alpha \hookrightarrow E$, the compactness of $e^{(t/2) A}:E\to E$ and the boundedness of $e^{(t/2) A}:E\to E^\alpha$ because $E^\alpha$ is an intermediate space in the class $J_\alpha$. \par On this occasion, we consider $F:\W\times C([-1,0],E^\alpha)\to E$, $(\w,\varphi)\mapsto F(\w,\varphi)$ defined as in~\eqref{F}, and the retarded ACPs on $E^\alpha$ given in~\eqref{ACPdelay}. Although there are some results for these problems in the $\alpha$-norm (e.g., see Travis and Webb~\cite{trwe78}), here we opt to apply the ``method of steps'' to get existence and uniqueness of mild solutions of~\eqref{ACPdelay}, arguing on $[0,1]$ first, then on $[1,2]$, and so on.
In this way we can apply the well-established theory for semilinear ACPs with nonlinearities defined in intermediate spaces (for instance, see Chapter~7 in Lunardi~\cite{luna}). So, for fixed $\w\in \W$ and $\varphi\in C([-1,0],E^\alpha)$, let us define the map $ \wit F:[0,1]\times E^\alpha \to E$ by $\wit F(t,v)(x)=f(\w{\cdot}t,x,v(x),\varphi(t-1,x))$ for any $x\in \bar U$, for the map $f$ in~\eqref{family}. It is easy to check that condition (C) on $f$ is transferred to the map $\wit F$, in the sense that $\wit F$ is continuous and it is Lipschitz in $v$ in bounded sets of $E^\alpha$, uniformly for $t\in [0,1]$; that is, given $R>0$ there exists a $C_R>0$ such that for $t\in[0,1]$, \begin{equation}\label{lipsch v} \|\wit F(t,v_2)-\wit F(t,v_1)\|\leq C_R\,\|v_2-v_1\|_\alpha\quad\hbox{for any }\; \|v_1\|_\alpha, \|v_2\|_\alpha\leq R\,. \end{equation} \par With these conditions, the standard theory for the semilinear ACP in $E^\alpha$, \begin{equation}\label{ACP} \left\{\begin{array}{l} z'(t) = A z(t)+ \wit F(t,z(t))\,,\quad t>0\,,\\ z(0)=\varphi(0)\in E^\alpha\,, \end{array}\right. \end{equation} with $A$ sectorial and densely defined, says that the problem admits a unique mild solution $z=z(t,\w,\varphi)\in C([0,\delta],E^\alpha)$ for a certain $\delta=\delta(\w,\varphi)\in (0,1]$, that is, $z$ is a continuous solution of the integral equation \begin{equation*} z(t)=e^{tA}\,z(0) +\int_0^t e^{(t-s)A}\,\wit F(s,z(s)) \,ds\,,\quad t\in [0,\delta]\,. \end{equation*} Compare with~\eqref{variation constants} to see that $z$ is also a mild solution of~\eqref{ACPdelay} on $[0,\delta]$. \par Whenever the mild solution is globally defined on $[0,1]$, then we consider the ACP~\eqref{ACP} on $[1,2]$ with $\wit F(t,v)(x)=f(\w{\cdot}t,x,v(x),z(t-1,\w,\varphi)(x))$ for any $x\in \bar U$, and $z(1)=z(1,\w,\varphi)\in E^\alpha$. Now $\wit F$ is continuous and satisfies~\eqref{lipsch v} on $[1,2]$, and once more the problem admits a unique mild solution. 
Gluing the two solutions together gives the mild solution on $[0,1+\delta']$, for the $\delta'\in(0,1]$ provided by the second step, and note once more that $z$ is a mild solution of~\eqref{ACPdelay} too. This procedure can be iterated, as long as the mild solution is defined. \par Standard arguments using a generalized version of Gronwall's lemma (see Lemma~7.1.1 in Henry~\cite{henr}) permit us to see that the mild solution $z(t,\w,\varphi)$ depends continuously on the initial condition $\varphi$, and also on $\w\in\W$ (note that the map $F$ depends on both $\w$ and $\varphi$). Therefore, mild solutions of the ACPs permit us to set a locally defined continuous skew-product semiflow \begin{equation*} \begin{array}{cccc} \tau: &\mathcal{U} \subseteq\R_+\times \W\times C([-1,0],E^\alpha) & \longrightarrow & \W\times C([-1,0],E^\alpha)\\ & (t,\w,\varphi) & \mapsto &(\w{\cdot}t,z_t(\w,\varphi))\,, \end{array} \end{equation*} for an appropriate open set $\mathcal{U}$, where $z_t(\w,\varphi)(s)=z(t+s,\w,\varphi)$ for every $s\in [-1,0]$. Also here, if a solution $z(t,\w,\varphi)$ remains bounded, then it is defined on the whole positive real line and the semiorbit of $(\w,\varphi)$ is relatively compact: see the arguments in the proof of Proposition~3.1 in Novo et al.~\cite{nonuobsa}. \par Note that we can also consider $F:\W\times C([-1,0],E)\to E$ defined as in~\eqref{F}, and solve the retarded ACP~\eqref{ACPdelay} for any $\varphi\in C([-1,0],E)$ (with $\varphi(0)\in C_0(\bar U,\R^n)$ if continuity of the mild solution up to $t=0$ is wanted). Then, since $E^\alpha$ is an intermediate space between $E$ and $D(A)$, from~\eqref{variation constants} it follows that for $t>1$, $z_t(\w,\varphi)\in C([-1,0],E^\alpha)$ (see Proposition~4.2.1 in Lunardi~\cite{luna}). In fact, one can prove that for $t>1$, the section semiflow $\tau_t:\W\times C([-1,0],E)\to \W\times C([-1,0],E^\alpha)$ is compact on its domain: argue as in Proposition~2.4 in Travis and Webb~\cite{trwe}.
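\par For the reader's convenience, we recall the generalized Gronwall inequality in the form in which it is used in this paper; this is a slightly simplified restatement of Lemma~7.1.1 in Henry~\cite{henr}, to which we refer for the precise statement and proof. If $b\geq 0$, $\beta>0$, $a(t)$ is nonnegative and locally integrable on $[0,T)$, and a nonnegative locally integrable map $u$ satisfies \[ u(t)\leq a(t)+b\int_0^t (t-s)^{\beta-1}\,u(s)\,ds\,,\quad 0\leq t<T\,, \] then \[ u(t)\leq a(t)+\theta\int_0^t E_\beta'(\theta(t-s))\,a(s)\,ds\,,\quad 0\leq t<T\,, \] for $\theta=(b\,\Gamma(\beta))^{1/\beta}$ and $E_\beta(z)=\sum_{n=0}^\infty z^{n\beta}/\Gamma(n\beta+1)$, so that $E_\beta'(z)$ behaves like $z^{\beta-1}/\Gamma(\beta)$ as $z\to 0^+$.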
\par We finish this section with some results on regularity of solutions. A classical solution is defined exactly as in Definition~\ref{defi-clasica}. \begin{teor}\label{teor-time regularity-Dirichlet} Assume conditions $(C)$ and $(C^\theta(t))$, for some $\theta\in(0,1/2)$, on the map $f$ in~\eqref{family}. Then, for fixed $\w\in\W$ and $\varphi\in C([-1,0],E^\alpha)$: \begin{itemize} \item[(i)] The mild solution of~\eqref{ACPdelay} is classical for $t\geq 1$, provided that it is defined. \item[(ii)] If $\varphi:[-1,0]\to E$ is $\theta$-H\"{o}lder continuous, then the mild solution of~\eqref{ACPdelay} is a classical solution on intervals $[0,T]$ as long as it is defined. \end{itemize} \end{teor} \begin{proof} The proof follows the same lines as that of Theorem~\ref{teor-time regularity-Neumann}. Just recall that $E^\alpha\hookrightarrow\overline{ D(A)}=C_0(\bar U,\R^n)$, and that with Dirichlet boundary conditions the mild solution $z\in C^\theta([0,\varepsilon],E)$ if and only if $\varphi(0)\in C_0^{2\theta}(\bar U,\R^n)$ (see~\cite{luna}). Since $\varphi(0)\in E^\alpha\hookrightarrow C^1(\bar U,\R^n)$ and $\theta<1/2$, we can take this for granted. The proof is finished. \end{proof} \begin{teor}\label{teor-space regularity-Dirich} Assume conditions $(C)$, $(C^{\theta}(t))$ and $(C^{2\theta}(x))$ on the map $f$ in~\eqref{family}, for some $\theta\in(0,1/2)$, with Dirichlet boundary conditions. For fixed $\w\in\W$ and $\varphi\in C([-1,0],E^\alpha)$, assume that the mild solution $z(t)=z(t,\w,\varphi)$ is defined on a time interval $[0,T]$ and set $y(t,x)=z(t)(x)$ for $t\in [0,T]$ and $x\in \bar U$, as well as $y(s,x)=\varphi(s,x)$ for $s\in [-1,0]$ and $x\in \bar U$. Then: \begin{itemize} \item[(i)] If $T>1$, for any $\varepsilon>0$ the map $y\in C^{1,2}([1+\varepsilon,T]\times \bar U,\R^n)$ is a solution of the IBV problem~\eqref{family} for $1+\varepsilon<t\leq T$.
\item[(ii)] If $\varphi\in C^\theta([-1,0], E)$, then for any $\varepsilon>0$ the map $y\in C^{1,2}([\varepsilon,T]\times \bar U,\R^n)$ is a solution of the IBV problem~\eqref{family} for $\varepsilon<t\leq T$. \end{itemize} \end{teor} \begin{proof} The proof follows the same lines as that of Theorem~\ref{teor-space regularity-Neumann}. For the continuous map $h(t,x)=f(\w{\cdot}t,x,y(t,x),y(t-1,x))$ for $(t,x)\in [0,T]\times\bar U$, consider the IBV problem~\eqref{auxiliar-robin} but with boundary condition $y(t,x)=0$ for $0<t\leq T$, $x\in \partial U$. \par This time by Theorem~5.1.11 in~\cite{luna}, $y(t,x)\in C^{\theta,2\theta}([\varepsilon,T]\times \bar U)$ for any $\varepsilon>0$. Since we are working on $E^\alpha$ with $\alpha>1/2$, $\varphi(0)\in E^\alpha\hookrightarrow C^{2\theta}_0(\bar U,\R^n)$ and then in fact $y(t,x)\in C^{\theta,2\theta}([0,T]\times \bar U)$. Therefore, the map $h(t,x)\in C^{\theta,2\theta}([1,T]\times \bar U)$. Besides, since for any $t\geq 0$, $z(t)\in E^\alpha \hookrightarrow C_0(\bar U,\R^n)$, we have that $z(t)(x)=0$ for any $t\geq 0$ and any $x\in \partial U$. With these conditions we can apply Proposition~7.3.2~(iii) in~\cite{luna} to the IBV problem for $1<t\leq T$ with initial condition $y(1,x)=z(1)(x)$ for $x\in \bar U$, to get that $y\in C^{1,2}([1+\varepsilon,T]\times \bar U,\R^n)$ and (i) is proved. \par With the assumptions in (ii), and the fact that $\varphi(s)\in E^\alpha\hookrightarrow C^{2\theta}(\bar U,\R^n)$ for any $s\in [-1,0]$, now $h(t,x)\in C^{\theta,2\theta}([0,T]\times \bar U)$. Once more Proposition~7.3.2~(iii) in~\cite{luna} implies that $y\in C^{1,2}([\varepsilon,T]\times \bar U,\R^n)$. The proof is finished. \end{proof} \subsection{Monotone skew-product semiflows induced by quasimonotone parabolic PFDEs}\label{sec-monotonos} In this section we are concerned with the classical quasimonotone condition which renders the skew-product semiflow induced by mild solutions monotone. 
We state this result, together with a technical inequality which will be fundamental in Section~\ref{sec-uniform persistence}. \par First of all, we describe the cones of positive vectors in the spaces we are dealing with. In the case of Neumann or Robin boundary conditions, $C([-1,0],E)$ is a strongly ordered Banach space with positive cone $ C_+([-1,0],E)=\{\varphi\in C([-1,0],E)\mid \varphi(s)\in E_+\;\text{for} \; s\in [-1,0]\}$ where $E_+=\{z\in E\mid z(x)\geq 0\;\text{for} \; x\in\bar U\}$ and $\R^n_+=\{y\in\R^n\mid y_i\geq 0\; \text{for}\; i=1,\ldots,n\}$. Note that we can trivially identify \[ \Int C_+([-1,0],E)=\{\varphi \in C([-1,0]\times\bar U,\R^n)\mid \varphi(s,x)\gg 0 \;\text{for}\; s\in [-1,0]\,,\,x\in\bar U \}\,. \] \par In the case of Dirichlet boundary conditions, $C([-1,0],E^\alpha)$ is a strongly ordered Banach space with $ C_+([-1,0],E^\alpha)=\{\varphi\in C([-1,0],E^\alpha)\mid \varphi(s)\in E^\alpha_+\;\text{for} \; s\in [-1,0]\}\,, $ where the positive cone in $E^\alpha$ is $E^\alpha_+=\{z\in E^\alpha\,\big|\; z(x)\geq 0\;\text{for} \; x\in\bar U \}$. Besides, $E^\alpha_+$ has a nonempty interior, since \begin{equation*} \Big\{z\in E^\alpha_+\,\big|\; z(x)\gg 0\;\text{for} \; x\in U \;\text{and}\; \frac{\partial z}{\partial n}(x)\ll 0 \;\text{for} \; x\in\partial U \Big\}= \Int E^\alpha_+\,, \end{equation*} and $\Int C_+([-1,0],E^\alpha)=\{\varphi\in C([-1,0],E^\alpha)\mid \varphi(s)\in \Int E^\alpha_+ \text{ for }s\in[-1,0]\}\not=\emptyset$. \par To unify the notation, $E^\gamma$ will stand for the Banach space $E$ in the problem with Neumann or Robin boundary conditions, and for $E^\alpha$ in the problem with Dirichlet boundary conditions; and $C_\gamma$ for the space $C([-1,0],E^\gamma)$ with the sup-norm $\|\,{\cdot}\,\|_{C_\gamma}$. Also, the order relations in $E^\gamma$ and $C_\gamma$ will just be denoted by $\leq$, $<$ and $\ll$ according to~\eqref{order}, but bear in mind the different spaces involved in each case.
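\par Recall that, as is standard in strongly ordered Banach spaces (consistently with~\eqref{order}), given a Banach space $X$ ordered by a cone $X_+$ with nonempty interior, for $x,y\in X$ one writes: \[ x\leq y \;\Longleftrightarrow\; y-x\in X_+\,;\qquad x< y \;\Longleftrightarrow\; y-x\in X_+ \;\text{and}\; x\not= y\,;\qquad x\ll y \;\Longleftrightarrow\; y-x\in \Int X_+\,. \] These are the relations meant in $E^\gamma$ and $C_\gamma$ in what follows.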
\begin{prop}\label{prop-monotone} Assume hypotheses $\rm{(C)}$ and $(C^{\theta}(t))$, for some $0<\theta<1/2$, on the map $f$ in~\eqref{family}, plus the quasimonotone condition: \begin{itemize} \item[(QM)] If $\,y,\wit y, u,\wit u\in \R^n$ with $y\leq u$, $\wit y\leq \wit u$ and $y_i=u_i$ for some $i\in\{1,\ldots,n\}$, then $f_i(\w,x,y,\wit y)\leq f_i(\w,x,u, \wit u)$ for any $\w\in\W$ and $x\in\bar U$. \end{itemize} Besides, in the Dirichlet case assume further: \begin{itemize} \item[(DM)] $f(\w,x,0,0)=0$ for any $\w\in\W$ and $x\in\partial U$. \end{itemize} Then: \begin{itemize} \item[(i)] The induced skew-product semiflow on $\W\times C_\gamma$ is monotone, that is, if $\varphi,\psi\in C_\gamma$ with $\varphi\leq \psi$, then $z_t(\w,\varphi)\leq z_t(\w,\psi)$ for any $\w\in\W$ and any $t\geq 0$ where both terms are defined. \item[(ii)] Given $\w\in\W$ and $\varphi,\psi\in C_\gamma$ with $\varphi\leq \psi$ such that $z(t,\w,\varphi)$ and $z(t,\w,\psi)$ are defined for $t\in[0,\beta]$ for some $\beta>0$, there exists an $L=L(\w,\varphi,\psi,\beta)>0$ such that for each $i=1,\ldots,n$, and for each $t\in[0,\beta]$, \[ z_i(t,\w,\psi)-z_i(t,\w,\varphi)\geq e^{-L t}\,e^{t A_i}\,(\psi_i(0)-\varphi_i(0))\,. \] \end{itemize} \end{prop} \begin{proof} (i) For each fixed $\w\in \W$, it follows from the results in Martin and Smith~\cite{masm0,masm}. \par (ii) We include the proof for the sake of completeness, although the result in the Neumann case follows from Lemma~4.3 in Novo et al.~\cite{nonuobsa}. First of all, observe that if $\w\in\W$, and $\varphi,\psi\in C_\gamma$ with $\varphi\leq \psi$ and $\|\varphi(s)(x)\|,\|\psi(s)(x)\|\leq \rho$ for any $s\in [-1,0]$ and $x\in \Bar U$, then, for any $t\in\R$ and $x\in\bar U$, \begin{multline}\label{lemma 1.1} f_i(\w{\cdot}t,x,\psi(0)(x),\psi(-1)(x))-f_i(\w{\cdot}t,x,\varphi(0)(x),\varphi(-1)(x))\geq \\ - L\,(\psi_i(0)(x)-\varphi_i(0)(x))\,, \end{multline} for the constant $L=L_\rho>0$ provided in $\rm{(C)}$. 
To see it, subtract and add the term $f_i(\w{\cdot}t,x,\phi(0)(x),\varphi(-1)(x))$ for the map $\phi\in C_\gamma$ defined by $\phi_i=\psi_i$ and $\phi_j=\varphi_j$ if $j\not= i$, which satisfies $\varphi\leq \phi\leq \psi$: the first difference $f_i(\w{\cdot}t,x,\psi(0)(x),\psi(-1)(x))-f_i(\w{\cdot}t,x,\phi(0)(x),\varphi(-1)(x))$ is nonnegative by (QM), since $\phi(0)(x)\leq \psi(0)(x)$ with equal $i$-th components and $\varphi(-1)(x)\leq \psi(-1)(x)$, whereas the second difference is bounded below by $-L\,(\psi_i(0)(x)-\varphi_i(0)(x))$ by (C), since $\phi(0)(x)$ and $\varphi(0)(x)$ differ only in their $i$-th components. \par Now, let us fix $\varphi,\psi\in C_\gamma$ with $\varphi\leq \psi$ and such that $z(t,\w,\varphi)$ and $z(t,\w,\psi)$ are defined for $t\in[0,\beta]$, and let $\rho>0$ be such that $\sup\{\|z(t,\w,\varphi)(x)\|,\,\|z(t,\w,\psi)(x)\|\mid t\in[-1,\beta],\, x\in \bar U\}< \rho$. Then take $L=L_\rho$, the constant given in (C), which obviously depends on $\w,\,\varphi,\,\psi$ and $\beta$. \par As a first step, we consider the particular case when $\varphi,\psi\in C^\theta([-1,0],E)$, and $\varphi(0),\,\psi(0)\in C^{2\theta}(\bar U, \R^n)$ in the Neumann or Robin cases. Then, either Theorem~\ref{teor-time regularity-Neumann} or Theorem~\ref{teor-time regularity-Dirichlet} applies to get that the mild solutions $z(t,\w,\varphi)$ and $z(t,\w,\psi)$ are classical solutions on $[0,\beta]$, so that for each fixed $i=1,\ldots,n$ we can consider the map on $[0,\beta]$, with values in $D(A_i)$ for $t>0$, defined by $v_i(t)=e^{L t}\,(z_i(t,\w,\psi)-z_i(t,\w,\varphi))$. Then, for $t>0$, and for $F:\W\times C_\gamma\to E$ defined in~\eqref{F}, we have that \begin{equation*} v_i'(t)= L v_i(t)+ A_i v_i(t)+e^{L t} (F_i(\w{\cdot}t,z_t(\w,\psi))- F_i(\w{\cdot}t,z_t(\w,\varphi))). \end{equation*} Now, for any $t\in(0,\beta]$, since $z_t(\w,\varphi)\leq z_t(\w,\psi)$ by (i), and by the choice of $\rho$,~\eqref{lemma 1.1} applies and we can write $v_i'(t)\geq L v_i(t)+A_i v_i(t)-L v_i(t)= A_i v_i(t)$, so that $g_i(t)=v_i'(t)-A_i v_i(t)\geq 0$. Now, $(e^{tA_i})_{t\geq 0}$ is a positive semigroup of operators in $C(\bar U)$ in the Neumann or Robin cases, and in $C_0(\bar U)$ in the Dirichlet case (for instance, see Smith \cite{smit}).
Besides, in the Dirichlet case, $\overline{ D(A_i)}=C_0(\bar U)$ and {\rm (DM)} is assumed, so that $g_i(t)=L v_i(t)+e^{L t} (F_i(\w{\cdot}t,z_t(\w,\psi))- F_i(\w{\cdot}t,z_t(\w,\varphi)))\in C_0(\bar U)$ for $t>0$. Finally, since $v_i'(t)=A_i v_i(t)+g_i(t)$, we can write \begin{equation*} v_i(t)=e^{tA_i}\,v_i(0)+\int_0^t e^{(t-s)A_i}\,g_i(s)\,ds \geq e^{tA_i}\,v_i(0) = e^{tA_i}(\psi_i(0)-\varphi_i(0)) \,, \end{equation*} from which the desired inequality immediately follows. \par In the general case, note that the set of H\"{o}lder continuous maps $C^\theta([-1,0],E^\gamma)$, with $\varphi(0)\in C^{2\theta}(\bar U, \R^n)$ in the Neumann or Robin cases, is dense in $C_\gamma$, and in the Dirichlet case $C^\theta([-1,0],E^\alpha)\subset C^\theta([-1,0],E)$. Then, for $\varphi,\psi\in C_\gamma$ as before, we can take sequences $\{\varphi_n\},\{\psi_n\}$ as in the first step with $\varphi_n\to\varphi$ and $\psi_n\to\psi$, $\varphi_n\leq \varphi\leq \psi\leq \psi_n$ for any $n\geq 1$ and such that $\|z(t,\w,\varphi_n)(x)\|,\,\|z(t,\w,\psi_n)(x)\|\leq \rho$ for any $t\in[0,\beta]$, $x\in \bar U$ and $n\geq 1$. Then, the proof is finished by applying the first step to the pairs $\varphi_n,\psi_n$ and taking limits as $n\to\infty$. \end{proof} Note that the standard parabolic maximum principle implies that $e^{t A_i}$ is strong\-ly positive for $t>0$, i.e., if $z_i>0$, then $e^{t A_i}z_i\gg 0$. Then, in the situation of the previous result, if $\varphi_i(0)<\psi_i(0)$ for some $i$, then $z_i(t,\w,\varphi)\ll z_i(t,\w,\psi)$ for $t>0$. \section{The linearized semiflow and Lyapunov exponents}\label{sec-linearized sem} \noindent In this section we build the linearized semiflow under regularity conditions in the problems. Besides, when the semiflow is also monotone, we present the concepts of a continuous separation of type~II and of the related principal spectrum, and show how the latter can be calculated in terms of some Lyapunov exponents.
\par From now on, we use the unified notation introduced in Section~\ref{sec-monotonos} to include any of the boundary conditions, but whenever it is convenient to make a distinction, we will write $C=C([-1,0],E)$ with sup-norm $\|\,{\cdot}\,\|_{C}$ in the Neumann and Robin cases, and $C_\alpha=C([-1,0],E^\alpha)$ with sup-norm $\|\,{\cdot}\,\|_{C_\alpha}$ in the Dirichlet case. \par In the case of Neumann boundary conditions, the next result can be found in Novo et al.~\cite{nonuobsa}, and it can be trivially extended to the case of Robin boundary conditions. The proof is inspired by the proof of Theorem~3.5 in Novo et al.~\cite{nonuobsa}. \begin{teor}\label{teor-linearized sk} Consider the family of IBV problems with delay~\eqref{family}, $\w\in\W$ and assume that $f:\W\times\bar U\times \R^n\times \R^n\to\R^n$ is continuous and of class $C^1$ in the $y$ and $\wit y$ variables. Then, the skew-product semiflow generated by mild solutions on $\W\times C_\gamma$, $\tau(t,\w,\varphi)=(\w{\cdot}t,z_t(\w,\varphi))$ is of class $C^1$ with respect to $\varphi$. Furthermore, for each $\psi\in C_\gamma$, $D_{\!\varphi} z_t(\w,\varphi)\,\psi = v_t(\w,\varphi,\psi)$ for the mild solution $v(t,\w,\varphi,\psi)$ of the associated variational retarded ACP along the semiorbit of $(\w,\varphi)$, \begin{equation}\label{variational} \left\{\begin{array}{l} v'(t)=Av(t)+D_{\!\varphi} F(\w{\cdot}t,z_t(\w,\varphi))\,v_t\,, \quad t> 0\,, \\v_0=\psi\in C_\gamma\,,\end{array}\right. \end{equation} which is defined for $t$ in $[0,b)$, the maximal interval of definition of $z(t,\w,\varphi)$. \end{teor} \begin{proof} We write the proof for the case of Dirichlet boundary conditions. Recall that $F:\W\times C_\alpha\to E$ is defined in~\eqref{F} and that $z(t,\w,\varphi)$ is a mild solution of the retarded ACP~\eqref{ACPdelay}.
In this case with fixed delay, \begin{multline}\label{derivada} [D_{\!\varphi} F(\w{\cdot}t,z_t(\w,\varphi))\,v_t](x)=D_{y} f(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))\,v(t)(x)\\ +D_{\wit y} f(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))\,v(t-1)(x)\,,\quad x\in \bar U. \end{multline} \par By the $C^1$ character of $f(\w,x,y,\wit y)$ in $(y,\wit y)$, we can argue as in the previous sections to get the existence of a unique mild solution of~\eqref{variational}, denoted by $v(t)=v(t,\w,\varphi,\psi)$. By linearity of the problem, $v$ exists in the large, i.e., \begin{equation}\label{mild v} v(t)=e^{tA}\,\psi(0) +\int_0^t e^{(t-s)A}\,D_{\!\varphi} F(\w{\cdot}s,z_s(\w,\varphi))\,v_s \,ds\,,\quad\hbox{for any }\, t\in [0,b)\,. \end{equation} \par Let us fix a $t>0$, and let us first check that for $\w\in \W$ and $\varphi,\,\psi\in C_\alpha$, $D_{\!\varphi} z_t(\w,\varphi)\,\psi$ exists, provided that $z_t(\w,\varphi)$ exists, and $D_{\!\varphi} z_t(\w,\varphi)\,\psi = v_t(\w,\varphi,\psi)$, and second that the map $\Om\times C_\alpha\to \mathcal L(C_\alpha)$, $(\w,\varphi) \mapsto D_{\!\varphi} z_t(\w,\varphi)$ is continuous. \par First of all, note that, for fixed $t>0$ and $(\w,\varphi)\in \W\times C_\alpha$ such that $z_t(\w,\varphi)$ exists, and given $\psi\in C_\alpha$, the solution $z(\,\cdot\,,\w,\varphi+\varepsilon \,\psi)$ of~\eqref{ACPdelay} with initial data $z_0=\varphi+\varepsilon \,\psi$ is also defined on $[0,t]$, provided that $|\varepsilon|\leq \varepsilon_0$ for a sufficiently small $\varepsilon_0$. We want to prove the existence of the limit \[ \lim_{\varepsilon\to 0} \frac{z_t(\w,\varphi+\varepsilon \,\psi)-z_t(\w,\varphi)}{\varepsilon}= v_t(\w,\varphi,\psi)\,.
\] For convenience, we will get to this by proving that $\lim_{\varepsilon\to 0} h^\varepsilon(t)=0$ for the map \[ h^\varepsilon(s)= \frac{1}{\varepsilon} \sup_{r\in [0,s]} \| z(r,\w,\varphi+\varepsilon \,\psi)-z(r,\w,\varphi)-\varepsilon\, v(r,\w,\varphi,\psi)\|_\alpha\,,\quad s\in [0,t]\,. \] Let us call $g^\varepsilon(r)=z(r,\w,\varphi+\varepsilon \,\psi)-z(r,\w,\varphi)-\varepsilon\, v(r,\w,\varphi,\psi)$ for $r\in[0,s]$ and recall that $\|z\|_\alpha= \|(-A_{p})^{\alpha}\,z\|_p$, for any $z\in E^\alpha$. \par Having in mind~\eqref{variation constants} and~\eqref{mild v} we write for $r\in [0,s]$, \begin{multline*} g^\varepsilon(r)=\displaystyle\int_0^{r} e^{(r-l)A}\,\big[ F(\w{\cdot}l,z_l(\w,\varphi+\varepsilon \,\psi))- F(\w{\cdot}l,z_l(\w,\varphi))\\ -\varepsilon D_{\!\varphi} F(\w{\cdot}l,z_l(\w,\varphi))\,v_l(\w,\varphi,\psi)\big]\,dl\,, \end{multline*} and the term $F(\w{\cdot}l,z_l(\w,\varphi+\varepsilon \,\psi))- F(\w{\cdot}l,z_l(\w,\varphi))$ can be written, applying the mean value theorem to $F$, as \[ \int_0^1 D_{\!\varphi} F(\w{\cdot}l,\lambda\,z_l(\w,\varphi+\varepsilon \,\psi)+ (1-\lambda)\,z_l(\w,\varphi))(z_l(\w,\varphi+\varepsilon \,\psi)-z_l(\w,\varphi))\,d\lambda\,. \] Consequently, for any $r\in [0,s]$ we can write \begin{multline*} g^\varepsilon(r)= \displaystyle\int_0^{r} e^{(r-l)A}\left(\int_0^1 D_{\!\varphi} F(\w{\cdot}l,\lambda\,z_l(\w,\varphi+\varepsilon \,\psi)+ (1-\lambda)\,z_l(\w,\varphi))\,g_l^\varepsilon\,d\lambda \right)dl \\ + \varepsilon \displaystyle\int_0^{r} e^{(r-l)A}\left( \int_0^1 \big[ D_{\!\varphi} F(\w{\cdot}l,\lambda\,z_l(\w,\varphi+\varepsilon \,\psi)+ (1-\lambda)\,z_l(\w,\varphi)) \right.\\ - D_{\!\varphi} F(\w{\cdot}l,z_l(\w,\varphi))\big]\,v_l(\w,\varphi,\psi)\,d\lambda \Big)\,dl \,. 
\end{multline*} From this, taking into account that: \par\smallskip - there exists an $M_\alpha'>0$ such that \begin{equation}\label{cota} \|(-A_{p})^\alpha\,e^{rA}\,z\|_p\leq M_\alpha'\,r^{-\alpha}\,\|z\| \quad\text{for any}\; z\in E\;\text{and}\; r>0, \end{equation} because of~\eqref{estimate} and the continuous embedding $E\hookrightarrow L^p(U,\R^n)$; - $\|v_l(\w,\varphi,\psi)\|_{C_\alpha}\leq K_1$ for some $K_1>0$ and for any $l\in[0,t]\,$; - $\sup_{\lambda\in[0,1]} \| D_{\!\varphi} F(\w{\cdot}l,\lambda\,z_l(\w,\varphi+\varepsilon \,\psi)+ (1-\lambda)\,z_l(\w,\varphi))\|_{\mathcal L(C_\alpha,E)}\leq K_2$ for some $K_2>0$, for any $l\in[0,t]$ and for small enough $|\varepsilon|$, because of the continuity of $D_{\!\varphi} F$ in $\Om\times C_\alpha$ and the compactness of $\{z_l(\w,\varphi) \mid l\in[0,t]\}$ for the norm in $C_\alpha$; - $\lim_{\varepsilon\to 0}\alpha^\varepsilon(s)=0$ uniformly for $s\in[0,t]$, where \par \noindent${\displaystyle \alpha^\varepsilon(s)=M_\alpha'\,K_1\!\!\!\sup_{r\in[0,s]}\!\int_0^r (r-l)^{-\alpha}\left(\int_0^1 \!\!\| D_{\!\varphi} F(\w{\cdot}l,\lambda\,z_l(\w,\varphi+\varepsilon \,\psi)+ (1-\lambda)\,z_l(\w,\varphi)) \right.}$\\ ${\displaystyle ~\hspace{5.5cm}- D_{\!\varphi} F(\w{\cdot}l,z_l(\w,\varphi))\|_{\mathcal L(C_\alpha,E)}\,d\lambda\Big)dl\,;}$ - the map $h^\varepsilon(l)$ is nondecreasing for $l\in [0,t]$, hence, for $s\in[0,t]$, \[ \sup_{r\in[0,s]}\int_0^r M_\alpha'\,(r-l)^{-\alpha} K_2 \, h^\varepsilon(l)\,dl=\int_0^s M_\alpha'\,(s-l)^{-\alpha} K_2 \, h^\varepsilon(l)\,dl\,; \] \par\smallskip\noindent we obtain that for any $s\in [0,t]$, \[ h^\varepsilon(s)=\frac{1}{\varepsilon}\,\sup_{r\in[0,s]} \|g^\varepsilon(r)\|_\alpha\leq \alpha^\varepsilon(s)+ \int_0^s M_\alpha'\,(s-l)^{-\alpha} K_2 \, h^\varepsilon(l)\,dl\,, \] and applying the generalized Gronwall's lemma, we get that for any $s\in [0,t]$, \[ h^\varepsilon(s)\leq \alpha^\varepsilon(s)+ \theta \int_0^s H(\theta(s-l))\,\alpha^\varepsilon(l)\,dl\,, \] where 
the constant $\theta$ depends on the constants $M_\alpha'\,K_2$ and on $1-\alpha$, and the map $H(s)$ behaves like $\frac{s^{-\alpha}}{\Gamma(1-\alpha)}$ as $s\to 0^+$ (see Lemma 7.1.1 in Henry~\cite{henr} for more details). From here, we can deduce that $\lim_{\varepsilon\to 0}h^\varepsilon(t)=0$, as we wanted to see. \par To finish the proof, let us fix a $t>0$ and let us check the continuity of the map $\Om\times C_\alpha\to \mathcal L(C_\alpha)$, $(\w,\varphi) \mapsto D_{\!\varphi} z_t(\w,\varphi)$. So, let us take $\{(\w_n,\varphi_n)\}_{n\geq 1}\subset \Om\times C_\alpha$ with $(\w_n,\varphi_n)\to (\w,\varphi)$ and let us see that \begin{align*} \|D_{\!\varphi} &z_t(\w_n,\varphi_n)-D_{\!\varphi} z_t(\w,\varphi)\|_{\mathcal L(C_\alpha)}=\sup_{\|\psi\|\leq 1}\|v_t(\w_n,\varphi_n,\psi)- v_t(\w,\varphi,\psi)\|_{C_\alpha}\\ &\leq \sup_{\|\psi\|\leq 1}\sup_{s\in [0,t]}\|v(s,\w_n,\varphi_n,\psi)- v(s,\w,\varphi,\psi)\|_\alpha\to 0\quad\hbox{as}\; n\to\infty\,. \end{align*} The general arguments are similar to the ones used before. Using~\eqref{mild v}, we first apply the generalized Gronwall's inequality to prove that $\|v_s(\w,\varphi,\psi)\|_{C_\alpha}$ is uniformly bounded for $s\in[0,t]$ and $\|\psi\|\leq 1$. Note that~\eqref{sup finito} is needed at this point. 
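For the reader's convenience, we recall the generalized Gronwall inequality in the form in which it is applied throughout this proof; this is a reformulation (with $\beta=1-\alpha$) of Lemma~7.1.1 in Henry~\cite{henr}, to which we refer for the precise constants: if $b\geq 0$, $\alpha\in[0,1)$ and $a(\cdot)$, $u(\cdot)$ are nonnegative and locally integrable on $[0,T]$, then

```latex
u(t)\le a(t)+b\int_0^t (t-s)^{-\alpha}\,u(s)\,ds \;\;\text{on } [0,T]
\;\Longrightarrow\;
u(t)\le a(t)+\theta\int_0^t H(\theta(t-s))\,a(s)\,ds \;\;\text{on } [0,T]\,,
\quad\text{where}
\]
\[
\theta=\big(b\,\Gamma(1-\alpha)\big)^{1/(1-\alpha)}\,,\qquad
H(s)=\frac{d}{ds}\sum_{n\ge 0}\frac{s^{n(1-\alpha)}}{\Gamma\big(n(1-\alpha)+1\big)}
\sim\frac{s^{-\alpha}}{\Gamma(1-\alpha)}\quad\text{as } s\to 0^+\,.
```

In our applications $b=M_\alpha'\,K_2$, which explains the dependence of $\theta$ on $M_\alpha'\,K_2$ and on $1-\alpha$.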
Then, again using~\eqref{mild v} for $v(s,\w_n,\varphi_n,\psi)- v(s,\w,\varphi,\psi)$, a further application of the generalized Gronwall's inequality, together with~\eqref{cota} and the facts that: \par\smallskip - $\des\sup_{s\in[0,t]}\des\sup_{n\geq 1}\|D_{\!\varphi} F(\w_n{\cdot}s,z_s(\w_n,\varphi_n))\|_{\mathcal L(C_\alpha,E)}<\infty\,;$ - $\des\lim_{n\to\infty}\des\sup_{s\in[0,t]}\|D_{\!\varphi} F(\w_n{\cdot}s,z_s(\w_n,\varphi_n))-D_{\!\varphi} F(\w{\cdot}s,z_s(\w,\varphi))\|_{\mathcal L(C_\alpha,E)}= 0\,;$ - $l\in [0,t]\mapsto \des\sup_{r\in[0,l]}\des\sup_{\|\psi\|\leq 1}\|v(r,\w_n,\varphi_n,\psi)- v(r,\w,\varphi,\psi)\|_\alpha$ is nondecreasing; \par\smallskip\noindent allows us to conclude that the above limit is $0$. The proof is finished. \end{proof} Under the conditions of the previous result, if there is a compact positively invariant set $K\subset \W\times C_\gamma$ for $\tau$ (e.g., if there is a bounded solution $z(t,\w,\varphi)$ and $K$ is the omega-limit set of $(\w,\varphi)$), one can build the linearized skew-product semiflow over $K$: \begin{equation*} \begin{array}{cccl} L: & \R_+\times K \times C_\gamma& \longrightarrow & K \times C_\gamma\\ & (t,(\w,\varphi),\psi) & \mapsto &(\tau(t,\w,\varphi),v_t(\w,\varphi,\psi))\,, \end{array} \end{equation*} with $v_t(\w,\varphi,\psi)=D_{\!\varphi} z_t(\w,\varphi)\,\psi$, where $v(t,\w,\varphi,\psi)$ is the mild solution of the variational retarded ACP~\eqref{variational} along the semiorbit of $(\w,\varphi)$. Note that, because of the boundedness of $K$, the semiflow inside $K$ is globally defined. \par It is important to note that, if $K$ is $\tau$-invariant and compact, in the Dirichlet case $K$ can be equally considered with either the topology of $\W\times C_\alpha$ or that of $\W\times C$. \begin{prop} If $K$ is a compact $\tau$-invariant subset of $\Om\times C$, then $K \subset \Omega \times C_\alpha$ and the restrictions of both topologies to $K$ agree.
\end{prop} \begin{proof} Since $\tau_t(K)=K$ for any $t\geq 0$ and $\tau_t:\W\times C\to \W\times C_\alpha$ is compact for $t>1$, $K$ is relatively compact in $\W\times C_\alpha$; and it is closed because the inclusion $\Om\times C_\alpha\hookrightarrow \Om\times C$ is continuous. Thus, the identity map with the two topologies $i:(K,\W\times C_\alpha)\to (K,\W\times C)$ is a homeomorphism, as it is continuous, bijective and $(K,\W\times C_\alpha)$ is compact. \end{proof} Closely related to the classical concept of a continuous separation in the terms given by Pol\'{a}\v{c}ik and Tere\v{s}\v{c}\'{a}k~\cite{pote} and Shen and Yi~\cite{shyi}, Novo et al.~\cite{noos6} introduced the concept of a continuous separation of type~II, which is the appropriate one if there is delay in the equations. We include the definition here, since it is going to be crucial in the study of persistence properties in Section~\ref{sec-uniform persistence}. \par When the skew-product semiflow $\tau$ is monotone and of class $C^1$ in $\varphi$, we say that a compact, positively invariant set $K\subset \W\times C_\gamma$ admits a {\em continuous separation of type~\/}II if there are families of subspaces $\{X_1(\w,\varphi)\}_{(\w,\varphi)\in K}$ and $\{X_2(\w,\varphi)\}_{(\w,\varphi)\in K}$ of $C_\gamma$ satisfying the following properties. 
\begin{itemize} \item[(S1)$\;$] $C_\gamma=X_1(\w,\varphi)\oplus X_2(\w,\varphi)$ and $X_1(\w,\varphi)$, $X_2(\w,\varphi)$ vary continuously in $K$; \item[(S2)$\;$] $X_1(\w,\varphi)=\spa\{ \psi(\w,\varphi)\}$, with $\psi(\w,\varphi)\gg 0$ and $\|\psi(\w,\varphi)\|_{C_\gamma}=1$ for any $(\w,\varphi)\in K$; \item[(S3)'] there exists a $t_0>0$ such that if for some $(\w,\varphi)\in K$ there is a $\phi\in X_2(\w,\varphi)$ with $\phi>0$, then $D_{\!\varphi} z_t(\w,\varphi)\,\phi=0$ for any $t\geq t_0$; \item[(S4)$\;$] for any $t>0$, $(\w,\varphi)\in K$, \begin{align*} D_{\!\varphi} z_t(\w,\varphi)\,X_1(\w,\varphi)&= X_1(\tau(t,\w,\varphi))\,,\\ D_{\!\varphi} z_t(\w,\varphi)\,X_2(\w,\varphi)&\subset X_2(\tau(t,\w,\varphi))\,; \end{align*} \item[(S5)$\;$] there are $M>0$, $\delta>0$ such that for any $(\w,\varphi)\in K$, $\phi\in X_2(\w,\varphi)$ with $\|\phi\|_{C_\gamma}=1$ and $t>0$, \begin{equation*} \|D_{\!\varphi} z_t(\w,\varphi)\,\phi\|_{C_\gamma}\leq M \,e^{-\delta t}\,\|D_{\!\varphi} z_t(\w,\varphi)\,\psi(\w,\varphi)\|_{C_\gamma}\,. \end{equation*} \end{itemize} The precise meaning of the continuous variation expressed in (S1) has been explained in Obaya and Sanz~\cite{obsa}. \par For convenience, we also recall some definitions of Lyapunov exponents. 
The standard definition of superior and inferior Lyapunov exponents at $\infty$ of each $(\w,\varphi,\psi)\in K\times C_\gamma$ is as follows (for instance, see Sacker and Sell~\cite{sase}): \[ \lambda_i(\w,\varphi,\psi)=\liminf_{t\to\infty} \frac{\log\|v_t(\w,\varphi,\psi)\|_{C_\gamma}}{t}\,,\; \lambda_s(\w,\varphi,\psi)=\limsup_{t\to\infty} \frac{\log\|v_t(\w,\varphi,\psi)\|_{C_\gamma}}{t}\,; \] the Lyapunov exponents of each $(\w,\varphi)\in K$ are defined by \[ \lambda_i(\w,\varphi)=\liminf_{t\to\infty} \frac{\log\|D_{\!\varphi} z_t(\w,\varphi)\|_{\mathcal{L}(C_\gamma)}}{t},\, \lambda_s(\w,\varphi)=\limsup_{t\to\infty} \frac{\log\|D_{\!\varphi} z_t(\w,\varphi)\|_{\mathcal{L}(C_\gamma)}}{t}; \] and the lower and upper Lyapunov exponents of $K$ are respectively the numbers: $\alpha_K=\inf_{(\w,\varphi)\in K} \lambda_i(\w,\varphi)$ and $\lambda_K=\sup_{(\w,\varphi)\in K} \lambda_s(\w,\varphi)$. \par When the linearized semiflow $L$ is monotone and $K$ is a minimal set with a flow extension and a continuous separation of type~II, these exponents play a fundamental role in the determination of the principal spectrum $\Sigma_p$ (see Mierczy{\'n}ski and Shen~\cite{mish}), that is, the Sacker-Sell spectrum (see~\cite{sase,sase94}) of the restriction of $L$ to the one-dimensional invariant subbundle \begin{equation* \displaystyle\bigcup_{(\w,\varphi)\in K} \{(\w,\varphi)\} \times X_1(\w,\varphi)\,. \end{equation*} More precisely, $\Sigma_p=[\alpha_K,\lambda_K]$ and besides, if $X_1(\w,\varphi)=\spa\{\psi\}$ for the vector $\psi=\psi(\w,\varphi)\gg 0$ in (S2), then $\lambda_i(\w,\varphi)=\lambda_i(\w,\varphi,\psi)$ and $\lambda_s(\w,\varphi)=\lambda_s(\w,\varphi,\psi)$ (see Proposition~4.4 in Novo et al.~\cite{noos7} for the result in an abstract setting). 
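The limits above lend themselves to numerical estimation: one integrates the variational problem and monitors $\log\|v_t\|/t$. As a purely illustrative sketch (a toy scalar problem, not the PDE systems of this paper; all names are ours), the following Python fragment estimates the Lyapunov exponent of the scalar linear delay equation $v'(t)=a\,v(t)+b\,v(t-1)$ by explicit Euler and compares it with the dominant real root of the characteristic equation $\lambda=a+b\,e^{-\lambda}$:

```python
# Illustrative only: estimate lim_{t->oo} log||v_t||_C / t for the scalar
# linear delay equation v'(t) = a*v(t) + b*v(t-1), a toy analogue (without
# diffusion) of the variational problems above.  For a = -1, b = 0.5 the
# dominant characteristic root is real, so the estimate should approach it.
import math

def lyapunov_estimate(a=-1.0, b=0.5, T=40, n=1000):
    """Explicit Euler with n steps per unit delay; returns log||v_T||_C / T,
    where ||v_t||_C = sup_{s in [t-1,t]} |v(s)|."""
    dt = 1.0 / n
    v = [1.0] * (n + 1)                  # constant initial history on [-1,0]
    for _ in range(T * n):
        # v[-1] is v(t), v[-n-1] is v(t-1) on the uniform grid
        v.append(v[-1] + dt * (a * v[-1] + b * v[-n - 1]))
    return math.log(max(abs(x) for x in v[-n - 1:])) / T

def dominant_root(a=-1.0, b=0.5, lo=-1.0, hi=0.0):
    """Bisection on f(lam) = lam - a - b*exp(-lam), which is increasing and
    changes sign on [lo, hi] for these parameters."""
    f = lambda lam: lam - a - b * math.exp(-lam)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For $a=-1$, $b=0.5$ the two values agree to roughly two decimal places; the slow $O(1/T)$ convergence of $\log\|v_T\|_C/T$ becomes visible on varying $T$.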
Since principal spectra are the dynamical objects used to determine the persistence of the systems in Section \ref{sec-uniform persistence}, it is worth knowing that in the Dirichlet case the Lyapunov exponents can be calculated with the sup-norm in $C=C([-1,0],E)$, which is much easier to deal with numerically than the norm in $C_\alpha$. \begin{prop}\label{prop-Lyapunov exponents} Assume that the map $f$ in~\eqref{family} is continuous and of class $C^1$ in the $y$ and $\wit y$ variables. Let $K\subset \W\times C_\gamma$ be a compact positively invariant set and consider the linearized semiflow $L$ over $K$. Then, in the case of Dirichlet boundary conditions, for any $(\w,\varphi)\in K$ and $\psi \in C_\alpha$ one can calculate: \[ \lambda_i(\w,\varphi,\psi)=\liminf_{t\to\infty} \frac{\log\|v_t(\w,\varphi,\psi)\|_{C}}{t}\,,\; \lambda_s(\w,\varphi,\psi)=\limsup_{t\to\infty} \frac{\log\|v_t(\w,\varphi,\psi)\|_{C}}{t}\,. \] In particular, if $\lambda_i(\w,\varphi,\psi)=\lambda_s(\w,\varphi,\psi)$, then \[ \lambda(\w,\varphi,\psi)=\lim_{t\to\infty} \frac{\log\|v_t(\w,\varphi,\psi)\|_{C}}{t}\,. \] \end{prop} \begin{proof} Let us omit the dependence of $v_t$ on $(\w,\varphi,\psi)$ to simplify the writing, and set $\tilde\lambda_s=\limsup_{t\to\infty} \frac{\log\|v_t\|_{C}}{t}$. Since $C_\alpha\hookrightarrow C$, it is clear that $\tilde\lambda_s\leq \lambda_s$. To see that also $\lambda_s\leq \tilde\lambda_s$, let us take a sequence $t_n\uparrow \infty$ such that $\lambda_s=\lim_{n\to\infty} \frac{\log\|v_{t_n}\|_{C_\alpha}}{t_n}$. Since for each $n\geq 1$ there exists a $t_n^1\in [t_n-1,t_n]$ such that $\|v_{t_n}\|_{C_\alpha}=\|v(t_n^1)\|_{\alpha}$, we have that $\lambda_s=\lim_{n\to\infty} \frac{\log\|v(t_n^1)\|_{\alpha}}{t_n^1}$.
Now, for the map $F:\W\times C_\alpha\to E$ defined in~\eqref{F}, we can write by the variation of constants formula~\eqref{variation constants}, \[ v(t_n^1)=e^A\,v(t_n^1-1) +\int_0^1 e^{(1-s)A}\,D_{\!\varphi}F(\w{\cdot}(t_n^1-1+s),z_{t_n^1-1+s}(\w,\varphi))\,v_{t_n^1-1+s}\,ds\,. \] Note that we can also consider $F$ as defined on $\W\times C$ with values in $E$. Then, taking $M=\sup\{\|D_{\!\varphi}F(\tilde \w, \tilde \varphi)\|_{\mathcal{L}(C,E)}\mid (\tilde \w, \tilde \varphi)\in K\}<\infty$, we can apply~\eqref{cota} to get \[ \|v(t_n^1)\|_{\alpha}\leq M'_\alpha\, \|v(t_n^1-1)\|+\int_0^1 M'_\alpha\, (1-s)^{-\alpha}\, M\, \|v_{t_n^1-1+s}\|_C\,ds\,. \] As before, for each $n\geq 1$, there exists a $t_n^2\in [t_n^1-2,t_n^1]$ such that $\|v_{t_n^1-1+s}\|_C\leq \|v(t_n^2)\|$ for any $s\in [0,1]$, and in particular $\|v(t_n^1-1)\|\leq \|v(t_n^2)\|$. Then, \[ \|v(t_n^1)\|_{\alpha}\leq M'_\alpha\, \|v(t_n^2)\|+\int_0^1 M'_\alpha\, (1-s)^{-\alpha}\, M\, \|v(t_n^2)\|\,ds=\big(1+\frac{M}{1-\alpha}\big) M'_\alpha\, \|v(t_n^2)\| \,, \] for any $n\geq 1$. Since $\|v(t_n^2)\| \leq \|v_{t_n^2}\|_C $, we can easily conclude that \[ \lambda_s=\lim_{n\to\infty} \frac{\log\|v(t_n^1)\|_{\alpha}}{t_n^1}\leq \lim_{n\to\infty} \frac{\log\|v_{t_n^2}\|_{C}}{t_n^2} \leq \limsup_{t\to\infty} \frac{\log\|v_t\|_{C}}{t}= \tilde\lambda_s\,. \] \par Let us now deal with $\tilde\lambda_i=\liminf_{t\to\infty} \frac{\log\|v_t\|_{C}}{t}$. Once more the inequality $\tilde\lambda_i\leq \lambda_i$ is clear, so that it remains to prove that $\lambda_i\leq \tilde\lambda_i$. This time we take a sequence $t_n\uparrow\infty$ such that $\tilde\lambda_i=\lim_{n\to\infty} \frac{\log\|v_{t_n}\|_{C}}{t_n}$. Now, arguing as in the first paragraph, associated with the sequence $\{t_n+2\}_{n\geq 1}$ we can find a sequence $\{t_n^2\}_{n\geq 1}$ with $t_n^2\in [t_n-1,t_n+2]$ such that $\|v_{t_n+2}\|_{C_\alpha}\leq c\,\|v(t_n^2)\|$ for $c=(1+\frac{M}{1-\alpha}) M'_\alpha>0$ and for any $n\geq 1$.
Note that if we prove that $\|v(t_n^2)\|\leq \tilde c\,\|v_{t_n}\|_C$ for every $n\geq 1$, for a certain $\tilde c>0$, we are done, since then: \[ \lambda_i=\liminf_{t\to\infty} \frac{\log\|v_t\|_{C_\alpha}}{t}\leq \lim_{n\to\infty}\frac{\log\|v_{t_n+2}\|_{C_\alpha}}{t_n+2} \leq \lim_{n\to\infty} \frac{\log\|v_{t_n}\|_{C}}{t_n}=\tilde \lambda_i\,. \] For that, once more we use the variation of constants formula to write, for $r\in [0,2]$, \[ v(t_n+r)=e^{rA}\,v(t_n) +\int_0^r e^{(r-l)A}\,D_{\!\varphi}F(\w{\cdot}(t_n+l),z_{t_n+l}(\w,\varphi))\,v_{t_n+l}\,dl\,. \] Then, consider the map $h_n(s)=\des\sup_{r\in [-1,s]}\|v(t_n+r)\|$ defined for $s\in [0,2]$. Note that if for some $s\in [0,2]$, $h_n(s)=\|v(t_n+r_0)\|$ for some $-1\leq r_0\leq 0$, then $h_n(s)\leq \|v_{t_n}\|_C$. Otherwise, $h_n(s)=\des\sup_{r\in [0,s]}\|v(t_n+r)\|$ and we can bound \[ h_n(s)\leq M_0\,\|v_{t_n}\|_C+ \int_0^s M_0\,M\,h_n(l)\,dl\, \] for the constants $M_0=\max\{1,\sup_{s\in [0,2]}\|e^{sA}\|_{\mathcal{L}(E)}\}$ and $M$ the same as before, so that this inequality holds for any $s\in [0,2]$. Applying Gronwall's lemma, we obtain $h_n(s)\leq \tilde c \,\|v_{t_n}\|_C$ for an appropriate $\tilde c>0$ independent of $n\geq 1$, for any $s\in [0,2]$. In particular $\|v(t_n^2)\|\leq h_n(2)\leq \tilde c\,\|v_{t_n}\|_C$ for every $n\geq 1$. The proof is finished. \end{proof} \section{Persistence for quasimonotone systems of parabolic PFDEs}\label{sec-uniform persistence}\noindent In this section the properties of uniform and strict persistence are studied for quasimonotone and regular parabolic problems of type~\eqref{family}, $\w\in\W$. More precisely, we assume the following conditions on $f$: \begin{itemize} \item[(C1)] $f(\w,x,y,\wit y)$ is continuous and of class $C^1$ in $(y,\wit y)$. \item[(C2)] The maps $D_y f(\w{\cdot}t,x,y,\wit y)$ and $D_{\wit y} f(\w{\cdot}t,x,y,\wit y)$ are Lipschitz in $(y,\wit y)$ on bounded sets, uniformly for $\w\in \Om$ and $x\in\bar U$.
\item[(C3)] $f(\w{\cdot}t,x,y,\wit y)$ as well as the maps $D_yf(\w{\cdot}t,x,y,\wit y)$ and $D_{\wit y} f(\w{\cdot}t,x,y,\wit y)$ satisfy conditions $(C^{\theta}(t))$ and $(C^{2\theta}(x))$, for some $\theta\in(0,1/2)$. \item[(C4)] Quasimonotone condition: for any $(\w,x,y,\wit y)\in \W\times \bar U\times\R^n\times\R^n$, \[ \frac{\partial f_i}{\partial y_j}(\w,x,y,\wit y) \geq 0 \;\, \text{ for } i\not= j \;\, \text{ and }\; \frac{\partial f_i}{\partial \wit y_j}(\w,x,y,\wit y)\ge 0\,\;\text{ for any}\;\, i, j\,. \] \end{itemize} \par As proved in Theorem \ref{teor-linearized sk}, under (C1) the skew-product semiflow $\tau(t,\w,\varphi)$ is of class $C^1$ in $\varphi$. Condition $(C^{2\theta}(x))$ in (C3) is required so that the solutions of the IBV problems with delay, as well as those of the linearized problems, are smooth enough to apply the classical parabolic maximum or minimum principles; see Theorems~\ref{teor-space regularity-Neumann} and~\ref{teor-space regularity-Dirich}. Finally, note that (C4) is the usual way to write the quasimonotone condition (QM) under regularity assumptions. \par First of all, by linearizing the problems, in the Dirichlet case we can now establish the monotonicity of the skew-product semiflow without assuming condition (DM) of Proposition~\ref{prop-monotone}. Recall that $C_\alpha=C([-1,0],E^\alpha)$. \begin{prop}\label{prop-strong monotonicity-Dirichlet} Consider the family of parabolic problems with delay~\eqref{family}, $\w\in\W$ with Dirichlet boundary conditions and assume that $f$ satisfies $\rm{(C1)}$-$\rm{(C4)}$. Then: \begin{itemize} \item[(i)] The induced skew-product semiflow on $\W\times C_\alpha$ is monotone, that is, if $\varphi,\psi\in C_\alpha$ with $\varphi\leq \psi$, then $z_t(\w,\varphi)\leq z_t(\w,\psi)$ for any $\w\in\W$ and any $t\geq 0$ where both terms are defined.
\item[(ii)] Given $\w\in\W$ and $\varphi,\psi\in C_\alpha$ with $\varphi\leq \psi$ such that $z(t,\w,\varphi)$ and $z(t,\w,\psi)$ are defined for $t\in[0,\beta]$ for some $\beta>0$, there exists an $L=L(\w,\varphi,\psi,\beta)>0$ such that for each $i=1,\ldots,n$, and for each $t\in[0,\beta]$, \[ z_i(t,\w,\psi)-z_i(t,\w,\varphi)\geq e^{-L t}\,e^{t A_i}\,(\psi_i(0)-\varphi_i(0))\,. \] \end{itemize} \end{prop} \begin{proof} Note that with any boundary conditions, by the regularity assumptions on $f$ we can consider the linearized IBV problem of~\eqref{family} along the semiorbit of each fixed $(\w,\varphi)\in \W\times C_\gamma$, \begin{equation}\label{linear family} \left\{\begin{array}{l} \des\frac{\partial u}{\partial t}= D\Delta u+g(\tau(t,\w,\varphi),x,u(t,x),u(t-1,x))\,,\; t\in (0,\beta]\,,\;\,x\in \bar U,\\[.2cm] \bar\alpha(x)\,u(t,x)+\delta\,\des\frac{\partial u}{\partial n}(t,x) =0\,,\quad t\in (0,\beta]\,,\;\,x\in \partial U,\\[.2cm] u(s,x)=\psi(s,x)\,, \quad s\in [-1,0]\,,\;\,x\in \bar U, \end{array}\right. \end{equation} provided that the mild solution $z(t,\w,\varphi)$ is defined on the interval $[0,\beta]$, where the map $g:(\W\times C_\gamma)\times\bar U\times \R^n\times\R^n\to\R^n$, linear in $(u,v)$, is defined by (see~\eqref{derivada}) \[g(\w,\varphi,x,u,v)=D_{y} f(\w,x,\varphi(0)(x),\varphi(-1)(x))\,u +D_{\wit y} f(\w,x,\varphi(0)(x),\varphi(-1)(x))\,v\,.\] \par Under assumptions (C1)-(C4) on $f$, it is easy to check that $g$ satisfies all the conditions in order to apply Proposition~\ref{prop-monotone} to each linearized problem along the orbit of $(\w,\varphi)$ if $\varphi\in C^\theta([-1,0],E)$. Let us now restrict to the Dirichlet case. 
\par Arguing as in the proof of Proposition~\ref{prop-monotone}~(ii), we just need to consider $\w\in\W$ and $\varphi,\psi\in C_\alpha$ with $\varphi\leq \psi$ such that $\varphi,\,\psi\in C^\theta([-1,0],E)$, and $z(t,\w,\varphi)$ and $z(t,\w,\psi)$ are defined for $t\in[0,\beta]$ for some $\beta>0$: in the general case, we can approximate $\varphi$ and $\psi$ by $\theta$-H\"{o}lder continuous maps. Besides, we can assume without loss of generality that also $z(t,\w,\lambda \psi + (1-\lambda)\varphi)$ is defined for $t\in[0,\beta]$ for every $\lambda\in (0,1)$. Then, thanks to Theorem~\ref{teor-linearized sk} we can write for any $t\in (0,\beta]$, \begin{equation}\label{mean value} z_t(\w,\psi)-z_t(\w,\varphi)=\int_0^1 D_{\!\varphi} z_t(\w,\lambda\, \psi + (1-\lambda)\,\varphi)\,(\psi-\varphi)\,d\lambda \end{equation} where $D_{\!\varphi} z_t(\w,\lambda \psi + (1-\lambda)\varphi)\,(\psi-\varphi)=v_t(\w,\lambda \psi + (1-\lambda)\varphi,\psi-\varphi)$ for the mild solution $v$ of the variational retarded ACP along the semiorbit of $(\w,\lambda \psi + (1-\lambda)\varphi)$ with initial condition $\psi-\varphi$ (see~\eqref{variational}), which is just the ACP built from the linearized IBV problem, to which Proposition~\ref{prop-monotone} applies. Therefore (i) immediately follows, since $D_{\!\varphi} z_t(\w,\lambda\, \psi + (1-\lambda)\,\varphi)\,(\psi-\varphi)\geq 0$ for any $\lambda\in [0,1]$. \par Now, to see (ii) just write \begin{equation}\label{mean value componente} z_i(t,\w,\psi)-z_i(t,\w,\varphi)=\int_0^1 v_i(t,\w,\lambda\, \psi + (1-\lambda)\,\varphi,\psi-\varphi)\,d\lambda\,, \end{equation} recall that $v$ is linear with respect to the initial value, and apply Proposition~\ref{prop-monotone}~(ii) to the linearized problem for each $\lambda\in [0,1]$. 
\end{proof} In the next result conditions are given to provide the existence of a continuous separation of type~II over a minimal set $K\subset \W\times C_\gamma$: see Section \ref{sec-linearized sem} for the definition. \begin{teor}\label{teor-sep cont tipo II} Consider the family of parabolic problems with delay~\eqref{family}, $\w\in\W$ with $f$ satisfying conditions $\rm{(C1)}$-$\rm{(C4)}$, and assume that there exists a minimal set $K\subset \W\times C_\gamma$ for the induced skew-product semiflow $\tau$. For the $n\times n$ real matrices \begin{equation}\label{A and B} D_y f(\w,x,y,\wit y) = [a_{ij}(\w,x,y,\wit y)]\,, \quad D_{\wit y} f(\w,x,y,\wit y) = [b_{ij}(\w,x,y,\wit y)] \end{equation} define \begin{align*} \bar a_{ij} &= \sup\{ a_{ij}(\w,x,\varphi(0,x),\varphi(-1,x))\mid (\w,\varphi)\in K,\, x\in\bar U\}\,\; \text{for }\, i\not= j\,, \; \text{and }\,\bar a_{ii}=0\,, \\ \bar b_{ij} &= \sup\{ b_{ij}(\w,x,\varphi(0,x),\varphi(-1,x))\mid (\w,\varphi)\in K,\, x\in\bar U\}\,\; \text{for }\, i\not= j\,, \; \text{and }\,\bar b_{ii}=0\,, \end{align*} and consider the matrix \begin{equation}\label{A+B} \bar A+ \bar B=[\bar a_{ij}+\bar b_{ij}]\,. \end{equation} Then, if the matrix $\bar A+ \bar B$ is irreducible, \begin{itemize} \item[(i)] there exists a $t_*\geq 1$ such that for each $(\w,\varphi)\in K$ the linear operator $D_{\!\varphi} z_{t_*}(\w,\varphi)$ satisfies the following dichotomy property: given $\psi\in C_\gamma$ with $\psi>0$, either $D_{\!\varphi} z_{t_*}(\w,\varphi)\,\psi =0$ or $D_{\!\varphi} z_{t_*}(\w,\varphi)\,\psi \gg 0$; \item[(ii)] provided that $K$ admits a flow extension, there is a continuous separation of type~{\rm II} over $K$. \end{itemize} \end{teor} \begin{proof} This result is Theorem~5.1 in Novo et al.~\cite{nonuobsa} for the case of Neumann boundary conditions. The proof for Robin or Dirichlet boundary conditions follows step by step the same arguments, so that we only make some remarks. 
\par First of all, note that in the minimal set $K$ there are backwards extensions of semiorbits, and this implies that if $(\w,\varphi)\in K$, then $\varphi$ has some specific regularity properties; more precisely, $\varphi\in C^{1,2}([-1,0]\times \bar U,\R^n)$. This follows from Theorem~\ref{teor-space regularity-Neumann} or Theorem~\ref{teor-space regularity-Dirich}, moving backwards in the semiorbit with $t>2$ and then gaining regularity by coming back forwards. \par Second, when we look at the family of linearized IBV problems along the semiorbits of $(\w,\varphi)\in K$, the map $g$ in~\eqref{linear family} satisfies conditions (C), $(C^\theta(t))$ and $(C^{2\theta}(x))$ uniformly for $(\w,\varphi)\in K$, and Proposition~\ref{prop-monotone}~(ii) is repeatedly used. \par Finally, (ii) follows from the abstract Theorem~5.4 in Novo et al.~\cite{noos6} provided that the operators $D_{\!\varphi}z_t(\w,\varphi)$ are eventually compact, which happens for $t>1$. \end{proof} Before we state the main result, we give the appropriate definitions of uniform and strict persistence in the area above a compact $\tau$-invariant set $K\subset \Om\times C_\gamma$, which were introduced in Novo et al.~\cite{noos7} and in Obaya and Sanz~\cite{obsa}, respectively. \begin{defi}\label{defi-persistence} Let $K\subset \W\times C_\gamma$ be a compact $\tau$-invariant set for the continuous and monotone semiflow $\tau$. (i) The semiflow $\tau$ is said to be {\it uniformly persistent} ({\it u-persistent} for short) in the region situated {\it strongly above} $K$ if there exists a $\psi_0\in C_\gamma$, $\psi_0\gg 0$ such that for any $(\w,\varphi)\in K$ and any $\phi\gg \varphi$ there exists a time $t_0=t_0(\w,\varphi,\phi)$ such that $z_t(\w,\phi)\geq z_t(\w,\varphi)+\psi_0$ for any $t\geq t_0$.
(ii) The semiflow $\tau$ is said to be {\it strictly persistent at $0$} ({\it $s_0$-persistent} for short) in the region situated above $K$ if there exists a collection of strictly positive maps $\psi_1,\ldots,\psi_N\in C_\gamma$, $\psi_i>0$ for every $i$, such that for any $(\w,\varphi)\in K$ and any $\phi\geq \varphi$ with $\phi(0)> \varphi(0)$ there exists a time $t_0=t_0(\w,\varphi,\phi)$ such that $z_t(\w,\phi)\geq z_t(\w,\varphi)+\psi_i$ for any $t\geq t_0$, for one of the maps $\psi_1,\ldots,\psi_N$. \end{defi} \begin{teor}\label{teoremaDelay} Consider the family of problems with delay~\eqref{family}, $\w\in\W$ with $f$ satisfying conditions $\rm{(C1)}$-$\rm{(C4)}$, and assume that there exists a minimal set $K\subset \W\times C_\gamma$ for the induced skew-product semiflow $\tau$ which admits a flow extension. For each $(\w,\varphi)\in K$ consider the linearized IBV problem of~\eqref{family} along the semiorbit of $(\w,\varphi)$, given in~\eqref{linear family}, and calculate the matrix $\bar A+\bar B=[\bar a_{ij}+\bar b_{ij}]$ given in~\eqref{A+B}. Without loss of generality, we can assume that the matrix $\bar A+\bar B$ has the form \begin{equation}\label{triangular} \left[\begin{array}{cccc} \bar A_{11}+\bar B_{11} & 0 &\ldots & 0 \\ \bar A_{21}+\bar B_{21} & \bar A_{22} +\bar B_{22}& \ldots& 0 \\ \vdots & \vdots &\ddots & \vdots \\ \bar A_{k1}+\bar B_{k1} & \bar A_{k2}+\bar B_{k2} & \ldots& \bar A_{kk}+\bar B_{kk} \end{array}\right]\, \end{equation} with irreducible diagonal blocks, denoted by $\bar A_{11}+\bar B_{11},\ldots, \bar A_{kk}+\bar B_{kk}$, of size $n_1,\ldots,n_k$ respectively ($n_1+\cdots + n_k=n$). 
\par For each $j=1,\ldots,k$, let us denote by $I_j$ the set formed by the $n_j$ indices corresponding to the rows of the block $\bar A_{jj}+\bar B_{jj}$, and let $L_j$ be the linear skew-product semiflow induced on $K\times C([-1,0],\Pi_{i\in I_j} E_i^\gamma)$ by the solutions of the $n_j$-dimensional linear systems for $(\w,\varphi)\in K$ given by \begin{equation}\label{bloque j} \left\{\begin{array}{l} \des\frac{\partial u}{\partial t}= D_j\Delta u+A_{jj}(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))\,u(t,x)\\[.2cm] \; + B_{jj}(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))\,u(t-1,x)\,,\;\, t>0\,,\;x\in \bar U, \\ \bar\alpha_j(x)\,u+\delta\,\des\frac{\partial u}{\partial n} =0\,,\quad t>0\,,\;\,x\in \partial U,\\[.2cm] u(s,x)=\psi^j(s,x)\,,\quad s\in [-1,0]\,,\;\,x\in \bar U, \end{array}\right. \end{equation} for the corresponding diagonal blocks $A_{jj}$ and $B_{jj}$ of $D_y f$ and $D_{\wit y} f$ in~\eqref{A and B}, respectively, for $D_j$ and $\bar\alpha_j(x)$ respectively the $n_j\times n_j$-diagonal matrices with diagonal entries $d_i$ and $\alpha_i(x)$ for $i\in I_j$, and initial value $\psi^j\in C([-1,0],\Pi_{i\in I_j} E_i^\gamma)$. Then, $K^j=K\times \{0\}\subset K\times C([-1,0],\Pi_{i\in I_j} E_i^\gamma)$ is a minimal set for $L_j$ which admits a continuous separation of type~II. Let $\Sigma_p^j$ be its principal spectrum. \par If $k=1$, {\it i.e.}, if the matrix $\bar A+\bar B$ is irreducible, let $I=J=\{1\}$. Otherwise, let \begin{align*} I&=\{j\in\{1,\ldots,k\} \,\mid\, \bar A_{ji}+\bar B_{ji}=0 \text{ for any } i\not= j\},\\ J&=\{j\in\{1,\ldots,k\} \,\mid\, \bar A_{ij}+\bar B_{ij}=0 \text{ for any } i\not= j\}, \end{align*} that is, $I$ consists of the indices $j$ such that any block in the row of $\bar A_{jj}+\bar B_{jj}$, other than itself, is null, whereas $J$ contains those indices $j$ such that any block in the column of $\bar A_{jj}+\bar B_{jj}$, other than itself, is null.
Then, some sufficient conditions for uniform and strict persistence at $0$ are the following: \begin{itemize} \item[(i)] If $\Sigma_p^j\subset (0,\infty)$ for any $j\in I$, then $\tau$ is uniformly persistent in the area situated strongly above $K$. \item[(ii)] If $\Sigma_p^j\subset (0,\infty)$ for any $j\in J$, then $\tau$ is strictly persistent at $0$ in the area situated above $K$. \end{itemize} \end{teor} \begin{proof} We skip some details in the proof, since it often follows arguments in the proofs of Theorem~5.8 in Novo et al.~\cite{noos7} and Theorem~5.3 in Obaya and Sanz~\cite{obsa} for delay equations without diffusion, for (i) and (ii) respectively. \par Note that a convenient permutation of the variables takes the matrix $\bar A+\bar B$ into the form~\eqref{triangular}, and $\bar a_{ij},\,\bar b_{ij}\geq 0$ because of (C4) and the definition. Also, we maintain the notation introduced in Theorem~\ref{teor-linearized sk} for the variational problems. Besides, for any map $v$, let us denote $v^j=(v_i)_{i\in I_j}$, for $j=1,\ldots,k$. \par To see (i), we distinguish three cases. \par\noindent \textbf{(A1)}: $k=1$, that is, $\bar A+\bar B$ is an irreducible matrix. Then Theorem~\ref{teor-sep cont tipo II} says that $K$ admits a continuous separation of type~II, and since $\Sigma_p^1\subset (0,\infty)$, the abstract Theorem~4.5 in~\cite{noos7} implies that $\tau$ is u-persistent in the area strongly above $K$. \par\noindent \textbf{(A2)}: $k>1$ and $\bar A+\bar B$ is a reducible matrix with a block diagonal structure. In this case the argument goes exactly as in case (C2) in the proof of Theorem~5.8 in~\cite{noos7} for delay equations without diffusion. The key is to apply Theorem~4.5 in~\cite{noos7} to each of the uncoupled linear skew-product semiflows $L_j$, which admit a continuous separation of type~II and have positive principal spectra.
In all, we find a map $\psi_0\gg 0$ and a $t_0>0$ such that $D_{\!\varphi} z_{t}(\w,\varphi)\,\psi_0\gg 2\,\psi_0$ for $t\geq t_0$ and $(\w,\varphi)\in K$. Then, Theorem~3.3 in~\cite{noos7} provides the u-persistence in the zone strongly above $K$. \par\noindent \textbf{(A3)}: $k>1$ and $\bar A+\bar B$ is a reducible matrix with a non-diagonal block lower triangular structure, that is, at least one of the non-diagonal blocks in \eqref{triangular} is not null. This time we combine the arguments in case (C3) in the proofs of Theorem~5.6 for PDEs and Theorem~5.8 for delay equations in~\cite{noos7}. As in case (A2), the aim is to find a map $\psi\gg 0$ and a $t_1>0$ such that $D_{\!\varphi} z_{t}(\w,\varphi)\,\psi\gg 2\,\psi$ for $t\geq t_1$ and $(\w,\varphi)\in K$, so that Theorem~3.3 in~\cite{noos7} applies. Note that, since for $j\in I$ the systems \eqref{bloque j} are uncoupled, arguing as in (A2) we already have the appropriate maps $\psi_0^j\gg 0$ for $j\in I$ and the appropriate $t_0>0$, so that if $\psi\gg 0$ with $\psi^j=\psi_0^j$ for $j\in I$, then $v_t^j(\w,\varphi,\psi)\gg 2\,\psi^j$ for $t\geq t_0$ and $(\w,\varphi)\in K$, for each $j\in I$. That is, it remains to adequately complete the other components of $\psi\gg 0$. \par Since $1\in I$, we move forwards filling the gaps, so take $l=\min\{j\in \{2,\ldots,k\}\mid j\notin I\}\geq 2$. Then, at least one of the blocks to the left of $\bar A_{ll} +\bar B_{ll}$ is not null, that is, there exists an $m<l$, $m\in I$ such that $\bar A_{lm} +\bar B_{lm}\not=0$, so that $\bar a_{i_1k}+\bar b_{i_1k}>0$ for some $i_1\in I_l$ and $k\in I_m$. 
For $u(t,x)=v(t,\w,\varphi,\psi)(x)\geq 0$ by Proposition \ref{prop-monotone}, from~\eqref{linear family}, the block lower triangular structure of the linearized systems, condition (C4), and since $k\in I_m$ with $m\in I$, we have that \vspace{-0.2cm} \begin{multline*} \frac{\partial u_{i_1}}{\partial t}(t,x)=d_{i_1}\Delta u_{i_1}(t,x)+\sum_{j=1}^{i_1} \big( a_{i_1j}(\cdot)\,u_j(t,x)+b_{i_1j}(\cdot)\,u_j(t-1,x)\big)\\ \geq d_{i_1}\Delta u_{i_1}(t,x)+2\,a_{i_1k}(\cdot)\,(\psi_0^m)_k(0,x) +2\,b_{i_1k}(\cdot)\,(\psi_0^m)_k(-1,x) +a_{i_1i_1}(\cdot)\,u_{i_1}(t,x) \end{multline*} for $t\geq t_0$ and $x\in \bar U$, where $(\cdot)$ stands for $(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))$. Then, we consider the auxiliary family of scalar parabolic PDEs for $(\w,\varphi)\in K$, \begin{equation*} \des\frac{\partial h}{\partial t}= d_{i_1}\Delta h+ 2\,a_{i_1k}(\cdot)\,(\psi_0^m)_k(0,x) +2\,b_{i_1k}(\cdot)\,(\psi_0^m)_k(-1,x) + a_{i_1i_1}(\cdot)\,h(t,x) \end{equation*} for $t>0$, $x\in \bar U$, with boundary condition $\alpha_{i_1}(x)\,h(t,x)+\delta\,\frac{\partial h}{\partial n}(t,x) =0$ for $t>0$ and $x\in \partial U$. Since $\bar a_{i_1k}+\bar b_{i_1k}>0$ means that $a_{i_1k}(\w_1,x_1,\varphi_1(0,x_1),\varphi_1(-1,x_1))+b_{i_1k}(\w_1,x_1,\varphi_1(0,x_1),\varphi_1(-1,x_1))>0$ for some $(\w_1,\varphi_1)\in K$ and $x_1\in U$, and $(\psi_0^m)_k(0,x_1), (\psi_0^m)_k(-1,x_1)>0$, one can apply the same dynamical argument used in Theorem~5.6 in~\cite{noos7} to conclude that there exist a $t_{i_1}>0$ and a map $\psi_{0i_1}\in E_{i_1}^\gamma$ with $\psi_{0i_1}\gg 0$ such that $h(t,\,\cdot\,,\w,\varphi,0) \gg 2 \,\psi_{0i_1}$ for any $(\w,\varphi)\in K$ and $t\geq t_{i_1}$. Note that a version of Lemma~2.11~(ii) in N\'{u}\~{n}ez et al.~\cite{nuos3} for Dirichlet boundary conditions in the intermediate space $E_{i_1}^\alpha$ has been used.
Finally, with a slight abuse of notation, consider the map $\psi_{0i_1}\in C([-1,0],E_{i_1}^\gamma)$ identically equal to $\psi_{0i_1}$, which satisfies $\psi_{0i_1}\gg 0$, and take any initial condition $\psi\gg 0$ with $\psi^j=\psi_0^j$ for $j\in I$ and $\psi^l_{i_1}=\psi_{0i_1}$. Then, comparing solutions of the two previous problems (see Martin and Smith~\cite{masm0,masm}), we can conclude that $(v_{i_1}^l)_t(\w,\varphi,\psi)\gg 2\,\psi_{i_1}^l$ for $t\geq t_0+t_{i_1}+1$ and $(\w,\varphi)\in K$, and we are done with the component $i_1\in I_l$. \par The argument for the rest of the components in $I_l$, if any, is similar and relies on the irreducibility of the block $\bar A_{ll}+\bar B_{ll}$; and for the remaining blocks, if any, it is just the same. The proof of (i) is finished. \par To see (ii) we consider again three cases, in accordance with Theorem~5.3 in~\cite{obsa}. \par\noindent \textbf{(B1)}: $k=1$, that is, $\bar A+\bar B$ is an irreducible matrix. By (i), we already know that $\tau$ is u-persistent. To see that it is also $s_0$-persistent, take $\psi_0\gg 0$ the map given in Definition~\ref{defi-persistence}~(i) and $t_*\geq 1$ the time given in Theorem~\ref{teor-sep cont tipo II}~(i). Now take $(\w,\varphi)\in K$ and $\phi\geq \varphi$ with $\phi(0)> \varphi(0)$. Then, $\phi_i(0)> \varphi_i(0)$ for some $i$ and Proposition~\ref{prop-monotone}~(ii) applied to the linearized systems implies that $v_i(t,\w,\varphi,\phi-\varphi)\gg 0$ for any $t>0$. Then it cannot be $D_{\!\varphi} z_{t_*}(\w,\varphi)\,(\phi-\varphi) =0$, and necessarily $D_{\!\varphi} z_{t_*}(\w,\varphi)\,(\phi-\varphi) \gg 0$. By continuity, $D_{\!\varphi} z_{t_*}(\w,\lambda\phi+(1-\lambda)\varphi)\,(\phi-\varphi) \gg 0$ for $\lambda\in[0,\varepsilon]$ for a certain $\varepsilon>0$, and using~\eqref{mean value}, $z_{t_*}(\w,\phi)\gg z_{t_*}(\w,\varphi)$. To finish, apply the u-persistence to $(\w{\cdot}t_*,z_{t_*}(\w,\varphi))\in K$ together with the semicocycle property~\eqref{semicocycle}.
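Before dealing with the reducible cases, let us point out that the combinatorial data in the statement of the theorem, namely a permutation producing the block lower triangular form~\eqref{triangular} with irreducible diagonal blocks together with the index sets $I$ and $J$, can be computed mechanically from the zero pattern of $\bar A+\bar B$: the blocks are the strongly connected components of the directed graph with an edge $i\to j$ whenever $(\bar A+\bar B)_{ij}\neq 0$, listed sinks first. A minimal Python sketch (all names are ours, and the matrix is a toy example):

```python
# Sketch (names are ours): given the nonnegative matrix barA + barB as a
# nested list M, compute the blocks of a lower triangular arrangement with
# irreducible diagonal blocks, plus the index sets I and J of the theorem.

def frobenius_blocks(M):
    """Strongly connected components of the digraph i -> j when M[i][j] != 0.
    Tarjan's algorithm lists them in reverse topological order, so with the
    blocks in this order M[i][j] != 0 implies block(j) <= block(i)."""
    n = len(M)
    index, low, on_stack = {}, {}, set()
    stack, blocks, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in range(n):
            if M[v][w] == 0:
                continue
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            blocks.append(sorted(comp))

    for v in range(n):
        if v not in index:
            strongconnect(v)
    return blocks

def index_sets(M, blocks):
    """I: blocks whose row couples to no other block; J: blocks whose column
    couples to no other block (1-based block numbering, as in the theorem)."""
    def coupled(bj, bi):                 # block (row bj, column bi) is nonzero
        return any(M[r][c] != 0 for r in bj for c in bi)
    k = len(blocks)
    I = [j + 1 for j in range(k)
         if not any(coupled(blocks[j], blocks[i]) for i in range(k) if i != j)]
    J = [j + 1 for j in range(k)
         if not any(coupled(blocks[i], blocks[j]) for i in range(k) if i != j)]
    return I, J
```

For instance, for a $4\times 4$ matrix with nonzero pattern $\{(1,2),(2,1),(3,1),(3,4),(4,3)\}$ one obtains two irreducible $2\times 2$ blocks, with $I=\{1\}$ and $J=\{2\}$ in the notation of the theorem.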
\par Now, for $k>1$, take $\phi\geq \varphi$ with $\phi(0)> \varphi(0)$ and distinguish two possibilities: \par\noindent \textbf{(B2)}: $k> 1$ and $\phi_i(0)> \varphi_i(0)$ for some $i\in I_j$ with $j\in J$. In this case we follow the arguments in case (C2) in the proof of Theorem~5.3 in~\cite{obsa} for delay equations without diffusion. Basically, a family of $n_j$-dimensional systems of nonlinear parabolic PFDEs with delay over the base flow in $K$ is built, in such a way that it is a minorant family for the components $y^j(t,x)=z^j(t,\w,\varphi)(x)$, and besides the linearized systems along the orbits in a minimal set are precisely the systems~\eqref{bloque j}, with irreducible matrix $\bar A_{jj}+\bar B_{jj}$. Then, to this family case (B1) applies, and thus there exist a $\psi_0^j\in C([-1,0],\Pi_{i\in I_j} E_i^\gamma)$, $\psi_0^j\gg 0$ and a $t_0^j>0$, associated to its u-persistence. Then, using standard arguments of comparison of solutions, one can check that $z_t(\w,\phi)\geq z_t(\w,\varphi)+\psi_j$ for any $t\geq t_0^j$, for the map $\psi_j \in C_\gamma$ defined by $\psi_j^j=\psi_0^j$ and $\psi_j^m=0$ if $m\not=j$, which satisfies $\psi_j>0$. Just remark that the maps $\{\psi_j\}_{j\in J}$ built in this way are the appropriate collection required in Definition~\ref{defi-persistence}~(ii). \par\noindent \textbf{(B3)}: $k> 1$ and $\phi^l(0)=\varphi^l(0)$ for any $l\in J$. Then, consider $i$ such that $\phi_i(0)> \varphi_i(0)$ with $i\in I_j$ for some $j\notin J$. Now we distinguish two situations: \par\noindent \textbf{(B3.1)}: There exists an $m\geq 1$ such that $\bar A_{j+m,j}+\bar B_{j+m,j}\not=0$ with $j+m\in J$. In this case we search for a time $t_1>0$ such that $z^{j+m}(t_1,\w,\phi)>z^{j+m}(t_1,\w,\varphi)$, for then we can apply case (B2) together with the semicocycle relation~\eqref{semicocycle}. \par As a first step, let us study the components $v_t^j(\w,\varphi,\phi-\varphi)$. 
Write $L_j(t,\w,\varphi,\psi^j)=(\tau(t,\w,\varphi),w_t(\w,\varphi,\psi^j))$ for the linear skew-product semiflow induced by the solutions of \eqref{bloque j} for $(\w,\varphi)\in K$ and $\psi^j\in C([-1,0],\Pi_{i\in I_j} E_i^\gamma)$. By condition (C4), a comparison of solutions argument says that $v_t^j(\w,\varphi,\phi-\varphi)\geq w_t(\w,\varphi,\phi^j-\varphi^j)$ for $t\geq 0$. Besides, applying Proposition~\ref{prop-monotone}~(ii) to $L_j$, since $\phi^j_i(0)>\varphi^j_i(0)$, we get that $w_i(t,\w,\varphi,\phi^j-\varphi^j)\gg 0$ for any $t>0$. Therefore, it must be $w_{t_*}(\w,\varphi,\phi^j-\varphi^j)\gg 0$ for $t_*\geq 1$ the time given in Theorem~\ref{teor-sep cont tipo II}~(ii) for $L_j$. Then, the linear semicocycle property~\eqref{linear semicocycle} and Proposition~\ref{prop-monotone}~(ii) imply that $w_t(\w,\varphi,\phi^j-\varphi^j)\gg 0$ for $t\geq t_*$, so that also $v_{t}^j(\w,\varphi,\phi-\varphi)\gg 0$ for $t\geq t_*$. \par Finally, take $i_1\in I_{j+m}$ and $k\in I_j$ such that $\bar a_{i_1k}+\bar b_{i_1k}>0$. Then, for $u(t,x)=v(t,\w,\varphi,\phi-\varphi)(x)$ recall that with (C4), $u(t,x)\geq 0$ by Proposition~\ref{prop-monotone}~(i), and since $k\in I_j$, $u_{k}(t,x)> 0$ for any $t\geq t_*-1$ and $x\in U$. Now, arguing as in the proof of Theorem~5.1 in Novo et al.~\cite{nonuobsa}, associated to the minimal set $K$ and to the open set $U_{i_1k}=\{(\tilde \w,\tilde \varphi)\in \Om\times C_\gamma\mid a_{i_1k}(\tilde\w,x,\tilde\varphi(0,x),\tilde\varphi(-1,x))+b_{i_1k} (\tilde\w,x,\tilde\varphi(0,x),\tilde\varphi(-1,x))>0 \;\text{for some } x\in U\}$, there exists a $T_0>2$ such that for any $(\tilde \w,\tilde \varphi)\in K$ there is a $t_0\in (2,T_0)$ such that $\tau(t_0,\tilde\w,\tilde \varphi)\in U_{i_1k}$. 
Applying this property to $\tau(t_*,\w,\varphi)\in K$ there exist a $t_0\in (2,T_0)$ and an $x_0\in U$ such that \begin{multline*} \wit a_{i_1k} + \wit b_{i_1k}:= a_{i_1k}(\w{\cdot}(t_*+t_0),x_0,z(t_*+t_0,\w,\varphi)(x_0),z(t_*+t_0-1,\w,\varphi)(x_0))\\ +b_{i_1k}(\w{\cdot}(t_*+t_0),x_0,z(t_*+t_0,\w,\varphi)(x_0),z(t_*+t_0-1,\w,\varphi)(x_0))>0\,. \end{multline*} Now, by (C4), on the one hand $u_{i_1}(t,x)$ satisfies the following parabolic inequality \[ \frac{\partial u_{i_1}}{\partial t}(t,x) \geq d_{i_1}\Delta u_{i_1}(t,x) +a_{i_1i_1}(\w{\cdot}t,x,z(t,\w,\varphi)(x),z(t-1,\w,\varphi)(x))\,u_{i_1}(t,x) \] for $t> t_*$ and $x\in \bar U$, together with the corresponding boundary condition. Then, if it were $u_{i_1}(t_*+t_0,x_0)=0$, the minimum principle for scalar parabolic PDEs would say that $u_{i_1}(t,x)=0$ for any $(t,x)\in [t_*,t_*+t_0]\times \bar U$, so that in particular $\Delta u_{i_1}(t_*+t_0,x_0)=0$ and $\partial_t u_{i_1}(t_*+t_0,x_0)=0$. But on the other hand, then \[ \frac{\partial u_{i_1}}{\partial t}(t_*+t_0,x_0) \geq \wit a_{i_1k}\,\,u_k(t_*+t_0,x_0) + \wit b_{i_1k}\,\,u_k(t_*+t_0-1,x_0)>0\,, \] a contradiction. Therefore, $u_{i_1}(t_*+t_0,x_0)>0$, so that $v_{i_1}(t_*+t_0,\w,\varphi,\phi-\varphi)>0$ and by Proposition~\ref{prop-monotone}~(ii), $v_{i_1}(t,\w,\varphi,\phi-\varphi)\gg 0$ for any $t>t_*+t_0$. Take such a $t_1>t_*+t_0$ and use relation \eqref{mean value componente} together with a continuity argument to conclude that $z_{i_1}(t_1,\w,\phi)\gg z_{i_1}(t_1,\w,\varphi)$, so that $z^{j+m}(t_1,\w,\phi)>z^{j+m}(t_1,\w,\varphi)$, as we wanted. \par\noindent \textbf{(B3.2)}: For any $m\geq 1$ such that $\bar A_{j+m,j}+\bar B_{j+m,j}\not=0$, $j+m\notin J$. In this case, we take the greatest $m\geq 1$ such that $\bar A_{j+m,j}+\bar B_{j+m,j}\not=0$ and we argue as in case (B3.1) to find a $t_1>0$ such that $z^{j+m}(t_1,\w,\phi)>z^{j+m}(t_1,\w,\varphi)$. 
Since $j+m\notin J$, again there is an $l\geq 1$ such that $\bar A_{j+m+l,j+m}+\bar B_{j+m+l,j+m}\not=0$. If for some such $l\geq 1$, $j+m+l\in J$, we fall again into case (B3.1), and if not, we are again in case (B3.2) and we just iterate the procedure. Since $k\in J$, in a finite number of iterations we fall into case (B3.1). The proof is finished. \end{proof}
\section{Introduction} \label{sec:1} The dynamics of free test particles in curved spacetimes, i.e., the geodesic structure, encodes important information about the gravitational field and the geometry. In a stationary spacetime, there can be stationary orbits of particles, which are geodesics along timelike Killing fields. Furthermore, if the spacetime is also axisymmetric, the stationary orbits can be circular orbits. Such fundamental orbits associated with spacetime symmetries are useful for understanding various observable phenomena (e.g., stellar motion and the black hole shadow) around a black hole. In the Schwarzschild black hole spacetime, there exist both stable and unstable circular orbits of particles. Let $r$ be the circumference radius, and let $M$ be the black hole mass. In geometrized units, the stable circular orbits exist in the range $r\geq 6M$, and the unstable circular orbits in the range $3M<r<6M$. The innermost stable circular orbit (ISCO) lies at their boundary $r=6M$, and the unstable photon circular orbit lies at the last circular orbit, $r=3M$. These orbits are fundamental to physical phenomena in the vicinity of a black hole. In the Kerr black hole spacetime, both stable and unstable circular orbits also appear~\cite{Wilkins:1972rs}. In the last two decades, higher-dimensional black holes have also been actively studied~\cite{Emparan:2008eg}, and many of their solutions are parametrized by the spacetime dimension $d$. This parametrization allows us to distinguish $d$-dependent from $d$-independent properties and also reveals special properties in specific dimensions. Such dimensionality often appears in the analysis of gravitational properties through the geodesic structure, which is the first step in the study of black holes.
Unlike the 4D case, in a higher-dimensional static and spherically symmetric vacuum black hole, there is no stable circular orbit because no stable balance is formed between the gravitational and centrifugal forces~\cite{Tangherlini:1963bw}, which is a generic feature of higher-dimensional black holes with a spherically symmetric horizon~\cite{Hackmann:2008tu}. This property carries over to circular orbits in the 5D Myers-Perry black holes~\cite{Frolov:2003en, Kagramanova:2012hw, Diemer:2014lba} and equatorial circular orbits in the singly rotating Myers-Perry black holes in arbitrary dimensions~\cite{Cardoso:2008bp}. Though it was pointed out that stable stationary/bound orbits can exist in the Myers-Perry black holes~\cite{Igata:2014xca} at least when there is no upper limit on the black hole spin parameters (the so-called ultraspinning limit~\cite{Emparan:2003sy}),% \footnote{In higher-dimensional AdS black holes, stable stationary orbits can appear because of the asymptotic structure~\cite{Delsate:2015ina, Grunau:2017uzf}.} they tend not to appear for the higher-dimensional Myers-Perry family in general due to the dimensional dependence of the law of gravity. However, there seem to be some exceptions, arising from the diversity of higher-dimensional gravity, to the tendency of stable circular/bound orbits not to appear. One of the rich properties of higher-dimensional spacetimes is the topological variety of spatial cross sections of horizons. In 5D asymptotically flat, stationary, and biaxisymmetric spacetimes, the allowed horizon topology is not only the sphere $S^3$ but also the ring $S^1\times S^2$ and the lens $L(p, q)$~\cite{Hollands:2007aj,Hollands:2010qy,Cai:2001su,Galloway:2005mf}. We are gradually learning that the nontrivial horizon topologies of black objects can give rise to a mechanism for the appearance of stable stationary orbits that is different from the case of spherical black holes.
In 5D black ring spacetimes~\cite{Emparan:2001wn}, stable stationary orbits are absent in the fat regime, as in the spherical case, but appear in the thin regime~\cite{Hoskisson:2007zk, Igata:2010ye, Grunau:2012ai, Igata:2013be}. The existence of a nut% \footnote{This terminology is often used to denote an isolated fixed point of one-parameter $U(1)$ isometry~\cite{Gibbons:1979xm}.} outside the horizon plays an essential role in the existence of stable stationary orbits. Recently, even for the black rings in more than 5D, the existence of stable stationary orbits and their dimensionality have been revealed using the blackfold approach~\cite{Igata:2020vdb, Igata:2020dow}. In 5D black lens spacetimes~\cite{Kunduri:2014kja, Tomizawa:2016kjh}, stable circular orbits can also exist~\cite{Tomizawa:2019egx}. In this phenomenon as well, it is essential that the centers (i.e., the nuts) are located outside the horizon. Concerning a higher-dimensional black hole with disconnected components of the horizon cross section, we encounter the nontrivial question of whether stable stationary/bound orbits exist and, if so, how they are distributed. This paper aims to clarify how the many-body nature of black holes in higher-dimensional spacetimes affects the existence of test particles' stationary orbits. To explore this, we adopt the two-body black hole configuration of the Majumdar-Papapetrou~(MP) geometry~\cite{Majumdar:1947eu,Papaetrou:1947ib, Myers:1986rx}, which is kept static by the balance between the gravitational and Coulomb forces of two black holes carrying electric charges of the same sign. Since this family has singularities on the horizon but not outside it, we can examine the existence of the timelike/null geodesic Killing orbits and their stability throughout the domain of outer communication.
The geodesic structure of the 4D MP dihole spacetime has been analyzed in detail in terms of circular/bound orbits and their stability~\cite{Chandrasekhar:1989vk, Contopoulos:1990, Wunsch:2013st, Dolan:2016bxj, Ono:2016lql, Assumpcao:2018bka, Nakashi:2019mvs, Nakashi:2019tbz}, chaos~\cite{Contopoulos:1991, Shipley:2016omi}, and shadows~\cite{Nitta:2011in, Patil:2016oav}. The particle dynamics in higher-dimensional MP spacetimes were investigated in Ref.~\cite{Hanan:2006uf}. Since the center is located outside the horizon in a higher-dimensional two-body black hole spacetime, as in the black rings and black lenses, we can expect the appearance of stable stationary orbits in the higher-dimensional MP dihole spacetimes. This paper is organized as follows. In Sec.~\ref{sec:2}, we introduce the MP dihole spacetime in $d$ dimensions and formulate particle dynamics on the spacetime. Focusing specifically on stationary orbits, we identify the conditions for their existence and clarify criteria to determine whether they are stable. In Sec.~\ref{sec:3}, for $d=5$, we clarify the dependence of the sequences of stationary orbits on the dihole separation parameter. In addition, based on these results, we give some critical values for the separation parameter. We discuss these properties for $d\geq 6$ as well. Section~\ref{sec:4} is devoted to a summary and discussions. Throughout this paper we use units in which $G=1$ and $c=1$. \section{Formulation} \label{sec:2} We focus on the MP geometries in $d$ dimensions~($d\geq 4$). The metric and the gauge field are given by% \footnote{This is a solution in the $d$-dimensional Einstein-Maxwell theory, whose action is given by \begin{align} S=\frac{1}{16\pi G} \int \mathrm{d}^d x \sqrt{-g} (R-F_{\mu\nu}F^{\mu\nu}), \end{align} where $R$ is the Ricci scalar, $F_{\mu\nu}$ is the field strength of the gauge field, and we have restored the $d$-dimensional Newton constant $G$.
} \begin{align} g_{\mu\nu}\:\!\mathrm{d}x^\mu \:\!\mathrm{d}x^\nu &=-U^{-2}\:\!\mathrm{d}t^2+U^{2/(d-3)} \mathrm{d}\bm{r}\cdot \mathrm{d} \bm{r}, \\ A_\mu \:\!\mathrm{d}x^\mu&=\sqrt{\frac{d-2}{2(d-3)}} U^{-1}\:\!\mathrm{d}t, \end{align} where $\mu$, $\nu$ are spacetime indices, and $\mathrm{d}\bm{r}\cdot \mathrm{d} \bm{r}$ is the $(d-1)$-dimensional flat metric, and $U$ is a harmonic function on $\mathbb{R}^{d-1}$~\cite{Majumdar:1947eu, Papaetrou:1947ib, Myers:1986rx}. When $U$ has two point sources, the geometry represents a two-centered black hole spacetime. Using a $d$-dimensional cylindrical coordinate system $(z, \rho, \phi_1,\ldots, \phi_{d-3})$ on $\mathbb{R}^{d-1}$, where $z$ is a cylindrical and Cartesian coordinate, $\rho$ is a radial coordinate from the $z$ axis, and $\phi_a$ ($a=1, \ldots, d-3$) are polar coordinates orthogonal to the $\rho$-$z$ plane, the metric of the MP dihole spacetime is given by \begin{align} \label{eq:met} g_{\mu\nu}\:\!\mathrm{d}x^\mu \:\!\mathrm{d}x^\nu &=-U^{-2}\:\!\mathrm{d}t^2+U^{2/(d-3)}\left( \mathrm{d}z^2+\mathrm{d}\rho^2+\rho^2\:\!\mathrm{d}\Omega^2_{d-3} \right), \\ U&=1+\frac{M_+}{r_+^{d-3}}+\frac{M_-}{r_-^{d-3}}, \\ \label{eq:rpm} r_{\pm}&=\sqrt{(z\pm a)^2+\rho^2}, \end{align} where $M_\pm$ are masses of two extremal black holes placed at $z=\mp a$ on the $z$ axis, and $\mathrm{d}\Omega_{d-3}^2$ is the metric on the unit $S^{d-3}$. We assume that the two black holes have equal mass, $M_+=M_-=M$, in what follows. We focus on particle dynamics in the dihole spacetime. Let $p_{\mu}$ be canonical momenta conjugate to coordinates, $x^\mu$. The Hamiltonian of a freely falling particle with unit/zero mass is given by \begin{align} H=\frac{1}{2} g^{\mu\nu}p_\mu p_\nu=\frac{1}{2}\left[\:\! -U^2 p_t^2+U^{-2/(d-3)} \left( p_z^2+p_\rho^2+\frac{1}{\rho^2}\gamma^{ab}p_ap_b \right) \:\!\right], \end{align} where $g^{\mu\nu}$ is the inverse metric of $g_{\mu\nu}$, and $\gamma^{ab}$ is the inverse of the metric on $S^{d-3}$. 
The momentum $p_t=-E$ is a conserved energy because $H$ is independent of time $t$. The quadratic quantity $\gamma^{ab}p_ap_b=L^2$ is also a constant of motion associated with spherical symmetry on $S^{d-3}$. We consider stationary orbits on which a particle takes constant $z$ and $ \rho$. The on-shell condition of geodesic motion $g^{\mu\nu}p_\mu p_\nu+\kappa=0$, where $\kappa$ is squared particle mass, yields \begin{align} &U^{2(4-d)/(d-3)}(\dot{z}^2+\dot{\rho}^2) + V=E^2, \\ &V(\rho, z; L^2)= \frac{L^2}{\rho^2 U^{2(d-2)/(d-3)}}+\frac{\kappa}{U^2}, \end{align} where the dots denote the derivatives with respect to an affine parameter along the geodesic. We call $V$ the effective potential. Let us focus on particles with $\kappa=1$ staying in stationary orbits. The conditions of the stationary orbits for $V$ and $V_i:=\partial_iV$ ($i=z, \rho$) are written as \begin{align} \label{eq:Vz} V_z&=-\frac{2\:\!U_z}{U^3}\left( \frac{d-2}{d-3} \frac{L^2}{\rho^2\:\! U^{2/(d-3)}}+\kappa \right) =0, \\ \label{eq:Vrho} V_\rho&=-\frac{2L^2}{\rho^3 U^{2(d-2)/(d-3)}}-\frac{2\:\!U_\rho}{U^3} \left( \frac{d-2}{d-3} \frac{L^2}{\rho^2 U^{2/(d-3)}}+\kappa \right)=0, \\ \label{eq:V=E2} V&=E^2, \end{align} respectively, where $U_i:=\partial_iU$ ($i=z, \rho$) take the forms \begin{align} U_z&=-(d-3)M \left( \frac{z+a}{r_+^{d-1}}+\frac{z-a}{r_-^{d-1}} \right), \\ U_\rho&=-(d-3) M\rho \left(\frac{1}{r_+^{d-1}}+\frac{1}{r_-^{d-1}}\right), \end{align} respectively. Solving the condition~\eqref{eq:Vrho} for $L^2$, we have \begin{align} \label{eq:L=L0} L^2=L_0^2:=-\frac{(d-3) \:\!\rho^3 U_\rho U^{2/(d-3)}}{(d-3)U+(d-2)\rho \:\!U_\rho}. \end{align} Note that $L_0^2$ must not be negative, which is a necessary condition for the existence of a stationary orbit. From the condition~\eqref{eq:V=E2} together with $L_0^2$, we obtain \begin{align} E^2=E^2_0:=V(\rho, z; L_0^2)=\frac{(d-3) U+\rho\:\! U_\rho}{U^2 \left[\:\! 
(d-3)U+(d-2) \rho \:\!U_\rho \:\!\right]}, \end{align} which also must not be negative. Note that if $L_0^2\geq 0$, we have $E_0^2>0$. The condition~\eqref{eq:Vz} implies $U_z=0$, which defines curves on the $\rho$-$z$ plane. These curves are distributed in the range $|z|<a$ and always include $z=0$. Now we define a family of the curves on the $\rho$-$z$ plane satisfying $L_0^2\geq0$, \begin{align} \gamma_0:=\{\:\!(\rho, z)\:\!|\:\! U_z=0, L_0^2\geq 0\:\!\}, \end{align} which provides the sequence of stationary orbits. If evaluated at points on $\gamma_0$, the quantities $L_0$ and $E_0$ give the angular momentum and energy of a particle in stationary orbits, respectively. Note that all of the stationary orbits we are considering here are circular orbits. This is because, due to the spherical symmetry on $S^{d-3}$, particles in stationary orbits always move geodesically along a certain great circle on the sphere. Therefore, we will call the stationary orbit a circular orbit. Now we look for a subset of $\gamma_0$ in which the stationary orbits are stable. Let $(V_{ij})$ be the Hessian matrix of $V$, where $V_{ij}:=\partial_j \partial_i V$. We define $h$ and $k$ as the determinant and the trace of $(V_{ij})$, i.e., $h(\rho, z; L^2):=\mathrm{det}(V_{ij})$ and $k(\rho, z; L^2):=\mathrm{tr} (V_{ij})$, respectively. In terms of $h$ and $k$, we define the region $D$ in which the circular orbits are stable as \begin{align} D:=\{\:\! (\rho, z)\:\!|\:\! h_0>0, k_0>0, L_0^2 \geq 0 \:\!\}, \end{align} where $h_0$ and $k_0$ are defined by \begin{align} h_0&:=\left.h(\rho, z; L_0^2)\right|_{U_z=0}, \\ k_0&:=\left.k(\rho, z; L_0^2)\right|_{U_z=0}, \end{align} respectively. The restriction denoted by $U_z=0$ means that we have directly dropped the terms including $U_z$.
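The stationary-orbit conditions above lend themselves to a quick numerical sanity check: on the symmetry plane $z=0$ (where $U_z=0$ holds automatically), substituting $L^2=L_0^2$ should make $V_\rho$ vanish, and $V$ should then reproduce the closed form of $E_0^2$. A minimal Python sketch in units $M=1$ with $\kappa=1$; the values $d=5$, $a=3$, $\rho=6$ are illustrative choices, not taken from the text:

```python
import math

def U(rho, z, d, a):
    # harmonic function with two equal unit-mass sources at z = -a, +a (M = 1)
    rp = math.hypot(z + a, rho)
    rm = math.hypot(z - a, rho)
    return 1.0 + rp**(3 - d) + rm**(3 - d)

def U_rho(rho, z, d, a):
    # U_rho = -(d-3) rho (1/r_+^{d-1} + 1/r_-^{d-1})
    rp = math.hypot(z + a, rho)
    rm = math.hypot(z - a, rho)
    return -(d - 3) * rho * (rp**(1 - d) + rm**(1 - d))

def V(rho, z, d, a, L2, kappa=1.0):
    # effective potential V = L^2 / (rho^2 U^{2(d-2)/(d-3)}) + kappa / U^2
    u = U(rho, z, d, a)
    return L2 / (rho**2 * u**(2 * (d - 2) / (d - 3))) + kappa / u**2

d, a, rho = 5, 3.0, 6.0
u, up = U(rho, 0.0, d, a), U_rho(rho, 0.0, d, a)
# L_0^2 and E_0^2 from the general stationary-orbit conditions
L2 = -(d - 3) * rho**3 * up * u**(2 / (d - 3)) / ((d - 3) * u + (d - 2) * rho * up)
E2 = ((d - 3) * u + rho * up) / (u**2 * ((d - 3) * u + (d - 2) * rho * up))

h = 1e-5
dV = (V(rho + h, 0.0, d, a, L2) - V(rho - h, 0.0, d, a, L2)) / (2 * h)
print(L2 > 0, abs(dV) < 1e-8, abs(V(rho, 0.0, d, a, L2) - E2) < 1e-12)
```

Analogous checks pass for other sample points with $L_0^2\geq 0$.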
Thus, we can visualize the sequence of stable circular orbits by the overlap of $\gamma_0$ and $D$ in the $\rho$-$z$ plane. Note that $D$ only serves to find the subset of $\gamma_0$. Here, we summarize the quantities $E_0$, $L_0$, $h_0$, and $k_0$ evaluated on $z=0$. We introduce the $(d-2)$-dimensional radial coordinate defined by $R:=\sqrt{\rho^2+a^2}$ for simplification of both calculations and expressions, where note that $R\geq a$. Let us use units in which $M=1$ in what follows. The energy and angular momentum of a particle in a circular orbit on $z=0$ are given by \begin{align} \label{eq:E0} E_0^2(\rho, 0)&=\frac{R^{2(d-3)}(R^{d-1}+2a^2)}{ (R^{d-3}+2)^2 f}, \\ \label{eq:L0} L_0^2(\rho, 0)&=\frac{2(d-3)(R^2-a^2)^2(R^{d-3}+2 )^{2/(d-3)}}{R^2f}, \end{align} respectively, where \begin{align} f(R):= R^{d-1}-2(d-3) R^2+2(d-2)a^2. \end{align} Note that $f(R)$ must be always positive on $\gamma_0$. The derivatives of $E_0(\rho, 0)$ and $L_0(\rho, 0)$ with respect to $R$ are given by \begin{align} \label{eq:dE0} \frac{\mathrm{d}E_0(\rho, 0)}{\mathrm{d}R} &=\frac{(d-3)\:\! g}{R(R^{d-3}+2)(R^{d-1}+2\:\!a^2) f}, \\ \label{eq:dL0} \frac{\mathrm{d}L_0(\rho, 0)}{\mathrm{d}R} &=\frac{g}{2 R(R^{d-3}+2)(R^2-a^2) f}, \end{align} respectively, where \begin{align} g(R):=8(d-2)a^4+\left[\:\! 2(3d-1)R^{d-1} +(d-1)R^{2(d-2)} -8(d-4)R^2 \:\!\right]a^2 \cr -6(d-3) R^{d+1} -(d-5) R^{2(d-1)}. \end{align} The signs of these derivatives on $\gamma_0$ are determined by that of $g(R)$. The quantities $h_0$ and $k_0$ evaluated on $z=0$ are given by \begin{align} \label{eq:h0} h_0(\rho, 0) &=\frac{16(d-3)^2R^{2(2d-9)}\left[\:\! R^2-(d-1)a^2\:\!\right] g}{(R^{d-3}+2)^6 f^2}, \\ \label{eq:k0} k_0(\rho, 0) &=\frac{4(d-3)R^{2(d-5)}\left[\:\! g+R^2(R^{d-3}+2)^2\left[\:\! R^2-(d-1)a^2\:\!\right] \:\!\right]}{ (R^{d-3}+2)^4 f}, \end{align} respectively. Now let us discuss some $d$-independent properties. 
One property common to these systems is that the center of the system (i.e., the center of the two black holes) is located outside the horizon. This fact leads to a common property in the structure of $V$. On $z=0$, the expansion of $V$ around $\rho=0$ is written as \begin{align} V(\rho, 0)=\frac{a^{2(d-2)} L^2}{(a^{d-3}+2)^{2(d-2)/(d-3)} \rho^2}+O(\rho^0). \end{align} Note that the power of $\rho$ in the leading term does not depend on $d$. If $L\neq0$, then $V(\rho, 0)$ diverges in the limit $\rho\to 0$, which shows the appearance of the centrifugal barrier near the center. Since the gravitational force acts attractively, there always exists a stable balance between the gravitational force and the centrifugal force in the $\rho$ direction near the center. In the $z$ direction, $V$ has a local maximum in the range $a\leq R<a \sqrt{d-1}$ because $V_z(\rho,0)=0$ from the reflection symmetry and $V_{zz}(\rho, 0)<0$ there, where \begin{align} V_{zz}(\rho, 0)=\frac{4\left[\:\! R^2-(d-1)a^2 \:\!\right]}{(R^2-a^2)(R^{d-3}+2)^3}\left[\:\! \frac{(d-2) L^2 R^{2(d-4)}}{(R^{d-3}+2)^{2/(d-3)}}+ (d-3) R^{2(d-5)}(R^2-a^2) \:\!\right]. \end{align} Hence, $V$ always has a saddle point near the center, so that no stable circular orbits appear there. \section{Stable/unstable circular orbits} \label{sec:3} \subsection{$d=5$} We consider how the sequence of circular orbits varies as the dihole separation gradually decreases from a sufficiently large value in the 5D MP spacetime. We first check the explicit forms of quantities evaluated on the symmetric plane $z=0$. From Eqs.~\eqref{eq:E0} and \eqref{eq:L0}, $E_0^2$ and $L_0^2$ in $d=5$ are given by \begin{align} \label{eq:E05} E_0^2(\rho, 0)&=\frac{R^4(R^4+2a^2)}{(R^2+2)^2f}, \\ \label{eq:L05} L_0^2(\rho, 0)&=\frac{4(R^2-a^2)^2(R^2+2)}{R^2f}, \end{align} respectively, where \begin{align} \label{eq:f5} f(R)=R^4-4R^2+6a^2.
\end{align} From Eqs.~\eqref{eq:dE0} and \eqref{eq:dL0}, the derivatives of $E_0(\rho, 0)$ and $L_0(\rho, 0)$ take the forms \begin{align} \frac{\mathrm{d}E_0(\rho, 0)}{\mathrm{d}R} &=\frac{2 g}{R(R^2+2)(R^4+2\:\!a^2) f}, \\ \frac{\mathrm{d}L_0(\rho, 0)}{\mathrm{d}R} &=\frac{g}{2 R(R^2+2)(R^2-a^2) f}, \end{align} respectively, where \begin{align} g(R)=4\left[\:\! (a^2-3)R^6+R^2(R^2-4\:\!a^2)+(7a^2-1)R^4+2\:\!a^2(R^2+3\:\!a^2) \:\!\right]. \end{align} From Eqs.~\eqref{eq:h0} and \eqref{eq:k0}, the quantities $h_0$ and $k_0$ take the forms \begin{align} h_0(\rho, 0)&=\frac{64 R^2( R^2-4a^2)g }{(R^2+2)^6f^2}, \\ k_0(\rho, 0)&=\frac{8\left[\:\! g+R^2(R^2+2)^2(R^2-4a^2) \:\!\right]}{(R^2+2)^4 f}, \end{align} respectively. Besides $z=0$, there is the following branch of $U_z=0$: \begin{align} \label{eq:z0} z=z_0(R):=\pm \sqrt{R(2\:\!a-R)}, \end{align} where $a<R\leq 2a$ (i.e., $0<\rho\leq \sqrt{3} a$). The particle energy and angular momentum in a circular orbit on $z=z_0$ are given by \begin{align} \label{eq:E0z0} E_0^2(\rho, z_0) &=\frac{4\:\!a^2(R-a)^3(4\:\!aR+1)}{ \left[\:\!2a(R-a)+1\:\!\right]^2 F}, \\ \label{eq:L0z0} L_0^2(\rho, z_0)&=\frac{\left[\:\! 2a(R-a)+1 \:\!\right](R+a)^2}{aF}, \end{align} respectively, where \begin{align} F(R):=4\:\!a R^2-(4\:\!a^2+1) R-3\:\!a. \end{align} Note that $F(R)$ must be always positive on $\gamma_0$. The derivatives of $E_0(\rho, z_0)$ and $L_0(\rho, z_0)$ with respect to $R$ are given by \begin{align} \label{eq:dE0c/dR} \frac{\mathrm{d}E_0(\rho, z_0)}{\mathrm{d} R} &= \frac{G}{(R-a) \left[\:\! 2\:\!a (R-a)+1 \:\!\right](4\:\!a R+1 ) F}, \\ \label{eq:dL0c/dR} \frac{\mathrm{d}L_0(\rho, z_0)}{\mathrm{d} R} &=\frac{G}{2(R+a)\left[\:\! 2\:\!a(R-a)+1 \:\!\right]F}, \end{align} respectively, where \begin{align} G(R):=8\:\!a^2(R-a)^3-4a\:\!R^2-(28\:\!a^2+1) R+a(8\:\!a^2-5). \end{align} The signs of these quantities on $\gamma_0$ are determined by that of $G(R)$ . 
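The closed forms on the branch $z=z_0$ can be cross-checked against the general expression for $L_0^2$ in Eq.~\eqref{eq:L=L0}. A hedged numerical sketch (units $M=1$; the sample values $a=1$, $R=1.8$ are illustrative and lie in $R_{\mathrm{L}}<R\leq 2a$, where $F>0$):

```python
import math

# illustrative sample point on the z = z_0 branch (d = 5, M = 1)
a, R = 1.0, 1.8
z0 = math.sqrt(R * (2 * a - R))
rho = math.sqrt(R**2 - a**2)      # since R = sqrt(rho^2 + a^2)

rp2 = (z0 + a)**2 + rho**2        # r_+^2
rm2 = (z0 - a)**2 + rho**2        # r_-^2
u = 1.0 + 1.0 / rp2 + 1.0 / rm2   # U in d = 5
u_z = -2 * ((z0 + a) / rp2**2 + (z0 - a) / rm2**2)
u_rho = -2 * rho * (1.0 / rp2**2 + 1.0 / rm2**2)

# general: L_0^2 = -(d-3) rho^3 U_rho U^{2/(d-3)} / [(d-3)U + (d-2) rho U_rho]
L2_general = -2 * rho**3 * u_rho * u / (2 * u + 3 * rho * u_rho)

# closed form on z = z_0: L_0^2 = [2a(R-a)+1](R+a)^2 / (a F)
F = 4 * a * R**2 - (4 * a**2 + 1) * R - 3 * a
L2_branch = (2 * a * (R - a) + 1) * (R + a)**2 / (a * F)

print(abs(u_z) < 1e-12)                    # the branch indeed solves U_z = 0
print(abs(L2_general - L2_branch) < 1e-9)  # ~ 21.233 from both routes
```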
The quantities $h_0$ and $k_0$ take the forms \begin{align} h_0(\rho, z_0)&=\frac{128\:\!a^2(R-a)^3(2\:\!a-R)G}{RF^2\left[\:\! 2\:\!a(R-a)+1 \:\!\right]^6}, \\ k_0(\rho, z_0) &=\frac{8\:\!a^2(R-a)\left[\:\! 8\:\!a^2R^4-4\:\!a(4\:\!a^2+1)R^3+(8\:\!a^4-8\:\!a^2-1)R^2+12a^3 R+3\:\!a^2 \:\!\right]}{R^2 \left[\:\! 2\:\!a(R-a)+1 \:\!\right]^4 F}, \end{align} respectively. The sign of $h_0(\rho, z_0)$ on $\gamma_0$ is also determined by that of $G(R)$. For $a\gg1$, the sequence of circular orbits is of a typical form as shown in Fig.~\ref{fig:d=5}-(a). The black solid line shows $\gamma_0$, and the blue shaded region shows the region $D$. On $z=0$, stable circular orbits exist within the range of $\sqrt{3}\:\!a \leq \rho<\infty$ because both $h_0(\rho, 0)$ and $k_0(\rho, 0)$ are positive in this range [i.e., $g(R)\geq 0$]. The green point $(\rho, z)=(\sqrt{3}a, 0)$ corresponds to a marginally stable circular orbit~(MSCO), where $h_0$ vanishes, i.e., $g(R)=0$. Furthermore, stable circular orbits also appear on the branch $z=z_0$ and extend from the MSCO to the ISCOs, which correspond to the red points, where $h_0$ also vanishes, i.e., $G(R)=0$. On the sequences of stable circular orbits, we have $\mathrm{d}E_0/\mathrm{d}R>0$ and $\mathrm{d}L_0/\mathrm{d}R>0$. The sequences $\gamma_0$ appearing outside the region $D$ are those of unstable circular orbits. They are distributed in $0\leq \rho\leq \sqrt{3} a$ on the $z=0$ plane and in $R_{\mathrm{L}}<R\leq 2a$ on $z=z_0$, where $R_{\mathrm{L}}$ is given as a solution to $F=0$, \begin{align} R_{\mathrm{L}}:=\frac{1}{8\:\!a}\left[\:\! 4\:\!a^2+1+\sqrt{16\:\!a^4+56\:\!a^2+1} \:\!\right]. \end{align} The positions $(R, z)=(R_{\mathrm{L}}, z_0(R_{\mathrm{L}}))$ are shown by white points in Fig.~\ref{fig:d=5}-(a). Note that $E_0^2$ and $L_0^2$ become infinite here. 
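As a numerical illustration of the last statement, $R_{\mathrm{L}}$ is the larger root of the quadratic $F(R)=0$, and $L_0^2$ on the branch grows without bound as $R\to R_{\mathrm{L}}^{+}$; a short sketch with the illustrative value $a=4$:

```python
import math

def F(R, a):
    # F(R) = 4 a R^2 - (4 a^2 + 1) R - 3 a  (d = 5, z = z_0 branch, M = 1)
    return 4 * a * R**2 - (4 * a**2 + 1) * R - 3 * a

def L2_branch(R, a):
    # L_0^2 on z = z_0: [2a(R-a)+1](R+a)^2 / (a F)
    return (2 * a * (R - a) + 1) * (R + a)**2 / (a * F(R, a))

a = 4.0                                  # illustrative separation
R_L = (4 * a**2 + 1 + math.sqrt(16 * a**4 + 56 * a**2 + 1)) / (8 * a)
print(abs(F(R_L, a)) < 1e-9)             # R_L solves F(R) = 0
print(a < R_L <= 2 * a)                  # lies on the z = z_0 branch
print(L2_branch(R_L + 1e-8, a) > 1e6)    # L_0^2 blows up as R -> R_L^+
```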
This corresponds to photon circular orbits because the divergence of these per-unit-mass quantities indicates the massless limit and the ratio $L_0/E_0$ remains finite even in this limit. Consequently, we find that the last circular orbits correspond to unstable photon circular orbits. \begin{figure}[t] \centering \includegraphics[width=15.7cm,clip]{D5.pdf} \caption{Sequences of circular orbits in the 5D MP dihole spacetimes. Units in which $M=1$ are used. Each black solid line shows $\gamma_0$, which is a sequence of circular orbits. Each blue shaded region shows $D$, in which circular orbits are stable. The boundaries of $D$ are determined by $h_0=0$. Points colored by red, green, and blue denote the ISCO, MSCO, and OSCO, respectively. Each white point indicates an unstable photon circular orbit. (a) $\gamma_0$ between the red and green points overlaps $D$. (b) $\gamma_0$ between the red and white points does not overlap $D$.} \label{fig:d=5} \end{figure} As the value of $a$ gradually decreases, the ISCOs approach the MSCO. Eventually, they merge when $a$ reaches a specific value, $a_0$, as shown in Fig.~\ref{fig:d=5}-(b). Then the two real roots of $h_0(\rho, z_0)=0$ must be degenerate at $R=2a$, i.e., \begin{align} G(2a)=a (8\:\!a^4-64\:\!a^2-7)=0. \end{align} As a solution to this equation, we define \begin{align} a_0:=\frac{1}{2}\sqrt{16+3\sqrt{30}}=2.8474\cdots. \end{align} When $a$ lies in the range $a_*\leq a\leq a_0$, where \begin{align} a_*:=\sqrt{3}, \end{align} the quantities $h_0(\rho, 0)$ and $k_0(\rho, 0)$ are nonnegative in the half-line region $\sqrt{3}\:\!a \leq \rho<\infty$,% \footnote{If $a\geq \sqrt{3}$, then $g(R)>0$ and $f(R)>0$ in the range $2\:\!a \leq R<\infty$.} and hence, stable circular orbits appear there. The point $(\rho, z)=(\sqrt{3} a, 0)$ corresponds to the ISCO, colored by red in Figs.~\ref{fig:d=5}-(b) and \ref{fig:d=5}-(c), where $h_0(\rho, 0)=0$. Let us now consider the reason why stable circular orbits exist at infinity for $a\geq a_*$.
The asymptotic expansion of $V(\rho, 0)$ at $\rho\to \infty$ is given by \begin{align} \label{eq:expa} V(\rho, 0)-1=-\frac{4-L^2}{\rho^2}+\frac{ 4(a^2-3)+6(4-L^2)}{\rho^4}+O(\rho^{-6}). \end{align} If $L^2<4$, the leading term, the sum of the Newtonian gravitational potential and the centrifugal potential, is negative, and if $4(a^2-3)+6(4-L^2)>0$, the subleading term is positive. Furthermore, if $0<4-L^2\ll 1$, then we find a local minimum point of $V(\rho, 0)$ in the asymptotic region, \begin{align} \rho\simeq 2\sqrt{\frac{2(a^2-3)}{4-L^2}}. \end{align} In order for this value to be a nonzero real number, we need $a>a_*$. For the marginal case $a=a_*$, we also find a local minimum point of $V(\rho, 0)$ in the asymptotic region,% \footnote{ The expansion~\eqref{eq:expa} around $a=a_*$ is \begin{align} \label{eq:Veps} V(\rho, 0)-1 &= \sum_{l=1}^{\infty}V_{(2l)},\quad V_{(2)}=-\epsilon/\rho^2, \quad V_{(4)}=6\:\!\epsilon/\rho^4, \quad V_{(6)}=14(2-3\epsilon)/\rho^6, \end{align} where $\epsilon=4-L^2>0$. Comparing each term in the limit $\rho\to \infty$ and $\epsilon \to 0$, we have \begin{align} \left|\:\!V_{(4)}/V_{(2)}\:\!\right| =O(\rho^{-2}), \quad \left|\:\!V_{(6)}/V_{(2)}\:\!\right| =O(\epsilon^{-1}\rho^{-4}), \quad \left|\:\!V_{(6)}/V_{(4)}\:\!\right| =O(\epsilon^{-1}\rho^{-2}). \end{align} If $\epsilon$ and $ \rho$ satisfy $\epsilon \rho^4=O(1)$ in this limit, $V_{(2)}$ and $V_{(6)}$ are dominant even in the asymptotic region, and $V_{(4)}$ is negligible. Then, we find a local minimum point of $V(\rho,0)$ in the asymptotic region, $\rho\simeq \sqrt[4]{84}\epsilon^{-1/4}$.} and therefore, stable circular orbits appear in $3\leq \rho <\infty$ [see Fig.~\ref{fig:d=5}-(c)]. Hence, we can conclude that there exist stable circular orbits even at infinity in the range $a\geq a_*$. Note that in the case of a 5D static and spherically symmetric black hole, there is no stable circular orbit~\cite{Tangherlini:1963bw, Hackmann:2008tu}.
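The expansion~\eqref{eq:expa} can be confirmed numerically by comparing it with the exact $d=5$ potential on $z=0$ at large $\rho$; in the following sketch the values $a=2$, $L^2=3.9$, and $\rho=50$ are illustrative (units $M=1$, $\kappa=1$):

```python
def V5(rho, a, L2, kappa=1.0):
    # exact d = 5 effective potential on z = 0:  U = 1 + 2/R^2,  R^2 = rho^2 + a^2
    u = 1.0 + 2.0 / (rho**2 + a**2)
    return L2 / (rho**2 * u**3) + kappa / u**2

a, L2, rho = 2.0, 3.9, 50.0
exact = V5(rho, a, L2) - 1.0
approx = -(4 - L2) / rho**2 + (4 * (a**2 - 3) + 6 * (4 - L2)) / rho**4
print(abs(exact - approx) < 1e-7)   # two-term expansion holds up to O(rho^-6)
```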
This implies that the existence of stable circular orbits is due to the dihole separation, and furthermore, in the range of $a\geq a_*$, its effects can be observed at infinity. When $a<a_*$, there are no longer stable circular orbits at infinity. However, for $a_*>a>a_{\mathrm{c}}$, where $a_{\mathrm{c}}$ is determined by the discussion below, a sequence of stable circular orbits forms a segment of finite length on $z=0$, as shown in Fig.~\ref{fig:d=5}-(d). The segment appears in the interval \begin{align} R_{\mathrm{ISCO}}\leq R\leq R_{\mathrm{OSCO}}, \end{align} where $R_{\mathrm{ISCO}}:=2a$ and $R_{\mathrm{OSCO}}$ correspond to, respectively, the radii of the ISCO and the outermost stable circular orbit (OSCO), which solve $h_0(\rho, 0)=0$ and are denoted, respectively, by the red point and the blue point in Fig.~\ref{fig:d=5}-(d). The radii of both the ISCO and the OSCO monotonically decrease as $a$ decreases and degenerate at $a=a_{\mathrm{c}}$, where \begin{align} a_{\mathrm{c}}:=\frac{\sqrt{10+6\sqrt{3}}}{4}=1.1289\cdots, \end{align} which is determined by the condition that two roots of $h_0(\rho, 0)=0$ degenerate at $R=2a$, i.e., \begin{align} g(2a)=8a^4 (32a^4-40a^2-1)=0. \end{align} The sequences of unstable circular orbits are distributed in $a\leq R\leq 2a$ and $R_{\mathrm{OSCO}}<R<\infty$ on $z=0$ and in $R_{\mathrm{L}}<R\leq 2a$ on $z=z_0$. In the range $a\leq a_{\mathrm{c}}$, there is no overlapping set of $\gamma_0$ and $D$, i.e., there is no stable circular orbit. Therefore, we focus only on the $a$-dependent deformation of $\gamma_0$. As $a$ decreases from $a_{\mathrm{c}}$ to $a_{\infty}$, the outline of $\gamma_0$ remains the same as that in Fig.~\ref{fig:d=5}-(e), where \begin{align} a_\infty:=\frac{\sqrt{6}}{3}=0.8164\cdots. \end{align} When $a=a_\infty$, the function $f(R)$ in Eq.~\eqref{eq:f5} vanishes only at $R=\sqrt{2}$ (i.e., $\rho=2\sqrt{3}/3$), where $E_0$ and $L_0$ diverge.
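The critical separations introduced so far are easy to verify directly from the quoted algebraic conditions; a short numerical sketch:

```python
import math

# a_0: ISCO and MSCO merge, G(2a) = a (8 a^4 - 64 a^2 - 7) = 0
a0 = 0.5 * math.sqrt(16 + 3 * math.sqrt(30))
print(abs(a0 - 2.8474) < 1e-4, abs(8 * a0**4 - 64 * a0**2 - 7) < 1e-9)

# a_c: ISCO and OSCO degenerate, g(2a) = 8 a^4 (32 a^4 - 40 a^2 - 1) = 0
ac = math.sqrt(10 + 6 * math.sqrt(3)) / 4
print(abs(ac - 1.1289) < 1e-4, abs(32 * ac**4 - 40 * ac**2 - 1) < 1e-9)

# a_inf: f(R) = R^4 - 4 R^2 + 6 a^2 acquires a double root at R = sqrt(2)
ainf = math.sqrt(6) / 3
R = math.sqrt(2.0)
f, df = R**4 - 4 * R**2 + 6 * ainf**2, 4 * R**3 - 8 * R
print(abs(f) < 1e-12, abs(df) < 1e-12)
```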
The divergence of $E_0$ and $L_0$ means that there exists an unstable circular orbit not for massive particles but for massless particles, which is shown by a white point on the $z=0$ plane in Fig.~\ref{fig:d=5}-(f). In the range $a< a_{\infty}$, the sequence of $\gamma_0$ on $z=0$ separates into two parts. Note that the outer boundary of the inner sequence and the inner boundary of the outer sequence are unstable photon circular orbits, which are located at $R=[\:\!2\pm\sqrt{2(2-3\:\!a^2)}\:\!]^{1/2}$. When $a=a_1$, the sequence of $\gamma_0$ on $z=z_0$ vanishes at $(R, z)=(2a, 0)$, and at the same time the inner boundary of the outer sequence on the $z=0$ plane vanishes at the same point, i.e., $R_{\mathrm{L}}=2\:\!a= [\:\!2+\sqrt{2(2-3\:\!a^2)}\:\!]^{1/2}$, where \begin{align} a_1:=\frac{\sqrt{10}}{4}=0.7905\cdots. \end{align} For $a<a_1$, the sequence on $z=z_0$ no longer appears, and only the inner and outer sequences on $z=0$ remain. In the limit $a\to0$, the geometry approaches the extremal Reissner-Nordstr\"om black hole spacetime with mass $2$. Then the inner sequence on $z=0$ disappears, and the inner boundary of the outer sequence, the unstable photon circular orbit, limits to $(\rho, z)=(2, 0)$ (see the Appendix). We summarize the dependence of characteristic radii on $a$ in Fig.~\ref{fig:CO5}. \begin{figure}[t] \centering \includegraphics[width=8cm,clip]{CO5.pdf} \caption{Dependence of characteristic radii on $a$ in the 5D MP dihole spacetime. Units in which $M=1$ are used. The green, red, and blue curves show the MSCO, ISCO, and OSCO, respectively. Note that the OSCO appears only in the range $a_{\mathrm{c}}\leq a<a_*$. The black dashed and dot-dashed curves show unstable photon circular orbits on $z=0$ and $z=z_0$, respectively.} \label{fig:CO5} \end{figure}% \subsection{$d\geq6$} We consider the dependence of the sequences of circular orbits on the separation in the 6D MP dihole spacetime.
We summarize several quantities associated with circular orbits and their stability. From Eqs.~\eqref{eq:E0} and \eqref{eq:L0}, $E_0^2$ and $L_0^2$ in $d=6$ are given by \begin{align} E_0^2(\rho, 0)&=\frac{R^6(R^5+2\:\!a^2)}{(R^3+2)^2f}, \\ L_0^2(\rho, 0)&=\frac{6(R^2-a^2)^2(R^3+2)^{2/3}}{R^2 f}, \end{align} respectively, where \begin{align} f(R):=R^5-6R^2+8\:\!a^2. \end{align} From Eqs.~\eqref{eq:dE0} and \eqref{eq:dL0}, the derivatives of $E_0(\rho, 0)$ and $L_0(\rho, 0)$ take the form \begin{align} \frac{\mathrm{d}E_0(\rho, 0)}{\mathrm{d} R} &=\frac{3g}{R(R^3+2)(R^5+2\:\!a^2)f}, \\ \frac{\mathrm{d}L_0(\rho, 0)}{\mathrm{d} R} &=\frac{g}{2R(R^3+2)(R^2-a^2)f}, \end{align} respectively, where \begin{align} g(R):=-2(R^2-a^2)\left[\:\! 5R^2(R^3+2)+2f \:\!\right] -R^2(R^2-5\:\!a^2)(R^3+2)^2. \end{align} From Eqs.~\eqref{eq:h0} and \eqref{eq:k0}, the quantities $h_0$ and $k_0$ take the forms \begin{align} h_0(\rho, 0)&=\frac{144 R^6 (R^2-5\:\!a^2)g}{(R^3+2)^6 f^2}, \\ k_0(\rho, 0)& =-\frac{24 R^2 (R^2-a^2)\left[\:\! 5R^2(R^3+2)+2f\:\!\right] }{(R^3+2)^4f}, \end{align} respectively. Besides $z=0$, there exists the following branch that satisfies $U_z=0$: \begin{align} z=z_0(R). \end{align} Unlike Eq.~\eqref{eq:z0} in the case $d=5$, however, it is not possible to write $z_0$ explicitly in this case because the condition is given by an algebraic equation of degree five or higher. In the case $a\gg 1$, the sequences of circular orbits typically take the shape depicted in Fig.~\ref{fig:d=6}-(a), where the black solid curves are $\gamma_0$. The set $\gamma_0$ contains $z=0$ and a part of $z=z_0$. The angular momentum $L_0^2$ and energy $E_0^2$ diverge at the boundaries of $\gamma_0$ on $z=z_0$, which correspond to the two white points; there, unstable circular orbits exist for massless rather than massive particles.
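The function $f(R)$ defined above controls where $E_0^2$ and $L_0^2$ diverge. As a numerical aside (ours, anticipating the critical separation $a_\infty$ identified below), $f$ first develops a double positive root, at $R=\sqrt[3]{12/5}$, when $a=\frac{3}{10}\sqrt[6]{720}\simeq 0.898$:

```python
a_crit = 0.3 * 720.0 ** (1.0 / 6.0)     # (3/10) * 720^(1/6) = 0.8981...
R_crit = (12.0 / 5.0) ** (1.0 / 3.0)    # root of f'(R) = 5 R^4 - 12 R

f_val = R_crit**5 - 6.0 * R_crit**2 + 8.0 * a_crit**2   # f(R_crit)
fp_val = 5.0 * R_crit**4 - 12.0 * R_crit                # f'(R_crit)

print(f_val, fp_val)   # both ~ 0: a double root of f
```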
As is seen from the asymptotic expansion of $V(\rho, 0)$ at $\rho\to \infty$, \begin{align} V(\rho, 0)-1=\frac{L^2}{\rho^2}-\frac{4}{\rho^3} +O(\rho^{-5}), \end{align} the leading term (the centrifugal potential) and the subleading term (the Newtonian gravitational potential) do not make a potential well, so that there is no stable circular orbit in the asymptotic region. Stable circular orbits are absent not only in the asymptotic region but in the whole region. In fact, unlike in $d=4, 5$, the region $D$ does not appear in $d=6$. We can interpret this as the centrifugal force barrier at the center being ineffective in creating a local minimum of $V$. When $a$ takes the value \begin{align} a_\infty:=\frac{3}{10}\sqrt[6]{720}=0.8981\cdots, \end{align} there appears a (white) point on $z=0$ [corresponding to $R=R_\infty:= \sqrt[3]{12/5}$ (i.e., $\rho=\rho_\infty:=\sqrt[6]{720}\sqrt{11}/10$)] such that $f(R)$ vanishes, i.e., $E_0^2$ and $L_0^2$ diverge. This indicates the appearance of an unstable photon circular orbit. As $a$ decreases further, the outer boundary of the inner sequence approaches the $z$ axis, whereas the inner boundary of the outer sequence moves away from the $z$ axis [see Fig.~\ref{fig:d=6}-(c)], and at $a=a_1$, where \begin{align} a_1:= \dfrac{\sqrt[3]{22}\sqrt[6]{5}}{5}=0.7328\cdots, \end{align} the latter coincides with the point $(\rho, z)=(2a_1, 0)$, where $\rho_1=2\:\!a_1$. At the same time, the boundaries of $\gamma_0$ on $z=z_0$ also limit to the same point. In other words, the three unstable photon circular orbits are degenerate there~[see Fig.~\ref{fig:d=6}-(d)]. For $a\leq a_1$, $\gamma_0$ contains only two sequences on $z=0$. In the limit $a\to 0$, the inner sequence vanishes at the origin, and the inner boundary of the outer sequence limits to $(\rho, z)=(2, 0)$ (see the Appendix). \begin{figure}[t] \centering \includegraphics[width=11.3cm]{D6.pdf} \caption{ Sequences of circular orbits in the 6D MP dihole spacetime.
Units in which $M=1$ are used. Each black solid line shows $\gamma_0$. Each white point indicates an unstable photon circular orbit. } \label{fig:d=6} \end{figure} Finally, we comment on how the $a$ dependence of the sequences of circular orbits changes as $d$ increases. At least for $d=7, 8, 9, 10$, we have checked that the change in the sequences of circular orbits is qualitatively the same as in the case of $d=6$. Indeed, it can be seen from Fig.~\ref{fig:PCO} that the dependence of the radii of unstable photon circular orbits on $a$ is qualitatively the same for $d=6, 7, \ldots, 10$. The critical values of the separation and the radius, $a_{\infty}$, $ \rho_{\infty}$, $a_1$, and $ \rho_1$, which are defined in the same sense as in the $d=6$ case, are summarized in Table~\ref{table:1}. Note that $\rho_1=\sqrt{d-2} a_1$. This result suggests that the qualitative properties of stable/unstable circular orbits are common in the MP dihole spacetime in $d\geq 6$. \begin{table}[t] \begin{tabular}{lllll} \hline\hline $d$~~~~~~~~~&$a_1$&$\rho_1$&$a_{\infty}$&$\rho_{\infty}$ \\ \hline \\[-5mm] $6$& $\dfrac{\sqrt[3]{22}\sqrt[6]{5}}{5}=0.7328\cdots$& $2\:\!a_1=1.4656\cdots$&$\dfrac{3}{10}\sqrt[6]{720}=0.8981\cdots$& $\dfrac{\sqrt[6]{720} \sqrt{11}}{10}=0.9929\cdots$ \\[2mm] $7$& $\dfrac{\sqrt{2}\sqrt[4]{57}}{6}=0.6476\cdots$~~~~~& $\sqrt{5}\:\!a_1=1.4481\cdots$& $\dfrac{4\sqrt[4]{150}}{15}=0.9332\cdots$& $\dfrac{\sqrt{35}\sqrt[4]{24}}{15}=0.8729\cdots$ \\[2mm] $8$& $\dfrac{\sqrt[5]{58}}{\sqrt[10]{7^7}}=0.5769\cdots$& $\sqrt{6}\:\!a_1=1.4131\cdots$& $\dfrac{5\sqrt{3}\sqrt[5]{5}\sqrt[10]{7^3}}{21\sqrt[10]{2}}=0.9517\cdots$& $\dfrac{\sqrt[5]{5}\sqrt[10]{7^3}\sqrt{51}}{\sqrt[10]{2} 21}=0.7848\cdots$ \\[2mm] $9$& $\dfrac{\sqrt[6]{82}}{4}=0.5210\cdots$& $\sqrt{7}\:\!a_1=1.3786\cdots$& $\dfrac{3\sqrt[6]{3}}{\sqrt{14}}=0.9628\cdots$& $\sqrt{\dfrac{5}{14}}3^{1/6}=0.7176\cdots$ \\[2mm] $10$& $\dfrac{1}{3}\sqrt[7]{\dfrac{110}{9}}=0.4766\cdots$& $2\sqrt{2}\:\!a_1=1.3481\cdots$&
$\dfrac{7\sqrt[7]{7}}{6\sqrt[14]{8}\sqrt[7]{9}}=0.9701\cdots$& $\dfrac{\sqrt[7]{7} \sqrt{23}}{6\sqrt[14]{8}\sqrt[7]{9}}=0.6646\cdots$ \\[2mm] \hline\hline \end{tabular} \caption{Critical values of $a_1$, $\rho_1$, $a_{\infty}$, and $\rho_\infty$ for $d=6, 7, 8, 9, 10$. } \label{table:1} \end{table} \begin{figure}[t] \centering \includegraphics[width=16.0cm]{PCO.pdf} \caption{ Dependence of characteristic radii on $a$ in $d=6, 7, \ldots, 10$. Units in which $M=1$ are used. The black dashed and dot-dashed curves show unstable photon circular orbits on $z=0$ and $z=z_0$, respectively. } \label{fig:PCO} \end{figure} \section{Summary and discussions} \label{sec:4} We have considered the dynamics of particles, focusing on circular orbits, in the $d$-dimensional MP dihole spacetime~($d\geq 5$). Using the on-shell conditions for geodesic motion, we have clarified the conditions for the existence of circular orbits in terms of a 2D effective potential and have also provided a prescription for determining whether these orbits are stable. Applying this formalism to the case of $d=5$, we have shown the dependence of the sequences of stable/unstable circular orbits on the dihole separation. One of the most remarkable features is the appearance of stable circular orbits, because previous works showed that they are not found for single black holes with a spherical horizon~\cite{Tangherlini:1963bw, Hackmann:2008tu}. In particular, for large separation $a\geq a_*=\sqrt{3}$, they appear from the ISCO to infinity, whereas for $a_{\mathrm{c}}(=1.1289\cdots)<a<a_*$, they exist only in a restricted region between the ISCO and the OSCO. Therefore, we can interpret this phenomenon as a consequence of the existence of two horizons. In other words, the center of the system is shifted off the horizon, and as a result, the centrifugal barrier near the center affects the existence of stable circular orbits in the intermediate region.
Furthermore, the existence of stable circular orbits in the asymptotic region is also affected by the power law of the gravitational force specific to 5D. Note that, as shown in Refs.~\cite{Wunsch:2013st, Nakashi:2019mvs} for the 4D MP dihole spacetime, stable circular orbits exist in the asymptotic region for arbitrary separations. Therefore, we can conclude that the appearance of the OSCO is a phenomenon peculiar to 5D. In the cases of $d=6, 7, \ldots, 10$, we have found that there is no stable circular orbit for any value of $a$. These results suggest that stable circular orbits do not appear for $d\geq 6$ in general. On the other hand, the way the sequences of circular orbits change with the separation $a$ for $d\geq 6$ is qualitatively the same as in $d=5$. We expect this property to be dimension independent for $d\geq 6$. It is worth noting that although a stable photon circular orbit is one of the characteristic geodesic structures of the 4D MP dihole spacetime, it is absent for $d\geq 5$. As seen in the Schwarzschild-Tangherlini and Myers-Perry black hole backgrounds, stable circular orbits tend not to appear in any dimension $d\geq 5$. This seems to be a property common to black objects in a large number of dimensions. In fact, even in the MP dihole spacetime, we have found that stable circular orbits tend to be absent in any dimension $d\geq 6$. However, our results in 5D imply that there is a mechanism by which stable circular orbits can exist even in higher dimensions through many-body effects, i.e., the presence of a center located outside the horizons. In contrast to the fact that the metric is analytic on the horizon of the 4D MP multi-black hole~\cite{Hartle:1972ya}, the horizons of higher-dimensional ones are generally not smooth~\cite{Welch:1995dh, Candlish:2007fh}. For $d=5$, the metric can be $C^2$ on the horizon but cannot be $C^3$ in general.
For $d>5$, the metric is not even $C^2$ on the horizon, which leads to unavoidable curvature singularities. Hence, it should be noted that the results we have obtained in the case of $d\geq 6$ may include the effect of these singularities. For comparison with observations, we should discuss a higher-dimensional model of the universe, e.g., a higher-dimensional black hole spacetime in which the extra dimensions are compactified. In particular, a 5D Kaluza-Klein black hole spacetime with a twisted $S^1$ fiber behaves as a 5D spacetime near the horizon ($S^3$ topology), whereas it effectively behaves as a 4D flat spacetime with a compact extra dimension in the asymptotic region~\cite{Tomizawa:2011mc, Tomizawa:2018syg}. From this point of view, 5D Kaluza-Klein black holes interpolate between 4D and 5D spacetimes and share the features of both. Therefore, we expect that the size of the extra dimension should affect the sequences of stable circular orbits. This is an interesting issue for the future. \begin{acknowledgments} This work was supported by the Grant-in-Aid for Early-Career Scientists~[JSPS KAKENHI Grant No.~JP19K14715 (T.I.)] and Grant-in-Aid for Scientific Research (C) [JSPS KAKENHI Grant No.~JP17K05452 (S.T.)] from the Japan Society for the Promotion of Science. S.T. is also supported by the Toyota Institute of Technology Fund for Research Promotion A. \end{acknowledgments}
\section{Introduction} \par Occasionally in the development of quantum theory and quantum field theory, something fundamental and simple is overlooked. This is the case with the introduction of the ordered Poisson bracket and its consequences. It is shown in this paper that the time-dependent Schr\"odinger equation and the commutation relation between position and momentum, the quantum bracket $[\hat q, \hat p]=i\hbar$ \cite{Dirac}, are in fact consequences of the principle of invariance under a one-parameter canonical transformation of the c-number symmetric bracket. Furthermore, the relation between expectation values and classical dynamics and the probability interpretation of quantum theory are consequences of this procedure. In addition, a c-number dynamical equation is derived, which provides the fundamental condition for the boson and fermion operator commutation relations. \par Although the idea of the symmetric analog of the Poisson bracket has appeared in the theory of differential geometry and algebraic ideals \cite{D-V}, and in classical constraint dynamics \cite{FandK}, its clear relevance to fundamental physics has not until now been demonstrated. The idea of the ordered Poisson bracket and related symmetric and antisymmetric brackets has been introduced in \cite{GandK} to provide a c-number analog of the usual boson commutator and fermion anticommutator for quantum fields. From the basic concept of the ordered bracket, the antisymmetric and symmetric brackets are defined. The principle of invariance of the antisymmetric bracket under a one-parameter canonical transformation leads to Hamilton's dynamical equations, and the generator of this transformation is the Hamiltonian. What is new and surprising is that the analogous property for the symmetric bracket leads to Schr\"odinger's equation, and the generator of the one-parameter canonical transformation in this case is the expectation value of the Hamiltonian operator.
Furthermore, these c-number brackets provide a natural derivation of the boson and fermion commutation relations when operator infinitesimal time development equations are sought which have the c-number equations as a displacement state expectation value. \par In this paper dimensionless phase space coordinates are used such that $q_i\rightarrow q_i/q_o$, $p_i\rightarrow p_i/p_o$, and $q_o p_o=\hbar$. For a given mass $M_o$, the natural units of length, time, and energy are, respectively, $\lambda={2\pi{\hbar}/{M_oc}}$, $T_o={2\pi{\hbar}/{M_oc^2}}$, and ${E_o}=M_oc^2=\hbar\omega_o$. If $M_o$ is chosen to be the Planck mass, $M_P=\sqrt{\hbar c/G_N}$, then these units can be expressed in terms of natural physical constants (Planck's reduced constant $\hbar$, the speed of light $c$, and the Newtonian gravitational constant $G_N$). \section{Complex phase space and c-number brackets} \par Ordinary classical dynamics is usually discussed in terms of real-valued phase space vector variables of the form $(\vec q,\vec p)$. However, its relation to quantum theory and to fermion systems is much more transparent if one changes these real phase space vector variables to the complex-valued dimensionless phase space vector variables $\vec a \equiv (\vec q + i\vec p)/\sqrt{2}$ and their complex conjugates $\vec a^{\, *} = (\vec q - i\vec p)/\sqrt{2}$. In terms of components of both of these types of phase space vector variables, the usual Poisson bracket of ordinary classical dynamics is \begin{eqnarray} \{f,\, g\} &&\equiv \sum_k\left ({\partial f\over\partial q_k} {\partial g\over\partial p_k} - {\partial g\over\partial q_k} {\partial f\over\partial p_k}\right )\nonumber\\ &&= -\,{i} \sum_k\left ({\partial f\over\partial a_k} {\partial g\over\partial a^*_k} - {\partial g\over\partial a_k} {\partial f\over\partial a^*_k}\right ).
\label{notes1}\end{eqnarray} From the second Poisson bracket representation given above, we abstract the semi-bracket, which we call the ordered Poisson bracket, \begin{equation} \{f\circ g\}=\sum_k{\partial f\over\partial a_k} {\partial g\over\partial a^*_k} ={\partial f\over\partial{\vec a}}\cdot {\partial g\over\partial{\vec a}^*}. \label{notes2}\end{equation} We note that while $\{f\circ g\}$ is linear in each of its two argument functions $f$ and $g$, it is neither antisymmetric nor symmetric under their interchange. However, it does satisfy the identity $\{f\circ g\} = \{g^*\circ f^*\}^*$, which is in algebraic correspondence with the Hermitian conjugation formula for the product of two Hilbert-space operators, i.e., $\hat f\hat g = (\hat g^{\dagger}\hat f^{\dagger} )^{\dagger}$. From Eq. (\ref{notes2}) we define the c-number symmetric and antisymmetric brackets \begin{equation} \{f,\, g\}_{\pm} \equiv \{f\circ g\} \pm \{g\circ f\}, \label{notes2a}\end{equation} where we note $\{f,g\}=-i\{f,g\}_-$. We readily calculate the c-number symmetric and antisymmetric brackets for the components of $\vec a$ and $\vec a^{\, *}$, \begin{equation} \{a_i,\, a_j\}_{\pm} = 0 = \{a_i^*,\, a_j^*\}_{\pm}, \quad \{a_i,\, a_j^*\}_{\pm} = \delta_{ij} = \pm\{a_j^*,\, a_i\}_{\pm}. \label{notes4}\end{equation} \par Infinitesimal canonical transformations, which leave the brackets invariant, are now introduced. The canonical transformations of ordinary classical dynamics are mappings of the complex phase space vectors $\vec a\to\vec A(\vec a,\vec a^{\, *})$ and $\vec a^{\, *}\to \vec A^*(\vec a,\vec a^{\, *})$ which preserve the antisymmetric c-number Poisson bracket relations among the complex phase space vector components. We also consider canonical transformations of complex vector phase space mappings which preserve the c-number symmetric bracket relations among the complex phase space vector components.
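The identity $\{f,g\}=-i\{f,g\}_-$ and the relations in Eq. (\ref{notes4}) are easy to verify numerically. The following sketch (our illustration, with arbitrary sample polynomials $f$ and $g$ and a single degree of freedom) builds the $a$- and $a^*$-derivatives from $q$- and $p$-derivatives and checks the identity at a sample phase-space point:

```python
SQ2 = 2 ** 0.5

def f(q, p): return q**2 * p          # sample phase-space function
def g(q, p): return q + p**3          # sample phase-space function

def partials(func, q, p, h=1e-6):
    """Central-difference partial derivatives with respect to q and p."""
    dq = (func(q + h, p) - func(q - h, p)) / (2 * h)
    dp = (func(q, p + h) - func(q, p - h)) / (2 * h)
    return dq, dp

def wirtinger(func, q, p):
    """d/da and d/da* obtained from d/dq and d/dp via a = (q + i p)/sqrt(2)."""
    dq, dp = partials(func, q, p)
    return (dq - 1j * dp) / SQ2, (dq + 1j * dp) / SQ2

q0, p0 = 0.7, -1.3
fq, fp = partials(f, q0, p0)
gq, gp = partials(g, q0, p0)
poisson = fq * gp - gq * fp            # {f, g} in (q, p) form

fa, fac = wirtinger(f, q0, p0)
ga, gac = wirtinger(g, q0, p0)
antisym = fa * gac - ga * fac          # {f, g}_- = {f o g} - {g o f}

print(poisson, -1j * antisym)          # the two representations agree
```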
It is important to note that the complex phase space vectors are related to ordinary classical mechanics phase space coordinates in the case of the antisymmetric bracket; however, in the symmetric bracket case they correspond to the expansion coefficients of either quantum wave functions or the c-number limit of quantum fields. \par Specializing now to infinitesimal phase space transformations $\vec a\to\vec A = \vec a + \delta\vec a(\vec a,\vec a^{\, *})$, we readily calculate the c-number antisymmetric and symmetric brackets for the components of $\vec A$ and $\vec A^*$ to first order in $\delta\vec a$ and $\delta\vec a^{\, *}$, \begin{equation} \{A_i,\, A_j\}_{\pm} = {\partial (\delta a_j)\over\partial a_i^*}\pm {\partial (\delta a_i)\over\partial a_j^*}\, , \quad \{A_i^*,\, A_j^*\}_{\pm} = {\partial (\delta a_i^*)\over\partial a_j}\pm {\partial (\delta a_j^*)\over\partial a_i}\, , \label{notes6}\end{equation} \begin{equation} \{A_i,\, A_j^*\}_{\pm} = \delta_{ij} + {\partial (\delta a_i)\over\partial a_j} + {\partial (\delta a_j^*)\over\partial a_i^*} = \pm\{A_j^*,\, A_i\}_{\pm}. \label{notes7}\end{equation} If we now impose the requirement that this infinitesimal phase space vector transformation preserves the c-number antisymmetric or symmetric bracket relations among the complex phase space vectors, we obtain the three equations, \begin{equation} {\partial (\delta a_j)\over\partial a_i^*} = \mp {\partial (\delta a_i)\over\partial a_j^*}\, , \quad {\partial (\delta a_j^*)\over\partial a_i}= \mp {\partial (\delta a_i^*)\over\partial a_j}\, , \quad {\partial (\delta a_i)\over\partial a_j} + {\partial (\delta a_j^*)\over\partial a_i^*} = 0. 
\label{notes8} \end{equation} The last of these equations is independent of the value of the $\mp$ symbol, and it is satisfied in particular for a one-parameter infinitesimal $\delta\vec a$ of the form \begin{equation} \delta a_i = -\,{i}(\delta\lambda) {\partial G\over\partial a_i^*},\qquad \delta a_j^* = {i}(\delta\lambda) {\partial G\over\partial a_j}, \label{notes9}\end{equation} where $\delta\lambda$ is a real-valued infinitesimal parameter and $G(\vec a,\vec a^{\, *},\lambda)$ is a real-valued generating function. One readily verifies that the last equation in Eq. (\ref{notes8}) is then satisfied. From the two equations in Eq. (\ref{notes9}), we obtain the form of the equation which governs any continuous one-parameter trajectory of sequential infinitesimal canonical transformations in the complex vector phase space: \begin{equation} i{da_i\over d\lambda} = {\partial G\over\partial a_i^*} \quad \hbox{or} \quad -i{da_i^*\over d\lambda} = {\partial G\over\partial a_i}\, . \label{notes11}\end{equation} In the most general circumstance, $G$ may have an explicit dependence on $\lambda$. These equations may be rewritten as the pair of real equations \begin{equation} {dq_i\over d\lambda} = {\partial G\over\partial p_i}\, , \quad {dp_i\over d\lambda} = -\, {\partial G\over\partial q_i}\, , \label{notes12}\end{equation} which are generalized Hamilton's equations. For ordinary classical dynamics, the antisymmetric bracket case (for which $\mp = +$ in Eq. (\ref{notes8})), the first two of the three equations given above are satisfied identically for the one-parameter infinitesimal $\delta\vec a$ of the generating function form just given in Eq. (\ref{notes9}). However, for the symmetric bracket case (for which $\mp = -$ in Eq.
(\ref{notes8})), the first two of that group of three equations impose the following constraint on those real-valued generating functions $G(\vec a,\vec a^{\, *},\lambda)$ of continuous one-parameter canonical transformation trajectories: \begin{equation} {\partial^2 G\over\partial a_i\partial a_j} = 0 = {\partial^2 G\over\partial a_i^*\partial a_j^*}\, . \label{notes13}\end{equation} \section {Fermion c-number dynamics} \par For the symmetric bracket case, which we call fermion c-number dynamics, the generating functions of the continuous one-parameter trajectories of sequential infinitesimal canonical transformations are constrained to be real-valued and at most linear in each of $\vec a$ and $\vec a^{\, *}$. The most general form for the generating function is therefore \begin{equation} G(\vec a,\vec a^{\, *},\lambda) = G_0(\lambda) + \sum_k\left (\tilde g_k(\lambda)a_k^* + \tilde g_k^*(\lambda)a_k\right ) + \sum_l\sum_mG_{lm}(\lambda)a_l^*a_m, \label{notes14}\end{equation} where $G_0(\lambda)$ is real and $G_{lm}(\lambda)$ is a Hermitian matrix. Upon putting this constrained form for $G$ into the complex phase space form of the generalized Hamilton's equations, Eq. (\ref{notes11}), we arrive at \begin{equation} i{da_i\over d\lambda} = \tilde g_i(\lambda) + \sum_j G_{ij}(\lambda)a_j, \label{notes15}\end{equation} which is a (possibly) inhomogeneous linear equation of matrix Schr\"odinger form. If $\tilde g_i(\lambda) = 0$, the preceding equation is a general homogeneous type of Schr\"odinger equation, whereas if $\tilde g_i(\lambda) \propto\delta(\lambda - \lambda')$, it is a general propagator type of Schr\"odinger equation. It is clear that the c-number dynamics of the symmetric bracket case must be linear and describable by a Schr\"odinger-type equation.
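As a small numerical illustration (ours) of Eq. (\ref{notes15}): for a sample two-mode generator of the constrained form (\ref{notes14}), the derivative $\partial G/\partial a_i^* = (\partial G/\partial q_i + i\,\partial G/\partial p_i)/\sqrt{2}$ indeed reproduces the linear right-hand side $\tilde g_i + \sum_j G_{ij}a_j$:

```python
SQ2 = 2 ** 0.5
H = 1e-6

# sample Hermitian matrix G_ij and complex vector g~_k (arbitrary values)
G11, G22, G12 = 0.4, -0.9, 0.3 + 0.2j
gt = [0.1 - 0.5j, 0.7 + 0.2j]

def avec(q, p):
    """a_k = (q_k + i p_k)/sqrt(2)."""
    return [(q[0] + 1j * p[0]) / SQ2, (q[1] + 1j * p[1]) / SQ2]

def gen(q, p):
    """Real generator G = sum_k (g~_k a_k* + c.c.) + sum_lm G_lm a_l* a_m."""
    a = avec(q, p)
    lin = sum(gt[k] * a[k].conjugate() + gt[k].conjugate() * a[k] for k in range(2))
    quad = (G11 * abs(a[0]) ** 2 + G22 * abs(a[1]) ** 2
            + G12 * a[0].conjugate() * a[1]
            + G12.conjugate() * a[1].conjugate() * a[0])
    return (lin + quad).real

q, p = [0.3, -1.1], [0.8, 0.25]
a = avec(q, p)
Gmat = [[G11, G12], [G12.conjugate(), G22]]

errs = []
for i in range(2):
    qp = list(q); qp[i] += H
    qm = list(q); qm[i] -= H
    pp = list(p); pp[i] += H
    pm = list(p); pm[i] -= H
    dGdq = (gen(qp, p) - gen(qm, p)) / (2 * H)
    dGdp = (gen(q, pp) - gen(q, pm)) / (2 * H)
    lhs = (dGdq + 1j * dGdp) / SQ2            # dG/da_i*
    rhs = gt[i] + Gmat[i][0] * a[0] + Gmat[i][1] * a[1]
    errs.append(abs(lhs - rhs))

print(errs)   # both entries ~ 0
```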
\par The generating functions of the continuous one-parameter canonical transformation trajectories are usually considered to be observables of classical theory when they have no explicit dependence on the parameter. Thus we restrict $G(\vec a,\vec a^{\, *},\lambda)$ to have no explicit $\lambda$-dependence. In the present case it is always possible to suppress the inhomogeneous part if the Hermitian matrix $G_{lm}$ is not singular. This is done by making the canonical transformation \begin{equation} a_i\to A_i = a_i + \sum_j\left (G^{-1}\right )_{ij}\tilde g_j. \label{notes16}\end{equation} It is easily verified that these transformed $A_i$ also satisfy the c-number symmetric bracket relations. In terms of these $A_i$'s, the generalized Hamilton's equations become \begin{equation} i{dA_i\over d\lambda} = \sum_j G_{ij}A_j, \label{notes17}\end{equation} which are of the homogeneous Schr\"odinger matrix equation form, while $G$ itself becomes \begin{equation} G(\vec A,\vec A^*) = G_0 - \sum_l\sum_m\left (G^{-1}\right )_{lm}\tilde g_l^*\tilde g_m + \sum_l\sum_mG_{lm}A_l^*A_m, \label{notes18}\end{equation} which has no inhomogeneous term. \section{Derivation of the time-dependent Schr\"odinger equation} \par The result found in Eq. (\ref{notes17}) following from the invariance of the symmetric bracket can now be used to derive the time-dependent Schr\"odinger equation. Choosing the parameter $\lambda$ to be a time parameter $t$ and assuming that the canonical transformation Eq. (\ref{notes16}) has been made, the dynamical equation Eq. (\ref{notes17}) for a time-independent $G_{ij}$ becomes \begin{equation} i{\dot a}_i(t)=\{g(\vec a,\vec a^{\, *}),\, a_i(t)\}_{+}=\sum_jG_{ij}a_j(t). \label{se1}\end{equation} Keeping the last term only in Eq. 
(\ref{notes18}) and changing $\vec A\rightarrow \vec a$ and $G(\vec A,\vec A^{\, *})\rightarrow g(\vec a,\vec a^{\, *})$, the real-valued generating function becomes \begin{equation} g(\vec a,\vec a^{\, *})=\sum_i\sum_ja^*_i(t)G_{ij}a_j(t). \label{se2}\end{equation} The Hermitian matrix element $G_{ij}$ is associated with a Hermitian operator $\hat G$ such that \[ G_{ij}=\bra{i}{\hat G}\ket{j},\] where the ${\ket{i}}$ form an orthonormal complete set of states with identity operator $I=\sum_i \ket{i}\bra{i}$. A general state expanded in this basis is \begin{equation} \ket{\psi(t)}=\sum_i a_i(t)\ket{i}, \label{se3}\end{equation} with $a_i(t)=\scp{i}{\psi(t)}$. From Eq. (\ref{se1}) follows the relation \begin{eqnarray} i\sum_i{\dot a}_i(t)\ket{i} &&=i{\partial \over \partial t}\ket{\psi(t)}=\sum_i\sum_j\ket{i}G_{ij}a_j(t)\nonumber \\ &&=\sum_i\sum_j\ket{i}\bra{i}{\hat G}\ket{j}a_j(t)\nonumber \\ &&=\sum_i\sum_j\ket{i}\bra{i}{\hat G}\ket{j}\scp{j}{\psi(t)}=I{\hat G}\ket{\psi(t)},\nonumber \\ i{\partial \over \partial t}\ket{\psi(t)}&&={\hat G}\ket{\psi(t)}, \label{se4} \end{eqnarray} which is the time-dependent Schr\"odinger equation when $\hat G$ is identified with the Hamiltonian operator $\hat H({\hat q},{\hat p})$. \par We see here the difference in the interpretation of the quantities $a_i(t)$ in the antisymmetric and symmetric bracket cases. In the former these are just the complex coordinates associated with position $q_i$ and momentum $p_i$, whereas in the latter they represent the expansion coefficients of a general quantum state in terms of an orthonormal basis. Both brackets lead to the completeness relation \begin{equation} \bra{q}\{\ket{\psi(t)},\bra{\psi(t)}\}_{\pm}\ket{q'}=\delta(q-q').
\label{se5}\end{equation} This is seen from \begin{eqnarray} &&\bra{q}\{\ket{\psi(t)},\bra{\psi(t)}\}_{\pm}\ket{q'}\nonumber \\ &&=\sum_i\sum_j\{a_i,a^*_j\}_{\pm}\scp{q}{i}\scp{j}{q'}\nonumber \\ &&=\sum_i\sum_j\delta_{ij}\scp{q}{i}\scp{j}{q'}=\bra{q}I\ket{q'}=\delta(q-q'). \label{se6} \end{eqnarray} \section{Derivation of $[{\hat q},{\hat p}]=\lowercase{i}\hbar$} \par The principle of symmetric bracket invariance leads to quantum mechanics because it leads to the time-dependent Schr\"odinger equation and to a derivation of the Dirac bracket relation $[\hat q, \hat p]=i$ (in the dimensionless units introduced above). First, one considers the results of the invariance of the antisymmetric bracket under a one-parameter canonical transformation. The time development of a real function $f(\vec a,\vec a^*,t)$ is given by \begin{equation} \dot f=-i\{f,H\}_{-} +{{\partial f}\over{\partial t}}, \label{(1)}\end{equation} and the dynamical equations for the coordinates are \begin{eqnarray} \dot a_i &&=-i\{a_i,H\}_{-}=-i{{\partial H}\over{\partial a_i^*}}\nonumber \\ \dot a_i^* &&=-i\{a_i^*,H\}_{-}=i{{\partial H}\over{\partial a_i}}. \label{(2)}\end{eqnarray} These are equivalent to Hamilton's equations of classical mechanics. \par The invariance of the symmetric bracket under a one-parameter canonical transformation gives dynamical equations for the coordinates (wave function expansion coefficients in this case). It is convenient to write Eq. (\ref{se1}) and its complex conjugate as \begin{eqnarray} i{{\partial {\vec a}}\over{\partial t}}&&=\hat G\cdot {\vec a}\nonumber \\ -i{{\partial {\vec a^*}}\over{\partial t}}&&={\vec a^*}\cdot\hat G. \label{(3)}\end{eqnarray} These are Schr\"odinger's equations for $a_i$ and $a_i^*$ ($i$ can also be a continuous index).
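Since Eq. (\ref{(3)}) is linear, its flow is $\vec a(t)=\hat U(t)\vec a(0)$ with $\hat U(t)=\exp(-i\hat Gt)$, and for $\vec A=\hat U\vec a$ one finds $\{A_i,\, A_j^*\}_+=(UU^\dagger)_{ij}$, so preservation of the symmetric bracket is exactly unitarity of the flow. A small numerical check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
G = (M + M.conj().T) / 2                            # Hermitian generator matrix

w, V = np.linalg.eigh(G)
t = 0.37
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-i G t)

# For A = U a one finds {A_i, A_j*}_+ = (U U^dagger)_ij, so bracket
# preservation is exactly unitarity of the flow:
bracket_err = np.abs(U @ U.conj().T - np.eye(n)).max()

# consequence: sum_i |a_i|^2 (total probability) is conserved
a0 = rng.normal(size=n) + 1j * rng.normal(size=n)
norm_err = abs(np.linalg.norm(U @ a0) - np.linalg.norm(a0))

print(bracket_err, norm_err)   # both at machine precision
```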
The time development of a real function $\bar f(\vec a,\vec a^*,t) ={\vec a^*}\cdot\hat F \cdot {\vec a}$, which depends on the generator of the one-parameter canonical transformation $g(\vec a,\vec a^*)={\vec a^*}\cdot\hat G \cdot {\vec a}$, is given by \begin{eqnarray} {\dot{\bar f}}&&={\dot{\vec a^*}}\cdot\hat F \cdot {\vec a}+ {\vec a^*}\cdot\hat F \cdot {\dot{\vec a}}+ {\vec a^*}\cdot{\partial\hat F\over \partial t} \cdot {\vec a}\nonumber \\ {\dot{\bar f}}&&=-i {\vec a^*}\cdot [\hat F,\hat G]\cdot{\vec a} +{\partial \bar f\over{\partial t}}, \label{(4)}\end{eqnarray} which follows from Eq. (\ref{(3)}). Here $\hat F$ and $\hat G$ are Hermitian matrices (operators). For the discrete index case \begin{equation} \bar f(\vec a,\vec a^*,t) =\sum_i\sum_j\scp{\psi(t)}{i}\bra{i}\hat F(t)\ket{j}\scp{j}{\psi(t)} =\bra{\psi(t)}\hat F(t)\ket{\psi(t)}, \label{(4a)}\end{equation} and \begin{equation} \bar f(\vec a,\vec a^*,t) =\int\int\scp{\psi(t)}{p}\bra{p} \hat F(t) \ket{p'}\scp{p'}{\psi(t)} dpdp', \label{(4b)}\end{equation} for the continuous index $p$. The form of $g({\vec a},{\vec a}^*)$ shows that classical results are to be associated with expectation values. When $\hat G$ is identified with the Hamiltonian operator $\hat H$ and the $\ket{i}$ are eigenstates of the Hamiltonian with eigenvalues $E_i$, the bilinear form of $g({\vec a},{\vec a}^*)$ leads to the statistical interpretation of quantum mechanics. This is seen from \begin{equation} \bar H={\vec a}^{\,*} \cdot\hat H\cdot \vec a =\sum_i E_i\vert a_i\vert^2=\bra{\psi(t)}\hat H\ket{\psi(t)}, \label{(4c)}\end{equation} with \[ \scp{\psi}{\psi}=\sum_i \vert a_i\vert^2=1. \] \par The classical dynamics case (antisymmetric bracket result, Eq. (\ref{(1)})) for the Hamiltonian \begin{equation} H(a,a^*)={{p^2}\over{2m}}+V(q) \label{(5)}\end{equation} gives the result for $f(a,a^*)=q$ that $\dot q=p/m$. For the c-number symmetric bracket result Eq.
(\ref{(4)}) to give a result for $\bar f$ that corresponds to classical mechanics, one identifies $\hat G$ with the Hamiltonian operator $\hat H({\hat p},{\hat q})$ and observes that $\dot{\bar q}=\bar p/m$ when $[\hat q,\hat p]=i$. This is found from the expectation value \begin{equation} \bar f=\bra{\psi(t)}\hat f(t)\ket{\psi(t)} =\bra{\psi}\hat U^\dagger(t)\hat f(t)\hat U(t)\ket{\psi}, \label{(5a)}\end{equation} with $\ket{\psi}=\ket{\psi(0)}$, $\hat U(t)=\exp(-it\hat H)$, and the relation \begin{equation} {d\bar f\over dt}=\bra{\psi(t)}i[\hat H,\hat f] +{\partial \over \partial t}\hat f(t)\ket{\psi(t)}, \label{(5b)}\end{equation} which corresponds to Eq. (\ref{(4)}) when Eq. (\ref{se3}) is used. Choosing $\hat f(t)=\hat q$ and using the Hamiltonian operator found from Eq. (\ref{(5)}), one finds \begin{equation} {d\bar q \over dt}=\bra{\psi(t)}i[{{\hat p}^2\over 2m},\hat q]\ket{\psi(t)} ={\bar p\over m} \label{(5c)}\end{equation} when $[\hat q,\hat p]=i$, which of course is equivalent to $[{\hat{a}},{\hat{a}}^\dagger]=1$. The appropriate correspondence between force and the potential function follows from Eq. (\ref{(5b)}) when $\hat f(t)={\hat p}$. This gives the result \begin{equation} {d\bar p\over dt}=-\bra{\psi(t)}{\partial V(q)\over \partial q}\ket{\psi(t)}, \label{(5d)}\end{equation} since the quantum bracket relation between ${\hat p}$ and ${\hat q}$ implies \begin{equation} [{\hat p},V(q)]=-i{\partial V(q)\over \partial q}. \label{(5e)}\end{equation} This is of course the well-known result of Ehrenfest \cite{ehren}, and the appropriate states to use in evaluating these expressions, when associating them with the corresponding classical equations, are the minimum uncertainty displacement states discussed in Sec. VI. These results clearly show that quantum mechanics is a consequence of the principle of symmetric bracket invariance, which is an advance in the understanding of the origin and properties of quantum theory.
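The Ehrenfest relation $\dot{\bar q}=\bar p/m$ can also be illustrated numerically (our sketch, in units with $\hbar=1$): evolving a free Gaussian wavepacket exactly in momentum space, the mean position moves with velocity $\bar p/m$:

```python
import numpy as np

# position grid and a Gaussian wavepacket with mean position x0, mean momentum p0
N, box = 2048, 80.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

m, x0, p0, sig = 1.0, -5.0, 1.5, 1.0
psi = np.exp(-(x - x0) ** 2 / (4 * sig**2) + 1j * p0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def mean_q(state):
    """Expectation value of position in the given state."""
    return float(np.sum(x * np.abs(state) ** 2) * dx)

t = 2.0
# exact free evolution: multiply by exp(-i k^2 t / 2m) in momentum space
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / (2 * m)) * np.fft.fft(psi))

print(mean_q(psi))    # -5.0
print(mean_q(psi_t))  # -5.0 + p0 t / m = -2.0
```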
\par The above proof depends upon the association of the quantum operators with observed quantities through the prediction of distributions for the spectrum of the operators and expectation values. Naturally, the generating function $g(\vec a^*,\vec a)$ given in Eq. (\ref{se2}), which leads to the invariance of the symmetric bracket, is of this form. Since $\hat G$ can be identified with the Hamiltonian $\hat H({\hat q},{\hat p})$, this requires the existence of the expectation values for the operators $\hat p$ and $\hat q$. A related derivation of the result $[\hat q,\hat p]=i$, which depends upon the association of distributions and expectation values of the self-adjoint operators $\hat q$ and $\hat p$ with the classically observed values, is found in \cite{TGPRL}, and a similar approach is found in \cite{ah}. The argument of Dirac \cite{Dirac} leading to $[\hat q,\hat p]=i$ is incorrect because it depends upon the non-classical concept of non-commuting quantities in the definition of the classical Poisson bracket. \par It is easily seen that the argument above for the non-relativistic Hamiltonian leading to the quantum bracket result applies to the relativistic Dirac Hamiltonian associated with a fermion. Furthermore, the importance of the natural relation between the expectation value of an operator and its observed classical values, which emerges from the principle of invariance of the symmetric bracket, also resolves the dilemma of Dirac, who finds the eigenvalues of $\dot {\hat q}$ to be $\pm c$ \cite{Dirac2}.
The correct result for a free relativistic Dirac particle of mass $m$, momentum $p$, and energy $E$ is \begin{equation} \dot{\bar q}={p\over E}=\beta, \label{(6)}\end{equation} where, using $\hbar=c=1$ and the conventions of \cite{LandL}, \begin{eqnarray} p&&=\gamma \beta m\nonumber \\ E&&=\gamma m\nonumber \\ \gamma&&=1/\sqrt{1-\beta^2}.\nonumber \\ \label{(7)}\end{eqnarray} This follows from the time derivative of the expectation value \begin{eqnarray} \dot{\bar q}&&={d\over dt}\bra{\psi(t)}\hat q\ket{\psi(t)}\nonumber \\ &&=-i\bra{\psi(t)}[\hat q,\hat H]\ket{\psi(t)}\nonumber \\ &&=\bar u(p)\vec{\gamma}u(p)/2E={\vec{p}\over E},\nonumber \\ \label{(8)}\end{eqnarray} where for a free Dirac particle \begin{eqnarray} \hat H &&=\gamma^0\vec\gamma\cdot \vec p + \gamma^0 m,\nonumber \\ \scp{x}{\psi(t)} &&=\psi(x,t)={1\over \sqrt{2E}}u(p)e^{-ipx}\nonumber \\ px &&=p^0t-\vec p\cdot \vec r\nonumber \\ \bar u(p) &&=u^\dagger(p)\gamma^0\nonumber \\ \bar u(p)u(p) &&=2m.\nonumber \\ \label{(9)}\end{eqnarray} This removes the need for the notion of zitterbewegung, which is associated with the Heisenberg operator but not with the observed mean value of the operator through the expectation value. \section{Quantum field operators associated with $\lowercase{a_i}$ and $\lowercase{a_i^*}$} \par For each index $i$, one can associate an operator with the complex numbers $a_i$ through the matrix element \begin{equation} a_i=\bra{a_i}{\hat{a}}_i\ket{a_i}. \label{o1}\end{equation} As shown in the next section, the operator relations which are consistent with the infinitesimal time development equations for both bracket relations Eq. (\ref{se1}) and Eq. (\ref{(2)}) involving the complex numbers $a_i$ are \begin{equation} \{a_i,\, a_j^*\}_{\pm} = \delta_{ij}=[{\hat{a}}_i,{\hat{a}}^\dagger_j]_{\pm}. \label{o2}\end{equation} Introducing the notation $a=a_i$, we can discuss both the case of boson operators and fermion operators without loss of generality. 
In both cases, the states to use in Eq. (\ref{o1}) are defined as displacement states \begin{equation} \ket{a}={\hat D}(a)\ket{0}. \label{o3}\end{equation} In the boson case, the displacement operator is \begin{equation} {\hat D}(a)=e^{a{\hat{a}}^\dagger-a^*{\hat{a}}}, \label{o4}\end{equation} such that \begin{eqnarray} \sigma(q) &&=\sigma(p)=1/\sqrt{2},\quad \sigma(q)\sigma(p)=1/2 \nonumber \\ \sigma^2(A) &&=\bra{a}\A^2\ket{a}-\bra{a}\A\ket{a}^2\nonumber \\ a &&=\bra{a}{\hat{a}}\ket{a},\quad a^*=\bra{a}{\hat{a}}^\dagger\ket{a}.\nonumber \\ \label{o5}\end{eqnarray} The interpretation of $a_i$ in this case is clear. The state $\ket{a_i}$ is the minimum uncertainty state, and the $a_i$'s are the complex numbers that appear in the antisymmetric bracket Eq. (\ref{notes4}) and Hamilton's equations, i.e., classical coordinates. \par The fermion case can be treated in a similar manner; however, there are some modifications in interpretation. The displacement operator in this case is \begin{eqnarray} {\hat D}(\xi) &&=e^{\xi{\hat{a}}^\dagger-\xi^*{\hat{a}}},\quad \xi=|\xi|e^{+i\phi}\nonumber \\ \ket{a} &&={\hat D}(\xi)e^{-i\phi/2}\ket{0}=\cos(|\xi|)e^{-i\phi/2}\ket{0} +e^{+i\phi/2}\sin(|\xi|)\ket{1}.\nonumber \\ \label{o6}\end{eqnarray} This gives the following: \begin{eqnarray} a &&=\bra{a}{\hat{a}}\ket{a}={\sin2|\xi| \over 2}e^{i\phi}\nonumber \\ a^* &&=\bra{a}{\hat{a}}^\dagger\ket{a}={\sin2|\xi| \over 2}e^{-i\phi}\nonumber \\ \bra{a}{\hat{a}}^\dagger{\hat{a}}\ket{a} &&=\sin^2|\xi|,\quad \bra{a}{\hat{a}}{\hat{a}}^\dagger\ket{a}=\cos^2|\xi|\nonumber \\ &&\bra{a}{\hat{a}}{\hat{a}}^\dagger +{\hat{a}}^\dagger{\hat{a}}\ket{a}=1,\nonumber \\ \label{o7}\end{eqnarray} when \begin{eqnarray*} {\hat{a}}\ket{0} &&={\hat{a}}^\dagger\ket{1}=0 \nonumber \\ {\hat{a}}\ket{1} &&=\ket{0},\quad{\hat{a}}^\dagger\ket{0}=\ket{1}.
\nonumber \\ \end{eqnarray*} An analogous calculation for $\sigma(q)$ and $\sigma(p)$ for the fermion case gives \begin{eqnarray} \sigma(q) &&=(1-\sin^2(2|\xi|)\cos^2(\phi))^{1/2}/\sqrt{2}\nonumber \\ \sigma(p) &&=(1-\sin^2(2|\xi|)\sin^2(\phi))^{1/2}/\sqrt{2}\nonumber \\ \sigma(q) &&\sigma(p)\geq 0.\nonumber \\ \label{o8}\end{eqnarray} The last inequality in Eq. (\ref{o8}) does not violate the minimum uncertainty inequality, $\sigma(q)\sigma(p)\geq 1/2$, since $[{\hat q},{\hat p}]\neq i$, and ${\hat q}$ and ${\hat p}$ are not conjugate coordinates. \section{Infinitesimal c-number transformations and their relation to boson and fermion operators} \par The infinitesimal transformations induced by the c-number symmetric and antisymmetric brackets have analogous relations involving operators, and these lead naturally to the boson and fermion operator relations Eq. (\ref{o2}). For the antisymmetric bracket, the transformation associated with a time $dt$ is \begin{equation} a_i(dt)=a_i(0)+idt\{H(\vec a,\vec a^*),a_i(0)\}_-, \label{it1}\end{equation} and the appropriate operator equation to associate with this c-number equation is \begin{equation} {\hat{a}}_i(dt)={\hat{a}}_i(0)+idt[{\hat H}({\hat{a}},{\hat{a}}^\dagger),{\hat{a}}_i(0)]. \label{it2}\end{equation} With ${\hat{a}}_i={\hat{a}}_i(0)$, the commutation relation for ${\hat{a}}_i$ and ${\hat{a}}^\dagger_j$ in this case follows from the relation $[{\hat q},{\hat p}]=i$, which is a consequence of Eq. (\ref{(5c)}), and is the boson commutator \begin{equation} [{\hat{a}}_i,{\hat{a}}^\dagger_j]=\delta_{ij}. \label{it3}\end{equation} The associated classical Hamiltonian is found from the normal ordered matrix element \begin{equation} H(\vec a,\vec a^*)=\bra{\vec a}:\hat H:\ket{\vec a}, \label{it4}\end{equation} with $\ket{\vec a}=\ket{ a_0}\ket{ a_1}\cdots\ket{a_n}$, where $\ket{ a_i}$ are the minimum uncertainty states defined in Eq. (\ref{o4}). 
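The fermion displacement-state relations Eq. (\ref{o6}) and Eq. (\ref{o7}) can be checked directly with $2\times 2$ matrices, since ${\hat M}=\xi{\hat{a}}^\dagger-\xi^*{\hat{a}}$ satisfies ${\hat M}^2=-|\xi|^2 I$, giving the closed form ${\hat D}(\xi)=\cos|\xi|\,I+(\sin|\xi|/|\xi|){\hat M}$. A minimal sketch; the value of $\xi$ is an arbitrary illustrative choice.

```python
from cmath import exp as cexp
from math import sin, cos

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inner(u, v):
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

# fermion annihilation operator in the basis (|0>, |1>): a|1> = |0>
a = [[0, 1], [0, 0]]

r, phi = 0.7, 0.3            # xi = r e^{i phi}, illustrative values
xi = r * cexp(1j * phi)

# D(xi) = exp(xi a† - xi* a) = cos|xi| I + (sin|xi|/|xi|)(xi a† - xi* a)
c, s = cos(r), sin(r) / r
D = [[c, -s * xi.conjugate()],
     [s * xi, c]]

vac = [1, 0]
ket = [z * cexp(-1j * phi / 2) for z in mat_vec(D, vac)]  # |a> of Eq. (o6)

# Eq. (o6): |a> = cos|xi| e^{-i phi/2}|0> + e^{+i phi/2} sin|xi||1>
assert abs(ket[0] - cos(r) * cexp(-1j * phi / 2)) < 1e-12
assert abs(ket[1] - sin(r) * cexp(+1j * phi / 2)) < 1e-12

# Eq. (o7): <a| a |a> = (sin 2|xi| / 2) e^{i phi}
amp = inner(ket, mat_vec(a, ket))
assert abs(amp - 0.5 * sin(2 * r) * cexp(1j * phi)) < 1e-12
print("fermion displacement-state relations verified")
```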
Here normal ordering is defined as moving the operators ${\hat{a}}^\dagger_i$ to the left according to the boson commutation operation. In this way, the c-number equation Eq. (\ref{it1}) is a consequence of the expectation value of Eq. (\ref{it2}), using the displacement states $\ket{\vec a}$ found from Eq. (\ref{o4}). \par The infinitesimal c-number transformation associated with the symmetric bracket implies both the boson commutation and fermion anticommutation relations. For the c-number symmetric bracket, the infinitesimal transformation is \begin{equation} a_i(dt)=a_i(0)-idt\{g(\vec a,\vec a^*),a_i(0)\}_+, \label{it5}\end{equation} and the appropriate operator equation to associate with this is \begin{equation} {\hat{a}}_i(dt)={\hat{a}}_i(0)+idt[{\hat g}({\hat{a}},{\hat{a}}^\dagger),{\hat{a}}_i(0)], \label{it6}\end{equation} with \begin{equation} \hat g=\vec {\hat{a}}^\dagger\cdot\hat G\cdot \vec {\hat{a}}, \label{it7}\end{equation} and ${\hat{a}}_i={\hat{a}}_i(0)$. It is now shown that Eq. (\ref{it6}) yields the c-number equation Eq. (\ref{it5}) when the operators satisfy either the boson commutator or fermion anticommutator relation Eq. (\ref{o2}). This follows from using \begin{eqnarray} [\hat g,{\hat{a}}_i] &&=-\sum_j\sum_k([{\hat{a}}_i,{\hat{a}}^\dagger_j]{\hat{a}}_k+{\hat{a}}^\dagger_j[{\hat{a}}_i,{\hat{a}}_k])G_{jk} \nonumber \\ &&=-\sum_j\sum_k(\delta_{ij}{\hat{a}}_k)G_{jk},\, boson\,\, case \nonumber \\ &&=-\sum_j\sum_k((\delta_{ij}-2{\hat{a}}^\dagger_j{\hat{a}}_i){\hat{a}}_k+2{\hat{a}}^\dagger_j{\hat{a}}_i{\hat{a}}_k)G_{jk},\, fermion\,\, case \nonumber \\ &&\equiv - \sum_k G_{ik}{\hat{a}}_k, \nonumber \\ \label{it9}\end{eqnarray} and one finds \begin{eqnarray} {\hat{a}}_i(dt) &&=\sum_j (\delta_{ij}-idtG_{ij}){\hat{a}}_j \nonumber \\ \bra{\vec a}{\hat{a}}_i(dt)\ket{\vec a} &&=\sum_j \bra{\vec a}(\delta_{ij}-idtG_{ij}){\hat{a}}_j\ket{\vec a} \nonumber \\ a_i(dt) &&=\sum_j (\delta_{ij}-idtG_{ij})a_j, \nonumber \\ \label{it10}\end{eqnarray} which agrees with Eq.
(\ref{se1}) and Eq. (\ref{it5}). Here the state $\ket{\vec a}$ is defined as the direct product of displacement states, $\ket{\vec a}=\ket{ a_0}\ket{ a_1}\cdots\ket{a_n}$, found from either Eq. (\ref{o4}) for the boson case or Eq. (\ref{o6}) for the fermion case. \section{Quantum fields} \par It is seen from the above that the infinitesimal transformations obtained in both the antisymmetric and symmetric bracket case have corresponding operator equations, if the operators ${\hat{a}}_i$ and ${\hat{a}}^\dagger_j$ satisfy the boson commutation relations in the former case and the fermion anticommutation relations in the latter. Thus the expansion of quantum fields in these operators is a natural consequence of the relations found for the associated c-numbers. In both cases the usual quantum field expansion \cite{LandL} is \begin{equation} \Psi(\vec r,t)=\sum_i({\hat{a}}_i(t)\psi_i(\vec r)+\bd_i(t)\psi_i^*(\vec r)), \label{qf1}\end{equation} where $\bd_i(t)={\hat{a}}_i(p_{i0}<0)$, with four-momentum time component $p_0$, is an antiparticle creation operator. The associated c-number fields are found by forming the matrix element with the displacement state $\ket{\vec a}$ appropriate to either the boson or the fermion case. \par The Dirac equation, which is of Schr\"odinger type, can of course describe a c-number fermion system, but the Klein-Gordon and Maxwell equations, although they are linear, are not of Schr\"odinger type. For example, in one spatial dimension a discretized version of the Klein-Gordon equation is \begin{equation} \ddot q_i - (1/(2\Delta x))^2 (q_{i + 2} - 2q_i + q_{i - 2}) + m^2 q_i = 0. 
\label{cx1}\end{equation} This can be replaced by the first-order equation pair \begin{equation} \dot q_i = p_i, \quad \dot p_i = (1/(2\Delta x))^2 (q_{i + 2} - 2q_i + q_{i - 2}) - m^2 q_i, \label{cx2}\end{equation} which is a version of Hamilton's equations for the particular Hamiltonian (time evolution generating function and observable) \begin{equation} H(\vec q,\vec p) = {1\over 2}\sum_k\left (p_k^2 + (1/(2\Delta x))^2 (q_{k + 1} - q_{k - 1})^2 + m^2 q_k^2\right ). \label{cx3}\end{equation} The constraint equations Eq. (\ref{notes13}) on fermion system c-number generating functions $G$, which were previously written in terms of the complex $(\vec a,\vec a^{\, *})$ vector phase space variables, translate in terms of the real $(\vec q,\vec p)$ vector phase space variables into the pair of real-valued constraint equations: \begin{equation} {\partial^2 G\over\partial q_i\partial q_j} = {\partial^2 G\over\partial p_i\partial p_j}\, , \quad {\partial^2 G\over\partial q_i\partial p_j} = -\, {\partial^2 G\over\partial q_j\partial p_i}. \label{cx4}\end{equation} For the discretized Klein-Gordon Hamiltonian given above, we have that \begin{equation} {\partial^2 H\over\partial q_i\partial q_{i + 2}} = -(1/(2\Delta x))^2 \neq 0 \quad\hbox{and}\quad {\partial^2 H\over\partial p_i\partial p_{i + 2}} = 0, \label{cx5}\end{equation} which is not in accord with the first of the preceding pair of c-number fermion system generating function constraint equations. Thus the Klein-Gordon equation is not of Schr\"odinger type and cannot describe a c-number fermion system. \par We have seen that c-number fermion dynamics is necessarily described by a Schr\"o\-ding\-er type equation, i.e., is necessarily already first quantized, and it has no classical version. Therefore, its quantization with anticommutators is inevitably second quantization.
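The discretized Klein-Gordon system of Eq. (\ref{cx2}) can be integrated directly to illustrate that it is ordinary Hamiltonian dynamics: a symplectic leapfrog step conserves the Hamiltonian of Eq. (\ref{cx3}) to high accuracy. A sketch, assuming periodic boundary conditions and illustrative values of the chain length, lattice spacing, mass, time step, and initial data:

```python
from math import exp

# illustrative parameters, not taken from the text
N, dx, m, dt, steps = 32, 0.5, 1.0, 0.01, 2000
c = (1.0 / (2 * dx)) ** 2

q = [exp(-((i - N / 2) * dx) ** 2) for i in range(N)]  # Gaussian initial bump
p = [0.0] * N

def force(q):
    # dot p_i of Eq. (cx2), with periodic indices (an assumption)
    return [c * (q[(i + 2) % N] - 2 * q[i] + q[(i - 2) % N]) - m ** 2 * q[i]
            for i in range(N)]

def energy(q, p):
    # the Hamiltonian of Eq. (cx3)
    return 0.5 * sum(p[k] ** 2
                     + c * (q[(k + 1) % N] - q[(k - 1) % N]) ** 2
                     + m ** 2 * q[k] ** 2 for k in range(N))

E0 = energy(q, p)
for _ in range(steps):               # kick-drift-kick leapfrog
    f = force(q)
    p = [p[i] + 0.5 * dt * f[i] for i in range(N)]
    q = [q[i] + dt * p[i] for i in range(N)]
    f = force(q)
    p = [p[i] + 0.5 * dt * f[i] for i in range(N)]

drift = abs(energy(q, p) - E0) / E0
print("relative energy drift after %d steps: %.2e" % (steps, drift))
```

The bounded energy drift is the numerical signature that Eq. (\ref{cx2}) is a canonical, second-order system, in contrast with the first-order Schr\"odinger-type flow required for a c-number fermion system.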
On the other hand, the boson commutation relations are consistent with the results of the antisymmetric c-number bracket equations, and the first and second quantized theories involving boson commutation relations can be directly related to the classical theories through the displacement states Eq. (\ref{o4}). \section{Time development of $\lowercase{a_i}$ and $\lowercase{{\hat{a}}_i}$} \par From the infinitesimal transformations which preserve the brackets, it is possible to obtain the global representations of the operators which produce the time development of the coordinates $a_i$. For the c-number antisymmetric bracket case, the time development operator obtained from Eq. (\ref{it1}) is \begin{eqnarray} a_i(t) &&=U(t)a_i=e^{i t\delta_-(H)}a_i \nonumber\\ \delta_-(H)a_i &&=\{H,a_i\}_- \nonumber\\ \delta_-^2(H)a_i &&=\{H,\{H,a_i\}_-\}_-, etc. \nonumber\\ a_i(0) &&=a_i. \nonumber\\ \label{td1}\end{eqnarray} As an example, if $H=a^*a$, then one finds $a(t)=e^{-it}a$. Under time development, one can show that the antisymmetric bracket is invariant, \[ \{a_i(t),a_j^*(t)\}_-=\{a_i,a_j^*\}_-=\delta_{ij}. \] The proof is as follows: \begin{eqnarray} &&\{a_i(t),a_j^*(t)\}_- \nonumber\\ &&=\sum_{n=0}^\infty\sum_{m=0}^\infty {(it)^{n+m} \over n!m!}\{\delta^n_-(H)a_i,\delta^m_-(H)a_j^*\}_- \nonumber\\ &&=\sum_{p=0}^\infty {(it)^{p} \over p!}\sum_{m=0}^p{p! \over (p-m)!m!} \{\delta^{p-m}_-(H)a_i,\delta^m_- (H)a_j^*\}_- \nonumber\\ &&=\sum_{p=0}^\infty {(it)^{p}\delta_-^p(H)\over p!}\{a_i,a_j^*\}_- =e^{i t\delta_-(H)}\{a_i,a_j^*\}_-=\delta_{ij}. \nonumber\\ \label{td2}\end{eqnarray} In the above, $p=n+m$, and use has been made of \[ \delta_-^p(H)\{a_i,a_j^*\}_- =\sum_{m=0}^p {p \choose m}\{\delta^{p-m}_-(H)a_i,\delta^{m}_-(H)a_j^*\}_-, \] which follows from the Jacobi identity \[ \delta_-(H)\{a_i,a_j^*\}_-=\{\delta_-(H)a_i,a_j^*\}_- +\{a_i,\delta_-(H)a_j^*\}_-. \] \par The time development generated by the c-number symmetric bracket can be studied in a similar manner.
Here the time development operator obtained from Eq. (\ref{it5}) for the c-number phase space coordinates is \begin{eqnarray} a_i(t) &&=V(t)a_i=e^{-i t\delta_+(g)}a_i \nonumber\\ \delta_+(g)a_i &&=\{g,a_i\}_+ \nonumber\\ \delta_+^2(g)a_i &&=\{g,\{g,a_i\}_+\}_+, etc. \nonumber\\ a_i(0) &&=a_i, \nonumber\\ \label{td4}\end{eqnarray} and $g={\vec a^*}\cdot {\hat G}\cdot {\vec a}$. Since \[ \{g,a_i\}_+=\sum_{j}G_{ij}a_j, \] one finds \begin{equation} a_i(t)=\sum_j(\delta_{ij}-itG_{ij}+{(it)^2\over 2!}\sum_k G_{ik}G_{kj} +\dots) a_{j}. \label{td4a}\end{equation} Defining the operator $\hat G$, which must be Hermitian since $g$ is real, as done after Eq. (\ref{se2}), we see that Eq. (\ref{td4a}) becomes \begin{eqnarray} a_i(t) &&=\sum_j\bra{i}(I-it{\hat G}+{(it)^2\over 2!}{\hat G}^2+\dots)\ket{j}a_j \nonumber\\ &&=\bra{i}e^{-it\hat G}\ket{\psi}=\scp{i}{\psi(t)}, \nonumber\\ \label{td5}\end{eqnarray} since $a_j=\scp{j}{\psi(0)}=\scp{j}{\psi}$. \par It is now easy to demonstrate that the invariance of the c-number symmetric bracket results from the unitary transformation ${\hat U}(t)=e^{-it\hat G}$. This is seen from \begin{eqnarray} &&\{a_i(t),a^*_j(t)\}_+ =\{\bra{i}{\hat U}(t)\ket{\psi},\bra{\psi}{\hat U}^\dagger(t)\ket{j}\}_+, \nonumber\\ &&=\sum_k\sum_l\{\bra{i}{\hat U}(t)\ket{k}\bra{k}\psi \rangle,\langle\psi\ket{l}\bra{l}{\hat U}^\dagger(t)\ket{j}\}_+ \nonumber\\ &&=\sum_k\sum_l\bra{i}{\hat U}(t)\ket{k}\bra{l}{\hat U}^\dagger(t)\ket{j}\{a_k,a_l^*\}_+ \nonumber\\ &&=\sum_k\sum_l\bra{i}{\hat U}(t)\ket{k}\bra{l}{\hat U}^\dagger(t)\ket{j}\delta_{kl} =\bra{i}{\hat U}(t){\hat U}^\dagger(t)\ket{j}=\delta_{ij}. \nonumber\\ \label{td6}\end{eqnarray} \par In the case of the quantum boson or fermion operators, the invariance of the commutation relations Eq. (\ref{o2}) follows from the unitarity of the time development operator, ${\hat U}(t)=e^{-it{\hat H}}$ obtained from Eq. (\ref{it2}) for the boson case or ${\hat U}(t)=e^{-it\hat G}$ obtained from Eq. 
(\ref{td5}) for the fermion case, such that \begin{eqnarray} {\hat{a}}_i(t) &&={\hat U}^\dagger(t){\hat{a}}_i {\hat U}(t) =\sum_{n=0}^\infty {(it)^n{\hat D}^n({\hat H})\over n!}{\hat{a}}_i \nonumber\\ {\hat D}({\hat H}){\hat{a}}_i &&=[{\hat H},{\hat{a}}_i] \nonumber\\ {\hat D}^2({\hat H}){\hat{a}}_i &&=[{\hat H},[{\hat H},{\hat{a}}_i]], etc. \nonumber\\ {\hat{a}}_i(0) &&={\hat{a}}_i. \nonumber\\ \lbrack{\hat{a}}_i(t),{\hat{a}}^\dagger_j(t)\rbrack_{\pm} &&={\hat U}^\dagger(t)\lbrack{\hat{a}}_i,{\hat{a}}^\dagger_j\rbrack_{\pm}{\hat U}(t) =\delta_{ij}. \nonumber\\ \label{td3}\end{eqnarray} \section{Angular Momentum} \par It is interesting to note that the c-number antisymmetric bracket generates the algebra of orbital angular momentum, and that there is a c-number differential operator representation of the $SU(2)$ Lie algebra. The components of orbital angular momentum are (for $i,j$, and $k=1,2$ or $3$) \begin{equation} l_i=i\sum_j\sum_k\epsilon_{ijk}a_ja_k^*, \label{am0}\end{equation} with $\epsilon_{ijk}$ antisymmetric in its indices and $\epsilon_{123}=1$.
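With the convention $a_j=(q_j+ip_j)/\sqrt{2}$, the component form $l_i=i\sum_{jk}\epsilon_{ijk}a_ja_k^*$ can be checked numerically against the classical $({\vec r}\times{\vec p})_i$. A sketch; the phase-space values are arbitrary illustrative numbers:

```python
from math import sqrt

# arbitrary illustrative phase-space point
q = [0.3, -1.2, 0.7]
p = [1.1, 0.4, -0.8]
a = [(q[j] + 1j * p[j]) / sqrt(2) for j in range(3)]

def eps(i, j, k):
    # Levi-Civita symbol with eps_{123} = 1 (0-based indices here)
    return (j - i) * (k - i) * (k - j) // 2 if {i, j, k} == {0, 1, 2} else 0

# l_i = i sum_{jk} eps_{ijk} a_j a_k^*
l = [sum(1j * eps(i, j, k) * a[j] * a[k].conjugate()
         for j in range(3) for k in range(3)) for i in range(3)]

# (r x p)_i computed directly
rxp = [q[1] * p[2] - q[2] * p[1],
       q[2] * p[0] - q[0] * p[2],
       q[0] * p[1] - q[1] * p[0]]

for i in range(3):
    assert abs(l[i] - rxp[i]) < 1e-12
print("l components match r x p")
```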
For the classical coordinates $a_i$, the following relations are found: \begin{eqnarray} \{l_i,l_j\}_- &&=i\sum_k\epsilon_{ijk}l_k,\nonumber \\ \{l^2,l_j\}_- &&=0\nonumber \\ \{l_i,q_j\}_- && =i\sum_k\epsilon_{ijk}q_k,\,\,\,\{l_i,p_j\}_-=i\sum_k\epsilon_{ijk}p_k.\nonumber \\ \label{am1}\end{eqnarray} A differential operator representation for the $SU(2)$ algebra is given by \begin{eqnarray} {\hat J}_{\pm} &&={\hat J}_1\pm i{\hat J}_2 \nonumber\\ {\hat J}_+ &&=a^*{\partial \over \partial a}, \,\,\,\, {\hat J}_-=a{\partial \over \partial a^*} \nonumber\\ {\hat J}_3 &&={1\over 2} (a^*{\partial \over \partial a^*}-a{\partial \over \partial a}) \nonumber\\ \lbrack{\hat J}_3,{\hat J}_{\pm}\rbrack u(j,m) &&=\pm {\hat J}_{\pm}u(j,m) \nonumber\\ \lbrack{\hat J}_+,{\hat J}_-\rbrack u(j,m) &&=2{\hat J}_3u(j,m)=2m u(j,m) \nonumber\\ u(j,m) &&={a^{*j+m}a^{j-m}\over\sqrt{(j+m)!(j-m)!}} \nonumber\\ {\hat J}^2u(j,m) &&={\hat K}({\hat K}+1)u(j,m)=j(j+1)u(j,m) \nonumber\\ {\hat K} &&={1\over 2} (a^*{\partial \over \partial a^*}+a{\partial \over \partial a}) \nonumber\\ {\hat J}_{\pm}u(j,m) &&=\sqrt{j(j+1)-m(m\pm 1)}u(j,m\pm 1). \nonumber\\ \label{am2}\end{eqnarray} For the functions $u(j,m)$ the inner product is defined, with $j\geq j'$, as \begin{eqnarray} \langle u(j,m)|u(j',m')\rangle &&={1\over 2\pi N(j,m)}\int_0^{2\pi}{\partial^{4j} u^*(j,m)u(j',m')\over \partial^{2j}a^*\partial^{2j}a}d\phi\nonumber \\ &&=\delta_{jj'}\delta_{mm'}\nonumber \\ N(j,m) &&={(2j)!(2j)!\over(j+m)!(j-m)!}\nonumber \\ \label{am3}\end{eqnarray} with $a=\rho e^{i\phi}$. \section{Conclusions} \par In this paper it has been shown that both quantum theory and the quantum field theory of bosons and fermions are a natural consequence of the principle of invariance of the symmetric bracket, a concept which is analogous to the bracket invariance principle that appears in classical dynamics.
Just as the invariance of the c-number antisymmetric bracket under a one parameter canonical transformation leads to dynamical equations, which determine the classical dynamical flow in coordinate phase space, the invariance of the c-number symmetric bracket under a one parameter canonical transformation leads to a dynamical equation, Eq. (\ref{it5}), which determines the dynamical flow of quantum states. In the former case, the dynamical equations are Hamilton's equations; however, in the latter, the dynamical equation is a time-dependent Schr\"odinger type equation Eq. (\ref{notes17}), which is equivalent to generalized Hamilton's equations Eq. (\ref{notes11}) or Eq. (\ref{notes12}) for the real part $q_i$ and imaginary part $p_i$ of the coordinates $a_i(t)=\scp{i}{\psi(t)}$. The truly remarkable consequences of the principle of invariance of the symmetric bracket are the derivation of the time-dependent Schr\"odinger equation and the quantum bracket relation $[\hat q,\hat p]=i\hbar$. This argument makes the time-dependent Schr\"odinger equation a consequence of bracket invariance, and it replaces with a logical derivation the heuristic conjectures of Schr\"odinger \cite{esh2},\cite{esh4}, and \cite{esh5} leading to the discovery of his famous equation. Furthermore, it removes $[\hat q,\hat p]=i\hbar$ and the time-dependent Schr\"odinger equation from the status of postulates of quantum theory. Along with these results come the natural association of expectation values of quantum operators with corresponding classical quantities, and the statistical interpretation of quantum theory. In addition, the c-number time development equation Eq. (\ref{it5}) found from this principle provides a natural condition for the emergence of the quantum field theory of bosons and fermions, when the antisymmetric bracket is associated with the boson operator commutator and the symmetric bracket is associated with the fermion operator anticommutator.
It is clear that both the first quantized and second quantized theories of bosons have an associated c-number dynamics; namely, classical dynamics and classical field theory. These are found from the expectation values and matrix elements of operators using boson minimum uncertainty displacement states. However, fermion c-number dynamics is not classical dynamics, but it is already a first quantized theory, as seen from the derivation in Eq. (\ref{se4}). The second quantized version is the quantum field theory of fermions. The c-number coordinates in this case satisfy the c-number dynamical equation Eq. (\ref{it5}), and they are found as matrix elements of fermion operators using the fermion displacement states. \par It is clear that the fermion dynamics resulting from the symmetric bracket invariance allows the gauge couplings that are known to lead to renormalizable theories for fermion dynamics, i.e., QED and QCD. The old four-fermion theory of beta decay, which did not require the intermediation of the W boson, clearly has an equation of motion which involves fermion phase space variables in a nonlinear fashion, and thus cannot be of the (necessarily linear) Schr\"odinger equation type that is here required by invariance of the symmetric bracket. This does not mean that effective theories with nonlinear fermion interactions are not useful approximations. Examples of such approximate theories are the Hubbard model \cite{Frad} and the composite vector boson model \cite{TGPR}, where nonlinear interactions may be introduced using path integral methods with auxiliary fields. \vfill\eject
\section{Introduction} In the currently accepted cosmological model, galaxy formation is intimately connected to the formation of the large-scale cosmic structure. To test this model, we need to measure the relative distribution of light and matter in the Universe. The mass distribution on small scales, from galaxies to galaxy clusters, has usually been inferred by assuming that systems are in dynamical equilibrium. On very large scales, the mass overdensities are small enough that the linear theory of density perturbations can be used to measure the mass distribution from the relation between the mass density field and the peculiar velocities of galaxies \cite{zaroubi02}. On intermediate, mildly non-linear scales, $\sim 1-10h^{-1}$~Mpc,\footnote{We use $H_0=100 h$ km s$^{-1}$ Mpc$^{-1}$ throughout.} neither the dynamical equilibrium hypothesis nor linear theory is valid. No robust way of measuring the mass distribution in this regime was available until the 1990s, when both gravitational lensing and the caustic technique were developed. Here, we provide an overview of how the caustic technique came about and what it has contributed so far. \section{The context of mass estimators} \subsection{The assumption of dynamical equilibrium} Galaxy cluster mass estimators measure either the total mass within a given radius $R$ or the mass radial profile. Traditionally, both kinds of estimators are based on the assumption that the cluster is spherical and in dynamical equilibrium. The virial theorem is usually applied when the number of galaxy redshifts is not large: the galaxy velocity dispersion $\sigma$ and the cluster size $R$ are sufficient to yield an estimate of the cluster total mass $M=\sigma^2 R/G$ \cite{zwicky37}, where $G$ is the gravitational constant.
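As a back-of-the-envelope illustration of the virial estimator (the values of $\sigma$ and $R$ below are made-up, though typical, numbers, not measurements), $M=\sigma^2 R/G$ yields a mass of a few $10^{14}$ solar masses:

```python
# physical constants in SI units
G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
Mpc   = 3.086e22           # m

# illustrative cluster values
sigma = 1.0e6              # velocity dispersion: 1000 km/s, in m/s
R     = 1.5 * Mpc          # cluster size

M = sigma ** 2 * R / G     # M = sigma^2 R / G, no surface term correction
print("M = %.2e M_sun" % (M / M_sun))
```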
More accurate measurements require a surface term correction \cite{the86, merritt87}, which can decrease the estimated mass by a substantial factor ($\sim 20\%$, on average \cite{girardi98}), and knowledge of the galaxy orbital distribution; however, although this distribution can only be reasonably guessed in most cases, its uncertainty has only a modest impact ($\sim 5\%$) on the final mass estimate. These uncertainties become an order of magnitude larger if the galaxies are not fair tracers of the mass distribution. When the number of galaxy spectra is large enough that we can estimate the velocity dispersion profile, we can apply the Jeans equations for a steady-state spherical system. The cumulative mass is \begin{equation} M(<r) = - {\langle v_r^2 \rangle r \over G} \left[{{\rm d}\ln\rho_{\rm m}\over {\rm d}\ln r} + {{\rm d}\ln\langle v_r^2 \rangle\over {\rm d}\ln r} + 2\beta(r)\right] \; . \label{eq:jeans} \end{equation} However, as in the virial theorem, the application of equation (\ref{eq:jeans}) requires the assumption of a relation between the galaxy number density profile and the mass density profile $\rho_{\rm m}$. Moreover, we do not usually know the velocity anisotropy parameter \begin{equation} \beta(r)=1-{\langle v^2_\theta\rangle + \langle v^2_\phi\rangle\over 2\langle v^2_r\rangle} \; , \label{eq:beta} \end{equation} where $v_\theta$, $v_\phi$, and $v_r$ are the longitudinal, azimuthal and radial components of the velocity $v$ of the galaxies, respectively, and the brackets indicate an average over the velocities of the galaxies in the volume ${\rm d}^3{\bf r}$ centered on position ${\bf r}$ from the cluster center. Therefore, we cannot measure $M(<r)$ without guessing $\beta(r)$, or vice versa. A common strategy is to measure the velocity distributions of different galaxy populations which are assumed to be in equilibrium and thus to trace the same gravitational potential.
This method can help to break this mass-anisotropy degeneracy, although not completely (see \cite{biviano06, biviano08} for very lucid reviews of these methods). We can estimate the mass profile when observations in the X-ray band provide the intracluster medium (ICM) density $\rho_{\rm gas}$ and temperature $T$. The assumption of hydrostatic equilibrium of the ICM yields a relation similar to equation (\ref{eq:jeans}) \begin{equation} M(<r) = - {k T r\over G\mu m_{\rm p}} \left[{{\rm d}\ln\rho_{\rm gas}\over {\rm d}\ln r} + {{\rm d}\ln T\over {\rm d}\ln r} \right] \; \label{eq:ICM} \end{equation} where $k$ is the Boltzmann constant, $\mu$ the mean molecular weight, and $m_{\rm p}$ the proton mass. Note that the term analogous to $\beta$, which appears in equation (\ref{eq:beta}), is now zero, because, unlike the galaxy orbits, the ICM pressure is isotropic. When a sufficient angular resolution and energy sensitivity are not available to measure the X-ray spectrum at different clustrocentric radii and thus estimate the temperature profile, an isothermal ICM is usually assumed. However, the departure from this assumption appears to be substantial in most clusters where the density and temperature structures can be measured (e.g., \cite{mark98, deg02}). For estimating the cluster mass when detailed observations of the cluster are unavailable, we can use a scaling relation between the mass and an observable average quantity. The most commonly used scaling relations are those involving ICM thermal properties, such as the X-ray temperature (e.g. \cite{pierpaoli03}; see also \cite{borgani06, borgani08} for reviews). In this case, however, the complex thermal structure of the ICM can significantly bias the cluster mass estimate \cite{rasia06}. Rather than using an X-ray observable, one could use, in principle, the integrated Sunyaev-Zel'dovich effect, which yields a correlation with mass which is tighter than the mass-X-ray temperature correlation \cite{motl05}.
However, this correlation is currently valid only for simulated clusters, and still needs to be confirmed by upcoming cluster surveys. \subsection{Dropping the dynamical equilibrium assumption} The astrophysical relevance of galaxies as gravitational lenses was first intuited by Zwicky \cite{zwicky37}, but it was only fifty years later that the first gravitational lens effect was measured in a galaxy cluster \cite{lyn86}. The lensing effect is a distortion of the optical images of sources beyond the mass concentration; this distortion depends only on the amount of mass along the line-of-sight and not on the dynamical state of this mass. The obvious advantage is thus that the dynamical equilibrium assumption, which is essential for all the methods listed above, now becomes unnecessary. The lensing effect can be classified as strong or weak lensing, depending on its intensity. Strong lensing creates multiple images of a single source and can be used for measuring the cluster mass in its core, where the gravitational potential is deep enough. In the outer regions, the lensing effect is weaker and it only yields a tangential distortion of the induced ellipticities of the shape of the background galaxies; weak lensing can thus measure the depth of the potential well from the center to the cluster outskirts. The most serious disadvantage of gravitational lensing for measuring masses is that the signal intensity depends on the relative distances between observer, lens and source, and clearly not all clusters are in an appropriate position to provide easily measurable lensing effects. Moreover, weak lensing does not generally have a large signal-to-noise ratio and weak lensing analyses are not trivial (see e.g., \cite{schnei06}).
In 1997, Diaferio and Geller \cite{diaf97} proposed the caustic technique, a novel method to estimate the cluster mass which is not based on the dynamical equilibrium assumption and only requires galaxy celestial coordinates and redshifts. The method can thus measure the cluster mass on all the scales from the central region to well beyond the virial radius $r_{200}$, the radius within which the average mass density is 200 times the critical density of the Universe. Prompted by the $N$-body simulations of van Haarlem and van de Weygaert \cite{haarlem93b}, Diaferio and Geller \cite{diaf97} noticed that in hierarchical models of structure formation, the velocity field in the regions surrounding the cluster is not perfectly radial, as expected in the spherical infall model \cite{regos89, hiotelis01}, but has a substantial random component. This fact can be exploited to extract the escape velocity of galaxies from their distribution in redshift space. Here, we will provide an overview of this method. \subsection{Masses on different scales} It is clear that weak lensing and the caustic technique can be applied to scales larger than the virial radius because they do not depend on the assumption of dynamical equilibrium. However, the other estimators we mentioned above do not always measure the total cluster mass within $r_{200}$, as, for example, the virial analyses, based on optical observations, usually do. X-ray estimates rarely go beyond $\sim 0.5 r_{200}$, because on these larger scales the X-ray surface brightness becomes smaller than the X-ray telescope sensitivity; gravitational lensing only measures the central mass within $\sim 0.1 r_{200}$, where the strong regime applies. 
Of course, scaling relations do not provide any information on the mass profile; rather, they provide the total mass within a given radius, which depends on the scaling relation used: typically, X-ray, optical and Sunyaev-Zel'dovich scaling relations yield masses within increasing radii, but still smaller than $r_{200}$. \section{History} The spherically symmetric infall onto an initial density perturbation is the simplest classical problem we encounter when we treat the formation of cosmic structure by gravitational instability in an expanding background \cite{gunn72, bert85}. The solution to this problem provides two relevant results: the density profile of the resulting system and the mean mass density of the Universe $\Omega_0$. The former issue has a long history that we do not review here (see, e.g., \cite{zaroubi96, delpop04}). The basic idea is simple: we can imagine a spherical perturbation separated into individual spherical mass shells that expand to the maximum turn-around radius, the radius where the peculiar velocity $v_{\rm pec}(r)$ exactly cancels the Hubble velocity, before starting to collapse. This simple picture enables us to predict the density profile of the final object if we assume that mass is conserved, there is no shell crossing, and we know the initial density profile of the perturbation, namely the initial two-point mass correlation function $\xi(r)$; $\xi(r)$ contains the same information as the power spectrum $P(k)$ of the mass density perturbations, if these are Gaussian variates. For scale-free initial power spectra $P(k)\propto k^n$, the final density profile is $\rho\propto r^{-\alpha}$ with $\alpha$ depending on $\Omega_0$ and $n$. The spherically symmetric infall can also be used to estimate $\Omega_0$. 
When the average mass overdensity $\delta(r)$ within the radius $r$ of the perturbation is small enough, we can compute the radial velocity of each shell of radius $r$ according to linear theory \begin{equation} {v_{\rm pec}(r)\over H_0 r} =-{1\over 3}\Omega_0^{0.6}\delta(r)\; . \label{eq:vpec-lin} \end{equation} In the simplest application of this relation to real systems, we assume that galaxies trace mass, so that the galaxy number overdensity is simply related to $\delta$; a measure of ${v_{\rm pec}}$ thus promptly yields $\Omega_0$. In the 1980s this strategy was applied to the Virgo cluster and the Local Supercluster; galaxies in these systems are close enough that we can measure galaxy distances $d$ independently of redshift $cz$, and thus estimate the projection along the line of sight, $v_{\rm pec}^{\rm los}=cz-H_0d$, of the radial velocity $v_{\rm pec}$. These analyses indicated that $\Omega_0=0.35\pm 0.15$ \cite{davis83}, in agreement with the most recent estimates \cite{dunkley08}, but at odds with the inflationary value $\Omega_0=1$, which, at that time, was commonly believed to be the ``correct'' value. A slight complication derives from the fact that the external regions of clusters are not properly described by linear theory. We can use instead the spherical infall model. In this case, $\delta$ and $\Omega_0$ are still separable quantities and we can recast equation (\ref{eq:vpec-lin}) as \begin{equation} {v_{\rm pec}(r)\over H_0 r} = -{1\over 3}\Omega_0^{0.6}f(\delta) \label{eq:vpec-nonlin} \end{equation} so that we can still measure $\Omega_0$ once $\delta$ is known. Typical approximations are $f(\delta)=\delta(1+\delta)^{-1/4}$ \cite{yahil85} and $f(\delta)=\delta(1+\delta/3)^{-1/2}$ \cite{vill86, cupani08}. 
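As an aside, equation (\ref{eq:vpec-nonlin}) is straightforward to invert numerically. The Python sketch below is our own illustration (function names and input values are hypothetical, not part of the original analyses); it recovers $\Omega_0$ from a given infall velocity ratio and overdensity using the two approximations of $f(\delta)$ quoted above:

```python
def f_yahil(delta):
    """Yahil's approximation f(delta) = delta (1 + delta)^(-1/4)."""
    return delta * (1.0 + delta) ** -0.25

def f_villumsen(delta):
    """The alternative approximation f(delta) = delta (1 + delta/3)^(-1/2)."""
    return delta * (1.0 + delta / 3.0) ** -0.5

def omega0_from_infall(v_ratio, delta, f=f_yahil):
    """Invert v_pec/(H0 r) = -(1/3) Omega0^0.6 f(delta) for Omega0.

    v_ratio is the ratio v_pec/(H0 r), negative for infall.
    """
    return (-3.0 * v_ratio / f(delta)) ** (1.0 / 0.6)
```

In the limit of small $\delta$, both approximations reduce to $f(\delta)\simeq\delta$, so the inversion also reproduces the linear relation (\ref{eq:vpec-lin}).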
A more serious complication is that departures from spherical symmetry can be large in real systems and the radial velocities $v_{\rm pec}$ derived from their line-of-sight components can be affected by relative uncertainties of the order of 50\% \cite{vill86}. Measuring absolute distances to galaxies remains a difficult problem even today. Thus, estimating $\Omega_0$ from the infall regions of clusters might not be trivial. However, this complication can be bypassed thanks to the intuition of Reg\"os and Geller \cite{regos89}, who were inspired by the work of Shectman \cite{shect82}, Kaiser \cite{kais87} and Ostriker {\it et al.} \cite{ostr88}. Kaiser showed that, when observed in redshift space (specifically, the line-of-sight velocities of galaxies $cz$ versus their clustrocentric angular distance $\theta$), the infall pattern around a rich cluster appears as a ``trumpet horn'' whose amplitude ${\cal A}(\theta)$ decreases with $\theta$. The turn-around radius is identified by the condition ${\cal A}(\theta)=0$ \cite{ostr88}. For the Abell cluster A539, Ostriker {\it et al.} \cite{ostr88} found the turn-around radius $\theta_{\rm ta}\sim 2^\circ\sim 3 h^{-1}$~Mpc. Although the galaxy sampling in the infall region of this cluster was too sparse to measure $\Omega_0$ (they only set a lower limit $\Omega_0>0.03$ with equation \ref{eq:vpec-nonlin}), the proposed strategy was intriguing, because it showed that measuring galaxy distances independently of redshift was unnecessary. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figures/vanhaarl93-low.eps} \caption{Caustics (solid lines) according to the spherical infall model (equation \ref{eq:regos}) in the Coma cluster (lower panels). The symbols show the galaxy positions in the redshift diagram. Larger amplitudes correspond to increasing cosmic densities $\Omega_0=0.2$, $0.5$, $1.0$. 
The mass overdensity $\delta$ is estimated from the galaxy number densities (upper panels) based on CfA data (left panels), or APM data (right panels). From \cite{haarlem93}.} \label{fig:vanhaarl93} \end{figure} \begin{figure} \centering \includegraphics[angle=-90,width=0.8\textwidth]{Figures/vanhaarl93b-low.eps} \caption{Redshift diagrams of clusters in an $N$-body simulation of the standard Cold Dark Matter (CDM) model with $\Omega_0=1$. The dots show the dark matter particle positions. The left column shows the redshift diagrams of the same cluster, observed along three different lines of sight, that has just accreted a group. The right column shows the redshift diagram of another cluster, observed along three different lines of sight, that has not had substantial mass accretion in the recent past. In the right column, the caustics according to the spherical infall model are also shown as solid lines; the smaller (larger) amplitude corresponds to $\Omega_0=0.5$, ($\Omega_0=1$), whereas $\Omega_0=1$ in the simulation. From \cite{haarlem93b}.} \label{fig:vanhaarl93b} \end{figure} Reg\"os and Geller \cite{regos89} quantified this intuition by showing that the relation between the galaxy number density $\bar n(r)$ in real space and the galaxy number density $n(cz,\theta)$ in redshift space is \begin{equation} n(cz,\theta)=\bar n(r)\left(r\over cz\right)^2 {1\over J} \end{equation} where $J$ is the Jacobian of the transformation from real to redshift space coordinates. When $J=0$, $n(cz,\theta)$ is infinite. This condition locates the borders of Kaiser's horn which are named {\it caustics}. We can now use equation (\ref{eq:vpec-nonlin}) to relate ${\cal A}(\theta)$ to $\Omega_0$ (equation 34 of \cite{regos89}): \begin{equation} {\cal A}(\theta)\sim \Omega_0^{0.6} rf(\delta) \left[ -{{\rm d}\ln f(\delta)\over {\rm d}\ln r}\right]^{-1/2} \label{eq:regos} \end{equation} where $r$ and $\theta$ are related by the transformation from real to redshift space coordinates. 
\begin{figure} \centering \includegraphics[angle=-90,width=0.7\textwidth]{Figures/diaf97-fig03.eps} \caption{Caustic amplitude vs. projected clustrocentric distance for a simulated cluster in three different CDM cosmologies (columns) with $[\Omega_0,\Omega_\Lambda]$ as shown above the upper panels. The cluster is shown right after a major merger (upper row) and at equilibrium (lower row). The cosmic time is shown by the scale factor $a$. The crosses show the actual caustic amplitude. The solid lines show the root mean square of the line-of-sight component of the escape velocity: $\langle v^2_{\rm esc, los}(r)\rangle^{1/2} = \{- 2\phi(r)[1-\beta(r)]/[3-2\beta(r)]\}^{1/2}$. The dashed lines show the prediction of the spherical infall model. Clustrocentric distances are in units of the virial radius $r_\delta$. From \cite{diaf97}.} \label{fig:vesc} \end{figure} Unfortunately, the caustics appeared to be very fuzzy when a sufficiently dense sampling of the infall region of a rich cluster like Coma was obtained \cite{haarlem93}; consequently, the measure of $\Omega_0$ was rather uncertain (Figure \ref{fig:vanhaarl93}). This disappointing result was attributed to the fact that the assumption of spherical symmetry is very poorly satisfied and that the substructure surrounding the cluster distorts the radial velocity field \cite{haarlem93b} (Figure \ref{fig:vanhaarl93b}). Because the caustic location is so sensitive to the cluster shape, locating the caustics in the redshift diagram did not appear to be a promising strategy to measure $\Omega_0$. A breakthrough came when Diaferio and Geller \cite{diaf97} took a step further than van Haarlem and van de Weygaert. In hierarchical clustering scenarios, clusters accrete mass episodically and anisotropically \cite{colberg99} rather than through the gentle infall of spherical shells. Moreover, clusters accrete galaxy groups with their own internal motion. Therefore, the velocity field of the infall region can have substantial non-radial and random components. 
These velocity components both make the caustic location fuzzy, and, more importantly, increase the caustic amplitude when compared to the spherical infall model. This intuition opened the way to interpret the square of the caustic amplitude ${\cal A}^2(\theta)$ as the average, over the volume ${\rm d}^3{\bf r}$, of the square of the line-of-sight component of the escape velocity $\langle v^2_{\rm esc, los}(r)\rangle =-2\phi(r) g^{-1}(\beta)$, where $\phi(r)$ is the gravitational potential profile and $g$ (equation \ref{eq:gbeta}) is a function of the velocity anisotropy parameter $\beta(r)$. The crucial point here is that the equation ${\cal A}^2(r) = \langle v^2_{\rm esc, los}(r)\rangle $ holds {\it independently of the dynamical state of the cluster}. This interpretation works amazingly well. Figure \ref{fig:vesc} shows the results of $N$-body simulations of the formation and evolution of a galaxy cluster in Cold Dark Matter (CDM) models with different cosmological parameters. The caustic amplitude (crosses) and $\langle v^2_{\rm esc, los}(r_\perp)\rangle $ (solid lines), as a function of the projected distance $r_\perp$, agree at all scales out to ten virial radii $r_\delta$\footnote{See \cite{diaf97} for the proper definition of the virial radius $r_\delta$ in these plots.} and independently of the dynamical state of the cluster: immediately after a major merger (upper panels) or at equilibrium (lower panels). The spherical infall model (dashed lines), which should only hold for $r_\perp>r_\delta$, always severely underestimates the actual caustic amplitude. These simulations and those in \cite{diaf99} also show another relevant result: the major effect of the cluster shape is not to make the caustics fuzzy but rather to yield different caustic amplitudes depending on the line of sight. The identification ${\cal A}^2(r) = \langle v^2_{\rm esc, los}(r)\rangle $ can be immediately used to measure the cluster mass. 
If we assume spherical symmetry, the cumulative total mass $M(<r)$ is \begin{equation} GM(<r) = r^2{{\rm d}\phi\over {\rm d}r} = -{r\over 2} \langle v^2_{\rm esc, los}\rangle g(\beta)\left({{\rm d}\ln \langle v^2_{\rm esc, los}\rangle \over {\rm d}\ln r} + {{\rm d}\ln g \over {\rm d}\ln r}\right) \; . \label{eq:dphi} \end{equation} However, in realistic situations, the two logarithmic derivatives are comparable, and we thus need to know $\beta(r)$, which is generally not available. Moreover, the most serious obstacle in using equation (\ref{eq:dphi}) is the fact that sparse sampling and background and foreground galaxies make the estimate of $\langle v^2_{\rm esc, los}(r)\rangle$ too noisy to extract accurate information from its differentiation. To bypass this problem, Diaferio and Geller \cite{diaf97} suggested a different recipe to estimate the cumulative mass \begin{equation} GM(<r)={\cal F}_\beta\int_0^r \langle v^2_{\rm esc, los}(r)\rangle {\rm d}r = {\cal F}_\beta\int_0^r {\cal A}^2(r) {\rm d}r \end{equation} where ${\cal F}_\beta\approx 0.5$ is a constant. This recipe has been applied to a large number of clusters ever since and it is now becoming a popular tool to measure the mass in the cluster infall regions. Below, we justify this recipe and show how it works in practice. \section{The caustic method} In hierarchical clustering models of structure formation, clusters form by the aggregation of smaller systems accreting onto the cluster from the surrounding region. The accretion does not happen purely radially and galaxies within the infalling clumps have velocities with substantial non-radial components. Specifically, these velocities depend both on the tidal fields of the surrounding region and on the gravitational potential of the clusters and the groups where the galaxies reside. 
In the previous section, we have seen that, when viewed in the redshift diagram, galaxies populate a region with a characteristic trumpet shape whose amplitude, which decreases with increasing $r$, is related to the escape velocity from the cluster region. The squared escape velocity $v_{\rm esc}^2(r)=-2\phi(r)$, where $\phi(r)$ is the gravitational potential generated by the cluster, is a non-increasing function of $r$, because gravity is always attractive and ${\rm d}\phi/{\rm d}r>0$. Thus, we can identify the square of the amplitude ${\cal A}$ at the projected radius $r_\perp$ as the average of the square of the line-of-sight component $\langle v^2_{\rm los}\rangle$ of the escape velocity at the three-dimensional radius $r=r_\perp$. To relate $\langle v^2_{\rm los}\rangle$ to $\phi(r)$, we need the velocity anisotropy parameter $\beta(r)$ (equation \ref{eq:beta}). If the cluster rotation is negligible, we have $\langle v^2_\theta\rangle=\langle v^2_\phi\rangle=\langle v^2_{\rm los}\rangle$, and $\langle v^2_r\rangle=\langle v^2\rangle-2\langle v^2_{\rm los}\rangle$. By substituting this relation into equation (\ref{eq:beta}), we obtain $\langle v^2\rangle=\langle v^2_{\rm los} \rangle g(\beta)$ where \begin{equation} g(\beta) = {3-2\beta(r)\over 1-\beta(r)}\; . \label{eq:gbeta} \end{equation} By applying this relation to the escape velocity at radius $r$, $\langle v_{\rm esc}^2(r)\rangle=-2\phi(r)$, and by assuming that ${\cal A}^2(r)=\langle v^2_{\rm esc, los}\rangle$, we obtain the fundamental relation between the gravitational potential $\phi(r)$ and the observable caustic amplitude ${\cal A}(r)$ \begin{equation} -2\phi(r)={\cal A}^2(r)g(\beta) \; . 
\label{eq:rig-pot} \end{equation} To infer the cluster mass to very large radii, one first notices that the mass of a shell of infinitesimal thickness ${\rm d}r$ can be cast in the form $G{\rm d}m=-2\phi(r){\cal F}(r){\rm d}r = {\cal A}^2(r)g(\beta) {\cal F}(r) {\rm d}r$ where \begin{equation} {\cal F}(r)=-2\pi G{\rho(r)r^2\over \phi(r)}\; . \end{equation} Therefore the mass profile is \begin{equation} GM(<r)=\int_0^r {\cal A}^2(r) {\cal F}_\beta(r) {\rm d}r \label{eq:rig-massprof} \end{equation} where ${\cal F}_\beta(r) = {\cal F}(r) g(\beta)$. Equation (\ref{eq:rig-massprof}), however, only relates the mass profile to the density profile of a spherical system, and one profile cannot be inferred without knowing the other. We can solve this impasse by noticing that, in hierarchical clustering scenarios, ${\cal F}(r)$ is not a strong function of $r$ \cite{diaf97}. This is easily seen in the case of the Navarro, Frenk and White (NFW) \cite{nfw} mass density profile, which is an excellent description of the dark matter distribution in these models: \begin{equation} {\cal F}_{\rm NFW}(r) = {r^2\over 2(r+r_s)^2}{1\over \ln(1+r/r_s)} \; \end{equation} where $r_s$ is a scale-length parameter. If clusters form through hierarchical clustering, ${\cal F}_\beta(r)$ is also a slowly changing function of $r$ \cite{diaf97, diaf99}. We can then make the stronger assumption that ${\cal F}_\beta(r)= {\cal F}_\beta={\rm const}$ and adopt the recipe \begin{equation} GM(<r)={\cal F}_\beta\int_0^r {\cal A}^2(r) {\rm d}r \; . \label{eq:recipe-massprof} \end{equation} When ${\cal F}_\beta=1/2$, this recipe proves to yield mass profiles accurate to 50\% or better both in $N$-body simulations and in real clusters, when compared with masses obtained with standard methods, namely the Jeans equation, X-ray and gravitational lensing analyses, applied on scales where their validity ranges overlap \cite{diaf05}. 
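In practice, equation (\ref{eq:recipe-massprof}) amounts to a cumulative integration of the squared caustic amplitude. A minimal Python sketch of this step is given below; the amplitude profile, the unit choices and all numerical values are our own illustrative assumptions, not part of the method:

```python
import numpy as np

# Sketch of the caustic mass recipe: G M(<r) = F_beta * integral_0^r A^2(x) dx,
# with the calibrated constant F_beta ~ 0.5. Units below are illustrative.
F_BETA = 0.5
G = 4.30e-9  # gravitational constant in Mpc (km/s)^2 / M_sun

def caustic_mass_profile(r, amplitude, f_beta=F_BETA):
    """Cumulative mass M(<r) [M_sun] from a sampled caustic amplitude A(r) [km/s].

    r: increasing radii in Mpc; amplitude: A(r) in km/s.
    Uses cumulative trapezoidal integration of A^2.
    """
    a2 = np.asarray(amplitude, dtype=float) ** 2
    segments = 0.5 * (a2[1:] + a2[:-1]) * np.diff(r)  # trapezoids of A^2 dr
    integral = np.concatenate(([0.0], np.cumsum(segments)))
    return f_beta * integral / G

# Example: a hypothetical smooth amplitude profile; in practice A(r) is
# measured from the redshift diagram.
r = np.linspace(0.01, 5.0, 500)    # Mpc
A = 1500.0 / np.sqrt(1.0 + r)      # km/s, purely illustrative
M = caustic_mass_profile(r, A)
```

With ${\cal F}_\beta=1/2$ fixed, the only input is the measured amplitude profile; the accuracy quoted above refers to this choice.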
It is appropriate to emphasize that equations (\ref{eq:rig-pot}) and (\ref{eq:rig-massprof}) are rigorously correct, whereas equation (\ref{eq:recipe-massprof}) is a heuristic recipe for the estimation of the mass profile. \begin{figure} \centering \includegraphics[angle=90,width=0.5\textwidth]{Figures/thresholds-3cl.ps} \caption{Velocity dispersion of the galaxies on the main branch of the binary tree of three real clusters while walking towards the leaves (see \cite{diaf05}). There is an obvious plateau when entering the tree sector with the cluster members. The filled dots indicate the chosen $\sigma$ used to cut the binary tree and thus select the cluster members.} \label{fig:thresholds} \end{figure} \subsection{Implementation} The implementation of the caustic method requires: (1) the determination of the cluster center; (2) the estimate of the galaxy distribution in the redshift diagram; (3) the location of the caustics. For estimating the cluster center, the galaxies in the cluster field of view\footnote{We clarify that to apply the caustic technique we already need to know that there is a cluster in the field of view. The caustic technique, as it is currently conceived, is not a method to identify clusters in redshift surveys, unlike, e.g., the Voronoi tessellation \cite{ram01} or the matched filter \cite{post96}.} are arranged in a binary tree according to the pairwise projected energy \begin{equation} E_{ij}=-G{m_i m_j\over R_p}+{1\over 2}{m_i m_j\over m_i+m_j}\Pi^2 \label{eq:pairwise-energy} \end{equation} where $R_p$ and $\Pi$ are the projected spatial separation and the proper line-of-sight velocity difference of each galaxy pair, respectively; $m_i$ and $m_j$ are the galaxy masses, which are usually set constant, but can also be chosen according to the galaxy luminosities. By walking along the main branch of the tree from the root to the leaves, we progressively remove the background and foreground galaxies. 
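The pairwise projected energy of equation (\ref{eq:pairwise-energy}) is the quantity that orders the galaxy pairs when the binary tree is assembled. A short Python sketch follows; the equal fiducial masses and the unit choices are our own assumptions for illustration:

```python
G = 4.30e-9  # gravitational constant in Mpc (km/s)^2 / M_sun (illustrative units)

def pairwise_energy(m_i, m_j, R_p, Pi):
    """Pairwise projected energy E_ij = -G m_i m_j / R_p + (1/2) mu Pi^2.

    R_p: projected separation in Mpc; Pi: line-of-sight velocity difference
    in km/s; m_i, m_j: galaxy masses in M_sun (often a common fiducial value).
    """
    mu = m_i * m_j / (m_i + m_j)  # reduced mass of the pair
    return -G * m_i * m_j / R_p + 0.5 * mu * Pi**2

# When the tree is built bottom-up, the pair of nodes with the most negative,
# i.e. most bound, energy is merged first; loosely bound background and
# foreground galaxies end up on peripheral branches and are pruned while
# walking from the root towards the leaves.
```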
We identify the cluster members by computing the velocity dispersion $\sigma$ of the galaxies still on the main branch at each step: $\sigma$ remains roughly constant when we move through the binary tree sector which only contains the cluster members (Figure \ref{fig:thresholds}), because the cluster is approximately isothermal. The cluster members provide the cluster center and therefore the redshift diagram $(r,v)$. The galaxy distribution $f_q(r,v)$ on this plane is estimated with an adaptive kernel method. At each projected radius $r$, the function $\varphi(r)=\int f_q(r,v) dv$ provides the mean squared escape velocity $\langle v_{\rm esc}^2\rangle_{\kappa,R}=\int_0^R{\cal A}_\kappa^2(r)\varphi(r)dr/ \int_0^R\varphi(r)dr$ where ${\cal A}_\kappa$ is the amplitude of the caustics located by the equation $f_q(r,v)=\kappa$. The appropriate $\kappa$ is the root of the equation $\langle v_{\rm esc}^2\rangle_{\kappa,R}=4\sigma^2$, where $\sigma^2$ is the squared velocity dispersion of the members identified on the binary tree. Further technical details of this implementation are described in \cite{diaf99, serra}. \section{Reliability of the method} \begin{figure} \centering \includegraphics[angle=-90,width=0.7\textwidth]{Figures/diaf97-fig06a.eps} \caption{Median mass profiles, measured with the caustic technique, of dark matter halos in samples extracted from CDM models. The cosmological parameters $[\Omega_0,\Omega_\Lambda]$ are shown above the upper panels. The upper row shows the most massive halos: $M(<r_\delta)\ge 10^{14} M_{\odot}$ for the high-density model, and $M(<r_\delta)\ge 2\cdot 10^{13}M_{\odot}$ for the low-density models. The lower row shows the least massive halos: $10^{13}M_{\odot}\le M(<r_\delta)< 10^{14} M_{\odot}$ for the high-density model, and $10^{12}M_{\odot}\le M(<r_\delta)< 2\cdot 10^{13} M_\odot$ for the low-density models. The numbers of halos in each sample are indicated in each panel. 
The error bars indicate upper and lower quartiles at each projected distance $r_\perp$. From \cite{diaf97}.} \label{fig:nbody97} \end{figure} \begin{figure} \centering \includegraphics[angle=90,width=0.8\textwidth]{Figures/comb_prof.ps} \caption{Radial profiles of the caustic amplitude (upper row), cumulative mass (middle row), and mass-to-light ratio (lower row) of a simulated cluster observed along ten different lines of sight. The thin lines are the profiles estimated from the individual redshift diagrams. The thick lines are the real profiles. In the lower panels, the solid line is the mean mass-to-light ratio of the simulated universe. Left and right columns are for a cluster in a $\Lambda$CDM and a $\tau$CDM model, respectively. From \cite{diaf99}.} \label{fig:nbody99} \end{figure} \subsection{Comparison with simulations} The caustic technique was tested on $N$-body simulations of cluster formation in CDM cosmologies. Dark matter only simulations showed that the caustic amplitude and the escape velocity profiles agree amazingly well out to ten virial radii, independently of the cosmological parameters and, more importantly, of the dynamical state of the cluster (Figure \ref{fig:vesc}). These simulations also showed that the technique works on both massive and less massive clusters (Figure \ref{fig:nbody97}). In the latter case, the scatter is larger because of projection effects and sparse sampling. In the most massive systems, the mass is recovered within $20\%$ out to ten virial radii in most cases. To test the implementation of the caustic method in realistic cases, we can use $N$-body simulations where the galaxies are formed and evolved with a semi-analytic technique \cite{kauffmann99}. Figure \ref{fig:nbody99} shows the mass profile of a single cluster observed along ten different lines of sight in such simulations \cite{diaf99}. 
When comparing this figure with Figures \ref{fig:vesc} and \ref{fig:nbody97}, where all the dark matter particles were observed, we see that the caustic technique performs better when all the particles are observed than when only the galaxies are available. This difference clearly originates from the sparser sampling of the velocity field provided by the galaxies. Figure \ref{fig:nbody99} also shows that projection effects cause the most relevant systematic error. However, the uncertainty on the mass profile remains smaller than 50\% out to $8 h^{-1}$~Mpc from the cluster center. \begin{figure} \centering \includegraphics[angle=90,width=0.75\textwidth]{Figures/diaf05-low.ps} \caption{Comparison between caustic, lensing, and X-ray mass estimates. The left, middle and right columns are for A2390, MS1358 and Cl~0024, respectively. {\it Top panels}: Redshift diagrams with the galaxies (dots) and caustic locations (solid lines). Line-of-sight velocities $v$ are in the cluster rest-frame. {\it Middle panels}: Three-dimensional cumulative mass profiles. The solid squares show the caustic mass estimates; the solid lines are the best-fitting NFW profiles to the data points within $1 h^{-1}$ Mpc; the dotted lines are the best-fitting NFW profiles to the X-ray measures (from left to right: \cite{allen01, araba02, ota04}); the dashed lines are the best-fitting isothermal (A2390, \cite{squires96}; MS1358, \cite{hoekstra98}) or NFW models (Cl~0024, \cite{kneib03}) to the gravitational lensing measures. The left and right vertical dotted lines show the radius of the X-ray and gravitational lensing fields of view, respectively. The two filled circles show the virial estimates of A2390 and MS1358 \cite{carlberg96}. {\it Bottom panels}: Projected cumulative mass profiles; lines are as in the middle panels. The open diamonds show the weak lensing measures: A2390, \cite{squires96}; MS1358, lower limit to the mass profile \cite{hoekstra98}. 
Filled diamonds show the strong lensing measures: A2390, \cite{pierre96}; MS1358: \cite{allen98, franx97}; Cl~0024: upper symbol, \cite{tyson98}, lower symbol, \cite{broadh00}. Error bars in all panels are 1-$\sigma$; error bars on points where they seem to be missing are smaller than the symbol size. From \cite{diaf05}.} \centering \label{fig:caus-vs-lens} \end{figure} \subsection{Caustic vs. lensing} In equation (\ref{eq:recipe-massprof}) the choice of the constant filling factor ${\cal F}_\beta$ is based on $N$-body simulations alone. Therefore, it is not guaranteed that the caustic technique can recover the mass profile of real clusters if the simulations are not a realistic representation of the large-scale mass distribution in the Universe. Other than the caustic technique, the only method for estimating the mass in the outer regions of galaxy clusters is based on weak lensing. The comparison between these two methods was performed on the clusters A2390, MS1358 and Cl~0024, which are at the appropriate redshift to have a reasonably intense lens signal and a sufficiently high number of galaxy redshifts \cite{diaf05}. Figure \ref{fig:caus-vs-lens} shows the redshift diagrams and the mass profiles of these systems. Caustic and lensing masses agree remarkably well. The most impressive result is for Cl~0024. This cluster is likely to have experienced a recent merging event \cite{czoske02}, and it is probably out of equilibrium: in this system the caustic mass and the lensing mass agree with each other, but disagree with the X-ray mass, which is the only estimate relying on dynamical equilibrium. This result therefore demonstrates the reliability of the caustic technique and its independence of the dynamical state of the system in real clusters. 
\section{Application to real systems} \begin{figure} \centering \includegraphics[angle=180,width=0.7\textwidth]{Figures/diaf05-coma-low.ps} \caption{{\bf Top panels}: Galaxy distribution in the redshift diagram of Coma for three galaxy samples of increasing size. There are 332, 480, and 691 galaxies within the caustics in the samples L4.25, L10.0, and C10.0, respectively. Note that these samples are not substantially larger than the samples in Figure \ref{fig:vanhaarl93} used to estimate $\Omega_0$ with the spherical infall model. The bold lines indicate the location of the caustics. Half the distance between the caustics defines the amplitude ${\cal A}(r)$ shown in the middle panels. {\bf Bottom panels}: The bold lines are the caustic mass profiles. The two error bars show the range of the X-ray mass estimates listed in \cite{hughes89}. Short-dashed and long-dashed lines are the cumulative mass profile for a softened isothermal sphere and an NFW density profile with parameters obtained by fitting the mass profile in the range $[0,1]h^{-1}$ Mpc. Shaded areas in the middle and bottom panels indicate the 2-$\sigma$ uncertainty. From \cite{diaf05}.} \label{fig:coma} \end{figure} \subsection{Mass profiles} Geller {\it et al.} \cite{geller99} were the first to apply the caustic method to a real cluster: they measured the mass profile of Coma out to $10 h^{-1}$ Mpc from the cluster center and were able to demonstrate that the NFW profile fits the cluster density profile out to these very large radii, thus ruling out the isothermal sphere as a viable model of the cluster mass distribution (Figure \ref{fig:coma}). A few years later, the failure of the isothermal model was confirmed by the first similar analyses based on gravitational lensing applied to A1689 \cite{clowe01, lemze08} and Cl~0024 \cite{kneib03}. 
The goodness of the NFW fit out to $5-10 h^{-1}$ Mpc was confirmed by applying the caustic technique to a sample of nine clusters densely sampled in their outer regions, the Cluster And Infall Region Nearby Survey (CAIRNS, \cite{rines03}), and, more recently, to a complete sample of 72 X-ray selected clusters with galaxy redshifts extracted from the Fourth Data Release of the Sloan Digital Sky Survey (Cluster Infall Regions in the Sloan Digital Sky Survey: CIRS, \cite{rines06}). CIRS is currently the largest sample of clusters whose mass profiles have been measured out to $\sim 3 r_{200}$ (Figure \ref{fig:cirs}); Rines and Diaferio \cite{rines06} were thus able to obtain a statistically significant estimate of the ratio between the mass within the turn-around radius $M_{\rm t}$ and the virial mass $M_{200}$: they found an average value of $M_{\rm t}/M_{200} = 2.2\pm 0.2$, which is $\sim 50\%$ smaller than the value expected in current models of cluster formation \cite{tinker05}. The caustic technique is not limited to clusters, but, when enough redshifts are available, it can also be applied to groups of galaxies: a sample of 16 groups confirms both the NFW mass profiles and the ratio $M_{\rm t}/M_{200} = 2.3\pm 0.4$ \cite{rines08b}. Rines {\it et al.} \cite{rines07, rines08} also used the CIRS sample to estimate the virial mass function of nearby clusters and determined cosmological parameters consistent with the WMAP values \cite{dunkley08}; they also showed that velocity bias is absent in real clusters. A good fit with the NFW profile out to $\sim 2 r_{200}$ was also found by Biviano and Girardi \cite{biviano03}, who applied the caustic technique to an ensemble cluster obtained by stacking 43 clusters from the Two-degree-Field Galaxy Redshift Survey (2dFGRS, \cite{colless01}): here, unlike the previous analyses, the caustic method was not applied to individual clusters, because the number of galaxies per cluster was relatively small. 
\begin{figure} \centering \includegraphics[angle=0,width=0.5\textwidth]{Figures/cirs-f12.eps} \caption{Scaled caustic mass profiles for the CIRS clusters. The thin solid lines show the caustic mass profiles normalized by $r_{200}$ and $M_{200}$, the total mass within $r_{200}$. The long-dashed line shows a singular isothermal sphere, the solid lines show NFW profiles (with concentrations $c=3, 5, 10$ from top to bottom at large radii). The short-dashed lines are Hernquist profiles with scale radii different by a factor of two. From \cite{rines06}.} \label{fig:cirs} \end{figure} The caustic method does not rely on the dynamical state of the cluster and of its external regions: there are therefore estimates of the mass of unrelaxed systems, for example the Shapley supercluster \cite{reisenegger00}, the poor Fornax cluster, which contains two distinct dynamical components \cite{drinkwater01}, and the A2199 complex \cite{rines02}. \subsection{Mass-to-light profiles} By combining accurate photometry with the caustic mass of A576, Rines {\it et al.} \cite{rines00} were able to measure, for the first time, the profile of the mass-to-light ratio $M/L$ well beyond the cluster virial radius: they found an $R$-band $M/L$ profile steadily decreasing from $\sim 0.5$ to $4 h^{-1}$ Mpc, indicating that, in this cluster, dark matter is more concentrated than galaxies. Slightly decreasing $M/L$ profiles were also measured in the outer region of five (including A576) out of the nine CAIRNS clusters in the $K$-band \cite{rines04}. The remaining CAIRNS clusters have an $M/L$ profile which remains roughly flat at radii larger than $\sim 1h^{-1}$ Mpc. Coma shows a remarkably flat $K$-band $M/L$ profile out to $10h^{-1}$ Mpc \cite{rines01a}. A flat $M/L$ profile beyond $\sim 0.5 h^{-1}$ Mpc was also found in A1644 in the $H$-band \cite{tustin01}. 
These results have two explanations: (1) the predominance of less luminous late-type galaxies in the cluster outer regions; and (2) the fact that the $K$-band $M/L$ ratio of real galaxy systems increases with the system mass \cite{ramella04}. In fact, clusters form by accretion of smaller systems, as indicated for example by the optical and X-ray observations of the A2199 complex \cite{rines01b}, and as expected in current hierarchical models of structure formation \cite{springel06}; therefore, the regions surrounding clusters, which mostly contain galaxy groups, should naturally have a smaller $M/L$. The positive $M/L$--mass correlation was also obtained in semi-analytical models of galaxy formation \cite{kauffmann99} and is well described by the statistical technique based on the conditional luminosity function \cite{vandenbosch04}. The infall regions are the transition between the dense cluster regions and the field \cite{balogh04, IAU195}, and the internal properties of galaxies do not vary abruptly at the virial radius \cite{rines05}. Therefore galaxy surveys in the outskirts of clusters, such as those mentioned above, can clearly constrain models of cluster and galaxy formation. \section{Conclusion and perspectives} The caustic method and gravitational lensing are the only two techniques currently available for measuring the mass profile of clusters beyond their virial radius. The caustic method requires a sufficiently dense redshift survey with a large field of view and is only limited by the time needed to measure a large enough number of galaxy spectra; this observing time increases quickly with cluster redshift. On the other hand, lensing requires wide-field photometric surveys that need high angular resolution and extremely good observing conditions; moreover, the lensing signal is strong enough only when the cluster is within a limited redshift range $z\approx 0.1-1$. 
When the caustic technique was proposed, multi-object spectroscopy was not routinely applied to measure galaxy redshifts, and the requirement of 100 or more redshifts in the outskirts of clusters appeared demanding. Nowadays this task can be accomplished more easily and the popularity of the caustic technique has begun to increase. The caustic technique has been tested on $N$-body simulations and the mass profiles are accurate to better than $\sim 50\%$ out to $\sim 3-4 r_{200}$. For the three systems where both the caustic method and lensing could be applied, the two methods yield consistent mass profiles. This consistency also holds in Cl~0024, whose X-ray mass profile disagrees with the caustic and lensing profiles; this disagreement is most likely due to the fact that this cluster is out of equilibrium, which makes the X-ray mass unreliable. The uncertainties on the caustic mass profile are almost entirely due to projection effects. In fact, the method assumes that the cluster is spherically symmetric, which is rarely the case; therefore the redshift diagram from which the caustic mass is extracted can vary substantially when the cluster is observed along different lines of sight. The size of this systematic error is comparable to the systematic uncertainty affecting lensing methods, which measure all the mass projected along the line of sight. What the caustic technique actually measures is the line-of-sight component of the escape velocity from the cluster (equation \ref{eq:rig-pot}). If we can measure the velocity anisotropy parameter $\beta$, the caustic technique thus yields a direct measure of the profile of the cluster gravitational potential. This brief review shows that the caustic technique is a powerful tool for the analysis of clusters and their external regions, but its full potential has yet to be exploited.
For example, the $\sigma$ plateau, which appears when walking along the binary tree (Figure \ref{fig:thresholds}), provides a clean way to identify the cluster members. This issue still needs a thorough investigation \cite{serra}, but very preliminary results, based on a large sample of synthetic clusters, show that $\sim 90\%$ of the galaxies within the caustics are cluster members and that the interloper contamination is comparable to or lower than that of other methods \cite{wojtak07}. An additional byproduct of the caustic machinery is the identification of cluster substructures from the distribution of the galaxies in the binary tree \cite{serna96}. This topic is also currently under investigation \cite{serra}. \acknowledgments I thank Alfonso Cavaliere and Yoel Rephaeli for the invitation to this fruitful and well organized school. It is a pleasure to acknowledge the hospitality of SIF during my stay in Varenna. I wish to thank Margaret Geller and Ken Rines who largely contributed to the development and dissemination of the caustic method. I also thank them for a careful reading of the manuscript and suggestions. Support from the PRIN2006 grant ``Costituenti fondamentali dell'Universo'' of the Italian Ministry of University and Scientific Research and from the INFN grant PD51 is gratefully acknowledged.
\section{Introduction} Recent progress in preparing, controlling, and measuring the macroscopic quantum states of superconducting circuits with Josephson junctions \cite{YAMAMOTO2003,STEFFEN2006,PLANTENBERG2007,NEELEY2008, DICARLO2009,BIALCZAK2010,REED2010} makes realization of a quantum computer an experimental possibility \cite{CLARKE2008}. Two major roadblocks -- decoherence and scalability -- may soon be overcome by the so-called Resonator/zero-Qubit (RezQu) architecture, recently proposed by J. Martinis \cite{RezQu}. Some of the basic operations of the RezQu architecture (such as the idling operation, the generation and measurement of the single-excitation states, as well as the single-excitation transfer operation called MOVE) were analyzed in a joint paper \cite{IDLING-PAPER}. It was found that the RezQu architecture is capable of providing the high-fidelity performance required for quantum information processing. In spite of the optimistic conclusions presented in Ref. \cite{IDLING-PAPER}, an important problem of generating {\it high-fidelity entangling} operations in the RezQu architecture still remains. One such operation is the controlled-Z (CZ) gate, given in Eq. (\ref{eq:CZmatrix}). It is believed that, in the RezQu architecture, the CZ gate may easily be produced using the SWAP-based {\it three-step} approach, similar to that of Ref. \cite{Haack-2010}, in which the excitation of one logic qubit is first transferred onto the bus, after which the other qubit is tuned close to resonance with the bus for a precise duration. After the needed phase is accumulated (as in Refs. \cite{Strauch-2003,Yamamoto-10}), the excitation is moved back from the bus to the original qubit. We have simulated this three-step approach for realistic RezQu parameters and found some difficulties with it, which are described in Section \ref{sec:problems}. This prompted us to look for a more direct scheme, which is not beset by such difficulties.
The scheme is described in Sec. \ref{sec:Implementation}. It does not rely on the loading and unloading of the bus, but instead uses a second-order anticrossing for the required phase accumulation ({\it cf.} Ref. \cite{Zheng-2009}). \section{The qubit/bus/qubit device} \begin{figure} \includegraphics[angle=0,width=1.00\linewidth]{fig1} \caption{ \label{fig:1} Schematic diagram of a qubit/bus/qubit RezQu device. The qubits may be supplemented with memory resonators (dashed lines). Here, $q$ -- qubits, $b$ -- bus, $m$ -- memory resonators.} \end{figure} A three-component RezQu device is depicted in Fig. \ref{fig:1}. In the rotating wave approximation (RWA), its dynamics is described by the Hamiltonian \begin{eqnarray} \label{RWAhamiltonian1} &&H(t) = \sum_{i=1,2}H_{i}(t) + \omega_b a^{\dagger}_b a_b \nonumber \\ && + g_{b1} \left(\sigma_{1}^{-}a^{\dagger}_b + \sigma_{1}^{+}a_b\right) + g_{b2}\left(a^{\dagger}_b \sigma_{2}^{-} + a_b\sigma_{2}^{+} \right), \end{eqnarray} where \begin{equation} H_{i}(t) = \begin{bmatrix} 0 & 0 & 0\cr 0 & \omega_{i}(t) &0 \cr 0&0& 2\omega_{i}(t) - \eta_{i} \end{bmatrix} \end{equation} are the Hamiltonians of the qubits, whose frequencies $\omega_{i}$ may vary in time and whose anharmonicities $\eta_i$ are assumed to be constant, \begin{equation} \sigma_{i}^{-} = \begin{bmatrix} 0 & 1 &0\cr 0 & 0 &\sqrt{2} \cr 0 & 0 &0 \end{bmatrix}, \quad \sigma_{i}^{+} = \left(\sigma_{i}^{-}\right)^{\dagger}, \end{equation} are the qubit lowering and raising operators, $\omega_b$ is the bus frequency (which is held fixed), $a^{\dagger}_b$ and $a_b$ are the creation and annihilation operators for the bus photons, and $g_{b1}$, $g_{b2}$ are the bus-qubit coupling constants. In our numerical simulations we will assume that $\eta_1=\eta_2\equiv \eta$ and $g_{b1}=g_{b2}\equiv g_b$.
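As a cross-check, the RWA Hamiltonian of Eq. (\ref{RWAhamiltonian1}) can be assembled numerically from Kronecker products. The sketch below (Python/NumPy, an illustrative choice that is not part of the original analysis) builds the $q_1\otimes b\otimes q_2$ matrix with three levels per component and verifies that its bare ($g_b=0$) diagonal reproduces the two-excitation frequencies listed in Table \ref{tab:1}.

```python
import numpy as np

def rwa_hamiltonian(nu1, nu2, nub, eta, g, n_bus=3):
    """Qubit/bus/qubit RWA Hamiltonian in the product basis q1 (x) bus (x) q2.

    All frequencies are nu = omega/2pi in GHz; the qubits are truncated to
    three levels and the bus to n_bus photon states."""
    I3, Ib = np.eye(3), np.eye(n_bus)
    hq = lambda nu: np.diag([0.0, nu, 2.0 * nu - eta])   # 3-level qubit
    sm = np.diag([1.0, np.sqrt(2.0)], k=1)               # qubit lowering op
    ab = np.diag(np.sqrt(np.arange(1.0, n_bus)), k=1)    # bus annihilation op
    hb = nub * ab.T @ ab                                 # bus Hamiltonian
    kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)
    return (kron3(hq(nu1), Ib, I3) + kron3(I3, hb, I3) + kron3(I3, Ib, hq(nu2))
            + g * (kron3(sm, ab.T, I3) + kron3(sm.T, ab, I3))    # q1-bus
            + g * (kron3(I3, ab.T, sm) + kron3(I3, ab, sm.T)))   # bus-q2

# Bare levels (g_b = 0) at the Table 1 frequencies
H0 = rwa_hamiltonian(6.6, 6.5, 6.0, 0.2, 0.0)
idx = lambda i, j, k: (i * 3 + j) * 3 + k   # |q1 b q2> -> flat index (n_bus=3)
print(H0[idx(1, 0, 1), idx(1, 0, 1)])   # nu_101 = 13.1
print(H0[idx(2, 0, 0), idx(2, 0, 0)])   # nu_200 = 13.0
print(H0[idx(0, 2, 0), idx(0, 2, 0)])   # nu_020 = 12.0
```

At $g_b=0$ the diagonal entries are simply sums of the bare one-excitation frequencies, e.g. $\nu_{101}=6.6+6.5=13.1$ GHz, in agreement with Table \ref{tab:1}; the coupling terms make the matrix real and symmetric, as required for a Hermitian Hamiltonian.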
\section{Some difficulties with the SWAP-based controlled-Z gate implementation} \label{sec:problems} \begingroup \begin{table} \caption{ \label{tab:1} Configuration of the $(q_1 b q_2)$ system before, after, and during the CZ gate. All frequencies are defined by $\nu = \omega/2\pi$ (GHz) and are {\it bare}. Both qubit anharmonicities are $\eta/2\pi = 0.2$ GHz and assumed to be constant. The coupling is chosen to be $g_b/2\pi = 75$ MHz to guarantee sufficiently fast, $t_{\rm gate} = 45$ ns, gate operation. Left arrows indicate the near-resonant two-excitation frequencies.} \begin{ruledtabular} \begin{tabular}{lll} Bare & Before and & Optimized CZ frequencies at \\ frequencies & after CZ & the $200\leftrightarrow 101$ anticrossing\\ \hline $\nu_{q1}$&6.6&6.6\\ $\nu_{b}$&6.0&6.0\\ $\nu_{q2}$&6.5&6.40959\\ $\nu_{110}$&12.6&12.6\\ $\nu_{101}$&13.1&13.00959 $\leftarrow$\\ $\nu_{011}$&12.5&12.40959\\ $\nu_{200}$&13.0&13.0 $\leftarrow$\\ $\nu_{020}$&12.0&12.0\\ $\nu_{002}$&12.8&12.61918\\ \end{tabular} \end{ruledtabular} \end{table} \endgroup In what follows, we assume that the qubit/bus/qubit $(q_1 b q_2)$ system starts and ends in the (default) configuration, as shown in Table \ref{tab:1}. The initial and final qubit frequencies, $\omega_{q1}>\omega_{q2}$, are chosen in such a way as to avoid the $|100\rangle \leftrightarrow |001\rangle$ and $|200\rangle \leftrightarrow |101\rangle$ crossings. Then, the standard \cite{Yamamoto-10} three-step SWAP-based CZ gate implementation (Fig. \ref{fig:2}) suffers from the following major drawback: it produces a large number of Landau-Zener (LZ) transitions, each of which degrades the resulting gate's fidelity. The RezQu architecture based on fixed couplings may not be flexible enough to provide the needed controllability to counter the effects of all these transitions within the three-step CZ scheme.
Controlling only the qubit frequencies may not be enough to achieve the needed $10^{-4}$ accuracy of the CZ gate, using an {\it experimentally reasonable number} of parameters. As Fig. \ref{fig:2} shows, the very first ramp of the initial SWAP operation already contains one such LZ crossing, leading to the leakage from state $|101\rangle$ to state $|200\rangle$. Here we propose to turn this particular drawback into an asset by dropping the loading and unloading SWAP operations altogether and employing the aforementioned $|200\rangle \leftrightarrow |101\rangle$ anticrossing to accumulate the needed 101-phase during the CZ operation (see Fig. \ref{fig:3}). \begin{figure} \includegraphics[angle=0,width=1.00\linewidth]{fig2} \caption{ \label{fig:2} (Color online) One- and two-excitation frequencies (in GHz) of the qubit/bus/qubit system in the usual 3-step SWAP-based CZ gate implementation. Optimization with at least {\it four} naturally chosen parameters at $g_b/2\pi = 25$ MHz gives the gate error, as defined in Eq. (\ref{eq:gateAccuracy}), of no less than $2\times 10^{-3}$. Compare with Fig. \ref{fig:3}, where an alternative, single-step CZ-generating scheme is presented. } \end{figure} \begin{figure} \includegraphics[angle=0,width=1.00\linewidth]{fig3} \caption{ \label{fig:3} (Color online) One- and two-excitation frequencies (in GHz) of the qubit/bus/qubit system in the single-step CZ gate implementation. A two-parameter optimization at $g_b/2\pi=75$ MHz gives $>99.99$\% fidelity for total gate duration of $t_{\rm gate} = 45$ ns. The optimized parameters are the undershoot, $(\omega_{101}-\omega_{200})/2\pi=9.59$ MHz, and the undershoot duration, $t_{\rm undershoot}=29.1$ ns (measured between the central points of the two ramps). The widths of the error-function-shaped ramps were held fixed at $\sigma_{\rm in}=\sigma_{\rm fin}=3$ ns. Compare with Fig.
\ref{fig:2}.} \end{figure} \section{Potential problems with the proposed scheme} The following two problems may arise in our scheme. First, the presence of additional system elements (qubits, memory resonators, etc.) may result in additional states that are near-resonant with the states $|200\dots\rangle$ and $|101\dots\rangle$, thus leading to unwanted leakage. Here we ignore this complication and assume that under realistic conditions it will always be possible to isolate this particular anticrossing sufficiently well. Second, being a second-order process, accumulation of the 101-phase may proceed too slowly compared with the qubit coherence time (currently at about $t_{\rm coherence} \simeq 500$ ns). However, the following argument shows that this is not necessarily true. In the case of a similar second-order resonance $|100\rangle \leftrightarrow |001\rangle$, the effective (via the bus) $q_1$-$q_2$ coupling \cite{Pinto-2010} is given by \begin{equation} g_{\rm eff}^{100\leftrightarrow 001} = \frac{2g_b^2\omega_b}{\omega_{q1}^2-\omega_b^2}. \end{equation} Then in our case we should have \begin{equation} g_{\rm eff}^{200\leftrightarrow 101} = \sqrt{2}g_{\rm eff}^{100\leftrightarrow 001}. \end{equation} Choosing $g_b/2\pi=75$ MHz (an experimentally achievable coupling) and setting $\omega_{q1}/2\pi \approx 6.4$ GHz (near-resonant condition), we find for $\omega_{b}/2\pi = 6.0$ GHz, \begin{eqnarray} \frac{g_{\rm eff}^{200\leftrightarrow 101}}{2\pi} &=&\sqrt{2} \left(\frac{2\times 0.075^2\times 6.0}{6.4^2-6.0^2}\right) \nonumber \\ &\approx& 0.0192 \; {\rm GHz} = 19.2 \; {\rm MHz}, \end{eqnarray} which gives an experimentally reasonable duration of the corresponding phase accumulation, \begin{equation} t_{2\pi{\rm \; pulse}}^{200\leftrightarrow 101} = \frac{\pi}{g_{\rm eff}^{200\leftrightarrow 101}} \approx 26 \; {\rm ns}. \end{equation} In the actual implementation shown in Fig.
\ref{fig:3}, the total gate duration had to be prolonged to $t_{\rm gate} = 45$ ns in order to correctly produce the final populations of $|100\rangle$ and $|001\rangle$ states. \section{Implementing the single-step CZ gate} \label{sec:Implementation} The proposed single-step CZ gate resulting from a two-parameter optimization with the RWA Hamiltonian of Eq. (\ref{RWAhamiltonian1}) is depicted in Fig. \ref{fig:3}. To provide some intuitive understanding of how the generated gate works, the overlaps between the time-evolved logic states and some of the time-dependent (comoving) system eigenstates are given in Fig. \ref{fig:4}. \begin{figure} \includegraphics[angle=0,width=1.00\linewidth]{fig4} \caption{ \label{fig:4} (Color online) Some of the overlaps between the time-evolving computational states and the comoving system eigenstates in the single-step CZ gate implementation. $U(t)$ stands for the unitary operator of the time evolution up to time $t$.} \end{figure} Our gate is implemented in the computational basis consisting of the full system eigenstates \cite{IDLING-PAPER}. It has the form \begin{equation} \label{eq:CZmatrix} {\rm CZ} = \begin{pmatrix} 1&0&0&0\cr 0&e^{-i\varphi_1}&0&0\cr 0&0&e^{-i\varphi_2}&0\cr 0&0&0&-e^{-i(\varphi_1+\varphi_2)}\cr \end{pmatrix}, \end{equation} where $\varphi_1$ and $\varphi_2$ are arbitrary accumulated phases. Due to the use of the system {\it eigenstates}, these phases can always be adjusted simply by waiting.
The optimization was performed at fixed $t_{\rm gate} = 45$ ns by minimizing the function \begin{eqnarray} \label{eq:gateAccuracy} {\rm Error}(U) &=& {\rm Error}_1+{\rm Error}_2+{\rm Error}_3+{\rm Error}_4 \nonumber \\ &=&\left(1-|a_1|^2\right) + \left(1-|a_2|^2\right) + \left(1-|a_3|^2\right) \nonumber \\ && + \left\vert 1 +\frac{ a_1a_2a^*_3}{\left\vert a_1a_2a^{*}_3 \right\vert} \right\vert \geq 0, \end{eqnarray} where \begin{equation} a_1 = \langle \overline{100} |U| \overline{100}\rangle, a_2 = \langle \overline{001} |U| \overline{001}\rangle, a_3 = \langle \overline{101} |U| \overline{101}\rangle, \end{equation} with respect to the undershoot magnitude and the undershoot duration (measured between the central points of the ramps), with additional constraints ${\rm Error}_1 +{\rm Error}_2+{\rm Error}_3<10^{-4}$ and ${\rm Error}_4<10^{-10}$. The widths (standard deviations) of the error-function-shaped ramps were held fixed at 3 ns. The results are presented in Fig. \ref{fig:3}. In the above, $U$ is the unitary operator representing the CZ pulse, and the overbars stand for the prefix ``eigen-.'' The optimization function ${\rm Error}(U)$ was defined so that for $U={\rm CZ}$, as given in Eq. (\ref{eq:CZmatrix}), ${\rm Error}({\rm CZ})=0$. Notice that we do not have to take into account the phase of the $|000\rangle$ state, since the corresponding frequency $\nu_{000}$ can always be set to 0. \section{CZ gate as an idling error} Our CZ gate may be viewed as a particular example of an ``idling error'' \cite{IDLING-PAPER}, which is a measure of how fast the phase of the {\it computational} eigenstate $|\overline{101}\rangle$ accumulates relative to the phases of eigenstates $|\overline{100}\rangle$ and $|\overline{001}\rangle$.
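The cost function of Eq. (\ref{eq:gateAccuracy}) depends only on the three diagonal amplitudes $a_1$, $a_2$, $a_3$, and it vanishes exactly for an ideal gate of the form of Eq. (\ref{eq:CZmatrix}), whatever the accumulated phases $\varphi_1$, $\varphi_2$. A minimal sketch (Python here, purely illustrative and not part of the original optimization code):

```python
import numpy as np

def cz_error(a1, a2, a3):
    """Error(U) built from the diagonal amplitudes a1, a2, a3: three
    population-loss terms plus a phase term that vanishes only when
    arg(a1 * a2 * conj(a3)) = pi, i.e. for a correct conditional phase."""
    err_pop = (1 - abs(a1) ** 2) + (1 - abs(a2) ** 2) + (1 - abs(a3) ** 2)
    phase = a1 * a2 * np.conj(a3)
    return err_pop + abs(1 + phase / abs(phase))

# Ideal CZ of Eq. (CZmatrix): a1 = e^{-i phi1}, a2 = e^{-i phi2},
# a3 = -e^{-i(phi1 + phi2)}, for arbitrary accumulated phases
phi1, phi2 = 0.37, -1.2
a1, a2 = np.exp(-1j * phi1), np.exp(-1j * phi2)
a3 = -np.exp(-1j * (phi1 + phi2))
print(cz_error(a1, a2, a3))       # vanishes for an ideal CZ
print(cz_error(1.0, 1.0, 1.0))    # identity gate: pure phase error of 2
```

The last term penalizes any deviation of $\arg(a_1 a_2 a_3^{*})$ from $\pi$, which is precisely the conditional-phase condition of the CZ gate; the population terms penalize leakage out of the computational states.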
The error is characterized by the running frequency $ \Omega_{ZZ} = \varepsilon_{101}-\varepsilon_{100}-\varepsilon_{001}+\varepsilon_{000}$, with $\varepsilon_{ijk}$ being the corresponding eigenenergies, and physically arises due to the level repulsion between 101 and other levels in the two-excitation subspace of the system. Consequently, a superposition of computational states evolves as \begin{eqnarray} |\psi(t)\rangle &=&\alpha_{000} e^{-i\varepsilon_{000}t} |\overline{000}\rangle + \alpha_{100} e^{-i\varepsilon_{100}t} |\overline{100}\rangle \nonumber \\ && + \alpha_{001} e^{-i\varepsilon_{001}t} |\overline{001}\rangle \nonumber \\ && + \alpha_{101}e^{-i\Omega_{ZZ}t} e^{-i(\varepsilon_{100}+\varepsilon_{001}-\varepsilon_{000})t}|\overline{101}\rangle, \end{eqnarray} and so, after a time $t_{\rm cp} =\pi/\Omega_{ZZ}$, the state $|\overline{101}\rangle$ gets multiplied by $-1$. Thus, in systems with nonlinearities, the CZ gate can always be generated simply by waiting. For the qubit/bus/qubit RezQu device, using Eq. (\ref{RWAhamiltonian1}), we find in fourth order, \begin{widetext} \begin{equation} \label{eq:OmegaZZ_4th_order} \Omega^{(4)}_{ZZ} = \frac{ 2g_{b1}^2 g_{b2}^{2} \left\{ \omega_1 \eta_1 (2\omega_b-\omega_1-\eta_2) + \omega_2 \eta_2(2\omega_b-\omega_2-\eta_1) - \omega_b\left[\omega_b (\eta_1+\eta_2)-2\eta_1\eta_2\right] \right\} } {(\omega_1-\omega_b)^2(\omega_2-\omega_b)^2 \left[\omega_1-(\omega_2-\eta_2)\right]\left[(\omega_1-\eta_1)-\omega_2\right]}, \end{equation} \end{widetext} which for the above mentioned (and fixed) $\omega_1/2\pi = 6.6$ GHz and $\omega_2/2\pi = 6.5$ GHz, at the $g_b/2\pi = 75$ MHz coupling, produces the CZ gate after about 130 ns, which is too long. For a more efficient CZ generation, the system parameters must be set to maximize $\Omega_{ZZ}$. One such choice, $\omega_2 \approx \omega_1-\eta_1$, which corresponds to the $200-101$ anticrossing, was made in our single-step implementation.
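The two numerical estimates quoted above -- the effective second-order coupling of $\approx 19$ MHz and the $\approx 130$ ns free-evolution time implied by Eq. (\ref{eq:OmegaZZ_4th_order}) at the default frequencies -- follow from a few lines of arithmetic, sketched here in Python (an illustration only, with all frequencies expressed as $\nu=\omega/2\pi$ in GHz so that times come out in ns):

```python
import numpy as np

# Frequencies nu = omega/2pi in GHz (Table 1 values)
g, nub, eta = 0.075, 6.0, 0.2

# Effective second-order 200 <-> 101 coupling near resonance (nu_q1 ~ 6.4 GHz)
nu1_res = 6.4
g_eff = np.sqrt(2.0) * 2.0 * g**2 * nub / (nu1_res**2 - nub**2)
print(g_eff * 1e3)            # effective coupling in MHz, ~19.2
print(1.0 / (2.0 * g_eff))    # 2pi-pulse time pi/g_eff in ns, ~26

# Fourth-order Omega_ZZ at the default frequencies nu1 = 6.6, nu2 = 6.5
n1, n2, e1, e2 = 6.6, 6.5, eta, eta
num = 2.0 * g**4 * (n1 * e1 * (2.0 * nub - n1 - e2)
                    + n2 * e2 * (2.0 * nub - n2 - e1)
                    - nub * (nub * (e1 + e2) - 2.0 * e1 * e2))
den = ((n1 - nub)**2 * (n2 - nub)**2
       * (n1 - (n2 - e2)) * ((n1 - e1) - n2))
nu_zz = num / den             # Omega_ZZ / 2pi, in GHz
print(1.0 / (2.0 * nu_zz))    # waiting time t_cp = pi/Omega_ZZ in ns, ~130
```

This confirms that simply waiting at the default detunings would take roughly three times longer than the 45 ns single-step gate, motivating the choice $\omega_2\approx\omega_1-\eta_1$ at the $200\leftrightarrow 101$ anticrossing.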
\section{Conclusion} To summarize, we introduced a scheme for single-step generation of a high-fidelity controlled-Z gate in a three-component RezQu architecture. Despite the use of a second-order anticrossing, the accumulation of the needed 101-phase proceeds sufficiently fast compared with the qubit coherence time. Unlike previously considered proposals, our CZ scheme does not rely on the MOVE operations transferring excitations to and from the bus. The resulting simplicity of the generated gate may prove useful for implementations in first-generation solid-state quantum computers. \begin{acknowledgments} This work was supported by NSA/IARPA/ARO Grant No. W911NF-10-1-0334. \end{acknowledgments}